Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz algorithm

Deanna Needell, Department of Mathematical Sciences, Claremont McKenna College, Claremont CA 91711, dneedell@cmc.edu
Nathan Srebro, Toyota Technological Institute at Chicago and Dept. of Computer Science, Technion, nati@ttic.edu
Rachel Ward, Department of Mathematics, Univ. of Texas, Austin, rward@math.utexas.edu

Abstract

We improve a recent guarantee of Bach and Moulines on the linear convergence of SGD for smooth and strongly convex objectives, reducing a quadratic dependence on the strong convexity to a linear dependence. Furthermore, we show how reweighting the sampling distribution (i.e. importance sampling) is necessary in order to further improve convergence, and obtain a linear dependence on average smoothness, dominating previous results. More broadly, we discuss how importance sampling for SGD can improve convergence in other scenarios as well. Our results are based on a connection between SGD and the randomized Kaczmarz algorithm, which allows us to transfer ideas between the separate bodies of literature studying each of the two methods.

1 Introduction

This paper concerns two algorithms which until now have remained somewhat disjoint in the literature: the randomized Kaczmarz algorithm for solving linear systems and the stochastic gradient descent (SGD) method for optimizing a convex objective using unbiased gradient estimates. The connection enables us to make contributions by borrowing from each body of literature to the other. In particular, it helps us highlight the role of weighted sampling for SGD and obtain a tighter guarantee on the linear convergence regime of SGD. Our starting point is a recent analysis of the convergence of the SGD iterates. Considering a stochastic objective F(x) = E_i[f_i(x)], classical analyses of SGD show a polynomial rate on the suboptimality of the objective value, F(x_k) − F(x*).
Bach and Moulines [1] showed that if F(x) is µ-strongly convex, the f_i(x) are L_i-smooth (i.e. their gradients are L_i-Lipschitz), and x* is a minimizer of (almost) all f_i(x) (i.e. P_i(∇f_i(x*) = 0) = 1), then E||x_k − x*|| goes to zero exponentially, rather than polynomially, in k. That is, reaching a desired accuracy of E||x_k − x*||^2 ≤ ε requires a number of steps that scales only logarithmically in 1/ε. Bach and Moulines's bound on the required number of iterations further depends on the average squared condition number E[(L_i/µ)^2]. In a seemingly independent line of research, the Kaczmarz method was proposed as an iterative method for solving overdetermined systems of linear equations [7]. The simplicity of the method makes it popular in applications ranging from computed tomography to digital signal processing [5, 9, 6]. Recently, Strohmer and Vershynin [19] proposed a variant of the Kaczmarz method which selects rows with probability proportional to their squared norm, and showed that with this selection strategy, a desired accuracy of ε can be reached in the noiseless setting in a number of steps that scales with log(1/ε) and only linearly in the condition number. As we discuss in Section 5, the randomized Kaczmarz algorithm is in fact a special case of stochastic gradient descent. Inspired by the above analysis, we prove improved convergence results for generic SGD, as well as for SGD with gradient estimates chosen based on a weighted sampling distribution, highlighting the role of importance sampling in SGD. We first show that without perturbing the sampling distribution, we can obtain a linear dependence on the uniform conditioning (sup L_i / µ), but it is not possible to obtain a linear dependence on the average conditioning E[L_i]/µ. This is a quadratic improvement over [1] in regimes where the components have similar Lipschitz constants (Theorem 2.1 in Section 2).
We then show that with weighted sampling we can obtain a linear dependence on the average conditioning E[L_i]/µ, dominating the quadratic dependence of [1] (Corollary 3.1 in Section 3). In Section 4, we show how, also for smooth but not strongly convex objectives, importance sampling can improve a dependence on a uniform bound on the smoothness (sup L_i) to a dependence on the average smoothness E[L_i]; such an improvement is not possible without importance sampling. For non-smooth objectives, we show that importance sampling can eliminate a dependence on the variance of the Lipschitz constants of the components. Finally, in Section 5, we turn to the Kaczmarz algorithm and show that we can improve known guarantees in this context as well.

2 SGD for Strongly Convex Smooth Optimization

We consider the problem of minimizing a strongly convex function of the form F(x) = E_{i~D} f_i(x), where the f_i : H → R are smooth functionals over H = R^d endowed with the standard Euclidean norm ||·||_2, or over a Hilbert space H with norm ||·||_2. Here i is drawn from some source distribution D over an arbitrary probability space. Throughout this manuscript, unless explicitly specified otherwise, expectations are with respect to indices drawn from the source distribution D. We denote the unique minimizer x* = arg min F(x) and denote by σ² the "residual" quantity at the minimum, σ² = E||∇f_i(x*)||_2².

Assumptions. Our bounds are based on the following assumptions and quantities. First, F has strong convexity parameter µ; that is, ⟨x − y, ∇F(x) − ∇F(y)⟩ ≥ µ||x − y||_2² for all vectors x and y. Second, each f_i is continuously differentiable and the gradient ∇f_i has Lipschitz constant L_i; that is, ||∇f_i(x) − ∇f_i(y)||_2 ≤ L_i ||x − y||_2 for all vectors x and y. We denote by sup L the supremum of the support of L_i, i.e. the smallest L such that L_i ≤ L a.s., and similarly denote by inf L the infimum. We denote the average Lipschitz constant by L = E[L_i].
An unbiased gradient estimate for F(x) can be obtained by drawing i ~ D and using ∇f_i(x) as the estimate. The SGD updates with (fixed) step size γ based on these gradient estimates are given by:

x_{k+1} ← x_k − γ ∇f_{i_k}(x_k)   (2.1)

where {i_k} are drawn i.i.d. from D. We are interested in the distance ||x_k − x*||_2² of the iterates from the unique minimizer, and denote the initial distance by ε_0 = ||x_0 − x*||_2². Bach and Moulines [1, Theorem 1] considered this setting¹ and established that

k = 2 log(ε_0/ε) ( E[L_i²]/µ² + σ²/(µ²ε) )   (2.2)

SGD iterations of the form (2.1), with an appropriate step size, are sufficient to ensure E||x_k − x*||_2² ≤ ε, where the expectation is over the random sampling. As long as σ² = 0, i.e. the same minimizer x* minimizes all components f_i(x) (though of course it need not be a unique minimizer of any of them), this yields linear convergence to x*, with a graceful degradation as σ² > 0. However, in the linear convergence regime, the number of required iterations scales with the expected squared conditioning E[L_i²]/µ². In this paper, we reduce this quadratic dependence to a linear dependence. We begin with a guarantee ensuring a linear dependence on sup L / µ:

Theorem 2.1. Let each f_i be convex, where ∇f_i has Lipschitz constant L_i with L_i ≤ sup L a.s., and let F(x) = E[f_i(x)] be µ-strongly convex. Set σ² = E||∇f_i(x*)||_2², where x* = argmin_x F(x). Suppose that γ ≤ 1/µ. Then the SGD iterates given by (2.1) satisfy:

E||x_k − x*||_2² ≤ [1 − 2γµ(1 − γ sup L)]^k ||x_0 − x*||_2² + γσ² / (µ(1 − γ sup L)).

¹ Bach and Moulines's results are somewhat more general. Their Lipschitz requirement is a bit weaker and more complicated, but in terms of L_i yields (2.2). They also study polynomially decaying step sizes, but these do not lead to improved runtime if the target accuracy is known ahead of time.
(2.3)

That is, for any desired ε, using a step size of

γ = µε / (2εµ sup L + 2σ²)

ensures that after

k = 2 log(ε_0/ε) ( sup L/µ + σ²/(µ²ε) )   (2.4)

SGD iterations, E||x_k − x*||_2² ≤ ε, where ε_0 = ||x_0 − x*||_2² and where both expectations are with respect to the sampling of {i_k}.

Proof sketch. The crux of the improvement over [1] is a tighter recursive inequality. Instead of

||x_{k+1} − x*||_2² ≤ (1 − 2γµ + 2γ²L_{i_k}²) ||x_k − x*||_2² + 2γ²σ²,

we use the co-coercivity lemma (Lemma A.1 in the supplemental material) to obtain

||x_{k+1} − x*||_2² ≤ (1 − 2γµ + 2γ²µL_{i_k}) ||x_k − x*||_2² + 2γ²σ².

The significant difference is that one of the factors of L_{i_k}, an upper bound on the second derivative (where i_k is the random index selected in the k-th iteration), in the third term inside the parenthesis is replaced by µ, a lower bound on the second derivative of F. A complete proof can be found in the supplemental material.

Comparison to [1]. Our bound (2.4) improves the quadratic dependence on 1/µ to a linear dependence, and replaces the dependence on the average squared smoothness E[L_i²] with a linear dependence on the smoothness bound sup L. When all Lipschitz constants L_i are of similar magnitude, this is a quadratic improvement in the number of required iterations. However, when different components f_i have widely different scaling, i.e. the L_i are highly variable, the supremum might be significantly larger than the average squared conditioning.

Tightness. Considering the above, one might hope to obtain a linear dependence on the average smoothness L. However, as the following example shows, this is not possible. Consider a uniform source distribution over N quadratics, with the first quadratic f_1 being N(x[1] − b)² and the other N − 1 being x[2]², where b = ±1. Any method must examine f_1 in order to recover x to within error less than one, but sampling indices i uniformly, this takes N iterations in expectation. We can calculate sup L = L_1 = 2N, L = 2(2N − 1)/N, E[L_i²] = 4(N² + N − 1)/N, and µ = 1.
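The closed-form constants in this tightness example can be checked with a quick script. This is a sketch under the reading that the formulas indicate: a uniform distribution over N components, one with L_1 = 2N and the remaining N − 1 with L_i = 2. The helper `lipschitz_stats` is ours, introduced only for this check.

```python
from fractions import Fraction

def lipschitz_stats(N):
    # One component with L_1 = 2N, and N - 1 components with L_i = 2,
    # each sampled with probability 1/N.
    Ls = [2 * N] + [2] * (N - 1)
    sup_L = max(Ls)
    mean_L = Fraction(sum(Ls), N)                   # E[L_i]
    mean_L2 = Fraction(sum(L * L for L in Ls), N)   # E[L_i^2]
    return sup_L, mean_L, mean_L2

N = 100
sup_L, mean_L, mean_L2 = lipschitz_stats(N)
assert sup_L == 2 * N                              # sup L = 2N
assert mean_L == Fraction(2 * (2 * N - 1), N)      # L = 2(2N - 1)/N
assert mean_L2 == Fraction(4 * (N**2 + N - 1), N)  # E[L_i^2] = 4(N^2 + N - 1)/N
```

Exact rational arithmetic (`Fraction`) avoids any floating-point doubt about the identities.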
Both sup L/µ = O(N) and E[L_i²]/µ² = O(N) scale correctly with the expected number of iterations, while error reduction in O(L/µ) = O(1) iterations is not possible for this example. We therefore see that the choice between E[L_i²] and sup L is unavoidable. In the next section, we show how we can obtain a linear dependence on the average smoothness L using importance sampling, i.e. by sampling from a modified distribution.

3 Importance Sampling

For a weight function w(i) which assigns a non-negative weight w(i) ≥ 0 to each index i, the weighted distribution D^(w) is defined as the distribution such that

P_{D^(w)}(I) ∝ E_{i~D}[1_I(i) w(i)],

where I is an event (subset of indices) and 1_I(·) its indicator function. For a discrete distribution D with probability mass function p(i), this corresponds to re-weighting the probabilities to obtain a new probability mass function, which we write as p^(w)(i) ∝ w(i)p(i). Similarly, for a continuous distribution, this corresponds to multiplying the density by w(i) and renormalizing. Importance sampling has appeared in both the Kaczmarz method [19] and in coordinate-descent methods [14, 15], where the weights are proportional to some power of the Lipschitz constants (of the gradient coordinates). Here we analyze this type of sampling in the context of SGD. One way to construct D^(w) is through rejection sampling: sample i ~ D and accept with probability w(i)/W, for some W ≥ sup_i w(i); otherwise, reject and continue re-sampling until a suggestion i is accepted. The accepted samples are then distributed according to D^(w). We use E^(w)[·] = E_{i~D^(w)}[·] to denote expectation where indices are sampled from the weighted distribution D^(w). An important property of such an expectation is that for any quantity X(i):

E^(w)[ X(i)/w(i) ] = E[X(i)] / E[w(i)],   (3.1)

where, recall, the expectations on the right-hand side are with respect to i ~ D. In particular, when E[w(i)] = 1, we have E^(w)[X(i)/w(i)] = E[X(i)]. In fact, we will consider only weights such that
E[w(i)] = 1, and refer to such weights as normalized.

Reweighted SGD. For any normalized weight function w(i), we can write

f_i^(w)(x) = f_i(x)/w(i)   and   F(x) = E^(w)[f_i^(w)(x)].   (3.2)

This is an equivalent, and equally valid, stochastic representation of the objective F(x), and we can just as well base SGD on this representation. In this case, at each iteration we sample i ~ D^(w) and then use ∇f_i^(w)(x) = ∇f_i(x)/w(i) as an unbiased gradient estimate. SGD iterates based on the representation (3.2), which we refer to as w-weighted SGD, are given by

x_{k+1} ← x_k − (γ/w(i_k)) ∇f_{i_k}(x_k)   (3.3)

where {i_k} are drawn i.i.d. from D^(w). The important observation here is that all SGD guarantees are equally valid for the w-weighted updates (3.3): the objective is the same objective F(x), the suboptimality is the same, and the minimizer x* is the same. We do need, however, to calculate the relevant quantities controlling SGD convergence with respect to the modified components f_i^(w) and the weighted distribution D^(w).

Strongly Convex Smooth Optimization using Weighted SGD. We now return to the analysis of strongly convex smooth optimization and investigate how re-weighting can yield a better guarantee. The Lipschitz constant L_i^(w) of each component f_i^(w) is now scaled, and we have L_i^(w) = L_i/w(i). The supremum is then given by:

sup L^(w) = sup_i L_i^(w) = sup_i L_i/w(i).   (3.4)

It is easy to verify that (3.4) is minimized by the weights w(i) = L_i/L, so that

sup L^(w) = sup_i L_i/(L_i/L) = L.   (3.5)

Before applying Theorem 2.1, we must also calculate:

σ²_(w) = E^(w)[||∇f_i^(w)(x*)||_2²] = E[||∇f_i(x*)||_2² / w(i)] = E[(L/L_i) ||∇f_i(x*)||_2²] ≤ (L/inf L) σ².   (3.6)

Now, applying Theorem 2.1 to the w-weighted SGD iterates (3.3) with weights (3.5), we have that, with an appropriate step size,

k = 2 log(ε_0/ε) ( sup L^(w)/µ + σ²_(w)/(µ²ε) ) ≤ 2 log(ε_0/ε) ( L/µ + (L/inf L) · σ²/(µ²ε) )   (3.7)

iterations are sufficient for E^(w)||x_k − x*||_2² ≤ ε, where x*, µ and ε_0 are exactly as in Theorem 2.1. If σ² = 0, i.e.
we are in the "realizable" situation, with true linear convergence, then we also have σ²_(w) = 0. In this case, we already obtain the desired guarantee: linear convergence with a linear dependence on the average conditioning L/µ, strictly improving over the best known results [1]. However, when σ² > 0 we get a dissatisfying scaling of the second term, by a factor of L/inf L. Fortunately, we can easily overcome this factor. To do so, consider sampling from a distribution which is a mixture of the original source distribution and its re-weighting:

w(i) = 1/2 + (1/2) · L_i/L.   (3.8)

We refer to this as partially biased sampling. Instead of an even mixture as in (3.8), we could also use a mixture with any other constant proportion, i.e. w(i) = λ + (1 − λ)L_i/L for 0 < λ < 1. Using the weights (3.8), we have

sup L^(w) = sup_i L_i / (1/2 + (1/2)·L_i/L) ≤ 2L   and   σ²_(w) = E[ ||∇f_i(x*)||_2² / (1/2 + (1/2)·L_i/L) ] ≤ 2σ².   (3.9)

Corollary 3.1. Let each f_i be convex, where ∇f_i has Lipschitz constant L_i, and let F(x) = E_{i~D}[f_i(x)] be µ-strongly convex. Set σ² = E||∇f_i(x*)||_2², where x* = argmin_x F(x). For any desired ε, using a step size of

γ = µε / (4(εµL + σ²))

ensures that after

k = 4 log(ε_0/ε) ( L/µ + σ²/(µ²ε) )   (3.10)

iterations of w-weighted SGD (3.3) with weights specified by (3.8), E^(w)||x_k − x*||_2² ≤ ε, where ε_0 = ||x_0 − x*||_2² and L = E[L_i].

This result follows by substituting (3.9) into Theorem 2.1. We now obtain the desired linear scaling on L/µ, without introducing any additional factor to the residual term beyond a constant. We thus obtain a result which dominates that of Bach and Moulines (up to a factor of 2) and substantially improves upon it (with a linear rather than quadratic dependence on the conditioning). Such "partially biased weights" are not only an analysis trick, but might indeed improve actual performance over either no weighting or the "fully biased" weights (3.5), as demonstrated in Figure 1.
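A minimal sketch of w-weighted SGD with the partially biased weights w(i) = λ + (1 − λ)L_i/L on a synthetic, realizable least-squares instance (σ² = 0). The problem sizes, scaling of the rows, step size, and iteration count here are illustrative choices of ours, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.normal(size=(n, d)) * rng.uniform(0.5, 5.0, size=(n, 1))  # rows of varying norm
x_true = rng.normal(size=d)
b = A @ x_true                       # realizable: all components share the minimizer

# For f_i(x) = (n/2)(<a_i, x> - b_i)^2 under uniform D, L_i = n * ||a_i||^2.
L = n * (A * A).sum(axis=1)
L_bar = L.mean()

lam = 0.5
w = lam + (1 - lam) * L / L_bar      # partially biased weights, E[w(i)] = 1
p = w / w.sum()                      # sampling probabilities under D^(w)

gamma = 1.0 / (4 * L_bar)            # illustrative fixed step size
x = np.zeros(d)
for i in rng.choice(n, size=20000, p=p):
    grad_i = n * (A[i] @ x - b[i]) * A[i]
    x -= (gamma / w[i]) * grad_i     # importance-weighted update (3.3)

assert np.linalg.norm(x - x_true) < 1e-6
```

Since E[w(i)] = 1 here, the update 1/w(i) · ∇f_i(x) is an unbiased gradient estimate of F under the reweighted sampling, exactly as in the representation (3.2).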
Implementing Importance Sampling. In settings where linear systems need to be solved repeatedly, or when the Lipschitz constants are easily computed from the data, it is straightforward to sample from the weighted distribution. However, when we only have sampling access to the source distribution D (or the implied distribution over gradient estimates), importance sampling might be difficult. In light of the above results, one could use rejection sampling to simulate sampling from D^(w). For the weights (3.5), this can be done by accepting samples with probability proportional to L_i/sup L. The overall probability of accepting a sample is then L/sup L, introducing an additional factor of sup L/L. This yields a sample complexity with a linear dependence on sup L, as in Theorem 2.1, but a reduction in the number of actual gradient calculations and updates. In even less favorable situations, if Lipschitz constants cannot be bounded for individual components, even importance sampling might not be possible.

4 Importance Sampling for SGD in Other Scenarios

In the previous section, we considered SGD for smooth and strongly convex objectives, and were particularly interested in the regime where the residual σ² is low and the linear convergence term is dominant. Weighted SGD is useful also in other scenarios, and we now briefly survey them, as well as relate them to our main scenario of interest.

Figure 1 (three panels plotting the error ||x_k − x*||_2 against the iteration k on a log scale): Performance of SGD with weights w(i) = λ + (1 − λ)L_i/L on synthetic overdetermined least squares problems of the form (5.1) (λ = 1 is unweighted, λ = 0 is fully weighted). Left: a_i are standard spherical Gaussian, b_i = ⟨a_i, x_0⟩ + N(0, 0.1²).
Center: a_i is spherical Gaussian with variance i, b_i = ⟨a_i, x_0⟩ + N(0, 20²). Right: a_i is spherical Gaussian with variance i, b_i = ⟨a_i, x_0⟩ + N(0, 0.1²). In all cases, the matrix A with rows a_i is 1000 × 100, the corresponding least squares problem is strongly convex, and the step size was chosen as in (3.10).

Figure 2 (three panels plotting the error F(x_k) − F(x*) against the iteration k on a log scale): Performance of SGD with weights w(i) = λ + (1 − λ)L_i/L on synthetic underdetermined least squares problems of the form (5.1) (λ = 1 is unweighted, λ = 0 is fully weighted). We consider 3 cases. Left: a_i are standard spherical Gaussian, b_i = ⟨a_i, x_0⟩ + N(0, 0.1²). Center: a_i is spherical Gaussian with variance i, b_i = ⟨a_i, x_0⟩ + N(0, 20²). Right: a_i is spherical Gaussian with variance i, b_i = ⟨a_i, x_0⟩ + N(0, 0.1²). In all cases, the matrix A with rows a_i is 50 × 100, so the corresponding least squares problem is not strongly convex; the step size was chosen as in (3.10).

Smooth, Not Strongly Convex. When each component f_i is convex, non-negative, and has an L_i-Lipschitz gradient, but the objective F(x) is not necessarily strongly convex, then after

k = O( (sup L · ||x*||_2² / ε) · (F(x*) + ε)/ε )   (4.1)

iterations of SGD with an appropriately chosen step size we have F(x̄_k) ≤ F(x*) + ε, where x̄_k is an appropriate averaging of the k iterates [18]. The relevant quantity determining the iteration complexity is again sup L. Furthermore, the dependence on the supremum is unavoidable and cannot be replaced with the average Lipschitz constant L [3, 18]: if we sample gradients according to the source distribution D, we must have a linear dependence on sup L. The only quantity in the bound (4.1) that changes with a re-weighting is sup L; all other quantities (||x*||_2², F(x*), and the suboptimality ε) are invariant to re-weightings.
We can therefore replace the dependence on sup L with a dependence on sup L^(w) by using weighted SGD as in (3.3). As we already calculated, the optimal weights are given by (3.5), and using them we have sup L^(w) = L. In this case, there is no need for partially biased sampling, and we obtain that

k = O( (L · ||x*||_2² / ε) · (F(x*) + ε)/ε )   (4.2)

iterations of the weighted SGD updates (3.3) using the weights (3.5) suffice. Empirical evidence suggests that this is not a theoretical artifact: fully weighted sampling indeed exhibits better convergence rates than partially biased sampling in the non-strongly convex setting (see Figure 2), in contrast to the strongly convex regime (see Figure 1). We again see that importance sampling allows us to reduce the dependence on sup L, which is unavoidable without biased sampling, to a dependence on L. An interesting question for further consideration is to what extent importance sampling can also help stochastic optimization procedures such as SAG [8] and SDCA [17], which achieve faster convergence on finite data sets. Indeed, weighted sampling was shown empirically to achieve faster convergence for SAG [16], but theoretical guarantees remain open.

Non-Smooth Objectives. We now turn to non-smooth objectives, where the components f_i might not be smooth, but each component is G_i-Lipschitz. Roughly speaking, G_i is a bound on the first derivative (the subgradients) of f_i, while L_i is a bound on the second derivative. Here, the performance of SGD (more precisely, stochastic subgradient descent) depends on the second moment G² = E[G_i²] [12]. The precise iteration complexity depends on whether the objective is strongly convex or whether x* is bounded, but in either case depends linearly on G². Using weighted SGD, we instead get a linear dependence on

G²_(w) = E^(w)[(G_i^(w))²] = E^(w)[G_i²/w(i)²] = E[G_i²/w(i)],   (4.3)

where G_i^(w) = G_i/w(i) is the Lipschitz constant of the scaled f_i^(w).
This is minimized by the weights w(i) = G_i/Ḡ, where Ḡ = E[G_i], yielding G²_(w) = Ḡ². Using importance sampling, we therefore reduce the dependence on G² to a dependence on Ḡ². It is helpful to recall that G² = Ḡ² + Var[G_i]. What we save is thus exactly the variance of the Lipschitz constants G_i. Parallel work we recently became aware of [22] shows a similar improvement for a non-smooth composite objective. Rather than relying on a specialized analysis as in [22], here we show that this follows from SGD analysis applied to different gradient estimates.

Non-Realizable Regime. Returning to the smooth and strongly convex setting of Sections 2 and 3, let us consider more carefully the residual term σ² = E||∇f_i(x*)||_2². This quantity depends on the weighting, and in Section 3 we avoided increasing it, introducing partial biasing for this purpose. However, if this is the dominant term, we might want to choose weights so as to minimize it. The optimal weights here would be proportional to ||∇f_i(x*)||_2, which is not known in general. An alternative approach is to bound ||∇f_i(x*)||_2 ≤ G_i, so that σ² ≤ G². Taking this bound, we are back to the same quantity as in the non-smooth case, and the optimal weights are proportional to G_i. Note that this differs from using weights proportional to L_i, which optimize the linear-convergence term as studied in Section 3. To understand how weighting according to G_i and L_i differ, consider a generalized linear objective f_i(x) = φ_i(⟨z_i, x⟩), where φ_i is a scalar function with bounded |φ'_i| and |φ''_i|. We then have G_i ∝ ||z_i||_2 while L_i ∝ ||z_i||_2². Weighting according to (3.5) versus weighting with w(i) = G_i/Ḡ thus corresponds to weighting according to ||z_i||_2² versus ||z_i||_2, and these are rather different. For example, weighting by L_i ∝ ||z_i||_2² yields G²_(w) = G²: the same sub-optimal dependence as if no weighting were used at all. A good solution could be to weight by a mixture of G_i and L_i, as in the partial weighting scheme of Section 3.
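The contrast between the two weightings can be checked numerically. The sketch below assumes, for illustration only, that G_i = ||z_i|| and L_i = ||z_i||² exactly (i.e. unit proportionality constants); it verifies that weighting by G_i attains (E G_i)² while weighting by L_i leaves the second moment E[G_i²] unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
z_norms = rng.uniform(0.1, 10.0, size=10000)
G = z_norms           # G_i proportional to ||z_i||   (subgradient bound)
L = z_norms ** 2      # L_i proportional to ||z_i||^2 (smoothness bound)

def G2_weighted(w):
    w = w / w.mean()               # normalize so E[w(i)] = 1
    return np.mean(G ** 2 / w)     # E^(w)[(G_i/w_i)^2] = E[G_i^2 / w_i], as in (4.3)

G2_unweighted = np.mean(G ** 2)    # E[G_i^2]
G2_by_G = G2_weighted(G)           # optimal weights: equals (E G_i)^2
G2_by_L = G2_weighted(L)           # equals E[G_i^2]: no better than unweighted

assert np.isclose(G2_by_G, np.mean(G) ** 2)
assert np.isclose(G2_by_L, G2_unweighted)
assert G2_by_G <= G2_unweighted
```

The gap G2_unweighted − G2_by_G is exactly the (empirical) variance of the G_i, matching the identity G² = Ḡ² + Var[G_i].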
5 The Least Squares Case and the Randomized Kaczmarz Method

A special case of interest is the least squares problem, where

F(x) = (1/2) Σ_{i=1}^n (⟨a_i, x⟩ − b_i)² = (1/2)||Ax − b||_2²   (5.1)

with b ∈ C^n, A an n × d matrix with rows a_i, and x* = argmin_x (1/2)||Ax − b||_2² the least-squares solution. We can also write (5.1) as a stochastic objective, where the source distribution D is uniform over {1, 2, ..., n} and f_i = (n/2)(⟨a_i, x⟩ − b_i)². In this setting, σ² = ||Ax* − b||_2² is the residual error at the least squares solution x*, which can also be interpreted as the noise variance in a linear regression model. The randomized Kaczmarz method, introduced for solving the least squares problem (5.1) in the case where A is an overdetermined full-rank matrix, begins with an arbitrary estimate x_0, and in the k-th iteration selects a row i at random from the matrix A and iterates by:

x_{k+1} = x_k + c · ((b_i − ⟨a_i, x_k⟩)/||a_i||_2²) a_i,   (5.2)

where c = 1 in the standard method. This is almost an SGD update with step size γ = c/n, except for the scaling by ||a_i||_2². Strohmer and Vershynin [19] provided the first non-asymptotic convergence rates, showing that drawing rows proportionally to ||a_i||_2² leads to provable exponential convergence in expectation. With such a weighting, (5.2) is exactly weighted SGD, as in (3.3), with the fully biased weights (3.5). The reduction of the quadratic dependence on the conditioning to a linear dependence in Theorem 2.1, and the use of biased sampling, was inspired by the analysis of [19]. Indeed, applying Theorem 2.1 to the weighted SGD iterates with weights as in (3.5) and a step size of γ = 1 yields precisely the guarantee of [19]. Furthermore, understanding the randomized Kaczmarz method as SGD allows us to obtain the following improvements.

Partially Biased Sampling. Using the partially biased sampling weights (3.8) yields a better dependence on the residual than the fully biased sampling weights (3.5) considered by [19].

Using Step-sizes.
The randomized Kaczmarz method with weighted sampling exhibits exponential convergence, but only to within a radius, or convergence horizon, of the least-squares solution [19, 10]. This is because a step size of γ = 1 is used, and so the second term in (2.3) does not vanish. It has been shown [21, 2, 20, 4, 11] that changing the step size can allow for convergence inside this horizon, but only asymptotically. Our results allow for finite-iteration guarantees with arbitrary step sizes and can be immediately applied to this setting.

Uniform Row Selection. Strohmer and Vershynin's variant of the randomized Kaczmarz method calls for weighted row sampling, and thus requires pre-computing all the row norms. Although certainly possible in some applications, in other cases this might be better avoided. Understanding the randomized Kaczmarz method as SGD allows us to apply Theorem 2.1 also with uniform weights (i.e. to unbiased SGD), obtaining a randomized Kaczmarz method with uniform sampling which converges to the least-squares solution and enjoys finite-iteration guarantees.

6 Conclusion

We consider this paper as making three main contributions. First, we improve the dependence on the conditioning for smooth and strongly convex SGD from quadratic to linear. Second, we investigate SGD with importance sampling and show how it can yield improvements not possible without reweighting. Lastly, we make connections between SGD and the randomized Kaczmarz method. This connection, along with our new results, shows that the choice of step size in the Kaczmarz method offers a tradeoff between convergence rate and horizon, and also allows for a convergence bound when the rows are sampled uniformly. For simplicity, we only considered SGD with a fixed step size γ, which is appropriate when the target accuracy is known in advance. Our analysis can be adapted also to decaying step sizes.
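For concreteness, the Strohmer-Vershynin variant of the Kaczmarz update (5.2) from Section 5, with c = 1 and rows drawn proportionally to ||a_i||_2², can be sketched as follows on a consistent (zero-residual) system, where no convergence horizon arises. The problem sizes and iteration count are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 300, 20
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true                       # consistent system: zero residual at x*

row_norms2 = (A * A).sum(axis=1)
p = row_norms2 / row_norms2.sum()    # row i sampled proportionally to ||a_i||^2

x = np.zeros(d)
for i in rng.choice(n, size=3000, p=p):
    # Update (5.2) with c = 1: project x onto the hyperplane <a_i, x> = b_i.
    x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]

assert np.linalg.norm(x - x_true) < 1e-8
```

With an inconsistent system (nonzero residual), the same loop with c = 1 would stall at the convergence horizon discussed above; shrinking the step size trades convergence rate for a smaller horizon.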
Our discussion of importance sampling is limited to a static reweighting of the sampling distribution. A more sophisticated approach would be to update the sampling distribution dynamically as the method progresses, as we gain more information about the relative importance of components (e.g. about ||∇f_i(x*)||). Such dynamic sampling is sometimes attempted heuristically, and obtaining a rigorous framework for it would be desirable.

References

[1] F. Bach and E. Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning. Advances in Neural Information Processing Systems (NIPS), 2011.
[2] Y. Censor, P. P. B. Eggermont, and D. Gordon. Strong underrelaxation in Kaczmarz's method for inconsistent systems. Numerische Mathematik, 41(1):83–92, 1983.
[3] R. Foygel and N. Srebro. Concentration-based guarantees for low-rank matrix reconstruction. 24th Ann. Conf. Learning Theory (COLT), 2011.
[4] M. Hanke and W. Niethammer. On the acceleration of Kaczmarz's method for inconsistent linear systems. Linear Algebra and its Applications, 130:83–98, 1990.
[5] G. T. Herman. Fundamentals of Computerized Tomography: Image Reconstruction from Projections. Springer, 2009.
[6] G. N. Hounsfield. Computerized transverse axial scanning (tomography): Part 1. Description of system. British Journal of Radiology, 46(552):1016–1022, 1973.
[7] S. Kaczmarz. Angenäherte Auflösung von Systemen linearer Gleichungen. Bull. Int. Acad. Polon. Sci. Lett. Ser. A, pages 335–357, 1937.
[8] N. Le Roux, M. W. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2012.
[9] F. Natterer. The Mathematics of Computerized Tomography, volume 32 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, 2001. ISBN 0-89871-493-1. doi: 10.1137/1.9780898719284.
URL http://dx.doi.org/10.1137/1.9780898719284. Reprint of the 1986 original.
[10] D. Needell. Randomized Kaczmarz solver for noisy linear systems. BIT, 50(2):395–403, 2010. ISSN 0006-3835. doi: 10.1007/s10543-010-0265-5. URL http://dx.doi.org/10.1007/s10543-010-0265-5.
[11] D. Needell and R. Ward. Two-subspace projection method for coherent overdetermined linear systems. Journal of Fourier Analysis and Applications, 19(2):256–269, 2013.
[12] A. Nemirovski. Efficient methods in convex programming. 2005.
[13] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer, 2004.
[14] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optimiz., 22(2):341–362, 2012.
[15] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program., pages 1–38, 2012.
[16] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.
[17] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. J. Mach. Learn. Res., 14(1):567–599, 2013.
[18] N. Srebro, K. Sridharan, and A. Tewari. Smoothness, low noise and fast rates. Advances in Neural Information Processing Systems (NIPS), 2010.
[19] T. Strohmer and R. Vershynin. A randomized Kaczmarz algorithm with exponential convergence. J. Fourier Anal. Appl., 15(2):262–278, 2009. ISSN 1069-5869. doi: 10.1007/s00041-008-9030-4. URL http://dx.doi.org/10.1007/s00041-008-9030-4.
[20] K. Tanabe. Projection method for solving a singular system of linear equations and its applications. Numerische Mathematik, 17(3):203–214, 1971.
[21] T. M. Whitney and R. K. Meany. Two algorithms related to the method of steepest descent. SIAM Journal on Numerical Analysis, 4(1):109–118, 1967.
[22] P. Zhao and T. Zhang. Stochastic optimization with importance sampling. Submitted, 2014.
Tight Bounds for Influence in Diffusion Networks and Application to Bond Percolation and Epidemiology

Rémi Lemonnier (1,2), Kevin Scaman (1), Nicolas Vayatis (1)
(1) CMLA – ENS Cachan, CNRS, France; (2) 1000mercis, Paris, France
{lemonnier, scaman, vayatis}@cmla.ens-cachan.fr

Abstract

In this paper, we derive theoretical bounds for the long-term influence of a node in an Independent Cascade Model (ICM). We relate these bounds to the spectral radius of a particular matrix and show that the behavior is sub-critical when this spectral radius is lower than 1. More specifically, we point out that, in general networks, the sub-critical regime behaves in O(√n), where n is the size of the network, and that this upper bound is met for star-shaped networks. We apply our results to epidemiology and percolation on arbitrary networks, and derive a bound for the critical value beyond which a giant connected component arises. Finally, we show empirically the tightness of our bounds for a large family of networks.

1 Introduction

The emergence of social graphs on the World Wide Web has had a considerable effect on the propagation of ideas and information. For advertisers, these new diffusion networks have become a favored vector for viral marketing operations, which consist of advertisements that people are likely to share by themselves with their social circle, thus creating propagation dynamics somewhat similar to the spreading of a virus in epidemiology ([1]). Of particular interest is the problem of influence maximization, which consists of selecting the top k nodes of the network to infect at time t = 0 in order to maximize, in expectation, the final number of infected nodes at the end of the epidemic. This problem was first formulated by Domingos and Richardson in [2] and later expressed in [3] as an NP-hard discrete optimization problem under the Independent Cascade (IC) framework, a widely-used probabilistic model for information propagation.
From an algorithmic point of view, influence maximization has been fairly well studied. Assuming the transmission probabilities of all edges are known, Kempe, Kleinberg and Tardos ([3]) derived a greedy algorithm based on Monte-Carlo simulations that was shown to approximate the optimal solution up to a factor 1 − 1/e, building on classical results of optimization theory. Since then, various techniques were proposed in order to significantly improve the scalability of this algorithm ([4, 5, 6, 7]), and also to provide an estimate of the transmission probabilities from real data ([8, 9]). Recently, a series of papers ([10, 11, 12]) introduced continuous-time diffusion networks in which infection spreads during a time period T at varying rates across the different edges. While these models provide a more accurate representation of real-world networks for finite T, they are equivalent to the IC model when T → ∞. In this paper, we will focus on this long-term behavior of the contagion. From a theoretical point of view, little is known about the influence maximization problem under the IC model framework. The most celebrated result, established by Newman ([13]), proves the equivalence between bond percolation and the Susceptible-Infected-Removed (SIR) model in epidemiology ([14]), which can be identified with a special case of the IC model where transmission probabilities are equal among all infectious edges. In this paper, we propose new bounds on the influence of any set of nodes. Moreover, we prove the existence of an epidemic threshold for a key quantity defined by the spectral radius of a given hazard matrix. Under this threshold, the influence of any given set of nodes in a network of size n will be O(√n), while the influence of a randomly chosen set of nodes will be O(1).
We provide empirical evidence that these bounds are sharp for a family of graphs and sets of initial influencers, and can therefore be used as what are, to our knowledge, the first closed-form formulas for influence estimation. We show that these results generalize bounds obtained on the SIR model by Draief, Ganesh and Massoulié ([15]) and are closely related to recent results on percolation on finite inhomogeneous random graphs ([16]). The rest of the paper is organized as follows. In Sec. 2, we recall the definition of the Information Cascades Model and introduce useful notations. In Sec. 3, we derive theoretical bounds for the influence. In Sec. 4, we show that our results also apply to the fields of percolation and epidemiology and generalize existing results in these fields. In Sec. 5, we illustrate our results by applying them to simple networks and retrieving well-known results. In Sec. 6, we perform experiments in order to show that our bounds are sharp for a family of graphs and sets of initial nodes. 2 Information Cascades Model 2.1 Influence in random networks and infection dynamics Let G = (V, E) be a directed network of n nodes and A ⊂ V be a set of n0 nodes that are initially contagious (e.g. aware of a piece of information, infected by a disease or adopting a product). In the sequel, we will refer to A as the influencers. The behavior of the cascade is modeled using a probabilistic framework. The influencer nodes spread the contagion through the network by means of transmission through the edges of the network. More specifically, each contagious node can infect its neighbors with a certain probability. The influence of A, denoted by σ(A), is the expected number of nodes reached by the contagion originating from A, i.e. σ(A) = Σ_{v∈V} P(v is infected by the contagion | A). (1) We consider three infection dynamics, which we will show in the next section to be equivalent with respect to the total number of infected nodes at the end of the epidemic.
Discrete-Time Information Cascades [DTIC(P)] At time t = 0, only the influencers are infected. Given a matrix P = (pij)ij ∈ [0, 1]^{n×n}, each node i that receives the contagion at time t may transmit it at time t + 1 along its outgoing edge (i, j) ∈ E with probability pij. Node i cannot make any further attempt to infect its neighbors in subsequent rounds. The process terminates when no more infections are possible. Continuous-Time Information Cascades [CTIC(F, T)] At time t = 0, only the influencers are infected. Given a matrix F = (fij)ij of non-negative integrable functions, each node i that receives the contagion at time t may transmit it at time s > t along its outgoing edge (i, j) ∈ E with stochastic rate of occurrence fij(s − t). The process terminates at a given deterministic time T > 0. This model is much richer than discrete-time IC, but we will focus here on its behavior when T = ∞. Random Networks [RN(P)] Given a matrix P = (pij)ij ∈ [0, 1]^{n×n}, each edge (i, j) ∈ E is removed independently of the others with probability 1 − pij. A node i ∈ V is said to be infected if i is linked to at least one element of A in the spanning subgraph G′ = (V, E′), where E′ ⊂ E is the set of non-removed edges. For any v ∈ V, we will designate by influence of v the influence of the set containing only v, i.e. σ({v}). We will show in Section 4.2 that, if P is symmetric and G undirected, these three infection processes are equivalent to bond percolation, and the influence of a node v is also equal to the expected size of the connected component containing v in G′. This makes our results applicable to percolation in arbitrary networks. Following the percolation literature, we will call sub-critical a cascade whose influence is not proportional to the size of the network n. 2.2 The hazard matrix In order to linearize the influence problem and derive upper bounds, we introduce the concept of the hazard matrix, which describes the behavior of the information cascade.
As we will see in the following, in the case of Continuous-time Information Cascades, this matrix gives, for each edge of the network, the integral of the instantaneous rate of transmission (known as the hazard function). The spectral radius of this matrix plays a key role in the influence of the cascade. Definition. For a given graph G = (V, E) and edge transmission probabilities pij, let H be the n × n matrix, denoted as the hazard matrix, whose coefficients are Hij = −ln(1 − pij) if (i, j) ∈ E, and Hij = 0 otherwise. (2) The next lemma shows the equivalence between the three definitions of the previous section. Lemma 1. For a given graph G = (V, E), set of influencers A, and transmission probability matrix P, the distribution of the set of infected nodes is the same under the infection dynamics DTIC(P), CTIC(F, ∞) and RN(P), provided that, for any (i, j) ∈ E, ∫₀^∞ fij(t) dt = Hij. Definition. For a given set of influencers A ⊂ V, we will denote by H(A) the hazard matrix with zeros along the columns whose indices are in A: H(A)ij = 1{j ∉ A} Hij. (3) We recall that, for any square matrix M, its spectral radius ρ(M) is defined by ρ(M) = max_i(|λ_i|), where λ1, ..., λn are the (possibly repeated) eigenvalues of M. We will also use the fact that, when M is a real square matrix with positive entries, ρ((M + M⊤)/2) = sup_X (X⊤MX)/(X⊤X). Remark. When the pij are small, the hazard matrix is very close to the transmission matrix P. This implies that, for low pij values, the spectral radius of H will be very close to that of P. More specifically, a simple calculation yields ρ(P) ≤ ρ(H) ≤ (−ln(1 − ∥P∥∞)/∥P∥∞) ρ(P), (4) where ∥P∥∞ = max_{i,j} pij. The relatively slow increase of −ln(1 − x)/x as x → 1⁻ implies that ρ(P) and ρ(H) will be of the same order of magnitude even for high (but lower than 1) values of ∥P∥∞.
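To make the definition concrete, here is a short numerical sketch (our own illustration, not from the paper; the random P is arbitrary) that builds the hazard matrix H from a transmission matrix P and checks the two-sided bound (4):

```python
import numpy as np

rng = np.random.default_rng(0)

def hazard_matrix(P):
    # H_ij = -ln(1 - p_ij) on edges, 0 elsewhere (Eq. (2)); assumes p_ij < 1
    H = np.zeros_like(P)
    mask = P > 0
    H[mask] = -np.log(1.0 - P[mask])
    return H

def spectral_radius(M):
    return float(np.max(np.abs(np.linalg.eigvals(M))))

n = 30
P = rng.uniform(0.0, 0.6, size=(n, n))  # dense random transmission probabilities
np.fill_diagonal(P, 0.0)                # no self-loops

H = hazard_matrix(P)
rho_P, rho_H = spectral_radius(P), spectral_radius(H)
pmax = P.max()
upper = (-np.log(1.0 - pmax) / pmax) * rho_P

# Sanity check of Eq. (4): rho(P) <= rho(H) <= (-ln(1-||P||_inf)/||P||_inf) * rho(P)
assert rho_P <= rho_H + 1e-9 and rho_H <= upper + 1e-9
```

The two inequalities follow from elementwise monotonicity of the spectral radius for non-negative matrices, since p ≤ −ln(1 − p) ≤ (−ln(1 − pmax)/pmax) · p entrywise.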
The first bound (Proposition 1) applies to any set of influencers A such that |A| = n0. Intuitively, this result corresponds to a best-case scenario (or a worst-case scenario, depending on the viewpoint), since we can target any set of nodes so as to maximize the resulting contagion. Proposition 1. Define ρc(A) = ρ((H(A) + H(A)⊤)/2). Then, for any A such that |A| = n0 < n, denoting by σ(A) the expected number of nodes reached by the cascade starting from A: σ(A) ≤ n0 + γ1(n − n0), (5) where γ1 is the smallest solution in [0, 1] of the following equation: γ1 − 1 + exp(−ρc(A)γ1 − ρc(A)n0/(γ1(n − n0))) = 0. (6) Corollary 1. Under the same assumptions: • if ρc(A) < 1, σ(A) ≤ n0 + √(ρc(A)/(1 − ρc(A))) · √(n0(n − n0)), • if ρc(A) ≥ 1, σ(A) ≤ n − (n − n0) exp(−ρc(A) − 2ρc(A)/(√(4n/n0 − 3) − 1)). In particular, when ρc(A) < 1, σ(A) = O(√n) and the regime is sub-critical. The second result (Proposition 2) applies in the case where A is drawn from a uniform distribution over the ensemble of sets of n0 nodes chosen amongst n (denoted by P_{n0}(V)). This result corresponds to the average-case scenario in a setting where the initial influencer nodes are not known and drawn independently of the transmissions over each edge. Proposition 2. Define ρc = ρ((H + H⊤)/2). Assume the set of influencers A is drawn from a uniform distribution over P_{n0}(V). Then, denoting by σ_uniform the expected number of nodes reached by the cascade starting from A: σ_uniform ≤ n0 + γ2(n − n0), (7) where γ2 is the unique solution in [0, 1] of the following equation: γ2 − 1 + exp(−ρc γ2 − ρc n0/(n − n0)) = 0. (8) Corollary 2. Under the same assumptions: • if ρc < 1, σ_uniform ≤ n0/(1 − ρc), • if ρc ≥ 1, σ_uniform ≤ n − (n − n0) exp(−ρc/(1 − n0/n)). In particular, when ρc < 1, σ_uniform = O(1) and the regime is sub-critical. The difference in the sub-critical regime between O(√n) and O(1) for the worst and average case influence is an important feature of our results, and is verified in our experiments (see Sec. 6).
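The fixed-point equation (6) has no closed form, but γ1 is easy to compute numerically. A minimal sketch (our own, with illustrative parameters) finds the smallest root by a scan followed by bisection and checks that the resulting bound (5) is dominated by the looser closed form of Corollary 1:

```python
import math

def g(gamma, rho, n, n0):
    # Left-hand side of Eq. (6)
    return gamma - 1 + math.exp(-rho * gamma - rho * n0 / (gamma * (n - n0)))

def gamma1(rho, n, n0, steps=10000):
    # g(0+) = -1, so the first sign change on a fine grid brackets the
    # smallest root in (0, 1]; refine it by bisection.
    lo = 1e-9
    prev = g(lo, rho, n, n0)
    for k in range(1, steps + 1):
        hi = k / steps
        cur = g(hi, rho, n, n0)
        if prev <= 0 <= cur:
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if g(mid, rho, n, n0) <= 0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        lo, prev = hi, cur
    return 1.0

n, n0, rho = 1000, 10, 0.5  # illustrative sub-critical instance
gam = gamma1(rho, n, n0)
bound_prop = n0 + gam * (n - n0)                                        # Proposition 1
bound_cor = n0 + math.sqrt(rho / (1 - rho)) * math.sqrt(n0 * (n - n0))  # Corollary 1
assert 0 < gam < 1
assert bound_prop <= bound_cor + 1e-6
```

Corollary 1 follows from (6) via 1 − e^{−x} ≤ x, which explains why the computed Proposition 1 bound is always at least as tight.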
Intuitively, when the network is inhomogeneous and contains highly central nodes (e.g. scale-free networks), there will be a significant difference between specifically targeting the most central nodes and random targeting (which will most probably hit a peripheral node). 4 Application to epidemiology and percolation Building on the celebrated equivalences between the fields of percolation, epidemiology and influence maximization, we show that our results generalize existing results in these fields. 4.1 Susceptible-Infected-Removed (SIR) model in epidemiology We show here that Proposition 1 further improves results on the SIR model in epidemiology. This widely used model was introduced by Kermack and McKendrick ([14]) in order to model the propagation of a disease in a given population. In this setting, nodes represent individuals, which can be in one of three possible states: susceptible (S), infected (I) or removed (R). At t = 0, a subset A of n0 nodes is infected and the epidemic spreads according to the following evolution. Each infected node transmits the infection along its outgoing edge (i, j) ∈ E at stochastic rate of occurrence β and is removed from the graph at stochastic rate of occurrence δ. The process ends at a given time T > 0. It is straightforward that, if the removal events are not observed, this infection process is equivalent to CTIC(F, T) where, for any (i, j) ∈ E, fij(t) = β exp(−δt). The hazard matrix H is therefore equal to (β/δ)A, where A = (1{(i,j)∈E})ij is the adjacency matrix of the underlying network. Note that, by Lemma 1, our results can be used to model the total number of infected nodes in a setting where the infection and recovery rates of a given node exhibit non-exponential behavior. For instance, incubation periods for different individuals generally follow a log-normal distribution [17], which indicates that continuous-time IC with a log-normal rate of removal might be well suited to model some kinds of infections.
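As a sanity check (our own illustration, not from the paper), the SIR hazard matrix H = (β/δ)A and the quantity ρc(A) can be computed directly. For an undirected star with the center as the only influencer, the symmetrized matrix (H(A) + H(A)⊤)/2 is proportional to the star's adjacency matrix, so its spectral radius has the closed form (β/(2δ))√m for m leaves:

```python
import numpy as np

# Undirected star: node 0 is the center, nodes 1..m are leaves.
m = 100
n = m + 1
Adj = np.zeros((n, n))
Adj[0, 1:] = Adj[1:, 0] = 1.0

beta, delta = 0.1, 1.0
# SIR hazard matrix: the integral of f_ij(t) = beta * exp(-delta * t) is beta/delta
H = (beta / delta) * Adj

# H(A) for influencers A = {0}: zero out the columns indexed by A (Eq. (3))
H_A = H.copy()
H_A[:, 0] = 0.0

rho_c = float(np.max(np.linalg.eigvalsh((H_A + H_A.T) / 2)))
# Closed form for the star: (beta/(2*delta)) * sqrt(m)
assert abs(rho_c - (beta / (2 * delta)) * np.sqrt(m)) < 1e-9

# rho_c(A) is dominated by (beta/delta) * rho(Adj), cf. Lemma 2 below
rho_Adj = float(np.max(np.abs(np.linalg.eigvalsh(Adj))))
assert rho_c <= (beta / delta) * rho_Adj + 1e-9
```

With these parameters ρc(A) = 0.5, i.e. the cascade started at the star's center is sub-critical.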
It was recently shown by Draief, Ganesh and Massoulié ([15]) that, in the case of undirected networks, and if βρ(A) < δ, σ(A) ≤ √(n n0)/(1 − (β/δ)ρ(A)). (9) This result shows that, when ρ(H) = (β/δ)ρ(A) < 1, the influence of a set of nodes A is O(√n). We show in the next lemma that this result is a direct consequence of Corollary 1: the condition ρc(A) < 1 is weaker than ρ(H) < 1 and, under these conditions, the bound of Corollary 1 is tighter. Lemma 2. For any symmetric adjacency matrix A, initial set of influencers A such that |A| = n0 < n, δ > 0 and β < δ/ρ(A), we have simultaneously ρc(A) ≤ (β/δ)ρ(A) and n0 + √(ρc(A)/(1 − ρc(A))) · √(n0(n − n0)) ≤ √(n n0)/(1 − (β/δ)ρ(A)), (10) where the condition β < δ/ρ(A) imposes that the regime is sub-critical. Moreover, these new bounds capture the behavior of the influence more accurately in extreme cases. In the limit β → 0, the difference between the two bounds is significant, because Proposition 1 yields σ(A) → n0 whereas (9) only ensures σ(A) ≤ √(n n0). When n = n0, Proposition 1 also ensures that σ(A) = n0 whereas (9) yields σ(A) ≤ n0/(1 − (β/δ)ρ(A)). Secondly, Proposition 1 also gives bounds in the case βρ(A) ≥ δ. Finally, Proposition 1 applies to more general cases than the classical homogeneous SIR model, and allows infection and recovery rates to vary across individuals. 4.2 Bond percolation Given a finite undirected graph G = (V, E), bond percolation theory describes the behavior of the connected clusters of the spanning subgraph of G obtained by retaining a subset E′ ⊂ E of edges of G according to a given distribution. When these removals occur independently along each edge with the same probability 1 − p, this process is called homogeneous percolation and is fairly well understood (see e.g. [18]). The inhomogeneous case, where the independent edge removal probabilities 1 − pij vary across the edges, is more intricate and has been the subject of recent studies.
In particular, results on critical probabilities and the size of the giant component have been obtained by Bollobás, Janson and Riordan in [16]. However, these bounds hold for a particular class of asymptotic graphs (inhomogeneous random graphs) when n → ∞. In the next lemma, we show that our results can be used to obtain bounds that hold in expectation for any fixed graph. Lemma 3. Let P = (pij)ij ∈ [0, 1]^{n×n} be a symmetric matrix. Let G′ = (V, E′) be the undirected subgraph of G such that each edge {i, j} ∈ E is removed independently with probability 1 − pij. Let Gd = (V, Ed) be the directed graph such that (i, j) ∈ Ed ⟺ {i, j} ∈ E. Then, for any v ∈ V, the expected size of the connected component containing v in G′ is equal to the influence of v in Gd under the infection process DTIC(P). We now derive an upper bound for C1(G′), the size of the largest connected component of the spanning subgraph G′ = (V, E′). In the following, we will denote by E[C1(G′)] the expected value of this random variable, given P = (pij)ij. Proposition 3. Let G = (V, E) be an undirected network where each edge {i, j} ∈ E has an independent probability 1 − pij of being removed. The expected size of the largest connected component of the resulting subgraph G′ is upper bounded by: E[C1(G′)] ≤ n√γ3, (11) where γ3 is the unique solution in [0, 1] of the following equation: γ3 − 1 + ((n − 1)/n) exp(−(n/(n − 1)) ρ(H) γ3) = 0. (12) Moreover, the resulting network has a probability of being connected upper bounded by: P(G′ is connected) ≤ γ3. (13) In the case ρ(H) < 1, we can further simplify our bounds in the same way as for Propositions 1 and 2. Corollary 3. In the case ρ(H) < 1, E[C1(G′)] ≤ √(n/(1 − ρ(H))). Whereas our results hold for any n ∈ N, classical results in percolation theory study the asymptotic behavior of sequences of graphs when n → ∞.
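Before turning to asymptotics, note that Equation (12) is again a one-dimensional fixed point. A minimal sketch (our own, with illustrative parameters) solves it by bisection, which is valid since the left-hand side is −1/n at γ = 0 and positive at γ = 1, and checks the result against the closed form of Corollary 3:

```python
import math

def h(gamma, rho, n):
    # Left-hand side of Eq. (12)
    return gamma - 1 + (n - 1) / n * math.exp(-n / (n - 1) * rho * gamma)

def gamma3(rho, n):
    # h(0) = -1/n < 0 and h(1) > 0, so bisection converges to the root in [0, 1]
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if h(mid, rho, n) <= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n, rho = 1000, 0.5  # illustrative sub-critical instance, rho = rho(H) < 1
g3 = gamma3(rho, n)
bound_prop3 = n * math.sqrt(g3)        # Proposition 3: E[C1(G')] <= n * sqrt(gamma3)
bound_cor3 = math.sqrt(n / (1 - rho))  # Corollary 3
assert bound_prop3 <= bound_cor3 + 1e-6
```

As in Corollary 1, the closed form comes from bounding the exponential, so the solved bound is always at least as tight as Corollary 3.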
In order to further compare our results, we therefore consider sequences of spanning subgraphs (G′_n)_{n∈N}, obtained by removing each edge of graphs of n nodes (G_n)_{n∈N} with probability 1 − p^n_ij. A previous result ([16], Corollary 3.2 of Section 5) states that, for particular sequences known as inhomogeneous random graphs and under a given sub-criticality condition, C1(G′_n) = o(n) asymptotically almost surely (a.a.s.), i.e. with probability going to 1 as n → ∞. Using Proposition 3, we get for our part the following result: Corollary 4. Assume the sequence (H^n = (−ln(1 − p^n_ij))_ij)_{n∈N} is such that lim sup_{n→∞} ρ(H^n) < 1. (14) Then, for any ϵ > 0, we have asymptotically almost surely when n → ∞, C1(G′_n) = o(n^{1/2+ϵ}). (15) This result is, to our knowledge, the first to bound the expected size of the largest connected component in arbitrary networks. 5 Application to particular networks In order to illustrate our theoretical results, we now apply our bounds to three specific networks and compare them to existing results, showing that our bounds are always of the same order as these specific results. We consider three particular networks: 1) star-shaped networks, 2) Erdős-Rényi networks and 3) random graphs with a given expected degree distribution. In order to simplify these problems and exploit existing theorems, we will consider in this section that pij = p is fixed for each edge {i, j} ∈ E. Infection dynamics thus only depend on p, the set of influencers A, and the structure of the underlying network. 5.1 Star-shaped networks For a star-shaped network centered around a given node v1, and A = {v1}, the exact influence is computable and writes σ({v1}) = 1 + p(n − 1). As H(A)ij = −ln(1 − p) 1{i=1, j≠1}, the spectral radius is given by ρ((H(A) + H(A)⊤)/2) = (−ln(1 − p)/2) √(n − 1). (16) Therefore, Proposition 1 states that σ({v1}) ≤ 1 + (n − 1)γ1, where γ1 is the solution of the equation 1 − γ1 = exp((γ1√(n − 1) + 1/(γ1√(n − 1))) ln(1 − p)/2).
(17) It is worth mentioning that, when p = 1/√(n − 1), γ1 = 1/√(n − 1) is a solution of (17), and the bound is therefore σ({v1}) ≤ 1 + √(n − 1), which is tight. Note that, in the case of star-shaped networks, the influence does not present a critical behavior and is always linear with respect to the total number of nodes n. 5.2 Erdős-Rényi networks For Erdős-Rényi networks G(n, p) (i.e. undirected networks with n nodes where each couple of nodes (i, j) ∈ V² belongs to E independently of the others with probability p), the exact influence of a set of nodes is not known. However, percolation theory characterizes the limit behavior of the giant connected component when n → ∞. In the simplest case of Erdős-Rényi networks G(n, c/n), the following result holds: Lemma 4 (taken from [16]). For a given sequence of Erdős-Rényi networks G(n, c/n), we have: • if c < 1, C1(G(n, c/n)) ≤ (3/(1 − c)²) log(n) a.a.s., • if c > 1, C1(G(n, c/n)) = (1 + o(1))βn a.a.s., where β − 1 + exp(−βc) = 0. As previously stated, our results hold for any given graph, and not only asymptotically. However, we obtain an asymptotic behavior consistent with the aforementioned result. Indeed, using the notations of Section 4.2, H^n_ij = −ln(1 − c/n) 1{i≠j} and ρ(H^n) = −(n − 1) ln(1 − c/n). Using Proposition 3, and noting that γ3 = (1 + o(1))β, we get that, for any ϵ > 0: • if c < 1, C1(G(n, c/n)) = o(n^{1/2+ϵ}) a.a.s., • if c > 1, C1(G(n, c/n)) ≤ (1 + o(1))βn^{1+ϵ} a.a.s., where β − 1 + exp(−βc) = 0. 5.3 Random graphs with given expected degree distribution In this section, we apply our bounds to random graphs whose expected degree distribution is fixed (see e.g. [19], Section 13.2.2). More specifically, let w = (wi)_{i∈{1,...,n}} be the expected degrees of the nodes of the network. For a fixed w, let G(w) be a random graph whose edges are selected independently and randomly with probability qij = 1{i≠j} wi wj / Σ_k wk.
(18) For these graphs, results on the volume of connected components (i.e. the expected sum of the degrees of the nodes in these components) were derived in [20], but our work gives, to our knowledge, the first result on the size of the giant component. Note that Erdős-Rényi G(n, p) networks are a special case of (18) where wi = np for any i ∈ V. In order to further compare our results, we note that these graphs are also very similar to the widely used configuration model, where node degrees are fixed to a sequence w, the main difference being that the occupation probabilities pij are in this case no longer independent. For configuration models, a giant component exists if and only if Σ_i w_i² > 2 Σ_i w_i ([21, 22]). In the case of graphs with a given expected degree distribution, we retrieve the key role played by the ratio Σ_i w_i² / Σ_i w_i in our criterion for the non-existence of the giant component, given by ρ((H + H⊤)/2) < 1, where ρ((H + H⊤)/2) ≈ ρ((qij)ij) ≤ Σ_i w_i² / Σ_i w_i. (19) The left-hand approximation is particularly good when the qij are small. This is for instance the case as soon as there exists α < 1 such that, for any i ∈ V, wi = o(n^α). The right-hand side is based on the fact that the spectral radius of the matrix (qij + 1{i=j} w_i²/Σ_k w_k)ij is given by Σ_i w_i² / Σ_i w_i. 6 Experimental results In this section, we show that the bounds given in Sec. 3 are tight (i.e. very close to empirical results on particular graphs), and are good approximations of the influence on a large set of random networks. Fig. 1a compares experimental simulations of the influence to the bound derived in Proposition 1.
The considered networks have n = 1000 nodes and are of 6 types (see e.g. [19] for further details on these different networks): 1) Erdős-Rényi networks, 2) preferential attachment networks, 3) small-world networks, 4) geometric random networks ([23]), 5) 2D regular grids and 6) totally connected networks with fixed weight b ∈ [0, 1] except for the ingoing and outgoing edges of the influencer node A = {v1}, which have weight a ∈ [0, 1]. Except for totally connected networks, edge probabilities are set to the same value p for each edge (this parameter was used to tune the spectral radius ρc(A)). All points of the plots are averages over 100 simulations. The results show that the bound in Proposition 1 is tight (see totally connected networks in Fig. 1a) and close to the real influence for a large class of random networks. In particular, the tightness of the bound around ρc(A) = 1 validates the √n behavior of the worst-case influence in the sub-critical regime. [Figure 1: Empirical influence on random networks of various types, plotted against the spectral radius of the hazard matrix. (a) Fixed set of influencers; (b) uniformly distributed set of influencers. The solid lines are the upper bounds in Propositions 1 (for Fig. 1a) and 2 (for Fig. 1b).] Similarly, Fig. 1b compares experimental simulations of the influence to the bound derived in Proposition 2 in the case of random initial influencers.
While this bound is not as tight as the previous one, its behavior agrees with the experimental simulations, and it provides a relatively good approximation of the influence under a random set of initial influencers. It is worth mentioning that the bound is tight in the sub-critical regime and shows that Corollary 2 is a good approximation of σ_uniform when ρc < 1. In order to verify the criticality of ρc(A) = 1, we compared the behavior of σ(A) w.r.t. the size of the network n. When ρc(A) < 1 (see Fig. 2a, in which ρc(A) = 0.5), σ(A) = O(√n), and the bound is tight. On the contrary, when ρc(A) > 1 (see Fig. 2b, in which ρc(A) = 1.5), σ(A) = O(n), and σ(A) is linear w.r.t. n for most random networks. [Figure 2: Influence w.r.t. the size of the network in the sub-critical and super-critical regime. (a) Sub-critical regime: ρc(A) = 0.5; (b) super-critical regime: ρc(A) = 1.5. The solid line is the upper bound in Proposition 1. Note the square-root versus linear behavior.] 7 Conclusion In this paper, we derived the first upper bounds for the influence of a given set of nodes in an arbitrary finite graph under the Independent Cascade Model (ICM) framework, and related them to the spectral radius of a given hazard matrix. We showed that these bounds can also be used to generalize previous results in the fields of epidemiology and percolation. Finally, we provided empirical evidence that these bounds are close to the best possible for general graphs. Acknowledgments This research is part of the SODATECH project funded by the French Government within the program of “Investments for the Future – Big Data”.
References [1] Justin Kirby and Paul Marsden. Connected marketing: the viral, buzz and word of mouth revolution. Elsevier, 2006. [2] Pedro Domingos and Matt Richardson. Mining the network value of customers. In Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 57–66. ACM, 2001. [3] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’03, pages 137–146, New York, NY, USA, 2003. ACM. [4] Wei Chen, Yajun Wang, and Siyu Yang. Efficient influence maximization in social networks. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 199–208. ACM, 2009. [5] Wei Chen, Chi Wang, and Yajun Wang. Scalable influence maximization for prevalent viral marketing in large-scale social networks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1029–1038. ACM, 2010. [6] Amit Goyal, Wei Lu, and Laks V. S. Lakshmanan. CELF++: optimizing the greedy algorithm for influence maximization in social networks. In Proceedings of the 20th International Conference Companion on World Wide Web, pages 47–48. ACM, 2011. [7] Kouzou Ohara, Kazumi Saito, Masahiro Kimura, and Hiroshi Motoda. Predictive simulation framework of stochastic diffusion model for identifying top-k influential nodes. In Asian Conference on Machine Learning, pages 149–164, 2013. [8] Manuel Gomez Rodriguez, Jure Leskovec, and Andreas Krause. Inferring networks of diffusion and influence. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1019–1028. ACM, 2010. [9] Seth A. Myers and Jure Leskovec. On the convexity of latent social network inference. In NIPS, pages 1741–1749, 2010.
[10] Manuel Gomez-Rodriguez, David Balduzzi, and Bernhard Schölkopf. Uncovering the temporal dynamics of diffusion networks. In ICML, pages 561–568, 2011. [11] Manuel G. Rodriguez and Bernhard Schölkopf. Influence maximization in continuous time diffusion networks. In Proceedings of the 29th International Conference on Machine Learning (ICML-12), pages 313–320, 2012. [12] Nan Du, Le Song, Manuel Gomez-Rodriguez, and Hongyuan Zha. Scalable influence estimation in continuous-time diffusion networks. In NIPS, pages 3147–3155, 2013. [13] Mark E. J. Newman. Spread of epidemic disease on networks. Physical Review E, 66(1):016128, 2002. [14] William O. Kermack and Anderson G. McKendrick. Contributions to the mathematical theory of epidemics. II. The problem of endemicity. Proceedings of the Royal Society of London, Series A, 138(834):55–83, 1932. [15] Moez Draief, Ayalvadi Ganesh, and Laurent Massoulié. Thresholds for virus spread on networks. In Proceedings of the 1st International Conference on Performance Evaluation Methodologies and Tools, page 51. ACM, 2006. [16] Béla Bollobás, Svante Janson, and Oliver Riordan. The phase transition in inhomogeneous random graphs. Random Structures & Algorithms, 31(1):3–122, 2007. [17] Kenrad E. Nelson. Epidemiology of infectious disease: general principles. Infectious Disease Epidemiology: Theory and Practice. Gaithersburg, MD: Aspen Publishers, pages 17–48, 2007. [18] Svante Janson, Tomasz Łuczak, and Andrzej Rucinski. Random Graphs, volume 45. John Wiley & Sons, 2011. [19] Mark Newman. Networks: An Introduction. Oxford University Press, Inc., New York, NY, USA, 2010. [20] Fan Chung and Linyuan Lu. Connected components in random graphs with given expected degree sequences. Annals of Combinatorics, 6(2):125–145, 2002. [21] Michael Molloy and Bruce Reed. A critical point for random graphs with a given degree sequence. Random Structures & Algorithms, 6(2-3):161–180, 1995. [22] Michael Molloy and Bruce Reed.
The size of the giant component of a random graph with a given degree sequence. Combinatorics, Probability and Computing, 7(3):295–305, 1998. [23] Mathew Penrose. Random Geometric Graphs, volume 5. Oxford University Press, Oxford, 2003.
Best-Arm Identification in Linear Bandits Marta Soare Alessandro Lazaric Rémi Munos∗† INRIA Lille – Nord Europe, SequeL Team {marta.soare,alessandro.lazaric,remi.munos}@inria.fr Abstract We study the best-arm identification problem in linear bandit, where the rewards of the arms depend linearly on an unknown parameter θ∗and the objective is to return the arm with the largest reward. We characterize the complexity of the problem and introduce sample allocation strategies that pull arms to identify the best arm with a fixed confidence, while minimizing the sample budget. In particular, we show the importance of exploiting the global linear structure to improve the estimate of the reward of near-optimal arms. We analyze the proposed strategies and compare their empirical performance. Finally, as a by-product of our analysis, we point out the connection to the G-optimality criterion used in optimal experimental design. 1 Introduction The stochastic multi-armed bandit problem (MAB) [16] offers a simple formalization for the study of sequential design of experiments. In the standard model, a learner sequentially chooses an arm out of K and receives a reward drawn from a fixed, unknown distribution relative to the chosen arm. While most of the literature in bandit theory focused on the problem of maximization of cumulative rewards, where the learner needs to trade-off exploration and exploitation, recently the pure exploration setting [5] has gained a lot of attention. Here, the learner uses the available budget to identify as accurately as possible the best arm, without trying to maximize the sum of rewards. Although many results are by now available in a wide range of settings (e.g., best-arm identification with fixed budget [2, 11] and fixed confidence [7], subset selection [6, 12], and multi-bandit [9]), most of the work considered only the multi-armed setting, with K independent arms. 
An interesting variant of the MAB setup is the stochastic linear bandit problem (LB), introduced in [3]. In the LB setting, the input space X is a subset of Rd and when pulling an arm x, the learner observes a reward whose expected value is a linear combination of x and an unknown parameter θ∗ ∈ Rd. Due to the linear structure of the problem, pulling an arm gives information about the parameter θ∗ and, indirectly, about the value of other arms. Therefore, the estimation of K mean-rewards is replaced by the estimation of the d features of θ∗. While in the exploration-exploitation setting the LB has been widely studied both in theory and in practice (e.g., [1, 14]), in this paper we focus on the pure-exploration scenario. The fundamental difference between the MAB and the LB best-arm identification strategies stems from the fact that in MAB an arm is no longer pulled as soon as its sub-optimality is evident (with high probability), while in the LB setting even a sub-optimal arm may offer valuable information about the parameter vector θ∗ and thus improve the accuracy of the estimation in discriminating among near-optimal arms. For instance, consider the situation where K − 2 out of K arms have already been discarded. In order to identify the best arm, MAB algorithms would concentrate the sampling on the two remaining arms to increase the accuracy of the estimate of their mean-rewards until the discarding condition is met for one of them. On the contrary, an LB pure-exploration strategy would seek to pull the arm x ∈ X whose observed reward allows to refine the estimate of θ∗ along the dimensions which are most suited to discriminating between the two remaining arms. Recently, best-arm identification in linear bandits has been studied in a fixed budget setting [10]; in this paper, we study the sample complexity required to identify the best linear arm with a fixed confidence. ∗This work was done when the author was a visiting researcher at Microsoft Research New England.
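This point can be seen on a toy instance (our own illustration; the arms and pull counts are made up, and the OLS notation anticipates Section 2). With two nearly collinear near-optimal arms, one pull of an orthogonal, clearly "sub-optimal" arm shrinks the uncertainty of the estimated gap along the discriminating direction y = x1 − x2 far more than extra pulls of the two candidates themselves:

```python
import numpy as np

# Toy d = 2 instance: x1 and x2 are near-optimal and hard to tell apart,
# x3 spans the direction that separates x1 from x2.
angle = 0.1
x1 = np.array([1.0, 0.0])
x2 = np.array([np.cos(angle), np.sin(angle)])
x3 = np.array([0.0, 1.0])
y = x1 - x2  # the sign of y^T theta* decides which of x1, x2 is best

def gap_width(pulls):
    # Squared uncertainty of the estimated gap y^T theta_hat, i.e. y^T A^{-1} y,
    # where A = sum_t x_t x_t^T is the design matrix of the pulled arms
    A = sum(np.outer(x, x) for x in pulls)
    return float(y @ np.linalg.solve(A, y))

base = [x1] * 10 + [x2] * 10
w_more_top2 = gap_width(base + [x1, x2])  # keep hammering the two candidates
w_info_arm = gap_width(base + [x3])       # a single pull of the sub-optimal arm

assert w_info_arm < w_more_top2  # the sub-optimal arm discriminates better
```

Here the extra pull of x3 reduces the gap uncertainty by more than an order of magnitude, while two extra pulls of x1 and x2 barely help, which is exactly the MAB-versus-LB contrast described above.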
†Current affiliation: Google DeepMind. 2 Preliminaries The setting. We consider the standard linear bandit model. Let X ⊆ Rd be a finite set of arms, where |X| = K and the ℓ2-norm of any arm x ∈ X, denoted ||x||, is upper-bounded by L. Given an unknown parameter θ∗ ∈ Rd, we assume that each time an arm x ∈ X is pulled, a random reward r(x) is generated according to the linear model r(x) = x⊤θ∗ + ε, where ε is zero-mean i.i.d. noise bounded in [−σ, σ]. Arms are evaluated according to their expected reward x⊤θ∗ and we denote by x∗ = arg max_{x∈X} x⊤θ∗ the best arm in X. Also, we use Π(θ) = arg max_{x∈X} x⊤θ to refer to the best arm corresponding to an arbitrary parameter θ. Let Δ(x, x′) = (x − x′)⊤θ∗ be the value gap between two arms; then we denote by Δ(x) = Δ(x∗, x) the gap of x w.r.t. the optimal arm and by Δ_min = min_{x∈X} Δ(x) the minimum gap, where Δ_min > 0. We also introduce the sets Y = {y = x − x′, ∀x, x′ ∈ X} and Y∗ = {y = x∗ − x, ∀x ∈ X} containing all the directions obtained as the difference of two arms (or of an arm and the optimal arm), and we redefine accordingly the gap of a direction as Δ(y) = Δ(x, x′) whenever y = x − x′. The problem. We study the best-arm identification problem. Let x̂(n) be the estimated best arm returned by a bandit algorithm after n steps. We evaluate the quality of x̂(n) by the simple regret R_n = (x∗ − x̂(n))⊤θ∗. While different settings can be defined (see [8] for an overview), here we focus on the (ε, δ)-best-arm identification problem (the so-called PAC setting), where given ε and δ ∈ (0, 1), the objective is to design an allocation strategy and a stopping criterion so that when the algorithm stops, the returned arm x̂(n) is such that P(R_n ≥ ε) ≤ δ, while minimizing the needed number of steps. More specifically, we will focus on the case of ε = 0 and we will provide high-probability bounds on the sample complexity n. The multi-armed bandit case. 
In MAB, the complexity of best-arm identification is characterized by the gaps between arm values, following the intuition that the more similar the arms, the more pulls are needed to distinguish between them. More formally, the complexity is given by the problem-dependent quantity H_MAB = Σ_{i=1}^{K} 1/Δ_i², i.e., the sum of the inverse squared gaps between the best arm and the suboptimal arms. In the fixed budget case, H_MAB determines the probability of returning the wrong arm [2], while in the fixed confidence case, it characterizes the sample complexity [7]. Technical tools. Unlike in the multi-armed bandit scenario, where pulling one arm does not provide any information about other arms, in a linear model we can leverage the rewards observed over time to estimate the expected reward of all the arms in X. Let x_n = (x_1, . . . , x_n) ∈ X^n be a sequence of arms and (r_1, . . . , r_n) the corresponding observed (random) rewards. An unbiased estimate of θ∗ can be obtained by ordinary least-squares (OLS) as θ̂_n = A_{x_n}^{-1} b_{x_n}, where A_{x_n} = Σ_{t=1}^{n} x_t x_t⊤ ∈ R^{d×d} and b_{x_n} = Σ_{t=1}^{n} x_t r_t ∈ R^d. For any fixed sequence x_n, through Azuma's inequality, the prediction error of the OLS estimate is upper-bounded in high probability as follows. Proposition 1. Let c = 2σ√2 and c′ = 6/π². For every fixed sequence x_n, we have¹ P( ∀n ∈ N, ∀x ∈ X, |x⊤θ∗ − x⊤θ̂_n| ≤ c ||x||_{A_{x_n}^{-1}} √(log(c′n²K/δ)) ) ≥ 1 − δ. (1) While in the previous statement x_n is fixed, a bandit algorithm adapts the allocation in response to the rewards observed over time. In this case a different high-probability bound is needed. Proposition 2 (Thm. 2 in [1]). Let θ̂^η_n be the solution to the regularized least-squares problem with regularizer η and let A^η_{x_n} = ηI_d + A_{x_n}. Then for all x ∈ X and every adaptive sequence x_n such that at any step t, x_t only depends on (x_1, r_1, . . . , x_{t−1}, r_{t−1}), w.p. 1 − δ, we have |x⊤θ∗ − x⊤θ̂^η_n| ≤ ||x||_{(A^η_{x_n})^{-1}} ( σ √(d log((1 + nL²/η)/δ)) + η^{1/2} ||θ∗|| ). (2) The crucial difference w.r.t. Eq. 
1 is an additional factor √ d, the price to pay for adapting xn to the samples. In the sequel we will often resort to the notion of design (or “soft” allocation) λ ∈Dk, which prescribes the proportions of pulls to arm x and Dk denotes the simplex X. The counterpart of the design matrix A for a design λ is the matrix Λλ =  x∈X λ(x)xx⊤. From an allocation xn we can derive the corresponding design λxn as λxn(x) = Tn(x)/n, where Tn(x) is the number of times arm x is selected in xn, and the corresponding design matrix is Axn = nΛλxn. 1Whenever Prop.1 is used for all directions y ∈Y, then the logarithmic term becomes log(c′n2K2/δ) because of an additional union bound. For the sake of simplicity, in the sequel we always use logn(K2/δ). 2 3 The Complexity of the Linear Best-Arm Identification Problem θ∗ x3 x1 x2 0 C(x3) C(x1) = C∗ C(x2) Figure 1: The cones corresponding to three arms (dots) in R2. Since θ∗∈C(x1), then x∗= x1. The confidence set S∗(xn) (in green) is aligned with directions x1−x2 and x1 −x3. Given the uncertainty in S∗(xn), both x1 and x3 may be optimal. As reviewed in Sect. 2, in the MAB case the complexity of the best-arm identification task is characterized by the reward gaps between the optimal and suboptimal arms. In this section, we propose an extension of the notion of complexity to the case of linear best-arm identification. In particular, we characterize the complexity by the performance of an oracle with access to the parameter θ∗. Stopping condition. Let C(x)={θ ∈Rd, x ∈Π(θ)} be the set of parameters θ which admit x as an optimal arm. As illustrated in Fig. 1, C(x) is the cone defined by the intersection of half-spaces such that C(x) = ∩x′∈X {θ ∈ Rd, (x −x′)⊤θ ≥0} and all the cones together form a partition of the Euclidean space Rd. We assume that the oracle knows the cone C(x∗) containing all the parameters for which x∗is optimal. 
Furthermore, we assume that for any allocation x_n, it is possible to construct a confidence set S∗(x_n) ⊆ R^d such that θ∗ ∈ S∗(x_n) and the (random) OLS estimate θ̂_n belongs to S∗(x_n) with high probability, i.e., P(θ̂_n ∈ S∗(x_n)) ≥ 1 − δ. As a result, the oracle stopping criterion simply checks whether the confidence set S∗(x_n) is contained in C(x∗) or not. In fact, whenever for an allocation x_n the set S∗(x_n) overlaps the cones of different arms x ∈ X, there is ambiguity in the identity of the arm Π(θ̂_n). On the other hand, when all possible values of θ̂_n are included with high probability in the “right” cone C(x∗), then the optimal arm is returned. Lemma 1. Let x_n be an allocation such that S∗(x_n) ⊆ C(x∗). Then P( Π(θ̂_n) ≠ x∗ ) ≤ δ. Arm selection strategy. From the previous lemma² it follows that the objective of an arm selection strategy is to define an allocation x_n which leads to S∗(x_n) ⊆ C(x∗) as quickly as possible.³ Since this condition only depends on deterministic objects (S∗(x_n) and C(x∗)), it can be computed independently from the actual reward realizations. From a geometrical point of view, this corresponds to choosing arms so that the confidence set S∗(x_n) shrinks into the optimal cone C(x∗) within the smallest number of pulls. To characterize this strategy we need to make explicit the form of S∗(x_n). Intuitively speaking, the more S∗(x_n) is “aligned” with the boundaries of the cone, the easier it is to shrink it into the cone. More formally, the condition S∗(x_n) ⊆ C(x∗) is equivalent to ∀x ∈ X, ∀θ ∈ S∗(x_n), (x∗ − x)⊤θ ≥ 0 ⇔ ∀y ∈ Y∗, ∀θ ∈ S∗(x_n), y⊤(θ∗ − θ) ≤ Δ(y). Then we can simply use Prop. 1 to directly control the term y⊤(θ∗ − θ) and define S∗(x_n) = { θ ∈ R^d : ∀y ∈ Y∗, y⊤(θ∗ − θ) ≤ c ||y||_{A_{x_n}^{-1}} √(log_n(K²/δ)) }. (3) Thus the stopping condition S∗(x_n) ⊆ C(x∗) is equivalent to the condition that, for any y ∈ Y∗, c ||y||_{A_{x_n}^{-1}} √(log_n(K²/δ)) ≤ Δ(y). 
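For concreteness, the oracle stopping test above reduces to comparing each direction's confidence width against its gap. The sketch below is our own illustrative code (not the authors' implementation); it assumes the Prop. 1 constants c = 2σ√2 and c′ = 6/π² and the K² union-bound variant of the log term.

```python
import numpy as np

def oracle_stop(A, arms, theta_star, n, K, sigma=1.0, delta=0.05):
    """Oracle stopping test: S*(x_n) lies inside the cone C(x*) iff
    c * ||y||_{A^{-1}} * sqrt(log(c' n^2 K^2 / delta)) <= Delta(y)
    for every direction y = x* - x."""
    c = 2.0 * sigma * np.sqrt(2.0)
    c_prime = 6.0 / np.pi ** 2
    log_term = np.log(c_prime * n ** 2 * K ** 2 / delta)
    A_inv = np.linalg.inv(A)
    values = arms @ theta_star
    x_star = arms[int(np.argmax(values))]
    for x, v in zip(arms, values):
        gap = values.max() - v          # Delta(y) for y = x* - x
        if gap <= 0:
            continue                    # skip the optimal arm itself
        y = x_star - x
        width = c * np.sqrt(y @ A_inv @ y) * np.sqrt(log_term)
        if width > gap:
            return False                # S* still overlaps another cone
    return True
```

For instance, with two canonical arms in R², θ∗ = (1, 0), and A = nI, the test passes only once n is large relative to 1/Δ²_min.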
From this condition (Eq. 4), the oracle allocation strategy simply follows as x∗_n = arg min_{x_n} max_{y∈Y∗} c ||y||_{A_{x_n}^{-1}} √(log_n(K²/δ)) / Δ(y) = arg min_{x_n} max_{y∈Y∗} ||y||_{A_{x_n}^{-1}} / Δ(y). (5) Notice that this strategy does not return a uniformly accurate estimate of θ∗; rather, it pulls arms that allow the uncertainty of the estimate of θ∗ over the directions of interest (i.e., Y∗) to be reduced below their corresponding gaps. This implies that the objective of Eq. 5 is to exploit the global linear assumption by pulling any arm in X that could give information about θ∗ over the directions in Y∗, so that directions with small gaps are better estimated than those with bigger gaps. ²For all the proofs in this paper, we refer the reader to the long version of the paper [18]. ³Notice that by definition of the confidence set and since θ̂_n → θ∗ as n → ∞, any strategy repeatedly pulling all the arms would eventually meet the stopping condition. Sample complexity. We are now ready to define the sample complexity of the oracle, which corresponds to the minimum number of steps needed by the allocation in Eq. 5 to achieve the stopping condition in Eq. 4. From a technical point of view, it is more convenient to express the complexity of the problem in terms of the optimal design (soft allocation) instead of the discrete allocation x_n. Let ρ∗(λ) = max_{y∈Y∗} ||y||²_{Λ_λ^{-1}} / Δ²(y) be the square of the objective function in Eq. 5 for any design λ ∈ D_k. We define the complexity of a linear best-arm identification problem as the performance achieved by the optimal design λ∗ = arg min_λ ρ∗(λ), i.e. H_LB = min_{λ∈D_k} max_{y∈Y∗} ||y||²_{Λ_λ^{-1}} / Δ²(y) = ρ∗(λ∗). (6) This definition of complexity is less explicit than in the case of H_MAB but it contains similar elements, notably the inverse of the gaps squared. 
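The quantity ρ∗(λ) is directly computable for any design once θ∗ is given. Below is a minimal sketch under our own naming (not the authors' code); the coarse grid in h_lb_grid stands in for a proper optimizer and only handles the two-arm case.

```python
import numpy as np

def rho_star(design, arms, theta_star):
    """rho*(lambda) = max_{y in Y*} ||y||^2_{Lambda_lambda^{-1}} / Delta(y)^2,
    with Lambda_lambda = sum_x lambda(x) x x^T."""
    Lam = sum(w * np.outer(x, x) for w, x in zip(design, arms))
    Lam_inv = np.linalg.inv(Lam)
    values = arms @ theta_star
    x_star = arms[int(np.argmax(values))]
    worst = 0.0
    for x, v in zip(arms, values):
        gap = values.max() - v
        if gap <= 0:
            continue
        y = x_star - x
        worst = max(worst, (y @ Lam_inv @ y) / gap ** 2)
    return worst

def h_lb_grid(arms, theta_star, steps=50):
    """Coarse approximation of H_LB = min_lambda rho*(lambda) by a grid
    over the simplex (illustration only, two arms)."""
    best = np.inf
    for w in np.linspace(0.01, 0.99, steps):
        best = min(best, rho_star(np.array([w, 1 - w]), arms, theta_star))
    return best
```

For two canonical arms with θ∗ = (2, 1), the uniform design gives ρ∗ = 4, which is also (approximately) the minimizer over the simplex.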
Nonetheless, instead of summing the inverses over all the arms, H_LB implicitly takes into consideration the correlation between the arms in the term ||y||²_{Λ_λ^{-1}}, which represents the uncertainty in the estimation of the gap between x∗ and x (when y = x∗ − x). As a result, from Eq. 4 the sample complexity becomes N∗ = c² H_LB log_n(K²/δ), (7) where we use the fact that, if implemented over n steps, λ∗ induces a design matrix A_{λ∗} = n Λ_{λ∗} and max_y ||y||²_{A_{λ∗}^{-1}} / Δ²(y) = ρ∗(λ∗)/n. Finally, we bound the range of the complexity. Lemma 2. Given an arm set X ⊆ R^d and a parameter θ∗, the complexity H_LB (Eq. 6) is such that max_{y∈Y∗} ||y||² / (L Δ²_min) ≤ H_LB ≤ 4d / Δ²_min. (8) Furthermore, if X is the canonical basis, the problem reduces to a MAB and H_MAB ≤ H_LB ≤ 2 H_MAB. The previous bounds show that Δ_min plays a significant role in defining the complexity of the problem, while the specific shape of X impacts the numerator in different ways. In the worst case the full dimensionality d appears (upper bound), and more arm-set specific quantities, such as the norm of the arms L and of the directions Y∗, appear in the lower bound. 4 Static Allocation Strategies
Input: decision space X ⊆ R^d, confidence δ > 0
Set: t = 0; Y = {y = (x − x′); x ≠ x′ ∈ X}
while Eq. 11 is not true do
  if G-allocation then
    x_t = arg min_{x∈X} max_{x′∈X} x′⊤(A + xx⊤)^{-1} x′
  else if XY-allocation then
    x_t = arg min_{x∈X} max_{y∈Y} y⊤(A + xx⊤)^{-1} y
  end if
  Update θ̂_t = A_t^{-1} b_t, t = t + 1
end while
Return arm Π(θ̂_t)
Figure 2: Static allocation algorithms
The oracle stopping condition (Eq. 4) and allocation strategy (Eq. 5) cannot be implemented in practice since θ∗, the gaps Δ(y), and the directions Y∗ are unknown. In this section we investigate how to define algorithms that only rely on the information available from X and the samples collected over time. We introduce an empirical stopping criterion and two static allocations. Empirical stopping criterion. 
The stopping condition S∗(xn) ⊆C(x∗) cannot be tested since S∗(xn) is centered in the unknown parameter θ∗ and C(x∗) depends on the unknown optimal arm x∗. Nonetheless, we notice that given X, for each x ∈X the cones C(x) can be constructed beforehand. Let S(xn) be a high-probability confidence set such that for any xn, ˆθn ∈S(xn) and P(θ∗∈S(xn)) ≥1 −δ. Unlike S∗, S can be directly computed from samples and we can stop whenever there exists an x such that S(xn) ⊆C(x). Lemma 3. Let xn = (x1, . . . , xn) be an arbitrary allocation sequence. If after n steps there exists an arm x ∈X such that S(xn) ⊆C(x) then P  Π(ˆθn) = x∗ ≤δ. Arm selection strategy. Similarly to the oracle algorithm, we should design an allocation strategy that guarantees that the (random) confidence set S(xn) shrinks in one of the cones C(x) within the fewest number of steps. Let Δn(x, x′) = (x −x′)⊤ˆθn be the empirical gap between arms x, x′. Then the stopping condition S(xn) ⊆C(x) can be written as ∃x ∈X, ∀x′ ∈X,∀θ ∈S(xn), (x −x′)⊤θ ≥0 ⇔∃x ∈X, ∀x′ ∈X, ∀θ ∈S(xn), (x −x′)⊤(ˆθn −θ) ≤Δn(x, x′). (9) 4 This suggests that the empirical confidence set can be defined as S(xn) = θ ∈Rd, ∀y ∈Y, y⊤(ˆθn −θ) ≤c||y||A−1 xn  logn(K2/δ) . (10) Unlike S∗(xn), S(xn) is centered in ˆθn and it considers all directions y ∈Y. As a result, the stopping condition in Eq. 9 could be reformulated as ∃x ∈X, ∀x′ ∈X, c||x −x′||A−1 xn  logn(K2/δ) ≤Δn(x, x′). (11) Although similar to Eq. 4, unfortunately this condition cannot be directly used to derive an allocation strategy. In fact, it is considerably more difficult to define a suitable allocation strategy to fit a random confidence set S into a cone C(x) for an x which is not known in advance. In the following we propose two allocations that try to achieve the condition in Eq. 11 as fast as possible by implementing a static arm selection strategy, while we present a more sophisticated adaptive strategy in Sect. 5. 
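Both static selection rules of Fig. 2 are greedy one-step minimizations over the arms. A minimal sketch of the two rules follows (our own code, not the authors'; the small ridge eps*I initializing A is an implementation convenience not in the paper, used only to keep A invertible before d pulls).

```python
import numpy as np

def select_arm(A, arms, targets):
    """One step of the static allocations in Fig. 2: return the index of the
    arm x minimizing max_{t in targets} t^T (A + x x^T)^{-1} t.
    targets = arms gives the G rule; targets = Y (pairwise directions) the XY rule."""
    scores = []
    for x in arms:
        M_inv = np.linalg.inv(A + np.outer(x, x))
        scores.append(max(t @ M_inv @ t for t in targets))
    return int(np.argmin(scores))

def run_static(arms, n_steps, mode="G", eps=1e-3):
    arms = np.asarray(arms, dtype=float)
    if mode == "G":
        targets = arms
    else:  # XY: all pairwise directions y = x - x'
        targets = np.array([x - xp for i, x in enumerate(arms)
                            for j, xp in enumerate(arms) if i != j])
    A = eps * np.eye(arms.shape[1])
    counts = np.zeros(len(arms), dtype=int)
    for _ in range(n_steps):
        i = select_arm(A, arms, targets)
        A += np.outer(arms[i], arms[i])
        counts[i] += 1
    return counts
```

On the canonical basis both rules simply balance the pulls across arms, as expected from symmetry.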
The general structure of the static allocations in summarized in Fig. 2. G-Allocation Strategy. The definition of the G-allocation strategy directly follows from the observation that for any pair (x, x′) ∈X 2 we have that ||x −x′||A−1 xn ≤2 maxx′′∈X ||x′′||A−1 xn . This suggests that an allocation minimizing maxx∈X ||x||A−1 xn reduces an upper bound on the quantity tested in the stopping condition in Eq. 11. Thus, for any fixed n, we define the G-allocation as xG n = arg min xn max x∈X ||x||A−1 xn . (12) We notice that this formulation coincides with the standard G-optimal design (hence the name of the allocation) defined in experimental design theory [15, Sect. 9.2] to minimize the maximal meansquared prediction error in linear regression. The G-allocation can be interpreted as the design that allows to estimate θ∗uniformly well over all the arms in X. Notice that the G-allocation in Eq. 12 is well defined only for a fixed number of steps n and it cannot be directly implemented in our case, since n is unknown in advance. Therefore we have to resort to a more “incremental” implementation. In the experimental design literature a wide number of approximate solutions have been proposed to solve the NP-hard discrete optimization problem in Eq. 12 (see [4, 17] for some recent results and [18] for a more thorough discussion). For any approximate G-allocation strategy with performance no worse than a factor (1 + β) of the optimal strategy xG n , the sample complexity N G is bounded as follows. Theorem 1. If the G-allocation strategy is implemented with a β-approximate method and the stopping condition in Eq. 11 is used, then P  N G ≤16c2d(1 + β) logn(K2/δ) Δ2 min ∧Π(ˆθN G) = x∗  ≥1 −δ. (13) Notice that this result matches (up to constants) the worst-case value of N ∗given the upper bound on HLB. This means that, although completely static, the G-allocation is already worst-case optimal. XY-Allocation Strategy. 
Despite being worst-case optimal, G-allocation is minimizing a rather loose upper bound on the quantity used to test the stopping criterion. Thus, we define an alternative static allocation that targets the stopping condition in Eq. 11 more directly by reducing its left-handside for any possible direction in Y. For any fixed n, we define the XY-allocation as xXY n = arg min xn max y∈Y ||y||A−1 xn . (14) XY-allocation is based on the observation that the stopping condition in Eq. 11 requires only the empirical gaps Δ(x, x′) to be well estimated, hence arms are pulled with the objective of increasing the accuracy of directions in Y instead of arms X. This problem can be seen as a transductive variant of the G-optimal design [19], where the target vectors Y are different from the vectors X used in the design. The sample complexity of the XY-allocation is as follows. Theorem 2. If the XY-allocation strategy is implemented with a β-approximate method and the stopping condition in Eq. 11 is used, then P  N XY ≤32c2d(1 + β) logn(K2/δ) Δ2 min ∧Π(ˆθN XY) = x∗  ≥1 −δ. (15) Although the previous bound suggests that XY achieves a performance comparable to the Gallocation, in fact XY may be arbitrarily better than G-allocation (for an example, see [18]). 5 5 XY-Adaptive Allocation Strategy Input: decision space X ∈Rd; parameter α; confidence δ Set j =1;  Xj =X; Y1 =Y; ρ0 =1; n0 =d(d + 1) + 1 while |  Xj| > 1 do ρj = ρj−1 t = 1; A0 = I while ρj/t ≥αρj−1(xj−1 nj−1)/nj−1 do Select arm xt = arg min x∈X max y∈Y y⊤(A + xx⊤)−1y Update At = At−1 + xtx⊤ t , t = t + 1 ρj = maxy∈ Yj y⊤A−1 t y end while Compute b = t s=1 xsrs; ˆθj = A−1 t b  Xj+1 = X for x ∈X do if ∃x′ :||x −x′||A−1 t  logn(K2/δ) ≤Δj(x′, x) then  Xj+1 =  Xj+1 −{x} end if end for Yj+1 = {y = (x −x′); x, x′ ∈ Xj+1} end while Return Π(ˆθj) Figure 3: XY-Adaptive allocation algorithm Fully adaptive allocation strategies. 
Although both G- and XY-allocation are sound since they minimize upper-bounds on the quantities used by the stopping condition (Eq. 11), they may be very suboptimal w.r.t. the ideal performance of the oracle introduced in Sec. 3. Typically, an improvement can be obtained by moving to strategies adapting on the rewards observed over time. Nonetheless, as reported in Prop. 2, whenever xn is not a fixed sequence, the bound in Eq. 2 should be used. As a result, a factor √ d would appear in the definition of the confidence sets and in the stopping condition. This directly implies that the sample complexity of a fully adaptive strategy would scale linearly with the dimensionality d of the problem, thus removing any advantage w.r.t. static allocations. In fact, the sample complexity of G- and XYallocation already scales linearly with d and from Lem. 2 we cannot expect to improve the dependency on Δmin. Thus, on the one hand, we need to use the tighter bounds in Eq. 1 and, on the other hand, we require to be adaptive w.r.t. samples. In the sequel we propose a phased algorithm which successfully meets both requirements using a static allocation within each phase but choosing the type of allocation depending on the samples observed in previous phases. Algorithm. The ideal case would be to define an empirical version of the oracle allocation in Eq. 5 so as to adjust the accuracy of the prediction only on the directions of interest Y∗and according to their gaps Δ(y). As discussed in Sect. 4 this cannot be obtained by a direct adaptation of Eq. 11. In the following, we describe a safe alternative to adjust the allocation strategy to the gaps. Lemma 4. Let xn be a fixed allocation sequence and ˆθn its corresponding estimate for θ∗. If an arm x ∈X is such that ∃x′ ∈X s.t. c||x′ −x||A−1 xn  logn(K2/δ) < Δn(x′, x), (16) then arm x is sub-optimal. Moreover, if Eq. 16 is true, we say that x′ dominates x. Lem. 
4 allows to easily construct the set of potentially optimal arms, denoted  X(xn), by removing from X all the dominated arms. As a result, we can replace the stopping condition in Eq. 11, by just testing whether the number of non-dominated arms |  X(xn)| is equal to 1, which corresponds to the case where the confidence set is fully contained into a single cone. Using  X(xn), we construct Y(xn) = {y = x −x′; x, x′ ∈ X(xn)}, the set of directions along which the estimation of θ∗needs to be improved to further shrink S(xn) into a single cone and trigger the stopping condition. Note that if xn was an adaptive strategy, then we could not use Lem. 4 to discard arms but we should rely on the bound in Prop. 2. To avoid this problem, an effective solution is to run the algorithm through phases. Let j ∈N be the index of a phase and nj its corresponding length. We denote by  Xj the set of non-dominated arms constructed on the basis of the samples collected in the phase j −1. This set is used to identify the directions Yj and to define a static allocation which focuses on reducing the uncertainty of θ∗along the directions in Yj. Formally, in phase j we implement the allocation xj nj = arg min xnj max y∈ Yj ||y||A−1 xnj , (17) which coincides with a XY-allocation (see Eq. 14) but restricted on Yj. Notice that xj nj may still use any arm in X which could be useful in reducing the confidence set along any of the directions in 6 Yj. Once phase j is over, the OLS estimate ˆθj is computed using the rewards observed within phase j and then is used to test the stopping condition in Eq. 11. Whenever the stopping condition does not hold, a new set  Xj+1 is constructed using the discarding condition in Lem. 4 and a new phase is started. Notice that through this process, at each phase j, the allocation xj nj is static conditioned on the previous allocations and the use of the bound from Prop. 1 is still correct. A crucial aspect of this algorithm is the length of the phases nj. 
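Before turning to the phase lengths, the discard rule of Lem. 4, which builds the active set X̃(x_n), can be sketched as follows (our own code, using the Prop. 1 constants and the K² union-bound log term):

```python
import numpy as np

def active_arms(arms, theta_hat, A, n, K, sigma=1.0, delta=0.05):
    """Lem. 4 discard rule: x is dominated if some x' satisfies
    c * ||x' - x||_{A^{-1}} * sqrt(log(c' n^2 K^2 / delta)) < (x' - x)^T theta_hat.
    Returns indices of the non-dominated (still potentially optimal) arms."""
    c = 2.0 * sigma * np.sqrt(2.0)
    c_prime = 6.0 / np.pi ** 2
    log_sqrt = np.sqrt(np.log(c_prime * n ** 2 * K ** 2 / delta))
    A_inv = np.linalg.inv(A)
    keep = []
    for i, x in enumerate(arms):
        dominated = False
        for xp in arms:
            y = xp - x
            if c * np.sqrt(y @ A_inv @ y) * log_sqrt < y @ theta_hat:
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep
```

With few samples no arm can be discarded; once the confidence widths fall below the empirical gaps, only the empirically best arm survives.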
On the one hand, short phases allow a high rate of adaptivity, since  Xj is recomputed very often. On the other hand, if a phase is too short, it is very unlikely that the estimate ˆθj may be accurate enough to actually discard any arm. An effective way to define the length of a phase in a deterministic way is to relate it to the actual uncertainty of the allocation in estimating the value of all the active directions in Yj. In phase j, let ρj(λ) = maxy∈ Yj ||y||2 Λ−1 λ , then given a parameter α ∈(0, 1), we define nj = min  n ∈N : ρj(λxj n)/n ≤αρj−1(λj−1)/nj−1  , (18) where xj n is the allocation defined in Eq. 17 and λj−1 is the design corresponding to xj−1 nj−1, the allocation performed at phase j −1. In words, nj is the minimum number of steps needed by the XY-adaptive allocation to achieve an uncertainty over all the directions of interest which is a fraction α of the performance obtained in the previous iteration. Notice that given Yj and ρj−1 this quantity can be computed before the actual beginning of phase j. The resulting algorithm using the XY-Adaptive allocation strategy is summarized in Fig. 3. Sample complexity. Although the XY-Adaptive allocation strategy is designed to approach the oracle sample complexity N ∗, in early phases it basically implements a XY-allocation and no significant improvement can be expected until some directions are discarded from Y. At that point, XY-adaptive starts focusing on directions which only contain near-optimal arms and it starts approaching the behavior of the oracle. As a result, in studying the sample complexity of XY-Adaptive we have to take into consideration the unavoidable price of discarding “suboptimal” directions. This cost is directly related to the geometry of the arm space that influences the number of samples needed before arms can be discarded from X. To take into account this problem-dependent quantity, we introduce a slightly relaxed definition of complexity. 
More precisely, we define the number of steps needed to discard all the directions which do not contain x∗, i.e. Y −Y∗. From a geometrical point of view, this corresponds to the case when for any pair of suboptimal arms (x, x′), the confidence set S∗(xn) does not intersect the hyperplane separating the cones C(x) and C(x′). Fig. 1 offers a simple illustration for such a situation: S∗no longer intercepts the border line between C(x2) and C(x3), which implies that direction x2 −x3 can be discarded. More formally, the hyperplane containing parameters θ for which x and x′ are equivalent is simply C(x) ∩C(x′) and the quantity M ∗= min{n ∈N, ∀x = x∗, ∀x′ = x∗, S∗(xXY n ) ∩(C(x) ∩C(x′)) = ∅} (19) corresponds to the minimum number of steps needed by the static XY-allocation strategy to discard all the suboptimal directions. This term together with the oracle complexity N ∗characterizes the sample complexity of the phases of the XY-adaptive allocation. In fact, the length of the phases is such that either they correspond to the complexity of the oracle or they can never last more than the steps needed to discard all the sub-optimal directions. As a result, the overall sample complexity of the XY-adaptive algorithm is bounded as in the following theorem. Theorem 3. If the XY-Adaptive allocation strategy is implemented with a β-approximate method and the stopping condition in Eq. 11 is used, then P  N ≤(1 + β) max{M ∗, 16 α N ∗} log(1/α) log c  logn(K2/δ) Δmin  ∧Π(ˆθN) = x∗  ≥1 −δ. (20) We first remark that, unlike G and XY, the sample complexity of XY-Adaptive does not have any direct dependency on d and Δmin (except in the logarithmic term) but it rather scales with the oracle complexity N ∗and the cost of discarding suboptimal directions M ∗. Although this additional cost is probably unavoidable, one may have expected that XY-Adaptive may need to discard all the suboptimal directions before performing as well as the oracle, thus having a sample complexity of O(M ∗+N ∗). 
Instead, we notice that N scales with the maximum of M∗ and N∗, thus implying that XY-Adaptive may actually catch up with the performance of the oracle (with only a multiplicative factor of 16/α) whenever discarding suboptimal directions is less expensive than actually identifying the best arm. 6 Numerical Simulations We illustrate the performance of XY-Adaptive and compare it to the XY-Oracle strategy (Eq. 5), the static allocations XY and G, as well as with the fully-adaptive version of XY where X̃ is updated at each round and the bound from Prop. 2 is used. For a fixed confidence δ = 0.05, we compare the sampling budget needed to identify the best arm with probability at least 1 − δ. We consider a set of arms X ∈ Rd, with |X| = d + 1, including the canonical basis (e1, . . . , ed) and an additional arm xd+1 = [cos(ω) sin(ω) 0 . . . 0]⊤. We choose θ∗ = [2 0 0 . . . 0]⊤ and fix ω = 0.01, so that Δmin = (x1 − xd+1)⊤θ∗ is much smaller than the other gaps. In this setting, an efficient sampling strategy should focus on reducing the uncertainty in the direction ỹ = (x1 − xd+1) by pulling the arm x2 = e2, which is almost aligned with ỹ. In fact, from the rewards obtained from x2 it is easier to decrease the uncertainty about the second component of θ∗, which is precisely the dimension that allows discriminating between x1 and xd+1. Also, we fix α = 1/10, and the noise ε ∼ N(0, 1). Each phase begins with an initialization matrix A0, obtained by pulling once each canonical arm. In Fig. 4 we report the sampling budget of the five algorithms (Fully adaptive, G, XY, XY-Adaptive, XY-Oracle), averaged over 100 runs, for d = 2, . . . , 10. Figure 4: The sampling budget needed to identify the best arm, when the dimension grows from R2 to R10. The results. The numerical results show that XY-Adaptive is effective in allocating the samples to shrink the uncertainty in the direction ỹ. 
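The experimental arm set is easy to reproduce; the sketch below (our own code) builds it and confirms that Δmin is indeed the tiny gap along ỹ = x1 − xd+1.

```python
import numpy as np

def make_arms(d, omega=0.01):
    """Arm set of Sect. 6: canonical basis e_1..e_d plus
    x_{d+1} = [cos(omega), sin(omega), 0, ..., 0]."""
    extra = np.r_[np.cos(omega), np.sin(omega), np.zeros(d - 2)]
    return np.vstack([np.eye(d), extra])

d = 5
arms = make_arms(d)
theta_star = np.zeros(d)
theta_star[0] = 2.0
values = arms @ theta_star
gaps = values.max() - values
# the gap between x1 and x_{d+1} is 2(1 - cos(omega)), far smaller than the
# gap 2 of the remaining canonical arms
delta_min = np.sort(gaps[gaps > 0])[0]
```

With ω = 0.01 this gives Δmin = 2(1 − cos ω) ≈ 1e-4, three to four orders of magnitude below the other gaps, which is what makes the static allocations expensive here.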
Indeed, XY-adaptive identifies the most important direction after few phases and is able to perform an allocation which mimics that of the oracle. On the contrary, XY and G do not adjust to the empirical gaps and consider all directions as equally important. This behavior forces XY and G to allocate samples until the uncertainty is smaller than Δmin in all directions. Even though the Fully-adaptive algorithm also identifies the most informative direction rapidly, the √ d term in the bound delays the discarding of the arms and prevents the algorithm from gaining any advantage compared to XY and G. As shown in Fig. 4, the difference between the budget of XY-Adaptive and the static strategies increases with the number of dimensions. In fact, while additional dimensions have little to no impact on XY-Oracle and XY-Adaptive (the only important direction remains ˜y independently from the number of unknown features of θ∗), for the static allocations more dimensions imply more directions to be considered and more features of θ∗to be estimated uniformly well until the uncertainty falls below Δmin. 7 Conclusions In this paper we studied the problem of best-arm identification with a fixed confidence, in the linear bandit setting. First we offered a preliminary characterization of the problem-dependent complexity of the best arm identification task and shown its connection with the complexity in the MAB setting. Then, we designed and analyzed efficient sampling strategies for this problem. The G-allocation strategy allowed us to point out a close connection with optimal experimental design techniques, and in particular to the G-optimality criterion. Through the second proposed strategy, XY-allocation, we introduced a novel optimal design problem where the testing arms do not coincide with the arms chosen in the design. 
Lastly, we pointed out the limits that a fully adaptive allocation strategy might have in the linear bandit setting, and proposed a phased algorithm, XY-Adaptive, that learns from previous observations without suffering from the dimensionality of the problem. Since this is one of the first works to analyze pure-exploration problems in the linear bandit setting, it opens the way for a number of similar problems already studied in the MAB setting. For instance, we can investigate strategies to identify the best linear arm under a limited budget, or study best-arm identification when the set of arms is very large (or infinite). Some interesting extensions also emerge from the optimal experimental design literature, such as the study of sampling strategies for meeting the G-optimality criterion when the noise is heteroscedastic, or the design of efficient strategies for satisfying other related optimality criteria, such as V-optimality.

Acknowledgments
This work was supported by the French Ministry of Higher Education and Research, Nord-Pas de Calais Regional Council and FEDER through the "Contrat de Projets Etat Region 2007–2013", and European Community's Seventh Framework Programme under grant agreement no. 270327 (project CompLACS).
Multivariate f-Divergence Estimation With Confidence

Kevin R. Moon, Department of EECS, University of Michigan, Ann Arbor, MI, krmoon@umich.edu
Alfred O. Hero III, Department of EECS, University of Michigan, Ann Arbor, MI, hero@eecs.umich.edu

Abstract
The problem of f-divergence estimation is important in the fields of machine learning, information theory, and statistics. While several nonparametric divergence estimators exist, relatively few have known convergence properties. In particular, even for those estimators whose MSE convergence rates are known, the asymptotic distributions are unknown. We establish the asymptotic normality of a recently proposed ensemble estimator of f-divergence between two distributions from a finite number of samples. This estimator has an MSE convergence rate of O(1/T), is simple to implement, and performs well in high dimensions. This theory enables us to perform divergence-based inference tasks such as testing equality of pairs of distributions based on empirical samples. We experimentally validate our theoretical results and, as an illustration, use them to empirically bound the best achievable classification error.

1 Introduction
This paper establishes the asymptotic normality of a nonparametric estimator of the f-divergence between two distributions from a finite number of samples. For many nonparametric divergence estimators the large-sample consistency has already been established, and the mean squared error (MSE) convergence rates are known for some. However, there are few results on the asymptotic distribution of nonparametric divergence estimators. Here we show that the asymptotic distribution is Gaussian for the class of ensemble f-divergence estimators [1], extending theory for entropy estimation [2, 3] to divergence estimation. The f-divergence is a measure of the difference between distributions and is important to the fields of machine learning, information theory, and statistics [4].
The f-divergence generalizes several measures including the Kullback-Leibler (KL) [5] and Rényi-α [6] divergences. Divergence estimation is useful for empirically estimating the decay rates of error probabilities in hypothesis testing [7], extending machine learning algorithms to distributional features [8, 9], and other applications such as text/multimedia clustering [10]. Additionally, a special case of the KL divergence is mutual information, which gives the capacities in data compression and channel coding [7]. Mutual information estimation has also been used in machine learning applications such as feature selection [11], fMRI data processing [12], clustering [13], and neuron classification [14]. Entropy is also a special case of divergence where one of the distributions is the uniform distribution. Entropy estimation is useful for intrinsic dimension estimation [15], texture classification and image registration [16], and many other applications. However, one must go beyond entropy and divergence estimation in order to perform inference tasks on the divergence. An example of an inference task is detection: to test the null hypothesis that the divergence is zero, i.e., testing that the two populations have identical distributions. Prescribing a p-value on the null hypothesis requires specifying the null distribution of the divergence estimator. Another statistical inference problem is to construct a confidence interval on the divergence based on the divergence estimator. This paper provides solutions to these inference problems by establishing large-sample asymptotics on the distribution of divergence estimators. In particular we consider the asymptotic distribution of the nonparametric weighted ensemble estimator of f-divergence from [1]. This estimator estimates the f-divergence from two finite populations of i.i.d. samples drawn from unknown, nonparametric, smooth, d-dimensional distributions.
The estimator of [1] achieves an MSE convergence rate of O(1/T), where T is the sample size; see [17] for proof details.

1.1 Related Work
Estimators for some f-divergences already exist. For example, Póczos and Schneider [8] and Wang et al. [18] provided consistent k-nn estimators for the Rényi-α and KL divergences, respectively. Consistency has been proven for other mutual information and divergence estimators based on plug-in histogram schemes [19, 20, 21, 22]. Hero et al. [16] provided an estimator for the Rényi-α divergence but assumed that one of the densities was known. However, none of these works study the convergence rates of their estimators, nor do they derive the asymptotic distributions. Recent work has focused on deriving convergence rates for divergence estimators. Nguyen et al. [23], Singh and Póczos [24], and Krishnamurthy et al. [25] each proposed divergence estimators that achieve the parametric convergence rate O(1/T) under weaker conditions than those given in [1]. However, solving the convex problem of [23] can be more demanding for large sample sizes than the estimator given in [1], which depends only on simple density plug-in estimates and an offline convex optimization problem. Singh and Póczos only provide an estimator for Rényi-α divergences that requires several computations at each boundary of the support of the densities, which becomes difficult to implement as d gets large. Also, this method requires knowledge of the support of the densities, which may not be available for some problems. In contrast, while the convergence results for the estimator in [1] require the support to be bounded, knowledge of the support is not required for implementation. Finally, the estimators given in [25] estimate divergences that include functionals of the form $\int f_1^{\alpha}(x) f_2^{\beta}(x)\, d\mu(x)$ for given α, β.
While a suitable α-β indexed sequence of divergence functionals of the form in [25] can be made to converge to the KL divergence, this does not guarantee convergence of the corresponding sequence of divergence estimates, whereas the estimator in [1] can be used to estimate the KL divergence directly. Also, for some divergences of the specified form, numerical integration is required for the estimators in [25], which can be computationally difficult. In any case, the asymptotic distributions of the estimators in [23, 24, 25] are currently unknown. Asymptotic normality has been established for certain appropriately normalized divergences between a specific density estimator and the true density [26, 27, 28]. However, this differs from our setting, where we assume that both densities are unknown. Under the assumption that the two densities are smooth, lower bounded, and have bounded support, we show that an appropriately normalized weighted ensemble average of kernel density plug-in estimators of f-divergence converges in distribution to the standard normal distribution. This is accomplished by constructing a sequence of interchangeable random variables and then showing (by concentration inequalities and Taylor series expansions) that the random variables and their squares are asymptotically uncorrelated. The theory developed to accomplish this can also be used to derive a central limit theorem for a weighted ensemble estimator of entropy such as the one given in [3]. We verify the theory by simulation. We then apply the theory to the practical problem of empirically bounding the Bayes classification error probability between two population distributions, without having to construct estimates of these distributions or implement the Bayes classifier. Boldface type is used in this paper for random variables and random vectors. Let $f_1$ and $f_2$ be densities and define $L(x) = f_1(x)/f_2(x)$. The conditional expectation given a random variable $\mathbf{Z}$ is denoted $\mathbb{E}_{\mathbf{Z}}$.
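As a concrete reference point for the divergence quantities discussed above, the following sketch numerically integrates the KL divergence between two univariate Gaussians and checks it against the closed form. This is our illustration, not code from the paper: the function names are ours, and we use an untruncated one-dimensional analogue of the means and scales that appear later in the paper's simulation.

```python
import math

def gauss_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def kl_numeric(mu1, s1, mu2, s2, lo=-5.0, hi=5.0, n=100_000):
    """Midpoint-rule integration of KL(f1 || f2) = integral of f1 log(f1/f2)."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        f1 = gauss_pdf(x, mu1, s1)
        if f1 > 0.0:  # skip regions where f1 has underflowed to zero
            total += f1 * math.log(f1 / gauss_pdf(x, mu2, s2)) * h
    return total

def kl_closed(mu1, s1, mu2, s2):
    """Closed form for KL between two univariate normals."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2.0 * s2 ** 2) - 0.5

num = kl_numeric(0.7, 0.1, 0.3, 0.3)
exact = kl_closed(0.7, 0.1, 0.3, 0.3)
```

The point of such a check is that a nonparametric estimator of the same divergence, fed samples from these densities, should concentrate around the same value.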
2 The Divergence Estimator
Moon and Hero [1] focused on estimating divergences of the form [4]
$$G(f_1, f_2) = \int g\!\left(\frac{f_1(x)}{f_2(x)}\right) f_2(x)\, dx, \qquad (1)$$
for a smooth function $g$. (Note that although $g$ must be convex for (1) to be a divergence, the estimator in [1] does not require convexity.) The divergence estimator is constructed using k-nn density estimators as follows. Assume that the d-dimensional multivariate densities $f_1$ and $f_2$ have finite support $S = [a, b]^d$. Assume that $T = N + M_2$ i.i.d. realizations $\{X_1, \ldots, X_N, X_{N+1}, \ldots, X_{N+M_2}\}$ are available from the density $f_2$ and $M_1$ i.i.d. realizations $\{Y_1, \ldots, Y_{M_1}\}$ are available from the density $f_1$. Assume that $k_i \le M_i$. Let $\rho_{2,k_2}(i)$ be the distance to the $k_2$th nearest neighbor of $X_i$ in $\{X_{N+1}, \ldots, X_T\}$, and let $\rho_{1,k_1}(i)$ be the distance to the $k_1$th nearest neighbor of $X_i$ in $\{Y_1, \ldots, Y_{M_1}\}$. Then the k-nn density estimate is [29]
$$\hat f_{i,k_i}(X_j) = \frac{k_i}{M_i\, \bar c\, \rho^d_{i,k_i}(j)},$$
where $\bar c$ is the volume of a d-dimensional unit ball. To construct the plug-in divergence estimator, the data from $f_2$ are randomly divided into two parts $\{X_1, \ldots, X_N\}$ and $\{X_{N+1}, \ldots, X_{N+M_2}\}$. The k-nn density estimate $\hat f_{2,k_2}$ is calculated at the $N$ points $\{X_1, \ldots, X_N\}$ using the $M_2$ realizations $\{X_{N+1}, \ldots, X_{N+M_2}\}$. Similarly, the k-nn density estimate $\hat f_{1,k_1}$ is calculated at the $N$ points $\{X_1, \ldots, X_N\}$ using the $M_1$ realizations $\{Y_1, \ldots, Y_{M_1}\}$. Define $\hat L_{k_1,k_2}(x) = \hat f_{1,k_1}(x)/\hat f_{2,k_2}(x)$. The functional $G(f_1, f_2)$ is then approximated as
$$\hat G_{k_1,k_2} = \frac{1}{N}\sum_{i=1}^{N} g\!\left(\hat L_{k_1,k_2}(X_i)\right). \qquad (2)$$
The principal assumptions on the densities $f_1$ and $f_2$ and the functional $g$ are that: 1) $f_1$, $f_2$, and $g$ are smooth; 2) $f_1$ and $f_2$ have a common bounded support set $S$; 3) $f_1$ and $f_2$ are strictly lower bounded. The full assumptions (A.0)–(A.5) are given in the supplementary material and in [17]. Moon and Hero [1] showed that under these assumptions, the MSE convergence rate of the estimator in Eq. 2 to the quantity in Eq. 1 depends exponentially on the dimension $d$ of the densities. However, Moon and Hero also showed that an estimator with the parametric convergence rate $O(1/T)$ can be derived by applying the theory of optimally weighted ensemble estimation, as follows. Let $\bar l = \{l_1, \ldots, l_L\}$ be a set of index values and $T$ the number of samples available. For an indexed ensemble of estimators $\{\hat E_l\}_{l \in \bar l}$ of the parameter $E$, the weighted ensemble estimator with weights $w = \{w(l_1), \ldots, w(l_L)\}$ satisfying $\sum_{l \in \bar l} w(l) = 1$ is defined as $\hat E_w = \sum_{l \in \bar l} w(l)\, \hat E_l$. The key idea to reducing MSE is that by choosing appropriate weights $w$, we can greatly decrease the bias in exchange for some increase in variance. Consider the following conditions on $\{\hat E_l\}_{l \in \bar l}$ [3]:

• C.1 The bias is given by
$$\mathrm{Bias}\big(\hat E_l\big) = \sum_{i \in J} c_i\, \psi_i(l)\, T^{-i/2d} + O\!\left(\frac{1}{\sqrt T}\right),$$
where $c_i$ are constants depending on the underlying density, $J = \{i_1, \ldots, i_I\}$ is a finite index set with $I < L$, $\min(J) > 0$ and $\max(J) \le d$, and $\psi_i(l)$ are basis functions depending only on the parameter $l$.

• C.2 The variance is given by
$$\mathrm{Var}\big[\hat E_l\big] = c_v\left(\frac{1}{T}\right) + o\!\left(\frac{1}{T}\right).$$

Theorem 1. [3] Assume conditions C.1 and C.2 hold for an ensemble of estimators $\{\hat E_l\}_{l \in \bar l}$. Then there exists a weight vector $w_0$ such that
$$\mathbb E\left[\big(\hat E_{w_0} - E\big)^2\right] = O\!\left(\frac{1}{T}\right).$$
The weight vector $w_0$ is the solution to the following convex optimization problem:
$$\min_w \|w\|_2 \quad \text{subject to} \quad \sum_{l \in \bar l} w(l) = 1, \qquad \gamma_w(i) = \sum_{l \in \bar l} w(l)\, \psi_i(l) = 0, \ i \in J.$$

Algorithm 1: Optimally weighted ensemble divergence estimator
Input: $\alpha$, $\eta$; the set $\bar l$ of $L$ positive real numbers; samples $\{Y_1, \ldots, Y_{M_1}\}$ from $f_1$; samples $\{X_1, \ldots, X_T\}$ from $f_2$; dimension $d$; function $g$; $\bar c$
Output: the optimally weighted divergence estimator $\hat G_{w_0}$
1: Solve for $w_0$ using Eq. 3 with basis functions $\psi_i(l) = l^{i/d}$, $l \in \bar l$ and $i \in \{1, \ldots, d-1\}$
2: $M_2 \leftarrow \alpha T$, $N \leftarrow T - M_2$
3: for all $l \in \bar l$ do
4:   $k(l) \leftarrow l\sqrt{M_2}$
5:   for $i = 1$ to $N$ do
6:     $\rho_{j,k(l)}(i) \leftarrow$ the distance to the $k(l)$th nearest neighbor of $X_i$ in $\{Y_1, \ldots, Y_{M_1}\}$ and $\{X_{N+1}, \ldots, X_T\}$ for $j = 1, 2$, respectively
7:     $\hat f_{j,k(l)}(X_i) \leftarrow \frac{k(l)}{M_j \bar c\, \rho^d_{j,k(l)}(i)}$ for $j = 1, 2$; $\hat L_{k(l)}(X_i) \leftarrow \hat f_{1,k(l)}(X_i)/\hat f_{2,k(l)}(X_i)$
8:   end for
9:   $\hat G_{k(l)} \leftarrow \frac{1}{N}\sum_{i=1}^N g\big(\hat L_{k(l)}(X_i)\big)$
10: end for
11: $\hat G_{w_0} \leftarrow \sum_{l \in \bar l} w_0(l)\, \hat G_{k(l)}$

In order to achieve the rate of $O(1/T)$ it is not necessary for the weights to zero out the lower-order bias terms, i.e. that $\gamma_w(i) = 0$, $i \in J$. It was shown in [3] that solving the following convex optimization problem in place of the optimization problem in Theorem 1 retains the MSE convergence rate of $O(1/T)$:
$$\min_w \epsilon \quad \text{subject to} \quad \sum_{l \in \bar l} w(l) = 1, \qquad \gamma_w(i)\, T^{\frac{1}{2} - \frac{i}{2d}} \le \epsilon, \ i \in J, \qquad \|w\|_2^2 \le \eta, \qquad (3)$$
where the parameter $\eta$ is chosen to trade off between bias and variance. Instead of forcing $\gamma_w(i) = 0$, the relaxed optimization problem uses the weights to decrease the bias terms at the rate of $O(1/\sqrt T)$, which gives an MSE rate of $O(1/T)$. Theorem 1 was applied in [3] to obtain an entropy estimator with convergence rate $O(1/T)$. Moon and Hero [1] similarly applied Theorem 1 to obtain a divergence estimator with the same rate in the following manner. Let $L > I = d - 1$ and choose $\bar l = \{l_1, \ldots, l_L\}$ to be positive real numbers. Assume that $M_1 = O(M_2)$. Let $k(l) = l\sqrt{M_2}$, $M_2 = \alpha T$ with $0 < \alpha < 1$, $\hat G_{k(l)} := \hat G_{k(l),k(l)}$, and $\hat G_w := \sum_{l \in \bar l} w(l)\, \hat G_{k(l)}$. Note that the parameter $l$ indexes over different neighborhood sizes for the k-nn density estimates. From [1], the biases of the ensemble estimators $\{\hat G_{k(l)}\}_{l \in \bar l}$ satisfy condition C.1 with $\psi_i(l) = l^{i/d}$ and $J = \{1, \ldots, d-1\}$. The general form of the variance of $\hat G_{k(l)}$ also follows C.2. The optimal weight $w_0$ is found by using Theorem 1 to obtain a plug-in f-divergence estimator with convergence rate $O(1/T)$. The estimator is summarized in Algorithm 1.

3 Asymptotic Normality of the Estimator
The following theorem shows that the appropriately normalized ensemble estimator $\hat G_w$ converges in distribution to a normal random variable.

Theorem 2.
Assume that assumptions (A.0)–(A.5) hold and let $M = O(M_1) = O(M_2)$ and $k(l) = l\sqrt M$ with $l \in \bar l$. The asymptotic distribution of the weighted ensemble estimator $\hat G_w$ is given by
$$\lim_{M,N \to \infty} \Pr\!\left(\frac{\hat G_w - \mathbb E\big[\hat G_w\big]}{\sqrt{\mathrm{Var}\big[\hat G_w\big]}} \le t\right) = \Pr(S \le t),$$
where $S$ is a standard normal random variable. Also $\mathbb E[\hat G_w] \to G(f_1, f_2)$ and $\mathrm{Var}[\hat G_w] \to 0$.

The results on the mean and variance come from [1]. The proof of the distributional convergence is outlined below and is based on constructing a sequence of interchangeable random variables $\{Y_{M,i}\}_{i=1}^N$ with zero mean and unit variance. We then show that the $Y_{M,i}$ are asymptotically uncorrelated and that the $Y_{M,i}^2$ are asymptotically uncorrelated as $M \to \infty$. This is similar to what was done in [30] to prove a central limit theorem for a density plug-in estimator of entropy. Our analysis for the ensemble estimator of divergence is more complicated since we are dealing with a functional of two densities and a weighted ensemble of estimators. In fact, some of the equations we use to prove Theorem 2 can be used to prove a central limit theorem for a weighted ensemble of entropy estimators such as that given in [3].

3.1 Proof Sketch of Theorem 2
The full proof is included in the supplemental material. We use the following lemma from [30, 31]:

Lemma 3. Let the random variables $\{Y_{M,i}\}_{i=1}^N$ belong to a zero-mean, unit-variance, interchangeable process for all values of $M$. Assume that $\mathrm{Cov}(Y_{M,1}, Y_{M,2})$ and $\mathrm{Cov}(Y_{M,1}^2, Y_{M,2}^2)$ are $O(1/M)$. Then the random variable
$$S_{N,M} = \left(\sum_{i=1}^N Y_{M,i}\right) \Big/ \sqrt{\mathrm{Var}\!\left[\sum_{i=1}^N Y_{M,i}\right]} \qquad (4)$$
converges in distribution to a standard normal random variable.

This lemma is an extension of work by Blum et al. [32], which showed that if $\{Z_i;\ i = 1, 2, \ldots\}$ is an interchangeable process with zero mean and unit variance, then $S_N = \frac{1}{\sqrt N}\sum_{i=1}^N Z_i$ converges in distribution to a standard normal random variable if and only if $\mathrm{Cov}[Z_1, Z_2] = 0$ and $\mathrm{Cov}[Z_1^2, Z_2^2] = 0$. In other words, the central limit theorem holds if and only if the interchangeable process is uncorrelated and the squares are uncorrelated. Lemma 3 shows that for a correlated interchangeable process, a sufficient condition for a central limit theorem is for the interchangeable process and the squared process to be asymptotically uncorrelated with rate $O(1/M)$.

For simplicity, let $M_1 = M_2 = M$ and $\hat L_{k(l)} := \hat L_{k(l),k(l)}$. Define
$$Y_{M,i} = \frac{\sum_{l \in \bar l} w(l)\, g\big(\hat L_{k(l)}(X_i)\big) - \mathbb E\left[\sum_{l \in \bar l} w(l)\, g\big(\hat L_{k(l)}(X_i)\big)\right]}{\sqrt{\mathrm{Var}\left[\sum_{l \in \bar l} w(l)\, g\big(\hat L_{k(l)}(X_i)\big)\right]}}.$$
Then from Eq. 4, we have that $S_{N,M} = \big(\hat G_w - \mathbb E[\hat G_w]\big) / \sqrt{\mathrm{Var}[\hat G_w]}$. Thus by Lemma 3 it is sufficient to show that $\mathrm{Cov}(Y_{M,1}, Y_{M,2})$ and $\mathrm{Cov}(Y_{M,1}^2, Y_{M,2}^2)$ are $O(1/M)$. To do this, it is necessary to show that the denominator of $Y_{M,i}$ converges to a nonzero constant or to zero sufficiently slowly. It is also necessary to show that the covariance of the numerator is $O(1/M)$. Therefore, to bound $\mathrm{Cov}(Y_{M,1}, Y_{M,2})$, we require bounds on the quantity $\mathrm{Cov}\big[g\big(\hat L_{k(l)}(X_i)\big), g\big(\hat L_{k(l')}(X_j)\big)\big]$ where $l, l' \in \bar l$.

Define $\mathbb M(Z) := Z - \mathbb E Z$, $\hat F_{k(l)}(Z) := \hat L_{k(l)}(Z) - \mathbb E_Z\big[\hat L_{k(l)}(Z)\big]$, and $\hat e_{i,k(l)}(Z) := \hat f_{i,k(l)}(Z) - \mathbb E_Z \hat f_{i,k(l)}(Z)$. Assuming $g$ is sufficiently smooth, a Taylor series expansion of $g\big(\hat L_{k(l)}(Z)\big)$ around $\mathbb E_Z \hat L_{k(l)}(Z)$ gives
$$g\big(\hat L_{k(l)}(Z)\big) = \sum_{i=0}^{\lambda-1} \frac{g^{(i)}\big(\mathbb E_Z \hat L_{k(l)}(Z)\big)}{i!}\, \hat F^i_{k(l)}(Z) + \frac{g^{(\lambda)}(\xi_Z)}{\lambda!}\, \hat F^{\lambda}_{k(l)}(Z),$$
where $\xi_Z \in \big(\mathbb E_Z \hat F_{k(l)}(Z),\ \hat F_{k(l)}(Z)\big)$. We use this expansion to bound the covariance. The expected value of the terms containing the derivatives of $g$ is controlled by assuming that the densities are lower bounded. By assuming the densities are sufficiently smooth, an expression for $\hat F^q_{k(l)}(Z)$ in terms of powers and products of the density error terms $\hat e_{1,k(l)}$ and $\hat e_{2,k(l)}$ is obtained by expanding $\hat L_{k(l)}(Z)$ around $\mathbb E_Z \hat f_{1,k(l)}(Z)$ and $\mathbb E_Z \hat f_{2,k(l)}(Z)$ and applying the binomial theorem. The expected value of products of these density error terms is bounded by applying concentration inequalities and conditional independence. Then the covariance between $\hat F^q_{k(l)}(Z)$ terms is bounded by bounding the covariance between powers and products of the density error terms, by applying Cauchy–Schwarz and other concentration inequalities. This gives the following lemma, which is proved in the supplemental material.

Lemma 4. Let $l, l' \in \bar l$ be fixed, $M_1 = M_2 = M$, and $k(l) = l\sqrt M$. Let $\gamma_1(x)$, $\gamma_2(x)$ be arbitrary functions with one partial derivative with respect to $x$ and $\sup_x |\gamma_i(x)| < \infty$, $i = 1, 2$, and let $\mathbf 1_{\{\cdot\}}$ be the indicator function. Let $X_i$ and $X_j$ be realizations of the density $f_2$, independent of $\hat f_{1,k(l)}$, $\hat f_{1,k(l')}$, $\hat f_{2,k(l)}$, and $\hat f_{2,k(l')}$, and independent of each other when $i \ne j$. Then
$$\mathrm{Cov}\left[\gamma_1(X_i)\hat F^q_{k(l)}(X_i),\ \gamma_2(X_j)\hat F^r_{k(l')}(X_j)\right] = \begin{cases} o(1), & i = j, \\ \mathbf 1_{\{q,r=1\}}\, c_8\big(\gamma_1(x), \gamma_2(x)\big)\left(\frac{1}{M}\right) + o\!\left(\frac{1}{M}\right), & i \ne j. \end{cases}$$

Note that $k(l)$ is required to grow with $\sqrt M$ for Lemma 4 to hold. Define $h_{l,g}(X) = g\big(\mathbb E_X \hat L_{k(l)}(X)\big)$. Lemma 4 can then be used to show that
$$\mathrm{Cov}\left[g\big(\hat L_{k(l)}(X_i)\big),\ g\big(\hat L_{k(l')}(X_j)\big)\right] = \begin{cases} \mathbb E\left[\mathbb M\big(h_{l,g}(X_i)\big)\, \mathbb M\big(h_{l',g}(X_i)\big)\right] + o(1), & i = j, \\ c_8\big(h_{l,g'}(x), h_{l',g'}(x)\big)\left(\frac{1}{M}\right) + o\!\left(\frac{1}{M}\right), & i \ne j. \end{cases}$$

For the covariance of $Y^2_{M,i}$ and $Y^2_{M,j}$, assume WLOG that $i = 1$ and $j = 2$. Then for $l, l', j, j'$ we need to bound the term
$$\mathrm{Cov}\left[\mathbb M\big(g(\hat L_{k(l)}(X_1))\big)\, \mathbb M\big(g(\hat L_{k(l')}(X_1))\big),\ \mathbb M\big(g(\hat L_{k(j)}(X_2))\big)\, \mathbb M\big(g(\hat L_{k(j')}(X_2))\big)\right]. \qquad (5)$$
For the case where $l = l'$ and $j = j'$, we can simply apply the previous results to the functional $d(x) = (\mathbb M(g(x)))^2$. For the more general case, we need to show that
$$\mathrm{Cov}\left[\gamma_1(X_1)\hat F^s_{k(l)}(X_1)\hat F^q_{k(l')}(X_1),\ \gamma_2(X_2)\hat F^t_{k(j)}(X_2)\hat F^r_{k(j')}(X_2)\right] = O\!\left(\frac{1}{M}\right). \qquad (6)$$
To do this, bounds are required on the covariance of up to eight distinct density error terms. Previous results can be applied by using Cauchy–Schwarz when the sum of the exponents of the density error terms is greater than or equal to 4. When the sum is equal to 3, we use the fact that $k(l) = O(k(l'))$ combined with Markov's inequality to obtain a bound of $O(1/M)$. Applying Eq. 6 to the term in Eq. 5 gives the required bound to apply Lemma 3.
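The mechanism behind Lemma 3 can be illustrated numerically. The sketch below (our toy example, not part of the paper's proof) builds an interchangeable, non-Gaussian process whose pairwise covariance vanishes at rate $O(1/M)$ via a weak shared component, and checks that the normalized sum of Eq. 4 is close to standard normal; all names and parameter choices are ours, and the construction does not attempt to reproduce the lemma's full hypotheses.

```python
import random
import statistics
from statistics import NormalDist

random.seed(0)
N = M = 400   # sequence length and dependence parameter
reps = 4000   # number of independent draws of the normalized sum

def std_uniform():
    # mean-0, variance-1, non-Gaussian draws
    return (random.random() - 0.5) * 12 ** 0.5

def normalized_sum():
    w = std_uniform()  # weak shared component makes the u_i interchangeable
    u = [std_uniform() + w / M for _ in range(N)]
    # Cov(u_i, u_j) = 1/M^2 for i != j, so Var[sum u_i] = N + N^2/M^2 exactly
    var_sum = N + (N * N) / (M * M)
    return sum(u) / var_sum ** 0.5

samples = sorted(normalized_sum() for _ in range(reps))
mean = statistics.fmean(samples)
std = statistics.pstdev(samples)
# compare a few empirical quantiles against the standard normal quantiles
nd = NormalDist()
max_gap = max(abs(samples[int(p * reps)] - nd.inv_cdf(p))
              for p in (0.1, 0.25, 0.5, 0.75, 0.9))
```

With the dependence decaying this fast, the empirical quantiles line up with the standard normal ones, which is the same qualitative check the paper performs with a Q-Q plot in Section 4.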
3.2 Broad Implications of Theorem 2
To the best of our knowledge, Theorem 2 provides the first results on the asymptotic distribution of an f-divergence estimator with MSE convergence rate of O(1/T) in the setting of a finite number of samples from two unknown, nonparametric distributions. This enables us to perform inference tasks on the class of f-divergences (defined with smooth functions g) on smooth, strictly lower bounded densities with finite support. Such tasks include hypothesis testing and constructing a confidence interval on the error exponents of the Bayes probability of error for a classification problem. This greatly increases the utility of these divergence estimators. Although we focused on a specific divergence estimator, we suspect that our approach of showing that the components of the estimator and their squares are asymptotically uncorrelated can be adapted to derive central limit theorems for other divergence estimators that satisfy similar assumptions (smooth g, and smooth, strictly lower bounded densities with finite support). We speculate that this would be easiest for estimators that are also based on k-nearest neighbors, such as those in [8] and [18]. It is also possible that the approach can be adapted to other plug-in estimator approaches such as those in [24] and [25]. However, the qualitatively different convex optimization approach to divergence estimation in [23] may require different methods.

Figure 1: Q-Q plot comparing quantiles from the normalized weighted ensemble estimator of the KL divergence (vertical axis) to the quantiles from the standard normal distribution (horizontal axis). The linearity of the Q-Q plot points validates the central limit theorem, Theorem 2, for the estimator.

4 Experiments
We first apply the weighted ensemble estimator of divergence to simulated data to verify the central limit theorem.
We then use the estimator to obtain confidence intervals on the error exponents of the Bayes probability of error for the Iris data set from the UCI machine learning repository [33, 34].

4.1 Simulation
To verify the central limit theorem of the ensemble method, we estimated the KL divergence between two truncated normal densities restricted to the unit cube. The densities have means $\bar\mu_1 = 0.7 \cdot \bar 1_d$, $\bar\mu_2 = 0.3 \cdot \bar 1_d$ and covariance matrices $\sigma_i I_d$, where $\sigma_1 = 0.1$, $\sigma_2 = 0.3$, $\bar 1_d$ is a d-dimensional vector of ones, and $I_d$ is the d-dimensional identity matrix. We show the Q-Q plot of the normalized optimally weighted ensemble estimator of the KL divergence with $d = 6$ and 1000 samples from each density in Fig. 1. The linear relationship between the quantiles of the normalized estimator and the standard normal distribution validates Theorem 2.

4.2 Probability of Error Estimation
Our ensemble divergence estimator can be used to estimate a bound on the Bayes probability of error [7]. Suppose we have two classes $C_1$ and $C_2$ and a random observation $x$. Let the a priori class probabilities be $w_1 = \Pr(C_1) > 0$ and $w_2 = \Pr(C_2) = 1 - w_1 > 0$. Then $f_1$ and $f_2$ are the densities corresponding to the classes $C_1$ and $C_2$, respectively. The Bayes decision rule classifies $x$ as $C_1$ if and only if $w_1 f_1(x) > w_2 f_2(x)$. The Bayes error $P_e^*$ is the minimum average probability of error and is equivalent to
$$P_e^* = \int \min\big(\Pr(C_1|x), \Pr(C_2|x)\big)\, p(x)\, dx = \int \min\big(w_1 f_1(x), w_2 f_2(x)\big)\, dx, \qquad (7)$$
where $p(x) = w_1 f_1(x) + w_2 f_2(x)$. For $a, b > 0$, we have $\min(a, b) \le a^{\alpha} b^{1-\alpha}$ for all $\alpha \in (0, 1)$. Replacing the minimum function in Eq. 7 with this bound gives
$$P_e^* \le w_1^{\alpha} w_2^{1-\alpha}\, c_{\alpha}(f_1 \| f_2), \qquad (8)$$
where $c_{\alpha}(f_1 \| f_2) = \int f_1^{\alpha}(x) f_2^{1-\alpha}(x)\, dx$ is the Chernoff α-coefficient. The Chernoff coefficient is found by choosing the value of α that minimizes the right-hand side of Eq. 8:
$$c^*(f_1 \| f_2) = c_{\alpha^*}(f_1 \| f_2) = \min_{\alpha \in (0,1)} \int f_1^{\alpha}(x) f_2^{1-\alpha}(x)\, dx.$$
Thus if $\alpha^* = \arg\min_{\alpha \in (0,1)} c_{\alpha}(f_1 \| f_2)$, an upper bound on the Bayes error is
$$P_e^* \le w_1^{\alpha^*} w_2^{1-\alpha^*}\, c^*(f_1 \| f_2). \qquad (9)$$

Table 1: Estimated 95% confidence intervals for the bound on the pairwise Bayes error, and the misclassification rate of a QDA classifier with 5-fold cross-validation, applied to the Iris dataset.

|                               | Setosa–Versicolor | Setosa–Virginica | Versicolor–Virginica |
|-------------------------------|-------------------|------------------|----------------------|
| Estimated Confidence Interval | (0, 0.0013)       | (0, 0.0002)      | (0, 0.0726)          |
| QDA Misclassification Rate    | 0                 | 0                | 0.04                 |

The right endpoint of the confidence intervals is nearly zero when comparing the Setosa class to the other two classes, while the right endpoint is much higher when comparing the Versicolor and Virginica classes. This is consistent with the QDA performance and the fact that the Setosa class is linearly separable from the other two classes.

Equation 9 has the form in Eq. 1 (with $g(x) = x^{\alpha}$). Thus we can use the optimally weighted ensemble estimator described in Sec. 2 to estimate a bound on the Bayes error. In practice, we estimate $c_{\alpha}(f_1 \| f_2)$ for multiple values of α (e.g. 0.01, 0.02, ..., 0.99) and choose the minimum. We estimated a bound on the pairwise Bayes error between the three classes (Setosa, Versicolor, and Virginica) in the Iris data set [33, 34] and used bootstrapping to calculate confidence intervals. We compared the bounds to the performance of a quadratic discriminant analysis (QDA) classifier with 5-fold cross-validation. The pairwise estimated 95% confidence intervals and the misclassification rates of the QDA are given in Table 1. Note that the right endpoint of the confidence interval is less than 1/50 when comparing the Setosa class to either of the other two classes. This is consistent with the performance of the QDA and the fact that the Setosa class is linearly separable from the other two classes. In contrast, the right endpoint of the confidence interval is higher when comparing the Versicolor and Virginica classes, which are not linearly separable. This is also consistent with the QDA performance.
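The Chernoff-bound pipeline of Eqs. 7–9 can be made concrete with known densities, where no estimation is needed. The sketch below (our illustration, not the paper's implementation) grids over α exactly as described in the text, numerically integrates the Chernoff α-coefficient for two unit-variance Gaussians with equal priors, and checks the resulting bound against the exact Bayes error; all names and the choice of densities are ours.

```python
import math

def gauss_pdf(x, mu, s):
    return math.exp(-0.5 * ((x - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def chernoff_coefficient(alpha, mu1, mu2, s, lo=-8.0, hi=8.0, n=8000):
    """c_alpha = integral of f1^alpha(x) * f2^(1-alpha)(x) dx, midpoint rule."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += gauss_pdf(x, mu1, s) ** alpha * gauss_pdf(x, mu2, s) ** (1.0 - alpha) * h
    return total

mu1, mu2, s = -1.0, 1.0, 1.0                  # two unit-variance Gaussians
alphas = [i / 100.0 for i in range(1, 100)]   # grid 0.01, 0.02, ..., 0.99 as in the text
c_star, a_star = min((chernoff_coefficient(a, mu1, mu2, s), a) for a in alphas)
bound = 0.5 ** a_star * 0.5 ** (1.0 - a_star) * c_star  # Eq. 9 with priors w1 = w2 = 1/2
# exact Bayes error for equal priors and variances: Phi(-|mu2 - mu1| / (2 s))
bayes_error = 0.5 * math.erfc(abs(mu2 - mu1) / (2.0 * s) / math.sqrt(2.0))
```

In the symmetric case above the minimizing α is 1/2, recovering the Bhattacharyya bound; with samples instead of known densities, each `chernoff_coefficient(a, ...)` evaluation would be replaced by the ensemble estimate $\hat G_{w_0}$ with $g(x) = x^{\alpha}$.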
Thus the estimated bounds provide a measure of the relative difficulty of distinguishing between the classes, even though the small number of samples per class (50) limits the accuracy of the estimated bounds.

5 Conclusion
In this paper, we established the asymptotic normality of a weighted ensemble estimator of f-divergence using d-dimensional truncated k-nn density estimators. To the best of our knowledge, this gives the first results on the asymptotic distribution of an f-divergence estimator with MSE convergence rate of O(1/T) in the setting of a finite number of samples from two unknown, nonparametric distributions. Future work includes simplifying the constants in front of the convergence rates given in [1] for certain families of distributions, deriving Berry–Esseen bounds on the rate of distributional convergence, extending the central limit theorem to other divergence estimators, and deriving the nonasymptotic distribution of the estimator.

Acknowledgments
This work was partially supported by NSF grant CCF-1217880 and an NSF Graduate Research Fellowship to the first author under Grant No. F031543.

References
[1] K. R. Moon and A. O. Hero III, "Ensemble estimation of multivariate f-divergence," in IEEE International Symposium on Information Theory, pp. 356–360, 2014.
[2] K. Sricharan and A. O. Hero III, "Ensemble weighted kernel estimators for multivariate entropy estimation," in Adv. Neural Inf. Process. Syst., pp. 575–583, 2012.
[3] K. Sricharan, D. Wei, and A. O. Hero III, "Ensemble estimators for multivariate entropy estimation," IEEE Trans. Inform. Theory, vol. 59, no. 7, pp. 4374–4388, 2013.
[4] I. Csiszar, "Information-type measures of difference of probability distributions and indirect observations," Studia Sci. Math. Hungar., vol. 2, pp. 299–318, 1967.
[5] S. Kullback and R. A. Leibler, "On information and sufficiency," The Annals of Mathematical Statistics, vol. 22, no. 1, pp. 79–86, 1951.
[6] A. Rényi, "On measures of entropy and information," in Fourth Berkeley Sympos. on Mathematical Statistics and Probability, pp. 547–561, 1961.
[7] T. M. Cover and J. A. Thomas, Elements of Information Theory. John Wiley & Sons, 2006.
[8] B. Póczos and J. G. Schneider, "On the estimation of alpha-divergences," in International Conference on Artificial Intelligence and Statistics, pp. 609–617, 2011.
[9] J. B. Oliva, B. Póczos, and J. Schneider, "Distribution to distribution regression," in International Conference on Machine Learning, pp. 1049–1057, 2013.
[10] I. S. Dhillon, S. Mallela, and R. Kumar, "A divisive information theoretic feature clustering algorithm for text classification," The Journal of Machine Learning Research, vol. 3, pp. 1265–1287, 2003.
[11] H. Peng, F. Long, and C. Ding, "Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy," IEEE Trans. Pattern Anal. Mach. Intell., vol. 27, no. 8, pp. 1226–1238, 2005.
[12] B. Chai, D. Walther, D. Beck, and L. Fei-Fei, "Exploring functional connectivities of the human brain using multivariate information analysis," in Adv. Neural Inf. Process. Syst., pp. 270–278, 2009.
[13] J. Lewi, R. Butera, and L. Paninski, "Real-time adaptive information-theoretic optimization of neurophysiology experiments," in Adv. Neural Inf. Process. Syst., pp. 857–864, 2006.
[14] E. Schneidman, W. Bialek, and M. J. Berry, "An information theoretic approach to the functional classification of neurons," in Adv. Neural Inf. Process. Syst., pp. 197–204, 2002.
[15] K. M. Carter, R. Raich, and A. O. Hero III, "On local intrinsic dimension estimation and its applications," IEEE Trans. Signal Process., vol. 58, no. 2, pp. 650–663, 2010.
[16] A. O. Hero III, B. Ma, O. J. Michel, and J. Gorman, "Applications of entropic spanning graphs," IEEE Signal Process. Mag., vol. 19, no. 5, pp. 85–95, 2002.
[17] K. R. Moon and A. O. Hero III, "Ensemble estimation of multivariate f-divergence," CoRR, vol. abs/1404.6230, 2014.
[18] Q. Wang, S. R. Kulkarni, and S. Verdú, "Divergence estimation for multidimensional densities via k-nearest-neighbor distances," IEEE Trans. Inform. Theory, vol. 55, no. 5, pp. 2392–2405, 2009.
[19] G. A. Darbellay, I. Vajda, et al., "Estimation of the information by an adaptive partitioning of the observation space," IEEE Trans. Inform. Theory, vol. 45, no. 4, pp. 1315–1321, 1999.
[20] Q. Wang, S. R. Kulkarni, and S. Verdú, "Divergence estimation of continuous distributions based on data-dependent partitions," IEEE Trans. Inform. Theory, vol. 51, no. 9, pp. 3064–3074, 2005.
[21] J. Silva and S. S. Narayanan, "Information divergence estimation based on data-dependent partitions," Journal of Statistical Planning and Inference, vol. 140, no. 11, pp. 3180–3198, 2010.
[22] T. K. Le, "Information dependency: Strong consistency of Darbellay–Vajda partition estimators," Journal of Statistical Planning and Inference, vol. 143, no. 12, pp. 2089–2100, 2013.
[23] X. Nguyen, M. J. Wainwright, and M. I. Jordan, "Estimating divergence functionals and the likelihood ratio by convex risk minimization," IEEE Trans. Inform. Theory, vol. 56, no. 11, pp. 5847–5861, 2010.
[24] S. Singh and B. Póczos, "Generalized exponential concentration inequality for Rényi divergence estimation," in International Conference on Machine Learning, pp. 333–341, 2014.
[25] A. Krishnamurthy, K. Kandasamy, B. Póczos, and L. Wasserman, "Nonparametric estimation of Rényi divergence and friends," in International Conference on Machine Learning, vol. 32, 2014.
[26] A. Berlinet, L. Devroye, and L. Györfi, "Asymptotic normality of L1 error in density estimation," Statistics, vol. 26, pp. 329–343, 1995.
[27] A. Berlinet, L. Györfi, and I. Dénes, "Asymptotic normality of relative entropy in multivariate density estimation," Publications de l'Institut de Statistique de l'Université de Paris, vol. 41, pp. 3–27, 1997.
[28] P. J. Bickel and M. Rosenblatt, "On some global measures of the deviations of density function estimates," The Annals of Statistics, pp. 1071–1095, 1973.
[29] D. O. Loftsgaarden and C. P. Quesenberry, "A nonparametric estimate of a multivariate density function," The Annals of Mathematical Statistics, pp. 1049–1051, 1965.
[30] K. Sricharan, R. Raich, and A. O. Hero III, "Estimation of nonlinear functionals of densities with confidence," IEEE Trans. Inform. Theory, vol. 58, no. 7, pp. 4135–4159, 2012.
[31] K. Sricharan, Neighborhood graphs for estimation of density functionals. PhD thesis, Univ. Michigan, 2012.
[32] J. Blum, H. Chernoff, M. Rosenblatt, and H. Teicher, "Central limit theorems for interchangeable processes," Canad. J. Math., vol. 10, pp. 222–229, 1958.
[33] K. Bache and M. Lichman, "UCI machine learning repository," 2013.
[34] R. A. Fisher, "The use of multiple measurements in taxonomic problems," Annals of Eugenics, vol. 7, no. 2, pp. 179–188, 1936.
Online Decision-Making in General Combinatorial Spaces Arun Rajkumar Shivani Agarwal Department of Computer Science and Automation Indian Institute of Science, Bangalore 560012, India {arun r,shivani}@csa.iisc.ernet.in Abstract We study online combinatorial decision problems, where one must make sequential decisions in some combinatorial space without knowing in advance the cost of decisions on each trial; the goal is to minimize the total regret over some sequence of trials relative to the best fixed decision in hindsight. Such problems have been studied mostly in settings where decisions are represented by Boolean vectors and costs are linear in this representation. Here we study a general setting where costs may be linear in any suitable low-dimensional vector representation of elements of the decision space. We give a general algorithm for such problems that we call low-dimensional online mirror descent (LDOMD); the algorithm generalizes both the Component Hedge algorithm of Koolen et al. (2010), and a recent algorithm of Suehiro et al. (2012). Our study offers a unification and generalization of previous work, and emphasizes the role of the convex polytope arising from the vector representation of the decision space; while Boolean representations lead to 0-1 polytopes, more general vector representations lead to more general polytopes. We study several examples of both types of polytopes. Finally, we demonstrate the benefit of having a general framework for such problems via an application to an online transportation problem; the associated transportation polytopes generalize the Birkhoff polytope of doubly stochastic matrices, and the resulting algorithm generalizes the PermELearn algorithm of Helmbold and Warmuth (2009). 1 Introduction In an online combinatorial decision problem, the decision space is a set of combinatorial structures, such as subsets, trees, paths, permutations, etc. 
On each trial, one selects a combinatorial structure from the decision space, and incurs a loss; the goal is to minimize the regret over some sequence of trials relative to the best fixed structure in hindsight. Such problems have been studied extensively in the last several years, primarily in the setting where the combinatorial structures are represented by Boolean vectors, and costs are linear in this representation; this includes online learning of paths, permutations, and various other specific combinatorial structures [16, 17, 12], as well as the Component Hedge algorithm of Koolen et al. [14] which generalizes many of these previous studies. More recently, Suehiro et al. [15] considered a setting where the combinatorial structures of interest are represented by the vertices of the base polytope of a submodular function, and costs are linear in this representation; this includes as special cases several of the Boolean examples considered earlier, as well as new settings such as learning permutations with certain position-based losses (see also [2]). In this work, we consider a general form of the online combinatorial decision problem, where costs can be linear in any suitable low-dimensional vector representation of the combinatorial structures of interest. This encompasses representations as Boolean vectors and vertices of submodular base polytopes as special cases, but also includes many other settings. We give a general algorithm for 1 such problems that we call low-dimensional online mirror descent (LDOMD); the algorithm generalizes both the Component Hedge algorithm of Koolen et al. for Boolean representations [14], and the algorithm of Suehiro et al. 
for submodular polytope vertex representations [15].¹ As we show, in many settings of interest, the regret bounds for LDOMD are better than what can be obtained with other algorithms for online decision problems, such as the Hedge algorithm of Freund and Schapire [10] and the Follow the Perturbed Leader algorithm of Kalai and Vempala [13]. We start with some preliminaries and background in Section 2, and describe the LDOMD algorithm and its analysis in Section 3. Our study emphasizes the role of the convex polytope arising from the vector representation of the decision space; we study several examples of such polytopes, including matroid polytopes, polytopes associated with submodular functions, and permutation polytopes in Sections 4–6, respectively. Section 7 applies our framework to an online transportation problem.

2 Preliminaries and Background

Notation. For n ∈ Z+, we will denote [n] = {1, . . . , n}. For a vector z ∈ R^d, we will denote by ∥z∥₁, ∥z∥₂, and ∥z∥∞ the standard L1, L2, and L∞ norms of z, respectively. For a set Z ⊆ R^d, we will denote by conv(Z) the convex hull of Z, and by int(Z) the interior of Z. For a closed convex set K ⊆ R^d and Legendre function F : K → R,² we will denote by B_F : K × int(K) → R+ the Bregman divergence associated with F, defined as B_F(x, x′) = F(x) − F(x′) − ∇F(x′) · (x − x′), and by F* : ∇F(int(K)) → R the Fenchel conjugate of F, defined as F*(u) = sup_{x∈K} (x · u − F(x)).

Online Combinatorial Decision-Making
  Inputs: Finite set of combinatorial structures C; mapping φ : C → R^d
  For t = 1 . . . T:
    – Predict c_t ∈ C
    – Receive loss vector ℓ_t ∈ [0, 1]^d
    – Incur loss φ(c_t) · ℓ_t
Figure 1: Online decision-making in a general combinatorial space.

Problem Setup. Let C be a (finite but large) set of combinatorial structures. Let φ : C → R^d be some injective mapping that maps each c ∈ C to a unique vector φ(c) ∈ R^d (so that |φ(C)| = |C|). We will generally assume d ≪ |C| (e.g. d = polylog(|C|)).
The online combinatorial decision-making problem we consider can be described as follows: On each trial t, one makes a decision in C by selecting a structure c_t ∈ C, and receives a loss vector ℓ_t ∈ [0, 1]^d; the loss incurred is given by φ(c_t) · ℓ_t (see Figure 1). The goal is to minimize the regret relative to the single best structure in C in hindsight; specifically, the regret of an algorithm A that selects c_t ∈ C on trial t over T trials is defined as R_T[A] = Σ_{t=1}^T φ(c_t) · ℓ_t − min_{c∈C} Σ_{t=1}^T φ(c) · ℓ_t. In particular, we would like to design algorithms whose worst-case regret (over all possible loss sequences) is sublinear in T (and also has as good a dependence as possible on other relevant problem parameters). From standard results, it follows that for any deterministic algorithm, there is always a loss sequence that forces the regret to be linear in T; as is common in the online learning literature, we will therefore consider randomized algorithms that maintain a probability distribution p_t over C from which c_t is randomly drawn, and consider bounding the expected regret of such algorithms.

Online Mirror Descent (OMD). Recall that online mirror descent (OMD) is a general algorithmic framework for online convex optimization problems, where on each trial t, one selects a point x_t in some convex set Ω ⊆ R^n, receives a convex cost function f_t : Ω → R, and incurs a loss f_t(x_t); the goal is to minimize the regret relative to the best single point in Ω in hindsight. The OMD algorithm makes use of a Legendre function F : K → R defined on a closed convex set K ⊇ Ω, and effectively performs a form of projected gradient descent in the dual space of int(K) under F, the projections being in terms of the Bregman divergence B_F associated with F. See Appendix A.1 for an outline of OMD and its regret bound for the special case of online linear optimization, where costs f_t are linear (so that f_t(x) = ℓ_t · x for some ℓ_t ∈ R^n), which will be relevant to our study.
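As a concrete illustration (our own toy instance, not from the paper), the protocol of Figure 1 and the regret R_T can be simulated directly; here C is the 2-subsets of [4] with the characteristic-vector mapping φ, and the "algorithm" deliberately plays one fixed structure throughout:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Toy decision space C: 2-subsets of [4]; phi maps a subset to its
# 0-1 characteristic vector, so losses are linear: phi(c) . l_t.
structures = list(combinations(range(4), 2))
phi = {S: np.array([float(i in S) for i in range(4)]) for S in structures}

T = 100
losses = rng.uniform(size=(T, 4))  # l_t in [0, 1]^d

# Loss of a (deliberately naive) algorithm that always plays the
# first structure, versus the best fixed structure in hindsight.
alg_loss = sum(float(phi[structures[0]] @ losses[t]) for t in range(T))
cum = losses.sum(axis=0)
best_fixed = min(float(phi[S] @ cum) for S in structures)
regret = alg_loss - best_fixed  # R_T; nonnegative for any fixed strategy
```

Since the fixed strategy is itself one of the comparators, its regret is always nonnegative; a good algorithm drives R_T/T to zero.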
¹We note that the recent online stochastic mirror descent (OSMD) algorithm of Audibert et al. [3] also generalizes the Component Hedge algorithm, but in a different direction: OSMD (as described in [3]) applies only to Boolean representations, but allows also for partial information (bandit) settings; here we consider only full information settings, but allow for more general vector representations.
²Recall that for a closed convex set K ⊆ R^d, a function F : K → R is Legendre if it is strictly convex, differentiable on int(K), and (for any norm ∥·∥ on R^d) ∥∇F(x_n)∥ → +∞ whenever {x_n} converges to a point in the boundary of K.

Hedge/Naïve OMD. The Hedge algorithm proposed by Freund and Schapire [10] is widely used for online decision problems in general. The algorithm maintains a probability distribution over the decision space, and can be viewed as an instantiation of the OMD framework, with Ω (and K) the probability simplex over the decision space, linear costs f_t (since one works with expected losses), and F the negative entropy. When applied to online combinatorial decision problems in a naïve manner, the Hedge algorithm requires maintaining a probability distribution over the combinatorial decision space C, which in many cases can be computationally prohibitive (see Appendix A.2 for an outline of the algorithm, which we also refer to as Naïve OMD). The following bound on the expected regret of the Hedge/Naïve OMD algorithm is well known:

Theorem 1 (Regret bound for Hedge/Naïve OMD). Let φ(c) · ℓ_t ∈ [a, b] ∀c ∈ C, t ∈ [T]. Then setting η* = (2/(b−a)) √(2 ln|C| / T) gives E[R_T(Hedge(η*))] ≤ (b − a) √(T ln|C| / 2).

Follow the Perturbed Leader (FPL). Another widely used algorithm for online decision problems is the Follow the Perturbed Leader (FPL) algorithm proposed by Kalai and Vempala [13] (see Appendix A.3 for an outline of the algorithm).
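To make Theorem 1 concrete, the following minimal sketch runs Naïve Hedge on the 2-subsets toy instance (the instance and all constants are our own illustration, not the paper's). Note the O(|C|) state — one weight per structure — which is exactly what becomes prohibitive for large combinatorial spaces:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# One weight per structure: state grows with |C|, not with d.
C = np.array([[float(i in S) for i in range(4)]
              for S in combinations(range(4), 2)])  # |C| = 6, d = 4
T = 200
losses = rng.uniform(size=(T, 4))

# Theorem 1 step size with [a, b] = [0, 2] (each structure has 2 elements).
eta = (2.0 / 2.0) * np.sqrt(2.0 * np.log(len(C)) / T)

w = np.ones(len(C))
expected_loss = 0.0
for t in range(T):
    p = w / w.sum()                 # distribution over C
    m = C @ losses[t]               # loss of every structure this trial
    expected_loss += float(p @ m)   # E[phi(c_t) . l_t]
    w *= np.exp(-eta * m)           # multiplicative (Hedge) update

best_fixed = float((C @ losses.sum(axis=0)).min())
bound = 2.0 * np.sqrt(T * np.log(len(C)) / 2.0)  # (b-a) sqrt(T ln|C| / 2)
```

The expected regret stays within the Theorem 1 bound on any loss sequence; here working with expected losses makes the guarantee hold deterministically.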
Note that in the combinatorial setting, FPL requires the solution to a combinatorial optimization problem on each trial, which may or may not be efficiently solvable depending on the form of the mapping φ. The following bound on the expected regret of the FPL algorithm is well known:

Theorem 2 (Regret bound for FPL). Let ∥φ(c) − φ(c′)∥₁ ≤ D₁, ∥ℓ_t∥₁ ≤ G₁, and |φ(c) · ℓ_t| ≤ B ∀c, c′ ∈ C, t ∈ [T]. Then setting η* = √(D₁ / (B G₁ T)) gives E[R_T(FPL(η*))] ≤ 2 √(D₁ B G₁ T).

Polytopes. Recall that a set S ⊂ R^d is a polytope if there exist a finite number of points x_1, . . . , x_n ∈ R^d such that S = conv({x_1, . . . , x_n}). Any polytope S ⊂ R^d has a unique minimal set of points x′_1, . . . , x′_m ∈ R^d such that S = conv({x′_1, . . . , x′_m}); these points are called the vertices of S. A polytope S ⊂ R^d is said to be a 0-1 polytope if all its vertices lie in the Boolean hypercube {0, 1}^d. As we shall see, in our study of online combinatorial decision problems as above, the polytope conv(φ(C)) ⊂ R^d will play a central role. Clearly, if φ(C) ⊆ {0, 1}^d, then conv(φ(C)) is a 0-1 polytope; in general, however, conv(φ(C)) can be any polytope in R^d.

3 Low-Dimensional Online Mirror Descent (LDOMD)

We describe the Low-Dimensional OMD (LDOMD) algorithm in Figure 2. The algorithm maintains a point x_t in the polytope conv(φ(C)). It makes use of a Legendre function F : K → R defined on a closed convex set K ⊇ conv(φ(C)), and effectively performs OMD in a d-dimensional space rather than in a |C|-dimensional space as in the case of Hedge/Naïve OMD.
Note that an efficient implementation of LDOMD requires two operations to be performed efficiently: (a) given a point x_t ∈ conv(φ(C)), one needs to be able to efficiently find a 'decomposition' of x_t into a convex combination of a small number of points in φ(C) (this yields a distribution p_t ∈ Δ_C that satisfies E_{c∼p_t}[φ(c)] = x_t and also has small support, allowing efficient sampling); and (b) given a point x̃_{t+1} ∈ K, one needs to be able to efficiently find a 'projection' of x̃_{t+1} onto conv(φ(C)) in terms of the Bregman divergence B_F. The following regret bound for LDOMD follows directly from the standard OMD regret bound (see Theorem 4 in Appendix A.1):

Theorem 3 (Regret bound for LDOMD). Let B_F(φ(c), x_1) ≤ D² ∀c ∈ C. Let ∥·∥ be any norm in R^d such that ∥ℓ_t∥ ≤ G ∀t ∈ [T], and such that the restriction of F to conv(φ(C)) is α-strongly convex w.r.t. ∥·∥*, the dual norm of ∥·∥. Then setting η* = (D/G) √(2α/T) gives E[R_T(LDOMD(η*))] ≤ DG √(2T/α).

As we shall see below, the LDOMD algorithm generalizes both the Component Hedge algorithm of Koolen et al. [14], which applies to settings where φ(C) ⊆ {0, 1}^d (Section 3.1), and the recent algorithm of Suehiro et al. [15], which applies to settings where conv(φ(C)) is the base polytope associated with a submodular function (Section 5).

Algorithm Low-Dimensional OMD (LDOMD) for Online Combinatorial Decision-Making
  Inputs: Finite set of combinatorial structures C; mapping φ : C → R^d
  Parameters: η > 0; closed convex set K ⊇ conv(φ(C)); Legendre function F : K → R
  Initialize: x_1 = argmin_{x ∈ conv(φ(C))} F(x) (or x_1 = any other point in conv(φ(C)))
  For t = 1 . . . T:
    – Let p_t be any distribution over C such that E_{c∼p_t}[φ(c)] = x_t [Decomposition step]
    – Randomly draw c_t ∼ p_t
    – Receive loss vector ℓ_t ∈ [0, 1]^d
    – Incur loss φ(c_t) · ℓ_t
    – Update: x̃_{t+1} ← ∇F*(∇F(x_t) − η ℓ_t)
              x_{t+1} ← argmin_{x ∈ conv(φ(C))} B_F(x, x̃_{t+1}) [Bregman projection step]
Figure 2: The LDOMD algorithm.
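As an illustrative sketch (ours, not the paper's implementation): with the unnormalized negative entropy F(x) = Σ_i x_i ln x_i − x_i one has ∇F(x) = ln x and ∇F*(u) = e^u, so the update in Figure 2 becomes a multiplicative step followed by a Bregman projection. For the m-set polytope {x ∈ [0,1]^d : Σ_i x_i = m} the unnormalized relative entropy projection can be computed by scaling the uncapped coordinates and capping any that would exceed 1, in the style of [17, 14]; we track the expected loss x_t · ℓ_t directly and omit the sampling/decomposition step:

```python
import numpy as np

def kl_project_capped(y, m):
    # Bregman projection of y > 0 onto {x in [0,1]^d : sum(x) = m}
    # under unnormalized relative entropy: scale uncapped coordinates
    # to the remaining mass; cap any coordinate whose scaled value
    # would exceed 1, then rescale the rest.
    x = np.asarray(y, dtype=float).copy()
    capped = np.zeros(len(x), dtype=bool)
    while True:
        s = (m - capped.sum()) / x[~capped].sum()
        if x[~capped].max() * s <= 1.0:
            x[~capped] *= s
            x[capped] = 1.0
            return x
        capped[np.argmax(np.where(capped, -np.inf, x))] = True

def ldomd_msets(d, m, losses, eta):
    # grad F(x) = ln x, grad F*(u) = exp(u)  =>  multiplicative step.
    x = np.full(d, m / d)        # uniform start in conv(phi(C))
    expected_loss = 0.0
    for l in losses:
        expected_loss += float(x @ l)   # = E[phi(c_t) . l_t]
        x = kl_project_capped(x * np.exp(-eta * l), m)
    return x, expected_loss

# Elements 0-2 always incur loss; element 3 is free: the iterate caps
# coordinate 3 at its upper bound 1 and spreads the rest evenly.
x, _ = ldomd_msets(d=4, m=2,
                   losses=[np.array([1.0, 1.0, 1.0, 0.0])] * 30, eta=0.5)
```

On this sequence the iterate converges to (1/3, 1/3, 1/3, 1): any 2-set containing element 3 is optimal, and the residual mass m − 1 is split symmetrically over the penalized elements.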
3.1 LDOMD with 0-1 Polytopes

Consider first a setting where each c ∈ C is represented as a Boolean vector, so that φ(C) ⊆ {0, 1}^d. In this case conv(φ(C)) is a 0-1 polytope. This is the setting commonly studied under the term 'online combinatorial learning' [14, 8, 3]. In analyzing this setting, one generally introduces an additional problem parameter, namely an upper bound m on the 'size' of each Boolean vector φ(c). Specifically, let us assume ∥φ(c)∥₁ ≤ m ∀c ∈ C for some m ∈ [d]. Under the above assumption, it is easy to verify that applying Theorems 1 and 2 gives

E[R_T(Hedge(η*))] = O(m √(Tm ln(d/m)));  E[R_T(FPL(η*))] = O(m √(Td)).

For the LDOMD algorithm, since conv(φ(C)) ⊆ [0, 1]^d ⊂ R_+^d, it is common to take K = R_+^d and to let F : K → R be the unnormalized negative entropy, defined as F(x) = Σ_{i=1}^d x_i ln x_i − Σ_{i=1}^d x_i, which leads to a multiplicative update algorithm; the resulting algorithm was termed Component Hedge in [14]. For the above choice of F, it is easy to see that B_F(φ(c), x_1) ≤ m ln(d/m) ∀c ∈ C; moreover, ∥ℓ_t∥∞ ≤ 1 ∀t, and the restriction of F to conv(φ(C)) is (1/m)-strongly convex w.r.t. ∥·∥₁. Therefore, applying Theorem 3 with appropriate η*, one gets

E[R_T(LDOMD(η*))] = O(m √(T ln(d/m))).

Thus, when φ(C) ⊆ {0, 1}^d, the LDOMD algorithm with the above choice of F gives a better regret bound than both Hedge/Naïve OMD and FPL; in fact the performance of LDOMD in this setting is essentially optimal, as one can easily show a matching lower bound [3]. Below we will see how several online combinatorial decision problems studied in the literature can be recovered under the above framework (e.g. see [16, 17, 12, 14, 8]); in many of these cases, both the decomposition and unnormalized relative entropy projection steps in LDOMD can be performed efficiently (in poly(d) time) (e.g. see [14]). As a warm-up, consider the following simple example:

Example 1 (m-sets with element-based losses).
Here C contains all size-m subsets of a ground set of d elements: C = {S ⊆[d] | |S| = m}. On each trial t, one selects a subset St ∈C and receives a loss vector ℓt ∈[0, 1]d, with ℓt i specifying the loss for including element i ∈[d]; the loss for the subset St is given by P i∈St ℓt i. Here it is natural to define a mapping φ : C→{0, 1}d that maps each S ∈C to its characteristic vector, defined as φi(S) = 1(i ∈S) ∀i ∈[d]; the loss incurred on predicting St ∈C is then simply φ(St) · ℓt. Thus φ(C) = {x ∈{0, 1}d | ∥x∥1 = m}, and conv(φ(C)) = {x ∈[0, 1]d | ∥x∥1 = m}. LDOMD with unnormalized negative entropy as above has a regret bound of O m q T ln( d m)  . It can be shown that both decomposition and unnormalized relative entropy projection steps take O(d2) time [17, 14]. 4 3.2 LDOMD with General Polytopes Now consider a general setting where φ : C→Rd, and conv(φ(C)) ⊂Rd is an arbitrary polytope. Let us assume again ∥φ(c)∥1 ≤m ∀c ∈C for some m > 0. Again, it is easy to verify that applying Theorems 1 and 2 gives E h RT  Hedge(η∗) i = O(m p T ln |C|) ; E h RT  FPL(η∗) i = O(m √ Td) . For the LDOMD algorithm, we consider two cases: Case 1: φ(C) ⊂Rd +. Here one can again take K = Rd + and let F : K→R be the unnormalized negative entropy. In this case, one gets BF (φ(c), x1) ≤m ln(d) + m ∀c ∈C if m < d, and BF (φ(c), x1) ≤m ln(m) + d ∀c ∈C if m ≥d. As before, ∥ℓt∥∞≤1 ∀t, and the restriction of F on conv(φ(C)) is ( 1 m)-strongly convex w.r.t. ∥· ∥1, so applying Theorem 3 for appropriate η∗gives E h RT  LDOMD(η∗) i = ( O m p T ln(d)  if m < d O m p T ln(m)  if m ≥d. Thus, when φ(C) ⊂Rd +, if ln |C| = ω(max(ln(m), ln(d)))) and d = ω(ln(m)), then the LDOMD algorithm with unnormalized negative entropy again gives a better regret bound than both Hedge/Na¨ıve OMD and FPL. Case 2: φ(C) ̸⊂Rd +. Here one can no longer use the unnormalized negative entropy in LDOMD. 
One possibility is to take K = Rd and let F : K→R be defined as F(x) = 1 2∥x∥2 2, which leads to an additive update algorithm. In this case, one gets BF (φ(c), x1) = 1 2∥φ(c) −x1∥2 2 ≤2m2 ∀c ∈C; moreover, ∥ℓt∥2 ≤ √ d ∀t, and F is 1-strongly convex w.r.t. ∥· ∥2. Applying Theorem 3 for appropriate η∗then gives E h RT  LDOMD(η∗) i = O(m √ Td) . Thus in general, when φ(C) ̸⊂Rd +, LDOMD with squared L2-norm has a similar regret bound as that of Hedge/Na¨ıve OMD and FPL. Note however that in some cases, Hedge/Na¨ıve OMD and FPL may be infeasible to implement efficiently, while LDOMD with squared L2-norm may be efficiently implementable; moreover, in certain cases it may be possible to implement LDOMD with other choices of K and F that lead to better regret bounds. In the following sections we will consider several examples of applications of LDOMD to online combinatorial decision problems involving both 0-1 polytopes and general polytopes in Rd. 4 Matroid Polytopes Consider an online decision problem in which the decision space C contains (not necessarily all) independent sets in a matroid M = (E, I). Specifically, on each trial t, one selects an independent set It ∈C, and receives a loss vector ℓt ∈[0, 1]|E|, with ℓt e specifying the loss for including element e ∈E; the loss for the independent set It is given by P e∈It ℓt e. Here it is natural to define a mapping φ : C→{0, 1}|E| that maps each independent set I ∈C to its characteristic vector, defined as φe(I) = 1(e ∈I); the loss on selecting It ∈C is then φ(It) · ℓt. Thus here d = |E|, and φ(C) ⊆{0, 1}|E|. A particularly interesting case is obtained by taking C to contain all the maximal independent sets (bases) in I; in this case, the polytope conv(φ(C)) is known as the matroid base polytope of M. 
This polytope, often denoted as B(M), is also given by B(M) = n x ∈R|E| P e∈S xe ≤rankM(S) ∀S ⊂E, and P e∈E xe = rankM(E) o , where rankM : 2E→R is the matroid rank function of M defined as rankM(S) = max  |I| | I ∈I, I ⊆S ∀S ⊆E . We will see below (Section 5) that both decomposition and unnormalized relative entropy projection steps in this case can be performed efficiently assuming an appropriate oracle. We note that Example 1 (m-subsets of a ground set of d elements) can be viewed as a special case of the above setting for the matroid Msub = (E, I) defined by E = [d] and I = {S ⊆E | |S| ≤m}; the set C of m-subsets of [d] is then simply the set of bases in I, and conv(φ(C)) = B(Msub). The following is another well-studied example: 5 Example 2 (Spanning trees with edge-based losses). Here one is given a connected, undirected graph G = ([n], E), and the decision space C is the set of all spanning trees in G. On each trial t, one selects a spanning tree T t ∈C and receives a loss vector ℓt ∈[0, 1]|E|, with ℓt e specifying the loss for using edge e; the loss for the tree T t is given by P e∈T t ℓt e. It is well known that the set of all spanning trees in G is the set of bases in the graphic matroid MG = (E, I), where I contains edge sets of all acyclic subgraphs of G. Therefore here d = |E|, φ(C) is the set of incidence vectors of all spanning trees in G, and conv(φ(C)) = B(MG), also known as the spanning tree polytope. Here LDOMD with unnormalized negative entropy has a regret bound of O n q T ln( |E| n−1)  . 5 Polytopes Associated with Submodular Functions Next we consider settings where the decision space C is in one-to-one correspondence with the set of vertices of the base polytope associated with a submodular function, and losses are linear in the corresponding vertex representations of elements in C. This setting was considered recently in [15], and as we shall see, encompasses both of the examples we saw earlier, as well as many others. 
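A useful computational primitive for such base polytopes is Edmonds' greedy algorithm: for a submodular f with f(∅) = 0, the vertex of the base polytope B(f) maximizing w · x assigns each coordinate its marginal gain when coordinates are added in order of decreasing weight. A minimal sketch (our own illustration; the set-function encodings below are our assumptions):

```python
import numpy as np

def greedy_vertex(w, f):
    # Edmonds' greedy algorithm: the vertex of the base polytope B(f)
    # maximizing w . x, for a submodular f with f(frozenset()) = 0.
    order = np.argsort(-np.asarray(w, dtype=float))  # decreasing weights
    x = np.zeros(len(w))
    S = frozenset()
    for i in order:
        x[i] = f(S | {i}) - f(S)   # marginal gain of adding coordinate i
        S = S | {i}
    return x

# Uniform-matroid rank function (the m-sets of Example 1, m = 2):
# the maximizer simply selects the two largest weights.
v = greedy_vertex([0.9, 0.1, 0.5, 0.3], lambda S: min(len(S), 2))

# Cardinality-based f_perm of the permutahedron (Example 3, n = 4):
# the maximizer is the permutation sorting objects by weight.
n = 4
f_perm = lambda S: sum(n - i + 1 for i in range(1, len(S) + 1))
u = greedy_vertex([0.2, 0.8, 0.5, 0.1], f_perm)
```

This is the linear-optimization oracle one would use, e.g., inside FPL or inside Carathéodory-style decomposition routines over these polytopes.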
Let f : 2[n]→R be a submodular function with f(∅) = 0. The base polytope of f is defined as B(f) = n x ∈Rn P i∈S xi ≤f(S) ∀S ⊂[n], and Pn i=1 xi = f([n]) o . Let φ : C→Rn be a bijective mapping from C to the vertices of B(f); thus conv(φ(C)) = B(f). 5.1 Monotone Submodular Functions It is known that when f is a monotone submodular function (which means U ⊆V =⇒f(U) ≤ f(V )), then B(f) ⊆Rn + [4]. Therefore in this case one can take K = Rn + and F : K→R to be the unnormalized negative entropy. Both decomposition and unnormalized relative entropy projection steps can be performed in time O(n6 + n5Q), where Q is the time taken by an oracle that given S returns f(S); for cardinality-based submodular functions, for which f(S) = g(|S|) for some g : [n]→R, these steps can be performed in just O(n2) time [15]. Remark on matroid base polytopes and spanning trees. We note that the matroid rank function of any matroid M is a monotone submodular function, and that the matroid base polytope B(M) is the same as B(rankM). Therefore Examples 1 and 2 can also be viewed as special cases of the above setting. For the spanning trees of Example 2, the decomposition step of [14] makes use of a linear programming formulation whose exact time complexity is unclear. Instead, one could use the decomposition step associated with the submodular function rankMG, which takes O(|E|6) time. Matroid polytopes are 0-1 polytopes; the example below illustrates a more general polytope: Example 3 (Permutations with a certain position-based loss). Let C = Sn, the set of all permutations of n objects: C = {σ : [n]→[n] | σ is bijective}. On each trial t, one selects a permutation σt ∈C and receives a loss vector ℓt ∈[0, 1]n; the loss of the permutation is given by Pn i=1 ℓt i (n−σt(i)+1). 
This type of loss arises in scheduling applications, where ℓt i denotes the time taken to complete the i-th job, and the loss of a job schedule (permutation of jobs) is the total waiting time of all jobs (the waiting time of a job is its own completion time plus the sum of completion times of all jobs scheduled before it) [15]. Here it is natural to define a mapping φ : C→Rn + that maps σ ∈C to φ(σ) = (n −σ(1) + 1, . . . , n −σ(n) + 1); the loss on selecting σt ∈C is then φ(σt) · ℓt. Thus here we have d = n, and φ(C) = {(σ(1), . . . , σ(n)) | σ ∈Sn}. It is known that the n! vectors in φ(C) are exactly the vertices of the base polytope corresponding to the monotone (cardinality-based) submodular function fperm : 2[n]→R defined as fperm(S) = P|S| i=1(n −i + 1). Thus conv(φ(C)) = B(fperm); this is a well-known polytope called the permutahedron [21], and has recently been studied in the context of online learning applications in [18, 15, 1]. Here ∥φ(σ)∥1 = n(n+1) 2 ∀σ ∈C, and therefore LDOMD with unnormalized negative entropy has a regret bound of O n2p T ln(n)  . As noted above, decomposition and unnormalized relative entropy projection steps take O(n2) time. 5.2 General Submodular Functions In general, when f is non-monotone, B(f) ⊂Rn can contain vectors with non-negative entries. Here one can use LOMD with the squared L2-norm. The Euclidean projection step can again be performed in time O(n6 + n5Q) in general, where Q is the time taken by an oracle that given S returns f(S), and in O(n2) time for cardinality-based submodular functions [15]. 6 6 Permutation Polytopes There has been increasing interest in recent years in online decision problems involving rankings or permutations, largely due to their role in applications such as information retrieval, recommender systems, rank aggregation, etc [12, 18, 19, 15, 1, 2]. Here the decision space is C = Sn, the set of all permutations of n objects: C = {σ : [n]→[n] | σ is bijective} . 
On each trial t, one predicts a permutation σt ∈C and receives some type of loss. We saw one special type of loss in Example 3; we now consider any loss that can be represented as a linear function of some vector representation of the permutations in C. Specifically, let d ∈Z+, and let φ : C→Rd be any injective mapping such that on predicting σt, one receives a loss vector ℓt ∈[0, 1]d and incurs loss φ(σt) · ℓt. For any such mapping φ, the polytope conv(φ(C)) is called a permutation polytope [5].3 The permutahedron we saw in Example 3 is one example of a permutation polytope; here we consider various other examples. For any such polytope, if one can perform the decomposition and suitable Bregman projection steps efficiently, then one can use the LDOMD algorithm to obtain good regret guarantees with respect to the associated loss. Example 4 (Permutations with assignment-based losses). Here on each trial t, one selects a permutation σt ∈C and receives a loss matrix ℓt ∈[0, 1]n×n, with ℓt ij specifying the loss for assigning element i to position j; the loss for the permutation σt is given by Pn i=1 ℓt i,σt(i). Here it is natural to define a mapping φ : C→{0, 1}n×n that maps each σ ∈C to its associated permutation matrix P σ ∈{0, 1}n×n, defined as P σ ij = 1(σ(i) = j) ∀i, j ∈[n]; the loss incurred on predicting σt ∈C is then Pn i=1 Pn j=1 φij(σt)ℓt ij. Thus we have here that d = n2, φ(C) = {P σ ∈{0, 1}n×n | σ ∈Sn}, and conv(φ(C)) is the well-known Birkhoff polytope containing all doubly stochastic matrices in [0, 1]n×n (also known as the assignment polytope or the perfect matching polytope of the complete bipartite graph Kn,n). Here LDOMD with unnormalized negative entropy has a regret bound of O n p T ln(n)  . This recovers exactly the PermELearn algorithm used in [12]; see [12] for efficient implementations of the decomposition and unnormalized relative entropy projection steps. Example 5 (Permutations with general position-based losses). 
Here on each trial t, one selects a permutation σt ∈C and receives a loss vector ℓt ∈[0, 1]n. There is a weight function γ : [n]→R+ that weights the loss incurred at each position, such that the loss contributed by element i is ℓt i γ(σt(i)); the total loss of the permutation σt is given by Pn i=1 ℓt i γ(σt(i)). Note that the particular loss considered in Example 3 (and in [15]) is a special case of such a position-based loss, with weight function γ(i) = (n−i+1). Several other position-dependent losses are used in practice; for example, the discounted cumulative gain (DCG) based loss, which is widely used in information retrieval applications, effectively uses γ(i) = 1 − 1 log2(i)+1 [9]. For a general position-based loss with weight function γ, one can define φ : C→Rn + as φ(σ) = (γ(σ(1)), . . . , γ(σ(n))). This yields a permutation polytope conv(φ(C)) = conv  (γ(σ(1)), . . . , γ(σ(n))) | σ ∈Sn  ⊂Rn +. Provided one can implement the decomposition and suitable Bregman projection steps efficiently, one can use the LDOMD algorithm to get a sublinear regret. 7 Application to an Online Transportation Problem Consider now the following transportation problem: there are m supply locations for a particular commodity and n demand locations, with a supply vector a ∈Zm + and demand vector b ∈Zn + specifying the (integer) quantities of the commodity supplied/demanded by the various locations. Assume Pm i=1 ai = Pn j=1 bj △= q. In the offline setting, there is a cost matrix ℓ∈[0, 1]m×n, with ℓij specifying the cost of transporting one unit of the commodity from supply location i to demand location j, and the goal is to decide on a transportation matrix Q ∈Zm×n + that specifies suitable (integer) quantities of the commodity to be transported between the various supply and demand locations so as to minimize the total transportation cost, Pm i=1 Pn j=1 Qijℓij. 
Here we consider an online variant of this problem where the supply vector a and demand vector b are viewed as remaining constant over some period of time, while the costs of transporting the commodity between various supply and demand locations change over time. Specifically, the decision space here is the set of all valid (integer) transportation matrices satisfying the constraints given by a, b: C = {Q ∈ Z_+^{m×n} | Σ_{j=1}^n Q_ij = a_i ∀i ∈ [m], Σ_{i=1}^m Q_ij = b_j ∀j ∈ [n]}. On each trial t, one selects a transportation matrix Q^t ∈ C, and receives a cost matrix ℓ^t ∈ [0, 1]^{m×n}; the loss incurred is Σ_{i=1}^m Σ_{j=1}^n Q^t_ij ℓ^t_ij. A natural mapping here is simply the identity: φ : C → Z_+^{m×n} with φ(Q) = Q ∀Q ∈ C. Thus we have here d = mn, φ(C) = C, and conv(φ(C)) is the well-known transportation polytope T(a, b) (e.g. see [6]): conv(φ(C)) = T(a, b) = {X ∈ R_+^{m×n} | Σ_{j=1}^n X_ij = a_i ∀i ∈ [m], Σ_{i=1}^m X_ij = b_j ∀j ∈ [n]}.

³The term 'permutation polytope' is sometimes used to refer to various polytopes obtained through specific mappings φ : S_n → R^d; here we use the term in a broad sense for any such polytope, following the terminology of Bowman [5]. (Note that the description Bowman [5] gives of a particular 0-1 permutation polytope in R^{n(n−1)}, known as the binary choice polytope or the linear ordering polytope [20], is actually incorrect; e.g. see [11].)

Algorithm Decomposition Step for Transportation Polytopes
  Input: X ∈ T(a, b) (where a ∈ Z_+^m, b ∈ Z_+^n)
  Initialize: A^1 ← X; k ← 0
  Repeat:
    – k ← k + 1
    – Find an extreme point Q^k ∈ T(a, b) such that A^k_ij = 0 ⟹ Q^k_ij = 0 (see Appendix B)
    – α_k ← min_{(i,j): Q^k_ij > 0} (A^k_ij / Q^k_ij)
    – A^{k+1} ← A^k − α_k Q^k
  Until all entries of A^{k+1} are zero
  Output: Decomposition of X as a convex combination of extreme points Q^1, . . . , Q^k: X = Σ_{r=1}^k α_r Q^r (it can be verified that α_r ∈ (0, 1] ∀r and Σ_{r=1}^k α_r = 1)
Figure 3: Decomposition step in applying LDOMD to transportation polytopes.
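For intuition about the extreme points that the decomposition in Figure 3 works with, one classical way to produce an extreme point of T(a, b) is the north-west corner rule (our own illustrative choice here; the paper's Appendix B instead characterizes the support-constrained extreme points via spanning forests):

```python
import numpy as np

def northwest_corner(a, b):
    # North-west corner rule: greedily satisfies supplies and demands
    # in index order, yielding a basic feasible solution -- an extreme
    # point of T(a, b); integer a, b give an integer matrix.
    a, b = list(a), list(b)
    Q = np.zeros((len(a), len(b)))
    i = j = 0
    while i < len(a) and j < len(b):
        q = min(a[i], b[j])
        Q[i, j] = q
        a[i] -= q
        b[j] -= q
        if a[i] == 0:
            i += 1   # supply i exhausted: move to the next row
        else:
            j += 1   # demand j exhausted: move to the next column
    return Q

Q = northwest_corner([3, 2], [1, 2, 2])  # a valid transportation matrix
```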
Transportation polytopes generalize the Birkhoff polytope of doubly stochastic matrices, which can be seen to arise as a special case when m = n and ai = bi = 1 ∀i ∈[n] (see Example 4). While the Birkhoff polytope is a 0-1 polytope, a general transportation polytope clearly includes non-Boolean vertices. Nevertheless, we do have T (a, b) ⊂Rm×n + , which suggests we can use the LDOMD algorithm with unnormalized negative entropy. For the decomposition step in LDOMD, one can use an algorithm broadly similar to that used for the Birkhoff polytope in [12]. Specifically, given a matrix X ∈conv(φ(C)) = T (a, b), one successively subtracts off multiples of extreme points Qk ∈C from X until one is left with a zero matrix (see Figure 3). However, a key step of this algorithm is to find a suitable extreme point to subtract off on each iteration. In the case of the Birkhoff polytope, this involved finding a suitable permutation matrix, and was achieved by finding a perfect matching in a suitable bipartite graph. For general transportation polytopes, we make use of a characterization of extreme points in terms of spanning forests in a suitable bipartite graph (see Appendix B for details). The overall decomposition results in a convex combination of at most mn extreme points in C, and takes O(m3n3) time. The unnormalized relative entropy projection step can be performed efficiently by using a procedure similar to the Sinkhorn balancing used for the Birkhoff polytope in [12]. Specifically, given a nonnegative matrix e X ∈Rm×n + , one alternately scales the rows and columns to match the desired row and column sums until some convergence criterion is met. As with Sinkhorn balancing, this results in an approximate projection step, but does not hurt the overall regret analysis (other than a constant additive term), yielding a regret bound of O q p T ln(max(mn, q))  . 
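The alternating row/column scaling described above can be sketched as follows (a minimal illustration; the fixed iteration count stands in for a real convergence criterion and is our own choice):

```python
import numpy as np

def scale_to_transportation(X, a, b, iters=500):
    # Alternately rescale rows and columns of a positive matrix so its
    # marginals approach (a, b): a Sinkhorn-style approximate
    # unnormalized relative entropy projection onto T(a, b).
    X = np.asarray(X, dtype=float).copy()
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    for _ in range(iters):
        X *= (a / X.sum(axis=1))[:, None]   # match row sums to a
        X *= (b / X.sum(axis=0))[None, :]   # match column sums to b
    return X

X = scale_to_transportation(np.ones((2, 3)) * 0.5,
                            [3.0, 2.0], [1.0, 2.0, 2.0])
```

With m = n and a = b = (1, . . . , 1) this reduces to the Sinkhorn balancing used for the Birkhoff polytope in [12].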
8 Conclusion

We have considered a general form of online combinatorial decision problems, where costs can be linear in any suitable low-dimensional vector representation of elements of the decision space, and have given a general algorithm, termed low-dimensional online mirror descent (LDOMD), for such problems. Our study emphasizes the role of the convex polytope arising from the vector representation of the decision space; this both yields a unification and generalization of previous algorithms, and gives a general framework that can be used to design new algorithms for specific applications.

Acknowledgments. Thanks to the anonymous reviewers for helpful comments and to Chandrashekar Lakshminarayanan for helpful discussions. AR is supported by a Microsoft Research India PhD Fellowship. SA thanks DST and the Indo-US Science & Technology Forum for their support.

References
[1] Nir Ailon. Bandit online optimization over the permutahedron. CoRR, abs/1312.1530, 2013.
[2] Nir Ailon. Online ranking: Discrete choice, spearman correlation and other feedback. CoRR, abs/1308.6797, 2013.
[3] Jean-Yves Audibert, Sébastien Bubeck, and Gábor Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 39(1):31–45, 2014.
[4] Francis Bach. Learning with submodular functions: A convex optimization perspective. Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013.
[5] V. J. Bowman. Permutation polyhedra. SIAM Journal on Applied Mathematics, 22(4):580–589, 1972.
[6] Richard A. Brualdi. Combinatorial Matrix Classes. Cambridge University Press, 2006.
[7] Sébastien Bubeck. Introduction to online optimization. Lecture Notes, Princeton University, 2011.
[8] Nicolò Cesa-Bianchi and Gábor Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
[9] David Cossock and Tong Zhang. Statistical analysis of Bayes optimal subset ranking. IEEE Transactions on Information Theory, 54(11):5140–5154, 2008.
[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[11] M. Grötschel, M. Jünger, and G. Reinelt. Facets of the linear ordering polytope. Mathematical Programming, 33:43–60, 1985.
[12] David P. Helmbold and Manfred K. Warmuth. Learning permutations with exponential weights. Journal of Machine Learning Research, 10:1705–1736, 2009.
[13] Adam Tauman Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291–307, 2005.
[14] Wouter M. Koolen, Manfred K. Warmuth, and Jyrki Kivinen. Hedging structured concepts. In COLT, 2010.
[15] Daiki Suehiro, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Kiyohito Nagano. Online prediction under submodular constraints. In ALT, 2012.
[16] Eiji Takimoto and Manfred K. Warmuth. Path kernels and multiplicative updates. Journal of Machine Learning Research, 4:773–818, 2003.
[17] Manfred K. Warmuth and Dima Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 9:2287–2320, 2008.
[18] Shota Yasutake, Kohei Hatano, Shuji Kijima, Eiji Takimoto, and Masayuki Takeda. Online linear optimization over permutations. In ISAAC, pages 534–543, 2011.
[19] Shota Yasutake, Kohei Hatano, Eiji Takimoto, and Masayuki Takeda. Online rank aggregation. In ACML, 2012.
[20] Jun Zhang. Binary choice, subset choice, random utility, and ranking: A unified perspective using the permutahedron. Journal of Mathematical Psychology, 48:107–134, 2004.
[21] Günter M. Ziegler. Lectures on Polytopes. Springer, 1995.
Altitude Training: Strong Bounds for Single-Layer Dropout

Stefan Wager*, William Fithian*, Sida Wang†, and Percy Liang*,†
Departments of Statistics* and Computer Science†
Stanford University, Stanford, CA 94305, USA
{swager, wfithian}@stanford.edu, {sidaw, pliang}@cs.stanford.edu

Abstract
Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieves this gain much like a marathon runner who practices at altitude: once a classifier learns to perform reasonably well on training examples that have been artificially corrupted by dropout, it will do very well on the uncorrupted test set. We also show that, under similar conditions, dropout preserves the Bayes decision boundary and should therefore induce minimal bias in high dimensions.

1 Introduction
Dropout training [1] is an increasingly popular method for regularizing learning algorithms. Dropout is most commonly used for regularizing deep neural networks [2, 3, 4, 5], but it has also been found to improve the performance of logistic regression and other single-layer models for natural language tasks such as document classification and named entity recognition [6, 7, 8]. For single-layer linear models, learning with dropout is equivalent to using "blankout noise" [9]. The goal of this paper is to gain a better theoretical understanding of why dropout regularization works well for natural language tasks. We focus on the task of document classification using linear classifiers where data comes from a generative Poisson topic model. In this setting, dropout effectively deletes random words from a document during training; this corruption makes the training examples harder.
A classifier that is able to fit the training data will therefore receive an accuracy boost at test time on the much easier uncorrupted examples. An apt analogy is altitude training, where athletes practice in more difficult situations than they compete in. Importantly, our analysis does not rely on dropout merely creating more pseudo-examples for training, but rather on dropout creating more challenging training examples. Somewhat paradoxically, we show that removing information from training examples can induce a classifier that performs better at test time.

Main Result. Consider training the zero-one loss empirical risk minimizer (ERM) using dropout, where each word is independently removed with probability δ ∈ (0, 1). For a class of Poisson generative topic models, we show that dropout gives rise to what we call the altitude training phenomenon: dropout improves the excess risk of the ERM by multiplying the exponent in its decay rate by 1/(1−δ). This improvement comes at the cost of an additive term of O(1/√λ), where λ is the average number of words per document. More formally, let h* and ĥ_0 be the expected and empirical risk minimizers, respectively; let h*_δ and ĥ_δ be the corresponding quantities for dropout training. Let Err(h) denote the error rate (on test examples) of h. In Section 4, we show that

    Err(ĥ_δ) − Err(h*_δ) = Õ_P( (Err(ĥ_0) − Err(h*))^{1/(1−δ)} + 1/√λ ),    (1)

where the left-hand side is the dropout excess risk, the difference Err(ĥ_0) − Err(h*) is the ERM excess risk, and Õ_P is a variant of big-O in probability notation that suppresses logarithmic factors. If λ is large (we are classifying long documents rather than short snippets of text), dropout considerably accelerates the decay rate of excess risk. The bound (1) holds for fixed choices of δ.

(S. Wager and W. Fithian are supported by a B.C. and E.J. Eaves Stanford Graduate Fellowship and NSF VIGRE grant DMS-0502385, respectively.)
The constants in the bound worsen as δ approaches 1, and so we cannot get zero excess risk by sending δ to 1. Our result is modular in that it converts upper bounds on the ERM excess risk into upper bounds on the dropout excess risk. For example, recall from classic VC theory that the ERM excess risk is Õ_P(√(d/n)), where d is the number of features (vocabulary size) and n is the number of training examples. With dropout δ = 0.5, our result (1) directly implies that the dropout excess risk is Õ_P(d/n + 1/√λ).

The intuition behind the proof of (1) is as follows: when δ = 0.5, we essentially train on half documents and test on whole documents. By conditional independence properties of the generative topic model, the classification score is roughly Gaussian under a Berry-Esseen bound, and the error rate is governed by the tails of the Gaussian. The coefficient of variation of the classification score on whole documents (at test time) is scaled down by √(1−δ) relative to half documents (at training time), resulting in an exponential reduction in error. The additive penalty of 1/√λ stems from the Berry-Esseen approximation.

Note that the bound (1) only controls the dropout excess risk. Even if dropout reduces the excess risk, it may introduce a bias Err(h*_δ) − Err(h*), and thus (1) is useful only when this bias is small. In Section 5, we show that the optimal Bayes decision boundary is not affected by dropout under the Poisson topic model. Bias is thus negligible when the Bayes boundary is close to linear.

It is instructive to compare our generalization bound to that of Ng and Jordan [10], who showed that the naive Bayes classifier exploits a strong generative assumption (conditional independence of the features given the label) to achieve an excess risk of O_P(√((log d)/n)). However, if the generative assumption is incorrect, then naive Bayes can have a large bias. Dropout enables us to cut excess risk without incurring as much bias.
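The coefficient-of-variation argument can be checked numerically: shrinking the score's mean-to-noise ratio by √(1−δ) raises the Gaussian tail probability to roughly the power 1−δ on the log scale (only roughly, since Gaussian tails also carry polynomial factors). A quick sketch, with an arbitrary mean-to-noise ratio of 5:

```python
import math

def gauss_cdf(t):
    # Standard normal CDF via the complementary error function.
    return 0.5 * math.erfc(-t / math.sqrt(2.0))

mu_over_sigma = 5.0   # mu_tau / sigma_tau for some fixed topic (illustrative)
delta = 0.5

eps = gauss_cdf(-mu_over_sigma)                               # test error, original measure
eps_tilde = gauss_cdf(-math.sqrt(1 - delta) * mu_over_sigma)  # error, dropout measure

# On the log scale, eps_tilde ~ eps^(1 - delta) up to polynomial factors:
ratio = math.log(eps_tilde) / math.log(eps)
print(ratio)   # close to 1 - delta = 0.5, up to polynomial tail factors
```

The ratio is not exactly 1−δ for finite mean-to-noise ratios, which is consistent with the "exponential reduction in error" being an asymptotic statement about the Gaussian tails.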
In fact, naive Bayes is closely related to logistic regression trained using an extreme form of dropout with δ → 1. Training logistic regression with dropout rates in the range δ ∈ (0, 1) thus gives a family of classifiers between unregularized logistic regression and naive Bayes, allowing us to tune the bias-variance tradeoff.

Other perspectives on dropout. In the general setting, dropout only improves generalization by a multiplicative factor. McAllester [11] used the PAC-Bayes framework to prove a generalization bound for dropout that decays as 1−δ. Moreover, provided that δ is not too close to 1, dropout behaves similarly to an adaptive L2 regularizer with parameter δ/(1−δ) [6, 12], and at least in linear regression such L2 regularization improves generalization error by a constant factor. In contrast, by leveraging the conditional independence assumptions of the topic model, we are able to improve the exponent in the rate of convergence of the empirical risk minimizer. It is also possible to analyze dropout as an adaptive regularizer [6, 9, 13]: in comparison with L2 regularization, dropout favors the use of rare features and encourages confident predictions. If we believe that good document classification should produce confident predictions by understanding rare words with Poisson-like occurrence patterns, then the work on dropout as adaptive regularization and our generalization-based analysis are two complementary explanations for the success of dropout in natural language tasks.

2 Dropout Training for Topic Models
In this section, we introduce binomial dropout, a form of dropout suitable for topic models, and the Poisson topic model, on which all our analyses will be based.

Binomial Dropout. Suppose that we have a binary classification problem1 with count features x^(i) ∈ {0, 1, 2, ...}^d and labels y^(i) ∈ {0, 1}.
For example, x^(i)_j is the number of times the j-th word in our dictionary appears in the i-th document, and y^(i) is the label of the document. Our goal is to train a weight vector ŵ that classifies new examples with features x via a linear decision rule ŷ = I{ŵ · x > 0}. We start with the usual empirical risk minimizer:

    ŵ_0 := argmin_{w ∈ R^d} Σ_{i=1}^n ℓ(w; x^(i), y^(i))    (2)

for some loss function ℓ (we will analyze the zero-one loss but use the logistic loss in experiments [e.g., 10, 14, 15]). Binomial dropout trains on perturbed features x̃^(i) instead of the original features x^(i):

    ŵ_δ := argmin_w Σ_{i=1}^n E[ℓ(w; x̃^(i), y^(i))], where x̃^(i)_j ~ Binom(x^(i)_j; 1−δ).    (3)

In other words, during training, we randomly thin the j-th feature x_j with binomial noise. If x_j counts the number of times the j-th word appears in the document, then replacing x_j with x̃_j is equivalent to independently deleting each occurrence of word j with probability δ. Because we are only interested in the decision boundary, we do not scale down the weight vector obtained by dropout by a factor 1−δ, as is often done [e.g., 1]. Binomial dropout differs slightly from the usual definition of (blankout) dropout, which alters the feature vector x by setting random coordinates to 0 [6, 9, 11, 12]. The reason we chose to study binomial rather than blankout dropout is that Poisson random variables remain Poisson even after binomial thinning; this fact lets us streamline our analysis. For rare words that appear once in the document, the two types of dropout are equivalent.

A Generative Poisson Topic Model. Throughout our analysis, we assume that the data is drawn from a Poisson topic model, depicted in Figure 1a and defined as follows. Each document i is assigned a label y^(i) according to some Bernoulli distribution. Then, given the label y^(i), the document gets a topic τ^(i) ∈ Θ from a distribution ρ_{y^(i)}.
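The binomial thinning in (3), and the Poisson-stays-Poisson fact that motivates it, are easy to simulate (a small sketch; the rate 8.0 and the sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
delta = 0.5

# Binomial dropout (3): independently delete each word occurrence w.p. delta.
x = rng.poisson(lam=8.0, size=100_000)      # word counts, lambda_j = 8
x_tilde = rng.binomial(x, 1.0 - delta)      # thinned counts

# Thinning property: x_tilde is again Poisson, with mean (1 - delta) * 8 = 4.
print(x_tilde.mean())   # close to 4.0
print(x_tilde.var())    # close to 4.0 (mean == variance, as for a Poisson)
```

The matching sample mean and variance are the Poisson signature that the heuristic proof in Section 3 relies on: thinning scales both the mean and the variance of the score by 1−δ.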
Given the topic τ^(i), for every word j in the vocabulary, we generate its frequency x^(i)_j according to

    x^(i)_j | τ^(i) ~ Poisson(λ^(τ^(i))_j),

where λ^(τ)_j ∈ [0, ∞) is the expected number of times word j appears under topic τ. Note that ‖λ^(τ)‖_1 is the average length of a document with topic τ. Define λ := min_{τ ∈ Θ} ‖λ^(τ)‖_1 to be the shortest average document length across topics. If Θ contains only two topics (one for each class), we get the naive Bayes model. If Θ is the (K−1)-dimensional simplex where λ^(τ) is a τ-mixture over K basis vectors, we get the K-topic latent Dirichlet allocation [16].2 Note that although our generalization result relies on a generative model, the actual learning algorithm is agnostic to it. Our analysis shows that dropout can take advantage of a generative structure while remaining a discriminative procedure. If we believed that a certain topic model held exactly and we knew the number of topics, we could try to fit the full generative model by EM. This, however, could make us vulnerable to model misspecification. In contrast, dropout benefits from generative assumptions while remaining more robust to misspecification.

3 Altitude Training: Linking the Dropout and Data-Generating Measures
Our goal is to understand the behavior of a classifier ĥ_δ trained using dropout. During dropout, the error of any classifier h is characterized by two measures. In the end, we are interested in the usual generalization error (expected risk) of h, where x is drawn from the underlying data-generating measure:

    Err(h) := P[y ≠ h(x)].    (4)

1 Dropout training is known to work well in practice for multi-class problems [8]. For simplicity, however, we will restrict our theoretical analysis to a two-class setup.
2 In topic modeling, the vertices of the simplex Θ are "topics" and τ is a mixture of topics, whereas we call τ itself a topic.
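A minimal sampler for this generative process, in the two-topic (naive Bayes) special case, can be sketched as follows; the vocabulary size, rates, and the balanced label prior are illustrative choices of mine, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50                                            # vocabulary size
lam = np.stack([rng.uniform(0.1, 4.0, size=d),    # lambda^(tau) for tau = 0
                rng.uniform(0.1, 4.0, size=d)])   # lambda^(tau) for tau = 1

def sample_document():
    """Draw (x, y) from the two-topic Poisson model (naive Bayes case:
    one topic per class, so tau = y and rho_y is degenerate)."""
    y = int(rng.integers(0, 2))      # Bernoulli(1/2) label
    tau = y
    x = rng.poisson(lam[tau])        # x_j | tau ~ Poisson(lambda_j^(tau))
    return x, y

x, y = sample_document()
```

Here ‖λ^(τ)‖_1 is the expected document length for topic τ, matching the definition of λ above as the shortest such length across topics.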
However, since dropout training works on the corrupted data x̃ (see (3)), in the limit of infinite data, the dropout estimator will converge to the minimizer of the generalization error with respect to the dropout measure over x̃:

    Err_δ(h) := P[y ≠ h(x̃)].    (5)

The main difficulty in analyzing the generalization of dropout is that classical theory tells us that the generalization error with respect to the dropout measure will decrease as n → ∞, but we are interested in the original measure. Thus, we need to bound Err in terms of Err_δ. In this section, we show that the error on the original measure is actually much smaller than the error on the dropout measure; we call this the altitude training phenomenon. Under our generative model, the count features x_j are conditionally independent given the topic τ. We thus focus on a single fixed topic τ and establish the following theorem, which provides a per-topic analogue of (1). Section 4 will then use this theorem to obtain our main result.

Theorem 1. Let h be a binary linear classifier with weights w, and suppose that our features are drawn from the Poisson generative model given topic τ. Let c_τ be the more likely label given τ:

    c_τ := argmax_{c ∈ {0,1}} P[y^(i) = c | τ^(i) = τ].    (6)

Let ε̃_τ be the sub-optimal prediction rate in the dropout measure,

    ε̃_τ := P[ I{w · x̃^(i) > 0} ≠ c_τ | τ^(i) = τ ],    (7)

where x̃^(i) is an example thinned by binomial dropout (3), and P is taken over the data-generating process. Let ε_τ be the sub-optimal prediction rate in the original measure,

    ε_τ := P[ I{w · x^(i) > 0} ≠ c_τ | τ^(i) = τ ].    (8)

Then:

    ε_τ = Õ( ε̃_τ^{1/(1−δ)} + √κ_τ ),    (9)

where κ_τ := max_j w_j^2 / Σ_{j=1}^d λ^(τ)_j w_j^2, and the constants in the bound depend only on δ. Theorem 1 only provides us with a useful bound when the term κ_τ is small. Whenever the largest w_j^2 is not much larger than the average w_j^2, then √κ_τ scales as O(1/√λ), where λ is the average document length.
Thus, the bound (9) is most useful for long documents.

A Heuristic Proof of Theorem 1. The proof of Theorem 1 is provided in the technical appendix. Here, we provide a heuristic argument for intuition. Given a fixed topic τ, suppose that it is optimal to predict c_τ = 1, so our test error is ε_τ = P[w · x ≤ 0 | τ]. For long enough documents, by the central limit theorem, the score s := w · x will be roughly Gaussian, s ~ N(μ_τ, σ_τ^2), where μ_τ = Σ_{j=1}^d λ^(τ)_j w_j and σ_τ^2 = Σ_{j=1}^d λ^(τ)_j w_j^2. This implies that ε_τ ≈ Φ(−μ_τ/σ_τ), where Φ is the cumulative distribution function of the Gaussian. Now, let s̃ := w · x̃ be the score on a dropout sample. Clearly, E[s̃] = (1−δ)μ_τ and Var[s̃] = (1−δ)σ_τ^2, because the variance of a Poisson random variable scales with its mean. Thus,

    ε̃_τ ≈ Φ(−√(1−δ) μ_τ/σ_τ) ≈ Φ(−μ_τ/σ_τ)^{(1−δ)} ≈ ε_τ^{(1−δ)}.    (10)

Figure 1b illustrates the relationship between the two Gaussians. This explains the first term on the right-hand side of (9). The extra error term √κ_τ arises from a Berry-Esseen bound that approximates Poisson mixtures by Gaussian random variables.

4 A Generalization Bound for Dropout
By setting up a bridge between the dropout measure and the original data-generating measure, Theorem 1 provides a foundation for our analysis. It remains to translate this result into a statement about the generalization error of dropout. For this, we need to make a few assumptions.

(a) Graphical representation of the Poisson topic model: Given a document with label y, we draw a document topic τ from the multinomial distribution with probabilities ρ_y. Then, we draw the words x from the topic's Poisson distribution with mean λ^(τ). Boxes indicate repeated observations, and greyed-out nodes are observed during training.
(b) For a fixed classifier w, the probabilities of error on an example drawn from the original and dropout measures are governed by the tails of two Gaussians (shaded). The dropout Gaussian has a larger coefficient of variation, which means the error on the original measure (test) is much smaller than the error on the dropout measure (train) (10). In this example, μ = 2.5, σ = 1, and δ = 0.5.

Figure 1: (a) Graphical model. (b) The altitude training phenomenon.

Our first assumption is fundamental: if the classification signal is concentrated among just a few features, then we cannot expect dropout training to do well. The second and third assumptions, which are more technical, guarantee that a classifier can only do well overall if it does well on every topic; this lets us apply Theorem 1. A more general analysis that relaxes Assumptions 2 and 3 may be an interesting avenue for future work.

Assumption 1: well-balanced weights. First, we need to assume that all the signal is not concentrated in a few features. To make this intuition formal, we say a linear classifier with weights w is well-balanced if the following holds for each topic τ:

    max_j w_j^2 · Σ_{j=1}^d λ^(τ)_j ≤ β · Σ_{j=1}^d λ^(τ)_j w_j^2  for some 0 < β < ∞.    (11)

For example, suppose each word was either useful (|w_j| = 1) or not (w_j = 0); then β is the inverse of the expected fraction of words in a document that are useful. In Theorem 2 we restrict the ERM to well-balanced classifiers, and assume that the expected risk minimizer h* over all linear rules is also well-balanced.
Assumption 2: discrete topics. Second, we assume that there is a finite number T of topics, and that the available topics are not too rare or ambiguous: the minimal probability of observing any topic τ is bounded below by

    P[τ] ≥ p_min > 0,    (12)

and each topic-conditional probability is bounded away from 1/2 (random guessing):

    | P[y^(i) = c | τ^(i) = τ] − 1/2 | ≥ α > 0    (13)

for all topics τ ∈ {1, ..., T}. This assumption substantially simplifies our arguments, allowing us to apply Theorem 1 to each topic separately without technical overhead.

Assumption 3: distinct topics. Finally, as an extension of Assumption 2, we require that the topics be "well separated." First, define Err_min = P[y^(i) ≠ c_{τ^(i)}], where c_τ is the most likely label given topic τ (6); this is the error rate of the optimal decision rule that sees topic τ. We assume that the best linear rule h*_δ satisfying (11) is almost as good as always guessing the best label c_τ under the dropout measure:

    Err_δ(h*_δ) = Err_min + O(1/√λ),    (14)

where, as usual, λ is a lower bound on the average document length. If the dimension d is larger than the number of topics T, this assumption is fairly weak: the condition (14) holds whenever the matrix Π of topic centers has full rank and the minimum singular value of Π is not too small (see Proposition 6 in the Appendix for details). This assumption is satisfied if the different topics can be separated from each other with a large margin.

Under Assumptions 1–3 we can turn Theorem 1 into a statement about generalization error.

Theorem 2. Suppose that our features x are drawn from the Poisson generative model (Figure 1a), and Assumptions 1–3 hold. Define the excess risks of the dropout classifier ĥ_δ on the dropout and data-generating measures, respectively:

    η̃ := Err_δ(ĥ_δ) − Err_δ(h*_δ)  and  η := Err(ĥ_δ) − Err(h*_δ).    (15)

Then, the altitude training phenomenon applies:

    η = Õ( η̃^{1/(1−δ)} + 1/√λ ).
(16)

The above bound scales linearly in p_min^{−1} and α^{−1}; the full dependence on δ is shown in the appendix. In a sense, Theorem 2 is a meta-generalization bound that allows us to transform generalization bounds with respect to the dropout measure (η̃) into ones on the data-generating measure (η) in a modular way. As a simple example, standard VC theory provides an η̃ = Õ_P(√(d/n)) bound which, together with Theorem 2, yields:

Corollary 3. Under the same conditions as Theorem 2, the dropout classifier ĥ_δ achieves the following excess risk:

    Err(ĥ_δ) − Err(h*_δ) = Õ_P( (√(d/n))^{1/(1−δ)} + 1/√λ ).    (17)

More generally, we can often check that upper bounds for Err(ĥ) − Err(h*) also work as upper bounds for Err_δ(ĥ_δ) − Err_δ(h*_δ); this gives us the heuristic result from (1).

5 The Bias of Dropout
In the previous section, we showed that under the Poisson topic model of Figure 1a, dropout can achieve a substantial cut in excess risk Err(ĥ_δ) − Err(h*_δ). But to complete our picture of dropout's performance, we must address the bias of dropout: Err(h*_δ) − Err(h*). Dropout can be viewed as importing "hints" from a generative assumption about the data. Each observed (x, y) pair (each labeled document) gives us information not only about the conditional class probability at x, but also about the conditional class probabilities at numerous other hypothetical values x̃ representing shorter documents of the same class that did not occur. Intuitively, if these x̃ are actually good representatives of that class, the bias of dropout should be mild. For our key result in this section, we take the Poisson generative model from Figure 1a, but further assume that document length is independent of the topic. Under this assumption, we show that dropout preserves the Bayes decision boundary in the following sense:

Proposition 4. Let (x, y) be distributed according to the Poisson topic model of Figure 1a.
Assume that document length is independent of topic: ‖λ^(τ)‖_1 = λ for all topics τ. Let x̃ be a binomial dropout sample of x with some dropout probability δ ∈ (0, 1). Then, for every feature vector v ∈ R^d, we have:

    P[y = 1 | x̃ = v] = P[y = 1 | x = v].    (18)

If we had an infinite amount of data (x̃, y) corrupted under dropout, we would predict according to I{P[y = 1 | x̃ = v] > 1/2}. The significance of Proposition 4 is that this decision rule is identical to the true Bayes decision boundary (without dropout). Therefore, the empirical risk minimizer of a sufficiently rich hypothesis class trained with dropout would incur very small bias.
● ● ● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ● ● ●● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ●● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●●● ●● ● ●● ● ● ● ● ● ● ● ●●● ● ● ● ●●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ●●●● ●● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ●●●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ●● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ●● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ● ●● ● ● ● ● ● ●● ● ●● ● ● ● ● ● ● ●● ● ● 
[Figure 2: Behavior of binomial dropout in simulations. Legend: LogReg boundary, dropout boundary, Bayes boundary; open circles denote long documents, small dots denote short (dropout) documents. (a) Dropout (δ = 0.75) with d = 2. For long documents (circles in the upper-right), logistic regression focuses on capturing the small red cluster; the large red cluster has almost no influence. Dropout (dots in the lower-left) distributes influence more equally between the two red clusters. (b) Learning curves for the synthetic experiment, with both axes on a log scale. The dropout rate δ ranges from 0 (logistic regression, LR) to 1 (naive Bayes, NB) for multiple training set sizes n. As n increases, less dropout is preferable, as the bias-variance tradeoff shifts. In the left panel, the circles are the original data, while the dots are dropout-thinned examples. The Monte Carlo error is negligible.]

However, Proposition 4 does not guarantee that dropout incurs no bias when we fit a linear classifier. In general, the best linear approximation for classifying shorter documents is not necessarily the best for classifying longer documents. As n → ∞, a linear classifier trained on (x, y) pairs will eventually outperform one trained on (x̃, y) pairs.

Dropout for Logistic Regression. To gain some more intuition about how dropout affects linear classifiers, we consider logistic regression. A similar phenomenon should also hold for the ERM, but discussing that solution is more difficult since the ERM solution does not have a simple characterization. The relationship between the 0-1 loss and convex surrogates has been studied by, e.g., [14, 15]. The score criterion for logistic regression is
$$0 = \sum_{i=1}^{n} \bigl( y^{(i)} - \hat p_i \bigr)\, x^{(i)},$$
where $\hat p_i = (1 + e^{-\hat w \cdot x^{(i)}})^{-1}$ are the fitted probabilities.
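The score criterion above can be checked numerically: at a logistic maximum-likelihood fit, the residual-weighted sum of feature vectors vanishes. A minimal sketch, with function and variable names of our own choosing:

```python
import numpy as np

def logistic_score(w, X, y):
    """Score of logistic regression: sum_i (y_i - p_i) x_i, where
    p_i = 1 / (1 + exp(-w . x_i)). It is zero at the MLE."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (y - p)

# For this symmetric, non-separable toy dataset the MLE is w = 0,
# so the score vanishes there.
X = np.array([[1.0], [-1.0], [1.0], [-1.0]])
y = np.array([1.0, 0.0, 0.0, 1.0])
score_at_mle = logistic_score(np.zeros(1), X, y)
```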
Note that easily-classified examples (where $\hat p_i$ is close to $y^{(i)}$) play almost no role in driving the fit. Dropout turns easy examples into hard examples, giving more examples a chance to participate in learning a good classification rule. Figure 2a illustrates dropout's tendency to spread influence more democratically for a simple classification problem with d = 2. The red class is a 99:1 mixture over two topics, one of which is much less common, but harder to classify, than the other. There is only one topic for the blue class. For long documents (open circles in the top right), the infrequent, hard-to-classify red cluster dominates the fit while the frequent, easy-to-classify red cluster is essentially ignored. For dropout documents with δ = 0.75 (small dots, lower left), both red clusters are relatively hard to classify, so the infrequent one plays a less disproportionate role in driving the fit. As a result, the fit based on dropout is more stable but misses the finer structure near the decision boundary. Note that the solid gray curve, the Bayes boundary, is unaffected by dropout, per Proposition 4. But, because it is nonlinear, we obtain a different linear approximation under dropout.

6 Experiments and Discussion

Synthetic Experiment. Consider the following instance of the Poisson topic model. We choose the document label uniformly at random: $P[y^{(i)} = 1] = \tfrac{1}{2}$. Given label 0, we choose topic $\tau^{(i)} = 0$ deterministically; given label 1, we choose a real-valued topic $\tau^{(i)} \sim \mathrm{Exp}(3)$. The per-topic Poisson intensities $\lambda^{(\tau)}$ are defined as follows:
$$\theta^{(\tau)} = \begin{cases} (\underbrace{1, \dots, 1}_{7},\ \underbrace{0, \dots, 0}_{7},\ 0, \dots, 0) & \text{if } \tau = 0, \\ (\underbrace{0, \dots, 0}_{7},\ \underbrace{\tau, \dots, \tau}_{7},\ \underbrace{0, \dots, 0}_{486}) & \text{otherwise,} \end{cases} \qquad \lambda^{(\tau)}_j = 1000 \cdot \frac{e^{\theta^{(\tau)}_j}}{\sum_{j'=1}^{500} e^{\theta^{(\tau)}_{j'}}}. \quad (19)$$
The first block of 7 independent words is indicative of label 0, the second block of 7 correlated words is indicative of label 1, and the remaining 486 words are indicative of neither.
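The dropout scheme analyzed throughout this section (delete each word token of a document independently with probability δ) amounts to binomial thinning of the bag-of-words count vector. A minimal sketch, with names of our own choosing:

```python
import numpy as np

def dropout_thin(x, delta, seed=None):
    """Binomial dropout: each of the x_j tokens of word j survives
    independently with probability 1 - delta, so the j-th thinned
    count is Binomial(x_j, 1 - delta)."""
    rng = np.random.default_rng(seed)
    return rng.binomial(np.asarray(x), 1.0 - delta)

# A 100-token document thinned with delta = 0.75 keeps roughly
# 25 tokens in expectation.
x = np.array([60, 30, 10])
x_tilde = dropout_thin(x, 0.75, seed=0)
```

Training on such `x_tilde` pseudo-examples, as in Figure 2b, interpolates between logistic regression (δ = 0) and a naive-Bayes-like fit (δ → 1).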
[Figure 3: Experiments on sentiment classification. Both panels plot error rate against the fraction of data used, for logistic regression, naive Bayes, and dropout with δ ∈ {0.2, 0.5, 0.8}. (a) Polarity 2.0 dataset [17]. (b) IMDB dataset [18]. More dropout is better relative to logistic regression for small datasets and gradually worsens with more training data.]

We train a model on training sets of various size n, and evaluate the resulting classifiers' error rates on a large test set. For dropout, we recalibrate the intercept on the training set. Figure 2b shows the results. There is a clear bias-variance tradeoff, with logistic regression (δ = 0) and naive Bayes (δ = 1) on the two ends of the spectrum. For moderate values of n, dropout improves performance, with δ = 0.95 (resulting in roughly 50-word documents) appearing nearly optimal for this example.

Sentiment Classification. We also examined the performance of dropout as a function of training set size on a document classification task. Figure 3a shows results on the Polarity 2.0 task [17], where the goal is to classify positive versus negative movie reviews on IMDB. We divided the dataset into a training set of size 1,200 and a test set of size 800, and trained a bag-of-words logistic regression model with 50,922 features. This example exhibits the same behavior as our simulation. Using a larger δ results in a classifier that converges faster at first, but then plateaus. We also ran experiments on a larger IMDB dataset [18] with training and test sets of size 25,000 each and approximately 300,000 features. As Figure 3b shows, the results are similar, although the training set is not large enough for the learning curves to cross.
When using the full training set, all but three pairwise comparisons in Figure 3 are statistically significant (p < 0.05 for McNemar's test).

Dropout and Generative Modeling. Naive Bayes and empirical risk minimization represent two divergent approaches to the classification problem. ERM is guaranteed to find the best model as n → ∞ but can have suboptimal generalization error when n is not large relative to d. Conversely, naive Bayes has very low generalization error, but suffers from asymptotic bias. In this paper, we showed that dropout behaves as a link between ERM and naive Bayes, and can sometimes achieve a more favorable bias-variance tradeoff. By training on randomly generated sub-documents rather than on whole documents, dropout implicitly codifies a generative assumption about the data, namely that excerpts from a long document should have the same label as the original document (Proposition 4). Logistic regression with dropout appears to have an intriguing connection to the naive Bayes SVM [NBSVM, 19], which is a way of using naive Bayes generative assumptions to strengthen an SVM. In a recent survey of bag-of-words classifiers for document classification, NBSVM and dropout often obtain state-of-the-art accuracies [e.g., 7]. This suggests that a good way to learn linear models for document classification is to use discriminative models that borrow strength from an approximate generative assumption to cut their generalization error. Our analysis presents an interesting contrast to other works that directly combine generative and discriminative modeling by optimizing a hybrid likelihood [20, 21, 22, 23, 24, 25]. Our approach is more guarded in that we only let the generative assumption speak through pseudo-examples.

Conclusion. We have presented a theoretical analysis that explains how dropout training can be very helpful under a Poisson topic model assumption.
Specifically, by making training examples artificially difficult, dropout improves the exponent in the generalization bound for ERM. We believe that this work is just the first step in understanding the benefits of training with artificially corrupted features, and we hope the tools we have developed can be extended to analyze other training schemes under weaker data-generating assumptions.

References

[1] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv:1207.0580, 2012. [2] Jimmy Ba and Brendan Frey. Adaptive dropout for training deep neural networks. In Advances in Neural Information Processing Systems, 2013. [3] Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Proceedings of the International Conference on Machine Learning, 2013. [4] Alex Krizhevsky, Ilya Sutskever, and Geoff Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012. [5] Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using dropconnect. In Proceedings of the International Conference on Machine Learning, 2013. [6] Stefan Wager, Sida Wang, and Percy Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems, 2013. [7] Sida I Wang and Christopher D Manning. Fast dropout training. In Proceedings of the International Conference on Machine Learning, 2013. [8] Sida I Wang, Mengqiu Wang, Stefan Wager, Percy Liang, and Christopher D Manning. Feature noising for log-linear structured prediction. In Empirical Methods in Natural Language Processing, 2013. [9] Laurens van der Maaten, Minmin Chen, Stephen Tyree, and Kilian Q Weinberger. Learning with marginalized corrupted features. In International Conference on Machine Learning, 2013.
[10] Andrew Ng and Michael Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems, 14, 2001. [11] David McAllester. A PAC-Bayesian tutorial with a dropout bound. arXiv:1307.2118, 2013. [12] Pierre Baldi and Peter Sadowski. The dropout learning algorithm. Artificial Intelligence, 210:78–122, 2014. [13] Amir Globerson and Sam Roweis. Nightmare at test time: robust learning by feature deletion. In Proceedings of the International Conference on Machine Learning, 2006. [14] Peter L Bartlett, Michael I Jordan, and Jon D McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006. [15] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32(1):56–85, 2004. [16] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003. [17] Bo Pang and Lillian Lee. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the Association for Computational Linguistics, 2004. [18] Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the Association for Computational Linguistics, 2011. [19] Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the Association for Computational Linguistics, 2012. [20] R. Raina, Y. Shen, A. Ng, and A. McCallum. Classification with hybrid generative/discriminative models. In Advances in Neural Information Processing Systems, 2004. [21] G. Bouchard and B. Triggs. The trade-off between generative and discriminative classifiers. In International Conference on Computational Statistics, 2004. [22] J. A. Lasserre, C. 
M. Bishop, and T. P. Minka. Principled hybrids of generative and discriminative models. In Computer Vision and Pattern Recognition, 2006. [23] Guillaume Bouchard. Bias-variance tradeoff in hybrid generative-discriminative models. In International Conference on Machine Learning and Applications. IEEE, 2007. [24] A. McCallum, C. Pal, G. Druck, and X. Wang. Multi-conditional learning: Generative/discriminative training for clustering and classification. In Association for the Advancement of Artificial Intelligence, 2006. [25] Percy Liang and Michael I Jordan. An asymptotic analysis of generative, discriminative, and pseudolikelihood estimators. In Proceedings of the International Conference on Machine Learning, 2008. [26] William Feller. An introduction to probability theory and its applications, volume 2. John Wiley & Sons, 1971. [27] Olivier Bousquet, Stéphane Boucheron, and Gábor Lugosi. Introduction to statistical learning theory. In Advanced Lectures on Machine Learning, pages 169–207. Springer, 2004.
Probabilistic low-rank matrix completion on finite alphabets

Jean Lafond, Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI, jean.lafond@telecom-paristech.fr
Olga Klopp, CREST et MODAL'X, Université Paris Ouest, Olga.KLOPP@math.cnrs.fr
Éric Moulines, Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI, moulines@telecom-paristech.fr
Joseph Salmon, Institut Mines-Télécom, Télécom ParisTech, CNRS LTCI, joseph.salmon@telecom-paristech.fr

Abstract

The task of reconstructing a matrix given a sample of observed entries is known as the matrix completion problem. It arises in a wide range of problems, including recommender systems, collaborative filtering, dimensionality reduction, image processing, quantum physics, and multi-class classification, to name a few. Most works have focused on recovering an unknown real-valued low-rank matrix from randomly sub-sampled entries. Here, we investigate the case where the observations take a finite number of values, corresponding for example to ratings in recommender systems or labels in multi-class classification. We also consider a general sampling scheme (not necessarily uniform) over the matrix entries. The performance of a nuclear-norm penalized estimator is analyzed theoretically. More precisely, we derive bounds on the Kullback-Leibler divergence between the true and estimated distributions. In practice, we also propose an efficient algorithm based on lifted coordinate gradient descent in order to tackle potentially high-dimensional settings.

1 Introduction

Matrix completion has attracted many contributions over the past decade. It consists in recovering the entries of a potentially high-dimensional matrix based on random, partial observations of them. In the classical noisy matrix completion problem, the entries are assumed to be real-valued and observed in the presence of additive (homoscedastic) noise.
In this paper, it is assumed that the entries take values in a finite alphabet, which can model categorical data. Such a problem arises in the analysis of voting patterns, recovery of incomplete survey data (typical survey responses are true/false, yes/no or do not know, agree/disagree/indifferent), quantum state tomography [13] (binary outcomes), and recommender systems [18, 2] (for instance, in common movie rating datasets, e.g., MovieLens or Netflix, ratings range from 1 to 5), among many others. It is customary in this framework that rows represent individuals while columns represent items, e.g., movies, survey responses, etc. Of course, the observations are typically incomplete, in the sense that a significant proportion of the entries are missing. Then, a crucial question to be answered is whether it is possible to predict the missing entries from these partial observations.

Since the problem of matrix completion is ill-posed in general, it is necessary to impose a low-dimensional structure on the matrix, one particularly popular example being a low-rank constraint. The classical noisy matrix completion problem (real-valued observations and additive noise) can be solved provided that the unknown matrix is low rank, either exactly or approximately; see [7, 15, 17, 20, 5, 16] and the references therein. Most commonly used methods amount to solving a least squares program under a rank constraint, or under the convex relaxation of the rank constraint provided by the nuclear (or trace) norm [10]. The problem of probabilistic low-rank matrix completion over a finite alphabet has received much less attention; see [22, 8, 6] among others. To the best of our knowledge, only the binary case (also referred to as the 1-bit matrix completion problem) has been covered in depth. In [8], the authors propose to model the entries as Bernoulli random variables whose success rates depend upon the matrix to be recovered through a convex link function (logistic and probit functions being natural examples).
The estimated matrix is then obtained as a solution of a maximization of the log-likelihood of the observations under an explicit low-rank constraint. Moreover, the sampling model proposed in [8] assumes that the entries are sampled uniformly at random. Unfortunately, this condition is not totally realistic in recommender system applications: in such a context some users are more active than others, and some popular items are rated more frequently. Theoretically, an important issue is that the method of [8] requires the knowledge of an upper bound on the nuclear norm or on the rank of the unknown matrix. Variations on 1-bit matrix completion were further considered in [6], where a max-norm (though the name is similar, this is different from the sup-norm) constrained minimization is considered. The method of [6] allows more general non-uniform samplings but still requires an upper bound on the max-norm of the unknown matrix. In the present paper we consider a penalized maximum log-likelihood method, in which the log-likelihood of the observations is penalized by the nuclear norm (i.e., we focus on the Lagrangian version rather than on the constrained one). We first establish an upper bound on the Kullback-Leibler divergence between the true and the estimated distributions under general sampling distributions; see Section 2 for details. One should note that our method only requires the knowledge of an upper bound on the maximum absolute value of the probabilities, and it improves upon previous results found in the literature. Last but not least, we propose an efficient implementation of our statistical procedure, adapted from the lifted coordinate descent algorithm recently introduced in [9, 14]. Unlike other methods, this iterative algorithm is designed to solve the convex optimization problem and not a (possibly nonconvex) approximate formulation as in [21].
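For contrast with the lifted coordinate descent advocated here, classical proximal-gradient treatments of a nuclear-norm penalty rely on the singular-value soft-thresholding operator (the proximal map of τ·∥·∥σ,1), which requires a full SVD at each step. A minimal NumPy sketch, with names of our own choosing:

```python
import numpy as np

def soft_threshold_svd(X, tau):
    """Proximal operator of tau * nuclear norm: shrink every
    singular value of X by tau and truncate at zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ (s_shrunk[:, None] * Vt)

# A rank-1 matrix with singular value 3: shrinking by 1 leaves a
# rank-1 matrix with singular value 2; shrinking by 5 gives zero.
X = 3.0 * np.outer([1.0, 0.0], [1.0, 0.0, 0.0])
Y = soft_threshold_svd(X, 1.0)
```

The cost of this full SVD at every iteration is exactly what the algorithm of Section 3 is designed to avoid.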
It also has the benefit that it does not need to perform a full/partial SVD (singular value decomposition) at every iteration; see Section 3 for details.

Notation. Define $m_1 \wedge m_2 := \min(m_1, m_2)$ and $m_1 \vee m_2 := \max(m_1, m_2)$. We equip the set of $m_1 \times m_2$ matrices with real entries (denoted $\mathbb{R}^{m_1 \times m_2}$) with the scalar product $\langle X \mid X' \rangle := \operatorname{tr}(X^\top X')$. For a given matrix $X \in \mathbb{R}^{m_1 \times m_2}$ we write $\|X\|_\infty := \max_{i,j} |X_{i,j}|$ and, for $q \ge 1$, we denote its Schatten $q$-norm by
$$\|X\|_{\sigma,q} := \Bigl( \sum_{i=1}^{m_1 \wedge m_2} \sigma_i(X)^q \Bigr)^{1/q},$$
where $\sigma_i(X)$ are the singular values of $X$ ordered in decreasing order (see [1] for more details on such norms). The operator norm of $X$ is given by $\|X\|_{\sigma,\infty} := \sigma_1(X)$. Consider two vectors of $p-1$ matrices $(X^j)_{j=1}^{p-1}$ and $(X'^j)_{j=1}^{p-1}$ such that for any $(k,l) \in [m_1] \times [m_2]$ we have $X^j_{k,l} \ge 0$, $X'^j_{k,l} \ge 0$, $1 - \sum_{j=1}^{p-1} X^j_{k,l} \ge 0$ and $1 - \sum_{j=1}^{p-1} X'^j_{k,l} \ge 0$. Their squared Hellinger distance is
$$d_H^2(X, X') := \frac{1}{m_1 m_2} \sum_{\substack{k \in [m_1] \\ l \in [m_2]}} \Biggl[ \sum_{j=1}^{p-1} \Bigl( \sqrt{X^j_{k,l}} - \sqrt{X'^j_{k,l}} \Bigr)^2 + \Biggl( \sqrt{1 - \sum_{j=1}^{p-1} X^j_{k,l}} - \sqrt{1 - \sum_{j=1}^{p-1} X'^j_{k,l}} \Biggr)^2 \Biggr]$$
and their Kullback-Leibler divergence is
$$\mathrm{KL}(X, X') := \frac{1}{m_1 m_2} \sum_{\substack{k \in [m_1] \\ l \in [m_2]}} \Biggl[ \sum_{j=1}^{p-1} X^j_{k,l} \log \frac{X^j_{k,l}}{X'^j_{k,l}} + \Bigl( 1 - \sum_{j=1}^{p-1} X^j_{k,l} \Bigr) \log \frac{1 - \sum_{j=1}^{p-1} X^j_{k,l}}{1 - \sum_{j=1}^{p-1} X'^j_{k,l}} \Biggr].$$
Given an integer $p > 1$, a function $f : \mathbb{R}^{p-1} \to \mathbb{R}^{p-1}$ is called a $p$-link function if for any $x \in \mathbb{R}^{p-1}$ it satisfies $f^j(x) \ge 0$ for $j \in [p-1]$ and $1 - \sum_{j=1}^{p-1} f^j(x) \ge 0$. For any collection of $p-1$ matrices $(X^j)_{j=1}^{p-1}$, $f(X)$ denotes the vector of matrices $(f(X)^j)_{j=1}^{p-1}$ such that $f(X)^j_{k,l} = f(X^j_{k,l})$ for any $(k,l) \in [m_1] \times [m_2]$ and $j \in [p-1]$.

2 Main results

Let $p$ denote the cardinality of our finite alphabet, that is, the number of classes of the logistic model (e.g., ratings have $p$ possible values or surveys $p$ possible answers). For a vector of $p-1$ matrices $X = (X^j)_{j=1}^{p-1}$ of $\mathbb{R}^{m_1 \times m_2}$ and an index $\omega \in [m_1] \times [m_2]$, we denote by $X_\omega$ the vector $(X^j_\omega)_{j=1}^{p-1}$. We consider an i.i.d.
sequence $(\omega_i)_{1 \le i \le n}$ over $[m_1] \times [m_2]$, with a probability distribution function $\Pi$ that controls the way the matrix entries are revealed. It is customary to consider the simple uniform sampling distribution over the set $[m_1] \times [m_2]$, though more general sampling schemes could be considered as well. We observe $n$ independent random elements $(Y_i)_{1 \le i \le n} \in [p]^n$. The observations $(Y_1, \dots, Y_n)$ are assumed to be independent and to follow a multinomial distribution with success probabilities given by
$$P(Y_i = j) = f^j(\bar X^1_{\omega_i}, \dots, \bar X^{p-1}_{\omega_i}) \quad j \in [p-1], \qquad P(Y_i = p) = 1 - \sum_{j=1}^{p-1} P(Y_i = j),$$
where $\{f^j\}_{j=1}^{p-1}$ is a $p$-link function and $\bar X = (\bar X^j)_{j=1}^{p-1}$ is the vector of true (unknown) parameters we aim at recovering. For ease of notation, we often write $\bar X_i$ instead of $\bar X_{\omega_i}$. Let us denote by $\Phi_Y$ the (normalized) negative log-likelihood of the observations:
$$\Phi_Y(X) = -\frac{1}{n} \sum_{i=1}^{n} \Biggl[ \sum_{j=1}^{p-1} \mathbf{1}_{\{Y_i = j\}} \log f^j(X_i) + \mathbf{1}_{\{Y_i = p\}} \log \Bigl( 1 - \sum_{j=1}^{p-1} f^j(X_i) \Bigr) \Biggr]. \quad (1)$$
For any $\gamma > 0$ our proposed estimator is the following:
$$\hat X = \mathop{\arg\min}_{\substack{X \in (\mathbb{R}^{m_1 \times m_2})^{p-1} \\ \max_{j \in [p-1]} \|X^j\|_\infty \le \gamma}} \Phi^\lambda_Y(X), \quad \text{where} \quad \Phi^\lambda_Y(X) = \Phi_Y(X) + \lambda \sum_{j=1}^{p-1} \|X^j\|_{\sigma,1}, \quad (2)$$
with $\lambda > 0$ being a regularization parameter controlling the rank of the estimator. In the rest of the paper we assume that the negative log-likelihood $\Phi_Y$ is convex (this is the case for the multinomial logit function, see for instance [3]). In this section we present two results controlling the estimation error of $\hat X$ in the binomial setting (i.e., when $p = 2$). Before doing so, let us introduce some additional notation and assumptions. The score function (defined as the gradient of the negative log-likelihood) taken at the true parameter $\bar X$ is denoted by $\bar\Sigma := \nabla\Phi_Y(\bar X)$. We also need the following constants depending on the link function $f$ and $\gamma > 0$:
$$M_\gamma = \sup_{|x| \le \gamma} 2|\log(f(x))|, \quad L_\gamma = \max\Biggl( \sup_{|x| \le \gamma} \frac{|f'(x)|}{f(x)},\ \sup_{|x| \le \gamma} \frac{|f'(x)|}{1 - f(x)} \Biggr), \quad K_\gamma = \inf_{|x| \le \gamma} \frac{f'(x)^2}{8 f(x)(1 - f(x))}.$$
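For concreteness, the normalized negative log-likelihood (1) is easy to write down once a link function is fixed; the sketch below uses a multinomial logit link (the one later used in the numerical experiments) and hypothetical variable names of our own choosing.

```python
import numpy as np

def logit_link(X_obs):
    """Multinomial logit p-link: X_obs has shape (n, p-1); returns an
    (n, p) array of class probabilities, class p being the reference."""
    e = np.exp(X_obs)
    denom = 1.0 + e.sum(axis=1, keepdims=True)
    return np.hstack([e / denom, 1.0 / denom])

def neg_log_likelihood(X_obs, y):
    """Phi_Y of Eq. (1): X_obs holds the parameter values at the n
    observed entries, y the observed labels in {1, ..., p}."""
    probs = logit_link(X_obs)
    n = len(y)
    return -np.log(probs[np.arange(n), y - 1]).mean()
```

For instance, with all parameters at zero and p = 2, every observation has probability 1/2 and the negative log-likelihood equals log 2.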
In our framework, we allow for a general distribution for observing the coefficients. However, we need to control deviations of the sampling mechanism from the uniform distribution, and we therefore consider the following assumptions.

H1. There exists a constant $\mu \ge 1$ such that for all indexes $(k,l) \in [m_1] \times [m_2]$,
$$\min_{k,l}(\pi_{k,l}) \ge 1/(\mu m_1 m_2), \quad \text{with } \pi_{k,l} := \Pi(\omega_1 = (k,l)).$$
Let us define $C_l := \sum_{k=1}^{m_1} \pi_{k,l}$ (resp. $R_k := \sum_{l=1}^{m_2} \pi_{k,l}$) for any $l \in [m_2]$ (resp. $k \in [m_1]$), the probability of sampling a coefficient in column $l$ (resp. in row $k$).

H2. There exists a constant $\nu \ge 1$ such that
$$\max_{k,l}(R_k, C_l) \le \nu/(m_1 \wedge m_2).$$

Assumption H1 ensures that each coefficient has a non-zero probability of being sampled, whereas H2 requires that no column or row is sampled with too high a probability (see also [11, 16] for more details on this condition). We define the sequence of matrices $(E_i)_{i=1}^n$ associated to the revealed coefficients $(\omega_i)_{i=1}^n$ by $E_i := e_{k_i}(e'_{l_i})^\top$, where $(k_i, l_i) = \omega_i$ and $(e_k)_{k=1}^{m_1}$ (resp. $(e'_l)_{l=1}^{m_2}$) is the canonical basis of $\mathbb{R}^{m_1}$ (resp. $\mathbb{R}^{m_2}$). Furthermore, if $(\varepsilon_i)_{1 \le i \le n}$ is a Rademacher sequence independent from $(\omega_i)_{i=1}^n$ and $(Y_i)_{1 \le i \le n}$, we define
$$\Sigma_R := \frac{1}{n} \sum_{i=1}^{n} \varepsilon_i E_i.$$
We can now state our first result. For completeness, the proofs can be found in the supplementary material.

Theorem 1. Assume H1 holds, $\lambda \ge 2\|\bar\Sigma\|_{\sigma,\infty}$ and $\|\bar X\|_\infty \le \gamma$. Then, with probability at least $1 - 2/d$, the Kullback-Leibler divergence between the true and estimated distributions is bounded by
$$\mathrm{KL}\bigl(f(\bar X), f(\hat X)\bigr) \le 8 \max\Biggl( \frac{\mu^2}{K_\gamma} m_1 m_2\, \operatorname{rank}(\bar X) \bigl( \lambda^2 + c_* L_\gamma^2 (\mathbb{E}\|\Sigma_R\|_{\sigma,\infty})^2 \bigr),\ \mu e M_\gamma \sqrt{\frac{\log(d)}{n}} \Biggr),$$
where $c_*$ is a universal constant.

Note that $\|\bar\Sigma\|_{\sigma,\infty}$ is stochastic and that the expectation $\mathbb{E}\|\Sigma_R\|_{\sigma,\infty}$ is unknown. However, thanks to Assumption H2 these quantities can be controlled. To ease notation let us also define $m := m_1 \wedge m_2$, $M := m_1 \vee m_2$ and $d := m_1 + m_2$.

Theorem 2. Assume H1 and H2 hold and that $\|\bar X\|_\infty \le \gamma$. Assume in addition that $n \ge 2m \log(d)/(9\nu)$.
Taking $\lambda = 6 L_\gamma \sqrt{2\nu \log(d)/(mn)}$, then with probability at least $1 - 3/d$ the following holds:
$$\frac{K_\gamma \|\bar X - \hat X\|_{\sigma,2}^2}{m_1 m_2} \le \mathrm{KL}\bigl(f(\bar X), f(\hat X)\bigr) \le \max\Biggl( \bar c\, \nu \mu^2 \frac{L_\gamma^2}{K_\gamma} \frac{M \operatorname{rank}(\bar X) \log(d)}{n},\ 8 \mu e M_\gamma \sqrt{\frac{\log(d)}{n}} \Biggr),$$
where $\bar c$ is a universal constant.

Remark. Let us compare the rate of convergence of Theorem 2 with those obtained in previous works on 1-bit matrix completion. In [8], the parameter $\bar X$ is estimated by minimizing the negative log-likelihood under the constraints $\|X\|_\infty \le \gamma$ and $\|X\|_{\sigma,1} \le \gamma\sqrt{r m_1 m_2}$ for some $r > 0$. Under the assumption that $\operatorname{rank}(\bar X) \le r$, they could prove that
$$\frac{\|\bar X - \hat X\|_{\sigma,2}^2}{m_1 m_2} \le C_\gamma \sqrt{\frac{r d}{n}},$$
where $C_\gamma$ is a constant depending on $\gamma$ (see [8, Theorem 1]). This rate of convergence is slower than the rate of convergence given by Theorem 2. [6] studied a max-norm constrained maximum likelihood estimator and obtained a rate of convergence similar to [8].

3 Numerical Experiments

Implementation. For the numerical experiments, data were simulated according to a multinomial logit distribution. In this setting, an observation $Y_{k,l}$ associated to row $k$ and column $l$ is distributed as $P(Y_{k,l} = j) = f^j(X^1_{k,l}, \dots, X^{p-1}_{k,l})$, where
$$f^j(x_1, \dots, x_{p-1}) = \exp(x_j) \Bigl( 1 + \sum_{k=1}^{p-1} \exp(x_k) \Bigr)^{-1}, \quad \text{for } j \in [p-1]. \quad (3)$$
With this choice, $\Phi_Y$ is convex and problem (2) can be solved using convex optimization algorithms. Moreover, following the advice of [8] we considered the unconstrained version of problem (2) (i.e., with no constraint on $\|X\|_\infty$), which significantly reduces the computational burden and has no significant impact on the solution in practice. To solve this problem, we have extended to the multinomial case the coordinate gradient descent algorithm introduced by [9].
This type of algorithm has the advantage, say over the Soft-Impute [19] or the SVT [4] algorithm, that it does not require the computation of a full SVD at each step of the main loop of an iterative (proximal) algorithm (bear in mind that the proximal operator associated to the nuclear norm is the soft-thresholding operator on the singular values). The proposed version only computes the largest singular vectors and singular values. This potentially decreases the computation by a factor close to the value of the upper bound on the rank commonly used (see the aforementioned paper for more details).

Let us present the algorithm. Any vector of $p-1$ matrices $X = (X^j)_{j=1}^{p-1}$ is identified with an element of the tensor product space $\mathbb{R}^{m_1 \times m_2} \otimes \mathbb{R}^{p-1}$ and denoted by
$$X = \sum_{j=1}^{p-1} X^j \otimes e_j, \quad (4)$$
where again $(e_j)_{j=1}^{p-1}$ is the canonical basis of $\mathbb{R}^{p-1}$ and $\otimes$ stands for the tensor product. The set of normalized rank-one matrices is denoted by
$$\mathcal{M} := \bigl\{ M \in \mathbb{R}^{m_1 \times m_2} \mid M = u v^\top,\ \|u\| = \|v\| = 1,\ u \in \mathbb{R}^{m_1},\ v \in \mathbb{R}^{m_2} \bigr\}.$$
Define $\Theta$ as the linear space of real-valued functions on $\mathcal{M}$ with finite support, i.e., $\theta(M) = 0$ except for a finite number of $M \in \mathcal{M}$. This space is equipped with the $\ell_1$-norm $\|\theta\|_1 = \sum_{M \in \mathcal{M}} |\theta(M)|$. Define by $\Theta_+$ the positive orthant, i.e., the cone of functions $\theta \in \Theta$ such that $\theta(M) \ge 0$ for all $M \in \mathcal{M}$. Any tensor $X$ can be associated with a vector $\theta = (\theta^1, \dots, \theta^{p-1}) \in \Theta_+^{p-1}$, i.e.,
$$X = \sum_{j=1}^{p-1} \sum_{M \in \mathcal{M}} \theta^j(M)\, M \otimes e_j. \quad (5)$$
Such representations are not unique, and among them, the one associated to the SVD plays a key role, as we will see below. For a given $X$ represented by (4) and for any $j \in \{1, \dots, p-1\}$, denote by $\{\sigma^j_k\}_{k=1}^{n_j}$ the (non-zero) singular values of the matrix $X^j$ and by $\{u^j_k, v^j_k\}_{k=1}^{n_j}$ the associated singular vectors. Then, $X$ may be expressed as
$$X = \sum_{j=1}^{p-1} \sum_{k=1}^{n_j} \sigma^j_k u^j_k (v^j_k)^\top \otimes e_j. \quad (6)$$
Defining $\theta^j$ by $\theta^j(M) = \sigma^j_k$ if $M = u^j_k (v^j_k)^\top$, $k \in [n_j]$, and $\theta^j(M) = 0$ otherwise, one obtains a representation of the type given in Eq. (5). Conversely, for any $\theta = (\theta^1, \dots$
, $\theta^{p-1}) \in \Theta^{p-1}$, define the map
$$W : \theta \mapsto W_\theta := \sum_{j=1}^{p-1} W^j_\theta \otimes e_j \quad \text{with} \quad W^j_\theta := \sum_{M \in \mathcal{M}} \theta^j(M)\, M,$$
and the auxiliary objective function
$$\tilde\Phi^\lambda_Y(\theta) = \lambda \sum_{j=1}^{p-1} \sum_{M \in \mathcal{M}} \theta^j(M) + \Phi_Y(W_\theta). \quad (7)$$
The map $\theta \mapsto W_\theta$ is a continuous linear map from $(\Theta^{p-1}, \|\cdot\|_1)$ to $\mathbb{R}^{m_1 \times m_2} \otimes \mathbb{R}^{p-1}$, where $\|\theta\|_1 = \sum_{j=1}^{p-1} \sum_{M \in \mathcal{M}} |\theta^j(M)|$. In addition, for all $\theta \in \Theta_+^{p-1}$,
$$\sum_{j=1}^{p-1} \|W^j_\theta\|_{\sigma,1} \le \|\theta\|_1,$$
and one obtains $\|\theta\|_1 = \sum_{j=1}^{p-1} \|W^j_\theta\|_{\sigma,1}$ when $\theta$ is the representation associated to the SVD decomposition. An important consequence, outlined in [9, Proposition 3.1], is that the minimization of (7) is actually equivalent to the minimization of (2); see [9, Theorem 3.2].

The proposed coordinate gradient descent algorithm updates at each step the nonnegative finite-support function $\theta$. For $\theta \in \Theta$ we denote by $\operatorname{supp}(\theta)$ the support of $\theta$ and, for $M \in \mathcal{M}$, by $\delta_M \in \Theta$ the Dirac function on $M$ satisfying $\delta_M(M) = 1$ and $\delta_M(M') = 0$ if $M' \ne M$. In our experiments we have set the initial $\theta_0$ to zero.

Algorithm 1: Multinomial lifted coordinate gradient descent
Data: observations $Y$, tuning parameter $\lambda$, initial parameter $\theta_0 \in \Theta_+^{p-1}$, tolerance $\epsilon$, maximum number of iterations $K$
Result: $\theta \in \Theta_+^{p-1}$
Initialization: $\theta \leftarrow \theta_0$, $k \leftarrow 0$
while $k \le K$:
    for $j = 0$ to $p-1$: compute the top singular vector pair $u^j, v^j$ of $(-\nabla\Phi_Y(W_\theta))^j$
    let $g = \lambda + \min_{j=1,\dots,p-1} \langle \nabla\Phi_Y \mid u^j (v^j)^\top \rangle$
    if $g \le -\epsilon/2$:
        $(\beta_0, \dots, \beta_{p-1}) = \arg\min_{(b_0, \dots, b_{p-1}) \in \mathbb{R}_+^{p-1}} \tilde\Phi^\lambda_Y\bigl(\theta + (b_0 \delta_{u^0(v^0)^\top}, \dots, b_{p-1} \delta_{u^{p-1}(v^{p-1})^\top})\bigr)$
        $\theta \leftarrow \theta + (\beta_0 \delta_{u^0(v^0)^\top}, \dots, \beta_{p-1} \delta_{u^{p-1}(v^{p-1})^\top})$
        $k \leftarrow k + 1$
    else:
        let $g_{\max} = \max_{j \in [p-1]} \max_{u^j(v^j)^\top \in \operatorname{supp}(\theta^j)} |\lambda + \langle \nabla\Phi_Y \mid u^j (v^j)^\top \rangle|$
        if $g_{\max} \le \epsilon$: break
        else:
            $\theta \leftarrow \arg\min_{\theta' \in \Theta_+^{p-1},\ \operatorname{supp}(\theta'^j) \subset \operatorname{supp}(\theta^j),\ j \in [p-1]} \tilde\Phi^\lambda_Y(\theta')$
            $k \leftarrow k + 1$

A major interest of Algorithm 1 is that it only requires storing the parameter entries for the indexes that are actually observed. Since in practice the number of observations is much smaller than the total number of coefficients $m_1 m_2$, this algorithm is both memory and computationally efficient.
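Each outer step of Algorithm 1 only needs the top singular pair of the (sparse) gradient. This can be obtained from matrix-vector products alone, e.g. by power iteration; the Arnoldi iterations mentioned in the implementation notes are a more robust alternative, but a self-contained sketch (our own names) is:

```python
import numpy as np

def top_singular_pair(G, iters=200, seed=0):
    """Leading singular triple (u, sigma, v) of G by power iteration;
    only products with G and G.T are needed, so the sparsity of the
    gradient keeps every step cheap."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(G.shape[1])
    v /= np.linalg.norm(v)
    sigma = 0.0
    for _ in range(iters):
        u = G @ v
        u /= np.linalg.norm(u)
        w = G.T @ u
        sigma = np.linalg.norm(w)
        v = w / sigma
    return u, sigma, v
```

Convergence is geometric in the ratio of the two largest singular values, which is usually fast for the gradients arising here; this is only an illustrative stand-in for a production Lanczos/Arnoldi solver.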
Moreover, using an SVD algorithm such as Arnoldi iterations to compute the top singular values and vector pairs (see [12, Section 10.5] for instance) allows us to take full advantage of the sparse structure of the gradient. Algorithm 1 was implemented in C, and Table 1 gives a rough idea of the execution time for the case of two classes on a 3.07 GHz W3550 Xeon CPU (1.66 GB RAM, 8 MB cache).

Parameter size       10^3 × 10^3   3·10^3 × 3·10^3   10^4 × 10^4
Observations         10^5          10^5              10^7
Execution time (s)   4.5           52                730

Table 1: Execution time of the proposed algorithm for the binary case.

Simulated experiments. To evaluate our procedure we have performed simulations for matrices with p = 2 or 5. For each class matrix $X^j$ we sampled uniformly five unitary vector pairs $(u^j_k, v^j_k)_{k=1}^5$. We then generated matrices of rank equal to 5, such that
$$X^j = \Gamma \sqrt{m_1 m_2}\, \sum_{k=1}^5 \alpha_k\, u^j_k (v^j_k)^\top\,,$$
with $(\alpha_1, \ldots, \alpha_5) = (2, 1, 0.5, 0.25, 0.1)$ and $\Gamma$ a scaling factor. The $\sqrt{m_1 m_2}$ factor guarantees that $\mathbb{E}[\|X^j\|_\infty]$ does not depend on the sizes $m_1$ and $m_2$ of the problem. We then sampled the entries uniformly and the observations according to the logit distribution given by Eq. (3). We then considered and compared the following two estimators, both computed using Algorithm 1:

• the logit version of our method (with the link function given by Eq. (3));
• the Gaussian completion method (denoted by $\hat{X}^N$), which consists in using the Gaussian log-likelihood instead of the multinomial one in (2), i.e., using a classical squared Frobenius norm (the implementation being adapted mutatis mutandis). Moreover, an estimate of the standard deviation is obtained by the classical analysis of the residuals.

Contrary to the logit version, the Gaussian matrix completion does not directly recover the probabilities of observing a rating.
However, we can estimate this probability by the following quantity:
$$\mathbb{P}(\hat{X}^N_{k,l} = j) = F_{\mathcal{N}(0,1)}(p_{j+1}) - F_{\mathcal{N}(0,1)}(p_j)\,, \quad \text{with} \quad p_j = \begin{cases} -\infty & \text{if } j = 1\,, \\[2pt] \dfrac{j - 0.5 - \hat{X}^N_{k,l}}{\hat{\sigma}} & \text{if } 1 < j \le p\,, \\[2pt] +\infty & \text{if } j = p+1\,, \end{cases}$$
where $F_{\mathcal{N}(0,1)}$ is the cdf of a standard Gaussian random variable (so that $F_{\mathcal{N}(0,1)}(p_1) = 0$ and $F_{\mathcal{N}(0,1)}(p_{p+1}) = 1$).

As we see in Figure 1, the logistic estimator outperforms the Gaussian one for both p = 2 and p = 5 in terms of the Kullback-Leibler divergence. This was expected, because the Gaussian model allows only symmetric distributions with the same variance for all the ratings, which is not the case for logistic distributions. The $\lambda$ parameter has been set for both methods by performing 5-fold cross-validation on a geometric grid of size $0.8 \log(n)$.

Table 2 and Table 3 summarize the results obtained for a 900 × 1350 matrix, respectively for p = 2 and p = 5. For both the binomial case p = 2 and the multinomial case p = 5, the logistic model slightly outperforms the Gaussian model. This is partly due to the fact that in the multinomial case, some ratings can have a multi-modal distribution. In such a case, the Gaussian model is unable to predict these ratings, because its distribution is necessarily centered around a single value and is not flexible enough. For instance, consider the case of a rating distribution with high probability of seeing 1 or 5 and low probability of getting 2, 3 and 4, where we observed both 1's and 5's. The estimator based on a Gaussian model will tend to center its distribution around 2.5 and therefore misses the bimodal shape of the distribution.

Observations                10·10^3   50·10^3   100·10^3   500·10^3
Gaussian prediction error   0.49      0.34      0.29       0.26
Logistic prediction error   0.42      0.30      0.27       0.24

Table 2: Prediction errors for a binomial (2 classes) underlying model, for a 900 × 1350 matrix.
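The Gaussian-to-probability conversion above is easy to reproduce: each rating $j$ receives the Gaussian mass between the thresholds $j - 0.5$ and $j + 0.5$ (with $\mp\infty$ at the ends). A minimal pure-Python sketch, with illustrative function names and `math.erf` supplying the standard normal cdf:

```python
import math

def std_normal_cdf(x):
    """Cdf of a standard Gaussian, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rating_probs(x_hat, sigma_hat, p):
    """Class probabilities implied by a Gaussian completion estimate x_hat:
    P(rating = j) is the Gaussian mass between (j - 0.5 - x_hat)/sigma_hat
    and (j + 0.5 - x_hat)/sigma_hat, with the end thresholds sent to -inf
    (cdf value 0) and +inf (cdf value 1)."""
    probs = []
    for j in range(1, p + 1):
        lo = 0.0 if j == 1 else std_normal_cdf((j - 0.5 - x_hat) / sigma_hat)
        hi = 1.0 if j == p else std_normal_cdf((j + 0.5 - x_hat) / sigma_hat)
        probs.append(hi - lo)
    return probs
```

By construction the probabilities telescope to one, and the mode sits at the rating closest to the estimate, which is precisely the unimodality limitation discussed in the text.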
Observations                10·10^3   50·10^3   100·10^3   500·10^3
Gaussian prediction error   0.78      0.76      0.73       0.69
Logistic prediction error   0.75      0.54      0.47       0.43

Table 3: Prediction errors for a multinomial (5 classes) underlying model, for a 900 × 1350 matrix.

Real dataset. We have also run the same estimators on the MovieLens 100k dataset. In the case of real data we cannot compute the Kullback-Leibler divergence, since no ground truth is available. Therefore, to compare the prediction errors, we randomly selected 20% of the entries as a test set, and the remaining entries were split between a training set (80%) and a validation set (20%).

Figure 1: Kullback-Leibler divergence between the estimated and the true model for different matrix sizes (100 × 150, 300 × 450, 900 × 1350) and sampling fractions, normalized by the number of classes. Right figure: binomial and Gaussian models; left figure: multinomial with five classes and Gaussian model. Results are averaged over five samples.

For this dataset, ratings range from 1 to 5. To consider the benefit of a binomial model, we have tested each rating against the others (e.g., ratings 5 are set to 0 and all the others are set to 1). Interestingly, we see that the Gaussian prediction error is significantly better when choosing labels −1, 1 instead of labels 0, 1. This is another motivation for not using the Gaussian version: the sensitivity to the choice of alphabet seems to be crucial for the Gaussian version, whereas the binomial/multinomial ones are insensitive to it. These results are summarized in Table 4.
Rating                                        1      2      3      4      5
Gaussian prediction error (labels −1 and 1)   0.06   0.12   0.28   0.35   0.19
Gaussian prediction error (labels 0 and 1)    0.12   0.20   0.39   0.46   0.30
Logistic prediction error                     0.06   0.11   0.27   0.34   0.20

Table 4: Binomial prediction errors when performing a one-versus-the-others procedure on the MovieLens 100k dataset.

4 Conclusion and future work

We have proposed a new nuclear-norm-penalized maximum log-likelihood estimator and have provided strong theoretical guarantees on its estimation accuracy in the binary case. Compared to previous works on 1-bit matrix completion, our method has some important advantages. First, it works under quite mild assumptions on the sampling distribution. Second, it requires only an upper bound on the maximal absolute value of the entries of the unknown matrix. Finally, the rates of convergence given by Theorem 2 are faster than the rates of convergence obtained in [8] and [6]. In future work, we could consider the extension to more general data-fitting terms, possibly generalize the results to tensor formulations, or penalize directly the nuclear norm of the matrix of probabilities itself.

Acknowledgments

Jean Lafond is grateful for funding from the Direction Générale de l'Armement (DGA) and to the labex LMH through the grant no ANR-11-LABX-0056-LMH in the framework of the "Programme des Investissements d'Avenir". Joseph Salmon acknowledges the Chair Machine Learning for Big Data for partial financial support. The authors would also like to thank Alexandre Gramfort for helpful discussions.

References

[1] R. Bhatia. Matrix analysis, volume 169 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1997.
[2] J. Bobadilla, F. Ortega, A. Hernando, and A. Gutiérrez. Recommender systems survey. Knowledge-Based Systems, 46(0):109–132, 2013.
[3] S. Boyd and L. Vandenberghe. Convex optimization. Cambridge University Press, Cambridge, 2004.
[4] J.-F. Cai, E. J. Candès, and Z. Shen.
A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[5] T. T. Cai and W.-X. Zhou. Matrix completion via max-norm constrained optimization. CoRR, abs/1303.0341, 2013.
[6] T. T. Cai and W.-X. Zhou. A max-norm constrained minimization approach to 1-bit matrix completion. J. Mach. Learn. Res., 14:3619–3647, 2013.
[7] E. J. Candès and Y. Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2010.
[8] M. A. Davenport, Y. Plan, E. van den Berg, and M. Wootters. 1-bit matrix completion. CoRR, abs/1209.3672, 2012.
[9] M. Dudík, Z. Harchaoui, and J. Malick. Lifted coordinate descent for learning with trace-norm regularization. In AISTATS, 2012.
[10] M. Fazel. Matrix rank minimization with applications. PhD thesis, Stanford University, 2002.
[11] R. Foygel, R. Salakhutdinov, O. Shamir, and N. Srebro. Learning with the weighted trace-norm under arbitrary sampling distributions. In NIPS, pages 2133–2141, 2011.
[12] G. H. Golub and C. F. van Loan. Matrix computations. Johns Hopkins University Press, Baltimore, MD, fourth edition, 2013.
[13] D. Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
[14] Z. Harchaoui, A. Juditsky, and A. Nemirovski. Conditional gradient algorithms for norm-regularized smooth convex optimization. Mathematical Programming, pages 1–38, 2014.
[15] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from noisy entries. J. Mach. Learn. Res., 11:2057–2078, 2010.
[16] O. Klopp. Noisy low-rank matrix completion with general sampling distribution. Bernoulli, 20(1):282–303, 2014.
[17] V. Koltchinskii, A. B. Tsybakov, and K. Lounici. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. Ann. Statist., 39(5):2302–2329, 2011.
[18] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.
[19] R.
Mazumder, T. Hastie, and R. Tibshirani. Spectral regularization algorithms for learning large incomplete matrices. J. Mach. Learn. Res., 11:2287–2322, 2010.
[20] S. Negahban and M. J. Wainwright. Restricted strong convexity and weighted matrix completion: optimal bounds with noise. J. Mach. Learn. Res., 13:1665–1697, 2012.
[21] B. Recht and C. Ré. Parallel stochastic gradient algorithms for large-scale matrix completion. Mathematical Programming Computation, 5(2):201–226, 2013.
[22] A. Todeschini, F. Caron, and M. Chavent. Probabilistic low-rank matrix completion with adaptive spectral regularization algorithms. In NIPS, pages 845–853, 2013.
Tight Continuous Relaxation of the Balanced k-Cut Problem

Syama Sundar Rangapuram, Pramod Kaushik Mudrakarta and Matthias Hein
Department of Mathematics and Computer Science
Saarland University, Saarbrücken

Abstract

Spectral clustering, as a relaxation of the normalized/ratio cut, has become one of the standard graph-based clustering methods. Existing methods for the computation of multiple clusters, corresponding to a balanced k-cut of the graph, are either based on greedy techniques or on heuristics which have a weak connection to the original motivation of minimizing the normalized cut. In this paper we propose a new tight continuous relaxation for any balanced k-cut problem and show that a related, recently proposed relaxation is in most cases loose, leading to poor performance in practice. For the optimization of our tight continuous relaxation we propose a new algorithm for the difficult sum-of-ratios minimization problem which achieves monotonic descent. Extensive comparisons show that our method outperforms all existing approaches for ratio cut and other balanced k-cut criteria.

1 Introduction

Graph-based techniques for clustering have become very popular in machine learning as they allow for an easy integration of pairwise relationships in data. The problem of finding k clusters in a graph can be formulated as a balanced k-cut problem [1, 2, 3, 4], where ratio and normalized cut are famous instances of balanced graph cut criteria employed for clustering, community detection and image segmentation. The balanced k-cut problem is known to be NP-hard [4] and thus in practice relaxations [4, 5] or greedy approaches [6] are used for finding the optimal multi-cut. The most famous approach is spectral clustering [7], which corresponds to the spectral relaxation of the ratio/normalized cut and uses k-means in the embedding of the vertices found by the first k eigenvectors of the graph Laplacian in order to obtain the clustering.
However, the spectral relaxation has been shown to be loose for k = 2 [8], and for k > 2 no guarantees are known on the quality of the obtained k-cut with respect to the optimal one. Moreover, in practice even greedy approaches [6] frequently outperform spectral clustering. This paper is motivated by another line of recent work [9, 10, 11, 12], where it has been shown that an exact continuous relaxation for the two-cluster case (k = 2) is possible for a quite general class of balancing functions. Moreover, efficient algorithms for its optimization have been proposed which produce much better cuts than the standard spectral relaxation. However, the multi-cut problem still has to be solved via a greedy recursive splitting technique.

Inspired by the recent approach in [13], in this paper we tackle the general balanced k-cut problem directly, based on a new tight continuous relaxation. We show that the relaxation for the asymmetric ratio Cheeger cut proposed recently by [13] is loose when the data does not contain k well-separated clusters and thus leads to poor performance in practice. Similarly to [13], we can also integrate label information, leading to a transductive clustering formulation. Moreover, we propose an efficient algorithm for the minimization of our continuous relaxation, for which we can prove monotonic descent. This is in contrast to the algorithm proposed in [13], for which no such guarantee holds. In extensive experiments we show that our method outperforms all existing methods in terms of the achieved balanced k-cuts. Moreover, our clustering error is competitive with several other clustering techniques based on balanced k-cuts and with recently proposed approaches based on non-negative matrix factorization. Also, we observe that already a small amount of label information improves the clustering error significantly.
2 Balanced Graph Cuts

Graphs are used in machine learning typically as similarity graphs, that is, the weight of an edge between two instances encodes their similarity. Given such a similarity graph of the instances, the clustering problem into k sets can be transformed into a graph partitioning problem, where the goal is to construct a partition of the graph into k sets such that the cut, that is, the sum of weights of the edges from each set to all other sets, is small and all sets in the partition are roughly of equal size.

Before we introduce balanced graph cuts, we briefly fix the setting and notation. Let $G(V, W)$ denote an undirected, weighted graph with vertex set $V$, $n = |V|$ vertices, and weight matrix $W \in \mathbb{R}^{n \times n}_+$ with $W = W^T$. There is an edge between two vertices $i, j \in V$ if $w_{ij} > 0$. The cut between two sets $A, B \subset V$ is defined as $\mathrm{cut}(A, B) = \sum_{i \in A,\, j \in B} w_{ij}$, and we write $\mathbf{1}_A$ for the indicator vector of a set $A \subset V$. A collection of $k$ sets $(C_1, \ldots, C_k)$ is a partition of $V$ if $\cup_{i=1}^k C_i = V$, $C_i \cap C_j = \emptyset$ for $i \neq j$, and $|C_i| \ge 1$ for $i = 1, \ldots, k$. We denote the set of all $k$-partitions of $V$ by $\mathcal{P}_k$. Furthermore, we denote by $\Delta_k$ the simplex $\{x \in \mathbb{R}^k \mid x \ge 0,\ \sum_{i=1}^k x_i = 1\}$.

Finally, a set function $\hat{S} : 2^V \to \mathbb{R}$ is called submodular if for all $A, B \subset V$, $\hat{S}(A \cup B) + \hat{S}(A \cap B) \le \hat{S}(A) + \hat{S}(B)$. Furthermore, we need the concept of the Lovász extension of a set function.

Definition 1 Let $\hat{S} : 2^V \to \mathbb{R}$ be a set function with $\hat{S}(\emptyset) = 0$. Let $f \in \mathbb{R}^V$ be ordered in increasing order, $f_1 \le f_2 \le \cdots \le f_n$, and define $C_i = \{ j \in V \mid f_j > f_i \}$, where $C_0 = V$. Then $S : \mathbb{R}^V \to \mathbb{R}$ given by
$$S(f) = \sum_{i=1}^n f_i \big( \hat{S}(C_{i-1}) - \hat{S}(C_i) \big)$$
is called the Lovász extension of $\hat{S}$. Note that $S(\mathbf{1}_A) = \hat{S}(A)$ for all $A \subset V$.

The Lovász extension of a set function is convex if and only if the set function is submodular [14]. The cut function $\mathrm{cut}(C, \overline{C})$, where $\overline{C} = V \setminus C$, is submodular, and its Lovász extension is given by $\mathrm{TV}(f) = \frac{1}{2} \sum_{i,j=1}^n w_{ij}\, |f_i - f_j|$.
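Definition 1 translates directly into code: sort $f$, walk through the superlevel sets, and accumulate the telescoping differences of the set function. A small pure-Python sketch (illustrative only, not tied to any implementation from the paper):

```python
def lovasz_extension(f, S_hat):
    """Lovász extension of a set function S_hat (with S_hat(empty set) = 0)
    evaluated at f, following Definition 1: visit vertices in increasing
    order of f and sum f_i * (S_hat(C_{i-1}) - S_hat(C_i)), where
    C_i = {j : f_j > f_i} and C_0 = V."""
    n = len(f)
    order = sorted(range(n), key=lambda i: f[i])
    C = set(range(n))                       # C_0 = V
    total = 0.0
    for i in order:
        C_next = {j for j in range(n) if f[j] > f[i]}
        total += f[i] * (S_hat(C) - S_hat(C_next))
        C = C_next
    return total
```

For the cut function this reproduces the total variation $\mathrm{TV}(f) = \frac{1}{2}\sum_{i,j} w_{ij}|f_i - f_j|$, and on an indicator vector it recovers $\hat{S}(A)$, which makes for an easy correctness check.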
2.1 Balanced k-cuts

The balanced k-cut problem is defined as
$$\min_{(C_1, \ldots, C_k) \in \mathcal{P}_k}\ \sum_{i=1}^k \frac{\mathrm{cut}(C_i, \overline{C_i})}{\hat{S}(C_i)} =: \mathrm{BCut}(C_1, \ldots, C_k)\,, \qquad (1)$$
where $\hat{S} : 2^V \to \mathbb{R}_+$ is a balancing function whose goal is that all sets $C_i$ are of the same "size". In this paper, we assume that $\hat{S}(\emptyset) = 0$ and that, for any $C \subsetneq V$, $C \neq \emptyset$, $\hat{S}(C) \ge m$ for some $m > 0$. In the literature one finds mainly the following submodular balancing functions (in brackets the name of the overall balanced graph cut criterion $\mathrm{BCut}(C_1, \ldots, C_k)$):
$$\hat{S}(C) = |C| \quad \text{(Ratio Cut)}, \qquad (2)$$
$$\hat{S}(C) = \min\{|C|, |\overline{C}|\} \quad \text{(Ratio Cheeger Cut)},$$
$$\hat{S}(C) = \min\{(k-1)|C|,\ |\overline{C}|\} \quad \text{(Asymmetric Ratio Cheeger Cut)}.$$
The Ratio Cut is well studied in the literature, e.g. [3, 7, 6], and corresponds to a balancing function without bias towards a particular size of the sets, whereas the Asymmetric Ratio Cheeger Cut recently proposed in [13] has a bias towards sets of size $\frac{|V|}{k}$ ($\hat{S}(C)$ attains its maximum at this point), which makes perfect sense if one expects clusters of roughly equal size. An intermediate version between the two is the Ratio Cheeger Cut, which has a symmetric balancing function and strongly penalizes overly large clusters. For ease of presentation we restrict ourselves to these balancing functions. However, we can also handle the corresponding weighted cases, e.g., $\hat{S}(C) = \mathrm{vol}(C) = \sum_{i \in C} d_i$, where $d_i = \sum_{j=1}^n w_{ij}$, leading to the normalized cut [4].

3 Tight Continuous Relaxation for the Balanced k-Cut Problem

In this section we discuss our proposed relaxation for the balanced k-cut problem (1). It turns out that a crucial question towards a tight multi-cut relaxation is the choice of the constraints so that the continuous problem also yields a partition (together with a suitable rounding scheme). The motivation for our relaxation is taken from the recent work of [9, 10, 11], where exact relaxations are shown for the case k = 2. Basically, they replace the ratio of set functions with the ratio of the corresponding Lovász extensions.
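The discrete objective (1) with the balancing functions in (2) is straightforward to evaluate on an explicit partition. A pure-Python sketch for illustration (function names are ours):

```python
def bcut(W, partition, S_hat):
    """Balanced k-cut objective of Eq. (1):
    sum over components of cut(C, complement) / S_hat(C)."""
    n = len(W)
    total = 0.0
    for C in partition:
        cut = sum(W[i][j] for i in C for j in range(n) if j not in C)
        total += cut / S_hat(C, n)
    return total

# The three balancing functions from Eq. (2); n = |V|.
def ratio(C, n):
    return len(C)                       # Ratio Cut

def ratio_cheeger(C, n):
    return min(len(C), n - len(C))      # Ratio Cheeger Cut

def asym_ratio_cheeger(k):
    # Asymmetric Ratio Cheeger Cut, as a closure over the number of clusters k
    return lambda C, n: min((k - 1) * len(C), n - len(C))
```

On two triangles joined by a single edge, the 2-partition into the triangles yields a ratio cut of $1/3 + 1/3 = 2/3$, which the sketch reproduces.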
We use the same idea for the objective of our continuous relaxation of the k-cut problem (1), which is given as
$$\min_{F = (F_1, \ldots, F_k),\ F \in \mathbb{R}^{n \times k}_+}\ \sum_{l=1}^k \frac{\mathrm{TV}(F_l)}{S(F_l)} \qquad (3)$$
subject to: $F_{(i)} \in \Delta_k$, $i = 1, \ldots, n$ (simplex constraints); $\max\{F_{(i)}\} = 1$ for all $i \in I$ (membership constraints); $S(F_l) \ge m$, $l = 1, \ldots, k$ (size constraints),
where $S$ is the Lovász extension of the set function $\hat{S}$ and $m = \min_{C \subsetneq V,\, C \neq \emptyset} \hat{S}(C)$. We have $m = 1$ for Ratio Cut and Ratio Cheeger Cut, whereas $m = k - 1$ for the Asymmetric Ratio Cheeger Cut. Note that TV is the Lovász extension of the cut functional $\mathrm{cut}(C, \overline{C})$. In order to simplify notation we denote, for a matrix $F \in \mathbb{R}^{n \times k}$, by $F_l$ the $l$-th column of $F$ and by $F_{(i)}$ the $i$-th row of $F$. Note that the rows of $F$ correspond to the vertices of the graph and the $j$-th column of $F$ corresponds to the set $C_j$ of the desired partition. The set $I \subset V$ in the membership constraints is chosen adaptively by our method during the sequential optimization described in Section 4.

An obvious question is how to get from the continuous solution $F^*$ of (3) to a partition $(C_1, \ldots, C_k) \in \mathcal{P}_k$, which is typically called rounding. Given $F^*$, we construct the sets by assigning each vertex $i$ to the column where the $i$-th row attains its maximum. Formally,
$$C_i = \{ j \in V \mid i = \arg\max_{s = 1, \ldots, k} F_{js} \}\,, \quad i = 1, \ldots, k\,, \qquad \text{(Rounding)} \quad (4)$$
where ties are broken randomly. If there exists a row such that the rounding is not unique, we say that the solution is weakly degenerated. If furthermore the resulting sets $(C_1, \ldots, C_k)$ do not form a partition, that is, one of the sets is empty, then we say that the solution is strongly degenerated.

First, we connect our relaxation to the previous work of [11] for the case k = 2. Indeed, for a symmetric balancing function such as the Ratio Cheeger Cut, our continuous relaxation (3) is exact even without membership and size constraints.
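The rounding step (4) is a row-wise argmax, and the two degeneracy notions just defined are easy to detect at the same time. A pure-Python sketch (illustrative; ties here go to the first maximal column rather than being broken randomly):

```python
def round_to_partition(F):
    """Rounding step (4): assign vertex i to the column where row F_(i)
    attains its maximum.  Also reports whether the solution is weakly
    degenerated (some row has a non-unique maximum) or strongly
    degenerated (some cluster comes out empty)."""
    n, k = len(F), len(F[0])
    clusters = [set() for _ in range(k)]
    weakly = False
    for i, row in enumerate(F):
        m = max(row)
        if row.count(m) > 1:
            weakly = True               # rounding not unique for this row
        clusters[row.index(m)].add(i)
    strongly = any(len(C) == 0 for C in clusters)
    return clusters, weakly, strongly
```

A matrix with an all-zero column is flagged as strongly degenerated, which is exactly the failure mode of the simplex-only relaxation analyzed in Theorem 2 below.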
Theorem 1 Let $\hat{S}$ be a non-negative symmetric balancing function, $\hat{S}(C) = \hat{S}(\overline{C})$, and denote by $p^*$ the optimal value of (3) without membership and size constraints for k = 2. Then it holds that
$$p^* = \min_{(C_1, C_2) \in \mathcal{P}_2}\ \sum_{i=1}^2 \frac{\mathrm{cut}(C_i, \overline{C_i})}{\hat{S}(C_i)}\,.$$
Furthermore, there exists a solution $F^*$ of (3) such that $F^* = [\mathbf{1}_{C^*}, \mathbf{1}_{\overline{C^*}}]$, where $(C^*, \overline{C^*})$ is the optimal balanced 2-cut partition.

Note that rounding trivially yields a solution in the setting of the previous theorem. A second result shows that our proposed optimization problem (3) is indeed a relaxation of the balanced k-cut problem (1). Furthermore, the relaxation is exact if $I = V$.

Proposition 1 The continuous problem (3) is a relaxation of the k-cut problem (1). The relaxation is exact, i.e., both problems are equivalent, if $I = V$.

The row-wise simplex and membership constraints enforce that each vertex in $I$ belongs to exactly one component. Note that these constraints alone (even if $I = V$) still cannot guarantee that $F$ corresponds to a $k$-way partition, since an entire column of $F$ can be zero. This is avoided by the column-wise size constraints, which enforce that each component has at least one vertex.

If $I = V$, it is immediate from the proof that problem (3) is no longer a continuous problem, as the feasible set contains only indicator matrices of partitions. In this case rounding trivially yields a partition. On the other hand, if $I = \emptyset$ (i.e., no membership constraints) and $k > 2$, it is not guaranteed that rounding of the solution of the continuous problem yields a partition. Indeed, we will see in the following that for symmetric balancing functions one can, under these conditions, show that the solution is always strongly degenerated and rounding does not yield a partition (see Theorem 2). Thus we observe that the index set $I$ controls the degree to which the partition constraint is enforced.
The idea behind our suggested relaxation is that it is well known in image processing that minimizing the total variation yields piecewise constant solutions (in fact, this follows from seeing the total variation as the Lovász extension of the cut). Thus if $|I|$ is sufficiently large, the vertices whose values are fixed to 0 or 1 propagate this to their neighboring vertices and finally to the whole graph. We discuss the choice of $I$ in more detail in Section 4.

Simplex constraints alone are not sufficient to yield a partition: Our approach has been inspired by [13], who proposed the following continuous relaxation for the Asymmetric Ratio Cheeger Cut:
$$\min_{F = (F_1, \ldots, F_k),\ F \in \mathbb{R}^{n \times k}_+}\ \sum_{l=1}^k \frac{\mathrm{TV}(F_l)}{\big\| F_l - \mathrm{quant}_{k-1}(F_l) \big\|_1} \qquad (5)$$
subject to: $F_{(i)} \in \Delta_k$, $i = 1, \ldots, n$ (simplex constraints),
where $S(f) = \| f - \mathrm{quant}_{k-1}(f) \|_1$ is the Lovász extension of $\hat{S}(C) = \min\{(k-1)|C|, |\overline{C}|\}$ and $\mathrm{quant}_{k-1}(f)$ is the $(k-1)$-quantile of $f \in \mathbb{R}^n$. Note that in their approach no membership and size constraints are present.

We now show that the simplex constraints alone in the optimization problem (3) are not sufficient to guarantee that the solution $F^*$ can be rounded to a partition for any symmetric balancing function in (1). For asymmetric balancing functions, as employed for the Asymmetric Ratio Cheeger Cut by [13] in their relaxation (5), we can prove such a strong result only in the case where the graph is disconnected. However, note that if the number of connected components of the graph is smaller than the number of desired clusters k, the multi-cut problem is still non-trivial.

Theorem 2 Let $\hat{S}(C)$ be any non-negative symmetric balancing function. Then the continuous relaxation
$$\min_{F = (F_1, \ldots, F_k),\ F \in \mathbb{R}^{n \times k}_+}\ \sum_{l=1}^k \frac{\mathrm{TV}(F_l)}{S(F_l)} \qquad (6)$$
subject to: $F_{(i)} \in \Delta_k$, $i = 1, \ldots, n$ (simplex constraints),
of the balanced k-cut problem (1) is void in the sense that the optimal solution $F^*$ of the continuous problem can be constructed from the optimal solution of the 2-cut problem, and $F^*$ cannot be rounded into a k-way partition, see (4). If the graph is disconnected, then the same holds also for any non-negative asymmetric balancing function.

The proof of Theorem 2 additionally shows that, for any balancing function, if the graph is disconnected the solution of the continuous relaxation (6) is always zero, while clearly the solution of the balanced k-cut problem need not be zero. This shows that the relaxation can be arbitrarily bad in this case. In fact, the relaxation for the asymmetric case can even fail if the graph is not disconnected but there exists a very small cut of the graph, as the following corollary indicates.

Corollary 1 Let $\hat{S}$ be an asymmetric balancing function, let $C^* = \arg\min_{C \subset V} \frac{\mathrm{cut}(C, \overline{C})}{\hat{S}(C)}$, and suppose that
$$\varphi^* := (k-1)\, \frac{\mathrm{cut}(C^*, \overline{C^*})}{\hat{S}(C^*)} + \frac{\mathrm{cut}(\overline{C^*}, C^*)}{\hat{S}(\overline{C^*})} \;<\; \min_{(C_1, \ldots, C_k) \in \mathcal{P}_k}\ \sum_{i=1}^k \frac{\mathrm{cut}(C_i, \overline{C_i})}{\hat{S}(C_i)}\,.$$
Then there exists a feasible $F$ for (6) with $F_1 = \mathbf{1}_{\overline{C^*}}$ and $F_l = \alpha_l \mathbf{1}_{C^*}$, $l = 2, \ldots, k$, where $\sum_{l=2}^k \alpha_l = 1$ and $\alpha_l > 0$, which has objective $\sum_{i=1}^k \frac{\mathrm{TV}(F_i)}{S(F_i)} = \varphi^*$ and which cannot be rounded to a k-way partition.

Theorem 2 shows that the membership and size constraints which we have introduced in our relaxation (3) are essential to obtain a partition for symmetric balancing functions. For asymmetric balancing functions, failure of the relaxation (6), and thus also of the relaxation (5) of [13], is only guaranteed for disconnected graphs. However, Corollary 1 indicates that degenerate solutions should also be a problem when the graph is still connected but there exists a dominating cut. We illustrate this with a toy example in Figure 1, where the algorithm of [13] for solving (5) fails, as it converges exactly to the solution predicted by Corollary 1 and thus only produces a 2-partition instead of the desired 3-partition. The algorithm for our relaxation enforcing membership constraints converges to a continuous solution which is in fact a partition matrix, so that no rounding is necessary.

Figure 1: Toy example illustrating that the relaxation of [13] converges to a degenerate solution when applied to a graph with a dominating 2-cut. (a) 10NN-graph generated from three Gaussians in 10 dimensions; (b) continuous solution of (5) from [13] for k = 3; (c) rounding of the continuous solution of [13] does not yield a 3-partition; (d) continuous solution found by our method together with the vertices $i \in I$ (black) where the membership constraint is enforced; our continuous solution already corresponds to a partition; (e) clustering found by rounding of our continuous solution (trivial, as we have converged to a partition). In (b)-(e), we color data point $i$ according to $F_{(i)} \in \mathbb{R}^3$.

4 Monotonic Descent Method for Minimization of a Sum of Ratios

Apart from the new relaxation, another key contribution of this paper is the derivation of an algorithm which yields a sequence of feasible points for the difficult non-convex problem (3) and monotonically reduces the corresponding objective. We would like to note that the algorithm proposed by [13] for (5) does not yield monotonic descent; in fact, it is unclear what the guarantee derived for the algorithm in [13] implies for the generated sequence. Moreover, our algorithm works for any non-negative submodular balancing function.
The key insight in order to derive a monotonic descent method for solving the sum-of-ratios minimization problem (3) is to eliminate the ratios by introducing a new set of variables $\beta = (\beta_1, \ldots, \beta_k)$:
$$\min_{F = (F_1, \ldots, F_k),\ F \in \mathbb{R}^{n \times k}_+,\ \beta \in \mathbb{R}^k_+}\ \sum_{l=1}^k \beta_l \qquad (7)$$
subject to: $\mathrm{TV}(F_l) \le \beta_l S(F_l)$, $l = 1, \ldots, k$ (descent constraints); $F_{(i)} \in \Delta_k$, $i = 1, \ldots, n$ (simplex constraints); $\max\{F_{(i)}\} = 1$ for all $i \in I$ (membership constraints); $S(F_l) \ge m$, $l = 1, \ldots, k$ (size constraints).

Note that for the optimal solution $(F^*, \beta^*)$ of this problem it holds that $\mathrm{TV}(F^*_l) = \beta^*_l S(F^*_l)$, $l = 1, \ldots, k$ (otherwise one could decrease $\beta^*_l$ and hence the objective), and thus equivalence holds. This is still a non-convex problem, as the descent, membership and size constraints are non-convex. Our algorithm now proceeds in a sequential manner. At each iterate we construct a convex inner approximation of the constraint set, that is, the convex approximation is a subset of the non-convex constraint set, based on the current iterate $(F^t, \beta^t)$. Then we solve the resulting convex optimization problem and repeat the process. In this way we get a sequence of feasible points for the original problem (7), for which we will prove monotonic descent in the sum of ratios.

Convex approximation: As $\hat{S}$ is submodular, $S$ is convex. Let $s^t_l \in \partial S(F^t_l)$ be an element of the sub-differential of $S$ at the current iterate $F^t_l$. By Prop. 3.2 in [14] we have $(s^t_l)_{j_i} = \hat{S}(C_{l,i-1}) - \hat{S}(C_{l,i})$, where $j_i$ is the index of the $i$-th smallest component of $F^t_l$ and $C_{l,i} = \{ j \in V \mid (F^t_l)_j > (F^t_l)_{j_i} \}$. Moreover, using the definition of the subgradient, we have $S(F_l) \ge S(F^t_l) + \langle s^t_l, F_l - F^t_l \rangle = \langle s^t_l, F_l \rangle$.

For the descent constraints, let $\lambda^t_l = \frac{\mathrm{TV}(F^t_l)}{S(F^t_l)}$ and introduce new variables $\delta_l = \beta_l - \lambda^t_l$ that capture the amount of change in each ratio. We further decompose $\delta_l$ as $\delta_l = \delta^+_l - \delta^-_l$, with $\delta^+_l \ge 0$, $\delta^-_l \ge 0$.
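The subgradient formula from Prop. 3.2 of [14] is cheap to evaluate: sort the current iterate and take telescoping differences of the set function along its superlevel sets. A pure-Python sketch (illustrative names, not the authors' code):

```python
def lovasz_subgradient(f, S_hat):
    """Subgradient s of the Lovász extension of S_hat at f:
    with j_i the index of the i-th smallest entry of f and
    C_i = {j : f_j > f_{j_i}} (C_0 = V), set
    s[j_i] = S_hat(C_{i-1}) - S_hat(C_i)."""
    n = len(f)
    order = sorted(range(n), key=lambda i: f[i])
    s = [0.0] * n
    C = set(range(n))                   # C_0 = V
    for i in order:
        C_next = {j for j in range(n) if f[j] > f[i]}
        s[i] = S_hat(C) - S_hat(C_next)
        C = C_next
    return s
```

By Definition 1, $\langle s, f \rangle = S(f)$ exactly, which is the identity used above to drop the $S(F^t_l)$ term from the subgradient inequality; for the cut function this can be checked against the TV formula.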
Let $M = \max_{f \in [0,1]^n} S(f) = \max_{C \subset V} \hat{S}(C)$. Then, for $S(F_l) \ge m$,
$$\mathrm{TV}(F_l) - \beta_l S(F_l) \;\le\; \mathrm{TV}(F_l) - \lambda^t_l \langle s^t_l, F_l \rangle - \delta^+_l S(F_l) + \delta^-_l S(F_l) \;\le\; \mathrm{TV}(F_l) - \lambda^t_l \langle s^t_l, F_l \rangle - \delta^+_l m + \delta^-_l M\,.$$
Finally, note that because of the simplex constraints, the membership constraints can be rewritten as $\max\{F_{(i)}\} \ge 1$. Let $i \in I$ and define $j_i := \arg\max_j F^t_{ij}$ (ties are broken randomly). Then the membership constraints can be relaxed as follows: $0 \ge 1 - \max\{F_{(i)}\} \ge 1 - F_{i j_i}$, so we require $F_{i j_i} \ge 1$; as $F_{ij} \le 1$, we get $F_{i j_i} = 1$. Thus the convex approximation of the membership constraints fixes the assignment of the $i$-th point to a cluster and can thus be interpreted as a "label constraint". However, unlike in the transductive setting, the labels for the vertices in $I$ are chosen automatically by our method. The actual choice of the set $I$ will be discussed in Section 4.1. We use the notation $L = \{(i, j_i) \mid i \in I\}$ for the label set generated from $I$ (note that $L$ is fixed once $I$ is fixed).

Descent algorithm: Our descent algorithm for minimizing (7) solves at each iteration $t$ the following convex optimization problem:
$$\min_{F \in \mathbb{R}^{n \times k}_+,\ \delta^+ \in \mathbb{R}^k_+,\ \delta^- \in \mathbb{R}^k_+}\ \sum_{l=1}^k \delta^+_l - \delta^-_l \qquad (8)$$
subject to: $\mathrm{TV}(F_l) \le \lambda^t_l \langle s^t_l, F_l \rangle + \delta^+_l m - \delta^-_l M$, $l = 1, \ldots, k$ (descent constraints); $F_{(i)} \in \Delta_k$, $i = 1, \ldots, n$ (simplex constraints); $F_{i j_i} = 1$ for all $(i, j_i) \in L$ (label constraints); $\langle s^t_l, F_l \rangle \ge m$, $l = 1, \ldots, k$ (size constraints).

As its solution $F^{t+1}$ is feasible for (3), we update $\lambda^{t+1}_l = \frac{\mathrm{TV}(F^{t+1}_l)}{S(F^{t+1}_l)}$ and $s^{t+1}_l \in \partial S(F^{t+1}_l)$, $l = 1, \ldots, k$, and repeat the process until the sequence terminates, that is, no further descent is possible (as the following theorem states), or until the relative descent in $\sum_{l=1}^k \lambda^t_l$ is smaller than a predefined $\epsilon$. The following Theorem 3 shows the monotonic descent property of our algorithm.

Theorem 3 The sequence $\{F^t\}$ produced by the above algorithm satisfies $\sum_{l=1}^k \frac{\mathrm{TV}(F^{t+1}_l)}{S(F^{t+1}_l)} < \sum_{l=1}^k \frac{\mathrm{TV}(F^t_l)}{S(F^t_l)}$ for all $t \ge 0$, or the algorithm terminates.
The inner problem (8) is convex, but contains the non-smooth term TV in the constraints. We eliminate the non-smoothness by introducing additional variables and derive an equivalent linear programming (LP) formulation. We solve this LP via the PDHG algorithm [15, 16]. The LP and the exact iterates can be found in the supplementary material.

4.1 Choice of membership constraints I

The overall algorithmic scheme for solving problem (1) is given in the supplementary material. For the membership constraints we start initially with $I^0 = \emptyset$ and sequentially solve the inner problem (8). From its solution $F^{t+1}$ we construct a partition $P'_k = (C_1, \ldots, C_k)$ via rounding; see (4). We repeat this process until we either no longer improve the resulting balanced k-cut or $P'_k$ is not a partition. In this case we update $I^{t+1}$ and double the number of membership constraints. Let $(C^*_1, \ldots, C^*_k)$ be the currently optimal partition. For each $l \in \{1, \ldots, k\}$ and $i \in C^*_l$ we compute
$$b^*_{li} = \frac{\mathrm{cut}\big(C^*_l \setminus \{i\},\ \overline{C^*_l} \cup \{i\}\big)}{\hat{S}\big(C^*_l \setminus \{i\}\big)} + \min_{s \neq l}\ \frac{\mathrm{cut}\big(C^*_s \cup \{i\},\ \overline{C^*_s} \setminus \{i\}\big)}{\hat{S}\big(C^*_s \cup \{i\}\big)} \qquad (9)$$
and define $O_l = \{(\pi_1, \ldots, \pi_{|C^*_l|}) \mid b^*_{l\pi_1} \ge b^*_{l\pi_2} \ge \cdots \ge b^*_{l\pi_{|C^*_l|}}\}$. The top-ranked vertices in $O_l$ are the ones which lead to the largest minimal increase in BCut when moved from $C^*_l$ to another component, and thus are most likely to belong to their current component. Thus it is natural to fix the top-ranked vertices of each component first. Note that the rankings $O_l$, $l = 1, \ldots, k$, are updated whenever a better partition is found, so the membership constraints always correspond to the vertices which lead to the largest minimal increase in BCut when moved to another component. In Figure 1 one can observe that the fixed labeled points lie close to the centers of the found clusters. The number of membership constraints needed depends on the graph: the better separated the clusters are, the fewer membership constraints need to be enforced in order to avoid degenerate solutions.
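The scores (9) rank vertices by how firmly they sit in their current cluster. A pure-Python sketch (names are ours; it assumes removing a vertex never empties a cluster, so that $\hat{S}$ stays positive):

```python
def membership_ranking(W, clusters, S_hat):
    """Scores b*_{li} of Eq. (9): the balanced-cut cost after removing
    vertex i from its cluster, plus the cheapest cost of inserting it
    into another cluster.  Larger scores mean the vertex is more firmly
    assigned, so top-ranked vertices are fixed by membership constraints
    first.  `clusters` is a list of sets; S_hat maps a set to its balance."""
    n = len(W)
    def cut(C):
        return sum(W[a][b] for a in C for b in range(n) if b not in C)
    scores = {}
    for l, C in enumerate(clusters):
        for i in C:
            removed = C - {i}
            stay = cut(removed) / S_hat(removed)
            move = min(cut(Cs | {i}) / S_hat(Cs | {i})
                       for s, Cs in enumerate(clusters) if s != l)
            scores[i] = stay + move
    return scores
```

On the two-triangle example, the vertex sitting on the connecting edge gets the lowest score, matching the intuition that boundary vertices are the least reliable candidates for label constraints.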
Finally, we stop the algorithm if we see no further improvement in the cut or in the continuous objective and the continuous solution corresponds to a partition.

5 Experiments

We evaluate our method against a diverse selection of state-of-the-art clustering methods: spectral clustering (Spec) [7], BSpec [11], Graclus¹ [6], the NMF-based approaches PNMF [18], NSC [19], ONMF [20], LSD [21], NMFR [22], and MTV [13], which optimizes (5). We used the publicly available code [22, 13] with default settings. We run our method using 5 random initializations and 7 initializations based on the spectral clustering solution, similar to [13] (who use 30 such initializations). In addition to the datasets provided in [13], we also selected a variety of datasets from the UCI repository, shown below. For all the datasets not in [13], symmetric k-NN graphs are built with Gaussian weights exp(−s ∥x − y∥² / min{σ²_{x,k}, σ²_{y,k}}), where σ_{x,k} is the k-NN distance of point x. We chose the parameters s and k in a method-independent way by testing for each dataset several graphs using all the methods over different choices of k ∈ {3, 5, 7, 10, 15, 20, 40, 60, 80, 100} and s ∈ {0.1, 1, 4}. The best choice in terms of the clustering error across all methods and datasets is s = 1, k = 15.

             Iris  wine  vertebral  ecoli  4moons  webkb4  optdigits  USPS  pendigits  20news  MNIST
# vertices   150   178   310        336    4000    4196    5620       9298  10992      19928   70000
# classes    3     3     3          6      4       4       10         10    10         20      10

Quantitative results: In our first experiment we evaluate our method in terms of solving the balanced k-cut problem for various balancing functions, datasets and graph parameters. The following table reports the fraction of times a method achieves the best as well as the strictly best balanced k-cut over all constructed graphs and datasets (in total 30 graphs per dataset).
For reference, we also report in italic the cuts obtained by the other clustering methods, although they do not directly minimize this criterion; methods that directly optimize the criterion are shown in normal font. Our algorithm can handle all balancing functions and significantly outperforms all other methods across all criteria. For the ratio and normalized cut cases we achieve better results than [7, 11, 6], which directly optimize these criteria. This shows that greedy recursive bi-partitioning badly affects the performance of [11], which otherwise was shown to obtain the best cuts on several benchmark datasets [23]. This further shows the need for methods that directly minimize the multi-cut. It is striking that the competing method of [13], which directly minimizes the asymmetric ratio cut, is beaten significantly by Graclus as well as by our method. As this clear trend is less visible in the qualitative experiments, we suspect that extreme graph parameters lead to fast convergence to a degenerate solution.
                              Ours   MTV    BSpec  Spec   Graclus  PNMF   NSC    ONMF   LSD    NMFR
RCC-asym   Best (%)           80.54  25.50  23.49   7.38  38.26    2.01   5.37   2.01   4.03   1.34
           Strictly Best (%)  44.97  10.74   1.34   0.00   4.70    0.00   0.00   0.00   0.00   0.00
RCC-sym    Best (%)           94.63   8.72  19.46   6.71  37.58    0.67   4.03   0.00   0.67   0.67
           Strictly Best (%)  61.74   0.00   0.67   0.00   4.70    0.00   0.00   0.00   0.00   0.00
NCC-asym   Best (%)           93.29  13.42  20.13  10.07  38.26    0.67   5.37   2.01   4.70   2.01
           Strictly Best (%)  56.38   2.01   0.00   0.00   2.01    0.00   0.00   0.67   0.00   1.34
NCC-sym    Best (%)           98.66  10.07  20.81   9.40  40.27    1.34   4.03   0.67   3.36   1.34
           Strictly Best (%)  59.06   0.00   0.00   0.00   1.34    0.00   0.00   0.00   0.00   0.00
Rcut       Best (%)           85.91   7.38  20.13  10.07  32.89    0.67   4.03   0.00   1.34   1.34
           Strictly Best (%)  58.39   0.00   2.68   2.01   8.72    0.00   0.00   0.00   0.00   0.67
Ncut       Best (%)           95.97  10.07  20.13   9.40  37.58    1.34   4.70   0.67   3.36   0.67
           Strictly Best (%)  61.07   0.00   0.00   0.00   4.03    0.00   0.00   0.00   0.00   0.00

Qualitative results: In the following table, we report the clustering errors and the balanced k-cuts obtained by all methods using the graphs built with k = 15, s = 1 for all datasets. As the main goal is to compare to [13], we choose their balancing function (RCC-asym). Again, our method always achieved the best cuts across all datasets. In three cases, the best cut also corresponds to the best clustering performance. In the case of vertebral, 20news, and webkb4 the best cuts actually result in high errors. However, we see in our next experiment that integrating ground-truth label information helps in these cases to improve the clustering performance significantly.

¹Since [6], a multi-level algorithm directly minimizing Rcut/Ncut, has been shown to be superior to METIS [17], we do not compare with [17].
                  Iris    wine    vertebral  ecoli   4moons  webkb4  optdigits  USPS    pendigits  20news  MNIST
BSpec    Err (%)  23.33   37.64   50.00      19.35   36.33   60.46   11.30      20.09   17.59      84.21   11.82
         BCut      1.495   6.417   1.890      2.550   0.634   1.056   0.386      0.822   0.081      0.966   0.471
Spec     Err (%)  22.00   20.22   48.71      14.88   31.45   60.32    7.81      21.05   16.75      79.10   22.83
         BCut      1.783   5.820   1.950      2.759   0.917   1.520   0.442      0.873   0.141      1.170   0.707
PNMF     Err (%)  22.67   27.53   50.00      16.37   35.23   60.94   10.37      24.07   17.93      66.00   12.80
         BCut      1.508   4.916   2.250      2.652   0.737   3.520   0.548      1.180   0.415      2.924   0.934
NSC      Err (%)  23.33   17.98   50.00      14.88   32.05   59.49    8.24      20.53   19.81      78.86   21.27
         BCut      1.518   5.140   2.046      2.754   0.933   3.566   0.482      0.850   0.101      2.233   0.688
ONMF     Err (%)  23.33   28.09   50.65      16.07   35.35   60.94   10.37      24.14   22.82      69.02   27.27
         BCut      1.518   4.881   2.371      2.633   0.725   3.621   0.548      1.183   0.548      3.058   1.575
LSD      Err (%)  23.33   17.98   39.03      18.45   35.68   47.93    8.42      22.68   13.90      67.81   24.49
         BCut      1.518   5.399   2.557      2.523   0.782   2.082   0.483      0.918   0.188      2.056   0.959
NMFR     Err (%)  22.00   11.24   38.06      22.92   36.33   40.73    2.08      22.17   13.13      39.97   fail
         BCut      1.627   4.318   2.713      2.556   0.840   1.467   0.369      0.992   0.240      1.241
Graclus  Err (%)  23.33    8.43   49.68      16.37    0.45   39.97    1.67      19.75   10.93      60.69    2.43
         BCut      1.534   4.293   1.890      2.414   0.589   1.581   0.350      0.815   0.092      1.431   0.440
MTV      Err (%)  22.67   18.54   34.52      22.02    7.72   48.40    4.11      15.13   20.55      72.18    3.77
         BCut      1.508   5.556   2.433      2.500   0.774   2.346   0.374      0.940   0.193      3.291   0.458
Ours     Err (%)  23.33    6.74   50.00      16.96    0.45   60.46    1.71      19.72   19.95      79.51    2.37
         BCut      1.495   4.168   1.890      2.399   0.589   1.056   0.350      0.802   0.079      0.895   0.439

Transductive setting: We evaluate our method against [13] in a transductive setting. As in [13], we randomly sample either one label or a fixed percentage of labels per class from the ground truth. We report clustering errors and cuts (RCC-asym) for both methods for different choices of labels. For the label experiments, their initialization strategy seems to work better, as the cuts improve compared to the unlabeled case.
However, observe that in some cases their method seems to fail completely (Iris and 4moons for one label per class).

Labels            Iris    wine    vertebral  ecoli   4moons  webkb4  optdigits  USPS    pendigits  20news  MNIST
1     MTV  Err (%)  33.33   9.55   42.26      13.99   35.75   51.98   1.69       12.91   14.49      50.96   2.45
           BCut      3.855  4.288   2.244      2.430   0.723   1.596  0.352       0.846   0.127      1.286  0.439
      Ours Err (%)  22.67   8.99   50.32      15.48    0.57   45.11   1.69       12.98   10.98      68.53   2.36
           BCut      1.571  4.234   2.265      2.432   0.610   1.471  0.352       0.812   0.113      1.057  0.439
1%    MTV  Err (%)  33.33  10.67   39.03      14.29    0.45   48.38   1.67        5.21    7.75      40.18   2.41
           BCut      3.855  4.277   2.300      2.429   0.589   1.584  0.354       0.789   0.129      1.208  0.443
      Ours Err (%)  22.67   6.18   41.29      13.99    0.45   41.63   1.67        5.13    7.75      37.42   2.33
           BCut      1.571  4.220   2.288      2.419   0.589   1.462  0.354       0.789   0.128      1.157  0.442
5%    MTV  Err (%)  17.33   7.87   40.65      14.58    0.45   40.09   1.51        4.85    1.79      31.89   2.18
           BCut      1.685  4.330   2.701      2.462   0.589   1.763  0.369       0.812   0.188      1.254  0.455
      Ours Err (%)  17.33   6.74   37.10      13.99    0.45   38.04   1.53        4.85    1.76      30.07   2.18
           BCut      1.685  4.224   2.724      2.461   0.589   1.719  0.369       0.811   0.188      1.210  0.455
10%   MTV  Err (%)  18.67   7.30   39.03      13.39    0.38   40.63   1.41        4.19    1.24      27.80   2.03
           BCut      1.954  4.332   3.187      2.776   0.592   2.057  0.377       0.833   0.197      1.346  0.465
      Ours Err (%)  14.67   6.74   33.87      13.10    0.38   41.97   1.41        4.25    1.24      26.55   2.02
           BCut      1.960  4.194   3.134      2.778   0.592   1.972  0.377       0.833   0.197      1.314  0.465

6 Conclusion

We presented a framework for directly minimizing the balanced k-cut problem based on a new tight continuous relaxation. Apart from the standard ratio/normalized cut, our method can also handle new application-specific balancing functions. Moreover, in contrast to a recursive splitting approach [24], our method enables the direct integration of prior information available in the form of must/cannot-link constraints, which is an interesting topic for future research.
Finally, the monotonic descent algorithm proposed for the difficult sum-of-ratios problem is another key contribution of the paper that is of independent interest.

References
[1] W. E. Donath and A. J. Hoffman. Lower bounds for the partitioning of graphs. IBM J. Res. Develop., 17:420–425, 1973.
[2] A. Pothen, H. D. Simon, and K.-P. Liou. Partitioning sparse matrices with eigenvectors of graphs. SIAM J. Matrix Anal. Appl., 11(3):430–452, 1990.
[3] L. Hagen and A. B. Kahng. Fast spectral methods for ratio cut partitioning and clustering. In ICCAD, pages 10–13, 1991.
[4] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 22:888–905, 2000.
[5] A. Ng, M. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001.
[6] I. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: A multilevel approach. IEEE Trans. Pattern Anal. Mach. Intell., pages 1944–1957, 2007.
[7] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17:395–416, 2007.
[8] S. Guattery and G. Miller. On the quality of spectral separators. SIAM J. Matrix Anal. Appl., 19:701–719, 1998.
[9] A. Szlam and X. Bresson. Total variation and Cheeger cuts. In ICML, pages 1039–1046, 2010.
[10] M. Hein and T. Bühler. An inverse power method for nonlinear eigenproblems with applications in 1-spectral clustering and sparse PCA. In NIPS, pages 847–855, 2010.
[11] M. Hein and S. Setzer. Beyond spectral clustering - tight relaxations of balanced graph cuts. In NIPS, pages 2366–2374, 2011.
[12] X. Bresson, T. Laurent, D. Uminsky, and J. H. von Brecht. Convergence and energy landscape for Cheeger cut clustering. In NIPS, pages 1394–1402, 2012.
[13] X. Bresson, T. Laurent, D. Uminsky, and J. H. von Brecht. Multiclass total variation clustering. In NIPS, pages 1421–1429, 2013.
[14] F. Bach. Learning with submodular functions: A convex optimization perspective.
Foundations and Trends in Machine Learning, 6(2-3):145–373, 2013.
[15] A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. J. of Math. Imaging and Vision, 40:120–145, 2011.
[16] T. Pock and A. Chambolle. Diagonal preconditioning for first order primal-dual algorithms in convex optimization. In ICCV, pages 1762–1769, 2011.
[17] G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput., 20(1):359–392, 1998.
[18] Z. Yang and E. Oja. Linear and nonlinear projective nonnegative matrix factorization. IEEE Transactions on Neural Networks, 21(5):734–749, 2010.
[19] C. Ding, T. Li, and M. I. Jordan. Nonnegative matrix factorization for combinatorial optimization: Spectral clustering, graph matching, and clique finding. In ICDM, pages 183–192, 2008.
[20] C. Ding, T. Li, W. Peng, and H. Park. Orthogonal nonnegative matrix tri-factorizations for clustering. In KDD, pages 126–135, 2006.
[21] R. Arora, M. R. Gupta, A. Kapila, and M. Fazel. Clustering by left-stochastic matrix factorization. In ICML, pages 761–768, 2011.
[22] Z. Yang, T. Hao, O. Dikmen, X. Chen, and E. Oja. Clustering by nonnegative matrix factorization using graph random walk. In NIPS, pages 1088–1096, 2012.
[23] A. J. Soper, C. Walshaw, and M. Cross. A combined evolutionary search and multilevel optimisation approach to graph-partitioning. J. of Global Optimization, 29(2):225–241, 2004.
[24] S. S. Rangapuram and M. Hein. Constrained 1-spectral clustering. In AISTATS, pages 1143–1151, 2012.
Discrete Graph Hashing Wei Liu† Cun Mu‡ Sanjiv Kumar♯ Shih-Fu Chang‡ †IBM T. J. Watson Research Center ‡Columbia University ♯Google Research weiliu@us.ibm.com cm3052@columbia.edu sfchang@ee.columbia.edu sanjivk@google.com Abstract Hashing has emerged as a popular technique for fast nearest neighbor search in gigantic databases. In particular, learning based hashing has received considerable attention due to its appealing storage and search efficiency. However, the performance of most unsupervised learning based hashing methods deteriorates rapidly as the hash code length increases. We argue that the degraded performance is due to inferior optimization procedures used to achieve discrete binary codes. This paper presents a graph-based unsupervised hashing model to preserve the neighborhood structure of massive data in a discrete code space. We cast the graph hashing problem into a discrete optimization framework which directly learns the binary codes. A tractable alternating maximization algorithm is then proposed to explicitly deal with the discrete constraints, yielding high-quality codes to well capture the local neighborhoods. Extensive experiments performed on four large datasets with up to one million samples show that our discrete optimization based graph hashing method obtains superior search accuracy over state-of-the-art unsupervised hashing methods, especially for longer codes. 1 Introduction During the past few years, hashing has become a popular tool for tackling a variety of large-scale computer vision and machine learning problems including object detection [6], object recognition [35], image retrieval [22], linear classifier training [19], active learning [24], kernel matrix approximation [34], multi-task learning [36], etc. In these problems, hashing is exploited to map similar data points to adjacent binary hash codes, thereby accelerating similarity search via highly efficient Hamming distances in the code space. 
In practice, hashing with short codes, say about one hundred bits per sample, can lead to significant gains in both storage and computation. This scenario is called Compact Hashing in the literature, which is the focus of this paper. Early endeavors in hashing concentrated on using random permutations or projections to construct randomized hash functions. The well-known representatives include Min-wise Hashing (MinHash) [3] and Locality-Sensitive Hashing (LSH) [2]. MinHash estimates the Jaccard set similarity and is improved by b-bit MinHash [18]. LSH can accommodate a variety of distance or similarity metrics such as ℓp distances for p ∈(0, 2], cosine similarity [4], and kernel similarity [17]. Due to randomized hashing, one needs more bits per hash table to achieve high precision. This typically reduces recall, and multiple hash tables are thus required to achieve satisfactory accuracy of retrieved nearest neighbors. The overall number of hash bits used in an application can easily run into thousands. Beyond the data-independent randomized hashing schemes, a recent trend in machine learning is to develop data-dependent hashing techniques that learn a set of compact hash codes using a training set. Binary codes have been popular in this scenario for their simplicity and efficiency in computation. The compact hashing scheme can accomplish almost constant-time nearest neighbor search, after encoding the whole dataset to short binary codes and then aggregating them into a hash table. Additionally, compact hashing is particularly beneficial to storing massive-scale data. For example, saving one hundred million samples each with 100 binary bits costs less than 1.5 GB, which 1 can easily fit in memory. To create effective compact codes, several methods have been proposed. 
These include the unsupervised methods, e.g., Iterative Quantization [9], Isotropic Hashing [14], Spectral Hashing [38, 37], and Anchor Graph Hashing [23]; the semi-supervised methods, e.g., Weakly-Supervised Hashing [25]; and the supervised methods, e.g., Semantic Hashing [30], Binary Reconstruction Embeddings [16], Minimal Loss Hashing [27], Kernel-based Supervised Hashing [22], Hamming Distance Metric Learning [28], and Column Generation Hashing [20]. This paper focuses on the problem of unsupervised learning of compact hash codes. Here we argue that most unsupervised hashing methods suffer from inadequate search performance, particularly low recall, when applied to learn relatively longer codes (say around 100 bits) in order to achieve higher precision. The main reason is that the discrete (binary) constraints, which should be imposed on the codes during learning itself, have not been treated adequately. Most existing methods either neglect the discrete constraints, like PCA Hashing and Isotropic Hashing, or drop the constraints to solve a relaxed optimization and afterwards round the continuous solutions to obtain the binary codes, like Spectral Hashing and Anchor Graph Hashing. Crucially, we find that the hashing performance of the codes obtained by such relaxation + rounding schemes deteriorates rapidly as the code length increases (see Fig. 2). Until now, very few approaches have worked directly in the discrete code space. Parameter-Sensitive Hashing [31] and Binary Reconstruction Embeddings (BRE) learn the parameters of predefined hash functions by progressively tuning the codes generated by such functions; Iterative Quantization (ITQ) iteratively learns the codes by explicitly imposing the binary constraints. While ITQ and BRE work in the discrete space to generate the hash codes, they do not capture well the local neighborhoods of the raw data in the code space. ITQ targets minimizing the quantization error between the codes and the PCA-reduced data.
BRE trains the Hamming distances to mimic the ℓ2 distances among a limited number of sampled data points, but cannot incorporate the entire dataset into training due to its expensive optimization procedure. In this paper, we leverage the concept of Anchor Graphs [21] to capture the neighborhood structure inherent in a given massive dataset, and then formulate a graph-based hashing model over the whole dataset. This model hinges on a novel discrete optimization procedure to achieve nearly balanced and uncorrelated hash bits, where the binary constraints are explicitly imposed and handled. To tackle the discrete optimization in a computationally tractable manner, we propose an alternating maximization algorithm which consists of solving two interesting subproblems. For brevity, we call the proposed discrete optimization based graph hashing method Discrete Graph Hashing (DGH). Through extensive experiments carried out on four benchmark datasets with sizes up to one million, we show that DGH consistently obtains higher search accuracy than state-of-the-art unsupervised hashing methods, especially when relatively longer codes are learned.

2 Discrete Graph Hashing

First we define a few main notations used throughout this paper: sgn(x) denotes the sign function, which returns 1 for x > 0 and −1 otherwise; I_n denotes the n×n identity matrix; 1 denotes a vector of all ones; 0 denotes a vector or matrix of all zeros; diag(c) represents a diagonal matrix with the elements of vector c as its diagonal entries; tr(·), ∥·∥_F, ∥·∥_1, and ⟨·, ·⟩ denote the matrix trace, the matrix Frobenius norm, the ℓ1 norm, and the inner product, respectively.

Anchor Graphs. In the discrete graph hashing model, we need to choose a neighborhood graph that can easily scale to massive data points. For simplicity and efficiency, we choose Anchor Graphs [21], which involve no special indexing scheme but still have construction time linear in the number of data points.
An anchor graph uses a small set of m points (called anchors), U = {u_j ∈ R^d}_{j=1}^m, to approximate the neighborhood structure underlying the input dataset X = {x_i ∈ R^d}_{i=1}^n. Affinities (or similarities) of all n data points are computed with respect to these m anchors in linear time O(dmn), where m ≪ n. The true affinity matrix A° ∈ R^{n×n} is then approximated using these affinities. Specifically, an anchor graph leverages a nonlinear data-to-anchor mapping (R^d → R^m)

  z(x) = [δ_1 exp(−D²(x, u_1)/t), · · · , δ_m exp(−D²(x, u_m)/t)]^⊤ / M,

where δ_j ∈ {1, 0} and δ_j = 1 if and only if anchor u_j is one of the s ≪ m closest anchors of x in U according to some distance function D(·,·) (e.g., the ℓ2 distance), t > 0 is the bandwidth parameter, and M = ∑_{j=1}^m δ_j exp(−D²(x, u_j)/t), so that ∥z(x)∥_1 = 1. Then, the anchor graph builds a data-to-anchor affinity matrix Z = [z(x_1), · · · , z(x_n)]^⊤ ∈ R^{n×m} that is highly sparse. Finally, the anchor graph gives a data-to-data affinity matrix A = ZΛ^{−1}Z^⊤ ∈ R^{n×n}, where Λ = diag(Z^⊤1) ∈ R^{m×m}. Such an affinity matrix empirically approximates the true affinity matrix A°, and has two nice characteristics: 1) A is a low-rank positive semidefinite (PSD) matrix with rank at most m, so the anchor graph does not need to compute it explicitly but instead keeps its low-rank form and only saves Z and Λ in memory; 2) A has unit row and column sums, so the resulting graph Laplacian is L = I_n − A. These two characteristics permit convenient and efficient matrix manipulations on A, as shown later on. We also define an anchor graph affinity function A(x, x′) = z^⊤(x)Λ^{−1}z(x′), in which (x, x′) is any pair of points in R^d.

Learning Model. The purpose of unsupervised hashing is to learn to map each data point x_i to an r-bit binary hash code b(x_i) ∈ {1, −1}^r, given a training dataset X = {x_i}_{i=1}^n. For simplicity, let us denote b(x_i) by b_i, and the corresponding code matrix by B = [b_1, · · · , b_n]^⊤ ∈ {1, −1}^{n×r}.
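The anchor graph construction above can be sketched directly from the formulas; dense arrays are used here for clarity (in practice Z is kept sparse and A is never formed explicitly), and the function name is ours.

```python
import numpy as np

# Sketch of the anchor graph: data-to-anchor mapping z(x) over the s
# closest of m anchors, row-stochastic matrix Z, and the implicit
# low-rank affinity A = Z Lambda^{-1} Z^T.

def anchor_graph(X, U, s=2, t=1.0):
    n, m = X.shape[0], U.shape[0]
    D2 = ((X[:, None, :] - U[None, :, :]) ** 2).sum(-1)   # D^2(x_i, u_j)
    Z = np.zeros((n, m))
    for i in range(n):
        nn = np.argsort(D2[i])[:s]                        # s closest anchors
        Z[i, nn] = np.exp(-D2[i, nn] / t)
        Z[i] /= Z[i].sum()                                # ||z(x_i)||_1 = 1
    Lam_inv = 1.0 / Z.sum(axis=0)                         # Lambda = diag(Z^T 1)
    A = (Z * Lam_inv) @ Z.T                               # formed only to check
    return Z, Lam_inv, A                                  # keep Z, Lam in practice
```

Because each row of Z sums to one and Λ = diag(Z^⊤1), the resulting A is symmetric PSD with unit row sums, so L = I_n − A is a valid graph Laplacian.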
The standard graph-based hashing framework, proposed by [38], aims to learn hash codes such that neighbors in the input space have small Hamming distances in the code space. This is formulated as:

  min_B (1/2) ∑_{i,j=1}^n ∥b_i − b_j∥² A°_{ij} = tr(B^⊤L°B),  s.t. B ∈ {±1}^{n×r}, 1^⊤B = 0, B^⊤B = nI_r,   (1)

where L° is the graph Laplacian based on the true affinity matrix A°.¹ The constraint 1^⊤B = 0 is imposed to maximize the information from each hash bit, which occurs when each bit leads to a balanced partitioning of the dataset X. The other constraint B^⊤B = nI_r makes the r bits mutually uncorrelated to minimize the redundancy among them. Problem (1) is NP-hard, and Weiss et al. [38] therefore solved a relaxed problem by dropping the discrete (binary) constraint B ∈ {±1}^{n×r} and making the simplifying assumption that the data is distributed uniformly. We leverage the anchor graph to replace L° by the anchor graph Laplacian L = I_n − A. Hence, the objective in Eq. (1) can be rewritten as a maximization problem:

  max_B tr(B^⊤AB),  s.t. B ∈ {1, −1}^{n×r}, 1^⊤B = 0, B^⊤B = nI_r.   (2)

In [23], the solution to this problem is obtained via spectral relaxation [33], in which B is relaxed to a matrix of reals, followed by a thresholding step (with threshold 0) that yields the final discrete B. Unfortunately, this procedure may result in poor codes due to amplification of the error caused by the relaxation as the code length r increases. To this end, we propose to solve directly for the binary codes B without resorting to such error-prone relaxations. Let us define the set Ω = {Y ∈ R^{n×r} | 1^⊤Y = 0, Y^⊤Y = nI_r}. Then we formulate a more general graph hashing framework which softens the last two hard constraints in Eq. (2) as:

  max_B tr(B^⊤AB) − (ρ/2) dist²(B, Ω),  s.t. B ∈ {1, −1}^{n×r},   (3)

where dist(B, Ω) = min_{Y∈Ω} ∥B − Y∥_F measures the distance from a matrix B to the set Ω, and ρ ≥ 0 is a tuning parameter. If problem (2) is feasible, we can enforce dist(B, Ω) = 0 in Eq.
(3) by imposing a very large ρ, thereby turning problem (3) into problem (2). However, in Eq. (3) we allow a certain discrepancy between B and Ω (controlled by ρ), which makes problem (3) more flexible. Since tr(B^⊤B) = tr(Y^⊤Y) = nr, problem (3) can be equivalently transformed into the following problem:

  max_{B,Y} Q(B, Y) := tr(B^⊤AB) + ρ tr(B^⊤Y),
  s.t. B ∈ {1, −1}^{n×r}, Y ∈ R^{n×r}, 1^⊤Y = 0, Y^⊤Y = nI_r.   (4)

We call the code learning model formulated in Eq. (4) Discrete Graph Hashing (DGH). Because concurrently imposing B ∈ {±1}^{n×r} and B ∈ Ω would make graph hashing computationally intractable, DGH does not pursue the latter constraint but penalizes the distance from the target code matrix B to Ω. Different from the previous graph hashing methods, which discard the discrete constraint B ∈ {±1}^{n×r} to obtain a continuously relaxed B, our DGH model enforces this constraint to directly achieve a discrete B. As a result, DGH yields nearly balanced and uncorrelated binary bits. In Section 3, we will propose a computationally tractable optimization algorithm to solve this discrete programming problem in Eq. (4).

¹The spectral hashing method in [38] did not compute the true affinity matrix A° because of the scalability issue, but instead used a complete graph built over 1D PCA embeddings.

Algorithm 1 Signed Gradient Method (SGM) for the B-Subproblem
  Input: B^(0) ∈ {1, −1}^{n×r} and Y ∈ Ω. j := 0;
  repeat B^(j+1) := sgn(C(2AB^(j) + ρY, B^(j))), j := j + 1, until B^(j) converges.
  Output: B = B^(j).

Out-of-Sample Hashing. Since a hashing scheme should be able to generate the hash code for any data point q ∈ R^d beyond the points in the training set X, here we address the out-of-sample extension of the DGH model. Similar to the objective in Eq.
(1), we minimize the Hamming distances between a novel data point q and its neighbors (revealed by the affinity function A) in X as

  b(q) ∈ arg min_{b(q)∈{±1}^r} (1/2) ∑_{i=1}^n ∥b(q) − b*_i∥² A(q, x_i) = arg max_{b(q)∈{±1}^r} ⟨b(q), (B*)^⊤ZΛ^{−1}z(q)⟩,

where B* = [b*_1, · · · , b*_n]^⊤ is the solution of problem (4). After pre-computing the matrix W = (B*)^⊤ZΛ^{−1} ∈ R^{r×m} in the training phase, one can compute the hash code b*(q) = sgn(Wz(q)) for any novel data point q very efficiently.

3 Alternating Maximization

The graph hashing problem in Eq. (4) is essentially a nonlinear mixed-integer program involving both discrete variables in B and continuous variables in Y. It turns out that problem (4) is generally NP-hard and also difficult to approximate. Specifically, since the Max-Cut problem is a special case of problem (4) when ρ = 0 and r = 1, there exists no polynomial-time algorithm which can achieve the global optimum, or even an approximate solution with objective value beyond 16/17 of the global maximum, unless P = NP [11]. To this end, we propose a tractable alternating maximization algorithm for problem (4), leading to good hash codes which are demonstrated to exhibit superior search performance through the extensive experiments conducted in Section 5. The proposed algorithm proceeds by alternately solving the B-subproblem

  max_{B∈{±1}^{n×r}} f(B) := tr(B^⊤AB) + ρ tr(Y^⊤B)   (5)

and the Y-subproblem

  max_{Y∈R^{n×r}} tr(B^⊤Y),  s.t. 1^⊤Y = 0, Y^⊤Y = nI_r.   (6)

In what follows, we propose an iterative ascent procedure called the Signed Gradient Method for subproblem (5) and derive a closed-form optimal solution to subproblem (6). As we show, the alternating algorithm is provably convergent. Schemes for choosing good initializations are also discussed. Due to the space limit, all proofs of the lemmas, theorems and propositions presented in this section are placed in the supplemental material.

3.1 B-Subproblem

We tackle subproblem (5) with a simple iterative ascent procedure described in Algorithm 1.
In the j-th iteration, we define a local function f̂_j(B) that linearizes f(B) at the point B^(j), and employ f̂_j(B) as a surrogate of f(B) for discrete optimization. Given B^(j), the next discrete point is derived as B^(j+1) ∈ arg max_{B∈{±1}^{n×r}} f̂_j(B) := f(B^(j)) + ⟨∇f(B^(j)), B − B^(j)⟩. Note that since ∇f(B^(j)) may include zero entries, multiple solutions for B^(j+1) could exist. To avoid this ambiguity, we introduce the function

  C(x, y) = x if x ≠ 0,  and  C(x, y) = y if x = 0,

to specify the following update:

  B^(j+1) := sgn(C(∇f(B^(j)), B^(j))) = sgn(C(2AB^(j) + ρY, B^(j))),   (7)

in which C is applied element-wise, and no update is carried out on the entries where ∇f(B^(j)) vanishes. Due to the PSD property of the matrix A, f is a convex function and thus f(B) ≥ f̂_j(B) for any B. Taking advantage of the fact that f(B^(j+1)) ≥ f̂_j(B^(j+1)) ≥ f̂_j(B^(j)) ≡ f(B^(j)), Lemma 1 ensures that both the sequence of cost values {f(B^(j))} and the sequence of iterates {B^(j)} converge.

Algorithm 2 Discrete Graph Hashing (DGH)
  Input: B_0 ∈ {1, −1}^{n×r} and Y_0 ∈ Ω. k := 0;
  repeat B_{k+1} := SGM(B_k, Y_k), Y_{k+1} ∈ Φ(JB_{k+1}), k := k + 1, until Q(B_k, Y_k) converges.
  Output: B* = B_k, Y* = Y_k.
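The update (7) of Algorithm 1 can be sketched in a few lines; a small dense A is used for clarity (on an anchor graph one would multiply through the factored form Z Λ^{-1} Z^⊤), and the function names are ours.

```python
import numpy as np

# Sketch of the Signed Gradient Method (Algorithm 1) for the
# B-subproblem (5): iterate B <- sgn(C(2AB + rho*Y, B)), keeping the
# previous sign wherever the gradient entry is exactly zero.

def sgm(A, Y, B0, rho, max_iter=100):
    B = B0.copy()
    for _ in range(max_iter):
        G = 2 * A @ B + rho * Y                   # gradient of f at B^(j)
        B_new = np.where(G != 0, np.sign(G), B)   # C(., .): old sign at zeros
        if np.array_equal(B_new, B):              # fixed point reached
            return B_new
        B = B_new
    return B

def f_obj(A, Y, B, rho):
    # f(B) = tr(B^T A B) + rho * tr(Y^T B), the objective of (5).
    return np.trace(B.T @ A @ B) + rho * np.trace(Y.T @ B)
```

On a 2-point toy problem the iterates flip to a fixed point and, consistent with Lemma 1, the objective never decreases.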
In specific, in our scenario, problem (5) is equivalent to minB∈{±1}n×r −f(B), and the linear surrogate −ˆfj is a majorization function of −f at point B(j). The majorization method was first systematically introduced by [5] to deal with multidimensional scaling problems, although the EM algorithm [7], proposed at the same time, also falls into the framework of majorization methodology. Since then, the majorization method has played an important role in various statistics problems such as multidimensional data analysis [12], hyperparameter learning [8], conditional random fields and latent likelihoods [13], and so on. 3.2 Y-Subproblem An analytical solution to subproblem (6) can be obtained with the aid of a centering matrix J = In− 1 n11⊤. Write the singular value decomposition (SVD) of JB as JB = UΣV⊤= r′ k=1 σkukv⊤ k , where r′ ≤r is the rank of JB, σ1, · · · , σr′ are the positive singular values, and U = [u1, · · · , ur′] and V = [v1, · · · , vr′] contain the left- and right-singular vectors, respectively. Then, by employing a Gram-Schmidt process, one can easily construct matrices ¯U ∈Rn×(r−r′) and ¯V ∈Rr×(r−r′) such that ¯U⊤¯U = Ir−r′, [U 1]⊤¯U = 0, and ¯V⊤¯V = Ir−r′, V⊤¯V = 02. Now we are ready to characterize a closed-form solution of the Y-subproblem by Lemma 2. Lemma 2. Y⋆= √n[U ¯U][V ¯V]⊤is an optimal solution to the Y-subproblem in Eq. (6). For notational convenience, we define the set of all matrices in the form of √n[U ¯U][V ¯V]⊤ as Φ(JB). Lemma 2 reveals that any matrix in Φ(JB) is an optimal solution to subproblem (6). In practice, to compute such an optimal Y⋆, we perform the eigendecomposition over the small r × r matrix B⊤JB to have B⊤JB = [V ¯V]  Σ2 0 0 0  [V ¯V]⊤, which gives V, ¯V, Σ, and immediately leads to U = JBVΣ−1. The matrix ¯U is initially set to a random matrix followed by the aforementioned Gram-Schmidt orthogonalization. It can be seen that Y⋆is uniquely optimal when r′ = r (i.e., JB is full column rank). 
3.3 DGH Algorithm

The proposed alternating maximization algorithm, also referred to as Discrete Graph Hashing (DGH), for solving the raw problem in Eq. (4) is summarized in Algorithm 2, in which SGM(·, ·) represents the functionality of Algorithm 1. The convergence of Algorithm 2 is guaranteed by Theorem 1, whose proof is based on the nature of the proposed alternating maximization procedure, which always generates a monotonically non-decreasing and bounded sequence.

Theorem 1. If {(B_k, Y_k)} is the sequence generated by Algorithm 2, then Q(B_{k+1}, Y_{k+1}) ≥ Q(B_k, Y_k) holds for any integer k ≥ 0, and {Q(B_k, Y_k)} converges starting from any feasible initial point (B_0, Y_0).

Initialization. Since the DGH algorithm deals with discrete and non-convex optimization, a good choice of an initial point (B_0, Y_0) is vital. Here we suggest two different initial points which are both feasible for problem (4).

²Note that when r′ = r, Ū and V̄ are nothing but 0.

Let us perform the eigendecomposition of A to obtain A = PΘP^⊤ = ∑_{k=1}^m θ_k p_k p_k^⊤, where θ_1, · · · , θ_m are the eigenvalues arranged in non-increasing order, and p_1, · · · , p_m are the corresponding normalized eigenvectors. We write Θ = diag(θ_1, · · · , θ_m) and P = [p_1, · · · , p_m]. Note that θ_1 = 1 and p_1 = (1/√n)1. The first initialization used is (Y_0 = √nH, B_0 = sgn(H)), where H = [p_2, · · · , p_{r+1}] ∈ R^{n×r}. The initial codes B_0 were used as the final codes in [23]. Alternatively, Y_0 can be allowed to consist of orthonormal columns within the column space of H, i.e., Y_0 = √nHR for some orthogonal matrix R ∈ R^{r×r}. We can obtain R along with B_0 by solving a new discrete optimization problem:

  max_{R,B_0} tr(R^⊤H^⊤AB_0),  s.t. R ∈ R^{r×r}, RR^⊤ = I_r, B_0 ∈ {1, −1}^{n×r},   (8)

which is motivated by the proposition below.

Proposition 1. For any orthogonal matrix R ∈ R^{r×r} and any binary matrix B ∈ {1, −1}^{n×r}, we have tr(B^⊤AB) ≥ (1/r) tr²(R^⊤H^⊤AB).

Proposition 1 implies that the optimization in Eq.
(8) can be interpreted as maximizing a lower bound of tr(B⊤AB), which is the first term of the objective Q(B, Y) in the original problem (4). We again exploit an alternating maximization procedure to solve problem (8). Noticing that AH = HΘ̂ where Θ̂ = diag(θ_2, …, θ_{r+1}), the objective in Eq. (8) equals tr(R⊤Θ̂H⊤B_0). The alternating procedure starts with R_0 = I_r and then makes the simple updates

B_0^j := sgn(HΘ̂R_j),  R_{j+1} := Ũ_j Ṽ_j⊤,  for j = 0, 1, 2, …,

where Ũ_j, Ṽ_j ∈ R^{r×r} stem from the full SVD Ũ_j Σ̃_j Ṽ_j⊤ of the matrix Θ̂H⊤B_0^j. Upon convergence, we obtain the optimized rotation R that yields the second initialization (Y_0 = √n HR, B_0 = sgn(HΘ̂R)). Empirically, we find that the second initialization typically gives a better objective value Q(B_0, Y_0) at the start than the first one, as it aims to maximize the lower bound of the first term in the objective Q. We also observe that the second initialization often results in a higher objective value Q(B∗, Y∗) at convergence (Figs. 1–2 in the supplemental material show convergence curves of Q starting from the two initial points). We refer to DGH with the first and second initializations as DGH-I and DGH-R, respectively.

Regarding the convergence property, we point out that since the DGH algorithm (Algorithm 2) works on a mixed-integer objective, it is hard to quantify convergence to a local optimum of the objective function Q. Nevertheless, this does not affect the performance of our algorithm in practice. In our experiments in Section 5, we consistently find a convergent sequence {(B_k, Y_k)} arriving at a good objective value when started from the suggested initializations.

4 Discussions

Here we analyze the space and time complexities of DGH-I/DGH-R. The space complexity is O((d + s + r)n) in the training stage and O(rn) for storing hash codes in the test stage.
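The alternating updates for problem (8) can be sketched as follows (our illustration, not the released code; the function name, tie-breaking convention, and iteration budget are ours). Each R-update is an orthogonal Procrustes step solved via the SVD of Θ̂H⊤B:

```python
import numpy as np

def dgh_r_init(H, theta, iters=50):
    """Alternating maximization for problem (8): returns the rotation R and
    the initial codes B0 = sgn(H Theta_hat R)."""
    n, r = H.shape
    Theta = np.diag(theta)                  # Theta_hat = diag(theta_2..theta_{r+1})
    R = np.eye(r)
    for _ in range(iters):
        B = np.sign(H @ Theta @ R)          # B-update: B = sgn(H Theta_hat R_j)
        B[B == 0] = 1.0                     # break sgn(0) ties to +1 (our convention)
        U, _, Vt = np.linalg.svd(Theta @ H.T @ B)   # R-update via full SVD (Procrustes)
        R = U @ Vt
    B = np.sign(H @ Theta @ R)
    B[B == 0] = 1.0
    return R, B
```

Because each substep maximizes the objective tr(R⊤Θ̂H⊤B) exactly over one block, the objective is monotonically non-decreasing along the iterations, so the returned pair scores at least as well as the starting point (R_0 = I_r, sgn(H)).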
Let T_B and T_G be the budgeted iteration numbers for optimizing the B-subproblem and the whole DGH problem, respectively. Then the training time complexity of DGH-I is O(dmn + m²n + (mT_B + sT_B + r)rT_Gn), and that of DGH-R is O(dmn + m²n + (mT_B + sT_B + r)rT_Gn + r²T_Rn), where T_R is the budgeted iteration number for seeking the initial point via Eq. (8). Note that the time for finding anchors and building the anchor graph is O(dmn), which is included in the above training times. The test time (i.e., encoding a query into an r-bit code) is O(dm + sr) for both. In our experiments, we fix m, s, T_B, T_G, T_R to constants independent of the dataset size n, and use r ≤ 128. Thus, DGH-I/DGH-R enjoy linear training time and constant test time. It is worth mentioning again that the low-rank PSD property of the anchor graph affinity matrix A is advantageous for training DGH, permitting efficient matrix computations in O(n) time, such as the eigendecomposition of A (encountered in the initializations) and multiplying A with B (encountered in solving the B-subproblem with Algorithm 1). It is interesting to point out that DGH falls into the asymmetric hashing category [26], in the sense that hash codes are generated differently for samples within the dataset and for queries outside it. Unlike most existing hashing techniques, DGH directly solves for the hash codes B∗ of the training samples via the proposed discrete optimization in Eq. (4), without relying on any explicit or predefined hash functions.
On the other hand, the hash code for any query q is induced from the solved codes B∗, leading to a hash function b∗(q) = sgn(Wz(q)) parameterized by the matrix W, which was computed using B∗. While the hashing mechanisms producing B∗ and b∗(q) are distinct, they are tightly coupled and adapt to specific datasets. The flexibility afforded by the asymmetric hashing nature of DGH is validated through the experiments shown in the next section.

Figure 1: Hash lookup success rates for different hashing techniques on (a) CIFAR-10, (b) SUN397, (c) YouTube Faces, and (d) Tiny-1M. DGH tends to achieve nearly 100% success rates even for longer code lengths.

Figure 2: Mean F-measures of hash lookup within Hamming radius 2 for different techniques on the four datasets. DGH tends to retain good recall even for longer codes, leading to much higher F-measures than the others.
5 Experiments

We conduct large-scale similarity search experiments on four benchmark datasets: CIFAR-10 [15], SUN397 [40], YouTube Faces [39], and Tiny-1M. CIFAR-10 is a labeled subset of the 80 Million Tiny Images dataset [35], consisting of 60K images from ten object categories, with each image represented by a 512-dimensional GIST feature vector [29]. SUN397 contains about 108K images from 397 scene categories, where each image is represented by a 1,600-dimensional feature vector extracted by PCA from 12,288-dimensional Deep Convolutional Activation Features [10]. The raw YouTube Faces dataset contains 1,595 different people; we choose the 340 people who each have at least 500 images to form a subset of 370,319 face images, and represent each face image as a 1,770-dimensional LBP feature vector [1]. Tiny-1M is a one-million subset of the 80M tiny images, where each image is represented by a 384-dimensional GIST vector. In CIFAR-10, 100 images are sampled uniformly at random from each object category to form a separate test (query) set of 1K images; in SUN397, 100 images are sampled uniformly at random from each of the 18 largest scene categories to form a test set of 1.8K images; in YouTube Faces, the test set includes 3.8K face images evenly sampled from the 38 people who each have more than 2K faces; in Tiny-1M, a separate subset of 5K images randomly sampled from the 80M images is used as the test set. In the first three datasets, groundtruth neighbors are defined based on whether two samples share the same class label; in Tiny-1M, which does not have full annotations, we define the groundtruth neighbors of a given query as the samples within the top 2% of ℓ2 distances from the query in the 1M training set, so each query has 20K groundtruth neighbors.
We evaluate twelve unsupervised hashing methods: two randomized methods, LSH [2] and Kernelized LSH (KLSH) [17]; two linear projection based methods, Iterative Quantization (ITQ) [9] and Isotropic Hashing (IsoH) [14]; two spectral methods, Spectral Hashing (SH) [38] and its weighted version MDSH [37]; one manifold based method, Inductive Manifold Hashing (IMH) [32]; two existing graph-based methods, One-Layer Anchor Graph Hashing (1-AGH) and Two-Layer Anchor Graph Hashing (2-AGH) [23]; one distance preservation method, Binary Reconstructive Embeddings (BRE) [16] (unsupervised version); and our proposed discrete optimization based methods, DGH-I and DGH-R. We use the publicly available code of the competing methods and follow the conventional parameter settings therein. In particular, we use the Gaussian kernel and 300 randomly sampled exemplars (anchors) to run KLSH; IMH, 1-AGH, 2-AGH, DGH-I, and DGH-R also use m = 300 anchors (obtained by K-means clustering with 5 iterations) for fair comparison. This choice of m gives a good trade-off between hashing speed and performance. For 1-AGH, 2-AGH, DGH-I, and DGH-R, which all use anchor graphs, we adopt the same construction parameters s, t on each dataset (s = 3 and t is tuned following AGH), and the ℓ2 distance as D(·). For BRE, we uniformly randomly sample 1K and 2K training samples to train the distance preservations on CIFAR-10 & SUN397 and on YouTube Faces & Tiny-1M, respectively. For DGH-I and DGH-R, we set the penalty parameter ρ to the same value in [0.1, 5] on each dataset, and fix T_R = 100, T_B = 300, T_G = 20.

Table 1: Hamming ranking performance on YouTube Faces and Tiny-1M. "MP" denotes mean precision of the top-retrieved samples; r denotes the number of hash bits used. Training and test times (in seconds) are reported for r = 128.

| Method | YouTube Faces MP / Top-2K (r = 48 / 96 / 128) | TrainTime | TestTime | Tiny-1M MP / Top-20K (r = 48 / 96 / 128) | TrainTime | TestTime |
|---|---|---|---|---|---|---|
| ℓ2 Scan | 0.7591 | – | – | 1 | – | – |
| LSH | 0.0830 / 0.1005 / 0.1061 | 6.4 | 1.8×10⁻⁵ | 0.1155 / 0.1324 / 0.1766 | 6.1 | 1.0×10⁻⁵ |
| KLSH | 0.3982 / 0.5210 / 0.5871 | 16.1 | 4.8×10⁻⁵ | 0.3054 / 0.4105 / 0.4705 | 20.7 | 4.6×10⁻⁵ |
| ITQ | 0.7017 / 0.7493 / 0.7562 | 169.0 | 1.8×10⁻⁵ | 0.3925 / 0.4726 / 0.5052 | 297.3 | 1.0×10⁻⁵ |
| IsoH | 0.6093 / 0.6962 / 0.7058 | 73.6 | 1.8×10⁻⁵ | 0.3896 / 0.4816 / 0.5161 | 13.5 | 1.0×10⁻⁵ |
| SH | 0.5897 / 0.6655 / 0.6736 | 108.9 | 2.0×10⁻⁴ | 0.1857 / 0.1923 / 0.2079 | 61.4 | 1.6×10⁻⁴ |
| MDSH | 0.6110 / 0.6752 / 0.6795 | 118.8 | 4.9×10⁻⁵ | 0.3312 / 0.3878 / 0.3955 | 193.6 | 2.8×10⁻⁵ |
| IMH | 0.3150 / 0.3641 / 0.3889 | 92.1 | 2.3×10⁻⁵ | 0.2257 / 0.2497 / 0.2557 | 139.3 | 2.7×10⁻⁵ |
| 1-AGH | 0.7138 / 0.7571 / 0.7646 | 84.1 | 2.1×10⁻⁵ | 0.4061 / 0.4117 / 0.4107 | 141.4 | 3.4×10⁻⁵ |
| 2-AGH | 0.6727 / 0.7377 / 0.7521 | 94.7 | 3.5×10⁻⁵ | 0.3925 / 0.4099 / 0.4152 | 272.5 | 4.7×10⁻⁵ |
| BRE | 0.5564 / 0.6238 / 0.6483 | 10372.0 | 9.0×10⁻⁵ | 0.3943 / 0.4836 / 0.5218 | 8419.0 | 8.8×10⁻⁵ |
| DGH-I | 0.7086 / 0.7644 / 0.7750 | 402.6 | 2.1×10⁻⁵ | 0.4045 / 0.4865 / 0.5178 | 1769.4 | 3.3×10⁻⁵ |
| DGH-R | 0.7245 / 0.7672 / 0.7805 | 408.9 | 2.1×10⁻⁵ | 0.4208 / 0.5006 / 0.5358 | 2793.4 | 3.3×10⁻⁵ |

We employ two widely used search procedures, hash lookup and Hamming ranking, with 8 to 128 hash bits for evaluation. The Hamming ranking procedure ranks the dataset samples according to their Hamming distances to a given query, while the hash lookup procedure finds all points within a certain Hamming radius of the query. Since hash lookup can be achieved in constant time using a single hash table, it is the main focus of this work.
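For reference, hash lookup within Hamming radius 2 can be implemented with a single hash table by probing it with the query code and all of its single- and double-bit flips, i.e., 1 + r + C(r, 2) probes per query. The sketch below is our generic illustration (codes stored as Python ints), not the paper's implementation:

```python
import itertools
from collections import defaultdict

def build_table(codes):
    """Map each r-bit code (an int) to the list of database indices holding it."""
    table = defaultdict(list)
    for idx, c in enumerate(codes):
        table[c].append(idx)
    return table

def lookup_radius2(table, q, r):
    """Return indices of all database items within Hamming distance 2 of query code q."""
    hits = list(table.get(q, []))
    for i in range(r):                                  # distance 1: flip one bit
        hits += table.get(q ^ (1 << i), [])
    for i, j in itertools.combinations(range(r), 2):    # distance 2: flip two bits
        hits += table.get(q ^ (1 << i) ^ (1 << j), [])
    return hits
```

A query for which this returns an empty list corresponds to a failed lookup in the evaluation protocol.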
We carry out hash lookup within a Hamming ball of radius 2 centered on each query, and report the search recall and F-measure averaged over all queries for each dataset. Note that if table lookup fails to find any neighbors within the given radius for a query, we call it a failed query and assign it zero recall and F-measure. To quantify the failed queries, we report the hash lookup success rate, which gives the proportion of queries for which at least one neighbor is retrieved. For Hamming ranking, mean average precision (MAP) and mean precision of top-retrieved samples are computed. The hash lookup results are shown in Figs. 1–2. DGH-I/DGH-R achieve the highest (close to 100%) hash lookup success rates, with DGH-I slightly better than DGH-R. The reason is that the asymmetric hashing scheme exploited by DGH-I/DGH-R tightly links queries and database samples, providing a more adaptive out-of-sample extension than the traditional symmetric hashing schemes used by the competing methods. Also, DGH-R achieves the highest F-measure except on CIFAR-10, where DGH-I is highest and DGH-R second. The F-measures of KLSH, IsoH, SH, and BRE deteriorate quickly, dropping to very poor values (< 0.05) when r ≥ 48 due to poor recall³. Although IMH achieves good hash lookup success rates, its F-measures are much lower than those of DGH-I/DGH-R due to lower precision. MDSH produces the same hash bits as SH, so it is not included in the hash lookup experiments. DGH-I/DGH-R employ the proposed discrete optimization to yield high-quality codes that preserve the local neighborhood of each data point within a small Hamming ball, and thus obtain much higher search accuracy in F-measure and recall than SH, 1-AGH, and 2-AGH, which rely on relaxed optimizations and degrade drastically when r ≥ 48.
Finally, we report the Hamming ranking results in Table 1 and in the table in the supplemental material, which clearly show the superiority of DGH-R over the competing methods in MAP and mean precision; on the first three datasets, DGH-R even outperforms exhaustive ℓ2 scan. The training time of DGH-I/DGH-R is acceptable and faster than that of BRE, and their test time (i.e., coding time, since hash lookup time is small enough to be ignored) is comparable with that of 1-AGH.

6 Conclusion

This paper investigated a pervasive problem in most existing hashing methods: the discrete constraints are not enforced during optimization. Instead of resorting to error-prone continuous relaxations, we introduced a novel discrete optimization technique that learns the binary hash codes directly. To achieve this, we proposed a tractable alternating maximization algorithm which solves two interesting subproblems and provably converges. When working with a neighborhood graph, the proposed method yields high-quality codes that well preserve the neighborhood structure inherent in the data. Extensive experimental results on four large datasets of up to one million samples showed that our discrete optimization based graph hashing technique is highly competitive.

³ The recall results are shown in Fig. 3 of the supplemental material; they indicate that DGH-I achieves the highest recall except on YouTube Faces, where DGH-R is highest and DGH-I second.

References

[1] T. Ahonen, A. Hadid, and M. Pietikainen. Face description with local binary patterns: Application to face recognition. TPAMI, 28(12):2037–2041, 2006.
[2] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, 51(1):117–122, 2008.
[3] A. Z. Broder, M. Charikar, A. M. Frieze, and M. Mitzenmacher. Min-wise independent permutations. In Proc. STOC, 1998.
[4] M. Charikar. Similarity estimation techniques from rounding algorithms. In Proc. STOC, 2002.
[5] J. de Leeuw.
Applications of convex analysis to multidimensional scaling. Recent Developments in Statistics, pages 133–146, 1977.
[6] T. Dean, M. A. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, and J. Yagnik. Fast, accurate detection of 100,000 object classes on a single machine. In Proc. CVPR, 2013.
[7] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[8] C.-S. Foo, C. B. Do, and A. Y. Ng. A majorization-minimization algorithm for (multiple) hyperparameter learning. In Proc. ICML, 2009.
[9] Y. Gong, S. Lazebnik, A. Gordo, and F. Perronnin. Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval. TPAMI, 35(12):2916–2929, 2013.
[10] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In Proc. ECCV, 2014.
[11] J. Hastad. Some optimal inapproximability results. Journal of the ACM, 48(4):798–859, 2001.
[12] W. J. Heiser. Convergent computation by iterative majorization: theory and applications in multidimensional data analysis. Recent Advances in Descriptive Multivariate Analysis, pages 157–189, 1995.
[13] T. Jebara and A. Choromanska. Majorization for CRFs and latent likelihoods. In NIPS 25, 2012.
[14] W. Kong and W.-J. Li. Isotropic hashing. In NIPS 25, 2012.
[15] A. Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009.
[16] B. Kulis and T. Darrell. Learning to hash with binary reconstructive embeddings. In NIPS 22, 2009.
[17] B. Kulis and K. Grauman. Kernelized locality-sensitive hashing. TPAMI, 34(6):1092–1104, 2012.
[18] P. Li and A. C. Konig. Theory and applications of b-bit minwise hashing. Communications of the ACM, 54(8):101–109, 2011.
[19] P. Li, A. Shrivastava, J. Moore, and A. C. Konig. Hashing algorithms for large-scale learning. In NIPS 24, 2011.
[20] X. Li, G. Lin, C. Shen, A.
van den Hengel, and A. R. Dick. Learning hash functions using column generation. In Proc. ICML, 2013.
[21] W. Liu, J. He, and S.-F. Chang. Large graph construction for scalable semi-supervised learning. In Proc. ICML, 2010.
[22] W. Liu, J. Wang, R. Ji, Y.-G. Jiang, and S.-F. Chang. Supervised hashing with kernels. In Proc. CVPR, 2012.
[23] W. Liu, J. Wang, S. Kumar, and S.-F. Chang. Hashing with graphs. In Proc. ICML, 2011.
[24] W. Liu, J. Wang, Y. Mu, S. Kumar, and S.-F. Chang. Compact hyperplane hashing with bilinear functions. In Proc. ICML, 2012.
[25] Y. Mu, J. Shen, and S. Yan. Weakly-supervised hashing in kernel space. In Proc. CVPR, 2010.
[26] B. Neyshabur, P. Yadollahpour, Y. Makarychev, R. Salakhutdinov, and N. Srebro. The power of asymmetry in binary hashing. In NIPS 26, 2013.
[27] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. In Proc. ICML, 2011.
[28] M. Norouzi, D. J. Fleet, and R. Salakhutdinov. Hamming distance metric learning. In NIPS 25, 2012.
[29] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42(3):145–175, 2001.
[30] R. Salakhutdinov and G. Hinton. Semantic hashing. International Journal of Approximate Reasoning, 50(7):969–978, 2009.
[31] G. Shakhnarovich, P. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. In Proc. ICCV, 2003.
[32] F. Shen, C. Shen, Q. Shi, A. van den Hengel, and Z. Tang. Inductive hashing on manifolds. In Proc. CVPR, 2013.
[33] J. Shi and J. Malik. Normalized cuts and image segmentation. TPAMI, 22(8):888–905, 2000.
[34] Q. Shi, J. Petterson, G. Dror, J. Langford, A. Smola, and S. V. N. Vishwanathan. Hash kernels for structured data. JMLR, 10:2615–2637, 2009.
[35] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: a large dataset for non-parametric object and scene recognition. TPAMI, 30(11):1958–1970, 2008.
[36] K. Q. Weinberger, A. Dasgupta, J. Langford, A. J. Smola, and J. Attenberg.
Feature hashing for large scale multitask learning. In Proc. ICML, 2009.
[37] Y. Weiss, R. Fergus, and A. Torralba. Multidimensional spectral hashing. In Proc. ECCV, 2012.
[38] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS 21, 2008.
[39] L. Wolf, T. Hassner, and I. Maoz. Face recognition in unconstrained videos with matched background similarity. In Proc. CVPR, 2011.
[40] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In Proc. CVPR, 2010.
Orbit Regularization

Renato Negrinho
Instituto de Telecomunicações
Instituto Superior Técnico
1049–001 Lisboa, Portugal
renato.negrinho@gmail.com

André F. T. Martins∗
Instituto de Telecomunicações
Instituto Superior Técnico
1049–001 Lisboa, Portugal
atm@priberam.pt

Abstract

We propose a general framework for regularization based on group-induced majorization. In this framework, a group is defined to act on the parameter space and an orbit is fixed; to control complexity, the model parameters are confined to the convex hull of this orbit (the orbitope). We recover several well-known regularizers as particular cases, and reveal a connection between the hyperoctahedral group and the recently proposed sorted ℓ1-norm. We derive the properties a group must satisfy to be amenable to optimization with conditional and projected gradient algorithms. Finally, we suggest a continuation strategy for orbit exploration, presenting simulation results for the symmetric and hyperoctahedral groups.

1 Introduction

The main motivation behind current sparse estimation methods and regularized empirical risk minimization is the principle of parsimony, which states that simple explanations should be preferred over complex ones. Traditionally, this has been done by defining a function Ω : V → R that evaluates the complexity of a model w ∈ V and trading off this quantity against a data-dependent term. The penalty function Ω is often designed to be a convex surrogate of an otherwise intractable quantity, a strategy which has led to important achievements in sparse regression [1], compressed sensing [2], and matrix completion [3], allowing parameters to be successfully recovered from highly incomplete information. Prior knowledge about the structure of the variables and the intended sparsity pattern, when available, can be taken into account when designing Ω via sparsity-inducing norms [4].
Performance bounds under different regimes have been established theoretically [5, 6], contributing to a better understanding of the success and failure modes of these techniques. In this paper, we introduce a new way to characterize the complexity of a model via the concept of group-induced majorization. Rather than regarding complexity in an absolute manner via Ω, we define it relative to a prototype model v ∈ V, by requiring that the estimated model w satisfies

w ⪯G v, (1)

where ⪯G is an ordering relation on V induced by a group G. This idea is rooted in majorization theory, a well-established field [7, 8] which, to the best of our knowledge, has never been applied to machine learning. We therefore review these concepts in §2, where we show that this formulation subsumes several well-known regularizers and motivates new ones. Then, in §3, we introduce two important properties of groups that serve as building blocks for the rest of the paper: the notions of matching function and region cones. In §4, we apply these tools to the permutation and signed permutation groups, unveiling connections with the recent sorted ℓ1-norm [9] as a byproduct. In §5 we turn to algorithmic considerations, pinpointing the group-specific operations that make a group amenable to optimization with conditional and projected gradient algorithms.

∗Also at Priberam Labs, Alameda D. Afonso Henriques, 41 - 2◦, 1000–123, Lisboa, Portugal.

Figure 1: Examples of orbitopes for the orthogonal group O(d) (left) and the hyperoctahedral group P± (right). Also shown are the corresponding region cones; in the case of O(d), the region cone degenerates into a ray.

A key aspect of our framework is a decoupling in which the group G captures the invariances of the regularizer, while the data-dependent term is optimized over the group orbitopes. In §6, we build on this intuition to propose a simple continuation algorithm for orbit exploration. Finally, §7 shows some simulation results, and we conclude in §8.
2 Orbitopes and Majorization

2.1 Vector Spaces and Groups

Let V be a vector space with an inner product ⟨·, ·⟩. We will be mostly concerned with the case where V = Rd, i.e., the d-dimensional real Euclidean space, but some of the concepts introduced here generalize to arbitrary Hilbert spaces. A group is a set G endowed with an operation · : G × G → G satisfying closure (g · h ∈ G, ∀g, h ∈ G), associativity ((f · g) · h = f · (g · h), ∀f, g, h ∈ G), existence of an identity (∃1G ∈ G such that 1G · g = g · 1G = g, ∀g ∈ G), and existence of inverses (each g ∈ G has an inverse g−1 ∈ G such that g · g−1 = g−1 · g = 1G). Throughout, we use boldface letters u, v, w, … for vectors, and g, h, … for group elements. We also omit the group operation symbol, writing gh instead of g · h.

2.2 Group Actions, Orbits, and Orbitopes

A (left) group action of G on V [10] is a function ψ : G × V → V satisfying ψ(g, ψ(h, v)) = ψ(g · h, v) and ψ(1G, v) = v for all g, h ∈ G and v ∈ V. When the action is clear from the context, we omit the letter ψ, writing simply gv for the action of the group element g on v, instead of ψ(g, v). In this paper, we always assume our actions are linear, i.e., g(c1v1 + c2v2) = c1gv1 + c2gv2 for scalars c1, c2 and vectors v1, v2. In some cases, we also assume they are norm-preserving, i.e., ∥gv∥ = ∥v∥ for any g ∈ G and v ∈ V. When V = Rd, we may regard the groups underlying these actions as subgroups of the general linear group GL(d) and of the orthogonal group O(d), respectively. GL(d) is the set of d-by-d invertible matrices, and O(d) is the set of d-by-d orthogonal matrices {U ∈ Rd×d | U⊤U = UU⊤ = Id}, where Id denotes the d-dimensional identity matrix. A group action defines an equivalence relation on V, namely w ≡ v iff there is g ∈ G such that w = gv. The orbit of a vector v ∈ V under the action of G is the set Gv := {gv | g ∈ G}, i.e., the set of vectors that result from acting on v with some element of G. Its convex hull is called the orbitope:

OG(v) := conv(Gv).
(2)

Fig. 1 (left) illustrates this concept for the orthogonal group in R2. An important concept associated with group actions and orbitopes is that of G-majorization [7]:

Definition 1 Let v, w ∈ V. We say that w is G-majorized by v, denoted w ⪯G v, if w ∈ OG(v).

Proposition 2 If the group action is linear, then ⪯G is reflexive and transitive, i.e., it is a pre-order.

Proof: See supplemental material.

Group majorization plays an important role in the area of multivariate inequalities in statistics [11]. In this paper, we use this concept to represent model complexity, as described next.

2.3 Orbit Regularization

We formulate our learning problem as follows:

minimize L(w) s.t. w ⪯G v, (3)

where L : V → R is a loss function, G is a given group, and v ∈ V is a seed vector. This formulation subsumes several well-known cases, outlined below.

• ℓ2-regularization. If G := O(d) is the orthogonal group acting by multiplication, we recover ℓ2 regularization. Indeed, we have Gv = {Uv ∈ Rd | U ∈ O(d)} = {w ∈ Rd | ∥w∥2 = ∥v∥2} for any seed v ∈ Rd. That is, the orbitope OG(v) = conv(Gv) becomes the ℓ2-ball with radius ∥v∥2. The only property of the seed that matters in this case is its ℓ2-norm.

• Permutahedron. Let P be the symmetric group (also called the permutation group), which can be represented as the set of d-by-d permutation matrices. Given v ∈ Rd, the orbitope induced by v under P is the convex hull of all permutations of v, which can be equivalently described as the set of vectors that are transformations of v through a doubly stochastic matrix:

OP(v) = conv{Pv | P ∈ P} = {Mv | M1 = 1, M⊤1 = 1, M ≥ 0}. (4)

This set is called the permutahedron [12]. We will revisit this case in §4.

• Signed permutahedron. Let P± be the hyperoctahedral group (also called the signed permutation group), i.e., the group of d-by-d matrices with entries in {0, ±1} such that the sum of the absolute values in each row and column is 1.
The action of P± on Rd permutes the entries of a vector and arbitrarily switches their signs. Given v ∈ Rd, the orbitope induced by v under P± is:

OP±(v) = conv{Diag(s)Pv | P ∈ P, s ∈ {±1}d}, (5)

where Diag(s) denotes the diagonal matrix formed by the entries of s. We call this set the signed permutahedron; it is depicted in Fig. 1 and will also be revisited in §4.

• ℓ1 and ℓ∞-regularization. As particular cases of the signed permutahedron, we recover the ℓ1 and ℓ∞ balls by choosing seeds of the form v = γe1 (a scaled canonical basis vector) and v = γ1 (a constant vector), respectively, where γ is a scalar. In the first case, we obtain the ℓ1-ball OG(v) = γ conv({±e1, …, ±ed}), and in the second case, we get the ℓ∞-ball OG(v) = γ conv({±1}d).

• Symmetric matrices with majorized eigenvalues. Let G := O(d) again be the orthogonal group, but now acting by conjugation on the vector space of d-by-d symmetric matrices, V = Sd. Given a seed v ≡ A ∈ Sd, its orbit is Gv = {UAU⊤ | U ∈ O(d)} = {U Diag(λ(A))U⊤ | U ∈ O(d)}, where λ(A) denotes the vector containing the eigenvalues of A in decreasing order (so we may assume without loss of generality that the seed is diagonal). The orbitope OG(v) becomes:

OG(v) := {B ∈ Sd | λ(B) ⪯P λ(A)}, (6)

which is the set of matrices whose eigenvalues lie in the permutahedron OP(λ(A)) (see the example above). This is called the Schur–Horn orbitope in the literature [8].

• Square matrices with majorized singular values. Let G := O(d) × O(d) act on Rd×d (the space of square matrices, not necessarily symmetric) as gU,V A := UAV⊤. Given a seed v ≡ A, its orbit is Gv = {UAV⊤ | U, V ∈ O(d)} = {U Diag(σ(A))V⊤ | U, V ∈ O(d)}, where σ(A) contains the singular values of A in decreasing order (so we may assume without loss of generality that the seed is diagonal and non-negative). The orbitope OG(v) becomes:

OG(v) := {B ∈ Rd×d | σ(B) ⪯P σ(A)}, (7)

which is the set of matrices whose singular values lie in the permutahedron OP(σ(A)).

• Spectral and nuclear norm regularization.
The previous case subsumes spectral and nuclear norm balls: indeed, for the seed A = γId, the orbitope becomes the convex hull of scaled orthogonal matrices, which is the spectral norm ball {A ∈ Rd×d | ∥A∥2 := σ1(A) ≤ γ}; while for the seed A = γ Diag(e1), the orbitope becomes the convex hull of rank-1 matrices with norm bounded by γ, which is the nuclear norm ball {A ∈ Rd×d | ∥A∥∗ := ∑i σi(A) ≤ γ}. This norm has been widely used for low-rank matrix factorization and matrix completion [3].

Besides these examples, other regularization strategies, such as the non-overlapping ℓ2,1 and ℓ∞,1 norms [13, 4], can be obtained by considering products of the groups above. We omit details for space.

2.4 Relation with Atomic Norms

Atomic norms have been recently proposed as a toolbox for structured sparsity [6]. Let A ⊆ V be a centrally symmetric set of atoms, i.e., v ∈ A iff −v ∈ A. The atomic norm induced by A is defined as ∥w∥A := inf{t > 0 | w ∈ t conv(A)}. The corresponding atomic ball is the set {w | ∥w∥A ≤ 1} = conv(A). Not surprisingly, orbitopes are often atomic norm balls.

Proposition 3 (Atomic norms) If G is a subgroup of the general linear group GL(d) and satisfies −v ∈ Gv, then the set OG(v) is the ball of an atomic norm.

Proof: Under the given assumption, the set Gv is centrally symmetric, i.e., it satisfies w ∈ Gv iff −w ∈ Gv (indeed, the left-hand side implies that w = gv for some g ∈ G, and −v ∈ Gv implies that −v = hv for some h ∈ G; therefore, −w = −gh−1(−v) = gh−1v ∈ Gv). As shown by Chandrasekaran et al. [6], this guarantees that ∥.∥Gv satisfies the axioms of a norm.

Corollary 4 For any choice of seed, the signed permutahedron OP±(v) and the orbitope formed by the square matrices with majorized singular values are both atomic norm balls. If d is even and v is of the form v = (v+, −v+), with v+ ∈ Rd/2 and v+ ≥ 0, then the permutahedron OP(v) and the orbitope formed by the symmetric matrices with eigenvalues majorized by λ(v) are both atomic norm balls.
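As a sanity check on the ℓ1/ℓ∞ special cases above, membership w ⪯P± v can be tested numerically using the submajorization characterization of the signed permutahedron (stated as Prop. 11 in §4). The sketch below is our illustration, not part of the paper:

```python
import numpy as np

def in_signed_permutahedron(w, v, tol=1e-9):
    """Test w <=_{P±} v: each prefix sum of |w| sorted in decreasing order
    must be dominated by the corresponding prefix sum for |v| (submajorization)."""
    cw = np.cumsum(np.sort(np.abs(w))[::-1])
    cv = np.cumsum(np.sort(np.abs(v))[::-1])
    return bool(np.all(cw <= cv + tol))
```

With the seed v = e1, the test reduces to ∥w∥1 ≤ 1 (the ℓ1-ball); with v = 1, it reduces to ∥w∥∞ ≤ 1 (the ℓ∞-ball), as in the special cases listed above.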
3 Matching Function and Region Cones

We now construct a unifying perspective that highlights the role of the group G. Two key concepts that play a crucial role in our analysis are those of the matching function and the region cone. In the sequel, these will work as building blocks for important algorithmic and geometric characterizations.

Definition 5 (Matching function) The matching function of G, mG : V × V → R, is defined as:

mG(u, v) := sup{⟨u, w⟩ | w ∈ Gv}. (8)

Intuitively, mG(u, v) "aligns" the orbits of u and v before taking the inner product. Note also that mG(u, v) = sup{⟨u, w⟩ | w ∈ OG(v)}, since we may equivalently maximize the linear objective over OG(v), which is the convex hull of Gv. We therefore have the following:

Proposition 6 (Duality) Fix v ∈ V, and define the indicator function of the orbitope, IOG(v)(w) = 0 if w ∈ OG(v), and −∞ otherwise. The Fenchel dual of IOG(v) is mG(., v). As a consequence, letting L⋆ : V → R be the Fenchel dual of the loss L, the dual problem of Eq. 3 is:

maximize −L⋆(−u) − mG(u, v) w.r.t. u ∈ V. (9)

Note that if ∥.∥Gv is a norm (e.g., if the conditions of Prop. 3 are satisfied), then the statement above means that mG(., v) = ∥.∥⋆Gv is its dual norm. We will revisit this dual formulation in §4. The following properties have been established in [14, 15].

Proposition 7 For any u, v ∈ V, we have: (i) mG(c1u, c2v) = c1c2 mG(u, v) for c1, c2 ≥ 0; (ii) mG(g1u, g2v) = mG(u, v) for g1, g2 ∈ G; (iii) mG(u, v) = mG(v, u). Furthermore, the following three statements are equivalent: (i) w ⪯G v; (ii) f(w) ≤ f(v) for all G-invariant convex functions f : V → R; (iii) mG(u, w) ≤ mG(u, v) for all u ∈ V.

In the sequel, we always assume that G is a subgroup of the orthogonal group O(d). This implies that the orbitope OG(v) is compact for any v ∈ V (and therefore the sup in Eq. 8 can be replaced by a max), and that ∥gv∥ = ∥v∥ for any v ∈ V.
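For the signed permutation group, the supremum in Eq. (8) has a closed form by the rearrangement inequality: sort the magnitudes of both arguments in decreasing order and take their inner product (by Prop. 10 below, this also evaluates the sorted ℓ1-norm when v is sorted and non-negative). The sketch below is ours, with a brute-force check over all signed permutations for small d:

```python
import itertools
import numpy as np

def matching_fn_signed_perm(u, v):
    """m_{P±}(u, v) = max over signed permutations w of v of <u, w>,
    computed by aligning the sorted magnitudes (rearrangement inequality)."""
    au = np.sort(np.abs(u))[::-1]
    av = np.sort(np.abs(v))[::-1]
    return float(au @ av)

# Brute-force verification for d = 3.
u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0, 1.0, 0.0])
best = max(float(u @ (np.array(s) * np.array(p)))
           for p in itertools.permutations(v)
           for s in itertools.product([-1.0, 1.0], repeat=3))
assert abs(best - matching_fn_signed_perm(u, v)) < 1e-12
```

Note that the computation is symmetric in its two arguments, consistent with Prop. 7 (iii).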
Another important concept is the normal cone of a point w ∈ V with respect to the orbitope O_G(v), denoted N_{Gv}(w) and defined as follows:

N_{Gv}(w) := {u ∈ V | ⟨u, w′ − w⟩ ≤ 0, ∀w′ ⪯_G v}. (10)

Normal cones play an important role in convex analysis [16]. The particular case of the normal cone at the seed v (illustrated in Fig. 1) is of great importance, as will be seen below.

Definition 8 (Region cone) Given v ∈ V, the region cone at v is K_G(v) := N_{Gv}(v). It is the set of points that are "maximally aligned" with v in terms of the matching function:

K_G(v) = {u ∈ V | m_G(u, v) = ⟨u, v⟩}. (11)

4 Permutahedra and Sorted ℓ1-Norms

In this section, we focus on the permutahedra introduced in §2. Below, given a vector w ∈ R^d, we denote by w_(k) its kth order statistic, i.e., we "sort" w so that w_(1) ≥ w_(2) ≥ . . . ≥ w_(d). We also consider the order statistics of the magnitudes |w|_(k), obtained by sorting the absolute values.

4.1 Signed Permutahedron

We start by defining the "sorted ℓ1-norm," proposed by Bogdan et al. [9] in their recent SLOPE method as a means to control the false discovery rate, and studied by Zeng and Figueiredo [17].

Definition 9 (Sorted ℓ1-norm) Let v, w ∈ R^d, with v_1 ≥ v_2 ≥ . . . ≥ v_d ≥ 0 and v_1 > 0. The sorted ℓ1-norm of w (weighted by v) is defined as: ∥w∥_{SLOPE,v} := Σ_{j=1}^d v_j |w|_(j).

In [9] it is shown that ∥.∥_{SLOPE,v} satisfies the axioms of a norm. The rationale is that larger components of w are penalized more than smaller ones, in a way controlled by the prescribed v. For v = 1, we recover the standard ℓ1-norm, while the ℓ∞-norm corresponds to v = e_1. Another special case is the OSCAR regularizer [18, 19], ∥w∥_{OSCAR,τ1,τ2} := τ_1 ∥w∥_1 + τ_2 Σ_{i<j} max{|w_i|, |w_j|}, corresponding to a linearly spaced v, v_j = τ_1 + τ_2 (d − j) for j = 1, . . . , d. The next proposition reveals a connection between SLOPE and the atomic norm induced by the signed permutahedron.

Proposition 10 Let v ∈ R^d_+ be as in Def. 9.
The sorted ℓ1-norm weighted by v and the atomic norm induced by the P±-orbitope seeded at v are dual to each other: ∥.∥⋆_{P±v} = ∥.∥_{SLOPE,v}.

Proof: From Prop. 6, we have ∥w∥⋆_{P±v} = m_{P±}(w, v). Let P be a signed permutation matrix such that w̃ := Pw has its components sorted by decreasing magnitude, |w̃|_1 ≥ . . . ≥ |w̃|_d. From Prop. 7, we have m_{P±}(w, v) = m_{P±}(w̃, v) = ⟨|w̃|, v⟩ = ∥w∥_{SLOPE,v}.

The next proposition [7, 14] provides a characterization of the P±-orbitope in terms of inequalities on the cumulative sums of the order statistics.

Proposition 11 (Submajorization ordering) The orbitope O_{P±}(v) can be characterized as:

O_{P±}(v) = {w ∈ R^d | Σ_{j≤i} |w|_(j) ≤ Σ_{j≤i} |v|_(j), ∀i = 1, . . . , d}. (12)

Prop. 11 leads to a precise characterization of the atomic norm ∥w∥_{P±v}, and therefore of the dual norm of SLOPE: ∥w∥_{P±v} = max_{i=1,...,d} Σ_{j≤i} |w|_(j) / Σ_{j≤i} |v|_(j).

4.2 Permutahedron

The unsigned counterpart of Prop. 11 goes back to Hardy et al. [20].

Proposition 12 (Majorization ordering) The P-orbitope seeded at v can be characterized as:

O_P(v) = {w ∈ R^d | 1⊤w = 1⊤v ∧ Σ_{j≤i} w_(j) ≤ Σ_{j≤i} v_(j), ∀i = 1, . . . , d − 1}. (13)

As seen in Corollary 4, if d is even and v = (v_+, −v_+), with v_+ ≥ 0, then ∥w∥_{Pv} qualifies as a norm (we need to restrict to the linear subspace V := {w ∈ R^d | Σ_{j=1}^d w_j = 0}). From Prop. 12, we have that this norm can be written as: ∥w∥_{Pv} = max_{i=1,...,d−1} Σ_{j≤i} w_(j) / Σ_{j≤i} v_(j).

Proposition 13 Assume the conditions above hold and that v_1 ≥ v_2 ≥ . . . ≥ v_{d/2} ≥ 0 and v_1 > 0. The dual norm of ∥.∥_{Pv} is ∥w∥⋆_{Pv} = Σ_{j=1}^{d/2} v_j (w_(j) − w_(d−j+1)).

Proof: Similar to the proof of Prop. 11.

5 Conditional and Projected Gradient Algorithms

Two important classes of algorithms in sparse modeling are the conditional gradient method [21, 22] and the proximal gradient method [23, 24]. Under Ivanov regularization as in Eq. 3, the latter reduces to the projected gradient method. In this section, we show that both algorithms are a good fit for solving Eq.
3 for arbitrary groups, as long as the two building blocks mentioned in §3 are available: (i) a procedure for evaluating the matching function (necessary for conditional gradient methods) and (ii) a procedure for projecting onto the region cone (necessary for projected gradient).

1: Initialize w_1 = 0
2: for t = 1, 2, . . . do
3:   u_t = arg max_{u ⪯_G v} ⟨−∇L(w_t), u⟩
4:   η_t = 2/(t + 2)
5:   w_{t+1} = (1 − η_t) w_t + η_t u_t
6: end for

1: Initialize w_1 = 0
2: for t = 1, 2, . . . do
3:   Choose a stepsize η_t
4:   a = w_t − η_t ∇L(w_t)
5:   w_{t+1} = arg min_{w ⪯_G v} ∥w − a∥
6: end for

Figure 2: Conditional gradient (left) and projected gradient (right) algorithms.

5.1 Conditional Gradient

The conditional gradient method is shown in Fig. 2 (left). We assume that a procedure is available for computing the gradient of the loss. The relevant part is the maximization in line 3, which corresponds precisely to an evaluation of the matching function m(s, v), with s = −∇L(w_t) (cf. Eq. 8). Fortunately, this step is efficient in a variety of cases:

Permutations. If G = P, the matching function can be evaluated in time O(d log d) with a simple sort operation. Without loss of generality, we assume the seed v is sorted in descending order (otherwise, pre-sort it before the main loop starts). Then, each time we need to evaluate m(s, v), we compute a permutation P such that Ps is also sorted. The maximizer in line 3 then equals P⁻¹v.

Signed permutations. If G = P±, a similar procedure with the same O(d log d) runtime also works, except that now we sort the absolute values, and set the signs of P⁻¹v to match those of s.

Symmetric matrices with majorized eigenvalues. Let A = U_A Diag(λ(A)) U_A⊤ ∈ S^d and B = U_B Diag(λ(B)) U_B⊤ ∈ S^d, where the eigenvalues λ(A) and λ(B) are sorted in decreasing order. In this case, the matching function becomes m_G(A, B) = max_{V ∈ O(d)} trace(A⊤ V B V⊤) = ⟨λ(A), λ(B)⟩ by von Neumann's trace inequality [25], the maximizer being V = U_A U_B⊤.
Therefore, we only need to compute an eigendecomposition and set B′ = U_A Diag(λ(B)) U_A⊤.

Square matrices with majorized singular values. Let A = U_A Diag(σ(A)) V_A⊤ ∈ R^{d×d} and B = U_B Diag(σ(B)) V_B⊤ ∈ R^{d×d}, where the singular values are sorted. We have m_G(A, B) = max_{U,V ∈ O(d)} trace(A⊤ U B V⊤) = ⟨σ(A), σ(B)⟩, also from von Neumann's inequality [25]. To evaluate the matching function, we only need to compute an SVD and set B′ = U_A Diag(σ(B)) V_A⊤.

5.2 Projected Gradient

The projected gradient algorithm is illustrated in Fig. 2 (right); the relevant part is line 5, which involves a projection onto the orbitope O_G(v). This projection may be hard to compute directly, since the orbitope may lack a concise half-space representation. However, we next transform this problem into a projection onto the region cone K_G(v) (the proof is in the supplemental material).

Proposition 14 Assume G is a subgroup of O(d). Let g ∈ G be such that ⟨a, gv⟩ = m_G(a, v). Then, the solution of the problem in line 5 is w∗ = a − Π_{K_G(gv)}(a − gv).

Thus, all that is necessary is computing the arg-max associated with the matching function, and a black box that projects onto the region cone K_G(v). Again, this step is efficient in several cases:

Permutations. If G = P, the region cone of a point v is the set of points w satisfying v_i > v_j ⇒ w_i ≥ w_j, for all i, j ∈ {1, . . . , d}. Projecting onto this cone is a well-studied problem in isotonic regression [26, 27], with existing O(d) algorithms.

Signed permutations. If G = P±, this problem is precisely the evaluation of the proximity operator of the sorted ℓ1-norm, also solvable in O(d) time with a stack-based algorithm [9].

6 Continuation Algorithm

Finally, we present a general continuation procedure for exploring regularization paths when L is a convex loss function (not necessarily differentiable) and the seed v is not prescribed.
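Before turning to the continuation procedure: the isotonic projection used in the permutation case of §5.2 can be sketched with the classical pool-adjacent-violators algorithm. This implementation is ours and assumes a seed with strictly decreasing entries, so that the region cone is simply {w | w_1 ≥ w_2 ≥ . . . ≥ w_d}:

```python
# Sketch (our code, not the paper's): Euclidean projection onto the
# nonincreasing cone {w : w_1 >= w_2 >= ... >= w_d} by pool-adjacent-violators.
# Adjacent blocks whose means violate the ordering are merged, and each final
# block is replaced by its mean; amortized O(d) after the initial pass.
def project_monotone_cone(a):
    blocks = []  # stack of (sum, count); block means stay nonincreasing
    for val in a:
        blocks.append((val, 1))
        # merge while mean(previous block) < mean(last block)
        while (len(blocks) > 1 and
               blocks[-2][0] * blocks[-1][1] < blocks[-1][0] * blocks[-2][1]):
            s2, c2 = blocks.pop()
            s1, c1 = blocks.pop()
            blocks.append((s1 + s2, c1 + c2))
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)  # broadcast each block's mean
    return out

print(project_monotone_cone([1.0, 3.0, 2.0]))  # pools the violating prefix
```

Composed with the orbit alignment of Prop. 14, this gives the projection step (line 5 of Fig. 2, right) for G = P under the stated assumption on the seed.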
Require: Factor ϵ > 0, interpolation parameter α ∈ [0, 1]
1: Initialize seed v_0 randomly and set ∥v_0∥ = ϵ
2: Set t = 0
3: repeat
4:   Solve w_t = arg min_{w ⪯_G v_t} L(w)
5:   Pick v′_t ∈ G v_t ∩ K_G(w_t)
6:   Set next seed v_{t+1} = (1 + ϵ)(α v′_t + (1 − α) w_t)
7:   t ← t + 1
8: until ∥w_t∥_{G v_t} < 1
9: Use cross-validation to choose the best ŵ ∈ {w_1, w_2, . . .}

Figure 3: Left: Continuation algorithm. Right: Reachable region W_G for the hyperoctahedral group, with a reconstruction loss L(w) = ∥w − a∥². Only points v s.t. −∇L(v) = a − v ∈ K_G(v) belong to this set. Different initializations of v_0 lead to different paths along W_G, all ending in a.

The procedure, outlined in Fig. 3, solves instances of Eq. 3 for a sequence of seeds v_1, v_2, . . ., using a simple heuristic for choosing the next seed given the previous one and the current solution. The basic principle behind this procedure is the same as in other homotopy continuation methods [28, 29, 30, 31]: we start with very strong regularization (using a small norm ball), and then gradually weaken the regularization (increasing the ball) while "tracking" the solution. The process stops when the solution is found to be in the interior of the ball (the condition in line 8), which means the regularization constraint is no longer active. The main difference with respect to classical homotopy methods is that we do not just scale the ball (in our case, the G-orbitope); we also generate new seeds that shape the ball along the way. To do so, we adopt a simple heuristic (line 6) to make the seed move toward the current solution w_t before scaling the orbitope. This procedure depends on the initialization (see Fig. 3 for an illustration), which drives the search into different regions. Reasoning in terms of groups, line 4 makes us move inside the orbits, while line 6 is a heuristic to jump to a nearby orbit.
For any choice of ϵ > 0 and α ∈ [0, 1], the algorithm is convergent and produces a strictly decreasing sequence L(w_1) > L(w_2) > · · · before it terminates (a proof is provided as supplementary material). We expect that, eventually, a seed v will be generated that is close to the true model ŵ. Although it may not be obvious at first sight why it would be desirable that v ≈ ŵ, we provide a simple result below (Prop. 15) that sheds some light on this matter, by characterizing the set of points in V that are "reachable" by optimizing Eq. 3.

From the optimality conditions of convex programming [32, p. 257], we have that w∗ is a solution of the optimization problem in Eq. 3 if and only if 0 ∈ ∂L(w∗) + N_{Gv}(w∗), where ∂L(w) denotes the subdifferential of L at w, and N_{Gv}(w) is the normal cone to O_G(v) at w, defined in §3. For certain seeds v ∈ V, it may happen that the optimal solution w∗ of Eq. 3 is the seed itself. Let W_G be the set of seeds with this property:

W_G := {v ∈ V | L(v) ≤ L(w), ∀w ⪯_G v} = {v ∈ V | 0 ∈ ∂L(v) + K_G(v)}, (14)

where K_G(v) is the region cone and the right hand side follows from the optimality conditions. We next show that this set is all we need to care about.

Proposition 15 Consider the set of points that are solutions of Eq. 3 for some seed v ∈ V, Ŵ_G := {w∗ ∈ V | ∃v ∈ V : w∗ ∈ arg min_{w ⪯_G v} L(w)}. We have Ŵ_G = W_G.

Proof: Obviously, v ∈ W_G ⇒ v ∈ Ŵ_G. For the reverse direction, suppose that w∗ ∈ Ŵ_G, in which case there is some v ∈ V such that w∗ ⪯_G v and L(w∗) ≤ L(w) for any w ⪯_G v. Since ⪯_G is a pre-order, it must hold in particular that L(w∗) ≤ L(w) for any w ⪯_G w∗ ⪯_G v. Therefore, we also have that w∗ ∈ arg min_{w ⪯_G w∗} L(w), i.e., w∗ ∈ W_G.

7 Simulation Results

We describe the results of numerical experiments when regularizing with the permutahedron (symmetric group) and the signed permutahedron (hyperoctahedral group). All problems were solved using the conditional gradient algorithm, as described in §5.
Figure 4: Learning curves for the permutahedron and signed permutahedron regularizers with a perfect seed. Shown are averages and standard deviations over 10 trials. The baselines are ℓ1 (three leftmost plots, resp. with k = 150, 250, 400) and ℓ2 (last plot, with k = 500).

Figure 5: Mean squared errors in the training set (left) and the test set (right) along the regularization path. For the permutahedra regularizers, this path was traced with the continuation algorithm. The baseline is ℓ1 regularization. The horizontal lines in the right plot show the solutions found with validation in a held-out set.

We generated the true model ŵ ∈ R^d by sampling the entries from a uniform distribution in [0, 1] and subtracting the mean, keeping k ≤ d nonzeros; after which ŵ was normalized to have unit ℓ2-norm. Then, we sampled a random n-by-d matrix X with i.i.d. Gaussian entries of variance σ² = 1/d, and simulated measurements y = X ŵ + n, where n ∼ N(0, σ_n²) is Gaussian noise. We set d = 500 and σ_n = 0.3σ. For the first set of experiments (Fig. 4), we set k ∈ {150, 250, 400, 500} and varied the number of measurements n. To assess the advantage of knowing the true parameters up to a group transformation, we used for the orbitope regularizers a seed in the orbit of the true ŵ, up to a constant factor (this constant, and the regularization constants for ℓ1 and ℓ2, were all chosen with validation in a held-out set). As expected, this information was beneficial, and no significant difference was observed between the permutahedron and the signed permutahedron. For the second set of experiments (Fig. 5), where the aim is to assess the performance of the continuation method, no information about the true model was given. Here, we fixed n = 250 and k = 300 and ran the continuation algorithm with ϵ = 0.1 and α = 0.0, for 5 different initializations of v_0.
We observe that this procedure was effective at exploring the orbits, eventually finding a slightly better model than those found with the ℓ1 and ℓ2 regularizers.

8 Conclusions and Future Work

In this paper, we proposed a group-based regularization scheme using the notion of orbitopes. Simple choices of groups recover commonly used regularizers such as the ℓ1, ℓ2, ℓ∞, spectral and nuclear matrix norms, as well as some new ones, such as the permutahedron and signed permutahedron. As a byproduct, we revealed a connection between the permutahedra and the recently proposed sorted ℓ1-norm. We derived procedures for learning with these orbit regularizers via conditional and projected gradient algorithms, and a continuation strategy for orbit exploration. There are several avenues for future research. For example, certain classes of groups, such as reflection groups [33], have additional properties that may be exploited algorithmically. Our work should be regarded as a first step toward group-based regularization: we believe that the regularizers studied here are just the tip of the iceberg. Groups and their representations are well studied in other disciplines [10], and chances are high that this framework can lead to new regularizers that are a good fit for specific machine learning problems.

Acknowledgments

We thank all reviewers for their valuable comments. This work was partially supported by FCT grants PTDC/EEI-SII/2312/2012 and PEst-OE/EEI/LA0008/2011, and by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803).

References

[1] R. Tibshirani. Regression Shrinkage and Selection via the Lasso. Journal of the Royal Statistical Society B, pages 267–288, 1996.
[2] D. Donoho. Compressed Sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.
[3] E. Candès and B. Recht. Exact Matrix Completion via Convex Optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[4] F. Bach, R. Jenatton, J.
Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. In Optimization for Machine Learning. MIT Press, 2011.
[5] S. Negahban, P. Ravikumar, M. Wainwright, and B. Yu. A Unified Framework for High-Dimensional Analysis of M-estimators with Decomposable Regularizers. In Neural Information Processing Systems, pages 1348–1356, 2009.
[6] V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky. The Convex Geometry of Linear Inverse Problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[7] A. Marshall, I. Olkin, and B. Arnold. Inequalities: Theory of Majorization and Its Applications. Springer, 2010.
[8] R. Sanyal, F. Sottile, and B. Sturmfels. Orbitopes. Technical report, arXiv:0911.5436, 2009.
[9] M. Bogdan, E. Berg, W. Su, and E. Candès. Statistical estimation and testing via the ordered ℓ1 norm. Technical report, arXiv:1310.1969, 2013.
[10] J. Serre and L. Scott. Linear Representations of Finite Groups, volume 42. Springer, 1977.
[11] Y. Tong. Probability Inequalities in Multivariate Distributions, volume 5. Academic Press, New York, 1980.
[12] G. Ziegler. Lectures on Polytopes, volume 152. Springer, 1995.
[13] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society Series B (Statistical Methodology), 68(1):49, 2006.
[14] M. Eaton. On group induced orderings, monotone functions, and convolution theorems. Lecture Notes-Monograph Series, pages 13–25, 1984.
[15] A. Giovagnoli and H. Wynn. G-Majorization with Applications to Matrix Orderings. Linear Algebra and its Applications, 67:111–135, 1985.
[16] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
[17] X. Zeng and M. A. T. Figueiredo. Decreasing weighted sorted ℓ1 regularization. Technical report, arXiv:1404.3184, 2014.
[18] H. Bondell and B. Reich. Simultaneous Regression Shrinkage, Variable Selection, and Supervised Clustering of Predictors with OSCAR. Biometrics, 64(1):115–123, 2008.
[19] L. Zhong and J. Kwok. Efficient Sparse Modeling with Automatic Feature Grouping. IEEE Transactions on Neural Networks and Learning Systems, 23(9):1436–1447, 2012.
[20] G. Hardy, J. Littlewood, and G. Pólya. Inequalities. Cambridge University Press, 1952.
[21] M. Frank and P. Wolfe. An Algorithm for Quadratic Programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.
[22] M. Jaggi. Revisiting Frank-Wolfe: Projection-free Sparse Convex Optimization. In Proc. of the International Conference on Machine Learning, pages 427–435, 2013.
[23] S.J. Wright, R. Nowak, and M. A. T. Figueiredo. Sparse reconstruction by separable approximation. IEEE Transactions on Signal Processing, 57(7):2479–2493, 2009.
[24] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[25] L. Mirsky. A trace inequality of John von Neumann. Monatshefte für Mathematik, 79(4):303–306, 1975.
[26] P. Pardalos and G. Xue. Algorithms for a Class of Isotonic Regression Problems. Algorithmica, 23(3):211–222, 1999.
[27] R. Luss, S. Rosset, and M. Shahar. Decomposing Isotonic Regression for Efficiently Solving Large Problems. In Neural Information Processing Systems, pages 1513–1521, 2010.
[28] M.R. Osborne, B. Presnell, and B.A. Turlach. A new approach to variable selection in least squares problems. IMA Journal of Numerical Analysis, 20:389–403, 2000.
[29] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of Statistics, 32:407–499, 2004.
[30] M. A. T. Figueiredo, R. Nowak, and S. Wright. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE Journal of Selected Topics in Signal Processing, 1(4):586–597, 2007.
[31] E. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19:1107–1130, 2008.
[32] D.P. Bertsekas, A.
Nedic, and A.E. Ozdaglar. Convex Analysis and Optimization. Athena Scientific, 2003.
[33] A. Steerneman. G-majorization, group-induced cone orderings, and reflection groups. Linear Algebra and its Applications, 127:107–119, 1990.
[34] J.J. Moreau. Fonctions convexes duales et points proximaux dans un espace hilbertien. CR de l'Académie des Sciences de Paris, Série A, 255:2897–2899, 1962.
A Synaptical Story of Persistent Activity with Graded Lifetime in a Neural System

Yuanyuan Mi, Luozheng Li
State Key Laboratory of Cognitive Neuroscience & Learning, Beijing Normal University, Beijing 100875, China
miyuanyuan0102@163.com, liluozheng@mail.bnu.edu.cn

Dahui Wang
State Key Laboratory of Cognitive Neuroscience & Learning, School of System Science, Beijing Normal University, Beijing 100875, China
wangdh@bnu.edu.cn

Si Wu
State Key Laboratory of Cognitive Neuroscience & Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
wusi@bnu.edu.cn

Abstract

Persistent activity refers to the phenomenon that cortical neurons keep firing even after the stimulus triggering the initial neuronal responses is removed. Persistent activity is widely believed to be the substrate by which a neural system retains a memory trace of the stimulus information. In the conventional view, persistent activity is regarded as an attractor of the network dynamics, but this view faces the challenge of explaining how the attractor state can be properly closed. Here, in contrast to the attractor view, we consider that the stimulus information is encoded in a marginally unstable state of the network which decays very slowly and exhibits persistent firing for a prolonged duration. We propose a simple yet effective mechanism to achieve this goal, which utilizes the property of short-term plasticity (STP) of neuronal synapses. STP has two forms, short-term depression (STD) and short-term facilitation (STF), which have opposite effects on retaining neuronal responses. We find that by properly combining STF and STD, a neural system can hold persistent activity of graded lifetime, and that persistent activity fades away naturally without relying on an external drive. The implications of these results for neural information representation are discussed.

1 Introduction

Stimulus information is encoded in neuronal responses.
Persistent activity refers to the phenomenon that cortical neurons keep firing even after the stimulus triggering the initial neural responses is removed [1, 2, 3]. It has been widely suggested that persistent activity is the substrate for a neural system to retain a memory trace of the stimulus information [4]. For instance, in the classical delayed-response task, where an animal needs to memorize the stimulus location for a given period of time before taking an action, it was found that neurons in the prefrontal cortex retained high-frequency firing during this waiting period, indicating that persistent activity may serve as the neural substrate of working memory [2]. Understanding the mechanism by which persistent activity is generated in neural systems has been at the core of theoretical neuroscience for decades [5, 6, 7]. In the conventional view, persistent activity is regarded as an emergent property of network dynamics: neurons in a network are reciprocally connected with each other via excitatory synapses, which form a positive feedback loop that maintains neural responses in the absence of an external drive, while a matched inhibition process suppresses otherwise explosive neural activities. Mathematically, this view is expressed as the dynamics of an attractor network, in which persistent activity corresponds to a stationary state (i.e., an attractor) of the network. The notion of attractor dynamics is appealing and qualitatively describes a number of brain functions, but its detailed implementation in neural systems remains to be carefully evaluated. A long-standing debate on the feasibility of attractor dynamics concerns how to properly close the attractor states in a network: once a neural system has evolved into a self-sustained active state, it will stay there forever until an external force pulls it out.
Solutions have been suggested [9], including applying a strong global inhibitory input to shut down all neurons simultaneously, or applying a strong global excitatory input to excite all neurons and force them to fall into the refractory period simultaneously, but none of them appears to be natural or feasible in all conditions. From the computational point of view, it is also unnecessary for a neural system to hold a mathematically perfect attractor state lasting forever. In reality, the brain only needs to hold the stimulus information for the finite amount of time necessary for the task. For instance, in the delayed-response task, the animal only needed to memorize the stimulus location for the waiting period [1]. To address the above issues, here we propose a novel mechanism for retaining persistent activity in neural systems, which gives up the concept of a perfect attractor and instead considers that the neural system is in a marginally unstable state which decays very slowly and exhibits persistent firing for a prolonged period. The proposed mechanism utilizes a general feature of neuronal interaction, namely the short-term plasticity (STP) of synapses [10, 11]. STP has two forms: short-term depression (STD) and short-term facilitation (STF). The former is due to depletion of neurotransmitters after neural firing, and the latter is due to elevation of the calcium level after neural firing, which increases the release probability of neurotransmitters. STD and STF have opposite effects on retaining prolonged neuronal responses: the former weakens neuronal interaction and hence tends to suppress neuronal activities, whereas the latter strengthens neuronal interaction and tends to enhance neuronal activities.
Interestingly, we find that the interplay between the two processes endows a neural system with the capacity to hold persistent activity with desirable properties, including: 1) the lifetime of persistent activity can be arbitrarily long depending on the parameters; and 2) persistent activity fades away naturally in the network without relying on an external force. The implications of these results for neural information representation are discussed.

2 The Model

Without loss of generality, we consider a homogeneous network in which neurons are randomly and sparsely connected with each other with a small probability p. The dynamics of a single neuron is described by an integrate-and-fire process, given by

τ dv_i/dt = −(v_i − V_L) + R_m h_i, for i = 1 . . . N, (1)

where v_i is the membrane potential of the ith neuron and τ the membrane time constant. V_L is the resting potential, h_i is the synaptic current, and R_m the membrane resistance. A neuron fires when its potential exceeds the threshold, i.e., v_i > V_th, and after that v_i is reset to V_L. N is the number of neurons. The dynamics of the synaptic current is given by

τ_s dh_i/dt = −h_i + (1/(Np)) Σ_j J_ij u⁺_j x⁻_j δ(t − t^sp_j) + I_ext δ(t − t^ext_i), (2)

where τ_s is the synaptic time constant, which is about 2 to 5 ms. J_ij is the absolute synaptic efficacy from neuron j to neuron i: J_ij = J_0 if there is a connection from neuron j to neuron i, and J_ij = 0 otherwise. t^sp_j denotes the spiking moment of neuron j. All neurons in the network receive an external input in the form of a Poisson spike train. I_ext represents the external input strength and t^ext_i the moments of the Poisson spike train that neuron i receives.
The variables u_j and x_j measure, respectively, the STF and STD effects on the synapses of the jth neuron, whose dynamics are given by [12, 13]

τ_f du_j/dt = −u_j + τ_f U (1 − u⁻_j) δ(t − t^sp_j), (3)
τ_d dx_j/dt = 1 − x_j − τ_d u⁺_j x⁻_j δ(t − t^sp_j), (4)

where u_j is the release probability of neurotransmitters, with u⁺_j and u⁻_j denoting, respectively, the values of u_j just after and just before the arrival of a spike. τ_f is the time constant of STF, and U controls the increment of u_j produced by a spike. Upon the arrival of a spike, u⁺_j = u⁻_j + U(1 − u⁻_j). x_j represents the fraction of available neurotransmitters, with x⁺_j and x⁻_j denoting, respectively, the values of x_j just after and just before the arrival of a spike. τ_d is the recovery time of neurotransmitters. Upon the arrival of a spike, x⁺_j = x⁻_j − u⁺_j x⁻_j. The time constants τ_f and τ_d are typically on the order of hundreds to thousands of milliseconds, much larger than τ and τ_s; that is, STP is a slow process compared to neural firing.

2.1 Mean-field approximation

As will be confirmed by simulation, neuronal firing in the state of persistent activity is irregular and largely independent across neurons. Therefore, we can assume that the responses of individual neurons are statistically equivalent in the state of persistent activity. Under this mean-field approximation, the dynamics of a single neuron, and hence the mean activity of the network, can be written as [7]

τ_s dh/dt = −h + J_0 u x R + I, (5)
τ_f du/dt = −u + τ_f U (1 − u) R, (6)
τ_d dx/dt = 1 − x − τ_d u x R, (7)

where the state variables are the same for all neurons. R is the firing rate of a neuron, which is also the mean activity of the neuron ensemble. I = I_ext λ denotes the external input, with λ the rate of the Poisson spike train. The exact relationship between the firing rate R and the synaptic input h is difficult to obtain. Here, we assume it to be of the form

R = max(βh, 0), (8)

with β a positive constant.
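The mean-field dynamics (5)-(8) can be integrated with a minimal forward-Euler scheme. This is a sketch, not the authors' code: the helper name, step size and square-pulse input protocol are our choices, while the default parameters follow the caption of Fig. 2 (τ_s = 5 ms, τ_d = 10 ms, τ_f = 800 ms, β = 1, U = 0.5):

```python
# Sketch: forward-Euler integration of the mean-field dynamics, Eqs. (5)-(8).
def simulate_mean_field(J0, I_amp, T=4.0, dt=1e-4, t_on=0.5,
                        tau_s=0.005, tau_f=0.8, tau_d=0.01, U=0.5, beta=1.0):
    h, u, x = 0.0, 0.0, 1.0          # silent, fully recovered initial state
    rates = []
    for k in range(int(T / dt)):
        I = I_amp if k * dt < t_on else 0.0          # input pulse for t < t_on
        R = max(beta * h, 0.0)                        # Eq. (8)
        h += dt * (-h + J0 * u * x * R + I) / tau_s   # Eq. (5)
        u += dt * (-u + tau_f * U * (1.0 - u) * R) / tau_f  # Eq. (6)
        x += dt * (1.0 - x - tau_d * u * x * R) / tau_d     # Eq. (7)
        rates.append(R)
    return rates

# J0 slightly below Jc = 1.316 (cf. Fig. 2): activity outlives the input.
rates = simulate_mean_field(J0=1.315, I_amp=10.0)
```

Plotting `rates` against time reproduces the qualitative picture discussed in the next section: a fast transient during stimulation, followed by a slowly decaying plateau after the input is removed.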
3 The Mechanism

Using the mean-field model, we first elucidate the working mechanism underlying the generation of persistent activity of finite lifetime. Later we carry out simulations to confirm the theoretical analysis.

3.1 How to generate persistent activity of finite lifetime

For illustration purposes, we only study the dynamics of the firing rate R and assume that the variables u and x reach their steady values instantly. This approximation is in general inaccurate, since u and x are slow variables compared to R. Nevertheless, it gives us insight into the network dynamics. By setting du/dt = 0 and dx/dt = 0 in Eqs. (6, 7) and substituting them into Eqs. (5, 8), we get that, for I = 0 and R ≥ 0,

τ_s dR/dt = −R + J_0 β τ_f U R² / (1 + τ_f U R + τ_d τ_f U R²) ≡ F(R). (9)

Figure 1: The steady states of the network, i.e., the solutions of Eq. (9), have three forms depending on the parameter values. The three lines correspond to different neuronal connection strengths, J_0 = 4, 4.38, 5, respectively. The other parameters are: τ_s = 5 ms, τ_d = 100 ms, τ_f = 700 ms, β = 1, U = 0.05 and J_c = 4.38.

Define a critical connection strength J_c ≡ (1 + 2√(τ_d/(τ_f U)))/β, which is the point at which the network dynamics experiences a saddle-node bifurcation (see Figure 1). Depending on the parameters, the steady states of the network have three forms:

• When J_0 < J_c, F(R) = 0 has only one solution at R = 0, i.e., the network is only stable at the silent state;
• When J_0 > J_c, F(R) = 0 has three solutions, and the network can be stable at the silent state and at an active state;
• When J_0 = J_c, F(R) = 0 has two solutions; one is the stable silent state, and the other is a neutrally stable state, referred to as R∗.

The interesting behavior occurs at J_0 = J_c⁻, i.e., when J_0 is slightly smaller than the critical connection strength J_c. In this case, the network is only stable at the silent state.
However, since F(R) is very close to zero near the state R∗ (and so is |dR/dt|), the decay of the network activity is very slow in this region (Figure 2A). Suppose that the network is initially at a state R > R∗; under the network dynamics, the system will take a considerable amount of time to pass through the state R∗ before reaching silence. This is manifested by the decay of the network activity exhibiting a long plateau around R∗ before dropping to silence rapidly (Figure 2B). Thus, persistent activity of finite lifetime is achieved. The lifetime of persistent activity, which is dominated by the time taken by the network state to pass through the point R∗, is calculated to be (see Appendix A)

T ∼ 2τ_s / √(F(R∗) F″(R∗)), (10)

where F″(R∗) = d²F(R)/dR²|_{R∗}. By varying the STP effects, such as τ_d and τ_f, the value of F(R∗)F″(R∗) is changed, and the lifetime of persistent activity can be adjusted.

3.2 Persistent activity of graded lifetime

We now formally analyze the condition for the network to hold persistent activity of finite lifetime. Inspired by the result in the preceding section, we focus on the parameter regime J_0 = J_c, i.e., the situation when the network has the stable silent state and a neutrally stable active state. Denote by (R∗, u∗, x∗) the neutrally stable state of the network at J_0 = J_c. Linearizing the network dynamics at this point, we obtain

d/dt (R − R∗, u − u∗, x − x∗)⊤ ≃ A (R − R∗, u − u∗, x − x∗)⊤, (11)

Figure 2: Persistent activity of finite lifetime. Obtained by solving Eqs. (5-8). (A) When J_0 = J_c⁻, the function F(R), and hence dR/dt, is very close to zero at the state R∗. Around this point, the network activity decays very slowly. The inset shows the fine structure in the vicinity of R∗. (B) An external input (indicated by the red bar) triggers the network response.
After removing the external input, the network activity first decays quickly, and then experiences a long plateau before dropping to silence rapidly. The parameters are: τs = 5ms, τd = 10ms, τf = 800ms, β = 1, U = 0.5, I = 10, Jc = 1.316 and J0 = 1.315.

where A is the Jacobian matrix (see Appendix B). It turns out that the matrix A always has one eigenvector with vanishing eigenvalue, a property that follows from (R∗, u∗, x∗) being the neutrally stable state of the network dynamics. As demonstrated in Sec.3.1, by choosing J0 = Jc−, we expect that the network state will decay very slowly along the eigenvector of vanishing eigenvalue, which we call the decay-direction. To ensure this always happens, the real parts of the other two eigenvalues of A must be negative, so that any perturbation of the network state away from the decay-direction will be pulled back; otherwise, the network state may approach silence rapidly via other routes avoiding the state (R∗, u∗, x∗). This idea is illustrated in Fig.3. The condition for the real parts of the other two eigenvalues of A to be smaller than zero is calculated to be (see Appendix B):

2/(τf τd) + (1/τd)√(U/(τf τd)) + (1/(τd τs)) · 1/(1 + √(τf U/τd)) − 1/(τf τs) > 0. (12)

This inequality, together with J0 = Jc−, forms the condition for the network to hold persistent activity of finite lifetime.

Figure 3: Illustration of the slow-decaying process of the network activity. The network dynamics experiences a long plateau before dropping to silence quickly. The inset presents a 3-D view of the local dynamics in the plateau region, where the network state is attracted to the decay-direction to ensure slow decaying.

By solving the network dynamics Eqs.(5-8), we calculate how the lifetime of persistent activity changes with the STP effects.
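Because Eq.(9) is one-dimensional and self-contained, the plateau mechanism and the lifetime estimate of Eq.(10) can be checked numerically. The sketch below is illustrative only: it integrates the reduced rate equation (9) with the parameter values from Figure 1, not the full system Eqs.(5-8), and estimates F′′(R∗) by a finite difference.

```python
import numpy as np

# Parameter values from Figure 1 (times in seconds, rates in Hz).
tau_s, tau_d, tau_f = 5e-3, 100e-3, 700e-3
beta, U = 1.0, 0.05

def F(R, J0):
    # Right-hand side of Eq.(9): tau_s * dR/dt = F(R).
    return -R + J0 * beta * tau_f * U * R**2 / (
        1.0 + tau_f * U * R + tau_d * tau_f * U * R**2)

# Critical coupling (saddle-node point) and the double root R* of F(R) = 0.
Jc = (1.0 + 2.0 * np.sqrt(tau_d / (tau_f * U))) / beta   # ~4.38, as in Figure 1
Rstar = np.sqrt(1.0 / (tau_f * tau_d * U))               # ~16.9 Hz, cf. Eq.(15)

def decay_time(J0, R0=30.0, dt=1e-4, t_max=50.0):
    # Time for the rate to decay from R0 to 1 Hz under Eq.(9) (forward Euler).
    R, t = R0, 0.0
    while R > 1.0 and t < t_max:
        R += dt * F(R, J0) / tau_s
        t += dt
    return t

# The plateau gets longer as J0 approaches Jc from below.
t_far, t_near = decay_time(0.997 * Jc), decay_time(0.9992 * Jc)

# Order-of-magnitude lifetime estimate of Eq.(10) at J0 = 0.997*Jc,
# with F''(R*) from a central finite difference.
J0 = 0.997 * Jc
h = 1e-3
Fpp = (F(Rstar + h, J0) - 2.0 * F(Rstar, J0) + F(Rstar - h, J0)) / h**2
T_est = 2.0 * tau_s / np.sqrt(F(Rstar, J0) * Fpp)

print(Jc, t_far, t_near, T_est)
```

With these parameters the measured decay time grows as J0 → Jc− and stays within a small multiple of the estimate of Eq.(10), whose prefactor is only accurate up to the order-one factor G(R∗) of Appendix A.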
Fig.4A presents the results of fixing U and J0 and varying τd and τf. We see that below the critical line J0 = Jc, which is the region where J0 > Jc, the network has perfect attractor states that never decay; above the critical line, the network has only the stable silent state. Close to the critical line, the network activity decays slowly and displays persistent activity of finite lifetime. Fig.4B shows a case in which, when the STF strength (τf) is fixed, the lifetime of persistent activity decreases with the STD strength (τd). This is understandable, since STD tends to suppress neuronal responses. Fig.4C shows a case in which, when τd is fixed, the lifetime of persistent activity increases with τf, since STF enhances neuronal responses. These results demonstrate that by regulating the effects of STF and STD, the lifetime of persistent activity can be adjusted.

Figure 4: (A) The lifetimes of the network states with respect to τf and τd when U and J0 are fixed. We use an external input to trigger a strong response of the network and then remove the input. The lifetime of a network state is measured from the offset of the external input to the moment when the network returns to silence. The white line corresponds to the condition J0 = Jc, below which the network has attractors lasting forever; above it, the lifetime of a network state gradually decreases (coded by colour). (B) When τf = 1250ms is fixed, the lifetime of persistent activity decreases with τd (the vertical dashed line in A). (C) When τd = 260ms is fixed, the lifetime of persistent activity increases with τf (the horizontal dashed line in A). The other parameters are: τs = 5ms, β = 1, U = 0.05 and J0 = 5.

4 Simulation Results

We carry out simulations with the spiking neural network model given by Eqs.(1-4) to further confirm the above theoretical analysis.
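Before turning to the spiking simulations, the linear-stability condition of Eq.(12) can be cross-checked by diagonalizing the Jacobian of Eq.(16) at the fixed point of Eq.(15) (both from Appendix B). A minimal sketch, assuming β = 1 and the parameter values of Figure 1:

```python
import numpy as np

tau_s, tau_d, tau_f, U = 5e-3, 100e-3, 700e-3, 0.05

# Neutrally stable fixed point at J0 = Jc (Eq.(15) of Appendix B), with beta = 1.
Jc = 1.0 + 2.0 * np.sqrt(tau_d / (tau_f * U))
R = np.sqrt(1.0 / (tau_f * tau_d * U))
u = tau_f * U * R / (1.0 + tau_f * U * R)
x = (1.0 + tau_f * U * R) / (1.0 + tau_f * U * R + tau_f * tau_d * U * R**2)

# Jacobian matrix of Eq.(16); note Jc * u * x = 1, so the (1,1) entry vanishes.
A = np.array([
    [(Jc * u * x - 1.0) / tau_s, Jc * x * R / tau_s,   Jc * u * R / tau_s],
    [U * (1.0 - u),              -1.0 / tau_f - U * R, 0.0],
    [-u * x,                     -x * R,               -1.0 / tau_d - u * R],
])

eigvals = np.linalg.eigvals(A)
order = np.argsort(np.abs(eigvals))
lam1, lam23 = eigvals[order[0]], eigvals[order[1:]]

# Coefficient c of Eqs.(12)/(19); c > 0 iff Re(lam2), Re(lam3) < 0.
c = (2.0 / (tau_f * tau_d)
     + np.sqrt(U / (tau_f * tau_d)) / tau_d
     + (1.0 / (tau_d * tau_s)) / (1.0 + np.sqrt(tau_f * U / tau_d))
     - 1.0 / (tau_f * tau_s))

print(lam1)                    # the vanishing eigenvalue (decay-direction)
print(lam23.real)              # negative real parts: perturbations are pulled back
print(np.prod(lam23).real, c)  # lam2 * lam3 equals c, as in Eq.(17)
```

For these parameters c is positive, so any perturbation off the decay-direction relaxes back, consistent with the condition stated above.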
A homogeneous network with N = 1000 neurons is used, in which neurons are randomly and sparsely connected with each other with a probability p = 0.1. At the state of persistent activity, neurons fire irregularly (the mean coefficient of variation is 1.29) and largely independently of each other (the mean correlation of all spike-train pairs is 0.30) (Fig.5A). Fig.5 presents examples of the network holding persistent activity with varied lifetimes, obtained through different combinations of STF and STD satisfying the condition Eq.(12).

5 Conclusions

In the present study, we have proposed a simple yet effective mechanism to generate persistent activity of graded lifetime in a neural system. The proposed mechanism utilizes the properties of STP, a general feature of neuronal synapses, and the fact that STF and STD have opposite effects on retaining neuronal responses. We find that with properly combined STF and STD, a neural system can be in a marginally unstable state which decays very slowly and exhibits persistent firing for a finite lifetime. This persistent activity fades away naturally without relying on an external force, and hence avoids the difficulty of closing an active state faced by conventional attractor networks. STP has been widely observed in the cortex and displays large diversity in different regions [14, 15, 16]. Compared to static synapses, dynamical synapses with STP greatly enrich the response patterns and dynamical behaviors of neural networks, endowing neural systems with information processing capacities which are otherwise difficult to implement using purely static synapses. The research on the computational roles of STP is receiving increasing attention in the field [12].

Figure 5: The simulation results of the spiking neural network. (A) A raster plot of the responses of 50 example neurons randomly chosen from the network. The external input is applied for the first 0.5 second.
The persistent activity lasts for about 1100ms. The parameters are: τf = 800ms, τd = 500ms, U = 0.5, J0 = 28.6. (B) The firing rate of the network for case (A). (C) An example of persistent activity of negligible lifetime. The parameters are: τf = 800ms, τd = 1800ms, U = 0.5, J0 = 28.6. (D) An example of persistent activity of around 400ms lifetime. The parameters are: τf = 600ms, τd = 500ms, U = 0.5, J0 = 28.6. (E) An example of the network holding an attractor lasting forever. The parameters are: τf = 800ms, τd = 490ms, U = 0.5, J0 = 28.6.

In terms of information representation, a number of appealing functions contributed by STP have been proposed. For instance, Mongillo et al. proposed an economical way of using the synapses facilitated by STF to realize working memory in the prefrontal cortex without recruiting neural firing [8]; Pfister et al. suggested that STP enables a neuron to estimate the membrane potential information of the pre-synaptic neuron based on the spike train it receives [17]. Torres et al. found that STD induces instability of attractor states in a network, which could be useful for memory searching [18]; Fung et al. found that STD enables a continuous attractor network to have a slow-decaying state on the timescale of STD, which could serve as a passive sensory memory [19]. Here, our study reveals that by combining STF and STD properly, a neural system can hold stimulus information for an arbitrary time, serving different computational purposes. In particular, STF tends to increase the lifetime of persistent activity, whereas STD tends to decrease it. This property may justify the diverse distribution of STF and STD in different cortical regions.
For instance, in the prefrontal cortex, where stimulus information often needs to be held for a long time in order to realize higher cognitive functions such as working memory, STF is found to be dominant; whereas in the sensory cortex, where stimulus information is forwarded to higher cortical regions shortly, STD is found to be dominant. Furthermore, our findings suggest that a neural system may actively regulate the combination of STF and STD, e.g., by applying appropriate neural modulators [10], so that it can hold stimulus information for a flexible amount of time depending on the actual computational requirement. Further experimental and theoretical studies are needed to clarify these interesting issues.

6 Acknowledgments

This work is supported by grants from the National Key Basic Research Program of China (No.2014CB846101), the National Natural Science Foundation of China (No.11305112, Y.Y.M.; No.31261160495, S.W.; No.31271169, D.H.W.), the Fundamental Research Funds for the Central Universities (No.31221003, S.W.), SRFDP (No.20130003110022, S.W.), and the Natural Science Foundation of Jiangsu Province (BK20130282).

Appendix A: The lifetime of persistent activity

Consider the network dynamics Eq.(9). When J0 = Jc, the network has a stable silent state (R = 0) and an unstable active state, referred to as R∗ (Fig.1). We consider J0 = Jc−. In this case, F(R∗) is slightly smaller than zero (Fig.2A). Starting from a state R > R∗, the network will take a considerable amount of time to cross the point R∗, since dR/dt is very small in this region, and the network exhibits persistent activity for a considerable amount of time. We estimate the time taken for the network to cross the point R∗.
According to Eq.(9), we have

T = ∫₀ᵀ dt = ∫ from R∗− to R∗+ of τs/F(R) dR ≈ ∫ from R∗− to R∗+ of τs dR / [F(R∗) + (R − R∗)² F′′(R∗)/2]
= (2τs/√(F(R∗)F′′(R∗))) [arctan((R∗+ − R∗)/√(F(R∗)/F′′(R∗))) − arctan((R∗− − R∗)/√(F(R∗)/F′′(R∗)))]
= (2τs/√(F(R∗)F′′(R∗))) G(R∗), (13)

where R∗+ and R∗− denote, respectively, points slightly larger and slightly smaller than R∗, F′(R∗) = dF(R)/dR|R∗, and F′′(R∗) = dF′(R)/dR|R∗. To get the above result, we used the second-order Taylor expansion of F(R) at R∗ and the condition F′(R∗) = 0. In the limit F(R∗) → 0, the value of G(R∗) is bounded. Thus, the lifetime of persistent activity is of the order

T ∼ 2τs/√(F(R∗)F′′(R∗)). (14)

Appendix B: The condition for the network holding persistent activity of finite lifetime

Denote by (R∗, u∗, x∗) the neutrally stable state of the network when J0 = Jc, which is calculated to be (by solving Eqs.(5-8))

R∗ = √(1/(τf τd U)), u∗ = τf U R∗/(1 + τf U R∗), x∗ = (1 + τf U R∗)/(1 + τf U R∗ + τf τd U R∗²). (15)

Linearizing the network dynamics at this point, we obtain Eq.(11), in which the Jacobian matrix A is given by

A = ( (J0 u∗x∗ − 1)/τs    J0 x∗R∗/τs      J0 u∗R∗/τs
      U(1 − u∗)           −1/τf − UR∗     0
      −u∗x∗               −x∗R∗           −1/τd − u∗R∗ ). (16)

The eigenvalues of the Jacobian matrix satisfy |A − λI| = 0. Utilizing Eq.(15), this equality becomes

λ(λ² + bλ + c) = 0, (17)

where the coefficients b and c are given by

b = 1/τd + 1/τf + u∗R∗ + UR∗, (18)
c = 2/(τf τd) + (1/τd)√(U/(τf τd)) + (1/(τd τs)) · 1/(1 + √(τf U/τd)) − 1/(τf τs). (19)

From Eq.(17), we see that the matrix A has three eigenvalues. One eigenvalue, referred to as λ1, is always zero. The other two eigenvalues satisfy λ2 + λ3 = −b and λ2λ3 = c. Since b > 0, the condition for the real parts of λ2 and λ3 to be negative is c > 0.

References
[1] J. Fuster and G. Alexander. Neuron activity related to short-term memory. Science 173, 652-654 (1971).
[2] S. Funahashi, C. J. Bruce and P. S. Goldman-Rakic. Mnemonic coding of visual space in the monkey's dorsolateral prefrontal cortex. J. Neurophysiol. 61, 331-349 (1989).
[3] R. Romo, C.
D. Brody, A. Hernandez and L. Lemus. Neuronal correlates of parametric working memory in the prefrontal cortex. Nature 399, 470-473 (1999).
[4] D. J. Amit. Modelling Brain Function. New York: Cambridge University Press (1989).
[5] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol. Cybern. 27, 77-87 (1977).
[6] X. J. Wang. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J. Neurosci. 19, 9587-9603 (1999).
[7] O. Barak and M. Tsodyks. Persistent activity in neural networks with dynamic synapses. PLoS Computational Biology 3(2): e35 (2007).
[8] G. Mongillo, O. Barak and M. Tsodyks. Synaptic theory of working memory. Science 319, 1543-1546 (2008).
[9] B. Gutkin, C. Laing, C. Colby, C. Chow and B. Ermentrout. Turning on and off with excitation: the role of spike-timing asynchrony and synchrony in sustained neural activity. J. Comput. Neurosci. 11, 121-134 (2001).
[10] H. Markram and M. Tsodyks. Redistribution of synaptic efficacy between neocortical pyramidal neurons. Nature 382(6594): 807-810 (1996).
[11] L. F. Abbott and W. G. Regehr. Synaptic computation. Nature 431(7010): 796-803 (2004).
[12] M. Tsodyks and S. Wu. Short-term synaptic plasticity. Scholarpedia 8(10): 3153 (2013).
[13] M. Tsodyks, K. Pawelzik and H. Markram. Neural networks with dynamic synapses. Neural Computation 10(4): 821-835 (1998).
[14] H. Markram, Y. Wang and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. Proceedings of the National Academy of Sciences 95(9): 5323-5328 (1998).
[15] J. S. Dittman, A. C. Kreitzer and W. G. Regehr. Interplay between facilitation, depression, and residual calcium at three presynaptic terminals. J. Neurosci. 20: 1374-1385 (2000).
[16] Y. Wang, H. Markram, P. H. Goodman, T. K. Berger, J. Y. Ma and P. S. Goldman-Rakic. Heterogeneity in the pyramidal network of the medial prefrontal cortex. Nature Neuroscience 9(4): 534-542 (2006).
[17] J. P. Pfister, P.
Dayan and M. Lengyel. Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nature Neuroscience 13, 1271-1275 (2010).
[18] J. J. Torres, J. M. Cortes, J. Marro and H. J. Kappen. Competition between synaptic depression and facilitation in attractor neural networks. Neural Computation 19(10): 2739-2755 (2007).
[19] C. C. Fung, K. Y. Michael Wong, H. Wang and S. Wu. Dynamical synapses enhance neural information processing: gracefulness, accuracy and mobility. Neural Computation 24(5): 1147-1185 (2012).
Log-Hilbert-Schmidt metric between positive definite operators on Hilbert spaces Hà Quang Minh Marco San Biagio Vittorio Murino Istituto Italiano di Tecnologia Via Morego 30, Genova 16163, ITALY {minh.haquang,marco.sanbiagio,vittorio.murino}@iit.it Abstract This paper introduces a novel mathematical and computational framework, namely the Log-Hilbert-Schmidt metric between positive definite operators on a Hilbert space. This is a generalization of the Log-Euclidean metric on the Riemannian manifold of positive definite matrices to the infinite-dimensional setting. The general framework is applied in particular to compute distances between covariance operators on a Reproducing Kernel Hilbert Space (RKHS), for which we obtain explicit formulas via the corresponding Gram matrices. Empirically, we apply our formulation to the task of multi-category image classification, where each image is represented by an infinite-dimensional RKHS covariance operator. On several challenging datasets, our method significantly outperforms approaches based on covariance matrices computed directly on the original input features, including those using the Log-Euclidean metric, Stein and Jeffreys divergences, achieving new state-of-the-art results. 1 Introduction and motivation Symmetric Positive Definite (SPD) matrices, in particular covariance matrices, have been playing an increasingly important role in many areas of machine learning, statistics, and computer vision, with applications ranging from kernel learning [12], brain imaging [9], to object detection [24, 23]. One key property of SPD matrices is the following. For a fixed n ∈ N, the set of all SPD matrices of size n × n is not a subspace of Euclidean space, but is a Riemannian manifold with nonpositive curvature, denoted by Sym++(n). As a consequence of this manifold structure, computational methods for Sym++(n) that simply rely on Euclidean metrics are generally suboptimal.
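The claim that Sym++(n) is not a subspace of Euclidean space is easy to verify numerically: the cone of SPD matrices is not closed under negation or subtraction. A small illustration (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)      # a symmetric positive definite matrix
B = A + np.eye(5)                  # also symmetric positive definite

eig = np.linalg.eigvalsh
print(eig(A).min(), eig(B).min())  # both positive: A, B lie in Sym++(5)
print(eig(-A).max())               # negative: -A has left the set
print(eig(A - B).max())            # A - B = -I is not positive definite
```

A subspace would have to contain negatives and differences of its elements, so Sym++(n) carries a cone (and, as discussed next, a manifold) structure instead.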
In the current literature, many methods have been proposed to exploit the non-Euclidean structure of Sym++(n). For the purposes of the present work, we briefly describe three common approaches here; see e.g. [9] for other methods. The first approach exploits the affine-invariant metric, which is the classical Riemannian metric on Sym++(n) [18, 16, 3, 19, 4, 24]. The main drawback of this framework is that it tends to be computationally intensive, especially for large-scale applications. Overcoming this computational complexity is one of the main motivations for the recent development of the Log-Euclidean metric framework of [2], which has been exploited in many computer vision applications, see e.g. [25, 11, 17]. The third approach defines and exploits Bregman divergences on Sym++(n), such as the Stein and Jeffreys divergences, see e.g. [12, 22, 8], which are not Riemannian metrics but are fast to compute and have been shown to work well on nearest-neighbor retrieval tasks. While each approach has its advantages and disadvantages, the Log-Euclidean metric possesses several properties which are lacking in the other two approaches. First, it is faster to compute than the affine-invariant metric. Second, unlike the Bregman divergences, it is a Riemannian metric on Sym++(n) and thus can better capture its manifold structure. Third, in the context of kernel learning, it is straightforward to construct positive definite kernels, such as the Gaussian kernel, using this metric. This is not always the case with the other two approaches: the Gaussian kernel constructed with the Stein divergence, for instance, is only positive definite for certain choices of parameters [22], and the same is true with the affine-invariant metric, as can be numerically verified. Our contributions: In this work, we generalize the Log-Euclidean metric to the infinite-dimensional setting, mathematically, computationally, and empirically.
Our novel metric, termed the Log-Hilbert-Schmidt metric (or Log-HS for short), measures the distances between positive definite unitized Hilbert-Schmidt operators, which are scalar perturbations of Hilbert-Schmidt operators on a Hilbert space and which are infinite-dimensional generalizations of positive definite matrices. These operators have recently been shown to form an infinite-dimensional Riemann-Hilbert manifold by [14, 1, 15], who formulated the infinite-dimensional version of the affine-invariant metric from a purely mathematical viewpoint. While our Log-Hilbert-Schmidt metric framework includes the Log-Euclidean metric as a special case, the infinite-dimensional formulation is significantly different from its corresponding finite-dimensional version, as we demonstrate throughout the paper. In particular, one cannot obtain the infinite-dimensional formulas from the finite-dimensional ones by letting the dimension approach infinity. Computationally, we apply our abstract mathematical framework to compute distances between covariance operators on an RKHS induced by a positive definite kernel. From a kernel learning perspective, this is motivated by the fact that covariance operators defined on nonlinear features, which are obtained by mapping the original data into a high-dimensional feature space, can better capture input correlations than covariance matrices defined on the original data. This is a viewpoint that goes back to kernel PCA [21]. In our setting, we obtain closed-form expressions for the Log-Hilbert-Schmidt metric between covariance operators via the corresponding Gram matrices. Empirically, we apply our framework to the task of multi-class image classification. In our approach, the original features extracted from each input image are implicitly mapped into the RKHS induced by a positive definite kernel.
The covariance operator defined on the RKHS is then used as the representation for the image, and the distance between two images is the Log-Hilbert-Schmidt distance between their corresponding covariance operators. On several challenging datasets, our method significantly outperforms approaches based on covariance matrices computed directly on the original input features, including those using the Log-Euclidean metric, Stein and Jeffreys divergences. Related work: The approach most closely related to our current work is [26], which computed probabilistic distances in RKHS. This approach has recently been employed by [10] to compute Bregman divergences between RKHS covariance operators. There are two main theoretical issues with the approach in [26, 10]. The first issue is that it is assumed implicitly that the concepts of trace and determinant can be extended to any bounded linear operator on an infinite-dimensional Hilbert space H. This is not true in general, as the concepts of trace and determinant are only well-defined for certain classes of operators. Many quantities involved in the computation of the Bregman divergences in [10] are in fact infinite when dim(H) = ∞, which is the case if H is the Gaussian RKHS, and only cancel each other out in special cases¹. The second issue concerns the use of the Stein divergence by [10] to define the Gaussian kernel, which is not always positive definite, as discussed above. In contrast, the Log-HS metric formulation proposed in this paper is theoretically rigorous, and it is straightforward to define many positive definite kernels, including the Gaussian kernel, with this metric. Furthermore, our empirical results consistently outperform those of [10]. Organization: After some background material in Section 2, we describe the manifold of positive definite operators in Section 3.
Sections 4 and 5 form the core of the paper, where we develop the general framework for the Log-Hilbert-Schmidt metric together with the explicit formulas for the case of covariance operators on an RKHS. Empirical results for image classification are given in Section 6. The proofs of all mathematical results are given in the Supplementary Material.

2 Background

The Riemannian manifold of positive definite matrices: The manifold structure of Sym++(n) has been studied extensively, both mathematically and computationally. This study goes as far back as [18]; for more recent treatments see e.g. [16, 3, 19, 4]. The most commonly encountered Riemannian metric on Sym++(n) is the affine-invariant metric, in which the geodesic distance between two positive definite matrices A and B is given by

d(A, B) = ||log(A^{−1/2} B A^{−1/2})||F, (1)

where log denotes the matrix logarithm operation and || · ||F is a Euclidean norm on the space of symmetric matrices Sym(n). Following the classical literature, in this work we take || · ||F to be the Frobenius norm, which is induced by the standard inner product on Sym(n). From a practical viewpoint, the metric (1) tends to be computationally intensive, which is one of the main motivations for the Log-Euclidean metric of [2], in which the geodesic distance between A and B is given by

dlogE(A, B) = ||log(A) − log(B)||F. (2)

The main goal of this paper is to generalize the Log-Euclidean metric to what we term the Log-Hilbert-Schmidt metric between positive definite operators on an infinite-dimensional Hilbert space, and to apply this metric in particular to compute distances between covariance operators on an RKHS.

¹We will provide a theoretically rigorous formulation of the Bregman divergences between positive definite operators in a longer version of the present work.

Covariance operators: Let the input space X be an arbitrary non-empty set. Let x = [x1, . . . , xm] be a data matrix sampled from X, where m ∈ N is the number of observations.
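For concreteness, the two geodesic distances (1) and (2) above can be computed via eigendecompositions of the matrices involved. A minimal sketch (illustrative, not the paper's code), which also checks the congruence invariance of (1) and the fact that the two metrics coincide on commuting matrices:

```python
import numpy as np

def spd_fun(S, f):
    # Apply a scalar function to a symmetric positive definite matrix
    # through its eigendecomposition: S = V diag(w) V^T -> V diag(f(w)) V^T.
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def d_affine(A, B):
    # Eq.(1): || log(A^{-1/2} B A^{-1/2}) ||_F
    A_inv_sqrt = spd_fun(A, lambda w: w**-0.5)
    return np.linalg.norm(spd_fun(A_inv_sqrt @ B @ A_inv_sqrt, np.log), 'fro')

def d_logE(A, B):
    # Eq.(2): || log(A) - log(B) ||_F
    return np.linalg.norm(spd_fun(A, np.log) - spd_fun(B, np.log), 'fro')

rng = np.random.default_rng(1)
def rand_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

A, B, X = rand_spd(4), rand_spd(4), rng.standard_normal((4, 4))

# Affine invariance of Eq.(1) under the congruence A -> X^T A X:
print(d_affine(A, B), d_affine(X.T @ A @ X, X.T @ B @ X))   # equal

# The two metrics coincide when A and B commute, e.g. for diagonal matrices:
D1, D2 = np.diag([1.0, 2.0, 3.0, 4.0]), np.diag([2.0, 1.0, 5.0, 0.5])
print(d_affine(D1, D2), d_logE(D1, D2))                      # equal
```

The Log-Euclidean distance avoids the matrix inverse square root, which is one source of the computational advantage mentioned above.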
Let K be a positive definite kernel on X × X and HK its induced reproducing kernel Hilbert space (RKHS). Let Φ : X → HK be the corresponding feature map, which gives the (potentially infinite) mapped data matrix Φ(x) = [Φ(x1), . . . , Φ(xm)] of size dim(HK) × m in the feature space HK. The corresponding covariance operator for Φ(x) is defined to be

CΦ(x) = (1/m) Φ(x) Jm Φ(x)^T : HK → HK, (3)

where Jm is the centering matrix, defined by Jm = Im − (1/m) 1m 1m^T with 1m = (1, . . . , 1)^T ∈ R^m. The matrix Jm is symmetric, with rank(Jm) = m − 1, and satisfies Jm² = Jm. The covariance operator CΦ(x) can be viewed as a (potentially infinite) covariance matrix in the feature space HK, with rank at most m − 1. If X = R^n and K(x, y) = ⟨x, y⟩, then CΦ(x) = Cx, the standard n × n covariance matrix encountered in statistics.²

Regularization: Generally, covariance matrices may not be full-rank and thus may only be positive semi-definite. In order to apply the theory of Sym++(n), one needs to consider the regularized version (Cx + γI) for some γ > 0. In the infinite-dimensional setting, with dim(HK) = ∞, CΦ(x) is always rank-deficient and regularization is always necessary. With γ > 0, (CΦ(x) + γI) is strictly positive and invertible, both of which are needed to define the Log-Hilbert-Schmidt metric.

3 Positive definite unitized Hilbert-Schmidt operators

Throughout the paper, let H be a separable Hilbert space of arbitrary dimension. Let L(H) be the Banach space of bounded linear operators on H and Sym(H) the subspace of self-adjoint operators in L(H). We first describe in this section the manifold of positive definite unitized Hilbert-Schmidt operators on which the Log-Hilbert-Schmidt metric is defined. This manifold setting is motivated by the following two crucial differences between the finite and infinite-dimensional cases.
(A) Positive definite: If A ∈ Sym(H) and dim(H) = ∞, then in order for log(A) to be well-defined and bounded, it is not sufficient to require that all eigenvalues of A be strictly positive. Instead, it is necessary to require that all eigenvalues of A be bounded below by a positive constant (Section 3.1).

(B) Unitized Hilbert-Schmidt: The infinite-dimensional generalization of the Frobenius norm is the Hilbert-Schmidt norm. However, if dim(H) = ∞, the identity operator I is not Hilbert-Schmidt and would have infinite distance from any Hilbert-Schmidt operator. To have a satisfactory framework, it is necessary to enlarge the algebra of Hilbert-Schmidt operators to include I (Section 3.2).

These differences between the cases dim(H) = ∞ and dim(H) < ∞ are sharp and manifest themselves in the concrete formulas for the Log-Hilbert-Schmidt metric which we obtain in Sections 4.2 and 5. In particular, the formulas for the case dim(H) = ∞ are not obtainable from their corresponding finite-dimensional versions by letting dim(H) → ∞.

²One can also define CΦ(x) = (1/(m−1)) Φ(x) Jm Φ(x)^T. This should not make much practical difference if m is large.

3.1 Positive definite operators

Positive and strictly positive operators: Let us discuss the first crucial difference between the finite and infinite-dimensional settings. Recall that an operator A ∈ Sym(H) is said to be positive if ⟨Ax, x⟩ ≥ 0 for all x ∈ H. The eigenvalues of A, if they exist, are all nonnegative. If A is positive and ⟨Ax, x⟩ = 0 ⇐⇒ x = 0, then A is said to be strictly positive, and all its eigenvalues are positive. We denote the sets of all positive and strictly positive operators on H by Sym+(H) and Sym++(H), respectively. Let A ∈ Sym++(H) and assume that A is compact. Then A has a countable spectrum of positive eigenvalues {λk(A)}, k = 1, . . . , dim(H), counting multiplicities, with lim(k→∞) λk(A) = 0 if dim(H) = ∞.
Let {φk(A)}, k = 1, . . . , dim(H), denote the corresponding normalized eigenvectors; then

A = Σ (k=1 to dim(H)) λk(A) φk(A) ⊗ φk(A), (4)

where φk(A) ⊗ φk(A) : H → H is defined by (φk(A) ⊗ φk(A))w = ⟨w, φk(A)⟩ φk(A), w ∈ H. The logarithm of A is defined by

log(A) = Σ (k=1 to dim(H)) log(λk(A)) φk(A) ⊗ φk(A). (5)

Clearly, log(A) is bounded if and only if dim(H) < ∞, since for dim(H) = ∞ we have lim(k→∞) log(λk(A)) = −∞. Thus, when dim(H) = ∞, the condition that A be strictly positive is not sufficient for log(A) to be bounded. Instead, the following stronger condition is necessary.

Positive definite operators: A self-adjoint operator A ∈ L(H) is said to be positive definite (see e.g. [20]) if there exists a constant MA > 0 such that

⟨Ax, x⟩ ≥ MA ||x||² for all x ∈ H. (6)

The eigenvalues of A, if they exist, are bounded below by MA. This condition is equivalent to requiring that A be strictly positive and invertible, with A⁻¹ ∈ L(H). Clearly, if dim(H) < ∞, then strict positivity is equivalent to positive definiteness. Let P(H) denote the open cone of self-adjoint, positive definite, bounded operators on H, that is,

P(H) = {A ∈ L(H) : A* = A, ∃MA > 0 s.t. ⟨Ax, x⟩ ≥ MA ||x||² ∀x ∈ H}. (7)

Throughout the remainder of the paper, we use the following notation: A > 0 ⇐⇒ A ∈ P(H).

3.2 The Riemann-Hilbert manifold of positive definite unitized Hilbert-Schmidt operators

Let HS(H) denote the two-sided ideal of Hilbert-Schmidt operators on H in L(H), which is a Banach algebra with the Hilbert-Schmidt norm, defined by

||A||²HS = tr(A*A) = Σ (k=1 to dim(H)) λk(A*A). (8)

We now discuss the second crucial difference between the finite and infinite-dimensional settings. If dim(H) = ∞, then the identity operator I is not Hilbert-Schmidt, since ||I||HS = ∞. Thus, given γ ≠ µ > 0, we have ||log(γI) − log(µI)||HS = |log(γ) − log(µ)| ||I||HS = ∞; that is, even the distance between two different multiples of the identity operator is infinite.
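This divergence is easy to see on finite-dimensional truncations, where ||I_n||HS = √n. A small numerical illustration (not from the paper):

```python
import numpy as np

gamma, mu = 2.0, 3.0

# ||log(gamma I_n) - log(mu I_n)||_HS on an n-dimensional truncation
# equals |log(gamma) - log(mu)| * sqrt(n), which grows without bound:
for n in (4, 100, 10000):
    print(n, abs(np.log(gamma) - np.log(mu)) * np.sqrt(n))

# Direct check of the formula for small n:
n = 4
M = np.log(gamma) * np.eye(n) - np.log(mu) * np.eye(n)
d4 = np.linalg.norm(M, 'fro')

# Under the extended norm of Eq.(11) below, the identity component is kept
# separately, ||A + gamma I||_eHS^2 = ||A||_HS^2 + gamma^2, so the analogous
# distance |log(gamma) - log(mu)| is finite and dimension-independent:
d_ehs = abs(np.log(gamma) - np.log(mu))
print(d4, d_ehs)
```

This is precisely the problem that the unitized algebra of the next paragraph is designed to fix.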
This problem is resolved by considering the following extended (or unitized) Hilbert-Schmidt algebra [14, 1, 15]:

HR = {A + γI : A* = A, A ∈ HS(H), γ ∈ R}. (9)

This can be endowed with the extended Hilbert-Schmidt inner product

⟨A + γI, B + µI⟩eHS = tr(A*B) + γµ = ⟨A, B⟩HS + γµ, (10)

under which the scalar operators are orthogonal to the Hilbert-Schmidt operators. The corresponding extended Hilbert-Schmidt norm is given by

||A + γI||²eHS = ||A||²HS + γ², where A ∈ HS(H). (11)

If dim(H) < ∞, then we set || · ||eHS = || · ||HS, with ||A + γI||eHS = ||A + γI||HS.

Manifold of positive definite unitized Hilbert-Schmidt operators: Define

Σ(H) = P(H) ∩ HR = {A + γI > 0 : A* = A, A ∈ HS(H), γ ∈ R}. (12)

If (A + γI) ∈ Σ(H), then it has a countable spectrum {λk(A) + γ}, k = 1, . . . , dim(H), satisfying λk(A) + γ ≥ MA for some constant MA > 0. Thus (A + γI)⁻¹ exists and is bounded, and log(A + γI) as defined by (5) is well-defined and bounded, with log(A + γI) ∈ HR.

The main results of [15] state that when dim(H) = ∞, Σ(H) is an infinite-dimensional Riemann-Hilbert manifold, and the map log : Σ(H) → HR and its inverse exp : HR → Σ(H) are diffeomorphisms. The Riemannian distance between two operators (A + γI), (B + µI) ∈ Σ(H) is given by

d[(A + γI), (B + µI)] = ||log[(A + γI)^{−1/2}(B + µI)(A + γI)^{−1/2}]||eHS. (13)

This is the infinite-dimensional version of the affine-invariant metric (1)³.

4 Log-Hilbert-Schmidt metric

This section defines and develops the Log-Hilbert-Schmidt metric, which is the infinite-dimensional generalization of the Log-Euclidean metric (2). The general formulation presented in this section is then applied to RKHS covariance operators in Section 5.

4.1 The general setting

Consider the following operations on Σ(H):

(A + γI) ⊙ (B + µI) = exp(log(A + γI) + log(B + µI)), (14)
λ ⊛ (A + γI) = exp(λ log(A + γI)) = (A + γI)^λ, λ ∈ R.
(15)

Vector space structure on Σ(H): The key property of the operation ⊙ is that, unlike the usual operator product, it is commutative, making (Σ(H), ⊙) an abelian group and (Σ(H), ⊙, ⊛) a vector space, which is isomorphic to the vector space (HR, +, ·), as shown by the following.

Theorem 1. Under the two operations ⊙ and ⊛, (Σ(H), ⊙, ⊛) becomes a vector space, with ⊙ acting as vector addition and ⊛ acting as scalar multiplication. The zero element in (Σ(H), ⊙, ⊛) is the identity operator I, and the inverse of (A + γI) is (A + γI)⁻¹. Furthermore, the map ψ : (Σ(H), ⊙, ⊛) → (HR, +, ·) defined by

ψ(A + γI) = log(A + γI), (16)

is a vector space isomorphism, so that for all (A + γI), (B + µI) ∈ Σ(H) and λ ∈ R,

ψ((A + γI) ⊙ (B + µI)) = log(A + γI) + log(B + µI), ψ(λ ⊛ (A + γI)) = λ log(A + γI), (17)

where + and · denote the usual operator addition and multiplication operations, respectively.

Metric space structure on Σ(H): Motivated by the vector space isomorphism between (Σ(H), ⊙, ⊛) and (HR, +, ·) via the mapping ψ, the following is our generalization of the Log-Euclidean metric to the infinite-dimensional setting.

Definition 1. The Log-Hilbert-Schmidt distance between two operators (A + γI) ∈ Σ(H) and (B + µI) ∈ Σ(H) is defined to be

dlogHS[(A + γI), (B + µI)] = ||log[(A + γI) ⊙ (B + µI)⁻¹]||eHS. (18)

Remark 1. For our purposes in the current work, we focus on the Log-HS metric as defined above, based on the one-to-one correspondence between the algebraic structures of (Σ(H), ⊙, ⊛) and (HR, +, ·). An in-depth treatment of the Log-HS metric in connection with the manifold structure of Σ(H) will be provided in a longer version of the paper.

The following theorem shows that the Log-Hilbert-Schmidt distance satisfies all the axioms of a metric, making (Σ(H), dlogHS) a metric space. Furthermore, the squared Log-Hilbert-Schmidt distance decomposes uniquely into the sum of a squared Hilbert-Schmidt norm and a scalar term.

Theorem 2.
The Log-Hilbert-Schmidt distance as defined in (18) is a metric, making (Σ(H), dlogHS) a metric space. Let (A + γI) ∈Σ(H), (B + µI) ∈Σ(H). If dim(H) = ∞, then there exist unique operators A1, B1 ∈HS(H) ∩Sym(H) and scalars γ1, µ1 ∈R such that A + γI = exp(A1 + γ1I), B + µI = exp(B1 + µ1I), (19) and d2 logHS[(A + γI), (B + µI)] = ∥A1 −B1∥2 HS + (γ1 −µ1)2. (20) If dim(H) < ∞, then (19) and (20) hold with A1 = log(A+γI), B1 = log(B +µI), γ1 = µ1 = 0. 3We give a more detailed discussion of Eqs. (12) and (13) in the Supplementary Material. 5 Log-Euclidean metric: Theorem 2 states that when dim(H) < ∞, we have dlogHS[(A+γI), (B + µI)] = dlogE[(A + γI), (B + µI)]. We have thus recovered the Log-Euclidean metric as a special case of our framework. Hilbert space structure on (Σ(H), ⊙, ): Motivated by formula (20), whose right hand side is a square extended Hilbert-Schmidt distance, we now show that (Σ(H), ⊙, ) can be endowed with an inner product, under which it becomes a Hilbert space. Definition 2. Let (A+γI), (B +µI) ∈Σ(H). Let A1, B1 ∈HS(H)∩Sym(H) and γ1, µ1 ∈R be the unique operators and scalars, respectively, such that A + γI = exp(A1 + γ1I) and B + µI = exp(B1 + µ1I), as in Theorem 2. The Log-Hilbert-Schmidt inner product between (A + γI) and (B + µI) is defined by ⟨A + γI, B + µI⟩logHS = ⟨log(A + γI), log(B + µI)⟩eHS = ⟨A1, B1⟩HS + γ1µ1. (21) Theorem 3. The inner product ⟨, ⟩logHS as given in (21) is well-defined on (Σ(H), ⊙, ). Endowed with this inner product, (Σ(H), ⊙, , ⟨, ⟩logHS) becomes a Hilbert space. The corresponding Log-Hilbert-Schmidt norm is given by ||A + γI||2 logHS = || log(A + γI)||2 eHS = ||A1||2 HS + γ2 1. (22) In terms of this norm, the Log-Hilbert-Schmidt distance is given by dlogHS[(A + γI), (B + µI)] = (A + γI) ⊙(B + µI)−1 logHS . 
(23) Positive definite kernels defined with the Log-Hilbert-Schmidt metric: An important consequence of the Hilbert space structure of (Σ(H), ⊙, , ⟨, ⟩logHS) is that it is straightforward to generalize many positive definite kernels on Euclidean space to Σ(H) × Σ(H). Corollary 1. The following kernels defined on Σ(H) × Σ(H) are positive definite: K[(A + γI), (B + µI)] = (c + ⟨A + γI, B + µI⟩logHS)d, c > 0, d ∈N, (24) K[(A + γI), (B + µI)] = exp(−dp logHS[(A + γI), (B + µI)]/σ2), 0 < p ≤2. (25) 4.2 Log-Hilbert-Schmidt metric between regularized positive operators For our purposes in the present work, we focus on the following subset of Σ(H): Σ+(H) = {A + γI : A ∈HS(H) ∩Sym+(H) , γ > 0} ⊂Σ(H). (26) Examples of operators in Σ+(H) are the regularized covariance operators (CΦ(x) +γI) with γ > 0. In this case the formulas in Theorems 2 and 3 have the following concrete forms. Theorem 4. Assume that dim(H) = ∞. Let A, B ∈HS(H) ∩Sym+(H). Let γ, µ > 0. Then d2 logHS[(A + γI), (B + µI)] = || log( 1 γ A + I) −log( 1 µB + I)||2 HS + (log γ −log µ)2. (27) Their Log-Hilbert-Schmidt inner product is given by ⟨(A + γI), (B + µI)⟩logHS = ⟨log( 1 γ A + I), log( 1 µB + I)⟩HS + (log γ)(log µ). (28) Finite dimensional case: As a consequence of the differences between the cases dim(H) < ∞and dim(H) = ∞, we have different formulas for the case dim(H) < ∞, which depend on dim(H) and which are surprisingly more complicated than in the case dim(H) = ∞. Theorem 5. Assume that dim(H) < ∞. Let A, B ∈Sym+(H). Let γ, µ > 0. Then d2 logHS[(A + γI), (B + µI)] = || log(A γ + I) −log(B µ + I)||2 HS +2(log γ −log µ)tr[log(A γ + I) −log(B µ + I)] + (log γ −log µ)2 dim(H). (29) The Log-Hilbert-Schmidt inner product between (A + γI) and (B + µI) is given by ⟨(A + γI), (B + µI)⟩logHS = ⟨log(A γ + I), log(B µ + I)⟩HS +(log γ)tr[log(B µ + I)] + (log µ)tr[log(A γ + I)] + (log γ log µ) dim(H). (30) 6 5 Log-Hilbert-Schmidt metric between regularized covariance operators Let X be an arbitrary non-empty set. 
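Before specializing to this setting, the finite-dimensional identity of Theorem 5 above is easy to check numerically. The following sketch (NumPy; our own illustration, not code from the paper) compares the direct finite-dimensional Log-HS distance ||log(A + γI) − log(B + µI)||_F^2 of Theorem 2 with the expansion in Eq. (29):

```python
import numpy as np

def spd_logm(S):
    # Principal matrix logarithm of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def random_spd(n, rng):
    X = rng.standard_normal((n, n))
    return X @ X.T + n * np.eye(n)   # well-conditioned SPD matrix

rng = np.random.default_rng(0)
n, gam, mu = 4, 0.5, 2.0
A, B = random_spd(n, rng), random_spd(n, rng)

# Direct finite-dimensional Log-HS distance (Theorem 2, dim(H) < infinity).
lhs = np.linalg.norm(spd_logm(A + gam * np.eye(n)) - spd_logm(B + mu * np.eye(n)), 'fro') ** 2

# Expansion of Theorem 5, Eq. (29).
LA = spd_logm(A / gam + np.eye(n))
LB = spd_logm(B / mu + np.eye(n))
lg = np.log(gam) - np.log(mu)
rhs = np.linalg.norm(LA - LB, 'fro') ** 2 + 2 * lg * np.trace(LA - LB) + lg ** 2 * n

assert abs(lhs - rhs) < 1e-8 * max(1.0, lhs)
```

The two sides agree because log(A + γI) = log(A/γ + I) + (log γ)I for γ > 0, which is exactly the decomposition behind Eq. (29).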
In this section, we apply the general results of Section 4 to compute the Log-Hilbert-Schmidt distance between covariance operators on an RKHS induced by a positive definite kernel K on X ×X. In this case, we have explicit formulas for dlogHS and the inner product ⟨, ⟩logHS via the corresponding Gram matrices. Let x = [xi]m i=1, y = [yi]m i=1, m ∈N, be two data matrices sampled from X and CΦ(x), CΦ(y) be the corresponding covariance operators induced by the kernel K, as defined in Section 2. Let K[x], K[y], and K[x, y] be the m × m Gram matrices defined by (K[x])ij = K(xi, xj), (K[y])ij = K(yi, yj), (K[x, y])ij = K(xi, yj), 1 ≤i, j ≤m. Let A = 1 √γmΦ(x)Jm : Rm →HK, B = 1 √µmΦ(y)Jm : Rm →HK, so that AT A = 1 γmJmK[x]Jm, BT B = 1 µmJmK[y]Jm, AT B = 1 √γµmJmK[x, y]Jm. (31) Let NA and NB be the numbers of nonzero eigenvalues of AT A and BT B, respectively. Let ΣA and ΣB be the diagonal matrices of size NA × NA and NB × NB, and UA and UB be the matrices of size m × NA and m × NB, respectively, which are obtained from the spectral decompositions 1 γmJmK[x]Jm = UAΣAU T A, 1 µmJmK[y]Jm = UBΣBU T B. (32) In the following, let ◦denote the Hadamard (element-wise) matrix product. Define CAB = 1T NA log(INA + ΣA)Σ−1 A (U T AAT BUB ◦U T AAT BUB)Σ−1 B log(INB + ΣB)1NB. (33) Theorem 6. Assume that dim(HK) = ∞. Let γ > 0, µ > 0. Then d2 logHS[(CΦ(x) + γI), (CΦ(y) + µI)] = tr[log(INA + ΣA)]2 + tr[log(INB + ΣB)]2 −2CAB + (log γ −log µ)2. (34) The Log-Hilbert-Schmidt inner product between (CΦ(x) + γI) and (CΦ(y) + µI) is ⟨(CΦ(x) + γI), (CΦ(y) + µI)⟩logHS = CAB + (log γ)(log µ). (35) Theorem 7. Assume that dim(HK) < ∞. Let γ > 0, µ > 0. Then d2 logHS[(CΦ(x) + γI), (CΦ(y) + µI)] = tr[log(INA + ΣA)]2 + tr[log(INB + ΣB)]2 −2CAB +2(log γ µ)(tr[log(INA + ΣA)] −tr[log(INB + ΣB)]) + (log γ µ)2 dim(HK). 
(36) The Log-Hilbert-Schmidt inner product between (CΦ(x) + γI) and (CΦ(y) + µI) is ⟨(CΦ(x) + γI), (CΦ(y) + µI)⟩_logHS = C_AB + (log µ)tr[log(I_NA + Σ_A)] + (log γ)tr[log(I_NB + Σ_B)] + (log γ)(log µ) dim(H_K). (37) 6 Experimental results This section demonstrates the empirical performance of the Log-HS metric on the task of multi-category image classification. For each input image, the original features extracted from the image are implicitly mapped into the infinite-dimensional RKHS induced by the Gaussian kernel. The covariance operator defined on the RKHS is called the GaussianCOV and is used as the representation for the image. In a classification algorithm, the distance between two images is the Log-HS distance between their corresponding GaussianCOVs. This is compared with the directCOV representation, that is, covariance matrices defined using the original input features. In all of the experiments, we employed LIBSVM [7] as the classification method. The following algorithms were evaluated in our experiments: Log-E (directCOV and Gaussian SVM using the Log-Euclidean metric), Log-HS (GaussianCOV and Gaussian SVM using the Log-HS metric), Log-HS∆ (GaussianCOV and SVM with the Laplacian kernel K(x, y) = exp(−||x − y||/σ)). For all experiments, the kernel parameters were chosen by cross-validation, while the regularization parameters were fixed to be γ = µ = 10^{-8}. We also compare with the empirical results of the different algorithms in [10], namely J-SVM and S-SVM (SVM with the Jeffreys and Stein divergences between directCOVs, respectively), JH-SVM and SH-SVM (SVM with the Jeffreys and Stein divergences between GaussianCOVs, respectively), and results of the Covariance Discriminant Learning (CDL) technique of [25], which can be considered the state of the art for COV-based classification. All results are reported in Table 1.
Table 1: Results over all the datasets

Methods         | Kylberg texture | KTH-TIPS2b     | KTH-TIPS2b (RGB) | Fish
GaussianCOV:
  Log-HS        | 92.58% (±1.23)  | 81.91% (±3.3)  | 79.94% (±4.6)    | 56.74% (±2.87)
  Log-HS∆       | 92.56% (±1.26)  | 81.50% (±3.90) | 77.53% (±5.2)    | 56.43% (±3.02)
  SH-SVM [10]   | 91.36% (±1.27)  | 80.10% (±4.60) | -                | -
  JH-SVM [10]   | 91.25% (±1.33)  | 79.90% (±3.80) | -                | -
directCOV:
  Log-E         | 87.49% (±1.54)  | 74.11% (±7.41) | 74.13% (±6.1)    | 42.70% (±3.45)
  S-SVM [10]    | 81.27% (±1.07)  | 78.30% (±4.84) | -                | -
  J-SVM [10]    | 82.19% (±1.30)  | 74.70% (±2.81) | -                | -
  CDL [25]      | 79.87% (±1.06)  | 76.30% (±5.10) | -                | -

Texture classification: For this task, we used the Kylberg texture dataset [13], which contains 28 texture classes of different natural and man-made surfaces, with each class consisting of 160 images. For this dataset, we followed the validation protocol of [10], where each image is resized to a dimension of 128 × 128, with m = 1024 observations computed on a coarse grid (i.e., every 4 pixels in the horizontal and vertical direction). At each point, we extracted a set of n = 5 low-level features F(x, y) = [I_{x,y}, |I_x|, |I_y|, |I_xx|, |I_yy|], where I, I_x, I_y, I_xx and I_yy are the intensity and the first- and second-order derivatives of the texture image. We randomly selected 5 images in each class for training and used the remaining ones as test data, repeating the entire procedure 10 times. We report the mean and the standard deviation values for the classification accuracies for the different experiments over all 10 random training/testing splits. Material classification: For this task, we used the KTH-TIPS2b dataset [6], which contains images of 11 materials captured under 4 different illuminations, in 3 poses, and at 9 scales. The total number of images per class is 108. We applied the same protocol as used for the previous dataset [10], extracting 23 low-level dense features: F(x, y) = [R_{x,y}, G_{x,y}, B_{x,y}, G^{0,0}_{x,y}, ..., G^{4,5}_{x,y}], where R_{x,y}, G_{x,y}, B_{x,y} are the color intensities and G^{o,s}_{x,y} are the 20 Gabor filters at 4 orientations and 5 scales.
We report the mean and the standard deviation values for all the 4 splits of the dataset. Fish recognition: The third dataset used is the Fish Recognition dataset [5]. The fish data are acquired from a live video dataset, resulting in 27370 verified fish images. The whole dataset is divided into 23 classes. The number of images per class ranges from 21 to 12112, with a medium resolution of roughly 150 × 120 pixels. The significant variations in color, pose and illumination inside each class make this dataset very challenging. We apply the same protocol as used for the previous datasets, extracting the 3 color intensities from each image to show the effectiveness of our method: F(x, y) = [R_{x,y}, G_{x,y}, B_{x,y}]. We randomly selected 5 images from each class for training and 15 for testing, repeating the entire procedure 10 times. Discussion of results: As one can observe in Table 1, on all of the datasets the Log-HS framework, operating on GaussianCOVs, significantly outperforms approaches based on directCOVs computed using the original input features, including those using the Log-Euclidean, Stein and Jeffreys divergences. Across all datasets, our improvement over the Log-Euclidean metric is up to 14% in accuracy. This is consistent with kernel-based learning theory: GaussianCOVs, defined on the infinite-dimensional RKHS, can better capture nonlinear input correlations than directCOVs, as we expected. To the best of our knowledge, our results in the Texture and Material classification experiments are the new state-of-the-art results for these datasets. Furthermore, our results, which are obtained using a theoretically rigorous framework, also consistently outperform those of [10]. The computational complexity of our framework, its two-layer kernel machine interpretation, and other discussions are given in the Supplementary Material.
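The Gram-matrix expressions of Theorems 6 and 7 translate directly into code. The sketch below (NumPy; our own illustration under our own naming, not the authors' implementation) evaluates the squared Log-HS distance of Theorem 7 for a linear kernel K(u, v) = ⟨u, v⟩, where the covariance operator reduces to an ordinary centered covariance matrix, and checks it against the direct finite-dimensional formula ||log(C_x + γI) − log(C_y + µI)||_F^2:

```python
import numpy as np

def spd_logm(S):
    # Principal matrix logarithm of a symmetric positive definite matrix.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def log_hs_dist2_gram(Kx, Ky, Kxy, gam, mu, dim):
    """Squared Log-HS distance from Gram matrices (Theorem 7, dim(H_K) < infinity)."""
    m = Kx.shape[0]
    J = np.eye(m) - np.ones((m, m)) / m                    # centering matrix J_m
    AtA = J @ Kx @ J / (gam * m)                           # Eq. (31)
    BtB = J @ Ky @ J / (mu * m)
    AtB = J @ Kxy @ J / (np.sqrt(gam * mu) * m)

    def nonzero_spectrum(M):                               # Eq. (32)
        w, U = np.linalg.eigh((M + M.T) / 2)
        keep = w > 1e-12
        return w[keep], U[:, keep]

    sA, UA = nonzero_spectrum(AtA)
    sB, UB = nonzero_spectrum(BtB)
    Mab = UA.T @ AtB @ UB
    # C_AB of Eq. (33): sum_{ij} [log(1+s_Ai)/s_Ai][log(1+s_Bj)/s_Bj] Mab_{ij}^2
    cAB = (np.log1p(sA) / sA) @ (Mab * Mab) @ (np.log1p(sB) / sB)
    lg = np.log(gam / mu)
    return (np.sum(np.log1p(sA) ** 2) + np.sum(np.log1p(sB) ** 2) - 2 * cAB
            + 2 * lg * (np.sum(np.log1p(sA)) - np.sum(np.log1p(sB)))
            + lg ** 2 * dim)

# Check against the direct formula for the linear kernel, where C_Phi(x) is the
# ordinary centered covariance matrix of the columns of X.
rng = np.random.default_rng(1)
d, m, gam, mu = 5, 12, 0.5, 0.2
X, Y = rng.standard_normal((d, m)), rng.standard_normal((d, m))
J = np.eye(m) - np.ones((m, m)) / m
Cx, Cy = X @ J @ X.T / m, Y @ J @ Y.T / m
direct = np.linalg.norm(spd_logm(Cx + gam * np.eye(d)) - spd_logm(Cy + mu * np.eye(d)), 'fro') ** 2
gram = log_hs_dist2_gram(X.T @ X, Y.T @ Y, X.T @ Y, gam, mu, d)
assert abs(direct - gram) < 1e-6 * max(1.0, direct)
```

For a nonlinear kernel such as the Gaussian used in the experiments, the same function applies unchanged; only the Gram matrices K[x], K[y], K[x, y] are computed from the kernel instead of from inner products.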
7 Conclusion and future work We have presented a novel mathematical and computational framework, namely the Log-Hilbert-Schmidt metric, that generalizes the Log-Euclidean metric between SPD matrices to the infinite-dimensional setting. Empirically, on the task of image classification, where each image is represented by an infinite-dimensional RKHS covariance operator, the Log-HS framework substantially outperforms other approaches based on covariance matrices computed directly on the original input features. Given the widespread use of covariance matrices, we believe that the Log-HS framework can be potentially useful for many problems in machine learning, computer vision, and other applications. Many more properties of the Log-HS metric, along with further applications, will be reported in a longer version of the current paper and in future work. References [1] E. Andruchow and A. Varela. Non positively curved metric in the space of positive definite infinite matrices. Revista de la Union Matematica Argentina, 48(1):7–15, 2007. [2] V. Arsigny, P. Fillard, X. Pennec, and N. Ayache. Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIAM J. on Matrix An. and App., 29(1):328–347, 2007. [3] R. Bhatia. Positive Definite Matrices. Princeton University Press, 2007. [4] D. A. Bini and B. Iannazzo. Computing the Karcher mean of symmetric positive definite matrices. Linear Algebra and its Applications, 438(4):1700–1710, 2013. [5] B. J. Boom, J. He, S. Palazzo, P. X. Huang, C. Beyan, H.-M. Chou, F.-P. Lin, C. Spampinato, and R. B. Fisher. A research tool for long-term and continuous analysis of fish assemblage in coral-reefs using underwater camera footage. Ecological Informatics, in press, 2013. [6] B. Caputo, E. Hayman, and P. Mallikarjuna. Class-specific material categorisation. In ICCV, pages 1597–1604, 2005. [7] C.-C. Chang and C.-J. Lin. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst.
Technol., 2(3):27:1–27:27, May 2011. [8] A. Cherian, S. Sra, A. Banerjee, and N. Papanikolopoulos. Jensen-Bregman LogDet divergence with application to efficient similarity search for covariance matrices. TPAMI, 35(9):2161–2174, 2013. [9] I.L. Dryden, A. Koloydenko, and D. Zhou. Non-Euclidean statistics for covariance matrices, with applications to diffusion tensor imaging. Annals of Applied Statistics, 3:1102–1123, 2009. [10] M. Harandi, M. Salzmann, and F. Porikli. Bregman divergences for infinite dimensional covariance matrices. In CVPR, 2014. [11] S. Jayasumana, R. Hartley, M. Salzmann, Hongdong Li, and M. Harandi. Kernel methods on the Riemannian manifold of symmetric positive definite matrices. In CVPR, 2013. [12] B. Kulis, M. A. Sustik, and I. S. Dhillon. Low-rank kernel learning with Bregman matrix divergences. The Journal of Machine Learning Research, 10:341–376, 2009. [13] G. Kylberg. The Kylberg texture dataset v. 1.0. External report (Blue series) 35, Centre for Image Analysis, Swedish University of Agricultural Sciences and Uppsala University, 2011. [14] G. Larotonda. Geodesic Convexity, Symmetric Spaces and Hilbert-Schmidt Operators. PhD thesis, Universidad Nacional de General Sarmiento, Buenos Aires, Argentina, 2005. [15] G. Larotonda. Nonpositive curvature: A geometrical approach to Hilbert–Schmidt operators. Differential Geometry and its Applications, 25:679–700, 2007. [16] J. D. Lawson and Y. Lim. The geometric mean, matrices, metrics, and more. The American Mathematical Monthly, 108(9):797–812, 2001. [17] P. Li, Q. Wang, W. Zuo, and L. Zhang. Log-Euclidean kernels for sparse representation and dictionary learning. In ICCV, 2013. [18] G.D. Mostow. Some new decomposition theorems for semi-simple groups. Memoirs of the American Mathematical Society, 14:31–54, 1955. [19] X. Pennec, P. Fillard, and N. Ayache. A Riemannian framework for tensor computing. International Journal of Computer Vision, 66(1):41–66, 2006. [20] W.V. Petryshyn. 
Direct and iterative methods for the solution of linear operator equations in Hilbert spaces. Transactions of the American Mathematical Society, 105:136–175, 1962. [21] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput., 10(5), July 1998. [22] S. Sra. A new metric on the manifold of kernel matrices with application to matrix geometric means. In NIPS, 2012. [23] D. Tosato, M. Spera, M. Cristani, and V. Murino. Characterizing humans on Riemannian manifolds. TPAMI, 35(8):1972–1984, Aug 2013. [24] O. Tuzel, F. Porikli, and P. Meer. Pedestrian detection via classification on Riemannian manifolds. TPAMI, 30(10):1713–1727, 2008. [25] R. Wang, H. Guo, L. S. Davis, and Q. Dai. Covariance discriminative learning: A natural and efficient approach to image set classification. In CVPR, pages 2496–2503, 2012. [26] S. K. Zhou and R. Chellappa. From sample similarity to ensemble similarity: Probabilistic distance measures in reproducing kernel Hilbert space. TPAMI, 28(6):917–929, 2006.
Quantized Kernel Learning for Feature Matching Danfeng Qin ETH Z¨urich Xuanli Chen TU Munich Matthieu Guillaumin ETH Z¨urich Luc Van Gool ETH Z¨urich {qind, guillaumin, vangool}@vision.ee.ethz.ch, xuanli.chen@tum.de Abstract Matching local visual features is a crucial problem in computer vision and its accuracy greatly depends on the choice of similarity measure. As it is generally very difficult to design by hand a similarity or a kernel perfectly adapted to the data of interest, learning it automatically with as few assumptions as possible is preferable. However, available techniques for kernel learning suffer from several limitations, such as restrictive parametrization or scalability. In this paper, we introduce a simple and flexible family of non-linear kernels which we refer to as Quantized Kernels (QK). QKs are arbitrary kernels in the index space of a data quantizer, i.e., piecewise constant similarities in the original feature space. Quantization allows to compress features and keep the learning tractable. As a result, we obtain state-of-the-art matching performance on a standard benchmark dataset with just a few bits to represent each feature dimension. QKs also have explicit non-linear, low-dimensional feature mappings that grant access to Euclidean geometry for uncompressed features. 1 Introduction Matching local visual features is a core problem in computer vision with a vast range of applications such as image registration [28], image alignment and stitching [6] and structure-from-motion [1]. To cope with the geometric transformations and photometric distorsions that images exhibit, many robust feature descriptors have been proposed. In particular, histograms of oriented gradients such as SIFT [15] have proved successful in many of the above tasks. Despite these results, they are inherently limited by their design choices. 
Hence, we have witnessed an increasing amount of work focusing on automatically learning visual descriptors from data via discriminative embeddings [11, 4] or hyper-parameter optimization [5, 21, 23, 22]. A dual aspect of visual description is the measure of visual (dis-)similarity, which is responsible for deciding whether a pair of features matches or not. In image registration, retrieval and 3D reconstruction, for instance, nearest neighbor search builds on such measures to establish point correspondences. Thus, the choice of similarity or kernel impacts the performance of a system as much as the choice of visual features [2, 16, 18]. Designing a good similarity measure for matching is difficult and commonly used kernels such as the linear, intersection, χ2 and RBF kernels are not ideal as their inherent properties (e.g., stationarity, homogeneity) may not fit the data well. Existing techniques for automatically learning similarity measures suffer from different limitations. Metric learning approaches [25] learn to project the data to a lower-dimensional and more discriminative space where the Euclidean geometry can be used. However, these methods are inherently linear. Multiple Kernel Learning (MKL) [3] is able to combine multiple base kernels in an optimal way, but its complexity limits the amount of data that can be used and forces the user to pre-select or design a small number of kernels that are likely to perform well. Additionally, the resulting kernel may not be easily represented in a reasonably small Euclidean space. This is problematic, as many efficient algorithms (e.g. approximate nearest neighbor techniques) heavily rely on Euclidean geometry and have non-intuitive behavior in higher dimensions. 
1 In this paper, we introduce a simple yet powerful family of kernels, Quantized Kernels (QK), which (a) model non-linearities and heterogeneities in the data, (b) lead to compact representations that can be easily decompressed into a reasonably-sized Euclidean space and (c) are efficient to learn so that large-scale data can be exploited. In essence, we build on the fact that vector quantizers project data into a finite set of N elements, the index space, and on the simple observation that kernels on finite sets are fully specified by the N×N Gram matrix of these elements (the kernel matrix), which we propose to learn directly. Thus, QKs are piecewise constant but otherwise arbitrary, making them very flexible. Since the learnt kernel matrices are positive semi-definite, we directly obtain the corresponding explicit feature mappings and exploit their potential low-rankness. In the remainder of the paper, we first further discuss related work (Sec. 2), then present QKs in detail (Sec. 3). As important contributions, we show how to efficiently learn the quantizer and the kernel matrix so as to maximize the matching performance (Sec. 3.2), using an exact linear-time inference subroutine (Sec. 3.3), and devise practical techniques for users to incorporate knowledge about the structure of the data (Sec. 3.4) and reduce the number of parameters of the system. Our experiments in Sec. 4 show that our kernels yield state-of-the-art performance on a standard feature matching benchmark and improve over kernels used in the literature for several descriptors, including one based on metric learning. Our compressed features are very compact, using only 1 to 4 bits per dimension of the original features. For instance, on SIFT descriptors, our QK yields about 10% improvement on matching compared to the dot product, while compressing features by a factor 8. 
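The construction just outlined fits in a few lines. Below is a minimal one-dimensional sketch (NumPy; the bin boundaries and the PSD matrix K are arbitrary toy choices for illustration, not values from the paper) of a quantized kernel as defined in Eq. (1) of Sec. 3, together with its explicit feature map obtained from a factorization K = PᵀP:

```python
import numpy as np

# A toy interval quantizer on R with N = 4 bins (boundaries chosen arbitrarily).
boundaries = np.array([-1.0, 0.0, 1.0])                 # b_0 = -inf, b_N = +inf implied
def q(x):
    return np.searchsorted(boundaries, x, side='left')  # bin index in {0, ..., 3}

# An arbitrary PSD kernel matrix K on the index space, built as K = P^T P.
rng = np.random.default_rng(0)
P = rng.standard_normal((2, 4))                         # rank N' = 2 <= N
K = P.T @ P

def k_q(x, y):
    # Quantized kernel: piecewise constant on R x R.
    return K[q(x), q(y)]

def psi(x):
    # Explicit feature map: psi(x) = P * phi_q(x), i.e., column q(x) of P.
    return P[:, q(x)]

assert np.isclose(k_q(0.3, -2.0), psi(0.3) @ psi(-2.0))  # kernel = inner product in R^{N'}
assert k_q(0.3, 0.7) == k_q(0.4, 0.9)                    # constant within quantization bins
```

Compressed storage keeps only the 2-bit index q(x) per value; decompression recovers the low-dimensional Euclidean embedding psi(x).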
2 Related work Our work relates to a vast literature on kernel selection and tuning, descriptor, similarity, distance and kernel learning. We present a selection of such works below. Basic kernels and kernel tuning. A common approach for choosing a kernel is to pick one from the literature: dot product, Gaussian RBF, intersection [16], χ2, Hellinger, etc. These generic kernels have been extensively studied [24] and have properties such as homogeneity or stationarity. These properties may be inadequate for the data of interest and thus the kernels will not yield optimal performance. Efficient yet approximate versions of such kernels [9, 20, 24] are similarly inadequate. Descriptor learning. Early work on descriptor learning improved SIFT by exploring its parameter space [26]. Later, automatic parameter selection was proposed with a non-convex objective [5]. Recently, significant improvements in local description for matching have been obtained by optimizing feature encoding [4] and descriptor pooling [21, 23]. These works maximize the matching performance directly via convex optimization [21] or boosting [23]. As we show in our experiments, our approach improves matching even for such optimized descriptors. Distance, similarity and kernel learning. Mahalanobis metrics (e.g., [25]) are probably the most widely used family of (dis-)similarities in supervised settings. They extend the Euclidean metric by accounting for correlations between input dimensions and are equivalent to projecting data to a new, potentially smaller, Euclidean space. Learning the projection improves discrimination and compresses feature vectors, but the projection is inherently linear.1 There are several attempts to learn more powerful non-linear kernels from data. Multiple Kernel Learning (MKL) [3] operates on a parametric family of kernels: it learns a convex combination of a few base kernels so as to maximize classification accuracy. 
Recent advances now allow combining thousands of kernels in MKL [17] or exploiting specialized families of kernels to derive faster algorithms [19]. In that work, the authors combine binary base kernels based on randomized indicator functions but restricted them to XNOR-like kernels. Our QK framework can also be seen as an efficient and robust MKL on a specific family of binary base kernels. However, our binary base kernels originate from more general quantizations: they correspond to their regions of constantness. As a consequence, the resulting optimization problem is also more involved and thus calls for approximate solutions. In parallel to MKL approaches, Non-Parametric Kernel Learning (NPKL) [10] has emerged as a flexible kernel learning alternative. Without any assumption on the form of the kernel, these methods aim at learning the Gram matrix of the data directly. The optimization problem is a semi-definite program whose size is quadratic in the number of samples. Scalability is therefore an issue, and approximation techniques must be used to compute the kernel on unobserved data. Like NPKL, we learn the values of the kernel matrix directly. However, we do it in the index space instead of the original space. (Footnote 1: Metric learning can be kernelized, but then one has to choose the kernel.) Hence, we restrict our family of kernels to piecewise constant ones (see footnote 2), but, contrary to
3.3 our scheme for quantization optimization for a single dimensional feature and how to generalize it to higher dimensions in Sec. 3.4. 3.1 Definition and properties Formally, quantized kernels QKD N are the set of kernels kq on RD×RD such that: ∃q : RD 7→{1, . . . , N}, ∃K ∈RN×N ⪰0, ∀x, y ∈RD, kq(x, y) = K(q(x), q(y)), (1) where q is a quantization function which projects x ∈RD to the finite index space {1, . . . , N}, and K ⪰0 denotes that K is a positive semi-definite (PSD) matrix. As discussed above, quantized kernels are an efficient parametrization of piecewise constant functions, where q defines the regions of constantness. Moreover, the N ×N matrix K is unique for a given choice of kq, as it simply accounts for the N(N+1)/2 possible values of the kernel and is the Gram matrix of the N elements of the index space. We can also see q as a 1-of-N coding feature map ϕq, such that: kq(x, y) = K(q(x), q(y)) = ϕq(x)⊤Kϕq(y). (2) The components of the matrix K fully parametrize the family of quantized kernels based on q, and it is a PSD matrix if and only if kq is a PSD kernel. An explicit feature mapping of kq is easily computed from the Cholesky decomposition of the PSD matrix K = P⊤P: kq(x, y) = ϕq(x)⊤Kϕq(y) = ψP q (x), ψP q (y) , (3) where ψP q (x) = Pϕq(x). It is of particular interest to limit the rank N ′ ≤N of K, and hence the number of rows in P. In their compressed form, vectors require only log2(N) bits of memory for storing q(x) and they can be decompressed in RN ′ using Pϕq(x). Not only is this decompressed vector smaller than one based on ϕq, but it is also associated with the Euclidean geometry rather than the kernel one. This allows the exploitation of the large literature of efficient methods specialized to Euclidean spaces. 3.2 Learning quantized kernels In this section, we describe a general alternating algorithm to learn a quantized kernel kq for feature matching. 
This problem can be formulated as quadruple-wise constraints of the following form: kq(x, y) > kq(u, v), ∀(x, y) ∈ P, ∀(u, v) ∈ N, (4) where P denotes the set of positive feature pairs, and N the negative one. The positive set contains feature pairs that should be visually matched, while the negative pairs are mismatches. We adopt a large-margin formulation of the above constraints using the trace-norm regularization ||·||_* on K, which is the tightest convex surrogate to low-rank regularization [8]. Using M training pairs {(xj, yj)}_{j=1...M}, we obtain the following optimization problem: argmin_{K⪰0, q∈Q^D_N} E(K, q) = (λ/2)||K||_* + Σ_{j=1}^{M} max(0, 1 − lj φq(xj)ᵀ K φq(yj)), (5) where Q^D_N denotes the set of quantizers q : R^D → {1, . . . , N}, and the pair label lj ∈ {−1, 1} denotes whether the feature pair (xj, yj) is in N or P, respectively. The parameter λ controls the trade-off between the regularization and the empirical loss. Solving Eq. (5) directly is intractable. We thus propose to alternate between the optimization of K and q. We describe the former below, and the latter in the next section. (Footnote 2: As any continuous function on an interval is the uniform limit of a series of piecewise constant functions, this assumption does not inherently limit the flexibility of the family.) Optimizing K with fixed q. When fixing q in Eq. (5), the objective function becomes convex in K but is not differentiable, so we resort to stochastic sub-gradient descent for optimization. Similar to [21], we used Regularised Dual Averaging (RDA) [27] to optimize K iteratively. At iteration t + 1, the kernel matrix K_{t+1} is updated with the following rule: K_{t+1} = Π(−(√t/γ)(Ḡ_t + λI)), (6) where γ > 0 and Ḡ_t = (1/t) Σ_{t′=1}^{t} G_{t′} is the rolling average of the subgradients G_{t′} of the loss computed at step t′ from one sample pair, I is the identity matrix, and Π is the projection onto the PSD cone. 3.3 Interval quantization optimization for a single dimension To optimize an objective like Eq.
(5) when K is fixed, we must consider how to design and parametrize the elements of QD N. In this work, we adopt interval quantizers, and in this section we assume D=1, i.e., restrict the study of quantization to R. Interval quantizers. An interval quantizer q over R is defined by a set of N + 1 boundaries bi ∈R with b0 = −∞, bN = ∞and q(x) = i if and only if bi−1 < x ≤bi. Importantly, interval quantizers are monotonous, x≤y ⇒q(x)≤q(y), and boundaries bi can be set to any value between maxq(x)=i x (included) and minq(x)=i+1 x (excluded). Therefore, Eq. (5) can be viewed as a data labelling problem, where each value xj or yj takes a label in [1, N], with a monotonicity constraint. Thus, let us now consider the graph (V, E) where nodes V = {vt}t=1...2M represent the list of all xj and yj in a sorted order and the edges E ={(vs, vt)} connect all pairs (xj, yj). Then Eq. (5) with fixed K is equivalent to the following discrete pairwise energy minimization problem: argmin q∈[1,N]2M E′(q) = X (s,t)∈E Est(q(vs), q(vt)) + 2M X t=2 Ct(q(vt−1), q(vt)), (7) where Est(q(vs), q(vt)) = Ej(q(xj), q(yj)) = max (0, 1 −ljK(q(xj), q(yj))) and Ct is ∞for q(vt) < q(vt−1) and 0 otherwise (i.e., it encodes the monotonicity of q in the sorted list of vt). The optimization of Eq. (7) is an NP-hard problem as the energies Est are arbitrary and the graph does not have a bounded treewidth, in general. Hence, we iterate the individual optimization of each of the boundaries using an exact linear-time algorithm, which we present below. Exact linear-time optimization of a binary interval quantizer. We now consider solving equations of the form of Eq. (7) for the binary label case (N = 2). The main observation is that the monotonicity constraint means that labels are 1 until a certain node t and then 2 from node t + 1, and this switch can occur only once on the entire sequence, where vt ≤b1 < vt+1. This means that there are only 2M +1 possible labellings and we can order them from (1, . . . 
, 1), (1, . . . , 1, 2), up to (2, . . . , 2). A naïve algorithm consists in computing the 2M + 1 energies explicitly. Since each energy computation is linear in the number of edges, this results in a quadratic complexity overall. A linear-time algorithm exists. It stems from the observation that the energies of two consecutive labellings (e.g., switching the label of vt from 1 to 2) differ only by a constant number of terms: E(q(vt−1)=1, q(vt)=2, q(vt+1)=2) = E(q(vt−1)=1, q(vt)=1, q(vt+1)=2) + Ct(1, 2) − Ct(1, 1) + Ct+1(2, 2) − Ct+1(1, 2) + Est(q(vs), 2) − Est(q(vs), 1), (8) where, w.l.o.g., we have assumed (s, t) ∈ E. After finding the optimal labelling, i.e., finding the label change (vt, vt+1), we set b1 = (vt + vt+1)/2 to obtain the best possible generalization. Finite spaces. When the input feature space has a finite number of different values (e.g., x ∈ [1, T]), then we can use linear-time sorting and merge all nodes with equal value in Eq. (7): this results in considering at most T + 1 labellings, which is potentially much smaller than 2M + 1. Extension to the multilabel case. Optimizing a single boundary bi of a multilabel interval quantization is essentially the same binary problem as above, where we limit the optimization to the values currently assigned to i and i + 1 and keep the other assignments q fixed. We use unary terms Ej(q(xj), ·) and Ej(·, q(yj)) to model half-fixed pairs for xj or yj, respectively. 3.4 Learning higher dimensional quantized kernels We now want to generalize interval quantizers to higher dimensions. This is readily feasible via product quantization [13], using interval quantizers for each individual dimension.
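The linear-time search of Sec. 3.3 can be sketched as follows (Python/NumPy; our own simplified illustration, assuming each node belongs to exactly one training pair, and using bins 0/1 where the paper uses 1/2). Starting from the labelling with all nodes in the lower bin, nodes are flipped to the upper bin one at a time in decreasing order of value; each flip changes the energy of a single pair, giving a constant-time update in the spirit of Eq. (8):

```python
import numpy as np

def hinge(lbl, k):
    # Pair energy E_j = max(0, 1 - l_j * K[q(x_j), q(y_j)]) for a fixed 2x2 kernel K.
    return max(0.0, 1.0 - lbl * k)

def best_binary_split(values, pairs, labels, K):
    """Exact search over the 2M+1 monotone labellings of a binary interval
    quantizer. Returns the best energy and how many sorted nodes fall in bin 0."""
    order = np.argsort(values)
    pair_of_node = {}
    for j, (s, t) in enumerate(pairs):
        pair_of_node[s] = j
        pair_of_node[t] = j
    node_bin = [0] * len(values)
    E = sum(hinge(labels[j], K[0][0]) for j in range(len(pairs)))
    best_E, best_n_low = E, len(values)           # start: all nodes below the boundary
    for pos in range(len(values) - 1, -1, -1):    # move the boundary left, one node at a time
        node = int(order[pos])
        j = pair_of_node[node]
        s, t = pairs[j]
        old = hinge(labels[j], K[node_bin[s]][node_bin[t]])
        node_bin[node] = 1
        new = hinge(labels[j], K[node_bin[s]][node_bin[t]])
        E += new - old                            # constant-time update, cf. Eq. (8)
        if E < best_E:
            best_E, best_n_low = E, pos
    return best_E, best_n_low

# Toy instance: M = 4 pairs; K rewards pairs landing in the same bin.
K = [[1.0, -1.0], [-1.0, 1.0]]
rng = np.random.default_rng(3)
vals = rng.standard_normal(8)
pairs = [(0, 1), (2, 3), (4, 5), (6, 7)]
labels = [1, -1, 1, -1]
E_best, n_low = best_binary_split(vals, pairs, labels, K)
assert 0 <= n_low <= len(vals)
```

After the sort, the scan touches each node once, so the search is linear in M; the actual boundary b1 is then placed midway between the two values straddling the returned split.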
The learning algorithm devised above trivially generalizes to interval product quantization by fixing all but one boundary of a single component quantizer $q_d$. However, learning $K \in \mathbb{R}^{N \times N}$ when N is very large becomes problematic: not only does RDA scale unfavourably, but the lack of training data will eventually lead to severe overfitting. To address these issues, we devise below variants of QKs that have practical advantages for robust learning.

Additive quantized kernels (AQK). We can drastically reduce the number of parameters by restricting product quantized kernels to additive ones, which decompose over dimensions:

$$k_q(x, y) = \sum_{d=1}^{D} k_{q_d}(x_d, y_d) = \sum_{d=1}^{D} \phi_{q_d}(x_d)^\top K_d\, \phi_{q_d}(y_d) = \phi_q(x)^\top K\, \phi_q(y), \qquad (9)$$

where $q_d \in \mathcal{Q}^1_{N_d}$, $\phi_{q_d}$ is the 1-of-$N_d$ coding of dimension d, $K_d$ is the $N_d \times N_d$ Gram matrix of dimension d, $\phi_q$ is the concatenation of the D mappings $\phi_{q_d}$, and K is a $(\sum_d N_d) \times (\sum_d N_d)$ block-diagonal matrix with blocks $K_1, \ldots, K_D$. The benefits of AQK are twofold. First, the dimensionality of the explicit feature space is reduced from $N = \prod_d N_d$ to $N' = \sum_d N_d$. Second, the number of parameters to learn in K is now only $\sum_d N_d^2$ instead of $N^2$. The compression ratio is unchanged since $\log_2(N) = \sum_d \log_2(N_d)$. To learn K in Eq. (9), we simply set the off-block-diagonal elements of $G_{t'}$ to zero in each iteration, and iteratively update K as described in Sec. 3.2. To optimize a product quantizer, we iterate the optimization of each 1d quantizer $q_d$ following Sec. 3.3, while fixing $q_c$ for $c \ne d$. This leads to using the following energy $E_j$ for a pair $(x_j, y_j)$:

$$E_{j,d}(q_d(x_{j,d}), q_d(y_{j,d})) = \max\big(0,\; \mu_{j,d} - l_j K_d(q_d(x_{j,d}), q_d(y_{j,d}))\big), \qquad (10)$$

where $\mu_{j,d} = 1 - l_j \sum_{c \ne d} K_c(q_c(x_c), q_c(y_c))$ acts as an adaptive margin.

Block quantized kernels (BQK). Although the additive assumption in AQK greatly reduces the number of parameters, it is also very restrictive, as it assumes independent data dimensions.
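Evaluating the additive kernel of Eq. (9) reduces to D table look-ups, one per dimension. A minimal sketch (function name and data layout are our own assumptions):

```python
import numpy as np

def aqk_similarity(x, y, boundaries, Ks):
    """Additive quantized kernel of Eq. (9), evaluated by D table
    look-ups (an illustrative sketch, not the authors' code).

    boundaries : list of D sorted arrays of interior boundaries b_1..b_{N_d - 1}
    Ks         : list of D (N_d x N_d) kernel blocks K_d
    """
    sim = 0.0
    for d, (b, Kd) in enumerate(zip(boundaries, Ks)):
        # searchsorted gives the index i with b_{i-1} < x <= b_i,
        # i.e. exactly the interval quantizer q_d
        qi = np.searchsorted(b, x[d])
        qj = np.searchsorted(b, y[d])
        sim += Kd[qi, qj]          # k_{q_d}(x_d, y_d)
    return sim
```

Because only the indices $q_d(x_d)$ need to be stored, a descriptor compresses to $\sum_d \log_2(N_d)$ bits, as noted above.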
A simple way to extend additive quantized kernels to model the inter-dependencies of dimensions is to allow the off-diagonal elements of K in Eq. (9) to be nonzero. As a trade-off between a block-diagonal matrix (AQK) and a full matrix, in this work we also consider grouping the feature dimensions into B blocks and learning off-block-diagonal elements only within each block, leading to Block Quantized Kernels (BQK). In this way, assuming $N_d = n$ for all d, the number of parameters in K is B times smaller than for the full matrix. As a matter of fact, many features such as SIFT descriptors exhibit block structure: SIFT is composed of a 4×4 grid of 8 orientation bins. Components within the same spatial cell correlate more strongly than others and, thus, modelling only those jointly may prove sufficient. The optimizations of K and q are straightforwardly adapted from the AQK case.

Additional parameter sharing. Commonly, the different dimensions of a descriptor are generated by the same procedure and hence share similar properties. This results in block matrices $K_1, \ldots, K_D$ in AQK that are quite similar as well. We propose to exploit this observation and share the kernel matrix for groups of dimensions, further reducing the number of parameters. Specifically, we cluster dimensions based on their variances into G equally sized groups and use a single block matrix for each group. During optimization, dimensions sharing the same block matrix can conveniently be merged, i.e. $\phi_q(x) = \big[\sum_{d : K_d = K'_1} \phi_{q_d}(x_d), \ldots, \sum_{d : K_d = K'_G} \phi_{q_d}(x_d)\big]$, and then $K = \mathrm{diag}(K'_1, \ldots, K'_G)$ is learnt following the procedure already described for AQK. Notably, the quantizers themselves are not shared, so the kernel still adapts uniquely to every dimension of the data, and the optimization of the quantizers is not changed either. This parameter sharing strategy can be readily applied to BQK as well.

4 Results

We now present our experimental results, starting with a description of our protocol.
We then explore parameters and properties of our kernels (optimization of quantizers, explicit feature maps). Finally, we compare to the state of the art in performance and compactness.

Dataset and evaluation protocol. We evaluate our method using the dataset of Brown et al. [5]. It contains three sets of patches extracted from Liberty, Notre Dame and Yosemite using the Difference of Gaussians (DoG) interest point detector. The patches are rectified with respect to the scale and dominant orientation, and pairwise correspondences are computed using a multi-view stereo algorithm. In our experiments, we use the standard evaluation protocol [5] and state-of-the-art descriptors: SIFT [15], PR-proj [21] and SQ-4-DAISY [4]. M = 500k feature pairs are used for training on each dataset, with as many positives as negatives. We report the false positive rate (FPR) at 95% recall on the test set of 100k pairs.

Table 1: Impact of quantization optimization for different quantization strategies (FPR at 95% recall, %).

              Initial   Optimized
  Uniform     24.84     21.68
  Adaptive    25.99     25.70
  Adaptive+   14.62     14.29

Figure 1: Impact of N, the number of quantization intervals (FPR at 95% recall vs. number of intervals).

Figure 2: Impact of G, the number of dimension groups, for SIFT [15], SQ-4-DAISY [4] and PR-proj [21].

Figure 3: Our learned feature maps and additive quantized kernel of a single dimension. (a) shows the quantized kernel in index space, while (b) is in the original feature space for the first quantizer. (c,d) show the two corresponding feature maps, and (e,f) the related rank-1 kernels.
A challenge for this dataset is the bias in local patch appearance of each set, so a key factor for performance is the ability to generalize and adapt across sets. Below, unless otherwise mentioned, AQKs are trained for SIFT on Yosemite and tested on Liberty.

Interval quantization and optimization. We first study the influence of initialization and optimization on the generalization ability of the interval quantizers. For initialization, we have used two different schemes: a) Uniform quantization, i.e. quantization with equal intervals; b) Adaptive quantization, i.e. quantization with intervals containing equal numbers of samples. In both cases, this allows us to learn an initial kernel matrix, and we can then iterate with boundary optimization (Sec. 3.3). Typically, convergence is very fast (2-3 iterations) and takes less than 5 minutes in total (i.e., about 2s per feature dimension) with 1M nodes. We see in Table 1 that uniform binning outperforms the adaptive one, and that further optimization benefits the uniform case more. This may seem paradoxical at first, but it is due to the train/test bias problem: intervals with equal numbers of samples are very different across sets, so refinements do not transfer well. Hence, following [7], we first normalize the features with respect to their rank, separately for the training and test sets. We refer to this process as Adaptive+. As Table 1 shows, not only does it bring a significant improvement, but further optimization of the quantization boundaries is also more beneficial than in the Adaptive case. In the following, we thus adopt this strategy.

Number of quantization intervals. In Fig. 1, we show the impact of the number of intervals N of the quantizer on the matching accuracy, using a single shared kernel submatrix (G = 1). This number balances the flexibility of the model against its compression ratio. As we can see, using too few intervals limits the performance of QK, and using too many eventually leads to overfitting.
The best performance for SIFT is obtained with between 8 and 16 intervals.

Explicit feature maps. Fig. 3a shows the additive quantized kernel learnt for SIFT with N = 8 and G = 1. Interestingly, the kernel has negative values far from the diagonal and positive values near the diagonal. This is typical of stationary kernels: when both features have similar values, they contribute more to the similarity. However, contrary to stationary kernels, the diagonal elements are far from constant. There is one mode on small values and another on large ones. The second is stronger: i.e., the co-occurrence of large values yields greater similarity. This is consistent with the voting nature of SIFT descriptors, where strong feature presences are both rarer and more informative than their absences. The negative values far from the diagonal actually penalize inconsistent observations, confirming existing results [12]. Looking at the values in the original space in Fig. 3b, we see that the quantizer has learnt that fine intervals are needed in the lower values, while larger ones are enough for larger values. This is consistent with previous observations that the contribution of large values in SIFT should not grow proportionally [2, 18, 14]. In this experiment, the learnt kernel has rank 2. We show in Fig. 3c, 3d, 3e and 3f the corresponding feature mappings and their associated rank-1 kernels. The map for the largest eigenvalue (Fig. 3c) is monotone but starts with negative values. This impacts the dot product significantly, and accounts for the above observation that negative similarities occur when inputs disagree. This rank-1 kernel cannot allot enough contribution to similar mid-range values; this is compensated by the second rank (Fig. 3f).

Number of groups. Fig. 2 shows the influence of the number of groups G on performance, for the three different descriptors (N = 8 for SIFT and SQ-4-DAISY, N = 16 for PR-proj). As for intervals, using more groups adds flexibility to the model, but as less data is available to learn each parameter, over-fitting eventually hurts performance. We choose G = 3 for the rest of the experiments.

Comparison to the state of the art. Table 2 reports the matching performance of different kernels using different descriptors, for all sets, as well as the dimensionality of the corresponding explicit feature maps. For all three descriptors and on all sets, our quantized kernels significantly and consistently outperform the best reported results in the literature. Indeed, compared to the Euclidean distance, AQK improves the mean error rate at 95% recall from 28.66% to 12.08% for SIFT, from 13.58% to 7.43% for SQ-4-DAISY and from 11.34% to 8.63% for PR-proj, and by about as much relative to the χ2 kernel. Note that PR-proj already integrates metric learning in its design ([21] thus recommends using the Euclidean distance): as a consequence, our experiments show that modelling non-linearities can bring significant improvements.

Table 2: Performance of kernels on different datasets with different descriptors. AQK(N) denotes the additive quantized kernel with N quantization intervals. Following [6], we report the false positive rate (%) at 95% recall. The best results for each descriptor are in bold.

  Descriptor       Kernel          Dim.   | Train Yosemite:    | Train Notredame:   | Mean
                                          | Notredame  Liberty | Yosemite   Liberty |
  SIFT [15]        Euclidean       128    | 24.02      31.34   | 27.96      31.34   | 28.66
  SIFT [15]        χ2              128    | 17.65      22.84   | 23.50      22.84   | 21.71
  SIFT [15]        AQK(8)          128    | 10.72      16.90   | 10.72      16.85   | 13.80
  SIFT [15]        AQK(8)          256    | 9.26       14.48   | 10.16      14.43   | 12.08
  SIFT [15]        BQK(8)          256    | 8.05       13.31   | 9.88       13.16   | 11.10
  SQ-4-DAISY [4]   Euclidean       1360   | 10.08      16.90   | 10.47      16.90   | 13.58
  SQ-4-DAISY [4]   χ2              1360   | 10.61      16.25   | 12.19      16.25   | 13.82
  SQ-4-DAISY [4]   SQ [4]          1360   | 8.42       15.58   | 9.25       15.58   | 12.21
  SQ-4-DAISY [4]   AQK(8)          ≤1813  | 4.96       9.41    | 5.60       9.77    | 7.43
  PR-proj [21]     Euclidean [21]  <64    | 7.11       14.82   | 10.54      12.88   | 11.34
  PR-proj [21]     AQK(16)         ≤102   | 5.41       10.90   | 7.65       10.54   | 8.63
When comparing to sparse quantization (SQ) with the Hamming distance, as done in [4], the error is significantly reduced from 12.21% to 7.43%. This is a notable achievement considering that [4] is the previous state of the art. The SIFT descriptor has a grid block design which makes it particularly suited to BQK. Hence, we also evaluated our BQK variant for that descriptor. With BQK(8), we observed a relative improvement of 8%, from 12.08% for AQK(8) to 11.10%. We provide in Fig. 4 the ROC curves for the three descriptors when training on Yosemite and testing on Notre Dame and Liberty. These figures show that the improvement in recall is consistent over the full range of false positive rates. For further comparisons, our data and code are available online.³

Compactness of our kernels. In many applications of feature matching, the compactness of the descriptor is important. In Table 3, we compare to other methods by grouping them according to their memory footprint. As a reference, the best method reported in Table 2 (AQK(8) on SQ-4-DAISY) uses 4080 bits per descriptor. As expected, error rates increase as fewer bits are used, since the original features are significantly altered. Notably, QKs consistently yield the best performance in all groups. Even with a crude binary quantization of SQ-4-DAISY, our quantized kernel outperforms the state-of-the-art SQ of [4] by 3 to 4%. When considering the most compact encodings (≤64 bits), our AQK(2) does not improve over BinBoost [22], a descriptor designed for extreme compactness, or the product quantization (PQ [13]) encoding as used in [21]. This is because our current framework does not yet allow for joint compression of multiple dimensions. Hence, it is unable to use less than 1 bit per original dimension, and is not optimal in that case.

³ See: http://www.vision.ee.ethz.ch/~qind/QuantizedKernel.html

Figure 4: ROC curves (true positive rate vs. false positive rate) when evaluating Notre Dame (top) and Liberty (bottom) from Yosemite, for SIFT (BQK(8), AQK(8), AQK(2), L2), SQ-4-DAISY (AQK(8), AQK(2), SQ) and PR-proj (AQK(16), AQK(4), L2).

Table 3: Performance comparison of different compact feature encodings, reported as false positive rate (%) at 95% recall. The best results for each group are in bold.

  Descriptor       Encoding       Memory (bits) | Train Yosemite:    | Train Notredame:   | Mean
                                                | Notredame  Liberty | Yosemite   Liberty |
  SQ-4-DAISY [4]   SQ [4]         1360          | 8.42       15.58   | 9.25       15.58   | 12.21
  SQ-4-DAISY [4]   AQK(2)         1360          | 5.86       10.81   | 6.36       10.94   | 8.49
  SIFT [15]        AQK(8)         384           | 9.26       14.48   | 10.16      14.43   | 12.08
  PR-proj [21]     Bin [21]       1024          | 7.09       15.15   | 8.50       12.16   | 10.73
  PR-proj [21]     AQK(16)        <256          | 5.41       10.90   | 7.65       10.54   | 8.63
  SIFT [15]        AQK(2)         128           | 14.62      19.72   | 15.65      19.45   | 17.36
  PR-proj [21]     Bin [21]       128           | 10.00      18.64   | 13.41      16.39   | 14.61
  PR-proj [21]     AQK(4)         <128          | 7.18       13.02   | 10.29      13.18   | 10.92
  BinBoost [22]    BinBoost [22]  64            | 14.54      21.67   | 18.97      20.49   | 18.92
  PR-proj [21]     AQK(2)         <64           | 14.80      20.59   | 19.38      22.24   | 19.26
  PR-proj [21]     PQ [21]        64            | 12.91      20.15   | 19.32      17.97   | 17.59
  PR-proj [21]     PCA+AQK(4)     64            | 10.74      17.46   | 14.44      17.60   | 15.06
To better understand the potential benefits of decorrelating features and of joint compression in future work, we pre-processed the data with PCA, projecting to 32 dimensions, and then used AQK(4). This simple procedure obtained state-of-the-art performance with a 15% error rate, now outperforming [22] and [21]. Although QKs yield very compact descriptors and achieve the best performance across many experimental setups, the computation of similarity values is slower than for competitors: in the binary case, we double the complexity of the Hamming distance due to the 2×2 table look-up.

5 Conclusion

In this paper, we have introduced the simple yet powerful family of quantized kernels (QK), and presented an efficient algorithm to learn its parameters, i.e. the kernel matrix and the quantization boundaries. Despite their apparent simplicity, QKs have numerous advantages: they are very flexible, can model non-linearities in the data and provide explicit low-dimensional feature mappings that grant access to the Euclidean geometry. Above all, they achieve state-of-the-art performance on the main visual feature matching benchmark. We think that QKs have a lot of potential for further improvements. In future work, we want to explore new learning algorithms to obtain higher compression ratios – e.g. by jointly compressing feature dimensions – and to automatically find weight sharing patterns that would further improve matching performance.

Acknowledgements

We gratefully thank the KIC-Climate project Modeling City Systems.

References

[1] Sameer Agarwal, Yasutaka Furukawa, Noah Snavely, Ian Simon, Brian Curless, Steven M Seitz, and Richard Szeliski. Building Rome in a day. Communications of the ACM, 54(10):105–112, 2011.

[2] Relja Arandjelovic and Andrew Zisserman. Three things everyone should know to improve object retrieval. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.

[3] Francis R Bach, Gert RG Lanckriet, and Michael I Jordan.
Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the International Conference on Machine Learning. ACM, 2004.

[4] Xavier Boix, Michael Gygli, Gemma Roig, and Luc Van Gool. Sparse quantization for patch description. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.

[5] Matthew Brown, Gang Hua, and Simon Winder. Discriminative learning of local image descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):43–57, 2011.

[6] Matthew Brown and David G Lowe. Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 74(1):59–73, 2007.

[7] Thomas Dean, Mark A Ruzon, Mark Segal, Jonathon Shlens, Sudheendra Vijayanarasimhan, and Jay Yagnik. Fast, accurate detection of 100,000 object classes on a single machine. In CVPR, 2013.

[8] Maryam Fazel. Matrix rank minimization with applications. PhD thesis, 2002.

[9] Yunchao Gong, Sanjiv Kumar, Vishal Verma, and Svetlana Lazebnik. Angular quantization-based binary codes for fast similarity search. In NIPS, pages 1196–1204, 2012.

[10] Steven CH Hoi, Rong Jin, and Michael R Lyu. Learning nonparametric kernel matrices from pairwise constraints. In Proceedings of the International Conference on Machine Learning. ACM, 2007.

[11] Gang Hua, Matthew Brown, and Simon Winder. Discriminant embedding for local image descriptors. In ICCV 2007. IEEE, 2007.

[12] Hervé Jégou and Ondřej Chum. Negative evidences and co-occurences in image retrieval: The benefit of PCA and whitening. In Computer Vision–ECCV 2012, pages 774–787. Springer, 2012.

[13] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011.

[14] Hervé Jégou, Matthijs Douze, Cordelia Schmid, and Patrick Pérez. Aggregating local descriptors into a compact image representation. In CVPR, pages 3304–3311. IEEE, 2010.
[15] David G Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 2004.

[16] Subhransu Maji, Alexander C Berg, and Jitendra Malik. Efficient classification for additive kernel SVMs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(1):66–77, 2013.

[17] Francesco Orabona and Luo Jie. Ultra-fast optimization algorithm for sparse multi kernel learning. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 249–256, 2011.

[18] Florent Perronnin, Jorge Sánchez, and Thomas Mensink. Improving the Fisher kernel for large-scale image classification. In European Conference on Computer Vision (ECCV). 2010.

[19] Gemma Roig, Xavier Boix, and Luc Van Gool. Random binary mappings for kernel learning and efficient SVM. arXiv preprint arXiv:1307.5161, 2013.

[20] Dimitris Achlioptas, Frank McSherry, and Bernhard Schölkopf. Sampling techniques for kernel methods. In NIPS 2001, volume 1, page 335. MIT Press, 2002.

[21] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Learning local feature descriptors using convex optimisation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014.

[22] Tomasz Trzcinski, Mario Christoudias, Pascal Fua, and Vincent Lepetit. Boosting binary keypoint descriptors. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.

[23] Tomasz Trzcinski, Mario Christoudias, Vincent Lepetit, and Pascal Fua. Learning image descriptors with the boosting-trick. In NIPS, 2012.

[24] Andrea Vedaldi and Andrew Zisserman. Efficient additive kernels via explicit feature maps. IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):480–492, 2012.

[25] Kilian Weinberger, John Blitzer, and Lawrence Saul. Distance metric learning for large margin nearest neighbor classification. Advances in Neural Information Processing Systems, 18:1473, 2006.

[26] Simon AJ Winder and Matthew Brown. Learning local image descriptors. In CVPR, 2007.

[27] Lin Xiao.
Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11:2543–2596, 2010.

[28] Zheng Yi, Cao Zhiguo, and Xiao Yang. Multi-spectral remote image registration based on SIFT. Electronics Letters, 44(2):107–108, 2008.
A Bayesian model for identifying hierarchically organised states in neural population activity

Patrick Putzky1,2,3, Florian Franzen1,2,3, Giacomo Bassetto1,3, Jakob H. Macke1,3
1Max Planck Institute for Biological Cybernetics, Tübingen
2Graduate Training Centre of Neuroscience, University of Tübingen
3Bernstein Center for Computational Neuroscience, Tübingen
patrick.putzky@gmail.com, florian.franzen@tuebingen.mpg.de
giacomo.bassetto@tuebingen.mpg.de, jakob@tuebingen.mpg.de

Abstract

Neural population activity in cortical circuits is not solely driven by external inputs, but is also modulated by endogenous states which vary on multiple time-scales. To understand information processing in cortical circuits, we need to understand the statistical structure of internal states and their interaction with sensory inputs. Here, we present a statistical model for extracting hierarchically organised neural population states from multi-channel recordings of neural spiking activity. Population states are modelled using a hidden Markov decision tree with state-dependent tuning parameters and a generalised linear observation model. We present a variational Bayesian inference algorithm for estimating the posterior distribution over parameters from neural population recordings. On simulated data, we show that we can identify the underlying sequence of population states and reconstruct the ground truth parameters. Using population recordings from visual cortex, we find that a model with two levels of population states outperforms both a one-state and a two-state generalised linear model. Finally, we find that modelling of state-dependence also improves the accuracy with which sensory stimuli can be decoded from the population response.

1 Introduction

It has long been recognised that the firing properties of cortical neurons are not constant over time, but that neural systems can exhibit multiple distinct firing regimes.
For example, cortical circuits can be in a ‘synchronised’ state during slow-wave sleep, exhibiting synchronised fluctuations of neural excitability [1], or in a ‘desynchronised’ state in which firing is irregular. Neural activity in anaesthetised animals exhibits distinct states which lead to widespread modulations of neural firing rates and contribute to cross-neural correlations [2]. Changes in network state can be brought about through the influence of inter-area interactions [3] and affect communication between cortical and subcortical structures [4]. Given the strong impact of cortical states on neural firing [3, 5, 4], an understanding of the interplay between internal states and external stimuli is essential for understanding how populations of cortical neurons collectively process information. Multi-cell recording techniques make it possible to record neural activity from dozens or even hundreds of neurons simultaneously, and hence to identify the signatures of underlying states by fitting appropriate statistical models to neural population activity. It is thought that the state-dependence of neocortical circuits is not well described by a global bi-modal state. Instead, the structure of cortical states is more accurately described using multiple states which vary both between and within brain regions [6]. In addition, the ‘state’ of a neural population can vary across multiple time scales, from milliseconds to seconds or more [6]: for example, cortical recordings can switch between up- and down-phases. During an up-phase, cortical activity can exhibit ‘volleys’ of synchronised activity [7]—sometimes referred to as population bursts—which can be modelled as transient states. These observations suggest that the structure of cortical states could be captured by a hierarchical organisation in which each state can give rise to multiple temporally nested ‘sub-states’. This structure naturally yields a binary tree: states can be divided into subclasses, with states further down the tree operating at faster time-scales determined by their parent node. We hypothesise that other cortical states also exhibit similar hierarchical structure.

Figure 1: Illustration of the model. A) Generative model. At time t, the cortical state st is determined using a Hidden Markov Decision Tree (HMDT) and depends on the previous state st−1, the population activity yt−1 and the current stimulus xt. In our simulations, we assumed that the first split of the tree determined whether to transition into an up- or down-state. Up-states contained transient periods of high firing across the population (up-high) as well as sustained periods of irregular firing (up-low). Each cortical state is then associated with different spike-generation dynamics, modelling state-dependence of firing properties such as ‘burstiness’. B) State-transition probabilities depend on the tree structure. Transition matrices are depicted as Hinton diagrams where each block represents a probability and each column sums to 1. Each row corresponds to a possible future state st (see colour), and each column to the current state. (1) A model in which transition probabilities in the first level of the tree (up/down) are biased towards the up-state (green squares are bigger than gray ones), and depend weakly on the previous state st−1. In this example, both high/low phases are equally likely within up-states (second level of tree, depicted in second column) and do not depend on the previous state (all orange/red squares have the same size). The resulting 3×3 matrix of transition probabilities across all states can be calculated from the transition probabilities in the tree. (2) Changing the properties of the second-level node leads only to a local change in the transition matrix: it affects the proportion between the orange/red states, but leaves the green state unchanged.
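The relationship between the per-gate probabilities and the flat transition matrix illustrated in Fig. 1B can be sketched for the three-state example (down, up-low, up-high). The function and argument names below are our own, and the dependence of the gates on covariates is omitted for clarity:

```python
import numpy as np

def flat_transition_matrix(p_up, p_high_given_up):
    """Flat 3x3 transition matrix for the example tree of Fig. 1
    (states: down, up-low, up-high), computed from per-gate
    probabilities (an illustrative sketch).

    p_up[j]           : root-gate probability of moving to an up-state
                        when the previous state is j
    p_high_given_up[j]: second-level probability of the up-high sub-state

    Returns T with T[i, j] = p(s_t = i | s_{t-1} = j); each column
    sums to 1, as in the Hinton diagrams of Fig. 1B.
    """
    p_up = np.asarray(p_up, float)
    p_high = np.asarray(p_high_given_up, float)
    return np.vstack([
        1.0 - p_up,             # down
        p_up * (1.0 - p_high),  # up-low: product of gate probabilities along the path
        p_up * p_high,          # up-high
    ])
```

Changing only `p_high_given_up` redistributes probability between the two up sub-states while leaving the down row untouched: the local-change property of Fig. 1B(2).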
Our goal here is to provide a statistical model which can identify cortical states and their hierarchical organisation from recordings of population activity. As a running example of such a hierarchical organisation we use a model in which the population exhibits synchronised population bursts during up-states, but not during down-states. This system is modelled using a first level of state (up/down), with the up-state further divided into two states (transient high-firing events and normal firing, see Fig. 1A). We present an inhomogeneous hidden Markov model (HMM) [8] to model the temporal dynamics of state-transitions [9, 10]. Our approach is most closely related to [10], who developed a state-dependent generalised linear model [11] in which both the tuning properties and the state-transitions can be modelled to depend on external covariates. However, our formulation also allows for hierarchically organised state-structures. In addition, previous population models based on discrete latent states [10, 12] used point-estimation for parameter learning. In contrast, we present algorithms for full Bayesian inference over the parameters of our model, making it possible to identify states in smaller or noisier data sets [13]. This is important for neural population recordings, which are typically characterised by short recording times relative to the dimensionality of the data and by high variability. In addition, estimates of posterior distributions are important for visualising uncertainty and for optimising experimental paradigms with active-learning methods [14, 15].

2 Methods

We use a hidden Markov decision tree (HMDT) [16] to model hierarchically organised states with binary splits, and a generalised linear observation model (GLM). An HMDT combines the properties of a hidden Markov model (to model temporal structure) with a hierarchical mixture of experts (HME, to model a hierarchy of latent states) [17].
In general, the hierarchical approach can represent richer dependence of states on external covariates, analogous to the difference between multi-class logistic regression and multi-class binary decision trees. For example, a two-level binary tree can separate four point clouds situated at the corners of a square, whereas a 4-class multinomial regression cannot. We use Bayesian logistic regression [18] to model transition gates and emissions. In the following, we describe the model structure and propose a variational algorithm [8, 19] for inferring its parameters.

2.1 Hierarchical hidden Markov model for multivariate binary data

We consider discrete time-series data of multivariate binary¹ neural spiking events $y_t \in \{0, 1\}^C$, where C is the number of cells. We assume that neural spiking can be influenced by (observed) covariates $x_t \in \mathbb{R}^D$. The covariates $x_t$ could represent external stimuli, the spiking history of neurons or other measures such as the total population spike count. In our analyses below, we assume that correlations across neurons arise only from the joint coupling to the population state, and we do not include couplings between neurons as is sometimes done with GLMs [11]. Dependence of neural firing on internal states is modelled by including a 1-of-K latent state vector $s_t$, where K is the number of latent states. The emission probabilities for the observable vector $y_t$ (i.e. the probability of spiking for each neuron) are thus given by

$$p(y_t | x_t, s_t, \Phi) = \prod_{i=1}^{K} \prod_{c=1}^{C} p\big(y_t^{(c)} | x_t^{(c)}, \phi_i^{(c)}\big)^{s_t^{(i)}}, \qquad (1)$$

where $\Phi$ is the set of model parameters. We allow the external covariate $x_t$ to be different for each neuron c. To model the temporal dynamics over $s_t$, we use a hidden Markov model (HMM) [10], where the state transitions take the form

$$p(s_t | s_{t-1}, x_t, \Psi) = \prod_{i=1}^{K} \prod_{j=1}^{K} p\big(s_t^{(i)} | s_{t-1}^{(j)}, x_t, \Psi\big)^{s_t^{(i)} s_{t-1}^{(j)}}, \qquad (2)$$

where $\Psi$ is the set of parameters of the transition model. The model allows state-transitions to depend on an external input $x_t$ — this can, e.g.,
be used to model state-transitions caused by stimulation of subcortical structures involved in controlling cortical states [20]. Moving beyond this standard input-output HMM formulation [21], we introduce hierarchically organised auxiliary latent variables $z_t$ which represent the current state $s_t$ through a binary tree. Using HME terminology, we refer to the nodes representing $z_t$ as ‘gates’. Each of the K leaves of the tree (or, equivalently, each path through the tree) corresponds to one of the K entries of $s_t$, and we can thus represent $s_t$ in the form

$$s_t^{(k)} = \prod_{l=1}^{L} \big(z_t^{(l)}\big)^{A_L^{(l,k)}} \big(1 - z_t^{(l)}\big)^{A_R^{(l,k)}}, \qquad (3)$$

where $A_L$ and $A_R$ are adjacency matrices which indicate whether state k is in the left or right branch of gate l, respectively (see [19]). Using this representation, $s_t$ is deterministic given $z_t$, which significantly simplifies the inference process. The auxiliary latent variables $z_t^{(l)}$ are Bernoulli random variables, and we chose their conditional probability distribution to be

$$p\big(z_t^{(l)} = 1 \,|\, x_t^{(l)}, s_{t-1}, v_l\big) = \sigma\big(v_l^\top u_t^{(l)}\big). \qquad (4)$$

Here, $\sigma(\cdot)$ is the logistic sigmoid, $v_l$ are the parameters of the l-th gate, and $u_t$ is a concatenation of the previous state $s_{t-1}$, the input $x_t$ (which could for example represent the population firing rate, the time in the trial or an external stimulus) and a constant term of unit value to model the prior probability of $z_0^{(l)} = 1$. This parametrisation significantly reduces the number of parameters used for the transition probabilities as compared to [10]. To enforce stronger temporal locality and less jumping between states, we could also condition this probability only on previous activations of a sub-tree of the HMDT instead of all population states.

¹All derivations below can be generalised to model the emission probabilities by any kind of generalised linear model.
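Eq. (3) above is a deterministic look-up once the gate outcomes are known. A minimal sketch, with the adjacency matrices written out for the three-state example of Fig. 1 in the test below (variable names are ours):

```python
import numpy as np

def leaf_state_from_gates(z, A_L, A_R):
    """Deterministic mapping of Eq. (3): recover the 1-of-K state
    vector s_t from the binary gate variables z_t (an illustrative
    sketch; the adjacency construction follows [19]).

    z   : (L,) binary array of gate outcomes z_t^{(l)}
    A_L : (L, K) 0/1 matrix, A_L[l, k] = 1 if leaf k lies in the left
          branch of gate l (A_R analogously for the right branch)
    """
    z = np.asarray(z, float)
    # product over gates of z^{A_L} * (1 - z)^{A_R} for each leaf k;
    # gates not on the path to leaf k contribute a factor of 1
    return np.prod(z[:, None] ** A_L * (1.0 - z[:, None]) ** A_R, axis=0)
```

Exactly one leaf receives the value 1 for any gate configuration, so the output is a valid 1-of-K vector.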
2.2 Learning & Inference

For posterior inference over the model parameters we would need to infer the joint distribution over all stochastic variables conditioned on X,

p(Y, S, Φ, Ψ, λ, ν | X) = p(Y | S, X, Φ) p(S | X, Ψ) p(Φ | λ) p(λ) p(Ψ | ν) p(ν),   (5)

where Y is the set of y_t's, Φ and Ψ are the sets of parameters for the emission and gating distributions, respectively, and λ and ν are the hyperparameters for the parameter priors. Since there is no closed-form solution for this distribution, we use a variational approximation [8]. We assume that the posterior factorises as

q(S, Φ, Ψ, λ, ν) = q(S) q(Φ) q(Ψ) q(λ) q(ν)   (6)
               = q(S) ∏_{k=1}^{K} ∏_{c=1}^{C} q(φ_k^{(c)}) q(λ_k^{(c)}) ∏_{l=1}^{L} q(ψ_l) q(ν_l),   (7)

and find the variational approximation to the posterior over parameters, q(S, Φ, Ψ, λ, ν), by optimising the variational lower bound L(q) to the evidence

L(q) := ∑_S ∫∫∫∫ q(S, Φ, Ψ, λ, ν) ln [ p(Y, S, Φ, Ψ, λ, ν | X) / q(S, Φ, Ψ, λ, ν) ] dΦ dΨ dλ dν   (8)
      ≤ ln ∑_S ∫∫∫∫ p(Y, S, Φ, Ψ, λ, ν | X) dΦ dΨ dλ dν = ln p(Y | X).   (9)

We use variational expectation-maximisation (VBEM) to perform alternating updates on the posterior over latent state variables and the posterior over model parameters. To infer the posterior over latent variables (i.e. responsibilities), we use a modified forward-backward algorithm as proposed in [22] (see also [8]). In order to perform the forward and backward steps, they propose the use of subnormalised probabilities of the form

p̃(s_t^{(i)} | s_{t−1}^{(j)}, x_t, Ψ) := exp( E_Ψ[ ln p(s_t^{(i)} | s_{t−1}^{(j)}, x_t, Ψ) ] ),   (10)
p̃(y_t | x_t, Φ_i) := exp( E_{Φ_i}[ ln p(y_t | x_t, Φ_i) ] )   (11)

for the state-transition probabilities and emission probabilities. Since all relevant probabilities in our model are over discrete variables, it would be straightforward to normalise those probabilities, but we found that normalisation did not noticeably change results.
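The subnormalised quantities of Eqs. (10)-(11) exponentiate an *expected log-probability* rather than averaging probabilities, which by Jensen's inequality yields values no larger than the posterior-averaged probability. A minimal sketch, with the expectation over the parameter posterior approximated by Monte Carlo over hypothetical posterior samples:

```python
import numpy as np

def subnorm_prob(logp_fn, param_samples):
    """p~ = exp(E_q[ln p]) as in Eqs. (10)-(11). The expectation over the
    parameter posterior q is approximated by averaging log-probabilities
    over posterior samples, then exponentiating (a geometric mean)."""
    logs = np.array([logp_fn(w) for w in param_samples])
    return np.exp(logs.mean(axis=0))
```

Because the geometric mean never exceeds the arithmetic mean, the resulting p̃ is always at most the posterior-averaged probability, hence "subnormalised".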
With the approximations from above, the forward probability can thus be written as

α(s_t^{(i)}) = (1/C̃_t) p̃(y_t | s_t^{(i)}, x_t, φ) ∑_{j=1}^{K} α(s_{t−1}^{(j)}) p̃(s_t^{(i)} | s_{t−1}^{(j)}, x_t, Ψ),   (12)

where α(s_t^{(i)}) is the probability mass of state s_t^{(i)} given previous time steps and C̃_t is a normalisation constant. Similar to the forward step, the backward recursion takes the form

β(s_t^{(i)}) = (1/C̃_t) ∑_{j=1}^{K} β(s_{t+1}^{(j)}) p̃(y_{t+1} | s_{t+1}^{(j)}, x_{t+1}, φ) p̃(s_{t+1}^{(j)} | s_t^{(i)}, x_t, Ψ).   (13)

Using the forward and backward steps we can infer the state posteriors [8]. Given the state posteriors, the logarithm of the approximate parameter posterior for each of the nodes takes the form

ln q*(ω_n) = ∑_{t=1}^{T} η_t^{(n)} ln p(μ_t^{(n)} | x_t^{(n)}, ω_n, (...)) + E_{γ_n}[ln p(ω_n | γ_n)] + const.,   (14)

where ω_n are the parameters of the n-th node and p(ω_n | γ_n) is the prior over the parameters. Here, η_t^{(n)} is the posterior responsibility, or estimated influence, of node n on the t-th observation, and μ_t^{(n)} denotes the expected output (known for state nodes) of node n (see supplement for details). This equation also holds for a tree structure with multinomial gates and for non-binary emission models such as Poisson and linear models. The above equations are valid for maximum likelihood inference as well, except that all parameter priors are removed and the expectations of log-likelihoods reduce to log-likelihoods. We use logistic regression for all emission probabilities and gates, and a local variational approximation to the logistic sigmoid as presented in [18]. As parameter priors we use anisotropic Gaussians with individual Gamma priors on each diagonal entry of the precision matrix. With this prior structure we can perform automatic relevance determination [23]. We chose shape parameter a_0 = 1 × 10⁻² and rate parameter b_0 = 1 × 10⁻⁴, leading to a broad Gamma hyperprior [19].
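The normalised forward recursion of Eq. (12) above can be sketched as a generic HMM filter operating on the (sub)normalised log terms; variable names and shapes below are illustrative, not the authors' implementation:

```python
import numpy as np

def forward_pass(log_emis, log_trans, log_init):
    """Normalised forward recursion of Eq. (12) for a K-state HMM.

    log_emis  : (T, K) log-emission terms, log p~(y_t | s_t = i, x_t)
    log_trans : (T, K, K) log-transition terms, [t, i, j] = log p~(s_t=i | s_{t-1}=j, x_t)
    log_init  : (K,) log prior over the initial state
    Returns the filtered state probabilities alpha (T, K) and the
    accumulated log-normaliser sum_t log C~_t.
    """
    T, K = log_emis.shape
    alpha = np.zeros((T, K))
    a = np.exp(log_init + log_emis[0])
    c = a.sum()
    alpha[0] = a / c
    log_Z = np.log(c)
    for t in range(1, T):
        # emission term times transition-weighted previous filtered state
        a = np.exp(log_emis[t]) * (np.exp(log_trans[t]) @ alpha[t - 1])
        c = a.sum()                  # normaliser C~_t keeps alpha a distribution
        alpha[t] = a / c
        log_Z += np.log(c)
    return alpha, log_Z
```

The running sum of log-normalisers is the usual by-product of the scaled forward recursion and serves as a (sub)normalised log-evidence.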
In many applications, it will be reasonable to assume that neurons in close-by states of the tree show similar response characteristics (similar parameters). The hierarchical organisation of the model yields a natural structure for hierarchical priors which can encourage parameter similarity².

2.3 Details of simulated and neurophysiological data

To assess and illustrate our model, we simulated a population recording with trials of 3 s length (20 neurons, 10 ms time bins). As illustrated in Fig. 1 A, we modelled one low-firing-rate down state (down, base firing rate 0.5 Hz) and two up states (up-low and up-high, with base firing rates of 5 and 50 Hz, respectively). The root node switched between up and down states, whereas a second node controlled transitions between the two types of up-states. Up-high states only occurred transiently, modelling synchronised bouts of activity. In the down state, neurons have a 10 ms refractory period; during up states they exhibit bursting activity. Transitions from down to up go mainly via up-high to up-low, while down-transitions go from up-low to down; stimulation increases the probability of being in one of the up states. A pulse stimulus occurred at time 1 s of each trial. Each model was fit on a set of 20 trials and evaluated on a different test set of 20 trials. For each training set, 24 random parameter initialisations were drawn and the one with highest evidence was chosen for evaluation. State predictions were evaluated using the Viterbi algorithm [24, Ch. 13]. We analysed a recording from visual cortex (V1) of an anaesthetised macaque [2]. The data set consisted of 1600 presentations of drifting gratings (16 directions, 100 trials each), each lasting 2 s. Experimental details are described in [2]. For each trial, we kept a segment of 500 ms before and after a stimulus presentation, resulting in trials of length 3 s each. We binned and binarised spike trains in 50 ms bins.
Additional spikes (present in (5.45 ± 1.56) % of bins) were discarded by the binarisation procedure. We chose the representation of the stimulus to be the outer product of the two vectors [1, sin(ϑ), cos(ϑ)], where ϑ is the phase of the grating, and [1, sin(θ), cos(θ), sin(2θ), cos(2θ)] for the direction θ of the grating. This resulted in a 15-dimensional stimulus parametrisation, and made it possible to represent tuning curves with orientation and direction selectivity, as well as modulation of firing rates by stimulus phase. The only gate input was chosen to be an indicator function with unit value during stimulus presentation and zero value otherwise. Post-spike filters were parametrised using five cubic b-splines for the last 10 bins with a bin width of 50 ms.

² See supplement for an example of how this could be implemented with Gaussian priors.

Figure 2: Performance of the model on simulated data. A) Example rasters sampled using ground truth (GT) parameters; colors indicate the sequence of underlying population states. B) For the sample from (A), the state sequence decoded with our variational Bayes (VB) method matches the decoded sequence using GT parameters. C) Comparison of state-decoding performance using GT parameters, VB and maximum likelihood (ML) learning (Wilcoxon ranksum, * p < 0.05; *** p ≪ 0.001). D) Model performance quantified using the per-data-point log-likelihood difference between estimated and GT model on the test set. Our VB method outperforms ML (Wilcoxon ranksum, *** p ≪ 0.001), and both models considerably outperform a 1-state GLM (not shown). E) Estimated post-spike filters match the GT values well (depicted are the filters from one of the cross-validated models). F) Comparison of the autocorrelation of the ground truth data and samples drawn from the VB fit as in (E). G) GT (top) and VB estimated (bottom) transition matrices in absence (left) or presence (right) of a stimulus.
3 Results

3.1 Results on simulated data

To illustrate our model and to evaluate the estimation procedure on data with known ground truth, we used a simulated population recording of 20 neurons generated by sampling from our model (details in Methods, see Fig. 2 A). In this simulation, the up-state had much higher firing rates than the down-state. It was therefore possible to decode the underlying states from the population spike trains with high accuracy (Fig. 2 B). For the VB method, we used the posterior mean over parameters for state inference. In addition, we compared both of these approaches to state decoding based on a model estimated using maximum likelihood learning. All three models showed similar performance, but the decoding advantage of the 3-state VB model was statistically significant (using pairwise comparisons, Fig. 2 C). We also directly evaluated the performance of the VB and ML methods for parameter estimation by calculating the log-likelihood of the data on held-out test data, and found that our VB method performed significantly better than the ML method (Fig. 2 D). Finally, we also compared the estimated post-spike filters (Fig. 2 E), autocorrelation functions (Fig. 2 F) and state-transition matrices (Fig. 2 G) and found excellent agreement between the GT parameters and the estimates returned by VB. To test whether the VB method is able to determine the correct model complexity, we fit an over-parameterised model with 3 layers and potentially 8 states to the simulation data. The best model fit from 200 random restarts (lower bound of −2.24 × 10⁴, no cross-validation, results not shown) only used 3 out of the 8 possible states (the other 5 states had a probability of less than 0.5 %). Therefore, in this example, the best lower bound is achieved by a model with correct, and low, complexity.
Figure 3: Results for population recordings from V1. A) Raster plot of population response to a drifting grating with orientation 67.5°. Arrows indicate stimulus onset and offset; colours show the most likely state sequence inferred with the 3-state variational Bayes (3S-VB) model. B) Cross-validated log-likelihoods per trial, relative to the 3S-VB model. C) Stimulus-decoding performance, in percentage of correctly decoded stimuli (16 discrete stimuli, chance level 6.25 %), using maximum-likelihood decoding. D) Tuning properties of an example neuron. i) Orientation tuning calculated from the tuning parameters of 3S-VB (red, orange, green) or 1-state GLM (purple). ii) Orientation tuning measured from sampled data of the estimated model, each line representing one state. Note that the firing rate also depends on state-transitions and post-spike filters. iii) Temporal component of tuning parameters. iv) Peri-stimulus time histograms (PSTHs) estimated from samples of the estimated models. v) Post-spike filters for each state, and comparison with the 1-state GLM (purple). E) Distributions of times spent in each state, i.e. inter-transition intervals (ITIs), estimated from the empirical data using 3S-VB.
F) Comparison between the distribution of ITIs in samples from model 3S-VB and in the Viterbi-decoded path (from E). G) Histogram of population rates (i.e. number of synchronous spikes across the population in each 50 ms bin) for 3S-VB (blue), 1S (purple), and data (gray). H) Histograms of population rate for each state.

3.2 Results on neurophysiological recordings

We analysed a neural population recording from V1 to determine whether we could successfully identify cortical states by decoding the activity of the neural population, and whether accounting for state-dependence resulted in a more accurate statistical model of neural firing. While neurons generally responded robustly to the stimulus (Fig. 3 D), firing rates were strongly modulated by internal states [2] (Fig. 3 A). We fit different models to the data, and found that our 3-state model estimated with VB resulted in better cross-validation performance than either the 3-state model estimated with ML, the 2-state model, or a 1-state GLM (i.e. a GLM without cross-neural couplings, Fig. 3 B). In addition, we fit a fully coupled GLM (with cross-history terms as in [11, 13]), as well as one in which the total population count was used as a history feature, using VB. These models were intermediate between the 1-state GLM and the 2-state model, i.e. both worse than the 3-state one. A 'flat' 3-state model with a single multinomial gate estimated with ML performed similarly to the hierarchical 3S-ML model. This is to be expected, as any differences in expressive power between the two models will only become substantial for a different choice of x_t or larger models. We also evaluated the ability of different models to decode the stimulus (i.e. the direction of the presented grating) from population spike trains. We evaluated the likelihood of each population spike train for each of the 16 stimulus directions, and decoded the stimulus which yielded the highest likelihood.
The 3-state VB model shows the best decoding performance among all tested models (Fig. 3 C), and all models with state-dependence (3-state VB, 3-state ML, 2-state) outperformed the 1-state GLM. We sampled from the estimated 3S-VB model to evaluate to what extent the model captures the tuning properties of neurons (Fig. 3 D ii & iv). The example neuron shows strong modulation of base firing rate dependent on the population state, but not a qualitative change of the tuning properties (Fig. 3 D i-iv). The down-state post-spike filter (Fig. 3 D v) exhibits a small oscillatory component which is not present in the post-spike filters of the other states or the 1-state GLM. Investigation of inter-transition-interval (ITI) distributions from the data (after Viterbi decoding) shows heavy tails (Fig. 3 E). Comparison of the ITI distributions estimated from the empirical data and from sampled data (3S-VB) shows good agreement, apart from small deficiencies of the model in capturing the heavy tails of the empirical ITI distribution (Fig. 3 F). Finally, population rates (i.e. total number of spikes across the population) are often used as a summary measure for characterizing cortical states [6]. We found that the distribution of population rates in the data was well matched by the distribution estimated from our model (Fig. 3 G), with the three states having markedly different population rate distributions (Fig. 3 H). Although a 1-state GLM also captured the tuning properties of this neuron (Fig. 3 D), it failed to recover the distribution of population rates (Fig. 3 G).

4 Discussion

We presented a statistical method for extracting cortical states from multi-cell recordings of spiking activity. Our model is based on a 'state-dependent' GLM [10] in which the states are organised hierarchically and evolve over time according to a hidden Markov model.
Whether, and in which situations, the best descriptions of cortical states are multi-dimensional, discrete or continuous [25, 2] is an open question [6], and models like the one presented here will help shed light on it. We showed that the use of variational inference methods makes it possible to estimate the posterior over parameters. Bayesian inference provides better model performance on limited data [13], yields uncertainty information, and is also an important building block for active learning approaches [14]. Finally, it can be used to determine the best model complexity: for example, one could start inference with a model containing only one state and iteratively add states (as in divisive clustering) until the variational bound stops increasing. Cortical states can have a substantial impact on the firing and coding properties of cortical neurons [6] and interact with inter-area communication [4, 3]. Therefore, a better understanding of the interplay between cortical states and sensory information, and of the role of cortical states in gating information in local cortical circuits, will be indispensable for our understanding of how populations of neurons collectively process information. Advances in experimental technology enable us to record neural activity in large populations of neurons distributed across brain areas. This makes it possible to empirically study how cortical states vary across the brain, to identify pathways which influence state, and ultimately to understand their role in neural coding and computation. The combination of such data with statistical methods for identifying the organisation of cortical states holds great promise for making progress on understanding state-dependent information processing in the brain.
Acknowledgements

We are grateful to the authors of [2] for sharing their data (toliaslab.org/publications/eckeret-al-2014/) and to Alexander Ecker, William McGhee, Marcel Nonnenmacher and David Janssen for comments on the manuscript. This work was funded by the German Federal Ministry of Education and Research (BMBF; FKZ: 01GQ1002, Bernstein Center Tübingen) and the Max Planck Society. Supplementary details and code are available at www.mackelab.org.

References
[1] M. Steriade and R. W. McCarley, Brain Control of Wakefulness and Sleep. Kluwer Academic/Plenum Publishers, 2005.
[2] A. S. Ecker, P. Berens, R. J. Cotton, M. Subramaniyan, G. H. Denfield, C. R. Cadwell, S. M. Smirnakis, M. Bethge, and A. S. Tolias, "State dependence of noise correlations in macaque primary visual cortex," Neuron, vol. 82, no. 1, 2014.
[3] E. Zagha, A. E. Casale, R. N. S. Sachdev, M. J. McGinley, and D. A. McCormick, "Motor cortex feedback influences sensory processing by modulating network state," Neuron, vol. 79, no. 3, 2013.
[4] N. K. Logothetis, O. Eschenko, Y. Murayama, M. Augath, T. Steudel, H. C. Evrard, M. Besserve, and A. Oeltermann, "Hippocampal-cortical interaction during periods of subcortical silence," Nature, vol. 491, no. 7425, 2012.
[5] T. Bezdudnaya, M. Cano, Y. Bereshpolova, C. R. Stoelzel, J.-M. Alonso, and H. A. Swadlow, "Thalamic burst mode and inattention in the awake LGNd," Neuron, vol. 49, no. 3, 2006.
[6] K. D. Harris and A. Thiele, "Cortical state and attention," Nature Reviews Neuroscience, vol. 12, no. 9, 2011.
[7] M. A. Kisley and G. L. Gerstein, "Trial-to-trial variability and state-dependent modulation of auditory-evoked responses in cortex," J. Neurosci., vol. 19, no. 23, 1999.
[8] M. J. Beal, "Variational algorithms for approximate Bayesian inference," 2003.
[9] L. M. Jones, A. Fontanini, B. F. Sadacca, P. Miller, and D. B. Katz, "Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles," PNAS, vol. 104, no. 47, 2007.
[10] S. Escola, A. Fontanini, D. Katz, and L. Paninski, "Hidden Markov models for the stimulus-response relationships of multistate neural systems," Neural Computation, vol. 23, no. 5, 2011.
[11] L. Paninski, J. Pillow, and J. Lewi, "Statistical models for neural encoding, decoding, and optimal stimulus design," Progress in Brain Research, vol. 165, 2007.
[12] Z. Chen, S. Vijayan, R. Barbieri, M. A. Wilson, and E. N. Brown, "Discrete- and continuous-time probabilistic models and algorithms for inferring neuronal UP and DOWN states," Neural Computation, vol. 21, no. 7, 2009.
[13] S. Gerwinn, J. H. Macke, and M. Bethge, "Bayesian inference for generalized linear models for spiking neurons," Frontiers in Computational Neuroscience, vol. 4, no. 12, 2010.
[14] J. Lewi, R. Butera, and L. Paninski, "Sequential optimal design of neurophysiology experiments," Neural Computation, vol. 21, no. 3, 2009.
[15] B. Shababo, B. Paige, A. Pakman, and L. Paninski, "Bayesian inference and online experimental design for mapping neural microcircuits," in Advances in Neural Information Processing Systems 26, pp. 1304–1312, Curran Associates, Inc., 2013.
[16] M. I. Jordan, Z. Ghahramani, and L. K. Saul, "Hidden Markov decision trees," in Advances in Neural Information Processing Systems 9, pp. 501–507, MIT Press, 1997.
[17] M. I. Jordan and R. A. Jacobs, "Hierarchical mixtures of experts and the EM algorithm," Neural Computation, vol. 6, no. 2, 1994.
[18] T. S. Jaakkola and M. I. Jordan, "A variational approach to Bayesian logistic regression models and their extensions," 1996.
[19] C. M. Bishop and M. Svensén, "Bayesian hierarchical mixtures of experts," in Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, UAI'03, pp. 57–64, Morgan Kaufmann Publishers Inc., 2003.
[20] G. Aston-Jones and J. D. Cohen, "An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance," Annual Review of Neuroscience, vol. 28, pp. 403–450, Annual Reviews, 2005.
[21] Y. Bengio and P. Frasconi, "An input output HMM architecture," in Advances in Neural Information Processing Systems 7, pp. 427–434, MIT Press, 1995.
[22] D. J. C. MacKay, "Ensemble learning for hidden Markov models," tech. rep., Cavendish Laboratory, University of Cambridge, 1997.
[23] D. J. C. MacKay, "Bayesian non-linear modeling for the prediction competition," ASHRAE Transactions, vol. 100, no. 2, pp. 1053–1062, 1994.
[24] C. M. Bishop, Pattern Recognition and Machine Learning. Information Science and Statistics, New York: Springer, 2006.
[25] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani, "Empirical models of spiking in neural populations," in Advances in Neural Information Processing Systems 24, Curran Associates, Inc., 2011.
Constant Nullspace Strong Convexity and Fast Convergence of Proximal Methods under High-Dimensional Settings Ian E.H. Yen Cho-Jui Hsieh Pradeep Ravikumar Inderjit Dhillon Department of Computer Science University of Texas at Austin {ianyen,cjhsieh,pradeepr,inderjit}@cs.utexas.edu Abstract State-of-the-art statistical estimators for high-dimensional problems take the form of regularized, and hence non-smooth, convex programs. A key facet of these statistical estimation problems is that they are typically not strongly convex under a high-dimensional sampling regime, when the Hessian matrix becomes rank-deficient. Under vanilla convexity, however, proximal optimization methods attain only a sublinear rate. In this paper, we investigate a novel variant of strong convexity, which we call Constant Nullspace Strong Convexity (CNSC), where we require that the objective function be strongly convex only over a constant subspace. As we show, the CNSC condition is naturally satisfied by high-dimensional statistical estimators. We then analyze the behavior of proximal methods under this CNSC condition: we show global linear convergence of Proximal Gradient and local quadratic convergence of the Proximal Newton method, when the regularization function comprising the statistical estimator is decomposable. We corroborate our theory via numerical experiments, and show a qualitative difference in the convergence rates of the proximal algorithms when the loss function does satisfy the CNSC condition. 1 Introduction There has been a growing interest in high-dimensional statistical problems, where the number of parameters d is comparable to or even larger than the sample size n, spurred in part by many modern science and engineering applications. It is now well understood that in order to guarantee statistical consistency it is key to impose low-dimensional structure, such as sparsity or low-rank structure, on the high-dimensional statistical model parameters.
A strong line of research has thus developed classes of regularized M-estimators that leverage such structural constraints and come with strong statistical guarantees even under high-dimensional settings [13]. These state-of-the-art regularized M-estimators typically take the form of convex non-smooth programs. A facet of computational consequence with these high-dimensional sampling regimes is that these M-estimation problems, even when convex, are typically not strongly convex. For instance, for the ℓ1-regularized least squares estimator (LASSO), the Hessian is rank-deficient when n < d. In the absence of additional assumptions, however, optimization methods to solve general non-smooth non-strongly convex programs can only achieve a sublinear convergence rate [19, 21]; faster rates typically require strong convexity [1, 20]. In the past few years, an effort has thus been made to impose additional assumptions that are stronger than mere convexity, and yet weaker than strong convexity, and to prove faster rates of convergence of optimization methods under these assumptions. Typically these assumptions take the form of a restricted variant of strong convexity, which incidentally mirrors those assumed for statistical guarantees as well, such as the Restricted Isometry Property or Restricted Eigenvalue property. A caveat with these results, however, is that these statistically motivated assumptions need not hold in general, or require a sufficiently large number of samples to hold with high probability. Moreover, the standard optimization methods have to be modified in some manner to leverage these assumptions [5, 7, 17]. Another line of research exploits a local error bound to establish an asymptotic linear rate of convergence for a special form of non-strongly convex functions [16, 8, 6]. However, these do not provide finite-iteration convergence bounds, due to the potentially large number of iterations spent in the early stages.
In this paper, we consider a novel simple condition, which we term Constant Nullspace Strong Convexity (CNSC). This assumption is motivated not from statistical considerations, but from the algebraic form of standard M-estimators; indeed, as we show, standard M-estimation problems even under high-dimensional settings naturally satisfy the CNSC condition. Under this CNSC condition, we then investigate the convergence rates of the class of proximal optimization methods; specifically the Proximal Gradient method (Prox-GD) [14, 15, 18] and the Proximal Newton method (Prox-Newton) [1, 2, 9]. These proximal methods are very amenable to regularized M-estimation problems: they do not treat the M-estimation problem as a black-box convex non-smooth problem, but instead leverage the composite nature of the objective of the form F(x) = h(x) + f(x), where h(x) is a possibly non-smooth convex function while f(x) is a convex smooth function with Lipschitz-continuous gradient. We show that under our CNSC condition, Proximal Gradient achieves global linear convergence when the non-smooth component is a decomposable norm. We also show that Proximal Newton, under the CNSC condition, achieves local quadratic convergence as long as the non-smooth component is Lipschitz-continuous. Note that in the absence of strong convexity, but under no additional assumptions beyond convexity, the proximal methods can only achieve sublinear convergence, as noted earlier. We have thus identified an algebraic facet of the M-estimators that explains the strong computational performance of standard proximal optimization methods in practical settings in solving high-dimensional statistical estimation problems. The paper is organized as follows. In Section 2, we define the CNSC condition and introduce the Proximal Gradient and Proximal Newton methods. Then we prove global linear convergence of Prox-GD and local quadratic convergence of Prox-Newton in Sections 3 and 4, respectively.
In Section 5, we corroborate our theory via experiments on a real high-dimensional data set. We leave all proofs of lemmas to the appendix.

2 Preliminaries

We are interested in composite optimization problems of the form

min_{x ∈ R^d} F(x) = h(x) + f(x),   (1)

where h(x) is a possibly non-smooth convex function and f(x) is a twice-differentiable convex function with its Hessian matrix H(x) = ∇²f(x) satisfying

mI ≼ H(x) ≼ MI, ∀x ∈ R^d,   (2)

where for strongly convex f(x) we have m > 0; otherwise, for convex but not strongly convex f(x), we have m = 0.

2.1 Constant Nullspace Strong Convexity (CNSC)

Before defining our strong convexity variant of Constant Nullspace Strong Convexity (CNSC), we first provide some intuition by considering the following large class of statistical estimation problems in high-dimensional machine learning, where f(x) takes the form

f(x) = ∑_{i=1}^{n} L(a_i^T x, y_i),   (3)

where L(u, y) is a non-negative loss function that is convex in its first argument, a_i is the observed feature vector and y_i is the observed response of the i-th sample. The Hessian matrix of (3) takes the form

H(x) = A^T D(Ax) A,   (4)

where A is an n × d design (data) matrix with A_{i,:} = a_i^T, and D(Ax) is a diagonal matrix with D_{ii}(x) = L''(a_i^T x, y_i), where the double derivative in L''(u, y) is with respect to the first argument. It is easy to see that in high-dimensional problems with d > n, (4) is not positive definite, so that strong convexity does not hold. However, for a strictly convex loss function L(·, y), we have L''(u, y) > 0 and

v^T H(x) v = 0 iff Av = 0.   (5)

As a consequence, v^T H(x) v > 0 as long as v does not lie in the nullspace of A; that is, the Hessian H(x) might satisfy the strong convexity bound in the above restricted sense. We generalize this concept as follows. We first define the following notation: given a subspace T, we let Π_T(·) denote the orthogonal projection onto T, and let T^⊥ denote the orthogonal subspace to T.
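To make the structure of Eqs. (4)-(5) concrete, the sketch below (an illustration assuming a logistic loss, for which L''(u, y) = σ(u)(1 − σ(u)) > 0; not the authors' code) builds H(x) = AᵀD(Ax)A for a d > n problem and checks numerically that H annihilates any vector in the nullspace of A while remaining positive on the row space:

```python
import numpy as np

def glm_hessian(A, x):
    """Hessian A^T D(Ax) A of Eq. (4) for the logistic loss, where
    D_ii = sigma(a_i^T x) * (1 - sigma(a_i^T x)) = L''(a_i^T x, y_i) > 0."""
    s = 1.0 / (1.0 + np.exp(-(A @ x)))       # logistic sigmoid of A x
    return (A.T * (s * (1.0 - s))) @ A       # scale columns of A^T by D_ii

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 4))              # n = 2 samples, d = 4 > n
H = glm_hessian(A, rng.standard_normal(4))
r = rng.standard_normal(4)
v = r - A.T @ np.linalg.solve(A @ A.T, A @ r)  # project r onto null(A)
print(np.linalg.norm(H @ v))                 # ~0: H v = 0 whenever A v = 0, as in Eq. (5)
```

Since H v = AᵀD(Av) and D is strictly positive, H vanishes exactly on null(A) and is strictly positive definite on the row space of A, which is the constant subspace T of the CNSC condition for this class of losses.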
Assumption 1 (Constant Nullspace Strong Convexity). A twice-differentiable f(x) satisfies Constant Nullspace Strong Convexity with respect to T (CNSC-T) iff there is a constant vector space T s.t. f(x) depends only on z = Π_T(x) and its Hessian matrix satisfies

v^T H(z) v ≥ m∥v∥², ∀v ∈ T   (6)

for some m > 0, and, ∀z ∈ T,

H(z) v = 0, ∀v ∈ T^⊥.   (7)

From the motivating section above, the condition can be seen to hold for a wide range of loss functions, such as those arising from linear regression models, as well as generalized linear models (e.g. logistic regression, Poisson regression, multinomial regression, etc.).¹ For L''(u, y) ≥ m_L > 0, we have m = m_L λ_min(A^T A) > 0 as the constant in (6), where λ_min(A^T A) is the minimum positive eigenvalue of A^T A. Then, by the assumption, any point x can be decomposed as x = z + y, where z = Π_T(x) and y = Π_{T^⊥}(x), so that the difference between the gradients at two points can be written as

g(x₁) − g(x₂) = ∫₀¹ H(s∆x + x₂) ∆x ds = ∫₀¹ H(s∆z + z₂) ∆z ds = H̃(z₁, z₂) ∆z,   (8)

where ∆x = x₁ − x₂, ∆z = z₁ − z₂, and H̃(z₁, z₂) = ∫₀¹ H(s∆z + z₂) ds is the average Hessian matrix along the path from z₂ to z₁. It is easy to verify that H̃(z₁, z₂) satisfies inequalities (2), (6) and equality (7) for all z₁, z₂ ∈ T, by applying the inequalities (equality) to each individual Hessian matrix being integrated. Then we have the following theorem, which shows the uniqueness of z̄ at the optimum.

Theorem 1 (Optimality Condition). For f(x) satisfying CNSC-T,
1. x̄ is an optimal solution of (1) iff −g(x̄) = ρ̄ for some ρ̄ ∈ ∂h(x̄).
2. The optimal ρ̄ and z̄ = Π_T(x̄) are unique.

Proof. The first statement is true since x̄ is an optimal solution iff 0 ∈ ∂h(x̄) + ∇f(x̄). To prove the second statement, suppose x̄₁ = z̄₁ + ȳ₁ and x̄₂ = z̄₂ + ȳ₂ are both optimal. Let ∆x = x̄₁ − x̄₂ and ∆z = z̄₁ − z̄₂. Since h(x) is convex, −g(x̄₁) ∈ ∂h(x̄₁) and −g(x̄₂) ∈ ∂h(x̄₂) should satisfy ⟨−g(x̄₁) + g(x̄₂), ∆x⟩ ≥ 0.
However, since f(x) satisfies CNSC-T, by (8),

⟨−g(x̄₁) + g(x̄₂), ∆x⟩ = ⟨−H̃(z̄₁, z̄₂)∆z, ∆x⟩ = −∆z^T H̃(z̄₁, z̄₂)∆z ≤ −m∥∆z∥₂²

for some m > 0. The two inequalities can simultaneously hold only if ∆z̄ = 0. Therefore, z̄ is unique at the optimum, and thus g(x̄) = g(0) + H̃(z̄, 0)z̄ and ρ̄ = −g(x̄) are also unique.

In the next two sections, we review the Proximal Gradient method (Prox-GD) and the Proximal Newton method (Prox-Newton), and introduce some tools that will be used in our analysis.

¹ Note that for many generalized linear models, the second derivative L''(u, y) of the loss function approaches 0 as |u| → ∞. However, this cannot happen as long as there is a penalty term h(x) which goes to infinity as x diverges, which then serves as a finite constraint bound on x.

2.2 Proximal Gradient Method

The Prox-GD algorithm comprises a gradient descent step

x_{t+1/2} = x_t − (1/M) g(x_t)

followed by a proximal step

x_{t+1} = prox^h_M(x_{t+1/2}) = argmin_x h(x) + (M/2)∥x − x_{t+1/2}∥₂²,   (9)

where ∥·∥₂ means the Frobenius norm if x is a matrix. For simplicity, we will denote prox^h_M(·) as prox(·) in the following discussion when it is clear from the context. In the Prox-GD algorithm, it is assumed that (9) can be computed efficiently, which is true for most decomposable regularizers. Here we introduce some properties of the proximal operator that facilitate our analysis.

Lemma 1. Define ∆_P x = x − prox(x). The following properties hold for the proximal operation (9):
1. M ∆_P x ∈ ∂h(prox(x)).
2. ∥prox(x₁) − prox(x₂)∥₂² ≤ ∥x₁ − x₂∥₂² − ∥∆_P x₁ − ∆_P x₂∥₂².

2.3 Proximal Newton Method

In this section, we introduce the Proximal Newton method, which has been shown to be considerably more efficient than first-order methods in many applications [1], including Sparse Inverse Covariance Estimation [2] and ℓ1-regularized Logistic Regression [9, 10].
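Before turning to Prox-Newton's quadratic subproblem, note that for h(x) = λ∥x∥₁ the proximal step (9) used by Prox-GD is simply coordinate-wise soft-thresholding, so the whole iteration fits in a few lines. A generic sketch on a least-squares loss (illustrative, not the authors' code; M must upper-bound the largest eigenvalue of AᵀA):

```python
import numpy as np

def prox_l1(x, tau):
    """prox of tau * ||.||_1: coordinate-wise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_gd_lasso(A, y, lam, M, iters=200):
    """Prox-GD (Eq. (9)) on F(x) = 0.5 ||Ax - y||^2 + lam ||x||_1:
    a gradient step x - g(x)/M followed by the proximal step."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)            # gradient of the smooth part f
        x = prox_l1(x - g / M, lam / M)  # proximal step with threshold lam/M
    return x
```

On the toy problem A = I, y = (3, 0.5), λ = 1, the iteration reaches its fixed point soft(y, 1) = (2, 0) after a single step, illustrating how the prox zeroes out small coordinates.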
Each step of Prox-Newton solves a local quadratic approximation

x⁺_t = arg min_x h(x) + (1/2)(x − xt)⊤Ht(x − xt) + g⊤t(x − xt) (10)

to find a search direction x⁺_t − xt, and then conducts a line search procedure to find a step size t such that f(xt+1) = f(xt + t(x⁺_t − xt)) meets a sufficient decrease condition. Note that, unlike the Prox-GD update (9), in most cases (10) requires an iterative procedure to solve. For example, if h(x) is the ℓ1-norm, then a coordinate descent algorithm is usually employed to solve (10) as a LASSO subproblem [1, 2, 9, 10]. The convergence of Newton-type methods comprises two phases [1, 3]. In the first phase, a step size t < 1 may be chosen, while in the second phase, which occurs when xt is close enough to the optimum, the step size t = 1 is always chosen and each step leads to quadratic convergence. In this paper, we focus on the quadratic convergence phase, and refer readers to [21] for a global analysis of Prox-Newton without a strong convexity assumption. In the quadratic convergence phase, we have xt+1 = x⁺_t and the update can be written as

xt+1 = proxHt(xt + ∆x^nt_t), Ht∆x^nt_t = −gt, (11)

where ∆x^nt_t is the Newton step when h(x) is absent, and the proximal operator proxH(·) is defined for any PSD matrix H as

proxH(x) = arg min_v h(v) + (1/2)∥v − x∥²H. (12)

Note that while we use ∥x∥²H to denote x⊤Hx, we only require H to be PSD instead of PD. Therefore, ∥x∥H is not a true norm, and (12) might have multiple solutions, in which case proxH(x) refers to any one of them. In the following, we show that proxH(·) has properties similar to those of prox(·) in the previous section.

Lemma 2. Define ∆Px = x − proxH(x). The following properties hold for the proximal operator: 1. H∆Px ∈ ∂h(proxH(x)). 2. ∥proxH(x1) − proxH(x2)∥²H ≤ ∥x1 − x2∥²H.

3 Linear Convergence of the Proximal Gradient Method

In this section, we analyze the convergence of the Proximal Gradient Method for h(x) = λ∥x∥, where ∥·∥ is a decomposable norm defined as follows.

Definition 1 (Decomposable Norm).
∥·∥ is a decomposable norm if there are orthogonal subspaces {Mj}Jj=1 with Rd = ⊕Jj=1 Mj such that, for any point x ∈ Rd that can be written as x = Σj∈E cj aj, where cj > 0, aj ∈ Mj and ∥aj∥∗ = 1, we have ∥x∥ = Σj∈E cj and

∂∥x∥ = {ρ | ΠMj(ρ) = aj, ∀j ∈ E; ∥ΠMj(ρ)∥∗ ≤ 1, ∀j ∉ E}, (13)

where ∥·∥∗ is the dual norm of ∥·∥.

The above definition includes several well-known examples such as the ℓ1-norm ∥x∥1 and the group-ℓ1 norm ∥X∥1,2. For the ℓ1-norm, Mj corresponds to vectors with only the j-th coordinate nonzero, and E is the set of non-zero coordinates of x. For the group-ℓ1 norm, Mj corresponds to vectors with only the j-th group nonzero, and E is the set of non-zero groups of X. Under this definition, we can characterize the set of optimal solutions as follows.

Lemma 3 (Optimal Set). Let ¯E be the active set at the optimum and ¯E+ = {j | ∥ΠMj(¯ρ)∥∗ = λ} be its augmented set (which is unique since ¯ρ is unique), such that ΠMj(¯ρ) = λ¯aj, j ∈ ¯E+. The optimal solutions of (1) form a polyhedral set

¯X = { x | ΠT(x) = ¯z and x ∈ ¯O }, (14)

where ¯O = { x | x = Σj∈¯E+ cj¯aj, cj ≥ 0, j ∈ ¯E+ } is the set of x with ¯ρ ∈ ∂h(x).

Given that the optimal set is a polyhedron, we can employ the following lemma to bound the distance of an iterate xt to the optimal set ¯X.

Lemma 4 (Hoffman's bound). Consider a polyhedral set S = {x | Ax ≤ b, Ex = c}. For any point x ∈ Rd, there is an ¯x ∈ S such that

∥x − ¯x∥₂ ≤ θ(S) ∥( [Ax − b]+ ; Ex − c )∥₂, (15)

where θ(S) is a positive constant that depends only on A and E.

The above bound first appeared in [11], and was employed in [4] to prove linear convergence of the Feasible Descent method for a class of convex smooth functions. A proof of the ℓ2-norm version (15) can be found in [4, Lemma 4.3]. By applying (15) to the set ¯X, the distance of a point x to ¯X can be bounded by the infeasibility with respect to the two constraints ΠT(x) = ¯z and x ∈ ¯O, where the latter can be bounded according to the following lemma when cj = ⟨x, ¯aj⟩ ≥ 0, ∀j ∈ ¯E+.

Lemma 5. Let ¯A = span(¯a1, ¯a2, . . . , ¯a|¯E+|).
Suppose ∥x∥ ≤ R and ΠMj(x) = 0 for j ∉ ¯E+. Then λ²∥x − Π¯A(x)∥²₂ ≤ R²∥ρ − ¯ρ∥²₂, where ρ ∈ ∂h(x) and ¯ρ is as defined in Theorem 1.

Now we are ready to prove the main theorem of this section.

Theorem 2 (Linear Convergence of Prox-GD). Let ¯X be the set of optimal solutions of problem (1), and ¯x = Π¯X(x) the optimal solution closest to x. Denote dλ = min_{j∉¯E+} ( λ − ∥ΠMj(¯ρ)∥∗ ) > 0. For the sequence {xt}∞t=0 produced by the Proximal Gradient Method, we have:

(a) If xt+1 satisfies the condition

∃j ∉ ¯E+ : ΠMj(xt+1) ≠ 0 or ∃j ∈ ¯E+ : ⟨xt+1, ¯aj⟩ < 0, (16)

then ∥xt+1 − ¯xt+1∥²₂ ≤ (1 − α)∥xt − ¯xt∥²₂, α = d²λ/(M²∥x0 − ¯x0∥²₂). (17)

(b) If xt+1 does not satisfy the condition in (16) but xt does, then ∥xt+1 − ¯xt+1∥²₂ ≤ (1 − α)∥xt−1 − ¯xt−1∥²₂, with α as in (17). (18)

(c) If neither xt+1 nor xt satisfies the condition in (16), then ∥xt+2 − ¯xt+2∥²₂ ≤ (1/(1 + β))∥xt − ¯xt∥²₂, β = m/(Mθ(¯X)²), (19)

where we recall that θ(¯X) is the constant determined by the polyhedron ¯X in Hoffman's bound (15).

Proof. Since ¯xt is an optimal solution, we have ¯xt = prox(¯xt − g(¯xt)/M). Let ∆xt = xt − ¯xt, ρt = M(xt+1/2 − xt+1) ∈ ∂h(xt+1) and ˜H = ˜H(zt, ¯zt). By Lemma 1, each iterate of Prox-GD has

∥xt − ¯xt∥²₂ − ∥xt+1 − ¯xt+1∥²₂ ≥ ∥xt − ¯xt∥²₂ − ∥xt+1 − ¯xt∥²₂ = ∥∆xt∥²₂ − ∥prox(xt − g(xt)/M) − prox(¯xt − g(¯xt)/M)∥²₂ ≥ ∥∆xt∥²₂ − ∥(xt − g(xt)/M) − (¯xt − g(¯xt)/M)∥²₂ + ∥ρt − ¯ρ∥²₂/M². (20)

Since g(xt) − g(¯xt) = ˜H∆xt from (8), we have

∥xt − ¯xt∥²₂ − ∥xt+1 − ¯xt+1∥²₂ ≥ ∥∆xt∥²₂ − ∥∆xt − ˜H∆xt/M∥²₂ + ∥ρt − ¯ρ∥²₂/M² ≥ ∆x⊤t(˜H/M)∆xt + ∥ρt − ¯ρ∥²₂/M² ≥ m∥∆zt∥²₂/M + ∥ρt − ¯ρ∥²₂/M². (21)

The second inequality holds since 2˜H/M − ˜H²/M² = (˜H/M)(2I − ˜H/M) ≽ ˜H/M. This inequality tells us that ∥xt − ¯xt∥₂ − ∥xt+1 − ¯xt+1∥₂ ≥ 0; that is, the distance to the optimal set ∥xt − ¯xt∥ is monotonically non-increasing. To get a tighter bound, we consider two cases.

Case 1: ΠMj(xt) ≠ 0 for some j ∉ ¯E+, or ⟨xt, ¯aj⟩ < 0 for some j ∈ ¯E+.
In this case, suppose there is j ∉ ¯E+ with ΠMj(xt) ≠ 0; then²

∥ρt − ¯ρ∥²₂ ≥ ∥ΠMj(ρt) − ΠMj(¯ρ)∥²∗ ≥ (∥ΠMj(ρt)∥∗ − ∥ΠMj(¯ρ)∥∗)² ≥ d²λ. (22)

On the other hand, if ⟨xt, ¯aj⟩ < 0 for some j ∈ ¯E+, then we have ⟨aj, ¯aj⟩ < 0 for ΠMj(ρt) = λaj. Therefore ∥ρt − ¯ρ∥²₂ ≥ ∥ΠMj(ρt) − ΠMj(¯ρ)∥²₂ ≥ λ²∥aj − ¯aj∥²₂ = λ²(2 − 2⟨aj, ¯aj⟩) > 2λ². In either case we have

∥xt − ¯xt∥²₂ − ∥xt+1 − ¯xt+1∥²₂ ≥ ∥ρt − ¯ρ∥²₂/M² ≥ ( d²λ/(M²∥x0 − ¯x0∥²₂) ) ∥xt − ¯xt∥²₂. (23)

Case 2: Neither xt nor xt+1 falls in Case 1. Given ⟨xt, ¯aj⟩ ≥ 0, ∀j ∈ ¯E+ and ΠMj(xt) = 0, ∀j ∉ ¯E+, x belongs to the set ¯O defined in Lemma 3 iff ∥x − Π¯A(x)∥²₂ = 0. The condition can also be scaled as (λ²/(mMR²))∥x − Π¯A(x)∥²₂ = 0, where R is a bound on ∥xt∥ for all t, which must exist as long as the regularization parameter λ > 0 in h(x) = λ∥x∥. By Lemma 4, the distance of the point xt to the polyhedral set ¯X is bounded by its infeasibility amount:

∥xt − ¯xt∥²₂ ≤ θ(¯X)² ( ∥zt − ¯z∥²₂ + (λ²/(mMR²))∥xt − Π¯A(xt)∥²₂ ), (24)

²From our definition of decomposable norm, if a vector v belongs to a single subspace Mj, then ∥v∥ = ∥v∥∗ = ∥v∥₂. The reason: by the definition, if v ∈ Mj, then v = cjaj for some cj > 0, aj ∈ Mj, ∥aj∥∗ = 1, and it has decomposable norm ∥v∥ = cj. However, we also have ∥v∥∗ = ∥cjaj∥∗ = cj∥aj∥∗ = cj = ∥v∥. A norm equals its dual norm only if it is the ℓ2-norm.

where zt = ΠT(xt). Applying (24) to (21) for iteration t + 1, we have

∥xt+1 − ¯xt+1∥²₂ − ∥xt+2 − ¯xt+2∥²₂ ≥ (m/(Mθ(¯X)²))∥∆xt+1∥²₂ − (λ²/(M²R²))∥xt+1 − Π¯A(xt+1)∥²₂ + ∥ρt+1 − ¯ρ∥²₂/M².

For iteration t, we have ∥xt − ¯xt∥²₂ − ∥xt+1 − ¯xt+1∥²₂ ≥ (m/M)∥∆zt∥²₂ + ∥ρt − ¯ρ∥²₂/M². By Lemma 5, adding the two inequalities gives

∥xt − ¯xt∥²₂ − ∥xt+2 − ¯xt+2∥²₂ ≥ (m/(Mθ(¯X)²))∥∆xt+1∥²₂ + (m/M)∥∆zt∥²₂ + ∥ρt+1 − ¯ρ∥²₂/M² ≥ (m/(Mθ(¯X)²))∥∆xt+1∥²₂ ≥ (m/(Mθ(¯X)²))∥∆xt+2∥²₂,

which yields the desired result (19) after rearrangement. We note that the descent in the first two cases is actually even stronger than stated above: from the proofs, the distance can be seen to decrease by a fixed constant.
This is faster than superlinear convergence, since the final solution can then be obtained in a finite number of steps.

4 Quadratic Convergence of the Proximal Newton Method

The key idea of the proof is to re-formulate the Prox-Newton update (10) as

zt+1 = arg min_{z∈T} h(z + ˆy(z)) + g⊤t(z − zt) + (1/2)∥z − zt∥²Ht (25)

where ˆy(z) = arg min_{y∈T⊥} h(z + y), (26)

so that we can focus our convergence analysis on z = ΠT(x) as follows.

Lemma 6 (Optimality Condition). For any matrix H satisfying CNSC-T, the update

∆x = arg min_d h(x + d) + g(x)⊤d + (1/2)∥d∥²H (27)

has F(x + t∆x) − F(x) ≤ −t∥∆z∥²H + O(t²), (28)

where ∆z = ΠT(∆x). Furthermore, if x is an optimal solution, ∆x = 0 satisfies (27).

The following lemma then states that, for Prox-Newton, the function suboptimality is bounded by the distance in the T space alone.

Lemma 7. Suppose h(x) and f(x) are Lipschitz-continuous with Lipschitz constants Lh and Lf. In the quadratic convergence phase (defined in Theorem 3), the Proximal Newton Method has

F(xt) − F(¯x) ≤ L∥zt − ¯z∥, (29)

where L = max{Lh, Lf} and zt = ΠT(xt), ¯z = ΠT(¯x).

By the above lemma, we have F(xt) − F(¯x) ≤ Lϵ as long as ∥zt − ¯z∥ ≤ ϵ. Therefore, it suffices to show quadratic convergence of ∥zt − ¯z∥ to guarantee that F(xt) − F(¯x) doubles its precision after each iteration.

Theorem 3 (Quadratic Convergence of Prox-Newton). For f(x) satisfying CNSC-T with Lipschitz-continuous second derivative ∇²f(x), the Proximal Newton update (10) satisfies ∥zt+1 − ¯z∥ ≤ (LH/2m)∥zt − ¯z∥², where ¯z = ΠT(¯x), zt = ΠT(xt), and LH is the Lipschitz constant of ∇²f(x).

Proof. Let ¯x be an optimal solution of (1). By Lemma 6, for any PSD matrix H the update ∆¯x = 0 satisfies (27), which means

¯x = proxHt(¯x + ∆¯x^nt), Ht∆¯x^nt = −g(¯x). (30)

Then, by the non-expansiveness of the proximal operation (Lemma 2), we have

∥xt+1 − ¯x∥Ht = ∥proxHt(xt + ∆x^nt_t) − proxHt(¯x + ∆¯x^nt)∥Ht ≤ ∥(xt + ∆x^nt_t) − (¯x + ∆¯x^nt)∥Ht = ∥(xt − ¯x) + (∆x^nt_t − ∆¯x^nt)∥Ht = ∥(zt − ¯z) + (∆z^nt_t − ∆¯z^nt)∥Ht.
(31)

Since ∥Htz∥₂ ≥ √m ∥z∥Ht for z ∈ T, (31) leads to

∥xt+1 − ¯x∥Ht ≤ (1/√m)∥Ht(zt − ¯z) − Ht(∆z^nt_t − ∆¯z^nt)∥₂ = (1/√m)∥Ht(zt − ¯z) − (gt − ¯g)∥₂ ≤ (LH/(2√m))∥zt − ¯z∥²₂, (32)

where the last inequality follows from the Lipschitz-continuity of ∇²f(x). Since zt+1, ¯z ∈ T, we have

∥xt+1 − ¯x∥Ht = ∥zt+1 − ¯z∥Ht ≥ √m ∥zt+1 − ¯z∥₂. (33)

Finally, combining (33) with (32), ∥zt+1 − ¯z∥₂ ≤ (LH/2m)∥zt − ¯z∥²₂, where the quadratic convergence phase occurs when ∥zt − ¯z∥ < 2m/LH.

5 Numerical Experiments

In this section, we study the convergence behavior of the Proximal Gradient method and the Proximal Newton method on a high-dimensional real data set with and without the CNSC condition. In particular, two loss functions — the logistic loss L(u, y) = log(1 + exp(−yu)) and the ℓ2-hinge loss L(u, y) = max(1 − yu, 0)² — are used in (3) with ℓ1-regularization h(x) = λ∥x∥1, where both losses are smooth but only the logistic loss has the strict convexity that implies the CNSC condition. For the Proximal Newton method we employ a randomized coordinate descent algorithm to solve subproblem (10) as in [9]. Figure 1 shows the convergence of the objective value relative to the optimum on rcv1.1k, a subset of a document classification data set with dimension d = 10,192 and number of samples n = 1000. From the figure one can clearly observe the linear convergence of Prox-GD and the quadratic convergence of Prox-Newton on the problem satisfying CNSC, in contrast to the qualitatively different behavior on the problem without CNSC.

Figure 1: Objective value (relative to the optimum) of the Proximal Gradient method (left) and the Proximal Newton method (right) with logistic loss and ℓ2-hinge loss.

Acknowledgement This research was supported by NSF grants CCF-1320746 and CCF-1117055. C.-J.H acknowledges support from an IBM PhD fellowship. P.R.
acknowledges the support of ARO via W911NF-12-10390 and NSF via IIS-1149803, IIS-1320894, IIS-1447574, and DMS-1264033.

References
[1] J. D. Lee, Y. Sun, and M. A. Saunders. Proximal Newton-type methods for minimizing composite functions. In NIPS, 2012.
[2] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance estimation using quadratic approximation. In NIPS, 2011.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, U.K., 2003.
[4] P.-W. Wang and C.-J. Lin. Iteration complexity of feasible descent methods for convex optimization. Technical report, Department of Computer Science, National Taiwan University, Taipei, Taiwan, 2013.
[5] A. Agarwal, S. Negahban, and M. Wainwright. Fast global convergence rates of gradient methods for high-dimensional statistical recovery. In NIPS, 2010.
[6] K. Hou, Z. Zhou, A. M.-C. So, and Z.-Q. Luo. On the linear convergence of the proximal gradient method for trace norm regularization. In NIPS, 2013.
[7] L. Xiao and T. Zhang. A proximal-gradient homotopy method for the ℓ1-regularized least-squares problem. In ICML, 2012.
[8] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming B, 117, 2009.
[9] G.-X. Yuan, C.-H. Ho, and C.-J. Lin. An improved GLMNET for ℓ1-regularized logistic regression. Journal of Machine Learning Research, 13:1999–2030, 2012.
[10] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[11] A. J. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 1952.
[12] A. Tewari, P. Ravikumar, and I. S. Dhillon. Greedy algorithms for structurally constrained high dimensional problems. In NIPS, 2011.
[13] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In NIPS, 2009.
[14] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[15] S. Becker, J. Bobin, and E. J. Candès. NESTA: a fast and accurate first-order method for sparse recovery. SIAM Journal on Imaging Sciences, 2011.
[16] Z.-Q. Luo and P. Tseng. Error bounds and convergence analysis of feasible descent methods: a general approach. Annals of Operations Research, 46-47:157–178, 1993.
[17] R. Garg and R. Khandekar. Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property. In ICML, 2009.
[18] S. Ji and J. Ye. An accelerated gradient method for trace norm minimization. In ICML, 2009.
[19] Y. Nesterov. Gradient methods for minimizing composite objective function. CORE report, 2007.
[20] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, New York, 2004.
[21] K. Scheinberg and X. Tang. Practical inexact proximal quasi-Newton method with global complexity analysis. COR@L Technical Report, Lehigh University. arXiv:1311.6547, 2013.
Sparse Bayesian structure learning with dependent relevance determination prior Anqi Wu1 Mijung Park2 Oluwasanmi Koyejo3 Jonathan W. Pillow4 1,4 Princeton Neuroscience Institute, Princeton University, {anqiw, pillow}@princeton.edu 2 The Gatsby Unit, University College London, mijung@gatsby.ucl.ac.uk 3 Department of Psychology, Stanford University, sanmi@stanford.edu Abstract In many problem settings, parameter vectors are not merely sparse, but dependent in such a way that non-zero coefficients tend to cluster together. We refer to this form of dependency as “region sparsity”. Classical sparse regression methods, such as the lasso and automatic relevance determination (ARD), model parameters as independent a priori, and therefore do not exploit such dependencies. Here we introduce a hierarchical model for smooth, region-sparse weight vectors and tensors in a linear regression setting. Our approach represents a hierarchical extension of the relevance determination framework, where we add a transformed Gaussian process to model the dependencies between the prior variances of regression weights. We combine this with a structured model of the prior variances of Fourier coefficients, which eliminates unnecessary high frequencies. The resulting prior encourages weights to be region-sparse in two different bases simultaneously. We develop efficient approximate inference methods and show substantial improvements over comparable methods (e.g., group lasso and smooth RVM) for both simulated and real datasets from brain imaging. 1 Introduction Recent work in statistics has focused on high-dimensional inference problems where the number of parameters p equals or exceeds the number of samples n. Although ill-posed in general, such problems are made tractable when the parameters have special structure, such as sparsity in a particular basis. 
A large literature has provided theoretical guarantees about the solutions to sparse regression problems and introduced a suite of practical methods for solving them efficiently [1–7]. The Bayesian interpretation of standard “shrinkage” based methods for sparse regression problems involves maximum a posteriori (MAP) inference under a sparse, independent prior on the regression coefficients [8–15]. Under such priors, the posterior has high concentration near the axes, so the posterior maximum is at zero for many weights unless it is pulled strongly away by the likelihood. However, these independent priors neglect a statistical feature of many real-world regression problems, which is that non-zero weights tend to arise in clusters, and are therefore not independent a priori. In many settings, regression weights have an explicit topographic relationship, as when they index regressors in time or space (e.g., time series regression, or spatio-temporal neural receptive field regression). In such settings, nearby weights exhibit dependencies that are not captured by independent priors, which results in sub-optimal performance. Recent literature has explored a variety of techniques for improving sparse inference methods by incorporating different types of prior dependencies, which we will review here briefly. The smooth relevance vector machine (s-RVM) extends the RVM to incorporate a smoothness prior defined in a kernel space, so that weights are smooth as well as sparse in a particular basis [16]. The group lasso captures the tendency for groups of coefficients to remain in or drop out of a model in a coordinated manner by using an ℓ1 penalty on the ℓ2 norms of pre-defined groups of coefficients [17]. A method described in [18] uses a multivariate Laplace distribution to impose spatio-temporal coupling between prior variances of regression coefficients, which imposes group sparsity while leaving coefficients marginally uncorrelated.
The literature includes many related methods [19–24], although most require a priori knowledge of the dependency structure, which may be unavailable in many applications of interest. Here we introduce a novel, flexible method for capturing dependencies in sparse regression problems, which we call dependent relevance determination (DRD). Our approach uses a Gaussian process to model dependencies between latent variables governing the prior variance of regression weights. (See [25], which independently proposed a similar idea.) We simultaneously impose smoothness by using a structured model of the prior variance of the weights’ Fourier coefficients. The resulting model captures sparse, local structure in two different bases simultaneously, yielding estimates that are sparse as well as smooth. Our method extends previous work on automatic locality determination (ALD) [26] and Bayesian structure learning (BSL) [27], both of which described hierarchical models for capturing sparsity, locality, and smoothness. Unlike these methods, DRD can tractably recover region-sparse estimates with multiple regions of non-zero coefficients, without pre-defining the number of regions. We argue that DRD can substantially improve structure recovery and predictive performance in real-world applications. This paper is organized as follows: Sec. 2 describes the basic sparse regression problem; Sec. 3 introduces the DRD model; Sec. 4 and Sec. 5 describe the approximate methods we use for inference; in Sec. 6, we show applications to simulated data and neuroimaging data.

2 Problem setup

2.1 Observation model

We consider a scalar response yi ∈ R linked to an input vector xi ∈ Rp via the linear model:

yi = xi⊤w + ϵi, for i = 1, 2, · · · , n, (1)

with observation noise ϵi ∼ N(0, σ²). The regression (linear weight) vector w ∈ Rp is the quantity of interest. We denote the design matrix by X ∈ Rn×p, where the ith row of X is the input vector xi⊤, and the observation vector by y = [y1, · · · , yn]⊤ ∈ Rn.
The likelihood can be written:

y|X, w, σ² ∼ N(y|Xw, σ²I). (2)

2.2 Prior on regression vector

We impose a zero-mean multivariate normal prior on w:

w|θ ∼ N(0, C(θ)) (3)

where the prior covariance matrix C(θ) is a function of hyperparameters θ. One can specify C(θ) based on prior knowledge about the regression vector, e.g. sparsity [28–30], smoothness [16, 31], or both [26]. Ridge regression assumes C(θ) = θ⁻¹I, where θ is a scalar precision. Automatic relevance determination (ARD) uses a diagonal prior covariance matrix with a distinct hyperparameter θi for each element of the diagonal, thus Cii = θi⁻¹. Automatic smoothness determination (ASD) assumes a non-diagonal prior covariance, given by a Gaussian kernel, Cij = exp(−ρ − ∆ij/(2δ²)), where ∆ij is the squared distance between the filter coefficients wi and wj in pixel space and θ = {ρ, δ²}. Automatic locality determination (ALD) parametrizes the local region with a Gaussian form, so that the prior variance of each filter coefficient is determined by its Mahalanobis distance (in coordinate space) from some mean location ν under a symmetric positive semi-definite matrix Ψ. The diagonal prior covariance matrix is given by Cii = exp(−(1/2)(χi − ν)⊤Ψ⁻¹(χi − ν)), where χi is the space-time location (i.e., filter coordinates) of the ith filter coefficient wi and θ = {ν, Ψ}.

3 Dependent relevance determination (DRD) priors

We formulate prior covariances to capture region-dependent sparsity in the regression vector as follows.

Sparsity inducing covariance

We first parameterise the prior covariance to capture region sparsity in w:

Cs = diag[exp(u)], (4)

where the parameters are u ∈ Rp. We impose a Gaussian process (GP) hyperprior on u:

u ∼ N(b1, K). (5)

The GP hyperprior is controlled by the mean parameter b ∈ R and the squared exponential kernel parameters: overall scale ρ ∈ R and length scale l ∈ R. We denote these hyperparameters by θs = {b, ρ, l}.
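The construction in eqs. (4)–(5) can be sketched directly: draw u from a GP with constant mean b and a squared-exponential kernel, then exponentiate. This is a hedged illustration; in particular, treating exp(ρ) as the kernel amplitude, and all numeric values below, are our own assumptions about how θs = {b, ρ, l} enters the kernel.

```python
import numpy as np

def se_kernel(p, rho, ell):
    """Squared-exponential kernel over 1D coefficient locations 0..p-1,
    with (assumed) amplitude exp(rho) and length scale ell."""
    t = np.arange(p, dtype=float)
    sqdist = (t[:, None] - t[None, :]) ** 2
    return np.exp(rho) * np.exp(-0.5 * sqdist / ell ** 2)

def sample_drd_variances(p, b, rho, ell, jitter=1e-8, seed=0):
    """Draw u ~ N(b*1, K) as in eq. (5) and return exp(u),
    the diagonal of the DRD prior covariance C_s in eq. (4)."""
    rng = np.random.default_rng(seed)
    K = se_kernel(p, rho, ell) + jitter * np.eye(p)  # jitter for stability
    u = rng.multivariate_normal(b * np.ones(p), K)
    return np.exp(u)

cs_diag = sample_drd_variances(p=200, b=-3.0, rho=2.0, ell=10.0)
```

Because u varies smoothly, the resulting variances are large in contiguous regions and near zero elsewhere, which is exactly the "region sparsity" the prior is designed to encode.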
We refer to the prior distribution associated with the covariance Cs as the dependent relevance determination (DRD) prior. Note that this hyperprior induces dependencies between the ARD precisions; that is, the prior variance changes slowly between neighboring coefficients. If the ith coefficient of u has large prior variance, then the (i+1)th and (i−1)th coefficients likely do as well.

Smoothness inducing covariance

We then formulate the smoothness-inducing covariance in the frequency domain. Smoothness is captured by a low-pass filter that passes only lower frequencies. Therefore, we define a zero-mean Gaussian prior over the Fourier-transformed weights w using a diagonal covariance matrix Cf with diagonal

Cf,ii = exp(−χi²/(2δ²)), (6)

where χi is the ith location of the regression weights w in the frequency domain and δ² is the Gaussian variance. We denote this hyperparameter by θf = δ². This formulation encourages neighboring weights to have similar levels of Fourier power. Similar to automatic determination in frequency coordinates (ALDf) [26], this way of formulating the covariance requires taking the discrete Fourier transform of the input vectors to construct the prior in the frequency domain. This incurs significant computation and memory costs, especially when p is large. To avoid this expense, we do not use the single-frequency version Cf alone, but combine it with Cs to form Csf, with both sparsity and smoothness induced, as follows.

Smoothness and region sparsity inducing covariance

Finally, to capture both region sparsity and smoothness in w, we combine Cs and Cf as

Csf = Cs^{1/2} B⊤CfB Cs^{1/2}, (7)

where B is the Fourier transformation matrix, which can be huge when p is large. The implementation exploits the speed of the FFT to apply B implicitly. This formulation implies that the sparse regions captured by Cs are pruned out, and the variances of the remaining entries of the weights are correlated by Cf.
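Because B never needs to be formed explicitly, products with Csf in eq. (7) can be applied in O(p log p) time via the FFT. The sketch below is one plausible implementation under assumed conventions (integer frequency coordinates from numpy's fftfreq, and the standard fft/ifft pair standing in for B and B⊤); it is not the authors' code.

```python
import numpy as np

def csf_matvec(v, cs_diag, delta):
    """Apply C_sf = C_s^{1/2} B^T C_f B C_s^{1/2} (eq. 7) to a vector,
    using the FFT in place of the dense Fourier matrix B."""
    p = v.size
    freqs = np.fft.fftfreq(p) * p                       # frequency coordinates chi_i
    cf_diag = np.exp(-freqs ** 2 / (2.0 * delta ** 2))  # diagonal of C_f, eq. (6)
    x = np.sqrt(cs_diag) * v       # C_s^{1/2} v
    x = np.fft.fft(x)              # B x
    x = cf_diag * x                # C_f B x
    x = np.fft.ifft(x)             # inverse transform stands in for B^T
    return np.sqrt(cs_diag) * np.real(x)
```

As delta grows, Cf approaches the identity and Csf reduces to the purely sparse prior Cs; small delta suppresses high-frequency structure in the sampled weights.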
We refer to the prior distribution associated with the covariance Csf as the smooth dependent relevance determination (sDRD) prior. Unlike Cs, the covariance Csf is no longer diagonal. To reduce computational complexity and storage requirements, we only store the values corresponding to the non-zero portions of the diagonals of Cs and Cf, rather than the full Csf.

Figure 1: Generative model for locally smooth and globally sparse Bayesian structure learning. The ith response yi is linked to an input vector xi and a weight vector w for each i. The weight vector w is governed by u and θf. The hyper-prior p(u|θs) imposes correlated sparsity in w and the hyperparameter θf imposes smoothness in w.

4 Posterior inference for w

First, we denote the overall hyperparameter set by θ = {σ², θs, θf} = {σ², b, ρ, l, δ²}. We compute the maximum likelihood estimate for θ (denoted by ˆθ) and compute the conditional MAP estimate for the weights w given ˆθ (in closed form), which is the empirical Bayes procedure equipped with a hyper-prior. Our goal is to infer w. The posterior distribution over w is obtained by

p(w|X, y) = ∫∫ p(w, u, θ|X, y) du dθ, (8)

which is analytically intractable. Instead, we approximate the marginal posterior distribution with the conditional distribution given the MAP estimate of u, denoted by µu, and the maximum likelihood estimates of σ², θs, θf, denoted by ˆσ², ˆθs, ˆθf:

p(w|X, y) ≈ p(w|X, y, µu, ˆσ², ˆθs, ˆθf). (9)

The approximate posterior over w is multivariate normal with mean and covariance given by

p(w|X, y, µu, ˆσ², ˆθs, ˆθf) = N(µw, Λw), (10)

Λw = ( (1/ˆσ²)X⊤X + C⁻¹_{µu, ˆθs, ˆθf} )⁻¹, (11)

µw = (1/ˆσ²) ΛwX⊤y. (12)

5 Inference for hyperparameters

The MAP inference of w derived in the previous section depends on the values of ˆθ = {ˆσ², ˆθs, ˆθf}. To estimate ˆθ, we maximize the marginal likelihood (the evidence):

ˆθ = arg max_θ log p(y|X, θ) (13)

where p(y|X, θ) = ∫∫ p(y|X, w, σ²)p(w|u, θf)p(u|θs) dw du.
(14)

Unfortunately, computing the double integral is intractable. In the following, we consider an approximation method based on the Laplace approximation to compute the integral approximately.

Laplace approximation to the posterior over u

To approximate the marginal likelihood, we can rewrite Bayes’ rule to express the marginal likelihood as the likelihood times the prior divided by the posterior,

p(y|X, θ) = p(y|X, u)p(u|θ) / p(u|y, X, θ). (15)

Laplace’s method allows us to approximate p(u|y, X, θ), the posterior over the latent u given the data {X, y} and hyper-parameters θ, using a Gaussian centered at the mode of the distribution, with inverse covariance given by the Hessian of the negative log-likelihood. Let µu = arg max_u p(u|y, X, θ) and Λu = −( ∂²/∂u∂u⊤ log p(u|y, X, θ) )⁻¹ denote the mean and covariance of this Gaussian, respectively. Although the right-hand side can be evaluated at any value of u, a common approach is to use the mode u = µu, since this is where the Laplace approximation is most accurate. This leads to the following expression for the log marginal likelihood:

log p(y|X, θ) ≈ log p(y|X, µu) + log p(µu|θ) − (1/2) log |2πΛu|. (16)

Then, by optimizing log p(y|X, θ) with respect to θ, we can obtain ˆθ for a fixed µu, denoted ˆθµu. Following an iterative optimization procedure, we obtain the updating rule µᵗu = arg max_u p(u|y, X, ˆθµᵗ⁻¹u) at the tth iteration. The algorithm stops when u and θ converge. Further details about the formulation and derivation are given in the appendix.

6 Experiment and Results

6.1 One-Dimensional Simulated Data

Beginning with simulated data, we compare the performance of various regression estimators. One-dimensional data are generated from a generative model with d = 200 dimensions. First, to obtain a Gaussian process, a covariance kernel matrix K is built from a squared exponential kernel with the spatial locations of the regression weights as inputs, and a scalar b is set as the mean function to determine the scale of the prior covariance. Given the Gaussian process, we generate a multivariate vector u and take its exponential to obtain the diagonal of the prior covariance Cs in the space-time domain. To induce smoothness, eq. 7 is used to obtain the covariance Csf. A weight vector w is then sampled from a Gaussian distribution with zero mean and covariance Csf. Finally, we obtain the response y given stimulus x with w plus Gaussian noise ϵ. In our case, ϵ should be large enough to ensure that the data and response do not impose a strong likelihood over the prior knowledge; the prior thus largely dominates the estimate. Three local regions are constructed, which are positive, negative, and half-positive-half-negative, with enough zeros between them that the separate bumps are clearly apart.

Figure 2: Comparison of estimators for the 1D simulated example. First column: true filter weight, maximum likelihood (linear regression) estimate, empirical Bayesian ridge regression (L2-penalized) estimate; second column: ARD estimate with different and independent prior covariance hyperparameters, lasso regression with L1-regularization, and group lasso with group size of 5; third column: ALD methods in the space-time domain, frequency domain, and a combination of both, respectively; fourth column: DRD method in the space-time domain only and its smooth version sDRD imposing both sparsity (space-time) and smoothness (frequency), and smooth RVM initialized with an elastic net estimate.

In Figure 2, the top left subfigure shows the underlying weight vector w. Traditional methods like maximum likelihood, without any prior, are significantly overwhelmed by the large noise in the data. Weak priors such as ridge, ARD, and lasso fit the true weight better, with different levels of sparsity imposed, but are still not sparse enough and not smooth at all. Group lasso enforces stronger sparsity than lasso by assuming block sparsity, thus making the result locally smoother. ALD-based methods perform better than traditional ones in identifying one big bump explicitly. ALDs is restricted by its assumption of a unimodal Gaussian, and is therefore able to find only one dominating local region. ALDf imposes locality in the frequency domain and thus makes the estimate smoother, but no spatial local regions are discovered. ALDsf combines the effects of ALDs and ALDf, and thus possesses smoothness, but again only one region is found. The smooth Relevance Vector Machine (sRVM) can smooth the curve by incorporating a flexible noise-dependent smoothness prior into the RVM, but is not able to exploit the data likelihood effectively. Our DRD can impose distinct local sparsity via the Gaussian process prior, and sDRD can additionally induce smoothness by bounding the frequencies. For all baseline models, we perform model selection via cross-validation over a wide range of the parameter space, ensuring fair comparisons.

Figure 3: Estimated filter weights and prior covariances. Upper row shows the true filter (dotted black) and estimated ones (red); bottom row shows the underlying prior covariance matrix.

To further illustrate the benefits and principles of DRD, we show the covariances estimated by ARD, ALDsf and sDRD in Figure 3. ARD can detect multiple localities, since its priors are purely independent scalars that are easily influenced by data with strong likelihood, but at the cost of losing dependency and smoothness. ALDsf can only detect one locality, due to its deterministic Gaussian form, when the likelihood is not sufficiently strong; with Fourier components over the prior, however, it exhibits smoothness. sDRD can capture multiple local sparse regions as well as impose smoothness. The underlying Gaussian process allows multiple non-zero regions in the prior covariance, resulting in multiple local sparse regions in the weight vector.
Smoothness is introduced by a Gaussian-type function controlling the frequency bandwidth and direction. In addition, we examine the convergence properties of the various estimators as a function of the amount of collected data and report the average relative errors of each method in Figure 4. Responses are simulated from the same filter as above with large Gaussian white noise, which weakens the data likelihood and thus guarantees a significant effect of the prior over the likelihood. The results show that the sDRD estimate achieves the smallest MSE (mean squared error), regardless of the number of training samples. The MSE, here and in the following paragraphs, refers to the error relative to the underlying w; the test error, mentioned later, refers to the error relative to the true y. The left plot in Figure 4 shows that the other methods require at least 1-2 times more data than sDRD to achieve the same error rate. The right plot shows the ratio of each estimate's MSE to that of sDRD, revealing that the next best method (ALDsf) exhibits an error nearly twice that of sDRD.

6.2 Two Dimensional Simulated Data

To better illustrate the performance of DRD and lay the groundwork for the real-data experiment, we present a 2-dimensional synthetic experiment. The data are generated to match characteristics of real fMRI data, as outlined in the next section. Using a generation procedure similar to the 1-dimensional experiment, a 2-dimensional w is generated with properties analogous to the regression weights in fMRI data; the analogy is based on reasonable speculation and knowledge accumulated from repeated trials and experiments. Two comparative studies are conducted to investigate the influence of sample size on the recovery accuracy of w and on predictive ability, both with dimension 1600 (the same as the fMRI data).
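The generative procedure shared by the 1-D and 2-D experiments can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function name, length scale, mean b, and noise level are our own assumptions, not the paper's actual settings, and the frequency-domain smoothing step (eq. 7) that turns Cs into Csf is omitted for brevity.

```python
import numpy as np

def simulate_1d_data(d=200, n=100, b=-2.0, ell=10.0, noise_std=5.0, seed=0):
    """Sketch of the 1-D generative procedure (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    coords = np.arange(d, dtype=float)
    # Squared-exponential kernel over the spatial locations of the weights.
    K = np.exp(-0.5 * (coords[:, None] - coords[None, :]) ** 2 / ell ** 2)
    # Draw a GP sample u with constant mean b; exp(u) gives the diagonal
    # of the prior covariance Cs.
    u = rng.multivariate_normal(b * np.ones(d), K + 1e-8 * np.eye(d))
    Cs = np.diag(np.exp(u))
    # (Smoothing of Cs via the frequency-domain factor of eq. 7 is omitted.)
    w = rng.multivariate_normal(np.zeros(d), Cs)
    X = rng.standard_normal((n, d))
    # Large noise weakens the likelihood so the prior dominates the estimate.
    y = X @ w + noise_std * rng.standard_normal(n)
    return X, y, w

X, y, w = simulate_1d_data()
```

The 2-D version would follow the same steps with a kernel over 2-D voxel coordinates.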
To demonstrate structural sparsity recovery, we compare our DRD method only with ARD, lasso, elastic net (elnet), and group lasso (glasso).

Figure 4: Convergence of error rates on simulated data with varying training size (left) and the relative error (MSE ratio) with respect to sDRD (right).

Figure 5: Test error for each method when n = 215 (left) and n = 800 (right) for 2D simulated data.

The sample size n varies in {215, 800}. The results are shown in Fig. 5 and Fig. 6. When n = 215, only DRD is able to recover an approximate estimate of the true w with a small level of noise, and it also gives the lowest predictive error. Group lasso performs slightly better than ARD, lasso and elnet, presenting only a weakly distinct block-wise estimate. Lasso and elnet show similar performance and impose stronger sparsity than ARD, indicating that ARD fails to impose strong sparsity in this synthetic case, due to the complete independence among its dimensions, when data are scarce and noisy. When n = 800, DRD still gives the best prediction. Group lasso falls behind, since a block-wise penalty can capture group information but misses the subtleties when finer details matter. ARD moves up to second place because, when the data likelihood is strong enough, the posterior of w is not greatly influenced by the noise but follows the likelihood and the prior. Additionally, since ARD's prior is more flexible and independent than those of lasso and elnet, its posterior approximates the underlying w more finely.

6.3 fMRI Data

We analyzed functional MRI data from the Human Connectome Project¹ collected from 215 healthy adult participants on a relational reasoning task. We used contrast images for the comparison of relational reasoning and matching tasks.
Data were processed using the HCP minimal preprocessing pipelines [32], down-sampled to 63×76×63 voxels using the flirt applyXfm tool [33], then trimmed further to 40×76×40 by deleting zero-signal regions outside the brain. We analyzed 215 samples, each an average over Z-slices 37 to 39 of the 3D structure, based on recommendations by domain experts. As the dependent variable in the regression, we selected the number of correct responses on the Penn Matrix Test, a measure of fluid intelligence that should be related to relational reasoning performance. In each run, we randomly split the fMRI data into five sets for five-fold cross-validation and averaged the test errors across the five folds. Hyperparameters were chosen by five-fold cross-validation within the training set, and the optimal hyperparameter set was used to compute test performance.

¹http://www.humanconnectomeproject.org/

Figure 6: Surface plot of the estimated w from each method on 2-dimensional simulated data with n = 215.

Figure 7: Positive (red) and negative (blue) supports of the estimated weights from each method on real fMRI data, with the corresponding test errors.

Fig. 7 shows the regions of positive (red) and negative (blue) support for the regression weights obtained with the different sparse regression methods. The rightmost panel quantifies performance using mean squared error on held-out test data. Both the predictive performance and the estimated patterns are similar to the n = 215 results in the 2D synthetic experiment. ARD returns a rather noisy estimate due to its complete independence assumptions and the weak likelihood. The elastic net estimate improves slightly over lasso and is significantly better than ARD, indicating that lasso-type regularizations impose stronger sparsity than ARD in this case. Group lasso is slightly better because of its block-wise regularization, but more noisy blocks appear, which hurts its predictive ability.
DRD reveals strong sparsity as well as clustered local regions. It also achieves the smallest test error, indicating the best predictive ability. Given that local group information in fMRI data most likely gathers around a few pixels, inducing smoothness is less valuable here. This explains why sDRD does not distinctly outperform DRD, and for this reason we omit the smoothness-imposing comparative experiment for the fMRI data. We also tested the StructOMP [24] method on both the 2D simulated data and the fMRI data, but it did not show satisfactory estimation or predictive ability on 2D data with our data's intrinsic properties; we therefore chose not to include it in the comparison.

7 Conclusion

We proposed DRD, a hierarchical model for smooth and region-sparse weight tensors, which uses a Gaussian process to model spatial dependencies in prior variances, extending the relevance determination framework. To impose smoothness, we also employed a structured model of the prior variances of Fourier coefficients, which allows high frequencies to be pruned. Because the marginal likelihood integral is intractable, we developed an efficient approximate inference method based on the Laplace approximation, and showed substantial improvements over comparable methods on both simulated and real fMRI datasets. Our method yielded more interpretable weights and discovered multiple sparse regions that other methods failed to detect. We have shown that DRD can gracefully incorporate structured dependencies to recover smooth, region-sparse weights without any specification of groups or regions, and we believe it will be useful for other kinds of high-dimensional datasets from biology and neuroscience.

Acknowledgments

This work was supported by the McKnight Foundation (JP), NSF CAREER Award IIS-1150186 (JP), NIMH grant MH099611 (JP) and the Gatsby Charitable Foundation (MP).

References

[1] R. Tibshirani. Regression shrinkage and selection via the lasso.
Journal of the Royal Statistical Society, Series B, pages 267–288, 1996.
[2] H. Lee, A. Battle, R. Raina, and A. Ng. Efficient sparse coding algorithms. In NIPS, pages 801–808, 2006.
[3] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
[4] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. The Annals of Statistics, 32(2):407–499, 2004.
[5] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[6] G. Yuan, K. Chang, C. Hsieh, and C. Lin. A comparison of optimization methods and software for large-scale l1-regularized linear classification. JMLR, 11:3183–3234, 2010.
[7] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Convex optimization with sparsity-inducing norms. Optimization for Machine Learning, pages 19–53, 2011.
[8] R. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995.
[9] M. Tipping. Sparse Bayesian learning and the relevance vector machine. JMLR, 1:211–244, 2001.
[10] D. MacKay. Bayesian non-linear modeling for the prediction competition. In Maximum Entropy and Bayesian Methods, pages 221–234. Springer, 1996.
[11] T. Mitchell and J. Beauchamp. Bayesian variable selection in linear regression. JASA, 83(404):1023–1032, 1988.
[12] E. George and R. McCulloch. Variable selection via Gibbs sampling. JASA, 88(423):881–889, 1993.
[13] C. Carvalho, N. Polson, and J. Scott. Handling sparsity via the horseshoe. In International Conference on Artificial Intelligence and Statistics, pages 73–80, 2009.
[14] C. Hans. Bayesian lasso regression. Biometrika, 96(4):835–845, 2009.
[15] A. Bhattacharya, D. Pati, N. Pillai, and D. Dunson. Bayesian shrinkage. arXiv preprint, December 2012.
[16] A. Schmolck. Smooth Relevance Vector Machines. PhD thesis, University of Exeter, 2008.
[17] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
[18] M.
Van Gerven, B. Cseke, F. De Lange, and T. Heskes. Efficient Bayesian multivariate fMRI analysis using a sparsifying spatio-temporal prior. NeuroImage, 50(1):150–161, 2010.
[19] J. Friedman, T. Hastie, and R. Tibshirani. A note on the group lasso and a sparse group lasso. arXiv preprint arXiv:1001.0736, 2010.
[20] L. Jacob, G. Obozinski, and J. Vert. Group lasso with overlap and graph lasso. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 433–440. ACM, 2009.
[21] H. Liu, L. Wasserman, and J. Lafferty. Nonparametric regression and classification with joint sparsity constraints. In NIPS, pages 969–976, 2009.
[22] R. Jenatton, J. Audibert, and F. Bach. Structured variable selection with sparsity-inducing norms. JMLR, 12:2777–2824, 2011.
[23] S. Kim and E. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):e1000587, 2009.
[24] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. JMLR, 12:3371–3412, 2011.
[25] B. Engelhardt and R. Adams. Bayesian structured sparsity from Gaussian fields. arXiv preprint arXiv:1407.2235, 2014.
[26] M. Park and J. Pillow. Receptive field inference with localized priors. PLoS Computational Biology, 7(10):e1002219, 2011.
[27] M. Park, O. Koyejo, J. Ghosh, R. Poldrack, and J. Pillow. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, pages 489–497, 2013.
[28] M. Tipping. Sparse Bayesian learning and the relevance vector machine. JMLR, 1:211–244, 2001.
[29] M. Tipping and A. Faul. Analysis of sparse Bayesian learning. NIPS, 14:383–389, 2002.
[30] D. Wipf and S. Nagarajan. A new view of automatic relevance determination. In NIPS, 2007.
[31] M. Sahani and J. Linden. Evidence optimization techniques for estimating stimulus-response functions. NIPS, pages 317–324, 2003.
[32] M. Glasser, S. Sotiropoulos, A. Wilson, T. Coalson, B. Fischl, J. Andersson, J. Xu, S. Jbabdi, M.
Webster, J. Polimeni, et al. The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage, 80:105–124, 2013.
[33] N. M. Alpert, D. Berdichevsky, Z. Levin, E. D. Morris, and A. J. Fischman. Improved methods for image registration. NeuroImage, 3(1):10–18, 1996.
Deep Symmetry Networks

Robert Gens    Pedro Domingos
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195-2350, U.S.A.
{rcg,pedrod}@cs.washington.edu

Abstract

The chief difficulty in object recognition is that objects' classes are obscured by a large number of extraneous sources of variability, such as pose and part deformation. These sources of variation can be represented by symmetry groups, sets of composable transformations that preserve object identity. Convolutional neural networks (convnets) achieve a degree of translational invariance by computing feature maps over the translation group, but cannot handle other groups. As a result, these groups' effects have to be approximated by small translations, which often requires augmenting datasets and leads to high sample complexity. In this paper, we introduce deep symmetry networks (symnets), a generalization of convnets that forms feature maps over arbitrary symmetry groups. Symnets use kernel-based interpolation to tractably tie parameters and pool over symmetry spaces of any dimension. Like convnets, they are trained with backpropagation. The composition of feature transformations through the layers of a symnet provides a new approach to deep learning. Experiments on NORB and MNIST-rot show that symnets over the affine group greatly reduce sample complexity relative to convnets by better capturing the symmetries in the data.

1 Introduction

Object recognition is a central problem in vision. What makes it challenging are all the nuisance factors such as pose, lighting, part deformation, and occlusion. It has been shown that if we could remove these factors, recognition would be much easier [2, 17]. Convolutional neural networks (convnets), the current state-of-the-art method for object recognition, capture only one type of invariance (translation); the rest have to be approximated via translation and standard features.
In practice, the best networks require enormous datasets, which are further expanded by affine transformations [7, 13], yet remain sensitive to imperceptible image perturbations [23]. We propose deep symmetry networks, a generalization of convnets based on symmetry group theory [20] that makes it possible to capture a broad variety of invariances, and correspondingly improves generalization. A symmetry group is a set of transformations that preserve the identity of an object and obey the group axioms. Most of the visual nuisance factors are symmetry groups themselves, and by incorporating them into our model we are able to reduce the sample complexity of learning from data transformed by these groups. Deep symmetry networks (symnets) form feature maps over any symmetry group, rather than just the translation group. A feature map in a deep symmetry network is defined analogously to convnets as a filter that is applied at all points in the symmetry space. Each layer in our general architecture is constructed by applying every symmetry in the group to the input, computing features on the transformed input, and pooling over neighborhoods. The entire architecture is then trained by backpropagation. In this paper, we instantiate the architecture with the affine group, resulting in deep affine networks. In addition to translation, the affine group includes rotation, scaling and shear. The affine group of the two-dimensional plane is six-dimensional (i.e., an affine transformation can be represented by a point in 6D affine space). The key challenge in extending convnets to affine spaces is that it is intractable to explicitly represent and compute with a high-dimensional feature map. We address this by approximating the map using kernel functions, which not only interpolate but also control pooling in the feature maps. Compared to convnets, this architecture substantially reduces sample complexity on image datasets involving 2D and 3D transformations.
We share with other researchers the hypothesis that explanatory factors cannot be disentangled unless they are represented in an appropriate symmetry space [4, 11]. Our adaptation of a representation to work in symmetry space is similar in some respects to the use of tangent distance in nearest-neighbor classifiers [22]. Symnets, however, are deep networks that compute features in symmetry space at every level. Whereas the tangent distance approximation is only locally accurate, symnet feature maps can represent large displacements in symmetry space. There are other deep networks that reinterpret the invariance of convolutional networks. Scattering networks [6] are cascades of wavelet decompositions designed to be invariant to particular Lie groups, where translation and rotation invariance have been demonstrated so far. The M-theory of Anselmi et al. [2] constructs features invariant to a symmetry group by using statistics of dot products with group orbits. We differ from these networks in that we model multiple symmetries jointly in each layer, we do not completely pool out a symmetry, and we discriminatively train our entire architecture. The first two differences are important because objects and their subparts may have relative flexibility but not total invariance along certain dimensions of symmetry space. For example, a leg of a person can be seen in some but not all combinations of rotation and scale relative to the torso. Without discriminative training, scattering networks and M-theory are limited to representing features whose invariances may be inappropriate for a target concept because they are fixed ahead of time, either by the wavelet hierarchy of the former or unsupervised training of the latter. The discriminative training of symnets yields features with task-oriented invariance to their sub-features. 
In the context of digit recognition this might mean learning the concept of a ‘0’ with more rotation invariance than a ‘6’, which would incur loss if it had positive weights in the region of symmetry space where a ‘9’ would also fire. Much of the vision literature is devoted to features that reduce or remove the effects of certain symmetry groups, e.g., [18, 17]. Each feature by itself is not discriminative for object recognition, so structure is modeled separately, usually with a representation that does not generalize to novel viewpoints (e.g., bags-of-features) or with a rigid alignment algorithm that cannot represent uncertainty over geometry (e.g. [9, 19]). Compared to symnets, these features are not learned, have invariance limited to a small set of symmetries, and destroy information that could be used to model object sub-structure. Like deformable part models [10], symnets can model and penalize relative transformations that compose up the hierarchy, but can also capture additional symmetries. Symmetry group theory has made a limited number of appearances in machine learning [8]. A few applications are discussed by Kondor [12], and they are also used in determinantal point processes [14]. Methods for learning transformations from examples [24, 11] could potentially benefit from being embedded in a deep symmetry network. Symmetries in graphical models [21] lead to effective lifted probabilistic inference algorithms. Deep symmetry networks may be applicable to these and other areas. In this paper, we first review symmetry group theory and its relation to sample complexity. We then describe symnets and their affine instance, and develop new methods to scale to high-dimensional symmetry spaces. Experiments on NORB and MNIST-rot show that affine symnets can reduce by a large factor the amount of data required to achieve a given accuracy level. 
2 Symmetry Group Theory

A symmetry of an object is a transformation that leaves certain properties of that object intact [20]. A group is a set S with an operator ∗ on it satisfying the four properties of closure, associativity, an identity element, and an inverse element. A symmetry group is a type of group whose elements are functions and whose operator is function composition. A simple geometric example is the symmetry group of a square, which consists of four reflections and rotations by {0, 1, 2, 3} multiples of 90 degrees. These transformations can be composed together to yield one of the original eight symmetries. The identity element is the 0-degree rotation, each symmetry has a corresponding inverse element, and composition of these symmetries is associative.

Lie groups are continuous symmetry groups whose elements form a smooth differentiable manifold. For example, the symmetries of a circle include reflections and rotations about the center. The affine group is a set of transformations that preserves collinearity and parallel lines. The Euclidean group is a subgroup of the affine group that preserves distances, and includes the set of rigid body motions (translations and rotations) in three-dimensional space. The elements of a symmetry group can be represented as matrices. In this form, function composition can be performed via matrix multiplication. The transformation P followed by Q (also denoted Q ◦ P) is computed as R = QP. In this paper we treat the transformation matrix P as a point in D-dimensional space, where D depends on the particular representation of the symmetry group (e.g., D = 6 for affine transformations in the plane). A generating set of a group is a subset of the group such that any group element can be expressed through combinations of generating set elements and their inverses. For example, a generating set of the translation symmetry group is {x → x + ϵ, y → y + ϵ} for infinitesimal ϵ.
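The square example above can be made concrete in a few lines (an illustration of ours, not from the paper): representing the eight symmetries as matrices, composition is matrix multiplication, and the group axioms can be checked directly.

```python
import numpy as np

# The eight symmetries of the square, represented as 2x2 integer matrices
# so that composition of symmetries is matrix multiplication.
def rot(k):
    """Rotation by k * 90 degrees."""
    c, s = [(1, 0), (0, 1), (-1, 0), (0, -1)][k % 4]
    return np.array([[c, -s], [s, c]])

flip = np.array([[1, 0], [0, -1]])  # reflection about the x-axis
group = [rot(k) for k in range(4)] + [rot(k) @ flip for k in range(4)]

def contains(G, M):
    return any(np.array_equal(M, A) for A in G)

# Closure: composing any two symmetries yields another of the eight.
assert all(contains(group, A @ B) for A in group for B in group)
# Identity element and inverses are all present in the group.
assert contains(group, np.eye(2, dtype=int))
assert all(contains(group, np.round(np.linalg.inv(A)).astype(int))
           for A in group)
```

The same matrix representation extends to continuous groups such as the affine group used later in the paper.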
We define the k-neighborhood of an element f in group S under generating set G as the subset of S that can be expressed as f composed with elements of G or their inverses at most k times. With the previous example, the k-neighborhood of a translation vector f takes the shape of a diamond centered at f in the xy-plane. The orbit of an object x is the set of objects obtained by applying each element of a symmetry group to x. Formally, a symmetry group S acting on a set of objects X defines an orbit for each x ∈ X: $O_x = \{s \ast x : s \in S\}$. For example, the orbit of an image I(u) whose points are transformed by the rotation symmetry group, $s \ast I(u) = I(s^{-1} \ast u)$, is the set of images resulting from all rotations of that image. If two orbits share an element, they are the same orbit. In this way, a symmetry group S partitions the set of objects into unique orbits, $X = \bigcup_a O_a$. If a data distribution D(x, y) has the property that all the elements of an orbit share the same label y, then S imposes a constraint on the hypothesis class of a learner, effectively lowering its VC-dimension and sample complexity [1].

3 Deep Symmetry Networks

Deep symmetry networks represent rich compositional structure that incorporates invariance to high-dimensional symmetries. The ideas behind these networks are applicable to any symmetry group, be it rigid-body transformations in 3D or permutation groups over strings. The architecture of a symnet consists of several layers of feature maps. Like convnets, these feature maps benefit from weight tying and pooling, and the whole network is trained with backpropagation. The maps and the filters they apply are in the dimension D of the chosen symmetry group S. A deep symmetry network has L layers l ∈ {1, ..., L}, each with I_l features and corresponding feature maps. A feature is the dot-product of a set of weights with a corresponding set of values from a local region of a lower layer, followed by a nonlinearity.
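The k-neighborhood defined in Section 2 can be enumerated by repeatedly composing generators and their inverses. The sketch below (function name is ours) does this for the translation generating set, where composition reduces to vector addition; as noted above, the result is a diamond centered at the starting element.

```python
import numpy as np
from itertools import product

def k_neighborhood(f, generators, k):
    """All elements reachable from f by composing at most k elements of
    the generating set or their inverses (translations here, so the
    group operation is vector addition)."""
    reached = {tuple(f)}
    frontier = {tuple(f)}
    for _ in range(k):
        frontier = {tuple(np.add(p, s * g))
                    for p in frontier
                    for g, s in product(generators, (1, -1))}
        reached |= frontier
    return reached

eps = 1.0
gens = [np.array([eps, 0.0]), np.array([0.0, eps])]
nbhd = k_neighborhood(np.zeros(2), gens, k=2)
# The 2-neighborhood is the diamond |x| + |y| <= 2*eps: 13 lattice points.
assert len(nbhd) == 13
```

For a non-abelian group such as the affine group, `np.add` would be replaced by matrix multiplication, but the enumeration is identical.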
A feature map represents the application of a filter at all points in symmetry space. A feature at point P is computed from the feature maps of the lower layer at points in the k-neighborhood of P. As P moves in the symmetry space of a feature map, so does its neighborhood of inputs in the lower layer. Feature map i of layer l is denoted $M[l, i] : \mathbb{R}^D \to \mathbb{R}$, a scalar function of the D-dimensional symmetry space. Given a generating set G ⊂ S, the points in the k-neighborhood of the identity element are stored in an array T[ ]. Each filter i of layer l defines a weight vector w[l, i, j] for each point T[j] in the k-neighborhood. The vector w[l, i, j] has size I_{l−1}, the number of features in the underlying layer. For example, a feature in an affine symnet that detects a person would have positive weight for an arm sub-feature in the region of the k-neighborhood that would transform the arm relative to the person (e.g., smaller, rotated, and translated relative to the torso). The value of feature map i in layer l at point P is the dot-product of weights and underlying feature values in the neighborhood of P, followed by a nonlinearity:

$$M[l, i](P) = \sigma(v(P, l, i)) \tag{1}$$
$$v(P, l, i) = \sum_{j=1}^{|T|} w[l, i, j] \cdot x(P \circ T[j]) \tag{2}$$
$$x(P') = \big\langle S(M[l-1, 0])(P'), \; \ldots, \; S(M[l-1, I_{l-1}])(P') \big\rangle \tag{3}$$

Figure 1: The evaluation of point P in map M[l, i]. The elements of the k-neighborhood of P are computed as P ◦ T[j]. Each point in the neighborhood is evaluated in the pooled feature maps of the lower layer l − 1. The pooled maps are computed with kernels on the underlying feature maps. The dashed line intersects the points in the pooled map whose values form x(P ◦ T[j]) in Equation 3; it also intersects the contours of kernels used to compute those pooled values. The value of the feature is the sum of the dot-products w[l, i, j] · x(P ◦ T[j]) over all j, followed by a nonlinearity.
where σ is the nonlinearity (e.g., tanh(x) or max(x, 0)), v(P, l, i) is the dot product, P ◦ T[j] represents element j in the k-neighborhood of P, and x(P′) is the vector of values from the underlying pooled maps at point P′. This definition is a generalization of feature maps in convnets¹. Similarly, the same filter weights w[l, i, j] are tied across all points P in feature map M[l, i]. The evaluation of a point in a feature map is visualized in Figure 1. Feature maps M[l, i] are pooled via kernel convolution to become S(M[l, i]). In the case of sum-pooling,

$$S(M[l, i])(P) = \int M[l, i](P - Q)\, K(Q) \, dQ;$$

for max-pooling,

$$S(M[l, i])(P) = \max_Q M[l, i](P - Q)\, K(Q).$$

The kernel K(Q) is also a scalar function of the D-dimensional symmetry space. In the previous example of a person feature, the arm feature map could be pooled over a wide range of rotations but a narrow range of translations and scales, so that the person feature allows for moveable but not unrealistic arms. Each filter can specify the kernels it uses to pool lower layers, but for the sake of brevity and analogy to convnets we assume that the feature maps of a layer are pooled by the same kernel. Note that convnets discretize these operations, subsample the pooled map, and use a uniform kernel $K(Q) = \mathbb{1}\{\|Q\|_\infty < r\}$. As with convnets, the values of points in a symnet feature map are used by higher symnet layers, layers of fully connected hidden units, and ultimately softmax classification. Hidden units take the familiar form o = σ(Wx + b), with input x, output o, weight matrix W, and bias b. The log-loss L of the softmax on an instance with true label Y = i is

$$-w_i \cdot x - b_i + \log \Big( \sum_c \exp(w_c \cdot x + b_c) \Big),$$

where w_c and b_c are the weight vector and bias for class c, and the summation is over the classes. The input image is treated as a feature map (or maps, if color or stereo) with values in the translation symmetry space.
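Equations (1)-(3) can be sketched directly. This is a simplified illustration of ours (all helper names are our own): each lower-layer map is represented only by a set of control points with values, and the pooled value S(M)(P) is taken as a kernel-weighted maximum over those points, anticipating the kernel interpolation described in Section 5.

```python
import numpy as np

def gauss_kernel(q, Sigma_inv):
    # K(Q) = exp(-q^T Sigma^{-1} q), the pooling/interpolation kernel.
    return float(np.exp(-q @ Sigma_inv @ q))

def pooled_value(control_pts, values, P, Sigma_inv):
    """Max-pooled map S(M)(P), interpolated from a map's control points."""
    return max(v * gauss_kernel(P - Q, Sigma_inv)
               for Q, v in zip(control_pts, values))

def feature_value(P, T, weights, lower_maps, Sigma_inv, compose,
                  sigma=np.tanh):
    """Equations (1)-(3): evaluate feature map M[l, i] at point P.
    `lower_maps` is a list of (control_points, values) pairs, one per
    feature map in layer l-1; `compose` is the group operation."""
    v = 0.0
    for j, Tj in enumerate(T):
        Pj = compose(P, Tj)                                   # P o T[j]
        x = np.array([pooled_value(cp, vals, Pj, Sigma_inv)   # eq. (3)
                      for cp, vals in lower_maps])
        v += weights[j] @ x                                   # eq. (2)
    return sigma(v)                                           # eq. (1)

# Toy usage in the translation group: one lower map with a single
# control point of value 1 at the origin.
compose = lambda P, T: P + T
lower_maps = [(np.zeros((1, 2)), np.array([1.0]))]
val = feature_value(np.zeros(2), [np.zeros(2)], [np.array([1.0])],
                    lower_maps, np.eye(2), compose)
```

For the affine instantiation, `compose` would be matrix multiplication of 3×3 homogeneous transforms and points would be 6-dimensional.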
Deep symmetry networks are trained with backpropagation and are amenable to the same best practices as convnets. Though feature maps are defined as continuous, in practice the maps and their gradients are evaluated on a finite set of points P ∈ M[l, i]. We provide the partial derivative of the loss L with respect to a weight vector:

$$\frac{\partial L}{\partial w[l, i, j]} = \sum_{P \in M[l, i]} \frac{\partial L}{\partial M[l, i](P)} \frac{\partial M[l, i](P)}{\partial w[l, i, j]} \tag{4}$$
$$\frac{\partial M[l, i](P)}{\partial w[l, i, j]} = \sigma'(v(P, l, i)) \, x(P \circ T[j]) \tag{5}$$

¹The neighborhood that defines a square filter in convnets is the reference point translated by up to k times in x and k times in y.

Figure 2: The feature hierarchy of a three-layer deep affine net is visualized with and without pooling. From top to bottom, the layers (A, B, C) contain one, five, and four feature maps, each corresponding to a labeled part of the cartoon figure. Each horizontal line represents a six-dimensional affine feature map, and bold circles denote six-dimensional points in the map. The dashed lines represent the affine transformation from a feature to the location of one of its filter points. For clarity, only a subset of filter points are shown. Left: Without pooling, the hierarchy represents a rigid affine transformation among all maps. Another point on feature map A is visualized in grey. Right: Feature maps B1 and C1 are pooled with a kernel that gives those features flexibility in rotation.

The partial derivative of the loss L with respect to the value of a point in a lower layer is

$$\frac{\partial L}{\partial M[l-1, i](P)} = \sum_{i'=1}^{I_l} \sum_{P' \in M[l, i']} \frac{\partial L}{\partial M[l, i'](P')} \frac{\partial M[l, i'](P')}{\partial M[l-1, i](P)} \tag{6}$$
$$\frac{\partial M[l, i'](P')}{\partial M[l-1, i](P)} = \sigma'(v(P', l, i')) \sum_{j=1}^{|T|} w[l, i', j][i] \, \frac{\partial S(M[l-1, i])(P' \circ T[j])}{\partial M[l-1, i](P)} \tag{7}$$

where the gradient of the pooled feature map $\frac{\partial S(M[l, i])(P)}{\partial M[l, i](Q)}$ equals K(P − Q) for sum-pooling.
None of this treatment depends explicitly on the dimensionality of the space except for the kernel and transformation composition, which have polynomial dependence on D. In the next section we apply this architecture to the affine group in 2D, but it could also be applied to the affine group in 3D or any other symmetry group.

4 Deep Affine Networks

Figure 3: The six transformations in the generating set of the affine group applied to a square (exaggerated ϵ = 0.2; the identity is the black square).

We instantiate a deep symmetry network with the affine symmetry group in the plane. The affine symmetry group contains transformations capable of rotating, scaling, shearing, and translating two-dimensional points. The transformation is described by six coordinates:

$$\begin{pmatrix} x' \\ y' \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix}$$

This means that each of the feature maps M[l, i] and elements T[j] of the k-neighborhood is represented in six dimensions. The identity transformation is a = d = 1, b = c = e = f = 0. The generating set of the affine symmetry group contains six elements, each of which is obtained by adding ϵ to one of the six coordinates of the identity transform. This generating set is visualized in Figure 3. A deep affine network can represent a rich part hierarchy where each weight of a feature modulates the response to a subpart at a point in the affine neighborhood. The geometry of a deep affine network is best understood by tracing a point on a feature map through its filter point transforms into lower layers. Figure 2 visualizes this structure without and with pooling on the left and right sides of the diagram, respectively. Without pooling, the feature hierarchy defines a rigid affine relationship between the point of evaluation on a map and the locations of its sub-features. In contrast, a pooled value on a sub-feature map is computed from a neighborhood of points in affine space defined by the kernel; this can represent model flexibility along certain dimensions of affine space.
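The six-element generating set of Figure 3 can be constructed directly. The sketch below (an illustration of ours) builds each generator as a 3×3 homogeneous matrix, so that composition is matrix multiplication as in Section 2, and applies them to the corners of a square.

```python
import numpy as np

def affine_mat(a, b, c, d, e, f):
    """Homogeneous 3x3 form of the planar affine map (a, b, c, d, e, f),
    so that the composition Q o P is the matrix product Q @ P."""
    return np.array([[a, b, e],
                     [c, d, f],
                     [0, 0, 1.0]])

identity = (1, 0, 0, 1, 0, 0)  # a = d = 1, b = c = e = f = 0
eps = 0.2

# Generating set: add eps to one of the six coordinates of the identity.
generators = []
for i in range(6):
    coords = list(identity)
    coords[i] += eps
    generators.append(affine_mat(*coords))

# Corners of the square of Figure 3, in homogeneous coordinates.
square = np.array([[-1, 1, 1, -1],
                   [-1, -1, 1, 1],
                   [1, 1, 1, 1.0]])
warped = [G @ square for G in generators]  # the six warps of Figure 3
```

The fifth and sixth generators (the e and f coordinates) are pure translations, while perturbing a, b, c, d yields small scalings and shears.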
5 Scaling to High-Dimensional Symmetry Spaces

It would be intractable to explicitly represent the high-dimensional feature maps of symnets. Even a subsampled grid becomes unwieldy at modest dimensions (e.g., a grid in affine space with ten steps per axis has 10^6 points). Instead, each feature map is evaluated at N control points. The control points are local maxima of the feature in symmetry space, found by Gauss-Newton optimization, each initialized from a prior. This can be seen as a form of non-maximum suppression. Since the goal is recognition, there is no need to approximate the many points in symmetry space where the feature is not present. The map is then interpolated with kernel functions; the shape of the function also controls pooling.

5.1 Transformation Optimization

Convnets max-pool a neighborhood of translation space by exhaustive evaluation of feature locations. There are a number of algorithms that solve for a maximal feature location in symmetry space, but they are not efficient when the feature weights are frequently adjusted [9, 19]. We adopt an iterative approach that dovetails with the definition of our features. If a symnet is based on a Lie group, gradient-based optimization can be used to find a point P∗ that locally maximizes the feature value (Equation 1), initialized at point P. In our experiments with deep affine nets, we follow the forward compositional (FC) warp [3] to align filters with the image. An extension of Lucas-Kanade, FC solves for an image alignment. We adapt this procedure to our filters and weight vectors:

$$\min_{\Delta P} \sum_{j=1}^{|T|} \left\| w[l, i, j] - x(P \circ \Delta P \circ T[j]) \right\|^2.$$

We run an FC alignment for each of the N control points in feature map M[l, i], each initialized from a prior. Assuming $\sum_{j=1}^{|T|} \| x(P \circ \Delta P \circ T[j]) \|^2$ is constant, this procedure locally maximizes the dot product between the filter and the map in Equation 2.
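The core of each FC iteration is a Gauss-Newton step on the residuals above. The sketch below (names are ours) shows such a step in its generic form; it is a simplification, using a finite-difference Jacobian where the actual FC warp uses analytic gradients of the warped filter residuals.

```python
import numpy as np

def gauss_newton_step(params, residual_fn, h=1e-5, damping=1e-8):
    """One Gauss-Newton update for min_p ||r(p)||^2, with a
    finite-difference Jacobian (the FC warp uses analytic gradients)."""
    r = residual_fn(params)
    J = np.zeros((r.size, params.size))
    for i in range(params.size):
        dp = np.zeros_like(params)
        dp[i] = h
        J[:, i] = (residual_fn(params + dp) - r) / h
    # Normal equations (J^T J) delta = -J^T r, damped for stability.
    delta = np.linalg.solve(J.T @ J + damping * np.eye(params.size),
                            -J.T @ r)
    return params + delta

# For a linear residual r(p) = A p - b, a single step reaches the optimum.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
b = np.array([4.0, 9.0])
p = gauss_newton_step(np.zeros(2), lambda p: A @ p - b)
```

In the symnet setting `params` would be the six coordinates of ΔP and `residual_fn` would return the stacked residuals w[l, i, j] − x(P ∘ ΔP ∘ T[j]); the update is then composed, P ← P ∘ ΔP, and repeated.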
Each iteration of FC takes a Gauss-Newton step to solve for a transformation \Delta P of the neighborhood of the feature in the underlying map, which is then composed with the control point: P \leftarrow P \circ \Delta P.
5.2 Kernels
Figure 4: Contours of three 6D Gaussian kernels visualized on a surface in affine space. Points are visualized by an oriented square transformed by the affine transformation at that point. Each kernel has a different covariance matrix Σ.
Given a set of N local optima O^* = {(P_1, v_1), ..., (P_N, v_N)} in the D-dimensional feature map M[l, i], we use kernel-based interpolation to compute a pooled map S(M[l, i]). The kernel performs three functions: penalizing relative locations of sub-features in symmetry space (cf. [10]), interpolating the map, and pooling a region of the map. These roles could be split into separate filter-specific kernels that are then convolved appropriately. The choice of these kernels will vary with the application. In our experiments, we lump these functions into a single kernel per layer. We use a Gaussian kernel K(Q) = e^{-q^T \Sigma^{-1} q}, where q is the D-dimensional vector representation of Q and the D×D covariance matrix Σ controls the shape and extent of the kernel. Several instances of this kernel are shown in Figure 4. Max-pooling produced the best results in our tests.
6 Experiments
In our experiments we test the hypothesis that a deep network with access to a larger symmetry group will generalize better from fewer examples, provided those symmetries are present in the data. In particular, theory suggests that a symnet will have better sample complexity than another classifier on a dataset if it is based on a symmetry group that generates variations present in that dataset [1]. We compare deep affine symnets to convnets on the MNIST-rot and NORB image classification datasets, which finely sample their respective symmetry spaces, so that the learning curves measure the amount of augmentation that would be required to achieve similar performance.
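The kernel-based pooled map can be sketched as follows (our illustration, not the authors' code): given the N control points and their values, a pooled value at a query point Q combines the Gaussian kernel weights with the stored values, either by max-pooling (as in the paper) or by soft interpolation.

```python
import numpy as np

# Sketch of the pooled map S(M): control points (P_n, v_n) live in
# D-dimensional symmetry space; the value at a query point is a
# kernel-weighted combination of the control-point values. The soft
# interpolation branch is an illustrative alternative to max-pooling.

def gaussian_kernel(q, sigma_inv):
    """K(q) = exp(-q^T Sigma^{-1} q) for a displacement vector q."""
    return np.exp(-q @ sigma_inv @ q)

def pooled_value(query, points, values, sigma_inv, mode="max"):
    """points: (N, D) control points (local optima); values: (N,) feature values."""
    weights = np.array([gaussian_kernel(query - p, sigma_inv) for p in points])
    if mode == "max":                       # max-pooling, best in the paper's tests
        return np.max(weights * values)
    return weights @ values / max(weights.sum(), 1e-12)  # soft interpolation

# Toy example in a 2D slice of symmetry space.
pts = np.array([[0.0, 0.0], [1.0, 0.0]])
vals = np.array([1.0, 0.5])
sig_inv = np.eye(2)                          # kernel shape Sigma^{-1}
v = pooled_value(np.zeros(2), pts, vals, sig_inv)
```

The covariance Σ plays the role described in the text: a broad Σ pools over a large region of symmetry space, a narrow one interpolates tightly around the control points.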
On both datasets affine symnets achieve a substantial reduction in sample complexity. This is particularly remarkable on NORB because its images are generated by a symmetry space in 3D. Symnet running time was within an order of magnitude of convnets, and could be greatly optimized.
6.1 MNIST-rot
Figure 5: Impact of training set size on MNIST-rot test performance for architectures that use either one convolutional layer or one affine symnet layer.
MNIST-rot [15] consists of 28×28 pixel greyscale images: 10^4 for training, 2×10^3 for validation, and 5×10^4 for testing. The images are sampled from the MNIST digit recognition dataset and each is rotated by a random angle drawn uniformly from [0, 2π]. With transformations that apply to the whole image, MNIST-rot is a good testbed for comparing the performance of a single affine layer to a single convnet layer. We modified the Theano [5] implementation of convolutional networks so that the network consisted of a single layer of convolution and max-pooling followed by a hidden layer of 500 units and then softmax classification. The affine net layer was directly substituted for the convolutional layer. The control points of the affine net were initialized at uniformly random positions with rotations oriented around the image center, and each control point was locally optimized with four iterations of Gauss-Newton updates. The filter points of the affine net were arranged in a square grid. Both the affine net and the convnet compute a dot product and use the sigmoid nonlinearity. Both networks were trained with 50 epochs of mini-batch gradient descent with momentum, and test results are reported on the network with lowest error on the validation set (footnote 2). The convnet did best with small 5×5 filters and the symnet with large 20×20 filters. This is not surprising, because the convnet must approximate the large rotations of the dataset with translations of small patches.
The affine net can pool directly in this space of rotations with large filters. Learning curves for the two networks are presented in Figure 5. We observe that the affine symnet roughly halves the error of the convnet. With small sample sizes, the symnet achieves an accuracy for which the convnet requires about eight times as many samples.
6.2 NORB
MNIST-rot is a synthetic dataset with symmetries that are not necessarily representative of real images. The NORB dataset [16] contains 2×108×108 pixel stereoscopic images of 50 toys in five categories: quadrupeds, human figures, airplanes, trucks, and cars. Five of the ten instances of each category are reserved for the test set. Each toy is photographed on a turntable from an exhaustive set of angles and lighting conditions. Each image is then perturbed by a random translation shift, planar rotation, luminance change, contrast change, scaling, distractor object, and natural image background. A sixth blank category containing just the distractor and background is also used. As in other papers, we downsample the images to 2×48×48. To compensate for the effect of distractors in smaller training sets, we also train and test on a version of the dataset that is centrally cropped to 2×24×24. We report results for whichever version had lower validation error. In our experiments we train on a variable subset of the first training fold, using the first 2×10^3 images of the second fold for validation. Our results use both of the testing folds. We compare architectures that use two convolutional layers or two affine ones, which performed better than single-layer ones. As with the MNIST-rot experiments, the symnet and convnet layers are followed by a layer of 500 hidden units and softmax classification. The symnet control points in the first layer were arranged in three concentric rings in translation space, with 8 points spaced across rotation (200 total points).
Control points in the second layer were fixed at the center of translation space, arranged over 8 rotations and up to 2 vertical scalings (16 total points) to approximate the effects of elevation change. Control points were not iteratively optimized due to the small size of object parts in the downsampled images. The filter points of the first layer of the affine net were arranged in a square grid. The second-layer filter points were arranged in a circle in translation space at a 3 or 4 pixel radius, with 8 filter points evenly spaced across rotation at each translation. We report the test results of the networks with lowest validation error on a range of hyperparameters (footnote 3).
Footnote 2: Grid search over learning rate {.1, .2}, mini-batch size {10, 50, 100}, filter size {5, 10, 15, 20, 25}, number of filters {20, 50, 80}, pooling size (convnet) {2, 3, 4}, and number of control points (symnet) {5, 10, 20}.
Figure 6: Impact of training set size on NORB test performance for architectures with two convolutional or affine symnet layers followed by a fully connected layer and then softmax classification.
The learning curves for convnets and affine symnets are shown in Figure 6. Even though the primary variability in NORB is due to rigid 3D transformations, we find that our affine networks still have an advantage over convnets. A 3D rotation can be locally approximated with 2D scales, shears, and rotations. The affine net can represent these transformations, and so it benefited from larger filter patches. The translation approximation of the convnet is unable to properly align larger features to the true symmetries, and so it performed better with smaller filters. The convnet requires about four times as much data to reach the accuracy of the symnet with the smallest training set. Larger filters capture more structure than smaller ones, allowing symnets to generalize better than convnets and effectively giving each symnet layer the power of more than one convnet layer.
The left side of the graph may be more indicative of the types of gains symnets may have over convnets on more realistic datasets that do not have thousands of images of identical 3D shapes. With the ability to apply more realistic transformations to sub-parts, symnets may also be better able to reuse substructure on datasets with many interrelated or fine-grained categories. Since symnets are a clean generalization of convnets, they should benefit from the learning, regularization, and efficiency techniques used by state-of-the-art networks [13].
7 Conclusion
Symmetry groups underlie the hardest challenges in computer vision. In this paper we introduced deep symmetry networks, the first deep architecture that can compute features over any symmetry group. It is a natural generalization of convolutional neural networks that uses kernel interpolation and transformation optimization to address the difficulties of representing high-dimensional feature maps. In experiments on two image datasets with 2D and 3D variability, affine symnets achieved higher accuracy than convnets while using significantly less data. Directions for future work include extending to other symmetry groups (e.g., lighting, 3D space), modeling richer distortions, incorporating probabilistic inference, and scaling to larger datasets.
Acknowledgments
This research was partly funded by ARO grant W911NF-08-1-0242, ONR grants N00014-13-1-0720 and N00014-12-1-0312, and AFRL contract FA8750-13-2-0019. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ARO, ONR, AFRL, or the United States Government.
Footnote 3: Grid search over filter size in each layer {6, 9}, pooling size in each layer (convnet) {2, 3, 4}, first-layer control point translation spacing (symnet) {2, 3}, momentum {0, 0.5, 0.9}, others as in MNIST-rot.
References
[1] Y. S. Abu-Mostafa.
Hints and the VC dimension. Neural Computation, 5(2):278–288, 1993. [2] F. Anselmi, J. Z. Leibo, L. Rosasco, J. Mutch, A. Tacchetti, and T. Poggio. Unsupervised learning of invariant representations in hierarchical architectures. ArXiv preprint 1311.4158, 2013. [3] S. Baker and I. Matthews. Lucas-Kanade 20 years on: A unifying framework. International Journal of Computer Vision, 56(3):221–255, 2004. [4] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013. [5] J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference, 2010. [6] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1872–1886, 2013. [7] D. Ciresan, U. Meier, and J. Schmidhuber. Multi-column deep neural networks for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2012. [8] P. Diaconis. Group representations in probability and statistics. Institute of Mathematical Statistics, 1988. [9] B. Drost, M. Ulrich, N. Navab, and S. Ilic. Model globally, match locally: Efficient and robust 3D object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010. [10] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2008. [11] G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In Proceedings of the Twenty-First International Conference on Artificial Neural Networks, 2011. [12] I. R. Kondor. Group theoretical methods in machine learning. Columbia University, 2008. [13] A.
Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, 2012. [14] A. Kulesza and B. Taskar. Determinantal point processes for machine learning. ArXiv preprint 1207.6083, 2012. [15] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In Proceedings of the Twenty-Fourth International Conference on Machine Learning, 2007. [16] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2004. [17] T. Lee and S. Soatto. Video-based descriptors for object recognition. Image and Vision Computing, 29(10):639–652, 2011. [18] D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1999. [19] F. Lu and E. Milios. Robot pose estimation in unknown environments by matching 2D range scans. Journal of Intelligent and Robotic Systems, 18(3):249–275, 1997. [20] W. Miller. Symmetry groups and their applications. Academic Press, 1972. [21] M. Niepert. Markov chains on orbits of permutation groups. In Proceedings of the Twenty-Eighth Conference on Uncertainty in Artificial Intelligence, 2012. [22] P. Simard, Y. LeCun, and J. S. Denker. Efficient pattern recognition using a new transformation distance. In Advances in Neural Information Processing Systems 5, 1992. [23] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. International Conference on Learning Representations, 2014. [24] L. Wiskott and T. J. Sejnowski. Slow feature analysis: Unsupervised learning of invariances. Neural Computation, 14(4):715–770, 2002.
Covariance shrinkage for autocorrelated data
Daniel Bartz, Department of Computer Science, TU Berlin, Berlin, Germany, daniel.bartz@tu-berlin.de
Klaus-Robert Müller, TU Berlin, Berlin, Germany, and Korea University, Seoul, Korea, klaus-robert.mueller@tu-berlin.de
Abstract
The accurate estimation of covariance matrices is essential for many signal processing and machine learning algorithms. In high-dimensional settings the sample covariance is known to perform poorly, hence regularization strategies such as the analytic shrinkage of Ledoit/Wolf are applied. In the standard setting, i.i.d. data is assumed; in practice, however, time series typically exhibit strong autocorrelation structure, which introduces a pronounced estimation bias. Recent work by Sancetta has extended the shrinkage framework beyond i.i.d. data. We contribute in this work by showing that the Sancetta estimator, while being consistent in the high-dimensional limit, suffers from a high bias in finite sample sizes. We propose an alternative estimator, which is (1) unbiased, (2) less sensitive to hyperparameter choice and (3) yields superior performance in simulations on toy data and on a real-world data set from an EEG-based Brain-Computer-Interfacing experiment.
1 Introduction and Motivation
Covariance matrices are a key ingredient in many algorithms in signal processing, machine learning and statistics. The standard estimator, the sample covariance matrix S, has appealing properties in the limit of large sample sizes n: its entries are unbiased and consistent [HTF08]. On the other hand, for sample sizes of the order of the dimensionality p or even smaller, its entries have a high variance and the spectrum has a large systematic error. In particular, large eigenvalues are overestimated and small eigenvalues underestimated, the condition number is large and the matrix is difficult to invert [MP67, ER05, BS10].
One way to counteract this issue is to shrink S towards a biased estimator T (the shrinkage target) with lower variance [Ste56],
C_{sh} := (1 - \lambda) S + \lambda T,
the default choice being T = p^{-1} \mathrm{trace}(S) \cdot I, the identity multiplied by the average eigenvalue. For the optimal shrinkage intensity λ⋆, a reduction of the expected mean squared error is guaranteed [LW04]. Model selection for λ can be done by cross-validation (CV), with the known drawbacks: for (i) problems with many hyperparameters, (ii) very high-dimensional data sets, or (iii) online settings which need fast responses, CV can become unfeasible and a faster model selection method is required. A popular alternative to CV is Ledoit and Wolf's analytic shrinkage procedure [LW04] and more recent variants [CWEH10, BM13]. Analytic shrinkage directly estimates the shrinkage intensity which minimizes the expected mean squared error of the convex combination, with negligible computational cost, especially for applications which rely on expensive matrix inversions or eigendecompositions in high dimensions. All of the above algorithms assume i.i.d. data. Real-world time series, however, are often non-i.i.d. as they possess pronounced autocorrelation (AC). This makes covariance estimation in high dimensions even harder: the data dependence lowers the effective sample size available for constructing the estimator [TZ84]. Thus, stronger regularization λ will be needed. In Figure 1 the simple case of an autoregressive model serves as an example for an arbitrary generative model with autocorrelation.
Figure 1: Dependency of the eigendecomposition on autocorrelation. p = 200, n = 250.
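The convex combination above is straightforward to compute; a minimal sketch (ours, not the authors' code, with the intensity `lam` simply passed in rather than estimated):

```python
import numpy as np

# Minimal sketch of the shrinkage estimator C_sh = (1 - lam) * S + lam * T
# with the default target T = p^{-1} trace(S) I, the identity scaled by the
# average eigenvalue. The intensity `lam` would come from the optimal-lambda
# formula; here it is an input. Illustrative, not the authors' implementation.

def shrinkage_covariance(X, lam):
    """X: (p, n) data matrix with zero-mean rows; 0 <= lam <= 1."""
    p, n = X.shape
    S = X @ X.T / n                      # sample covariance
    T = np.trace(S) / p * np.eye(p)      # shrinkage target
    return (1.0 - lam) * S + lam * T
```

Since trace(T) = trace(S), the total variance is preserved for any λ; shrinkage only redistributes variance across eigendirections, pulling large eigenvalues down and small ones up.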
The Figure shows, for three levels of autocorrelation (left), the population and sample eigenvalues (middle): with increasing autocorrelation the sample eigenvalues become more biased. This bias is an optimistic measure for the quality of the covariance estimator: it neglects that population and sample eigenbases also differ [LW12]. Comparing sample eigenvalues to the population variance in the sample eigenbasis, the bias is even larger (right). In practice, violations of the i.i.d. assumption are often ignored [LG11, SBMK13, GLL+14], although Sancetta proposed a consistent shrinkage estimator under autocorrelation [San08]. In this paper, we contribute by showing in theory, simulations and on real-world data that (i) ignoring autocorrelations for shrinkage leads to large estimation errors and (ii) for finite samples Sancetta's estimator is still substantially biased and highly sensitive to the number of incorporated time lags. We propose a new bias-corrected estimator which (iii) outperforms standard shrinkage and Sancetta's method under the presence of autocorrelation and (iv) is robust to the choice of the lag parameter.
2 Shrinkage for autocorrelated data
Ledoit and Wolf derived a formula for the optimal shrinkage intensity [LW04, SS05]:
\lambda^\star = \frac{\sum_{ij} \mathrm{Var}(S_{ij})}{\sum_{ij} E\big[(S_{ij} - T_{ij})^2\big]}. \quad (1)
The analytic shrinkage estimator \hat{\lambda} is obtained by replacing expectations with sample estimates:
\mathrm{Var}(S_{ij}) \;\longrightarrow\; \widehat{\mathrm{Var}}(S_{ij}) = \frac{1}{n^2} \sum_{s=1}^{n} \Big( x_{is} x_{js} - \frac{1}{n} \sum_{t=1}^{n} x_{it} x_{jt} \Big)^2, \quad (2)
E\big[(S_{ij} - T_{ij})^2\big] \;\longrightarrow\; \hat{E}\big[(S_{ij} - T_{ij})^2\big] = (S_{ij} - T_{ij})^2, \quad (3)
where x_{it} is the t-th observation of variable i. While the estimator eq. (3) is unbiased even under a violation of the i.i.d. assumption, the estimator eq. (2) is based on
\mathrm{Var}\Big( \frac{1}{n} \sum_{t=1}^{n} x_{it} x_{jt} \Big) \stackrel{\text{i.i.d.}}{=} \frac{1}{n} \mathrm{Var}(x_{it} x_{jt}).
If the data are autocorrelated, cross terms cannot be ignored and we obtain
\mathrm{Var}\Big( \frac{1}{n} \sum_{t=1}^{n} x_{it} x_{jt} \Big) = \frac{1}{n^2} \sum_{s,t=1}^{n} \mathrm{Cov}(x_{it} x_{jt}, x_{is} x_{js}) = \frac{1}{n} \mathrm{Cov}(x_{it} x_{jt}, x_{it} x_{jt}) + \frac{2}{n} \sum_{s=1}^{n-1} \frac{n-s}{n} \mathrm{Cov}(x_{it} x_{jt}, x_{i,t+s} x_{j,t+s}) =: \frac{1}{n} \Gamma_{ij}(0) + \frac{2}{n} \sum_{s=1}^{n-1} \Gamma_{ij}(s). \quad (4)
Figure 2 illustrates the effect of ignoring the cross terms for increasing autocorrelation (larger AR coefficients; see Section 3 for details on the simulation). It compares standard shrinkage to an oracle shrinkage based on the population variance of the sample covariance (footnote 1).
Footnote 1: Calculated by resampling.
Figure 2: Dependency of shrinkage on autocorrelation. p = 200, n = 250.
The population variance of S increases because the effective sample size is reduced [TZ84], yet the standard shrinkage variance estimator eq. (2) does not increase (outer left). As a consequence, for oracle shrinkage the shrinkage intensity increases, while for the standard shrinkage estimator it even decreases because the denominator in eq. (1) grows (middle left). With increasing autocorrelation, the sample covariance becomes a less precise estimator: for optimal (stronger) shrinkage more improvement becomes possible, yet standard shrinkage does not improve (middle right). Looking at the variance estimates in the sample eigendirections for AR coefficients of 0.7, we see that the bias of standard shrinkage is only marginally smaller than the bias of the sample covariance, while oracle shrinkage yields a substantial bias reduction (outer right).
Sancetta estimator. An estimator for eq. (4) was proposed by [San08]:
\hat{\Gamma}^{San}_{ij}(s) := \frac{1}{n} \sum_{t=1}^{n-s} (x_{it} x_{jt} - S_{ij})(x_{i,t+s} x_{j,t+s} - S_{ij}), \quad (5)
\widehat{\mathrm{Var}}^{San,b}(S_{ij}) := \frac{1}{n} \Big( \hat{\Gamma}^{San}_{ij}(0) + 2 \sum_{s=1}^{n-1} \kappa(s/b) \hat{\Gamma}^{San}_{ij}(s) \Big)
, b > 0, where κ is a kernel which has to fulfill Assumption B in [And91]. We restrict our analysis to the truncated kernel κ_TR(x) = 1 for |x| ≤ 1 and 0 otherwise, to obtain less cluttered formulas (footnote 2). The kernel parameter b describes how many time lags are taken into account. The Sancetta estimator behaves well in the high-dimensional limit: the main theoretical result states that for (i) a fixed decay of the autocorrelation, (ii) b, n → ∞ and (iii) b² increasing at a lower rate than n, the estimator is consistent independently of the rate of p (for details, see [San08]). This is in line with the results in [LW04, CWEH10, BM13]: as long as n increases, all of these shrinkage estimators are consistent.
Bias of the Sancetta estimator. In the following we show that the Sancetta estimator is suboptimal in finite samples: it has a non-negligible bias. To understand this, consider a lag s large enough to have Γ_ij(s) ≈ 0. If we approximate the expectation of the Sancetta estimator, we see that it is biased downwards:
E\big[ \hat{\Gamma}^{San}_{ij}(s) \big] \approx E\Big[ \frac{1}{n} \sum_{t=1}^{n-s} \big( x_{it} x_{jt} x_{i,t+s} x_{j,t+s} - S_{ij}^2 \big) \Big] \approx \frac{n-s}{n} \big( E^2[S_{ij}] - E[S_{ij}^2] \big) = -\frac{n-s}{n} \mathrm{Var}(S_{ij}) < 0.
Bias-corrected (BC) estimator. We propose a bias-corrected estimator for the variance of the entries of the sample covariance matrix:
\hat{\Gamma}^{BC}_{ij}(s) := \frac{1}{n} \sum_{t=1}^{n-s} \big( x_{it} x_{jt} x_{i,t+s} x_{j,t+s} - S_{ij}^2 \big), \quad (6)
\widehat{\mathrm{Var}}^{BC,b}(S_{ij}) := \frac{1}{n - 1 - 2b + b(b+1)/n} \Big( \hat{\Gamma}^{BC}_{ij}(0) + 2 \sum_{s=1}^{n-1} \kappa_{TR}(s/b) \hat{\Gamma}^{BC}_{ij}(s) \Big), \quad b > 0.
Footnote 2: In his simulations, Sancetta uses the Bartlett kernel. For fixed b, this increases the truncation bias.
The estimator \hat{\Gamma}^{BC}_{ij}(s) is very similar to \hat{\Gamma}^{San}_{ij}(s), but slightly easier to compute. The main difference is the denominator in \widehat{\mathrm{Var}}^{BC,b}(S_{ij}): it is smaller than n and thus corrects the downwards bias.
2.1 Theoretical results
It is straightforward to extend the theoretical results on the Sancetta estimator ([San08], see summary above) to our proposed estimator.
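For a single index pair (i, j), eq. (6) and the bias-corrected variance estimate can be sketched as follows (an illustrative reimplementation under the truncated kernel, not the authors' code):

```python
import numpy as np

# Sketch of the bias-corrected estimator, eq. (6), for one entry S_ij,
# using the truncated kernel kappa_TR (all lags s <= b get weight 1).
# Assumes zero-mean data, as in the paper. Illustrative only.

def gamma_bc(xi, xj, s):
    """Gamma^BC_ij(s) = (1/n) sum_{t=1}^{n-s} (x_it x_jt x_{i,t+s} x_{j,t+s} - S_ij^2)."""
    n = len(xi)
    prod = xi * xj                 # the scalar time series x_it * x_jt
    S_ij = prod.mean()             # sample covariance entry
    return ((prod[: n - s] * prod[s:]) - S_ij ** 2).sum() / n

def var_bc(xi, xj, b):
    """Bias-corrected estimate of Var(S_ij) with lag parameter b > 0."""
    n = len(xi)
    denom = n - 1 - 2 * b + b * (b + 1) / n      # < n: corrects the downward bias
    return (gamma_bc(xi, xj, 0)
            + 2 * sum(gamma_bc(xi, xj, s) for s in range(1, b + 1))) / denom
```

For b much smaller than n the denominator is close to n, so the correction matters most when many lags are included, which matches the robustness to large b reported in Section 3.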
In the following, to better understand the limitations of the Sancetta estimator, we provide a complementary theoretical analysis of the behaviour of the estimator for finite n. Our theoretical results are based on the analysis of a sequence of statistical models indexed by p. X_p denotes a p × n matrix of n observations of p variables with mean zero and covariance matrix C_p. Y_p = R_p^T X_p denotes the same observations rotated into their eigenbasis, having diagonal covariance Λ_p = R_p^T C_p R_p. Lower-case letters x^p_{it} and y^p_{it} denote the entries of X_p and Y_p, respectively (footnote 3). The analysis is based on the following assumptions:
Assumption 1 (A1, bound on average eighth moment). There exists a constant K_1 independent of p such that
\frac{1}{p} \sum_{i=1}^{p} E\big[(x^p_{i1})^8\big] \le K_1.
Assumption 2 (A2, uncorrelatedness of higher moments). Let Q_p denote the set of quadruples {i, j, k, l} of distinct integers. Then
\frac{ \sum_{i,j,k,l \in Q_p} \mathrm{Cov}^2\big[ y^p_{i1} y^p_{j1},\, y^p_{k,1+s} y^p_{l,1+s} \big] }{ |Q_p| } = O(p^{-1}), \quad \text{and} \quad \forall s:\; \frac{ \sum_{i,j,k,l \in Q_p} \mathrm{Cov}\big[ (y^p_{i1} y^p_{j1})^2,\, (y^p_{k,1+s} y^p_{l,1+s})^2 \big] }{ |Q_p| } = O(p^{-1}).
Assumption 3 (A3, non-degeneracy). There exists a constant K_2 such that
\frac{1}{p} \sum_{i=1}^{p} E\big[(x^p_{i1})^2\big] \ge K_2.
Assumption 4 (A4, moment relation). There exist constants α_4, α_8, β_4 and β_8 such that
E[y_i^8] \le (1 + \alpha_8) E^2[y_i^4], \quad E[y_i^4] \le (1 + \alpha_4) E^2[y_i^2], \quad E[y_i^8] \ge (1 + \beta_8) E^2[y_i^4], \quad E[y_i^4] \ge (1 + \beta_4) E^2[y_i^2].
Remarks on the assumptions. A restriction on the eighth moment (assumption A1) is necessary because the estimators eqs. (2), (3), (5) and (6) contain fourth moments, so their variances contain eighth moments. Note that, contrary to the similar assumption in the eigenbasis in [LW04], A1 poses no restriction on the covariance structure [BM13]. To quantify the effect of averaging over dimensions, assumption A2 restricts the correlations of higher moments in the eigenbasis. This assumption is trivially fulfilled for Gaussian data, but much weaker (see [LW04]).
Assumption A3 rules out the degenerate case of adding observation channels without any variance, and assumption A4 excludes distributions with arbitrarily heavy tails. Based on these assumptions, we can analyse the difference between the Sancetta estimator and our proposed estimator for large p:
Theorem 1 (consistency under "fixed n" asymptotics). Let A1, A2, A3, A4 hold. We then have
\frac{1}{p^2} \sum_{ij} \mathrm{Var}(S_{ij}) = \Theta(1),
E\bigg[ \Big( \frac{1}{p^2} \sum_{ij} \big( \widehat{\mathrm{Var}}^{San,b}(S_{ij}) - \mathrm{Var}(S_{ij}) \big) \Big)^2 \bigg] = \big( \mathrm{Bias}^{San,b} + \mathrm{Bias}^{San,b}_{TR} \big)^2 + O\bigg( \frac{\sum_j \gamma_j^2}{(\sum_j \gamma_j)^2} \bigg),
E\bigg[ \Big( \frac{1}{p^2} \sum_{ij} \big( \widehat{\mathrm{Var}}^{BC,b}(S_{ij}) - \mathrm{Var}(S_{ij}) \big) \Big)^2 \bigg] = \big( \mathrm{Bias}^{BC,b}_{TR} \big)^2 + O\bigg( \frac{\sum_j \gamma_j^2}{(\sum_j \gamma_j)^2} \bigg),
where the γ_j denote the eigenvalues of C and
\mathrm{Bias}^{San,b} := -\frac{1}{p^2} \sum_{ij} \bigg\{ \frac{1 + 2b - b(b+1)/n}{n} \mathrm{Var}(S_{ij}) - \frac{4}{n^3} \sum_{s=1}^{b} \sum_{t=n-s}^{n} \sum_{u=1}^{n} \mathrm{Cov}[x_{it} x_{jt}, x_{iu} x_{ju}] \bigg\},
\mathrm{Bias}^{San,b}_{TR} := -\frac{1}{p^2} \frac{2}{n} \sum_{ij} \sum_{s=b+1}^{n} \frac{n-s}{n} \mathrm{Cov}[x_{it} x_{jt}, x_{i,t+s} x_{j,t+s}],
\mathrm{Bias}^{BC,b}_{TR} := -\frac{1}{p^2} \frac{2}{n - 1 - 2b + b(b+1)/n} \sum_{ij} \sum_{s=b+1}^{n-1} \mathrm{Cov}[x_{it} x_{jt}, x_{i,t+s} x_{j,t+s}].
Proof. See the supplemental material.
Footnote 3: We shall often drop the sequence index p and the observation index t to improve the readability of formulas.
Figure 3: Dependence of the variance estimates on the dimensionality. Averaged over R = 50 models. n = 250.
Remarks on Theorem 1. (i) The mean squared error of both estimators consists of a bias and a variance term. Both estimators have a truncation bias which is a consequence of including only a limited number of time lags in the variance estimation. When b is chosen sufficiently high, this term gets close to zero. (ii) The Sancetta estimator has an additional bias term which is smaller than zero in each dimension and therefore does not average out.
Simulations will show that, as a consequence, the Sancetta estimator has a strong bias which gets larger with increasing lag parameter b. (iii) The variance of both estimators behaves as O(\sum_i \gamma_i^2 / (\sum_i \gamma_i)^2): the more the variance of the data is spread over the eigendirections, the smaller the variance of the estimators. This bound is minimal if the eigenvalues are identical. (iv) Theorem 1 does not make a statement on the relative sizes of the variances of the estimators. Note that the BC estimator mainly differs by a multiplicative factor > 1; hence its variance is larger, but not relative to the expectation of the estimator.
3 Simulations
Our simulations are based on those in [San08]: we average over R = 50 multivariate Gaussian AR(1) models
\vec{x}_t = A \vec{x}_{t-1} + \vec{\epsilon}_t,
with parameter matrix A = ψ_AC · I (footnote 4), with ψ_{no AC} = 0, ψ_{low AC} = 0.7, and ψ_{high AC} = 0.95 (see Figure 1).
Footnote 4: More complex parameter matrices or a different generative model do not pose a problem for the bias-corrected estimator. The simple model was chosen for clarity of presentation.
Figure 4: Robustness to the choice of lag parameter b. Variance estimates (upper row), shrinkage intensities (middle row) and improvement over sample covariance (lower row). Averaged over R = 50 models. p = 200, n = 250.
The innovations ϵ_it are Gaussian with variances σ_i² drawn from a log-normal distribution with mean µ = 1 and scale parameter σ = 0.5. For each model, we generate K = 50 data sets to calculate the std.
deviations of the estimators and to obtain an approximation of p^{-2} \sum_{ij} \mathrm{Var}(S_{ij}).
Simulation 1 analyses the dependence of the estimators on the dimensionality of the data. The number of observations is fixed at n = 250 and the lag parameter b is chosen by hand such that the whole autocorrelation is covered (footnote 5): b_{no AC} = 10, b_{low AC} = 20 and b_{high AC} = 90. Figure 3 shows that the standard shrinkage estimator is unbiased and has low variance in the no-AC setting, but under the presence of autocorrelation it strongly underestimates the variance. As predicted by Theorem 1, the Sancetta estimator is also biased; its bias remains constant for increasing dimensionality. Our proposed estimator has no visible bias. For increasing dimensionality the variances of all estimators decrease. Relative to the average estimate, there is no visible difference between the standard deviations of the Sancetta and the BC estimator.
Simulation 2 analyses the dependency on the lag parameter b for fixed dimensionality p = 200 and number of observations n = 250. In addition to variance estimates, Figure 4 reports shrinkage intensities and the percentage improvement in absolute loss (PRIAL) over the sample covariance matrix:
\mathrm{PRIAL}\big( C^{\{pop.,\, shr.,\, San.,\, BC\}} \big) = \frac{ E\|S - C\| - E\big\| C^{\{pop.,\, shr.,\, San.,\, BC\}} - C \big\| }{ E\|S - C\| }.
The three quantities show very similar behaviour. Standard shrinkage performs well in the no-AC case, but is strongly biased in the autocorrelated settings. The Sancetta estimator is very sensitive to the choice of the lag parameter b. For low AC, the bias at the optimal b is small: only a small number of biased terms are included. For high AC the optimal b is larger, and the higher number of biased terms causes a larger bias. The BC estimator is very robust: it performs well for all b large enough to capture the autocorrelation.
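The PRIAL criterion can be computed directly from repeated simulations; a sketch (the Frobenius norm and the Monte-Carlo averaging over data sets are our assumptions, the source does not fix them here):

```python
import numpy as np

# Sketch of the PRIAL criterion: the expectations E||.|| are approximated
# by averaging over repeated data sets, with ||.|| taken as the Frobenius
# norm (an assumption on our part). Illustrative only.

def prial(sample_covs, estimates, C):
    """PRIAL = (E||S - C|| - E||C_hat - C||) / E||S - C||."""
    loss_S = np.mean([np.linalg.norm(S - C) for S in sample_covs])
    loss_est = np.mean([np.linalg.norm(E - C) for E in estimates])
    return (loss_S - loss_est) / loss_S
```

A perfect estimator gives PRIAL = 1, the sample covariance itself gives 0, and negative values mean the estimator is worse than S.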
For very large b its variance increases slightly, but this has practically no effect on the PRIAL. An interesting aspect is that our proposed estimator even outperforms shrinkage based on the population Var(S_ij) (calculated by resampling). This results from the correlation of the estimator \widehat{\mathrm{Var}}^{BC,b}(S_{ij}) with the sample estimate eq. (3) of the denominator in eq. (1).
Footnote 5: For b < 1, optimal in the no-AC setting, the Sancetta and BC estimators are equivalent to standard shrinkage.
Figure 5: BCI motor imagery data for lag parameter b = 75 (upper row) and b = 300 (lower row). Averaged over subjects and K = 100 runs.
4 Real World Data: Brain Computer Interface based on Motor Imagery
As an example of autocorrelated data we reanalyzed a data set from a motor imagery experiment. In the experiment, brain activity for two different imagined movements was measured via EEG (p = 55 channels, 80 subjects, 150 trials per subject, each trial with n_trial = 390 measurements [BSH+10]). The frequency band was optimized for each subject, and from the class-wise covariance matrices, 1-3 filters per class were extracted by Common Spatial Patterns (CSP), adaptively chosen by a heuristic (see [BTL+08]). We trained Linear Discriminant Analysis on log-variance features.
To improve the estimate of the class covariances on these highly autocorrelated data, standard shrinkage, Sancetta shrinkage, cross-validation and our proposed BC shrinkage estimator were applied. The covariance structure is far from diagonal; therefore, for each subject, we used the average of the class covariances of the other subjects as shrinkage target [BLT+11]. Shrinkage is dominated by the influence of high-variance directions [BM13], which are pronounced in this data set. To reduce this effect, we rescaled, only for the calculation of the shrinkage intensities, the first five principal components to have the same variance as the sixth principal component. We analyse the dependency of the four algorithms on the number of supplied training trials. Figure 5 (upper row) shows results for an optimized time lag (b = 75) which captures the autocorrelation of the data well (outer left). Taking the autocorrelation into account makes a clear difference (middle left/right): while standard shrinkage outperforms the sample covariance, it is clearly outperformed by the autocorrelation-adjusted approaches. The Sancetta estimator is slightly worse than our proposed estimator. The shrinkage intensities (outer right) are extremely low for standard shrinkage, and the negative bias of the Sancetta estimator shows clearly for small numbers of training trials. Figure 5 (lower row) shows results for a time lag that is too large (b = 300). The performance of the Sancetta estimator degrades strongly as its shrinkage intensities get smaller, while our proposed estimator is robust to the choice of b; only for the smallest number of trials do we observe a small degradation in performance. Figure 6 (left) compares our bias-corrected estimator to the four other approaches for 10 training trials: it significantly outperforms standard shrinkage and Sancetta shrinkage for both the larger (b = 300, p ≤ 0.01) and the smaller time lag (b = 75, p ≤ 0.05).
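The rescaling of the leading principal components mentioned above can be sketched as follows. The function name and interface are ours, and this is only an illustration of the preprocessing idea (cap the top-k PC variances at the (k+1)-th), not the authors' pipeline.

```python
import numpy as np

def cap_top_pcs(X, k=5):
    """Rescale the top-k principal components of the data X (n x p) to the
    variance of the (k+1)-th component, then map back to sensor space.
    Used in the text only for computing shrinkage intensities."""
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / (len(Xc) - 1)
    w, V = np.linalg.eigh(S)                 # eigenvalues ascending
    w, V = w[::-1], V[:, ::-1]               # sort descending
    scores = Xc @ V                          # principal component scores
    for i in range(min(k, len(w) - 1)):
        if w[i] > 0:
            scores[:, i] *= np.sqrt(w[k] / w[i])   # match variance of PC k+1
    return scores @ V.T                      # back to sensor space
```

After this transform the top k+1 eigenvalues of the sample covariance coincide, so high-variance directions no longer dominate the shrinkage-intensity calculation.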
Figure 6: Subject-wise BCI classification accuracies for 10 training trials (left: bias-corrected estimator vs. sample covariance, standard shrinkage, cross-validation and the Sancetta estimator, for b = 75 and b = 300) and normalized runtime demands (right). **/* := significant at p ≤ 0.01 or p ≤ 0.05, respectively.

Analytic shrinkage procedures optimize only the mean squared error of the covariance matrix, while cross-validation directly optimizes the classification performance. Yet, Figure 5 (middle) shows that for small numbers of training trials our proposed estimator outperforms CV, although the difference is not significant (see Fig. 6). For larger numbers of training trials CV performs better. This shows that the MSE is not a very good proxy for classification accuracy in the context of CSP: for optimal MSE, shrinkage intensities decrease with increasing number of observations. CV shrinkage intensities instead stay at a constant level between 0.1 and 0.15. Figure 6 (right) shows that the three shrinkage approaches (b = 300) have a huge runtime advantage over cross-validation (10 folds / 10 parameter candidates).

5 Discussion

Analytic shrinkage estimators are highly useful tools for covariance matrix estimation in time-critical or computationally expensive applications: no time-consuming cross-validation procedure is required. In addition, it has been observed that in some applications, cross-validation is not a good predictor of out-of-sample performance [LG11, BKT+07]. Its speed and good performance have made analytic shrinkage widely used: it is, for example, state-of-the-art in ERP experiments [BLT+11].
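A classical way to quantify how autocorrelation lowers the information content per data point is the effective sample size in the spirit of [TZ84]. The sketch below is our own illustration of that rule of thumb, not the estimator proposed in this paper: n_eff = n / (1 + 2 Σ_{l=1}^{b} ρ_l), with autocorrelations ρ_l estimated up to lag b.

```python
import numpy as np

def effective_sample_size(x, b):
    """Effective number of independent samples of a univariate series,
    n_eff = n / (1 + 2 * sum_{l=1}^{b} rho_l), a classical rule of thumb
    for autocorrelated data (cf. Thiebaux & Zwiers [TZ84])."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    acov = np.array([xc[: n - l] @ xc[l:] / n for l in range(b + 1)])
    rho = acov[1:] / acov[0]
    return n / max(1.0, 1.0 + 2.0 * rho.sum())

# AR(1) noise with coefficient a has n_eff ~ n * (1 - a) / (1 + a).
rng = np.random.default_rng(0)
n, a = 2000, 0.9
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.standard_normal()
print(f"n = {n}, estimated n_eff = {effective_sample_size(x, b=50):.0f}")
```

For a = 0.9 the theoretical reduction factor is (1 − 0.9)/(1 + 0.9) ≈ 0.05: the series carries roughly twenty times less information than an i.i.d. sample of the same length, which is why stronger regularization is needed.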
While standard shrinkage assumes i.i.d. data, many real-world data sets, for example from video, audio, finance, biomedical engineering or energy systems, clearly violate this assumption, as strong autocorrelation is present. Intuitively this means that the information content per data point becomes lower, and thus the covariance estimation problem becomes harder: the dimensionality remains unchanged but the effective number of samples available decreases. Thus stronger regularization is required and standard analytic shrinkage [LW04] needs to be corrected. Sancetta took a first step in this important direction by providing a consistent estimator under i.i.d. violations [San08]. In this work we analysed finite sample sizes and showed that (i) even apart from truncation bias (which results from including a limited number of time lags) Sancetta's estimator is biased, (ii) this bias is only negligible if the autocorrelation decays fast compared to the length of the time series, and (iii) the Sancetta estimator is very sensitive to the choice of lag parameter. We proposed an alternative estimator which is (i) both consistent and, apart from truncation bias, unbiased and (ii) highly robust to the choice of lag parameter. In simulations on toy and real-world data we showed that the proposed estimator yields large improvements for small samples and/or suboptimal lag parameters. Even for the optimal lag parameter there is a slight but significant improvement. Analysing data from BCI motor imagery experiments we see (i) that the BCI data set possesses significant autocorrelation, (ii) that this adversely affects CSP based on the sample covariance and standard shrinkage, (iii) that this effect can be alleviated using our novel estimator, which (iv) compares favorably to Sancetta's estimator.

Acknowledgments

This research was also supported by the National Research Foundation grant (No. 2012-005741) funded by the Korean government.
We thank Johannes Höhne, Sebastian Bach and Duncan Blythe for valuable discussions and comments.

References

[And91] Donald W. K. Andrews. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica: Journal of the Econometric Society, pages 817–858, 1991.
[BKT+07] Benjamin Blankertz, Motoaki Kawanabe, Ryota Tomioka, Friederike Hohlefeld, Klaus-Robert Müller, and Vadim V. Nikulin. Invariant common spatial patterns: Alleviating nonstationarities in brain-computer interfacing. In Advances in Neural Information Processing Systems, pages 113–120, 2007.
[BLT+11] Benjamin Blankertz, Steven Lemm, Matthias Treder, Stefan Haufe, and Klaus-Robert Müller. Single-trial analysis and classification of ERP components – a tutorial. NeuroImage, 56(2):814–825, 2011.
[BM13] Daniel Bartz and Klaus-Robert Müller. Generalizing analytic shrinkage for arbitrary covariance structures. In C. J. C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1869–1877. Curran Associates, Inc., 2013.
[BS10] Zhidong Bai and Jack William Silverstein. Spectral Analysis of Large Dimensional Random Matrices. Springer Series in Statistics. Springer New York, 2010.
[BSH+10] Benjamin Blankertz, Claudia Sannelli, Sebastian Halder, Eva M. Hammer, Andrea Kübler, Klaus-Robert Müller, Gabriel Curio, and Thorsten Dickhaus. Neurophysiological predictor of SMR-based BCI performance. NeuroImage, 51(4):1303–1309, 2010.
[BTL+08] Benjamin Blankertz, Ryota Tomioka, Steven Lemm, Motoaki Kawanabe, and Klaus-Robert Müller. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Processing Magazine, 25(1):41–56, 2008.
[CWEH10] Yilun Chen, Ami Wiesel, Yonina C. Eldar, and Alfred O. Hero. Shrinkage algorithms for MMSE covariance estimation. IEEE Transactions on Signal Processing, 58(10):5016–5029, 2010.
[ER05] Alan Edelman and N. Raj Rao. Random matrix theory.
Acta Numerica, 14:233–297, 2005.
[GLL+14] Alexandre Gramfort, Martin Luessi, Eric Larson, Denis A. Engemann, Daniel Strohmeier, Christian Brodbeck, Lauri Parkkonen, and Matti S. Hämäläinen. MNE software for processing MEG and EEG data. NeuroImage, 86:446–460, 2014.
[HTF08] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer, 2008.
[LG11] Fabien Lotte and Cuntai Guan. Regularizing common spatial patterns to improve BCI designs: unified theory and new algorithms. IEEE Transactions on Biomedical Engineering, 58(2):355–362, 2011.
[LW04] Olivier Ledoit and Michael Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411, 2004.
[LW12] Olivier Ledoit and Michael Wolf. Nonlinear shrinkage estimation of large-dimensional covariance matrices. The Annals of Statistics, 40(2):1024–1060, 2012.
[MP67] Vladimir A. Marčenko and Leonid A. Pastur. Distribution of eigenvalues for some sets of random matrices. Mathematics of the USSR-Sbornik, 1(4):457, 1967.
[San08] Alessio Sancetta. Sample covariance shrinkage for high dimensional dependent data. Journal of Multivariate Analysis, 99(5):949–967, May 2008.
[SBMK13] Wojciech Samek, Duncan Blythe, Klaus-Robert Müller, and Motoaki Kawanabe. Robust spatial filtering with beta divergence. In L. Bottou, C. J. C. Burges, M. Welling, Z. Ghahramani, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 1007–1015, 2013.
[SS05] Juliane Schäfer and Korbinian Strimmer. A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Statistical Applications in Genetics and Molecular Biology, 4(1):1175–1189, 2005.
[Ste56] Charles Stein. Inadmissibility of the usual estimator for the mean of a multivariate normal distribution. In Proc. 3rd Berkeley Sympos. Math. Statist. Probability, volume 1, pages 197–206, 1956.
[TZ84] H.
Jean Thiébaux and Francis W. Zwiers. The interpretation and estimation of effective sample size. Journal of Climate and Applied Meteorology, 23(5):800–811, 1984.
Scale Adaptive Blind Deblurring

Haichao Zhang, Duke University, NC, hczhang1@gmail.com
Jianchao Yang, Adobe Research, CA, jiayang@adobe.com

Abstract

The presence of noise and small scale structures usually leads to large kernel estimation errors in blind image deblurring in practice, if not a total failure. We present a scale-space perspective on blind deblurring algorithms, and introduce a cascaded scale-space formulation for blind deblurring. This new formulation suggests a natural approach that is robust to noise and small scale structures, through tying the estimation across multiple scales and balancing the contributions of different scales automatically by learning from data. The proposed formulation also allows us to handle non-uniform blur with a straightforward extension. Experiments are conducted on both a benchmark dataset and real-world images to validate the effectiveness of the proposed method. One surprising finding based on our approach is that blur kernel estimation is not necessarily best at the finest scale.

1 Introduction

Blind deconvolution is an important inverse problem that attracts increasing attention from various fields, such as neural signal analysis [3, 10] and computational imaging [6, 8]. Although some results obtained in this paper are applicable to more general bilinear estimation problems, we will use blind image deblurring as an example. Image blur is an undesirable degradation that often accompanies the image formation process due to factors such as camera shake. Blind image deblurring aims to recover a sharp image from only one blurry observed image. While significant progress has been made recently [6, 16, 14, 2, 22, 11], most existing blind deblurring methods do not work well in the presence of noise, leading to inaccurate blur kernel estimation, a problem that has been observed in several recent works [17, 26].
Figure 1 shows an example where the kernel recovery quality of previous methods degrades significantly even though only 5% Gaussian noise is added to the blurry input. Moreover, it has been empirically observed that even for noise-free images, image structures with scale smaller than that of the blur kernel are actually harmful for kernel estimation [22]. Therefore, various structure selection techniques, such as hard/hysteresis gradient thresholding [2, 16], selective edge maps [22], and image decomposition [24], have been incorporated into kernel estimation. In this paper, we propose a novel formulation for blind deblurring, which explains the conventional empirical coarse-to-fine estimation scheme and reveals some novel perspectives. Our new formulation not only encompasses the conventional multi-scale estimation scheme, but also achieves robust blind deblurring in a simple but principled way. Our model analysis leads to several interesting and perhaps surprising observations: (i) blur kernel estimation is not necessarily best at the finest image scale, and (ii) there is no universal single image scale that can be defined a priori to maximize the performance of blind deblurring.

The remainder of the paper is structured as follows. In Section 2, we conduct an analysis to motivate our proposed scale-adaptive blind deblurring approach. Section 3 presents the proposed approach, including a generalization to noise-robust kernel estimation as well as non-uniform blur estimation. We discuss the relationship of the proposed method to several previous methods in Section 4. Experiments are carried out in Section 5, and the results are compared with those of state-of-the-art methods in the literature. Finally, we conclude the paper in Section 6.

Figure 1: Sensitivity of blind deblurring to image noise. (a) Blurry & noisy input, (b) Levin et al. [13], (c) Zhang et al. [25], (d) Zhong et al. [26], (e) Proposed. Random Gaussian noise (5%) is added to the observed blurry image before kernel estimation. The deblurred images are obtained with the corresponding estimated blur kernels and the noise-free blurry image, to isolate the kernel estimation accuracy.

2 Motivational Analysis

For uniform blur, the blurry image can be modeled as

  y = k ∗ x + n,   (1)

where ∗ denotes 2D convolution,^1 x is the unknown sharp image, y is the observed blurry image, k is the unknown blur kernel (a.k.a. point spread function), and n is a zero-mean Gaussian noise term [6]. As mentioned above, most blind deblurring methods are sensitive to image noise and small scale structures [17, 26, 22]. Although these effects have been empirically observed [2, 22, 24, 17], we provide a complementary analysis in the following, which motivates our proposed approach.

Our analysis is based on the following result:

Theorem 1 (Point Source Recovery [1]) For a signal x containing point sources at different locations, if the minimum distance between sources is at least 2/f_c, where f_c denotes the cut-off frequency of the Gaussian kernel k, then x can be recovered exactly given k and the observed signal y in the noiseless case.

Although Theorem 1 is stated in the noiseless and non-blind case with a parametric Gaussian kernel, it is still enlightening for analyzing the general blind deblurring case we are interested in. As sparsity of the image is typically exploited in the image derivative domain for blind deblurring, Theorem 1 implies that large image structures, whose gradients are distributed far from each other, are likely to be recovered more accurately, which in return benefits the kernel estimation. On the contrary, small image structures, with gradients distributed near each other, are likely to have larger recovery errors, and thus are harmful for kernel estimation.
We refer to these small image structures as small scale structures in this paper. Apart from the above recoverability analysis, Theorem 1 also suggests a straightforward approach to deal with noise and small scale structures: perform blur kernel estimation after smoothing the noisy (and blurry) image y with a low-pass filter f_p with a proper cut-off frequency f_c,

  y_p = f_p ∗ y  ⇔  y_p = f_p ∗ k ∗ x + f_p ∗ n  ⇔  y_p = k_p ∗ x + n_p,   (2)

where k_p ≜ f_p ∗ k and n_p ≜ f_p ∗ n. As f_p is a low-pass filter, the noise level of y_p is reduced. Also, as the small scale structures correspond to signed spikes with small separation distance in the derivative domain, applying a local averaging will make them mostly cancel out [22]; therefore, noise and small scale structures can be effectively suppressed. However, applying the low-pass filter will also smooth the large image structures besides noise, and as a result, it will alter the profile of the edges. As the salient large scale edge structures are the crucial information for blur kernel estimation, the low-pass filtering may lead to inaccurate kernel estimation. This is the inherent limitation of linear filtering for blind deblurring. To achieve noise reduction while retaining the latent edge structures, one may resort to non-linear filtering schemes, such as anisotropic diffusion [20], bilateral filtering [19], or sparse regression [5]. These approaches typically assume the absence of motion blur, and thus can cause over-sharpening of the edge structures and over-smoothing of image details when blur is present [17], resulting in a filtered image that is no longer linear with respect to the latent sharp image, making accurate kernel estimation even more difficult.

^1 We also overload ∗ to denote the 2D convolution followed by lexicographic ordering, based on the context.
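The identity in (2) is simply associativity of convolution: filtering the observation equals blurring with the filtered kernel, while the filtered noise is weakened. A quick 1D numerical check (our own toy setup, not from the paper):

```python
import numpy as np

def gauss1d(radius, sigma):
    """Normalized 1-D Gaussian filter (a stand-in for the scale filter f_p)."""
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal(64)                  # latent sharp signal
k = gauss1d(4, 1.5)                          # blur kernel (unknown in practice)
noise = 0.1 * rng.standard_normal(64 + len(k) - 1)
y = np.convolve(k, x) + noise                # blurry, noisy observation

f = gauss1d(6, 2.0)                          # low-pass scale filter f_p
y_filtered = np.convolve(f, y)               # left side of (2): f_p * y
y_model = np.convolve(np.convolve(f, k), x) + np.convolve(f, noise)  # k_p*x + n_p
print(np.allclose(y_filtered, y_model))
print(np.std(np.convolve(f, noise)), "<", np.std(noise))
```

The two sides agree to machine precision, and the standard deviation of the filtered noise n_p is visibly below that of n, which is exactly the noise-suppression effect the text describes.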
Figure 2: Multi-Scale Blind Sparse Recovery. Signal structures of different scales are recovered at different scales: large scale structures are recovered first and small structures later. Top: original signal and blur kernel. Bottom: the recovered signal and blur kernel progressively across different scales (scale-4 to scale-1 denotes the coarsest to the finest (original) scale). The blur kernel at the i-th scale is initialized with the solution from the (i−1)-th scale.

3 The Proposed Approach

To facilitate subsequent analysis, we first introduce the definition of scale space [15, 4]:

Definition 1 For an image x, its scale-space representation corresponding to a Gaussian filter G_s is defined by the convolution G_s ∗ x, where the variance s is referred to as the scale parameter.

Without loss of clarity, we also refer to the different scale levels as different scale spaces in the sequel. Natural images have a multi-scale property, meaning that different scale levels reveal different scales of image structures. According to Theorem 1, different scale spaces may play different roles in kernel estimation, due to the different recoverability of the signal components in the corresponding scale spaces. We propose a new framework for blind deblurring by introducing a variable scale filter, which defines the scale space in which the blind estimation process operates.
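The scale-space representation of Definition 1 can be sketched in a few lines. This 1-D sketch is our own illustration; it parametrizes the Gaussian by its standard deviation rather than the variance s of Definition 1, and keeps every level at the original resolution, as the proposed method does.

```python
import numpy as np

def gauss1d(sigma):
    """Normalized 1-D Gaussian with radius ~3*sigma (stand-in for G_s)."""
    r = max(1, int(round(3 * sigma)))
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def scale_space(x, scales):
    """Scale-space representation per Definition 1: one smoothed copy
    G_s * x per scale parameter, all at the original resolution."""
    return [np.convolve(x, gauss1d(s), mode="same") for s in scales]

# An impulse makes the effect visible: coarser scales spread the unit
# mass over a wider support, so the peak height drops monotonically.
x = np.zeros(101)
x[50] = 1.0
levels = scale_space(x, scales=[1.0, 2.0, 4.0])
print([round(lv.max(), 3) for lv in levels])
```

Each level preserves the total mass of the signal but reveals progressively larger structures only, which is precisely why different scale spaces have different recoverability properties under Theorem 1.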
With the scale filter, it is straightforward to come up with a blur estimation procedure similar to the conventional coarse-to-fine estimation by constructing an image pyramid. However, we operate deblurring in a space with the same spatial resolution as the original image, rather than in a downscaled space as conventionally done. Therefore, it avoids the additional estimation error caused by interpolation between spatial scales in the pyramid. To mitigate the problem of structure smoothing, we incorporate the knowledge about the filter into the deblurring model, which is different from using filtering simply as a pre-processing step. More importantly, we can formulate the deblurring problem in multiple scale spaces in this way, and learn the contribution of each scale space adaptively for each input image.

3.1 Scale-Space Blind Deblurring Model

Our task is to recover k and x from the filtered observation y_p, obtained via (2) with a known scale filter f_p. The model is derived in the derivative domain, and we use x ∈ R^m and y_p ∈ R^n to denote the lexicographically ordered sharp and (filtered) blurry image derivatives, respectively.^2 The final deblurred image is recovered via a non-blind deblurring step with the estimated blur kernel [26]. From the modified observation model (2), we obtain the following likelihood:

  p(y_p | x, k, λ) ∝ exp( −∥f_p ∗ y − f_p ∗ k ∗ x∥_2^2 / (2λ) ) = exp( −∥y_p − k_p ∗ x∥_2^2 / (2λ) ),   (3)

where λ is the variance of the Gaussian noise. Maximum likelihood estimation using (3) is ill-posed, and further regularization over the unknowns is required. We use a parametrized Gaussian prior for x, p(x) = Π_i p(x_i) ∝ Π_i N(x_i; 0, γ_i), where the unknown scale variables γ = [γ_1, γ_2, ...] are closely related to the sparsity of x; they will be estimated jointly with the other variables.
Rather than computing the Maximum A Posteriori (MAP) solution, which typically requires empirical tricks to succeed [16, 2], we use type-II maximum likelihood estimation following [13, 21, 25], marginalizing over the latent image and maximizing over the other unknowns:

  max_{γ,k,λ≥0} ∫ p(y_p | x, k, λ) p(x) dx  ≡  min_{γ,k,λ≥0}  y_p^T Σ_p^{-1} y_p + log |Σ_p|,   (4)

where Σ_p ≜ λI + H_p Γ H_p^T, H_p is the convolution matrix of k_p, and Γ ≜ diag[γ]. Using standard linear algebra techniques together with an upper bound on Σ_p,^3 we can reformulate (4) as follows [21]:

  min_{λ,k≥0,x}  (1/λ) ∥f_p ∗ y − f_p ∗ k ∗ x∥_2^2 + r_p(x, k, λ) + (n − m) log λ,
  with  r_p(x, k, λ) ≜ Σ_i min_{γ_i}  x_i^2/γ_i + log(λ + γ_i ∥k_p∥_2^2),   (5)

which now resembles a typical regularized-regression formulation for blind deblurring when f_p is eliminated. The proposed objective function has one interesting property, stated in the following.

Theorem 2 (Scale Space Blind Deblurring) Taking f_p as a Gaussian filter, solving (5) essentially achieves estimation of x and k in the scale space defined by f_p, given y in the original space.

In essence, Theorem 2 reveals the equivalence between performing blind deblurring on y directly while constraining x and k to a certain scale space, and solving the proposed model (5) with the aid of the additional filter f_p. This places the proposed model (5) on a sound theoretical footing.

Cascaded Scale-Space Blind Deblurring. If the blur kernel k has a clear cut-off frequency and the target signal contains structures at distinct scales, then we can suppress the structures with scale smaller than k using a properly designed scale filter f_p according to Theorem 1, and then solve (5) for kernel estimation. However, in practice, blur kernels are typically non-parametric and of complex form, and therefore do not have a clear cut-off frequency.

^2 The derivative filters used in this work are {[−1, 1], [−1, 1]^T}.
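The scalar minimization over γ_i inside r_p in (5) admits a closed form: setting the derivative of x_i^2/γ + log(λ + γ∥k_p∥_2^2) to zero gives a quadratic in γ with one positive root. The stationary point below is our own derivation (not stated in the paper), checked numerically against a grid search.

```python
import numpy as np

def inner_objective(gamma, x2, lam, k2):
    """Scalar objective from (5): x_i^2/gamma + log(lam + gamma * ||k_p||^2)."""
    return x2 / gamma + np.log(lam + gamma * k2)

def gamma_star(x2, lam, k2):
    """Positive root of -x2/g^2 + k2/(lam + g*k2) = 0, i.e. of
    k2*g^2 - x2*k2*g - x2*lam = 0 (our derivation, verified below)."""
    return 0.5 * (x2 + np.sqrt(x2**2 + 4.0 * x2 * lam / k2))

x2, lam, k2 = 0.7, 0.1, 2.0
grid = np.linspace(1e-4, 10.0, 200_001)
g_grid = grid[np.argmin(inner_objective(grid, x2, lam, k2))]
print(g_grid, gamma_star(x2, lam, k2))
```

The objective diverges as γ → 0+ and grows as γ → ∞, so this interior stationary point is the unique minimizer; this is what makes the per-coordinate updates for r_p cheap.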
Moreover, natural images have a multi-scale property, meaning that different scale spaces reveal different image structures. All these facts suggest that it is not easy to select a fixed scale filter f_p a priori, and call for a variable scale filter. Nevertheless, based on the basic point that large scale structures are more advantageous than small scale structures for kernel estimation, a natural idea is to perform (5) separately at different scales, and pick the best estimation as the output. While this is an appealing idea, it is not applicable in practice due to the non-availability of the ground truth, which would be required for evaluating the estimation quality. A more practical approach is to perform (5) in a cascaded way, starting the estimation from a large scale and then reducing the scale for the next cascade. The kernel estimate from the previous scale is used as the starting point for the next one. With this scheme, the blur kernel is refined along with the resolution of the scale space, and may become accurate enough before reaching the finest resolution level, as shown in Figure 2 for a 1D example. The latent sparse signal in this example contains 4 point sources, with a minimum separation distance of 2, which is smaller than the support of the blur kernel. It is observed that some large elements of the blur kernel are recovered first and the smaller ones appear later at a smaller scale. It can also be noticed that the kernel estimation is already fairly accurate before reaching the finest scale (i.e., the original pixel-level representation). In this case, the final estimation at the last scale is fairly stable given the initialization from the previous scale. However, performing blind deblurring by solving (5) at the original scale directly (i.e., f_p ≡ δ) cannot achieve successful kernel estimation (results not shown).
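The cascade can be sketched as follows. To keep the example runnable we make the per-scale subproblem non-blind: the sharp signal x is given, so each scale reduces to a damped least-squares kernel update. This isolates the coarse-to-fine warm-start structure, but is of course much simpler than the actual joint estimation of x and k in (5); all names are ours.

```python
import numpy as np

def gauss1d(sigma):
    r = max(1, int(round(3 * sigma)))
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def conv_matrix(x, klen, out_len):
    """Matrix X with X @ k == np.convolve(x, k)[:out_len]."""
    X = np.zeros((out_len, klen))
    for j in range(klen):
        seg = x[: out_len - j]
        X[j : j + len(seg), j] = seg
    return X

def cascade_estimate_kernel(y, x, klen, sigmas):
    """Coarse-to-fine kernel estimation: at each scale, filter the data
    with f_p, take one damped least-squares step warm-started at the
    previous scale's kernel, and project back onto nonnegative,
    unit-sum kernels."""
    k = np.ones(klen) / klen                   # flat initialization
    for s in sigmas:                           # coarse -> fine
        f = gauss1d(s)
        yp = np.convolve(f, y)                 # f_p * y
        xp = np.convolve(f, x)                 # absorb f_p into the design
        X = conv_matrix(xp, klen, len(yp))
        step = np.linalg.lstsq(X, yp - X @ k, rcond=None)[0]
        k = k + 0.8 * step                     # damped update from warm start
        k = np.clip(k, 0.0, None)
        k /= k.sum()
    return k

rng = np.random.default_rng(0)
x = rng.standard_normal(80)
k_true = gauss1d(1.2)
y = np.convolve(k_true, x)                     # noise-free blurry signal
k_est = cascade_estimate_kernel(y, x, klen=len(k_true), sigmas=[3.0, 1.5, 0.8])
print(np.max(np.abs(k_est - k_true)))
```

Coarser scales produce badly conditioned designs that recover only the low-frequency part of the kernel, and each finer scale refines the warm-started estimate, mirroring the behavior in Figure 2.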
A similar strategy of constructing an image pyramid has been applied successfully in many recent deblurring methods [6, 16, 2, 22, 8, 25]. It is important to emphasize that the main purpose of our scale-space perspective is more to provide complementary analysis and understanding of the empirical coarse-to-fine approach in blind deblurring algorithms than to replace it. More discussion on this point is provided in Section 4. Nevertheless, the proposed alternative approach can achieve performance on par with state-of-the-art methods, as shown in Figure 4. More importantly, this alternative formulation offers us a number of extra dimensions for generalization, such as extensions to noise-robust kernel estimation and scale-adaptive estimation, as shown in the next section.

3.2 Scale-Adaptive Deblurring via Tied Scale-Space Estimation

In the above cascade procedure, a single filter f_p is used at each step in a greedy way. Instead, we can define a set of scale filters P ≜ {f_p}_{p=1}^P, apply each of them to the observed image y to get a set of filtered observations {y_p}_{p=1}^P, and then tie the estimation across all scales with the shared latent sharp image x. By constructing P as a set of Gaussian filters with decreasing radius, this is equivalent to performing blind deblurring in different scale spaces. A large scale space is more robust to image noise, and thus is more effective in stabilizing the estimation; however, only large scale

^3 log |Σ_p| ≤ Σ_i log(λ + γ_i ∥k_p∥_2^2) + (n − m) log λ [25].
structures are "visible" (recoverable) in this space. A small scale space offers the potential to recover more fine details, but is less robust to image noise. By conducting deblurring in multiple scale spaces simultaneously, we can exploit the complementary properties of different scales for robust blind deblurring in a unified framework. Furthermore, since different scales may contribute differently to the kernel estimation, we use a distinct noise-level parameter λ_p for each scale, which reflects the relative contribution of that scale to the estimation.

Figure 3: Scale Adaptive Contribution Learning for a set of 25 Gaussian filters with radius r ∈ (0, 5] on the first image of [14], shown at iterations 1, 3 and 15. Left: without additive noise. Right: with 5% additive noise. The values in the heat-map represent the contribution weight (λ_p^{-1}) for each scale filter during the iterations. The table on the right shows the performance (SSD error) of blind deblurring with different scales: original scale (org.scale), empirically optimal scale (opt.scale), multiple scales with uniform contribution weights (uni.scale), and multiple scales with adaptive weights (adaptive):

  estimation error | org.scale | opt.scale | uni.scale | adaptive
  w/o noise        |   101.9   |    43.8   |    39.4   |    36.7
  5% noise         |   316.3   |    63.2   |    77.6   |    46.4
Concretely, the final cost function can be obtained by accumulating the cost function (5) over all P filtered observations, with adaptive noise parameters:^4

  min_{{λ_p},k≥0,x}  Σ_{p=1}^P (1/λ_p) ∥f_p ∗ y − f_p ∗ k ∗ x∥_2^2 + R(x, k, {λ_p}) + (n − m) Σ_p log λ_p,
  where  R(x, k, {λ_p}) = Σ_p r_p(x, k, λ_p) = Σ_{p,i} min_{γ_i}  x_i^2/γ_i + log(λ_p + γ_i ∥k_p∥_2^2).   (6)

The penalty function R here is in effect a penalty term that exploits the multi-scale regularity/consistency of the solution space. The effectiveness of the proposed approach compared to other methods is illustrated in Figure 1, and more results are provided in Section 5. By formulating the deblurring problem as (6), our joint estimation framework enjoys a number of features that are particularly appropriate for blind deblurring in the presence of noise and small scale image structures: (i) it exploits both the regularization of sharing the latent sharp image x across all filtered observations and the knowledge about the set of filters {f_p}; in this way, k is recovered directly, without post-processing as in previous work [26]; (ii) the proposed approach can be extended to handle non-uniform blur, as discussed in Section 3.3; and (iii) there is no inherent limitation on the form of the filters we can use besides Gaussian filters, e.g., we can also use directional filters as in [26].

Scale Adaptiveness. With this cost function, the contribution of each filtered observation y_p constructed by f_p is reflected by the weight λ_p^{-1}. The parameters {λ_p^{-1}} are initialized uniformly across all filters and are then learned automatically during the kernel estimation process. In this scenario, a smaller noise-level estimate indicates a larger contribution to the estimation. It is natural to expect that the distribution of the contribution weights for the same set of filters will change under different input noise levels, as shown in Figure 3.
From the figure, we obtain a number of interesting observations:

• The proposed algorithm is adaptive to observations with different noise levels. As we can see, filters with smaller radius contribute more in the noise-free case, while in the noisy case, filters with larger radius contribute more.

• The distribution of the contribution weights evolves during the iterative estimation process. For example, in the noise-free case, starting with uniform weights, the middle-scale filters contribute the most at the beginning of the iterations, while smaller-scale filters contribute more to the estimation later on, a natural coarse-to-fine behavior. Similar trends can also be observed for the noisy case.

• While it is expected that the original scale space is not the "optimal" scale for kernel estimation in the presence of noise, it is somewhat surprising to find that this is also the case in the noise-free case. This corroborates previous findings that small scale structures are harmful to kernel estimation [22]; our algorithm automatically learns the scale space to suppress the effects of small scale structures.

• The weight distribution is flatter in the noise-free case, while it is more peaked in the noisy case.

Figure 3 is obtained with the first kernel and image in Figure 4. Similar properties can be observed for different images/blurs, although the position of the empirical mode is unlikely to be the same. The table in Figure 3 shows the estimation error using different scale-space configurations.

^4 This can be achieved either in an online fashion or in one shot.

Figure 4: Blind Deblurring Results: Noise-free Case. (a) Performance comparison (image estimation error) on the benchmark dataset [14], which contains (b) 8 blur kernels and (c) 4 images.
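This noise-adaptive weighting can be mimicked with a simple residual-based heuristic: with the correct kernel, the residual at scale p is exactly the filtered noise f_p ∗ n, so stronger low-pass filters see smaller residuals and receive larger weights λ_p^{-1}. The 1-D sketch below (our own naming) illustrates that mechanism only; the paper's actual updates come from the type-II objective (6).

```python
import numpy as np

def gauss1d(sigma):
    r = max(1, int(round(3 * sigma)))
    t = np.arange(-r, r + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

def scale_noise_levels(y, x, k, filters):
    """Per-scale noise levels lambda_p as the mean squared residual of the
    current (x, k) under each filtered observation; the contribution of
    scale p is then proportional to 1/lambda_p."""
    lams = []
    for f in filters:
        residual = np.convolve(f, y) - np.convolve(np.convolve(f, k), x)
        lams.append(np.mean(residual**2) + 1e-12)   # guard against zero
    return np.array(lams)

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
k = gauss1d(1.5)
noise = 0.1 * rng.standard_normal(1000 + len(k) - 1)
y = np.convolve(k, x) + noise                # noisy blurry observation
filters = [gauss1d(s) for s in (0.5, 2.0, 6.0)]
lams = scale_noise_levels(y, x, k, filters)
weights = (1.0 / lams) / (1.0 / lams).sum()  # normalized contributions
print(weights)
```

With the true kernel, the residual at each scale is f_p ∗ n exactly, so the largest-radius filter ends up with the largest weight, matching the noisy-case behavior in Figure 3.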
Blind deblurring in the original space directly (org.scale) fails, as indicated by the large estimation error. However, when setting the filter to f_o, whose contribution λ_o^{-1} is empirically the largest among all filters (opt.scale), the performance is much better than in the original scale directly, with the estimation error reduced significantly. The proposed method, tying multiple scales together and learning adaptive contribution weights (adaptive), performs the best across all configurations, especially in the noisy case.

3.3 Non-Uniform Blur Extension

The extension of the uniform blind deblurring model proposed above to the non-uniform blur case is achieved by using a generalized observation model [18, 9], representing the blurry image as a summation of differently transformed versions of the latent sharp image:

  y = Hx + n = Σ_j w_j P_j x + n = Dw + n.

Here P_j is the j-th projection or homography operator (a combination of rotations and translations) and w_j is the corresponding combination weight, representing the proportion of time spent at that particular camera pose during exposure. D = [P_1 x, P_2 x, ..., P_j x, ...] denotes the dictionary constructed by projectively transforming x with a set of transformation operators, and w ≜ [w_1, w_2, ...]^T denotes the combination weights of the blurry image over the dictionary. The uniform convolutional model (1) is obtained by restricting {P_j} to translations only. With derivations similar to those in Section 3.1, it can be shown that the cost function for the general non-uniform blur case is

  min_{{λ_p},w≥0,x}  Σ_{p=1}^P (1/λ_p) ∥y_p − H_p x∥_2^2 + Σ_{p,i} min_{γ_i}  x_i^2/γ_i + log(λ_p + γ_i ∥h_{ip}∥_2^2) + (n − m) Σ_p log λ_p,   (7)

where H_p ≜ F_p Σ_j w_j P_j is the compound operator incorporating both the additional filter and the non-uniform blur, F_p is the convolution matrix form of f_p, and h_{ip} denotes the effective compound local kernel at site i in the image plane, constructed from w and the set of transformation operators.
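The dictionary construction D = [P_1 x, P_2 x, ...] can be sketched in 1-D for the translation-only special case, where the model provably reduces to uniform convolution. Function and variable names are ours.

```python
import numpy as np

def translation_dictionary(x, shifts):
    """Dictionary D for the generalized model y = D w + n, with the
    operators P_j restricted to integer translations (zero-padded).
    In this special case D w coincides with convolving x by the
    kernel formed from the weights w."""
    n = len(x)
    D = np.zeros((n, len(shifts)))
    for col, s in enumerate(shifts):
        if s >= 0:
            D[s:, col] = x[: n - s]
        else:
            D[: n + s, col] = x[-s:]
    return D

rng = np.random.default_rng(0)
x = rng.standard_normal(50)
shifts = [0, 1, 2]
w = np.array([0.5, 0.3, 0.2])            # camera-pose weights, summing to 1
D = translation_dictionary(x, shifts)
y = D @ w                                 # non-uniform model, translation-only
print(np.allclose(y, np.convolve(x, w)[: len(x)]))
```

Because D is linear in the pose weights, w can be recovered from (x, y) by least squares; in the full model the homographies P_j play the role of the shifts, and w and x are estimated jointly via (7).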
4 Discussions
We discuss the relationship of the proposed approach with several recent methods, to help further clarify its properties.
Image Pyramid based Blur Kernel Estimation. Since the blind deblurring work of Fergus et al. [6], the image pyramid has been widely used as a standard architecture for blind deblurring [16, 2, 8, 22, 13, 25]. The image pyramid is constructed by resizing the observed image with a fixed ratio multiple times until reaching a scale where the corresponding kernel is very small, e.g. 3 × 3. The blur kernel is then first estimated from the smallest image and upscaled to initialize the next level. This process is repeated until the last level is reached.
Figure 5: Deblurring results in the presence of noise on the benchmark dataset [14]. Performance averaged over (a) different images and (b) different kernels, with 5% additive Gaussian noise. (c) Comparison of the proposed method with Levin et al. [13], Zhang et al. [25], Zhong et al. [26] on the first image with the first kernel, under different noise levels.
While it is effective for exploiting the solution space, this greedy pyramid construction does not provide an effective way to handle image noise. Our formulation not only retains properties similar to the pyramid coarse-to-fine estimation, but also offers the extra flexibility to achieve scale-adaptive estimation, which is robust to noise and small scale structures.
Noise-Robust Blind Deblurring [17, 26]. Based on the observation that using denoising as a preprocessing step can help with blur kernel estimation in the presence of noise, Tai et al.
[17] proposed to perform denoising and kernel estimation alternately, by incorporating an additional image penalty function designed specifically to take the blur kernel into account [17]. This approach uses separate penalty terms and introduces additional balancing parameters. Our proposed model, by contrast, has a coupled penalty function and learns the balancing parameters from the data. Moreover, the proposed model can be generalized to non-uniform blur in a straightforward way. Another recent method [26] performs blind kernel estimation on images filtered with different directional filters separately and then reconstructs the final kernel in a second step via the inverse Radon transform [26]. This approach is only applicable to uniform blur and directional auxiliary filters. Moreover, it treats each filtered observation independently and thus may introduce additional errors in the second kernel reconstruction step, due to factors such as mis-alignment between the estimated compound kernels.
Small Scale Structures in Blur Kernel Estimation [22, 2]. Based on the observation that small scale structures are harmful for kernel estimation, Xu and Jia [22] designed an empirical approach for structure selection based on gradient magnitudes. Structure selection has also been incorporated into blind deblurring in various other forms before, such as gradient thresholding [2, 16]. However, it is hard to determine a universal threshold for different images and kernels. Other techniques such as image decomposition have also been used [24], where the observed blurry image is decomposed into structure and texture layers. However, standard image decomposition techniques do not consider image blur, and thus might not work well in the presence of blur. Another issue for this approach is, again, the selection of the parameter for separating texture from structure, which is image dependent in general.
The proposed method achieves robustness to small scale structures by optimizing the scale contribution weights jointly with blind deblurring, in an image-adaptive way. The optimization techniques used in this paper have been used before for image deblurring [13, 21, 25], in different contexts and with different motivations.
5 Experimental Results
We perform extensive experiments in this section to evaluate the performance of the proposed method against several state-of-the-art blind deblurring methods, including the two recent noise-robust deblurring methods of Tai et al. [17] and Zhong et al. [26], as well as the non-uniform deblurring method of Xu et al. [23]. We construct {f_p} as Gaussian filters, with the radius uniformly sampled over a specified range, which is typically set as [0.1, 3] in the experiments.5 The number of iterations is used as the stopping criterion and is fixed to 15 in practice.
Evaluation using the Benchmark Dataset of Levin et al. [14]. We first perform evaluation on the benchmark dataset of Levin et al. [14], containing 4 images and 8 blur kernels, leading to 32 blurry images in total (see Figure 4). Performance for the noise-free case is reported in Figure 4, where the proposed approach performs on par with the state of the art.
5The number of filters P should be large enough to characterize the scale space. We typically set P = 7.
Figure 6: Deblurring results on images with non-uniform blur, compared with Tai et al. [17], Zhong et al. [26] and Xu et al. [23]. Full images are shown in the supplementary file.
To evaluate the performance of different methods in the presence of noise, we add i.i.d. Gaussian noise to the blurry images, and then perform kernel estimation. The estimated kernels are used for non-blind deblurring [12] on the noise-free blurry images.
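The auxiliary Gaussian filter bank {f_p} described above can be sketched as follows. The mapping from the sampled "radius" to the Gaussian's standard deviation, and the 3-sigma truncation, are our assumptions for illustration; the paper's exact parameterization may differ.

```python
import numpy as np

def gaussian_kernel_2d(sigma):
    """Isotropic 2-D Gaussian, truncated at 3*sigma, normalized to sum to 1."""
    r = max(1, int(np.ceil(3 * sigma)))
    ax = np.arange(-r, r + 1)
    g = np.exp(-ax ** 2 / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def build_filter_bank(P=7, radius_range=(0.1, 3.0), seed=0):
    """Sample P radii uniformly over the range (radius used directly as sigma here)."""
    rng = np.random.default_rng(seed)
    radii = rng.uniform(radius_range[0], radius_range[1], size=P)
    return [gaussian_kernel_2d(s) for s in radii]

bank = build_filter_bank()
```

Each filtered observation y_p in the cost function (7) would then be the convolution of the blurry image with one kernel from this bank.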
The bar plots in Figure 5 show the sum-of-squared-difference (SSD) error of the deblurred images using the proposed method and the method of Zhong et al. [26] when the noise level is 5%. As the same non-blind deblurring method is used, this SSD error reflects the quality of the kernel estimation. It is clear that the proposed method performs better than the method of Zhong et al. [26] overall. We also show the results of different methods with increasing noise levels in Figure 5. It is observed that while the conventional methods (e.g. Levin et al. [13], Zhang et al. [25]) perform well when the noise level is low, their performance degrades rapidly as the noise level increases. The method of Zhong et al. [26] performs more robustly across different noise levels, but does not perform as well as the other methods when the noise level is very low. This might be caused by the loss of information during its two-step process. The proposed method outperforms the other methods at all noise levels, demonstrating its effectiveness.
Deblurring on Real-World Images. We further evaluate the performance of the proposed method on real-world images from the literature [17, 7, 8]. The results are shown in Figure 6. For the Kyoto image from [17], the deblurred image of Tai et al. [17] has some ringing artifacts, while the result of Zhong et al. [26] has ghosting effects due to inaccurate kernel estimation. The deblurred image from the proposed method has neither ghosting nor strong ringing artifacts. For the other two test images, the non-uniform deblurring method [23] produces deblurred images that are still very blurry, as it yields kernel estimates close to a delta kernel for both images, due to the presence of noise. The method of Zhong et al. [26] can only handle uniform blur, and its deblurred images have strong ringing artifacts.
The proposed method can estimate the non-uniform blur accurately and produces high-quality deblurring results, better than those of the other methods.
6 Conclusion
We present an analysis of the blind deblurring approach from the scale-space perspective. The novel analysis not only helps in understanding several empirical techniques widely used in the blind deblurring literature, but also inspires new extensions. Extensive experiments on a benchmark dataset as well as real-world images verify the effectiveness of the proposed method. For future work, we would like to investigate the extension of the proposed approach in several directions, such as blind image denoising and multi-scale dictionary learning. The task of learning the auxiliary filters in a blur- and image-adaptive fashion is another interesting future research direction.
Acknowledgement The research was supported in part by Adobe Systems.
References
[1] E. J. Candès and C. Fernandez-Granda. Towards a mathematical theory of super-resolution. CoRR, abs/1203.5871, 2012.
[2] S. Cho and S. Lee. Fast motion deblurring. In SIGGRAPH ASIA, 2009.
[3] C. Ekanadham, D. Tranchina, and E. P. Simoncelli. A blind sparse deconvolution method for neural spike identification. In NIPS, 2011.
[4] J. H. Elder and S. W. Zucker. Local scale control for edge detection and blur estimation. IEEE Trans. Pattern Anal. Mach. Intell., 20(7):699–716, 1998.
[5] Z. Farbman, R. Fattal, D. Lischinski, and R. Szeliski. Edge-preserving decompositions for multi-scale tone and detail manipulation. In SIGGRAPH, 2008.
[6] R. Fergus, B. Singh, A. Hertzmann, S. T. Roweis, and W. T. Freeman. Removing camera shake from a single photograph. In SIGGRAPH, 2006.
[7] A. Gupta, N. Joshi, C. L. Zitnick, M. Cohen, and B. Curless. Single image deblurring using motion density functions. In ECCV, 2010.
[8] S. Harmeling, M. Hirsch, and B. Schölkopf. Space-variant single-image blind deconvolution for removing camera shake. In NIPS, 2010.
[9] M. Hirsch, C. J. Schuler, S.
Harmeling, and B. Schölkopf. Fast removal of non-uniform camera shake. In ICCV, 2011.
[10] Y. Karklin and E. P. Simoncelli. Efficient coding of natural images with a population of noisy linear-nonlinear neurons. In NIPS, 2011.
[11] D. Krishnan, T. Tay, and R. Fergus. Blind deconvolution using a normalized sparsity measure. In CVPR, 2011.
[12] A. Levin, R. Fergus, F. Durand, and W. T. Freeman. Deconvolution using natural image priors. Technical report, MIT, 2007.
[13] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Efficient marginal likelihood optimization in blind deconvolution. In CVPR, 2011.
[14] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman. Understanding blind deconvolution algorithms. IEEE Trans. Pattern Anal. Mach. Intell., 33(12):2354–2367, 2011.
[15] T. Lindeberg and B. M. H. Romeny. Linear scale-space: I. Basic theory, II. Early visual operations. In Geometry-Driven Diffusion in Computer Vision, 1994.
[16] Q. Shan, J. Jia, and A. Agarwala. High-quality motion deblurring from a single image. In SIGGRAPH, 2008.
[17] Y.-W. Tai and S. Lin. Motion-aware noise filtering for deblurring of noisy and blurry images. In CVPR, pages 17–24, 2012.
[18] Y.-W. Tai, P. Tan, and M. S. Brown. Richardson-Lucy deblurring for scenes under a projective motion path. IEEE Trans. Pattern Anal. Mach. Intell., 33(8):1603–1618, 2011.
[19] C. Tomasi and R. Manduchi. Bilateral filtering for gray and color images. In ICCV, 1998.
[20] D. Tschumperlé and R. Deriche. Vector-valued image regularization with PDEs: A common framework for different applications. IEEE Trans. Pattern Anal. Mach. Intell., 27(4):506–517, 2005.
[21] D. P. Wipf and H. Zhang. Revisiting Bayesian blind deconvolution. CoRR, abs/1305.2362, 2013.
[22] L. Xu and J. Jia. Two-phase kernel estimation for robust motion deblurring. In ECCV, 2010.
[23] L. Xu, S. Zheng, and J. Jia. Unnatural L0 sparse representation for natural image deblurring. In CVPR, 2013.
[24] Y. Xu, X. Hu, L. Wang, and S. Peng.
Single image blind deblurring with image decomposition. In ICASSP, 2012.
[25] H. Zhang and D. Wipf. Non-uniform camera shake removal using a spatially adaptive sparse penalty. In NIPS, 2013.
[26] L. Zhong, S. Cho, D. Metaxas, S. Paris, and J. Wang. Handling noise in single image deblurring using directional filters. In CVPR, 2013.
Parallel Feature Selection inspired by Group Testing
Yingbo Zhou∗ Utkarsh Porwal∗ CSE Department SUNY at Buffalo {yingbozh, utkarshp}@buffalo.edu
Ce Zhang CS Department University of Wisconsin-Madison czhang@cs.wisc.edu
Hung Ngo CSE Department SUNY at Buffalo hungngo@buffalo.edu
XuanLong Nguyen EECS Department University of Michigan xuanlong@umich.edu
Christopher Ré CS Department Stanford University chrismre@cs.stanford.edu
Venu Govindaraju CSE Department SUNY at Buffalo govind@buffalo.edu
Abstract
This paper presents a parallel feature selection method for classification that scales up to very high dimensions and large data sizes. Our original method is inspired by group testing theory, under which the feature selection procedure consists of a collection of randomized tests to be performed in parallel. Each test corresponds to a subset of features, for which a scoring function may be applied to measure the relevance of the features in a classification task. We develop a general theory providing sufficient conditions under which true features are guaranteed to be correctly identified. Superior performance of our method is demonstrated on a challenging relation extraction task from a very large data set that has both redundant features and a sample size in the order of millions. We present comprehensive comparisons with state-of-the-art feature selection methods on a range of data sets, for which our method exhibits competitive performance in terms of running time and accuracy. Moreover, it also yields substantial speedups when used as a pre-processing step for most other existing methods.
1 Introduction
Feature selection (FS) is a fundamental and classic problem in machine learning [10, 4, 12]. In classification, FS is the following problem: given a universe U of possible features, identify a subset of features F ⊆ U such that using the features in F one can build a model to best predict the target class.
The set F influences not only the model's accuracy and computational cost, but also the ability of an analyst to understand the resulting model. In applications such as gene selection from micro-array data [10, 4], text categorization [3], and finance [22], U may contain hundreds of thousands of features from which one wants to select only a small handful for F.
While the overall goal is to have an FS method that is both computationally efficient and statistically sound, natural formulations of the FS problem are known to be NP-hard [2]. For large scale data, scalability is a crucial criterion, because FS often serves not as an end in itself but as a means to subsequent, more sophisticated learning. In reality, practitioners often resort to heuristic methods, which can broadly be categorized into three types: wrapper, embedded, and filter [10, 4, 12]. In the wrapper method, a classifier is used as a black box to test on any subset of features. In filter methods no classifier is used; instead, features are selected based on generic statistical properties of the (labeled) data, such as mutual information and entropy. Embedded methods have built-in mechanisms for FS as an integral part of classifier training.
∗ denotes equal contribution
Devising a mathematically rigorous framework to explain and justify FS heuristics is an emerging research area. Recently, Brown et al. [4] considered common FS heuristics using a formulation based on conditional likelihood maximization. The primary contribution of this paper is a new framework for parallelizable feature selection, which is inspired by the theory of group testing. By exploiting parallelism in our test design we obtain an FS method that is easily scalable to millions of features and samples or more, while preserving useful statistical properties in terms of classification accuracy, stability and robustness.
Recall that group testing is a combinatorial search paradigm [7] in which one wants to identify a small subset of "positive items" from a large universe of possible items. In the original application, items are blood samples of WWII draftees and an item is positive if it is infected with syphilis. Testing individual blood samples is very expensive; the group testing approach is to distribute samples into pools in a smart way. If a pool tests negative, then all samples in the pool are negative. On the other hand, if a pool tests positive, then at least one sample in the pool is positive. We can think of the FS problem in the group testing framework: there is a presumably small, unknown subset F of relevant features in a large universe of N features. Both FS and group testing algorithms perform the same basic operation: apply a "test" to a subset T of the underlying universe; this test produces a score, s(T), that is designed to measure the quality of the features T (or return positive/negative in the group testing case). From the collection of test scores the relevant features are supposed to be identified. Most existing FS algorithms can be thought of as sequential instantiations in this framework1: we select the set T to test based on the scores of previous tests. For example, let X = (X_1, . . . , X_N) be a collection of features (variables) and Y be the class label. In the joint mutual information (JMI) method [25], the feature set T is grown sequentially by adding one feature at each iteration. The next feature's score, s(X_k), is defined relative to the set of features already selected in T: s(X_k) = \sum_{X_j \in T} I(X_k, X_j; Y). As each such scoring operation takes a non-negligible amount of time, a sequential method may take a long time to complete. A key insight is that group testing need not be done sequentially.
With a good pooling design, all the tests can be performed in parallel, where the pooling design is determined without knowing any pool's test outcome. From the vector of test outcomes, one can identify exactly the collection of positive blood samples. Parallel group testing, commonly called non-adaptive group testing (NAGT), is a natural paradigm and has found numerous applications in many areas of mathematics, computer science, and biology [18]. It is natural to wonder whether a "parallel" FS scheme can be designed for machine learning in the same way NAGT was possible: all feature sets T are specified in advance, without knowing the scores of any other tests, and from the final collection of scores the features are identified. This paper initiates a mathematical investigation of this possibility. At a high level, our parallel feature selection (PFS) scheme has three inter-related components: (1) the test design, which indicates the collection of subsets of features to be tested; (2) the scoring function s : 2^[N] → R that assigns a score to each test; and (3) the feature identification algorithm that identifies the final selected feature set from the test scores. The design space is thus very large. Every combination of the three components leads to a new PFS scheme.2 We argue that PFS schemes are preferred over sequential FS for two reasons: 1. scalability: the tests in a PFS scheme can be performed in parallel, and thus the scheme can be scaled to large datasets using standard parallel computing techniques; and 2. stability: errors in individual trials do not affect PFS methods as dramatically as sequential methods. In fact, we will show in this paper that increasing the number of tests improves the accuracy of our PFS scheme. We propose and study one such PFS approach.
We show that our approach has empirical quality comparable to (and sometimes better than) that of previous heuristic approaches, while providing sound statistical guarantees and substantially improved scalability.
Our technical contributions We propose a simple approach for the first and the third components of a PFS scheme. For the second component, we prove a sufficient condition on the scoring function under which the feature identification algorithm we propose is guaranteed to identify exactly the set of original (true) features. In particular, we introduce a notion called C-separability, which roughly indicates the strength of the scoring function in separating a relevant feature from an irrelevant feature. We show that when s is C-separable and we can estimate s, we are able to guarantee exact recovery of the right set of features with high probability. Moreover, when C > 0, the number of tests can be asymptotically logarithmic in the number of features in U.
1A notable exception is the MIM method, which is easily parallelizable and can be regarded as a special implementation of our framework.
2It is important to emphasize that this PFS framework is applicable to both filter and wrapper approaches. In the wrapper approach, the score s(T) might be the training error of some classifier, for instance.
In theory, we provide sufficient conditions (a Naïve Bayes assumption) under which one can obtain separable scoring functions, including the KL divergence and mutual information (MI). In practice, we demonstrate that MI is separable even when the sufficient condition does not hold, and moreover, on generated synthetic data sets, our method is shown to recover exactly the relevant features. We proceed to provide a comprehensive evaluation of our method on a range of real-world data sets of both large and small sizes. It is on the large scale data sets where our method exhibits superior performance.
In particular, for a huge relation extraction data set (TAC-KBP) that has millions of redundant features and samples, we outperform all existing methods in accuracy and time, in addition to generating plausible features (in fact, many competing methods could not finish the execution). For the more familiar NIPS 2003 FS Challenge data, our method is also competitive (best or second-best) on the two largest data sets. Since our method hinges on the accuracy of score functions, which is difficult to achieve for small data, our performance is more modest in this regime (staying in the middle of the pack in terms of classification accuracy). Nonetheless, we show that our method can be used as a preprocessing step for other FS methods to eliminate a large portion of the feature space, thereby providing substantial computational speedups while retaining the accuracy of those methods.
2 Parallel Feature Selection
The general setting Let N be the total number of input features. For each subset T ⊆ [N] := {1, . . . , N}, there is a score s(T), normalized to be in [0, 1], that assesses the "quality" of the features in T. We select a collection of t tests, each of which is a subset T ⊆ [N], such that from the scores of all tests we can identify the unknown subset F of d relevant variables that are most important to the classification task. We encode the collection of t tests with a binary matrix A = (a_{ij}) of dimension t × N, where a_{ij} = 1 iff feature j belongs to test i. Corresponding to each row i of A is a "test score" s_i = s({j | a_{ij} = 1}) ∈ [0, 1]. Specifying A is called the test design; identifying F from the score vector (s_i)_{i∈[t]} is the job of the feature identification algorithm. The scheme is inherently parallel because all the tests must be specified in advance and executed in parallel; then the features are selected from all the test outcomes.
Test design and feature identification Our test design and feature identification algorithms are extremely simple.
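Concretely, the whole pipeline (random Bernoulli test design, independently computed test scores, rank-and-select identification) can be sketched in a few lines. The scoring function below is a toy stand-in used only to illustrate the mechanism, and the sizes N, d, t are made up.

```python
import numpy as np

def parallel_feature_selection(score_fn, N, d, t, p=0.5, seed=0):
    """Random t x N Bernoulli(p) design; rank rho(j) = <a_j, s>; return the d top-ranked features."""
    rng = np.random.default_rng(seed)
    A = (rng.random((t, N)) < p).astype(float)                   # test design
    s = np.array([score_fn(np.flatnonzero(row)) for row in A])   # all t tests are independent
    rho = A.T @ s                                                # rank of each feature
    return set(np.argsort(-rho)[:d].tolist())

# Toy separable score (an illustrative stand-in, not the paper's estimators):
# the fraction of the hidden features F present in the test set.
F = {2, 5, 9}
score = lambda T: len(F & set(T)) / len(F)
selected = parallel_feature_selection(score, N=20, d=3, t=500)
```

Because the rows of A are fixed in advance, the t score evaluations are embarrassingly parallel.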
We construct the test matrix A randomly by putting a feature in each test with probability p (to be chosen later). Then, from the test scores we rank the features and select the d top-ranked features. The ranking function is defined as follows. Given a t × N test matrix A, let a_j denote its j-th column. The dot product ⟨a_j, s⟩ is the total score of all the tests that feature j participates in. We define ρ(j) = ⟨a_j, s⟩ to be the rank of feature j with respect to the test matrix A and the score function s.
The scoring function The crucial piece stitching together the entire scheme is the scoring function. The following theorem explains why the above test design and feature identification strategy make sense, as long as one can choose a scoring function s that satisfies a natural separability property. Intuitively, separable scoring functions require that adding more hidden features to a test set increases its score.
Definition 2.1 (Separable scoring function). Let C ≥ 0 be a real number. The score function s : 2^[N] → [0, 1] is said to be C-separable if the following property holds: for every f ∈ F and \tilde{f} ∉ F, and for every T ⊆ [N] − {f, \tilde{f}}, we have s(T ∪ {f}) − s(T ∪ {\tilde{f}}) ≥ C.
In words, with a separable scoring function, adding a relevant feature should be better than adding an irrelevant feature to a given subset T of features. Due to space limitation, the proofs of the following theorem, propositions, and corollaries can be found in the supplementary materials. The essence of the idea is that, when s can separate relevant features from irrelevant features, with high probability a relevant feature will be ranked higher than an irrelevant feature. Hoeffding's inequality is then used to bound the number of tests.
Theorem 2.2. Let A be the random t × N test matrix obtained by setting each entry to be 1 with probability p ∈ [0, 1] and 0 with probability 1 − p.
If the scoring function s is C-separable, then the expected rank of a feature in F is at least the expected rank of a feature not in F. Furthermore, if C > 0, then for any δ ∈ (0, 1), with probability at least 1 − δ every feature in F has rank higher than every feature not in F, provided that the number of tests t satisfies
t \ge \frac{2}{C^2 p^2 (1-p)^2} \log\left( \frac{d(N-d)}{\delta} \right).   (1)
By setting p = 1/2 in the above theorem, we obtain the following. It is quite remarkable that, assuming we can estimate the scores accurately, we only need about O(log N) tests to identify F.
Corollary 2.3. Let C > 0 be a constant such that there is a C-separable scoring function s. Let d = |F|, where F is the set of hidden features. Let δ ∈ (0, 1) be an arbitrary constant. Then, there is a distribution of t × N test matrices A with t = O(log(d(N − d)/δ)) such that, by selecting a test matrix randomly from the distribution, the d top-ranked features are exactly the hidden features with probability at least 1 − δ.
Of course, in reality estimating the scores accurately is a very difficult problem, both statistically and computationally, depending on what the scoring function is. We elaborate more on this point below. But first, we show that separable scoring functions exist, under certain assumptions about the underlying distribution.
Sufficient conditions for separable scoring functions We demonstrate the existence of separable scoring functions given some sufficient conditions on the data. In practice, loss functions such as classification error and other surrogate losses may be used as scoring functions. For binary classification, information-theoretic quantities such as Kullback-Leibler divergence, Hellinger distance and the total variation (all of which are special cases of f-divergences [5, 1]) may also be considered. For multi-class classification, mutual information (MI) is a popular choice. The data pairs (X, Y) are assumed to be i.i.d. samples from a joint distribution P(X, Y).
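Before turning to these sufficient conditions, it is worth seeing numerically how mild bound (1) is; the values of C, d, N and δ below are hypothetical, chosen only to illustrate the logarithmic growth in N.

```python
import math

def num_tests(C, p, d, N, delta):
    """Bound (1): t >= 2 / (C^2 p^2 (1-p)^2) * log(d (N - d) / delta)."""
    return math.ceil(2.0 / (C ** 2 * p ** 2 * (1 - p) ** 2)
                     * math.log(d * (N - d) / delta))

# With p = 1/2 the leading constant is 32 / C^2, and t grows only
# logarithmically in the number of features N.
t_small = num_tests(C=0.5, p=0.5, d=10, N=10 ** 4, delta=0.01)
t_large = num_tests(C=0.5, p=0.5, d=10, N=10 ** 6, delta=0.01)
```

Increasing N by a factor of 100 adds only an additive (32/C^2) log 100 tests to the bound.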
The following result shows that under the so-called "naive Bayes" condition, i.e., all components of the random vector X are conditionally independent given the label variable Y, the Kullback-Leibler divergence is a separable scoring function in the binary classification setting:
Proposition 2.4. Consider the binary classification setting, i.e., Y ∈ {0, 1}, and assume that the naive Bayes condition holds. Define the score function to be the Kullback-Leibler divergence: s(T) := KL(P(X_T | Y = 0) ‖ P(X_T | Y = 1)). Then s is a separable scoring function. Moreover, s is C-separable, where C := \min_{f \in F} s(f).
Proposition 2.5. Consider the multi-class classification setting, and assume that the naive Bayes condition holds. Moreover, suppose that for any pair f ∈ F and \tilde{f} ∉ F, the following holds for any T ⊆ [N] − {f, \tilde{f}}:
I(X_f; Y) − I(X_f; X_T) ≥ I(X_{\tilde{f}}; Y) − I(X_{\tilde{f}}; X_T).
Then, the MI function s(T) := I(X_T; Y) is a separable scoring function.
We note the naturalness of the condition so required, as the quantity I(X_f; Y) − I(X_f; X_T) may be viewed as the relevance of feature f with respect to the label Y, subtracted by its redundancy with the existing features T. If we assume further that X_{\tilde{f}} is independent of both X_T and the label Y, and there is a positive constant C such that I(X_f; Y) − I(X_f; X_T) ≥ C for any f ∈ F, then s(T) is obviously a C-separable scoring function. It should be noted that the naive Bayes conditions are sufficient, but not necessary, for a scoring function to be C-separable.
Separable scoring functions for filters and wrappers. In practice, information-based scoring functions need to be estimated from the data. Consistent estimators of scoring functions such as the KL divergence (more generally, f-divergences) and MI are available (e.g., [20]). This provides the theoretical support for applying our test technique to filter methods: when the number of training data is sufficiently large, a consistent estimate of a separable scoring function must also be a separable scoring function.
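To make Proposition 2.4 concrete: under the naive Bayes condition the KL score decomposes additively over the features in T, so swapping an irrelevant feature for a relevant one raises the score by exactly that feature's own per-feature KL divergence. A small sketch with made-up per-feature class-conditional distributions:

```python
import numpy as np

def kl(p, q):
    """KL divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log(p / q)))

# Per-feature conditionals P(X_i | Y = 0) and P(X_i | Y = 1). Under naive Bayes,
# s(T) = KL(P(X_T|Y=0) || P(X_T|Y=1)) = sum over i in T of the per-feature KLs.
relevant   = ([0.8, 0.2], [0.3, 0.7])   # distributions differ across classes
irrelevant = ([0.5, 0.5], [0.5, 0.5])   # identical across classes, so KL = 0

def score(features):
    return sum(kl(p0, p1) for (p0, p1) in features)

T = [relevant]                                        # some existing test set
gap = score(T + [relevant]) - score(T + [irrelevant]) # equals the relevant feature's own KL
```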
On the other hand, a wrapper method uses a classification algorithm's performance as a scoring function for testing. Therefore, the choice of the underlying (surrogate) loss function plays a critical role. The following result provides the existence of loss functions which induce separable scoring functions for the wrapper method:
Proposition 2.6. Consider the binary classification setting, and let P_0^T := P(X_T | Y = 0), P_1^T := P(X_T | Y = 1). Assume that an f-divergence of the form s(T) = \int \varphi(dP_0^T / dP_1^T) \, dP_1^T is a separable scoring function for some convex function \varphi : R_+ → R. Then there exists a surrogate loss function l : R × R → R_+ under which the minimum l-risk R_l(T) := \inf_g E[l(Y, g(X_T))] is also a separable scoring function. Here the infimum is taken over all measurable classifier functions g acting on the feature input X_T, and E denotes expectation with respect to the joint distribution of X_T and Y.
This result follows from Theorem 1 of [19], who established a precise correspondence between f-divergences defined by convex \varphi and equivalence classes of surrogate losses l. As a consequence, if the Hellinger distance between P_0^T and P_1^T is separable, then the wrapper method using the AdaBoost classifier corresponds to a separable scoring function. Similarly, a separable Kullback-Leibler divergence implies that of a logistic regression based wrapper, while a separable variational distance implies that of an SVM based wrapper.
3 Experimental results
3.1 Synthetic experiments
In this section, we synthetically illustrate that separable scoring functions exist and our PFS framework is sound beyond the Naïve Bayes assumption (NBA). We first show that MI is C-separable for large C even when the NBA is violated. The NBA was only needed in Propositions 2.4 and 2.5 in order for the proofs to go through. Then, we show that our framework recovers exactly the relevant features for two common classes of input distributions.
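A minimal version of this kind of check can be sketched with plug-in histogram MI estimates: an informative feature versus an irrelevant one, each quantile-binned before estimation. The parameters (class means, bin count, sample size) are our illustrative choices, not the paper's exact setup.

```python
import numpy as np

def mutual_info(x, y):
    """Plug-in MI estimate (in nats) between two discrete arrays."""
    xs, xi = np.unique(x, return_inverse=True)
    ys, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    np.add.at(joint, (xi, yi), 1.0)      # contingency table of counts
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)
genuine = np.where(y == 0, -2.0, 2.0) + rng.standard_normal(n)  # class-informative feature
noise = rng.standard_normal(n)                                   # irrelevant feature
bins = lambda v: np.digitize(v, np.quantile(v, [0.2, 0.4, 0.6, 0.8]))  # 5 equal-mass bins
mi_genuine = mutual_info(bins(genuine), y)
mi_noise = mutual_info(bins(noise), y)
```

The gap between the two estimates is the kind of separation the figure below illustrates.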
Figure 1: Illustration of MI as a separable scoring function for the case of statistically dependent features. The top-left point shows the scores for the 1st setting; the middle points show the scores for the 2nd setting; and the bottom points show the scores for the 3rd setting.
We generate 1,000 data points from two separated 2-D Gaussians with the same covariance matrix but different means, one centered at (−2, −2) and the other at (2, 2). We start with the identity covariance matrix, and gradually change the off-diagonal element to −0.999, representing highly correlated features. Then, we add 1,000-dimensional zero-mean Gaussian noise with the same covariance matrix, where the diagonal is 1 and the off-diagonal elements increase gradually from 0 to 0.999. We then calculate the MI between two features and the class label, where the two features are selected in three settings: 1) the two genuine dimensions; 2) one genuine feature and one of the noisy dimensions; 3) a random pair from the noisy dimensions. The MI that we get from these three conditions is shown in Figure 1. It is clear from this figure that MI is a separable scoring function, despite the fact that the NBA is violated. We also synthetically evaluated our entire PFS idea, using two multinomials and two Gaussians to generate data for two binary classification tasks. Our PFS scheme is able to capture exactly the relevant features in most cases. Details are in the supplementary material due to lack of space.
3.2 Real-world data experiment results
This section evaluates our approach in terms of accuracy, scalability, and robustness across a range of real-world data sets: small, medium, and large. We will show that our PFS scheme works very well on medium and large data sets because, as was shown in Section 3.1, with sufficient data to estimate test scores, we expect our method to work well in terms of accuracy.
On the small datasets, our approach is competitive but does not dominate existing approaches, due to the lack of data for estimating scores well. However, we show that we can still use our PFS scheme as a pre-processing step to filter down the number of dimensions; this step reduces the dimensionality and speeds up existing FS methods by 3-5 times while keeping their accuracy.

3.2.1 The data sets and competing methods

Large: TAC-KBP is a large data set with the number of samples and dimensions in the millions3; its domain is relation extraction from natural language text. Medium: GISETTE and MADELON are the two largest data sets from the NIPS 2003 feature selection challenge4, with the number of dimensions in the thousands. Small: Colon, Leukemia, Lymph, NCI9, and Lung are chosen from the small micro-array datasets [6], along with the UCI datasets5. These sets typically have a few hundred to a few thousand variables, with only tens of data samples. We compared our method with various baseline methods, including mutual information maximization [14] (MIM), maximum relevancy minimum redundancy [21] (MRMR), conditional mutual information maximization [9] (CMIM), joint mutual information [25] (JMI), double input symmetrical relevance [16] (DISR), conditional infomax feature extraction [15] (CIFE), interaction capping [11] (ICAP), fast correlation based filter [26] (FCBF), local learning based feature selection [23] (LOGO), and the feature generating machine [24] (FGM).

3http://nlp.cs.qc.cuny.edu/kbp/2010/

3.2.2 Accuracy

Figure 2: Results from different methods on the TAC-KBP dataset. (a) Precision/recall of different methods; (b) top-5 keywords appearing in the top-20 features selected by our method. Dotted lines in (a) are FGM (or MIM) with our approach as a pre-processing step.

Accuracy results on the large data set. As shown in Figure 2(a), our method dominates both MIM and FGM.
Given the same precision, our method achieves 2-14× higher recall than FGM, and 1.2-2.4× higher recall than MIM. The other competitors do not finish execution within 12 hours. Comparing the top features produced by our method and MIM, we find that our method is able to extract features that are strong indicators only when they are combined with other features, while MIM, which tests features individually, ignores this type of combination. We then validate that the features selected by our method make intuitive sense. For each relation, we select the top-20 features and report the keywords in these features.6 As shown in Figure 2(b), these top features selected by our method are good indicators of each relation. We also observe that using our approach as a pre-processing step improves the quality of FGM significantly. In Figure 2(a) (the dotted lines), we run FGM (MIM) on the top-10K features produced by our approach. Running FGM with this pre-processing achieves up to 10× higher recall at the same precision than running FGM on all 1M features.

Accuracy results on medium data sets. Since the focus of the evaluation is to analyze the efficacy of feature selection approaches, we employed the same strategy as Brown et al. [4]: the final classification is done using a k-nearest neighbor classifier with k fixed to three, using Euclidean distance7. We denote our method by Fk (and Wk), where F denotes the filter method (and W the wrapper method), and k denotes the number of tests (i.e., letting N be the dimension of the data, the total number of tests is kN). We bin each dimension of the data into five equal-width bins when the data is real-valued; otherwise the data is not processed8. MI is used as the scoring function for the filter method, and log-likelihood is used for scoring the wrapper method. The wrapper we used is logistic regression9.

4http://www.nipsfsc.ecs.soton.ac.uk/datasets/
5http://archive.ics.uci.edu/ml/
6Following the syntax used by Mintz et al.
[17], if a feature has the form [⇑poss wife ⇓prop of], we report the keyword as wife in Figure 2(b).
7The classifier for FGM is a linear support vector machine (SVM), since FGM is optimized for the SVM criterion.
8For the SVM-based method, the real-valued data is not binned, and all data is normalized to unit length.
9The logistic regressor used in the wrapper is only used to obtain the test scores; the final classification scheme is still k-NN.

Figure 3: Results on real-world datasets: (a) the ratio between the errors of various methods applied to the original data and to the filtered data, where a large portion of the dimensions is filtered out (a value larger than one indicates a performance improvement); (b) the speedup obtained by applying our method as a pre-processing step to various methods across different datasets; the flat dashed line marks a speedup of one.

For GISETTE we select up to 500 features and for MADELON we select up to 100 features. To get the test results, we use the number of features yielding the smallest validation error for each method; the results on the test set are shown in Table 1.

Table 1: Test set balanced error rate (%) from different methods on the NIPS datasets

Datasets | Best Perf. | 2nd Best Perf. | 3rd Best Perf. | Median Perf. | Ours (F3) | Ours (W3) | Ours (F10) | Ours (W10)
GISETTE  | 2.15       | 3.06           | 3.09           | 3.86         | 4.85      | 2.72      | 4.69       | 2.89
MADELON  | 10.61      | 11.28          | 12.33          | 25.92        | 22.61     | 10.17     | 18.39      | 10.50

Accuracy results on the small data sets. As expected, due to the lack of data for estimating scores, our accuracy on these data sets is average; the numbers can be found in the supplementary material. However, as suggested by Theorem A.3 (in the supplementary material), our method can also be used as a pre-processing step for other feature selection methods to eliminate a large portion of the features.
In this case, we use the filter method to filter the input features down to a proportion e + 0.1, where e is the desired proportion of features one wants to retain. Using our method as a pre-processing step achieves a 3-5× speedup compared to the time spent by the original methods, which take multiple passes through the datasets, and keeps or improves their performance in most cases (see Figure 3(a) and (b)). The actual running times can be found in the supplementary material.

3.2.3 Scalability

Figure 4: Scalability experiment of our approach.

We validate that our method runs efficiently on a large-scale data set, and that the ability to exploit parallelism is the key to its scalability.

Experiment setup. Given the TAC-KBP data set, we report the execution time while varying the degree of parallelism, the number of features, and the number of examples. We first produce a series of data sets by sub-sampling the original data set with different numbers of examples ({10^4, 10^5, 10^6}) and numbers of features ({10^4, 10^5, 10^6}). We also try different degrees of parallelism by running our approach using a single thread, 4 threads on a 4-core CPU, 32 threads on a single 8-CPU (4 cores/CPU) machine, and multiple machines available on the national Open Science Grid (OSG). For each combination of number of features, number of examples, and degree of parallelism, we estimate the throughput as the number of tests that we can run in one second, and estimate the total running time accordingly. We also ran our largest data set (10^6 rows and 10^6 columns) on OSG and report the actual run time.

Degree of parallelism. Figure 4(a) reports the (estimated) run time on the largest data set (10^6 rows and 10^6 columns) with different degrees of parallelism.
We first observe that running our approach requires a non-trivial amount of computational resources: with a single thread, we need about 400 hours to finish. However, the running time decreases linearly with the number of cores used. If we run our approach on a single machine with 32 cores, it finishes in just 11 hours. This linear speedup allows our approach to scale to very large data sets: when we ran it on the national Open Science Grid, it finished in 2.2 hours (0.7 hours of actual execution and 1.5 hours of scheduling overhead).

The impact of the number of features and examples. Figures 4(b) and 4(c) report the run time for different numbers of features and examples, respectively. In Figure 4(b), we fix the number of examples at 10^5 and vary the number of features; in Figure 4(c), we fix the number of features at 10^6 and vary the number of examples. We see that as the number of features or examples increases, our approach uses more time; however, the running time never grows super-linearly. This behavior implies the potential of our approach to scale to even larger data sets.

3.2.4 Stability and robustness

Our method exhibits several robustness properties. In particular, the proof of Theorem 2.2 suggests that performance improves as the number of tests increases, which we evaluate empirically in this section. We picked four UCI datasets, KRVSKP, Landset, Splice, and Waveform, together with both NIPS datasets.

Figure 5: Change of performance with respect to the number of tests on several UCI datasets with (a) filter and (b) wrapper methods; and on the (c) GISETTE and (d) MADELON datasets.

The trend is clear, as can be observed from Figure 5.
The performance of both the wrapper and filter methods improves as we increase the number of tests, which can be attributed to increased robustness against poor estimates of the test scores. In addition, apart from the MADELON dataset, the performance converges quickly, normally around k = 10 to 15. Additional stability experiments can be found in the supplementary material, where we evaluate our method and others in terms of the consistency index.

References
[1] S. M. Ali and S. D. Silvey. A general class of coefficients of divergence of one distribution from another. J. Royal Stat. Soc. Series B, 28:131–142, 1966.
[2] Edoardo Amaldi and Viggo Kann. On the approximability of minimizing nonzero variables or unsatisfied relations in linear systems, 1997.
[3] Ron Bekkerman, Ran El-Yaniv, Naftali Tishby, and Yoad Winter. Distributional word clusters vs. words for text categorization. J. Mach. Learn. Res., 3:1183–1208, March 2003.
[4] Gavin Brown, Adam Pocock, Ming-Jie Zhao, and Mikel Luján. Conditional likelihood maximisation: A unifying framework for information theoretic feature selection. JMLR, 13:27–66, 2012.
[5] I. Csiszár. Information-type measures of difference of probability distributions and indirect observation. Studia Sci. Math. Hungar, 2:299–318, 1967.
[6] C. H. Q. Ding and H. Peng. Minimum redundancy feature selection from microarray gene expression data. J. Bioinformatics and Computational Biology, pages 185–206, 2005.
[7] Ding-Zhu Du and Frank K. Hwang. Combinatorial group testing and its applications, volume 12 of Series on Applied Mathematics. World Scientific Publishing Co. Inc., River Edge, NJ, second edition, 2000.
[8] Devdatt P. Dubhashi and Alessandro Panconesi. Concentration of measure for the analysis of randomized algorithms. Cambridge University Press, Cambridge, 2009.
[9] Francois Fleuret and Isabelle Guyon. Fast binary feature selection with conditional mutual information.
Journal of Machine Learning Research, 5:1531–1555, 2004.
[10] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. J. Mach. Learn. Res., 3:1157–1182, March 2003.
[11] A. Jakulin and I. Bratko. Machine learning based on attribute interactions. Ph.D. dissertation, 2005.
[12] Ron Kohavi and George H. John. Wrappers for feature subset selection. Artif. Intell., 97(1-2):273–324, December 1997.
[13] Ludmila I. Kuncheva. A stability index for feature selection. In Artificial Intelligence and Applications, pages 421–427, 2007.
[14] David D. Lewis. Feature selection and feature extraction for text categorization. In Proceedings of the Speech and Natural Language Workshop, pages 212–217. Morgan Kaufmann, 1992.
[15] Dahua Lin and Xiaoou Tang. Conditional infomax learning: An integrated framework for feature extraction and fusion. In ECCV (1), pages 68–82, 2006.
[16] P. E. Meyer and G. Bontempi. On the use of variable complementarity for feature selection in cancer classification. In Proceedings of EvoWorkshop, pages 91–102. Springer-Verlag, 2006.
[17] Mike Mintz, Steven Bills, Rion Snow, and Daniel Jurafsky. Distant supervision for relation extraction without labeled data. In ACL/IJCNLP, pages 1003–1011, 2009.
[18] Hung Q. Ngo, Ely Porat, and Atri Rudra. Efficiently decodable compressed sensing by list-recoverable codes and recursion. In Proceedings of STACS, volume 14, pages 230–241, 2012.
[19] X. Nguyen, M. J. Wainwright, and M. I. Jordan. On surrogate losses and f-divergences. Annals of Statistics, 37(2):876–904, 2009.
[20] X. Nguyen, M. J. Wainwright, and M. I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Trans. on Information Theory, 56(11):5847–5861, 2010.
[21] H. Peng, F. Long, and C. Ding. Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on PAMI, 27:1226–1238, 2005.
[22] Hervé Stoppiglia, Gérard Dreyfus, Rémi Dubois, and Yacine Oussar. Ranking a random feature for variable and feature selection. J. Mach. Learn. Res., 3:1399–1414, March 2003.
[23] Y. Sun, S. Todorovic, and S. Goodison. Local-learning-based feature selection for high-dimensional data analysis. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(9):1610–1626, Sept 2010.
[24] Mingkui Tan, Li Wang, and Ivor W. Tsang. Learning sparse SVM for feature selection on very high dimensional datasets. In ICML, pages 1047–1054, 2010.
[25] Howard Hua Yang and John E. Moody. Data visualization and feature selection: New algorithms for nongaussian data. In NIPS, pages 687–702, 1999.
[26] Lei Yu and Huan Liu. Efficient feature selection via analysis of relevance and redundancy. Journal of Machine Learning Research, 5:1205–1224, 2004.
[27] Ce Zhang, Feng Niu, Christopher Ré, and Jude W. Shavlik. Big data versus the crowd: Looking for relationships in all the right places. In ACL (1), pages 825–834, 2012.
Streaming, Memory Limited Algorithms for Community Detection

Se-Young Yun, MSR-Inria, 23 Avenue d'Italie, Paris 75013, seyoung.yun@inria.fr
Marc Lelarge∗, Inria & ENS, 23 Avenue d'Italie, Paris 75013, marc.lelarge@ens.fr
Alexandre Proutiere†, KTH, EE School / ACL, Osquldasv. 10, Stockholm 100-44, Sweden, alepro@kth.se

Abstract

In this paper, we consider sparse networks consisting of a finite number of non-overlapping communities, i.e. disjoint clusters, so that there is higher density within clusters than across clusters. Both the intra- and inter-cluster edge densities vanish when the size of the graph grows large, making the cluster reconstruction problem noisier and hence difficult to solve. We are interested in scenarios where the network size is very large, so that the adjacency matrix of the graph is hard to manipulate and store. The data stream model, in which columns of the adjacency matrix are revealed sequentially, constitutes a natural framework in this setting. For this model, we develop two novel clustering algorithms that extract the clusters asymptotically accurately. The first algorithm is offline, as it needs to store and keep the assignments of nodes to clusters, and requires a memory that scales linearly with the network size. The second algorithm is online, as it may classify a node when the corresponding column is revealed and then discard this information. This algorithm requires a memory growing sub-linearly with the network size. To construct these efficient streaming memory-limited clustering algorithms, we first address the problem of clustering with partial information, where only a small proportion of the columns of the adjacency matrix is observed, and develop, for this setting, a new spectral algorithm which is of independent interest.

1 Introduction

Extracting clusters or communities in networks has numerous applications and constitutes a fundamental task in many disciplines, including social science, biology, and physics.
Most methods for clustering networks assume that pairwise "interactions" between nodes can be observed, and that from these observations, one can construct a graph which is then partitioned into clusters. The resulting graph partitioning problem can typically be solved using spectral methods [1, 3, 5, 6, 12], compressed sensing and matrix completion ideas [2, 4], or other techniques [10]. A popular model and benchmark to assess the performance of clustering algorithms is the Stochastic Block Model (SBM) [9], also referred to as the planted partition model. In the SBM, it is assumed that the graph to partition has been generated randomly, by placing an edge between two nodes with probability p if the nodes belong to the same cluster, and with probability q otherwise, with q < p. The parameters p and q typically depend on the network size n, and they are often assumed to tend to 0 as n grows large, making the graph sparse. This model has attracted a lot of attention recently. We know for example that there is a phase transition threshold for the value of (p−q)^2/(p+q). If we are below the threshold, no algorithm can perform better than the algorithm randomly assigning nodes to clusters [7, 14], and if we are above the threshold, it becomes indeed possible to beat the naive random assignment algorithm [11]. A necessary and sufficient condition on p and q for the existence of clustering algorithms that are asymptotically accurate (meaning that the proportion of misclassified nodes tends to 0 as n grows large) has also been identified [15].

∗Work performed as part of the MSR-INRIA joint research centre. M.L. acknowledges the support of the French Agence Nationale de la Recherche (ANR) under reference ANR-11-JS02-005-01 (GAP project).
†A. Proutiere's research is supported by the ERC FSA grant, and the SSF ICT-Psi project.
We finally know that spectral algorithms can reconstruct the clusters asymptotically accurately as soon as this is at all possible, i.e., they are in this sense optimal. We focus here on scenarios where the network size can be extremely large (online social and biological networks can, already today, easily exceed several hundred million nodes), so that the adjacency matrix A of the corresponding graph can become difficult to manipulate and store. We revisit network clustering problems under memory constraints. Memory-limited algorithms are relevant in the streaming data model, where observations (i.e. parts of the adjacency matrix) are collected sequentially. We assume here that the columns of the adjacency matrix A are revealed one by one to the algorithm. An arriving column may be stored, but the algorithm cannot request it later on if it was not stored. The objective of this paper is to determine how the memory constraints and the data streaming model affect the fundamental performance limits of clustering algorithms, and how the latter should be modified to accommodate these restrictions. Again, to address these questions, we use the stochastic block model as a performance benchmark. Surprisingly, we establish that when there exists an algorithm with unlimited memory that asymptotically reconstructs the clusters accurately, then we can devise an asymptotically accurate algorithm that requires a memory scaling linearly in the network size n, except if the graph is extremely sparse. This claim is proved for the SBM with parameters p = a·f(n)/n and q = b·f(n)/n, with constants a > b, under the assumption that log n ≪ f(n). For this model, unconstrained algorithms can accurately recover the clusters as soon as f(n) = ω(1) [15], so that the gap between memory-limited and unconstrained algorithms is rather narrow.
We further prove that the proposed algorithm reconstructs the clusters accurately before collecting all the columns of the matrix A, i.e., it uses less than one pass over the data. We also propose an online streaming algorithm with a sublinear memory requirement. This algorithm outputs the partition of the graph in an online fashion after a group of columns arrives. Specifically, if f(n) = n^α with 0 < α < 1, our algorithm requires as little as n^β memory with β > max(1 − α, 2/3). To the best of our knowledge, our algorithm is the first sublinear streaming algorithm for community detection. Although streaming algorithms for clustering data streams have been analyzed [8], the focus in that theoretical computer science literature is on worst-case graphs and on approximation performance, which is quite different from ours. To construct efficient streaming memory-limited clustering algorithms, we first address the problem of clustering with partial information. More precisely, we assume that a proportion γ (that may depend on n) of the columns of A is available, and we wish to classify the nodes corresponding to these columns, i.e., the observed nodes. We show that a necessary and sufficient condition for the existence of asymptotically accurate algorithms is √γ f(n) = ω(1). We also show that to classify the observed nodes efficiently, a clustering algorithm must exploit the information provided by the edges between observed and unobserved nodes. We propose such an algorithm, which in turn constitutes a critical building block in the design of memory-limited clustering schemes. To our knowledge, this paper is the first to address the problem of community detection in the streaming model and with memory constraints. Note that PCA has been recently investigated in the streaming model with limited memory [13]. Our model is different, and to obtain efficient clustering algorithms, we need to exploit its structure.
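For concreteness, the SBM benchmark described above can be sampled directly from its definition. A minimal sketch (our own helper, not part of the paper's algorithms):

```python
import random

def sample_sbm(labels, p, q, seed=0):
    """Sample a symmetric SBM adjacency matrix: an edge appears with
    probability p inside a community and q across communities (q < p).
    labels[v] is the community of node v."""
    rng = random.Random(seed)
    n = len(labels)
    A = [[0] * n for _ in range(n)]
    for v in range(n):
        for w in range(v + 1, n):
            if rng.random() < (p if labels[v] == labels[w] else q):
                A[v][w] = A[w][v] = 1
    return A
```

In the sparse regime of interest, one would set p = a·f(n)/n and q = b·f(n)/n for constants a > b.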
2 Models and Problem Formulation

We consider a network consisting of a set V of n nodes. V admits a hidden partition into K non-overlapping subsets V_1, . . . , V_K, i.e., V = ∪_{k=1}^K V_k. The size of community or cluster V_k is α_k n for some α_k > 0. Without loss of generality, let α_1 ≤ α_2 ≤ · · · ≤ α_K. We assume that when the network size n grows large, the number of communities K and their relative sizes are kept fixed. To recover the hidden partition, we have access to an n × n symmetric random binary matrix A whose entries are independent and satisfy: for all v, w ∈ V, P[A_vw = 1] = p if v and w are in the same cluster, and P[A_vw = 1] = q otherwise, with q < p. This corresponds to the celebrated Stochastic Block Model (SBM). If A_vw = 1, we say that nodes v and w are connected, or that there is an edge between v and w. p and q typically depend on the network size n. To simplify the presentation, we assume that there exist a function f(n) and two constants a > b such that p = a·f(n)/n and q = b·f(n)/n. This assumption on the specific scaling of p and q is not crucial, and most of the results derived in this paper hold for more general p and q (as can be seen in the proofs). For an algorithm π, we denote by ε_π(n) the proportion of nodes that are misclassified by this algorithm. We say that π is asymptotically accurate if lim_{n→∞} E[ε_π(n)] = 0. Note that in our setting, if f(n) = O(1), there is a non-vanishing fraction of isolated nodes, for which no algorithm will perform better than a random guess. In particular, no algorithm can be asymptotically accurate. Hence, we assume that f(n) = ω(1), which constitutes a necessary condition for the graph to be asymptotically connected, i.e., for the largest connected component to have size n − o(n). In this paper, we address the problem of reconstructing the clusters from specific observed entries of A, under constraints on the memory available to process the data and on the way observations are revealed and stored.
More precisely, we consider the following two problems.

Problem 1. Clustering with partial information. We first investigate the problem of detecting communities under the assumption that the matrix A is partially observable. More precisely, we assume that a proportion γ (that typically depends on the network size n) of the columns of A is known. The γn observed columns are selected uniformly at random among all columns of A. Given these observations, we wish to determine the set of parameters γ and f(n) for which there exists an asymptotically accurate clustering algorithm.

Problem 2. Clustering in the streaming model and under memory constraints. We are interested here in scenarios where the matrix A cannot be stored entirely, and we restrict our attention to algorithms that require memory of less than M bits. Ideally, we would like to devise an asymptotically accurate clustering algorithm that requires a memory M scaling linearly or sub-linearly with the network size n. In the streaming model, we assume that at each time t = 1, . . . , n, we observe a column A_v of A, uniformly distributed over the set of columns that have not been observed before t. The column A_v may be stored at time t, but we cannot request it later on if it has not been explicitly stored. The problem is to design a clustering algorithm π such that, in the streaming model, π is asymptotically accurate and requires less than M bits of memory. We distinguish offline clustering algorithms, which must store the mapping between all nodes and their clusters (here M has to scale linearly with n), from online algorithms, which may classify the nodes when the corresponding columns are observed and then discard this information (here M could scale sub-linearly with n).

3 Clustering with Partial Information

In this section, we solve Problem 1. In what follows, we assume that γn = ω(1), which simply means that the number of observed columns of A grows large when n tends to ∞.
However, we are typically interested in scenarios where the proportion of observed columns γ tends to 0 as the network size grows large. Let (A_v, v ∈ V^(g)) denote the observed columns of A. V^(g) is referred to as the set of green nodes, and we denote by n^(g) = γn the number of green nodes. V^(r) = V \ V^(g) is referred to as the set of red nodes. Note that we have no information about the connections among the red nodes. For any k = 1, . . . , K, let V^(g)_k = V^(g) ∩ V_k and V^(r)_k = V^(r) ∩ V_k. We say that a clustering algorithm π classifies the green nodes asymptotically accurately if the proportion of misclassified green nodes, denoted by ε_π(n^(g)), tends to 0 as the network size n grows large.

3.1 Necessary Conditions for Accurate Detection

We first derive necessary conditions for the existence of asymptotically accurate clustering algorithms. As is usual in this setting, the hardest model to estimate (from a statistical point of view) corresponds to the case of two clusters of equal sizes (see Remark 3 below). Hence, we state our information-theoretic lower bounds, Theorems 1 and 2, for the special case where K = 2 and α_1 = α_2. Theorem 1 states that if the proportion of observed columns γ is such that √γ f(n) tends to 0 as n grows large, then no clustering algorithm can perform better than the naive algorithm that assigns nodes to clusters randomly.

Theorem 1 Assume that √γ f(n) = o(1). Then under any clustering algorithm π, the expected proportion of misclassified green nodes tends to 1/2 as n grows large, i.e., lim_{n→∞} E[ε_π(n^(g))] = 1/2.

Theorem 2 (i) shows that this condition is tight, in the sense that as soon as there exists a clustering algorithm that classifies the green nodes asymptotically accurately, we must have √γ f(n) = ω(1). Although we do not observe the connections among red nodes, we might still ask to classify these nodes through their connection patterns with green nodes.
Theorem 2 (ii) shows that this is possible only if γ f(n) tends to infinity as n grows large.

Theorem 2 (i) If there exists a clustering algorithm that classifies the green nodes asymptotically accurately, then we have: √γ f(n) = ω(1). (ii) If there exists an asymptotically accurate clustering algorithm (i.e., classifying all nodes asymptotically accurately), then we have: γ f(n) = ω(1).

Remark 3 Theorems 1 and 2 might appear restrictive as they only deal with the case of two clusters of equal sizes. This is not the case, as we will provide in the next section an algorithm achieving the bounds of Theorem 2 (i) and (ii) for the general case (with a finite number K of clusters of possibly different sizes). In other words, Theorems 1 and 2 translate directly into minimax lower bounds thanks to the results we obtain in Section 3.2. Note that as soon as γ f(n) = ω(1) (i.e. the mean degree in the observed graph tends to infinity), a standard spectral method applied to the square matrix A^(g) = (A_vw, v, w ∈ V^(g)) will allow us to classify the green nodes asymptotically accurately, i.e., taking into account only the graph induced by the green vertices is sufficient. However, if γ f(n) = o(1), then no algorithm based on the induced graph only will be able to classify the green nodes. Theorem 2 shows that in the range of parameters 1/f(n)^2 ≪ γ ≪ 1/f(n), it is impossible to cluster the red nodes asymptotically accurately, but the question of clustering the green nodes is left open.

3.2 Algorithms

In this section, we deal with the general case and assume that the number K of clusters (of possibly different sizes) is known. There are two questions of interest: clustering the green nodes and clustering the red nodes. It seems intuitive that red nodes can be classified only if we are able to first classify the green nodes. Indeed, as we will see below, once the green nodes have been classified, an easy greedy rule is optimal for the red nodes.

Classifying green nodes.
Our algorithm to classify the green nodes relies on spectral methods. Note that, as suggested above, in the regime 1/f(n)^2 ≪ γ ≪ 1/f(n), any efficient algorithm needs to exploit the observed connections between green and red nodes. We construct such an algorithm below. We should stress that our algorithm does not require knowing or estimating γ or f(n). When, from the observations, a red node w ∈ V^(r) is connected to at most a single green node, i.e., if ∑_{v∈V^(g)} A_vw ≤ 1, this red node is useless for the classification of green nodes. On the contrary, when a red node is connected to two green nodes, say v_1 and v_2 (A_{v_1 w} = 1 = A_{v_2 w}), we may infer that the green nodes v_1 and v_2 are likely to be in the same cluster. In this case, we say that there is an indirect edge between v_1 and v_2. To classify the green nodes, we will use the matrix A^(g) = (A_vw)_{v,w∈V^(g)}, as well as the graph of indirect edges. However, this graph is statistically different from the graphs arising in the classical stochastic block model. Indeed, when a red node is connected to three or more green nodes, the presences of indirect edges between these green nodes are not statistically independent. To circumvent this difficulty, we only consider indirect edges created through red nodes connected to exactly two green nodes. Let V^(i) = {v ∈ V^(r) : ∑_{w∈V^(g)} A_wv = 2}. We denote by A′ the (n^(g) × n^(g)) matrix reporting the number of such indirect edges between pairs of green nodes: for all v, w ∈ V^(g), A′_vw = ∑_{z∈V^(i)} A_vz A_wz.
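The construction of A′ from red nodes incident to exactly two green nodes can be sketched directly from this definition (our own illustrative helper; note that, as in the definition, the diagonal entry A′_vv counts v's edges into V^(i)):

```python
def indirect_edge_matrix(A, green):
    """A'_{vw} = #{z in V^(i) : A[v][z] = A[w][z] = 1} for green v, w,
    where V^(i) is the set of red nodes connected to exactly two
    green nodes.  A is a full symmetric 0/1 adjacency matrix."""
    n, gset = len(A), set(green)
    V_i = [z for z in range(n)
           if z not in gset and sum(A[v][z] for v in green) == 2]
    return {(v, w): sum(A[v][z] * A[w][z] for z in V_i)
            for v in green for w in green}
```

Red nodes touching one green node, or three or more, are discarded, which is exactly what keeps the indirect edges statistically independent.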
Algorithm 1 Spectral method with indirect edges
Input: A ∈ {0, 1}^{|V|×|V^(g)|}, V, V^(g), K
  V^(r) ← V \ V^(g)
  V^(i) ← {v ∈ V^(r) : ∑_{w∈V^(g)} A_wv = 2}
  A^(g) ← (A_vw)_{v,w∈V^(g)} and A′ ← (A′_vw = ∑_{z∈V^(i)} A_vz A_wz)_{v,w∈V^(g)}
  p̂^(g) ← ∑_{v,w∈V^(g)} A^(g)_vw / |V^(g)|^2 and p̂′ ← ∑_{v,w∈V^(g)} A′_vw / |V^(g)|^2
  (Q^(g), σ^(g)_K, Γ^(g)) ← Approx(A^(g), p̂^(g), V^(g), K) and (Q′, σ′_K, Γ′) ← Approx(A′, p̂′, V^(g), K)
  if (σ^(g)_K / √(|V^(g)| p̂^(g))) · 1{|V^(g)| p̂^(g) ≥ 50} ≥ (σ′_K / √(|V^(g)| p̂′)) · 1{|V^(g)| p̂′ ≥ 50} then
    (S_k)_{1≤k≤K} ← Detection(Q^(g), Γ^(g), K)
    Randomly place the nodes in V^(g) \ Γ^(g) into the partitions (S_k)_{k=1,...,K}
  else
    (S_k)_{1≤k≤K} ← Detection(Q′, Γ′, K)
    Randomly place the nodes in V^(g) \ Γ′ into the partitions (S_k)_{k=1,...,K}
  end if
Output: (S_k)_{1≤k≤K}

Our algorithm to classify the green nodes consists of the following steps:
Step 1. Construct the indirect edge matrix A′ using red nodes connected to exactly two green nodes.
Step 2. Perform a spectral analysis of the matrices A^(g) and A′ as follows: first trim A^(g) and A′ (to remove nodes with too many connections), then extract their K largest eigenvalues and the corresponding eigenvectors.
Step 3. Select the matrix A^(g) or A′ with the largest normalized K-th largest eigenvalue.
Step 4. Construct the K clusters V^(g)_1, . . . , V^(g)_K based on the eigenvectors of the matrix selected in the previous step.
The detailed pseudo-code of the algorithm is presented in Algorithm 1. Steps 2 and 4 of the algorithm are standard techniques used in clustering for the SBM, see e.g. [5]. The algorithms involved in these steps are presented in the supplementary material (see Algorithms 4, 5, 6). Note that to extract the K largest eigenvalues and the corresponding eigenvectors of a matrix, we use the power method, which is memory-efficient (this becomes important when addressing Problem 2).
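The power method keeps only a constant number of vectors in memory. A minimal single-vector sketch for the leading eigenpair of a symmetric matrix (our own illustration; extracting the K largest eigenvectors additionally requires deflation or orthogonalization, omitted here):

```python
def power_iteration(A, iters=200):
    """Approximate the leading eigenvalue and eigenvector of a symmetric
    matrix A by repeated multiplication; only one vector is stored."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iters):
        # One matrix-vector product followed by normalization.
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    # Rayleigh quotient x^T A x (x has unit norm).
    lam = sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))
    return lam, x
```

Each iteration touches the matrix only through products A·x, which is what makes the method compatible with the streaming, memory-limited setting.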
Further observe that in Step 3, the algorithm exploits the information provided by the red nodes: it selects, between the direct edge matrix A^(g) and the indirect edge matrix A′, the matrix whose spectral properties provide more accurate information about the K clusters. This crucial step is enough for the algorithm to classify the green nodes asymptotically accurately whenever this is at all possible, as stated in the following theorem:

Theorem 4 When √γ f(n) = ω(1), Algorithm 1 classifies the green nodes asymptotically accurately.

In view of Theorem 2 (i), our algorithm is optimal. It might seem surprising to choose one of the matrices A^(g) or A′ and discard the information contained in the other, but the following simple calculation gives the main idea. To simplify, consider the case γ f(n) = o(1), so that we know that the matrix A^(g) alone is not sufficient to find the clusters. In this case, it is easy to see that the matrix A′ alone allows classification as soon as √γ f(n) = ω(1). Indeed, the probability of an indirect edge between two green nodes is of order (a² + b²) f(n)²/(2n) if the two nodes are in the same cluster, and ab f(n)²/n if they are in different clusters. Moreover, the graph of indirect edges has the same statistics as an SBM with these connection probabilities. Hence standard results show that spectral methods work as soon as γ f(n)² tends to infinity, i.e., as soon as the mean degree in the observed graph of indirect edges tends to infinity. In the case where γ f(n) is too large (specifically ≫ ln(f(n))), the graph of indirect edges becomes too sparse for A′ to be useful. But in this regime, A^(g) allows classification of the green nodes. This argument gives some intuition for the full proof of Theorem 4, which can be found in the Appendix.

Algorithm 2 Greedy selections
Input: A ∈ {0, 1}^{|V|×|V^(g)|}, V, V^(g), (S^(g)_k)_{1≤k≤K}
  V^(r) ← V \ V^(g) and S_k ← S^(g)_k for all k
  for v ∈ V^(r) do
    Find k⋆ = arg max_k { ∑_{w∈S^(g)_k} A_vw / |S^(g)_k| } (ties broken uniformly at random)
    S_{k⋆} ← S_{k⋆} ∪ {v}
  end for
Output: (S_k)_{1≤k≤K}.

An attractive feature of our Algorithm 1 is that it does not require any model parameter as input except the number of clusters K. In particular, our algorithm automatically selects the better matrix among A′ and A^(g) based on their spectral properties.

Classifying red nodes. From Theorem 2 (ii), in order to classify red nodes, we need to assume that γ f(n) = ω(1). Under this assumption, the green nodes are classified accurately by Algorithm 1. To classify the red nodes accurately, we show that it is enough to greedily assign these nodes to the clusters of green nodes identified by Algorithm 1. More precisely, a red node v is assigned to the cluster that maximizes the number of observed edges between v and the green nodes of that cluster. The pseudo-code of this procedure is presented in Algorithm 2.

Theorem 5 When γ f(n) = ω(1), combining Algorithms 1 and 2 yields an asymptotically accurate clustering algorithm.

Again, in view of Theorem 2 (ii), our algorithm is optimal. To summarize our results on Problem 1, i.e., clustering with partial information, we have shown that:
(a) If γ ≪ 1/f(n)², no clustering algorithm can perform better than the naive algorithm that assigns nodes to clusters randomly (in the case of two clusters of equal sizes).
(b) If 1/f(n)² ≪ γ ≪ 1/f(n), Algorithm 1 classifies the green nodes asymptotically accurately, but no algorithm can classify the red nodes asymptotically accurately.
(c) If 1/f(n) ≪ γ, the combination of Algorithms 1 and 2 classifies all nodes asymptotically accurately.

4 Clustering in the Streaming Model under Memory Constraints

In this section, we address Problem 2, where the clustering problem has additional constraints.
Namely, the memory available to the algorithm is limited (memory constraints) and each column A_v of A is observed only once; if it is not stored, this information is lost (streaming model). In view of the previous results, when the entire matrix A is available (i.e., γ = 1) and there is no memory constraint, a necessary and sufficient condition for the existence of asymptotically accurate clustering algorithms is f(n) = ω(1). Here we first devise a clustering algorithm adapted to the streaming model that uses a memory scaling linearly with n and is asymptotically accurate as soon as log(n) ≪ f(n). Algorithms 1 and 2 are the building blocks of this algorithm, and its performance analysis leverages the results of the previous section. We also show that our algorithm does not need to sequentially observe all columns of A in order to accurately reconstruct the clusters. In other words, the algorithm uses strictly less than one pass over the data and is asymptotically accurate. Clearly, if the algorithm is asked (as above) to output the full partition of the network, it requires a memory scaling linearly with n, the size of the output. However, in the streaming model, we can remove this requirement and let the algorithm output the full partition sequentially, similarly to an online algorithm (our algorithm is not, however, required to take an irrevocable action after the arrival of each column; it classifies nodes after a group of columns arrives). In this case, the memory requirement can be sublinear. We present an algorithm whose memory requirement depends on the density of the graph. In the particular case where f(n) = n^α with 0 < α < 1, our algorithm requires as little as n^β bits of memory, with β > max{1 − α, 2/3}, to accurately cluster the nodes.
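The column subsampling used by Algorithms 3 and 4 below (each entry of an arriving column is erased independently with probability max{0, 1 − n^{1/3}/(np)}) can be sketched as follows; the function name is ours, for illustration:

```python
import numpy as np

def subsample_column(col, np_product, rng):
    """Subsampling step of Algorithms 3 and 4: erase each entry of an
    arriving binary column with probability max(0, 1 - n^{1/3} / (n p)),
    so the expected column weight after subsampling is O(min(np, n^{1/3})).
    `np_product` stands for the product n*p (an assumption of this sketch:
    the expected column weight is known or estimated)."""
    n = col.shape[0]
    erase_prob = max(0.0, 1.0 - n ** (1.0 / 3.0) / np_product)
    keep = rng.random(n) >= erase_prob   # Bernoulli mask, kept entries
    return col * keep
```

When np ≤ n^{1/3} the erase probability is 0 and the column passes through unchanged; for denser graphs entries are thinned down toward weight n^{1/3}.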
Note that when the graph is very sparse (α ≈ 0), community detection is a statistically hard task and the algorithm needs to gather many columns, so that the memory requirement is quite high (β ≈ 1). As α increases, the graph becomes denser and the statistical task easier. As a result, our algorithm needs to look at smaller blocks of columns and the memory requirement decreases. However, for α ≥ 1/3, although the statistical task is much easier, our algorithm hits its memory constraint: in order to store blocks with sufficiently many columns, it needs to subsample each column. As a result, the memory requirement of our algorithm does not decrease for α ≥ 1/3.

Algorithm 3 Streaming offline
Input: {A_1, ..., A_T}, p, V, K
Initial: N ← n × K matrix filled with zeros; B ← n h(n) / (min{np, n^{1/3}} log n)
Subsampling: A_t ← randomly erase entries of A_t with probability max{0, 1 − n^{1/3}/(np)}
for τ = 1 to ⌊T/B⌋ do
  A^(B) ← n × B matrix whose i-th column is A_{i+(τ−1)B}
  (S^(τ)_k) ← Algorithm 1(A^(B), V, {(τ−1)B + 1, ..., τB}, K)
  if τ = 1 then
    V̂_k ← S^(1)_k for all k, and N_{v,k} ← ∑_{w∈S^(1)_k} A_wv for all v ∈ V and all k
  else
    V̂_{s(k)} ← V̂_{s(k)} ∪ S^(τ)_k for all k, where s(k) = arg max_{1≤i≤K} ∑_{v∈V̂_i} ∑_{w∈S^(τ)_k} A_vw / (|V̂_i| |S^(τ)_k|)
    N_{v,s(k)} ← N_{v,s(k)} + ∑_{w∈S^(τ)_k} A_wv for all v ∈ V and all k
  end if
end for
Greedy improvement: V̄_k ← {v : k = arg max_{1≤i≤K} N_{v,i} / |V̂_i|} for all k
Output: (V̄_k)_{1≤k≤K}.

The main idea of our algorithms is to successively treat blocks of B consecutive arriving columns. Each column of a block is stored in memory. After the last column of a block arrives, we apply Algorithm 1 to classify the corresponding nodes accurately, and we then merge the obtained clusters with the previously identified clusters. In the online version, the algorithm outputs the partition of the block; in the offline version, it stores this result. We finally remove the stored columns, and proceed with the next block.
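The block-by-block scheme just described can be sketched as a small skeleton; `classify_block` and `merge` stand in for Algorithm 1 and the edge-count merging step, and are assumptions of this sketch:

```python
def stream_cluster(columns, B, classify_block, merge):
    """Skeleton of the streaming scheme: buffer B arriving columns,
    cluster the block, merge with the clusters found so far, then free
    the buffer so memory stays bounded by one block."""
    buf, clusters = [], None
    for col in columns:                  # one pass over the stream
        buf.append(col)
        if len(buf) == B:
            block_clusters = classify_block(buf)
            if clusters is None:
                clusters = block_clusters
            else:
                clusters = merge(clusters, block_clusters)
            buf = []                     # memory released after each block
    return clusters
```

With toy stand-ins (each block becomes its own cluster and merging is concatenation), a stream of six columns in blocks of two yields three clusters.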
For the offline algorithm, after a total of T observed columns, we apply Algorithm 2 to classify the remaining nodes, so that T can be less than n. The pseudo-code of the offline algorithm is presented in Algorithm 3. Next we discuss how to tune B and T so that the classification is asymptotically accurate, and we compute the memory required to implement the algorithm.

Block size. We denote by B the size of a block. Let f̄(n) = min{f(n), n^{1/3}}, which represents the order of the number of positive entries of each column after the subsampling process, and let h(n) be such that the block size is B = h(n) n / (f̄(n) log(n)). According to Theorem 4 (applied with γ = B/n), to accurately classify the nodes arriving in a block, we just need (B/n) f̄(n)² = ω(1), which is equivalent to h(n) = ω(log(n)/f̄(n)). Now the merging procedure, which combines the clusters found when analyzing the current block with the previously identified clusters, uses the number of connections between the nodes corresponding to the columns of the current block and the previous clusters. The number of these connections must grow large as n tends to ∞ to ensure the accuracy of the merging procedure. Since the number of these connections scales as B² f̄(n)/n, we need h(n)² = ω(f̄(n) log(n)²/n). Note that this condition is satisfied as long as h(n) = ω(log(n)/f̄(n)).

Total number of columns for the offline algorithm. To accurately classify the nodes whose columns are not observed, we will show that the total number of observed columns T must satisfy T = ω(n/f̄(n)) (in agreement with Theorem 5).

Required memory for the offline algorithm. To store the columns of a block, we need Θ(n h(n)) bits. To store the previously identified clusters, we need at most log₂(K) n bits, and we can store the number of connections between the nodes corresponding to the columns of the current block and the previous clusters using a memory scaling linearly with n.
Algorithm 4 Streaming online
Input: {A_1, ..., A_n}, p, V, K
Initial: B ← n h(n) / (min{np, n^{1/3}} log n) and τ⋆ ← ⌊T/B⌋
Subsampling: A_t ← randomly erase entries of A_t with probability max{0, 1 − n^{1/3}/(np)}
for τ = 1 to τ⋆ do
  A^(B) ← n × B matrix whose i-th column is A_{i+(τ−1)B}
  (S_k)_{1≤k≤K} ← Algorithm 1(A^(B), V, {(τ−1)B + 1, ..., τB}, K)
  if τ = 1 then
    V̂_k ← S_k for all k
    Output at time B: (S_k)_{1≤k≤K}
  else
    s(k) ← arg max_{1≤i≤K} ∑_{v∈V̂_i} ∑_{w∈S_k} A_vw / (|V̂_i| |S_k|) for all k
    Output at time τB: (S_{s(k)})_{1≤k≤K}
  end if
end for

Finally, to execute Algorithm 1, the power method used to perform the SVD (see Algorithm 5) requires the same number of bits as that used to store a block of size B. In summary, the required memory is M = Θ(n h(n) + n).

Theorem 6 Assume that h(n) = ω(log(n)/min{f(n), n^{1/3}}) and T = ω(n/min{f(n), n^{1/3}}). Then with M = Θ(n h(n) + n) bits, Algorithm 3, with block size B = h(n) n / (min{f(n), n^{1/3}} log(n)) and acquiring the first T columns of A, outputs clusters V̂_1, ..., V̂_K such that, with high probability, there exists a permutation σ of {1, ..., K} satisfying

  (1/n) |⋃_{1≤k≤K} V̂_k \ V_{σ(k)}| = O( exp(−c T min{f(n), n^{1/3}}/n) )

for some constant c > 0. Under the conditions of this theorem, Algorithm 3 is asymptotically accurate. Moreover, if f(n) = ω(log(n)), we can choose h(n) = 1; Algorithm 3 then classifies the nodes accurately and uses a memory scaling linearly with n. Note that increasing the number of observed columns T simply reduces the proportion of misclassified nodes. For example, if f(n) = log(n)², with high probability, the proportion of misclassified nodes decays faster than 1/n if we acquire only T = n/log(n) columns, whereas it decays faster than exp(−log(n)²) if all columns are observed. Our online algorithm is a slight variation of the offline algorithm. Indeed, it deals with the first block exactly in the same manner and keeps in memory the partition of this first block.
It then handles each successive block in the same way as the first block and merges the partition of that block with the partition of the first block, as done in the offline algorithm for the second block. Once this is done, the online algorithm discards all the information except the partition of the first block.

Theorem 7 Assume that h(n) = ω(log(n)/min{f(n), n^{1/3}}). Then Algorithm 4, with block size B = h(n) n / (min{f(n), n^{1/3}} log n), is asymptotically accurate (i.e., after one pass, the fraction of misclassified nodes vanishes) and requires Θ(n h(n)) bits of memory.

5 Conclusion

We introduced the problem of community detection with partial information, where only an induced subgraph corresponding to a fraction of the nodes is observed. In this setting, we gave a necessary condition for accurate reconstruction and developed a new spectral algorithm that extracts the clusters whenever this is at all possible. Building on this result, we considered the streaming, memory-limited problem of community detection and developed algorithms able to asymptotically reconstruct the clusters, with a memory requirement that is linear in the size of the network for the offline version of the algorithm and sublinear for its online version. To the best of our knowledge, these are the first community detection algorithms in the data stream model. The memory requirement of these algorithms is non-increasing in the density of the graph, and determining the optimal memory requirement is an interesting open problem.

References
[1] R. B. Boppana. Eigenvalues and graph bisection: An average-case analysis. In Foundations of Computer Science, 1987, 28th Annual Symposium on, pages 280–285. IEEE, 1987.
[2] S. Chatterjee. Matrix estimation by universal singular value thresholding. arXiv preprint arXiv:1212.1247, 2012.
[3] K. Chaudhuri, F. C. Graham, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model.
Journal of Machine Learning Research - Proceedings Track, 23:35–1, 2012.
[4] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In Advances in Neural Information Processing Systems 25, pages 2213–2221, 2012.
[5] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics, Probability & Computing, 19(2):227–284, 2010.
[6] A. Dasgupta, J. Hopcroft, R. Kannan, and P. Mitra. Spectral clustering by recursive partitioning. In Algorithms–ESA 2006, pages 256–267. Springer, 2006.
[7] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Inference and phase transitions in the detection of modules in sparse networks. Phys. Rev. Lett., 107, Aug 2011.
[8] S. Guha, N. Mishra, R. Motwani, and L. O'Callaghan. Clustering data streams. In 41st Annual Symposium on Foundations of Computer Science (Redondo Beach, CA, 2000), pages 359–366. IEEE Comput. Soc. Press, Los Alamitos, CA, 2000.
[9] P. Holland, K. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[10] M. Jerrum and G. B. Sorkin. The Metropolis algorithm for graph bisection. Discrete Applied Mathematics, 82(13):155–175, 1998.
[11] L. Massoulié. Community detection thresholds and the weak Ramanujan property. CoRR, abs/1311.3085, 2013.
[12] F. McSherry. Spectral partitioning of random graphs. In Foundations of Computer Science, 2001, Proceedings, 42nd IEEE Symposium on, pages 529–537. IEEE, 2001.
[13] I. Mitliagkas, C. Caramanis, and P. Jain. Memory limited, streaming PCA. In NIPS, 2013.
[14] E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. arXiv preprint arXiv:1202.1499, 2012.
[15] S. Yun and A. Proutiere. Community detection via random and adaptive sampling. In COLT, 2014.
Analysis of Brain States from Multi-Region LFP Time-Series

Kyle Ulrich 1, David E. Carlson 1, Wenzhao Lian 1, Jana Schaich Borg 2, Kafui Dzirasa 2 and Lawrence Carin 1
1 Department of Electrical and Computer Engineering
2 Department of Psychiatry and Behavioral Sciences
Duke University, Durham, NC 27708
{kyle.ulrich, david.carlson, wenzhao.lian, jana.borg, kafui.dzirasa, lcarin}@duke.edu

Abstract

The local field potential (LFP) is a source of information about the broad patterns of brain activity, and the frequencies present in these time-series measurements are often highly correlated between regions. It is believed that these regions may jointly constitute a “brain state,” relating to cognition and behavior. An infinite hidden Markov model (iHMM) is proposed to model the evolution of brain states, based on electrophysiological LFP data measured at multiple brain regions. A brain state influences the spectral content of each region in the measured LFP. A new state-dependent tensor factorization is employed across brain regions, and the spectral properties of the LFPs are characterized in terms of Gaussian processes (GPs). The LFPs are modeled as a mixture of GPs, with state- and region-dependent mixture weights, and with the spectral content of the data encoded in GP spectral mixture covariance kernels. The model is able to estimate the number of brain states and the number of mixture components in the mixture of GPs. A new variational Bayesian split-merge algorithm is employed for inference. The model infers state changes as a function of external covariates in two novel electrophysiological datasets, using LFP data recorded simultaneously from multiple brain regions in mice; the results are validated and interpreted by subject-matter experts.

1 Introduction

Neuroscience has made significant progress in learning how activity in specific neurons or brain areas correlates with behavior.
One of the remaining mysteries is how to best represent and understand the way whole-brain activity relates to cognition: in other words, how to describe brain states [1]. Although different brain regions have different functions, neural activity across brain regions is often highly correlated. It has been proposed that the specific way brain regions are correlated at any given time may represent a “state” designed specifically to optimize neural computations relevant to the behavioral context an organism is in [2]. Unfortunately, although there is great interest in the concept of global brain states, little progress has been made towards developing methods to identify or characterize them. The study of arousal is an important area of research relating to brain states. Arousal is a hotly debated topic that generally refers to the way the brain dynamically responds to varying levels of stimulation [3]. One continuum of arousal used in the neuroscience literature is sleep (low arousal) to wakefulness (higher arousal). Another is calm (low arousal) to excited or stressed (high arousal) [4]. A common electrophysiological measurement used to determine arousal levels is local field potentials (LFPs), or low-frequency (< 200 Hz) extracellular neural oscillations that represent coordinated neural activity across distributed spatial and temporal scales. LFPs are useful for describing overall brain states since they reflect activity across many neural networks.

Figure 1: Left: Graphical representation of our state space model. We first assign a sequence of brain states, {s_w^(a)}_{w=1}^W, with Markovian dynamics to animal a. Given state s_w^(a), each region is assigned to a cluster, z_w^(ar) = ℓ ∈ {1, ..., L}, and the data y_w^(ar) are generated from a Gaussian process with covariance function k(τ; θ_ℓ, γ). Top: Example of two windows of an LFP time-series; we wish to classify each window based on spectral content. Spectral densities of known sleep states (REM, SWS, WK) in the hippocampus are shown.

We examine brain states under different levels of arousal by recording LFPs simultaneously in multiple regions of the mouse brain, first, as mice pass through different stages of sleep, and second, as mice are moved from a familiar environment to a novel environment to induce interest and exploration. In neuroscience, the analysis of electrophysiological time-series data is largely centered around dynamic causal modeling (DCM) [5], where continuous state-space models are formulated based on differential equations that are specifically crafted around knowledge of underlying neurobiological processes. However, DCM is not suitable for exploratory analysis of data, such as inferring unknown arousal levels, for two reasons: the differential equations are driven by inputs of experimental conditions, and the analysis is dependent on a priori hypotheses about which neuronal populations and interactions are important. This work focuses on methods suitable for exploratory analysis. Previously published neuroscience studies distinguished between slow-wave sleep (SWS), rapid-eye-movement (REM), and wake (WK) using proportions of high-frequency (33-55 Hz) gamma oscillations and lower-frequency theta (4-9 Hz) oscillations in a brain area called the hippocampus [6, 7]. As an alternative approach, recent statistical methods for tensor factorization [8] can be applied to short-time Fourier transform (STFT) coefficients by factorizing a 3-way LFP tensor, with dimensions of brain region, frequency band and time.
Distinct sleep states may then be revealed by clustering the inferred sequence of time-varying score vectors. Although good first steps, the above two methods have several shortcomings: 1) they do not consider the time dependency of brain activity, and therefore cannot capture state-transition properties; 2) they cannot work directly on raw data, but require preprocessing that only considers spectral content in predefined frequency bins, leading to information loss; 3) they do not allow individual brain regions to take on their own set of sub-state characteristics within a given global brain state; 4) finally, they cannot leverage the shared information of LFP data across multiple animals. In this paper we overcome the shortcomings of previously published brain-state methods by defining a sequence of brain states over a sliding window of raw, filtered LFP data, where we impose an infinite hidden Markov model (iHMM) [9] on these state assignments. Conditioned on this brain state, each brain region is assigned to a cluster in a mixture model. Each cluster is associated with a specific spectral content (or density) pattern, manifested through a spectral mixture kernel [10] of a Gaussian process. Each window of LFP data is generated as a draw from this mixture of Gaussian processes. Thus, all animals share an underlying brain-state space, and all brain regions share the underlying components of the mixture model.

2 Model

For each animal a ∈ {1, ..., A}, we have time-series of the LFP in R different regions, measured simultaneously. These time-series are split into sequential, sliding windows, y_w^(ar) ∈ R^N for w ∈ {1, ..., W}, such that windows are common across regions. These windows are chosen to be overlapping, thereby sharing data points between consecutive windows; nonoverlapping windows may also be used. Each window is considered as a single observation vector, and we wish to model the generative process of these observations, {y_w^(ar)}.
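The windowing step above can be sketched as follows (window length N and stride are free choices of this sketch):

```python
import numpy as np

def sliding_windows(x, N, step):
    """Split one region's LFP trace into overlapping windows y_w in R^N.
    Consecutive windows share data points whenever step < N; choosing
    step = N gives nonoverlapping windows instead."""
    starts = range(0, len(x) - N + 1, step)
    return np.stack([x[s:s + N] for s in starts])
```

On a 10-sample trace with N = 4 and stride 2, this yields four half-overlapping windows.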
The proposed model aims to describe the spectral content in each of these LFP signals, as a function of brain region and time. This is done by first assigning a joint “brain state” to each time window, {s_1^(a), ..., s_W^(a)}, shared across all brain regions {1, ..., R}. The brain state is assumed to evolve in time as a latent Markov process. The LFP data from a particular brain region are assumed drawn from a mixture of Gaussian processes. The characteristics of each mixture component are shared across brain states and brain regions, with mixture weights that are dependent on these two entities.

2.1 Brain state assignment

Within the generative process, each animal has a latent brain state for every time window, w. This brain state is represented through a categorical latent variable s_w^(a), and an infinite hidden Markov model (iHMM) is placed on the state dynamics [9, 12]. This process is formulated as

  s_w^(a) ~ Categorical(λ^(a)_{s_{w−1}^(a)}),   λ_g^(a) ~ DP(α_0 β),   β ~ GEM(γ_0),   (1)

where GEM denotes the stick-breaking process β_h = β′_h ∏_{i=1}^{h−1} (1 − β′_i) with β′_h ~ Beta(1, γ_0). Here, {β_h}_{h=1}^H represents global transition probabilities to each state in a potentially infinite state space. For the stick-breaking process, H → ∞, but in a finite collection of data only a finite number of state transitions will be used, and H can be efficiently truncated. Since the state space is shared across animals, we cannot predefine initial state assignments, s_1^(a). To remedy this, we let s_1^(a) ~ Categorical(ψ^(a)) and place a discrete uniform prior on ψ^(a) over the truncated state space. Each animal is given a transition matrix Λ^(a), where each row of this matrix is a transition probability vector λ_g^(a), such that the probability of transitioning from state g to state h for animal a is λ_gh^(a); each row is centered around the global transition vector β.
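A draw from the truncated version of the prior in Eq. (1) can be sketched as below; approximating the DP rows by a Dirichlet with base measure β under the truncation is a standard device, and the hyperparameter values in the example are arbitrary:

```python
import numpy as np

def sample_gem(gamma0, H, rng):
    """Truncated GEM stick-breaking draw: beta_h = beta'_h * prod_{i<h}(1 - beta'_i)
    with beta'_h ~ Beta(1, gamma0); H is a finite truncation, as in the text."""
    sticks = rng.beta(1.0, gamma0, size=H)
    remain = np.concatenate([[1.0], np.cumprod(1.0 - sticks[:-1])])
    beta = sticks * remain
    beta[-1] = 1.0 - beta[:-1].sum()   # fold leftover stick mass into the last atom
    return beta

def sample_transition_rows(beta, alpha0, rng):
    """Each row lambda_g ~ DP(alpha0, beta), approximated under the
    truncation by a Dirichlet with concentration alpha0 * beta."""
    return rng.dirichlet(alpha0 * beta, size=len(beta))
```

Each row of the resulting transition matrix is a probability vector concentrated around the shared global vector β; larger α_0 pulls the rows closer to β.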
Because each animal’s brain can be structured differently (e.g., as an extreme case, consider a central nervous system disorder), we allow Λ^(a) to vary from animal to animal.

2.2 Assigning brain regions to clusters

For each brain state, mixture weights are drawn to define the distribution over clusters independently for each region r, centered around a global mixture η, using a hierarchical Dirichlet process [12]:

  φ_h^(r) ~ DP(α_1 η),   η ~ GEM(γ_1),   (2)

where φ_hℓ^(r) is the probability of assigning region r of a window with brain state h to cluster ℓ. This cluster assignment can be written as

  z_w^(ar) | s_w^(a) ~ Categorical(φ^(r)_{s_w^(a)}).   (3)

For each cluster ℓ there is a set of parameters, θ_ℓ, describing a Gaussian process (GP), detailed in Section 2.3. One could consider the joint probability over cluster assignments for all brain regions as an extension of a latent nonnegative PARAFAC tensor decomposition [11, 13]; we refer to the Supplemental Material for details. Our clustering model differs from the infinite tensor factorization (ITF) model of [11] in three significant ways: we place Markovian dynamics on state assignments for each animal, we model separate draws from the prior jointly for each animal, and we share cluster atoms across all regions through use of an HDP.

2.3 Infinite mixture of Gaussian processes

2.3.1 Gaussian processes and the spectral mixture kernel

For a single window of data, y_w^(ar) ∈ R^N, we wish to model the data in the limit of a continuous-time function (allowing N → ∞), motivating a GP formulation, and we are interested in the spectral properties of the LFP signal in this window. Previous research has established a link between the kernel function of a GP and its spectral properties [10]. We write a distribution over the time-series:

  y(t) ~ GP(m(t), k(t, t′)),   (4)

where m(t) is known as the mean function and k(t, t′) is the covariance function [14]. This framework provides a flexible, structured method to model time-series data.
The structure of observations in the output space, y, is defined through a careful choice of the covariance function. Since this work aims to model the spectral content of the LFP signal, we set the mean function to 0 and use a recently proposed spectral mixture (SM) kernel [10]. This kernel is defined through a spectral-domain representation, S(s), of the stationary kernel, represented by a mixture of Q Gaussian components:

  φ(s) = ∑_{q=1}^Q ω_q N(s; µ_q, ν_q),   S(s) = (1/2)[φ(s) + φ(−s)],   (5)

where φ(s) is reflected about the origin to obtain a valid spectral density, and µ_q, ν_q, and ω_q respectively define the mean, variance, and relative weight of the q-th Gaussian component in the spectral domain. Priors may be placed on these parameters; for example, we use the uninformative priors µ_q ~ Uniform(µ_min, µ_max), ν_q ~ Uniform(0, ν_max) and ω_q ~ Gamma(e_0, f_0). A bandpass filter is applied to the LFP signal from µ_min to µ_max Hz as a preprocessing step, so this prior knowledge is justified. Also, ν_max is set to prevent overfitting, and e_0 and f_0 are set to manifest a broad prior. We assume that only a noisy version of the true function is observed, so the kernel is defined as the Fourier transform of the spectral density S(s) plus white Gaussian noise:

  f(τ; θ) = ∑_{q=1}^Q ω_q exp{−2π²τ²ν_q} cos(2πτµ_q),   k(τ; θ, γ) = f(τ; θ) + γ^{−1}δ_τ,   (6)

where the set of parameters θ = {ω, µ, ν} and γ define the covariance kernel, τ = |t − t′|, and δ_τ is the Kronecker delta function, which equals one if τ = 0. We set the prior γ ~ Gamma(e_1, f_1), where the hyperparameters e_1 and f_1 are chosen to manifest a broad prior. The formulation of (6) results in an interpretable kernel in the spectral domain, where the weights ω_q correspond to the relative contribution of each component, the means µ_q represent spectral peaks, and the variances ν_q play a role similar to an inverse length-scale.
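The kernel of Eq. (6) is straightforward to evaluate directly; the sketch below uses illustrative, unfitted parameter values:

```python
import numpy as np

def sm_kernel(tau, weights, means, variances, gamma):
    """Spectral mixture covariance of Eq. (6): a sum of Gaussian-windowed
    cosines plus white noise gamma^{-1} * delta(tau). The parameter values
    passed in the example below are arbitrary, not fitted to data."""
    tau = np.asarray(tau, dtype=float)
    f = sum(w * np.exp(-2.0 * np.pi**2 * tau**2 * v) * np.cos(2.0 * np.pi * tau * m)
            for w, m, v in zip(weights, means, variances))
    return f + (tau == 0) / gamma      # Kronecker delta: noise only at lag 0
```

At lag 0 the kernel equals the total mixture weight plus the noise variance (∑_q ω_q + γ^{−1}), and |k(τ)| never exceeds k(0), as required of a valid stationary covariance.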
Through a realization of this Gaussian process, an analytical representation is obtained for the marginal likelihood of the observed data y given the parameters {θ, γ} and the observation locations t, p(y | θ, γ, t). The optimal set of kernel parameters {θ, γ} can then be chosen as the set that maximizes the marginal likelihood. Further discussion of inference for the Gaussian process parameters is presented in Section 3.

2.3.2 Generating observed data

To combine the clustering model with our SM kernel, each cluster ℓ is associated with a distinct set of kernel parameters θ_ℓ. To generate the observations {y_w^(ar)}, where each y_w^(ar) ∈ R^N has observation times t = {t_1, ..., t_N} such that |t_i − t_j| = |i − j| τ for all i and j, we consider a draw from the multivariate normal distribution:

  y_w^(ar) ~ N(0, Σ_{z_w^(ar)}),   (Σ_ℓ)_ij = k(|t_i − t_j|; θ_ℓ, γ),   (7)

where each observation is generated from the cluster indicated by z_w^(ar) (described in Section 2.2), and each cluster is represented uniquely by a covariance matrix, Σ_ℓ, whose elements are defined through the covariance kernel k(τ; θ_ℓ, γ). Therefore, the parameters θ_{z_w^(ar)} describe the autocorrelation content associated with each y_w^(ar). We address two concerns with this formulation. First, this observation model ignores complex cross-covariance functions between regions. Although LFP measurements exhibit coherence patterns across regions, the generative model in (7) only weakly couples the spectral densities of each region through the brain state. In principle, the generative model could be extended to incorporate this coherence information. Second, (7) does not model the time-series itself as a stochastic process, but rather the preprocessed, ‘independent’ observation vectors. This shortcoming is not ideal, but the windowing process allows for efficient computation via the mixture of Gaussian processes.
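Given any stationary kernel of the lag (e.g., the spectral mixture kernel above), one window can be drawn exactly as in Eq. (7); the jitter term below is a standard numerical-stability device for the Cholesky factorization, not part of the model:

```python
import numpy as np

def sample_window(N, dt, kernel, rng, jitter=1e-6):
    """Draw one window y_w ~ N(0, Sigma) with (Sigma)_ij = k(|t_i - t_j|),
    as in Eq. (7). `kernel` maps an array of lags to covariances; dt is
    the sampling interval between consecutive observation times."""
    taus = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) * dt
    Sigma = kernel(taus) + jitter * np.eye(N)   # small jitter keeps Sigma PD
    L = np.linalg.cholesky(Sigma)
    return L @ rng.standard_normal(N)           # y = L z, z ~ N(0, I)
```

The same routine serves any cluster ℓ by passing its kernel k(·; θ_ℓ, γ); the example uses a squared-exponential kernel as a stand-in.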
3 Inference

In the following, latent model variables are represented by Ω = {Z, S, Φ, η, Λ, β, Ψ}, the kernel parameters to be optimized are Θ = {{θ_ℓ}_{ℓ=1}^L, γ}, and H and L are upper-limit truncations on the number of brain states and clusters, respectively. As described throughout this section, the proposed algorithm adaptively adjusts the truncation levels on the number of brain states, H, and clusters, L, through a series of split-merge moves. The joint probability of the proposed model is

  p(Y, Ω, Θ) = p(Y | Z, Θ) p(Z, S | Φ, Λ, Ψ) p(Φ | η) p(η) p(Λ | β) p(β) p(Ψ) p(Θ)
  = [∏_{a,r,w} p(y_w^(ar) | z_w^(ar), Θ) p(z_w^(ar) | s_w^(a), Φ)] [p(η | γ_1) ∏_{r,h} p(φ_h^(r) | η, α_1)]
    × ∏_a [ p(s_1^(a) | ψ^(a)) p(ψ^(a)) ∏_{w=2}^W p(s_w^(a) | s_{w−1}^(a), Λ^(a)) ] [p(β | γ_0) ∏_{a,g} p(λ_g^(a) | β, α_0)]
    × [ p(γ | e_1, f_1) ∏_{q=1}^Q p(ω_q | e_0, f_0) p(µ_q | µ_min, µ_max) p(ν_q | ν_max) ].   (8)

A variational inference scheme is developed to update Ω and Θ.

3.1 Variational inference

With variational inference, an approximate variational posterior distribution is sought that is similar to the true posterior distribution, q(Ω, Θ) ≈ p(Ω, Θ | Y). This variational posterior is assumed to factorize into simpler distributions, q(Ω, Θ) = q(Z) q(S) q(Φ) q(η) q(Λ) q(β) q(Ψ) q(Θ), with the further factorization

  q(Z) = ∏_{a,r,w} Cat(z_w^(ar); ζ_w^(ar)),   q(Φ) = ∏_{h,r} Dir(φ_h^(r); ν_h^(r)),   q(η) = δ_{η*}(η),
  q(S) = ∏_a q({s_w^(a)}_{w=1}^W),   q(Λ) = ∏_{g,a} Dir(λ_g^(a); κ_g^(a)),   q(β) = δ_{β*}(β),
  q(Ψ) = ∏_a δ_{ψ^(a)*}(ψ^(a)),   q(Θ) = ∏_j δ_{Θ_j*}(Θ_j),   (9)

where only the necessary sufficient statistics of the latent factors q({s_w^(a)}_{w=1}^W) are required, and the approximate posteriors of η, β, {ψ^(a)} and {Θ_j} are represented by point estimates at η*, β*, {ψ^(a)*} and {Θ_j*}, respectively. The degenerate distributions δ_{η*}(η) and δ_{β*}(β) are described in previous work on variational inference for HDPs [15, 16].
The idea is that the point estimates of the stick-breaking processes simplify the derivation of the variational posterior, and the authors of [16] show that obtaining a full posterior distribution on the stick-breaking weights has little impact on model fitting, since the variational lower bound is not heavily influenced by the terms dependent on η and β. Furthermore, the Dirichlet process is truncated for both the number of states and the number of clusters, such that q(z_w^{(ar)} = ℓ) = 0 for ℓ > L and q(s_w^{(a)} = h) = 0 for h > H. This truncation method (see [17] for details) is notably different from other common truncation methods of the DP (e.g., [18] and [19]), and is primarily important for facilitating the split-merge inference techniques described in Section 3.2.

In mean-field variational inference, the variational distribution q(Ω, Θ) is chosen such that the Kullback-Leibler divergence of p(Ω, Θ|Y) from q(Ω, Θ), D_KL(q(Ω, Θ) || p(Ω, Θ|Y)), is minimized. This is equivalent to maximizing the evidence lower bound (also known as the variational free energy in the DCM literature), L(q) = E_q[log p(Y, Ω, Θ)] − E_q[log q(Ω, Θ)], where both expectations are taken with respect to the variational distribution. The resulting lower bound is

L(q) = E[ln p(Y|Z, Θ)] + E[ln p(Z, S|Φ, Λ, Ψ)] + E[ln p(Φ|η)] + E[ln p(η)] + E[ln p(Λ|β)] + E[ln p(β)]
       + E[ln p(Ψ)] + E[ln p(Θ)] + H[q(Z)] + H[q(S)] + H[q(Φ)] + H[q(Λ)],    (10)

where all expectations are with respect to the variational distribution, the hyperparameters are excluded for notational simplicity, and we define H[q(·)] as the sum over the entropies of the individual factors of q(·). Due to the degenerate approximations for q(η), q(β), q(Ψ) and q(Θ), these full posterior distributions are not obtained; therefore, the terms H[q(η)], H[q(β)], H[q(Ψ)] and H[q(Θ)] are set to zero in the lower bound. The updates for ζ_w^{(ar)} and ν_h^{(r)} are standard.
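The equivalence between minimizing the KL divergence and maximizing the evidence lower bound can be made concrete on a toy model. The following is a standalone sketch (unrelated to the paper's specific factors): mean-field coordinate ascent on two binary variables, where the ELBO L(q) = E_q[log p̃(x)] + H[q] never exceeds log Z:

```python
import numpy as np

rng = np.random.default_rng(1)
log_ptilde = rng.normal(size=(2, 2))          # unnormalized log joint over two binary variables
logZ = np.log(np.exp(log_ptilde).sum())       # exact log partition function

def elbo(q1, q2):
    """L(q) = E_q[log p~(x)] + H[q] for the factorized q(x1) q(x2)."""
    q = np.outer(q1, q2)
    return (q * log_ptilde).sum() - (q * np.log(q)).sum()

# mean-field coordinate ascent: q1 propto exp(E_{q2}[log p~]), and symmetrically for q2
q1 = np.full(2, 0.5)
q2 = np.full(2, 0.5)
elbo_start = elbo(q1, q2)
for _ in range(50):
    q1 = np.exp(log_ptilde @ q2); q1 /= q1.sum()
    q2 = np.exp(q1 @ log_ptilde); q2 /= q2.sum()
```

Each coordinate update can only increase L(q), and the gap logZ − L(q) is exactly the KL divergence being minimized.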
Variational inference for the HDP-HMM is detailed in other work (e.g., see [20, 21]); using these methods, updates for κ_g^{(a)}, ψ^{(a)} and the necessary expected sufficient statistics of the factors of q(S) are realized. Finally, the updates for β*, η* and {Θ_j} are non-conjugate, so a gradient-ascent method is performed to optimize these values. We use simple resilient back-propagation (Rprop), though most line-search methods should suffice. Details on all updates, and on taking the gradient of L(q) with respect to β, η and {Θ_j}, are found in the Supplemental Material.

3.2 Split-merge moves

During inference, a series of split and merge operations are used to help the algorithm jump out of local optima [22]. This work takes the viewpoint that two clusters (or states) should merge only if the variational lower bound increases, and, when a split is proposed for a cluster (or state), it should always be accepted, whether or not the split increases the variational lower bound. If a split is not appropriate, a future merge step is expected to undo the operation. In this way, cluster and state assignments are given the opportunity to jump out of local optima, allowing the inference algorithm to readjust assignments as desired.

Merge states: To merge states h′ and h″ into a new state h, new parameters are initialized as ρ_{wh}^{(a)} = ρ_{wh′}^{(a)} + ρ_{wh″}^{(a)}, κ_{gh}^{(a)} = κ_{gh′}^{(a)} + κ_{gh″}^{(a)}, β*_h = β*_{h′} + β*_{h″}, and v_h^{(a)} = v_{h′}^{(a)} + v_{h″}^{(a)}, such that the model now has a truncation at H_new = H − 1 states. In order to account for problems with merging two states in an HMM, a single restricted iteration is allowed, in which only the state-dependent variational parameters in Ω_new are updated, producing a new distribution q(Ω_new). The merge is accepted (i.e., Ω = Ω_new) if L(q(Ω_new)) > L(q(Ω)). Since these computations are not excessive, all possible state merges are computed and a small number of merges are accepted per iteration.
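Rprop adapts a per-coordinate step size from the sign pattern of successive gradients, ignoring gradient magnitude. A minimal variant (our sketch, not the paper's implementation) applied to maximizing a simple concave objective:

```python
import numpy as np

def rprop_ascent(grad, x0, n_iter=200, step0=0.1, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=1.0):
    """Minimal resilient back-propagation for gradient ASCENT:
    grow a coordinate's step while its gradient keeps the same sign,
    shrink it after a sign change; only sign(grad) drives the move."""
    x = np.asarray(x0, dtype=float).copy()
    step = np.full_like(x, step0)
    g_prev = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad(x)
        same = g * g_prev
        step = np.where(same > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(same < 0, np.maximum(step * eta_minus, step_min), step)
        x += np.sign(g) * step
        g_prev = g
    return x

# maximize -||x - c||^2, whose unique optimum is at c (a stand-in for the ELBO terms)
c = np.array([2.0, -1.0, 0.5])
x_opt = rprop_ascent(lambda x: -2.0 * (x - c), np.zeros(3))
```

Because only the gradient's sign is used, Rprop is insensitive to the very different scales of the kernel parameters, which is presumably why it suffices here in place of a line search.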
Merge clusters: To merge clusters ℓ′ and ℓ″ into a new cluster ℓ, new parameters are initialized as ζ_{wℓ}^{(ar)} = ζ_{wℓ′}^{(ar)} + ζ_{wℓ″}^{(ar)}, ν_{hℓ}^{(r)} = ν_{hℓ′}^{(r)} + ν_{hℓ″}^{(r)}, η*_ℓ = η*_{ℓ′} + η*_{ℓ″}, and θ_ℓ^new = θ*, such that there is a truncation at L_new = L − 1 clusters. We set θ* = θ_{ℓ′} for simplicity, and allow a restricted iteration of updates to Ω_new and θ_ℓ^new. The merge is accepted (i.e., Ω = Ω_new and Θ = Θ_new) if the lower bound improves, L(q(Ω_new, Θ_new)) > L(q(Ω, Θ)). Since the restricted iteration for θ_ℓ^new is expensive, only a few cluster merges may be proposed at a time. Therefore, merges are proposed for the clusters with the smallest earth mover's distance [23] between their spectral densities.

Split step: When splitting states and clusters, the opposite of the initialization of the merging procedures described above is performed. For clusters, data points within a cluster ℓ are randomly chosen to stay in cluster ℓ or split off to a new cluster ℓ′. For splitting a state h, the cluster-assignment vector φ_h^{(r)} is replicated and windows within state h are randomly chosen to stay in state h or split off to a new state h′. Regardless of how this affects the lower bound, a split step is always accepted.

For implementation details, we allow the model to accept 3 state merges every third iteration, propose 5 cluster merges every third iteration, and split one state and one cluster every third iteration. Therefore, every iteration may affect the truncation level of either the number of states or clusters. A 'burn-in' period is allowed before we start proposing splits/merges, and a 'burn-out' period is employed in which split proposals cease. In this way, the algorithm is guaranteed to improve the lower bound only during iterations when a split is not proposed, and convergence tests are only considered during the burn-out period.
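The merge initialization and the accept-if-the-bound-improves rule can be sketched as follows. This is a schematic illustration only: `lower_bound` stands in for the full L(q) computation, and the dummy bound used in the example (which simply prefers fewer clusters) is our invention:

```python
import numpy as np

def propose_merge(zeta, l1, l2):
    """Merge responsibilities of clusters l1 and l2 into one column,
    mirroring the initialization zeta_l = zeta_{l1} + zeta_{l2};
    the truncation drops from L to L - 1 columns."""
    merged_col = zeta[:, l1] + zeta[:, l2]
    keep = [l for l in range(zeta.shape[1]) if l not in (l1, l2)]
    return np.column_stack([zeta[:, keep], merged_col])

def maybe_merge(zeta, l1, l2, lower_bound):
    """Accept the merge only if the (stand-in) variational lower bound improves."""
    new = propose_merge(zeta, l1, l2)
    return (new, True) if lower_bound(new) > lower_bound(zeta) else (zeta, False)

rng = np.random.default_rng(0)
zeta = rng.dirichlet(np.ones(4), size=10)     # toy responsibilities over L = 4 clusters
merged, accepted = maybe_merge(zeta, 1, 3, lower_bound=lambda z: -z.shape[1])
```

Summing the two columns preserves each observation's total responsibility mass, so the merged `zeta` remains a valid set of categorical posteriors.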
4 Datasets

Three datasets are considered in this work, as follows.

Toy data: Data is generated for a single animal according to the proposed model in Section 2. The purpose of this dataset is to ensure the inference scheme can recover known ground truth, since ground-truth information is not known for the real datasets. We set L = 5 and H = 3. For each cluster, a spectral density was generated with Q = 4, μ_q ~ Unif(4, 50), ν_q ~ Unif(1, 50) and ω ~ Dir(1, . . . , 1). The cluster usage probability vector was drawn φ_h^{(r)} ~ Dir(1/10, . . . , 1/10). State transition probabilities were drawn according to λ_{gh} ~ Unif(0, 1) + 10 δ(g = h). States were assigned to W = 1000 windows according to an HMM with transition matrix Λ, and cluster assignments were drawn conditioned on this state. Data with N = 200 was drawn for each window.

Sleep data: Twelve hours of LFP data from sixteen different brain regions were recorded from three mice naturally transitioning through different levels of sleep arousal. Due to the high number of brain regions, we present only three hours of sleep data from a single mouse for simplicity. The multi-animal analysis is reserved for the novel environment dataset.

Novel environment data: Thirty minutes of LFP data from five brain regions were recorded from five mice who were moved from their home cage to a novel environment approximately nine minutes into the recording. Placing animals into novel environments has been shown to increase arousal, and should therefore result in (at least one) network state change [3]. Data acquisition methods for the latter two datasets are discussed in [24].

5 Results

For all results, we set Q = 10, H = 15, L = 25, stop the 'burn-in' period after iteration 6, and start the subsequent computation period after iteration 25. Hyperparameters were set to γ_0 = γ_1 = 0.01, α_0 = α_1 = 1, μ_min = 0, μ_max = 50, ν_max = 10, and e_0 = f_0 = 10^{-6}.
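The toy state sequence described above is straightforward to reproduce; the sketch below draws the sticky transition matrix λ_{gh} ~ Unif(0, 1) + 10 δ(g = h) (row-normalized, which we assume is the intent) and simulates the HMM over W = 1000 windows:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 3, 1000

# lambda_gh ~ Unif(0, 1) + 10 * delta(g == h), then normalize rows -> sticky transitions
Lam = rng.uniform(0.0, 1.0, size=(H, H)) + 10.0 * np.eye(H)
Lam /= Lam.sum(axis=1, keepdims=True)

# simulate the state sequence s_1, ..., s_W of the HMM
states = np.empty(W, dtype=int)
states[0] = rng.integers(H)
for w in range(1, W):
    states[w] = rng.choice(H, p=Lam[states[w - 1]])
```

The +10 on the diagonal makes self-transitions dominate (roughly 0.9 probability per row), producing the long dwell times visible in Figure 2's state track.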
In all results, the model was seen to converge to a local optimum after 30 iterations, and each iteration took on the order of 20 seconds using Matlab code on a PC with a 2.30GHz quad-core CPU and 8GB RAM. Figure 2 shows results on the toy data. The model correctly recovers exactly 3 states and 5 clusters, and, as seen in the figure, the state assignments and spectral densities of each cluster component are recovered almost perfectly. The model was run for different values of the noise variance, γ^{-1}, and, though not shown, in all cases the noise variance was recovered accurately during inference, implying the spectral mixture kernels are not overfitting the noise. In this way, we confirm that the inference scheme recovers the ground truth.

For further model verification, ten-fold cross-validation was used to compute predictive probabilities for held-out data (reported in Table 1), where we compare to two simpler versions of our model: 1) the HDP-HMM on brain states in (1) is replaced with an HDP, and 2) a single brain state. For the HDP-HMM, the held-out data was treated as 'missing data' in the training data and the window index was used to assign time-dependent probabilities over clusters, whereas in the HDP and Single State models it was simply withheld from the training data. We see large predictive performance gains when considering multiple brain states, and even more improvement on average (though modest) when considering an HDP-HMM.
[Figure 2: Toy data results. Top row shows the generated toy data. From left to right: the five spectral functions, each associated with a component in the mixture model; the probability of each of these five components occurring for all five regions in each brain state; the generated brain state assignments from a 3-state HMM along with the generated cluster assignments for the five simulated regions. The bottom row shows the results of our model. On the left, a comparison of the recovered state vs. the true state for all time; on the right, an alignment of the five recovered kernels to the spectral density ground truth.]
[Figure 3: Sleep data results. Top: A comparison of brain state assignments from our method to two other methods. Bottom Left: Spectral density of the 7 inferred clusters. Middle Left: Cluster assignments over time for 16 different brain regions, sorted by similarity. Middle Right: Given brain states 1, 2 and 3, we show cluster assignment probabilities for 4 different brain regions: the hippocampus (D Hipp), nucleus accumbens core (NAc core), orbitofrontal cortex (OFC) and ventral tegmental area (VTA), from left to right, respectively. Right: State assignments of our method and the tensor method conditioned on the method of [6].]

[Figure 4: Novel environment data results. Left: The log spectral density of the 6 inferred clusters. Middle: State assignments for all 9 animals over a 30-minute period. There are 7 inferred states, and each state has a distribution over clusters for each region, as seen on the right.]

Dataset         HDP-HMM            HDP                Single State
Toy (×10^5)     −1.686 (±0.053)    −1.688 (±0.053)    −1.718 (±0.054)
Sleep (×10^6)   −1.677 (±0.030)    −1.682 (±0.020)    −1.874 (±0.019)
Novel (×10^5)   −5.932 (±0.040)    −5.973 (±0.034)    −6.962 (±0.063)

Table 1: Average held-out log predictive probability for different priors on brain states: HDP-HMM, HDP, and a single state.
The data consist of W time-series windows for R regions of A animals; at random, 10% of these time-series windows were held out, and the predictive distribution was used to determine their likelihood. The sleep and novel environment results are presented in Figures 3 and 4, respectively. With the sleep dataset, our results are compared with the two methods discussed in the Introduction: that of [6, 7], and the tensor method of [8]. We refer to the Supplemental Material for exact specifications of the tensor method. For each of these datasets, we infer the intended arousal states. In the novel environment data, we observe broad arousal changes at 9 minutes for all animals, as expected. In the sleep data, we successfully uncover at least as many states as the simple approach of [6, 7], including the SWS, REM and WK states. Thus far, neuroscientists have focused primarily on 2 stages of sleep (NREM and REM), but as many as 5 have been discussed (4 different stages of NREM sleep, and 1 stage of REM). Different stages of sleep affect memory and behavior in different ways (e.g., see [25]), as does the number of times animals transition between these states [26]. Our results suggest that there may be even more levels of sleep that should be considered (e.g., transition states and sub-states). This is very interesting and important for neuroscientists to know, because it is possible that each of our newly observed states could affect memory and behavior in different ways. No other published method has provided evidence of these additional states. In addition to brain states, we infer spectral information for each brain region through cluster assignments. Though not the primary focus of this work, it is interesting that groups of brain regions tend to share similar attributes. In Figure 3, we have sorted brain regions into groups based on cluster-assignment similarity, essentially recovering a 'network' of the brain.
This underscores the power of the proposed method: not only do we develop unsupervised methods to classify whole-brain activity into states, we also infer the cross-region/animal relationships within these states.

6 Conclusion

The contributions of this paper are three-fold. First, we design an extension of the infinite tensor mixture model, incorporating time dependency. Second, we develop variational inference for the proposed generative model, including an efficient inference scheme using split-merge moves for two general models: the ITM and iHMM. To the authors' knowledge, neither of these inference schemes has been developed previously. Finally, with respect to the neuroscience application, we model brain states given multi-channel LFP data in a principled manner, showing significant advantages over other potential approaches to modeling brain states. Using the proposed framework, we discover distinct brain states directly from the raw, filtered data, defined by their spectral content and network properties, and we can infer relationships between, and share statistical strength across, data from multiple animals.

Acknowledgments

The research reported here was funded in part by ARO, DARPA, DOE, NGA and ONR.

References

[1] C D Gilbert and M Sigman, "Brain States: Top-down Influences in Sensory Processing," Neuron, vol. 54, no. 5, pp. 677–96, June 2007.
[2] A Kohn, A Zandvakili, and M A Smith, "Correlations and Brain States: from Electrophysiology to Functional Imaging," Curr. Opin. Neurobiol., vol. 19, no. 4, Aug. 2009.
[3] D Pfaff, A Ribeiro, J Matthews, and L Kow, "Concepts and Mechanisms of Generalized Central Nervous System Arousal," ANYAS, Jan. 2008.
[4] P J Lang and M M Bradley, "Emotion and the Motivational Brain," Biol. Psychol., vol. 84, no. 3, pp. 437–50, July 2010.
[5] K J Friston, L Harrison, and W Penny, "Dynamic Causal Modelling," NeuroImage, vol. 19, no. 4, pp. 1273–1302, 2003.
[6] K Dzirasa, S Ribeiro, R Costa, L M Santos, S C Lin, A Grosmark, T D Sotnikova, R R Gainetdinov, M G Caron, and M A L Nicolelis, "Dopaminergic Control of Sleep–Wake States," J. Neurosci., vol. 26, no. 41, pp. 10577–10589, 2006.
[7] D Gervasoni, S C Lin, S Ribeiro, E S Soares, J Pantoja, and M A L Nicolelis, "Global Forebrain Dynamics Predict Rat Behavioral States and their Transitions," J. Neurosci., vol. 24, no. 49, pp. 11137–11147, 2004.
[8] P Rai, Y Wang, S Guo, G Chen, D Dunson, and L Carin, "Scalable Bayesian Low-Rank Decomposition of Incomplete Multiway Tensors," ICML, 2014.
[9] M J Beal, Z Ghahramani, and C E Rasmussen, "The Infinite Hidden Markov Model," NIPS, 2002.
[10] A G Wilson and R P Adams, "Gaussian Process Kernels for Pattern Discovery and Extrapolation," ICML, 2013.
[11] J Murray and D B Dunson, "Bayesian Learning of Joint Distributions of Objects," AISTATS, 2013.
[12] Y W Teh, M I Jordan, M J Beal, and D M Blei, "Sharing Clusters Among Related Groups: Hierarchical Dirichlet Processes," NIPS, 2005.
[13] R A Harshman, "Foundations of the Parafac Procedure," Work. Pap. Phonetics, 1970.
[14] C E Rasmussen and C K I Williams, Gaussian Processes for Machine Learning, MIT Press, 2006.
[15] M Bryant and E B Sudderth, "Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes," NIPS, pp. 1–9, 2012.
[16] P Liang, S Petrov, M I Jordan, and D Klein, "The Infinite PCFG using Hierarchical Dirichlet Processes," Conf. Empir. Methods Nat. Lang. Process. Comput. Nat. Lang. Learn., pp. 688–697, 2007.
[17] Y W Teh, K Kurihara, and M Welling, "Collapsed Variational Inference for HDP," NIPS, 2007.
[18] D M Blei and M I Jordan, "Variational Inference for Dirichlet Process Mixtures," Bayesian Anal., 2004.
[19] K Kurihara, M Welling, and N Vlassis, "Accelerated Variational Dirichlet Process Mixtures," NIPS, 2007.
[20] M J Beal, "Variational Algorithms for Approximate Bayesian Inference," Diss. Univ. London, 2003.
[21] J Paisley and L Carin, "Hidden Markov Models With Stick-Breaking Priors," IEEE Trans. Signal Process., vol. 57, no. 10, pp. 3905–3917, 2009.
[22] S Jain and R M Neal, "Splitting and Merging Components of a Nonconjugate Dirichlet Process Mixture Model," Bayesian Anal., Sept. 2007.
[23] Y Rubner, C Tomasi, and L J Guibas, "The Earth Mover's Distance as a Metric for Image Retrieval," Int. J. Comput. Vis., vol. 40, no. 2, pp. 99–121, 2000.
[24] K Dzirasa, R Fuentes, S Kumar, J M Potes, and M A L Nicolelis, "Chronic in Vivo Multi-Circuit Neurophysiological Recordings in Mice," J. Neurosci. Methods, vol. 195, no. 1, pp. 36–46, Jan. 2011.
[25] M A Tucker, Y Hirota, E J Wamsley, H Lau, A Chaklader, and W Fishbein, "A Daytime Nap Containing Solely Non-REM Sleep Enhances Declarative but not Procedural Memory," Neurobiol. Learn. Mem., vol. 86, no. 2, pp. 241–7, Sept. 2006.
[26] A Rolls, D Colas, A Adamantidis, M Carter, T Lanre-Amos, H C Heller, and L de Lecea, "Optogenetic Disruption of Sleep Continuity Impairs Memory Consolidation," PNAS, vol. 108, no. 32, pp. 13305–10, Aug. 2011.
Clamping Variables and Approximate Inference

Adrian Weller, Columbia University, New York, NY 10027, adrian@cs.columbia.edu
Tony Jebara, Columbia University, New York, NY 10027, jebara@cs.columbia.edu

Abstract

It was recently proved using graph covers (Ruozzi, 2012) that the Bethe partition function is upper bounded by the true partition function for a binary pairwise model that is attractive. Here we provide a new, arguably simpler proof from first principles. We make use of the idea of clamping a variable to a particular value. For an attractive model, we show that summing over the Bethe partition functions for each sub-model obtained after clamping any variable can only raise (and hence improve) the approximation. In fact, we derive a stronger result that may have other useful implications. Repeatedly clamping until we obtain a model with no cycles, where the Bethe approximation is exact, yields the result. We also provide a related lower bound on a broad class of approximate partition functions of general pairwise multi-label models that depends only on the topology. We demonstrate that clamping a few wisely chosen variables can be of practical value by dramatically reducing approximation error.

1 Introduction

Marginal inference and estimating the partition function for undirected graphical models, also called Markov random fields (MRFs), are fundamental problems in machine learning. Exact solutions may be obtained via variable elimination or the junction tree method, but unless the treewidth is bounded, this can take exponential time (Pearl, 1988; Lauritzen and Spiegelhalter, 1988; Wainwright and Jordan, 2008). Hence, many approximate methods have been developed. Of particular note is the Bethe approximation, which is widely used via the loopy belief propagation algorithm (LBP).
Though this is typically fast and results are often accurate, in general it may converge only to a local optimum of the Bethe free energy, or may not converge at all (McEliece et al., 1998; Murphy et al., 1999). Another drawback is that, until recently, there were no guarantees on whether the returned approximation to the partition function was higher or lower than the true value. Both aspects are in contrast to methods such as the tree-reweighted approximation (TRW, Wainwright et al., 2005), which features a convex free energy and is guaranteed to return an upper bound on the true partition function. Nevertheless, empirically, LBP or convergent implementations of the Bethe approximation often outperform other methods (Meshi et al., 2009; Weller et al., 2014).

Using the method of graph covers (Vontobel, 2013), Ruozzi (2012) recently proved that the optimum Bethe partition function provides a lower bound for the true value, i.e. ZB ≤ Z, for discrete binary MRFs with submodular log potential cost functions of any arity. Here we provide an alternative proof for attractive binary pairwise models. Our proof does not rely on any methods of loop series (Sudderth et al., 2007) or graph covers, but rather builds on fundamental properties of the derivatives of the Bethe free energy. Our approach applies only to binary models (whereas Ruozzi, 2012 applies to any arity), but we obtain stronger results for this class, from which ZB ≤ Z easily follows. We use the idea of clamping a variable and considering the approximate sub-partition functions over the remaining variables, as the clamped variable takes each of its possible values.

Notation and preliminaries are presented in §2. In §3, we derive a lower bound, not just for the standard Bethe partition function, but for a range of approximate partition functions over multi-label
In §4, we consider the Bethe approximation for attractive binary pairwise models. We show that clamping any variable and summing the Bethe sub-partition functions over the remaining variables can only increase (hence improve) the approximation. Together with a similar argument to that used in §3, this proves that ZB ≤Z for this class of model. To derive the result, we analyze how the optimum of the Bethe free energy varies as the singleton marginal of one particular variable is fixed to different values in [0, 1]. Remarkably, we show that the negative of this optimum, less the singleton entropy of the variable, is a convex function of the singleton marginal. This may have further interesting implications. We present experiments in §5, demonstrating that clamping even a single variable selected using a simple heuristic can be very beneficial. 1.1 Related work Branching or conditioning on a variable (or set of variables) and approximating over the remaining variables has a fruitful history in algorithms such as branch-and-cut (Padberg and Rinaldi, 1991; Mitchell, 2002), work on resolution versus search (Rish and Dechter, 2000) and various approaches of (Darwiche, 2009, Chapter 8). Cutset conditioning was discussed by Pearl (1988) and refined by Peot and Shachter (1991) as a method to render the remaining topology acyclic in preparation for belief propagation. Eaton and Ghahramani (2009) developed this further, introducing the conditioned belief propagation algorithm together with back-belief-propagation as a way to help identify which variables to clamp. Liu et al. (2012) discussed feedback message passing for inference in Gaussian (not discrete) models, deriving strong results for the particular class of attractive models. Choi and Darwiche (2008) examined methods to approximate the partition function by deleting edges. 2 Preliminaries We consider a pairwise model with n variables X1, . . . , Xn and graph topology (V, E): V contains nodes {1, . . 
., n} where i corresponds to Xi, and E ⊆ V × V contains an edge for each pairwise relationship. We sometimes consider multi-label models where each variable Xi takes values in {0, . . . , Li − 1}, and sometimes restrict attention to binary models where Xi ∈ B = {0, 1} ∀i. Let x = (x1, . . . , xn) be a configuration of all the variables, and N(i) be the neighbors of i. For all analysis of binary models, to be consistent with Welling and Teh (2001) and Weller and Jebara (2013), we assume a reparameterization such that p(x) = e^{−E(x)}/Z, where the energy of a configuration is E(x) = −Σ_{i∈V} θ_i x_i − Σ_{(i,j)∈E} W_ij x_i x_j, with singleton potentials θi and edge weights Wij.

2.1 Clamping a variable and related definitions

We shall find it useful to examine sub-partition functions obtained by clamping one particular variable Xi; that is, we consider the model on the n − 1 variables X1, . . . , X_{i−1}, X_{i+1}, . . . , Xn obtained by setting Xi equal to one of its possible values. Let Z|_{Xi=a} be the sub-partition function on the model obtained by setting Xi = a, a ∈ {0, . . . , Li − 1}. Observe that true partition functions and marginals are self-consistent in the following sense:

Z = Σ_{j=0}^{Li−1} Z|_{Xi=j}  ∀i ∈ V,    p(Xi = a) = Z|_{Xi=a} / Σ_{j=0}^{Li−1} Z|_{Xi=j}.    (1)

This is not true in general for approximate forms of inference,¹ but if the model has no cycles, then in many cases of interest, (1) does hold, motivating the following definition.

Definition 1. We say an approximation to the log-partition function ZA is ExactOnTrees if it may be specified by the variational formula −log ZA = min_{q∈Q} F_A(q) where: (1) Q is some compact space that includes the marginal polytope; (2) F_A is a function of the (pseudo-)distribution q (typically a free energy approximation); and (3) for any model, whenever a subset of variables V′ ⊆ V is clamped to particular values P = {p_i ∈ {0, . . . , Li − 1}, ∀Xi ∈ V′}, i.e. ∀Xi ∈ V′ we constrain

¹For example, consider a single cycle with positive edge weights.
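The self-consistency property (1) is easy to verify by brute force on a small binary model under the reparameterization above; the following is a standalone numerical sketch:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
theta = rng.normal(size=n)                     # singleton potentials theta_i
W = np.triu(rng.normal(size=(n, n)), k=1)      # edge weights W_ij on pairs i < j

def weight(x):
    """Unnormalized probability exp(-E(x)), with
    E(x) = -sum_i theta_i x_i - sum_{(i,j)} W_ij x_i x_j."""
    x = np.asarray(x)
    return np.exp(theta @ x + x @ W @ x)

def Z(clamp={}):
    """Brute-force (sub-)partition function, with some variables clamped."""
    return sum(weight(x) for x in itertools.product([0, 1], repeat=n)
               if all(x[i] == v for i, v in clamp.items()))

# self-consistency (1): Z = Z|_{Xi=0} + Z|_{Xi=1} for every i,
# and the marginal p(Xi = 1) is the corresponding ratio
for i in range(n):
    assert np.isclose(Z(), Z({i: 0}) + Z({i: 1}))
```

Approximate inference breaks exactly this identity on loopy graphs, which is what the ExactOnTrees / NotSmallerOnTrees definitions are built around.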
This has ZB < Z (Weller et al., 2014), yet after clamping any variable, each resulting sub-model is a tree, hence the Bethe approximation is exact.

Xi = pi, which we write as V′ ← P, and the remaining induced graph on V \ V′ is acyclic, then the approximation is exact, i.e. ZA|_{V′←P} = Z|_{V′←P}.

Similarly, define an approximation to be in the broader class NotSmallerOnTrees if it satisfies all of the above properties except that condition (3) is relaxed to ZA|_{V′←P} ≥ Z|_{V′←P}. Note that the Bethe approximation is ExactOnTrees, and approximations such as TRW are NotSmallerOnTrees, in both cases whether using the marginal polytope or any relaxation thereof, such as the cycle or local polytope (Weller et al., 2014).

We shall derive bounds on ZA with the following idea: obtain upper or lower bounds on the approximation achieved by clamping and summing over the approximate sub-partition functions; repeat until an acyclic graph is reached, where the approximation is either exact or bounded. We introduce the following related concept from graph theory.

Definition 2. A feedback vertex set (FVS) of a graph is a set of vertices whose removal leaves a graph without cycles.

Determining whether there exists a feedback vertex set of a given size is a classical NP-hard problem (Karp, 1972). There is a significant literature on determining the minimum cardinality of an FVS of a graph G, which we write as ν(G). Further, if vertices are assigned non-negative weights, then a natural problem is to find an FVS with minimum weight, which we write as νw(G). An FVS with a factor-2 approximation to νw(G) may be found in time O(|V| + |E| log |E|) (Bafna et al., 1999). For pairwise multi-label MRFs, we may create a weighted graph from the topology by assigning each node i a weight of log Li, and then compute the corresponding νw(G).
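For intuition on Definition 2, a minimum FVS can be found by exhaustive search on small graphs (this brute-force sketch is ours and is exponential; it is only the approximation algorithms cited above that are efficient):

```python
import itertools

def is_acyclic(vertices, edges):
    """Check whether the undirected graph on `vertices` is a forest, by
    repeatedly pruning vertices of degree <= 1; a forest prunes away entirely."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) <= 1:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return not adj          # anything left must lie on a cycle

def min_fvs_size(vertices, edges):
    """nu(G): size of the smallest vertex set whose removal leaves no cycles."""
    vertices = list(vertices)
    for k in range(len(vertices) + 1):
        for S in itertools.combinations(vertices, k):
            if is_acyclic([v for v in vertices if v not in S], edges):
                return k

cycle6 = [(i, (i + 1) % 6) for i in range(6)]
assert min_fvs_size(range(6), cycle6) == 1   # one deletion breaks a single cycle
```

For a binary model on the 6-cycle, Theorem 4 then gives ZA ≥ Z · e^{−log 2} = Z/2, matching the single-cycle remark below.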
3 Lower Bound on Approximate Partition Functions

We obtain a lower bound on any approximation that is NotSmallerOnTrees by observing that ZA ≥ ZA|_{Xn=j} ∀j from the definition (the sub-partition functions optimize over a subset).

Theorem 3. If a pairwise MRF has a topology with an FVS of size n and corresponding values L1, . . . , Ln, then for any approximation that is NotSmallerOnTrees, ZA ≥ Z / ∏_{i=1}^n Li.

Proof. We proceed by induction on n. The base case n = 0 holds by the assumption that ZA is NotSmallerOnTrees. Now assume the result holds for n − 1 and consider an MRF which requires n vertices to be deleted to become acyclic. Clamp variable Xn at each of its Ln values to create the approximation Z_A^{(n)} := Σ_{j=0}^{Ln−1} ZA|_{Xn=j}. By the definition of NotSmallerOnTrees, ZA ≥ ZA|_{Xn=j} ∀j; and by the inductive hypothesis, ZA|_{Xn=j} ≥ Z|_{Xn=j} / ∏_{i=1}^{n−1} Li. Hence,

Ln ZA ≥ Z_A^{(n)} = Σ_{j=0}^{Ln−1} ZA|_{Xn=j} ≥ (1 / ∏_{i=1}^{n−1} Li) Σ_{j=0}^{Ln−1} Z|_{Xn=j} = Z / ∏_{i=1}^{n−1} Li.

By considering an FVS with minimum ∏_{i=1}^n Li, Theorem 3 is equivalent to the following result.

Theorem 4. For any approximation that is NotSmallerOnTrees, ZA ≥ Z e^{−νw}.

This bound applies to general multi-label models with any pairwise and singleton potentials (the model need not be attractive). The bound is trivial for a tree, but already for a binary model with one cycle we obtain that ZB ≥ Z/2 for any potentials, even over the marginal polytope. The bound is tight, at least for uniform Li = L ∀i.² The bound depends only on the vertices that must be deleted to yield a graph with no cycles, not on the number of cycles (which clearly upper bounds ν(G)). For binary models, exact inference takes time Θ((|V| − ν(G)) 2^{ν(G)}).

4 Attractive Binary Pairwise Models

In this section, we restrict attention to the standard Bethe approximation. We shall use results derived in (Welling and Teh, 2001) and (Weller and Jebara, 2013), and adopt similar notation.
The Bethe partition function, ZB, is defined as in Definition 1, where Q is set as the local polytope relaxation and F_A is the Bethe free energy, given by F(q) = E_q(E) − S_B(q), where E is the energy and S_B is the Bethe pairwise entropy approximation (see Wainwright and Jordan, 2008 for details).

²For example, in the binary case: consider a sub-MRF on a cycle with no singleton potentials and uniform, very high edge weights. This can be shown to have ZB ≈ Z/2 (Weller et al., 2014). Now connect ν of these together in a chain using very weak edges (this construction is due to N. Ruozzi).

We consider attractive binary pairwise models and apply similar clamping ideas to those used in §3. In §4.1 we show that clamping can never decrease the approximate Bethe partition function, then use this result in §4.2 to prove that ZB ≤ Z for this class of model. In deriving the clamping result of §4.1, in Theorem 7 we show an interesting, stronger result on how the optimum Bethe free energy changes as the singleton marginal qi is varied over [0, 1].

4.1 Clamping a variable can only increase the Bethe partition function

Let ZB be the Bethe partition function for the original model. Clamp variable Xi and form the new approximation Z_B^{(i)} = Σ_{j=0}^{1} ZB|_{Xi=j}. In this section, we shall prove the following theorem.

Theorem 5. For an attractive binary pairwise model and any variable Xi, Z_B^{(i)} ≥ ZB.

We first introduce notation and derive preliminary results, which build to Theorem 7, our strongest result, from which Theorem 5 easily follows. Let q = (q1, . . . , qn) be a location in n-dimensional pseudomarginal space, i.e. qi is the singleton pseudomarginal q(Xi = 1) in the local polytope. Let F(q) be the Bethe free energy computed at q using the Bethe-optimal pairwise pseudomarginals given by the formula for q(Xi = 1, Xj = 1) = ξij(qi, qj, Wij) in (Welling and Teh, 2001), i.e.
for an attractive model, for edge (i, j), $\xi_{ij}$ is the lower root of
$$\alpha_{ij}\,\xi_{ij}^2 - [1 + \alpha_{ij}(q_i + q_j)]\,\xi_{ij} + (1 + \alpha_{ij})\, q_i q_j = 0, \qquad (2)$$
where $\alpha_{ij} = e^{W_{ij}} - 1$, and $W_{ij} > 0$ is the strength (associativity) of the log-potential edge weight. Let $G(q) = -F(q)$. Note that $\log Z_B = \max_{q \in [0,1]^n} G(q)$. For any $x \in [0, 1]$, consider the optimum constrained by holding $q_i = x$ fixed, i.e. let $\log Z_{Bi}(x) = \max_{q \in [0,1]^n : q_i = x} G(q)$. Let $r^*(x) = (r^*_1(x), \dots, r^*_{i-1}(x), r^*_{i+1}(x), \dots, r^*_n(x))$, with corresponding pairwise terms $\{\xi^*_{ij}\}$, be an arg max at which this optimum occurs. Observe that $\log Z_{Bi}(0) = \log Z_B|_{X_i=0}$, $\log Z_{Bi}(1) = \log Z_B|_{X_i=1}$ and $\log Z_B = \log Z_{Bi}(q^*_i) = \max_{q \in [0,1]^n} G(q)$, where $q^*_i$ is a location of $X_i$ at which the global optimum is achieved. To prove Theorem 5, we need a sufficiently good upper bound on $\log Z_{Bi}(q^*_i)$ compared to $\log Z_{Bi}(0)$ and $\log Z_{Bi}(1)$. First we demonstrate what such a bound could be, then prove that it holds. Let $S_i(x) = -x \log x - (1 - x)\log(1 - x)$ be the standard singleton entropy.

Lemma 6 (Demonstrating what would be a sufficiently good upper bound on $\log Z_B$). If $\exists x \in [0, 1]$ such that $\log Z_B \le x \log Z_{Bi}(1) + (1 - x) \log Z_{Bi}(0) + S_i(x)$, then: (i) $Z_{Bi}(0) + Z_{Bi}(1) - Z_B \ge e^m f_c(x)$, where $f_c(x) = 1 + e^c - e^{xc + S_i(x)}$, $m = \min(\log Z_{Bi}(0), \log Z_{Bi}(1))$ and $c = |\log Z_{Bi}(1) - \log Z_{Bi}(0)|$; and (ii) $\forall x \in [0, 1]$, $f_c(x) \ge 0$ with equality iff $x = \sigma(c) = 1/(1 + e^{-c})$, the sigmoid function.

Proof. (i) This follows easily from the assumption. (ii) This is easily checked by differentiating. It is also given in (Koller and Friedman, 2009, Proposition 11.8). See Figure 6 in the Supplement for example plots of the function $f_c(x)$.

Lemma 6 motivates us to consider whether $\log Z_{Bi}(x)$ might be upper bounded by $x \log Z_{Bi}(1) + (1 - x) \log Z_{Bi}(0) + S_i(x)$, i.e. the linear interpolation between $\log Z_{Bi}(0)$ and $\log Z_{Bi}(1)$, plus the singleton entropy term $S_i(x)$. It is easily seen that this would be true if $r^*(q_i)$ were constant.
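The lower root of equation (2) is directly computable with the quadratic formula. The following sketch (our own, with arbitrary example values for $q_i$, $q_j$ and W) checks that the root satisfies (2) and lies in the valid range for a pairwise marginal, $\max(0, q_i + q_j - 1) \le \xi_{ij} \le \min(q_i, q_j)$:

```python
import math

def xi_lower(qi, qj, W):
    """Lower root of eq. (2): the Bethe-optimal pairwise pseudomarginal
    q(Xi=1, Xj=1) for an attractive edge (Welling & Teh, 2001)."""
    a = math.exp(W) - 1.0            # alpha_ij, positive since W > 0
    b = 1.0 + a * (qi + qj)
    c = (1.0 + a) * qi * qj
    disc = b * b - 4.0 * a * c
    return (b - math.sqrt(disc)) / (2.0 * a)

qi, qj, W = 0.3, 0.6, 2.0            # assumed example values
xi = xi_lower(qi, qj, W)
a = math.exp(W) - 1.0
# root satisfies the quadratic (2)
assert abs(a * xi * xi - (1 + a * (qi + qj)) * xi + (1 + a) * qi * qj) < 1e-9
# and is a valid pairwise marginal
assert max(0.0, qi + qj - 1.0) - 1e-9 <= xi <= min(qi, qj) + 1e-9
```

Note that for an attractive edge the root exceeds the independence value $q_i q_j$, reflecting positive correlation.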
In fact, we shall show that $r^*(q_i)$ varies in a particular way which yields the following stronger result, which, together with Lemma 6, will prove Theorem 5.

Theorem 7. Let $A_i(q_i) = \log Z_{Bi}(q_i) - S_i(q_i)$. For an attractive binary pairwise model, $A_i(q_i)$ is a convex function.

Proof. We outline the main points of the proof. Observe that $A_i(x) = \max_{q \in [0,1]^n : q_i = x} G(q) - S_i(x)$, where $G(q) = -F(q)$. Note that there may be multiple arg max locations $r^*(x)$. As shown in (Weller and Jebara, 2013), F is at least thrice differentiable in $(0,1)^n$ and all stationary points lie in the interior $(0,1)^n$. Given our conditions, the 'envelope theorem' of (Milgrom, 1999, Theorem 1) applies, showing that $A_i$ is continuous in [0, 1] with right derivative³
$$A'_{i+}(x) = \max_{r^*(q_i = x)} \frac{\partial}{\partial x}\left[G(q_i = x, r^*(x)) - S_i(x)\right] = \max_{r^*(q_i = x)} \frac{\partial}{\partial x}\left[G(q_i = x, r^*(x))\right] - \frac{dS_i(x)}{dx}. \qquad (3)$$
We shall show that this is non-decreasing, which is sufficient to show the convexity result of Theorem 7. To evaluate the right hand side of (3), we use the derivative shown by Welling and Teh (2001): $\frac{\partial F}{\partial q_i} = -\theta_i + \log Q_i$, where
$$\log Q_i = \log \frac{(1 - q_i)^{d_i - 1} \prod_{j \in N(i)} (q_i - \xi_{ij})}{q_i^{d_i - 1} \prod_{j \in N(i)} (1 + \xi_{ij} - q_i - q_j)} \quad \text{(as in Weller and Jebara, 2013)} \ =\ \log \frac{q_i}{1 - q_i} + \log \prod_{j \in N(i)} Q_{ij},$$
here defining $Q_{ij} = \left(\frac{q_i - \xi_{ij}}{1 + \xi_{ij} - q_i - q_j}\right)\left(\frac{1 - q_i}{q_i}\right)$. A key observation is that the $\log \frac{q_i}{1 - q_i}$ term is exactly $-\frac{dS_i(q_i)}{dq_i}$, and thus cancels the $-\frac{dS_i(x)}{dx}$ term at the end of (3). Hence, $A'_{i+}(q_i) = \max_{r^*(q_i)} \left[ -\sum_{j \in N(i)} \log Q_{ij}(q_i, r^*_j, \xi^*_{ij}) \right]$.⁴ It remains to show that this expression is non-decreasing with $q_i$. We shall show something stronger: that at every arg max $r^*(q_i)$, and for all $j \in N(i)$, $-\log Q_{ij}$ is non-decreasing $\Leftrightarrow$ $v_{ij} = Q_{ij}^{-1}$ is non-decreasing.

Figure 1: 3d plots of $v_{ij} = Q_{ij}^{-1}$, using $\xi_{ij}(q_i, q_j, W)$ from (Welling and Teh, 2001), for (a) W = 1, (b) W = 3, (c) W = 10. Axes: $q_i$, $q_j$, $v = 1/Q_{ij}$.
The result then follows since the max of non-decreasing functions is non-decreasing. See Figure 1 for example plots of the $v_{ij}$ function, and observe that $v_{ij}$ appears to decrease with $q_i$ (which is unhelpful here) while it increases with $q_j$. Now, in an attractive model, the Bethe free energy is submodular, i.e. $\frac{\partial^2 F}{\partial q_i \partial q_j} \le 0$ (Weller and Jebara, 2013), hence as $q_i$ increases, $r^*_j(q_i)$ can only increase (Topkis, 1978). For our purpose, we must show that $\frac{dr^*_j}{dq_i}$ is sufficiently large that $\frac{dv_{ij}}{dq_i} \ge 0$. This forms the remainder of the proof. At any particular arg max $r^*(q_i)$, writing $v = v_{ij}[q_i, r^*_j(q_i), \xi^*_{ij}(q_i, r^*_j(q_i))]$, we have
$$\frac{dv}{dq_i} = \frac{\partial v}{\partial q_i} + \frac{\partial v}{\partial \xi_{ij}} \frac{d\xi^*_{ij}}{dq_i} + \frac{\partial v}{\partial q_j} \frac{dr^*_j}{dq_i} = \frac{\partial v}{\partial q_i} + \frac{\partial v}{\partial \xi_{ij}} \frac{\partial \xi^*_{ij}}{\partial q_i} + \frac{dr^*_j}{dq_i}\left( \frac{\partial v}{\partial \xi_{ij}} \frac{\partial \xi^*_{ij}}{\partial q_j} + \frac{\partial v}{\partial q_j} \right). \qquad (4)$$
From (Weller and Jebara, 2013), $\frac{\partial \xi_{ij}}{\partial q_i} = \frac{\alpha_{ij}(q_j - \xi_{ij}) + q_j}{1 + \alpha_{ij}(q_i - \xi_{ij} + q_j - \xi_{ij})}$ and similarly $\frac{\partial \xi_{ij}}{\partial q_j} = \frac{\alpha_{ij}(q_i - \xi_{ij}) + q_i}{1 + \alpha_{ij}(q_j - \xi_{ij} + q_i - \xi_{ij})}$, where $\alpha_{ij} = e^{W_{ij}} - 1$. The other partial derivatives are easily derived: $\frac{\partial v}{\partial q_i} = \frac{q_i(q_j - 1)(1 - q_i) + (1 + \xi_{ij} - q_i - q_j)(q_i - \xi_{ij})}{(1 - q_i)^2 (q_i - \xi_{ij})^2}$, $\frac{\partial v}{\partial \xi_{ij}} = \frac{q_i(1 - q_j)}{(1 - q_i)(q_i - \xi_{ij})^2}$, and $\frac{\partial v}{\partial q_j} = \frac{-q_i}{(1 - q_i)(q_i - \xi_{ij})}$. The only remaining term needed for (4) is $\frac{dr^*_j}{dq_i}$. The following results are proved in the Appendix, subject to a technical requirement that at an arg max, the reduced Hessian $H_{\backslash i}$, i.e. the matrix of second partial derivatives of F after removing the ith row and column, must be non-singular in order to have an invertible locally linear function. Call this required property P. By nature, each $H_{\backslash i}$ is positive semi-definite.

³This result is similar to Danskin's theorem (Bertsekas, 1995). Intuitively, for multiple arg max locations, each may increase at a different rate, so here we must take the max of the derivatives over all the arg max.
⁴We remark that $Q_{ij}$ is the ratio $\left(\frac{p(X_i=1, X_j=0)}{p(X_i=0, X_j=0)}\right) \Big/ \left(\frac{p(X_i=1)}{p(X_i=0)}\right) = \frac{p(X_j=0 \mid X_i=1)}{p(X_j=0 \mid X_i=0)}$.
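The probabilistic reading of $Q_{ij}$ given in footnote 4 is a pure identity on any joint distribution over $(X_i, X_j)$ and can be checked numerically. The joint below is an arbitrary assumption for illustration (any strictly positive joint works):

```python
# arbitrary strictly positive joint p[(xi, xj)] over two binary variables
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
qi = p[(1, 0)] + p[(1, 1)]          # p(Xi = 1)
qj = p[(0, 1)] + p[(1, 1)]          # p(Xj = 1)
xi_pair = p[(1, 1)]                 # pairwise marginal, the role of xi_ij

# Q_ij as defined in the text
Qij = ((qi - xi_pair) / (1 + xi_pair - qi - qj)) * ((1 - qi) / qi)
# the conditional-probability ratio of footnote 4
ratio = (p[(1, 0)] / qi) / (p[(0, 0)] / (1 - qi))   # p(Xj=0|Xi=1)/p(Xj=0|Xi=0)
assert abs(Qij - ratio) < 1e-9
```

The check works because $q_i - \xi_{ij} = p(1,0)$ and $1 + \xi_{ij} - q_i - q_j = p(0,0)$ whenever the pseudomarginals come from a genuine joint distribution.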
If needed, a small perturbation argument allows us to assume that no eigenvalue is 0; then in the limit as the perturbation tends to 0, Theorem 7 holds, since the limit of convex functions is convex. Let $[n] = \{1, \dots, n\}$ and G be the topology of the MRF.

Theorem 8. For any $k \in [n] \setminus i$, let $C_k$ be the connected component of $G \setminus i$ that contains $X_k$. If $C_k + i$ is a tree, then
$$\frac{dr^*_k}{dq_i} = \prod_{(s \to t) \in P(i \rightsquigarrow k)} \frac{\xi^*_{st} - r^*_s r^*_t}{r^*_s (1 - r^*_s)},$$
where $P(i \rightsquigarrow k)$ is the unique path from i to k in $C_k + i$, and for notational convenience we define $r^*_i = q_i$. Proof in Appendix (subject to P).

In fact, this result applies for any combination of attractive and repulsive edges. The result is remarkable, yet also intuitive. In the numerator, $\xi_{st} - q_s q_t = \mathrm{Cov}_q(X_s, X_t)$, which increases with $W_{ij}$ and equals 0 at $W_{ij} = 0$ (Weller and Jebara, 2013), and in the denominator, $q_s(1 - q_s) = \mathrm{Var}_q(X_s)$; hence the ratio is exactly what is called in finance the beta of $X_t$ with respect to $X_s$.⁵ In particular, Theorem 8 shows that for any $j \in N(i)$ whose component is a tree, $\frac{dr^*_j}{dq_i} = \frac{\xi^*_{ij} - q_i r^*_j}{q_i(1 - q_i)}$. The next result shows that in an attractive model, additional edges can only reinforce this sensitivity.

Theorem 9. In an attractive model with edge (i, j), $\frac{dr^*_j(q_i)}{dq_i} \ge \frac{\xi^*_{ij} - q_i r^*_j}{q_i(1 - q_i)}$. Proof in Appendix (subject to P).

Now collecting all terms, substituting into (4), and using (2), after some algebra yields $\frac{dv}{dq_i} \ge 0$, as required to prove Theorem 7. This now also proves Theorem 5.

4.2 The Bethe partition function lower bounds the true partition function

Theorem 5, together with an argument similar to the proof of Theorem 3, easily yields a new proof that $Z_B \le Z$ for an attractive binary pairwise model.

Theorem 10 (first proved by Ruozzi, 2012). For an attractive binary pairwise model, $Z_B \le Z$.

Proof. We shall use induction on n to show that the following statement holds for all n: if an MRF may be rendered acyclic by deleting n vertices $v_1, \dots, v_n$, then $Z_B \le Z$.
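On a single edge (a tree, where the Bethe approximation is exact), the constrained optimum with $q_i$ fixed preserves the conditional $p(X_j \mid X_i)$, so the single-edge case of Theorem 8, $dr^*_j/dq_i = (\xi^*_{ij} - q_i r^*_j)/(q_i(1-q_i))$, reduces to an exact algebraic identity. A minimal sketch under that assumption, with hypothetical conditionals p1 and p0:

```python
# conditionals of an assumed single-edge model: p(Xj=1 | Xi=1), p(Xj=1 | Xi=0)
p1 = 0.8
p0 = 0.3

def r_star(qi):
    """Constrained optimum on a tree keeps p(Xj|Xi), so it is linear in qi."""
    return qi * p1 + (1 - qi) * p0

for qi in (0.2, 0.5, 0.9):
    rj = r_star(qi)
    xi_pair = qi * p1                                # optimal q(Xi=1, Xj=1)
    beta = (xi_pair - qi * rj) / (qi * (1 - qi))     # Theorem 8's "Bethe beta"
    # analytic derivative of r_star is p1 - p0; the beta matches it exactly
    assert abs(beta - (p1 - p0)) < 1e-9
```

The algebra: $\xi - q_i r_j = q_i p_1 - q_i(q_i p_1 + (1-q_i)p_0) = q_i(1-q_i)(p_1 - p_0)$, so dividing by $q_i(1-q_i)$ leaves exactly the slope $p_1 - p_0$.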
The base case n = 0 holds since the Bethe approximation is ExactOnTrees. Now assume the result holds for n − 1 and consider an MRF which requires n vertices to be deleted to become acyclic. Clamp variable $X_n$ and consider $Z_B^{(n)} = \sum_{j=0}^{1} Z_B|_{X_n=j}$. By Theorem 5, $Z_B \le Z_B^{(n)}$; and by the inductive hypothesis, $Z_B|_{X_n=j} \le Z|_{X_n=j}\ \forall j$. Hence, $Z_B \le \sum_{j=0}^{1} Z_B|_{X_n=j} \le \sum_{j=0}^{1} Z|_{X_n=j} = Z$.

5 Experiments

For an approximation which is ExactOnTrees, it is natural to try clamping a few variables to remove cycles from the topology. Here we run experiments on binary pairwise models to explore the potential benefit of clamping even just one variable, though the procedure can be repeated. For exact inference, we used the junction tree algorithm. For approximate inference, we used Frank-Wolfe (FW) (Frank and Wolfe, 1956): at each iteration, a tangent hyperplane to the approximate free energy is computed at the current point, then a move is made to the best computed point along the line to the vertex of the local polytope with the optimum score on the hyperplane. This proceeds monotonically, even on a non-convex surface, hence will converge (since it is bounded), though possibly only to a local optimum, and runtime is not guaranteed. This method typically produces good solutions in reasonable time compared to other approaches (Belanger et al., 2013; Weller et al., 2014) and allows direct comparison to earlier results (Meshi et al., 2009; Weller et al., 2014). To further facilitate comparison, in this section we use the same unbiased reparameterization used by Weller et al. (2014), with $E = -\sum_{i \in V} \theta_i x_i - \sum_{(i,j) \in E} \frac{W_{ij}}{2} [x_i x_j + (1 - x_i)(1 - x_j)]$.

⁵Sudderth et al. (2007) defined a different, symmetric $\beta_{st} = \frac{\xi_{st} - q_s q_t}{q_s(1 - q_s)\, q_t(1 - q_t)}$ for analyzing loop series. In our context, we suggest that the ratio defined above may be a better Bethe beta.
Test models were constructed as follows: for n variables, singleton potentials were drawn $\theta_i \sim U[-T_{max}, T_{max}]$; edge weights were drawn $W_{ij} \sim U[0, W_{max}]$ for attractive models, or $W_{ij} \sim U[-W_{max}, W_{max}]$ for general models. For models with random edges, we constructed Erdős–Rényi random graphs (rejecting disconnected samples), where each edge has independent probability p of being present. To observe the effect of increasing n while maintaining approximately the same average degree, we examined n = 10, p = 0.5 and n = 50, p = 0.1. We also examined models on a complete graph topology with 10 variables for comparison with TRW in (Weller et al., 2014). 100 models were generated for each set of parameters with varying $T_{max}$ and $W_{max}$ values. Results are displayed in Figures 2 to 4, showing average absolute error of $\log Z_B$ vs $\log Z$ and average $\ell_1$ error of singleton marginals. The legend indicates the different methods used: Original is FW on the initial model; then various methods were used to select the variable to clamp, before running FW on the 2 resulting sub-models and combining those results. avg Clamp for $\log Z$ means the average over all possible clampings, whereas all Clamp for marginals computes each singleton marginal as the estimate $\hat{p}_i = Z_B|_{X_i=1} / (Z_B|_{X_i=0} + Z_B|_{X_i=1})$. best Clamp uses the variable which with hindsight gave the best improvement in the $\log Z$ estimate, thereby showing the best possible result for $\log Z$. Similarly, worst Clamp picks the variable which showed worst performance. Where one variable is clamped, the respective marginals are computed thus: for the clamped variable $X_i$, use $\hat{p}_i$ as before; for all others, take the weighted average over the estimated Bethe pseudomarginals on each sub-model, using weights $1 - \hat{p}_i$ and $\hat{p}_i$ for the sub-models with $X_i = 0$ and $X_i = 1$ respectively. maxW and Mpower are heuristics to try to pick a good variable in advance. Ideally, we would like to break heavy cycles, but searching for these is NP-hard.
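The recombination rule $\hat{p}_i = Z_B|_{X_i=1}/(Z_B|_{X_i=0} + Z_B|_{X_i=1})$ can be sanity-checked with exact partition functions standing in for $Z_B$: in that case the estimate recovers the true marginal exactly (with approximate sub-model values the two of course differ). A brute-force sketch on an assumed toy model, our own illustration:

```python
import itertools
import math

theta = {0: 0.2, 1: -0.1, 2: 0.4}               # assumed singleton fields
W = {(0, 1): 0.8, (1, 2): 0.8, (0, 2): -0.5}    # assumed edge weights

def Z(clamp={}):
    """Exact partition function, optionally with clamped variables."""
    total = 0.0
    for x in itertools.product((0, 1), repeat=3):
        if any(x[i] != s for i, s in clamp.items()):
            continue
        e = sum(theta[i] * x[i] for i in theta)
        e += sum(w * x[i] * x[j] for (i, j), w in W.items())
        total += math.exp(e)
    return total

i = 1
p_hat = Z({i: 1}) / (Z({i: 0}) + Z({i: 1}))     # clamp-based estimate
p_true = Z({i: 1}) / Z()                        # exact marginal p(Xi = 1)
assert abs(p_hat - p_true) < 1e-9               # exact when sub-Z's are exact
```

The weighted-average rule for the remaining variables is the analogous mixture of sub-model marginals with weights $1 - \hat{p}_i$ and $\hat{p}_i$.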
maxW is a simple $O(|E|)$ method which picks the variable $X_i$ attaining $\max_{i \in V} \sum_{j \in N(i)} |W_{ij}|$, and can be seen to perform well (Liu et al., 2012 proposed the same maxW approach for inference in Gaussian models). One way in which maxW can make a poor selection is to choose a variable at the centre of a large star configuration but far from any cycle. Mpower attempts to avoid this by considering the convergent series of powers of a modified W matrix, but on the examples shown, this did not perform significantly better. See §8.1 in the Appendix for more details on Mpower and further experimental results. FW provides no runtime guarantee when optimizing over a non-convex surface such as the Bethe free energy, but across all parameters, the average combined runtime on the two clamped sub-models was of the same order of magnitude as that for the original model; see Figure 5.

6 Discussion

The results of §4 immediately also apply to any binary pairwise model where a subset of variables may be flipped to yield an attractive model, i.e. where the topology has no frustrated cycle (Weller et al., 2014), and also to any model that may be reduced to an attractive binary pairwise model (Schlesinger and Flach, 2006; Zivny et al., 2009). For this class, together with the lower bound of §3, we have sandwiched the range of $Z_B$ (equivalently, given $Z_B$, we have sandwiched the range of the true partition function Z) and bounded its error; further, clamping any variable, solving for the optimum $\log Z_B$ on sub-models and summing is guaranteed to be more accurate than solving on the original model. In some cases, it may also be faster; indeed, some algorithms such as LBP may fail on the original model but perform well on clamped sub-models. Methods presented may prove useful for analyzing general (non-attractive) models, or for other applications. As one example, it is known that the Bethe free energy is convex for an MRF whose topology has at most one cycle (Pakzad and Anantharam, 2002).
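The maxW selection itself is a single pass over the edge weights. A minimal sketch with assumed weights (our own illustration of the heuristic):

```python
# maxW: pick the variable with the largest total absolute incident weight
W = {(0, 1): 1.5, (1, 2): -2.0, (2, 3): 0.5, (0, 3): 0.25}   # assumed weights

strength = {}
for (i, j), w in W.items():
    strength[i] = strength.get(i, 0.0) + abs(w)
    strength[j] = strength.get(j, 0.0) + abs(w)

clamp_var = max(strength, key=strength.get)
assert clamp_var == 1     # |1.5| + |-2.0| = 3.5 beats every other variable
```

Each edge is visited once, so the scan is $O(|E|)$ as stated; the star-configuration failure mode arises exactly because incident weight says nothing about cycle membership.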
In analyzing the Hessian of the Bethe free energy, we are able to leverage this to show the following result, which may be useful for optimization (proof in Appendix; this result was conjectured by N. Ruozzi).

Lemma 11. In a binary pairwise MRF (attractive or repulsive edges, any topology), for any subset of variables $S \subseteq V$ whose induced topology contains at most one cycle, the Bethe free energy (using optimum pairwise marginals) over S, holding variables $V \setminus S$ at fixed singleton marginals, is convex.

In §5, clamping appears to be very helpful, especially for attractive models with low singleton potentials where results are excellent (overcoming TRW's advantage in this context), but also for general models, particularly with the simple maxW selection heuristic. We can observe some decline in benefit as n grows, but this is not surprising when clamping just a single variable. Note, however,

Figure 2: Average errors vs true, complete graph on n = 10. TRW in pink. Consistent legend throughout (Original; avg/all Clamp; maxW Clamp; best Clamp; worst Clamp; Mpower; TRW). Panels: (a) attractive log Z, Tmax = 0.1; (b) attractive marginals, Tmax = 0.1; (c) general log Z, Tmax = 2; (d) general marginals, Tmax = 2. Horizontal axis: maximum coupling strength Wmax.
Figure 3: Average errors vs true, random graph on n = 10, p = 0.5. Consistent legend throughout. Panels: (a) attractive log Z, Tmax = 0.1; (b) attractive marginals, Tmax = 0.1; (c) general log Z, Tmax = 2; (d) general marginals, Tmax = 2. Horizontal axis: maximum coupling strength Wmax.

Figure 4: Average errors vs true, random graph on n = 50, p = 0.1. Consistent legend throughout. Panels as in Figure 3.

Figure 5: Left: Average ratio of combined sub-model runtimes to original runtime (using maxW, other choices are similar), for (a) attractive and (b) general random graphs (n = 10 and n = 50, Tmax = 0.1 and Tmax = 2). (c) Blue (dashed red) edges are attractive (repulsive) with edge weight +2 (−2). No singleton potentials.
Right: Example model where clamping any variable worsens the Bethe approximation to log Z.

that non-attractive models exist such that clamping and summing over any variable can lead to a worse Bethe approximation of log Z; see Figure 5c for a simple example on four variables. It will be interesting to explore the extent to which our results may be generalized beyond binary pairwise models. Further, it is tempting to speculate that similar results may be found for other approximations. For example, some methods that upper bound the partition function, such as TRW, might always yield a lower (hence better) approximation when a variable is clamped.

Acknowledgments. We thank Nicholas Ruozzi for careful reading, and Nicholas, David Sontag, Aryeh Kontorovich and Tomaž Slivnik for helpful discussion and comments. This work was supported in part by NSF grants IIS-1117631 and CCF-1302269.

References

V. Bafna, P. Berman, and T. Fujito. A 2-approximation algorithm for the undirected feedback vertex set problem. SIAM Journal on Discrete Mathematics, 12(3):289–297, 1999.
D. Belanger, D. Sheldon, and A. McCallum. Marginal inference in MRFs using Frank-Wolfe. In NIPS Workshop on Greedy Optimization, Frank-Wolfe and Friends, December 2013.
D. Bertsekas. Nonlinear Programming. Athena Scientific, 1995.
A. Choi and A. Darwiche. Approximating the partition function by deleting and then correcting for model edges. In Uncertainty in Artificial Intelligence (UAI), 2008.
A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
F. Eaton and Z. Ghahramani. Choosing a variable to clamp: Approximate inference using conditioned belief propagation. In Artificial Intelligence and Statistics, 2009.
K. Fan. Topological proofs for certain theorems on matrices with non-negative elements. Monatshefte für Mathematik, 62:219–237, 1958.
M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95–110, 1956.
ISSN 1931-9193. doi: 10.1002/nav.3800030109.
R. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103. New York: Plenum, 1972.
D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
S. Lauritzen and D. Spiegelhalter. Local computations with probabilities on graphical structures and their application to expert systems. Journal of the Royal Statistical Society, Series B, 50:157–224, 1988.
Y. Liu, V. Chandrasekaran, A. Anandkumar, and A. Willsky. Feedback message passing for inference in Gaussian graphical models. IEEE Transactions on Signal Processing, 60(8):4135–4150, 2012.
R. McEliece, D. MacKay, and J. Cheng. Turbo decoding as an instance of Pearl's "Belief Propagation" algorithm. IEEE Journal on Selected Areas in Communications, 16(2):140–152, 1998.
O. Meshi, A. Jaimovich, A. Globerson, and N. Friedman. Convexifying the Bethe free energy. In UAI, 2009.
P. Milgrom. The envelope theorems. Department of Economics, Stanford University, Mimeo, 1999. URL http://www-siepr.stanford.edu/workp/swp99016.pdf.
J. Mitchell. Branch-and-cut algorithms for combinatorial optimization problems. Handbook of Applied Optimization, pages 65–77, 2002.
K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Uncertainty in Artificial Intelligence (UAI), 1999.
M. Padberg and G. Rinaldi. A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Review, 33(1):60–100, 1991.
P. Pakzad and V. Anantharam. Belief propagation and statistical physics. Princeton University, 2002.
J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, 1988.
M. Peot and R. Shachter. Fusion and propagation with multiple observations in belief networks. Artificial Intelligence, 48(3):299–318, 1991.
I. Rish and R. Dechter.
Resolution versus search: Two strategies for SAT. Journal of Automated Reasoning, 24(1-2):225–275, 2000.
N. Ruozzi. The Bethe partition function of log-supermodular graphical models. In Neural Information Processing Systems, 2012.
D. Schlesinger and B. Flach. Transforming an arbitrary minsum problem into a binary one. Technical report, Dresden University of Technology, 2006.
E. Sudderth, M. Wainwright, and A. Willsky. Loop series and Bethe variational bounds in attractive graphical models. In NIPS, 2007.
D. Topkis. Minimizing a submodular function on a lattice. Operations Research, 26(2):305–321, 1978.
P. Vontobel. Counting in graph covers: A combinatorial characterization of the Bethe entropy function. IEEE Transactions on Information Theory, 59(9):6018–6048, September 2013. ISSN 0018-9448.
M. Wainwright and M. Jordan. Graphical models, exponential families and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
A. Weller and T. Jebara. Bethe bounds and approximating the global optimum. In AISTATS, 2013.
A. Weller and T. Jebara. Approximating the Bethe partition function. In UAI, 2014.
A. Weller, K. Tang, D. Sontag, and T. Jebara. Understanding the Bethe approximation: When and how can it go wrong? In Uncertainty in Artificial Intelligence (UAI), 2014.
M. Welling and Y. Teh. Belief optimization for binary networks: A stable alternative to loopy belief propagation. In Uncertainty in Artificial Intelligence (UAI), 2001.
S. Zivny, D. Cohen, and P. Jeavons. The expressive power of binary submodular functions. Discrete Applied Mathematics, 157(15):3347–3358, 2009.
Neural Word Embedding as Implicit Matrix Factorization

Omer Levy, Department of Computer Science, Bar-Ilan University, omerlevy@gmail.com
Yoav Goldberg, Department of Computer Science, Bar-Ilan University, yoav.goldberg@gmail.com

Abstract

We analyze skip-gram with negative-sampling (SGNS), a word embedding method introduced by Mikolov et al., and show that it is implicitly factorizing a word-context matrix, whose cells are the pointwise mutual information (PMI) of the respective word and context pairs, shifted by a global constant. We find that another embedding method, NCE, is implicitly factorizing a similar matrix, where each cell is the (shifted) log conditional probability of a word given its context. We show that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks. When dense low-dimensional vectors are preferred, exact factorization with SVD can achieve solutions that are at least as good as SGNS's solutions for word similarity tasks. On analogy questions SGNS remains superior to SVD. We conjecture that this stems from the weighted nature of SGNS's factorization.

1 Introduction

Most tasks in natural language processing and understanding involve looking at words, and could benefit from word representations that do not treat individual words as unique symbols, but instead reflect similarities and dissimilarities between them. The common paradigm for deriving such representations is based on the distributional hypothesis of Harris [15], which states that words in similar contexts have similar meanings. This has given rise to many word representation methods in the NLP literature, the vast majority of which can be described in terms of a word-context matrix M in which each row i corresponds to a word, each column j to a context in which the word appeared, and each matrix entry $M_{ij}$ corresponds to some association measure between the word and the context.
Words are then represented as rows in M or in a dimensionality-reduced matrix based on M. Recently, there has been a surge of work proposing to represent words as dense vectors, derived using various training methods inspired by neural-network language modeling [3, 9, 23, 21]. These representations, referred to as "neural embeddings" or "word embeddings", have been shown to perform well in a variety of NLP tasks [26, 10, 1]. In particular, a sequence of papers by Mikolov and colleagues [20, 21] culminated in the skip-gram with negative-sampling (SGNS) training method, which is both efficient to train and provides state-of-the-art results on various linguistic tasks. The training method (as implemented in the word2vec software package) is highly popular, but not well understood. While it is clear that the training objective follows the distributional hypothesis – by trying to maximize the dot-product between the vectors of frequently occurring word-context pairs, and minimize it for random word-context pairs – very little is known about the quantity being optimized by the algorithm, or the reason it is expected to produce good word representations. In this work, we aim to broaden the theoretical understanding of neural-inspired word embeddings. Specifically, we cast SGNS's training method as weighted matrix factorization, and show that its objective is implicitly factorizing a shifted PMI matrix – the well-known word-context PMI matrix from the word-similarity literature, shifted by a constant offset. A similar result holds also for the NCE embedding method of Mnih and Kavukcuoglu [24]. While it is impractical to directly use the very high-dimensional and dense shifted PMI matrix, we propose to approximate it with the positive shifted PMI matrix (Shifted PPMI), which is sparse. Shifted PPMI is far better at optimizing SGNS's objective, and performs slightly better than word2vec-derived vectors on several linguistic tasks.
Finally, we suggest a simple spectral algorithm that is based on performing SVD over the Shifted PPMI matrix. The spectral algorithm outperforms both SGNS and the Shifted PPMI matrix on the word similarity tasks, and is scalable to large corpora. However, it lags behind the SGNS-derived representation on word-analogy tasks. We conjecture that this behavior is related to the fact that SGNS performs weighted matrix factorization, giving more influence to frequent pairs, as opposed to SVD, which gives the same weight to all matrix cells. While the weighted and non-weighted objectives share the same optimal solution (perfect reconstruction of the shifted PMI matrix), they result in different generalizations when combined with dimensionality constraints.

2 Background: Skip-Gram with Negative Sampling (SGNS)

Our departure point is SGNS – the skip-gram neural embedding model introduced in [20], trained using the negative-sampling procedure presented in [21]. In what follows, we summarize the SGNS model and introduce our notation. A detailed derivation of the SGNS model is available in [14].

Setting and Notation. The skip-gram model assumes a corpus of words $w \in V_W$ and their contexts $c \in V_C$, where $V_W$ and $V_C$ are the word and context vocabularies. In [20, 21] the words come from an unannotated textual corpus of words $w_1, w_2, \dots, w_n$ (typically n is in the billions) and the contexts for word $w_i$ are the words surrounding it in an L-sized window $w_{i-L}, \dots, w_{i-1}, w_{i+1}, \dots, w_{i+L}$. Other definitions of contexts are possible [18]. We denote the collection of observed word and context pairs as D. We use $\#(w, c)$ to denote the number of times the pair (w, c) appears in D. Similarly, $\#(w) = \sum_{c' \in V_C} \#(w, c')$ and $\#(c) = \sum_{w' \in V_W} \#(w', c)$ are the number of times w and c occurred in D, respectively. Each word $w \in V_W$ is associated with a vector $\vec{w} \in \mathbb{R}^d$ and similarly each context $c \in V_C$ is represented as a vector $\vec{c} \in \mathbb{R}^d$, where d is the embedding's dimensionality.
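The counts $\#(w, c)$, $\#(w)$ and $\#(c)$ can be collected from a corpus in a few lines; the toy corpus and window size below are assumptions for illustration:

```python
from collections import Counter

corpus = "the quick fox jumps over the lazy dog".split()
L = 2                                          # window size

D = []                                         # observed (word, context) pairs
for i, w in enumerate(corpus):
    for j in range(max(0, i - L), min(len(corpus), i + L + 1)):
        if j != i:
            D.append((w, corpus[j]))

n_wc = Counter(D)                              # #(w, c)
n_w = Counter(w for w, _ in D)                 # #(w)
n_c = Counter(c for _, c in D)                 # #(c)

# the marginal counts are row/column sums of #(w, c), as defined in the text
assert n_w["quick"] == sum(n for (w, _), n in n_wc.items() if w == "quick")
assert sum(n_wc.values()) == len(D)
```

With this symmetric window definition, D contains one ordered pair per (center, neighbor) position, so $|D|$ is roughly $2L$ times the corpus length, minus boundary effects.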
The entries in the vectors are latent, and treated as parameters to be learned. We sometimes refer to the vectors $\vec{w}$ as rows in a $|V_W| \times d$ matrix W, and to the vectors $\vec{c}$ as rows in a $|V_C| \times d$ matrix C. In such cases, $W_i$ ($C_i$) refers to the vector representation of the ith word (context) in the corresponding vocabulary. When referring to embeddings produced by a specific method x, we will usually use $W^x$ and $C^x$ explicitly, but may use just W and C when the method is clear from the discussion.

SGNS's Objective. Consider a word-context pair (w, c). Did this pair come from the observed data D? Let $P(D = 1|w, c)$ be the probability that (w, c) came from the data, and $P(D = 0|w, c) = 1 - P(D = 1|w, c)$ the probability that (w, c) did not. The distribution is modeled as:
$$P(D = 1|w, c) = \sigma(\vec{w} \cdot \vec{c}) = \frac{1}{1 + e^{-\vec{w} \cdot \vec{c}}}$$
where $\vec{w}$ and $\vec{c}$ (each a d-dimensional vector) are the model parameters to be learned. The negative-sampling objective tries to maximize $P(D = 1|w, c)$ for observed (w, c) pairs while maximizing $P(D = 0|w, c)$ for randomly sampled "negative" examples (hence the name "negative sampling"), under the assumption that randomly selecting a context for a given word is likely to result in an unobserved (w, c) pair. SGNS's objective for a single (w, c) observation is then:
$$\log \sigma(\vec{w} \cdot \vec{c}) + k \cdot \mathbb{E}_{c_N \sim P_D}\left[\log \sigma(-\vec{w} \cdot \vec{c}_N)\right] \qquad (1)$$
where k is the number of "negative" samples and $c_N$ is the sampled context, drawn according to the empirical unigram distribution $P_D(c) = \frac{\#(c)}{|D|}$.¹

¹In the algorithm described in [21], the negative contexts are sampled according to $p_{3/4}(c) = \frac{\#(c)^{3/4}}{Z}$ instead of the unigram distribution $\frac{\#(c)}{Z}$. Sampling according to $p_{3/4}$ indeed produces somewhat superior results on some of the semantic evaluation tasks. It is straightforward to modify the PMI metric in a similar fashion by replacing the $p(c)$ term with $p_{3/4}(c)$, and doing so shows similar trends in the matrix-based methods as it does in word2vec's stochastic gradient based training method.
We do not explore this further in this paper, and report results using the unigram distribution.

The objective is trained in an online fashion using stochastic gradient updates over the observed pairs in the corpus D. The global objective then sums over the observed (w, c) pairs in the corpus:
$$\ell = \sum_{w \in V_W} \sum_{c \in V_C} \#(w, c) \left( \log \sigma(\vec{w} \cdot \vec{c}) + k \cdot \mathbb{E}_{c_N \sim P_D}\left[\log \sigma(-\vec{w} \cdot \vec{c}_N)\right] \right) \qquad (2)$$
Optimizing this objective makes observed word-context pairs have similar embeddings, while scattering unobserved pairs. Intuitively, words that appear in similar contexts should have similar embeddings, though we are not familiar with a formal proof that SGNS does indeed maximize the dot-product of similar words.

3 SGNS as Implicit Matrix Factorization

SGNS embeds both words and their contexts into a low-dimensional space $\mathbb{R}^d$, resulting in the word and context matrices W and C. The rows of matrix W are typically used in NLP tasks (such as computing word similarities) while C is ignored. It is nonetheless instructive to consider the product $W \cdot C^\top = M$. Viewed this way, SGNS can be described as factorizing an implicit matrix M of dimensions $|V_W| \times |V_C|$ into two smaller matrices. Which matrix is being factorized? A matrix entry $M_{ij}$ corresponds to the dot product $W_i \cdot C_j = \vec{w}_i \cdot \vec{c}_j$. Thus, SGNS is factorizing a matrix in which each row corresponds to a word $w \in V_W$, each column corresponds to a context $c \in V_C$, and each cell contains a quantity f(w, c) reflecting the strength of association between that particular word-context pair. Such word-context association matrices are very common in the NLP and word-similarity literature, see e.g. [29, 2]. That said, the objective of SGNS (equation 2) does not explicitly state what this association metric is. What can we say about the association function f(w, c)? In other words, which matrix is SGNS factorizing?

3.1 Characterizing the Implicit Matrix

Consider the global objective (equation 2) above. For sufficiently large dimensionality d (i.e.
allowing for a perfect reconstruction of M), each product ⃗w · ⃗c can assume a value independently of the others. Under these conditions, we can treat the objective ℓ as a function of independent ⃗w · ⃗c terms, and find the values of these terms that maximize it. We begin by rewriting equation 2:

ℓ = Σ_{w∈VW} Σ_{c∈VC} #(w, c) log σ(⃗w · ⃗c) + Σ_{w∈VW} Σ_{c∈VC} #(w, c) · k · E_{cN∼PD}[log σ(−⃗w · ⃗cN)]
  = Σ_{w∈VW} Σ_{c∈VC} #(w, c) log σ(⃗w · ⃗c) + Σ_{w∈VW} #(w) · k · E_{cN∼PD}[log σ(−⃗w · ⃗cN)]   (3)

and explicitly expressing the expectation term:

E_{cN∼PD}[log σ(−⃗w · ⃗cN)] = Σ_{cN∈VC} (#(cN)/|D|) log σ(−⃗w · ⃗cN)
  = (#(c)/|D|) log σ(−⃗w · ⃗c) + Σ_{cN∈VC∖{c}} (#(cN)/|D|) log σ(−⃗w · ⃗cN)   (4)

Combining equations 3 and 4 reveals the local objective for a specific (w, c) pair:

ℓ(w, c) = #(w, c) log σ(⃗w · ⃗c) + k · #(w) · (#(c)/|D|) · log σ(−⃗w · ⃗c)   (5)

To optimize the objective, we define x = ⃗w · ⃗c and find its partial derivative with respect to x:

∂ℓ/∂x = #(w, c) · σ(−x) − k · #(w) · (#(c)/|D|) · σ(x)

We compare the derivative to zero, and after some simplification, arrive at:

e^{2x} − ( #(w, c) / (k · #(w) · #(c)/|D|) − 1 ) · e^x − #(w, c) / (k · #(w) · #(c)/|D|) = 0

If we define y = e^x, this equation becomes a quadratic equation in y, which has two solutions, y = −1 (which is invalid given the definition of y) and:

y = #(w, c) / (k · #(w) · #(c)/|D|) = (#(w, c) · |D|) / (#(w) · #(c)) · (1/k)

Substituting y with e^x and x with ⃗w · ⃗c reveals:

⃗w · ⃗c = log( (#(w, c) · |D|) / (#(w) · #(c)) · (1/k) ) = log( (#(w, c) · |D|) / (#(w) · #(c)) ) − log k   (6)

Interestingly, the expression log( (#(w, c) · |D|) / (#(w) · #(c)) ) is the well-known pointwise mutual information (PMI) of (w, c), which we discuss in depth below. Finally, we can describe the matrix M that SGNS is factorizing:

M^SGNS_ij = Wi · Cj = ⃗wi · ⃗cj = PMI(wi, cj) − log k   (7)

For a negative-sampling value of k = 1, the SGNS objective is factorizing a word-context matrix in which the association between a word and its context is measured by f(w, c) = PMI(w, c).
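As a sanity check on this derivation, the following NumPy sketch evaluates the per-pair objective of equation 5 for made-up counts (the numbers are hypothetical, not from the paper's corpus) and confirms that x = PMI(w, c) − log k both zeroes the derivative and attains the maximum:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative counts (hypothetical): #(w,c), #(w), #(c), |D|, and k
n_wc, n_w, n_c, D, k = 50.0, 400.0, 300.0, 100_000.0, 5

a = n_wc                  # coefficient of log sigma(x) in equation 5
b = k * n_w * n_c / D     # coefficient of log sigma(-x)

def local_objective(x):   # equation 5 as a function of x = w . c
    return a * np.log(sigmoid(x)) + b * np.log(sigmoid(-x))

x_star = np.log(a / b)    # equation 6: PMI(w, c) - log k

# The derivative a*sigma(-x) - b*sigma(x) vanishes at x_star ...
assert abs(a * sigmoid(-x_star) - b * sigmoid(x_star)) < 1e-9
# ... and no point on a grid around it does better (the objective is concave)
grid = np.linspace(x_star - 5.0, x_star + 5.0, 10_001)
assert local_objective(x_star) >= local_objective(grid).max() - 1e-9
```

The concavity used in the last assertion follows from ℓ''(x) = −(a + b)·σ(x)·σ(−x) < 0, so the stationary point is the unique maximum.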
We refer to this matrix as the PMI matrix, M P MI. For negative-sampling values k > 1, SGNS is factorizing a shifted PMI matrix M P MIk = M P MI −log k. Other embedding methods can also be cast as factorizing implicit word-context matrices. Using a similar derivation, it can be shown that noise-contrastive estimation (NCE) [24] is factorizing the (shifted) log-conditional-probability matrix: M NCE ij = ⃗wi · ⃗cj = log #(w, c) #(c)  −log k = log P(w|c) −log k (8) 3.2 Weighted Matrix Factorization We obtained that SGNS’s objective is optimized by setting ⃗w · ⃗c = PMI(w, c) −log k for every (w, c) pair. However, this assumes that the dimensionality of ⃗w and ⃗c is high enough to allow for perfect reconstruction. When perfect reconstruction is not possible, some ⃗w·⃗c products must deviate from their optimal values. Looking at the pair-specific objective (equation 5) reveals that the loss for a pair (w, c) depends on its number of observations (#(w, c)) and expected negative samples (k · #(w) · #(c)/|D|). SGNS’s objective can now be cast as a weighted matrix factorization problem, seeking the optimal d-dimensional factorization of the matrix M P MI −log k under a metric which pays more for deviations on frequent (w, c) pairs than deviations on infrequent ones. 3.3 Pointwise Mutual Information Pointwise mutual information is an information-theoretic association measure between a pair of discrete outcomes x and y, defined as: PMI(x, y) = log P(x, y) P(x)P(y) (9) In our case, PMI(w, c) measures the association between a word w and a context c by calculating the log of the ratio between their joint probability (the frequency in which they occur together) and their marginal probabilities (the frequency in which they occur independently). 
PMI can be estimated empirically by considering the actual number of observations in a corpus:

PMI(w, c) = log( (#(w, c) · |D|) / (#(w) · #(c)) )   (10)

The use of PMI as a measure of association in NLP was introduced by Church and Hanks [8] and widely adopted for word similarity tasks [11, 27, 29]. Working with the PMI matrix presents some computational challenges. The rows of M PMI contain many entries of word-context pairs (w, c) that were never observed in the corpus, for which PMI(w, c) = log 0 = −∞. Not only is the matrix ill-defined, it is also dense, which is a major practical issue because of its huge dimensions |VW| × |VC|. One could smooth the probabilities using, for instance, a Dirichlet prior by adding a small “fake” count to the underlying counts matrix, rendering all word-context pairs observed. While the resulting matrix will not contain any infinite values, it will remain dense. An alternative approach, commonly used in NLP, is to replace the M PMI matrix with M PMI 0, in which PMI(w, c) = 0 in cases where #(w, c) = 0, resulting in a sparse matrix. We note that M PMI 0 is inconsistent, in the sense that observed but “bad” (uncorrelated) word-context pairs have a negative matrix entry, while unobserved (hence worse) ones have 0 in their corresponding cell. Consider for example a pair of relatively frequent words (high P(w) and P(c)) that occur only once together. There is strong evidence that the words tend not to appear together, resulting in a negative PMI value, and hence a negative matrix entry. On the other hand, a pair of frequent words (same P(w) and P(c)) that is never observed occurring together in the corpus, will receive a value of 0.
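The empirical estimate of equation 10, together with the zero-for-unobserved convention of the M PMI 0 construction just described, can be sketched in a few lines of NumPy (the toy count matrix in the test is hypothetical):

```python
import numpy as np

def pmi_matrix(counts):
    """Empirical PMI (equation 10) from a word-context count matrix,
    using the M0 convention: unobserved pairs get 0 instead of -inf.
    Observed-but-uncorrelated pairs keep their negative PMI values."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()                        # |D|
    nw = counts.sum(axis=1, keepdims=True)      # #(w)
    nc = counts.sum(axis=0, keepdims=True)      # #(c)
    with np.errstate(divide="ignore"):          # log 0 -> -inf for zero cells
        pmi = np.log(counts * total) - np.log(nw * nc)
    pmi[counts == 0] = 0.0                      # zero out unobserved pairs
    return pmi
```

Note the inconsistency the text points out: with counts [[2, 0], [1, 1]], the observed pair (1, 0) gets a negative entry while the unobserved pair (0, 1) gets 0.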
A sparse and consistent alternative from the NLP literature is to use the positive PMI (PPMI) metric, in which all negative values are replaced by 0: PPMI(w, c) = max (PMI (w, c) , 0) (11) When representing words, there is some intuition behind ignoring negative values: humans can easily think of positive associations (e.g. “Canada” and “snow”) but find it much harder to invent negative ones (“Canada” and “desert”). This suggests that the perceived similarity of two words is more influenced by the positive context they share than by the negative context they share. It therefore makes some intuitive sense to discard the negatively associated contexts and mark them as “uninformative” (0) instead.2 Indeed, it was shown that the PPMI metric performs very well on semantic similarity tasks [5]. Both M PMI 0 and M PPMI are well known to the NLP community. In particular, systematic comparisons of various word-context association metrics show that PMI, and more so PPMI, provide the best results for a wide range of word-similarity tasks [5, 16]. It is thus interesting that the PMI matrix emerges as the optimal solution for SGNS’s objective. 4 Alternative Word Representations As SGNS with k = 1 is attempting to implicitly factorize the familiar matrix M PMI, a natural algorithm would be to use the rows of M PPMI directly when calculating word similarities. Though PPMI is only an approximation of the original PMI matrix, it still brings the objective function very close to its optimum (see Section 5.1). In this section, we propose two alternative word representations that build upon M PPMI. 4.1 Shifted PPMI While the PMI matrix emerges from SGNS with k = 1, it was shown that different values of k can substantially improve the resulting embedding. With k > 1, the association metric in the implicitly factorized matrix is PMI(w, c) −log(k). 
This suggests the use of Shifted PPMI (SPPMI), a novel association metric which, to the best of our knowledge, was not explored in the NLP and wordsimilarity communities: SPPMIk(w, c) = max (PMI (w, c) −log k, 0) (12) As with SGNS, certain values of k can improve the performance of M SPPMIk on different tasks. 4.2 Spectral Dimensionality Reduction: SVD over Shifted PPMI While sparse vector representations work well, there are also advantages to working with dense lowdimensional vectors, such as improved computational efficiency and, arguably, better generalization. 2A notable exception is the case of syntactic similarity. For example, all verbs share a very strong negative association with being preceded by determiners, and past tense verbs have a very strong negative association to be preceded by “be” verbs and modals. 5 An alternative matrix factorization method to SGNS’s stochastic gradient training is truncated Singular Value Decomposition (SVD) – a basic algorithm from linear algebra which is used to achieve the optimal rank d factorization with respect to L2 loss [12]. SVD factorizes M into the product of three matrices U · Σ · V ⊤, where U and V are orthonormal and Σ is a diagonal matrix of singular values. Let Σd be the diagonal matrix formed from the top d singular values, and let Ud and Vd be the matrices produced by selecting the corresponding columns from U and V . The matrix Md = Ud ·Σd ·V ⊤ d is the matrix of rank d that best approximates the original matrix M, in the sense that it minimizes the approximation errors. That is, Md = arg minRank(M ′)=d ∥M ′ −M∥2. When using SVD, the dot-products between the rows of W = Ud · Σd are equal to the dot-products between rows of Md. In the context of word-context matrices, the dense, d dimensional rows of W are perfect substitutes for the very high-dimensional rows of Md. 
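Equation 12 is a one-line change on top of the PMI computation. A minimal NumPy sketch (the count matrix in the test is again hypothetical):

```python
import numpy as np

def sppmi_matrix(counts, k):
    """Shifted positive PMI (equation 12): max(PMI(w, c) - log k, 0).
    With k = 1 this reduces to the ordinary PPMI matrix of equation 11."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    nw = counts.sum(axis=1, keepdims=True)
    nc = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore"):          # unobserved pairs: log 0 = -inf
        pmi = np.log(counts * total) - np.log(nw * nc)
    return np.maximum(pmi - np.log(k), 0.0)     # -inf cells clip to 0 as well
```

Because larger shifts log k zero out more cells, the matrix can only get sparser as k grows, consistent with the information-loss effect discussed in the experiments.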
Indeed another common approach in the NLP literature is factorizing the PPMI matrix M PPMI with SVD, and then taking the rows of W SVD = Ud · Σd and CSVD = Vd as word and context representations, respectively. However, using the rows of W SVD as word representations consistently under-perform the W SGNS embeddings derived from SGNS when evaluated on semantic tasks. Symmetric SVD We note that in the SVD-based factorization, the resulting word and context matrices have very different properties. In particular, the context matrix CSVD is orthonormal while the word matrix W SVD is not. On the other hand, the factorization achieved by SGNS’s training procedure is much more “symmetric”, in the sense that neither W W2V nor CW2V is orthonormal, and no particular bias is given to either of the matrices in the training objective. We therefore propose achieving similar symmetry with the following factorization: W SVD1/2 = Ud · p Σd CSVD1/2 = Vd · p Σd (13) While it is not theoretically clear why the symmetric approach is better for semantic tasks, it does work much better empirically.3 SVD versus SGNS The spectral algorithm has two computational advantages over stochastic gradient training. First, it is exact, and does not require learning rates or hyper-parameter tuning. Second, it can be easily trained on count-aggregated data (i.e. {(w, c, #(w, c))} triplets), making it applicable to much larger corpora than SGNS’s training procedure, which requires each observation of (w, c) to be presented separately. On the other hand, the stochastic gradient method has advantages as well: in contrast to SVD, it distinguishes between observed and unobserved events; SVD is known to suffer from unobserved values [17], which are very common in word-context matrices. More importantly, SGNS’s objective weighs different (w, c) pairs differently, preferring to assign correct values to frequent (w, c) pairs while allowing more error for infrequent pairs (see Section 3.2). 
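The two truncated-SVD variants (the conventional W^SVD = Ud · Σd and the symmetric W^SVD1/2 = Ud · √Σd of equation 13) differ only in how the singular values are split between the two factors; the product W · C⊤ is the same rank-d approximation Md either way. A minimal NumPy sketch with the tunable exponent α mentioned in footnote 3:

```python
import numpy as np

def svd_embeddings(M, d, alpha=0.5):
    """Rank-d word/context embeddings from an association matrix M.

    alpha = 0.5: the 'symmetric' W = Ud * sqrt(Sigma_d) of equation 13;
    alpha = 1.0: the conventional W = Ud * Sigma_d with orthonormal C.
    For every alpha, W @ C.T equals the best rank-d approximation Md.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    W = U[:, :d] * s[:d] ** alpha
    C = Vt[:d].T * s[:d] ** (1.0 - alpha)
    return W, C
```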
Unfortunately, exact weighted SVD is a hard computational problem [25]. Finally, because SGNS cares only about observed (and sampled) (w, c) pairs, it does not require the underlying matrix to be a sparse one, enabling optimization of dense matrices, such as the exact PMI − log k matrix. The same is not feasible when using SVD. An interesting middle-ground between SGNS and SVD is the use of stochastic matrix factorization (SMF) approaches, common in the collaborative filtering literature [17]. In contrast to SVD, the SMF approaches are not exact, and do require hyper-parameter tuning. On the other hand, they are better than SVD at handling unobserved values, and can integrate importance weighting for examples, much like SGNS’s training procedure. However, like SVD and unlike SGNS’s procedure, the SMF approaches work over aggregated (w, c) statistics, allowing (w, c, f(w, c)) triplets as input, making the optimization objective more direct, and scalable to significantly larger corpora. SMF approaches have additional advantages over both SGNS and SVD, such as regularization, opening the way to a range of possible improvements. We leave the exploration of SMF-based algorithms for word embeddings to future work.

3 The approach can be generalized to W^SVDα = Ud · (Σd)^α, making α a tunable parameter. This observation was previously made by Caron [7] and investigated in [6, 28], showing that different values of α indeed perform better than others for various tasks. In particular, setting α = 0 performs well for many tasks. We do not explore tuning the α parameter in this work.

Method    PMI − log k   SPPMI      SVD                              SGNS
                                   d = 100   d = 500   d = 1000    d = 100   d = 500   d = 1000
k = 1     0%            0.00009%   26.1%     25.2%     24.2%       31.4%     29.4%     7.40%
k = 5     0%            0.00004%   95.8%     95.1%     94.9%       39.3%     36.0%     7.13%
k = 15    0%            0.00002%   266%      266%      265%        7.80%     6.37%     5.97%

Table 1: Percentage of deviation from the optimal objective value (lower values are better). See 5.1 for details.
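To make the weighting argument concrete, here is a deliberately minimal weighted factorization via full-batch gradient descent, a stand-in for the SMF family rather than word2vec's actual sampling procedure; the learning rate, iteration count, and toy matrix are arbitrary choices for illustration:

```python
import numpy as np

def weighted_mf(M, weights, d, lr=0.05, epochs=2000, seed=0):
    """Gradient-descent factorization of M ~ W @ C.T under per-cell weights.

    Unlike unweighted SVD, deviations on heavily weighted cells (e.g.
    frequent (w, c) pairs) are penalized more, and a zero weight ignores
    a cell entirely. A full-gradient sketch, not a scalable SMF solver.
    """
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(M.shape[0], d))
    C = 0.1 * rng.normal(size=(M.shape[1], d))
    for _ in range(epochs):
        R = weights * (W @ C.T - M)              # weighted residual
        W, C = W - lr * (R @ C), C - lr * (R.T @ W)
    return W, C
```

The tuple assignment updates W and C from the same residual, i.e. one simultaneous gradient step on the objective ½·Σ weights·(WC⊤ − M)².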
5 Empirical Results We compare the matrix-based algorithms to SGNS in two aspects. First, we measure how well each algorithm optimizes the objective, and then proceed to evaluate the methods on various linguistic tasks. We find that for some tasks there is a large discrepancy between optimizing the objective and doing well on the linguistic task. Experimental Setup All models were trained on English Wikipedia, pre-processed by removing non-textual elements, sentence splitting, and tokenization. The corpus contains 77.5 million sentences, spanning 1.5 billion tokens. All models were derived using a window of 2 tokens to each side of the focus word, ignoring words that appeared less than 100 times in the corpus, resulting in vocabularies of 189,533 terms for both words and contexts. To train the SGNS models, we used a modified version of word2vec which receives a sequence of pre-extracted word-context pairs [18].4 We experimented with three values of k (number of negative samples in SGNS, shift parameter in PMI-based methods): 1, 5, 15. For SVD, we take W = Ud · √Σd as explained in Section 4. 5.1 Optimizing the Objective Now that we have an analytical solution for the objective, we can measure how well each algorithm optimizes this objective in practice. To do so, we calculated ℓ, the value of the objective (equation 2) given each word (and context) representation.5 For sparse matrix representations, we substituted ⃗w·⃗c with the matching cell’s value (e.g. for SPPMI, we set ⃗w · ⃗c = max(PMI(w, c) −log k, 0)). Each algorithm’s ℓvalue was compared to ℓOpt, the objective when setting ⃗w · ⃗c = PMI(w, c) −log k, which was shown to be optimal (Section 3.1). The percentage of deviation from the optimum is defined by (ℓ−ℓOpt)/(ℓOpt) and presented in table 1. We observe that SPPMI is indeed a near-perfect approximation of the optimal solution, even though it discards a lot of information when considering only positive cells. 
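For a small, fully observed count matrix, the objective and its optimum can be evaluated exactly (the paper approximates them at corpus scale by sampling), which yields the deviation measure reported in Table 1. A hedged sketch with hypothetical counts; the helper names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(dot, counts, k):
    """Equation 2 with the negative expectation taken exactly over the
    contexts' unigram distribution. Assumes all counts > 0 so that every
    term stays finite on this toy example."""
    counts = np.asarray(counts, dtype=float)
    nw = counts.sum(axis=1, keepdims=True)          # #(w)
    pc = counts.sum(axis=0) / counts.sum()          # unigram P_D(c)
    pos = (counts * np.log(sigmoid(dot))).sum()
    neg = k * (nw * pc * np.log(sigmoid(-dot))).sum()
    return pos + neg

def pct_deviation(dot, counts, k):
    """(l - l_opt) / l_opt, the Table 1 measure (lower is better)."""
    counts = np.asarray(counts, dtype=float)
    opt = (np.log(counts * counts.sum())
           - np.log(counts.sum(axis=1, keepdims=True) * counts.sum(axis=0))
           - np.log(k))                             # PMI(w, c) - log k
    l_opt = objective(opt, counts, k)
    return (objective(dot, counts, k) - l_opt) / l_opt
```

Since ℓ_opt is the maximum and both values are negative, the deviation is zero at the optimum and positive everywhere else.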
We also note that for the factorization methods, increasing the dimensionality enables better solutions, as expected. SVD is slightly better than SGNS at optimizing the objective for d ≤ 500 and k = 1. However, while SGNS is able to leverage higher dimensions and reduce its error significantly, SVD fails to do so. Furthermore, SVD becomes very erroneous as k increases. We hypothesize that this is a result of the increasing number of zero-cells, which may cause SVD to prefer a factorization that is very close to the zero matrix, since SVD’s L2 objective is unweighted, and does not distinguish between observed and unobserved matrix cells.

5.2 Performance of Word Representations on Linguistic Tasks

Linguistic Tasks and Datasets We evaluated the word representations on four datasets, covering word similarity and relational analogy tasks. We used two datasets to evaluate pairwise word similarity: Finkelstein et al.’s WordSim353 [13] and Bruni et al.’s MEN [4]. These datasets contain word pairs together with human-assigned similarity scores. The word vectors are evaluated by ranking the pairs according to their cosine similarities, and measuring the correlation (Spearman’s ρ) with the human ratings. The two analogy datasets present questions of the form “a is to a∗ as b is to b∗”, where b∗ is hidden, and must be guessed from the entire vocabulary. The Syntactic dataset [22] contains 8000 morpho-syntactic analogy questions, such as “good is to best as smart is to smartest”. The Mixed dataset [20] contains 19544 questions, about half of the same kind as in Syntactic, and another half of a more semantic nature, such as capital cities (“Paris is to France as Tokyo is to Japan”). After filtering questions involving out-of-vocabulary words, i.e. words that appeared in English Wikipedia less than 100 times, we remain with 7118 instances in Syntactic and 19258 instances in Mixed. The analogy questions are answered using Levy and Goldberg’s similarity multiplication method [19], which is state-of-the-art in analogy recovery: arg max_{b∗∈VW∖{a∗,b,a}} cos(b∗, a∗) · cos(b∗, b) / (cos(b∗, a) + ε). The evaluation metric for the analogy questions is the percentage of questions for which the argmax result was the correct answer (b∗).

4 http://www.bitbucket.org/yoavgo/word2vecf
5 Since it is computationally expensive to calculate the exact objective, we approximated it. First, instead of enumerating every observed word-context pair in the corpus, we sampled 10 million such pairs, according to their prevalence. Second, instead of calculating the expectation term explicitly (as in equation 4), we sampled a negative example (w, cN) for each one of the 10 million “positive” examples, using the contexts’ unigram distribution, as done by SGNS’s optimization procedure (explained in Section 2).

WS353 (WordSim) [13]       MEN (WordSim) [4]          Mixed analogies [20]       Synt. analogies [22]
Representation   Corr.     Representation   Corr.     Representation   Acc.      Representation   Acc.
SVD (k=5)        0.691     SVD (k=1)        0.735     SPPMI (k=1)      0.655     SGNS (k=15)      0.627
SPPMI (k=15)     0.687     SVD (k=5)        0.734     SPPMI (k=5)      0.644     SGNS (k=5)       0.619
SPPMI (k=5)      0.670     SPPMI (k=5)      0.721     SGNS (k=15)      0.619     SGNS (k=1)       0.59
SGNS (k=15)      0.666     SPPMI (k=15)     0.719     SGNS (k=5)       0.616     SPPMI (k=5)      0.466
SVD (k=15)       0.661     SGNS (k=15)      0.716     SPPMI (k=15)     0.571     SVD (k=1)        0.448
SVD (k=1)        0.652     SGNS (k=5)       0.708     SVD (k=1)        0.567     SPPMI (k=1)      0.445
SGNS (k=5)       0.644     SVD (k=15)       0.694     SGNS (k=1)       0.540     SPPMI (k=15)     0.353
SGNS (k=1)       0.633     SGNS (k=1)       0.690     SVD (k=5)        0.472     SVD (k=5)        0.337
SPPMI (k=1)      0.605     SPPMI (k=1)      0.688     SVD (k=15)       0.341     SVD (k=15)       0.208

Table 2: A comparison of word representations on various linguistic tasks. The different representations were created by three algorithms (SPPMI, SVD, SGNS) with d = 1000 and different values of k.

Results Table 2 shows the experiments’ results. On the word similarity task, SPPMI yields better results than SGNS, and SVD improves even more.
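The similarity-multiplication scoring used for the analogy questions can be sketched directly from its definition. In this sketch the cosines are mapped into [0, 1] via (x + 1)/2 so that the products and the ratio stay nonnegative, which is one common convention; the exact normalization details may differ from [19], and the toy vocabulary in the test is invented:

```python
import numpy as np

def cosmul_analogy(W, vocab, a, a_star, b, eps=0.001):
    """Answer 'a is to a_star as b is to ?' with the similarity-multiplication
    objective: argmax over b* of cos(b*,a_star) * cos(b*,b) / (cos(b*,a) + eps)."""
    idx = {w: i for i, w in enumerate(vocab)}
    Wn = W / np.linalg.norm(W, axis=1, keepdims=True)
    def cos(v):                        # cosine of every row with word v, in [0, 1]
        return (Wn @ Wn[idx[v]] + 1.0) / 2.0
    score = cos(a_star) * cos(b) / (cos(a) + eps)
    for w in (a, a_star, b):           # exclude the three question words
        score[idx[w]] = -np.inf
    return vocab[int(np.argmax(score))]
```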
However, the difference between the top PMI-based method and the top SGNS configuration in each dataset is small, and it is reasonable to say that they perform on par. It is also evident that different values of k have a significant effect on all methods: SGNS generally works better with higher values of k, whereas SPPMI and SVD prefer lower values of k. This may be due to the fact that only positive values are retained, and high values of k may cause too much loss of information. A similar observation was made for SGNS and SVD when observing how well they optimized the objective (Section 5.1). Nevertheless, tuning k can significantly increase the performance of SPPMI over the traditional PPMI configuration (k = 1). The analogies task shows different behavior. First, SVD does not perform as well as SGNS and SPPMI. More interestingly, in the syntactic analogies dataset, SGNS significantly outperforms the rest. This trend is even more pronounced when using the additive analogy recovery method [22] (not shown). Linguistically speaking, the syntactic analogies dataset is quite different from the rest, since it relies more on contextual information from common words such as determiners (“the”, “each”, “many”) and auxiliary verbs (“will”, “had”) to solve correctly. We conjecture that SGNS performs better on this task because its training procedure gives more influence to frequent pairs, as opposed to SVD’s objective, which gives the same weight to all matrix cells (see Section 3.2).

6 Conclusion

We analyzed the SGNS word embedding algorithm and showed that it is implicitly factorizing the (shifted) word-context PMI matrix M PMI − log k using per-observation stochastic gradient updates. We presented SPPMI, a modification of PPMI inspired by our theoretical findings. Indeed, using SPPMI can improve upon the traditional PPMI matrix.
Though SPPMI provides a far better solution to SGNS’s objective, it does not necessarily perform better than SGNS on linguistic tasks, as evident with syntactic analogies. We suspect that this may be related to SGNS down-weighting rare words, which PMI-based methods are known to exaggerate. We also experimented with an alternative matrix factorization method, SVD. Although SVD was relatively poor at optimizing SGNS’s objective, it performed slightly better than the other methods on word similarity datasets. However, SVD underperforms on the word-analogy task. One of the main differences between the SVD and SGNS is that SGNS performs weighted matrix factorization, which may be giving it an edge in the analogy task. As future work we suggest investigating weighted matrix factorizations of word-context matrices with PMI-based association metrics. Acknowledgements This work was partially supported by the EC-funded project EXCITEMENT (FP7ICT-287923). We thank Ido Dagan and Peter Turney for their valuable insights. 8 References [1] Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. Dont count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL, 2014. [2] Marco Baroni and Alessandro Lenci. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721, 2010. [3] Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, 2003. [4] Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. Distributional semantics in technicolor. In ACL, 2012. [5] John A Bullinaria and Joseph P Levy. Extracting semantic representations from word co-occurrence statistics: a computational study. Behavior Research Methods, 39(3):510–526, 2007. [6] John A Bullinaria and Joseph P Levy. Extracting semantic representations from word co-occurrence statistics: Stop-lists, stemming, and SVD. 
Behavior Research Methods, 44(3):890–907, 2012. [7] John Caron. Experiments with LSA scoring: optimal rank and basis. In Proceedings of the SIAM Computational Information Retrieval Workshop, pages 157–169, 2001. [8] Kenneth Ward Church and Patrick Hanks. Word association norms, mutual information, and lexicography. Computational linguistics, 16(1):22–29, 1990. [9] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML, 2008. [10] Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 2011. [11] Ido Dagan, Fernando Pereira, and Lillian Lee. Similarity-based estimation of word cooccurrence probabilities. In ACL, 1994. [12] C Eckart and G Young. The approximation of one matrix by another of lower rank. Psychometrika, 1:211–218, 1936. [13] Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. Placing search in context: The concept revisited. ACM TOIS, 2002. [14] Yoav Goldberg and Omer Levy. word2vec explained: deriving Mikolov et al.’s negative-sampling wordembedding method. arXiv preprint arXiv:1402.3722, 2014. [15] Zellig Harris. Distributional structure. Word, 10(23):146–162, 1954. [16] Douwe Kiela and Stephen Clark. A systematic study of semantic vector space model parameters. In Workshop on Continuous Vector Space Models and their Compositionality, 2014. [17] Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 2009. [18] Omer Levy and Yoav Goldberg. Dependency-based word embeddings. In ACL, 2014. [19] Omer Levy and Yoav Goldberg. Linguistic regularities in sparse and explicit word representations. In CoNLL, 2014. [20] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 
Efficient estimation of word representations in vector space. CoRR, abs/1301.3781, 2013. [21] Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In NIPS, 2013. [22] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic regularities in continuous space word representations. In NAACL, 2013. [23] Andriy Mnih and Geoffrey E Hinton. A scalable hierarchical distributed language model. In Advances in Neural Information Processing Systems, pages 1081–1088, 2008. [24] Andriy Mnih and Koray Kavukcuoglu. Learning word embeddings efficiently with noise-contrastive estimation. In NIPS, 2013. [25] Nathan Srebro and Tommi Jaakkola. Weighted low-rank approximations. In ICML, 2003. [26] Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: a simple and general method for semi-supervised learning. In ACL, 2010. [27] Peter D. Turney. Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In ECML, 2001. [28] Peter D. Turney. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533–585, 2012. [29] Peter D. Turney and Patrick Pantel. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 2010. 9
Deep Convolutional Neural Network for Image Deconvolution Li Xu ∗ Lenovo Research & Technology xulihk@lenovo.com Jimmy SJ. Ren Lenovo Research & Technology jimmy.sj.ren@gmail.com Ce Liu Microsoft Research celiu@microsoft.com Jiaya Jia The Chinese University of Hong Kong leojia@cse.cuhk.edu.hk Abstract Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods. 1 Introduction Many image and video degradation processes can be modeled as translation-invariant convolution. To restore these visual data, the inverse process, i.e., deconvolution, becomes a vital tool in motion deblurring [1, 2, 3, 4], super-resolution [5, 6], and extended depth of field [7]. In applications involving images captured by cameras, outliers such as saturation, limited image boundary, noise, or compression artifacts are unavoidable. Previous research has shown that improperly handling these problems could raise a broad set of artifacts related to image content, which are very difficult to remove. 
So there was work dedicated to modeling and addressing each particular type of artifacts in non-blind deconvolution for suppressing ringing artifacts [8], removing noise [9], and dealing with saturated regions [9, 10]. These methods can be further refined by incorporating patch-level statistics [11] or other schemes [4]. Because each method has its own specialty as well as limitation, there is no solution yet to uniformly address all these issues. One example is shown in Fig. 1 – a partially saturated blur image with compression errors can already fail many existing approaches. One possibility to remove these artifacts is via employing generative models. However, these models are usually made upon strong assumptions, such as identical and independently distributed noise, which may not hold for real images. This accounts for the fact that even advanced algorithms can be affected when the image blur properties are slightly changed.

∗ Project webpage: http://www.lxu.me/projects/dcnn/. The paper is partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region (Project No. 413113).

Figure 1: A challenging deconvolution example. (a) is the blurry input with partially saturated regions. (b) is the result of Krishnan et al. [3] using hyper-Laplacian prior. (c) is our result.

In this paper, we initiate the procedure for natural image deconvolution not based on their physically or mathematically based characteristics. Instead, we show a new direction to build a data-driven system using image samples that can be easily produced from cameras or collected online. We use the convolutional neural network (CNN) to learn the deconvolution operation without the need to know the cause of visual artifacts. We also do not rely on any pre-process to deblur the image, unlike previous learning based approaches [12, 13]. In fact, it is non-trivial to find a proper network architecture for deconvolution.
Previous de-noise neural network [14, 15, 16] cannot be directly adopted since deconvolution may involve many neighboring pixels and result in a very complex energy function with nonlinear degradation. This makes parameter learning quite challenging. In our work, we bridge the gap between an empirically-determined convolutional neural network and existing approaches with generative models in the context of pseudo-inverse of deconvolution. It enables a practical system and, more importantly, provides an empirically effective strategy to initialize the weights in the network, which otherwise cannot be easily obtained in the conventional random-initialization training procedure. Experiments show that our system outperforms previous ones especially when the blurred input images are partially saturated. 2 Related Work Deconvolution was studied in different fields due to its fundamentality in image restoration. Most previous methods tackle the problem from a generative perspective assuming known image noise model and natural image gradients following certain distributions. In the Richardson-Lucy method [17], image noise is assumed to follow a Poisson distribution. Wiener Deconvolution [18] imposes equivalent Gaussian assumption for both noise and image gradients. These early approaches suffer from overly smoothed edges and ringing artifacts. Recent development on deconvolution shows that regularization terms with sparse image priors are important to preserve sharp edges and suppress artifacts. The sparse image priors follow heavy-tailed distributions, such as a Gaussian Mixture Model [1, 11] or a hyper-Laplacian [7, 3], which could be efficiently optimized using half-quadratic (HQ) splitting [3]. To capture image statistics with larger spatial support, the energy is further modeled within a Conditional Random Field (CRF) framework [19] and on image patches [11]. While the last step of HQ method is quadratic optimization, Schmidt et al. 
[4] showed that it is possible to directly train a Gaussian CRF from synthetic blur data. To handle outliers such as saturation, Cho et al. [9] used variational EM to exclude outlier regions from a Gaussian likelihood. Whyte et al. [10] introduced an auxiliary variable in the RichardsonLucy method. An explicit denoise pass is added to deconvolution, where the denoise approach is carefully engineered [20] or trained from noisy data [12]. The generative approaches typically have difficulties to handle complex outliers that are not independent and identically distributed. 2 Another trend for image restoration is to leverage the deep neural network structure and big data to train the restoration function. The degradation is therefore no longer limited to one model regarding image noise. Burger et al. [14] showed that the plain multi-layer perceptrons can produce decent results and handle different types of noise. Xie et al. [15] showed that a stacked denoise autoencoder (SDAE) structure [21] is a good choice for denoise and inpainting. Agostinelli et al. [22] generalized it by combining multiple SDAE for handling different types of noise. In [23] and [16], the convolutional neural network (CNN) architecture [24] was used to handle strong noise such as raindrop and lens dirt. Schuler et al. [13] added MLPs to a direct deconvolution to remove artifacts. Though the network structure works well for denoise, it does not work similarly for deconvolution. How to adapt the architecture is the main problem to address in this paper. 3 Blur Degradation We consider real-world image blur that suffers from several types of degradation including clipped intensity (saturation), camera noise, and compression artifacts. The blur model is given by ˆy = ψb[φ(αx ∗k + n)], (1) where αx represents the latent sharp image. The notation α ≥1 is to indicate the fact that αx could have values exceeding the dynamic range of camera sensors and thus be clipped. 
k is the known convolution kernel, typically referred to as a point spread function (PSF), and n models additive camera noise. φ(·) is a clipping function that models saturation, defined as φ(z) = min(z, z_max), where z_max is a range threshold. ψ_b[·] is a nonlinear (e.g., JPEG) compression operator. We note that even given ŷ and the kernel k, restoring αx is intractable, simply because of the information loss caused by clipping. We therefore aim to restore the clipped input x̂, where x̂ = φ(αx). Although solving for x̂ with a complex energy function that involves Eq. (1) is difficult, generating a blurry image from an input x is quite straightforward via image synthesis according to the convolution model, taking into account all kinds of possible image degradation. This motivates a learning procedure for deconvolution that uses training image pairs {x̂_i, ŷ_i}, where the index i ∈ N.

4 Analysis

The goal is to train a network architecture f(·) that minimizes

(1/(2|N|)) Σ_{i∈N} ∥f(ŷ_i) − x̂_i∥²,   (2)

where |N| is the number of image pairs in the sample set. We applied two recent deep neural networks to this problem, but both failed. One is the stacked sparse denoise autoencoder (SSDAE) [15] and the other is the convolutional neural network (CNN) used in [16]. Both are designed for image denoising. For SSDAE, we use patch size 17 × 17 as suggested in [14]. The CNN implementation is provided by the authors of [16]. We collect two million sharp patches together with their blurred versions for training. One example is shown in Fig. 2, where (a) is a blurred image. Fig. 2(b) and (c) show the results of SSDAE and CNN. The result of SSDAE in (b) is still blurry. The CNN structure works relatively better, but it suffers from remaining blurry edges and strong ghosting artifacts. This is because these network structures are designed for denoising and do not account for the necessary properties of deconvolution. More explanation is provided from a generative perspective in what follows.
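The forward degradation of Eq. (1), minus the compression operator ψ_b, can be sketched in a few lines of NumPy. The kernel, noise level, clipping threshold, and α value below are illustrative assumptions, not the paper's training settings:

```python
import numpy as np

def blur_degrade(x, k, alpha=1.2, sigma=0.01, z_max=1.0, seed=0):
    """Sketch of Eq. (1) without the compression operator psi_b:
    y = phi(alpha * x (convolved with) k + n), with phi(z) = min(z, z_max)."""
    rng = np.random.default_rng(seed)
    # circular convolution via the DFT: F(y) = F(alpha * x) . F(k)
    K = np.fft.fft2(k, s=x.shape)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(alpha * x) * K))
    noisy = blurred + sigma * rng.standard_normal(x.shape)
    return np.minimum(noisy, z_max)   # the clipping phi(.) models saturation

# toy latent image and a normalized 3x3 box PSF (illustrative choices)
x = np.random.default_rng(1).random((32, 32))
k = np.ones((3, 3)) / 9.0
y = blur_degrade(x, k)
```

With α > 1, part of the blurred signal exceeds z_max and gets clipped, which is exactly the information loss that makes recovering αx (rather than x̂) intractable.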
4.1 Pseudo-Inverse Kernels

The deconvolution task can naturally be approximated by a convolutional network. We consider the simple linear blur model y = x ∗ k. The spatial convolution can be transformed into a frequency-domain multiplication, yielding F(y) = F(x) · F(k), where F(·) denotes the discrete Fourier transform (DFT) and the operator · is element-wise multiplication.

Figure 2: Existing stacked denoise autoencoder and convolutional neural network structures cannot solve the deconvolution problem. (a) input; (b) SSDAE [15]; (c) CNN [16]; (d) ours.

Figure 3: Pseudo-inverse kernel and deconvolution examples (a)-(e).

In the Fourier domain, x can be obtained as x = F⁻¹(F(y)/F(k)) = F⁻¹(1/F(k)) ∗ y, where F⁻¹ is the inverse discrete Fourier transform. While the solver for x is written as a spatial convolution with the kernel F⁻¹(1/F(k)), that kernel is actually a repetitive signal spanning the whole spatial domain without compact support. When noise arises, regularization terms are commonly introduced to avoid division by zero in the frequency domain, which makes the pseudo-inverse fall off quickly in the spatial domain [25]. Classical Wiener deconvolution is equivalent to using a Tikhonov regularizer [2] and can be expressed as

x = F⁻¹( (1/F(k)) · |F(k)|² / (|F(k)|² + 1/SNR) ) ∗ y = k† ∗ y,

where SNR is the signal-to-noise ratio and k† denotes the pseudo-inverse kernel. Strong noise leads to a large 1/SNR, which corresponds to a strongly regularized inversion. We note that with the introduction of the SNR term, k† becomes compact with finite support. Fig. 3(a) shows a disk blur kernel of radius 7, which is commonly used to model focal blur. The pseudo-inverse kernel k† with SNR = 1e−4 is given in Fig. 3(b). A blurred image with this kernel is shown in Fig. 3(c), and deconvolution results with k† are in (d). A level of blur is removed from the image, but noise and saturation cause visual artifacts, in line with our understanding of Wiener deconvolution.
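The regularized inversion above can be checked numerically. This is a minimal frequency-domain Wiener/Tikhonov sketch; the 1/SNR value and the toy box kernel are assumptions for illustration only:

```python
import numpy as np

def wiener_deconv(y, k, inv_snr=1e-4):
    """Wiener deconvolution: X = conj(K) / (|K|^2 + 1/SNR) * Y,
    equivalent to convolving y with the pseudo-inverse kernel k_dagger."""
    K = np.fft.fft2(k, s=y.shape)
    K_dag = np.conj(K) / (np.abs(K) ** 2 + inv_snr)   # F(k_dagger)
    return np.real(np.fft.ifft2(np.fft.fft2(y) * K_dag))

rng = np.random.default_rng(0)
x = rng.random((32, 32))
k = np.ones((3, 3)) / 9.0                  # toy box PSF (assumption)
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k, s=x.shape)))
x_hat = wiener_deconv(y, k)
# with little noise, the regularized inverse undoes most of the blur
```

Note that conj(K)/(|K|² + 1/SNR) is algebraically identical to the (1/K) · |K|²/(|K|² + 1/SNR) form in the text; F⁻¹ of it is the compact pseudo-inverse kernel k†.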
Although the Wiener method is not state-of-the-art, its byproduct, an inverse kernel with a finite yet large spatial support, becomes vastly useful in our neural network system: it shows that deconvolution can be well approximated by spatial convolution with sufficiently large kernels. This explains the unsuccessful direct application of SSDAE and CNN to deconvolution in Fig. 2 as follows.

• SSDAE does not capture well the nature of convolution with its fully connected structure.
• CNN performs better, since deconvolution can be approximated by large-kernel convolution as explained above.
• Previous CNNs use small convolution kernels, which is not an appropriate configuration for our deconvolution problem.

It can thus be summarized that using deep neural networks to perform deconvolution is by no means straightforward. Simply modifying the network by employing large convolution kernels would make training more difficult. We present a new structure to update the network in what follows. Our result for Fig. 3 is shown in (e).

5 Network Architecture

We transform the simple pseudo-inverse kernel for deconvolution into a convolutional network, based on the kernel separability theorem. This makes the network more expressive, with a mapping to higher dimensions to accommodate nonlinearity, and the system benefits from large training data.

5.1 Kernel Separability

Kernel separability is achieved via singular value decomposition (SVD) [26]. Given the inverse kernel k†, the decomposition k† = USVᵀ exists. We denote by u_j and v_j the jth columns of U and V, and by s_j the jth singular value. The original pseudo-inverse deconvolution can be expressed as

k† ∗ y = Σ_j s_j · u_j ∗ (v_jᵀ ∗ y),   (3)

which shows that the 2D convolution can be viewed as a weighted sum of separable 1D filters. In practice, we can approximate k† well with a small number of separable filters by dropping the terms associated with zero or very small s_j.
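Eq. (3) rests on the SVD identity k† = Σ_j s_j u_j v_jᵀ, so each rank-1 term is the outer product of two 1D filters. A quick NumPy check with an arbitrary toy kernel (a random stand-in, since a real pseudo-inverse kernel is not constructed here):

```python
import numpy as np

rng = np.random.default_rng(0)
k_dag = rng.standard_normal((21, 21))   # stand-in for a pseudo-inverse kernel

U, s, Vt = np.linalg.svd(k_dag)
# full reconstruction: k_dag == sum_j s_j * outer(u_j, v_j)
full = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(len(s)))

# truncated separable approximation: keep singular values above a threshold
# (a relative threshold here; the paper uses an absolute cutoff of 0.01)
keep = s > 0.01 * s[0]
approx = sum(s[j] * np.outer(U[:, j], Vt[j, :]) for j in range(len(s)) if keep[j])
```

For a random matrix most singular values survive the cutoff; real pseudo-inverse kernels are much closer to low rank, which is why roughly 30 separable filters suffice [25].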
We have experimented with real blur kernels, ignoring singular values smaller than 0.01. The resulting average number of separable kernels is about 30 [25]. Using a smaller SNR ratio, the inverse kernel has a smaller spatial support. We also found that an inverse kernel of length 100 is typically enough to generate visually plausible deconvolution results. This is important information for designing the network architecture.

5.2 Image Deconvolution CNN (DCNN)

We describe our image deconvolution convolutional neural network (DCNN) based on the separable kernels. The network is expressed as

h_3 = W_3 ∗ h_2;  h_l = σ(W_l ∗ h_{l−1} + b_{l−1}), l ∈ {1, 2};  h_0 = ŷ,

where W_l is the weight mapping the (l−1)th layer to the lth one and b_{l−1} is the bias vector. σ(·) is the nonlinear function, which can be a sigmoid or hyperbolic tangent. Our network contains two hidden layers, mirroring the separable kernel inversion setting. The first hidden layer h_1 is generated by applying 38 large-scale one-dimensional kernels of size 121 × 1, according to the analysis in Section 5.1. The values 38 and 121 are empirically determined and can be altered for different inputs. The second hidden layer h_2 is generated by applying 38 kernels of size 1 × 121 to each of the 38 maps in h_1. To generate the result, a 1 × 1 × 38 kernel is applied, analogous to the linear combination using the singular values s_j. The architecture has several advantages for deconvolution. First, it assembles separable kernel inversion for deconvolution and therefore is guaranteed to be optimal. Second, the nonlinear terms and high-dimensional structure make the network more expressive than the traditional pseudo-inverse, and reasonably robust to outliers.

5.3 Training DCNN

The network can be trained either with random weight initialization or with initialization from the separable kernel inversion, since they share exactly the same structure.
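A minimal NumPy sketch of the forward pass of this two-hidden-layer structure. The kernel length, number of maps, and random weights are placeholders (the paper uses 38 maps with length-121 kernels), the per-map pairing of vertical and horizontal kernels is a simplification of the full cross-map connectivity, and a real implementation would use a deep learning framework:

```python
import numpy as np

def conv_cols(img, v):   # vertical (121x1-style) convolution, per column
    return np.apply_along_axis(lambda c: np.convolve(c, v, mode="same"), 0, img)

def conv_rows(img, h):   # horizontal (1x121-style) convolution, per row
    return np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, img)

def dcnn_forward(y, vk, hk, w3, b1, b2):
    """h1 = tanh(vertical convs + b1); h2 = tanh(horizontal convs + b2);
    h3 = 1x1 combination across maps (the s_j-like weighting of Eq. (3))."""
    h1 = [np.tanh(conv_cols(y, vk[j]) + b1[j]) for j in range(len(vk))]
    h2 = [np.tanh(conv_rows(h1[j], hk[j]) + b2[j]) for j in range(len(vk))]
    return sum(w3[j] * h2[j] for j in range(len(vk)))

rng = np.random.default_rng(0)
n_maps, klen = 4, 9                       # placeholders for 38 and 121
y = rng.random((40, 40))
vk = rng.standard_normal((n_maps, klen)) * 0.1
hk = rng.standard_normal((n_maps, klen)) * 0.1
out = dcnn_forward(y, vk, hk, rng.standard_normal(n_maps),
                   np.zeros(n_maps), np.zeros(n_maps))
```

Initializing vk, hk, and w3 from the SVD factors s_j, u_j, v_j of k† is exactly the separable-kernel-inversion initialization advocated in Section 5.3.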
We experiment with both strategies on natural images, all degraded by additive white Gaussian noise (AWG) and JPEG compression. The images fall into two categories, one with strong color saturation and one without. Note that saturation significantly affects many existing deconvolution algorithms.

Figure 4: PSNRs produced in different stages of our convolutional neural network architecture.

Figure 5: Result comparisons in different stages of our deconvolution CNN. (a) Separable kernel inversion; (b) random initialization; (c) separable kernel initialization; (d) ODCNN output.

The PSNRs are shown as the first three bars in Fig. 4, from which we make the following observations.

• The trained network has an advantage over simply performing separable kernel inversion, whether with random initialization or with initialization from the pseudo-inverse. Our interpretation is that the network, with its high-dimensional mapping and nonlinearity, is more expressive than simple separable kernel inversion.
• Initialization from separable kernel inversion yields higher PSNRs than random initialization, suggesting that initial values affect this network and thus can be tuned.

A visual comparison is provided in Fig. 5(a)-(c), showing the results of separable kernel inversion, of training with random weights, and of training with separable-kernel-inversion initialization. The result in (c) clearly contains sharper edges and more details. Note that the final trained DCNN is not equivalent to any existing inverse-kernel function, even with various regularizations, because of the high-dimensional mapping with nonlinearities it involves. The performance of the deconvolution CNN decreases for images with color saturation, and visual artifacts can also appear due to noise and compression. In the next section we turn to a deeper structure that addresses these remaining problems by incorporating a denoise CNN module.
5.4 Outlier-rejection Deconvolution CNN (ODCNN)

Our complete network is formed by concatenating the deconvolution CNN module with a denoise CNN [16]; the overall structure is shown in Fig. 6. The denoise CNN module has two hidden layers with 512 feature maps. Its input is convolved with 512 kernels of size 16 × 16 to feed the first hidden layer. The two network modules are concatenated in our system by combining the last layer of the deconvolution CNN with the input of the denoise CNN. This is done by merging the 1 × 1 × 38 kernel with the 512 kernels of size 16 × 16 to generate 512 kernels of size 16 × 16 × 38. Note that there is no nonlinearity between the two modules. While the number of weights grows due to the merge, it allows for a flexible procedure and achieves decent performance when fine tuning is further incorporated.

Figure 6: Our complete network architecture for deep deconvolution, with a deconvolution sub-network (1D kernels of sizes 121 × 1 and 1 × 121 producing 38 feature maps) followed by an outlier-rejection sub-network (kernels of sizes 16 × 16 × 38, 1 × 1 × 512, and 8 × 8 × 512 producing 512 feature maps).

5.5 Training ODCNN

We blur natural images for training; it is thus easy to obtain a large amount of data. Specifically, we use 2,500 natural images downloaded from Flickr, from which two million patches are randomly sampled. Concatenating the two network modules describes the full deconvolution process and enhances the ability to suppress unwanted structures. We train the sub-networks separately. The deconvolution CNN is trained using the initialization from separable inversion as described before. The output of the deconvolution CNN is then taken as the input of the denoise CNN. Fine tuning is performed by feeding one hundred thousand 184 × 184 patches into the whole network. The training samples contain all patches, possibly with noise, saturation, and compression artifacts. The statistics after adding the denoise CNN are also plotted in Fig. 4.
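The kernel merge between the two modules relies on the linearity of convolution: conv(K, Σ_c w_c h_c) = Σ_c w_c conv(K, h_c), so a 1 × 1 combination followed by one kernel K equals per-channel kernels w_c K applied to each map and summed. A small NumPy check with toy sizes (4 maps instead of 38, one 5 × 5 kernel instead of 512 kernels of size 16 × 16):

```python
import numpy as np

def conv2d_fft(img, k):   # circular 2D convolution, sufficient for this check
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))

rng = np.random.default_rng(0)
maps = rng.random((4, 16, 16))     # toy feature maps from the deconvolution CNN
w = rng.standard_normal(4)         # toy 1x1 combination weights
K = rng.standard_normal((5, 5))    # one toy first-layer denoise kernel

# path 1: combine maps with the 1x1 weights, then convolve once
out_a = conv2d_fft(np.tensordot(w, maps, axes=1), K)
# path 2: merged per-channel kernels w_c * K applied per map, then summed
out_b = sum(conv2d_fft(maps[c], w[c] * K) for c in range(4))
```

The two paths agree exactly, which is why the merge is lossless; it is valid only because no nonlinearity sits between the two modules.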
The outlier-rejection CNN after fine tuning improves the overall performance by up to 2dB, especially for saturated regions.

6 More Discussions

Our approach differs from previous ones in several ways. First, we identify the necessity of using a relatively large kernel support for a convolutional neural network to deal with deconvolution. To avoid rapid weight-size expansion, we advocate the use of 1D kernels. Second, we propose supervised pre-training on the sub-network that corresponds to a reinterpretation of Wiener deconvolution. Third, we apply traditional deconvolution to network initialization, where generative solvers can guide neural network learning and significantly improve performance.

Fig. 6 shows that a new convolutional neural network architecture is capable of dealing with deconvolution. Without a good understanding of the functionality of each sub-net and without supervised pre-training, however, it is difficult to make the network work very well. Training the whole network with random initialization is less preferable because the training algorithm stops halfway without further energy reduction, and the corresponding results are similarly blurry as the input images. To understand this, we visualize intermediate results from the deconvolution CNN sub-network, which generates 38 intermediate maps. The results are shown in Fig. 7, where (a) shows three selected maps obtained by random-initialization training and (b) shows the maps at the corresponding nodes from our better-initialized process. The maps in (a) look like the high-frequency part of the blurry input, indicating that random initialization is likely to generate high-pass filters. Without proper starting values, the chance of reaching the component maps shown in (b) is very small; there, sharper edges are present and fully usable for further denoising and artifact removal. Zeiler et al.
[27] showed that sparsely regularized deconvolution can be used to extract useful middle-level representations in their deconvolutional network. Our deconvolution CNN can be used to approximate this structure, unifying the process in a deeper convolutional neural network.

Figure 7: Comparisons of intermediate results from the deconvolution CNN. (a) Maps from random initialization. (b) More informative maps with our initialization scheme.

Table 1: Quantitative comparison on the evaluation image set.

kernel type | Krishnan [3] | Levin [7] | Cho [9]  | Whyte [10] | Schuler [13] | Schmidt [4] | Ours
disk sat.   | 24.05dB      | 24.44dB   | 25.35dB  | 24.47dB    | 23.14dB      | 24.01dB     | 26.23dB
disk        | 25.94dB      | 24.54dB   | 23.97dB  | 22.84dB    | 24.67dB      | 24.71dB     | 26.01dB
motion sat. | 24.07dB      | 23.58dB   | 25.65dB  | 25.54dB    | 24.92dB      | 25.33dB     | 27.76dB
motion      | 25.07dB      | 24.47dB   | 24.29dB  | 23.65dB    | 25.27dB      | 25.49dB     | 27.92dB

Figure 8: Visual comparison of deconvolution results. (a) Input; (b) Levin et al. [7]; (c) Krishnan et al. [3]; (d) EPLL [11]; (e) Cho et al. [9]; (f) Whyte et al. [10]; (g) Schuler et al. [13]; (h) ours.

7 Experiments and Conclusion

We have presented several deconvolution results above. Here we show a quantitative evaluation of our method against state-of-the-art approaches, including sparse-prior deconvolution [7], the hyper-Laplacian prior method [3], variational EM for outliers [9], the saturation-aware approach [10], the learning-based approach [13], and the discriminative approach [4]. We compare performance using both disk and motion kernels. The average PSNRs are listed in Table 1, and Fig. 8 shows a visual comparison. Our method achieves decent results both quantitatively and visually. The implementation, as well as the dataset, is available at the project webpage.

To conclude, we have proposed a new deep convolutional network structure for the challenging image deconvolution task. Our main contribution is to let traditional deconvolution schemes guide neural networks and to approximate deconvolution by a series of convolution steps.
Our system uses two novel modules corresponding to deconvolution and artifact removal. While the network is difficult to train as a whole, we adopt two supervised pre-training steps to initialize the sub-networks. High-quality deconvolution results bear out the effectiveness of this approach.

References

[1] Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., Freeman, W.T.: Removing camera shake from a single photograph. ACM Trans. Graph. 25(3) (2006)
[2] Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding and evaluating blind deconvolution algorithms. In: CVPR. (2009)
[3] Krishnan, D., Fergus, R.: Fast image deconvolution using hyper-Laplacian priors. In: NIPS. (2009)
[4] Schmidt, U., Rother, C., Nowozin, S., Jancsary, J., Roth, S.: Discriminative non-blind deblurring. In: CVPR. (2013)
[5] Agrawal, A.K., Raskar, R.: Resolving objects at higher resolution from a single motion-blurred image. In: CVPR. (2007)
[6] Michaeli, T., Irani, M.: Nonparametric blind super-resolution. In: ICCV. (2013)
[7] Levin, A., Fergus, R., Durand, F., Freeman, W.T.: Image and depth from a conventional camera with a coded aperture. ACM Trans. Graph. 26(3) (2007)
[8] Yuan, L., Sun, J., Quan, L., Shum, H.Y.: Progressive inter-scale and intra-scale non-blind image deconvolution. ACM Trans. Graph. 27(3) (2008)
[9] Cho, S., Wang, J., Lee, S.: Handling outliers in non-blind image deconvolution. In: ICCV. (2011)
[10] Whyte, O., Sivic, J., Zisserman, A.: Deblurring shaken and partially saturated images. In: ICCV Workshops. (2011)
[11] Zoran, D., Weiss, Y.: From learning models of natural image patches to whole image restoration. In: ICCV. (2011)
[12] Kenig, T., Kam, Z., Feuer, A.: Blind image deconvolution using machine learning for three-dimensional microscopy. IEEE Trans. Pattern Anal. Mach. Intell. 32(12) (2010)
[13] Schuler, C.J., Burger, H.C., Harmeling, S., Schölkopf, B.: A machine learning approach for non-blind image deconvolution. In: CVPR.
(2013)
[14] Burger, H.C., Schuler, C.J., Harmeling, S.: Image denoising: Can plain neural networks compete with BM3D? In: CVPR. (2012)
[15] Xie, J., Xu, L., Chen, E.: Image denoising and inpainting with deep neural networks. In: NIPS. (2012)
[16] Eigen, D., Krishnan, D., Fergus, R.: Restoring an image taken through a window covered with dirt or rain. In: ICCV. (2013)
[17] Richardson, W.: Bayesian-based iterative method of image restoration. Journal of the Optical Society of America 62(1) (1972)
[18] Wiener, N.: Extrapolation, interpolation, and smoothing of stationary time series: with engineering applications. Journal of the American Statistical Association 47(258) (1949)
[19] Roth, S., Black, M.J.: Fields of experts. International Journal of Computer Vision 82(2) (2009)
[20] Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.O.: Image restoration by sparse 3D transform-domain collaborative filtering. In: Image Processing: Algorithms and Systems. (2008)
[21] Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research 11 (2010)
[22] Agostinelli, F., Anderson, M.R., Lee, H.: Adaptive multi-column deep neural networks with application to robust image denoising. In: NIPS. (2013)
[23] Jain, V., Seung, H.S.: Natural image denoising with convolutional networks. In: NIPS. (2008)
[24] LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11) (1998)
[25] Xu, L., Tao, X., Jia, J.: Inverse kernels for fast spatial deconvolution. In: ECCV. (2014)
[26] Perona, P.: Deformable kernels for early vision. IEEE Trans. Pattern Anal. Mach. Intell. 17(5) (1995)
[27] Zeiler, M.D., Krishnan, D., Taylor, G.W., Fergus, R.: Deconvolutional networks. In: CVPR. (2010)
Constrained convex minimization via model-based excessive gap

Quoc Tran-Dinh and Volkan Cevher
Laboratory for Information and Inference Systems (LIONS)
École Polytechnique Fédérale de Lausanne (EPFL), CH1015-Lausanne, Switzerland
{quoc.trandinh, volkan.cevher}@epfl.ch

Abstract

We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.

1 Introduction

In [1], Nesterov introduced a primal-dual technique, called the excessive gap, for constructing and analyzing first-order methods for nonsmooth and unconstrained convex optimization problems. This paper builds upon the same idea to construct and analyze algorithms for the following class of constrained convex problems, which captures a surprisingly broad set of applications [2, 3, 4, 5]:

f⋆ := min_{x∈Rⁿ} { f(x) : Ax = b, x ∈ X },   (1)

where f : Rⁿ → R ∪ {+∞} is a proper, closed and convex function; X ⊆ Rⁿ is a nonempty, closed and convex set; and A ∈ R^{m×n} and b ∈ Rᵐ are given. In the sequel, we show how Nesterov's excessive gap relates to the smoothed gap function of a variational inequality that characterizes the optimality condition of (1). In light of this connection, we enforce a simple linear model on the excessive gap, and use it to develop efficient first-order methods that numerically approximate an optimal solution x⋆ of (1). We then rigorously characterize how the following structural assumptions on (1) affect computational efficiency.

Structure 1: Decomposability.
We say that problem (1) is p-decomposable if its objective function f and its feasible set X can be represented as

f(x) := Σ_{i=1}^p f_i(x_i), and X := Π_{i=1}^p X_i,   (2)

where x_i ∈ R^{n_i}, X_i ⊆ R^{n_i}, f_i : R^{n_i} → R ∪ {+∞} is proper, closed and convex for i = 1, ..., p, and Σ_{i=1}^p n_i = n. Decomposability naturally arises in machine learning applications such as group-sparse linear recovery, consensus optimization, and the duality of empirical risk minimization problems [5]. As an important example, a composite convex minimization problem min_{x_1} {f_1(x_1) + f_2(Kx_1)} can be cast into (1) with a 2-decomposable structure by using an intermediate variable x_2 = Kx_1 to represent the linear constraints. Decomposable structure immediately supports parallel and distributed implementations on synchronous hardware architectures.

Structure 2: Proximal tractability. By proximal tractability, we mean that the computation of the following operation with a given proper, closed and convex function g is "efficient" (e.g., by a closed-form solution or by polynomial-time algorithms) [6]:

prox_g(z) := arg min_{w∈R^{n_z}} { g(w) + (1/2)∥w − z∥² }.   (3)

When a constraint z ∈ Z is present, we consider the proximal operator of g(·) + δ_Z(·) instead of g, where δ_Z is the indicator function of Z. Many smooth and non-smooth functions have tractable proximal operators, such as norms and the projection onto a simple set [3, 7, 4, 5].

Scalable algorithms for constrained convex minimization and their limitations. We can obtain scalable numerical solutions of (1) when we augment the objective f with simple penalty functions on the constraints. Despite the fundamental difficulty of choosing the penalty parameter, this approach enhances our computational capabilities as well as numerical robustness, since we can apply modern proximal gradient, alternating direction, and primal-dual methods.
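For intuition, two textbook instances of (3) with closed forms (these particular choices of g are illustrative, not specific to the paper): soft-thresholding for g = λ∥·∥₁, and Euclidean projection for g = δ_Z with a box Z:

```python
import numpy as np

def prox_l1(z, lam):
    """prox of g(w) = lam * ||w||_1: componentwise soft-thresholding."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def prox_box(z, lo, hi):
    """prox of the indicator delta_Z for Z = [lo, hi]^n: projection onto Z."""
    return np.clip(z, lo, hi)

z = np.array([3.0, -0.4, 1.5])
w1 = prox_l1(z, 1.0)            # shrinks each entry toward 0 by lam
w2 = prox_box(z, -1.0, 1.0)     # clips each entry into [-1, 1]
```

Both operators are exact minimizers of (3) because the objective separates across coordinates for these choices of g.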
Unfortunately, existing approaches invariably feature one or both of the following two limitations.

Limitation 1: Non-ideal convergence characterizations. Ideally, the convergence rate characterization of a first-order algorithm for solving (1) must simultaneously bound, for its iterates xᵏ ∈ X, both the objective residual f(xᵏ) − f⋆ and the primal feasibility gap ∥Axᵏ − b∥ of its linear constraints. The constraint feasibility is critical so that the primal convergence rate has any significance. A rate on a combination of the objective residual and the feasibility gap is not necessarily meaningful, since (1) is a constrained problem and f(xᵏ) − f⋆ can easily be negative at all times, in contrast to the unconstrained setting, where we trivially have f(xᵏ) − f⋆ ≥ 0. Hitherto, the convergence results of state-of-the-art methods are far from ideal; see Table 1 in [28]. Most algorithms have guarantees in the ergodic sense [8, 9, 10, 11, 12, 13, 14] with non-optimal rates, which diminishes practical performance; they rely on special function properties to improve convergence rates on the objective and feasibility [12, 15], which reduces the scope of their applicability; they provide rates on dual functions [16], or on a weighted primal residual and feasibility score [13], which does not necessarily imply convergence of the primal residual or of the feasibility; or they obtain a convergence rate on the gap function value sequence composed of both the primal and dual variables via variational inequality and gap function characterizations [8, 10, 11], where the rate is scaled by a diameter parameter of the dual feasible set, which is not necessarily bounded.

Limitation 2: Computational inflexibility. Recent theoretical developments customize algorithms to special function classes for scalability, such as convex functions with globally Lipschitz gradients and strong convexity.
Unfortunately, these algorithms often require knowledge of function class parameters (e.g., the Lipschitz constant and the strong convexity parameter); they do not address the full scope of (1) (e.g., with self-concordant [barrier] functions or fully non-smooth decompositions); and they often have complicated algorithmic implementations with backtracking steps, which can create computational bottlenecks. These issues are compounded by penalty parameter selection, which can significantly decrease numerical efficiency [17]. Moreover, they lack a natural ability to handle p-decomposability in a parallel fashion at optimal rates.

Our specific contributions. This paper addresses the question: "Is it possible to efficiently solve (1) using only the proximal tractability assumption, with rigorous global convergence rates on the objective residual and the primal feasibility gap?" The answer is indeed positive, provided that there exists a solution in a bounded feasible set X. Surprisingly, we can still leverage favorable function classes for fast convergence, such as strongly convex functions, and exploit p-decomposability at optimal rates. Our characterization is radically different from existing results, such as those in [18, 8, 19, 9, 10, 11, 12, 13]. Specifically, we unify primal-dual methods [20, 21], smoothing (both via Bregman distances and via augmented Lagrangian functions) [22, 21], and the excessive gap technique [1] in one framework. As a result, we develop an efficient algorithmic framework for solving (1), which covers the augmented Lagrangian method [23, 24], the [preconditioned] alternating direction method of multipliers ([P]ADMM) [8], and fast dual descent methods [18] as special cases. Based on the new technique, we establish rigorous convergence rates for a few well-known primal-dual methods, which are optimal (in the sense of first-order black-box models [25]) given our particular assumptions.
We also discuss adaptive strategies for trading off between the objective residual |f(xᵏ) − f⋆| and the feasibility gap ∥Axᵏ − b∥, which enhance practical performance. Finally, we describe how strong convexity of f can be exploited, and numerically illustrate the theoretical results.

2 Preliminaries

2.1. A semi-Bregman distance. Let Z be a nonempty, closed convex set in R^{n_z}. A nonnegative, continuous and μ_b-strongly convex function b is called a μ_b-proximity function (or prox-function) of Z if Z ⊆ dom(b). Then z_c := argmin_{z∈Z} b(z) exists and is unique; it is called the center point of b. Given a smooth μ_b-prox-function b of Z (with μ_b = 1), we define

d_b(z, ẑ) := b(ẑ) − b(z) − ∇b(z)ᵀ(ẑ − z), ∀z, ẑ ∈ dom(b),

as the Bregman distance between z and ẑ given b. As an example, with b(z) := (1/2)∥z∥²₂, we have d_b(z, ẑ) = (1/2)∥z − ẑ∥²₂, i.e., half the squared Euclidean distance.

In order to unify both the Bregman distance and augmented Lagrangian smoothing methods, we introduce a new semi-Bregman distance d_b(Sx, Sx_c) between x and x_c, given a matrix S. Since S is not necessarily square, we use the prefix "semi" for this measure. We also denote by

D^S_X := sup { d_b(Sx, Sx_c) : x, x_c ∈ X },   (4)

the semi-diameter of X. If X is bounded, then 0 ≤ D^S_X < +∞.

2.2. The dual problem of (1). Let L(x, y) := f(x) + yᵀ(Ax − b) be the Lagrange function of (1), where y ∈ Rᵐ is the vector of Lagrange multipliers. The dual problem of (1) is defined as

g⋆ := max_{y∈Rᵐ} g(y),   (5)

where g is the dual function, defined as

g(y) := min_{x∈X} { f(x) + yᵀ(Ax − b) }.   (6)

Let us denote by x⋆(y) the solution of (6) for a given y ∈ Rᵐ. Corresponding to x⋆(y), we also define the domain of g as dom(g) := {y ∈ Rᵐ : x⋆(y) exists}. If f is continuous on X and X is bounded, then x⋆(y) exists for all y ∈ Rᵐ. Unfortunately, g is nonsmooth, and numerical solutions of (5) are difficult to obtain [25]. In general, we have g(y) ≤ f(x), which is the weak-duality condition of convex analysis.
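The Bregman distance d_b(z, ẑ) = b(ẑ) − b(z) − ∇b(z)ᵀ(ẑ − z) is easy to check numerically. Below, the Euclidean prox-function recovers half the squared distance, and the negative-entropy choice (an illustrative alternative prox-function, not one used in the paper's experiments) gives a generalized Kullback-Leibler divergence:

```python
import numpy as np

def bregman(b, grad_b, z, z_hat):
    """d_b(z, z_hat) = b(z_hat) - b(z) - <grad b(z), z_hat - z>."""
    return b(z_hat) - b(z) - grad_b(z) @ (z_hat - z)

b_euc = lambda z: 0.5 * (z @ z)
grad_euc = lambda z: z

# negative entropy on the positive orthant; its Bregman distance is a
# generalized KL divergence (equal to KL for points on the simplex)
b_ent = lambda z: np.sum(z * np.log(z))
grad_ent = lambda z: np.log(z) + 1.0

z = np.array([0.2, 0.5, 0.3])
z_hat = np.array([0.1, 0.6, 0.3])
d_euc = bregman(b_euc, grad_euc, z, z_hat)   # = 0.5 * ||z - z_hat||^2
d_ent = bregman(b_ent, grad_ent, z, z_hat)   # = sum z_hat*log(z_hat/z) here
```

The semi-Bregman distance of (4) is obtained by evaluating the same d_b at Sx and Sx_c instead of z and ẑ.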
To guarantee strong duality, i.e., f⋆ = g⋆ between (1) and (5), we need the following assumption.

Assumption A.1. The solution set X⋆ of (1) is nonempty. The function f is proper, closed and convex. In addition, either X is a polytope or the Slater condition holds, i.e., {x ∈ Rⁿ : Ax = b} ∩ relint(X) ≠ ∅, where relint(X) is the relative interior of X.

Under Assumption A.1, the solution set Y⋆ of (5) is also nonempty and bounded. Moreover, strong duality holds, i.e., f⋆ = g⋆. Any point (x⋆, y⋆) ∈ X⋆ × Y⋆ is a primal-dual solution to (1) and (5), and is also a saddle point of L, i.e., L(x⋆, y) ≤ L(x⋆, y⋆) ≤ L(x, y⋆) for all (x, y) ∈ X × Rᵐ.

2.3. Mixed-variational inequality formulation and the smoothed gap function. We use w := [x; y] ∈ Rⁿ × Rᵐ to denote the primal-dual variable, and F(w) := [Aᵀy; b − Ax] to denote a partial Karush-Kuhn-Tucker (KKT) mapping. Then, we can write the optimality condition of (1) as

f(x) − f(x⋆) + F(w⋆)ᵀ(w − w⋆) ≥ 0, ∀w ∈ X × Rᵐ,   (7)

which is known as a mixed variational inequality (MVIP) [26]. If we define W := X × Rᵐ and

G(w⋆) := max_{w∈W} { f(x⋆) − f(x) + F(w⋆)ᵀ(w⋆ − w) },   (8)

then G is known as the Auslender gap function of (7) [27]. By the definition of F, we can see that

G(w⋆) = max_{(x,y)∈W} { f(x⋆) − f(x) − (Ax − b)ᵀy⋆ } = f(x⋆) − g(y⋆) ≥ 0.

It is clear that G(w⋆) = 0 if and only if w⋆ := [x⋆; y⋆] ∈ W⋆ := X⋆ × Y⋆, i.e., strong duality holds. Since G is generally nonsmooth, we smooth it by adding an augmented convex function:

d_{γβ}(w) ≡ d_{γβ}(x, y) := γ d_b(Sx, Sx_c) + (β/2)∥y∥²,   (9)

where d_b is a Bregman distance, S is a given matrix, and γ, β > 0 are smoothness parameters. The smoothed gap function for G is defined as

G_{γβ}(w̄) := max_{w∈W} { f(x̄) − f(x) + F(w̄)ᵀ(w̄ − w) − d_{γβ}(w) },   (10)

where F is defined in (7). The function G_{γβ} can be considered a smoothed gap function for the MVIP (7). By the definitions of G and G_{γβ}, we can easily show that

G_{γβ}(w̄) ≤ G(w̄) ≤ G_{γβ}(w̄) + max { d_{γβ}(w) : w ∈ W },   (11)

which is key to developing the algorithm in the next section.
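Weak duality g(y) ≤ f(x) for feasible x is easy to verify on a toy instance of (6) where the inner minimization has a closed form. The choice f(x) = ½∥x∥² over a box X is made here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((m, n))
x_feas = rng.uniform(-0.5, 0.5, n)   # a point in X = [-1, 1]^n ...
b = A @ x_feas                        # ... chosen so that Ax = b holds at it

f = lambda x: 0.5 * (x @ x)

def g(y):
    """Dual function (6): for f(x) = 0.5*||x||^2 over the box, the inner
    minimization separates, so x*(y)_i = clip(-(A^T y)_i, -1, 1)."""
    x_star = np.clip(-(A.T @ y), -1.0, 1.0)
    return f(x_star) + y @ (A @ x_star - b)

ys = [rng.standard_normal(m) for _ in range(5)]   # arbitrary dual points
```

Every dual value lower-bounds the primal value at the feasible point, regardless of y, which is the weak-duality inequality stated in the text.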
Problem (10) is convex, and its solution w⋆_{γβ}(w̄) can be computed as w⋆_{γβ}(w̄) := [x⋆_γ(ȳ); y⋆_β(x̄)], where

x⋆_γ(ȳ) := argmin_{x∈X} { f(x) + ȳᵀ(Ax − b) + γ d_b(Sx, Sx_c) },
y⋆_β(x̄) := β⁻¹(Ax̄ − b).   (12)

In this case, the concave function

g_γ(y) := min_{x∈X} { f(x) + yᵀ(Ax − b) + γ d_b(Sx, Sx_c) },   (13)

can be considered a smooth approximation of the dual function g defined by (6).

2.4. Bregman distance smoother vs. augmented Lagrangian smoother. Depending on the choice of S and x_c, we deal with two smoothers:
1. If we choose S = I, the identity matrix, and x_c as the center point of b, then we obtain a Bregman distance smoother.
2. If we choose S = A, and x_c ∈ X such that Ax_c = b, then we obtain the augmented Lagrangian smoother.
With both smoothing techniques, the function g_γ is smooth and concave. Its gradient is Lipschitz continuous with Lipschitz constant L^g_γ := γ⁻¹∥A∥² and L^g_γ := γ⁻¹, respectively.

3 Construction and analysis of a class of first-order primal-dual algorithms

3.1. Model-based excessive gap technique for (1). Recall that G(w⋆) = 0 if and only if w⋆ = [x⋆; y⋆] is a primal-dual optimal solution of (1)-(5). The goal is therefore to construct a sequence {w̄ᵏ} such that G(w̄ᵏ) → 0, which implies that {w̄ᵏ} converges to w⋆. As suggested by (11), if we can construct two sequences {w̄ᵏ} and {(γ_k, β_k)} such that G_{γ_k β_k}(w̄ᵏ) → 0⁺ as γ_k β_k ↓ 0⁺, then G(w̄ᵏ) → 0. Inspired by Nesterov's excessive gap idea in [1], we construct the following model-based excessive gap condition for (1) in order to achieve this goal.

Definition 1 (Model-based Excessive Gap). Given w̄ᵏ ∈ W and (γ_k, β_k) > 0, a new point w̄ᵏ⁺¹ ∈ W and (γ_{k+1}, β_{k+1}) > 0 with γ_{k+1}β_{k+1} < γ_k β_k are said to be firmly contractive (w.r.t. G_{γβ} defined by (10)) when it holds that

G_{k+1}(w̄ᵏ⁺¹) ≤ (1 − τ_k) G_k(w̄ᵏ) − ψ_k,   (14)

where G_k := G_{γ_k β_k}, τ_k ∈ [0, 1) and ψ_k ≥ 0.
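To see the effect of the smoothing in (13): with f(x) = ½∥x∥², S = I, b(z) = ½∥z∥², x_c = 0, and X = Rⁿ (an unconstrained toy instance chosen for its closed forms, not one of the paper's bounded-domain settings), the primal step is x⋆_γ(y) = −Aᵀy/(1+γ), g_γ is a concave quadratic, and plain gradient ascent on it drives the iterates to feasibility. This illustrates the smoothing idea only, not the accelerated Algorithm 1:

```python
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 2.0, 0.0, 1.0]])
b = np.array([1.0, 2.0])
gamma = 0.1                      # smoothness parameter gamma in (13)

def x_star(y):
    # argmin_x 0.5*||x||^2 + y^T (A x - b) + (gamma/2)*||x||^2
    return -(A.T @ y) / (1.0 + gamma)

# Lipschitz constant of grad g_gamma: ||A||^2 / (1 + gamma) for this f
L = np.linalg.norm(A, 2) ** 2 / (1.0 + gamma)
y = np.zeros(2)
for _ in range(200):
    y = y + (1.0 / L) * (A @ x_star(y) - b)   # gradient ascent on g_gamma

feas = np.linalg.norm(A @ x_star(y) - b)       # -> 0 at the maximizer
```

At the maximizer of g_γ the gradient A x⋆_γ(y) − b vanishes, so the smoothed dual iteration recovers a feasible primal point, the mechanism that Lemma 1 quantifies for the full algorithm.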
From Definition 1, if {w̄^k} and {(γ_k, β_k)} satisfy (14), then by induction we have G_k(w̄^k) ≤ ω_k G_0(w̄^0) − Ψ_k, where ω_k := ∏_{j=0}^{k−1}(1 − τ_j) and Ψ_k := ψ_0 + ∑_{j=1}^{k−1} ∏_{i=0}^{j−1}(1 − τ_i) ψ_j. If G_0(w̄^0) ≤ 0, then we can bound the objective residual |f(x̄^k) − f^⋆| and the primal feasibility ∥Ax̄^k − b∥ of (1):

Lemma 1 ([28]). Let G_{γβ} be defined by (10). Let {w̄^k}_{k≥0} ⊂ W and {(γ_k, β_k)}_{k≥0} ⊂ R²_{++} be sequences that satisfy (14). Then, it holds that:

−[2β_k D^⋆_Y + (2γ_k β_k D^S_X)^{1/2}] D^⋆_Y ≤ f(x̄^k) − f^⋆ ≤ γ_k D^S_X,
∥Ax̄^k − b∥ ≤ 2β_k D^⋆_Y + (2γ_k β_k D^S_X)^{1/2}, (15)

where D^⋆_Y := min{∥y^⋆∥₂ : y^⋆ ∈ Y^⋆}, the norm of a minimum-norm dual solution.

Hence, we can derive algorithms based on (γ_k, β_k) with a predictable convergence rate via (15). In the sequel, we manipulate τ_k and ψ_k to do just that while preserving (14), à la Nesterov [1]. Finally, we say that x̄^k ∈ X is an ε-solution of (1) if |f(x̄^k) − f^⋆| ≤ ε and ∥Ax̄^k − b∥ ≤ ε.

3.2. Initial points. We first show how to compute an initial point w̄^0 such that G_0(w̄^0) ≤ 0.

Lemma 2 ([28]). Given x^0_c ∈ X, the point w̄^0 := [x̄^0; ȳ^0] ∈ W computed by:

x̄^0 = x^⋆_{γ_0}(0^m) := argmin_{x ∈ X} { f(x) + γ_0 d_b(Sx, Sx^0_c) },
ȳ^0 = y^⋆_{β_0}(x̄^0) := β_0^{-1}(Ax̄^0 − b), (16)

satisfies G_{γ_0 β_0}(w̄^0) ≤ −γ_0 d_b(Sx̄^0, Sx_c) ≤ 0 provided that β_0 γ_0 ≥ L̄^g, where L̄^g is the Lipschitz constant of ∇g_γ with g_γ given in Subsection 2.4.

3.3. An algorithmic template. Algorithm 1 combines the above ingredients for solving (1). We observe that the key computational step of Algorithm 1 is Step 3, where we update [x̄^{k+1}; ȳ^{k+1}]. In the algorithm, we provide two update schemes, (1P2D) and (2P1D), based on the updates of the primal and dual variables. The primal step x^⋆_{γ_k}(ȳ^k) is calculated via (12). At line 3 of (2P1D), the operator prox^S_{βf} is computed as:

prox^S_{βf}(x̂, ŷ) := argmin_{x ∈ X} { f(x) + ŷ^T A(x − x̂) + β^{-1} d_b(Sx, Sx̂) }, (17)

where we overload the notation of the proximal operator prox defined above.
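The operator (17) is a standard proximal subproblem. As a hedged sketch (our own illustration, not the paper's implementation), if f(x) = λ∥x∥₁, X = R^n, S = I, and d_b is the Euclidean distance, it reduces to soft-thresholding of a gradient step; the helper names are ours.

```python
import numpy as np

# Sketch of prox^S_{beta f} in (17), assuming f(x) = lam*||x||_1, X = R^n,
# S = I, and d_b(x, x_hat) = 0.5*||x - x_hat||^2. The subproblem
#   argmin_x lam*||x||_1 + y_hat^T A (x - x_hat) + (1/(2*beta))*||x - x_hat||^2
# is solved by soft-thresholding a gradient step.
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_step(A, x_hat, y_hat, beta, lam):
    return soft_threshold(x_hat - beta * (A.T @ y_hat), beta * lam)
```

With A = 0 the operator collapses to the plain soft-thresholding of x̂, which is a quick sanity check on the formula.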
At Step 2 of Algorithm 1, if we choose S := I, i.e., d_b(Sx, Sx_c) := d_b(x, x_c) with x_c the center point of d_b, then we set L̄^g := ∥A∥². If S := A, i.e., d_b(Sx, Sx_c) := (1/2)∥Ax − b∥², then we set L̄^g := 1. Theorem 1 characterizes three variants of Algorithm 1; its proof can be found in [28].

Algorithm 1 (A primal-dual algorithmic template using model-based excessive gap)
Inputs: Fix γ_0 > 0. Choose c_0 ∈ (−1, 1].
Initialization:
1: Compute a_0 := 0.5(1 + c_0 + sqrt(4(1 − c_0) + (1 + c_0)²)), τ_0 := a_0^{-1}, and β_0 := γ_0^{-1} L̄^g (cf. the text).
2: Compute [x̄^0; ȳ^0] as in (16) of Lemma 2.
For k = 0 to k_max, perform:
3: If the stopping criterion holds, terminate. Otherwise, use one of the following update schemes:

(2P1D):
  x̂^k := (1 − τ_k) x̄^k + τ_k x^⋆_{γ_k}(ȳ^k),
  ŷ^k := β_{k+1}^{-1}(Ax̂^k − b),
  x̄^{k+1} := prox^S_{β_{k+1} f}(x̂^k, ŷ^k),
  ȳ^{k+1} := (1 − τ_k) ȳ^k + τ_k ŷ^k.

(1P2D):
  ȳ^⋆_k := β_k^{-1}(Ax̄^k − b),
  ŷ^k := (1 − τ_k) ȳ^k + τ_k ȳ^⋆_k,
  x̄^{k+1} := (1 − τ_k) x̄^k + τ_k x^⋆_{γ_k}(ŷ^k),
  ȳ^{k+1} := ŷ^k + γ_k (A x^⋆_{γ_k}(ŷ^k) − b).

4: Update β_{k+1} := (1 − τ_k) β_k and γ_{k+1} := (1 − c_k τ_k) γ_k. Update c_{k+1} from c_k (optional).
5: Update a_{k+1} := 0.5(1 + c_{k+1} + sqrt(4a_k² + (1 − c_{k+1})²)) and set τ_{k+1} := a_{k+1}^{-1}.
End For

Theorem 1. Let {(x̄^k, ȳ^k)} be the sequence generated by Algorithm 1 after k iterations. Then:

If S = A, i.e., using the augmented Lagrangian smoother, γ_0 := sqrt(L̄^g) = 1, and c_k := 0, then the (1P2D) update satisfies:

∥Ax̄^k − b∥ ≤ 8 D^⋆_Y / (k+1)²,
−(1/2)∥Ax̄^k − b∥² − D^⋆_Y ∥Ax̄^k − b∥ ≤ f(x̄^k) − f^⋆ ≤ 0, (18)

for all k ≥ 0. As a consequence, the worst-case analytical complexity of Algorithm 1 to achieve an ε-solution x̄^k is O(1/√ε).

If S = I, i.e., using the Bregman distance smoother, γ_0 := sqrt(L̄^g) = ∥A∥, and c_k := 1, then, for the (2P1D) scheme, we have:

(2P1D):
∥Ax̄^k − b∥ ≤ ∥A∥(2 D^⋆_Y + sqrt(2 D^I_X)) / (k+1),
−D^⋆_Y ∥Ax̄^k − b∥ ≤ f(x̄^k) − f^⋆ ≤ (∥A∥/(k+1)) D^I_X. (19)

Similarly, if γ_0 := 2√2∥A∥/(K+1) and c_k := 0 for all k = 0, 1, . . .
, K, then, for the (1P2D) scheme, we have:

(1P2D):
∥Ax̄^K − b∥ ≤ 2√2∥A∥(D^⋆_Y + sqrt(D^I_X)) / (K+1),
−D^⋆_Y ∥Ax̄^K − b∥ ≤ f(x̄^K) − f^⋆ ≤ (2√2∥A∥/(K+1)) D^I_X. (20)

Hence, the worst-case analytical complexity to achieve an ε-solution x̄^k of (1) is O(ε^{-1}).

The (1P2D) scheme is closely related to some well-known primal-dual methods that we describe below. Unfortunately, (1P2D) has the drawback of fixing the total number of iterations a priori, which (2P1D) avoids at the expense of one more proximal operator calculation at each iteration.

3.4. Impact of strong convexity. We can improve the above schemes when f ∈ F_µ, i.e., f is strongly convex with parameter µ_f > 0. In this case the dual function g given in (6) is smooth with Lipschitz-continuous gradient, with constant L^g_f := µ_f^{-1}∥A∥². Let us illustrate this when S = I, using the (1P2D) scheme:

(1P2Dµ):
  ŷ^k := (1 − τ_k) ȳ^k + τ_k y^⋆_{β_k}(x̄^k),
  x̄^{k+1} := (1 − τ_k) x̄^k + τ_k x^⋆(ŷ^k),
  ȳ^{k+1} := ŷ^k + (1/L^g_f)(A x^⋆(ŷ^k) − b).

We can still choose the starting point as in (16) with β_0 := L^g_f. The parameters β_k and τ_k at Steps 4 and 5 of Algorithm 1 are updated as β_{k+1} := (1 − τ_k) β_k and τ_{k+1} := (τ_k/2)(sqrt(τ_k² + 4) − τ_k), where β_0 := L^g_f and τ_0 := (√5 − 1)/2. The following corollary states the convergence of Algorithm 1 using (1P2Dµ); see [28] for the detailed proof.

Corollary 1. Let f ∈ F_µ and {(x̄^k, ȳ^k)}_{k≥0} be generated by Algorithm 1 using (1P2Dµ). Then:

∥Ax̄^k − b∥ ≤ 4∥A∥² D^⋆_Y / (µ_f (k+2)²), and −D^⋆_Y ∥Ax̄^k − b∥ ≤ f(x̄^k) − f^⋆ ≤ 0.

Moreover, we also have ∥x̄^k − x^⋆∥ ≤ 4∥A∥ D^⋆_Y / ((k+2) µ_f).

It is important to note that, when f ∈ F_µ, we only have one smoothness parameter β and, hence, we do not need to fix the number of iterations a priori (in contrast with [18]).

4 Algorithmic enhancements through existing methods

Our framework can directly instantiate concrete variants of some popular primal-dual methods for (1). We illustrate three connections here and establish one convergence result for the second variant.
We also borrow adaptation heuristics from other algorithms to enhance practical performance.

4.1. Proximal-point methods. We can choose x^k_c := x^⋆_{γ_{k−1}}(ŷ^{k−1}). This makes Algorithm 1 similar to the proximal-based decomposition algorithm in [29], which employs the proximal term d_b(·, x̂^⋆_{k−1}) with the Bregman distance d_b. The convergence analysis can be found in [28].

4.2. Primal-dual hybrid gradient (PDHG). When f is 2-decomposable, i.e., f(x) := f_1(x_1) + f_2(x_2), we can choose x^k_c by applying one gradient step to the augmented Lagrangian term as:

x^k_c := [g^k_1; g^k_2] with
g^k_1 := x^k_1 − ∥A_1∥^{-2} A_1^T (A_1 x^k_1 + A_2 x^k_2 − b),
g^k_2 := x^k_2 − ∥A_2∥^{-2} A_2^T (A_1 x^{k+1}_1 + A_2 x^k_2 − b). (PADMM)

In this case, (1P2D) leads to a new variant of PADMM in [8], or of PDHG in [9].

Corollary 2 ([28]). Let {(x̄^k, ȳ^k)}_{k≥0} be a sequence generated by (1P2D) in Algorithm 1 using x^k_c as in (PADMM). If γ_0 := 2√2∥A∥²/(K+1) and c_k := 0 for all k = 0, 1, …, K, then we have:

∥Ax̄^K − b∥ ≤ 2√2∥A∥(D^⋆_Y + D_X)/(K+1),
−D^⋆_Y ∥Ax̄^K − b∥ ≤ f(x̄^K) − f^⋆ ≤ (2√2∥A∥/(K+1)) D²_X, (21)

where D_X := 4 max{∥x − x̂∥ : x, x̂ ∈ X}.

4.4. ADMM. When f is 2-decomposable as f(x) := f_1(x_1) + f_2(x_2), we can choose d_b, S and x^k_c such that d_b(Sx, Sx_c) := (1/2)[∥A_1 x_1 + A_2 x^k_2 − b∥² + ∥A_1 x^{k+1}_1 + A_2 x_2 − b∥²]. Then Algorithm 1 reduces to a new variant of ADMM. Its convergence guarantee is essentially the same as in Corollary 2. More details of the algorithm and its convergence can be found in [28].

4.5. Enhancements of our schemes. For the PADMM and ADMM methods, many adaptation techniques have been proposed to enhance their convergence. We can view some of these techniques in the light of the model-based excessive gap condition. For instance, Algorithm 1 decreases the smoothed gap function G_{γ_k β_k} as stated in Definition 1. The actual decrease is then given by f(x̄^k) − f^⋆ ≤ γ_k (D^S_X − Ψ_k/γ_k). In practice, D_k := D^S_X − Ψ_k/γ_k can be dramatically smaller than D^S_X in the early iterations.
This implies that increasing γ_k can improve practical performance. Such a strategy indeed forms the basis of many adaptation techniques in PADMM and ADMM. Specifically, if γ_k increases, then τ_k also increases and β_k decreases. Since β_k measures the primal feasibility gap F_k := ∥Ax̄^k − b∥ due to Lemma 1, we should only increase γ_k if the feasibility gap F_k is relatively high. Indeed, in the case x^k_c := [g^k_1; g^k_2], we can compute the dual feasibility gap as H_k := γ_k ∥A_1^T A_2 ((x̂^⋆_2)^{k+1} − (x̂^⋆_2)^k)∥. Then, if F_k ≥ s H_k for some s > 0, we increase γ_{k+1} := c γ_k for some c > 1. We use c_k = c := 1.05 in practice. We can also decrease the parameter γ_k in (1P2D) by γ_{k+1} := (1 − c_k τ_k) γ_k, where c_k := d_b(Sx^⋆_{γ_k}(ŷ^k), Sx_c)/D^S_X ∈ [0, 1], after or during the update of (x̄^{k+1}, ȳ^{k+1}) as in (2P1D), if we know the estimate D^S_X.

5 Numerical illustrations

5.1. Theoretical vs. practical bounds. We demonstrate the empirical performance of Algorithm 1 w.r.t. its theoretical bounds via a basic non-overlapping sparse-group basis pursuit problem:

min_{x ∈ R^n} { ∑_{i=1}^{n_g} w_i ∥x_{g_i}∥₂ : Ax = b, ∥x∥_∞ ≤ ρ }, (22)

where ρ > 0 is the signal magnitude, and the g_i and w_i are the group indices and weights, respectively.
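For problem (22), the proximal operator of the group-norm objective is block soft-thresholding, which is what makes the primal subproblems cheap. The sketch below is our own illustration: it ignores the box constraint ∥x∥_∞ ≤ ρ for simplicity and uses made-up group data.

```python
import numpy as np

# Sketch of the proximal operator of f(x) = sum_i w_i * ||x_{g_i}||_2
# (the objective in (22)), ignoring the box constraint for simplicity:
# each group is block-soft-thresholded.
def prox_group_norm(x, groups, weights, t):
    out = x.copy()
    for g, w in zip(groups, weights):
        nrm = np.linalg.norm(x[g])
        out[g] = 0.0 if nrm <= t * w else (1.0 - t * w / nrm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
z = prox_group_norm(x, groups=[[0, 1], [2, 3]], weights=[1.0, 1.0], t=1.0)
# First group has norm 5 -> shrunk by factor 0.8; second has norm ~0.14 -> zeroed,
# so z == [2.4, 3.2, 0.0, 0.0].
```

Groups with small norm are set exactly to zero, which explains the increasing sparsity of the iterates observed in the experiments below.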
Figure 1: Actual performance vs. theoretical bounds: [top row] the decomposable Bregman distance smoother (S = I) and [bottom row] the augmented Lagrangian smoother (S = A). (Each row plots |f(x^k) − f^⋆| and ∥Ax^k − b∥ in log-scale against the number of iterations, for the theoretical bound, the basic variant, and the tuned variant of each of the (2P1D) and (1P2D) schemes.)

In this test, we fix x_c = 0_n and d_b(x, x_c) := (1/2)∥x∥². Since ρ is given, we can evaluate D_X numerically. By solving (22) with the SDPT3 interior-point solver [30] up to accuracy 10^{-8}, we can estimate D^⋆_Y and f^⋆. In the (2P1D) scheme, we set γ_0 = β_0 = sqrt(L̄^g), while, in the (1P2D) scheme, we set γ_0 := 2√2∥A∥(K+1)^{-1} with K := 10^4, and generate the theoretical bounds defined in Theorem 1. We test the performance of the four variants using synthetic data: n = 1024, m = ⌊n/3⌋ = 341, n_g = ⌊n/8⌋ = 128, and x^♮ is a ⌊n_g/8⌋-sparse vector. The matrix A is generated randomly with i.i.d. standard Gaussian entries, and b := Ax^♮. The group indices g_i (i = 1, …, n_g) are also generated randomly. The empirical performance of the two variants (2P1D) and (1P2D) of Algorithm 1 is shown in Figure 1. The basic algorithm refers to the case where x^k_c := x_c = 0_n and the parameters are not tuned.
Hence, each iteration of the basic (1P2D) uses only one proximal calculation and applies A and A^T once each, while each iteration of the basic (2P1D) uses two proximal calculations and applies A twice and A^T once. In contrast, the tuned (2P1D) and (1P2D) variants require one more application of A^T per iteration for the adaptive parameter updates. As can be seen from Figure 1 (top row), the empirical performance of the basic variants roughly follows the O(1/k) convergence rate in terms of |f(x̄^k) − f^⋆| and ∥Ax̄^k − b∥. The deviations from the bound are due to the increasing sparsity of the iterates, which improves empirical convergence. With a kick-factor of c_k = −0.02/τ_k and adaptive x^k_c, both tuned variants (2P1D) and (1P2D) significantly outperform the theoretical predictions. Indeed, they approach x^⋆ up to 10^{-13} accuracy, i.e., ∥x̄^k − x^⋆∥ ≤ 10^{-13}, after a few hundred iterations. Similarly, Figure 1 (bottom row) compares the actual performance against the theoretical O(1/k²) bounds when using the augmented Lagrangian smoother. Here, we solve the subproblems (13) and (17) using FISTA [31] up to 10^{-8} accuracy, as suggested in [28]. In this case, the theoretical bounds and the actual performance of the basic variants are very close to each other, both in terms of |f(x̄^k) − f^⋆| and ∥Ax̄^k − b∥. When the parameter γ_k is updated, the algorithms exhibit better performance.

5.2. Binary linear support vector machine. This example is concerned with the following binary linear support vector machine problem:

min_{x ∈ R^n} { F(x) := ∑_{j=1}^m ℓ_j(y_j, w_j^T x − b_j) + g(x) }, (23)

where ℓ_j(s, τ) is the hinge loss ℓ_j(s, τ) := max{0, 1 − sτ} = [1 − sτ]_+, w_j^T is the j-th row of a given matrix W ∈ R^{m×n}, b ∈ R^m is the bias vector, y ∈ {−1, +1}^m is a classifier vector, and g is a given regularization function, e.g., g(x) := (λ/2)∥x∥² for the ℓ2-regularizer or g(x) := λ∥x∥₁ for the ℓ1-regularizer, where λ > 0 is a regularization parameter.
By introducing a slack variable r = Wx − b, we can write (23) in the form of (1) as:

min_{x ∈ R^n, r ∈ R^m} { ∑_{j=1}^m ℓ_j(y_j, r_j) + g(x) : Wx − r = b }. (24)

Now, we apply the (1P2D) variant to solve (24). We test this algorithm on (24) and compare it with LibSVM [32] using two problems from the LibSVM data set available at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. The first problem is a1a, which has p = 119 features and N = 1605 data points, while the second problem is news20, which has p = 1,355,191 features and N = 19,996 data points. We compare Algorithm 1 and the LibSVM solver in terms of the final value F(x^k) of the original objective function F, the computational time, and the classification accuracy ca_λ := 1 − N^{-1} ∑_{j=1}^N [sign(Wx^k − r) ≠ y]_j on both the training and the test data set. We randomly select 30% of the data in a1a and news20 to form a test set, and the remaining 70% of the data is used for training. We perform 10 runs and report the average results. These average results are plotted in Fig. 2 for the two problems. The upper and lower bounds show the maximum and minimum values over the 10 runs.
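The slack reformulation (23) → (24) above can be expressed with a stacked variable z = [x; r] and a constraint matrix [W, −I]; a minimal sketch of that bookkeeping (our own code, with toy data):

```python
import numpy as np

# Sketch of the reformulation (23) -> (24): with z = [x; r], the constraint
# Wx - r = b reads A z = b where A = [W, -I].
def reformulate(W, b):
    m, _ = W.shape
    A = np.hstack([W, -np.eye(m)])
    return A, b

W = np.array([[1.0, 2.0], [3.0, 4.0]]); b = np.array([1.0, 0.0])
A, b2 = reformulate(W, b)
x = np.array([1.0, 1.0]); r = W @ x - b   # feasible slack by construction
z = np.concatenate([x, r])
```

Any (x, r) with r = Wx − b satisfies A z = b exactly, so the hinge-loss terms and the regularizer g decouple across the two blocks of z, matching the p-decomposable structure of (1).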
Figure 2: The average performance results of the two algorithms on the a1a (first row) and news20 (second row) problems. (Each row plots, against the parameter horizon λ^{-1}, the objective value F(x^k), the classification accuracy on the training and test sets, and the CPU time in seconds for (1P2D) and LibSVM.)

As can be seen from these results, both solvers give roughly the same objective values and accuracies on these two problems, while the computational time of (1P2D) is much lower than that of LibSVM. We note that LibSVM becomes slower as the parameter λ becomes smaller, due to its active-set strategy. The runtime of the (1P2D) algorithm is almost independent of the regularization parameter λ, in contrast with active-set methods. In addition, the performance of (1P2D) can be further improved by taking into account its parallelization ability, which has not yet been fully exploited in our implementation.
6 Conclusions

We propose a model-based excessive gap (MEG) technique for constructing and analyzing first-order primal-dual methods that numerically approximate an optimal solution of the constrained convex optimization problem (1). Thanks to a combination of smoothing strategies and MEG, we propose, to the best of our knowledge, the first primal-dual algorithmic schemes for (1) that theoretically attain optimal convergence rates directly, without averaging the iterates, and that seamlessly handle the p-decomposability structure. In addition, our analysis techniques can be simply adapted to handle the inexact oracles produced by approximately solving the primal subproblems (cf. [28]), which is important for the augmented Lagrangian versions with lower iteration counts. We expect a deeper understanding of MEG and of different smoothing strategies to help us tailor adaptive update strategies for our schemes (as well as for several other connected and well-known schemes) in order to further improve empirical performance.

Acknowledgments. This work is supported in part by the European Commission under the grants MIRG-268398 and ERC Future Proof, and by the Swiss Science Foundation under the grants SNF 200021-132548, SNF 200021-146750 and SNF CRSII2-147633.

References

[1] Y. Nesterov, “Excessive gap technique in nonsmooth convex minimization,” SIAM J. Optim., vol. 16, no. 1, pp. 235–249, 2005. [2] D. Bertsekas and J. N. Tsitsiklis, Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989. [3] V. Chandrasekaran, B. Recht, P. Parrilo, and A. Willsky, “The convex geometry of linear inverse problems,” Laboratory for Information and Decision Systems, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Tech. Report, 2012. [4] M. B. McCoy, V. Cevher, Q. Tran-Dinh, A. Asaei, and L. Baldassarre, “Convexity in source separation: Models, geometry, and algorithms,” IEEE Signal Processing Magazine, vol. 31, no. 3, pp.
87–95, 2014. [5] M. J. Wainwright, “Structured regularizers for high-dimensional problems: Statistical and computational issues,” Annual Review of Statistics and its Applications, vol. 1, pp. 233–253, 2014. [6] N. Parikh and S. Boyd, “Proximal algorithms,” Foundations and Trends in Optimization, vol. 1, no. 3, pp. 123–231, 2013. [7] P. L. Combettes and V. R. Wajs, “Signal recovery by proximal forward-backward splitting,” Multiscale Model. Simul., vol. 4, pp. 1168–1200, 2005. [8] A. Chambolle and T. Pock, “A first-order primal-dual algorithm for convex problems with applications to imaging,” Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120–145, 2011. [9] T. Goldstein, E. Esser, and R. Baraniuk, “Adaptive primal-dual hybrid gradient methods for saddle point problems,” Tech. Report, http://arxiv.org/pdf/1305.0546v1.pdf, pp. 1–26, 2013. [10] B. He and X. Yuan, “On non-ergodic convergence rate of Douglas-Rachford alternating direction method of multipliers,” 2012, (submitted for publication). [11] ——, “On the O(1/n) convergence rate of the Douglas-Rachford alternating direction method,” SIAM J. Numer. Anal., vol. 50, pp. 700–709, 2012. [12] Y. Ouyang, Y. Chen, G. Lan, and E. J. Pasiliao, “An accelerated linearized alternating direction method of multipliers,” Tech. Report, 2014. [13] R. Shefi and M. Teboulle, “Rate of convergence analysis of decomposition methods based on the proximal method of multipliers for convex minimization,” SIAM J. Optim., vol. 24, no. 1, pp. 269–297, 2014. [14] H. Wang and A. Banerjee, “Bregman alternating direction method of multipliers,” Tech. Report, pp. 1–18, 2013. Online at: http://arxiv.org/pdf/1306.3203v1.pdf. [15] H. Ouyang, N. He, L. Q. Tran, and A. Gray, “Stochastic alternating direction method of multipliers,” JMLR W&CP, vol. 28, pp. 80–88, 2013. [16] T. Goldstein, B. O’Donoghue, and S. Setzer, “Fast alternating direction optimization methods,” SIAM J. Imaging Sci., vol. 7, no. 3, pp. 1588–1623, 2014. [17] S.
Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011. [18] A. Beck and M. Teboulle, “A fast dual proximal gradient algorithm for convex minimization and applications,” Oper. Res. Lett., vol. 42, no. 1, pp. 1–6, 2014. [19] W. Deng and W. Yin, “On the global and linear convergence of the generalized alternating direction method of multipliers,” Rice University CAAM, Tech. Rep. TR12-14, 2012. [20] D. P. Bertsekas, Constrained Optimization and Lagrange Multiplier Methods. Athena Scientific, 1996. [21] R. T. Rockafellar, “Augmented Lagrangians and applications of the proximal point algorithm in convex programming,” Mathematics of Operations Research, vol. 1, pp. 97–116, 1976. [22] Y. Nesterov, “Smooth minimization of non-smooth functions,” Math. Program., vol. 103, no. 1, pp. 127–152, 2005. [23] G. Lan and R. Monteiro, “Iteration-complexity of first-order augmented Lagrangian methods for convex programming,” Tech. Report, 2013. [24] V. Nedelcu, I. Necoara, and Q. Tran-Dinh, “Computational complexity of inexact gradient augmented Lagrangian methods: Application to constrained MPC,” SIAM J. Control Optim., vol. 52, no. 5, pp. 3109–3134, 2014. [25] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course. Kluwer Academic Publishers, 2004, vol. 87. [26] F. Facchinei and J.-S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems. New York: Springer-Verlag, 2003, vol. 1–2. [27] A. Auslender, Optimisation: Méthodes Numériques. Paris: Masson, 1976. [28] Q. Tran-Dinh and V. Cevher, “A primal-dual algorithmic framework for constrained convex minimization,” Tech. Report, LIONS, pp. 1–54, 2014. [29] G. Chen and M. Teboulle, “A proximal-based decomposition method for convex minimization problems,” Math. Program., vol. 64, pp. 81–101, 1994. [30] K.-C.
Toh, M. Todd, and R. Tütüncü, “On the implementation and usage of SDPT3 – a Matlab software package for semidefinite-quadratic-linear programming, version 4.0,” NUS Singapore, Tech. Report, 2010. [31] A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm for linear inverse problems,” SIAM J. Imaging Sciences, vol. 2, no. 1, pp. 183–202, 2009. [32] C.-C. Chang and C.-J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology, vol. 2, no. 27, pp. 1–27, 2011.
A Filtering Approach to Stochastic Variational Inference

Neil M.T. Houlsby*
Google Research
Zurich, Switzerland
neilhoulsby@google.com

David M. Blei
Department of Statistics
Department of Computer Science
Columbia University
david.blei@columbia.edu

Abstract

Stochastic variational inference (SVI) uses stochastic optimization to scale up Bayesian computation to massive data. We present an alternative perspective on SVI as approximate parallel coordinate ascent. SVI trades off bias and variance to step close to the unknown true coordinate optimum given by batch variational Bayes (VB). We define a model to automate this process. The model infers the location of the next VB optimum from a sequence of noisy realizations. As a consequence of this construction, we update the variational parameters using Bayes' rule, rather than a hand-crafted optimization schedule. When our model is a Kalman filter this procedure can recover the original SVI algorithm and SVI with adaptive steps. We may also encode additional assumptions in the model, such as heavy-tailed noise. By doing so, our algorithm outperforms the original SVI schedule and a state-of-the-art adaptive SVI algorithm in two diverse domains.

1 Introduction

Stochastic variational inference (SVI) is a powerful method for scaling up Bayesian computation to massive data sets [1]. It has been successfully used in many settings, including topic models [2], probabilistic matrix factorization [3], statistical network analysis [4, 5], and Gaussian processes [6]. SVI uses stochastic optimization to fit a variational distribution, following cheap-to-compute noisy natural gradients that arise from repeatedly subsampling the data. The algorithm follows these gradients with a decreasing step size [7]. One nuisance, as for all stochastic optimization techniques, is setting the step size schedule. In this paper we develop variational filtering, an alternative perspective on stochastic variational inference.
We show that this perspective leads naturally to a tracking algorithm, one based on a Kalman filter, that effectively adapts the step size to the idiosyncrasies of data subsampling. Without any tuning, variational filtering performs as well as or better than the best constant learning rate chosen in retrospect. Further, it outperforms both the original SVI algorithm and SVI with adaptive learning rates [8]. In more detail, variational inference optimizes a high-dimensional variational parameter λ to find a distribution that approximates an intractable posterior. A concept that is important in SVI is the parallel coordinate update. This refers to setting each dimension of λ to its coordinate optimum, but where these coordinates are computed in parallel. We denote the resulting updated parameters λ^VB. With this definition we have a new perspective on SVI. At each iteration it attempts to reach its parallel coordinate update, but one estimated from a randomly sampled data point. (The true coordinate update requires iterating over all of the data.) Specifically, SVI iteratively updates an estimate of λ as follows:

λ_t = (1 − ρ_t) λ_{t−1} + ρ_t λ̂_t, (1)

where λ̂_t is a random variable whose expectation is λ^VB_t and ρ_t is the learning rate. The original paper on SVI points out that this iteration works because λ^VB_t − λ_t is the natural gradient of the variational objective, and so Eq 1 is a noisy gradient update. But we can also see the iteration as a noisy attempt to reach the parallel coordinate optimum λ^VB_t. While λ̂_t is an unbiased estimate of this quantity, we will show that Eq 1 uses a biased estimate, but with reduced variance. This new perspective opens the door to other ways of updating λ_t based on the noisy estimates of λ^VB_t. In particular, we use a Kalman filter to track the progress of λ_t based on the sequence of noisy coordinate updates.

*Work carried out while a member of the University of Cambridge, visiting Princeton University.
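The SVI iterate in Eq 1 is easy to simulate. The following sketch is our own illustration: it uses a Robbins-Monro-style step size ρ_t = (t + τ₀)^{−κ} with illustrative values of τ₀ and κ, and synthetic noisy optima scattered around a fixed target.

```python
import numpy as np

# Sketch of the SVI iterate in Eq 1, lambda_t = (1 - rho_t)*lambda_{t-1}
# + rho_t * lambda_hat_t, with an illustrative step size rho_t = (t + tau0)^(-kappa).
def svi_updates(lam0, noisy_optima, tau0=1.0, kappa=0.7):
    lam = np.asarray(lam0, dtype=float)
    for t, lam_hat in enumerate(noisy_optima, start=1):
        rho = (t + tau0) ** (-kappa)
        lam = (1.0 - rho) * lam + rho * np.asarray(lam_hat)
    return lam

rng = np.random.default_rng(0)
target = np.array([2.0, -1.0])  # stand-in for a fixed VB optimum
obs = [target + 0.1 * rng.standard_normal(2) for _ in range(500)]
lam = svi_updates(np.zeros(2), obs)
```

Because the step sizes decay, the iterate averages away the observation noise while the bias toward the initial point shrinks to zero, which is exactly the bias/variance trade-off discussed next.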
This gives us a ‘meta-model’ of the optimal parameter, which we then estimate through efficient inference. We show that one setting of the Kalman filter corresponds to SVI; another corresponds to SVI with adaptive learning rates; and others, like using a t-distribution in place of a Gaussian, account for noise better than any previous method.

2 Variational Filtering

We first introduce stochastic variational inference (SVI) as approximate parallel coordinate ascent. We use this view to present variational filtering, a model-based approach to variational optimization that observes noisy parallel coordinate optima and seeks to infer the true VB optimum. We instantiate this method with a Kalman filter, discuss relationships to other optimization schedules, and extend the model to handle real-world SVI problems.

Stochastic Variational Inference. Given data x_{1:N}, we want to infer the posterior distribution over model parameters θ, p(θ|x_{1:N}). For most interesting models exact inference is intractable and we must use approximations. Variational Bayes (VB) formulates approximate inference as a batch optimization problem. The intractable posterior distribution p(θ|x_{1:N}) is approximated by a simpler distribution q(θ; λ), where λ are the variational parameters of q.¹ These parameters are adjusted to maximize a lower bound on the model evidence (the ELBO):

L(λ) = ∑_{i=1}^N E_q[log p(x_i|θ)] + E_q[log p(θ)] − E_q[log q(θ)]. (2)

Maximizing Eq 2 is equivalent to minimizing the KL divergence between the exact and approximate posteriors, KL[q||p]. Successive optima of the ELBO often have closed form [1], so to maximize Eq 2 VB can perform successive parallel coordinate updates on the elements of λ, λ_{t+1} = λ^VB_t. Unfortunately, the sum over all N datapoints in Eq 2 means that computing λ^VB_t is too expensive on large datasets. SVI avoids this difficulty by sampling a single datapoint (or a mini-batch) and optimizing a cheap, noisy estimate L̂(λ) of the ELBO.
The optimum of L̂(λ) is denoted λ̂_t:

L̂(λ) = N E_q[log p(x_i|θ)] + E_q[log p(θ)] − E_q[log q(θ)], (3)

λ̂ := argmax_λ L̂(λ) = E_q[N log p(x_i|θ) + log p(θ)]. (4)

The constant N in Eq 4 ensures the noisy parallel coordinate optimum is unbiased with respect to the full VB optimum, E[λ̂_t] = λ^VB_t. After computing λ̂_t, SVI updates the parameters using Eq 1. This corresponds to using natural gradients [9] to perform stochastic gradient ascent on the ELBO. We present an alternative perspective on Eq 1. SVI may be viewed as an attempt to reach the true parallel coordinate optimum λ^VB_t using the noisy estimate λ̂_t. The observation λ̂_t is an unbiased estimator of λ^VB_t with variance Var[λ̂_t]. The variance may be large, so SVI makes a bias/variance trade-off to reduce the overall error. The bias and variance in λ_t computed using SVI (Eq 1) are

E[λ_t − λ^VB_t] = (1 − ρ_t)(λ_{t−1} − λ^VB_t),  Var[λ_t] = ρ_t² Var[λ̂_t], (5)

respectively. Decreasing the step size reduces the variance but increases the bias. However, as the algorithm converges, the bias decreases because the VB optima fall closer to the current parameters. Thus, λ_{t−1} − λ^VB_t tends to zero and, as optimization progresses, ρ_t should decay. This reduces the variance given the same level of bias. Indeed, most stochastic optimization schedules decay the step size, including the Robbins-Monro schedule [7] used in SVI. Different schedules yield different bias/variance trade-offs, but the trade-off is heuristic and these schedules often require hand tuning. Instead, we use a model to infer the location of λ^VB_t from the observations, and use Bayes' rule to determine the optimal step size.

¹To readers familiar with stochastic variational inference, we refer to the global variational parameters, assuming that the local parameters are optimized at each iteration. Details can be found in [1].

Probabilistic Filtering for SVI. We described our view of SVI as approximate parallel coordinate ascent.
With this perspective, we can define a model to infer λ^VB_t. We have three sets of variables: λ_t are the current parameters of the approximate posterior q(θ; λ_t); λ^VB_t is a hidden variable corresponding to the VB coordinate update at the current time step; and λ̂_t is an unbiased but noisy observation of λ^VB_t. We specify a model that observes the sequence of noisy coordinate optima λ̂_{1:t}, and we use it to compute a distribution over the full VB update, p(λ^VB_t | λ̂_{1:t}). When making a parallel coordinate update at time t we move to the best estimate of the VB optimum under the model, λ_t = E[λ^VB_t | λ̂_{1:t}]. Using this approach we i) avoid the need to tune the step size, because Bayes' rule determines how the posterior mean moves at each iteration; ii) can use a Kalman filter to recover particular static and adaptive step size algorithms; and iii) can add extra modelling assumptions to vary the step size schedule in useful ways. In variational inference, our ‘target’ is λ^VB_t. It moves because the parameters of the approximate posterior λ_t change as optimization progresses. Therefore, we use a dynamic tracking model, the Kalman filter [10]. We compute the posterior over the next VB optimum given previous observations, p(λ^VB_t | λ̂_{1:t}). In tracking, this is called filtering, so we call our method variational filtering (VF).² At each time t, VF has a current set of model parameters λ_{t−1} and takes these steps:
1. Sample a datapoint x_t.
2. Compute the noisy estimate of the coordinate update λ̂_t using Eq 3.
3. Run Kalman filtering to compute the posterior over the VB optimum, p(λ^VB_t | λ̂_{1:t}).
4. Update the parameters to the posterior mean λ_t = E[λ^VB_t | λ̂_{1:t}] and repeat.
Variational filtering uses the entire history of observations, encoded by the posterior, to infer the location of the VB update. Standard optimization schedules use only the current parameters λ_t to regularize the noisy coordinate update, and these methods require tuning to balance bias and variance in the update.
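The four steps above can be sketched schematically. In this sketch (our own scaffolding, not the authors' code) `data_sampler`, `noisy_coordinate_update`, and `filter_step` are placeholders for the model-specific computations described in the text.

```python
# Schematic sketch of the variational-filtering loop (steps 1-4).
# All three callables are placeholders for model-specific computations:
#   data_sampler()                  -> a datapoint x_t
#   noisy_coordinate_update(lam, x) -> noisy optimum lambda_hat_t (Eq 3)
#   filter_step(lam_hat)            -> posterior mean E[lambda_VB_t | obs so far]
def variational_filter(lam, data_sampler, noisy_coordinate_update,
                       filter_step, T):
    for _ in range(T):
        x_t = data_sampler()                          # 1. sample a datapoint
        lam_hat = noisy_coordinate_update(lam, x_t)   # 2. noisy coordinate optimum
        posterior_mean = filter_step(lam_hat)         # 3. filtering step
        lam = posterior_mean                          # 4. move to posterior mean
    return lam
```

A trivial instantiation, where the filter simply returns its observation, degenerates to following the noisy optima directly; the interesting behaviour comes from the Kalman filter developed in the next section.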
In our setting, Bayes' rule automatically makes this trade-off. To illustrate this perspective we consider a small problem. We fit a variational distribution for latent Dirichlet allocation on a small corpus of 2.5k documents from the ArXiv. For this problem we can compute the full parallel coordinate update, and thus the tracking error ||λ^VB_t − λ_t||²_2 and the observation noise ||λ^VB_t − λ̂_t||²_2, for various algorithms. We emphasize that λ̂_t is unbiased, so the observation noise is entirely due to variance. A reduction in tracking error indicates an advantage to incurring bias for a reduction in variance. We compared variational filtering (Alg. 1) to the original Robbins-Monro schedule used in SVI [1], and to a large constant step size of 0.5. The same sequence of random documents was handed to each algorithm. Figs. 1(a-c) show the tracking error of each algorithm. The large constant step size yields large error due to high variance; see Eq 5. The SVI updates are too small and the bias dominates. Here, the bias is even larger than the variance in the noisy observations during early stages, but it decays as the term (λ_t − λ^VB_{t−1}) in Eq 5 slowly decreases. The variational filter automatically balances bias and variance, yielding the smallest tracking error. As a result of following the VB optima more closely, the variational filter achieves larger values of the ELBO, shown in Fig. 1(d).

3 Kalman Variational Filter

We now detail our Kalman filter for SVI. Then we discuss different settings of the parameters and how to estimate them online. Finally, we extend the filter to handle heavy-tailed noise.

² We do not perform 'smoothing' in our dynamical system because we are not interested in old VB coordinate optima after the parameters have been optimized further.
Figure 1: (a-c) Curves show the error in tracking the VB update. Markers depict the error in the noisy observations λ̂_t relative to the VB update. (d) Evolution of the ELBO computed on the entire dataset.

The Gaussian Kalman filter (KF) is attractive because inference is tractable and, in SVI, computational time is the limiting factor, not the rate of data acquisition. The model is specified as

p(λ^VB_{t+1} | λ^VB_t) = N(λ^VB_t, Q),    p(λ̂_t | λ^VB_t) = N(λ^VB_t, R),    (6)

where R models the variance in the noisy coordinate updates and Q models how far the VB optima move at each iteration. The observation noise has zero mean because the noisy updates are unbiased. We assume no systematic parameter drift, so E[λ^VB_{t+1}] = λ^VB_t. Filtering in this linear-Gaussian model is tractable: given the current posterior p(λ^VB_{t−1} | λ̂_{1:t−1}) = N(μ_{t−1}, Σ_{t−1}) and a noisy coordinate update λ̂_t, the next posterior is computed directly using Gaussian manipulations [11],

p(λ^VB_t | λ̂_{1:t}) = N([1 − P_t] μ_{t−1} + P_t λ̂_t, [1 − P_t][Σ_{t−1} + Q]),    (7)
P_t = [Σ_{t−1} + Q][Σ_{t−1} + Q + R]^{−1}.    (8)

The variable P_t is known as the Kalman gain. Notice that the update to the posterior mean has the same form as the SVI update in Eq 1; the gain P_t is directly equivalent to the SVI step size ρ_t.³

Different modelling choices yield different optimization schedules; we now present some key cases.

Static parameters: If the parameters Q and R are fixed, the step size progression in Eq 7 can be computed a priori as P_{t+1} = [Q/R + P_t][1 + Q/R + P_t]^{−1}. This yields a fixed sequence of decreasing step sizes.
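In the scalar case, one filtering step of Eqs 7-8 reduces to a few lines. This is an illustrative sketch, not the authors' code:

```python
def kalman_step(mu, Sigma, lam_hat, Q, R):
    """One scalar Kalman filtering step for the VB optimum (Eqs 6-8).

    mu, Sigma -- current posterior mean and variance over the VB optimum
    lam_hat   -- noisy coordinate update (the observation)
    Q, R      -- transition and observation noise variances
    """
    P = (Sigma + Q) / (Sigma + Q + R)      # Kalman gain = step size (Eq 8)
    mu_new = (1.0 - P) * mu + P * lam_hat  # posterior mean (Eq 7)
    Sigma_new = (1.0 - P) * (Sigma + Q)    # posterior variance (Eq 7)
    return mu_new, Sigma_new, P
```

With R = 0 the gain is 1 and the filter jumps straight to the observation; with large R the gain shrinks toward 0, damping noisy updates.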
A popular schedule is the Robbins-Monro routine, ρ_t ∝ (t_0 + t)^{−κ}, also used in SVI [1]. If we set Q = 0, the variational filter returns a Robbins-Monro schedule with κ = 1; this corresponds to online estimation of the mean of a Gaussian. This is because Q = 0 assumes that the optimization has converged, and the filter simply averages the noisy updates. In practice, decay rates slower than κ = 1 perform better [2, 8], because updates which were computed using old parameter values are forgotten faster. Setting Q > 0 yields the same reduced memory. In this case, the step size tends to a constant,

lim_{t→∞} P_t = [√(1 + 4R/Q) + 1][√(1 + 4R/Q) + 1 + 2R/Q]^{−1}.

Larger noise-to-signal ratios R/Q result in smaller limiting step sizes. This demonstrates the automatic bias/variance trade-off: if R/Q is large, the variance in the noisy updates Var[λ̂_t] is assumed large, so the filter uses a smaller step size, yielding more bias (Eq 5) but lower overall error. Conversely, if there is no noise, R/Q = 0, then P_∞ = 1 and we recover batch VB.

Parameter estimation: Normally the parameters will not be known a priori. Further, if Q is fixed then the step size does not tend to zero, so the Robbins-Monro criteria do not hold [7]. We can address both issues by estimating Q and R online. The parameter R models the variance in the noisy optima, and Q measures how near the process is to convergence. These parameters are unknown and will change as the optimization progresses: Q will decrease as convergence is approached; R may decrease or increase. In our demonstration in Fig. 1, it increases during early iterations and then plateaus. Therefore we estimate these parameters online, similar to [8, 12]. The desired parameter values are

R = E[||λ̂_t − λ^VB_t||²_2] = E[||λ̂_t − λ^VB_{t−1}||²_2] − ||λ^VB_t − λ^VB_{t−1}||²_2,    (9)
Q = ||λ^VB_t − λ^VB_{t−1}||²_2.    (10)

³ In general, P_t is a full-rank matrix update. For simplicity, and to compare to scalar learning rates, we present the 1D case.
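The fixed-Q, R schedule described above can be checked numerically; a scalar sketch under the stated assumptions:

```python
import math

def gain_schedule(Q, R, P0=1.0, T=200):
    """Iterate P_{t+1} = (Q/R + P_t) / (1 + Q/R + P_t) for fixed Q, R."""
    P, out = P0, []
    for _ in range(T):
        P = (Q / R + P) / (1.0 + Q / R + P)
        out.append(P)
    return out

def limiting_gain(Q, R):
    """Closed-form limit of the gain for Q > 0 (see the text)."""
    s = math.sqrt(1.0 + 4.0 * R / Q) + 1.0
    return s / (s + 2.0 * R / Q)
```

With Q = 0 the recursion reproduces the Robbins-Monro 1/t decay; with Q > 0 it converges to `limiting_gain(Q, R)`.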
The multi-dimensional generalization is straightforward.

Figure 2: Step sizes learned by the Gaussian Kalman filter, the Student's t filter (Alg. 1) and the adaptive learning rate in [8], on non-stationary ArXiv data. The adaptive algorithms react to the dataset shift by increasing the step size. The variational filters react even faster than adaptive-SVI because not only do Q and R adjust, but the posterior variance increases at the shift, which further augments the next step size.

We estimate these using exponentially weighted moving averages. To estimate the two terms in Eq 9, we estimate the expected difference between the current state and the observation, g_t = E[λ̂_t − λ^VB_{t−1}], and the norm of this difference, h_t = E[||λ̂_t − λ^VB_{t−1}||²_2], using

g_t = (1 − τ_t^{−1}) g_{t−1} + τ_t^{−1} (λ̂_t − μ_{t−1}),    h_t = (1 − τ_t^{−1}) h_{t−1} + τ_t^{−1} ||λ̂_t − μ_{t−1}||²_2,    (11)

where τ is the window length and μ_{t−1} is the current posterior mean. The parameters are estimated as R = h_t − ||g_t||²_2 and Q = ||g_t||²_2. After filtering, the window length is adjusted to τ_{t+1} = (1 − P_t)τ_t + 1; larger steps result in a shorter memory of old parameter values. Joint parameter and state estimation can be poorly determined. Initializing the parameters to appropriate values with Monte Carlo sampling, as in [8], mitigates this issue. In our experiments we avoid this underspecification by tying the filtering parameters across the filters for each variational parameter.

The variational filter with parameter estimation recovers an automatic step size similar to the adaptive-SVI algorithm in [8]. Their step size is equivalent to ρ_t = Q/[Q + R], while variational filtering uses P_t = [Σ_{t−1} + Q]/[Σ_{t−1} + Q + R] (Eq 7). If the posterior variance Σ_{t−1} is zero, the updates are identical; if Σ_{t−1} is large, as in early time steps, the filter produces a larger step size. Fig.
2 demonstrates how these methods react to non-stationary data. LDA was run on ArXiv abstracts whose category changed every 5k documents. Variational filtering and adaptive-SVI react to the shift by increasing the step size; the ELBO is similar for both methods.

Student's t filter: In SVI, the noisy estimates λ̂_t are often heavy-tailed. For example, in matrix factorization, heavy-tailed parameter distributions [13] produce heavy-tailed noisy updates. Empirically, we observe similar heavy tails in LDA. Heavy tails may also arise from computing Euclidean distances between parameter vectors rather than using the more natural Fisher information metric [9]. We add robustness to these sources of noise with a heavy-tailed Kalman filter. We use a t-distributed noise model, p(λ̂_t | λ^VB_t) = T(λ^VB_t, R, δ), where T(m, V, d) denotes a t-distribution with mean m, covariance V and d degrees of freedom. For computational convenience we also use a t-distributed transition model, p(λ^VB_{t+1} | λ^VB_t) = T(λ^VB_t, Q, γ). If the current posterior is t-distributed, p(λ^VB_t | λ̂_{1:t}) = T(μ_t, Σ_t, η_t), and the degrees of freedom are identical, η_t = γ = δ, then filtering has a closed form,

p(λ^VB_t | λ̂_{1:t}) = T( (1 − P_t)μ_{t−1} + P_t λ̂_t,  [(η_{t−1} + Δ²)/(η_{t−1} + ||λ||_0)] (1 − P_t)[Σ_{t−1} + Q],  η_{t−1} + ||λ||_0 ),    (12)

where

P_t = (Σ_{t−1} + Q)/(Σ_{t−1} + Q + R),    Δ² = ||λ̂_t − μ_{t−1}||²_2 / (Σ_{t−1} + Q + R).    (13)

The update to the mean is the same as in the Gaussian KF. The crucial difference is in the update to the variance in Eq 12: if an outlier λ̂_t arrives, then Δ², and hence Σ_t, are augmented. The increased posterior uncertainty at time t + 1 yields an increased gain P_{t+1}, which allows the filter to react quickly to a large perturbation. The t-filter differs fundamentally from the Gaussian KF in that the step size is now a direct function of the observations; in the Gaussian KF the dependency is indirect, through the estimation of R and Q. Eq 12 has a closed form because the degrees of freedom are equal.
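Under that equal-degrees-of-freedom assumption, one filtering step of Eqs 12-13 can be sketched in the scalar case (d = ||λ||_0 = 1; illustrative only):

```python
def t_filter_step(mu, Sigma, eta, lam_hat, Q, R, d=1):
    """One Student's t filtering step (Eqs 12-13) with matched d.o.f."""
    P = (Sigma + Q) / (Sigma + Q + R)                # gain (Eq 13)
    delta2 = (lam_hat - mu) ** 2 / (Sigma + Q + R)   # scaled squared residual (Eq 13)
    mu_new = (1.0 - P) * mu + P * lam_hat            # mean update, as in the Gaussian KF
    Sigma_new = (eta + delta2) / (eta + d) * (1.0 - P) * (Sigma + Q)  # Eq 12
    return mu_new, Sigma_new, eta + d                # d.o.f. grow by the dimension
```

An outlying observation inflates `delta2` and hence the posterior variance, which raises the next gain.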
Unfortunately, this will not generally be the case, because the posterior degrees of freedom grow, so we require an approximation. Following [14], we approximate the 'incompatible' t-distributions by adjusting their degrees of freedom to be equal. We choose all of these to equal η̃_t = min(η_t, γ, δ). We match the degrees of freedom in this way because it prevents the posterior degrees of freedom from growing over time. If η_t in Eq 12 were allowed to grow large, the t-distributed filter would revert to a Gaussian KF. This is undesirable because the heavy-tailed noise does not necessarily disappear at convergence. To account for adjusting the degrees of freedom, we moment match the old and new t-distributions. This has a closed form; to match the second moments of T(m, Σ̃, η̃) to T(m, Σ, η), the variance is set to Σ̃ = [η(η̃ − 2)] / [(η − 2)η̃] Σ. This results in tractable filtering and has the same computational cost as Gaussian filtering. The routine is summarized in Algorithm 1.

Algorithm 1 Variational filtering with Student's t-distributed noise
1: procedure FILTER(data x_{1:N})
2:   Initialize filtering distribution Σ_0, μ_0, η_0, see § 5
3:   Initialize statistics g_0, h_0, τ_0 with Monte Carlo sampling
4:   Set initial variational parameters λ_0 ← μ_0
5:   for t = 1, …, T do
6:     Sample a datapoint x_t                                   ▷ Or a mini-batch of data.
7:     λ̂_t ← f(λ_t, x_t), f given by Eq 4                      ▷ Noisy estimate of the coordinate optimum.
8:     Compute g_t and h_t using Eq 11.
9:     R ← h_t − ||g_t||²_2, Q ← ||g_t||²_2                     ▷ Update parameters of the filter.
10:    η̃_{t−1} ← min(η_{t−1}, γ, δ)                            ▷ Match degrees of freedom.
11:    Σ̃_{t−1} ← η_{t−1}(η̃_{t−1} − 2)[(η_{t−1} − 2)η̃_{t−1}]^{−1} Σ_{t−1}, similarly for R̃, Q̃   ▷ Moment match.
12:    P_t ← [Σ̃_{t−1} + Q̃][Σ̃_{t−1} + Q̃ + R̃]^{−1}            ▷ Compute gain, or step size.
13:    Δ² ← ||λ̂_t − μ_{t−1}||²_2 [Σ̃_{t−1} + Q̃ + R̃]^{−1}
14:    μ_t ← [I − P_t]μ_{t−1} + P_t λ̂_t                        ▷ Update filter posterior.
15:    Σ_t ← [(η̃_{t−1} + Δ²)/(η̃_{t−1} + ||λ||_0)] [I − P_t][Σ̃_{t−1} + Q̃], η_t ← η_{t−1} + 1
16:    λ_t ← μ_t                                                ▷ Update the variational parameters of q.
17:   end for
18:   return λ_T
19: end procedure

4 Related Work

Stochastic and streamed VB: SVI performs fast inference on a fixed dataset of known size N. Online VB algorithms process an infinite stream of data [15, 16], but these methods cannot use a re-sampled datapoint. Variational filtering falls between both camps. The noisy observations require an estimate of N; however, Kalman filtering does not try to optimize a static dataset like a fixed Robbins-Monro schedule. As observed in Fig. 2, the algorithm can adapt to a regime change and forgets the old data. The filter simply tries to move to the VB coordinate update at each step, and is not directly concerned with asymptotic convergence on a static dataset.

Kalman filters for parameter learning: Kalman filters have been used to learn neural network parameters; extended Kalman filters, in particular, have been used to train supervised networks [17, 18, 19]. The network weights evolve because of data non-stationarity. This problem differs fundamentally from SVI. In the neural network setting, the observations are the fixed data labels, but in SVI the observations are noisy realizations of a moving VB parallel coordinate optimum. If the VF draws the same datapoint twice, the observations λ̂ will still change because λ_t will have changed; in the work with neural nets, the same datapoint always yields the same observation for the filter.

Adaptive learning rates: Automatic step size schedules have been proposed for online estimation of the mean of a Gaussian [20] and for drifting parameters [21]; the latter work uses a Gaussian KF for parameter estimation in approximate dynamic programming. Automatic step sizes are derived for stochastic gradient descent in [12] and for SVI in [8]. These methods set the step size to minimize the expected update error. Our work is the first Bayesian approach to learning the SVI schedule.

Meta-modelling: Variational filtering is a 'meta-model', a model that assists the training of a more complex method.
Such models are becoming increasingly popular; examples include Kalman filters for training neural networks [17], Gaussian process optimization for hyperparameter search [22], and Gaussian process regression to construct Bayesian quasi-Newton methods [23].

Figure 3: Final performance achieved by each algorithm on the two problems. Stars indicate the best performing non-oracle algorithm and those statistically indistinguishable at p = 0.05. (a-c) LDA: Value of the ELBO after observing 0.5M documents. (d-f) BMF: recall@10 after observing 2 · 10⁸ cells.

5 Empirical Case Studies

We tested variational filtering on two diverse problems: topic modelling with Latent Dirichlet Allocation (LDA) [24], a popular testbed for scalable inference routines, and binary matrix factorization (BMF). Variational filtering outperforms Robbins-Monro SVI and a state-of-the-art adaptive method [8] in both domains. The Student's t filter performs substantially better than the Gaussian KF and is competitive with an oracle that picks the best constant step size with hindsight.

Models: We used 100 topics in LDA and set the Dirichlet hyperparameters to 0.5.
This value is slightly larger than usual because it helps the stochastic routines escape local minima early on. For BMF we used a logistic matrix factorization model with a Gaussian variational posterior over the latent matrices [3]. This task differs from LDA in two ways: the variational parameters are Gaussian, and we sample single cells from the matrix to form stochastic updates. We used minibatches of 100 documents in LDA, and 5 times the number of rows in BMF.

Datasets: We trained LDA on three large document corpora: 630k abstracts from the ArXiv, 1.73M New York Times articles, and Wikipedia, which has ≈4M articles. For BMF we used three recommendation matrices: clickstream data from the Kosarak news portal; click data from an e-commerce website, BMS-WebView-2 [25]; and the Netflix data, treating 4-5 star ratings as ones. Following [3] we kept the 1000 items with the most ones and sampled up to 40k users.

Algorithms: We ran our Student's t variational filter in Algorithm 1 (TVF) and the Gaussian version in § 3 (GVF). The variational parameters were initialized randomly in LDA and with an SVD-based routine [26] in BMF. The prior variance was set to Σ_0 = 10³ and the t-distribution's degrees of freedom to η_0 = 3, which gives the heaviest tails with a finite variance for moment matching. In general, VF can learn full-rank matrix step sizes. LDA and BMF, however, have many parameters, so we used the simplest setting of VF, in which a single step size is learned for all of them; that is, Q and R are constrained to be proportional to the identity matrix. This choice reduces the cost of VF from O(N³) to O(N). Empirically, this computational overhead was negligible.
Also, it allows us to aggregate statistics across the variational parameters, yielding more robust estimates. Finally, we can directly compare our Bayesian adaptive rate to the single adaptive rate in [8].

Figure 4: Example learning curves of (a) the ELBO (plot smoothed with Lowess' method) and (b) recall@10, on the LDA and BMF problems, respectively.

We compared to the SVI schedule proposed in [1]. This is a Robbins-Monro schedule, ρ_t = (t_0 + t)^{−κ}; we used κ = 0.7, t_0 = 1000 for LDA, as these performed well in [1, 2, 8], and κ = 0.7, t_0 = 0 for BMF, as in [3]. We also compared to the adaptive-SVI routine in [8]. Finally, we used an oracle method that picked the constant learning rate, from a grid of rates 10^{−k}, k ∈ {1, …, 5}, that gave the best final performance. In BMF, the Robbins-Monro SVI schedule learns a different rate for each row and column; all other methods computed a single rate.

Evaluation: In LDA, we evaluated the algorithms using the per-word ELBO, estimated on random sets of held-out documents. Each algorithm was given 0.5M documents and the final ELBO was averaged over the final 10% of the iterations. We computed statistical significance between the algorithms with a t-test on these noisy estimates of the ELBO. Our BMF datasets are from item recommendation problems, for which recall is a popular metric [27]. We computed recall at N by removing a single one from each row during training. We then ranked the zeros by their posterior probability of being a one and computed the fraction of the rows in which the held-out one was in the top N. We used a budget of 2 · 10⁸ observations and computed statistical significance over 8 repeats of the experiment, including the random train/test split.
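The recall-at-N computation just described can be sketched as follows; all names are illustrative, and `train_mask[r, i]` is assumed True where the training matrix has a one:

```python
import numpy as np

def recall_at_n(prob, train_mask, held_out, N=10):
    """Fraction of rows whose held-out one ranks in the top N zeros."""
    hits = 0
    for r in range(prob.shape[0]):
        zeros = np.where(~train_mask[r])[0]            # candidate items (zeros in training)
        top = zeros[np.argsort(-prob[r, zeros])][:N]   # highest posterior prob. of a one
        hits += int(held_out[r] in top)                # was the removed one recovered?
    return hits / prob.shape[0]
```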
Results: The final performance levels on both tasks are plotted in Fig. 3. These plots show that over the six datasets and two tasks the Student's t variational filter is the strongest non-oracle method. SVI [1] and Adapt-SVI [8] come close on LDA, which they were originally used for, but on the WebView and Kosarak binary matrices they yield a substantially lower recall. In terms of the ELBO in BMF (not plotted), TVF was the best non-oracle method on WebView and Kosarak, and SVI was best on Netflix, with TVF second best. The Gaussian Kalman filter worked less well: it produced high learning rates due to the inaccurate Gaussian noise assumption. The t-distributed filter appears to be robust to highly non-Gaussian noise. It was even competitive with the oracle method (2 wins, 2 draws, 1 loss). Note that the oracle picked the best final performance at time T, but at t < T the variational filter converged faster, particularly in LDA. Fig. 4(a) shows example learning curves on the ArXiv data. Although the oracle just outperforms TVF at 0.5M documents, TVF converged much faster. Fig. 4(b) shows example learning curves in BMF on the WebView data. This figure shows that most of the BMF routines converge within the budget. Again, TVF not only reached the best solution, but also converged fastest.

Conclusions

We have presented a new perspective on SVI as approximate parallel coordinate descent. With our model-based approach to this problem, we shift the requirement from hand-tuning optimization schedules to constructing an appropriate tracking model. This approach allows us to derive a new algorithm for robust SVI that uses a model with Student's t-distributed noise. This Student's t variational filtering algorithm performed strongly on two domains with completely different variational distributions. Variational filtering is a promising new direction for SVI.

Acknowledgements

NMTH is grateful to the Google European Doctoral Fellowship scheme for funding this research.
DMB is supported by NSF CAREER NSF IIS-0745520, NSF BIGDATA NSF IIS-1247664, NSF NEURO NSF IIS-1009542, ONR N00014-11-1-0651 and DARPA FA8750-142-0009. We thank James McInerney, Alp Kucukelbir, Stephan Mandt, Rajesh Ranganath, Maxim Rabinovich, David Duvenaud, Thang Bui and the anonymous reviewers for insightful feedback.

References

[1] M.D. Hoffman, D.M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 14:1303–1347, 2013.
[2] M.D. Hoffman, D.M. Blei, and F. Bach. Online learning for latent Dirichlet allocation. NIPS, 23:856–864, 2010.
[3] J.M. Hernandez-Lobato, N.M.T. Houlsby, and Z. Ghahramani. Stochastic inference for scalable probabilistic modeling of binary matrices. ICML, 2014.
[4] P.K. Gopalan and D.M. Blei. Efficient discovery of overlapping communities in massive networks. PNAS, 110(36):14534–14539, 2013.
[5] J. Yin, Q. Ho, and E. Xing. A scalable approach to probabilistic latent space inference of large-scale networks. In NIPS, pages 422–430, 2013.
[6] J. Hensman, N. Fusi, and N.D. Lawrence. Gaussian processes for big data. CoRR, abs/1309.6835, 2013.
[7] H. Robbins and S. Monro. A stochastic approximation method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[8] R. Ranganath, C. Wang, D.M. Blei, and E.P. Xing. An adaptive learning rate for stochastic variational inference. In ICML, pages 298–306, 2013.
[9] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998.
[10] R.E. Kalman. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82(1):35–45, 1960.
[11] S. Roweis and Z. Ghahramani. A unifying review of linear Gaussian models. Neural Computation, 11(2):305–345, 1999.
[12] T. Schaul, S. Zhang, and Y. LeCun. No more pesky learning rates. In ICML, 2013.
[13] B. Lakshminarayanan, G. Bouchard, and C. Archambeau. Robust Bayesian matrix factorisation. In AISTATS, pages 425–433, 2011.
[14] M. Roth, E. Ozkan, and F. Gustafsson.
A Student's t filter for heavy tailed process and measurement noise. In ICASSP, pages 5770–5774. IEEE, 2013.
[15] T. Broderick, N. Boyd, A. Wibisono, A.C. Wilson, and M.I. Jordan. Streaming variational Bayes. In NIPS, pages 1727–1735, 2013.
[16] Z. Ghahramani and H. Attias. Online variational Bayesian learning. Slides from talk presented at the NIPS Workshop on Online Learning, 2000.
[17] J.F.G. de Freitas, M. Niranjan, and A.H. Gee. Hierarchical Bayesian models for regularization in sequential learning. Neural Computation, 12(4):933–953, 2000.
[18] S.S. Haykin. Kalman Filtering and Neural Networks. Wiley Online Library, 2001.
[19] E. Capobianco. Robust control methods for on-line statistical learning. EURASIP Journal on Advances in Signal Processing, (2):121–127, 2001.
[20] Y.T. Chien and K. Fu. On Bayesian learning and stochastic approximation. IEEE Transactions on Systems Science and Cybernetics, 3(1):28–38, 1967.
[21] A.P. George and W.B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167–198, 2006.
[22] J. Snoek, H. Larochelle, and R.P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, pages 2960–2968, 2012.
[23] P. Hennig and M. Kiefel. Quasi-Newton methods: A new direction. JMLR, 14(1):843–865, 2013.
[24] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. JMLR, 3:993–1022, 2003.
[25] R. Kohavi, C.E. Brodley, B. Frasca, L. Mason, and Z. Zheng. KDD-Cup 2000 organizers' report: Peeling the onion. ACM SIGKDD Explorations Newsletter, 2(2):86–93, 2000.
[26] S. Nakajima, M. Sugiyama, and R. Tomioka. Global analytic solution for variational Bayesian matrix factorization. NIPS, 23:1759–1767, 2010.
[27] A. Gunawardana and G. Shani. A survey of accuracy evaluation metrics of recommendation tasks. JMLR, 10:2935–2962, 2009.
Attentional Neural Network: Feature Selection Using Cognitive Feedback

Qian Wang, Department of Biomedical Engineering, Tsinghua University, Beijing, China 100084, qianwang.thu@gmail.com
Jiaxing Zhang, Microsoft Research Asia, 5 Danning Road, Haidian District, Beijing, China 100080, jiaxz@microsoft.com
Sen Song∗, Department of Biomedical Engineering, Tsinghua University, Beijing, China 100084, sen.song@gmail.com
Zheng Zhang∗†, Department of Computer Science, NYU Shanghai, 1555 Century Ave, Pudong, Shanghai, China 200122, zz@nyu.edu

Abstract

Attentional Neural Network is a new framework that integrates top-down cognitive bias and bottom-up feature extraction in one coherent architecture. The top-down influence is especially effective when dealing with high noise or difficult segmentation problems. Our system is modular and extensible. It is also easy to train and cheap to run, and yet can accommodate complex behaviors. We obtain classification accuracy better than or competitive with state-of-the-art results on the MNIST variation dataset, and successfully disentangle overlaid digits with high success rates. We view such a general-purpose framework as an essential foundation for a larger system emulating the cognitive abilities of the whole brain.

1 Introduction

How our visual system achieves robust performance against corruptions is a mystery. Although its performance may degrade, it is capable of performing denoising and segmentation tasks of different levels of difficulty using the same underlying architecture. Consider the first two examples in Figure 1. Digits overlaid over random images are harder to recognize than those over random noise, since pixels in the background images are structured and highly correlated. It is even more challenging if two digits are overlaid altogether, in a benchmark that we call MNIST-2. Yet, with different levels of effort (and error rates), we are able to recognize these digits in all three cases.
Figure 1: Handwritten digits with different corruptions. From left to right: random background noise, random background images, and MNIST-2.

∗These authors supervised the project jointly and are co-corresponding authors.
†Work partially done while at Microsoft Research Asia.

Another interesting property of the human visual system is that recognition is fast at low noise levels but takes longer for cluttered scenes. Testers perform well on recognition tasks even when the exposure duration is short enough to allow only one feed-forward pass [18], while finding the target in cluttered scenes requires more time [4]. This evidence suggests that our visual system is simultaneously optimized for the common case and over-engineered for the worst. One hypothesis is that, when challenged with high noise, top-down "explanations" propagate downwards via feedback connections and modulate lower-level features in an iterative refinement process [19]. Inspired by these intuitions, we propose a framework called attentional neural network (aNN). aNN is composed of a collection of simple modules. The denoising module performs multiplicative feature selection controlled by a top-down cognitive bias, and returns a modified input. The classification module receives inputs from the denoising module and generates assignments. If necessary, multiple proposals can be evaluated and compared to pick the final winner. Although the modules are simple, their combined behaviors can be complex, and new algorithms can be plugged in to rewire the behavior, e.g., a fast pathway for low noise, and an iterative mode for complex problems such as MNIST-2. We have validated the performance of aNN on the MNIST variation dataset. We obtained accuracy better than or competitive with the state of the art. On the challenging benchmark of MNIST-2, we are able to predict one digit or both digits correctly more than 95% and 44% of the time, respectively. aNN is easy to train and cheap to run.
All the modules are trained with known techniques (e.g., sparse RBM and back propagation), and inference takes far fewer rounds of iterations than existing proposals based on generative models.

2 Model

aNN deals with two related issues: 1) constructing a segmentation module under the influence of cognitive bias, and 2) its application to the challenging task of classifying highly corrupted data. We describe them in turn, and conclude with a brief description of training methodologies.

2.1 Segmentation with cognitive bias

Figure 2: Segmentation module with cognitive bias (a) and classification based on it (b, c).

As illustrated in Figure 2(a), the objective of the segmentation module M is to segment out an object y belonging to one of N classes in the noisy input image x. Unlike in traditional denoising models such as autoencoders, M is given a cognitive bias vector b ∈ {0, 1}^N, whose i-th element indicates a prior belief on the existence of objects belonging to the i-th class in the noisy image. During the bottom-up pass, the input image x is mapped into a feature vector h = σ(W · x), where W is the feature weight matrix and σ represents the element-wise nonlinear sigmoid function. During the top-down pass, b generates a gating vector g = σ(U · b) with the feedback weights U. g selects and de-selects the features by modifying the hidden activation, h_g = h ⊙ g, where ⊙ denotes element-wise multiplication. Reconstruction occurs from h_g by z = σ(W′ · h_g). In general, the bias b can be a probability distribution indicating a mixture of several guesses, but in this paper we use only two simpler scenarios: a binary vector to indicate whether there is a particular object, with its associated weights U_G, or a group bias b_G with equal values for all objects, which indicates the presence of some object in general.
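A minimal sketch of one bottom-up/top-down pass through M, assuming tied reconstruction weights W′ = Wᵀ (an RBM-style assumption; the paper's W′ may be separate) and pre-trained W and U:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def segment(x, b, W, U):
    """One pass of the segmentation module M (Figure 2a)."""
    h = sigmoid(W @ x)     # bottom-up feature activations
    g = sigmoid(U @ b)     # top-down gate from the cognitive bias b
    hg = h * g             # multiplicative feature selection
    z = sigmoid(W.T @ hg)  # reconstruction (tied weights assumed here)
    return z
```

Setting b to a one-hot vector biases the reconstruction toward that class's features; a uniform b acts as the group bias.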
2.2 Classification

A simple strategy would be to feed the segmented input y into a classifier C. However, this suffers from the loss of details during M's reconstruction and is prone to hallucinations, i.e., y transforming into a wrong digit when given a wrong bias. We opted to use the reconstruction y to gate the raw image x with a threshold ϵ, producing the gated image z = (y > ϵ) ⊙ x for classification (Figure 2b). To segment complex images, we explored an iterative design that is reminiscent of a recurrent network (Figure 2c). At time step t, the input to the segmentation module M is z_t = (y_{t−1} > ϵ) ⊙ x, and the result y_t is used for the next iteration. Consulting the raw input x each time prevents hallucination. Alternatively, we could feed the intermediate representation h_g to the classifier, and such a strategy gives reasonable performance (see Section 3.2, group bias subsection), but in general this suffers from a loss of modularity. For iterative classification, we can give the system an initial cognitive bias, and the system produces a series of guesses b and classification results given by C. If the guess b is confirmed by the output of C, then we consider b as a candidate for the final classification result. A wrong bias b will lead the network to transform x to a different class, but the segmented image with the correct bias is often still better than transformed images under a wrong bias. In the simplest version, we can give initial biases b over all classes and compare the fitness of the candidates. Such fitness metrics can be the number of iterations it takes C to confirm the guess, the confidence of the confirmation, or a combination of many related factors. For simplicity, we use the entropy of the outputs of C, but more sophisticated extensions are possible (see Section 3.2, making it scalable subsection).

2.3 Training the model

We used a shallow network of RBMs for the generative model; autoencoders gave qualitatively similar results.
The parameters to be learned include the feature weights W and the feedback weights U. The multiplicative nature of feature selection makes learning W and U simultaneously problematic, and we overcame this problem with a two-step procedure: first, W is trained with noisy data in a standalone RBM (i.e. with the feedback disabled); next, we fix W and learn U with the noisy data as input but the clean data as target, using the standard back propagation procedure. This forces U to learn to select relevant features and de-select distractors. We find it helpful to use different noise levels in these two stages. In the results presented below, training W and U uses half and full noise intensity, respectively. In practice, this simple strategy is surprisingly effective (see Section 3). We found it important to use a sparsity constraint when learning W so as to produce local features. Global features (e.g. templates) tend to be activated by noise and data alike, and tend to be de-selected by the feedback weights. We speculate that feature locality might be especially important when compositionality and segmentation are considered. Jointly training the features and the classifier is a tantalizing idea but proves difficult in practice, as the procedure is iterative and the feedback weights need to be handled. Attempts could be made in this direction in the future to fine-tune performance for a particular task. Another hyper-parameter is the threshold ϵ. We assume that there is a global minimum, and used binary search on a small validation set. 3 Results and Analysis We used the MNIST variation datasets and MNIST-2 to evaluate the effectiveness of our framework. MNIST-2 is composed by overlaying two randomly chosen clean MNIST digits. Unless otherwise stated, we used an off-the-shelf classifier: a 3-layer perceptron with 256 hidden nodes, trained on clean MNIST data with a 1.6% error rate.
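The second stage of the two-step procedure (backpropagation on U with W frozen, noisy input, clean target) can be sketched as a single squared-error gradient step. The RBM pre-training of W is abstracted away; shapes, learning rate, and the squared-error loss are illustrative assumptions.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_U_step(W, W_prime, U, x_noisy, x_clean, b, lr=0.1):
    """One backprop step on the feedback weights U, with W and W_prime frozen.

    Forward: h = σ(W x), g = σ(U b), z = σ(W' (h ⊙ g)); loss = ||z − x_clean||².
    Only the gradient w.r.t. U is applied, matching the two-step procedure.
    """
    h = sigmoid(W @ x_noisy)
    g = sigmoid(U @ b)
    hg = h * g
    z = sigmoid(W_prime @ hg)
    # chain rule through the sigmoid at each layer
    dz = 2.0 * (z - x_clean) * z * (1.0 - z)   # dL/d(pre-activation of z)
    dhg = W_prime.T @ dz                       # dL/d(hg)
    dg = dhg * h * g * (1.0 - g)               # dL/d(pre-activation of g)
    return U - lr * np.outer(dg, b)            # gradient step on U only

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(20, 50))
W_prime = rng.normal(scale=0.1, size=(50, 20))
U = np.zeros((20, 10))
x_clean = rng.random(50)
x_noisy = np.clip(x_clean + 0.3 * rng.random(50), 0.0, 1.0)
b = np.zeros(10); b[4] = 1.0
U_new = train_U_step(W, W_prime, U, x_noisy, x_clean, b)
```

Since b is one-hot here, only the column of U tied to the biased class receives a gradient, which is how U comes to associate each bias with its relevant features.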
In the following sections, we discuss bias-induced feature selection and its application to denoising, segmentation and finally classification. (The training and testing code can be found at https://github.com/qianwangthu/feedback-nips2014-wq.git.) 3.1 Effectiveness of feedback Figure 3: The effectiveness of bias-controlled feature selection. (a) top features selected by different cognitive biases (0, 1, 2, 8) and their accumulation; (b) denoising without bias, with group bias, with correct bias and with wrong bias (b = 1); (c) how bias selects and de-selects features; the second and third rows correspond to the correct and wrong bias, respectively. If feature selection is sensitive to the cognitive bias b, then a given b should lead to the activation of the corresponding relevant features. In Figure 3(a), we sorted the hidden units by their associated weights in U for a given bias from the set {0, 1, 2, 8}, and inspected their associated feature weights in W. The top features, when superimposed, successfully compose a crude version of the target digit. Since b controls feature selection, it can lead to effective segmentation (shown in Figure 3(b)). By comparing the reconstruction results in the second row (without bias) with those in the third and fourth rows (with group bias and correct bias, respectively), it is clear that segmentation quality progressively improves. On the other hand, a wrong bias (fifth row) will try to select features in its favor in two ways: selecting features shared with the correct bias, and hallucinating incorrect features by segmenting them from the background noise. Figure 3(c) goes further to reveal how feature selection works. The first row shows features for one noisy input, sorted by their activity levels without the bias. The next three rows show their deactivation by the cognitive biases.
The last column shows a reconstructed image using the selected features in this figure. It is clear how a wrong bias fails to produce a reasonable reconstructed image. Figure 4: Recurrent segmentation examples over six iterations. In each iteration, the classification result is shown under the reconstructed image, along with the confidence (red bar; the longer, the higher the confidence). As described in Section 2, segmentation might take multiple iterations, and each iteration produces a reconstruction that can be processed by an off-the-shelf classifier. Figure 4 shows two cases, with associated predictions generated by the 3-layer MLP. In the first example (Figure 4(a)), two cognitive bias guesses, 2 and 7, are confirmed by the network, and the correct guess 2 has greater confidence. The second example (Figure 4(b)) illustrates that, under a high-intensity background, transformations can happen and a hallucinated digit can be “built” from a patch of a high-intensity region, since such patches can indiscriminately activate features. Such transformations constitute false positives (i.e. confirming a wrong guess) and pose challenges to classification. More complicated strategies such as local contrast normalization could be used in the future to deal with such cases. This phenomenon is not at all uncommon in everyday life: when truth is flooded with heavy noise, all interpretations are possible, and each one picks evidence in its favor while ignoring the rest. As described in Section 2, we used an entropy confidence metric to select the winner from the candidates.
The MLP classifier C produces a predicted score for the likelihood of each class, and we take the total confidence as the entropy of the prediction distribution, normalized by its class average obtained under clean data. This confidence metric, as well as the associated classification result, is displayed under each reconstruction in Figure 4. The first example shows that confidence under the right guess (i.e. 2) is higher. On the other hand, the second example shows that, under high noise, the confidences of many guesses are equally poor. Furthermore, more iterations often lead to higher confidence, regardless of whether the guess is correct or not. This self-fulfilling process locks predictions to their given biases, instead of differentiating them, which is also a familiar scenario. 3.2 Classification

Table 1: Classification performance (error rates, %)

Method         back-rand   back-image
RBM            11.39       15.42
imRBM          10.46       16.35
discRBM        10.29       15.56
DBN-3           6.73       16.31
CAE-2          10.90       15.50
PGBM            6.08       12.25
sDBN            4.48       14.34
aNN - θrand     3.22       22.30
aNN - θimage    6.09       15.33

Figure 5: (a) error rate vs. background level. (b) error rate vs. iteration number. To compare with previous results, we used the standard training/testing split (12K/50K) of the MNIST variation set; results are shown in Table 1. We ran one-iteration denoising, and then picked the winner by comparing normalized entropies among the candidates, i.e. those with biases matching the prediction of the 3-layer MLP classifier. We trained two parameter sets separately on the random-noise background (θrand) and image background (θimage) datasets. To test transfer abilities, we also applied θimage to random-noise background data and θrand to image background data.
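The entropy-based confidence metric can be sketched as follows. The normalization by per-class average entropy follows the description above; the exact form of that normalization (a simple division by the average entropy of the predicted class under clean data) is an assumption, as are the stub numbers.

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (natural log)."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log(p + eps)))

def confidence(probs, class_avg_entropy):
    """Lower score = more confident prediction.

    Normalizes by the average entropy of the predicted class measured on
    clean data (the precise normalization here is an assumed form).
    """
    pred = int(np.argmax(probs))
    return entropy(probs) / class_avg_entropy[pred]

# a peaked prediction should score as more confident than a flat one
flat = np.ones(10) / 10
peaked = np.full(10, 0.01); peaked[2] = 0.91
class_avg = np.full(10, 0.5)   # hypothetical clean-data average entropies
c_peaked = confidence(peaked, class_avg)
c_flat = confidence(flat, class_avg)
```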
On the MNIST-back-rand and MNIST-back-image datasets, θrand achieves 3.22% and 22.3% error rates respectively, while θimage achieves 6.09% and 15.33%. Figure 5(a) shows how the performance deteriorates with increasing noise level. In these experiments, random noise and random images are modulated by scaling down their pixel intensity linearly. Intuitively, at low noise the performance should approach the default accuracy of the classifier C, and this is indeed the case. The effect of iterations: We have chosen to run only one iteration because, under high noise, each guess will insist on picking features in its favor and some hallucination can still occur. With more iterations, false positive rates rise and false negative rates decrease, as confidence scores for both the right and the wrong guesses keep improving. This is shown in Figure 5(b). As such, more iterations do not necessarily lead to better performance. In the current model, the predicted class from the previous step is not fed into the next step; more sophisticated strategies with such an extension might produce better results in the future. The power of group bias: For this benchmark, good performance mostly depends on the quality of segmentation. Therefore, a simpler approach is to denoise with the coarse-grained group bias, followed by classification. For θimage, we attached an SVM to the hidden units with bG turned on, and obtained a 16.2% error rate. However, if we trained the SVM with 60K samples, the error rate drops to 12.1%. This confirms that supervised learning can achieve better performance with more training data. Making it scalable: So far, we enumerate over all the guesses. This is clearly not scalable if the number of classes is large. One sensible solution is to first denoise with a group bias bG, pick the top-K candidates from the prediction distribution, and then iterate among them. Finally, we emphasize that the above results are obtained with only one up-down pass.
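The scalable variant (group-bias denoising followed by top-K candidate selection) can be sketched as below; `group_denoise` and `classify` are hypothetical stand-ins for the paper's modules.

```python
import numpy as np

def scalable_candidates(x, group_denoise, classify, K=3):
    """Scalable bias selection (a sketch): denoise once with the group
    bias, then keep only the top-K predicted classes as candidate biases
    to iterate over, instead of enumerating all classes."""
    y = group_denoise(x)
    probs = classify(y)
    return list(np.argsort(probs)[::-1][:K])   # indices of the K largest

# stub prediction distribution, purely for illustration
probs_stub = np.array([0.05, 0.10, 0.40, 0.05, 0.20,
                       0.05, 0.05, 0.03, 0.04, 0.03])
x = np.zeros(4)
cands = scalable_candidates(x, lambda v: v, lambda y: probs_stub, K=3)
```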
This is in stark contrast to other generative-model-based systems. For example, in PGBM [15], each inference takes 25 rounds. 3.3 The MNIST-2 problem Compared to corruption by background noise, MNIST-2 is a much more challenging task, even for a human observer. It is a problem of segmentation, not denoising. In fact, such segmentation requires semantic understanding of the object. Knowing which features are task-irrelevant is not sufficient; we need to discover and utilize per-class features. Any denoising architecture that only removes task-irrelevant features will fail on such a task without additional mechanisms. In aNN, each bias has its own associated features and explicitly calls these features out in the reconstruction phase (modulated by input activations). Meanwhile, the framework permits multiple predictions, so it can accommodate such problems. Figure 6: Sample results on MNIST-2. In each example, each column is one iteration. The first two rows are runs with the two ground-truth digits; the others are with wrong biases. For the MNIST-2 task, we used the same off-the-shelf 3-layer classifier to validate a guess. In the first two examples in Figure 6, the pair of digits in the ground truth is correctly identified. Supplying either digit as the bias successfully segments its features, resulting in imperfect reconstructions that are nonetheless confident enough to win over competing proposals. One would expect the random nature of MNIST-2 to create much more challenging (and interesting) cases that either defy or confuse any segmentation attempt. This is indeed true. The last example is an overlay of the digits 1 and 5 that looks like a perfect 8.
Each of the 5 biases successfully segments out its target “digit”, sometimes creatively. It is satisfying to see that a human observer would make similar misjudgements in those cases. Figure 7: Sample results on MNIST-2 with added background noise. (a), (b) and (c) are examples of three groups of results, where both digits, one digit, or neither digit is correctly predicted, respectively. Out of the 5000 MNIST-2 pairs, there are 95.46% and 44.62% of cases where at least one digit or both digits are correctly predicted, respectively. Given the challenging nature of the benchmark, we are surprised by this performance. Contrary to the random-background dataset, in this problem more iterations conclusively lead to better performance. The above accuracy is obtained with 5 iterations; the accuracy for matching both digits drops to 36.28% if only 1 iteration is used. Even more interestingly, this performance is resilient against background noise (Figure 7): the accuracy only drops slightly (to 93.72% and 41.66%). The top-down biases allow us to achieve segmentation and denoising at the same time. 4 Discussion and Related Work 4.1 Architecture Feedforward multilayer neural networks have achieved good performance in many classification tasks in the past few years, notably achieving the best performance in the ImageNet vision competition [21, 7]. However, they typically give a fixed outcome for each input image, and therefore cannot naturally model the influence of cognitive biases; they are also difficult to incorporate into a larger cognitive framework. The current frontier of vision research is to go beyond object recognition towards image understanding [16].
Inspired by neuroscience research, we believe that a unified module which integrates feedback predictions and interpretations with information from the world is an important step towards this goal. Generative models have been a popular approach [5, 13]. They are typically based on a probabilistic framework such as Boltzmann machines and can be stacked into a deep architecture. They have advantages over discriminative models in dealing with object occlusion. In addition, prior knowledge can be easily incorporated into generative models in the form of latent variables. However, despite the mathematical beauty of a probabilistic framework, this class of models currently suffers from the difficulty of generative learning and has been mostly successful in learning small patches of natural images and objects [17, 22, 13]. In addition, inferring the hidden variables from images is a difficult process, and many iterations are typically needed for the model to converge [13, 15]. A recent trend is to first train a DBN or DBM model and then turn it into a discriminative network for classification. This allows for fast recognition, but the discriminative network loses the generative ability and cannot combine top-down and bottom-up information. We sought a simple architecture that can flexibly navigate between discriminative and generative frameworks. This should ideally allow for one-pass quick recognition of images with easy and well-segmented objects, but naturally allow for iteration and influence by cognitive bias when the need for segmentation arises in corrupted or occluded image settings. 4.2 Models of Attention In the field of computational modeling of attention, many models have been proposed to model the saliency map; they are used to predict where attention will be deployed and to provide fits to eye-tracking data [1].
We are instead more interested in how attentional signals propagating back from higher levels in the visual hierarchy can be merged with bottom-up information. Volitional top-down control could update, bias or disambiguate the bottom-up information based on high-level tasks, contextual cues or behavioral goals. Computational models incorporating this principle have so far mostly focused on spatial attention [12, 1]. For example, in a pedestrian detection task, it was shown that visual search can be sped up if the search is limited to spatial locations of high prior or posterior probability [3]. However, human attention abilities go beyond simple highlighting based on location. For example, the ability to segment and disentangle objects based on high-level expectations, as in the MNIST-2 dataset, represents an interesting case. Here, we demonstrate that top-down attention can also be used to segment out relevant parts of a cluttered and entangled scene guided by a top-down interpretation, showing that attentional bias can be successfully deployed at a far more fine-grained level than previously realized. We have chosen the image-denoising and image-segmentation tasks as our test cases. In the context of image denoising, feedforward neural networks have been shown to have good performance [6, 20, 11]. However, that work has not included a feedback component and has no generative ability. Several Boltzmann-machine-based architectures have been proposed [9, 8]. In PGBM, gates on input images are trained to partition each pixel as belonging to objects or backgrounds, which are modeled by two separate RBMs [15]. The gates and the RBM components make up a high-order RBM. However, such a high-order RBM is difficult to train and needs costly iterations during inference. sDBN [17] used an RBM to model the distribution of the hidden layer, and then denoised the hidden layer by Gibbs sampling over the hidden units affected by noise.
Besides the complexity of Gibbs sampling, the process of iteratively finding which units are affected by noise is also complicated and costly, as there is a separate round of Gibbs sampling for each unit. When multiple digits appear in the image, as in the case of MNIST-2, the hidden-layer denoising step leads to uncertain results, and the best outcome is an arbitrary choice of one of the mixed digits. A DBM-based architecture has also been proposed for modeling attention, but the complexity of learning and inference also makes it difficult to apply in practice [10]. All these works also lack the ability of controlled generation and input reconstruction under the direction of a top-down bias. In our work, top-down biases influence the processing of feedforward information at two levels. The inputs are gated at the raw-image stage by top-down reconstructions. We propose that this might be equivalent to the powerful gating influence of the thalamus in the brain [1, 15]. If the influence of the input image is shut off at this stage, then the system can engage in hallucination and might enter a state akin to dreams, as when the thalamic gates are closed. Top-down biases also affect processing at the higher stage of high-level features. We think this might be equivalent to the processing level of V4 in the visual hierarchy. At this level, top-down biases mostly suppress task-irrelevant features, and we have modeled the interactions as multiplicative in accordance with results from neuroscience research [1, 2]. 4.3 Philosophical Points The issue of whether top-down connections and iterative processing are useful for object recognition has been a point of hot contention. Early work inspired by Hopfield networks and the tradition of probabilistic models based on Gibbs sampling argues for the usefulness of feedback and iteration [14, 13], but results from neuroscience research and recent success by purely feedforward networks argue against it [18, 7].
In our work, we find that feedforward processing is sufficient for good performance on clean digits. Feedback connections play an essential role in digit denoising. However, one pass with a simple cognitive bias towards digits seems to suffice, and iteration seems only to confirm the initial bias without improving performance. We hypothesize that this “see what you want to see” effect is a side-effect of our ability to denoise a cluttered scene, as the deep hierarchy possesses the ability to decompose objects into many shareable parts. In the more complex case of MNIST-2, performance does increase with iteration. This suggests that top-down connections and iteration might be particularly important for good performance in the case of cluttered scenes. The architecture we proposed can naturally accommodate all these task requirements simultaneously with essentially no further fine-tuning. We view such a general-purpose framework as an essential foundation for a larger system emulating the cognitive abilities of the whole brain. References [1] F. Baluch and L. Itti. Mechanisms of top-down attention. Trends in Neurosciences, 34(4):210–224, 2011. [2] T. Çukur, S. Nishimoto, A. G. Huth, and J. L. Gallant. Attention during natural vision warps semantic representation across the human brain. Nature Neuroscience, 16(6):763–770, 2013. [3] K. A. Ehinger, B. Hidalgo-Sotelo, A. Torralba, and A. Oliva. Modelling search for people in 900 scenes: A combined source model of eye guidance. Visual Cognition, 17(6-7):945–978, 2009. [4] J. M. Henderson, M. Chanceaux, and T. J. Smith. The influence of clutter on real-world scene search: Evidence from search efficiency and eye movements. Journal of Vision, 9(1):32, 2009. [5] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006. [6] V. Jain and H. S. Seung. Natural image denoising with convolutional networks. In NIPS, volume 8, pages 769–776, 2008. [7] A.
Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, volume 1, page 4, 2012. [8] H. Larochelle and Y. Bengio. Classification using discriminative restricted Boltzmann machines. In Proceedings of the 25th International Conference on Machine Learning, pages 536–543. ACM, 2008. [9] V. Nair and G. E. Hinton. Implicit mixtures of restricted Boltzmann machines. In NIPS, volume 21, pages 1145–1152, 2008. [10] D. P. Reichert, P. Series, and A. J. Storkey. A hierarchical generative model of recurrent object-based attention in the visual cortex. In Artificial Neural Networks and Machine Learning – ICANN 2011, pages 18–25. Springer, 2011. [11] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 833–840, 2011. [12] A. L. Rothenstein and J. K. Tsotsos. Attention links sensing to recognition. Image and Vision Computing, 26(1):114–126, 2008. [13] R. Salakhutdinov and G. E. Hinton. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pages 448–455, 2009. [14] F. Schwenker, F. T. Sommer, and G. Palm. Iterative retrieval of sparsely coded associative memory patterns. Neural Networks, 9(3):445–455, 1996. [15] K. Sohn, G. Zhou, C. Lee, and H. Lee. Learning and selecting features jointly with point-wise gated Boltzmann machines. In Proceedings of the 30th International Conference on Machine Learning, pages 217–225, 2013. [16] C. Tan, J. Z. Leibo, and T. Poggio. Throwing down the visual intelligence gauntlet. In Machine Learning for Computer Vision, pages 1–15. Springer, 2013. [17] Y. Tang and C. Eliasmith. Deep networks for robust visual recognition. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 1055–1062, 2010. [18] S. Thorpe, D. Fize, C. Marlot, et al.
Speed of processing in the human visual system. Nature, 381(6582):520–522, 1996. [19] S. Ullman. Sequence seeking and counter streams: a computational model for bidirectional information flow in the visual cortex. Cerebral Cortex, 5(1):1–11, 1995. [20] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pages 1096–1103. ACM, 2008. [21] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. arXiv preprint arXiv:1311.2901, 2013. [22] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 479–486. IEEE, 2011.
The Blinded Bandit: Learning with Adaptive Feedback Ofer Dekel Microsoft Research oferd@microsoft.com Elad Hazan Technion ehazan@ie.technion.ac.il Tomer Koren Technion tomerk@technion.ac.il Abstract We study an online learning setting where the player is temporarily deprived of feedback each time it switches to a different action. Such a model of adaptive feedback naturally occurs in scenarios where the environment reacts to the player’s actions and requires some time to recover and stabilize after the algorithm switches actions. This motivates a variant of the multi-armed bandit problem, which we call the blinded multi-armed bandit, in which no feedback is given to the algorithm whenever it switches arms. We develop efficient online learning algorithms for this problem and prove that they guarantee the same asymptotic regret as the optimal algorithms for the standard multi-armed bandit problem. This result stands in stark contrast to another recent result, which states that adding a switching cost to the standard multi-armed bandit makes it substantially harder to learn, and provides a direct comparison of how feedback and loss contribute to the difficulty of an online learning problem. We also extend our results to the general prediction framework of bandit linear optimization, again attaining near-optimal regret bounds. 1 Introduction The adversarial multi-armed bandit problem [4] is a T-round prediction game played by a randomized player in an adversarial environment. On each round of the game, the player chooses an arm (also called an action) from some finite set, and incurs the loss associated with that arm. The player can choose the arm randomly, by choosing a distribution over the arms and then drawing an arm from that distribution. He observes the loss associated with the chosen arm, but he does not observe the loss associated with any of the other arms. The player’s cumulative loss is the sum of all the loss values that he incurs during the game.
To minimize his cumulative loss, the player must trade off exploration (trying different arms to observe their loss values) and exploitation (choosing a good arm based on historical observations). The loss values are assigned by the adversarial environment before the game begins. Each of the loss values is constrained to be in [0, 1], but otherwise they can be arbitrary. Since the loss values are set beforehand, we say that the adversarial environment is oblivious to the player’s actions. The performance of a player strategy is measured in the standard way, using the game-theoretic notion of regret (formally defined below). Auer et al. [4] present a player strategy called EXP3, prove that it guarantees a worst-case regret of O(√T) on any oblivious assignment of loss values, and prove that this guarantee is the best possible. A sublinear upper bound on regret implies that the player’s strategy improves over time and is therefore a learning strategy, but if this upper bound has a rate of O(√T) then the problem is called an easy online learning problem. (The classification of online problems into easy vs. hard is borrowed from Antos et al. [2].) In this paper, we study a variant of the standard multi-armed bandit problem where the player is temporarily blinded each time he switches arms. In other words, if the player’s current choice is different from his choice on the previous round, then we say that he has switched arms; he incurs the loss as before, but he does not observe this loss or any other feedback. On the other hand, if the player chooses the same arm that he chose on the previous round, he incurs and observes his loss as usual. We call this setting the blinded multi-armed bandit. For example, say that the player’s task is to choose an advertising campaign (out of k candidates) to reduce the frequency of car accidents.
Even if a new advertising campaign has an immediate effect, the new accident rate can only be measured over time (since we must wait for a few accidents to occur), and the environment’s reaction to the change cannot be observed immediately. The blinded bandit setting can also be used to model problems where a switch introduces a temporary bias into the feedback, which makes the feedback useless. A good example is the well-known primacy and novelty effect [14, 15] that occurs in human-computer interaction. Say that we operate an online restaurant directory and the task is to choose the best user interface (UI) for our site (from a set of k candidates). The quality of a UI is measured by the time it takes the user to complete a successful interaction with our system. Whenever we switch to a new UI, we encounter a primacy effect: users are initially confused by the unfamiliar interface and interaction times artificially increase. In some situations, we may encounter the opposite, a novelty effect: a fresh new UI could intrigue users, increase their desire to engage with the system, and temporarily decrease interaction times. In both cases, feedback is immediately available, but each switch makes the feedback temporarily unreliable. There are also cases where switching introduces variance in the feedback, rather than a bias. Almost any setting where the feedback is measured by a physical sensor, such as a photometer or a digital thermometer, fits in this category. Most physical sensors apply a low-pass filter to the signal they measure, and a low-pass filter in the frequency domain is equivalent to integrating the signal over a sliding window in the time domain. While the sensor may output an immediate reading, it needs time to stabilize and return to an adequate precision. The blinded bandit setting bears a close similarity to another setting called the adversarial multi-armed bandit with switching costs.
In that setting, the player incurs an additional loss each time he switches arms. This penalty discourages the player from switching frequently. At first glance, it would seem that the practical problems described above could be formulated and solved as multi-armed bandit problems with switching costs, and one might question the need for our new blinded bandit setting. However, Dekel et al. [12] recently proved that the adversarial multi-armed bandit with switching costs is a hard online learning problem, namely one where the best possible regret guarantee is Θ̃(T^{2/3}). In other words, for any learning algorithm, there exists an oblivious setting of the loss values that forces a regret of Ω̃(T^{2/3}). In this paper, we present a new algorithm for the blinded bandit setting and prove that it guarantees a regret of O(√T) on any oblivious sequence of loss values. In other words, we prove that the blinded bandit is surprisingly as easy as the standard multi-armed bandit setting, despite its close similarity to the hard multi-armed bandit with switching costs problem. Our result has both theoretical and practical significance. Theoretically, it provides a direct comparison of how feedback and loss contribute to the difficulty of an online learning problem. Practically, it identifies a rich and important class of online learning problems that would seem to be a natural fit for the multi-armed bandit setting with switching costs, but are in fact much easier to learn. Moreover, to the best of our knowledge, our work is the first to consider online learning in a setting where the loss values are oblivious to the player’s past actions but the feedback is adaptive. We also extend our results and study a blinded version of the more general bandit linear optimization setting. The bandit linear optimization framework is useful for efficiently modeling problems of learning under uncertainty with extremely large, yet structured, decision sets.
For example, consider the problem of online routing in networks [5], where our task is to route a stream of packets between two nodes in a computer network. While there may be exponentially many paths between the two nodes, the total time it takes to send a packet is simply the sum of the delays on each edge in the path. If the route is switched in the middle of a long streaming transmission, the network protocol needs a while to find the new optimal transmission rate, and the delay of the first few packets after the switch can be arbitrary. (More generally, we could define a setting where the player is blinded for m rounds following each switch, but for simplicity we focus on m = 1.) This view of the packet routing problem demonstrates the need for a blinded version of bandit linear optimization. The paper is organized as follows. In Section 2 we formalize the setting and lay out the necessary definitions. Section 3 is dedicated to presenting our main result, an optimal algorithm for the blinded bandit problem. In Section 4 we extend this result to the more general setting of bandit linear optimization. We conclude in Section 5. 2 Problem Setting To describe our contribution to this problem and its significance compared to previous work, we first define our problem setting more formally and give some background on the problem. As mentioned above, the player plays a T-round prediction game against an adversarial environment. Before the game begins, the environment picks a sequence of loss functions ℓ1, . . . , ℓT : K → [0, 1] that assigns loss values to arms from the set K = {1, . . . , k}. On each round t, the player chooses an arm xt ∈ K, possibly at random, which results in a loss ℓt(xt). In the standard multi-armed bandit setting, the feedback provided to the player at the end of round t is the number ℓt(xt), whereas the other values of the function ℓt are never observed.
The player's expected cumulative loss at the end of the game equals E[∑_{t=1}^T ℓt(xt)]. Since the loss values are assigned adversarially, the player's cumulative loss is only meaningful when compared to an adequate baseline; we compare the player's cumulative loss to the cumulative loss of a fixed policy, which chooses the same arm on every round. Define the player's regret as

R(T) = E[∑_{t=1}^T ℓt(xt)] − min_{x∈K} ∑_{t=1}^T ℓt(x) .   (1)

Regret can be positive or negative. If R(T) = o(T) (namely, the regret is either negative or grows at most sublinearly with T), we say that the player is learning. Otherwise, if R(T) = Θ(T) (namely, the regret grows linearly with T), it indicates that the player's per-round loss does not decrease with time and therefore we say that the player is not learning. In the blinded version of the problem, the feedback on round t, i.e. the number ℓt(xt), is revealed to the player only if he chooses xt to be the same as xt−1. On the other hand, if xt ≠ xt−1, then the player does not observe any feedback. The blinded bandit game is summarized in Fig. 1.

Parameters: action set K, time horizon T
• Environment determines a sequence of loss functions ℓ1, . . . , ℓT : K → [0, 1]
• On each round t = 1, 2, . . . , T:
  1. Player picks an action xt ∈ K and suffers the loss ℓt(xt) ∈ [0, 1]
  2. If xt = xt−1, the number ℓt(xt) is revealed as feedback to the player
  3. Otherwise, if xt ≠ xt−1, the player gets no feedback from the environment
Figure 1: The blinded bandit game.

Bandit Linear Optimization. In Section 4, we consider the more general setting of online linear optimization with bandit feedback [10, 11, 1]. In this problem, on round t of the game, the player chooses an action, possibly at random, which is a point xt in a fixed action set K ⊂ ℝ^n. The loss he suffers on that round is then computed by a linear function ℓt(xt) = ℓt · xt, where ℓt ∈ ℝ^n is a loss vector chosen by the oblivious adversarial environment before the game begins.
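The blinded-feedback protocol of Fig. 1 can be simulated in a few lines. The following Python sketch is ours (the function and argument names are illustrative, not from the paper); it runs the game for an arbitrary player and reveals the incurred loss only on rounds where the player did not switch.

```python
def play_blinded_bandit(losses, choose_arm, T):
    """Simulate the blinded bandit game of Fig. 1.

    losses[t][x] is the loss of arm x on round t (oblivious: fixed in advance).
    choose_arm(t, feedback_history) returns the arm x_t played on round t.
    """
    prev_arm = None
    feedback = []           # observed losses (None on blinded rounds)
    total_loss = 0.0
    for t in range(T):
        x = choose_arm(t, feedback)
        total_loss += losses[t][x]          # the loss is always incurred...
        if x == prev_arm:
            feedback.append(losses[t][x])   # ...but only revealed when x_t = x_{t-1}
        else:
            feedback.append(None)           # blinded after a switch
        prev_arm = x
    return total_loss, feedback
```

Note that a player who switches on every round incurs loss normally but never observes any feedback, which is exactly why frequent switching is problematic in this setting.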
To ensure that the incurred losses are bounded, we assume that the loss vectors ℓ1, . . . , ℓT are admissible, that is, they satisfy |ℓt · x| ≤ 1 for all t and x ∈ K (in other words, the loss vectors reside in the polar set of K). As in the multi-armed bandit problem, the player only observes the loss he incurred, and the full loss vector ℓt is never revealed to him. The player's performance is measured by his regret, as defined above in Eq. (1).

3 Algorithm

We recall the classic EXP3 algorithm for the standard multi-armed bandit problem, and specifically focus on the version presented in Bubeck and Cesa-Bianchi [6]. The player maintains a probability distribution over the arms, which we denote by pt ∈ ∆(K) (where ∆(K) denotes the set of probability measures over K, which is simply the k-dimensional simplex when K = {1, 2, . . . , k}). Initially, p1 is set to the uniform distribution (1/k, . . . , 1/k). On round t, the player draws xt according to pt, incurs and observes the loss ℓt(xt), and applies the update rule

∀x ∈ K,  pt+1(x) ∝ pt(x) · exp( −η · (ℓt(xt)/pt(xt)) · 1[x = xt] ) .

EXP3 provides the following regret guarantee, which depends on the user-defined learning rate parameter η:

Theorem 1 (due to Auer et al. [4], taken from Bubeck and Cesa-Bianchi [6]). Let ℓ1, . . . , ℓT be an arbitrary loss sequence, where each ℓt : K → [0, 1]. Let x1, . . . , xT be the random sequence of arms chosen by EXP3 (with learning rate η > 0) as it observes this sequence. Then, R(T) ≤ ηkT/2 + (log k)/η.

EXP3 cannot be used in the blinded bandit setting because the EXP3 update rule cannot be invoked on rounds where a switch occurs. Also, since switching actions Ω(T) times is, in general, required for obtaining the optimal O(√T) regret (see [12]), the player must avoid switching actions too frequently and often stick with the action that was chosen on the previous round. Due to the adversarial nature of the problem, randomization must be used in controlling the scheme of action switches.
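The EXP3 update above admits a direct implementation: only the played arm's weight is multiplied by the exponential of its importance-weighted loss estimate, and the weights are then renormalized. The sketch below is an illustrative rendering of this step (names are ours), not code from the paper.

```python
import math

def exp3_update(p, arm, loss, eta):
    """One EXP3 step: exponentially reweight the played arm by its
    importance-weighted loss estimate loss / p[arm], then renormalize."""
    w = [p[x] * math.exp(-eta * (loss / p[arm]) * (1 if x == arm else 0))
         for x in range(len(p))]
    total = sum(w)
    return [wx / total for wx in w]

def exp3_draw(p, rng):
    """Sample an arm index from the distribution p via inverse CDF."""
    r, acc = rng.random(), 0.0
    for x, px in enumerate(p):
        acc += px
        if r < acc:
            return x
    return len(p) - 1
```

After one update, the played arm's probability shrinks relative to the others while all unplayed arms keep equal probabilities, which matches the multiplicative form of the rule.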
We propose a variation on EXP3, which is presented in Algorithm 1. Our algorithm begins by drawing a sequence of independent Bernoulli random variables b0, b1, . . . , bT+1 (i.e., such that P(bt = 0) = P(bt = 1) = 1/2). This sequence determines the schedule of switches and updates for the entire game. The algorithm draws a new arm (and possibly switches) only on rounds where bt−1 = 0 and bt = 1, and invokes the EXP3 update rule only on rounds where bt = 0 and bt+1 = 1. Note that these two events can never co-occur. Specifically, the algorithm always invokes the update rule one round before the potential switch occurs. This confirms that the algorithm relies on the value of ℓt(xt) only on non-switching rounds.

Algorithm 1: BLINDED EXP3
  set p1 ← (1/k, . . . , 1/k), draw x0 ∼ p1
  draw b0, . . . , bT+1 i.i.d. unbiased Bernoullis
  for t = 1, 2, . . . , T
    if bt−1 = 0 and bt = 1
      draw xt ∼ pt            // possible switch
    else
      set xt ← xt−1           // no switch
    play arm xt and incur loss ℓt(xt)
    if bt = 0 and bt+1 = 1
      observe ℓt(xt) and for all x ∈ K, update
        wt+1(x) ← pt(x) · exp( −η · (ℓt(xt)/pt(xt)) · 1[x = xt] )
      set pt+1 ← wt+1/∥wt+1∥1
    else
      set pt+1 ← pt

We set out to prove the following regret bound.

Theorem 2. Let ℓ1, . . . , ℓT be an arbitrary loss sequence, where each ℓt : K → [0, 1]. Let x1, . . . , xT be the random sequence of arms chosen by Algorithm 1 as it plays the blinded bandit game on this sequence (with learning rate fixed to η = √(2 log k / (kT))). Then, R(T) ≤ 6√(Tk log k).

We prove Theorem 2 with the below sequence of lemmas. In the following, we let ℓ1, . . . , ℓT be an arbitrary loss sequence and let x1, . . . , xT be the sequence of arms chosen by Algorithm 1 (with parameter η > 0). First, we define the set S = { t ∈ [T] : bt = 0 and bt+1 = 1 }. In words, S is a random subset of [T] that indicates the rounds on which Algorithm 1 uses its feedback and applies the EXP3 update.

Lemma 1. For any x ∈ K, it holds that E[∑_{t∈S} ℓt(xt) − ∑_{t∈S} ℓt(x)] ≤ ηkT/8 + (log k)/η.

Proof.
For any concrete instantiation of b0, . . . , bT+1, the set S is fixed and the sequence (ℓt)_{t∈S} is an oblivious sequence of loss functions. Note that the steps performed by Algorithm 1 on the rounds indicated in S are precisely the steps that the standard EXP3 algorithm would perform if it were presented with the loss sequence (ℓt)_{t∈S}. Therefore, Theorem 1 guarantees that

E[ ∑_{t∈S} ℓt(xt) − ∑_{t∈S} ℓt(x) | S ] ≤ ηk|S|/2 + (log k)/η .

Taking expectations on both sides of the above and noting that E[|S|] ≤ T/4 proves the lemma.

Lemma 1 proves a regret bound that is restricted to the rounds indicated by S. The following lemma relates that regret to the total regret, on all T rounds.

Lemma 2. For any x ∈ K, we have

E[∑_{t=1}^T ℓt(xt)] − ∑_{t=1}^T ℓt(x) ≤ 4 E[∑_{t∈S} ℓt(xt) − ∑_{t∈S} ℓt(x)] + E[∑_{t=1}^T ∥pt − pt−1∥1] .

Proof. Using the definition of S, we have

E[∑_{t∈S} ℓt(x)] = ∑_{t=1}^T ℓt(x) E[(1 − bt)bt+1] = (1/4) ∑_{t=1}^T ℓt(x) .   (2)

Similarly, we have

E[∑_{t∈S} ℓt(xt)] = ∑_{t=1}^T E[ ℓt(xt)(1 − bt)bt+1 ] .   (3)

We focus on the t'th summand on the right-hand side above. Since bt+1 is independent of ℓt(xt)(1 − bt), it holds that

E[ℓt(xt)(1 − bt)bt+1] = E[bt+1] E[ℓt(xt)(1 − bt)] = (1/2) E[ℓt(xt)(1 − bt)] .

Using the law of total expectation, we get

(1/2) E[ℓt(xt)(1 − bt)] = (1/4) E[ℓt(xt)(1 − bt) | bt = 0] + (1/4) E[ℓt(xt)(1 − bt) | bt = 1] = (1/4) E[ℓt(xt) | bt = 0] .

If bt = 0 then Algorithm 1 sets xt ← xt−1, so we have that xt = xt−1. Therefore, the above equals (1/4) E[ℓt(xt−1) | bt = 0]. Since xt−1 is independent of bt, this simply equals (1/4) E[ℓt(xt−1)]. Hölder's inequality can be used to upper bound

E[ℓt(xt) − ℓt(xt−1)] = E[ ∑_{x∈K} (pt(x) − pt−1(x)) ℓt(x) ] ≤ E[∥pt − pt−1∥1] · max_{x∈K} ℓt(x) ,

where we have used the fact that xt and xt−1 are distributed according to pt and pt−1 respectively (regardless of whether an update took place or not). Since it is assumed that ℓt(x) ∈ [0, 1] for all t and x ∈ K, we obtain

(1/4) E[ℓt(xt−1)] ≥ (1/4) ( E[ℓt(xt)] − E[∥pt − pt−1∥1] ) .
Overall, we have shown that

E[ℓt(xt)(1 − bt)bt+1] ≥ (1/4) ( E[ℓt(xt)] − E[∥pt − pt−1∥1] ) .

Plugging this inequality back into Eq. (3) gives

E[∑_{t∈S} ℓt(xt)] ≥ (1/4) E[ ∑_{t=1}^T ℓt(xt) − ∑_{t=1}^T ∥pt − pt−1∥1 ] .

Summing the inequality above with the one in Eq. (2) concludes the proof.

Next, we prove that the probability distributions over arms do not change much on consecutive rounds of EXP3.

Lemma 3. The distributions p1, p2, . . . , pT generated by the BLINDED EXP3 algorithm satisfy E[∥pt+1 − pt∥1] ≤ 2η for all t.

Proof. Fix a round t; we shall prove the stronger claim that ∥pt+1 − pt∥1 ≤ 2η with probability 1. If no update had occurred on round t and pt+1 = pt, this holds trivially. Otherwise, we can use the triangle inequality to bound

∥pt+1 − pt∥1 ≤ ∥pt+1 − wt+1∥1 + ∥wt+1 − pt∥1 ,

with the vector wt+1 as specified in Algorithm 1. Letting Wt+1 = ∥wt+1∥1, we have pt+1 = wt+1/Wt+1, so we can rewrite the first term on the right-hand side above as

∥pt+1 − Wt+1 · pt+1∥1 = |1 − Wt+1| · ∥pt+1∥1 = 1 − Wt+1 = ∥pt − wt+1∥1 ,

where the last equality follows by observing that pt ≥ wt+1 entrywise, ∥pt∥1 = 1, and ∥wt+1∥1 = Wt+1. By the definition of wt+1, the second term on the right-hand side above equals pt(xt) · (1 − e^{−ηℓt(xt)/pt(xt)}). Overall, we have

∥pt+1 − pt∥1 ≤ 2 pt(xt) · (1 − e^{−ηℓt(xt)/pt(xt)}) .

Using the inequality 1 − exp(−α) ≤ α, we get ∥pt+1 − pt∥1 ≤ 2ηℓt(xt). The claim now follows from the assumption that ℓt(xt) ∈ [0, 1].

We can now proceed to prove our regret bound.

Proof of Theorem 2. Combining the bounds of Lemmas 1–3 proves that for any fixed arm x ∈ K, it holds that

E[∑_{t=1}^T ℓt(xt)] − ∑_{t=1}^T ℓt(x) ≤ ηkT/2 + 4(log k)/η + 2ηT ≤ 2ηkT + 4(log k)/η .

Specifically, the above holds for the best arm in hindsight. Setting η = √(2 log k / (kT)) proves the theorem.

4 Blinded Bandit Linear Optimization

In this section we extend our results to the setting of linear optimization with bandit feedback, formally defined in Section 2.
We focus on the GEOMETRICHEDGE algorithm [11], which was the first algorithm for the problem to attain the optimal O(√T) regret, and adapt it to the blinded setup. Our BLINDED GEOMETRICHEDGE algorithm is detailed in Algorithm 2. The algorithm uses a mechanism similar to that of Algorithm 1 for deciding when to avoid switching actions. Following the presentation of [11], we assume that K ⊆ [−1, 1]^n is finite and that the standard basis vectors e1, . . . , en are contained in K. Then, the set E = {e1, . . . , en} is a barycentric spanner of K [5] that serves the algorithm as an exploration basis. We denote the uniform distribution over E by uE.

Algorithm 2: BLINDED GEOMETRICHEDGE
  Parameter: learning rate η > 0
  let q1 be the uniform distribution over K, and draw x0 ∼ q1
  draw b0, . . . , bT+1 i.i.d. unbiased Bernoullis
  set γ ← n²η
  for t = 1, 2, . . . , T
    set pt ← (1 − γ) qt + γ uE
    compute covariance Ct ← E_{x∼pt}[xx⊤]
    if bt−1 = 0 and bt = 1
      draw xt ∼ pt            // possible switch
    else
      set xt ← xt−1           // no switch
    play arm xt and incur loss ℓt(xt) = ℓt · xt
    if bt = 0 and bt+1 = 1
      observe ℓt(xt) and let ℓ̂t ← ℓt(xt) · Ct⁻¹ xt
      update qt+1(x) ∝ qt(x) · exp(−η ℓ̂t · x)
    else
      set qt+1 ← qt

The main result of this section is an O(√T) upper bound on the expected regret of Algorithm 2.

Theorem 3. Let ℓ1, . . . , ℓT be an arbitrary sequence of linear loss functions, admissible with respect to the action set K ⊆ ℝ^n. Let x1, . . . , xT be the random sequence of arms chosen by Algorithm 2 as it plays the blinded bandit game on this sequence, with learning rate fixed to η = √(log(nT) / (10nT)). Then, R(T) ≤ 4n^{3/2}√(T log(nT)).

With minor modifications, our technique can also be applied to variants of the GEOMETRICHEDGE algorithm (that differ in their exploration basis) to obtain regret bounds with improved dependence on the dimension n. This includes the COMBAND algorithm [8], EXP2 with John's exploration [7], and the more recent version employing volumetric spanners [13].
We now turn to prove Theorem 3. Our first step is proving an analogue of Lemma 1, using the regret bound of the GEOMETRICHEDGE algorithm proved by Dani et al. [11].

Lemma 4. For any x ∈ K, it holds that E[∑_{t∈S} ℓt(xt) − ∑_{t∈S} ℓt(x)] ≤ ηn²T/2 + n log(nT)/(2η).

We proceed to prove that the distributions generated by Algorithm 2 do not change too quickly.

Lemma 5. The distributions p1, p2, . . . , pT produced by the BLINDED GEOMETRICHEDGE algorithm (from which the actions x1, x2, . . . , xT are drawn) satisfy E[∥pt+1 − pt∥1] ≤ 4η√n for all t.

The proofs of both lemmas are omitted due to space constraints. We now prove Theorem 3.

Proof of Theorem 3. Notice that the bound of Lemma 2 is independent of the construction of the distributions p1, p2, . . . , pT and the structure of K, and thus applies to Algorithm 2 as well. Combining this bound with the results of Lemmas 4 and 5, it follows that for any fixed action x ∈ K,

E[∑_{t=1}^T ℓt(xt)] − ∑_{t=1}^T ℓt(x) ≤ ηn²T/2 + n log(nT)/(2η) + 4η√n T ≤ 5ηn²T + n log(nT)/(2η) .

Setting η = √(log(nT) / (10nT)) proves the theorem.

5 Discussion and Open Problems

In this paper, we studied a new online learning scenario where the player receives feedback from the adversarial environment only when his action is the same as the one from the previous round, a setting that we named the blinded bandit. We devised an optimal algorithm for the blinded multi-armed bandit problem based on the EXP3 strategy, and used similar ideas to adapt the GEOMETRICHEDGE algorithm to the blinded bandit linear optimization setting. In fact, a similar analysis can be applied to any online algorithm that does not change its underlying prediction distributions too quickly (in total variation distance).
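The stability property behind this observation, Lemma 3's per-round guarantee ∥pt+1 − pt∥1 ≤ 2ηℓt(xt), can also be checked numerically. The sketch below (our names; an illustrative check, not part of the paper's proof) re-implements a single EXP3-style update and verifies the ℓ1 bound on random instances.

```python
import math

def l1_step_bound_check(k, eta, trials, rng):
    """Numerically verify that one EXP3-style update over k arms moves the
    arm distribution by at most 2 * eta * loss in L1 distance."""
    ok = True
    for _ in range(trials):
        raw = [rng.random() + 1e-3 for _ in range(k)]
        s = sum(raw)
        p = [r / s for r in raw]            # random current distribution
        arm = rng.randrange(k)              # played arm
        loss = rng.random()                 # observed loss in [0, 1)
        w = [p[x] * (math.exp(-eta * loss / p[arm]) if x == arm else 1.0)
             for x in range(k)]
        W = sum(w)
        p_next = [wx / W for wx in w]       # renormalized update
        l1 = sum(abs(a - b) for a, b in zip(p_next, p))
        ok = ok and (l1 <= 2 * eta * loss + 1e-12)
    return ok
```

The small additive tolerance only guards against floating-point rounding; the inequality itself holds exactly, as the proof of Lemma 3 shows.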
In the practical examples given in the introduction, where each switch introduces a bias or a variance, we argued that the multi-armed bandit problem with switching costs is an inadequate solution, since it is unreasonable to solve an easy problem by reducing it to one that is substantially harder. Alternatively, one might consider simply ignoring the noise in the feedback after each switch and using a standard adversarial multi-armed bandit algorithm like EXP3 despite the bias or the variance. However, if we do that, the player's observed losses would no longer be oblivious (as the observed loss on round t would depend on xt−1), and the regret guarantees of EXP3 would no longer hold³. Moreover, any multi-armed bandit algorithm with O(√T) regret can be forced to make Θ(T) switches [12], so the loss observed by the player could actually be non-oblivious in a constant fraction of the rounds, which would deteriorate the performance of EXP3. Our setting might seem similar to the related problem of label-efficient prediction (with bandit feedback); see [9]. In the label-efficient prediction setting, the feedback for the action performed on some round is received only if the player explicitly asks for it. The player may freely choose when to observe feedback, subject to a global constraint on the total number of feedback queries. In contrast, in our setting there is a strong correlation between the actions the player takes and the presence of the feedback signal. As a consequence, the player is not free to decide when he observes feedback, as he is in the label-efficient setting. Another setting that may seem closely related to ours is the multi-armed bandit problem with delayed feedback [16, 17]. In this setting, the feedback for the action performed on round t is received at the end of round t+1. However, note that in all of the examples we have discussed, the feedback is always immediate, but is either nonexistent or unreliable right after a switch.
The important aspect of our setup, which does not apply to the label-efficient and delayed feedback settings, is that the feedback adapts to the player's past actions. Our work leaves a few interesting questions for future research. A closely related adaptive-feedback problem is one where feedback is revealed only on rounds where the player does switch actions. Can the player attain O(√T) regret in this setting as well, or is the need to constantly switch actions detrimental to the player? More generally, we can consider other multi-armed bandit problems with adaptive feedback, where the feedback depends on the player's actions on previous rounds. It would be quite interesting to understand what kind of adaptive-feedback patterns give rise to easy problems, for which a regret of O(√T) is attainable. Specifically, is there a problem with oblivious losses and adaptive feedback whose minimax regret is Θ̃(T^{2/3}), as is the case with adaptive losses?

Acknowledgments

The research leading to these results has received funding from the Microsoft-Technion EC center, and the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 336078 ERC-SUBLRN.

³Auer et al. [4] also present an algorithm called EXP3.P and seemingly prove O(√T) regret guarantees against non-oblivious adversaries. These bounds are irrelevant in our setting; see Arora et al. [3].

References
[1] J. Abernethy, E. Hazan, and A. Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT, pages 263–274, 2008.
[2] A. Antos, G. Bartók, D. Pál, and C. Szepesvári. Toward a classification of finite partial-monitoring games. Theoretical Computer Science, 2012.
[3] R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. In Proceedings of the Twenty-Ninth International Conference on Machine Learning, 2012.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire.
The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[5] B. Awerbuch and R. D. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches. In Proceedings of the Thirty-Sixth Annual ACM Symposium on Theory of Computing, pages 45–53. ACM, 2004.
[6] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[7] S. Bubeck, N. Cesa-Bianchi, and S. M. Kakade. Towards minimax policies for online linear optimization with bandit feedback. In Proceedings of the 25th Annual Conference on Learning Theory (COLT), volume 23, pages 41.1–41.14, 2012.
[8] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
[9] N. Cesa-Bianchi, G. Lugosi, and G. Stoltz. Minimizing regret with label efficient prediction. IEEE Transactions on Information Theory, 51(6):2152–2162, 2005.
[10] V. Dani and T. P. Hayes. Robbing the bandit: Less regret in online geometric optimization against an adaptive adversary. In Proceedings of the Seventeenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2006.
[11] V. Dani, S. M. Kakade, and T. P. Hayes. The price of bandit information for online optimization. In Advances in Neural Information Processing Systems, pages 345–352, 2007.
[12] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: T^{2/3} regret. arXiv preprint arXiv:1310.2997, 2013.
[13] E. Hazan, Z. Karnin, and R. Meka. Volumetric spanners and their applications to machine learning. arXiv preprint arXiv:1312.6214, 2013.
[14] R. Kohavi, R. Longbotham, D. Sommerfield, and R. M. Henne. Controlled experiments on the web: survey and practical guide. Data Mining and Knowledge Discovery, 18(1):140–181, 2009.
[15] R. Kohavi, A. Deng, B. Frasca, R. Longbotham, T. Walker, and Y. Xu.
Trustworthy online controlled experiments: Five puzzling outcomes explained. In Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 786–794. ACM, 2012.
[16] C. Mesterharm. Online learning with delayed label feedback. In Proceedings of the Sixteenth International Conference on Algorithmic Learning Theory, 2005.
[17] G. Neu, A. György, C. Szepesvári, and A. Antos. Online Markov decision processes under bandit feedback. In Advances in Neural Information Processing Systems 23, pages 1804–1812, 2010.
Zero-Shot Recognition with Unreliable Attributes

Dinesh Jayaraman, University of Texas at Austin, Austin, TX 78701, dineshj@cs.utexas.edu
Kristen Grauman, University of Texas at Austin, Austin, TX 78701, grauman@cs.utexas.edu

Abstract

In principle, zero-shot learning makes it possible to train a recognition model simply by specifying the category's attributes. For example, with classifiers for generic attributes like striped and four-legged, one can construct a classifier for the zebra category by enumerating which properties it possesses—even without providing zebra training images. In practice, however, the standard zero-shot paradigm suffers because attribute predictions in novel images are hard to get right. We propose a novel random forest approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. By leveraging statistics about each attribute's error tendencies, our method obtains more robust discriminative models for the unseen classes. We further devise extensions to handle the few-shot scenario and unreliable attribute descriptions. On three datasets, we demonstrate the benefit for visual category learning with zero or few training examples, a critical domain for rare categories or categories defined on the fly.

1 Introduction

Visual recognition research has achieved major successes in recent years using large datasets and discriminative learning algorithms. The typical scenario assumes a multi-class task where one has ample labeled training images for each class (object, scene, etc.) of interest. However, many real-world settings do not meet these assumptions. Rather than fix the system to a closed set of thoroughly trained object detectors, one would like to acquire models for new categories with minimal effort and training examples.
Doing so is essential not only to cope with the "long-tailed" distribution of objects in the world, but also to support applications where new categories emerge dynamically—for example, when a scientist defines a new phenomenon of interest to be detected in her visual data. Zero-shot learning offers a compelling solution. In zero-shot learning, a novel class is trained via description—not labeled training examples [10, 18, 8]. In general, this requires the learner to have access to some mid-level semantic representation, such that a human teacher can define a novel unseen class by specifying a configuration of those semantic properties. In visual recognition, the semantic properties are attributes shared among categories, like black, has ears, or rugged. Supposing the system can predict the presence of any such attribute in novel images, then adding a new category model amounts to defining its attribute "signature" [8, 3, 18, 24, 19]. For example, even without labeling any images of zebras, one could build a zebra classifier by instructing the system that zebras are striped, black and white, etc. Interestingly, computational models for attribute-based recognition are supported by the cognitive science literature, where researchers explore how humans conceive of objects as bundles of attributes [25, 17, 5]. So, in principle, if we could perfectly predict attribute presence (and had an attribute vocabulary rich enough to form distinct signatures for each category of interest), zero-shot learning would offer an elegant solution to generating novel classifiers on the fly. The problem, however, is that we can't assume perfect attribute predictions. Visual attributes are in practice quite difficult to learn accurately—often even more so than object categories themselves. This is because many attributes are correlated with one another (given only images of furry brown bears, how do we learn furry and brown separately?
[6]), and abstract linguistic properties can have very diverse visual instantiations (compare a bumpy road to a bumpy rash). Thus, attribute-based zero-shot recognition remains in the “proof of concept” realm, in practice falling short of alternate transfer methods [23]. We propose an approach to train zero-shot models that explicitly accounts for the unreliability of attribute predictions. Whereas existing methods take attribute predictions at face value, our method during training acknowledges the known biases of the mid-level attribute models. Specifically, we develop a random forest algorithm that, given attribute signatures for each category, exploits the attribute classifiers’ receiver operating characteristics to select discriminative and predictable decision nodes. We further generalize the idea to account for unreliable class-attribute associations. Finally, we extend the solution to the “few-shot” setting, where a small number of category-labeled images are also available for training. We demonstrate the idea on three large datasets of object and scene categories, and show its clear advantages over status quo models. Our results suggest the valuable role attributes can play for low-cost visual category learning, in spite of the inherent difficulty in learning them reliably. 2 Related Work Most existing zero-shot models take a two-stage classification approach: given a novel image, first its attributes are predicted, then its class label is predicted as a function of those attributes. For example, in [3, 18, 30], each unseen object class is described by a binary indicator vector (“signature”) over its attributes; a new image is mapped to the unseen class with the signature most similar to its attribute predictions. The probabilistic Direct Attribute Prediction (DAP) method [8] takes a similar form, but adds priors for the classes and attributes and computes a MAP prediction of the unseen class label. A topic model variant is explored in [31]. 
The DAP model has gained traction and is often used in other work [23, 19, 29]. In all of the above methods, as in ours, training an unseen class amounts to specifying its attribute signature. In contrast to our approach, none of the existing methods account for attribute unreliability when learning an unseen category. As we will see in the results, this has a dramatic impact on generalization. We stress that attribute unreliability is distinct from attribute strength. The former (our focus) pertains to how reliable the mid-level classifier is, whereas the latter pertains to how strongly an image exhibits an attribute (e.g., as modeled by relative [19] or probabilistic [8] attributes). PAC bounds on the tolerable error for mid-level classifiers are given in [18], but that work does not propose a solution to mitigate the influence of their uncertainty. While the above two-stage attribute-based formulation is most common, an alternative zero-shot strategy is to exploit external knowledge about class relationships to adapt classifiers to an unseen class. For example, an unseen object’s classifier can be estimated by combining the nearest existing classifiers (trained with images) in the ImageNet hierarchy [23, 14], or by combining classifiers based on label co-occurrences [13]. In a similar spirit, label embeddings [1] or feature embeddings [4] can exploit semantic information for zero-shot predictions. Unlike these models, we focus on defining new categories through language-based description (with attributes). This has the advantage of giving a human supervisor direct control on the unseen class’s definition, even if its attribute signature is unlike that observed in any existing trained model. Acknowledging that attribute classifiers are often unreliable, recent work abandons purely semantic attributes in favor of discovering mid-level features that are both detectable and discriminative for a set of class labels [11, 22, 26, 15, 30, 27, 1]. 
However, there is no guarantee that the discovered features will align with semantic properties, particularly "nameable" ones. This typically makes them inapplicable to zero-shot learning, since a human supervisor can no longer define the unseen class with concise semantic terms. Nonetheless, one can attempt to assign semantics post-hoc (e.g., [30]). We demonstrate that our method can benefit zero-shot learning with such discovered (pseudo-)attributes as well. Our idea for handling unreliable attributes in random forests is related to fractional tuples for handling missing values in decision trees [21]. In that approach, points with missing values are distributed down the tree in proportion to the observed values in all other data. Similar concepts are explored in [28] to handle features represented as discrete distributions and in [16] to propagate instances with soft node memberships. Our approach also entails propagating training instances in proportion to uncertainty. However, our zero-shot scenario is distinct, and, accordingly, the training and testing domains differ in important ways. At training time, rather than build a decision tree from labeled data points, we construct each tree using the unseen classes' attribute signatures. Then, at test time, the inputs are attribute classifier predictions. Furthermore, we show how to propagate both signatures and data points through the tree simultaneously, which makes it possible to account for inter-dependencies among the input dimensions and also enables a few-shot extension.

3 Approach

Given a vocabulary of M visual attributes, each unseen class k is described in terms of its attribute signature Ak, which is an M-dimensional vector where Ak(i) gives the association of attribute i with class k. Typically the association values would be binary—meaning that the attribute is always present/absent in the class—but they may also be real-valued when such fine-grained data is available.
We model each unseen class with a single signature (e.g., whales are big and gray). However, it is straightforward to handle the case where a class has a multi-modal definition (e.g., whales are big and gray OR whales are big and black), by learning a zero-shot model per “mode”. Whether the attribute vocabulary is hand-designed [8, 3, 19, 29, 23] or discovered [30, 11, 22], our approach assumes it is expressive enough to discriminate between the categories. Suppose there are K unseen classes of interest, for which we have no training images. Our zero-shot method takes as input the K attribute signatures and a dataset of images labeled with attributes, and produces a classifier for each unseen class as output. At test time, the goal is to predict which unseen class appears in a novel image. In the following, we first describe the initial stage of building the attribute classifiers (Sec. 3.1). Then we introduce a zero-shot random forest trained with attribute signatures (Sec. 3.2). Next we explain how to augment that training procedure to account for attribute unreliability (Sec. 3.2.2) and signature uncertainty (Sec. 3.2.3). Finally, we present an extension to few-shot learning (Sec. 3.3). 3.1 Learning the attribute vocabulary As in any attribute-based zero-shot method [3, 8, 18, 23, 19, 7, 29], we first must train classifiers to predict the presence or absence of each of the M attributes in novel images. Importantly, the images used to train the attribute classifiers may come from a variety of objects/scenes and need not contain any instances of the unseen categories. The fact that attributes are shared across category boundaries is precisely what allows zero-shot learning. We train one SVM per attribute, using a training set of images xi (represented with standard descriptors) with binary M-dimensional label vectors yi, where yi(m) = 1 indicates that attribute m is present in xi. 
Let âm(x) denote the Platt probability score from the m-th such SVM applied to test input x.

3.2 Zero-shot random forests

Next we introduce our key contribution: a random forest model for zero-shot learning.

3.2.1 Basic formulation: Signature random forest

First we define a basic random forest training algorithm for the zero-shot setting. The main idea is to train an ensemble of decision trees using attribute signatures—not image descriptors or vectors of attribute predictions. In the zero-shot setting, this is all the training information available. Later, at test time, we will have an image in hand, and we will apply the trained random forest to estimate its class posteriors. Recall that the k-th unseen class is defined by its attribute signature Ak ∈ ℝ^M. We treat each such signature as the lone positive "exemplar" for its class, and discriminatively train random forests to distinguish all the signatures, A1, . . . , AK. We take a one-versus-all approach, training one forest for each unseen class. So, when training class k, the K − 1 other class signatures are the negatives. (We use "class" and "category" to refer to an object or scene, e.g., zebra or beach, and "attribute" to refer to a property, e.g., striped or sunny. "Unseen" means we have no training images for that class.) For each class, we build an ensemble of decision trees in a breadth-first manner. Each tree is learned by recursively splitting the signatures into subsets at each node, starting at the root. Let In denote an indicator vector of length K that records which signatures appear at node n. For the root node, all K signatures are present, so we have In = [1, . . . , 1]. Following the typical random forest protocol [2], the training instances are recursively split according to a randomized test; it compares one dimension of the signature against a threshold t, then propagates each one to the left child l or right child r depending on the outcome, yielding indicator vectors Il and Ir.
Specifically, if I_n(k) = 1, then if A_k(m) > t, we have I_r(k) = 1; otherwise, I_r(k) = 0. Further, I_l = I_n − I_r. Thus, during training we must choose two things at each node: the query attribute m and the threshold t, represented jointly as the split (m, t). We sample a limited number of (m, t) combinations3 and choose the one that maximizes the expected information gain IG_basic:

IG_basic(m, t) = H(p_{I_n}) − [ P(A_i(m) ≤ t | I_n(i) = 1) H(p_{I_l}) + P(A_i(m) > t | I_n(i) = 1) H(p_{I_r}) ]   (1)
              = H(p_{I_n}) − [ (‖I_l‖₁ / ‖I_n‖₁) H(p_{I_l}) + (‖I_r‖₁ / ‖I_n‖₁) H(p_{I_r}) ],   (2)

where H(p) = −Σ_i p(i) log₂ p(i) is the entropy of a distribution p. The 1-norm on an indicator vector I sums up the occurrences I(k) of each signature, which for now are binary, I(k) ∈ {0, 1}. Since we are training a zero-shot forest to discriminate class k from the rest, the distribution over class labels at node n is a length-2 vector:

p_{I_n} = [ I_n(k) / ‖I_n‖₁ ,  Σ_{i≠k} I_n(i) / ‖I_n‖₁ ].   (3)

We grow each tree in the forest to a fixed, maximum depth, terminating a branch prematurely if less than 5% of training samples have reached a node on it. We learn J = 100 trees per forest. Given a novel test image x_test, we compute its predicted attribute signature â(x_test) = [â_1(x_test), . . . , â_M(x_test)] by applying the attribute SVMs. Then, to predict the posterior for class k, we use â(x_test) to traverse to a leaf node in each tree of k's forest. Let P_k^j(ℓ) denote the fraction of positive training instances at a leaf node ℓ in tree j of the forest for class k. Then P(k | â(x_test)) = (1/J) Σ_j P_k^j(ℓ), the average of the posteriors across the ensemble. If we somehow had perfect attribute classifiers, this basic zero-shot random forest (in fact, one such tree alone) would be sufficient. Next, we show how to adapt the training procedure defined so far to account for their unreliability.
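As a concrete illustration of Eqns. (1)–(3), the following sketch scores one candidate split (m, t) on binary signatures. The helper names are ours; this is a toy re-implementation of the criterion, not the authors' code.

```python
import math

def entropy(p):
    """H(p) = -sum_i p(i) log2 p(i), skipping zero entries."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def one_vs_rest(indicator, k):
    """Length-2 class distribution at a node (Eqn. 3): class k vs. the rest."""
    total = sum(indicator)
    return [indicator[k] / total, (total - indicator[k]) / total]

def ig_basic(signatures, ind_n, k, m, t):
    """Information gain of split (m, t) for class k's forest (Eqns. 1-2).
    signatures: K attribute vectors A_1..A_K; ind_n: binary indicator I_n."""
    K = len(signatures)
    ind_r = [ind_n[i] if signatures[i][m] > t else 0 for i in range(K)]
    ind_l = [ind_n[i] - ind_r[i] for i in range(K)]
    n, n_l, n_r = sum(ind_n), sum(ind_l), sum(ind_r)
    if n_l == 0 or n_r == 0:  # degenerate split: everything goes one way
        return 0.0
    return (entropy(one_vs_rest(ind_n, k))
            - (n_l / n) * entropy(one_vs_rest(ind_l, k))
            - (n_r / n) * entropy(one_vs_rest(ind_r, k)))
```

For example, with signatures A_1 = [1, 0], A_2 = [0, 1], A_3 = [1, 1] all present at a node, splitting class 1's forest on attribute 1 at t = 0.5 isolates A_2 on the left and yields a positive gain, while a threshold no signature exceeds yields a gain of 0.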
3.2.2 Accounting for attribute prediction unreliability

While our training "exemplars" are the true attribute signatures for each unseen class, the test images will have only approximate estimates of the attributes they contain. We therefore augment the zero-shot random forest to account for this unreliability during training. The main idea is to generalize the recursive splitting procedure above such that a given signature can pursue multiple paths down the tree. Critically, those paths will be determined by the false positive/true positive rates of the individual attribute predictors. In this way, we expand each idealized training signature into a distribution in the predicted attribute space. Essentially, this preemptively builds in the appropriate "cushion" of expected errors when choosing discriminative splits. Implementing this idea requires two primary extensions to the formulation in Sec. 3.2.1: (i) we inject attribute validation data and its associated attribute classification error statistics into the tree formation process, and (ii) we redefine the information gain to account for the partial propagation of training signatures. We explain each of these components in turn next. First, in addition to signatures, at each node we maintain a set of validation data in order to gauge the error tendencies of each attribute classifier. For the experiments in this paper (Sec. 4), our method reserves some attribute classifier training data for this purpose. Denote this set of attribute-labeled images as D_V. During random forest training, this data is recursively propagated down the tree following each split once it is chosen. Let D_V(n) ⊆ D_V denote the set of validation data inherited at node n. At the root, D_V(n) = D_V.

3 With binary A_i(m), all 0 < t < 1 are equivalent in Sec. 3.2.1. Selecting t becomes important in Sec. 3.2.2.
With validation data thus injected, we can estimate the test-time receiver operating characteristic (ROC)4 for an attribute classifier at any node in the tree. For example, the estimated false positive rate at node n for attribute m at threshold t is FP(n, m, t) = P_n(â_m(x) > t | y(m) = 0), which is the fraction of examples in D_V(n) for which the attribute m is absent, but the SVM predicts it to be present at threshold t. Here, y(m) denotes the m-th attribute's label for image x. For any node n, let I′_n be a real-valued indicator vector, such that I′_n(k) ∈ [0, 1] records the fractional occurrence of the training signature for class k at node n. At the root node, I′_n(k) = 1, ∀k. For a split (m, t) at node n, a signature A_k splits into the right and left child nodes according to its ROC for attribute m at the operating point specified by t. In particular, we have:

I′_r(k) = I′_n(k) P_n(â_m(x) > t | y(m) = A_k(m))   and   I′_l(k) = I′_n(k) P_n(â_m(x) ≤ t | y(m) = A_k(m)),   (4)

where x ∈ D_V(n). When A_k(m) = 1, the probability terms are TP(n, m, t) and FN(n, m, t), respectively; when A_k(m) = 0, they are FP(n, m, t) and TN(n, m, t). In this way, we channel all predicted negatives to the left child node. In contrast, a naive random forest (RF) trained on signatures assumes ideal attribute classifiers and channels all ground truth negatives (i.e., true negatives and false positives) through the left node. To illustrate the meaning of this fractional propagation, consider a class "elephant" known to have the attribute "gray". If the "gray" attribute classifier fires only on 60% of the "gray" samples in the validation set, i.e., TP = 0.6, then only a 0.6 fraction of the "elephant" signature is passed on to the positive (i.e., right) node. This process repeats through more levels until fractions of the single "elephant" signature have reached all leaf nodes.
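A minimal sketch of this fractional propagation (Eqn. 4). The tp/fp arguments are the TP and FP rates of attribute m's classifier at the chosen operating point, estimated from the validation data D_V(n); the function name and argument layout are ours, not the paper's.

```python
def propagate(ind_n, signatures, m, tp, fp):
    """Split fractional indicators I'_n into (I'_l, I'_r) for a split on
    attribute m, per Eqn. (4). tp/fp: validation-estimated true/false
    positive rates of attribute m's classifier at the chosen threshold."""
    ind_l, ind_r = [], []
    for k, frac in enumerate(ind_n):
        # P(a_m(x) > t | y(m) = A_k(m)): the TP rate if class k has the
        # attribute, the FP rate if it does not.
        p_right = tp if signatures[k][m] == 1 else fp
        ind_r.append(frac * p_right)
        ind_l.append(frac * (1.0 - p_right))
    return ind_l, ind_r
```

With the "elephant"/"gray" example from the text (TP = 0.6), a unit signature sends a 0.6 fraction to the right child and 0.4 to the left.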
Thus, a single class signature emulates the estimated statistics of a full training set of class-labeled instances with attribute predictions. We stress two things about the validation data propagation. First, the data in D_V is labeled by attributes only; it has no unseen class labels and never features in the information gain computation. Its only role is to estimate the ROC values. Second, the recursive sub-selection of the validation data is important to capture the dependency of TP/FP rates at higher level splits. For example, if we were to select split (m, t) at the root, then the fractional signatures pushed to the left child must all have A(m) ≤ t, meaning that for a candidate split (m, s) at the left child, where s > t, the correct TP and FP rates are both 0. This is accounted for when we use D_V(n) to compute the ROC, but would not have been, had we just used D_V. Thus, our formulation properly accounts for dependencies between attributes when selecting discriminative thresholds, an issue not addressed by existing methods for missing [21] or probabilistically distributed features [28]. Next, we redefine the information gain. When building a zero-shot tree conscious of attribute unreliability, we choose the split maximizing the expected information gain according to the fractionally propagated signatures (compare to Eqn. (2)):

IG_zero(m, t) = H(p_{I′_n}) − [ (‖I′_l‖₁ / ‖I′_n‖₁) H(p_{I′_l}) + (‖I′_r‖₁ / ‖I′_n‖₁) H(p_{I′_r}) ].   (5)

The distribution p_{I′_z}, z ∈ {l, r}, is computed as in Eqn. (3). For full pseudocode and a schematic illustration of our method, please see supp. The discriminative splits under this criterion will be those that not only distinguish the unseen classes but also persevere (at test time) as a strong signal in spite of the attribute classifiers' error tendencies.
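Eqn. (5) can be sketched the same way as the basic criterion, now over real-valued indicators; the helper names mirror the toy snippet for Eqn. (2) and are ours, not the paper's.

```python
import math

def entropy(p):
    """H(p) = -sum_i p(i) log2 p(i), skipping zero entries."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def one_vs_rest(indicator, k):
    """Length-2 class distribution (Eqn. 3) over fractional occurrences."""
    total = sum(indicator)
    return [indicator[k] / total, (total - indicator[k]) / total]

def ig_zero(ind_n, ind_l, ind_r, k):
    """Expected information gain (Eqn. 5) over fractionally propagated
    signatures I'_n, I'_l, I'_r for class k's forest."""
    n, n_l, n_r = sum(ind_n), sum(ind_l), sum(ind_r)
    if n_l == 0 or n_r == 0:
        return 0.0
    return (entropy(one_vs_rest(ind_n, k))
            - (n_l / n) * entropy(one_vs_rest(ind_l, k))
            - (n_r / n) * entropy(one_vs_rest(ind_r, k)))
```

This directly exhibits the degenerate case noted in the text: when TPR = FPR, every class splits in the same proportion, the child distributions equal the parent's, and the gain is 0; a classifier with TPR = 0.9 and FPR = 0.1 instead yields a strongly positive gain.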
This means the trees will prefer both reliable attributes that are discriminative among the classes, as well as less reliable attributes coupled with intelligently selected operating points that remain distinctive. Furthermore, they will omit splits that, though highly discriminative in terms of idealized signatures, were found to be "unlearnable" among the validation data. For example, in the extreme case, if an attribute classifier cannot distinguish positives and negatives, meaning that TPR = FPR, then the signatures of all classes are equally likely to propagate to the left or right, i.e., I′_r(k)/I′_n(k) = I′_r(j)/I′_n(j) and I′_l(k)/I′_n(k) = I′_l(j)/I′_n(j) for all k, j, which yields an information gain of 0 in Eqn. (5) (see supp). Thus, our method, while explicitly making the best of imperfect attribute classification, inherently prefers more learnable attributes.

4 The ROC captures the true positive (TP) vs. false positive (FP) rates (equivalently the true negative (TN) and false negative (FN) rates) as a function of a decision value threshold.

The proposed approach produces unseen category classifiers with zero category-labeled images. The attribute-labeled validation data is important to our solution's robustness. If that data perfectly represented the true attribute errors on images from the unseen classes (which we cannot access, of course, because images from those classes appear only at test time), then our training procedure would be equivalent to building a random forest on the test samples' attribute classifier outputs.

3.2.3 Accounting for class signature uncertainty

Beyond attribute classifier unreliability, our framework can also deal with another source of zero-shot uncertainty: instances of a class often deviate from class-level attribute signatures. To tackle this, we redefine the soft indicators I′_r and I′_l in Eqn. (4), appending a term to account for annotation noise. Please see supp. for details.
3.3 Extending to few-shot random forests

Our approach also admits a natural extension to few-shot training. Extensions of zero-shot models to the few-shot setting have been attempted before [31, 26, 14, 1]. In this case, we are given not only attribute signatures, but also a dataset D_T consisting of a small number of images with their class labels. We essentially use the signatures A_1, . . . , A_K as a prior for selecting good tree splits that also satisfy the traditional training examples. The information gain on the signatures is as defined in Sec. 3.2.2, while the information gain on the training images, for which we can compute classifier outputs, uses the standard measure defined in Sec. 3.2.1. Using some notation shortcuts, for few-shot training we recursively select the split that maximizes the combined information gain:

IG_few(m, t) = λ IG_zero(m, t){A_1, . . . , A_K} + (1 − λ) IG_basic(m, t){D_T},   (6)

where λ controls the role of the signature-based prior. Intuitively, we can expect lower values of λ to suffice as the size of D_T increases, since with more training examples we can more precisely learn the class's appearance. This few-shot extension can be interpreted as a new way to learn random forests with descriptive priors.

4 Experiments

Datasets and setup. We use three datasets: (1) Animals with Attributes (AwA) [8] (M = 85 attributes, K = 10 unseen classes, 30,475 total images), (2) aPascal/aYahoo objects (aPY) [3] (M = 65, K = 12, 15,339 images), and (3) SUN scene attributes (SUN) [20] (M = 102, K = 10, 14,340 images). These datasets capture a wide array of categories (animals, indoor and outdoor scenes, household objects, etc.) and attributes (parts, affordances, habitats, shapes, materials, etc.). The attribute-labeled images originate from 40, 20, and 707 "seen" classes in each dataset, respectively; we use the class labels solely to map to attribute annotations.
We use the unseen class splits specified in [9] for AwA and aPY, and randomly select the 10 unseen classes for SUN (see supp.). For all three, we use the features provided with the datasets, which include color histograms, SIFT, PHOG, and others (see [9, 3, 20] for details). Following [8], we train attribute SVMs with combined χ²-kernels, one kernel per feature channel, and set C = 10. Our method reserves 20% of the attribute-labeled images as ROC validation data, then pools it with the remaining 80% to train the final attribute classifiers. We stress that our method and all baselines have access to exactly the same amount of attribute-labeled data. We report results as mean and standard error measured over 20 random trials. Based on cross-validation, we use tree depths of (AwA-9, aPY-6, SUN-8), and generate (#m, #t) tests per node (AwA-(10,7), aPY-(8,2), SUN-(4,5)). When too few validation points (< 10 positives or negatives) reach a node n, we revert to computing statistics over the full validation set D_V rather than D_V(n).

Baselines. In addition to several state-of-the-art published results and ablated variants of our method, we also compare to two baselines: (1) SIGNATURE RF: random forests trained on class-attribute signatures as described in Sec. 3.2.1, without an attribute uncertainty model, and (2) DAP: Direct Attribute Prediction [8, 9], which is a leading attribute-based zero-shot object recognition method widely used in the literature [8, 3, 18, 30, 23, 19, 29].5

5 We use the authors' code: http://attributes.kyb.tuebingen.mpg.de/

[Figure 1: Zero-shot accuracy on AwA as a function of attribute uncertainty, in controlled noise scenarios. Left: uniform noise levels; right: attribute-specific noise levels. Each panel plots accuracy (%) vs. noise level η for ours, signature-RF, and DAP.]
Table 1: Zero-shot learning accuracy on all three datasets. Accuracy is the percentage of correct category predictions on unseen class images, ± standard error.

Method / Dataset                        AwA            aPY            SUN
DAP                                     40.50          18.12          52.50
SIGNATURE-RF                            36.65 ± 0.16   12.70 ± 0.38   13.20 ± 0.34
OURS W/O ROC PROP, SIG UNCERTAINTY      39.97 ± 0.09   24.25 ± 0.18   47.46 ± 0.29
OURS W/O SIG UNCERTAINTY                41.88 ± 0.08   24.79 ± 0.11   56.18 ± 0.27
OURS                                    43.01 ± 0.07   26.02 ± 0.05   56.18 ± 0.27
OURS+TRUE ROC                           54.22 ± 0.03   33.54 ± 0.07   66.65 ± 0.31

4.1 Zero-shot object and scene recognition

Controlled noise experiments. Our approach is designed to overcome the unreliability of attribute classifiers. To glean insight into how it works, we first test it with controlled noise in the test images' attribute predictions. We start with hypothetical perfect attribute classifier scores â_m(x) = A_k(m) for x in class k, then progressively add noise to represent increasing errors in the predictions. We examine two scenarios: (1) where all attribute classifiers are equally noisy, and (2) where the average noise level varies per attribute. See supp. for details on the noise model. Figure 1 shows the results using AwA. By definition, all methods are perfectly accurate with zero noise. Once the attributes are unreliable (i.e., noise > 0), however, our approach is consistently better. Furthermore, our gains are notably larger in the second scenario where noise levels vary per attribute (right plot), illustrating how our approach properly favors more learnable attributes as discussed in Sec. 3.2.2. In contrast, SIGNATURE-RF is liable to break down with even minor imperfections in attribute prediction. These results affirm that our method benefits from both (1) estimating and accounting for classifier noisiness and (2) avoiding uninformative attribute classifiers.

Real unreliable attributes experiments. Next we present the key zero-shot results for our method applied to three challenging datasets using over 250 real attribute classifiers.
Table 1 shows the results. Our method significantly outperforms the existing DAP method [9]. This is an important result: DAP is today the most commonly used model for zero-shot object recognition, whether using this exact DAP formulation [8, 23, 19, 29] or very similar non-probabilistic variants [3, 30]. Note that our approach beats DAP despite the fact that we use only 80% of the attribute-labeled images to train attribute classifiers. This indicates that modeling how good/bad the attribute classifiers are is even more important than having better attribute classifiers. Furthermore, this demonstrates that modeling only the confidence of an attribute's presence in a test image (which DAP does) is inadequate; our idea to characterize their error tendencies during training is valuable. Our substantial improvements over SIGNATURE-RF also confirm it is imperative to model attribute classifier unreliability. Our gains over DAP are especially large on SUN and aPY, which have fewer positive training samples per attribute, leading to less reliable attribute classifiers, exactly where our method is needed most. On AwA too, we outperform DAP on 7 out of 10 categories, with the largest gains on "giant panda" (10.2%), "whale seal" (9.4%), and "persian cat" (7.4%), classes that are very different from the training classes. Further, if we repeat the experiment on AwA reducing to 500 randomly chosen images for attribute training, our overall accuracy gain over DAP widens to 8 points (28.0 ± 0.9 vs. 20.42).

[Figure 2: (a) Few-shot results on AwA: accuracy (%) vs. λ for 50-, 100-, and 200-shot training, with our prior and the corresponding baselines; stars denote the selected λ. (b) Zero-shot results on AwA compared to the state of the art:]

Method                         Accuracy
Lampert et al. [8]             40.5
Yu and Aloimonos [31]          40.0
Rohrbach et al. [24]           35.7
Kankuekul et al. [7]           32.7
Yu et al. [30]                 48.3
OURS (named attributes)        43.0 ± 0.07
OURS (discovered attributes)   48.7 ± 0.09

Table 1 also helps isolate the impact of two components of our method: the model of signature uncertainty (see OURS W/O SIG UNCERTAINTY), and the recursive propagation of validation data (see OURS W/O ROC PROP, SIG UNCERTAINTY). For the latter, we further compute TPR/FPRs globally on the full validation dataset D_V rather than for node-specific subsets D_V(n). We see both aspects contribute to our full method's best performance (see OURS). Finally, OURS+TRUE ROC provides an "upper bound" on the accuracy achievable with our method for these datasets; this is the result attainable were we to use the unseen class images as validation data D_V. This also points to an interesting direction for future work: to better model expected error rates on images with unseen attribute combinations. Our initial attempts in this regard included focusing validation data on seen class images with signatures most like those of the unseen classes, but the impact was negligible. Figure 2b compares our method against all published results on AwA, using both named and discovered attributes. When using standard AwA named attributes, our method comfortably outperforms all prior methods. Further, when we use the discovered attributes from [30], it performs comparably to their attribute decoding method, achieving the state of the art on AwA. This result was obtained using a generalization of our method to handle the continuous attribute strength signatures of [30].

4.2 Few-shot object and scene recognition

Finally, we demonstrate our few-shot extension. Figure 2a shows the results, as a function of both the amount of labeled training images and the prior-weighting parameter λ (cf. Sec. 3.3).6 When λ = 0, we rely solely on the training images D_T; when λ = 1, we rely solely on the attribute signatures, i.e., zero-shot learning.
As a baseline, we compare to a method that uses solely the few training images to learn the unseen classes (dotted lines). We see the clear advantage of our attribute signature prior for few-shot random forest training. Furthermore, we see that, as expected, the optimal λ shifts towards 0 as more samples are added. Still, even with 200 training images in D_T, the prior plays a role (e.g., the best λ = 0.3 on the blue curve). The star per curve indicates the λ value our method selects automatically with cross-validation.

5 Conclusion

We introduced a zero-shot training approach that models unreliable attributes, both due to classifier predictions and uncertainty in their association with unseen classes. Our results on three challenging datasets indicate the method's promise, and suggest that the elegance of zero-shot learning need not be abandoned in spite of the fact that visual attributes remain very difficult to predict reliably. Further, our idea is applicable to other uses of semantic mid-level concepts for higher-level tasks, e.g., poselets for action recognition [12], discriminative mid-level patches for location recognition [27], etc., and in domains outside computer vision. In future work, we plan to develop extensions to accommodate inter-attribute correlations in the random forest tests and multi-label random forests to improve scalability for many unseen classes.

Acknowledgements: We thank Christoph Lampert and Felix Yu for helpful discussions and sharing their code. This research is supported in part by NSF IIS-1065390 and ONR ATL.

6 These are for AwA; see supp. for similar results on the other two datasets.

References
[1] Z. Akata, F. Perronnin, Z. Harchaoui, and C. Schmid. Label-embedding for attribute-based classification. In CVPR, 2013.
[2] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[3] A. Farhadi, I. Endres, D. Hoiem, and D. Forsyth. Describing objects by their attributes. In CVPR, 2009.
[4] A. Frome, G. Corrado, J. Shlens, S. Bengio, J.
Dean, M. Ranzato, and T. Mikolov. DeViSE: A deep visual-semantic embedding model. In NIPS, 2013.
[5] P. Gärdenfors. Conceptual Spaces: The Geometry of Thought, volume 106. 2000.
[6] D. Jayaraman, F. Sha, and K. Grauman. Decorrelating semantic visual attributes by resisting the urge to share. In CVPR, 2014.
[7] P. Kankuekul, A. Kawewong, S. Tangruamsub, and O. Hasegawa. Online incremental attribute-based zero-shot learning. In CVPR, 2012.
[8] C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by between-class attribute transfer. In CVPR, 2009.
[9] C. H. Lampert, H. Nickisch, and S. Harmeling. Attribute-based classification for zero-shot visual object categorization. TPAMI, 2014.
[10] H. Larochelle, D. Erhan, and Y. Bengio. Zero-data learning of new tasks. In AAAI, 2008.
[11] D. Mahajan, S. Sellamanickam, and V. Nair. A joint learning framework for attribute models and object descriptions. In ICCV, 2011.
[12] S. Maji, L. Bourdev, and J. Malik. Action recognition from a distributed representation of pose and appearance. In CVPR, 2011.
[13] T. Mensink, E. Gavves, and C. Snoek. COSTA: Co-occurrence statistics for zero-shot classification. In CVPR, 2014.
[14] T. Mensink and J. Verbeek. Metric learning for large scale image classification: Generalizing to new classes at near-zero cost. In ECCV, 2012.
[15] R. Mittelman, H. Lee, B. Kuipers, and S. Savarese. Weakly supervised learning of mid-level features with Beta-Bernoulli process restricted Boltzmann machines. In CVPR, 2013.
[16] C. Olaru and L. Wehenkel. A complete fuzzy decision tree technique. Fuzzy Sets and Systems, 138(2):221–254, Sept. 2003.
[17] D. Osherson, E. Smith, T. Myers, E. Shafir, and M. Stob. Extrapolating human probability judgment. Theory and Decision, 36:103–129, 1994.
[18] M. Palatucci, D. Pomerleau, G. Hinton, and T. Mitchell. Zero-shot learning with semantic output codes. In NIPS, 2009.
[19] D. Parikh and K. Grauman. Relative attributes. In ICCV, 2011.
[20] G. Patterson and J. Hays. SUN attribute database: Discovering, annotating, and recognizing scene attributes. In CVPR, 2012.
[21] J. Quinlan. Induction of decision trees. Machine Learning, pages 81–106, 1986.
[22] M. Rastegari, A. Farhadi, and D. Forsyth. Attribute discovery via predictable discriminative binary codes. In ECCV, 2012.
[23] M. Rohrbach, M. Stark, and B. Schiele. Evaluating knowledge transfer and zero-shot learning in a large-scale setting. In CVPR, 2011.
[24] M. Rohrbach, M. Stark, G. Szarvas, I. Gurevych, and B. Schiele. What helps where and why? Semantic relatedness for knowledge transfer. In CVPR, 2010.
[25] E. Rosch and B. Lloyd. Cognition and Categorization. 1978.
[26] V. Sharmanska, N. Quadrianto, and C. Lampert. Augmented attribute representations. In ECCV, 2012.
[27] S. Singh, A. Gupta, and A. Efros. Unsupervised discovery of mid-level discriminative patches. In ECCV, 2012.
[28] S. Tsang, B. Kao, K. Yip, W.-S. Ho, and S. Lee. Decision trees for uncertain data. IEEE Transactions on Knowledge and Data Engineering, 23(1):64–78, Jan. 2011.
[29] N. Turakhia and D. Parikh. Attribute dominance: What pops out? In ICCV, 2013.
[30] F. Yu, L. Cao, R. Feris, J. Smith, and S.-F. Chang. Designing category-level attributes for discriminative visual recognition. In CVPR, 2013.
[31] X. Yu and Y. Aloimonos. Attribute-based transfer learning for object categorization with zero/one training example. In ECCV, 2010.
Communication Efficient Distributed Machine Learning with the Parameter Server

Mu Li (Carnegie Mellon University, Baidu), David G. Andersen (Carnegie Mellon University), Alexander Smola (Carnegie Mellon University, Google), and Kai Yu (Baidu)
{muli, dga}@cs.cmu.edu, alex@smola.org, yukai@baidu.com

Abstract

This paper describes a third-generation parameter server framework for distributed machine learning. This framework offers two relaxations to balance system performance and algorithm efficiency. We propose a new algorithm that takes advantage of this framework to solve non-convex, non-smooth problems with convergence guarantees. We present an in-depth analysis of two large-scale machine learning problems, ranging from ℓ1-regularized logistic regression on CPUs to reconstruction ICA on GPUs, using 636 TB of real data with hundreds of billions of samples and dimensions. We demonstrate using these examples that the parameter server framework is an effective and straightforward way to scale machine learning to larger problems and systems than have been previously achieved.

1 Introduction

In realistic industrial machine learning applications, the datasets range from 1 TB to 1 PB. For example, a social network with 100 million users and 1 KB of data per user has 100 TB. Problems in online advertising and user-generated content analysis have complexities of a similar order of magnitude [12]. Such huge quantities of data allow learning powerful and complex models with 10^9 to 10^12 parameters [9], at which scale a single machine is often not powerful enough to complete these tasks in time. Distributed optimization is becoming a key tool for solving large-scale machine learning problems [1, 3, 10, 21, 19]. The workloads are partitioned among worker machines, which access the globally shared model as they simultaneously perform local computations to refine the model. However, efficient implementations of distributed optimization algorithms for machine learning applications are not easy.
A major challenge is the inter-machine data communication:
• Worker machines must frequently read and write the globally shared parameters. This massive data access requires an enormous amount of network bandwidth. However, bandwidth is one of the scarcest resources in datacenters [6], often 10-100 times smaller than memory bandwidth and shared among all running applications and machines. This leads to a huge communication overhead and becomes a bottleneck for distributed optimization algorithms.
• Many optimization algorithms are sequential, requiring frequent synchronization among worker machines. In each synchronization, all machines need to wait for the slowest machine. However, due to imperfect workload partitioning, network congestion, or interference from other running jobs, slow machines are inevitable, which then becomes another bottleneck.
In this work, we build upon our prior work designing an open-source third-generation parameter server framework [4] to understand the scope of machine learning algorithms to which it can be applied, and to what benefit. Figure 1 gives an overview of the scale of the largest machine learning experiments performed on a number of state-of-the-art systems. We confirmed with the authors of these systems whenever possible.

[Figure 1: Comparison of the largest public machine learning experiments each system performed, plotted as number of cores (10^1-10^4) vs. number of shared parameters (10^4-10^11): Distbelief (DNN), VW (LR), Yahoo!LDA (LDA), Graphlab (LDA), Naiad (LR), REEF (LR), Petuum (Lasso), MLbase (LR), Parameter server (Sparse LR). The results are current as of April 2014.]

Compared to these systems, our parameter server is several orders of magnitude more scalable in terms of both parameters and nodes. The parameter server communicates data asynchronously to reduce the communication cost. The resulting data inconsistency is a trade-off between system performance and algorithm convergence rate.
The system offers two relaxations to address data (in)consistency: First, rather than arguing for a specific consistency model [29, 7, 15], we support flexible consistency models. Second, the system allows user-specific filters for fine-grained consistency management. Beyond this, the system provides other features such as data replication, instantaneous failover, and elastic scalability.

Motivating Application. Consider the following general regularized optimization problem:

minimize_w F(w)   where F(w) := f(w) + h(w) and w ∈ ℝ^p.   (1)

We assume that the loss function f : ℝ^p → ℝ is continuously differentiable but not necessarily convex, and the regularizer h : ℝ^p → ℝ is convex, left side continuous, block separable, but possibly non-smooth. The proposed algorithm solves this problem based on the proximal gradient method [23]. However, it differs from the latter in four aspects, in order to efficiently tackle very high-dimensional and sparse data:
• Only a subset (block) of coordinates is updated at a time: (block) Gauss-Seidel updates are shown to be efficient on sparse data [36, 27].
• The model a worker maintains is only partially consistent with other machines, due to asynchronous data communication.
• The proximal operator uses coordinate-specific learning rates to adapt progress to the sparsity pattern inherent in the data.
• Only coordinates that would change the associated model weights are communicated, to reduce network traffic.
We demonstrate the efficiency of the proposed algorithm by applying it to two challenging problems: (1) non-smooth ℓ1-regularized logistic regression on sparse text datasets with over 100 billion examples and features; (2) a non-convex and non-smooth ICA reconstruction problem [18], extracting billions of sparse features from dense image data. We show that the combination of the proposed algorithm and system effectively reduces both the communication cost and the programming effort.
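As a rough sketch of the resulting update, assuming h is the ℓ1 norm (as in the Section 5 experiments) so that its proximal operator is soft-thresholding, one block proximal-gradient step with coordinate-specific learning rates might look as follows. The function names and the fixed schedule are illustrative, not the paper's implementation.

```python
def soft_threshold(v, threshold):
    """Proximal operator of threshold * |w|: shrink v toward zero."""
    if v > threshold:
        return v - threshold
    if v < -threshold:
        return v + threshold
    return 0.0

def block_prox_step(w, grad, eta, lam, block):
    """One block proximal-gradient step: only coordinates in `block` move,
    each with its own learning rate eta[i] (coordinate-specific rates adapt
    progress to the data's sparsity pattern)."""
    for i in block:
        w[i] = soft_threshold(w[i] - eta[i] * grad[i], eta[i] * lam)
    return w
```

Coordinates outside the chosen block are untouched, which is also what keeps the communicated updates sparse.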
In particular, 300 lines of code suffice to implement ℓ1-regularized logistic regression with nearly no communication overhead for industrial-scale problems.

Outline: We first provide background in Section 2. Next, we address the two relaxations in Section 3 and the proposed algorithm in Section 4. In Section 5 (and also Appendix B and C), we present the applications with the experimental results. We conclude with a discussion in Section 6.

2 Background

Related Work. The parameter server framework [29] has proliferated both in academia and in industry. Related systems have been implemented at Amazon, Baidu, Facebook, Google [10], Microsoft, and Yahoo [2]. There are also open source implementations, such as YahooLDA [2] and Petuum [15]. As introduced in [29, 2], the first generation of parameter servers lacked flexibility and performance. The second-generation parameter servers were application specific, exemplified by Distbelief [10] and the synchronization mechanism in [20]. Petuum modified YahooLDA by imposing bounded delay instead of eventual consistency and aimed for a general platform [15], but it placed more constraints on the threading model of worker machines. Compared to previous work, our third-generation system greatly improves system performance, and also provides flexibility and fault tolerance.

Beyond the parameter server, there exist many general-purpose distributed systems for machine learning applications. Many mandate synchronous, iterative communication. For example, Mahout [5], based on Hadoop [13], and MLI [30], based on Spark [37], both adopt the iterative MapReduce framework [11]. On the other hand, Graphlab [21] supports global parameter synchronization on a best-effort basis. These systems scale well to a few hundred nodes, primarily on dedicated research clusters. However, at a larger scale the synchronization requirement creates performance bottlenecks.
The primary advantage over these systems is the flexibility of the consistency models offered by the parameter server.
There is also growing interest in asynchronous algorithms. Shotgun [7], part of GraphLab, performs parallel coordinate descent for solving ℓ1 optimization problems. Other methods partition observations over several machines and update the model in a data-parallel fashion [34, 17, 38, 3, 1, 19]. Lock-free variants were proposed in Hogwild [26]. Mixed variants, which partition data and parameters into non-overlapping components, were introduced in [33], albeit at the price of having to move or replicate data across several machines. Lastly, the NIPS framework [31] discusses general non-convex approximate proximal methods. The proposed algorithm differs from existing approaches mainly in two respects. First, we focus on solving large-scale problems. Given the size of the data and the limited network bandwidth, neither the shared-memory approach of Shotgun and Hogwild nor moving the entire dataset during training is desirable. Second, we aim at solving general non-convex and non-smooth composite objective functions. In contrast to [31], we derive a convergence theorem under weaker assumptions, and furthermore we carry out experiments that are many orders of magnitude larger in scale.
The Parameter Server Architecture. An instance of the parameter server [4] contains a server group and several worker groups, each group consisting of several machines. Each machine in the server group maintains a portion of the global parameters, and all servers communicate with each other to replicate and/or migrate parameters for reliability and scaling. A worker stores only a portion of the training data, and it computes the local gradients or other statistics. Workers communicate only with the servers to retrieve and update the shared parameters. Each worker group may include a scheduler machine, which assigns workloads to workers and monitors their progress.
When workers are added or removed from the group, the scheduler can reschedule the unfinished workloads. Each worker group runs an application, thus allowing for multi-tenancy. For example, an ad-serving system and an inference algorithm can run concurrently in different worker groups. The shared model parameters are represented as sorted (key,value) pairs. Alternatively we can view this as a sparse vector or matrix that interacts with the training data through the built-in multithreaded linear algebra functions. Data exchange can be achieved via two operations: push and pull. A worker can push all (key, value) pairs within a range to servers, or pull the corresponding values from the servers. Distributed Subgradient Descent. For the motivating example introduced in (1), we can implement a standard distributed subgradient descent algorithm [34] using the parameter server. As illustrated in Figure 2 and Algorithm 1, training data is partitioned and distributed among all the workers. The model w is learned iteratively. In each iteration, each worker computes the local gradients using its own training data, and the servers aggregate these gradients to update the globally shared parameter w. Then the workers retrieve the updated weights from the servers. A worker needs the model w to compute the gradients. However, for very high-dimensional training data, the model may not fit in a worker. Fortunately, such data are often sparse, and a worker typically only requires a subset of the model. To illustrate this point, we randomly assigned samples in the dataset used in Section 5 to workers, and then counted the model parameters a worker needed for computing gradients. We found that when using 100 workers, the average worker only needs 7.8% of the model. With 10,000 workers this reduces to 0.15%. Therefore, despite the large total size of w, the working set of w needed by a particular worker can be cached trivially. 
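The working-set effect just described can be reproduced with a toy simulation; the sketch below is ours, with invented sizes and a uniform sparsity model rather than the paper's actual dataset.

```python
import random

def avg_working_set_fraction(num_features, num_samples, nnz_per_sample,
                             num_workers, seed=0):
    """Randomly assign sparse samples to workers and return the average
    fraction of the full model that a worker touches in its gradients."""
    rng = random.Random(seed)
    working_sets = [set() for _ in range(num_workers)]
    for _ in range(num_samples):
        w = rng.randrange(num_workers)  # random sample-to-worker assignment
        working_sets[w].update(rng.randrange(num_features)
                               for _ in range(nnz_per_sample))
    return sum(len(s) / num_features for s in working_sets) / num_workers

# More workers -> fewer samples each -> a much smaller working set per worker.
f_10 = avg_working_set_fraction(10_000, 5_000, 20, num_workers=10)
f_100 = avg_working_set_fraction(10_000, 5_000, 20, num_workers=100)
assert 0.0 < f_100 < f_10 < 1.0
```

This exhibits the same qualitative trend as the 7.8% vs. 0.15% figures in the text: the per-worker cache shrinks as workers are added.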
Algorithm 1 Distributed Subgradient Descent Solving (1) in the Parameter Server
Worker r = 1, . . . , m:
1: Load a part of the training data {y_{i_k}, x_{i_k}}_{k=1}^{n_r}
2: Pull the working set w_r^{(0)} from servers
3: for t = 1 to T do
4:   Gradient g_r^{(t)} ← Σ_{k=1}^{n_r} ∂ℓ(x_{i_k}, y_{i_k}, w_r^{(t)})
5:   Push g_r^{(t)} to servers
6:   Pull w_r^{(t+1)} from servers
7: end for
Servers:
1: for t = 1 to T do
2:   Aggregate g^{(t)} ← Σ_{r=1}^{m} g_r^{(t)}
3:   w^{(t+1)} ← w^{(t)} − η(g^{(t)} + ∂h(w^{(t)}))
4: end for
Figure 2: One iteration of Algorithm 1. Each worker (1) computes its local gradient and (2) pushes it to the servers, which (3) aggregate the gradients and update w; the workers then (4) pull the updated weights. Each worker only caches the working set of w.
3 Two Relaxations of Data Consistency
We now introduce the two relaxations that are key to the proposed system. We encourage the reader interested in systems details such as server key layout, elastic scalability, and continuous fault tolerance to see our prior work [4].
3.1 Asynchronous Task Dependency
We decompose the workloads in the parameter server into tasks that are issued by a caller to a remote callee. There is considerable flexibility in terms of what constitutes a task: for instance, a task can be a push or a pull that a worker issues to servers, or a user-defined function that the scheduler issues to any node, such as an iteration of the distributed subgradient algorithm. Tasks can also contain subtasks. For example, a worker performs one push and one pull per iteration in Algorithm 1.
Tasks are executed asynchronously: the caller can perform further computation immediately after issuing a task. The caller marks a task as finished only once it receives the callee's reply. A reply could be the function return of a user-defined function, the (key,value) pairs requested by a pull, or an empty acknowledgement. The callee marks a task as finished only when the call of the task has returned and all subtasks issued by this call are finished.
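The execute-after-finished dependency can be mimicked with plain threads; `TaskRunner` and its method names are invented for illustration and are not the system's actual RPC interface.

```python
import threading

class TaskRunner:
    """Issue tasks asynchronously; a task with a dependency begins only
    after the task it depends on has been marked finished."""
    def __init__(self):
        self.done = {}       # task id -> threading.Event set on completion
        self.threads = []

    def issue(self, task_id, fn, depends_on=None):
        self.done[task_id] = threading.Event()
        def run():
            if depends_on is not None:
                self.done[depends_on].wait()   # execute-after-finished
            fn()
            self.done[task_id].set()
        t = threading.Thread(target=run)
        self.threads.append(t)
        t.start()

    def join(self):
        for t in self.threads:
            t.join()

# Tasks 10 and 11 are independent; task 12 depends on 11.
order = []
runner = TaskRunner()
runner.issue(10, lambda: order.append(10))
runner.issue(11, lambda: order.append(11))
runner.issue(12, lambda: order.append(12), depends_on=11)
runner.join()
assert order.index(12) > order.index(11)   # 12 never starts before 11 finishes
```

The aggregation logic at the servers would be expressed the same way: one updating task that `depends_on` every worker's push task.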
By default, callees execute tasks in parallel for best performance. A caller wishing to render task execution sequential can insert an execute-after-finished dependency between tasks.
[Diagram: timeline of iterations 10-12, each consisting of a gradient computation followed by a push & pull phase.] The diagram illustrates the execution of three tasks. Tasks 10 and 11 are independent, but 12 depends on 11. The callee therefore begins task 11 immediately after the gradients are computed in task 10. Task 12, however, is postponed until after the pull of task 11.
Task dependencies aid in implementing algorithm logic. For example, the aggregation logic at the servers in Algorithm 1 can be implemented by having the updating task depend on the push tasks of all workers. In this way, the weight w is updated only after all worker gradients have been aggregated.
3.2 Flexible Consistency Models via Task Dependency Graphs
The dependency graph introduced above can be used to relax consistency requirements. Independent tasks improve system efficiency by parallelizing the usage of CPU, disk, and network bandwidth. However, this may lead to data inconsistency between nodes. In the diagram above, worker r starts iteration 11 before the updated model w_r^{(11)} is pulled back; thus it uses the outdated model w_r^{(10)} and computes the same gradient as it did in iteration 10, namely g_r^{(11)} = g_r^{(10)}. This inconsistency can potentially slow down the convergence of Algorithm 1. However, some algorithms may be less sensitive to this inconsistency. For example, if only a block of w is updated in each iteration of Algorithm 2, starting iteration 11 without waiting for 10 causes only a portion of w to be inconsistent. The trade-off between algorithm efficiency and system performance depends on various factors in practice, such as feature correlation, hardware capacity, datacenter load, etc.
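The consistency models discussed next (sequential, eventual, and bounded delay) all reduce to a rule for when iteration t may start; a sketch of that rule, with the function name ours and τ = ∞ encoded as `math.inf`:

```python
import math

def may_start(t, finished, tau):
    """Task t may start iff every task issued more than tau steps earlier
    (i.e., tasks 1 .. t - tau - 1) has already finished."""
    if tau == math.inf:
        return True                      # eventual consistency: never block
    return all(s in finished for s in range(1, t - tau))

# tau = 0: sequential consistency -- task 3 needs tasks 1 and 2 done.
assert may_start(3, finished={1, 2}, tau=0)
assert not may_start(3, finished={1}, tau=0)

# tau = 1: 1-bounded delay -- task 3 only needs task 1 done.
assert may_start(3, finished={1}, tau=1)

# tau = inf: eventual consistency -- no blocking at all.
assert may_start(3, finished=set(), tau=math.inf)
```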
Unlike systems that force the algorithm designer to adopt a specific consistency model that may be ill-suited to the actual situation, the parameter server provides full flexibility over consistency models by creating task dependency graphs, i.e., directed acyclic graphs defined by tasks and their dependencies. Consider the following three examples:
[Figure: task dependency graphs for (a) sequential, (b) eventual, and (c) 1-bounded-delay consistency.]
Sequential Consistency requires all tasks to be executed one by one. The next task can be started only if the previous one has finished. It produces results identical to a single-threaded implementation. Bulk Synchronous Processing uses this approach.
Eventual Consistency, in contrast, allows all tasks to be started simultaneously. [29] describe such a system for LDA. This approach is advisable only when the underlying algorithms are very robust with regard to delays.
Bounded Delay limits the staleness of parameters. When a maximal delay τ is set, a new task is blocked until all tasks issued more than τ steps earlier have finished (τ = 0 yields sequential consistency and for τ = ∞ we recover eventual consistency). Algorithm 2 uses such a model.
Note that dependency graphs allow for more advanced consistency models. For example, the scheduler may increase or decrease the maximal delay according to the runtime progress to dynamically balance the efficiency-convergence trade-off.
3.3 Flexible Consistency Models via User-defined Filters
Task dependency graphs manage data consistency between tasks. User-defined filters allow for more fine-grained control of consistency (e.g., within a task). A filter can transform and selectively synchronize the (key,value) pairs communicated in a task. Several filters can be applied together for better data compression. Some example filters are:
Significantly modified filter: it only pushes entries that have changed by more than a threshold since the last synchronization.
Random skip filter: it subsamples entries before sending; the skipped entries are omitted from the calculations.
KKT filter: it takes advantage of the optimality condition when solving the proximal operator: a worker only pushes gradients that are likely to affect the weights on the servers. We discuss it in more detail in Section 5.
Key caching filter: Because push and pull are range-based, a range of (key,value) pairs is communicated each time. When the same range is chosen again, it is likely that only the values have been modified while the keys are unchanged. If both the sender and receiver have cached these keys, the sender only needs to send the values along with a signature of the keys, effectively doubling the usable network bandwidth.
Compressing filter: The values communicated are often compressible numbers, such as zeros, small integers, and floating-point numbers with more than enough precision. This filter reduces the data size by using lossless or lossy compression algorithms.¹
¹Both key caching and data compression were presented as system-level optimizations in the prior work [4]; here we generalize them into user-defined filters.
4 Delayed Block Proximal Gradient Method
In this section, we propose an efficient algorithm that takes advantage of the parameter server to solve the previously defined non-convex and non-smooth optimization problem (1).
Algorithm 2 Delayed Block Proximal Gradient Method Solving (1)
Scheduler:
1: Partition parameters into k blocks b1, . . .
, bk
2: for t = 1 to T: Pick a block b_{i_t} and issue the task to the workers
Worker r at iteration t:
1: Wait until all iterations before t − τ are finished
2: Compute the first-order gradient g_r^{(t)} and coordinate-specific learning rates u_r^{(t)} on block b_{i_t}
3: Push g_r^{(t)} and u_r^{(t)} to servers with user-defined filters, e.g., the random skip or the KKT filter
4: Pull w_r^{(t+1)} from servers with user-defined filters, e.g., the significantly modified filter
Servers at iteration t:
1: Aggregate g^{(t)} and u^{(t)}
2: Solve the generalized proximal operator (2): w^{(t+1)} ← Prox^U_{γ_t}(w^{(t)}) with U = diag(u^{(t)})
Proximal Gradient Methods. For a closed proper convex function h(x) : R^p → R ∪ {∞}, define the generalized proximal operator
Prox^U_γ(x) := argmin_{y ∈ R^p} h(y) + (1/(2γ)) ‖x − y‖²_U, where ‖x‖²_U := xᵀU x. (2)
The Mahalanobis norm ‖·‖_U is taken with respect to a positive semidefinite matrix U ⪰ 0. Many proximal algorithms choose U = I. To minimize the composite objective function f(w) + h(w), proximal gradient algorithms update w in two steps: a forward step performing steepest gradient descent on f, and a backward step carrying out a projection using h. Given a learning rate γ_t > 0 at iteration t, these two steps can be written as
w^{(t+1)} = Prox^U_{γ_t}[w^{(t)} − γ_t ∇f(w^{(t)})] for t = 1, 2, . . . (3)
Algorithm. We relax the consistency model of the proximal gradient method with a block scheme to reduce the sensitivity to data inconsistency. The proposed method is shown in Algorithm 2. It differs from the standard method as well as from Algorithm 1 in four substantial ways, to take advantage of the opportunities offered by the parameter server and to handle high-dimensional sparse data:
1. Only a block of parameters is updated per iteration.
2. The workers compute both gradients and coordinate-specific learning rates, e.g., the diagonal part of the second derivative, on this block.
3. Iterations are asynchronous. We use a bounded-delay model over iterations.
4.
We employ user-defined filters to suppress the transmission of parts of the data whose effect on the model is likely to be negligible.
Convergence Analysis. To prove convergence we need a number of assumptions. As before, we decompose the loss f into blocks f_i associated with the training data stored by worker i, that is, f = Σ_i f_i. Next we assume that block b_t is chosen at iteration t. A key assumption is that, for given parameter changes, the rate of change in the gradients of f is bounded. More specifically, we need to bound the change affecting the block itself and the amount of "crosstalk" into other blocks.
Assumption 1 (Block Lipschitz Continuity) There exist positive constants L_{var,i} and L_{cov,i} such that for any iteration t and all x, y ∈ R^p with x_i = y_i for any i ∉ b_t, we have
‖∇_{b_t} f_i(x) − ∇_{b_t} f_i(y)‖ ≤ L_{var,i} ‖x − y‖ for 1 ≤ i ≤ m, (4a)
‖∇_{b_s} f_i(x) − ∇_{b_s} f_i(y)‖ ≤ L_{cov,i} ‖x − y‖ for 1 ≤ i ≤ m, t < s ≤ t + τ, (4b)
where ∇_b f(x) denotes block b of ∇f(x). Further define L_var := Σ_{i=1}^{m} L_{var,i} and L_cov := Σ_{i=1}^{m} L_{cov,i}.
The following Theorem 2 indicates that the algorithm converges to a stationary point under the relaxed consistency model, provided that a suitable learning rate is chosen. Note that since the overall objective is non-convex, no guarantees of optimality are possible in general.
Theorem 2 Assume that updates are performed with a delay bounded by τ, and that we apply a random skip filter on pushing gradients and a significantly-modified filter on pulling weights with threshold O(t^{−1}). Moreover, assume that the gradients of the loss are Lipschitz continuous as per Assumption 1. Denote by M_t the minimal coordinate-specific learning rate at time t. For any ϵ > 0, Algorithm 2 converges to a stationary point in expectation if the learning rate γ_t satisfies
γ_t ≤ M_t / (L_var + τ L_cov + ϵ) for all t > 0. (5)
The proof is given in Appendix A. Intuitively, the difference between w^{(t−τ)} and w^{(t)} will be small when approaching a stationary point.
As a consequence, the change in the gradients will also vanish. The inexact gradient obtained from the delayed and inexact model is therefore likely a good approximation of the true gradient, so the convergence results of proximal gradient methods can be applied. Note that when the delay increases, we should decrease the learning rate to guarantee convergence. However, a larger value is possible when a careful block partition and ordering are chosen. For example, if the features in a block are less correlated, then L_var decreases. If the block is less related to the previous blocks, then L_cov decreases, as also exploited in [26, 7].
5 Experiments
We now show how the general framework discussed above can be used to solve challenging machine learning problems. Due to space constraints we only present experimental results for a 0.6PB dataset below. Details on smaller datasets are relegated to Appendix B. Moreover, we discuss non-smooth Reconstruction ICA in Appendix C.
Setup. We chose ℓ1-regularized logistic regression for evaluation because it is one of the most popular algorithms used in industry for large-scale risk minimization [9]. We collected an ad click prediction dataset with 170 billion samples and 65 billion unique features. The uncompressed dataset size is 636TB. We ran the parameter server on 1000 machines, each with 16 CPU cores, 192GB of DRAM, and connected by 10 Gb Ethernet. 800 machines acted as workers, and 200 were servers. The cluster was in concurrent use by other jobs during operation.
Algorithm. We adopted Algorithm 2 with upper bounds on the diagonal entries of the Hessian as the coordinate-specific learning rates. Features were randomly split into 580 blocks according to the feature group information. We chose a fixed learning rate by observing the convergence speed. We designed a Karush-Kuhn-Tucker (KKT) filter to skip inactive coordinates. It is analogous to the active-set selection strategies of SVM optimization [16] and active set selectors [22].
Assume w_k = 0 for coordinate k, and let g_k be the current gradient. According to the optimality condition of the proximal operator (also known as the soft-shrinkage operator), w_k will remain 0 if |g_k| ≤ λ. Therefore, it is not necessary for a worker to send g_k (nor u_k). We use an old value ĝ_k to approximate g_k, so as to further avoid computing g_k. Thus, coordinate k is skipped by the KKT filter if |ĝ_k| ≤ λ − δ, where δ ∈ [0, λ] controls how aggressive the filtering is.
Implementation. To the best of our knowledge, no open-source system can scale sparse logistic regression to the scale described in this paper. GraphLab provides only a multi-threaded, single-machine implementation; we compare it with ours in Appendix B. MLbase, Petuum and REEF do not support sparse logistic regression (as confirmed with the authors in 4/2014). We compare the parameter server with two special-purpose second-generation parameter servers, named System A and System B, developed by a large Internet company. Both systems adopt the sequential consistency model, but the former uses a variant of L-BFGS while the latter runs an algorithm similar to ours. Notably, both systems consist of more than 10K lines of code. The parameter server only requires 300 lines of code for the same functionality as System B (the latter was developed by an author of this paper). The parameter server successfully moves most of the system complexity from the algorithmic implementation into reusable components.
Figure 3: Convergence of sparse logistic regression on a 636TB dataset.
Figure 4: Average time per worker spent on computation and waiting during optimization.
Figure 5: Time to reach the same convergence criterion under various allowed delays.
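A sketch tying the soft-shrinkage view to the KKT filter: for h(w) = λ‖w‖₁ and U = diag(u), the generalized proximal operator (2) reduces to per-coordinate soft-thresholding with threshold γλ/u_k (our derivation under those assumptions), and the filter skips a zero coordinate whose stale gradient lies inside the threshold band. Function names are ours.

```python
import math

def soft_threshold_diag(x, u, gamma, lam):
    """Prox of lam*||.||_1 in the norm ||.||_U with U = diag(u):
    per-coordinate soft-shrinkage with threshold gamma*lam/u_k."""
    return [math.copysign(max(abs(xk) - gamma * lam / uk, 0.0), xk)
            for xk, uk in zip(x, u)]

def kkt_skip(w_k, g_hat_k, lam, delta):
    """KKT filter: skip pushing coordinate k if its weight is zero and the
    stale gradient estimate cannot move it off zero (|g_hat| <= lam - delta)."""
    return w_k == 0.0 and abs(g_hat_k) <= lam - delta

# With U = I this is the usual soft-shrinkage operator.
assert soft_threshold_diag([3.0, -0.5], [1.0, 1.0], gamma=1.0, lam=1.0) == [2.0, 0.0]
# A coordinate with a clearly sub-threshold stale gradient is skipped ...
assert kkt_skip(0.0, 0.5, lam=1.0, delta=0.2)
# ... but one near the boundary, or with a nonzero weight, is not.
assert not kkt_skip(0.0, 0.9, lam=1.0, delta=0.2)
assert not kkt_skip(0.3, 0.1, lam=1.0, delta=0.2)
```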
Figure 6: The reduction in sent data size when stacking various filters together.
Experimental Results. We compare these systems by running them to reach the same convergence criterion. Figure 3 shows that System B outperforms System A due to its better algorithm. The parameter server, in turn, speeds up System B by a factor of 2 while using essentially the same algorithm. It achieves this because the consistency relaxations significantly reduce the waiting time (Figure 4). Figure 5 shows that increasing the allowed delay significantly decreases the waiting time, though it slightly slows the convergence. The best trade-off is a delay of 8, which results in a 1.6x speedup compared to the sequential consistency model. As can be seen in Figure 6, key caching saves 50% of the network traffic. Compression reduces the servers' traffic significantly due to the model sparsity, while it is less effective for workers because the gradients are often non-zero. These gradients can, however, be filtered efficiently by the KKT filter. In total, these filters give 40x and 12x compression rates for servers and workers, respectively.
6 Conclusion
This paper examined the application of a third-generation parameter server framework to modern distributed machine learning algorithms. We showed that it is possible to design algorithms well suited to this framework; in this case, an asynchronous block proximal gradient method to solve general non-convex and non-smooth problems, with provable convergence. This algorithm is a good match for the relaxations available in the parameter server framework: controllable asynchrony via task dependencies and user-definable filters to reduce the data communication volume. We presented experiments on several challenging tasks with real datasets of up to 0.6PB, with hundreds of billions of samples and features, to demonstrate its efficiency.
We believe that this third-generation parameter server is an important and useful building block for scalable machine learning. Finally, the source code is available at http://parameterserver.org.
References
[1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In IEEE CDC, 2012.
[2] A. Ahmed, M. Aly, J. Gonzalez, S. Narayanamurthy, and A. J. Smola. Scalable inference in latent variable models. In WSDM, 2012.
[3] A. Ahmed, N. Shervashidze, S. Narayanamurthy, V. Josifovski, and A. J. Smola. Distributed large-scale natural graph factorization. In WWW, 2013.
[4] M. Li, D. G. Andersen, J. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. In OSDI, 2014.
[5] Apache Foundation. Mahout project, 2012. http://mahout.apache.org.
[6] L. A. Barroso and U. Hölzle. The datacenter as a computer: An introduction to the design of warehouse-scale machines. Synthesis Lectures on Computer Architecture, 4(1):1–108, 2009.
[7] J. K. Bradley, A. Kyrola, D. Bickson, and C. Guestrin. Parallel coordinate descent for L1-regularized loss minimization. In ICML, 2011.
[8] J. Byers, J. Considine, and M. Mitzenmacher. Simple load balancing for distributed hash tables. In Peer-to-Peer Systems II, pages 80–87. Springer, 2003.
[9] K. Canini. Sibyl: A system for large scale supervised machine learning. Technical talk, 2012.
[10] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, Q. Le, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Ng. Large scale distributed deep networks. In NIPS, 2012.
[11] J. Dean and S. Ghemawat. MapReduce: simplified data processing on large clusters. CACM, 2008.
[12] Domo. Data Never Sleeps 2.0, 2014. http://www.domo.com/learn.
[13] The Apache Software Foundation. Apache Hadoop, 2009. http://hadoop.apache.org/core/.
[14] S. H. Gunderson. Snappy. https://code.google.com/p/snappy/.
[15] Q. Ho, J. Cipar, H. Cui, S. Lee, J. Kim, P. Gibbons, G. Gibson, G.
Ganger, and E. Xing. More effective distributed ML via a stale synchronous parallel parameter server. In NIPS, 2013.
[16] T. Joachims. Making large-scale SVM learning practical. Advances in Kernel Methods, 1999.
[17] J. Langford, A. J. Smola, and M. Zinkevich. Slow learners are fast. In NIPS, 2009.
[18] Q. V. Le, A. Karpenko, J. Ngiam, and A. Y. Ng. ICA with reconstruction cost for efficient overcomplete feature learning. In NIPS, 2011.
[19] M. Li, D. G. Andersen, and A. J. Smola. Distributed delayed proximal gradient methods. In NIPS Workshop on Optimization for Machine Learning, 2013.
[20] M. Li, L. Zhou, Z. Yang, A. Li, F. Xia, D. G. Andersen, and A. J. Smola. Parameter server for distributed machine learning. In Big Learning NIPS Workshop, 2013.
[21] Y. Low, J. Gonzalez, A. Kyrola, D. Bickson, C. Guestrin, and J. M. Hellerstein. Distributed GraphLab: A framework for machine learning and data mining in the cloud. In PVLDB, 2012.
[22] S. Matsushima, S. V. N. Vishwanathan, and A. J. Smola. Linear support vector machines via dual cached loops. In KDD, 2012.
[23] N. Parikh and S. Boyd. Proximal algorithms. In Foundations and Trends in Optimization, 2013.
[24] K. B. Petersen and M. S. Pedersen. The matrix cookbook, 2008. Version 20081110.
[25] A. Phanishayee, D. G. Andersen, H. Pucha, A. Povzner, and W. Belluomini. Flex-KV: Enabling high-performance and flexible KV systems. In Management of Big Data Systems, 2012.
[26] B. Recht, C. Re, S. J. Wright, and F. Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[27] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 2012.
[28] A. Rowstron and P. Druschel. Pastry: Scalable, decentralized object location and routing for large-scale peer-to-peer systems. In Distributed Systems Platforms, 2001.
[29] A. J. Smola and S. Narayanamurthy. An architecture for parallel topic models.
In VLDB, 2010.
[30] E. Sparks, A. Talwalkar, V. Smith, J. Kottalam, X. Pan, J. Gonzalez, M. J. Franklin, M. I. Jordan, and T. Kraska. MLI: An API for distributed machine learning. 2013.
[31] S. Sra. Scalable nonconvex inexact proximal splitting. In NIPS, 2012.
[32] I. Stoica, R. Morris, D. Karger, M. F. Kaashoek, and H. Balakrishnan. Chord: A scalable peer-to-peer lookup service for internet applications. SIGCOMM Computer Communication Review, 2001.
[33] C. Teflioudi, F. Makari, and R. Gemulla. Distributed matrix completion. In ICDM, 2012.
[34] C. H. Teo, S. V. N. Vishwanthan, A. J. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. JMLR, January 2010.
[35] R. van Renesse and F. B. Schneider. Chain replication for supporting high throughput and availability. In OSDI, 2004.
[36] G. X. Yuan, K. W. Chang, C. J. Hsieh, and C. J. Lin. A comparison of optimization methods and software for large-scale l1-regularized linear classification. JMLR, 2010.
[37] M. Zaharia, M. Chowdhury, T. Das, A. Dave, J. Ma, M. McCauley, M. J. Franklin, S. Shenker, and I. Stoica. Fast and interactive analytics over Hadoop data with Spark. USENIX ;login:, August 2012.
[38] M. Zinkevich, A. J. Smola, M. Weimer, and L. Li. Parallelized stochastic gradient descent. In NIPS, 2010.
Beyond the Birkhoff Polytope: Convex Relaxations for Vector Permutation Problems
Cong Han Lim
Department of Computer Sciences, University of Wisconsin - Madison, Madison, WI 53706
conghan@cs.wisc.edu
Stephen J. Wright
Department of Computer Sciences, University of Wisconsin - Madison, Madison, WI 53706
swright@cs.wisc.edu
Abstract
The Birkhoff polytope (the convex hull of the set of permutation matrices), which is represented using Θ(n²) variables and constraints, is frequently invoked in formulating relaxations of optimization problems over permutations. Using a recent construction of Goemans [1], we show that when optimizing over the convex hull of the permutation vectors (the permutahedron), we can reduce the number of variables and constraints to Θ(n log n) in theory and Θ(n log² n) in practice. We modify the recent convex formulation of the 2-SUM problem introduced by Fogel et al. [2] to use this polytope, and demonstrate how we can attain results of similar quality in significantly less computational time for large n. To our knowledge, this is the first usage of Goemans' compact formulation of the permutahedron in a convex optimization problem. We also introduce a simpler regularization scheme for this convex formulation of the 2-SUM problem that yields good empirical results.
1 Introduction
A typical workflow for converting a discrete optimization problem over the set of permutations of n objects into a continuous relaxation is as follows: (1) use permutation matrices to represent permutations; (2) relax to the convex hull of the set of permutation matrices (the Birkhoff polytope); (3) relax other constraints to ensure convexity/continuity. Instances of this procedure appear in [3, 2]. Representation of the Birkhoff polytope requires Θ(n²) variables, significantly more than the n variables required to represent the permutation directly.
The increase in dimension is unappealing, especially if we are only interested in optimizing over permutation vectors, as opposed to permutations of a more complex object such as a graph. The obvious alternative of using a relaxation based on the convex hull of the set of permutations (the permutahedron) is computationally infeasible, because the permutahedron has exponentially many facets (whereas the Birkhoff polytope has only n² facets). We can achieve a better trade-off between the number of variables and facets by using sorting networks to construct polytopes that can be linearly projected to recover the permutahedron. This construction, introduced by Goemans [1], can have as few as Θ(n log n) facets, which is optimal up to constant factors. In this paper, we use a relaxation based on these polytopes, which we call "sorting network polytopes."
We apply the sorting network polytope to the noisy seriation problem, defined as follows. Given a noisy similarity matrix A, recover a symmetric row/column ordering of A for which the entries generally decrease with distance from the diagonal. Fogel et al. [2] introduced a convex relaxation of the 2-SUM problem to solve the noisy seriation problem. They proved that the solution to the 2-SUM problem recovers the exact solution of the seriation problem in the "noiseless" case (in which an ordering exists that ensures monotonic decrease of similarity measures with distance from the diagonal). They further show that the formulation allows side information about the ordering to be incorporated, and is more robust to noise than a spectral formulation of the 2-SUM problem described by Atkins et al. [4]. The formulation in [2] makes use of the Birkhoff polytope. We propose instead a formulation based on the sorting network polytope. Performing convex optimization over the sorting network polytope requires different techniques from those described in [2].
In addition, we describe a new regularization scheme, applicable both to our formulation and to that of [2], that is more natural for the 2-SUM problem and has good practical performance.
The paper is organized as follows. We begin by describing polytopes for representing permutations in Section 2. In Section 3, we introduce the seriation problem and the 2-SUM problem, describe two continuous relaxations of the latter (one of which uses the sorting network polytope), and introduce our regularization scheme for strengthening the relaxations. Issues that arise in using the sorting network polytope are discussed in Section 4. In Section 5, we provide experimental results showing the effectiveness of our approach. The extended version of this paper [5] includes some additional computational results, along with several proofs. It also describes an efficient algorithm for taking a conditional gradient step for the convex formulation, for the case in which the formulation contains no side information.
2 Permutahedron, Birkhoff Polytope, and Sorting Networks
We use n throughout the paper to refer to the length of the permutation vectors. π_{I_n} = (1, 2, . . . , n)^T denotes the identity permutation. (When the size n can be inferred from the context, we write the identity permutation as π_I.) P_n denotes the set of all permutation vectors of length n. We use π ∈ P_n to denote a generic permutation, and denote its components by π(i), i = 1, 2, . . . , n. We use 1 to denote the vector of length n whose components are all 1.
Definition 2.1. The permutahedron PH_n, the convex hull of P_n, is defined as follows:
PH_n := { x ∈ R^n : Σ_{i=1}^{n} x_i = n(n+1)/2, Σ_{i∈S} x_i ≤ Σ_{i=1}^{|S|} (n + 1 − i) for all S ⊂ [n] }.
The permutahedron PH_n has 2^n − 2 facets, which prevents us from using it in optimization problems directly. (We should note, however, that the permutahedron is a submodular polyhedron and hence admits efficient algorithms for certain optimization problems.)
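Definition 2.1 can be checked by brute force for small n (the 2^n − 2 subset inequalities make this exponential, so it is purely illustrative); the sketch, with function names ours, also confirms that projecting a permutation matrix via x_i = Σ_j j·X_ij yields a point of PH_n.

```python
from itertools import combinations

def in_permutahedron(x, tol=1e-9):
    """Brute-force membership test for PH_n via Definition 2.1:
    one equality plus an inequality for every proper subset S."""
    n = len(x)
    if abs(sum(x) - n * (n + 1) / 2) > tol:
        return False
    for size in range(1, n):
        bound = sum(n + 1 - i for i in range(1, size + 1))  # |S| largest values
        if any(sum(x[i] for i in S) > bound + tol
               for S in combinations(range(n), size)):
            return False
    return True

assert in_permutahedron([2, 4, 1, 3])            # a permutation vector
assert in_permutahedron([2.5, 2.5, 2.5, 2.5])    # the centroid of PH_4
assert not in_permutahedron([4, 4, 1, 1])        # violates the size-2 bound

# Projecting a permutation matrix X via x_i = sum_j j * X_ij recovers the
# permutation vector itself, hence a point of PH_n.
perm = [2, 4, 1, 3]
X = [[1.0 if j + 1 == perm[i] else 0.0 for j in range(4)] for i in range(4)]
proj = [sum((j + 1) * X[i][j] for j in range(4)) for i in range(4)]
assert proj == perm and in_permutahedron(proj)
```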
Relaxations are commonly derived instead from the set of permutation matrices (the set of n × n matrices containing zeros and ones, with a single one in each row and column) and its convex hull.

Definition 2.2. The convex hull of the set of n × n permutation matrices is the Birkhoff polytope B_n, which is the set of all doubly stochastic n × n matrices:

    B_n = { X ∈ R^{n×n} | X ≥ 0, X1 = 1, X^T 1 = 1 }.

The Birkhoff polytope has been widely used in the machine learning and computer vision communities for various permutation problems (see for example [2], [3]). The permutahedron can be represented as the projection of the Birkhoff polytope from R^{n×n} to R^n via x_i = Σ_{j=1}^n j · X_{ij}. The Birkhoff polytope is sometimes said to be an extended formulation of the permutahedron. A natural question to ask is whether a more compact extended formulation exists for the permutahedron. Goemans [1] answered this question in the affirmative by constructing one with Θ(n log n) constraints and variables, which is optimal up to constant factors. His construction is based on sorting networks, a collection of wires and binary comparators that sorts a list of numbers. Figure 1 displays a sorting network on 4 variables. (See [6] for further information on sorting networks.) Given a sorting network on n inputs with m comparators (we will subsequently always use m to refer to the number of comparators), an extended formulation for the permutahedron with O(m) variables and constraints can be constructed as follows [1]. Referring to the notation in the right subfigure of Figure 1, we introduce a set of constraints for each comparator k = 1, 2, ..., m to relate the two inputs and the two outputs of that comparator:

    x^k_{(in,top)} + x^k_{(in,bot)} = x^k_{(out,top)} + x^k_{(out,bot)},  x^k_{(out,top)} ≤ x^k_{(in,top)},  x^k_{(out,top)} ≤ x^k_{(in,bot)}.
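The projection from the Birkhoff polytope to the permutahedron is easy to state concretely. A small sketch (our own illustration, not code from the paper):

```python
import numpy as np

def project_to_permutahedron(X):
    """Project a doubly stochastic matrix X to a point in PH_n
    via x_i = sum_j j * X_ij."""
    n = X.shape[0]
    return X @ np.arange(1, n + 1)

# A permutation matrix maps to the corresponding permutation vector...
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(project_to_permutahedron(P))   # [2. 3. 1.]

# ...and a convex combination of permutation matrices maps to an
# interior point of the permutahedron.
X = 0.5 * P + 0.5 * np.eye(3)
print(project_to_permutahedron(X))   # [1.5 2.5 2. ]
```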
(1)

Note that these constraints require the sum of the two inputs to equal the sum of the two outputs, but the inputs can be closer together than the outputs. Let x^in_i and x^out_i, i = 1, 2, ..., n, denote the x variables corresponding to the ith input and ith output of the entire sorting network, respectively. We introduce the additional constraints

    x^out_i = i, for i ∈ [n].    (2)

Figure 1: A bitonic sorting network on 4 variables (left) and the k-th comparator (right). The input to the sorting network is on the left and the output is on the right. At each comparator, we take the two input values and sort them so that the smaller value is the one at the top in the output. Sorting takes place progressively as we move from left to right through the network, sorting pairs of values as we encounter comparators.

The details of this construction depend on the particular choice of sorting network (see Section 4), but we will refer to it generically as the sorting network polytope SN_n. Each element of this polytope can be viewed as a concatenation of two vectors: the subvector associated with the network inputs, x^in = (x^in_1, x^in_2, ..., x^in_n), and the rest of the coordinates, x^rest, which includes all the internal variables as well as the outputs. The following theorem attests that any input vector x^in that is part of a feasible vector for the entire network is a point in the permutahedron:

Theorem 2.3 (Goemans [1]). The set { x^in | (x^in, x^rest) ∈ SN_n } is the permutahedron PH_n.

3 Convex Relaxations of 2-SUM via the Sorting Network Polytope

In this section we briefly describe the seriation problem, and some of the continuous relaxations of the combinatorial 2-SUM problem that can be used to solve it. The Noiseless Seriation Problem.
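To make the comparator constraints concrete, the following sketch (our own illustration; it uses Batcher's 5-comparator odd-even network for n = 4 as a convenient stand-in for the bitonic network of Figure 1) checks that feeding any permutation through a valid network satisfies the constraints (1) at every comparator and produces the identity output required by (2):

```python
from itertools import permutations

# A sorting network given as a list of comparators (i, j) with i < j;
# each comparator puts the minimum on wire i ("top") and the maximum on
# wire j ("bottom"). This is Batcher's odd-even network for n = 4.
NETWORK4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def run_network(x, comparators):
    """Run x through the network, verifying the comparator constraints (1)."""
    x = list(x)
    for (i, j) in comparators:
        top_in, bot_in = x[i], x[j]
        x[i], x[j] = min(top_in, bot_in), max(top_in, bot_in)
        # constraints (1): sum preserved; top output below both inputs
        assert x[i] + x[j] == top_in + bot_in
        assert x[i] <= top_in and x[i] <= bot_in
    return x

# Every permutation input yields the identity output (1, 2, 3, 4),
# i.e. the output constraints (2) are met exactly for permutation inputs.
for p in permutations([1, 2, 3, 4]):
    assert run_network(p, NETWORK4) == [1, 2, 3, 4]
```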
The term seriation generally refers to data analysis techniques that arrange objects in a linear ordering in a way that fits available information and thus reveals underlying structure of the system [7]. We adopt here the definition of the seriation problem from [4]. Suppose we have n objects arranged along a line, and a similarity function that decreases with distance between objects in the line. The similarity matrix is the symmetric n × n matrix whose (i, j) entry is the similarity measure between the ith and jth objects in the linear arrangement. This similarity matrix is an R-matrix, according to the following definition.

Definition 3.1. A symmetric matrix A is a Robinson matrix (R-matrix) if, for all (i, j) with i > j, we have A_{ij} ≤ min(A_{(i−1)j}, A_{i(j+1)}). A symmetric matrix A is a pre-R matrix if Π^T A Π is an R-matrix for some permutation matrix Π.

In other words, a symmetric matrix is an R-matrix if its entries are nonincreasing as we move away from the diagonal in either the horizontal or the vertical direction. The goal of the noiseless seriation problem is to recover the ordering of the objects along the line from the pairwise similarity data, which is equivalent to finding the permutation that recovers an R-matrix from a pre-R-matrix. The seriation problem was introduced in the archaeology literature [8], and has applications across a wide range of areas including clustering [9], shotgun DNA sequencing [2], and taxonomy [10]. R-matrices are useful in part because of their relation to the consecutive-ones property of a matrix of zeros and ones, in which the ones in each column form a contiguous block. A matrix M with the consecutive-ones property gives rise to an R-matrix MM^T.

Noisy Seriation, 2-SUM and Continuous Relaxations. Given a binary symmetric matrix A, the 2-SUM problem can be expressed as follows:

    min_{π∈P_n} Σ_{i=1}^n Σ_{j=1}^n A_{ij} (π(i) − π(j))^2.    (3)

A slightly simpler but equivalent formulation, defined via the Laplacian L_A = diag(A1) − A, is

    min_{π∈P_n} π^T L_A π.
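The equivalence of (3) and (4) comes from the standard identity Σ_{i,j} A_{ij}(π(i) − π(j))^2 = 2 π^T L_A π, so the two objectives differ only by a constant factor. This can be checked numerically; the random symmetric matrix below is our own stand-in test instance, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.random((n, n))
A = (A + A.T) / 2            # symmetrize
np.fill_diagonal(A, 0)
L = np.diag(A @ np.ones(n)) - A      # L_A = diag(A 1) - A
pi = rng.permutation(np.arange(1, n + 1)).astype(float)

# 2-SUM objective (3), computed directly from the definition
two_sum = sum(A[i, j] * (pi[i] - pi[j]) ** 2
              for i in range(n) for j in range(n))

# ...equals twice the Laplacian quadratic form of (4)
assert np.isclose(two_sum, 2 * pi @ L @ pi)
```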
(4)

The seriation problem is closely related to the combinatorial 2-SUM problem: Fogel et al. [2] proved that if A is a pre-R-matrix such that each row/column has unique entries, then the solution of the 2-SUM problem also solves the noiseless seriation problem. In another relaxation of the 2-SUM problem, Atkins et al. [4] demonstrate that a spectral method, based on the eigenvector corresponding to the second-smallest eigenvalue of L_A (the Fiedler value), solves the noiseless seriation problem. Hence, the 2-SUM problem provides a good model for the noisy seriation problem, in which the similarity matrices are close to, but not exactly, pre-R matrices. The 2-SUM problem is known to be NP-hard [11], so we seek efficient relaxations. We describe below two continuous relaxations that are computationally practical. (Other relaxations of these problems require the solution of semidefinite programs and are intractable in practice for large n.) The spectral formulation of [4] seeks the Fiedler value by searching over the space orthogonal to the vector 1, which is the eigenvector corresponding to the zero eigenvalue. The Fiedler value is the optimal objective value of the following problem:

    min_{y∈R^n} y^T L_A y  such that  y^T 1 = 0, ∥y∥_2 = 1.    (5)

This problem is nonconvex, but its solution can be found efficiently from an eigenvalue decomposition of L_A. Given the Fiedler vector y, one can obtain a candidate solution to the 2-SUM problem by picking the permutation π ∈ P_n to have the same ordering as the elements of y. The spectral formulation (5) is a continuous relaxation of the 2-SUM problem (4). The second relaxation of (4), described by Fogel et al. [2], makes use of the Birkhoff polytope B_n. The basic version of the formulation is

    min_{Π∈B_n} π_I^T Π^T L_A Π π_I,    (6)

(recall that π_I is the identity permutation (1, 2, ..., n)^T), which is a convex quadratic program in the n^2 components of Π. Fogel et al. augment and enhance this formulation as follows.
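A small end-to-end sketch of the spectral approach (5) follows. This is our own illustration, not code from the paper: the banded Toeplitz similarity matrix is an assumed toy R-matrix, and we recover the hidden ordering from the Fiedler vector.

```python
import numpy as np

# Build a toy noiseless R-matrix: similarity 5 - |i - j|, clipped at 0.
n = 8
idx = np.arange(n)
A_R = np.maximum(0, 5 - np.abs(idx[:, None] - idx[None, :]))

# Hide the ordering by permuting rows and columns symmetrically (pre-R matrix).
perm = np.random.default_rng(1).permutation(n)
A = A_R[np.ix_(perm, perm)]

# Fiedler vector: eigenvector for the second-smallest eigenvalue of L_A.
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)   # eigh returns ascending eigenvalues
fiedler = eigvecs[:, 1]
order = np.argsort(fiedler)

# The induced ordering (or its reverse) un-shuffles the permutation.
recovered = perm[order]
assert np.all(np.diff(recovered) == 1) or np.all(np.diff(recovered) == -1)
```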
• Introduce a "tiebreaking" constraint e_1^T Π π_I + 1 ≤ e_n^T Π π_I to resolve ambiguity about the direction of the ordering, where e_k = (0, ..., 0, 1, 0, ..., 0)^T with the 1 in position k.
• Average over several perturbations of π_I to improve robustness of the solution.
• Add a penalty to maximize the Frobenius norm of the matrix Π, which pushes the solution closer to a vertex of the Birkhoff polytope.
• Incorporate additional ordering constraints of the form x_i − x_j ≤ δ_k, to exploit prior knowledge about the ordering.

With these modifications, the problem to be solved is

    min_{Π∈B_n} (1/p) Trace(Y^T Π^T L_A Π Y) − (µ/p) ∥PΠ∥_F^2  such that  DΠπ_I ≤ δ,    (7)

where each column of Y ∈ R^{n×p} is a slightly perturbed version of a permutation,¹ µ is the regularization coefficient, the constraint DΠπ_I ≤ δ contains the ordering information and tiebreaking constraints, and the operator P = I − (1/n) 1 1^T projects onto the components orthogonal to the all-ones matrix. The penalization is applied to ∥PΠ∥_F^2 rather than to ∥Π∥_F^2 directly, ensuring that the program remains convex when the regularization factor is sufficiently small (a sufficient condition is µ < λ_2(L_A) λ_1(Y Y^T)). We refer to this regularization scheme as matrix-based regularization, and to the formulation (7) as the matrix-regularized Birkhoff-based convex formulation. Figure 2 illustrates the permutahedron in the case n = 3, and compares minimization of the objective y^T L_A y over the permutahedron (as attempted by the convex formulation) with minimization of the same objective over the constraints of the spectral formulation (5). The spectral method returns good solutions when the noise is low, and it is computationally efficient, since many fast algorithms and software packages are available for obtaining selected eigenvectors. However, the Birkhoff-based convex formulation can return a significantly better solution in situations with high noise or significant additional ordering information.
For the rest of this section, we focus on the convex formulation.

¹ In [2], each column of Y is said to contain a perturbation of π_I, but in a response to referees of their paper, the authors say that they used sorted uniform random vectors instead in the revised version.

Figure 2: A geometric interpretation of the spectral and convex formulation solutions on the 3-permutahedron. The left image shows the 3-permutahedron in 3D space, with the dashed line showing the eigenvector 1 corresponding to the zero eigenvalue. The right image shows the projection of the 3-permutahedron along the trivial eigenvector, together with the elliptical level curves of the objective function y^T L_A y. Points on the circumscribed circle have an ℓ2-norm equal to that of a permutation, and the objective is minimized over this circle at the point denoted by a cross. The vertical line in the right figure enforces the tiebreaking constraint that 1 must appear before 3 in the ordering; the red dot indicates the minimizer of the objective over the resulting triangular feasible region. Without the tiebreaking constraint, the minimizer is at the center of the permutahedron.

A Compact Convex Relaxation via the Permutahedron/Sorting Network Polytope and a New Regularization Scheme. We now consider a different relaxation of the 2-SUM problem (4). Taking the convex hull of P_n directly, we obtain

    min_{x∈PH_n} x^T L_A x.    (8)

This is essentially a permutahedron-based version of (6). In fact, the two problems are equivalent, except that formulation (8) is more compact when we enforce x ∈ PH_n via the sorting network constraints x ∈ { x^in | (x^in, x^rest) ∈ SN_n }, where SN_n incorporates the comparator constraints (1) and the output constraints (2). This formulation can be enhanced and augmented in a similar fashion to (6). The tiebreaking constraint for this formulation can be expressed simply as x_1 + 1 ≤ x_n, since x^in consists of the subvector (x_1, x_2, ..., x_n).
(In both (8) and (6), at least one additional constraint is necessary to remove the trivial solution given by the center of the permutahedron or the Birkhoff polytope; see Figure 2.) This constraint is the strongest inequality that does not eliminate any permutation (assuming that a permutation and its reverse are equivalent); we include a proof of this fact in [5]. It is also helpful to introduce a penalty that forces the solution x closer to a permutation, that is, to a vertex of the permutahedron. To this end, we introduce a vector-based regularization scheme. The following statement is an immediate consequence of the strict convexity of norms.

Proposition 3.2. Let v ∈ R^n, and let X be the convex hull of all permutations of v. Then the points in X with the highest ℓ_p norm, for 1 < p < ∞, are precisely the permutations of v.

It follows that adding a penalty encouraging ∥x∥_2 to be large may improve solution quality. However, directly penalizing the negative of the 2-norm of x would destroy convexity, since L_A has a zero eigenvalue. Instead we penalize Px, where P = I − (1/n) 1 1^T projects onto the subspace orthogonal to the trivial eigenvector 1. (Note that this projection of the permutahedron still satisfies the assumptions of Proposition 3.2.) When we include a penalty on ∥Px∥_2^2 in the formulation (8), along with side constraints Dx ≤ δ on the ordering, we obtain the objective x^T L_A x − µ∥Px∥_2^2, which leads to

    min_{x∈PH_n} x^T (L_A − µP) x  such that  Dx ≤ δ.    (9)

This objective is convex when µ ≤ λ_2(L_A), a looser condition on µ than in the matrix-based regularization. We refer to (9) as the regularized permutahedron-based convex formulation. Vector-based regularization can also be incorporated into the Birkhoff-based convex formulation: instead of maximizing the ∥PΠ∥_F^2 term in formulation (7) to force the solution closer to a permutation, we can maximize ∥PΠY∥_F^2.
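The convexity condition for (9) can be checked numerically: L_A − µP has eigenvalue 0 on the span of 1, and eigenvalues λ_i(L_A) − µ on the orthogonal complement, so it is positive semidefinite exactly when µ ≤ λ_2(L_A). A sketch with a random test matrix of our own (not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 7
A = rng.random((n, n))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)
L = np.diag(A @ np.ones(n)) - A        # graph Laplacian L_A
P = np.eye(n) - np.ones((n, n)) / n    # projection onto the complement of 1
lam2 = np.sort(np.linalg.eigvalsh(L))[1]

def is_psd(M, tol=1e-9):
    return np.linalg.eigvalsh(M).min() >= -tol

assert is_psd(L - 0.99 * lam2 * P)     # mu below lambda_2: objective convex
assert not is_psd(L - 1.50 * lam2 * P) # mu above lambda_2: convexity lost
```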
The vector-regularized version of (6) with side constraints can be written as follows:

    min_{Π∈B_n} (1/p) Trace(Y^T Π^T (L_A − µP) Π Y)  such that  DΠπ_I ≤ δ.    (10)

We refer to this formulation as the vector-regularized Birkhoff-based convex formulation. Vector-based regularization is in some sense more natural than the regularization in (7): it acts directly on the set over which we are optimizing, rather than on an expanded set. The looser condition µ ≤ λ_2(L_A) allows for stronger regularization. Experiments reported in [5] show that vector-based regularization produces permutations that are consistently better than those obtained from the matrix-based regularization. The regularized permutahedron-based convex formulation is a convex QP with O(m) variables and constraints, where m is the number of comparators in its sorting network, while the Birkhoff-based one is a convex QP with O(n^2) variables. The one feature of the Birkhoff-based formulations that the permutahedron-based formulations lack is the ability to average the solution over multiple vectors, by choosing p > 1 columns in the matrix Y ∈ R^{n×p}. However, our experiments suggested that the best solutions were obtained for p = 1, so this consideration was not important in practice.

4 Key Implementation Issues

Choice of Sorting Network. There are numerous possible choices for the sorting network from which the constraints in formulation (9) are derived. The asymptotically most compact option is the AKS sorting network, which contains Θ(n log n) comparators. This network was introduced in [12] and subsequently improved by others, but it is impractical because of the difficulty of its construction and the large constant factor in the complexity expression. We opt instead for more elegant networks with slightly worse asymptotic complexity. Batcher [13] introduced two sorting networks of size Θ(n log^2 n), the odd-even sorting network and the bitonic sorting network, that are popular and practical.
The sorting network polytope based on these networks can be generated by a simple recursive algorithm in Θ(n log^2 n) time.

Obtaining Permutations from a Point in the Permutahedron. Solution of the permutahedron-based relaxation yields a point x in the permutahedron, but we need techniques to convert this point into a valid permutation, which is a candidate solution for the 2-SUM problem (3). The most obvious recovery technique is to choose the permutation π to have the same ordering as the elements of x; that is, x_i < x_j implies π(i) < π(j) for all i, j ∈ {1, 2, ..., n}. We can also sample multiple permutations, by applying Gaussian noise to the components of x before taking the ordering that produces π. (We used i.i.d. noise with variance 0.5.) The 2-SUM objective (3) can be evaluated for each permutation so obtained, with the best one reported as the overall solution. This inexpensive randomized recovery procedure can be repeated many times, and it yields significantly better results than the single "obvious" ordering.

Solving the Convex Formulation. On our test machine, using the Gurobi interior-point solver, we were able to solve instances of the permutahedron-based convex formulation (9) of size up to around n = 10000. As in [2], first-order methods can be employed at larger scales. In [5], we provide an optimal O(n log n) algorithm for the conditional gradient step, for the case in which only the tiebreaking constraint is present, with no additional ordering constraints.

5 Experiments

We compare the run time and solution quality of algorithms on the two classes of convex formulations (Birkhoff-based and permutahedron-based) with various parameters. Summary results are presented in this section. Additional results, including more extensive experiments comparing the effects of different parameters on solution quality, appear in [5].

Experimental Setup.
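The randomized recovery procedure described above can be sketched as follows (our own code, not the authors'; the noise variance 0.5 matches the text, while the trial count and function names are arbitrary choices):

```python
import numpy as np

def two_sum_objective(A, pi):
    """2-SUM objective via the Laplacian quadratic form pi^T L_A pi."""
    L = np.diag(A @ np.ones(len(pi))) - A
    return pi @ L @ pi

def recover_permutation(x, A, trials=100, seed=0):
    """Round a permutahedron point x to a permutation: perturb x with
    Gaussian noise, take the induced ordering, keep the best 2-SUM value."""
    rng = np.random.default_rng(seed)
    n = len(x)
    best_pi, best_val = None, np.inf
    for t in range(trials):
        # t = 0 uses the plain "obvious" ordering; later trials add noise
        noise = 0.0 if t == 0 else rng.normal(0.0, np.sqrt(0.5), n)
        pi = np.empty(n)
        pi[np.argsort(x + noise)] = np.arange(1, n + 1)  # rank of each x_i
        val = two_sum_objective(A, pi)
        if val < best_val:
            best_pi, best_val = pi, val
    return best_pi, best_val
```

Note that `pi[np.argsort(...)] = arange(1, n+1)` realizes the rule "x_i < x_j implies π(i) < π(j)".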
The experiments were run on an Intel Xeon X5650 server (24 cores at 2.66 GHz) with 128 GB of RAM, using MATLAB 7.13, CVX 2.0 ([14], [15]), and Gurobi 5.5 [16]. We tested four formulation-algorithm-implementation variants, as follows: (i) the spectral method, using the MATLAB eigs function; (ii) MATLAB/Gurobi on the permutahedron-based convex formulation; (iii) MATLAB/Gurobi on the Birkhoff-based convex formulation with p = 1 (that is, formulation (7) with Y = π_I); and (iv) experimental MATLAB code provided to us by the authors of [2], implementing FISTA for the matrix-regularized Birkhoff-based convex formulation (7), with projection steps solved using block coordinate ascent on the dual problem. The latter is the current state-of-the-art algorithm for large instances of the Birkhoff-based convex formulation; we refer to it as RQPS (for "Regularized QP for Seriation"). We report run time data using wall clock time as reported by Gurobi, and MATLAB timings for RQPS, excluding all preprocessing time. We used Batcher's bitonic sorting network [13] for experiments with the permutahedron-based formulation.

Linear Markov Chain. The Markov chain reordering problem [2] involves recovering the ordering of a simple Markov chain with Gaussian noise from disordered samples. The Markov chain consists of random variables X_1, X_2, ..., X_n such that X_i = bX_{i−1} + ε_i, where b is a positive constant and ε_i ∼ N(0, σ^2). A sample covariance matrix, taken over multiple independent samples of the Markov chain with permuted labels, is used as the similarity matrix in the 2-SUM problem. We use this problem for two different comparisons. First, we compare the solution quality and running time of the RQPS algorithm of [2] with the Gurobi interior-point solver on the regularized permutahedron-based convex formulation, to demonstrate the performance of the formulation and algorithm introduced in this paper against the prior state of the art.
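The data-generation step for this experiment can be sketched as follows (our own reconstruction from the description above; the function name, defaults, and use of `np.cov` are our assumptions):

```python
import numpy as np

def markov_chain_similarity(n, chains=50, b=0.999, sigma=0.5, seed=0):
    """Generate `chains` samples of X_i = b*X_{i-1} + eps_i, permute the
    variable labels, and return the sample covariance used as the
    similarity matrix A for 2-SUM, along with the hidden permutation."""
    rng = np.random.default_rng(seed)
    X = np.zeros((chains, n))
    X[:, 0] = rng.normal(0.0, sigma, chains)
    for i in range(1, n):
        X[:, i] = b * X[:, i - 1] + rng.normal(0.0, sigma, chains)
    perm = rng.permutation(n)                    # scramble the labels
    A = np.cov(X[:, perm], rowvar=False)         # columns are variables
    return A, perm
```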
Second, we apply Gurobi to both the permutahedron-based and Birkhoff-based formulations with p = 1, with the goal of discovering which formulation is more efficient in practice. For both sets of experiments, we fixed b = 0.999 and σ = 0.5 and generated 50 chains to form a sample covariance matrix. We chose n ∈ {500, 2000, 5000} to see how algorithm performance scales with n. For each n, we performed 10 independent runs, each based on a different set of samples of the Markov chain (and hence a different sample covariance matrix). We added n ordering constraints for each run. Each ordering constraint is of the form x_i + π(j) − π(i) ≤ x_j, where π is the (known) permutation that recovers the original matrix, and (i, j) is a randomly chosen pair from [n] satisfying π(j) − π(i) > 0. We used a regularization parameter of µ = 0.9 λ_2(L_A) in all formulations.

RQPS and the Permutahedron-Based Formulation. We compare the RQPS code for the matrix-regularized Birkhoff-based convex formulation (7) with the regularized permutahedron-based convex formulation, solved with Gurobi. We fixed a time limit for each value of n and ran the RQPS algorithm until the limit was reached; at fixed time intervals, we queried the current solution and sampled permutations from that point.

Figure 3: Plot of 2-SUM objective over time (in seconds) for n ∈ {500, 2000, 5000}. We choose the run (out of ten) that shows the best results for RQPS relative to the interior-point algorithm on the regularized permutahedron-based formulation. We test four different variants of RQPS. The curves represent the performance of the RQPS code for varying values of p (1 for red/green and n for blue/cyan) and of the cap on the maximum number of iterations in the projection step (10 for red/blue and 100 for green/cyan). The white square represents the spectral solution, and the magenta diamond represents the solution returned by Gurobi for the permutahedron-based formulation.
The horizontal axis in each graph is positioned at the 2-SUM objective value corresponding to the permutation that recovers the original labels for the sample covariance matrix.

For RQPS with a cap of 10 iterations within each projection step, the objective tends to descend rapidly to a certain level, then fluctuates around that level (or gets slightly worse) for the rest of the running time. With a limit of 100 iterations, there is less fluctuation in the 2-SUM value, but it takes longer to produce a solution as good as in the previous case. In contrast to the experience reported in [2], values of p greater than 1 do not seem to help; our runs with p = n plateaued at higher values of the 2-SUM objective than those with p = 1. In most cases, the regularized permutahedron-based formulation gives a better solution value than the RQPS method, but there are occasional exceptions to this trend. For example, in the third run for n = 500 (the left plot in Figure 3), one variant of RQPS converges to a significantly better solution. Despite its very fast runtimes, the spectral method does not yield solutions of competitive quality, because it cannot make use of the side constraint information.

Direct Comparison of Birkhoff and Permutahedron Formulations. For the second set of experiments, we compare the convergence rate of the objective value in the Gurobi interior-point solver applied to two equivalent formulations: the vector-regularized Birkhoff-based convex formulation (10) with p = 1 and the regularized permutahedron-based convex formulation (9). For each choice of input matrix and sampled ordering information, we ran the Gurobi interior-point method on both formulations. In Figure 4, we plot, at each iteration, the difference between the primal objective and the baseline objective value v.

Figure 4: Plot of the difference of the 2-SUM objective from the baseline objective over time (in seconds) for n ∈ {2000, 5000}.
The red curve represents the performance of the permutahedron-based formulation; the blue curve represents the performance of the Birkhoff-based formulation. We display the best run (out of ten) for the Birkhoff-based formulation for each n. When n = 2000, the permutahedron-based formulation converges slightly faster in most cases. Once we scale up to n = 5000, however, the permutahedron-based formulation converges significantly faster in all tests. Our comparisons show that the permutahedron-based formulation tends to yield better solutions in less time than the Birkhoff-based formulations, regardless of the algorithm used to solve the latter. The advantage of the permutahedron-based formulation is more pronounced when n is large.

6 Future Work and Acknowledgements

We hope that this paper spurs further interest in using sorting networks in the context of other, more general classes of permutation problems, such as graph matching or ranking. A direct adaptation of our approach is inadequate for such problems, since a point in the permutahedron does not uniquely describe a convex combination of permutations, which is how the Birkhoff polytope is used in many such problems. However, when the permutation problem has a solution in the Birkhoff polytope that is close to an actual permutation, we expect the loss of information incurred by projecting this point from the Birkhoff polytope to the permutahedron to be insignificant. We thank Okan Akalin and Taedong Kim for helpful comments and suggestions for the experiments. We thank the anonymous referees for feedback that improved the paper's presentation. We also thank the authors of [2] for sharing their experimental code, and Fajwal Fogel for helpful discussions. Lim's work on this project was supported in part by NSF Awards DMS-0914524 and DMS-1216318, and a grant from ExxonMobil.
Wright's work was supported in part by NSF Award DMS-1216318, ONR Award N00014-13-1-0129, DOE Award DE-SC0002283, AFOSR Award FA9550-13-1-0138, and Subcontract 3F-30222 from Argonne National Laboratory.

References
[1] M. Goemans, "Smallest compact formulation for the permutahedron," Working Paper, 2010.
[2] F. Fogel, R. Jenatton, F. Bach, and A. d'Aspremont, "Convex relaxations for permutation problems," in Advances in Neural Information Processing Systems, 2013, pp. 1016–1024.
[3] M. Fiori, P. Sprechmann, J. Vogelstein, P. Muse, and G. Sapiro, "Robust multimodal graph matching: Sparse coding meets graph matching," in Advances in Neural Information Processing Systems, 2013, pp. 127–135.
[4] J. E. Atkins, E. G. Boman, and B. Hendrickson, "A spectral algorithm for seriation and the consecutive ones problem," SIAM Journal on Computing, vol. 28, no. 1, pp. 297–310, Jan. 1998.
[5] C. H. Lim and S. J. Wright, "Beyond the Birkhoff polytope: Convex relaxations for vector permutation problems," arXiv:1407.6609, 2014.
[6] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction to Algorithms, 2nd ed. McGraw-Hill Higher Education, 2001.
[7] I. Liiv, "Seriation and matrix reordering methods: An historical overview," Statistical Analysis and Data Mining, vol. 3, no. 2, pp. 70–91, 2010.
[8] W. S. Robinson, "A method for chronologically ordering archaeological deposits," American Antiquity, vol. 16, no. 4, p. 293, Apr. 1951.
[9] C. Ding and X. He, "Linearized cluster assignment via spectral ordering," in Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04). New York: ACM Press, Jul. 2004, p. 30.
[10] R. Sokal and P. H. A. Sneath, Principles of Numerical Taxonomy. London: W. H. Freeman, 1963.
[11] A. George and A. Pothen, "An analysis of spectral envelope reduction via quadratic assignment problems," SIAM Journal on Matrix Analysis and Applications, vol. 18, no. 3, pp. 706–732, Jul. 1997.
[12] M. Ajtai, J. Komlós, and E. Szemerédi, "An O(n log n) sorting network," in Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing (STOC '83). New York: ACM Press, Dec. 1983, pp. 1–9.
[13] K. E. Batcher, "Sorting networks and their applications," in Proceedings of the AFIPS Spring Joint Computer Conference, 1968. New York: ACM Press, Apr. 1968, p. 307.
[14] M. Grant and S. Boyd, "CVX: MATLAB software for disciplined convex programming, version 2.0," http://cvxr.com/cvx, Aug. 2012.
[15] M. Grant and S. Boyd, "Graph implementations for nonsmooth convex programs," in Recent Advances in Learning and Control, ser. Lecture Notes in Control and Information Sciences, V. Blondel, S. Boyd, and H. Kimura, Eds. Springer-Verlag Limited, 2008, pp. 95–110, http://stanford.edu/~boyd/graph_dcp.html.
[16] Gurobi Optimizer Reference Manual, Gurobi Optimization, Inc., 2014. [Online]. Available: http://www.gurobi.com
Deconvolution of High Dimensional Mixtures via Boosting, with Application to Diffusion-Weighted MRI of Human Brain

Charles Y. Zheng, Department of Statistics, Stanford University, Stanford, CA 94305, snarles@stanford.edu
Franco Pestilli, Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, franpest@indiana.edu
Ariel Rokem, Department of Psychology, Stanford University, Stanford, CA 94305, arokem@stanford.edu

Abstract

Diffusion-weighted magnetic resonance imaging (DWI) and fiber tractography are the only methods to measure the structure of the white matter in the living human brain. The diffusion signal has been modelled as the combined contribution from many individual fascicles of nerve fibers passing through each location in the white matter. Typically, this is done via basis pursuit, but estimation of the exact directions is limited due to discretization [1, 2]. The difficulties inherent in modeling DWI data are shared by many other problems involving fitting non-parametric mixture models. Ekanadham et al. [3] proposed an approach, continuous basis pursuit, to overcome discretization error in the 1-dimensional case (e.g., spike sorting). Here, we propose a more general algorithm that fits mixture models of any dimensionality without discretization. Our algorithm uses the principles of L2-boost [4], together with refitting of the weights and pruning of the parameters. The addition of these steps to L2-boost both accelerates the algorithm and assures its accuracy. We refer to the resulting algorithm as elastic basis pursuit, or EBP, since it expands and contracts the active set of kernels as needed. We show that in contrast to existing approaches to fitting mixtures, our boosting framework (1) enables the selection of the optimal bias-variance tradeoff along the solution path, and (2) scales with high-dimensional problems.
In simulations of DWI, we find that EBP yields better parameter estimates than a non-negative least squares (NNLS) approach, or the standard model used in DWI, the tensor model, which serves as the basis for diffusion tensor imaging (DTI) [5]. We demonstrate the utility of the method in DWI data acquired in parts of the brain containing crossings of multiple fascicles of nerve fibers.

1 Introduction

In many applications, one obtains measurements (x_i, y_i) for which the response y is related to x via some mixture of known kernel functions f_θ(x), and the goal is to recover the mixture parameters θ_k and their associated weights:

    y_i = Σ_{k=1}^K w_k f_{θ_k}(x_i) + ε_i    (1)

where f_θ(x) is a known kernel function parameterized by θ, θ = (θ_1, ..., θ_K) are model parameters to be estimated, w = (w_1, ..., w_K) are unknown nonnegative weights to be estimated, and ε_i is additive noise. The number of components K is also unknown; hence, this is a nonparametric model. One example of a domain in which mixture models are useful is the analysis of data from diffusion-weighted magnetic resonance imaging (DWI). This biomedical imaging technique is sensitive to the direction of water diffusion within millimeter-scale voxels in the human brain in vivo. Water molecules diffuse freely along the length of nerve cell axons, but are restricted by cell membranes and myelin along directions orthogonal to the axon's trajectory. Thus, DWI provides information about the microstructural properties of brain tissue in different locations, about the trajectories of organized bundles of axons, or fascicles, within each voxel, and about the connectivity structure of the brain.
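A small sketch of generating data from model (1). The Gaussian-bump kernel, the parameter values, and the noise level are our own stand-ins for illustration, not the DWI kernel used in the paper:

```python
import numpy as np

def f(theta, x):
    """Assumed toy kernel: a Gaussian bump centered at theta."""
    return np.exp(-0.5 * (x - theta) ** 2)

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 200)            # measurement locations x_i
theta_true = np.array([3.0, 6.5])      # K = 2 mixture components
w_true = np.array([1.0, 0.7])          # nonnegative weights w_k

# y_i = sum_k w_k f_{theta_k}(x_i) + eps_i
y = sum(w * f(t, x) for w, t in zip(w_true, theta_true))
y += rng.normal(0.0, 0.01, x.shape)    # additive noise eps_i
```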
Mixture models are employed in DWI to deconvolve the signal within each voxel with a kernel function f_θ assumed to represent the signal from every individual fascicle [1, 2] (Figure 1B); the weights w_k provide an estimate of the fiber orientation distribution function (fODF) in each voxel, that is, the direction and volume fraction of the different fascicles in each voxel. In other applications of mixture modeling, these parameters represent other physical quantities; for example, in chemometrics, θ represents a chemical compound and f_θ its spectrum. In this paper, we focus on the application of mixture models to data from DWI experiments and simulations of these experiments.

1.1 Model fitting - existing approaches

Hereafter, we restrict our attention to squared-error loss, resulting in the penalized least-squares problem

    minimize_{K̂, ŵ, θ̂}  Σ_i ( y_i − Σ_{k=1}^{K̂} ŵ_k f_{θ̂_k}(x_i) )^2 + λ P_θ(w)    (2)

Minimization problems of the form (2) can be found in the signal deconvolution literature and elsewhere; some examples include super-resolution in imaging [6], entropy estimation for discrete distributions [7], X-ray diffraction [8], and neural spike sorting [3]. Here, P_θ(w) is a convex penalty function of (θ, w). Examples of such penalty functions are given in Section 2.1; a formal definition of convexity in the nonparametric setting can be found in the supplementary material, but will not be required for the results in this paper. Technically speaking, the objective function (2) is convex in (w, θ), but since its domain is of infinite dimensionality, for all practical purposes (2) is a nonconvex optimization problem. One can consider fixing the number of components in advance and using a descent method (with random restarts) to find the best model of that size. Alternatively, one could use a stochastic search method, such as simulated annealing or MCMC [9], to estimate the size of the model and the model parameters simultaneously.
However, as one begins to consider fitting models with an increasing number of components $\hat K$ and of high dimensionality, it becomes increasingly difficult to apply these approaches [3]. Hence a common approach to obtaining an approximate solution to (2) is to limit the search to a discrete grid of candidate parameters $\theta = \theta_1, \ldots, \theta_p$. The estimated weights and parameters are then obtained by solving an optimization problem of the form
$$\hat\beta = \arg\min_{\beta \geq 0} \|y - \vec F\beta\|^2 + \lambda P_\theta(\beta)$$
where $\vec F$ has $j$th column $\vec f_{\theta_j}$, and $\vec f_\theta$ is defined by $(\vec f_\theta)_i = f_\theta(x_i)$. Example applications of this non-negative least-squares-based approach (NNLS) include [10] and [1, 2, 7]. In contrast to descent-based methods, which can get trapped in local minima, NNLS is guaranteed to converge to a solution which is within $\epsilon$ of the global optimum, where $\epsilon$ depends on the scale of discretization. In some cases, NNLS will predict the signal accurately (with small error), but the resulting parameters will still be erroneous. Figure 1 illustrates the worst-case scenario, where the discretization is misaligned relative to the true parameters/kernels that generated the signal.

Figure 1: The signal deconvolution problem. Fitting a mixture model with an NNLS algorithm is prone to errors due to discretization. For example, in 1D (A), if the true signal (top; dashed line) arises from a mixture of signals from bell-shaped kernel functions (bottom; dashed line), but only a single kernel function between them is present in the basis set (bottom; solid line), this may result in inaccurate signal predictions (top; solid line), due to erroneous estimates of the weights $w_i$. This problem arises in deconvolving multi-dimensional signals, such as the 3D DWI signal (B), as well. Here, the DWI signal in an individual voxel is presented as a 3D surface (top). This surface results from a mixture of signals arising from the fascicles presented on the bottom passing through this single (simulated) voxel.
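The NNLS-on-a-grid estimator above can be sketched as follows. The 1-D Gaussian kernel, the grid, and the use of projected gradient in place of the Lawson-Hanson active-set solver [11] are assumptions made only to keep the sketch self-contained; the penalty term is omitted.

```python
import numpy as np

def f_theta(x, theta, width=0.1):
    # Hypothetical 1-D Gaussian kernel (illustration only).
    return np.exp(-(x - theta) ** 2 / (2 * width ** 2))

def grid_nnls(y, x, grid, n_iter=5000):
    """min_{beta >= 0} ||y - F beta||^2 with F[:, j] = f_{theta_j}(x),
    solved by projected gradient (a stand-in for Lawson-Hanson NNLS)."""
    F = np.column_stack([f_theta(x, t) for t in grid])
    step = 1.0 / np.linalg.norm(F.T @ F, 2)   # 1/L for 0.5*||y - F beta||^2
    beta = np.zeros(len(grid))
    for _ in range(n_iter):
        beta = np.maximum(0.0, beta - step * (F.T @ (F @ beta - y)))
    return F, beta

x = np.linspace(0.0, 1.0, 50)
grid = np.linspace(0.0, 1.0, 21)   # discretized candidate parameters theta_1..theta_p
y = 1.0 * f_theta(x, 0.25) + 0.5 * f_theta(x, 0.70)   # noiseless mixture on the grid
F, beta_hat = grid_nnls(y, x, grid)
```

Here the true parameters happen to lie on the grid, so the fit is accurate; Figure 1 illustrates what happens when they do not.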
Due to the signal generation process, the kernel of the diffusion signal from each one of the fascicles has a minimum at its center, resulting in 'dimples' in the diffusion signal in the direction of the peaks in the fascicle orientation distribution function. In an effort to improve on the discretization error of NNLS, Ekanadham et al. [3] introduced continuous basis pursuit (CBP). CBP is an extension of nonnegative least squares in which the points on the discretization grid $\theta_1, \ldots, \theta_p$ can be continuously moved within a small distance; in this way, one can reach any point in the parameter space. But instead of computing the actual kernel functions for the perturbed parameters, CBP uses linear approximations, e.g. obtained by Taylor expansions. Depending on the type of approximation employed, CBP may incur large error. The developers of CBP suggest solutions for this problem in the one-dimensional case, but these solutions cannot be used for many applications of mixture models (e.g., DWI). The computational cost of both NNLS and CBP scales exponentially in the dimensionality of the parameter space. In contrast, using stochastic search methods or descent methods to find the global minimum will generally incur a computational cost that scales exponentially in the product of the sample size and the parameter space dimension. Thus, when fitting high-dimensional mixture models, practitioners are forced to choose between the discretization errors inherent to NNLS and the computational difficulties of the descent methods. We will show that our boosting approach to mixture models combines the best of both worlds: while it does not suffer from discretization error, it features computational tractability comparable to NNLS and CBP.
We note that for the specific problem of super-resolution, Candès derived a deconvolution algorithm which finds the global minimum of (2) without discretization error, and proved that the algorithm can recover the true parameters under a minimal separation condition on the parameters [6]. However, we are unaware of an extension of this approach to more general applications of mixture models.

1.2 Boosting

The model (1) appears in an entirely separate context: as the model for learning a regression function as an ensemble of weak learners $f_\theta$, or boosting [4]. However, the problem of fitting a mixture model and the problem of fitting an ensemble of weak learners have several important differences. In the case of learning an ensemble, the family $\{f_\theta\}$ can be freely chosen from a universe of possible weak learners, and the only concern is minimizing the prediction risk on a new observation. In contrast, in the case of fitting a mixture model, the family $\{f_\theta\}$ is specified by the application. As a result, boosting algorithms, which were derived under the assumption that $\{f_\theta\}$ is a suitably flexible class of weak learners, generally perform poorly in the signal deconvolution setting, where the family $\{f_\theta\}$ is inflexible. In the context of regression, L2Boost, proposed by Bühlmann et al. [4], produces a path of ensemble models which progressively minimize the sum of squares of the residual. L2Boost fits a series of models of increasing complexity. The first model consists of the single weak learner $\vec f_\theta$ which best fits $y$. The second model is formed by finding the weak learner with the greatest correlation to the residual of the first model, and adding the new weak learner to the model, without changing any of the previously fitted weights. In this way the size of the model grows with the number of iterations: each new learner is fully fit to the residual and added to the model.
But because the previous weights are never adjusted, L2Boost fails to converge to the global minimum of (2) in the mixture model setting, producing suboptimal solutions. In the following section, we modify L2Boost for fitting mixture models. We refer to the resulting algorithm as elastic basis pursuit.

2 Elastic Basis Pursuit

Our proposed procedure for fitting mixture models consists of two stages. In the first stage, we transform an L1-penalized problem into an equivalent unregularized least-squares problem. In the second stage, we employ a modified version of L2Boost, elastic basis pursuit, to solve the transformed problem. We will present the two stages of the procedure, then discuss our fast convergence results.

2.1 Regularization

For most mixture problems it is beneficial to apply an L1-norm based penalty, by using a modified input $\tilde y$ and kernel function family $\tilde f_\theta$, so that
$$\arg\min_{K, w, \theta} \Big\| y - \sum_{k=1}^{K} w_k \vec f_{\theta_k} \Big\|^2 + \lambda P_\theta(w) = \arg\min_{K, w, \theta} \Big\| \tilde y - \sum_{k=1}^{K} w_k \tilde f_{\theta_k} \Big\|^2 \quad (3)$$
We will use our modified L2Boost algorithm to produce a path of solutions for the unpenalized objective function on the right-hand side, which results in a solution path for the penalized objective function (2). For example, it is possible to embed the penalty $P_\theta(w) = \|w\|_1^2$ in the optimization problem (2). One can show that solutions obtained by using the penalty function $P_\theta(w) = \|w\|_1^2$ have a one-to-one correspondence with solutions obtained using the usual L1 penalty $\|w\|_1$. The penalty $\|w\|_1^2$ is implemented by using the transformed input $\tilde y = \binom{y}{0}$ and the modified kernel vectors $\tilde f_\theta = \binom{\vec f_\theta}{\sqrt\lambda}$. Other kinds of regularization are also possible, and are presented in the supplemental material.

2.2 From L2Boost to Elastic Basis Pursuit

Motivated by the connection between boosting and mixture modeling, we consider the application of L2Boost to solve the transformed problem (the right-hand side of (3)).
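The $\|w\|_1^2$ embedding can be verified numerically: for any nonnegative $w$, the augmented residual equals the original residual plus $\lambda \|w\|_1^2$, since the extra coordinate of the residual is $0 - \sqrt\lambda \sum_k w_k$. The matrices below are random placeholders, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 30, 5, 0.7
F = np.abs(rng.standard_normal((n, p)))   # columns play the role of f_theta_j
y = np.abs(rng.standard_normal(n))
w = rng.uniform(0.0, 1.0, p)              # any nonnegative weight vector

# Augmentation: y_tilde = [y; 0], each kernel gains a sqrt(lam) entry.
y_tilde = np.concatenate([y, [0.0]])
F_tilde = np.vstack([F, np.full((1, p), np.sqrt(lam))])

lhs = np.sum((y_tilde - F_tilde @ w) ** 2)
rhs = np.sum((y - F @ w) ** 2) + lam * np.sum(np.abs(w)) ** 2  # + lam * ||w||_1^2
```

The two quantities agree to machine precision, so minimizing the augmented least-squares objective is the same as minimizing the $\|w\|_1^2$-penalized one.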
Again, we reiterate the nonparametric nature of the model space: by minimizing (3), we seek the model, with any number of components, which minimizes the residual sum of squares. In fact, given appropriate regularization, this is a well-posed problem. In each iteration of our algorithm a subset of the parameters θ are considered for adjustment. Following Lawson and Hanson [11], we refer to these as the active set. As stated before, L2Boost can only grow the active set at each iteration, converging to inaccurate models. Our solution to this problem is to modify L2Boost so that it grows and contracts the active set as needed; hence we refer to this modification of the L2Boost algorithm as elastic basis pursuit. The key ingredient for any boosting algorithm is an oracle for fitting a weak learner: that is, a function τ which takes a residual as input and returns the parameter θ corresponding to the kernel $\tilde f_\theta$ most correlated with the residual. EBP takes as inputs the oracle τ, the input vector $\tilde y$, and the function $\tilde f_\theta$, and produces a path of solutions which progressively minimize (3). To initialize the algorithm, we use NNLS to find an initial estimate of (w, θ). In the kth iteration of the boosting algorithm, let $\tilde r^{(k-1)}$ be the residual from the previous iteration (or from the NNLS fit, if k = 1). The algorithm proceeds as follows:

1. Call the oracle to find $\theta_{new} = \tau(\tilde r^{(k-1)})$, and add $\theta_{new}$ to the active set θ.

2. Refit the weights w, using NNLS, to solve $\text{minimize}_{w \geq 0} \|\tilde y - \tilde F w\|^2$, where $\tilde F$ is the matrix formed from the regressors in the active set, $\tilde f_\theta$ for θ ∈ θ. This yields the residual $\tilde r^{(k)} = \tilde y - \tilde F w$.

3. Prune the active set θ by removing any parameter θ whose weight is zero, and update the weight vector w in the same way. This ensures that the active set θ remains sparse in each iteration. Let $(w^{(k)}, \theta^{(k)})$ denote the values of (w, θ) at the end of this step of the iteration.

4.
Stopping may be assessed by computing an estimated prediction error at each iteration, via an independent validation set, and stopping the algorithm early when the prediction error begins to climb (indicating overfitting).

Pseudocode and Matlab code implementing this algorithm can be found in the supplement. In the boosting context, the property of refitting the ensemble weights in every iteration is known as the totally corrective property; LPBoost [12] is a well-known example of a totally corrective boosting algorithm. While we derived EBP as a totally corrective variant of L2Boost, one could also view EBP as a generalization of the classical Lawson-Hanson (LH) algorithm [11] for solving nonnegative least-squares problems. Given mild regularity conditions and appropriate regularization, Elastic Basis Pursuit can be shown to deterministically converge to the global optimum: we can bound the objective function gap in the mth iteration by $C/\sqrt m$, where C is an explicit constant (see Section 2.3). To our knowledge, fixed-iteration guarantees are unavailable for all other methods of comparable generality for fitting a mixture with an unknown number of components.

2.3 Convergence Results

(Detailed proofs can be found in the supplementary material.) For our convergence results to hold, we require an oracle function $\tau : \mathbb{R}^{\tilde n} \to \Theta$ which satisfies
$$\Big\langle \tilde r, \frac{\tilde f_{\tau(\tilde r)}}{\|\tilde f_{\tau(\tilde r)}\|} \Big\rangle \geq \alpha \rho(\tilde r), \quad \text{where } \rho(\tilde r) = \sup_{\theta \in \Theta} \Big\langle \tilde r, \frac{\tilde f_\theta}{\|\tilde f_\theta\|} \Big\rangle \quad (4)$$
for some fixed $0 < \alpha \leq 1$. Our proofs can also be modified to apply given a stochastic oracle that satisfies (4) with fixed probability p > 0 for every input $\tilde r$. Recall that $\tilde y$ denotes the transformed input, $\tilde f_\theta$ the transformed kernel, and $\tilde n$ the dimensionality of $\tilde y$. We assume that the parameter space Θ is compact and that $\tilde f_\theta$, the transformed kernel function, is continuous in θ. Furthermore, we assume that either L1 regularization is imposed, or the kernels satisfy a positivity condition, i.e. $\inf_{\theta \in \Theta} f_\theta(x_i) \geq 0$ for $i = 1, \ldots, n$.
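A compact sketch of the EBP loop (steps 1-4 of Section 2.2), using a grid-search oracle of the kind required by condition (4). The 1-D Gaussian kernels, the projected-gradient NNLS refit (in place of Lawson-Hanson), and the fixed iteration budget are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nnls_pg(F, y, n_iter=3000):
    # Totally corrective refit: min_{w >= 0} ||y - F w||^2 by projected gradient.
    step = 1.0 / max(np.linalg.norm(F.T @ F, 2), 1e-12)
    w = np.zeros(F.shape[1])
    for _ in range(n_iter):
        w = np.maximum(0.0, w - step * (F.T @ (F @ w - y)))
    return w

def ebp(y_t, f_of, oracle, n_boost=10):
    active, w, r = [], np.zeros(0), y_t.copy()
    for _ in range(n_boost):
        active.append(oracle(r))                       # step 1: grow the active set
        F = np.column_stack([f_of(t) for t in active])
        w = nnls_pg(F, y_t)                            # step 2: refit all weights
        r = y_t - F @ w                                # new residual
        keep = w > 1e-8                                # step 3: prune zero weights
        active = [t for t, k in zip(active, keep) if k]
        w = w[keep]
    return active, w, r                                # step 4: stop (fixed budget here)

x = np.linspace(0.0, 1.0, 50)
f_of = lambda t: np.exp(-(x - t) ** 2 / 0.02)          # hypothetical kernel family

def oracle(r):
    # Grid-search oracle: the theta whose normalized kernel best matches r.
    cand = np.linspace(0.0, 1.0, 201)
    scores = [r @ f_of(t) / np.linalg.norm(f_of(t)) for t in cand]
    return cand[int(np.argmax(scores))]

y = 1.0 * f_of(0.25) + 0.5 * f_of(0.70)                # noiseless two-component signal
active, w, r = ebp(y, f_of, oracle)
```

A finite grid search only achieves (4) with some $\alpha < 1$ that depends on the grid resolution; the paper's DWI implementation instead searches the continuous parameter space (Section 3).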
Proposition 1 states that these conditions imply the existence of a maximally saturated model $(w^*, \theta^*)$ of size $K^* \leq \tilde n$ with residual $\tilde r^*$. The existence of such a saturated model, in conjunction with the existence of the oracle τ, enables us to state fixed-iteration guarantees on the precision of EBP, which imply asymptotic convergence to the global optimum. To do so, we first define the quantity $\rho^{(m)} = \rho(\tilde r^{(m)})$; see (4) above. Proposition 2 uses the fact that the residuals $\tilde r^{(m)}$ are orthogonal to the columns of $\tilde F^{(m)}$, thanks to the NNLS fitting procedure in step 2. This allows us to bound the objective function gap in terms of $\rho^{(m)}$. Proposition 3 uses properties of the oracle τ to lower bound the progress per iteration in terms of $\rho^{(m)}$.

Proposition 2. Assume the conditions of Proposition 1, and take a saturated model $(w^*, \theta^*)$. Then, defining
$$B^* = 2 \sum_{i=1}^{K^*} w_i^* \|\tilde f_{\theta_i^*}\| \quad (5)$$
the mth residual of the EBP algorithm, $\tilde r^{(m)}$, can be bounded in size by
$$\|\tilde r^{(m)}\|^2 \leq \|\tilde r^*\|^2 + B^* \rho^{(m)}.$$
In particular, whenever $\rho^{(m)}$ converges to 0, the algorithm converges to the global minimum.

Proposition 3. Assume the conditions of Proposition 1. Then
$$\|\tilde r^{(m)}\|^2 - \|\tilde r^{(m+1)}\|^2 \geq (\alpha \rho^{(m)})^2$$
for α defined above in (4). This implies that the sequence $\|\tilde r^{(0)}\|^2, \|\tilde r^{(1)}\|^2, \ldots$ is decreasing.

Combining Propositions 2 and 3 yields our main result for the non-asymptotic convergence rate.

Proposition 4. Assume the conditions of Proposition 1. Then for all m > 0,
$$\|\tilde r^{(m)}\|^2 - \|\tilde r^*\|^2 \leq \frac{B_{\min} \sqrt{\|\tilde r^{(0)}\|^2 - \|\tilde r^*\|^2}}{\alpha} \cdot \frac{1}{\sqrt m}$$
where $B_{\min} = \inf_{w^*, \theta^*} B^*$ for $B^*$ defined in (5).

Hence we have characterized the non-asymptotic convergence of EBP at rate $1/\sqrt m$ with an explicit constant, which in turn implies asymptotic convergence to the global minimum.

3 DWI Results and Discussion

To demonstrate the utility of EBP in a real-world application, we used this algorithm to fit mixture models of DWI. Different approaches are taken to modeling the DWI signal.
The classical Diffusion Tensor Imaging (DTI) model [5], which is widely used in applications of DWI to neuroscience questions, is not a mixture model. Instead, it assumes that diffusion in the voxel is well approximated by a 3-dimensional Gaussian distribution. This distribution can be parameterized as a rank-2 tensor, which is expressed as a 3-by-3 matrix. Because the DWI measurement has antipodal symmetry, the tensor matrix is symmetric, and only 6 independent parameters need to be estimated to specify it. DTI is accurate in many places in the white matter, but its accuracy is lower in locations in which there are multiple crossing fascicles of nerve fibers. In addition, it should not be used to generate estimates of connectivity through these locations. This is because the peak of the fiber orientation distribution function (fODF) estimated in this location using DTI is not oriented towards the direction of any of the crossing fibers. Instead, it is usually oriented towards an intermediate direction (Figure 4B). To address these challenges, mixture models have been developed that fit the signal as a combination of contributions from fascicles crossing through these locations. These models are more accurate in fitting the signal. Moreover, their estimate of the fODF is useful for tracking the fascicles through the white matter for estimates of connectivity. However, these estimation techniques either use variants of NNLS, with a discrete set of candidate directions [2] or with a spherical harmonic basis set [1], or use stochastic algorithms [9]. To overcome the problems inherent in these techniques, we demonstrate here the benefits of applying EBP to the estimation of mixture models of fascicles in DWI. We start by demonstrating the utility of EBP in a simulation of a known configuration of crossing fascicles. Then, we demonstrate the performance of the algorithm in DWI data. The DWI measurements for a single voxel in the brain are $y_1, \ldots$
$, y_n$ for directions $x_1, \ldots, x_n$ on the three-dimensional unit sphere, given by
$$y_i = \sum_{k=1}^{K} w_k f_{D_k}(x_i) + \epsilon_i, \quad \text{where } f_D(x) = \exp[-b\, x^T D x], \quad (6)$$
where b is a scalar quantity determined by the experimenter, related to the parameters of the measurement (the magnitude of diffusion sensitization applied in the MRI instrument), and D is a positive definite quadratic form, which is specified by the direction along which the fascicle represented by $f_D$ traverses the voxel and by additional parameters $\lambda_1$ and $\lambda_2$, corresponding to the axial and radial diffusivity of the fascicle represented by $f_D$. The kernel functions $f_D(x)$ each describe the effect of a single fascicle traversing the measurement voxel on the diffusion signal, well described by the Stejskal-Tanner equation [13]. Because of the non-negative nature of the MRI signal, $\epsilon_i > 0$ is generated from a Rician distribution [14]. The oracle function τ is implemented by Newton-Raphson with random restarts. In each iteration of the algorithm, the parameters of D (direction and diffusivity) are found using the oracle function $\tau(\tilde r)$, by gradient descent on the current residuals $\tilde r$. In each iteration, the set of $f_D$ is shrunk or expanded to best match the signal.

Figure 2: To demonstrate the steps of EBP, we examine data from 100 iterations of the DWI simulation. (A) A cross-section through the data. (B) In the first iteration, the algorithm finds the best single kernel to represent the data (solid line: average kernel). (C) The residuals from this fit (positive in dark gray, negative in light gray) are fed to the next step of the algorithm, which then finds a second kernel (solid line: average kernel). (D) The signal is fit using both of these kernels (which are the active set at this point).
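The single-fascicle kernel $f_D$ in (6) can be sketched directly. The diffusivity values below ($\lambda_1 = 1.5\times 10^{-3}$ and $\lambda_2 = 0.3\times 10^{-3}$ mm²/s, with $b = 1000$ s/mm²) are typical white-matter numbers assumed for illustration, not values taken from the paper.

```python
import numpy as np

def tensor_kernel(x, v, lam1, lam2, b=1000.0):
    """f_D(x) = exp(-b x^T D x), where D has eigenvalue lam1 along the
    fascicle direction v and lam2 in the two orthogonal directions."""
    v = v / np.linalg.norm(v)
    D = lam2 * np.eye(3) + (lam1 - lam2) * np.outer(v, v)
    return np.exp(-b * x @ D @ x)

v = np.array([1.0, 0.0, 0.0])                         # fascicle direction
along = tensor_kernel(np.array([1.0, 0.0, 0.0]), v, 1.5e-3, 0.3e-3)
across = tensor_kernel(np.array([0.0, 1.0, 0.0]), v, 1.5e-3, 0.3e-3)
# along = exp(-b*lam1) < across = exp(-b*lam2): the signal is most attenuated
# along v, producing the 'dimple' described in Section 1.1.
```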
The combination of these two kernels fits the data better than either of them separately, and they are both kept (solid line: average fit), but redundant kernels can also be discarded at this point (D).

Figure 3: The progress of EBP. In each plot, the abscissa denotes the number of iterations of the algorithm (in log scale). (A) The number of kernel functions in the active set grows as the algorithm progresses, and then plateaus. (B) Meanwhile, the mean squared error (MSE) decreases to a minimum and then stabilizes. The algorithm would normally be terminated at this minimum. (C) This point also coincides with the optimal bias-variance trade-off, as evidenced by the decrease in EMD towards this point.

In a simulation with a complex configuration of fascicles, we demonstrate that accurate recovery of the true fODF can be achieved. In our simulation model, we take $b = 1000\ s/mm^2$, and generate $v_1, v_2, v_3$ as uniformly distributed vectors on the unit sphere and weights $w_1, w_2, w_3$ as i.i.d. uniformly distributed on the interval [0, 1]. Each $v_i$ is associated with a $\lambda_{1,i}$ between 0.5 and 2, with $\lambda_{2,i}$ set to 0. We consider the signal in 150 measurement vectors distributed on the unit sphere according to an electrostatic repulsion algorithm. We partition the vectors into a training partition and a test partition so as to minimize the maximum angular separation in each partition. We generate the signal with noise variance $\sigma^2 = 0.005$. We use cross-validation on the training set to fit NNLS with a varying L1 regularization parameter c, using the regularization penalty function $\lambda P(w) = \lambda(c - \|w\|_1)^2$. We choose this form of penalty function because we interpret the weights w as comprising partial volumes in the voxel; hence c represents the total volume of the voxel weighted by the isotropic component of the diffusion. We fix the regularization penalty parameter λ = 1. The estimated fODFs and predicted signals are obtained by three algorithms: DTI, NNLS, and EBP.
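The simulated signal described above can be sketched as follows. Normalized Gaussian vectors stand in for the electrostatic-repulsion design, and the $10^{-3}$ scaling of the diffusivities (placing them in mm²/s) is an assumption, since the units are left implicit above.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_vectors(n):
    # Normalized Gaussians are uniform on the unit sphere (a simple stand-in
    # for the electrostatic-repulsion measurement design).
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def kernel(x, v, lam1, b=1000.0):
    # With lam2 = 0, D = lam1 * v v^T, so f_D(x) = exp(-b lam1 (x.v)^2).
    return np.exp(-b * lam1 * (x @ v) ** 2)

x = random_unit_vectors(150)                 # measurement directions
dirs = random_unit_vectors(3)                # fascicle directions v_1, v_2, v_3
w = rng.uniform(0.0, 1.0, 3)                 # weights w_1, w_2, w_3
lam1 = rng.uniform(0.5, 2.0, 3) * 1e-3       # axial diffusivities (units assumed)
signal = sum(wk * kernel(x, vk, lk) for wk, vk, lk in zip(w, dirs, lam1))

# Rician noise: the magnitude of the signal plus complex Gaussian noise [14].
sigma = np.sqrt(0.005)
y = np.abs(signal + sigma * (rng.standard_normal(150)
                             + 1j * rng.standard_normal(150)))
```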
Each algorithm is applied to the training set (75 directions), and error is estimated relative to a prediction on the test set (75 directions). The latter two methods (NNLS, EBP) use the regularization parameter λ = 1 and the c chosen by cross-validated NNLS. Figure 2 illustrates the first two iterations of EBP applied to these simulated data. The estimated fODF is compared to the true fODF by the antipodally symmetrized Earth Mover's distance (EMD) [15] in each iteration. Figure 3 demonstrates the progress of the internal state of the EBP algorithm over many repetitions of the simulation. In the simulation results (Figure 4), EBP clearly reaches a more accurate solution than DTI, and a sparser solution than NNLS.

Figure 4: DWI simulation results. Ground truth entered into the simulation is a configuration of 3 crossing fascicles (A). DTI estimates a single primary diffusion direction that coincides with none of these directions (B). NNLS estimates an fODF with many spurious peaks (C), demonstrating the discretization error (see also Figure 1). EBP estimates a much sparser solution with weights concentrated around the true peaks (D).

The same procedure is used to fit the three models to DWI data, obtained at 2×2×2 mm³, at a b-value of 4000 s/mm². In these data, the true fODF is not known. Hence, only test prediction error can be obtained. We compare RMSE of prediction error between the models in a region of interest (ROI) in the brain containing parts of the corpus callosum, a large fiber bundle that contains many fibers connecting the two hemispheres, as well as the centrum semiovale, containing multiple crossing fibers (Figure 5). NNLS and EBP both have substantially reduced error relative to DTI.

Figure 5: DWI data from a region of interest (A, indicated by red frame) is analyzed and RMSE is displayed for DTI (B), NNLS (C), and EBP (D).
4 Conclusions

We developed an algorithm to model multi-dimensional mixtures. This algorithm, Elastic Basis Pursuit (EBP), is a combination of principles from boosting and principles from the Lawson-Hanson active set algorithm. It fits the data by iteratively generating and testing the match of a set of candidate kernels to the data. Kernels are added to and removed from the set of candidates as needed, using a totally corrective backfitting step, based on the match of the entire set of kernels to the data at each step. We show that the algorithm reaches the global optimum, with fixed-iteration guarantees. Thus, it can be practically applied to separate a multi-dimensional signal into a sum of component signals. For example, we demonstrate how this algorithm can be used to fit diffusion-weighted MRI signals as sums of nerve fiber fascicle components.

Acknowledgments

The authors thank Brian Wandell and Eero Simoncelli for useful discussions. CZ was supported through NIH grant 1T32GM096982 to Robert Tibshirani and Chiara Sabatti, AR was supported through NIH fellowship F32-EY022294, and FP was supported through NSF grant BCS1228397 to Brian Wandell.

References

[1] Tournier, J.-D., Calamante, F., Connelly, A. (2007). Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution. NeuroImage, 35:1459-72.
[2] Dell'Acqua, F., Rizzo, G., Scifo, P., Clarke, R. A., Scotti, G., Fazio, F. (2007). A model-based deconvolution approach to solve fiber crossing in diffusion-weighted MR imaging. IEEE Trans Biomed Eng, 54:462-72.
[3] Ekanadham, C., Tranchina, D., and Simoncelli, E. (2011). Recovery of sparse translation-invariant signals with continuous basis pursuit. IEEE Transactions on Signal Processing, 59:4735-4744.
[4] Bühlmann, P., Yu, B. (2003). Boosting with the L2 loss: regression and classification. JASA, 98(462), 324-339.
[5] Basser, P. J., Mattiello, J., and Le Bihan, D. (1994). MR diffusion tensor spectroscopy and imaging.
Biophysical Journal, 66:259-267.
[6] Candès, E. J., and Fernandez-Granda, C. (2013). Towards a mathematical theory of super-resolution. Communications on Pure and Applied Mathematics.
[7] Valiant, G., and Valiant, P. (2011). Estimating the unseen: an n/log(n)-sample estimator for entropy and support size, shown optimal via new CLTs. In Proceedings of the 43rd Annual ACM Symposium on Theory of Computing (pp. 685-694). ACM.
[8] Sánchez-Bajo, F., and Cumbrera, F. L. (2000). Deconvolution of X-ray diffraction profiles by using series expansion. Journal of Applied Crystallography, 33(2), 259-266.
[9] Behrens, T. E. J., Berg, H. J., Jbabdi, S., Rushworth, M. F. S., and Woolrich, M. W. (2007). Probabilistic diffusion tractography with multiple fibre orientations: What can we gain? NeuroImage, 34:144-155.
[10] Bro, R., and De Jong, S. (1997). A fast non-negativity-constrained least squares algorithm. Journal of Chemometrics, 11(5), 393-401.
[11] Lawson, C. L., and Hanson, R. J. (1995). Solving Least Squares Problems. SIAM.
[12] Demiriz, A., Bennett, K. P., and Shawe-Taylor, J. (2002). Linear programming boosting via column generation. Machine Learning, 46(1-3), 225-254.
[13] Stejskal, E. O., and Tanner, J. E. (1965). Spin diffusion measurements: spin echoes in the presence of a time-dependent field gradient. J Chem Phys, 42:288-92.
[14] Gudbjartsson, H., and Patz, S. (1995). The Rician distribution of noisy MRI data. Magn Reson Med, 34:910-914.
[15] Rubner, Y., Tomasi, C., and Guibas, L. J. (2000). The earth mover's distance as a metric for image retrieval. Intl J Computer Vision, 40(2), 99-121.
On Iterative Hard Thresholding Methods for High-dimensional M-Estimation

Prateek Jain*, Ambuj Tewari†, Purushottam Kar*
*Microsoft Research, INDIA  †University of Michigan, Ann Arbor, USA
{prajain,t-purkar}@microsoft.com, tewaria@umich.edu

Abstract

The use of M-estimators in generalized linear regression models in high dimensional settings requires risk minimization with hard L0 constraints. Of the known methods, the class of projected gradient descent (also known as iterative hard thresholding (IHT)) methods is known to offer the fastest and most scalable solutions. However, the current state-of-the-art is only able to analyze these methods in extremely restrictive settings which do not hold in high dimensional statistical models. In this work we bridge this gap by providing the first analysis for IHT-style methods in the high dimensional statistical setting. Our bounds are tight and match known minimax lower bounds. Our results rely on a general analysis framework that enables us to analyze several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in the high dimensional regression setting. Finally, we extend our analysis to the problem of low-rank matrix recovery.

1 Introduction

Modern statistical estimation is routinely faced with real-world problems where the number of parameters p handily outnumbers the number of observations n. In general, consistent estimation of parameters is not possible in such a situation. Consequently, a rich line of work has focused on models that satisfy special structural assumptions such as sparsity or low-rank structure. Under these assumptions, several works (for example, see [1, 2, 3, 4, 5]) have established that consistent estimation is information-theoretically possible in the "n ≪ p" regime as well. The question of efficient estimation, however, is faced with feasibility issues since consistent estimation routines often end up solving NP-hard problems.
Examples include sparse regression, which requires loss minimization with sparsity constraints, and low-rank regression, which requires dealing with rank constraints that are not efficiently solvable in general [6]. Interestingly, recent works have demonstrated that these hardness results can be avoided by assuming certain natural conditions on the loss function being minimized, such as restricted strong convexity (RSC) and restricted strong smoothness (RSS). The estimation routines proposed in these works typically make use of convex relaxations [5] or greedy methods [7, 8, 9] which do not suffer from infeasibility issues. Despite this, certain limitations have precluded widespread use of these techniques. Convex relaxation-based methods typically suffer from slow rates as they solve non-smooth optimization problems, apart from being hard to analyze in terms of global guarantees. Greedy methods, on the other hand, are slow in situations with non-negligible sparsity or relatively high rank, owing to their incremental approach of adding/removing individual support elements. Instead, the methods of choice for practical applications are actually projected gradient descent (PGD) methods, also referred to as iterative hard thresholding (IHT) methods. These methods directly project the gradient descent update onto the underlying (non-convex) feasible set. This projection can be performed efficiently for several interesting structures such as sparsity and low rank. However, traditional PGD analyses for convex problems, viz. [10], do not apply to these techniques due to the non-convex structure of the problem. An exception to this is the recent work [11], which demonstrates that PGD with non-convex regularization can offer consistent estimates for certain high-dimensional problems. However, the work in [11] is only able to analyze penalties such as SCAD, MCP, and capped L1. Moreover, their framework cannot handle commonly used constraints such as L0 or low rank.
Insufficiency of RIP-based Guarantees for M-estimation. As noted above, PGD/IHT-style methods have been very popular in the literature on sparse recovery, and several algorithms including Iterative Hard Thresholding (IHT) [12] or GraDeS [13], Hard Thresholding Pursuit (HTP) [14], CoSaMP [15], Subspace Pursuit (SP) [16], and OMPR(ℓ) [17] have been proposed. However, the analysis of these algorithms has traditionally been restricted to settings that satisfy the Restricted Isometry Property (RIP) or an incoherence property. As the discussion below demonstrates, this renders these analyses inaccessible to high-dimensional statistical estimation problems. All existing results analyzing these methods require the condition number of the loss function, restricted to sparse vectors, to be smaller than a universal constant. The best known such constant is due to the work of [17], which requires a bound $\delta_{2k} \leq 0.5$ on the RIP constant (or equivalently, a bound $\frac{1+\delta_{2k}}{1-\delta_{2k}} \leq 3$ on the condition number). In contrast, real-life high-dimensional statistical settings, wherein pairs of variables can be arbitrarily correlated, routinely require estimation methods to perform under arbitrarily large condition numbers. In particular, if two variates have a covariance matrix like $\begin{pmatrix} 1 & 1-\epsilon \\ 1-\epsilon & 1 \end{pmatrix}$, then the restricted condition number (on a support set of size just 2) of the sample matrix cannot be brought below $1/\epsilon$ even with infinitely many samples. In particular, when $\epsilon < 1/6$, none of the existing results for hard thresholding methods offer any guarantees. Moreover, most of these analyses consider only the least squares objective. Although recent attempts have been made to extend this to general differentiable objectives [18, 19], the results continue to require that the restricted condition number be less than a universal constant, and remain unsatisfactory in a statistical setting.

Overview of Results.
Our main contribution in this work is an analysis of PGD/IHT-style methods in statistical settings. Our bounds are tight, achieve known minimax lower bounds [20], and hold for arbitrary differentiable, possibly even non-convex, functions. Our results hold even when the underlying condition number is arbitrarily large, and only require the function to satisfy RSC/RSS conditions. In particular, this reveals that these iterative methods are indeed applicable to statistical settings, a result that escaped all previous works. Our first result shows that the PGD/IHT methods achieve global convergence if used with a relaxed projection step. More formally, if the optimal parameter is $s^*$-sparse and the problem satisfies RSC and RSS with constants α and L respectively (see Section 2), then PGD methods offer global convergence so long as they employ projection onto an s-sparse set where $s \geq 4(L/\alpha)^2 s^*$. This gives convergence rates that are identical to those of convex relaxation and greedy methods for the Gaussian sparse linear model. We then move to a family of efficient "fully corrective" methods and show, as before, that for arbitrary functions satisfying the RSC/RSS properties, these methods offer global convergence. Next, we show that these results allow PGD-style methods to offer global convergence in a variety of statistical estimation problems such as sparse linear regression and low-rank matrix regression. Our results effortlessly extend to the noisy setting as a corollary and give bounds similar to those of [21], which relies on solving an L1-regularized problem. Our proofs are able to exploit the fact that even though hard thresholding is not the prox-operator for any convex prox function, it still provides strong contraction when projection is performed onto sets of sparsity $s \gg s^*$. This crucial observation allows us to provide the first unified analysis for hard thresholding based gradient descent algorithms.
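The covariance example from the RIP discussion above is easy to verify numerically: the matrix's eigenvalues are $\epsilon$ and $2-\epsilon$, so its condition number $(2-\epsilon)/\epsilon$ exceeds $1/\epsilon$ for any $\epsilon \leq 1$, and no amount of data improves it.

```python
import numpy as np

eps = 0.05
Sigma = np.array([[1.0, 1.0 - eps],
                  [1.0 - eps, 1.0]])
eigvals = np.linalg.eigvalsh(Sigma)   # ascending: [eps, 2 - eps]
cond = eigvals[-1] / eigvals[0]       # = (2 - eps) / eps, far above 1/eps here
```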
Our empirical results confirm our predictions with respect to the recovery properties of IHT-style algorithms on badly-conditioned sparse recovery problems, and demonstrate that these methods can be orders of magnitude faster than their L1 and greedy counterparts.

Organization. Section 2 sets up the notation and the problem statement. Section 3 introduces the PGD/IHT algorithm that we study and proves that the method guarantees recovery assuming the RSC/RSS property; we also generalize our guarantees to the problem of low-rank matrix regression. Section 4 then provides crisp sample complexity bounds and statistical guarantees for the PGD/IHT estimators. Section 5 extends our analysis to a broad family of compressive sensing algorithms that includes the so-called fully-corrective hard thresholding methods and provides similar results for them as well. We present empirical results in Section 6 and conclude in Section 7.

2 Problem Setup and Notations

High-dimensional Sparse Estimation. Given data points X = [X_1, ..., X_n]ᵀ, where X_i ∈ ℝ^p, and the target Y = [Y_1, ..., Y_n]ᵀ, where Y_i ∈ ℝ, the goal is to compute an s*-sparse θ* s.t.

θ* = arg min_{θ: ∥θ∥₀ ≤ s*} f(θ). (1)

Typically, f can be thought of as an empirical risk function, i.e. f(θ) = (1/n) Σ_i ℓ(⟨X_i, θ⟩, Y_i) for some loss function ℓ (see examples in Section 4). However, for our analysis of PGD and other algorithms, we need not assume any property of f other than differentiability and the following two RSC and RSS properties.

Definition 1 (RSC Property). A differentiable function f : ℝ^p → ℝ is said to satisfy restricted strong convexity (RSC) at sparsity level s = s₁ + s₂ with strong convexity constant α_s if the following holds for all θ₁, θ₂ s.t. ∥θ₁∥₀ ≤ s₁ and ∥θ₂∥₀ ≤ s₂:

f(θ₁) − f(θ₂) ≥ ⟨θ₁ − θ₂, ∇_θ f(θ₂)⟩ + (α_s/2) ∥θ₁ − θ₂∥₂².

Definition 2 (RSS Property).
A differentiable function f : ℝ^p → ℝ is said to satisfy restricted strong smoothness (RSS) at sparsity level s = s₁ + s₂ with strong smoothness constant L_s if the following holds for all θ₁, θ₂ s.t. ∥θ₁∥₀ ≤ s₁ and ∥θ₂∥₀ ≤ s₂:

f(θ₁) − f(θ₂) ≤ ⟨θ₁ − θ₂, ∇_θ f(θ₂)⟩ + (L_s/2) ∥θ₁ − θ₂∥₂².

Low-rank Matrix Regression. Low-rank matrix regression is similar to sparse estimation as presented above, except that each data point is now a matrix, i.e. X_i ∈ ℝ^{p₁×p₂}, the goal being to estimate a low-rank matrix W ∈ ℝ^{p₁×p₂} that minimizes the empirical loss function on the given data:

W* = arg min_{W: rank(W) ≤ r} f(W). (2)

For this problem the RSC and RSS properties for f are defined as in Definitions 1 and 2, except that the L₀ norm is replaced by the rank function.

3 Iterative Hard-thresholding Method

In this section we study the popular projected gradient descent (a.k.a. iterative hard thresholding) method for the case where the feasible set is the set of sparse vectors (see Algorithm 1 for pseudocode). The projection operator P_s(z) can be implemented efficiently in this case by projecting z onto the set of s-sparse vectors, i.e. by retaining the s largest elements (in magnitude) of z. The standard projection property implies that ∥P_s(z) − z∥₂² ≤ ∥θ′ − z∥₂² for all ∥θ′∥₀ ≤ s. However, it turns out that we can prove a significantly stronger property of hard thresholding for the case when ∥θ′∥₀ ≤ s* and s* ≪ s. This property is key to analysing IHT and is formalized below.

Lemma 1. For any index set I and any z ∈ ℝ^I, let θ = P_s(z). Then for any θ* ∈ ℝ^I such that ∥θ*∥₀ ≤ s*, we have

∥θ − z∥₂² ≤ ((|I| − s)/(|I| − s*)) ∥θ* − z∥₂².

See Appendix A for a detailed proof. Our analysis combines the above observation with the RSC/RSS properties of f to provide geometric convergence rates for the IHT procedure below.
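As a concrete illustration, the projection P_s and the bound of Lemma 1 can be checked numerically; a minimal NumPy sketch (all variable names and problem sizes below are ours):

```python
import numpy as np

def hard_threshold(z, s):
    """P_s(z): keep the s largest-magnitude entries of z and zero out the rest."""
    theta = np.zeros_like(z)
    if s > 0:
        keep = np.argsort(np.abs(z))[-s:]   # indices of the s largest |z_i|
        theta[keep] = z[keep]
    return theta

# Numerical check of Lemma 1: with theta = P_s(z) and any s*-sparse theta*,
# ||theta - z||_2^2 <= (|I| - s) / (|I| - s*) * ||theta* - z||_2^2.
rng = np.random.default_rng(0)
d, s, s_star = 100, 20, 5
z = rng.standard_normal(d)
theta = hard_threshold(z, s)
theta_star = hard_threshold(rng.standard_normal(d), s_star)  # an arbitrary s*-sparse vector
lhs = np.sum((theta - z) ** 2)
rhs = (d - s) / (d - s_star) * np.sum((theta_star - z) ** 2)
```

The factor (d − s)/(d − s*) is strictly smaller than 1, which is exactly the extra slack over the standard projection property that the analysis exploits when s ≫ s*.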
Algorithm 1 Iterative Hard-thresholding
1: Input: function f with gradient oracle, sparsity level s, step-size η
2: θ¹ = 0, t = 1
3: while not converged do
4:   θ^{t+1} = P_s(θ^t − η ∇_θ f(θ^t)), t = t + 1
5: end while
6: Output: θ^t

Theorem 1. Let f have RSS and RSC parameters given by L_{2s+s*}(f) = L and α_{2s+s*}(f) = α respectively. Let Algorithm 1 be invoked with f, s ≥ 32 (L/α)² s* and η = 2/(3L). Also let θ* = arg min_{θ: ∥θ∥₀ ≤ s*} f(θ). Then, the τ-th iterate of Algorithm 1, for τ = O((L/α) · log(f(θ⁰)/ϵ)), satisfies f(θ^τ) − f(θ*) ≤ ϵ.

Proof. (Sketch) Let S^t = supp(θ^t), S* = supp(θ*), S^{t+1} = supp(θ^{t+1}) and I^t = S* ∪ S^t ∪ S^{t+1}. Using the RSS property and the fact that supp(θ^t) ⊆ I^t and supp(θ^{t+1}) ⊆ I^t, we have:

f(θ^{t+1}) − f(θ^t) ≤ ⟨θ^{t+1} − θ^t, g^t⟩ + (L/2) ∥θ^{t+1} − θ^t∥₂²
= (L/2) ∥θ^{t+1}_{I^t} − θ^t_{I^t} + (2/(3L)) · g^t_{I^t}∥₂² − (1/(2L)) ∥g^t_{I^t}∥₂²
≤ (L/2) · ((|I^t| − s)/(|I^t| − s*)) · ∥θ*_{I^t} − θ^t_{I^t} + (1/L) · g^t_{I^t}∥₂² − (1/(2L)) (∥g^t_{I^t\(S^t∪S*)}∥₂² + ∥g^t_{S^t∪S*}∥₂²), (3)

where the last inequality (ζ₁) follows from an application of Lemma 1 with I = I^t and the Pythagoras theorem. The above equation has three critical terms. The first term can be bounded using the RSS condition. Using f(θ^t) − f(θ*) ≤ ⟨g^t_{S^t∪S*}, θ^t − θ*⟩ − (α/2) ∥θ^t − θ*∥₂² ≤ (1/(2α)) ∥g^t_{S^t∪S*}∥₂² bounds the third term in (3). The second term is more interesting, as in general the elements of g^t_{S*} can be arbitrarily small. However, the elements of g^t_{I^t\(S^t∪S*)} must be at least as large as those of g^t_{S*\S^{t+1}}, since they are selected by hard thresholding. Combining this insight with bounds for g^t_{S*\S^{t+1}} and with (3), we obtain the theorem. See Appendix A for a detailed proof.

3.1 Low-rank Matrix Regression

We now generalize our previous analysis to a projected gradient descent (PGD) method for low-rank matrix regression. Formally, we study the following problem:

min_W f(W),  s.t.  rank(W) ≤ s. (4)

The hard-thresholding projection step for low-rank matrices can be solved using the SVD, i.e. P_{M_s}(W) = U_s Σ_s V_sᵀ, where W = U Σ Vᵀ is the singular value decomposition of W.
Here U_s and V_s are the top-s left and right singular vectors of W, and Σ_s is the diagonal matrix of the top-s singular values of W. To proceed, we first note a property of the above projection similar to Lemma 1.

Lemma 2. Let W ∈ ℝ^{p₁×p₂} be a rank-|I_t| matrix and let p₁ ≥ p₂. Then for any rank-s* matrix W* ∈ ℝ^{p₁×p₂} we have

∥P_{M_s}(W) − W∥_F² ≤ ((|I_t| − s)/(|I_t| − s*)) ∥W* − W∥_F². (5)

Proof. Let W = U Σ Vᵀ be the singular value decomposition of W. Now, ∥P_{M_s}(W) − W∥_F² = Σ_{i=s+1}^{|I_t|} σ_i² = ∥P_s(diag(Σ)) − diag(Σ)∥₂², where σ₁ ≥ ⋯ ≥ σ_{|I_t|} ≥ 0 are the singular values of W. Using Lemma 1, we get:

∥P_{M_s}(W) − W∥_F² ≤ ((|I_t| − s)/(|I_t| − s*)) ∥Σ* − diag(Σ)∥₂² ≤ ((|I_t| − s)/(|I_t| − s*)) ∥W* − W∥_F², (6)

where the last step uses von Neumann's trace inequality (Tr(A · B) ≤ Σ_i σ_i(A) σ_i(B)).

The following result for low-rank matrix regression immediately follows from Lemma 2.

Theorem 2. Let f have RSS and RSC parameters given by L_{2s+s*}(f) = L and α_{2s+s*}(f) = α. Replace the projection operator P_s in Algorithm 1 with its matrix counterpart P_{M_s} as defined above, and suppose we invoke it with f, s ≥ 32 (L/α)² s* and η = 2/(3L). Also let W* = arg min_{W: rank(W) ≤ s*} f(W). Then the τ-th iterate of Algorithm 1, for τ = O((L/α) · log(f(W⁰)/ϵ)), satisfies f(W^τ) − f(W*) ≤ ϵ.

Proof. A proof progression similar to that of Theorem 1 suffices. Only two changes need to be made: first, Lemma 2 has to be invoked in place of Lemma 1; second, in place of vectors restricted to a subset of coordinates, e.g. θ_S or g^t_I, we need to consider matrices restricted to subspaces, i.e. W_S = U_S U_Sᵀ W, where U_S is a set of singular vectors spanning the range space of S.

4 High Dimensional Statistical Estimation

This section elaborates on how the results of the previous section can be used to give guarantees for IHT-style techniques in a variety of statistical estimation problems. We will first present a generic convergence result and then specialize it to various settings.
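The matrix projection P_{M_s} of Section 3.1 is a truncated SVD; a minimal NumPy sketch together with a numerical check of the Lemma 2 bound (all sizes below are ours):

```python
import numpy as np

def rank_project(W, s):
    """P_Ms(W): best rank-s approximation of W via a truncated SVD."""
    U, sig, Vt = np.linalg.svd(W, full_matrices=False)  # sig is sorted in descending order
    return (U[:, :s] * sig[:s]) @ Vt[:s]

# Numerical check of Lemma 2 with a rank-r matrix W and a rank-s* matrix W*.
rng = np.random.default_rng(1)
p, r, s, s_star = 30, 25, 10, 3
W = rng.standard_normal((p, r)) @ rng.standard_normal((r, p))   # rank r (almost surely)
W_hat = rank_project(W, s)
W_star = rank_project(rng.standard_normal((p, p)), s_star)      # an arbitrary rank-s* matrix
lhs = np.sum((W_hat - W) ** 2)                                  # = sum of squared tail singular values
rhs = (r - s) / (r - s_star) * np.sum((W_star - W) ** 2)
```

As in the vector case, the ratio (r − s)/(r − s*) shrinks as the projected rank s grows past s*, which is what makes the enlarged-projection analysis go through for matrices.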
Suppose we have a sample of data points Z_{1:n} and a loss function L(θ; Z_{1:n}) that depends on a parameter θ and the sample. Then we can show the following result (see Appendix B for a proof).

Theorem 3. Let θ̄ be any s*-sparse vector. Suppose L(θ; Z_{1:n}) is differentiable and satisfies RSC and RSS at sparsity level s + s* with parameters α_{s+s*} and L_{s+s*} respectively, for s ≥ 32 (L_{2s+s*}/α_{2s+s*})² s*. Let θ^τ be the τ-th iterate of Algorithm 1 for τ chosen as in Theorem 1, and let ϵ be the function value error incurred by Algorithm 1. Then we have

∥θ̄ − θ^τ∥₂ ≤ 2√(s + s*) ∥∇L(θ̄; Z_{1:n})∥_∞ / α_{s+s*} + √(2ϵ/α_{s+s*}).

Note that the result does not require the loss function to be convex. This fact will be used crucially later. We now apply the above result to several statistical estimation scenarios.

Sparse Linear Regression. Here Z_i = (X_i, Y_i) ∈ ℝ^p × ℝ and Y_i = ⟨θ̄, X_i⟩ + ξ_i, where ξ_i ∼ N(0, σ²) is label noise. The empirical loss is the usual least squares loss, i.e. L(θ; Z_{1:n}) = (1/n) ∥Y − Xθ∥₂². Suppose X_{1:n} are drawn i.i.d. from a sub-Gaussian distribution with covariance Σ with Σ_jj ≤ 1 for all j. Then [22, Lemma 6] immediately implies that RSC and RSS at sparsity level k hold, with probability at least 1 − e^{−c₀n}, with α_k = (1/2) σ_min(Σ) − c₁ k log p / n and L_k = 2 σ_max(Σ) + c₁ k log p / n (c₀, c₁ are universal constants). So we can set k = 2s + s*, and if n > 4 c₁ k log p / σ_min(Σ), then we have α_k ≥ (1/4) σ_min(Σ) and L_k ≤ 2.25 σ_max(Σ), which means that L_k/α_k ≤ 9 κ(Σ), where κ(Σ) := σ_max(Σ)/σ_min(Σ). Thus it is enough to choose s = 2592 κ(Σ)² s* and apply Theorem 3. Note that ∥∇L(θ̄; Z_{1:n})∥_∞ = ∥Xᵀξ/n∥_∞ ≤ 2σ √(log p / n) with probability at least 1 − c₂ p^{−c₃} (c₂, c₃ are universal constants). Putting everything together, we have the following bound with high probability:

∥θ̄ − θ^τ∥₂ ≤ 145 (κ(Σ)/σ_min(Σ)) σ √(s* log p / n) + 2 √(ϵ/σ_min(Σ)),

where ϵ is the function value error incurred by Algorithm 1.

Noisy and Missing Data. We now look at cases with feature noise as well.
More specifically, assume that we only have access to X̃_i's that are corrupted versions of the X_i's. Two models of noise are popular in the literature [21]: (a) additive noise, X̃_i = X_i + W_i where W_i ∼ N(0, Σ_W); and (b) missing data, where X̃ is an (ℝ ∪ {⋆})-valued matrix obtained by independently replacing each entry of X with ⋆ with probability ν ∈ [0, 1). For the case of additive noise (missing data can be handled similarly), Z_i = (X̃_i, Y_i) and L(θ; Z_{1:n}) = (1/2) θᵀ Γ̂ θ − γ̂ᵀ θ, where Γ̂ = X̃ᵀX̃/n − Σ_W and γ̂ = X̃ᵀY/n are unbiased estimators of Σ and Σθ̄ respectively. [21, Appendix A, Lemma 1] implies that RSC and RSS at sparsity level k hold, with failure probability exponentially small in n, with α_k = (1/2) σ_min(Σ) − k τ(p)/n and L_k = 1.5 σ_max(Σ) + k τ(p)/n, for τ(p) = c₀ σ_min(Σ) max((∥Σ∥_op² + ∥Σ_W∥_op²)² / σ_min²(Σ), 1) log p. Thus for k = 2s + s* and n ≥ 4 k τ(p)/σ_min(Σ), we have L_k/α_k ≤ 7 κ(Σ). Note that L(·; Z_{1:n}) is non-convex, but we can still apply Theorem 3 with s = 1568 κ(Σ)² s* because RSC and RSS hold. Using the high-probability upper bound ∥∇L(θ̄; Z_{1:n})∥_∞ ≤ c₁ σ̃ ∥θ̄∥₂ √(log p / n) (see [21, Appendix A, Lemma 2]) gives us

∥θ̄ − θ^τ∥₂ ≤ c₂ (κ(Σ)/σ_min(Σ)) σ̃ ∥θ̄∥₂ √(s* log p / n) + 2 √(ϵ/σ_min(Σ)),

where σ̃ = √(∥Σ_W∥_op² + ∥Σ∥_op²) (∥Σ_W∥_op + σ) and ϵ is the function value error in Algorithm 1.

5 Fully-corrective Methods

In this section, we study a variety of "fully-corrective" methods. These methods keep the optimization objective fully minimized over the support of the current iterate.

Algorithm 2 Two-stage Hard-thresholding
1: Input: function f with gradient oracle, sparsity level s, sparsity expansion level ℓ
2: θ¹ = 0, t = 1
3: while not converged do
4:   g^t = ∇_θ f(θ^t), S^t = supp(θ^t)
5:   Z^t = S^t ∪ (indices of the largest ℓ elements of |g^t| outside S^t)
6:   β^t = arg min_{β: supp(β) ⊆ Z^t} f(β) // fully corrective step
7:   θ̃^t = P_s(β^t)
8:   θ^{t+1} = arg min_{θ: supp(θ) ⊆ supp(θ̃^t)} f(θ), t = t + 1 // fully corrective step
9: end while
10: Output: θ^t
To this end, we first prove a fundamental theorem for fully-corrective methods that formalizes the intuition that, for such methods, a large function value should imply a large gradient at any sparse θ as well. This result is similar to Lemma 1 of [17], but holds under RSC/RSS conditions (rather than the RIP condition as in [17]) as well as for general loss functions. See Appendix C for a detailed proof.

Lemma 3. Consider a function f with RSC parameter α_{2s+s*}(f) = α and RSS parameter L_{2s+s*}(f) = L. Let θ* = arg min_{θ: ∥θ∥₀ ≤ s*} f(θ) with S* = supp(θ*). Let S^t ⊆ [p] be any subset of coordinates s.t. |S^t| ≤ s, and let θ^t = arg min_{θ: supp(θ) ⊆ S^t} f(θ). Then, we have:

2α (f(θ^t) − f(θ*)) ≤ ∥g^t_{S^t∪S*}∥₂² − α² ∥θ^t_{S^t\S*}∥₂².

Two-stage Methods. We will, for now, concentrate on a family of two-stage fully corrective methods that contains popular compressive sensing algorithms like CoSaMP and Subspace Pursuit (see Algorithm 2 for pseudocode). These algorithms have thus far been analyzed only under RIP conditions for the least squares objective. Using the analysis framework developed in the previous sections, we present a generic RSC/RSS-based analysis of general two-stage methods for arbitrary loss functions. Our analysis uses the key observation that the hard-thresholding step in two-stage methods does not increase the objective function by much. We defer the analysis of partial hard thresholding methods to a later version of the paper; this family includes the OMPR(ℓ) method [17], which is known to provide the best known RIP guarantees in the compressive sensing setting. Using our proof techniques, we can show that this method offers geometric convergence rates in the statistical setting as well.

Lemma 4. Let Z^t ⊆ [p] with |Z^t| ≤ s + ℓ. Let β^t = arg min_{β: supp(β) ⊆ Z^t} f(β) and θ̂^t = P_s(β^t). Then, the following holds:

f(θ̂^t) − f(β^t) ≤ (L/α) · (ℓ/(s + ℓ − s*)) · (f(β^t) − f(θ*)).
Figure 1: A comparison of hard thresholding techniques (HTP) and projected gradient methods (GraDeS) with L1 and greedy methods (FoBa) on sparse noisy linear regression tasks. [Panels (a)-(d); plot data omitted.] 1(a) gives the number of undiscovered elements from supp(θ*) as label noise levels are increased. 1(b) shows the variation in running times with increasing dimensionality p. 1(c) gives the variation in running times (in log scale) when the true sparsity level s* is increased keeping p fixed; HTP and GraDeS are clearly much more scalable than L1 and FoBa. 1(d) shows the recovery properties of different IHT methods in a large condition number (κ = 50) setting as the size of the projected set is increased.

Proof. Let v^t = ∇_θ f(β^t). Then, using the RSS property, we get:

f(θ̂^t) − f(β^t) ≤ ⟨θ̂^t − β^t, v^t⟩ + (L/2) ∥θ̂^t − β^t∥₂² = (L/2) ∥θ̂^t − β^t∥₂² ≤ (L/2) · (ℓ/(s + ℓ − s*)) · ∥w − β^t∥₂², (7)

where w is any vector such that w_{Z^t} = 0 and ∥w∥₀ ≤ s*. The equality (ζ₁) follows by observing that v^t_{Z^t} = 0 and that supp(θ̂^t) ⊆ Z^t; the final inequality (ζ₂) follows from Lemma 1 and the fact that ∥w∥₀ ≤ s*. Now, using the RSC property and the fact that ∇_θ f(β^t) vanishes on Z^t, we have:

(α/2) ∥w − β^t∥₂² ≤ f(β^t) − f(w) ≤ f(β^t) − f(θ*). (8)

The result now follows by combining (7) and (8).

Theorem 4. Let f have RSC and RSS parameters given by α_{2s+s*}(f) = α and L_{2s+ℓ}(f) = L respectively. Call Algorithm 2 with f, ℓ ≥ s* and s ≥ 4 (L²/α²) ℓ + s* − ℓ ≥ 4 (L²/α²) s*. Also let θ* = arg min_{θ: ∥θ∥₀ ≤ s*} f(θ). Then, the τ-th iterate of Algorithm 2, for τ = O((L/α) · log(f(θ⁰)/ϵ)), satisfies f(θ^τ) − f(θ*) ≤ ϵ.

See Appendix C for a detailed proof.
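To make Algorithm 2 concrete, here is a minimal least-squares instantiation in NumPy (a CoSaMP/SP-style sketch: the fully corrective steps become restricted least-squares solves; all dimensions and names below are ours):

```python
import numpy as np

def fully_corrective(X, y, support):
    """arg min over supp(beta) within `support` of ||y - X beta||^2 (restricted least squares)."""
    beta = np.zeros(X.shape[1])
    coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    beta[support] = coef
    return beta

def two_stage_ht(X, y, s, ell, iters=20):
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(iters):
        g = X.T @ (X @ theta - y) / n                  # gradient of (1/2n)||y - X theta||^2
        S = set(np.flatnonzero(theta))
        off = [j for j in np.argsort(np.abs(g))[::-1] if j not in S][:ell]
        Z = np.array(sorted(S | set(off)))             # Z^t = S^t plus ell large-gradient indices
        beta = fully_corrective(X, y, Z)               # first corrective step
        keep = np.sort(np.argsort(np.abs(beta))[-s:])  # hard threshold to s coordinates
        theta = fully_corrective(X, y, keep)           # second corrective step
    return theta

rng = np.random.default_rng(2)
n, p, s_star = 300, 400, 8
theta_true = np.zeros(p)
supp = rng.choice(p, size=s_star, replace=False)
theta_true[supp] = rng.choice([-1.0, 1.0], size=s_star)
X = rng.standard_normal((n, p))
y = X @ theta_true                                     # noiseless, for a clean recovery check
theta_hat = two_stage_ht(X, y, s=2 * s_star, ell=s_star)
err = np.linalg.norm(theta_hat - theta_true)
```

On a well-conditioned noiseless instance like this, the iterate typically locks onto the true support within a few iterations, after which the corrective step recovers θ essentially exactly.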
6 Experiments

We conducted simulations on high-dimensional sparse linear regression problems to verify our predictions. Our experiments demonstrate that hard thresholding and projected gradient techniques not only offer recovery in the stochastic setting, but also provide much more scalable routines for it.

Data: Our problem setting is identical to the one described in the previous section. We fixed a parameter vector θ̄ by choosing s* random coordinates and setting them randomly to ±1 values. Data samples were generated as Z_i = (X_i, Y_i), where X_i ∼ N(0, I_p) and Y_i = ⟨θ̄, X_i⟩ + ξ_i with ξ_i ∼ N(0, σ²). We studied the effect of varying the dimensionality p, sparsity s*, sample size n and label noise level σ on the recovery properties of the various algorithms as well as on their run times. We chose baseline values of p = 20000, s* = 100, σ = 0.1, and n = f_o · s* log p, where f_o is the oversampling factor with default value f_o = 2. Keeping all other quantities fixed, we varied one of the quantities and generated independent data samples for the experiments.

Algorithms: We studied a variety of hard-thresholding-style algorithms including HTP [14], GraDeS [13] (or IHT [12]), CoSaMP [15], OMPR [17] and SP [16]. We compared them with a standard implementation of the L1 projected scaled sub-gradient technique [23] for the lasso problem, and with the greedy method FoBa [24] for the same problem.

Evaluation Metrics: For the baseline noise level σ = 0.1, we found that all the algorithms were able to recover the support set within an error of 2%. Consequently, our focus shifted to running times for these experiments. In the experiments where noise levels were varied, we recorded, for each method, the number of undiscovered support set elements.

Results: Figure 1 describes the results of our experiments in graphical form. For the sake of clarity we included only the HTP, GraDeS, L1 and FoBa results in these graphs. Graphs for the other algorithms (CoSaMP, SP and OMPR) can be seen in the supplementary material.
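The baseline data-generation recipe above is easy to reproduce at reduced scale; a minimal NumPy sketch of a GraDeS/IHT-style run on such data (dimensions are scaled down and the oversampling factor and step size are our choices, not the paper's baselines):

```python
import numpy as np

def iht(X, y, s, eta=0.5, iters=200):
    """Projected gradient descent (Algorithm 1) on f(theta) = (1/2n)||y - X theta||^2."""
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(iters):
        g = X.T @ (X @ theta - y) / n
        z = theta - eta * g
        nxt = np.zeros(p)
        top = np.argsort(np.abs(z))[-s:]      # P_s: keep the s largest-magnitude entries
        nxt[top] = z[top]
        theta = nxt
    return theta

rng = np.random.default_rng(3)
p, s_star, sigma, fo = 800, 10, 0.1, 6        # fo chosen larger than the paper's default
n = int(fo * s_star * np.log(p))              # for a comfortable margin at this small scale
theta_bar = np.zeros(p)
supp = rng.choice(p, size=s_star, replace=False)
theta_bar[supp] = rng.choice([-1.0, 1.0], size=s_star)
X = rng.standard_normal((n, p))
y = X @ theta_bar + sigma * rng.standard_normal(n)
theta_hat = iht(X, y, s=2 * s_star)           # enlarged projection size, per the theory
err = np.linalg.norm(theta_hat - theta_bar)
```

With ±1 signal entries and σ = 0.1 label noise, a run like this typically recovers the support exactly, leaving an estimation error on the order of σ√(s/n).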
The graphs indicate that whereas hard thresholding techniques are as effective as L1 and greedy techniques for recovery in noisy settings, as indicated by Figure 1(a), the former can be much more efficient and scalable than the latter. For instance, as Figure 1(b) shows, for the base level of p = 20000, HTP was 150× faster than the L1 method. For higher values of p, the runtime gap widened to more than 350×. We also note that in both these cases, HTP actually offered exact support recovery, whereas L1 was unable to recover 2 and 4 support elements respectively. Although FoBa was faster than L1 in the Figure 1(b) experiments, it was still slower than HTP, by 50× and 90× for p = 20000 and 25000 respectively. Moreover, due to its greedy and incremental nature, FoBa was found to suffer badly in settings with larger true sparsity levels. As Figure 1(c) indicates, for even moderate sparsity levels of s* = 300 and 500, FoBa is 60–75× slower than HTP. As mentioned before, the reason for this slowdown is the greedy approach followed by FoBa: whereas HTP took fewer than 5 iterations to converge on these two problems, FoBa spent 300 and 500 iterations respectively. GraDeS offered much smaller run times than L1 and FoBa, though it was still slower than HTP: by 30–40× for larger values of p and by 2–5× for larger values of s*.

Experiments on badly conditioned problems. We also ran experiments to verify the performance of IHT algorithms in the high condition number setting. The values of p, s* and σ were kept at baseline levels. After selecting the optimal parameter vector θ̄, we selected s*/2 random coordinates from its support and s*/2 random coordinates outside its support, and constructed a covariance matrix with heavy correlations between these chosen coordinates. The condition number of the resulting matrix was close to 50. Samples were drawn from this distribution, and the recovery properties of the different IHT-style algorithms were observed as the projected sparsity level s was increased.
Our results (see Figure 1(d)) corroborate our theoretical observation that these algorithms show a remarkable improvement in recovery properties on ill-conditioned problems when run with an enlarged projection size.

7 Discussion and Conclusions

In this work we studied iterative hard thresholding algorithms and showed that these techniques can offer global convergence guarantees for arbitrary, possibly non-convex, differentiable objective functions that nevertheless satisfy Restricted Strong Convexity/Smoothness (RSC/RSS) conditions. Our results apply to a large family of algorithms that includes existing algorithms such as IHT, GraDeS, CoSaMP, SP and OMPR. Previously, the analyses of these algorithms required stringent RIP conditions that did not allow the (restricted) condition number to be larger than universal constants specific to these algorithms. Our basic insight was to relax this stringent requirement by running these iterative algorithms with an enlarged support size. We showed that guarantees for high-dimensional M-estimation follow seamlessly from our results by invoking results on RSC/RSS conditions that have already been established in the literature for a variety of statistical settings. Our theoretical results put hard thresholding methods on par with those based on convex relaxation or greedy algorithms. Our experimental results demonstrate that hard thresholding methods outperform convex relaxation and greedy methods in terms of running time, sometimes by orders of magnitude, all the while offering competitive or better recovery properties. Our results apply to sparsity and low-rank structure, arguably the two most commonly used structures in high-dimensional statistical learning problems. In future work, it would be interesting to generalize our algorithms and their analyses to more general structures.
A unified analysis for general structures will probably create interesting connections with existing unified frameworks such as those based on decomposability [5] and atomic norms [25].

References

[1] Peter Bühlmann and Sara van de Geer. Statistics for High-Dimensional Data: Methods, Theory and Applications. Springer, 2011.
[2] Sahand Negahban and Martin J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39(2):1069–1097, 2011.
[3] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. IEEE Transactions on Information Theory, 57(10):6976–6994, 2011.
[4] Angelika Rohde and Alexandre B. Tsybakov. Estimation of high-dimensional low-rank matrices. The Annals of Statistics, 39(2):887–930, 2011.
[5] Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[6] Balas Kausik Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227–234, 1995.
[7] Ji Liu, Ryohei Fujimaki, and Jieping Ye. Forward-backward greedy algorithms for general convex smooth functions over a cardinality constraint. In Proceedings of the 31st International Conference on Machine Learning, pages 503–511, 2014.
[8] Ali Jalali, Christopher C. Johnson, and Pradeep D. Ravikumar. On learning discrete graphical models using greedy methods. In NIPS, pages 1935–1943, 2011.
[9] Shai Shalev-Shwartz, Nathan Srebro, and Tong Zhang. Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM Journal on Optimization, 20(6):2807–2832, 2010.
[10] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87 of Applied Optimization. Springer, 2004.
[11] P. Loh and M. J. Wainwright.
Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima, 2013. arXiv:1305.2436 [math.ST].
[12] Thomas Blumensath and Mike E. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27(3):265–274, 2009.
[13] Rahul Garg and Rohit Khandekar. Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property. In ICML, 2009.
[14] Simon Foucart. Hard thresholding pursuit: an algorithm for compressive sensing. SIAM Journal on Numerical Analysis, 49(6):2543–2563, 2011.
[15] Deanna Needell and Joel A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26:301–321, 2008.
[16] Wei Dai and Olgica Milenkovic. Subspace pursuit for compressive sensing signal reconstruction. IEEE Transactions on Information Theory, 55(5):2230–2249, 2009.
[17] Prateek Jain, Ambuj Tewari, and Inderjit S. Dhillon. Orthogonal matching pursuit with replacement. In Annual Conference on Neural Information Processing Systems, 2011.
[18] Sohail Bahmani, Bhiksha Raj, and Petros T. Boufounos. Greedy sparsity-constrained optimization. The Journal of Machine Learning Research, 14(1):807–841, 2013.
[19] Xiaotong Yuan, Ping Li, and Tong Zhang. Gradient hard thresholding pursuit for sparsity-constrained optimization. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[20] Yuchen Zhang, Martin J. Wainwright, and Michael I. Jordan. Lower bounds on the performance of polynomial-time algorithms for sparse linear regression. arXiv:1402.1918, 2014.
[21] P. Loh and M. J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity. Annals of Statistics, 40(3):1637–1664, 2012.
[22] Alekh Agarwal, Sahand N. Negahban, and Martin J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. Annals of Statistics, 40(5):2452–2482, 2012.
[23] Mark Schmidt. Graphical Model Structure Learning with L1-Regularization. PhD thesis, University of British Columbia, 2010.
[24] Tong Zhang. Adaptive forward-backward greedy algorithm for learning sparse representations. IEEE Transactions on Information Theory, 57:4689–4708, 2011.
[25] Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
Bayesian Sampling Using Stochastic Gradient Thermostats

Nan Ding∗ (Google Inc., dingnan@google.com), Youhan Fang∗ (Purdue University, yfang@cs.purdue.edu), Ryan Babbush (Google Inc., babbush@google.com), Changyou Chen (Duke University, cchangyou@gmail.com), Robert D. Skeel (Purdue University, skeel@cs.purdue.edu), Hartmut Neven (Google Inc., neven@google.com)

Abstract

Dynamics-based sampling methods, such as Hybrid Monte Carlo (HMC) and Langevin dynamics (LD), are commonly used to sample target distributions. Recently, such approaches have been combined with stochastic gradient techniques to increase sampling efficiency when dealing with large datasets. An outstanding problem with this approach is that the stochastic gradient introduces an unknown amount of noise which can prevent proper sampling after discretization. To remedy this problem, we show that one can leverage a small number of additional variables to stabilize momentum fluctuations induced by the unknown noise. Our method is inspired by the idea of a thermostat in statistical physics and is justified by a general theory.

1 Introduction

The generation of random samples from a posterior distribution is a pervasive problem in Bayesian statistics which has many important applications in machine learning. The Markov Chain Monte Carlo method (MCMC), proposed by Metropolis et al. [16], generates unbiased samples from a desired distribution when the density function is known up to a normalizing constant. However, traditional MCMC methods are based on random walk proposals which lead to highly correlated samples. On the other hand, dynamics-based sampling methods, e.g. Hybrid Monte Carlo (HMC) [6, 10], avoid this high degree of correlation by combining dynamic systems with the Metropolis step. The dynamic system uses information from the gradient of the log density to reduce the random walk effect, and the Metropolis step serves as a correction of the discretization error introduced by the numerical integration of the dynamic systems.
∗ indicates equal contribution.

The computational cost of HMC methods depends primarily on the gradient evaluation. In many machine learning problems, expensive gradient computations are a consequence of working with extremely large datasets. In such scenarios, methods based on stochastic gradients have been very successful. A stochastic gradient uses the gradient obtained from a random subset of the data to approximate the true gradient. This idea was first used in optimization [9, 19] but was recently adapted for sampling methods based on stochastic differential equations (SDEs), such as Brownian dynamics [1, 18, 24] and Langevin dynamics [5]. Due to discretization, stochastic gradients introduce an unknown amount of noise into the dynamic system. Existing methods sample correctly only when the step size is small or when a good estimate of the noise is available. In this paper, we propose a method based on SDEs that self-adapts to the unknown noise with the help of a small number of additional variables. This allows for the use of a larger discretization step, a smaller diffusion factor, or smaller minibatches to improve the sampling efficiency without sacrificing accuracy.

From the statistical physics perspective, all of these dynamics-based sampling methods are approaches that use dynamics to approximate a canonical ensemble [23]. In a canonical ensemble, the distribution of the states follows the canonical distribution, which corresponds to the target posterior distribution of interest. In attempting to sample from the canonical ensemble, existing methods have neglected the condition that the system temperature must remain near a target temperature (Eq. (4) of Sec. 3). When this requirement is ignored, noise introduced by stochastic gradients may drive the system temperature away from the target temperature and cause inaccurate sampling.
The additional variables in our method essentially play the role of a thermostat which controls the temperature and, as a consequence, handles the unknown noise. This approach can also be found by following a general recipe which helps in designing dynamic systems that produce correct samples. The rest of the paper is organized as follows. Section 2 briefly reviews the related background. Section 3 proposes the stochastic gradient Nosé-Hoover thermostat method, which maintains the canonical ensemble. Section 4 presents the general recipe for finding proper SDEs and mathematically shows that the proposed method produces samples from the correct target distribution. Section 5 compares our method with previous methods on synthetic data and on real-world machine learning applications. The paper is concluded in Section 6.

2 Background

Our objective is to generate random samples from the posterior probability density p(θ | X) ∝ p(X | θ) p(θ), where θ represents an n-dimensional parameter vector and X represents the data. The canonical form is p(θ | X) = (1/Z) exp(−U(θ)), where U(θ) = −log p(X | θ) − log p(θ) is referred to as the potential energy and Z is the normalizing constant. Here, we briefly review a few dynamics-based sampling methods, including HMC, LD, stochastic gradient LD (SGLD) [24], and stochastic gradient HMC (SGHMC) [5], while relegating a more comprehensive review to Appendix A.

HMC [17] works in an extended space Γ = (θ, p), where θ and p simulate the positions and the momenta of particles in a system. Although some works, e.g. [7, 8], make use of variable mass, we assume that all particles have unit constant mass (i.e., m_i = 1). The joint density of θ and p can be written as ρ(θ, p) ∝ exp(−H(θ, p)), where H(θ, p) = U(θ) + K(p) is called the Hamiltonian (the total energy). U(θ) is called the potential energy and K(p) = pᵀp/2 is called the kinetic energy; note that p has a standard normal distribution. The force on the system is defined as f(θ) = −∇U(θ).
It can be shown that the Hamiltonian dynamics dθ = p dt, dp = f(θ) dt maintain a constant total energy [17]. In each step of the HMC algorithm, one first randomizes p according to the standard normal distribution; then evolves (θ, p) according to the Hamiltonian dynamics (solved by numerical integrators); and finally uses the Metropolis step to correct the discretization error.

Langevin dynamics (with diffusion factor A) are described by the following SDE,

dθ = p dt,  dp = f(θ) dt − A p dt + √(2A) dW, (1)

where W is a vector of n independent Wiener processes (see Appendix A), and dW can be informally written as N(0, I dt), or simply N(0, dt) as in [5]. Brownian dynamics, dθ = f(θ) dt + N(0, 2dt), is obtained from Langevin dynamics by rescaling time t ← At and letting A → ∞; i.e., on long time scales, inertia effects can be neglected [11].

When the size of the dataset is big, the computation of the gradient of −log p(X | θ) = −Σ_{i=1}^N log p(x_i | θ) can be very expensive. In such situations, one can use the likelihood of a random subset of the data to approximate the true likelihood,

Ũ(θ) = −(N/Ñ) Σ_{i=1}^{Ñ} log p(x^{(i)} | θ) − log p(θ), (2)

where {x^{(i)}} represents a random subset of {x_i} and Ñ ≪ N. Define the stochastic force f̃(θ) = −∇Ũ(θ). The SGLD algorithm [24] uses f̃(θ) and the Brownian dynamics to generate samples,

dθ = f̃(θ) dt + N(0, 2dt).

In [5], the stochastic force with a discretization step h is approximated as h f̃(θ) ≃ h f(θ) + N(0, 2h B(θ)) (note that the argument is not rigorous and that other significant artifacts of discretization may have been neglected). The SGHMC algorithm uses a modified LD,

dθ = p dt,  dp = f̃(θ) dt − A p dt + N(0, 2(A I − B̂(θ)) dt), (3)

where B̂(θ) is intended to offset B(θ), the noise from the stochastic force. However, B̂(θ) is hard to estimate in practice and cannot be omitted when the discretization step h is not small enough.
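As a concrete instance of SGLD as reviewed above, consider sampling the posterior mean of a one-dimensional Gaussian; the minibatch gradient follows Eq. (2), while the model, prior width, and all constants below are our choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Data: x_i ~ N(mu_true, 1); target is the posterior of the mean under a N(0, 10^2) prior.
N, mu_true = 1000, 1.5
x = rng.normal(mu_true, 1.0, size=N)

def stoch_grad_U(theta, batch=100):
    """Minibatch estimate of grad U = -grad log p(x|theta) - grad log p(theta), as in Eq. (2)."""
    xb = rng.choice(x, size=batch, replace=False)
    return -(N / batch) * np.sum(xb - theta) + theta / 10.0 ** 2

# SGLD update: theta <- theta - h * grad U~(theta) + sqrt(2h) * standard normal noise.
h, T = 1e-4, 20000
theta, chain = 0.0, []
for t in range(T):
    theta = theta - h * stoch_grad_U(theta) + np.sqrt(2 * h) * rng.standard_normal()
    chain.append(theta)
samples = np.array(chain[T // 2:])            # discard burn-in
```

With these settings the chain mean lands near the posterior mean (approximately the sample mean of the data), but the chain's variance tends to be somewhat inflated relative to the exact posterior variance 1/(N + 10⁻²), illustrating the extra, unaccounted-for noise that the stochastic gradient injects at finite step size.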
Since poor estimation of B̂(θ) may lead to inaccurate sampling, we attempt to find a dynamic system which is able to adaptively fit to the noise without explicit estimation. The intuition comes from the practice of sampling a canonical ensemble in statistical physics. The Metropolis step in SDE-based samplers with stochastic gradients is sometimes omitted on large datasets, because evaluating the potential energy requires the entire dataset, which cancels the benefit of using stochastic gradients. Some recent work [2, 3, 14] attempts to estimate the Metropolis step using partial data. Although this is an interesting direction for future work, in this paper we do not consider applying a Metropolis step in conjunction with stochastic gradients. 3 Stochastic Gradient Thermostats In statistical physics, a canonical ensemble represents the possible states of a system in thermal equilibrium with a heat bath at fixed temperature T [23]. The probability of the states in a canonical ensemble follows the canonical distribution ρ(θ, p) ∝ exp(−H(θ, p)/(k_B T)), where k_B is the Boltzmann constant. A critical characteristic of the canonical ensemble is that the system temperature, defined as the mean kinetic energy, satisfies the following thermal equilibrium condition, k_B T / 2 = (1/n) E[K(p)], or equivalently, k_B T = (1/n) E[p⊤p]. (4) All dynamics-based sampling methods approximate the canonical ensemble to generate samples. In Bayesian statistics, n is the dimension of θ, and k_B T = 1, so that ρ(θ, p) ∝ exp(−H(θ, p)) and, more importantly, ρ_θ(θ) ∝ exp(−U(θ)). However, one key fact overlooked in previous methods is that dynamics which correctly simulate the canonical ensemble must maintain the thermal equilibrium condition (4). Besides its physical meaning, the condition is necessary for p to be distributed as its marginal canonical distribution ρ_p(p) ∝ exp(−K(p)). It can be verified that ordinary HMC and LD (1) with the true force both maintain (4).
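Condition (4) (with k_B T = 1) is easy to check empirically for momenta drawn from the marginal ρ_p(p) ∝ exp(−p⊤p/2); a quick numpy sanity check (the dimension and sample count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20000
p = rng.standard_normal((m, n))                   # m draws of p ~ N(0, I_n)
temperature = np.mean(np.sum(p * p, axis=1)) / n  # estimates (1/n) E[p^T p]
assert abs(temperature - 1.0) < 0.01              # thermal equilibrium, k_B T = 1
```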
However, after combination with the stochastic force f̃(θ), the dynamics (3) may drift away from thermal equilibrium if B̂(θ) is poorly estimated. Therefore, to generate correct samples, one needs to introduce a proper thermostat, which adaptively controls the mean kinetic energy. To this end, we introduce an additional variable ξ, and use the following dynamics (with diffusion factor A and k_B T = 1), dθ = p dt, dp = f̃(θ) dt − ξ p dt + √(2A) N(0, dt), (5) dξ = ((1/n) p⊤p − 1) dt. (6) Intuitively, if the mean kinetic energy is higher than 1/2, then ξ gets bigger and p experiences more friction in (5); on the other hand, if the mean kinetic energy is lower, then ξ gets smaller and p experiences less friction. Because (6) appears to be the same as the Nosé-Hoover thermostat [13] in statistical physics, we call our method the stochastic gradient Nosé-Hoover thermostat (SGNHT, Algorithm 1). In Section 4, we will show that (6) is a simplified version of a more general SGNHT method that is able to handle high-dimensional non-isotropic noise from f̃. But before that, let us first look at a 1-D illustration of SGNHT sampling in the presence of unknown noise.
Algorithm 1: Stochastic Gradient Nosé-Hoover Thermostat
Input: parameters h, A.
Initialize θ_(0) ∈ R^n, p_(0) ∼ N(0, I), and ξ_(0) = A;
for t = 1, 2, . . . do
  Evaluate ∇Ũ(θ_(t−1)) from (2);
  p_(t) = p_(t−1) − ξ_(t−1) p_(t−1) h − ∇Ũ(θ_(t−1)) h + √(2A) N(0, h);
  θ_(t) = θ_(t−1) + p_(t) h;
  ξ_(t) = ξ_(t−1) + ((1/n) p_(t)⊤ p_(t) − 1) h;
end
Illustrations of a Double-well Potential To illustrate that the adaptive update (6) is able to control the mean kinetic energy, and more importantly, produce correct sampling with unknown noise on the gradient, we consider the following double-well potential, U(θ) = (θ + 4)(θ + 1)(θ − 1)(θ − 3)/14 + 0.5. The target distribution is ρ(θ) ∝ exp(−U(θ)). To simulate the unknown noise, we let ∇Ũ(θ) h = ∇U(θ) h + N(0, 2Bh), where h = 0.01 and B = 1.
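Algorithm 1 transcribes almost line for line into numpy. The sketch below is one way to write it (the target, noise model, and parameter values used to exercise it are illustrative, mirroring the noise model ∇Ũ(θ)h = ∇U(θ)h + N(0, 2Bh) rather than copied from the paper):

```python
import numpy as np

def sgnht(grad_U_tilde, n, h=0.01, A=0.0, iters=50000, rng=None):
    """Algorithm 1: stochastic gradient Nose-Hoover thermostat."""
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(n)
    p = rng.standard_normal(n)
    xi = A
    samples = []
    for _ in range(iters):
        p = (p - xi * p * h - grad_U_tilde(theta) * h
             + np.sqrt(2.0 * A) * rng.normal(0.0, np.sqrt(h), size=n))
        theta = theta + p * h
        xi = xi + (p @ p / n - 1.0) * h        # thermostat update, Eq. (6)
        samples.append(theta.copy())
    return np.array(samples), xi
```

Running it on a 1-D standard Gaussian target with an artificially noisy gradient, ∇Ũ(θ) = θ + N(0, 2B/h), reproduces the qualitative behavior described below for Figure 1: ξ drifts toward B and the kinetic energy settles near 1/2.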
In the interest of clarity, we did not inject additional noise other than the noise from ∇Ũ(θ), namely A = 0. In Figure 1 we plot the estimated density based on 10^6 samples and the mean kinetic energy over iterations, when ξ is fixed at 0.1, 1, and 10 in turn, as well as when ξ follows our thermostat update (6). From Figure 1, when ξ = B = 1, the SDE is the ordinary Langevin dynamics. In this case, the sampling is accurate and the kinetic energy is controlled around 0.5. When ξ > B, the kinetic energy drops to a low value, and the sampling gets stuck in one local minimum; this is what happens in SGD optimization with momentum. When ξ < B, the kinetic energy gets too high, and the sampling looks like a random walk. For SGNHT, the sampling looks as accurate as the one with ξ = B and the kinetic energy is also controlled around 0.5. In fact, as shown in Appendix B, the value of ξ in SGNHT quickly converges to B = 1. Figure 1: The samples of ρ(θ) and the mean kinetic energy over iterations K(p) with ξ = 1 (1st), ξ = 10 (2nd), ξ = 0.1 (3rd), and the SGNHT (4th). The first three do not use a thermostat. The fourth column shows that the SGNHT method is able to sample accurately and maintains the mean kinetic energy with unknown noise. 4 The General Recipe In this section, we mathematically justify the proposed SGNHT method. We begin with a theorem showing why and how a sampler based on SDEs using stochastic gradients can produce the correct target distribution.
The theorem serves two purposes. First, one can examine whether a given SDE sampler is correct or not; the theorem is more general than previous results in [5, 24], which focus on justifying individual methods. Second, the theorem can serve as a general recipe for proposing new methods. As a concrete example of this approach, we show how to obtain SGNHT from the main theorem. 4.1 The Main Theorem Consider the following general stochastic differential equation that uses the stochastic force: dΓ = v(Γ) dt + N(0, 2 D(θ) dt), (7) where Γ = (θ, p, ξ), and both p and ξ are optional. Here v is a vector field that characterizes the deterministic part of the dynamics, and D(θ) = A + diag(0, B(θ), 0), where the injected noise A is known and constant, whereas the noise of the stochastic gradient B(θ) is unknown, may vary, and only appears in the blocks corresponding to rows of the momentum. Both A and B are symmetric positive semidefinite. Taking the dynamics of SGHMC as an example, it has Γ = (θ, p), v = (p, f − A p) and D(θ) = diag(0, A I − B̂(θ) + B(θ)). Let ρ(Γ) = (1/Z) exp(−H(Γ)) be the joint probability density of all variables, and write H as H(Γ) = U(θ) + Q(θ, p, ξ). The marginal density for θ must equal the target density, exp(−U(θ)) ∝ ∬ exp(−U(θ) − Q(θ, p, ξ)) dp dξ, (8) which will be referred to as the marginalization condition. Main Theorem. The stochastic process of θ generated by the stochastic differential equation (7) has the target distribution ρ_θ(θ) = (1/Z) exp(−U(θ)) as its stationary distribution, if ρ ∝ exp(−H) satisfies the marginalization condition (8), and ∇·(ρ v) = ∇∇⊤ : (ρ D), (9) where we use the concise notation ∇ = (∂/∂θ, ∂/∂p, ∂/∂ξ) (a column vector), with · representing the vector inner product x · y = x⊤y, and : representing the matrix double dot product X : Y = trace(X⊤Y). Proof. See Appendix C. Remark. The theorem implies that when the SDE is solved exactly (namely h → 0), the noise of the stochastic force has no effect, because lim_{h→0} D = A [5].
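As a concrete 1-D sanity check of the theorem, one can verify condition (9) symbolically for Langevin dynamics (1) with the true force, where H = U(θ) + p²/2, v = (p, −U′(θ) − Ap), and D = diag(0, A). A sketch using sympy, with U left as an arbitrary smooth potential:

```python
import sympy as sp

theta, p = sp.symbols('theta p', real=True)
A = sp.symbols('A', positive=True)
U = sp.Function('U')(theta)

H = U + p**2 / 2
rho = sp.exp(-H)

# drift v = (p, f - A*p) with f = -U'(theta); diffusion D = diag(0, A)
v_theta = p
v_p = -sp.diff(U, theta) - A * p

lhs = sp.diff(rho * v_theta, theta) + sp.diff(rho * v_p, p)  # div(rho v)
rhs = sp.diff(A * rho, p, 2)                                 # grad grad^T : (rho D)

assert sp.simplify(lhs - rhs) == 0   # condition (9) holds
```

Both sides reduce to A(p² − 1)ρ, so the stationarity condition is met for any smooth U.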
In this case, any dynamics that produce the correct distribution with the true gradient, such as the original Langevin dynamics, can also produce the correct distribution with the stochastic gradient. However, when there is discretization error, one must find proper H, v and A to ensure production of the correct distribution of θ. Towards this end, the theorem provides a general recipe for finding proper dynamics that can sample correctly in the presence of stochastic forces. To use this prescription, one may freely select the dynamics characterized by v and A as well as the joint stationary distribution for which the marginalization condition holds. Together, the selected v, A and ρ must satisfy the main theorem. The marginalization condition is important because for some stochastic differential equations there exists a ρ that makes (9) hold even though the marginalized distribution is not the target distribution. Therefore, care must be taken when designing the dynamics. In the following subsection, we use the proposed stochastic gradient Nosé-Hoover thermostat as an illustrative example of how our recipe may be used to discover new methods. We show more examples in Appendix D. 4.2 Revisiting the Stochastic Gradient Nosé-Hoover Thermostat Let us start from the following dynamics: dθ = p dt, dp = f dt − Ξ p dt + N(0, 2 D dt), where both Ξ and D are n × n matrices. Apparently, when Ξ ≠ D, these dynamics will not generate the correct target distribution (see Appendix D). Now let us add dynamics for Ξ, denoted by dΞ = v(Ξ) dt, and demonstrate an application of the main theorem. Let ρ(θ, p, Ξ) = (1/Z) exp(−H(θ, p, Ξ)) be our target distribution, where H(θ, p, Ξ) = U(θ) + Q(p, Ξ) and Q(p, Ξ) is to be determined. Clearly, the marginalization condition is satisfied for such an H(θ, p, Ξ). Let R_z denote the gradient of a function R with respect to z, and R_zz the Hessian. For simplicity, we constrain ∇_Ξ · v(Ξ) = 0, and assume that D is a constant matrix.
Then the LHS and RHS of (9) become LHS = (∇·v − ∇H·v) ρ = (−trace(Ξ) + f⊤p − Q_p⊤ f + Q_p⊤ Ξ p − Q_Ξ : v(Ξ)) ρ, RHS = D : ρ_pp = D : (Q_p Q_p⊤ − Q_pp) ρ. Equating both sides, one gets −trace(Ξ) + f⊤p − Q_p⊤ f + Q_p⊤ Ξ p − Q_Ξ : v(Ξ) = D : (Q_p Q_p⊤) − D : Q_pp. To cancel the f terms, set Q_p = p; then Q(p, Ξ) = (1/2) p⊤p + S(Ξ), which leaves S(Ξ) to be determined. The equation becomes −Ξ : I + Ξ : (p p⊤) − S_Ξ : v(Ξ) = D : (p p⊤) − D : I. (10) Obviously, v(Ξ) must be a function of p p⊤, since S_Ξ is independent of p. Also, D must only appear in S_Ξ, since we want v(Ξ) to be independent of the unknown D. Finally, v(Ξ) should be independent of Ξ, since we let ∇_Ξ · v(Ξ) = 0. Combining all three observations, we let v(Ξ) be a linear function of p p⊤, and S_Ξ a linear function of Ξ. With some algebra, one finds that v(Ξ) = (p p⊤ − I)/µ, (11) and S_Ξ = (Ξ − D) µ, which means Q(p, Ξ) = (1/2) p⊤p + (µ/2) (Ξ − D) : (Ξ − D). Equation (11) defines a general stochastic gradient Nosé-Hoover thermostat. When D = D I and Ξ = ξ I (here D and ξ are both scalars and I is the identity matrix), one can simplify (10) and obtain v(ξ) = (p⊤p − n)/µ. It reduces to (6) of the SGNHT in Section 3 when µ = n. The Nosé-Hoover thermostat without stochastic terms has ξ ∼ N(0, µ⁻¹). When there is a stochastic term N(0, 2 D dt), the distribution of Ξ changes to a matrix normal distribution MN(D, µ⁻¹ I, I) (in the scalar case, N(D, µ⁻¹)). This indicates that the thermostat absorbs the stochastic term D, since the expected value of Ξ equals D, leaving the marginal distribution of θ invariant. In the derivation above, we assumed that D is constant (by assuming B constant). This assumption is reasonable when the data size is large, so that the posterior of θ has small variance. In addition, the full dynamics of Ξ require an additional n × n set of equations of motion, which is generally too costly. In practice, we found that Algorithm 1 with a single scalar ξ works well.
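The derivation can be double-checked symbolically in the scalar 1-D case (n = 1), where H = U(θ) + p²/2 + μ(ξ − D)²/2, v = (p, −U′(θ) − ξp, (p² − 1)/μ), and the diffusion acts only on the momentum row; a sympy sketch:

```python
import sympy as sp

theta, p, xi = sp.symbols('theta p xi', real=True)
D, mu = sp.symbols('D mu', positive=True)
U = sp.Function('U')(theta)

H = U + p**2 / 2 + mu * (xi - D)**2 / 2
rho = sp.exp(-H)

# drift of (theta, p, xi); f = -U'(theta)
v = (p, -sp.diff(U, theta) - xi * p, (p**2 - 1) / mu)

lhs = (sp.diff(rho * v[0], theta)
       + sp.diff(rho * v[1], p)
       + sp.diff(rho * v[2], xi))          # div(rho v)
rhs = sp.diff(D * rho, p, 2)               # diffusion only in the p block

assert sp.simplify(lhs - rhs) == 0         # condition (9) for the thermostat
```

Both sides reduce to D(p² − 1)ρ, confirming that the thermostat variable absorbs the unknown diffusion D while leaving the marginal of θ untouched.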
5 Experiments 5.1 Gaussian Distribution Estimation Using Stochastic Gradient We first demonstrate our method on a simple example: Bayesian inference on 1-D normal distributions. The first part of the experiment estimates the mean of a normal distribution with known variance from N = 100 random examples drawn from N(0, 1). The likelihood is N(x_i | µ, 1), and an improper uniform prior on µ is assigned. In each iteration we randomly select Ñ = 10 examples. The noise of the stochastic gradient is a constant given Ñ (Appendix E). Figure 2 shows the density of 10^6 samples obtained by SGNHT (1st plot) and SGHMC (2nd plot). As we can see, SGNHT samples accurately without knowing the variance of the noise of the stochastic force under all parameter settings, whereas SGHMC samples accurately only when h is small and A is large. The 3rd plot shows the mean of the ξ values in SGNHT. When h = 0.001, ξ and A are close. However, when h = 0.01, ξ becomes much larger than A. This indicates that the discretization introduces a large noise from the stochastic gradient, and the ξ variable effectively absorbs the noise. The second part of the experiment estimates both the mean and the variance of the normal distribution. We use the likelihood function N(x_i | µ, γ⁻¹) and the Normal-Gamma distribution µ, γ ∼ N(µ | 0, γ) Gam(γ | 1, 1) as the prior. The variance of the stochastic gradient noise is no longer constant and depends on the values of µ and γ (see Appendix E). Similar density plots are available in Appendix E. Here we plot the Root Mean Square Error (RMSE) of the density estimation vs. the autocorrelation time of the observable µ + γ under various h and A in the 4th plot of Figure 2. We can see that SGNHT has significantly lower autocorrelation time than SGHMC at similar sampling accuracy. More details about the h and A values which produce the plot are also available in Appendix E.
Figure 2: Density of µ obtained by SGNHT with known variance (1st), density of µ obtained by SGHMC with known variance (2nd), mean of ξ over iterations with known variance in SGNHT (3rd), RMSE vs. autocorrelation time for both methods with unknown variance (4th). 5.2 Machine Learning Applications In the following machine learning experiments, we used a reformulation of (5) and (6) similar to [5], by letting u = p h, η = h², α = ξh and a = Ah. The resulting Algorithm 2 is provided in Appendix F. In [5], SGHMC has been extensively compared with SGLD, SGD and SGD with momentum; our experiments therefore focus on comparing SGHMC and SGNHT. Details of the experimental settings are described below, and the test results over various parameters are reported in Figure 3. Bayesian Neural Network We first evaluate the benchmark MNIST dataset, using a Bayesian Neural Network (BNN) as in [5]. The MNIST dataset contains 50,000 training examples, 10,000 validation examples, and 10,000 test examples. To show that our algorithm is able to handle the large stochastic gradient noise caused by a small minibatch, we chose a minibatch size of 20. Each algorithm is run for a total of 50k iterations, with the first 10k iterations as burn-in. The hidden layer size is 100, the parameter a is from {0.001, 0.01}, and η is from {2, 4, 6, 8} × 10⁻⁷.
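Carrying the substitution u = p h, η = h², α = ξh, a = Ah through Algorithm 1 gives an update in an SGD-with-momentum-style parameterization. The sketch below is our own direct substitution (Algorithm 2 proper is in Appendix F, so treat the details as an assumption):

```python
import numpy as np

def sgnht_reformulated(grad_U_tilde, n, eta, a, iters=50000, rng=None):
    """SGNHT rewritten with u = p*h, eta = h^2, alpha = xi*h, a = A*h."""
    rng = rng or np.random.default_rng(0)
    theta = np.zeros(n)
    u = rng.normal(0.0, np.sqrt(eta), size=n)   # u = p*h with p ~ N(0, I)
    alpha = a
    samples = []
    for _ in range(iters):
        u = (u - alpha * u - eta * grad_U_tilde(theta)
             + rng.normal(0.0, np.sqrt(2.0 * a * eta), size=n))
        theta = theta + u
        alpha = alpha + (u @ u / n - eta)       # thermostat in the new variables
        samples.append(theta.copy())
    return np.array(samples), alpha
```

The injected-noise variance 2aη follows from scaling Algorithm 1's momentum update by h: Var(h · √(2A) N(0, h)) = 2Ah³ = 2aη.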
Bayesian Matrix Factorization Next, we evaluate our method on two collaborative filtering tasks: the Movielens ml-1m dataset and the Netflix dataset, using the Bayesian probabilistic matrix factorization (BPMF) model [21]. The Movielens dataset contains 6,050 users and 3,883 movies with about 1M ratings, and the Netflix dataset contains 480,046 users and 17,000 movies with about 100M ratings. To conduct the experiments, each dataset is partitioned into training (80%) and testing (20%) sets, and the training set is further partitioned for 5-fold cross validation. Each minibatch contains 400 ratings for Movielens1M and 40k ratings for Netflix. Each algorithm is run for 100k iterations, with the first 20k iterations as burn-in. The base number is chosen as 10, the parameter a is from {0.01, 0.1}, and η is from {2, 4, 6, 8} × 10⁻⁷. Latent Dirichlet Allocation Finally, we evaluate our method on the ICML dataset using Latent Dirichlet Allocation [4]. The ICML dataset contains 765 documents from the abstracts of ICML proceedings from 2007 to 2011. After simple stopword removal, we obtained a vocabulary of about 2K words and about 44K total words. We used 80% of the documents for 5-fold cross validation and the remaining 20% for testing. Similar to [18], we used the semi-collapsed LDA whose posterior of θ_kw is provided in Appendix H. The Dirichlet prior parameter for the topic distribution of each document is set to 0.1, and the Gaussian prior for θ_kw is set to N(0.1, 1). Each minibatch contains 100 documents. Each algorithm is run for 50k iterations, with the first 10k iterations as burn-in. The topic number is 30, the parameter a is from {0.01, 0.1}, and η is from {2, 4, 6, 8} × 10⁻⁵. 5.2.1 Result Analysis From Figure 3, SGNHT is clearly more stable than SGHMC when the discretization step η is large. In all four datasets, especially with the smaller a, SGHMC gets progressively worse results as η increases.
With the largest η, SGHMC diverges (the green curve goes beyond the plotted range) due to its failure to handle the large unknown noise with small a. Figure 3 also gives a comprehensive view of the critical role that a plays. On one hand, a larger a may cause more random-walk behavior, which slows down convergence (as on Movielens1M and Netflix). On the other hand, it helps to increase ergodicity and to compensate for the unknown noise from the stochastic gradient (as on MNIST and ICML). Throughout the experiments, we find that the kinetic energy of SGNHT is always maintained around 0.5, while that of SGHMC is usually higher. Overall, SGNHT has better test performance with the parameters selected by cross validation (see Table 2 of Appendix G). Figure 3: The test error of MNIST (1st row), test RMSE of Movielens1M (2nd row), test RMSE of Netflix (3rd row) and test perplexity of ICML (4th row) datasets with their standard deviations (close to 0 in rows 2 and 3) under various η and a. 6 Conclusion and Discussion In this paper, we find proper dynamics that adaptively fit to the noise introduced by stochastic gradients. Experiments show that our method is able to control the temperature, estimate the unknown noise, and perform competitively in practice. Our method can be justified in continuous time by a general theorem. The discretization of continuous SDEs, however, introduces bias. This issue has been extensively studied in previous work such as [20, 22, 15, 12].
The existence of an invariant measure has been proven (e.g., Theorem 3.2 of [22] and Proposition 2.5 of [12]) and bounds on the error have been obtained (e.g., O(h²) for a symmetric splitting scheme [12]). Due to space limitations, we leave a deeper discussion of this topic and a more rigorous justification to future work. Acknowledgments We acknowledge Kevin P. Murphy and Julien Cornebise for helpful discussions and comments. References
[1] S. Ahn, A. K. Balan, and M. Welling. Bayesian Posterior Sampling via Stochastic Gradient Fisher Scoring. Proceedings of the 29th International Conference on Machine Learning, pages 1591–1598, 2012.
[2] A. K. Balan, Y. Chen, and M. Welling. Austerity in MCMC Land: Cutting the Metropolis-Hastings Budget. Proceedings of the 31st International Conference on Machine Learning, 2014.
[3] R. Bardenet, A. Doucet, and C. Holmes. Towards Scaling up Markov Chain Monte Carlo: an Adaptive Subsampling Approach. Proceedings of the 31st International Conference on Machine Learning, pages 405–413, 2014.
[4] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. J. Mach. Learn. Res., 3:993–1022, March 2003.
[5] T. Chen, E. B. Fox, and C. Guestrin. Stochastic Gradient Hamiltonian Monte Carlo. Proceedings of the 31st International Conference on Machine Learning, 2014.
[6] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Phys. Lett. B, 195:216–222, 1987.
[7] Y. Fang, J. M. Sanz-Serna, and R. D. Skeel. Compressible Generalized Hybrid Monte Carlo. J. Chem. Phys., 140:174108 (10 pages), 2014.
[8] M. Girolami and B. Calderhead. Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods. J. R. Statist. Soc. B, 73, Part 2:123–214 (with discussion), 2011.
[9] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic Variational Inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[10] A. M. Horowitz. A Generalized Guided Monte-Carlo Algorithm. Phys. Lett. B, 268:247–252, 1991.
[11] B. Leimkuhler and C. Matthews. Rational Construction of Stochastic Numerical Methods for Molecular Sampling. arXiv:1203.5428, 2012.
[12] B. Leimkuhler, C. Matthews, and G. Stoltz. The Computation of Averages from Equilibrium and Nonequilibrium Langevin Molecular Dynamics. IMA J. Num. Anal., 2014.
[13] B. Leimkuhler and S. Reich. A Metropolis Adjusted Nosé-Hoover Thermostat. Math. Modelling Numer. Anal., 43(4):743–755, 2009.
[14] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with Subsets of Data. arXiv:1403.5693, 2014.
[15] J. C. Mattingly, A. M. Stuart, and M. Tretyakov. Convergence of Numerical Time-averaging and Stationary Measures via Poisson Equations. SIAM J. Num. Anal., 48:552–577, 2014.
[16] N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller. Equation of State Calculations by Fast Computing Machines. J. Chem. Phys., 21:1087–1092, 1953.
[17] R. M. Neal. MCMC Using Hamiltonian Dynamics. arXiv:1206.1901, 2012.
[18] S. Patterson and Y. W. Teh. Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex. Advances in Neural Information Processing Systems 26, pages 3102–3110, 2013.
[19] H. Robbins and S. Monro. A Stochastic Approximation Method. The Annals of Mathematical Statistics, 22(3):400–407, 1951.
[20] G. Roberts and R. Tweedie. Exponential Convergence of Langevin Distributions and Their Discrete Approximations. Bernoulli, 2:341–363, 1996.
[21] R. Salakhutdinov and A. Mnih. Bayesian Probabilistic Matrix Factorization Using Markov Chain Monte Carlo. Proceedings of the 25th International Conference on Machine Learning, pages 880–887, 2008.
[22] D. Talay. Second Order Discretization Schemes of Stochastic Differential Systems for the Computation of the Invariant Law. Stochastics and Stochastics Reports, 29:13–36, 1990.
[23] M. E. Tuckerman. Statistical Mechanics: Theory and Molecular Simulation. Oxford University Press, 2010.
[24] M. Welling and Y. W. Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. Proceedings of the 28th International Conference on Machine Learning, 2011.
QUIC & DIRTY: A Quadratic Approximation Approach for Dirty Statistical Models Cho-Jui Hsieh, Inderjit S. Dhillon, Pradeep Ravikumar University of Texas at Austin Austin, TX 78712 USA {cjhsieh,inderjit,pradeepr}@cs.utexas.edu Stephen Becker University of Colorado at Boulder Boulder, CO 80309 USA stephen.becker@colorado.edu Peder A. Olsen IBM T.J. Watson Research Center Yorktown Heights, NY 10598 USA pederao@us.ibm.com Abstract In this paper, we develop a family of algorithms for optimizing "superposition-structured" or "dirty" statistical estimators for high-dimensional problems, involving the minimization of the sum of a smooth loss function and a hybrid regularization. Most current approaches are first-order methods, including proximal gradient and the Alternating Direction Method of Multipliers (ADMM). We propose a new family of second-order methods in which we approximate the loss function by a quadratic approximation. The superposition-structured regularizer then leads to a subproblem that can be efficiently solved by alternating minimization. We propose a general active subspace selection approach to speed up the solver by utilizing the low-dimensional structure given by the regularizers, and provide convergence guarantees for our algorithm. Empirically, we show that our approach is more than 10 times faster than state-of-the-art first-order approaches for latent variable graphical model selection problems and multi-task learning problems when there is more than one regularizer. For these problems, our approach appears to be the first algorithm that can extend active subspace ideas to multiple regularizers. 1 Introduction From the considerable amount of recent research on high-dimensional statistical estimation, it has become well understood that it is vital to impose structural constraints upon the statistical model parameters for their statistically consistent estimation.
These structural constraints take the form of sparsity, group sparsity, and low-rank structure, among others; see [18] for a unified statistical view of such structural constraints. In recent years, such "clean" structural constraints have frequently proven insufficient, and accordingly there has been a line of work on "superposition-structured" or "dirty model" constraints, where the model parameter is expressed as the sum of a number of parameter components, each of which has its own structure. For instance, [4, 6] consider the estimation of a matrix that is neither low-rank nor sparse, but which can be decomposed into the sum of a low-rank matrix and a sparse outlier matrix (this corresponds to robust PCA when the matrix-structured parameter is a covariance matrix). [5] use such a matrix decomposition to estimate the structure of latent-variable Gaussian graphical models. [15] in turn use a superposition of sparse and group-sparse structure for multi-task learning. For other recent work on such superposition-structured models, see [1, 7, 14]. For a unified statistical view of such superposition-structured models, and the resulting classes of M-estimators, please see [27]. Consider a general superposition-structured parameter θ̄ := Σ_{r=1}^k θ^(r), where {θ^(r)}_{r=1}^k are the parameter components, each with its own structure. Let {R^(r)(·)}_{r=1}^k be regularization functions suited to the respective parameter components, and let L(·) be a (typically non-linear) loss function that measures the goodness of fit of the superposition-structured parameter θ̄ to the data. We now have the notation to state a popular class of M-estimators, studied in the papers above, for these superposition-structured models: min_{{θ^(r)}_{r=1}^k} { L(Σ_r θ^(r)) + Σ_r λ_r R^(r)(θ^(r)) } := F(θ), (1) where {λ_r}_{r=1}^k are regularization penalties.
In (1), the overall regularization contribution is separable in the individual parameter components, but the loss function term itself is not, and depends on the sum θ̄ := Σ_{r=1}^k θ^(r). Throughout the paper, we use θ̄ to denote the overall superposition-structured parameter, and θ = [θ^(1), . . . , θ^(k)] to denote the concatenation of all the parameters. Due to the wide applicability of the class of M-estimators in (1), there has been a line of work on developing efficient optimization methods for solving special instances of this class [14, 26], in addition to the papers listed above. In particular, due to the superposition structure in (1) and the high dimensionality of the problem, this class seems naturally amenable to a proximal gradient descent approach or to the ADMM method [2, 17]; note that these are first-order methods and are thus very scalable. In this paper, we instead consider a proximal Newton framework to minimize the M-estimation objective in (1). Specifically, we use iterative quadratic approximations, and for each quadratic subproblem, we use an alternating minimization approach to individually update each of the parameter components comprising the superposition structure. Note that the Hessian of the loss might be structured, as for instance with the log-det loss for inverse covariance estimation and the logistic loss, which allows us to develop very efficient second-order methods. Even given this structure, solving the regularized quadratic problem in order to obtain the proximal Newton direction is too expensive in the high-dimensional setting. The key algorithmic contribution of this paper is the development of a general active subspace selection framework for general decomposable norms, which allows us to solve the proximal Newton steps over a significantly reduced search space. We are able to do so by leveraging the structural properties of decomposable regularization functions in the M-estimator in (1).
Our other key contribution is theoretical. While recent works [16, 21] have analyzed the convergence of proximal Newton methods, the superposition structure here poses a key caveat: since the loss function term depends only on the sum of the individual parameter components, the Hessian is not positive definite, as is required in previous analyses of proximal Newton methods. The theoretical analysis of [9] relaxes this assumption by instead assuming the loss is self-concordant, but again allows at most one regularizer. Another key theoretical difficulty is our use of active subspace selection, where we do not solve for the vanilla proximal Newton direction, but solve the proximal Newton step subproblem only over a restricted subspace, which moreover varies with each step. We deal with these issues and show super-linear convergence of the algorithm when the subproblems are solved exactly. We apply our algorithm to two real-world applications: latent Gaussian Markov random field (GMRF) structure learning (with low-rank + sparse structure) and multi-task learning (with sparse + group-sparse structure), and demonstrate that our algorithm is more than ten times faster than state-of-the-art methods. Overall, our algorithmic and theoretical developments open up this powerful but hitherto forbidding class of M-estimators in (1) to very large-scale problems. Outline of the paper. We begin by introducing some background in Section 2. In Section 3, we propose our quadratic approximation framework with active subspace selection for general dirty statistical models. We derive the convergence guarantees of our algorithm in Section 4. Finally, in Section 5, we apply our framework to two real applications, and show experimental comparisons with other state-of-the-art methods. 2 Background and Applications Decomposable norms. We consider the case where all the regularizers {R^(r)}_{r=1}^k are decomposable norms ∥·∥_{A_r}.
A norm ∥· ∥is decomposable at x if there is a subspace T and a vector e ∈T such that the sub differential at x has the following form: ∂∥x∥r = {ρ ∈Rn | ΠT (ρ) = e and ∥ΠT ⊥(ρ)∥∗ Ar ≤1}, (2) where ΠT (·) is the orthogonal projection onto T , and ∥x∥∗:= sup∥a∥≤1⟨x, a⟩is the dual norm of ∥· ∥. The decomposable norm was defined in [3, 18], and many interesting regularizers belong to this category, including: 2 • Sparse vectors: for the ℓ1 regularizer, T is the span of all points with the same support as x. • Group sparse vectors: suppose that the index set can be partitioned into a set of NG disjoint groups, say G = {G1, . . . , GNG}, and define the (1,α)-group norm by ∥x∥1,α := PNG t=1 ∥xGt∥α. If SG denotes the subset of groups where xGt ̸= 0, then the subgradient has the following form: ∂∥x∥1,α := {ρ | ρ = X t∈SG xGt/∥xGt∥∗ α + X t/∈SG mt}, where ∥mt∥∗ α ≤1 for all t /∈SG. Therefore, the group sparse norm is also decomposable with T := {x | xGt = 0 for all t /∈SG}. (3) • Low-rank matrices: for the nuclear norm regularizer ∥· ∥∗, which is defined to be the sum of singular values, the subgradient can be written as ∂∥X∥∗= {UV T + W | U T W = 0, WV = 0, ∥W∥2 ≤1}, where ∥· ∥2 is the matrix 2 norm and U, V are the left/right singular vectors of X corresponding to non-zero singular values. The above subgradient can also be written in the decomposable form (2), where T is defined to be span({uivT j }k i,j=1) where {ui}k i=1, {vi}k i=1 are the columns of U and V . Applications. Next we discuss some widely used applications of superposition-structured models, and the corresponding instances of the class of M-estimators in (1). • Gaussian graphical model with latent variables: let Θ denote the precision matrix with corresponding covariance matrix Σ = Θ−1. 
[5] showed that the precision matrix will have a low rank + sparse structure when some random variables are hidden, thus Θ = S −L can be estimated by solving the following regularized MLE problem: min S,L:L⪰0,S−L≻0 −log det(S −L) + ⟨S −L, Σ⟩+ λS∥S∥1 + λL trace(L). (4) While proximal Newton methods have recently become a dominant technique for solving the ℓ1regularized log-determinant problems [12, 10, 13, 19], our development is the first to apply proximal Newton methods to solve log-determinant problems with sparse and low rank regularizers. • Multi-task learning: given k tasks, each with sample matrix X(r) ∈Rnr×d (nr samples in the r-th task) and labels y(r), [15] proposes minimizing the following objective: k X r=1 ℓ(y(r), X(r)(S(r) + B(r))) + λS∥S∥1 + λB∥B∥1,∞, (5) where ℓ(·) is the loss function and S(r) is the r-th column of S. • Noisy PCA: to recover a covariance matrix corrupted with sparse noise, a widely used technique is to solve the matrix decomposition problem [6]. In contrast to the squared loss above, an exponential PCA problem [8] would use a Bregman divergence for the loss function. 3 Our proposed framework To perform a Newton-like step, we iteratively form quadratic approximations of the smooth loss function. Generally the quadratic subproblem will have a large number of variables and will be hard to solve. Therefore we propose a general active subspace selection technique to reduce the problem size by exploiting the structure of the regularizers R1, . . . , Rk. 3.1 Quadratic Approximation Given k sets of variables θ = [θ(1), . . . , θ(k)], and each θ(r) ∈Rn, let ∆(r) denote perturbation of θ(r), and ∆= [∆(1), . . . , ∆(k)]. We define g(θ) := L(Pk r=1 θ(r)) = L(¯θ) to be the loss function, and h(θ) := Pk r=1 R(r)(θ(r)) to be the regularization. 
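As a concrete instance of this g/h split, the latent-variable GMRF estimator (4) has g(θ) = −log det(S − L) + ⟨S − L, Σ⟩ and h(θ) = λ_S ∥S∥_1 + λ_L trace(L). A minimal numpy sketch (the function name is ours) that evaluates this objective while guarding the S − L ≻ 0 domain constraint:

```python
import numpy as np

def latent_gmrf_objective(S, L, Sigma, lam_S, lam_L):
    """Objective (4): g(S, L) + h(S, L).  The smooth loss g depends on the
    components only through Y = S - L; h is a sum of decomposable regularizers
    (elementwise l1 on S, trace norm on the PSD part L)."""
    Y = S - L
    sign, logdet = np.linalg.slogdet(Y)
    if sign <= 0:                          # outside the S - L > 0 domain
        return np.inf
    g = -logdet + np.trace(Y @ Sigma)      # <Y, Sigma> = trace(Y Sigma) for symmetric Y
    h = lam_S * np.abs(S).sum() + lam_L * np.trace(L)
    return g + h
```

Using `slogdet` rather than `det` avoids overflow for large p, and the sign check is what a line search over (4) would use to reject infeasible trial points.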
Given the current estimate θ, we form the quadratic approximation of the smooth loss function: ¯g(θ + ∆) = g(θ) + k X r=1 ⟨∆(r), G⟩+ 1 2∆T H∆, (6) where G = ∇L(¯θ) is the gradient of L and H is the Hessian matrix of g(θ). Note that ∇¯θL(¯θ) = ∇θ(r)L(¯θ) for all r so we simply write ∇and refer to the gradient at ¯θ as G (and similarly for ∇2). By the chain rule, we can show that 3 Lemma 1. The Hessian matrix of g(θ) is H := ∇2g(θ) =   H · · · H ... ... ... H · · · H  , H := ∇2L(¯θ). (7) In this paper we focus on the case where H is positive definite. When it is not, we add a small constant ϵ to the diagonal of H to ensure that each block is positive definite. Note that the full Hessian, H, will in general, not be positive definite (in fact rank(H) = rank(H)). However, based on its special structure, we can still give convergence guarantees (along with rate of convergence) for our algorithm. The Newton direction d is defined to be: [d(1), . . . , d(k)] = argmin ∆(1),...,∆(k) ¯g(θ + ∆) + k X r=1 λr∥θ(r) + ∆(r)∥Ar := QH(∆; θ). (8) The quadratic subproblem (8) cannot be directly separated into k parts because the Hessian matrix (7) is not a block-diagonal matrix. Also, each set of parameters has its own regularizer, so it is hard to solve them all together. Therefore, to solve (8), we propose a block coordinate descent method. At each iteration, we pick a variable set ∆(r) where r ∈{1, 2, . . . , k} by a cyclic (or random) order, and update the parameter set ∆(r) while keeping other parameters fixed. Assume ∆is the current solution (for all the variable sets), then the subproblem with respect to ∆(r) can be written as ∆(r) ←argmin d∈Rn 1 2dT Hd + ⟨d, G + X t:r̸=t H∆(t)⟩+ λr∥θ(r) + d∥Ar. (9) The subproblem (9) is just a typical quadratic problem with a specific regularizer, so there already exist efficient algorithms for solving it for different choices of ∥· ∥A. 
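To make the block coordinate descent scheme concrete, here is a numpy sketch for the special case where every regularizer is an ℓ1 norm, with each block subproblem (9) solved by plain ISTA; the function names, the inner solver choice, and the maintenance of the cross term Σ_t H∆^(t) as a running vector are our implementation decisions, not prescriptions from the paper:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_quadratic_subproblem(H, G, theta, lams, sweeps=10, inner=50):
    """Block coordinate descent for subproblem (8) with l1 regularizers.
    Each block solve is (9): minimize over d
        0.5 d^T H d + <d, G + sum_{t != r} H Delta_t> + lam_r ||theta_r + d||_1,
    handled here by ISTA.  H is the shared n x n Hessian block of the loss."""
    k, n = len(theta), H.shape[0]
    Delta = [np.zeros(n) for _ in range(k)]
    H_sum = np.zeros(n)                        # maintains sum_t H @ Delta_t
    step = 1.0 / np.linalg.eigvalsh(H)[-1]     # 1 / lambda_max(H) for ISTA
    for _ in range(sweeps):
        for r in range(k):
            H_sum -= H @ Delta[r]              # remove block r's contribution
            b = G + H_sum                      # linear term of (9) for block r
            d = Delta[r]
            for _ in range(inner):             # ISTA on w = theta_r + d
                grad = H @ d + b
                w = soft_threshold(theta[r] + d - step * grad, step * lams[r])
                d = w - theta[r]
            Delta[r] = d
            H_sum += H @ d                     # restore updated contribution
    return Delta
```

Each inner ISTA step monotonically decreases the block objective (step size 1/λ_max(H)), so the overall quadratic objective Q_H(∆; θ) is non-increasing across sweeps; in practice one would replace ISTA by the specialized solvers cited below and replace `H @ d` by a structured Hessian-vector product.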
For the ℓ1 norm regularizer, coordinate descent methods can be applied to solve (9) efficiently, as in [12, 21]; (accelerated) proximal gradient descent or a projected Newton method can also be used, as shown in [19]. For a general atomic norm, where there might be infinitely many atoms (coordinates), a greedy coordinate descent approach can be applied, as shown in [22]. To iterate between different groups of parameters, we have to maintain the term Σ_{r=1}^k H∆^(r) during the Newton iteration. Directly computing H∆^(r) requires O(n^2) flops; however, the Hessian matrix often has special structure under which H∆^(r) can be computed efficiently. For example, in the inverse covariance estimation problem H = Θ^{-1} ⊗ Θ^{-1}, where Θ^{-1} is the current estimate of the covariance, and in the empirical risk minimization problem H = XDX^T, where X is the data matrix and D is diagonal. After solving the subproblem (8), we have to search for a suitable step size. We apply an Armijo rule for line search [24], where we test the step sizes α = 2^0, 2^{-1}, . . . until the following sufficient decrease condition is satisfied for a pre-specified σ ∈ (0, 1) (typically σ = 10^{-4}): F(θ + α∆) ≤ F(θ) + ασδ, δ = ⟨G, ∆⟩ + Σ_{r=1}^k λ_r ∥θ^(r) + ∆^(r)∥_{A_r} − Σ_{r=1}^k λ_r ∥θ^(r)∥_{A_r}. (10) 3.2 Active Subspace Selection Since the quadratic subproblem (8) contains a large number of variables, directly applying the above quadratic approximation framework is not efficient. In this subsection, we provide a general active subspace selection technique, which dramatically reduces the number of variables by exploiting the structure of the regularizers. A similar method has been discussed in [12] for the ℓ1 norm and in [11] for the nuclear norm, but it has not been generalized to all decomposable norms. Furthermore, a key point to note is that in this paper our active subspace selection is not only a heuristic, but comes with strong convergence guarantees that we derive in Section 4.
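Before moving on to subspace selection, the Armijo rule (10) above is simple enough to sketch generically; the function name and the generic `F`/`delta` arguments are ours (in practice δ is assembled from the gradient and the regularizer values already computed when solving (8)):

```python
def armijo_step_size(F, theta, Delta, delta, sigma=1e-4, max_halvings=30):
    """Backtracking line search for condition (10): try alpha = 2^0, 2^-1, ...
    until F(theta + alpha * Delta) <= F(theta) + alpha * sigma * delta, where
    delta = <G, Delta> + sum_r lam_r ||theta_r + Delta_r|| - sum_r lam_r ||theta_r||.
    `F` evaluates the full objective on a list of parameter blocks."""
    F0 = F(theta)
    alpha = 1.0
    for _ in range(max_halvings):
        trial = [t + alpha * d for t, d in zip(theta, Delta)]
        if F(trial) <= F0 + alpha * sigma * delta:
            return alpha
        alpha *= 0.5
    return alpha
```

For a good Newton direction the unit step α = 1 is accepted immediately, which is what makes the asymptotic quadratic convergence of Section 4 attainable in practice.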
Given the current θ, our subspace selection approach partitions each θ(r) into S(r) fixed and S(r) free = (S(r) fixed)⊥and then restricts the search space of the Newton direction in (8) within S(r) free, which yields the following quadratic approximation problem: [d(1), . . . , d(k)] = argmin ∆(1)∈S(1) free ,...,∆(k)∈S(k) free ¯g(θ + ∆) + k X r=1 λr∥θ(r) + ∆(r)∥Ar. (11) 4 Each group of parameter has its own fixed/free subspace, so we now focus on a single parameter component θ(r). An ideal subspace selection procedure would satisfy: Property (I). Given the current iterate θ, any updates along directions in the fixed set, for instance as θ(r) ←θ(r) + a, a ∈S(r) fixed, does not improve the objective function value. Property (II). The subspace Sfree converges to the support of the final solution in a finite number of iterations. Suppose given the current iterate, we first do updates along directions in the fixed set, and then do updates along directions in the free set. Property (I) ensures that this is equivalent to ignoring updates along directions in the fixed set in this current iteration, and focusing on updates along the free set. As we will show in the next section, this property would suffice to ensure global convergence of our procedure. Property (II) will be used to derive the asymptotic quadratic convergence rate. We will now discuss our active subspace selection strategy which will satisfy both properties above. Consider the parameter component θ(r), and its corresponding regularizer ∥· ∥Ar. Based on the definition of decomposable norm in (2), there exists a subspace Tr where ΠTr(ρ) is a fixed vector for any subgradient of ∥· ∥Ar. The following proposition explores some properties of the subdifferential of the overall objective F(θ) in (1). Proposition 1. Consider any unit-norm vector a, with ∥a∥Ar = 1, such that a ∈T ⊥ r . (a) The inner-product of the sub-differential ∂θ(r)F(θ) with a satisfies: ⟨a, ∂θ(r)F(θ)⟩∈[⟨a, G⟩−λr, ⟨a, G⟩+ λr]. 
(12) (b) Suppose |⟨a, G⟩| ≤λr. Then, 0 ∈argminσ F(θ + σa). See Appendix 7.8 for the proof. Note that G = ∇L(¯θ) denotes the gradient of L. The proposition thus implies that if |⟨a, G⟩| ≤λr and S(r) fixed ⊂T ⊥ r then Property (I) immediately follows. The difficulty is that the set {a | |⟨a, G⟩| ≤λr} is possibly hard to characterize, and even if we could characterize this set, it may not be amenable enough for the optimization solvers to leverage in order to provide a speedup. Therefore, we propose an alternative characterization of the fixed subspace: Definition 1. Let θ(r) be the current iterate, prox(r) λ be the proximal operator defined by prox(r) λ (x) = argmin y 1 2∥y −x∥2 + λ∥y∥Ar, and Tr(x) be the subspace for the decomposable norm (2) ∥· ∥Ar at point x. We can define the fixed/free subset at θ(r) as: S(r) fixed := [T (θ(r))]⊥∩[T (prox(r) λr (G))]⊥, S(r) free = S(r) fixed ⊥. (13) It can be shown that from the definition of the proximal operator, and Definition 1, it holds that |⟨a, G⟩| < λr, so that we would have local optimality in the direction a as before. We have the following proposition: Proposition 2. Let S(r) fixed be the fixed subspace defined in Definition 1. We then have: 0 = argmin ∆(r)∈S(r) fixed QH([0, . . . , 0, ∆(r), 0, . . . , 0]; θ). We will prove that Sfree as defined above converges to the final support in Section 4, as required in Property (II) above. We will now detail some examples of the fixed/free subsets defined above. • For ℓ1 regularization: Sfixed = span{ei | θi = 0 and |∇iL(¯θ)| ≤λ} where ei is the ith canonical vector. • For nuclear norm regularization: the selection scheme can be written as Sfree = {UAMV T A | M ∈Rk×k}, (14) where UA = span(U, Ug), VA = span(V, Vg), with Θ = UΣV T is the thin SVD of Θ and Ug, Vg are the left and right singular vectors of proxλ(Θ−∇L(Θ)). The proximal operator proxλ(·) in this case corresponds to singular-value soft-thresholding, and can be computed by randomized SVD or the Lanczos algorithm. 
• For group sparse regularization: in the (1, 2)-group norm case, let SG be the set of nonzero groups; then the fixed groups FG can be defined by FG := {i | i ∉ SG and ∥∇_{Gi} L(¯θ)∥ ≤ λ}, and the free subspace will be Sfree = {θ | θ_i = 0 ∀i ∈ FG}. (15) Figure 3 (in the appendix) shows that active subspace selection can significantly improve the speed of the block coordinate descent algorithm [20].
Algorithm 1: QUIC & DIRTY: Quadratic Approximation Framework for Dirty Statistical Models
Input: Loss function L(·), regularizers λ_r∥·∥_{A_r} for r = 1, . . . , k, and initial iterate θ_0.
Output: Sequence {θ_t} such that {¯θ_t} converges to ¯θ⋆.
1 for t = 0, 1, . . . do
2   Compute ¯θ_t ← Σ_{r=1}^k θ_t^(r).
3   Compute ∇L(¯θ_t).
4   Compute Sfree by (13).
5   for sweep = 1, . . . , T_outer do
6     for r = 1, . . . , k do
7       Solve the subproblem (9) within S^(r)_free.
8       Update Σ_{r=1}^k ∇²L(¯θ_t)∆^(r).
9   Find the step size α by (10).
10  θ^(r) ← θ^(r) + α∆^(r) for all r = 1, . . . , k.
4 Convergence The recently developed theoretical analyses of proximal Newton methods [16, 21] cannot be directly applied here because (1) we have the active subspace selection step, and (2) the Hessian matrix for each quadratic subproblem is not positive definite. We first prove global convergence of our algorithm when the quadratic approximation subproblem (11) is solved exactly. Interestingly, in our proof we show that active subspace selection can be modeled within the framework of the block coordinate gradient descent algorithm [24] with a carefully designed Hessian approximation, and by making this connection we are able to prove global convergence. Theorem 1. Suppose L(·) is convex (not necessarily strongly convex) and the quadratic subproblem (8) at each iteration is solved exactly. Then Algorithm 1 converges to an optimal solution. The proof is in Appendix 7.1. Next we consider the case where L(¯θ) is strongly convex.
Note that even when L(¯θ) is strongly convex with respect to ¯θ, L(Pk r=1 θ(r)) will not be strongly convex in θ (if k > 1) and there may exist more than one optimal solution. However, we show that all solutions give the same ¯θ := Pk r=1 θ(r). Lemma 2. Assume L(·) is strongly convex, and {x(r)}k r=1, {y(r)}k r=1 are two optimal solutions of (1), then Pk r=1 x(r) = Pk r=1 y(r). The proof is in Appendix 7.2. Next, we show that S(r) free (from Definition 1) will converge to the final support ¯T (r) for each parameter set r = 1, . . . , k. Let ¯θ ⋆be the global minimizer (which is unique as shown in Lemma 2), and assume that we have ∥Π( ¯T (r))⊥ ∇L(¯θ ⋆)  ∥∗ Ar < λr ∀r = 1, . . . , k. (16) This is the generalization of the assumption used in earlier literature [12] where only ℓ1 regularization was considered. The condition is similar to strict complementary in linear programming. Theorem 2. If L(·) is strongly convex and assumption (16) holds, then there exists a finite T > 0 such that S(r) free = ¯T (r) ∀r = 1, . . . , k after t > T iterations. The proof is in Appendix 7.3. Next we show that our algorithm has an asymptotic quadratic convergence rate (the proof is in Appendix 7.4). Theorem 3. Assume that ∇2L(·) is Lipschitz continuous, and assumption (16) holds. If at each iteration the quadratic subproblem (8) is solved exactly, and L(·) is strongly convex, then our algorithm converges with asymptotic quadratic convergence rate. 6 5 Applications We demonstrate that our algorithm is extremely efficient for two applications: Gaussian Markov Random Fields (GMRF) with latent variables (with sparse + low rank structure) and multi-task learning problems (with sparse + group sparse structure). 5.1 GMRF with Latent Variables We first apply our algorithm to solve the latent feature GMRF structure learning problem in eq (4), where S ∈Rp×p is the sparse part, L ∈Rp×p is the low-rank part, and we require L = LT ⪰ 0, S = ST and Y = S −L ≻0 (i.e. θ(2) = −L). 
In this case, L(Y ) = −log det(Y ) + ⟨Σ, Y ⟩, hence ∇2L(Y ) = Y −1 ⊗Y −1, and ∇L(Y ) = Σ −Y −1. (17) Active Subspace. For the sparse part, the free subspace is a subset of indices {(i, j) | Sij ̸= 0 or |∇ijL(Y )| ≥λ}. For the low-rank part, the free subspace can be presented as {UAMV T A | M ∈Rk×k} where UA and VA are defined in (14). Updating ∆L. To solve the quadratic subproblem (11), first we discuss how to update ∆L using subspace selection. The subproblem is min ∆L=U∆DU T :L+∆L⪰0 1 2 trace(∆LY −1∆LY −1)+trace((Y −1−Σ−Y −1∆SY −1)∆L)+λL∥L+∆L∥∗, and since ∆L is constrained to be a perturbation of L = UAMU T A so that we can write ∆L = UA∆MU T A, and the subproblem becomes min ∆M:M+∆M⪰0 1 2 trace( ¯Y ∆M ¯Y ∆M) + trace(¯Σ∆M) + λL trace(M + ∆M) := q(∆M), (18) where ¯Y := U T AY −1UA and ¯Σ := U T A(Y −1 −Σ −Y −1∆SY −1)UA. Therefore the subproblem (18) becomes a k × k dimensional problem where k ≪p. To solve (18), we first check if the closed form solution exists. Note that ∇q(∆M) = ¯Y ∆M ¯Y + ¯Σ + λLI, thus the minimizer is ∆M = −¯Y −1(¯Σ + λLI) ¯Y −1 if M + ∆M ⪰0. If not, we solve the subproblem by the projected gradient descent method, where each step only requires O(k2) time. Updating ∆S. The subproblem with respect to ∆S can be written as min ∆S 1 2 vec(∆S)T (Y −1⊗Y −1) vec(∆S)+trace((Σ−Y −1−Y −1(∆L)Y −1)∆S)+λS∥S +∆S∥1, In our implementation we apply the same coordinate descent procedure proposed in QUIC [12] to solve this subproblem. Results. We compare our algorithm with two state-of-the-art software packages. The LogdetPPA algorithm was proposed in [26] and used in [5] to solve (4). The PGALM algorithm was proposed in [17]. We run our algorithm on three gene expression datasets: the ER dataset (p = 692), the Leukemia dataset (p = 1255), and a subset of the Rosetta dataset (p = 2000)1 For the parameters, we use λS = 0.5, λL = 50 for the ER and Leukemia datasets, which give us low-rank and sparse results. 
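Stepping back to the ∆M subproblem (18): the strategy described above (try the closed-form stationary point, fall back to projected gradient when the PSD constraint binds) can be sketched as follows. The function names are ours, and the fallback is a bare-bones projected gradient loop using a full eigendecomposition for the PSD projection, so it costs O(k^3) per step rather than the O(k^2) of the specialized implementation mentioned in the text:

```python
import numpy as np

def project_psd(M):
    """Projection onto the PSD cone via eigen-decomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.maximum(w, 0.0)) @ V.T

def update_delta_M(Ybar, Sbar, M, lam_L, pg_iters=200, step=None):
    """Solve (18): min_{M + dM >= 0} 0.5 tr(Ybar dM Ybar dM) + tr(Sbar dM)
       + lam_L tr(M + dM).  First try the unconstrained stationary point
       dM = -Ybar^{-1} (Sbar + lam_L I) Ybar^{-1}; if M + dM is PSD we are
       done, otherwise run projected gradient on dM."""
    k = Ybar.shape[0]
    C = Sbar + lam_L * np.eye(k)
    Yinv = np.linalg.inv(Ybar)
    dM = -Yinv @ C @ Yinv
    if np.all(np.linalg.eigvalsh(M + dM) >= -1e-10):
        return dM
    if step is None:
        step = 1.0 / (np.linalg.eigvalsh(Ybar)[-1] ** 2)  # 1/L, L = ||Ybar||_2^2
    for _ in range(pg_iters):
        grad = Ybar @ dM @ Ybar + C           # gradient of q at dM
        dM = project_psd(M + dM - step * grad) - M
    return dM
```

The gradient line matches ∇q(∆M) = ¯Y∆M¯Y + ¯Σ + λ_L I from the text, and the projection is onto the feasible set {∆M : M + ∆M ⪰ 0}.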
For the Rosetta dataset, we use the parameters suggested in LogdetPPA: λ_S = 0.0313, λ_L = 0.1565. The results in Figure 1 show that our algorithm is more than 10 times faster than the other algorithms. Note that in the beginning PGALM tends to produce infeasible solutions (L or S − L is not positive definite); these iterates are not plotted in the figures. Our proximal Newton framework has two algorithmic components: the quadratic approximation and our active subspace selection. From Figure 1 we can observe that although our algorithm is a Newton-like method, the time cost of each iteration is similar to, or even cheaper than, that of the first-order methods. The reasons are that (1) we take advantage of active subspace selection, and (2) the problem has the special Hessian structure (17), under which computing the Hessian is no more expensive than computing the gradient. To delineate the contribution of the quadratic approximation to the gain in convergence speed, we further compare our algorithm with an alternating minimization approach for solving (4), combined with our active subspace selection. Such an alternating minimization approach iteratively fixes one of S, L and updates the other; we defer the detailed algorithmic and implementation details to Appendix 7.6 for reasons of space. The results show that by using the quadratic approximation we obtain a much faster convergence rate (see Figure 2 in Appendix 7.6).
1 The full dataset has p = 6316, but the other methods cannot solve a problem of this size.
Figure 1: Comparison of algorithms on the latent feature GMRF problem using gene expression datasets: objective value versus time (sec) for QUIC & DIRTY, PGALM, and LogdetPPA on (a) the ER dataset, (b) the Leukemia dataset, and (c) the Rosetta dataset. Our algorithm is much faster than PGALM and LogdetPPA.
Table 1: Comparisons on multi-task problems. For the dirty models (sparse + group sparse), each entry reports test error / training time; Lasso and Group Lasso are single-regularizer baselines.

dataset | # training samples | relative error | QUIC & DIRTY | proximal gradient | ADMM | Lasso | Group Lasso
USPS | 100 | 10^-1 | 8.3% / 0.42s | 8.5% / 1.8s | 8.3% / 1.3s | 10.27% | 8.36%
USPS | 100 | 10^-4 | 7.47% / 0.75s | 7.49% / 10.8s | 7.47% / 4.5s | |
USPS | 400 | 10^-1 | 2.92% / 1.01s | 2.9% / 9.4s | 3.0% / 3.6s | 4.87% | 2.93%
USPS | 400 | 10^-4 | 2.5% / 1.55s | 2.5% / 35.8s | 2.5% / 11.0s | |
RCV1 | 1000 | 10^-1 | 18.91% / 10.5s | 18.5% / 47s | 18.9% / 23.8s | 22.67% | 20.8%
RCV1 | 1000 | 10^-4 | 18.45% / 23.1s | 18.49% / 430.8s | 18.5% / 259s | |
RCV1 | 5000 | 10^-1 | 10.54% / 42s | 10.8% / 541s | 10.6% / 281s | 13.67% | 12.25%
RCV1 | 5000 | 10^-4 | 10.27% / 87s | 10.27% / 2254s | 10.27% / 1191s | |

5.2 Multiple-task learning with superposition-structured regularizers Next we solve the multi-task learning problem (5), where the parameters are a sparse matrix S ∈ R^{d×k} and a group sparse matrix B ∈ R^{d×k}. Instead of using the squared loss (as in [15]), we consider the logistic loss ℓ_logistic(y, a) = log(1 + e^{−ya}), which gives better performance, as seen by comparing Table 1 with the results in [15]. Here the Hessian matrix again has a special structure: H = XDX^T, where X is the data matrix and D is a diagonal matrix; Appendix 7.7 gives a detailed description of how to apply our algorithm to this problem. Results. We follow [15] and transform multi-class problems into multi-task problems. For a multi-class dataset with k classes and n samples, for each r = 1, . . . , k we generate y_r ∈ {0, 1}^n such that y^(r)_i = 1 if and only if the i-th sample is in class r. Our first dataset is the USPS dataset, which was first collected in [25] and subsequently widely used in multi-task papers. On this dataset, the use of several regularizers is crucial for good performance. For example, [15] demonstrates that on USPS, using lasso and group lasso regularization together outperforms models with a single regularizer.
However, they only consider the squared loss in their paper, whereas we consider a logistic loss, which leads to better performance. For example, we get a 7.47% error rate using 100 samples on the USPS dataset, while with the squared loss the error rate is 10.8% [15]. Our second dataset is the larger document dataset RCV1, downloaded from LIBSVM Data, which has 53 classes and 47,236 features. We show that our algorithm is much faster than the other algorithms on both datasets, especially on RCV1, where we are more than 20 times faster than proximal gradient descent. Here our subspace selection technique works well because we expect the active subspace at the true solution to be small.
6 Acknowledgements
This research was supported by NSF grants CCF-1320746 and CCF-1117055. C.-J.H. also acknowledges support from an IBM PhD fellowship. P.R. acknowledges the support of ARO via W911NF12-1-0390 and NSF via IIS-1149803, IIS-1447574, and DMS-1264033. S.R.B. was supported by an IBM Research Goldstine Postdoctoral Fellowship while the work was performed.
References
[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Annals of Statistics, 40(2):1171–1197, 2012.
[2] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] E. Candes and B. Recht. Simple bounds for recovering low-complexity models. Mathematical Programming, 2012.
[4] E. J. Candes, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? J. Assoc. Comput. Mach., 58(3):1–37, 2011.
[5] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. The Annals of Statistics, 2012.
[6] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM J. Optim., 21(2):572–596, 2011.
[7] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. IEEE Transactions on Information Theory, 59(7):4324–4337, 2013.
[8] M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal component analysis to the exponential family. In NIPS, 2001.
[9] Q. T. Dinh, A. Kyrillidis, and V. Cevher. An inexact proximal path-following algorithm for constrained convex minimization. arXiv:1311.1756, 2013.
[10] C.-J. Hsieh, I. S. Dhillon, P. Ravikumar, and A. Banerjee. A divide-and-conquer method for sparse inverse covariance estimation. In NIPS, 2012.
[11] C.-J. Hsieh and P. A. Olsen. Nuclear norm minimization via active subspace selection. In ICML, 2014.
[12] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, and P. Ravikumar. Sparse inverse covariance matrix estimation using quadratic approximation. In NIPS, 2011.
[13] C.-J. Hsieh, M. A. Sustik, I. S. Dhillon, P. Ravikumar, and R. A. Poldrack. BIG & QUIC: Sparse inverse covariance estimation for a million variables. In NIPS, 2013.
[14] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Trans. Inform. Theory, 57:7221–7234, 2011.
[15] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In NIPS, 2010.
[16] J. D. Lee, Y. Sun, and M. A. Saunders. Proximal Newton-type methods for convex optimization. In NIPS, 2012.
[17] S. Ma, L. Xue, and H. Zou. Alternating direction methods for latent variable Gaussian graphical model selection. Neural Computation, 25(8):2172–2198, 2013.
[18] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[19] P. Olsen, F. Oztoprak, J. Nocedal, and S. Rennie. Newton-like methods for sparse inverse covariance estimation. In NIPS, 2012.
[20] Z. Qin, K. Scheinberg, and D. Goldfarb. Efficient block-coordinate descent algorithm for the group lasso. Mathematical Programming Computation, 2013.
[21] K. Scheinberg and X. Tang. Practical inexact proximal quasi-Newton method with global complexity analysis. arXiv:1311.6547, 2014.
[22] A. Tewari, P. Ravikumar, and I. Dhillon. Greedy algorithms for structurally constrained high dimensional problems. In NIPS, 2011.
[23] K.-C. Toh, P. Tseng, and S. Yun. A block coordinate gradient descent method for regularized convex separable optimization and covariance selection. Mathematical Programming, 129:331–355, 2011.
[24] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117:387–423, 2007.
[25] M. van Breukelen, R. P. W. Duin, D. M. J. Tax, and J. E. den Hartog. Handwritten digit recognition by combined classifiers. Kybernetika, 34(4):381–386, 1998.
[26] C. Wang, D. Sun, and K.-C. Toh. Solving log-determinant optimization problems by a Newton-CG primal proximal point algorithm. SIAM J. Optimization, 20:2994–3013, 2010.
[27] E. Yang and P. Ravikumar. Dirty statistical models. In NIPS, 2013.
[28] E.-H. Yen, C.-J. Hsieh, P. Ravikumar, and I. S. Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In NIPS, 2014.
[29] G.-X. Yuan, C.-H. Ho, and C.-J. Lin. An improved GLMNET for L1-regularized logistic regression. JMLR, 13:1999–2030, 2012.
Encoding High Dimensional Local Features by Sparse Coding Based Fisher Vectors Lingqiao Liu1, Chunhua Shen1,2, Lei Wang3, Anton van den Hengel1,2, Chao Wang3 1 School of Computer Science, University of Adelaide, Australia 2 ARC Centre of Excellence for Robotic Vision 3 School of Computer Science and Software Engineering, University of Wollongong, Australia Abstract Deriving from the gradient vector of a generative model of local features, Fisher vector coding (FVC) has been identified as an effective coding method for image classification. Most, if not all, FVC implementations employ the Gaussian mixture model (GMM) to characterize the generation process of local features. This choice has been shown to be sufficient for traditional low dimensional local features, e.g., SIFT; typically, good performance can be achieved with only a few hundred Gaussian distributions. However, the same number of Gaussians is insufficient to model the feature space spanned by the higher dimensional local features which have recently become popular. In order to improve the modeling capacity for high dimensional features, it turns out to be inefficient and computationally impractical to simply increase the number of Gaussians. In this paper, we propose a model in which each local feature is drawn from a Gaussian distribution whose mean vector is sampled from a subspace. With certain approximations, this model can be converted to a sparse coding procedure, and the learning/inference problems can be readily solved by standard sparse coding methods. By calculating the gradient vector of the proposed model, we derive a new Fisher vector encoding strategy, termed Sparse Coding based Fisher Vector Coding (SCFVC). Moreover, we adopt the recently developed deep convolutional neural network (CNN) descriptor as a high dimensional local feature and implement image classification with the proposed SCFVC.
Our experimental evaluations demonstrate that our method not only significantly outperforms the traditional GMM based Fisher vector encoding but also achieves state-of-the-art performance in generic object recognition, indoor scene, and fine-grained image classification problems. 1 Introduction Fisher vector coding is a coding method derived from the Fisher kernel [1], which was originally proposed to compare two samples induced by a generative model. Since its introduction to computer vision [2], many improvements and variants have been proposed. For example, in [3] the normalization of Fisher vectors is identified as an essential step for achieving good performance; in [4] the spatial information of local features is incorporated; in [5] the model parameters are learned through an end-to-end supervised training algorithm; and in [6] multiple layers of Fisher vector coding modules are stacked into a deep architecture. With these extensions, Fisher vector coding has been established as a state-of-the-art image classification approach. Almost all of these methods share one common component: they employ a Gaussian mixture model (GMM) as the generative model of local features. This choice has proved effective in modeling standard local features such as SIFT, which are often of low dimension. Usually, a mixture of a few hundred Gaussians has been sufficient to guarantee good performance. Generally speaking, the distribution of local features can only be well captured by a Gaussian distribution within a local region, due to the variety of local feature appearances, and thus the number of Gaussian mixtures needed is essentially determined by the volume of the feature space of local features.
Recently, the choice of local features has gone beyond traditional local patch descriptors such as SIFT or SURF [7]: higher dimensional local features, such as the activations of a pre-trained deep neural network [8] or pooled coding vectors from a local region [9, 10], have demonstrated promising performance. The higher dimensionality and rich visual content captured by these features make the volume of their feature space much larger than that of traditional local features. Consequently, a much larger number of Gaussian mixtures is needed in order to model the feature space accurately. However, this would lead to an explosion in the dimensionality of the resulting image representation and is thus usually computationally impractical. To alleviate this difficulty, we propose an alternative solution here. We model the generation process of local features as randomly drawing features from a Gaussian distribution whose mean vector is itself randomly drawn from a subspace. With certain approximations, we convert this model to a sparse coding model and leverage an off-the-shelf solver for the learning and inference problems. With further derivation, this model leads to a new Fisher vector coding algorithm called Sparse Coding based Fisher Vector Coding (SCFVC). Moreover, we adopt the recently developed deep convolutional neural network to generate regional local features and apply the proposed SCFVC to these local features to build an image classification system. To demonstrate its effectiveness in encoding high dimensional local features, we conduct a series of experiments on generic object, indoor scene, and fine-grained image classification datasets; the results show that our method not only significantly outperforms the traditional GMM based Fisher vector coding in encoding high dimensional local features but also achieves state-of-the-art performance on these image classification problems.
2 Fisher vector coding

2.1 General formulation

Given two samples generated from a generative model, their similarity can be evaluated by the Fisher kernel [1]. A sample can take any form, including a vector or a vector set, as long as its generation process can be modeled. In a Fisher vector based image classification approach, the sample is the set of local features extracted from an image, which we denote as X = {x1, x2, · · · , xT }. Assuming each xi is modeled by a p.d.f. P(x|λ) and drawn i.i.d., the Fisher kernel describes a sample X by the gradient vector over the model parameter λ,
$$G^X_\lambda = \nabla_\lambda \log P(X|\lambda) = \sum_i \nabla_\lambda \log P(x_i|\lambda). \quad (1)$$
The Fisher kernel is then defined as $K(X, Y) = {G^X_\lambda}^\top F^{-1} G^Y_\lambda$, where F is the information matrix, defined as $F = E[G^X_\lambda {G^X_\lambda}^\top]$. In practice, the role of the information matrix is less significant and it is often omitted for computational simplicity [3]. As a result, two samples can be directly compared by the linear kernel of their corresponding gradient vectors, which are often called Fisher vectors. From a bag-of-features model perspective, evaluating the Fisher kernel for two images amounts to first calculating the gradient, or Fisher vector, of each local feature and then performing sum-pooling. In this sense, the Fisher vector calculation for each local feature can be seen as a coding method, and we call it Fisher vector coding in this paper.

2.2 GMM based Fisher vector coding and its limitation

To implement the Fisher vector coding framework introduced above, one needs to specify the distribution P(x|λ). In the literature, most, if not all, works choose a GMM to model the generation process of x, which can be described as follows:
• Draw a Gaussian model N(µk, Σk) from the prior distribution P(k), k = 1, 2, · · · , m.
• Draw a local feature x from N(µk, Σk).
Generally speaking, the distribution of x resembles a Gaussian distribution only within a local region of feature space.
Thus, in a GMM each Gaussian essentially models a small partition of the feature space, and many of them are needed to cover the whole feature space. Consequently, the number of mixtures needed is determined by the volume of the feature space. For commonly used low dimensional local features, such as SIFT, it has been shown that setting the number of mixtures to a few hundred is sufficient. For higher dimensional local features, however, this number may be insufficient, because the volume of the feature space usually grows quickly with the feature dimensionality: the same number of mixtures then yields a coarser partition resolution and imprecise modeling. To increase the partition resolution for a higher dimensional feature space, one straightforward solution is to increase the number of Gaussians. However, it turns out that the partition resolution increases slowly with the number of mixtures (compared to our method, introduced in the next section). In other words, a much larger number of Gaussians would be needed, resulting in a Fisher vector whose dimensionality is too high to handle in practice.

3 Our method

3.1 Infinite number of Gaussian mixtures

Our solution to this issue is to go beyond a fixed number of Gaussian distributions and use an infinite number of them. More specifically, we assume that a local feature is drawn from a Gaussian distribution with a randomly generated mean vector. The mean vector is a point on a subspace spanned by a set of bases (which can be complete or over-complete) and is indexed by a latent coding vector u. The detailed generation process is as follows:
• Draw a coding vector u from a zero mean Laplacian distribution $P(u) = \frac{1}{2\lambda}\exp(-\frac{|u|}{\lambda})$.
• Draw a local feature x from the Gaussian distribution N(Bu, Σ).
The Laplace prior on P(u) ensures the sparsity of the resulting Fisher vector, which can be helpful for coding.
Essentially, the above process resembles a sparse coding model. To show this relationship, first write the marginal distribution of x:
$$P(x) = \int_u P(x, u|B)\,du = \int_u P(x|u, B)P(u)\,du. \quad (2)$$
This formulation involves an integral, which makes the likelihood evaluation difficult. To simplify the calculation, we approximate the likelihood by the point-wise maximum within the integral, that is,
$$P(x) \approx P(x|u^*, B)P(u^*), \qquad u^* = \arg\max_u P(x|u, B)P(u). \quad (3)$$
Assuming $\Sigma = \mathrm{diag}(\sigma^2_1, \cdots, \sigma^2_m)$ with $\sigma^2_1 = \cdots = \sigma^2_m = \sigma^2$ a constant, the logarithm of P(x) is written as
$$\log(P(x|B)) = \min_u \frac{1}{\sigma^2}\|x - Bu\|_2^2 + \lambda\|u\|_1, \quad (4)$$
which is exactly the objective value of a sparse coding problem. This relationship suggests that we can learn the model parameter B and infer the latent variable u using off-the-shelf sparse coding solvers. One question about the above method is how much improvement in partition resolution it achieves compared to simply increasing the number of components in a traditional GMM. To answer this question, we designed an experiment comparing the two schemes. In the experiment, the partition resolution is roughly measured by the average distance (denoted d) between a feature and its closest mean vector in the GMM or in the above model; the larger d is, the lower the partition resolution. The comparison is shown in Figure 1. In Figure 1 (a), we increase the dimensionality of the local features and for each dimensionality calculate d for a GMM with 100 mixtures. As seen, d increases quickly with the feature dimensionality. In Figure 1 (b), we try to reduce d by introducing more mixture distributions into the GMM. However, d drops slowly as the number of Gaussians increases. In contrast, the proposed method achieves a much lower d using only 100 bases. This result illustrates the advantage of our method.
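The comparison underlying Figure 1 is easy to reproduce in miniature. The sketch below is only illustrative: it uses synthetic Gaussian data (the paper uses PCA-reduced CNN descriptors), random vectors as a stand-in for GMM means, and, as a simplification, measures the subspace model by the least-squares projection onto span(B) rather than the sparse-coding fit:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_feats, n_centers, n_bases = 200, 500, 100, 50

# Synthetic stand-in for local features (illustration only).
X = rng.normal(size=(n_feats, dim))

# (1) A codebook of discrete centers, a rough stand-in for GMM means.
centers = rng.normal(size=(n_centers, dim))
sq = ((X ** 2).sum(1)[:, None] + (centers ** 2).sum(1)[None, :]
      - 2.0 * X @ centers.T)
d_centers = np.sqrt(np.maximum(sq, 0.0)).min(axis=1).mean()

# (2) Mean vectors drawn from a subspace spanned by bases B: the closest
# admissible mean of a feature x is its least-squares projection B u.
B = rng.normal(size=(dim, n_bases))
U = np.linalg.lstsq(B, X.T, rcond=None)[0]
d_subspace = np.linalg.norm(X.T - B @ U, axis=0).mean()

print(d_centers, d_subspace)  # d_centers is clearly larger here
```

Even with only 50 bases the subspace model reaches a markedly lower d than 100 discrete centers, mirroring the trend reported in Figure 1(b).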
Figure 1: Comparison of two ways to increase the partition resolution. (a) For a GMM with 100 mixtures, d (the average distance between a local feature and its closest mean vector) increases with the local feature dimensionality. (b) d is reduced in two ways: (1) simply increasing the number of Gaussian distributions in the mixture; (2) using the proposed generation process. The latter achieves a much lower d even with a small number of bases.

3.2 Sparse coding based Fisher vector coding

Once the generative model of local features is established, we can readily derive the corresponding Fisher coding vector by differentiating its log likelihood, that is,
$$C(x) = \frac{\partial \log(P(x|B))}{\partial B} = \frac{\partial\,\frac{1}{\sigma^2}\|x - Bu^*\|_2^2 + \lambda\|u^*\|_1}{\partial B}, \qquad u^* = \arg\max_u P(x|u, B)P(u). \quad (5)$$
Note that the differentiation involves u∗, which implicitly depends on B. To handle this term, notice that the sparse coding problem can be reformulated as a general quadratic program by defining u+ and u− as the positive and negative parts of u:
$$\min_{u^+, u^-} \|x - B(u^+ - u^-)\|_2^2 + \lambda \mathbf{1}^\top(u^+ + u^-) \quad \text{s.t.} \quad u^+ \ge 0,\; u^- \ge 0. \quad (6)$$
Further defining $u' = (u^+, u^-)^\top$, $\log(P(x|B))$ can be expressed in the general form
$$\log(P(x|B)) = L(B) = \max_{u'} {u'}^\top v(B) - \tfrac{1}{2}{u'}^\top P(B)\,u', \quad (7)$$
where P(B) and v(B) are a matrix term and a vector term depending on B, respectively. The derivative of L(B) has been studied in [11]. According to Lemma 2 of [11], we can differentiate L(B) with respect to B as if u′ did not depend on B.
In other words, we can first calculate u′, or equivalently u∗, by solving the sparse coding problem, and then obtain the Fisher vector $\frac{\partial \log(P(x|B))}{\partial B}$ as
$$\frac{\partial\,\frac{1}{\sigma^2}\|x - Bu^*\|_2^2 + \lambda\|u^*\|_1}{\partial B} = (x - Bu^*){u^*}^\top. \quad (8)$$
(The local feature dimensionalities in Figure 1 are obtained by performing PCA on a 4096-dimensional CNN regional descriptor; for more details about the features we use, please refer to Section 3.4.)

Table 1: Comparison of results on Pascal VOC 2007. The lower part of this table lists some results reported in the literature. We only report the mean average precision over 20 classes. The average precision for each class is listed in Table 2.

Methods            mean average precision   Comments
SCFVC (proposed)   76.9%                    single scale, no augmented data
GMMFVC             73.8%                    single scale, no augmented data
CNNaug-SVM [8]     77.2%                    with augmented data, use CNN for whole image
CNN-SVM [8]        73.9%                    no augmented data, use CNN for whole image
NUS [13]           70.5%
GHM [14]           64.7%
AGS [15]           71.1%

Note that the Fisher vector in Eq. (8) has an interesting form: it is simply the outer product between the sparse coding vector u∗ and the reconstruction residual (x − Bu∗). In traditional sparse coding, only the kth dimension of a coding vector, uk, is used to indicate the relationship between a local feature x and the kth basis. In the sparse coding based Fisher vector, the coding value uk multiplied by the reconstruction residual captures their relationship.
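The two steps above — solving the lasso of Eq. (4) for u∗ and forming the outer product of Eq. (8) — can be sketched in a few lines of NumPy. This is only a sketch under stated assumptions: σ² = 1, toy random data, and a plain ISTA loop in place of the solver of [18] used in the paper:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, B, lam, n_iter=300):
    """Solve Eq. (4), min_u ||x - B u||_2^2 + lam ||u||_1, by plain ISTA
    (sigma^2 = 1 assumed; the paper uses the solver of [18] instead)."""
    step = 1.0 / (2.0 * np.linalg.norm(B, 2) ** 2)  # 1 / Lipschitz constant
    u = np.zeros(B.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * B.T @ (B @ u - x)              # gradient of the quadratic
        u = soft_threshold(u - step * grad, step * lam)
    return u

def scfv_code(x, B, lam):
    """Fisher vector of one local feature, Eq. (8): (x - B u*) u*^T."""
    u = sparse_code(x, B, lam)
    return np.outer(x - B @ u, u)

rng = np.random.default_rng(0)
B = rng.normal(size=(64, 100))        # toy bases (dim x n_bases)
x = rng.normal(size=64)               # one toy local feature
C = scfv_code(x, B, lam=20.0)         # 64 x 100 Fisher vector block
```

Columns of C corresponding to inactive bases (u∗ₖ = 0) are exactly zero, so the Laplace prior indeed sparsifies the Fisher vector, as noted above.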
In this paper, we apply intra-normalization [12] to normalize the pooled Fisher vectors. More specifically, we apply l2 normalization to the subvectors $\sum_{x_i \in I}(x_i - Bu^*_i)u^*_{i,k}\ \forall k$, where k indexes the kth dimension of the sparse code $u^*_i$. Besides intra-normalization, we also utilize the power normalization suggested in [3].

3.4 Deep CNN based regional local features

Recently, the middle-layer activations of a pre-trained deep CNN have been demonstrated to be a powerful image descriptor [8, 16]. In this paper, we employ this descriptor to generate a number of local features for an image. More specifically, an input image is first resized to 512×512 pixels, and regions of 227×227 pixels are cropped with a stride of 8 pixels. These regions are subsequently fed into the deep CNN, and the activations of the sixth layer are extracted as the local features for these regions. In our implementation, we use the Caffe [17] package, which provides a deep CNN pre-trained on the ILSVRC2012 dataset whose sixth layer outputs a 4096-dimensional vector. This strategy has recently demonstrated better performance than directly using deep CNN features for the whole image [16]. Once regional local features are extracted, we encode them using the proposed SCFVC method and generate an image-level representation by sum-pooling and normalization. Certainly, our method is open to the choice of other high-dimensional local features; the reason for choosing deep CNN features in this paper is that they allow us to demonstrate state-of-the-art image classification performance.

4 Experimental results

We conduct an experimental evaluation of the proposed sparse coding based Fisher vector coding (SCFVC) on three large datasets: Pascal VOC 2007, MIT indoor scene-67, and Caltech-UCSD Birds-200-2011 (the vectorized form of $\frac{\partial \log(P(I|B))}{\partial B}$ is used as the image representation). These are commonly used evaluation benchmarks for generic object classification, scene classification, and fine-grained image classification, respectively. The focus of these experiments is to examine whether the proposed SCFVC outperforms traditional Fisher vector coding in encoding high dimensional local features.

Table 2: Comparison of results on Pascal VOC 2007 for each of the 20 classes. Besides the proposed SCFVC and the GMMFVC baseline, the performance obtained by directly using CNN as a global feature is also compared.

         aero  bike  bird  boat  bottle  bus   car   cat   chair  cow
SCFVC    89.5  84.1  83.7  83.7  43.9    76.7  87.8  82.5  60.6   69.6
GMMFVC   87.1  80.6  80.3  79.7  42.8    72.2  87.4  76.1  58.6   64.0
CNN-SVM  88.5  81.0  83.5  82.0  42.0    72.5  85.3  81.6  59.9   58.5

         table dog   horse mbike person plant sheep sofa  train TV
SCFVC    72.0  77.1  88.7  82.1  94.4   56.8  71.4  67.7  90.9  75.0
GMMFVC   66.9  75.1  84.9  81.2  93.1   53.1  70.8  66.2  87.9  71.3
CNN-SVM  66.5  77.8  81.8  78.8  90.2   54.8  71.1  62.6  87.2  71.8

Table 3: Comparison of results on MIT-67. The lower part of this table lists some results reported in the literature.

Methods               Classification Accuracy   Comments
SCFVC (proposed)      68.2%                     with single scale
GMMFVC                64.3%                     with single scale
MOP-CNN [16]          68.9%                     with three scales
VLAD level2 [16]      65.5%                     with single best scale
CNN-SVM [8]           58.4%                     use CNN for whole image
FV+Bag of parts [19]  63.2%
DPM [20]              37.6%

4.1 Experiment setup

In our experiments, the activations of the sixth layer of a pre-trained deep CNN are used as regional local features. PCA is applied to reduce the regional local features from 4096 dimensions to 2000 dimensions. The number of Gaussian distributions and the codebook size for sparse coding are set to 100 throughout our experiments unless otherwise mentioned. For sparse coding, we use the algorithm in [18] to learn the codebook and to infer the coding vectors. For all experiments, a linear SVM is used as the classifier.

4.2 Main results

Pascal-07 Pascal VOC 2007 contains 9963 images with 20 object categories, which form 20 binary (object vs.
non-object) classification tasks. The use of deep CNN features has demonstrated state-of-the-art performance on this dataset [8]. In contrast to [8], here we use deep CNN features as local features to model a set of image regions rather than as a global feature for the whole image. The results of the proposed SCFVC and of traditional Fisher vector coding, denoted GMMFVC, are shown in Table 1 and Table 2. As can be seen from Table 1, the proposed SCFVC leads to superior performance, outperforming GMMFVC by 3%. Cross-referencing Table 2 shows that the proposed SCFVC outperforms GMMFVC in all 20 categories. We also notice that GMMFVC is merely comparable to directly using deep CNN as a global feature, namely CNN-SVM in Table 1. Since both the proposed SCFVC and GMMFVC adopt deep CNN features as local features, this observation suggests that the advantage of using deep CNN features as local features only shows clearly when an appropriate coding method, i.e. the proposed SCFVC, is employed.

Table 4: Comparison of results on Birds-200-2011. The lower part of this table lists some results reported in the literature.

Methods               Classification Accuracy   Comments
SCFVC (proposed)      66.4%                     with single scale
GMMFVC                61.7%                     with single scale
CNNaug-SVM [8]        61.8%                     with augmented data, use CNN for the whole image
CNN-SVM [8]           53.3%                     no augmented data, use CNN as global features
DPD+CNN+LogReg [21]   65.0%                     use part information
DPD [22]              51.0%

Figure 2: The performance comparison of classification accuracy vs. local feature dimensionality for the proposed SCFVC and GMMFVC on MIT-67.

Note that to further boost performance, one can adopt additional approaches such as introducing augmented data or combining multiple scales.
Some of the methods compared in Table 1 employed these approaches, and we note this in the comments so that readers can judge whether those methods are directly comparable to the proposed SCFVC. We do not pursue these approaches in this paper, since the focus of our experiments is to compare the proposed SCFVC against traditional GMMFVC.

MIT-67 MIT-67 contains 6700 images with 67 indoor scene categories. This dataset is quite challenging because the differences between some categories are very subtle. The comparison of classification results is shown in Table 3. Again, we observe that the proposed SCFVC significantly outperforms traditional GMMFVC. To the best of our knowledge, the best performance on this dataset is achieved in [16] by concatenating features extracted from three different scales; the proposed method achieves the same performance using only a single scale. We also tried concatenating the image representation generated by the proposed SCFVC with the global deep CNN feature. The resulting performance is as high as 70%, which is by far the best performance achieved on this dataset.

Birds-200-2011 Birds-200-2011 contains 11,788 images of 200 bird species and is a commonly used benchmark for fine-grained image classification. The experimental results on this dataset are shown in Table 4. The advantage of SCFVC over GMMFVC is more pronounced here: SCFVC outperforms GMMFVC by over 4%. We also make two interesting observations: (1) GMMFVC achieves performance comparable to the method using the global deep CNN feature with augmented data, namely CNNaug-SVM in Table 4; (2) although we do not use any part information (of the birds), our method outperforms the result that uses part information (DPD+CNN+LogReg in Table 4). These observations suggest that using deep CNN features as local features is better suited to fine-grained problems, and that the proposed method further boosts this advantage.
Table 5: Comparison of results on MIT-67 with three different settings: (1) a 100-basis codebook with 1000-dimensional local features, denoted SCFV-100-1000D; (2) 400 Gaussian mixtures with 300-dimensional local features, denoted GMMFV-400-300D; (3) 1000 Gaussian mixtures with 100-dimensional local features, denoted GMMFV-1000-100D. All three have the same or similar total image representation dimensionality.

SCFV-100-1000D   GMMFV-400-300D   GMMFV-1000-100D
68.1%            64.0%            60.8%

4.3 Discussion

In the above experiments, the dimensionality of the local features is fixed to 2000. But how do the proposed SCFV and traditional GMMFV compare on lower dimensional features? To investigate this, we vary the dimensionality of the deep CNN features from 100 to 2000 and compare the performance of the two Fisher vector coding methods on MIT-67. The results are shown in Figure 2. For lower dimensionality (e.g. 100), the two methods achieve comparable performance, and in general both benefit from higher dimensional features. However, for traditional GMMFVC, the performance gain obtained from increasing the feature dimensionality is lower than that obtained by the proposed SCFVC. For example, from 100 to 1000 dimensions, traditional GMMFVC obtains only a 4% improvement, while our SCFVC achieves a 7% gain. This validates our argument that the proposed SCFVC is especially suited to encoding high dimensional local features. Since GMMFVC works well for lower dimensional features, what about reducing the higher dimensional local features to lower dimensions and using more Gaussian mixtures? Would that achieve performance comparable to our SCFVC, which uses higher dimensional local features but a smaller number of bases?
To investigate this, we also evaluate the classification performance on MIT-67 using 400 Gaussian mixtures with 300-dimensional local features and 1000 Gaussian mixtures with 100-dimensional local features. The total dimensionality of these two image representations is then similar to that of our SCFVC, which uses 100 bases and 1000-dimensional local features. The comparison is shown in Table 5. As can be seen, the performance of these two settings is much inferior to the proposed one. This suggests that some discriminative information has already been lost after the PCA dimensionality reduction and cannot be regained by simply introducing more Gaussian distributions. This verifies the necessity of using high dimensional local features and justifies the value of the proposed method. In general, the inference step in sparse coding can be slower than the membership assignment in a GMM. However, the computational efficiency can be greatly improved by using an approximate sparse coding algorithm such as learned FISTA [23] or orthogonal matching pursuit [10]. Also, the proposed method can easily be generalized to several similar coding models, such as local linear coding [24]. In that case, the computational efficiency is almost identical to traditional GMMFVC (or even faster, if approximate k-nearest neighbor algorithms are used).

5 Conclusion

In this work, we studied the use of Fisher vector coding to encode high-dimensional local features. Our main finding is that traditional GMM based Fisher vector coding is not particularly well suited to modeling high-dimensional local features. As an alternative, we proposed a generation process that allows the mean vector of a Gaussian distribution to be chosen from a point in a subspace. This model leads to a new Fisher vector coding method based on a sparse coding model.
Combined with the middle-layer activations of a pre-trained CNN as high-dimensional local features, we build an image classification system and experimentally demonstrate that the proposed coding method is superior to the traditional GMM in encoding high-dimensional local features and achieves state-of-the-art performance on three image classification problems.

Acknowledgements

This work was in part supported by Australian Research Council grants FT120100969, LP120200485, and the Data to Decisions Cooperative Research Centre. Correspondence should be addressed to C. Shen (email: chhshen@gmail.com).

References

[1] T. Jaakkola and D. Haussler, “Exploiting generative models in discriminative classifiers,” in Proc. Adv. Neural Inf. Process. Syst., 1998, pp. 487–493.
[2] F. Perronnin and C. R. Dance, “Fisher kernels on visual vocabularies for image categorization,” in Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2007.
[3] F. Perronnin, J. Sánchez, and T. Mensink, “Improving the Fisher kernel for large-scale image classification,” in Proc. Eur. Conf. Comp. Vis., 2010.
[4] J. Krapac, J. J. Verbeek, and F. Jurie, “Modeling spatial layout with Fisher vectors for image categorization,” in Proc. IEEE Int. Conf. Comp. Vis., 2011, pp. 1487–1494.
[5] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep Fisher networks for large-scale image classification,” in Proc. Adv. Neural Inf. Process. Syst., 2013.
[6] V. Sydorov, M. Sakurada, and C. H. Lampert, “Deep Fisher kernels—end to end learning of the Fisher kernel GMM parameters,” in Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2014.
[7] H. Bay, A. Ess, T. Tuytelaars, and L. J. V. Gool, “Speeded-up robust features (SURF),” Computer Vision & Image Understanding, vol. 110, no. 3, pp. 346–359, 2008.
[8] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson, “CNN features off-the-shelf: an astounding baseline for recognition,” 2014, http://arxiv.org/abs/1403.6382.
[9] S. Yan, X. Xu, D. Xu, S. Lin, and X.
Li, “Beyond spatial pyramids: A new feature extraction framework with dense spatial sampling for image classification,” in Proc. Eur. Conf. Comp. Vis., 2012, pp. 473–487.
[10] L. Bo, X. Ren, and D. Fox, “Hierarchical matching pursuit for image classification: Architecture and fast algorithms,” in Proc. Adv. Neural Inf. Process. Syst., 2011, pp. 2115–2123.
[11] O. Chapelle, V. Vapnik, O. Bousquet, and S. Mukherjee, “Choosing multiple parameters for support vector machines,” Machine Learning, vol. 46, no. 1–3, pp. 131–159, 2002.
[12] R. Arandjelović and A. Zisserman, “All about VLAD,” in Proc. IEEE Int. Conf. Comp. Vis., 2013.
[13] Z. Song, Q. Chen, Z. Huang, Y. Hua, and S. Yan, “Contextualizing object detection and classification,” in Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2011.
[14] Q. Chen, Z. Song, Y. Hua, Z. Huang, and S. Yan, “Hierarchical matching with side information for image classification,” in Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2012, pp. 3426–3433.
[15] J. Dong, W. Xia, Q. Chen, J. Feng, Z. Huang, and S. Yan, “Subcategory-aware object classification,” in Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2013, pp. 827–834.
[16] Y. Gong, L. Wang, R. Guo, and S. Lazebnik, “Multi-scale orderless pooling of deep convolutional activation features,” in Proc. Eur. Conf. Comp. Vis., 2014.
[17] Y. Jia, “Caffe,” 2014, https://github.com/BVLC/caffe.
[18] H. Lee, A. Battle, R. Raina, and A. Y. Ng, “Efficient sparse coding algorithms,” in Proc. Adv. Neural Inf. Process. Syst., 2007, pp. 801–808.
[19] C. Doersch, A. Gupta, and A. A. Efros, “Mid-level visual element discovery as discriminative mode seeking,” in Proc. Adv. Neural Inf. Process. Syst., 2013.
[20] M. Pandey and S. Lazebnik, “Scene recognition and weakly supervised object localization with deformable part-based models,” in Proc. IEEE Int. Conf. Comp. Vis., 2011, pp. 1307–1314.
[21] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T.
Darrell, “DeCAF: A deep convolutional activation feature for generic visual recognition,” in Proc. Int. Conf. Mach. Learn., 2013.
[22] N. Zhang, R. Farrell, F. Iandola, and T. Darrell, “Deformable part descriptors for fine-grained recognition and attribute prediction,” in Proc. IEEE Int. Conf. Comp. Vis., December 2013.
[23] K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proc. Int. Conf. Mach. Learn., 2010, pp. 399–406.
[24] J. Wang, J. Yang, K. Yu, F. Lv, T. Huang, and Y. Gong, “Locality-constrained linear coding for image classification,” in Proc. IEEE Conf. Comp. Vis. Patt. Recogn., 2010.
Efficient learning by implicit exploration in bandit problems with side observations

Tomáš Kocák, Gergely Neu, Michal Valko, Rémi Munos∗
SequeL team, INRIA Lille – Nord Europe, France
{tomas.kocak,gergely.neu,michal.valko,remi.munos}@inria.fr

Abstract

We consider online learning problems under a partial observability model capturing situations where the information conveyed to the learner is between full information and bandit feedback. In the simplest variant, we assume that in addition to its own loss, the learner also gets to observe the losses of some other actions. The revealed losses depend on the learner's action and on a directed observation system chosen by the environment. For this setting, we propose the first algorithm that enjoys near-optimal regret guarantees without having to know the observation system before selecting its actions. Along similar lines, we also define a new partial-information setting that models online combinatorial optimization problems where the feedback received by the learner is between semi-bandit and full feedback. As the predictions of our first algorithm cannot always be computed efficiently in this setting, we propose another algorithm with similar properties that is always computationally efficient, at the price of a slightly more complicated tuning mechanism. Both algorithms rely on a novel exploration strategy called implicit exploration, which is shown to be more efficient both computationally and information-theoretically than previously studied exploration strategies for the problem.

1 Introduction

Consider the problem of sequentially recommending content to a set of users. In each period of this online decision problem, we have to assign content from a news feed to each of our subscribers so as to maximize clickthrough. We assume that this assignment needs to be done well in advance, so that we only observe the actual content after the assignment was made and the user had the opportunity to click.
While we can easily formalize the above problem in the classical multi-armed bandit framework [3], notice that we would be throwing out important information if we did so! The additional information in this problem comes from the fact that several news feeds can refer to the same content, giving us the opportunity to infer clickthroughs for a number of assignments that we did not actually make. For example, consider the situation shown in Figure 1a. In this simple example, we want to suggest one out of three news feeds to each user; that is, we want to choose a matching on the graph shown in Figure 1a which covers the users. Assume that news feeds 2 and 3 refer to the same content, so whenever we assign news feed 2 or 3 to any of the users, we learn the value of both of these assignments. The relations between these assignments can be described by a graph structure (shown in Figure 1b), where nodes represent user–news-feed assignments and edges mean that the corresponding assignments reveal each other's clickthroughs. For a more compact representation, we can group the nodes by user and rephrase our task as having to choose one node from each group. Besides its own reward, each selected node reveals the rewards assigned to all of its neighbors.

∗Current affiliation: Google DeepMind

Figure 1a: Users and news feeds. The thick edges represent one potential matching of users to feeds; grouped news feeds show the same content.
Figure 1b: Users and news feeds. Connected feeds mutually reveal each other's clickthroughs.
The problem described above fits into the framework of online combinatorial optimization, where in each round a learner selects one of a very large number of available actions so as to minimize the losses associated with its sequence of decisions. Various instances of this problem have been widely studied in recent years under different feedback assumptions [7, 2, 8], notably including the so-called full-information [13] and semi-bandit [2, 16] settings. Using the example in Figure 1a, full information would mean that clickthroughs are observable for all assignments, whereas under semi-bandit feedback, clickthroughs are only observable for the assignments actually made. While it is unrealistic to assume full feedback in this setting, assuming semi-bandit feedback is far too restrictive in our example. Similar situations arise in other practical problems such as packet routing in computer networks, where we may have additional information on the delays in the network besides the delays of our own packets. In this paper, we generalize the partial observability model first proposed by Mannor and Shamir [15] and later revisited by Alon et al. [1] to accommodate feedback settings situated between the full-information and semi-bandit schemes. Formally, we consider a sequential decision-making problem where in each time step t the (potentially adversarial) environment assigns a loss value to each of d components and generates an observation system whose role will be clarified soon. Oblivious to the environment's choices, the learner chooses an action $V_t$ from a fixed action set $S \subset \{0,1\}^d$, represented by a binary vector with at most m nonzero components, and incurs the sum of the losses associated with the nonzero components of $V_t$. At the end of the round, the learner observes the individual losses along the chosen components and some additional feedback based on its action and the observation system. We represent this observation system by a directed observability graph with d nodes, with an edge connecting $i \to j$ if and only if the loss associated with j is revealed to the learner whenever $V_{t,i} = 1$.
The goal of the learner is to minimize its total loss over T repetitions of the above procedure. The two most-studied variants of this general framework are the multi-armed bandit problem [3], where each action consists of a single component and the observability graph has no edges, and the problem of prediction with expert advice [17, 14, 5], where each action consists of exactly one component and the observability graph is complete. In the true combinatorial setting where m > 1, the empty and complete graphs correspond to the semi-bandit and full-information settings, respectively. Our model directly extends the model of Alon et al. [1], whose setup coincides with m = 1 in our framework. Alon et al. were themselves motivated by the work of Mannor and Shamir [15], who considered undirected observability systems where actions mutually uncover each other's losses. Mannor and Shamir proposed an algorithm based on linear programming that achieves a regret of $\tilde{O}(\sqrt{cT})$, where c is the number of cliques into which the graph can be split. Later, Alon et al. [1] proposed an algorithm called EXP3-SET that guarantees a regret of $O(\sqrt{\alpha T \log d})$, where α is an upper bound on the independence numbers of the observability graphs assigned by the environment. In particular, this bound is tighter than the bound of Mannor and Shamir, since α ≤ c for any graph. Furthermore, EXP3-SET is much more efficient than the algorithm of Mannor and Shamir, as it only requires running the EXP3 algorithm of Auer et al. [3] on the decision set, which takes time linear in d. Alon et al. [1] also extend the model of Mannor and Shamir by allowing the observability graph to be directed. For this setting, they offer another algorithm, EXP3-DOM, with similar guarantees, although with the serious drawback that it requires access to the observation system before choosing its actions.
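For intuition about the independence number α that governs these regret bounds, it can be computed by brute force on toy graphs (exponential time, so illustration only). Following the convention for directed observability graphs, a set is treated as independent when no edge in either direction connects two of its nodes:

```python
from itertools import combinations

def independence_number(n, edges):
    """Brute-force independence number of a directed graph on nodes 0..n-1.
    An independent set has no edge, in either direction, between its members."""
    adj = set(edges) | {(j, i) for (i, j) in edges}
    for size in range(n, 0, -1):
        for nodes in combinations(range(n), size):
            if all(pair not in adj for pair in combinations(nodes, 2)):
                return size
    return 0

# Edgeless graph = plain bandit feedback: alpha equals the number of arms.
print(independence_number(4, []))                        # -> 4
# Complete graph = full information: alpha = 1.
print(independence_number(3, [(0, 1), (1, 2), (0, 2)]))  # -> 1
```

In between, e.g. a directed path 0→1→2 has α = 2, matching the intuition that partial observability interpolates between the bandit and full-information regimes.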
This assumption poses severe limitations to the practical applicability of EXP3-DOM, which also needs to solve a sequence of set cover problems as a subroutine. In the present paper, we offer two computationally and information-theoretically efficient algorithms for bandit problems with directed observation systems. Both of our algorithms circumvent the costly exploration phase required by EXP3-DOM by a trick that we will refer to as IX, as in Implicit eXploration. Accordingly, we name our algorithms EXP3-IX and FPL-IX, which are variants of the well-known EXP3 [3] and FPL [12] algorithms enhanced with implicit exploration. Our first algorithm, EXP3-IX, is specifically designed (see Footnote 1) to work in the setting of Alon et al. [1] with m = 1 and does not need to solve any set cover problems or have any sort of prior knowledge concerning the observation systems chosen by the adversary (see Footnote 2). FPL-IX, on the other hand, does need either to solve set cover problems or to have a prior upper bound on the independence numbers of the observability graphs, but can be computed efficiently for a wide range of true combinatorial problems with m > 1. We note that our algorithms do not even need to know the number of rounds T, and our regret bounds scale with the average independence number ᾱ of the graphs played by the adversary rather than with the largest of these numbers. They both employ adaptive learning rates and, unlike EXP3-DOM, they do not need to use a doubling trick to be anytime or to aggregate outputs of multiple algorithms to optimally set their learning rates. Both algorithms achieve regret guarantees of $\tilde O(m^{3/2}\sqrt{\bar\alpha T})$ in the combinatorial setting, which becomes $\tilde O(\sqrt{\bar\alpha T})$ in the simple setting. Before diving into the main content, we give an important graph-theoretic statement that we will rely on when analyzing both of our algorithms. The lemma is a generalized version of Lemma 13 of Alon et al. [1] and its proof is given in Appendix A.

Lemma 1.
Let G be a directed graph with vertex set V = {1, . . . , d}. Let $N^-_i$ be the in-neighborhood of node i, i.e., the set of nodes j such that (j → i) ∈ G. Let α be the independence number of G, and let $p_1, \dots, p_d$ be numbers from [0, 1] such that $\sum_{i=1}^d p_i \le m$. Then

$$\sum_{i=1}^{d} \frac{p_i}{\frac{1}{m}p_i + \frac{1}{m}P_i + c} \le 2m\alpha \log\left(1 + \frac{m\lceil d^2/c \rceil + d}{\alpha}\right) + 2m,$$

where $P_i = \sum_{j \in N^-_i} p_j$ and c is a positive constant.

2 Multi-armed bandit problems with side information

In this section, we start with the simplest setting fitting into our framework, namely the multi-armed bandit problem with side observations. We provide intuition about the implicit exploration procedure behind our algorithms and describe EXP3-IX, the most natural algorithm based on the IX trick. The problem we consider is defined as follows. In each round t = 1, 2, . . . , T, the environment assigns a loss vector $\ell_t \in [0,1]^d$ for d actions and also selects an observation system described by the directed graph $G_t$. Then, based on its previous observations (and possibly some external source of randomness), the learner selects action $I_t$ and subsequently incurs and observes the loss $\ell_{t,I_t}$. Furthermore, the learner also observes the losses $\ell_{t,j}$ for all j such that $(I_t \to j) \in G_t$; the indicator of this observation event is denoted by $O_{t,j}$. Let $\mathcal{F}_{t-1} = \sigma(I_{t-1}, \dots, I_1)$ capture the interaction history up to time t. As usual in online settings [6], the performance is measured in terms of (total expected) regret, which is the difference between the total loss received and the total loss of the best single action chosen in hindsight,

$$R_T = \max_{i \in [d]} \mathbb{E}\left[\sum_{t=1}^{T} \left(\ell_{t,I_t} - \ell_{t,i}\right)\right],$$

where the expectation integrates over the random choices made by the learning algorithm. Alon et al. [1] adapted the well-known EXP3 algorithm of Auer et al. [3] for this precise problem.
Their algorithm, EXP3-DOM, works by maintaining a weight $w_{t,i}$ for each individual arm i ∈ [d] in each round, and selecting $I_t$ according to the distribution

$$P[I_t = i \mid \mathcal{F}_{t-1}] = (1-\gamma)p_{t,i} + \gamma\mu_{t,i} = (1-\gamma)\frac{w_{t,i}}{\sum_{j=1}^{d} w_{t,j}} + \gamma\mu_{t,i},$$

where γ ∈ (0, 1) is a parameter of the algorithm and $\mu_t$ is an exploration distribution whose role we will shortly clarify. After each round, EXP3-DOM defines the loss estimates

$$\hat\ell_{t,i} = \frac{\ell_{t,i}}{o_{t,i}}\,\mathbb{1}\{(I_t \to i) \in G_t\}, \quad \text{where } o_{t,i} = \mathbb{E}[O_{t,i} \mid \mathcal{F}_{t-1}] = P[(I_t \to i) \in G_t \mid \mathcal{F}_{t-1}]$$

for each i ∈ [d]. These loss estimates are then used to update the weights for all i as $w_{t+1,i} = w_{t,i}e^{-\gamma\hat\ell_{t,i}}$. It is easy to see that these loss estimates $\hat\ell_{t,i}$ are unbiased estimates of the true losses whenever $p_{t,i} > 0$ holds for all i. This requirement, along with another important technical issue, justifies the presence of the exploration distribution $\mu_t$. The key idea behind EXP3-DOM is to compute a dominating set $D_t \subseteq [d]$ of the observability graph $G_t$ in each round, and define $\mu_t$ as the uniform distribution over $D_t$. This choice ensures that $o_{t,i} \ge p_{t,i} + \gamma/|D_t|$, a crucial requirement for the analysis of [1]. In what follows, we propose an exploration scheme that does not need any fancy computations but, more importantly, works without any prior knowledge of the observability graphs.

2.1 Efficient learning by implicit exploration

In this section, we propose the simplest exploration scheme imaginable, which consists of merely pretending to explore. Precisely, we simply sample our action $I_t$ from the distribution defined as

$$P[I_t = i \mid \mathcal{F}_{t-1}] = p_{t,i} = \frac{w_{t,i}}{\sum_{j=1}^{d} w_{t,j}}, \qquad (1)$$

without explicitly mixing with any exploration distribution.

(Footnote 1: EXP3-IX can also be efficiently implemented for some specific combinatorial decision sets even with m > 1; see, e.g., Cesa-Bianchi and Lugosi [7] for some examples.)
(Footnote 2: However, it is still necessary to have access to the observability graph to construct low-bias estimates of losses, but only after the action is selected.)
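The sampling rule (1) is the standard exponential-weights distribution computed from the cumulative loss estimates. A minimal numerical sketch follows; the function name is our own, and the min-shift is a standard numerical-stability trick rather than part of the paper:

```python
import math

def exp3_ix_distribution(cum_loss_estimates, eta):
    """p_{t,i} proportional to exp(-eta * Lhat_{t-1,i}), as in Eq. (1)."""
    shift = min(cum_loss_estimates)  # cancels in the ratio; avoids underflow
    w = [math.exp(-eta * (L - shift)) for L in cum_loss_estimates]
    total = sum(w)
    return [wi / total for wi in w]

# arm 0 has the smallest cumulative loss estimate, so it gets the largest probability
p = exp3_ix_distribution([0.0, 1.0, 1.0], eta=1.0)
```

Note that no explicit exploration term is mixed in; the IX effect comes entirely from how the loss estimates themselves are biased.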
Our key trick is to define the loss estimates for all arms i as

$$\hat\ell_{t,i} = \frac{\ell_{t,i}}{o_{t,i} + \gamma_t}\,\mathbb{1}\{(I_t \to i) \in G_t\},$$

where $\gamma_t > 0$ is a parameter of our algorithm. It is easy to check that $\hat\ell_{t,i}$ is a biased estimate of $\ell_{t,i}$. The nature of this bias, however, is very special. First, observe that $\hat\ell_{t,i}$ is an optimistic estimate of $\ell_{t,i}$ in the sense that $\mathbb{E}[\hat\ell_{t,i} \mid \mathcal{F}_{t-1}] \le \ell_{t,i}$. That is, our bias always ensures that, in expectation, we underestimate the loss of any fixed arm i. Even more importantly, our loss estimates also satisfy

$$\mathbb{E}\left[\sum_{i=1}^{d} p_{t,i}\hat\ell_{t,i} \,\middle|\, \mathcal{F}_{t-1}\right] = \sum_{i=1}^{d} p_{t,i}\ell_{t,i} + \sum_{i=1}^{d} p_{t,i}\ell_{t,i}\left(\frac{o_{t,i}}{o_{t,i}+\gamma_t} - 1\right) = \sum_{i=1}^{d} p_{t,i}\ell_{t,i} - \gamma_t\sum_{i=1}^{d} \frac{p_{t,i}\ell_{t,i}}{o_{t,i}+\gamma_t}, \qquad (2)$$

that is, the bias of the estimated losses suffered by our algorithm is directly controlled by $\gamma_t$. As we will see in the analysis, it is sufficient to control the bias of our own estimated performance as long as we can guarantee that the loss estimates associated with any fixed arm are optimistic, which is precisely what we have. Note that this slight modification ensures that the denominator of $\hat\ell_{t,i}$ is lower bounded by $p_{t,i} + \gamma_t$, a property very similar to the one achieved by the exploration scheme used by EXP3-DOM. We call the above loss estimation method implicit exploration, or IX, as it gives rise to the same effect as explicit exploration without actually having to implement any exploration policy. In fact, explicit and implicit exploration can both be regarded as two different approaches to the bias-variance tradeoff: while explicit exploration biases the sampling distribution of $I_t$ to reduce the variance of the loss estimates, implicit exploration achieves the same result by biasing the loss estimates themselves. From this point on, we take a somewhat more predictable course and define our algorithm EXP3-IX as a variant of EXP3 using the IX loss estimates. One of the twists is that EXP3-IX is actually based on the adaptive learning-rate variant of EXP3 proposed by Auer et al.
[4], which avoids the necessity of prior knowledge of the observability graphs in order to set a proper learning rate. This algorithm is defined by setting $\hat L_{t-1,i} = \sum_{s=1}^{t-1}\hat\ell_{s,i}$ and, for all i ∈ [d], computing the weights as $w_{t,i} = (1/d)e^{-\eta_t \hat L_{t-1,i}}$. These weights are then used to construct the sampling distribution of $I_t$ as defined in (1). The resulting EXP3-IX algorithm is shown as Algorithm 1.

Algorithm 1 EXP3-IX
1: Input: Set of actions S = [d],
2: parameters $\gamma_t \in (0, 1)$, $\eta_t > 0$ for t ∈ [T].
3: for t = 1 to T do
4:   $w_{t,i} \leftarrow (1/d)\exp(-\eta_t \hat L_{t-1,i})$ for i ∈ [d]
5:   An adversary privately chooses losses $\ell_{t,i}$ for i ∈ [d] and generates a graph $G_t$
6:   $W_t \leftarrow \sum_{i=1}^{d} w_{t,i}$
7:   $p_{t,i} \leftarrow w_{t,i}/W_t$
8:   Choose $I_t \sim p_t = (p_{t,1}, \dots, p_{t,d})$
9:   Observe graph $G_t$
10:  Observe pairs $\{i, \ell_{t,i}\}$ for $(I_t \to i) \in G_t$
11:  $o_{t,i} \leftarrow \sum_{(j \to i) \in G_t} p_{t,j}$ for i ∈ [d]
12:  $\hat\ell_{t,i} \leftarrow \frac{\ell_{t,i}}{o_{t,i}+\gamma_t}\mathbb{1}\{(I_t \to i) \in G_t\}$ for i ∈ [d]
13: end for

2.2 Performance guarantees for EXP3-IX

Our analysis follows the footsteps of Auer et al. [3] and Györfi and Ottucsák [9], who provide an improved analysis of the adaptive learning-rate rule proposed by Auer et al. [4]. However, a technical subtlety will force us to proceed a little differently from these standard proofs: to achieve the tightest possible bounds and the most efficient algorithm, we need to tune our learning rates according to some random quantities that depend on the performance of EXP3-IX. In fact, the key quantities in our analysis are the terms

$$Q_t = \sum_{i=1}^{d} \frac{p_{t,i}}{o_{t,i} + \gamma_t},$$

which depend on the interaction history $\mathcal{F}_{t-1}$ for all t. Our theorem below gives the performance guarantee for EXP3-IX using a parameter setting adaptive to the values of $Q_t$. A full proof of the theorem is given in the supplementary material.

Theorem 1. Setting $\eta_t = \gamma_t = \sqrt{(\log d)/(d + \sum_{s=1}^{t-1} Q_s)}$, the regret of EXP3-IX satisfies

$$R_T \le 4\,\mathbb{E}\left[\sqrt{\left(d + \sum_{t=1}^{T} Q_t\right)\log d}\right]. \qquad (3)$$

Proof sketch.
Following the proof of Lemma 1 in Györfi and Ottucsák [9], we can prove that

$$\sum_{i=1}^{d} p_{t,i}\hat\ell_{t,i} \le \frac{\eta_t}{2}\sum_{i=1}^{d} p_{t,i}\hat\ell_{t,i}^{\,2} + \left(\frac{\log W_t}{\eta_t} - \frac{\log W_{t+1}}{\eta_{t+1}}\right). \qquad (4)$$

Taking conditional expectations, using Equation (2) and summing up both sides, we get

$$\sum_{t=1}^{T}\sum_{i=1}^{d} p_{t,i}\ell_{t,i} \le \sum_{t=1}^{T}\left(\frac{\eta_t}{2} + \gamma_t\right)Q_t + \sum_{t=1}^{T}\mathbb{E}\left[\frac{\log W_t}{\eta_t} - \frac{\log W_{t+1}}{\eta_{t+1}} \,\middle|\, \mathcal{F}_{t-1}\right].$$

Using Lemma 3.5 of Auer et al. [4] and plugging in $\eta_t$ and $\gamma_t$, this becomes

$$\sum_{t=1}^{T}\sum_{i=1}^{d} p_{t,i}\ell_{t,i} \le 3\sqrt{\left(d + \sum_{t=1}^{T} Q_t\right)\log d} + \sum_{t=1}^{T}\mathbb{E}\left[\frac{\log W_t}{\eta_t} - \frac{\log W_{t+1}}{\eta_{t+1}} \,\middle|\, \mathcal{F}_{t-1}\right].$$

Taking expectations on both sides, the second term on the right-hand side telescopes into

$$\mathbb{E}\left[\frac{\log W_1}{\eta_1} - \frac{\log W_{T+1}}{\eta_{T+1}}\right] \le \mathbb{E}\left[-\frac{\log w_{T+1,j}}{\eta_{T+1}}\right] = \mathbb{E}\left[\frac{\log d}{\eta_{T+1}}\right] + \mathbb{E}\left[\hat L_{T,j}\right]$$

for any j ∈ [d], giving the desired result as

$$\sum_{t=1}^{T}\sum_{i=1}^{d} p_{t,i}\ell_{t,i} \le \sum_{t=1}^{T}\ell_{t,j} + 4\,\mathbb{E}\left[\sqrt{\left(d + \sum_{t=1}^{T} Q_t\right)\log d}\right],$$

where we used the definition of $\eta_T$ and the optimistic property of the loss estimates.

Setting m = 1 and c = $\gamma_t$ in Lemma 1 gives the following deterministic upper bound on each $Q_t$.

Lemma 2. For all t ∈ [T],

$$Q_t = \sum_{i=1}^{d}\frac{p_{t,i}}{o_{t,i} + \gamma_t} \le 2\alpha_t\log\left(1 + \frac{\lceil d^2/\gamma_t\rceil + d}{\alpha_t}\right) + 2.$$

Combining Lemma 2 with Theorem 1, we prove our main result concerning the regret of EXP3-IX.

Corollary 1. The regret of EXP3-IX satisfies

$$R_T \le 4\sqrt{\left(d + 2\sum_{t=1}^{T}(H_t\alpha_t + 1)\right)\log d}, \quad\text{where } H_t = \log\left(1 + \frac{\lceil d^2\sqrt{td/\log d}\,\rceil + d}{\alpha_t}\right) = O(\log(dT)).$$

3 Combinatorial semi-bandit problems with side observations

We now turn our attention to the setting of online combinatorial optimization (see [13, 7, 2]). In this variant of the online learning problem, the learner has access to a possibly huge action set $S \subseteq \{0, 1\}^d$ where each action is represented by a binary vector v of dimensionality d. In what follows, we assume that $\|v\|_1 \le m$ holds for all v ∈ S and some 1 ≤ m ≪ d, with the case m = 1 corresponding to the multi-armed bandit setting considered in the previous section. In each round t = 1, 2, . . . , T of the decision process, the learner picks an action $V_t \in S$ and incurs a loss of $V_t^\top \ell_t$.
At the end of the round, the learner receives some feedback based on its decision $V_t$ and the loss vector $\ell_t$. The regret of the learner is defined as

$$R_T = \max_{v \in S} \mathbb{E}\left[\sum_{t=1}^{T} (V_t - v)^\top \ell_t\right].$$

Previous work has considered the following feedback schemes in the combinatorial setting:
• The full-information scheme, where the learner gets to observe $\ell_t$ regardless of the chosen action. The minimax optimal regret of order $m\sqrt{T\log d}$ here is achieved by the COMPONENTHEDGE algorithm of [13], while Follow-the-Perturbed-Leader (FPL) [12, 10] was shown to enjoy a regret of order $m^{3/2}\sqrt{T\log d}$ by [16].
• The semi-bandit scheme, where the learner gets to observe the components $\ell_{t,i}$ of the loss vector for which $V_{t,i} = 1$, that is, the losses along the components chosen by the learner at time t. As shown by [2], COMPONENTHEDGE achieves a near-optimal $O(\sqrt{mdT\log d})$ regret guarantee, while [16] show that FPL enjoys a bound of $O(m\sqrt{dT\log d})$.
• The bandit scheme, where the learner only observes its own loss $V_t^\top \ell_t$. There are currently no known efficient algorithms that get close to the minimax regret in this setting; the reader is referred to Audibert et al. [2] for an overview of recent results.
In this section, we define a new feedback scheme situated between the semi-bandit and the full-information schemes. In particular, we assume that the learner gets to observe the losses of some other components not included in its own decision vector $V_t$. Similarly to the model of Alon et al. [1], the relation between the chosen action and the side observations is given by a directed observability graph $G_t$ (see the example in Figure 1). We refer to this feedback scheme as semi-bandit with side observations. While our theoretical results stated in the previous section continue to hold in this setting, combinatorial EXP3-IX can rarely be implemented efficiently; we refer to [7, 13] for some positive examples.
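The regret defined above compares the learner's cumulative loss to that of the best fixed action in hindsight. For a toy decision set it can be computed by brute force; the sketch below uses our own naming and evaluates one realized loss sequence rather than the expectation:

```python
def realized_regret(actions_played, loss_vectors, decision_set):
    """sum_t V_t . l_t  minus  min over v in S of sum_t v . l_t."""
    def dot(v, l):
        return sum(vi * li for vi, li in zip(v, l))
    learner_loss = sum(dot(v, l) for v, l in zip(actions_played, loss_vectors))
    best_fixed = min(sum(dot(v, l) for l in loss_vectors) for v in decision_set)
    return learner_loss - best_fixed

# two unit-vector actions; always playing the worse one for two rounds gives regret 2
S = [(1, 0), (0, 1)]
losses = [[0.0, 1.0], [0.0, 1.0]]
r = realized_regret([(0, 1), (0, 1)], losses, S)  # 2.0
```

For realistic combinatorial sets the minimum over S is of course exactly what an offline optimization oracle computes.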
As one of the main concerns in this paper is computational efficiency, we take a different approach: we propose a variant of FPL that efficiently implements the idea of implicit exploration in combinatorial semi-bandit problems with side observations.

3.1 Implicit exploration by geometric resampling

In each round t, FPL bases its decision on an estimate $\hat L_{t-1} = \sum_{s=1}^{t-1}\hat\ell_s$ of the total losses $L_{t-1} = \sum_{s=1}^{t-1}\ell_s$ as follows:

$$V_t = \arg\min_{v \in S} v^\top\left(\eta_t \hat L_{t-1} - Z_t\right). \qquad (5)$$

Here, $\eta_t > 0$ is a parameter of the algorithm and $Z_t$ is a perturbation vector with components drawn independently from an exponential distribution with unit expectation. The power of FPL lies in that it only requires an oracle that solves the (offline) optimization problem $\min_{v \in S} v^\top\ell$, and thus can be used to turn any efficient offline solver into an online optimization algorithm with strong guarantees. To define our algorithm precisely, we need some further notation. We redefine $\mathcal{F}_{t-1}$ to be $\sigma(V_{t-1}, \dots, V_1)$, let $O_{t,i}$ be the indicator of the observed component, and let $q_{t,i} = \mathbb{E}[V_{t,i} \mid \mathcal{F}_{t-1}]$ and $o_{t,i} = \mathbb{E}[O_{t,i} \mid \mathcal{F}_{t-1}]$. The most crucial point of our algorithm is the construction of our loss estimates. To implement the idea of implicit exploration by optimistic biasing, we apply a modified version of the geometric resampling method of Neu and Bartók [16], constructed as follows: let $O'_t(1), O'_t(2), \dots$ be independent copies of $O_t$ (see Footnote 3) and let $U_{t,i}$ be geometrically distributed random variables with parameter $\gamma_t$ for all i ∈ [d]. We let

$$K_{t,i} = \min\left(\{k : O'_{t,i}(k) = 1\} \cup \{U_{t,i}\}\right) \qquad (6)$$

and define our loss-estimate vector $\hat\ell_t \in \mathbb{R}^d$ with its i-th element as

$$\hat\ell_{t,i} = K_{t,i}\,O_{t,i}\,\ell_{t,i}. \qquad (7)$$

By definition, we have $\mathbb{E}[K_{t,i} \mid \mathcal{F}_{t-1}] = 1/(o_{t,i} + (1-o_{t,i})\gamma_t)$, implying that our loss estimates are optimistic in the sense that they lower bound the losses in expectation:

$$\mathbb{E}\left[\hat\ell_{t,i} \,\middle|\, \mathcal{F}_{t-1}\right] = \frac{o_{t,i}}{o_{t,i} + (1-o_{t,i})\gamma_t}\,\ell_{t,i} \le \ell_{t,i}.$$

Here we used the fact that $O_{t,i}$ is independent of $K_{t,i}$ and has expectation $o_{t,i}$ given $\mathcal{F}_{t-1}$.
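The waiting time $K_{t,i}$ in (6) is geometric: each resampled copy $O'_{t,i}(k)$ stops the count with probability $o_{t,i}$, and the independent cap $U_{t,i}$ fires with probability $\gamma_t$, so the per-step stopping probability is $o_{t,i} + (1-o_{t,i})\gamma_t$. The stated identity for $\mathbb{E}[K_{t,i} \mid \mathcal{F}_{t-1}]$ can be checked by summing the geometric distribution directly; this is a sanity-check sketch with our own naming, not part of the algorithm:

```python
def expected_K(o, gamma, tol=1e-12):
    """E[K] where K = min{k : O'(k)=1 or U=k}, O'(k) ~ Bernoulli(o) i.i.d.
    and U ~ Geometric(gamma); the per-step stopping probability is
    s = o + (1-o)*gamma, so P(K = k) = (1-s)**(k-1) * s."""
    s = o + (1 - o) * gamma
    total, k, tail = 0.0, 1, 1.0  # tail = (1-s)**(k-1)
    while tail > tol:
        total += k * tail * s
        tail *= 1 - s
        k += 1
    return total

# matches the closed form 1 / (o + (1-o)*gamma) used in the text
err = abs(expected_K(0.3, 0.1) - 1.0 / (0.3 + 0.7 * 0.1))
```

Plugging this expectation into (7) gives exactly the optimistic bound displayed above.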
We call this algorithm Follow-the-Perturbed-Leader with Implicit eXploration (FPL-IX, Algorithm 2). Note that the geometric resampling procedure can be terminated as soon as $K_{t,i}$ becomes well-defined for all i with $O_{t,i} = 1$. As noted by Neu and Bartók [16], this requires generating at most d copies of $O_t$ in expectation. As each of these copies requires one access to the linear optimization oracle over S, we conclude that the expected running time of FPL-IX is at most d times the expected running time of the oracle. A high-probability guarantee on the running time can be obtained by observing that $U_{t,i} \le \log(1/\delta)/\gamma_t$ holds with probability at least 1 − δ, and thus we can stop sampling after at most $d\log(d/\delta)/\gamma_t$ steps with probability at least 1 − δ.

3.2 Performance guarantees for FPL-IX

Algorithm 2 FPL-IX
1: Input: Set of actions S,
2: parameters $\gamma_t \in (0, 1)$, $\eta_t > 0$ for t ∈ [T].
3: for t = 1 to T do
4:   An adversary privately chooses losses $\ell_{t,i}$ for all i ∈ [d] and generates a graph $G_t$
5:   Draw $Z_{t,i} \sim \mathrm{Exp}(1)$ for all i ∈ [d]
6:   $V_t \leftarrow \arg\min_{v \in S} v^\top(\eta_t \hat L_{t-1} - Z_t)$
7:   Receive loss $V_t^\top \ell_t$
8:   Observe graph $G_t$
9:   Observe pairs $\{i, \ell_{t,i}\}$ for all i such that $(j \to i) \in G_t$ and $V_{t,j} = 1$
10:  Compute $K_{t,i}$ for all i ∈ [d] using Eq. (6)
11:  $\hat\ell_{t,i} \leftarrow K_{t,i}\,O_{t,i}\,\ell_{t,i}$
12: end for

The analysis presented in this section combines some techniques used by Kalai and Vempala [12], Hutter and Poland [11], and Neu and Bartók [16] for analyzing FPL-style learners. Our proofs also heavily rely on some specific properties of the IX loss estimate defined in Equation (7). The most important difference from the analysis presented in Section 2.2 is that we are now not able to use random learning rates, as we cannot compute the values corresponding to $Q_t$ efficiently. In fact, these values are observable in the information-theoretic sense, so we could prove bounds similar to Theorem 1 had we had access to infinite computational resources.
As our focus in this paper is on computationally efficient algorithms, we choose to pursue a different path. In particular, our learning rates will be tuned according to efficiently computable approximations $\tilde\alpha_t$ of the respective independence numbers $\alpha_t$ that satisfy $\alpha_t/C \le \tilde\alpha_t \le \alpha_t \le d$ for some C ≥ 1. For the sake of simplicity, we analyze the algorithm in the oblivious-adversary model. The following theorem states the performance guarantee for FPL-IX in terms of the learning rates and random variables of the form

$$\tilde Q_t(c) = \sum_{i=1}^{d} \frac{q_{t,i}}{o_{t,i} + c}.$$

(Footnote 3: Such independent copies can be simply generated by sampling independent copies of $V_t$ using the FPL rule (5) and then computing $O'_t(k)$ using the observability graph $G_t$. Notice that this procedure requires no interaction between the learner and the environment, although each sample requires an oracle access.)

Theorem 2. Assume $\gamma_t \le 1/2$ for all t and $\eta_1 \ge \eta_2 \ge \dots \ge \eta_T$. The regret of FPL-IX satisfies

$$R_T \le \frac{m(\log d + 1)}{\eta_T} + 4m\sum_{t=1}^{T}\eta_t\,\mathbb{E}\left[\tilde Q_t\!\left(\frac{\gamma_t}{1-\gamma_t}\right)\right] + \sum_{t=1}^{T}\gamma_t\,\mathbb{E}\left[\tilde Q_t(\gamma_t)\right].$$

Proof sketch. As usual for analyzing FPL methods [12, 11, 16], we first define a hypothetical learner that uses a time-independent perturbation vector $\tilde Z \sim Z_1$ and has access to $\hat\ell_t$ on top of $\hat L_{t-1}$:

$$\tilde V_t = \arg\min_{v \in S} v^\top\left(\eta_t \hat L_t - \tilde Z\right).$$

Clearly, this learner is infeasible, as it uses observations from the future. Also, observe that this learner does not actually interact with the environment and depends on the predictions made by the actual learner only through the loss estimates. By standard arguments, we can prove

$$\mathbb{E}\left[\sum_{t=1}^{T}\left(\tilde V_t - v\right)^\top \hat\ell_t\right] \le \frac{m(\log d + 1)}{\eta_T}.$$

Using the techniques of Neu and Bartók [16], we can relate the performance of $V_t$ to that of $\tilde V_t$, which we can further upper bound, after a long and tedious calculation, as

$$\mathbb{E}\left[(V_t - \tilde V_t)^\top\hat\ell_t \,\middle|\, \mathcal{F}_{t-1}\right] \le \eta_t\,\mathbb{E}\left[\big(\tilde V_{t-1}^\top\hat\ell_t\big)^2 \,\middle|\, \mathcal{F}_{t-1}\right] \le 4m\eta_t\,\mathbb{E}\left[\tilde Q_t\!\left(\frac{\gamma_t}{1-\gamma_t}\right) \,\middle|\, \mathcal{F}_{t-1}\right].$$
The result follows by observing that $\mathbb{E}[v^\top\hat\ell_t \mid \mathcal{F}_{t-1}] \le v^\top\ell_t$ for any fixed v ∈ S, by the optimistic property of the IX estimate, and from the fact that, by the definition of the estimates,

$$\mathbb{E}\left[\tilde V_{t-1}^\top\hat\ell_t \,\middle|\, \mathcal{F}_{t-1}\right] \ge \mathbb{E}\left[V_t^\top\ell_t \,\middle|\, \mathcal{F}_{t-1}\right] - \gamma_t\,\mathbb{E}\left[\tilde Q_t(\gamma_t)\right].$$

The next lemma shows a suitable upper bound for the last two terms in the bound of Theorem 2. It follows from observing that $o_{t,i} \ge \frac{1}{m}\sum_{j \in N^-_{t,i} \cup \{i\}} q_{t,j}$ and applying Lemma 1.

Lemma 3. For all t ∈ [T] and any c ∈ (0, 1),

$$\tilde Q_t(c) = \sum_{i=1}^{d}\frac{q_{t,i}}{o_{t,i} + c} \le 2m\alpha_t\log\left(1 + \frac{m\lceil d^2/c\rceil + d}{\alpha_t}\right) + 2m.$$

We are now ready to state the main result of this section, which is obtained by combining Theorem 2, Lemma 3, and Lemma 3.5 of Auer et al. [4] applied to the following upper bound:

$$\sum_{t=1}^{T}\frac{\alpha_t}{\sqrt{d + \sum_{s=1}^{t-1}\tilde\alpha_s}} \le \sum_{t=1}^{T}\frac{\alpha_t}{\sqrt{\sum_{s=1}^{t}\alpha_s/C}} \le 2\sqrt{C\sum_{t=1}^{T}\alpha_t} \le 2\sqrt{d + C\sum_{t=1}^{T}\alpha_t}.$$

Corollary 2. Assume that for all t ∈ [T], $\alpha_t/C \le \tilde\alpha_t \le \alpha_t \le d$ for some C > 1, and assume md > 4. Setting

$$\eta_t = \gamma_t = \sqrt{(\log d + 1)\Big/\Big(m\big(d + \textstyle\sum_{s=1}^{t-1}\tilde\alpha_s\big)\Big)},$$

the regret of FPL-IX satisfies

$$R_T \le Hm^{3/2}\sqrt{\left(d + C\sum_{t=1}^{T}\alpha_t\right)(\log d + 1)}, \quad\text{where } H = O(\log(mdT)).$$

Conclusion

We presented an efficient algorithm for learning with side observations based on implicit exploration. This technique gave rise to a multitude of improvements. Remarkably, our algorithms no longer need to know the observation system before choosing an action, unlike the method of [1]. Moreover, we extended the partial observability model of [15, 1] to accommodate problems with large and structured action sets, and also gave an efficient algorithm for this setting.

Acknowledgements The research presented in this paper was supported by the French Ministry of Higher Education and Research, by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 270327 (CompLACS), and by the FUI project Hermès.

References
[1] Alon, N., Cesa-Bianchi, N., Gentile, C., and Mansour, Y. (2013). From Bandits to Experts: A Tale of Domination and Independence.
In Neural Information Processing Systems.
[2] Audibert, J. Y., Bubeck, S., and Lugosi, G. (2014). Regret in Online Combinatorial Optimization. Mathematics of Operations Research, 39:31–45.
[3] Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. (2002a). The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77.
[4] Auer, P., Cesa-Bianchi, N., and Gentile, C. (2002b). Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64:48–75.
[5] Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D., Schapire, R., and Warmuth, M. (1997). How to use expert advice. Journal of the ACM, 44:427–485.
[6] Cesa-Bianchi, N. and Lugosi, G. (2006). Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA.
[7] Cesa-Bianchi, N. and Lugosi, G. (2012). Combinatorial bandits. Journal of Computer and System Sciences, 78:1404–1422.
[8] Chen, W., Wang, Y., and Yuan, Y. (2013). Combinatorial Multi-Armed Bandit: General Framework and Applications. In International Conference on Machine Learning, pages 151–159.
[9] Györfi, L. and Ottucsák, Gy. (2007). Sequential prediction of unbounded stationary time series. IEEE Transactions on Information Theory, 53(5):1866–1872.
[10] Hannan, J. (1957). Approximation to Bayes Risk in Repeated Play. Contributions to the Theory of Games, 3:97–139.
[11] Hutter, M. and Poland, J. (2004). Prediction with Expert Advice by Following the Perturbed Leader for General Weights. In Algorithmic Learning Theory, pages 279–293.
[12] Kalai, A. and Vempala, S. (2005). Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71:291–307.
[13] Koolen, W. M., Warmuth, M. K., and Kivinen, J. (2010). Hedging structured concepts. In Proceedings of the 23rd Annual Conference on Learning Theory (COLT), pages 93–105.
[14] Littlestone, N. and Warmuth, M. (1994). The weighted majority algorithm. Information and Computation, 108:212–261.
[15] Mannor, S.
and Shamir, O. (2011). From Bandits to Experts: On the Value of Side Observations. In Neural Information Processing Systems.
[16] Neu, G. and Bartók, G. (2013). An Efficient Algorithm for Learning with Semi-bandit Feedback. In Jain, S., Munos, R., Stephan, F., and Zeugmann, T., editors, Algorithmic Learning Theory, volume 8139 of Lecture Notes in Computer Science, pages 234–248. Springer Berlin Heidelberg.
[17] Vovk, V. (1990). Aggregating strategies. In Proceedings of the Third Annual Workshop on Computational Learning Theory (COLT), pages 371–386.
2014
An Integer Polynomial Programming Based Framework for Lifted MAP Inference Somdeb Sarkhel, Deepak Venugopal Computer Science Department The University of Texas at Dallas {sxs104721,dxv021000}@utdallas.edu Parag Singla Department of CSE I.I.T. Delhi parags@cse.iitd.ac.in Vibhav Gogate Computer Science Department The University of Texas at Dallas vgogate@hlt.utdallas.edu Abstract In this paper, we present a new approach for lifted MAP inference in Markov logic networks (MLNs). The key idea in our approach is to compactly encode the MAP inference problem as an Integer Polynomial Program (IPP) by schematically applying three lifted inference steps to the MLN: lifted decomposition, lifted conditioning, and partial grounding. Our IPP encoding is lifted in the sense that an integer assignment to a variable in the IPP may represent a truth-assignment to multiple indistinguishable ground atoms in the MLN. We show how to solve the IPP by first converting it to an Integer Linear Program (ILP) and then solving the latter using state-of-the-art ILP techniques. Experiments on several benchmark MLNs show that our new algorithm is substantially superior to ground inference and existing methods in terms of computational efficiency and solution quality. 1 Introduction Many domains in AI and machine learning (e.g., NLP, vision, etc.) are characterized by rich relational structure as well as uncertainty. Statistical relational learning (SRL) models [5] combine the power of first-order logic with probabilistic graphical models to effectively handle both of these aspects. Among a number of SRL representations that have been proposed to date, Markov logic [4] is arguably the most popular one because of its simplicity; it compactly represents domain knowledge using a set of weighted first order formulas and thus only minimally modifies first-order logic. The key task over Markov logic networks (MLNs) is inference which is the means of answering queries posed over the MLN. 
Although one can reduce the problem of inference in MLNs to inference in graphical models by propositionalizing or grounding the MLN (which yields a Markov network), this approach is not scalable. The reason is that the resulting Markov network can be quite large, having millions of variables and features. One approach to achieve scalability is lifted inference, which operates on groups of indistinguishable random variables rather than on individual variables. Lifted inference algorithms identify groups of indistinguishable atoms by looking for symmetries in the first-order logic representation, grounding the MLN only as necessary. Naturally, when the number of such groups is small, lifted inference is significantly better than propositional inference. Starting with the work of Poole [17], researchers have invented a number of lifted inference algorithms. At a high level, these algorithms "lift" existing probabilistic inference algorithms (cf. [3, 6, 7, 21, 22, 23, 24]). However, many of these lifted inference algorithms have focused on the task of marginal inference, i.e., finding the marginal probability of a ground atom given evidence. For many problems of interest, such as in vision and NLP, one is often interested in the MAP inference task, i.e., finding the most likely assignment to all ground atoms given evidence. In recent years, there has been a growing interest in lifted MAP inference. Notable lifted MAP approaches include exploiting uniform assignments for lifted MPE [1], lifted variational inference using graph automorphism [2], lifted likelihood-maximization for MAP [8], exploiting symmetry for MAP inference [15] and efficient lifting of MAP LP relaxations using k-locality [13]. However, a key problem with most of the existing lifted approaches is that they require significant modifications to be made to propositional inference algorithms, and for optimal performance require lifting several steps of propositional algorithms.
This is time consuming because one has to lift decades of advances in propositional inference. To circumvent this problem, Sarkhel et al. [18] recently advocated using the "lifting as pre-processing" paradigm [20]. The key idea is to apply lifted inference as a pre-processing step and construct a Markov network that is lifted in the sense that its size can be much smaller than that of the ground Markov network and a complete assignment to its variables may represent several complete assignments in the ground Markov network. Unfortunately, Sarkhel et al.'s approach does not use existing research on lifted inference to the fullest extent and is efficient only when first-order formulas have no shared terms. In this paper, we propose a novel lifted MAP inference approach which is also based on the "lifting as pre-processing" paradigm but, unlike Sarkhel et al.'s approach, is at least as powerful as probabilistic theorem proving [6], an advanced lifted inference algorithm. Moreover, our new approach can easily subsume Sarkhel et al.'s approach by using it as just another lifted inference rule. The key idea in our approach is to reduce the lifted MAP inference (maximization) problem to an equivalent Integer Polynomial Program (IPP). Each variable in the IPP potentially refers to an assignment to a large number of ground atoms in the original MLN. Hence, the size of the search space of the generated IPP can be significantly smaller than that of the ground Markov network.
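For intuition, an IPP over small finite integer domains can be solved by exhaustive search. The sketch below is a toy illustration of the problem class only; it is not the solver used in the paper, which instead converts the IPP to an ILP:

```python
from itertools import product

def solve_ipp(objective, constraints, domains):
    """Maximize objective(x) subject to g(x) >= 0 for every g in constraints,
    by brute force. domains lists the admissible integer values per variable."""
    best_x, best_val = None, None
    for x in product(*domains):
        if all(g(x) >= 0 for g in constraints):
            val = objective(x)
            if best_val is None or val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# maximize 3*x0*x1 - x1**2 subject to x0 + x1 <= 3 (written as 3 - x0 - x1 >= 0)
x, v = solve_ipp(lambda x: 3 * x[0] * x[1] - x[1] ** 2,
                 [lambda x: 3 - x[0] - x[1]],
                 [range(4), range(4)])  # -> x == (2, 1), v == 5
```

In the lifted encoding, a single such integer variable can stand for the number of true ground atoms in a group of indistinguishable atoms, which is what makes the IPP's search space smaller than the ground network's.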
Our algorithm to generate the IPP is based on the following three lifted inference operations, which incrementally build the polynomial objective function and its associated constraints: (1) Lifted decomposition [6] finds sub-problems with identical structure and solves only one of them; (2) Lifted conditioning [6] replaces an atom with only one logical variable (a singleton atom) by a variable in the integer polynomial program such that each of its values denotes the number of true ground atoms of the singleton atom in a solution; and (3) Partial grounding is used to simplify the MLN further so that one of the above two operations can be applied. To solve the IPP generated from the MLN, we convert it to an equivalent zero-one Integer Linear Program (ILP) using a classic conversion method outlined in [25]. A desirable characteristic of our reduction is that we can use any off-the-shelf ILP solver to get an exact or approximate solution to the original problem. We used a parallel ILP solver, Gurobi [9], for this purpose. We evaluated our approach on multiple benchmark MLNs and compared with Alchemy [11] and Tuffy [14], two state-of-the-art MLN systems that perform MAP inference by grounding the MLN, as well as with the lifted MAP inference approach of Sarkhel et al. [18]. Experimental results show that our approach is superior to Alchemy, Tuffy and Sarkhel et al.'s approach in terms of scalability and accuracy.

2 Notation And Background

Propositional Logic. In propositional logic, sentences or formulas, denoted by f, are composed of symbols called propositions or atoms, denoted by upper case letters (e.g., X, Y, Z, etc.), that are joined by five logical operators: ∧ (conjunction), ∨ (disjunction), ¬ (negation), ⇒ (implication) and ⇔ (equivalence). Each atom takes values from the binary domain {true, false}.

First-order Logic. An atom in first-order logic (FOL) is a predicate that represents relations between objects.
A predicate consists of a predicate symbol, denoted by monospace fonts (e.g., Friends, R, etc.), followed by a parenthesized list of arguments. A term is a logical variable, denoted by lower case letters such as x, y, and z, or a constant, denoted by upper case letters such as X, Y, and Z. We assume that each logical variable, say x, is typed and takes values from a finite set of constants, called its domain, denoted by Δx. In addition to the logical operators, FOL includes the universal (∀) and existential (∃) quantifiers. Quantifiers express properties of an entire collection of objects. A formula in first-order logic is an atom, or any complex sentence that can be constructed from atoms using logical operators and quantifiers. For example, the formula ∀x Smokes(x) ⇒ Asthma(x) states that all persons who smoke have asthma. A knowledge base (KB) is a set of first-order formulas. In this paper we use a subset of FOL which has no function symbols, equality constraints or existential quantifiers. We assume that formulas are standardized apart, namely that no two formulas share a logical variable. We also assume that domains are finite and that there is a one-to-one mapping between constants and objects in the domain (Herbrand interpretations). We assume that each formula f is of the form ∀x f, where x is the set of variables in f (also denoted by V(f)) and f is a disjunction of literals (a clause), each literal being an atom or its negation. For brevity, we will drop ∀ from all formulas. A ground atom is an atom containing only constants. A ground formula is a formula obtained by substituting all of its variables with constants, namely a formula containing only ground atoms. A ground KB is a KB containing all possible groundings of all of its formulas.

Markov Logic. Markov logic [4] extends FOL by softening the hard constraints expressed by formulas and is arguably the most popular modeling language for SRL.
A soft formula or a weighted formula is a pair (f, w) where f is a formula in FOL and w is a real number. A Markov logic network (MLN), denoted by M, is a set of weighted formulas (fi, wi). Given a set of constants that represent objects in the domain, a Markov logic network represents a Markov network or a log-linear model. The ground Markov network is obtained by grounding the weighted first-order knowledge base with one feature for each grounding of each formula. The weight of the feature is the weight attached to the formula. The ground network represents the probability distribution P(ω) = (1/Z) exp(Σ_i wi N(fi, ω)), where ω is a world, namely a truth assignment to all ground atoms, N(fi, ω) is the number of groundings of fi that evaluate to true given ω, and Z is a normalization constant. For simplicity, we will assume that the MLN is in normal form and has no self joins, namely no two atoms in a formula have the same predicate symbol [10]. A normal MLN is an MLN that satisfies the following two properties: (i) there are no constants in any formula; and (ii) if two distinct atoms of predicate R have variables x and y as the same argument of R, then ∆x = ∆y. Because of the second condition, in normal MLNs we can associate domains with each argument of a predicate. Moreover, for inference purposes, in normal MLNs we do not have to keep track of the actual elements in the domain of a variable; all we need to know is the size of the domain [10]. Let iR denote the i-th argument of predicate R and let D(iR) denote the number of elements in the domain of iR. Henceforth, we will abuse notation and refer to normal MLNs as MLNs. MAP Inference in MLNs. A common optimization inference task over MLNs is finding the most probable state of the world ω, that is, finding a complete assignment to all ground atoms which maximizes the probability. Formally,

arg max_ω P_M(ω) = arg max_ω (1/Z(M)) exp(Σ_i wi N(fi, ω)) = arg max_ω Σ_i wi N(fi, ω)   (1)

From Eq.
(1), we can see that the MAP problem in Markov logic reduces to finding the truth assignment that maximizes the sum of weights of satisfied clauses. Therefore, any weighted satisfiability solver can be used to solve this problem. The problem is NP-hard in general, but effective solvers exist, both exact and approximate. Examples of such solvers are MaxWalkSAT [19], a local search solver, and Clone [16], a branch-and-bound solver. Both these algorithms are propositional and therefore they are unable to exploit the relational structure that is inherent to MLNs. Integer Polynomial Programming (IPP). An IPP problem is defined as follows:

Maximize f(x1, x2, ..., xn)
Subject to gj(x1, x2, ..., xn) ≥ 0 (j = 1, 2, ..., m)

where each xi takes finite integer values, and the objective function f(x1, x2, ..., xn) and each of the constraints gj(x1, x2, ..., xn) are polynomials in x1, x2, ..., xn. We will compactly represent an integer polynomial programming problem (IPP) as an ordered triple I = ⟨f, G, X⟩, where X = {x1, x2, ..., xn} and G = {g1, g2, ..., gm}. 3 Probabilistic Theorem Proving Based MAP Inference Algorithm We motivate our approach by presenting, in Algorithm 1, the most basic algorithm for lifted MAP inference. Algorithm 1 extends the probabilistic theorem proving (PTP) algorithm of Gogate and Domingos [6] to MAP inference and integrates it with Sarkhel et al.'s lifted MAP inference rule [18]. It is obtained by replacing the summation operator in the conditioning step of PTP by the maximization operator (PTP computes the partition function). Note that throughout the paper, we will present algorithms that compute the MAP value rather than the MAP assignment; the assignment can be recovered by tracing back the path that yielded the MAP value. We describe the steps in Algorithm 1 next, starting with some required definitions.

Algorithm 1 PTP-MAP(MLN M)
  if M is empty return 0
  Simplify(M)
  if M has disjoint MLNs M1, ..., Mk then return Σ_{i=1}^{k} PTP-MAP(Mi)
  if M has a decomposer d such that D(i ∈ d) > 1 then return PTP-MAP(M|d)
  if M has an isolated atom R such that D(iR) > 1 then return PTP-MAP(M|{1R})
  if M has a singleton atom A then return max_{i=0}^{D(1A)} PTP-MAP(M|(A, i)) + w(A, i)
  Heuristically select an argument iR
  return PTP-MAP(M|G(iR))

Two arguments iR and jS are called unifiable if they share a logical variable in an MLN formula. Clearly, unifiability, viewed as a binary relation U(iR, jS), is symmetric and reflexive. Let U* be the transitive closure of U. Given an argument iS, let Unify(iS) denote its equivalence class under U*. Simplification. In the simplification step, we simplify the predicates, possibly reducing their arity (cf. [6, 10] for details). An example simplification step is the following: if no atoms of a predicate share logical variables with other atoms in the MLN, then we can replace the predicate by a new predicate having just one argument; the domain size of the argument is the product of the domain sizes of the individual arguments. Example 1. Consider a normal MLN with two weighted formulas: R(x1, y1) ∨ S(z1, u1), w1 and R(x2, y2) ∨ S(z2, u2) ∨ T(z2, v2), w2. We can simplify this MLN by replacing R by a predicate J having one argument such that D(1J) = D(1R) × D(2R). The new MLN has two formulas: J(x1) ∨ S(z1, u1), w1 and J(x2) ∨ S(z2, u2) ∨ T(z2, v2), w2. Decomposition. If an MLN can be decomposed into two or more disjoint MLNs sharing no first-order atom, then the MAP solution is just a sum over the MAP solutions of all the disjoint MLNs. Lifted Decomposition. The main idea in lifted decomposition [6] is to identify identical but disconnected components in the ground Markov network by looking for symmetries in the first-order representation. Since the disconnected components are identical, only one of them needs to be solved, and the MAP value is the MAP value of one of the components times the number of components.
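To make the lifted decomposition idea concrete, here is a small brute-force check in Python on the MLN of Example 2 below (clauses R(x) ∨ S(x) with weight w1 and R(y) ∨ T(y) with weight w2, domain size n); the weights and domain size used are hypothetical. The ground network splits into n identical components, so the ground MAP value should equal n times the MAP value of one component:

```python
from itertools import product

def ground_map(n, w1, w2):
    # Brute-force MAP over the full ground network.
    # Ground atoms: R(k), S(k), T(k) for k in range(n).
    # Ground clauses: R(k) or S(k) (weight w1), R(k) or T(k) (weight w2).
    best = float("-inf")
    for bits in product([0, 1], repeat=3 * n):
        R, S, T = bits[:n], bits[n:2 * n], bits[2 * n:]
        val = sum(w1 * (R[k] | S[k]) + w2 * (R[k] | T[k]) for k in range(n))
        best = max(best, val)
    return best

def lifted_map(n, w1, w2):
    # One component has atoms R, S, T and clauses (R or S, w1), (R or T, w2).
    comp = max(w1 * (r | s) + w2 * (r | t)
               for r, s, t in product([0, 1], repeat=3))
    return n * comp  # identical components: solve one, multiply by the count

assert ground_map(3, 1.5, 2.0) == lifted_map(3, 1.5, 2.0)
```

The exhaustive search over 2^(3n) ground assignments and the lifted computation over 2^3 component assignments agree, which is exactly the saving lifted decomposition provides.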
One way of identifying identical disconnected components is by using a decomposer [6, 10], defined below. Definition 1. [Decomposer] Given an MLN M having m formulas denoted by f1, ..., fm, d = Unify(iR), where R is a predicate in M, is called a decomposer iff the following conditions are satisfied: (i) for each predicate R in M there is exactly one argument iR such that iR ∈ d; and (ii) in each formula fi, there exists a variable x such that x appears in all atoms of fi and, for each atom having predicate symbol R in fi, x appears at position iR ∈ d. Denote by M|d the MLN obtained from M by setting the domain size of every element iR of d to one and updating the weight of each formula that mentions R by multiplying it by D(iR). We can prove that: Proposition 1. Given a decomposer d, the MAP value of M is equal to the MAP value of M|d. Example 2. Consider a normal MLN M having two weighted formulas R(x) ∨ S(x), w1 and R(y) ∨ T(y), w2 where D(1R) = D(1S) = D(1T) = n. Here, d = {1R, 1S, 1T} is a decomposer. The MLN M|d is the MLN having the same two formulas as M with weights updated to nw1 and nw2 respectively. Moreover, in the new MLN D(1R) = D(1S) = D(1T) = 1. Isolated Singleton Rule. Sarkhel et al. [18] proved that if the MLN M has an isolated predicate R, namely one whose atoms share no logical variables with other atoms, then one of the MAP solutions of M has either all ground atoms of R set to true or all of them set to false; that is, the solution lies at the extreme assignments to the groundings of R. Since we simplify the MLN, all such predicates R have only one argument, namely, they are singleton. Therefore, the following proposition is immediate: Proposition 2. If M has an isolated singleton predicate R, then the MAP value of M equals the MAP value of M|{1R} (the notation M|{1R} is defined just after the definition of the decomposer). Lifted Conditioning over Singletons.
Performing a conditioning operation on a predicate means conditioning on all possible ground atoms of that predicate. Naïvely, this can result in an exponential number of alternative MLNs that need to be solved, one for each assignment to all groundings of the predicate. However, if the predicate is singleton, we can group these assignments into equi-probable sets based on the number of true groundings of the predicate (counting assignment) [6, 10, 12]. In this case, we say that the lifted conditioning operator is applicable. For a singleton A, we denote the counting assignment as the ordered pair (A, i), which the reader should interpret as: exactly i groundings of A are true and the remaining are false. We denote by M|(A, i) the MLN obtained from M as follows. For each element jR in Unify(1A) (in some order), we split the predicate R into two predicates R1 and R2 such that D(jR1) = i and D(jR2) = D(1A) − i. We then rewrite all formulas using these new predicate symbols. Assume that A is split into two predicates A1 and A2 with D(1A1) = i and D(1A2) = D(1A) − i. Then, we delete all formulas in which either A1 appears positively or A2 appears negatively (because they are satisfied). Next, we delete all literals of A1 and A2 from all formulas in the MLN. The weights of all formulas (which are not deleted) remain unchanged, except for those formulas in which atoms of A1 or A2 do not share logical variables with other atoms. The weight of each such formula f with weight w is changed to w × D(1A1) if A1 appears in the clause, or to w × D(1A2) if A2 appears in the clause. The weight w(A, i) is calculated as follows. Let F(A1) and F(A2) denote the sets of satisfied formulas (which are deleted) in which A1 and A2 participate, respectively. We introduce some additional notation. Let V(f) denote the set of logical variables in a formula f.
Given a formula f, for each variable y ∈ V(f), let iR(y) denote the position of the argument of a predicate R such that y appears at that position in an atom of R in f. Then, w(A, i) is given by:

w(A, i) = Σ_{k=1}^{2} Σ_{fj ∈ F(Ak)} wj ∏_{y ∈ V(fj)} D(iR(y))

We can show that: Proposition 3. Given an MLN M having a singleton atom A, the MAP value of M equals max_{i=0}^{D(1A)} MAP-value(M|(A, i)) + w(A, i). Example 3. Consider a normal MLN M having two weighted formulas R(x) ∨ S(x), w1 and R(y) ∨ S(z), w2 with domain sizes D(1R) = D(1S) = n. The MLN M|(R, i) is the MLN having three weighted formulas: S2(x2), w1; S1(x1), w2(n − i); and S2(x3), w2(n − i), with domains D(1S1) = i and D(1S2) = n − i. The weight w(R, i) = iw1 + niw2. Partial grounding. In the absence of a decomposer, or when the singleton rule is not applicable, we will have to partially ground a predicate. For this, we heuristically select an argument iR to ground. Let M|G(iR) denote the MLN obtained from M as follows. For each argument iS ∈ Unify(iR), we create D(iS) new predicates which have all arguments of S except iS. We then update all formulas with the new predicates. For example: Example 4. Consider an MLN with two formulas: R(x, y) ∨ S(y, z), w1 and S(a, b) ∨ T(a, c), w2. Let D(2R) = 2. After grounding 2R, we get an MLN having four formulas: R1(x1) ∨ S1(z1), w1; R2(x2) ∨ S2(z2), w1; S1(b1) ∨ T1(c1), w2; and S2(b2) ∨ T2(c2), w2. Since partial grounding will create many new clauses, we will try to use this operator as sparingly as possible. The following theorem is immediate from [6, 18] and the discussion above. Theorem 1. PTP-MAP(M) computes the MAP value of M. 4 Integer Polynomial Programming formulation for Lifted MAP PTP-MAP performs an exhaustive search over all possible lifted assignments in order to find the optimal MAP value. It can be very slow without proper pruning, which is why branch-and-bound algorithms are widely used for many similar optimization tasks.
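As a numerical sanity check on Proposition 3 and the weight w(A, i) above, the following brute-force Python sketch (illustrative only; the weights are hypothetical) compares the ground MAP value of the Example 3 MLN with the value obtained by lifted conditioning on R. In M|(R, i) all remaining formulas are unit clauses, so each ground atom of S1 and S2 can be set independently:

```python
from itertools import product

def ground_map(n, w1, w2):
    # Ground MLN of Example 3: clauses R(a) or S(a) (weight w1, n groundings)
    # and R(a) or S(b) (weight w2, n*n groundings).
    best = float("-inf")
    for bits in product([0, 1], repeat=2 * n):
        R, S = bits[:n], bits[n:]
        val = (w1 * sum(R[a] | S[a] for a in range(n))
               + w2 * sum(R[a] | S[b] for a in range(n) for b in range(n)))
        best = max(best, val)
    return best

def lifted_map(n, w1, w2):
    # Condition on the counting assignment (R, i): exactly i groundings of R true.
    best = float("-inf")
    for i in range(n + 1):
        w_Ri = i * w1 + n * i * w2                     # weight of deleted satisfied formulas
        rest = ((n - i) * max(0, w1 + w2 * (n - i))    # each S2 atom chosen independently
                + i * max(0, w2 * (n - i)))            # each S1 atom chosen independently
        best = max(best, w_Ri + rest)
    return best

for w1, w2 in [(1.0, 0.5), (-2.0, 1.0)]:
    assert ground_map(3, w1, w2) == lifted_map(3, w1, w2)
```

The lifted computation examines only n + 1 counting assignments instead of 2^(2n) ground assignments, yet returns the same MAP value.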
The branch-and-bound algorithm maintains a global best solution found so far as a lower bound. If the estimated upper bound of a node is not better than the lower bound, the node is pruned and the search continues with other branches. However, instead of developing a lifted-MAP-specific upper bound heuristic to improve Algorithm 1, we propose to encode the lifted search problem as an Integer Polynomial Programming (IPP) problem. This way we can use existing off-the-shelf advanced machinery, which includes pruning techniques, search heuristics, caching, problem decomposition and upper bounding techniques, to solve the IPP. At a high level, our encoding algorithm runs PTP-MAP schematically, performing all steps in PTP-MAP except the search or conditioning step. Before we present our algorithm, we define schematic MLNs (SMLNs), the basic structure on which our algorithm operates. SMLNs are normal MLNs with two differences: (1) weights attached to formulas are polynomials instead of constants, and (2) domain sizes of arguments are linear expressions instead of constants.

Algorithm 2 SMLN-2-IPP(SMLN S)
  if S is empty return ⟨0, ∅, ∅⟩
  Simplify(S)
  if S has disjoint SMLNs S1, ..., Sk then
    for each disjoint SMLN Si: ⟨fi, Gi, Xi⟩ = SMLN-2-IPP(Si)
    return ⟨Σ_{i=1}^{k} fi, ∪_{i=1}^{k} Gi, ∪_{i=1}^{k} Xi⟩
  if S has a decomposer d then return SMLN-2-IPP(S|d)
  if S has an isolated singleton R then return SMLN-2-IPP(S|{1R})
  if S has a singleton atom A then
    introduce an IPP variable i
    form a constraint g as 0 ≤ i ≤ D(1A)
    ⟨f, G, X⟩ = SMLN-2-IPP(S|(A, i))
    return ⟨f + w(A, i), G ∪ {g}, X ∪ {i}⟩
  Heuristically select an argument iR
  return SMLN-2-IPP(S|G(iR))

Algorithm 2 presents our approach to encoding the lifted MAP problem as an IPP problem. It mirrors Algorithm 1, the only difference being the lifted conditioning step.
Specifically, in the lifted conditioning step, instead of going over all possible branches corresponding to all possible counting assignments, the algorithm uses a representative branch which has a variable associated with the corresponding counting assignment. All update steps described in the previous section remain unchanged, with the caveat that in S|(A, i), i is symbolic (an integer variable). At termination, Algorithm 2 yields an IPP. The following theorem is immediate from the correctness of Algorithm 1. Theorem 2. Given an MLN M and its associated schematic MLN S, the optimum solution to the Integer Polynomial Programming problem returned by SMLN-2-IPP(S) is the MAP solution of M. In the next three examples, we show the IPP output by Algorithm 2 on some example MLNs. Example 5. Consider an MLN having one weighted formula: R(x) ∨ S(x), w1 such that D(1R) = D(1S) = n. Here, d = {1R, 1S} is a decomposer. By applying the decomposer rule, the weight of the formula becomes nw1 and the domain size is set to 1. After conditioning on R, the objective function obtained is nw1r and the formula changes to S(x), nw1(1 − r). After conditioning on S, the IPP obtained has objective function nw1r + nw1(1 − r)s and two constraints: 0 ≤ r ≤ 1 and 0 ≤ s ≤ 1. Example 6. Consider an MLN having one weighted formula: R(x) ∨ S(y), w1 such that D(1R) = nx and D(1S) = ny. Here R and S are isolated, and therefore by applying the isolated singleton rule the weight of the formula becomes nxnyw1. This is similar to the previous example; only the weight of the formula is different. Therefore, substituting this new weight, the IPP output by Algorithm 2 will have objective function nxnyw1r + nxnyw1(1 − r)s and two constraints 0 ≤ r ≤ 1 and 0 ≤ s ≤ 1. Example 7. Consider an MLN having two weighted formulas: R(x) ∨ S(x), w1 and R(z) ∨ S(y), w2 such that D(1R) = D(1S) = n.
On this MLN, the IPP output by Algorithm 2 has the objective function rw1 + r²w2 + rw2(n − r) + s2·w1(n − r) + s2·w2(n − r)² + s1·w2(n − r)r and constraints 0 ≤ r ≤ n, 0 ≤ s1 ≤ 1 and 0 ≤ s2 ≤ 1. The operations that will be applied, in order, are: lifted conditioning on R, creating two new predicates S1 and S2; decomposer on 1S1; decomposer on 1S2; and then lifted conditioning on S1 and S2 respectively. 4.1 Solving the Integer Polynomial Programming Problem Although we can directly solve the IPP using any off-the-shelf mathematical optimization software, IPP solvers are not as mature as Integer Linear Programming (ILP) solvers. Therefore, for efficiency reasons, we propose to convert the IPP to an ILP using the classic method outlined in [25] (we skip the details for lack of space). The method first converts the IPP to a zero-one Polynomial Programming problem and then linearizes it by adding additional variables and constraints for each higher-degree term. Once the problem is converted to an ILP, we can use any standard ILP solver to solve it. Next, we state a key property of this conversion in the following theorem. Theorem 3. The search space for solving the IPP obtained from Algorithm 2 using the conversion described in [25] is polynomial in the maximum range of the variables. Proof. Let n be the number of variables of the IPP problem, where each variable ranges from 0 to d − 1 (i.e., for each variable 0 ≤ vi ≤ d − 1). As we first convert everything to binary, the zero-one Polynomial Programming problem will have O(n log₂ d) variables. If the highest degree of a term in the IPP problem is k, we will need to introduce O(log₂ d^k) binary variables (as multiplying k variables, each bounded by d, will result in terms bounded by d^k) to linearize it.
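The linearization step just described can be illustrated concretely. In the standard scheme of this kind (as in [25]), a product z = x1·x2·…·xk of 0-1 variables is replaced by a fresh 0-1 variable z together with the linear constraints z ≤ xj for each j and z ≥ Σj xj − (k − 1). A small exhaustive check in Python (an illustrative sketch, not an ILP solver) confirms that these constraints force z to equal the product:

```python
from itertools import product

def linearize_product(xs, z):
    # Linear constraints forcing z = x1*x2*...*xk for 0-1 variables:
    #   z <= x_j for every j, and z >= sum(x_j) - (k - 1).
    k = len(xs)
    return all(z <= x for x in xs) and z >= sum(xs) - (k - 1)

# Exhaustively verify that the constraints admit exactly z = prod(xs).
for k in range(1, 5):
    for xs in product([0, 1], repeat=k):
        feasible = [z for z in (0, 1) if linearize_product(xs, z)]
        prod_val = 1
        for x in xs:
            prod_val *= x
        assert feasible == [prod_val]
```

Each degree-k monomial thus costs one extra variable and k + 1 linear constraints, which is the source of the search-space bound in the proof.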
Since the search space of an ILP is exponential in the number of variables, the search space for solving the IPP problem is:

O(2^(n log₂ d + log₂ d^k)) = O(2^(n log₂ d)) O(2^(k log₂ d)) = O(d^n) O(d^k) = O(d^(n+k))

We conclude this section by summarizing the power of our new approach: Theorem 4. The search space of the IPP returned by Algorithm 2 is smaller than or equal to the search space of the Integer Linear Program (ILP) obtained using the algorithm proposed in Sarkhel et al. [18], which in turn is smaller than the size of the search space associated with the ground Markov network. 5 Experiments We used a parallelized ILP solver called Gurobi [9] to solve the ILPs generated by our algorithm as well as by the other competing algorithms used in our experimental study. We compared the performance of our new lifted algorithm (which we call IPP) with four other algorithms from the literature: Alchemy (ALY) [11], Tuffy (TUFFY) [14], ground inference based on ILP (ILP), and the lifted MAP (LMAP) algorithm of Sarkhel et al. [18]. Alchemy and Tuffy are two state-of-the-art open-source systems for learning and inference in MLNs. Both of them first ground the MLN and then use an approximate solver, MaxWalkSAT [19], to compute the MAP solution. Unlike Alchemy, Tuffy uses clever database tricks to speed up computation. ILP is obtained by converting the MAP problem over the ground Markov network to an ILP. LMAP also converts the MAP problem to an ILP; however, its ILP encoding can be much more compact than the ones used by ground inference methods because it processes "non-shared atoms" in a lifted manner (see [18] for details).
We used the following three MLNs to evaluate our algorithm: (i) an MLN which we call Student that consists of the following four formulas: Teaches(teacher, course) ∧ Takes(student, course) → JobOffers(student, company); Teaches(teacher, course); Takes(student, course); ¬JobOffers(student, company); (ii) an MLN which we call Relationship that consists of the following four formulas: Loves(person1, person2) ∧ Friends(person2, person3) → Hates(person1, person3); Loves(person1, person2); Friends(person1, person2); ¬Hates(person1, person2); and (iii) the Citation Information-Extraction (IE) MLN [11] from the Alchemy web page, consisting of five predicates and fourteen formulas. To compare performance and scalability, we ran each algorithm on the aforementioned MLNs for varying time-bounds and recorded the solution quality (i.e., the total weight of false clauses) achieved by each. All our experiments were run on a third-generation i7 quad-core machine having 8GB RAM. For the Student MLNs, results are shown in Fig. 1(a)-(c). On the MLN having 161K clauses, ILP, LMAP and IPP converge quickly to the optimal answer, while TUFFY converges faster than ALY. For the MLN with 812K clauses, LMAP and IPP converge faster than ILP and TUFFY. ALY is unable to handle this large Markov network and runs out of memory. For the MLN with 8.1B clauses, only LMAP and IPP are able to produce a solution, with IPP converging much faster than LMAP. On this large MLN, all three ground inference algorithms, ILP, ALY and TUFFY, ran out of memory. Results for the Relationship MLNs are shown in Fig. 1(d)-(f) and are similar to the Student MLNs. On the MLNs with 9.2K and 29.7K clauses, ILP, LMAP and IPP converge faster than TUFFY and ALY, while TUFFY converges faster than ALY. On the largest MLN, having 1M clauses, only LMAP, ILP and IPP are able to produce a solution, with IPP converging much faster than the other two.
For the IE MLN, results are shown in Fig. 1(g)-(i), which show a similar picture, with IPP outperforming the other algorithms as we increase the number of objects in the domain. In fact, on the largest IE MLN, having 15.6B clauses, only IPP is able to output a solution, while the other approaches ran out of memory. In summary, as expected, IPP and LMAP, the two lifted approaches, are more accurate and scalable than the three propositional inference approaches: ILP, TUFFY and ALY. IPP not only scales much better but also converges much faster than LMAP, clearly demonstrating the power of our new approach. [Figure 1 appears here; panels: (a) Student(1.2K, 161K, 200); (b) Student(2.7K, 812K, 450); (c) Student(270K, 8.1B, 45K); (d) Relation(1.2K, 9.2K, 200); (e) Relation(2.7K, 29.7K, 450); (f) Relation(30K, 1M, 5K); (g) IE(3.2K, 1M, 100); (h) IE(82.8K, 731.6M, 900); (i) IE(380K, 15.6B, 2.5K).] Figure 1: Cost vs. Time: cost of unsatisfied clauses (smaller is better) vs. time for different domain sizes. Notation used to label each figure: MLN(numvariables, numclauses, numevidences).
Note: the three quantities reported are for the ground Markov network associated with the MLN. Standard deviation is plotted as error bars. 6 Conclusion In this paper we presented a general approach for lifted MAP inference in Markov logic networks (MLNs). The main idea in our approach is to encode the MAP problem as an Integer Polynomial Program (IPP) by schematically applying three lifted inference steps to the MLN: lifted decomposition, lifted conditioning and partial grounding. To solve the IPP, we propose to convert it to an Integer Linear Program (ILP) using the classic method outlined in [25]. The virtue of our approach is that the resulting ILP can be much smaller than the one obtained from the ground Markov network. Moreover, our approach subsumes the recently proposed lifted MAP inference approach of Sarkhel et al. [18] and is at least as powerful as probabilistic theorem proving [6]. Perhaps the key advantage of our approach is that it runs lifted inference as a pre-processing step, reducing the size of the theory, and then applies advanced propositional inference algorithms to this theory without any modifications. Thus, we do not have to explicitly lift (and efficiently implement) decades worth of research and advances on propositional inference algorithms, treating them as a black box. Acknowledgments This work was supported in part by the AFRL under contract number FA8750-14-C-0021, by the ARO MURI grant W911NF-08-1-0242, and by the DARPA Probabilistic Programming for Advanced Machine Learning Program under AFRL prime contract number FA8750-14-C-0005. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of DARPA, AFRL, ARO or the US government. References [1] U. Apsel and R. I. Brafman. Exploiting uniform assignments in first-order MPE. In AAAI, pages 74–83, 2012. [2] H. Bui, T. Huynh, and S. Riedel.
Automorphism groups of graphical models and lifted variational inference. In UAI, 2013. [3] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007. [4] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael, CA, 2009. [5] L. Getoor and B. Taskar, editors. Introduction to Statistical Relational Learning. MIT Press, 2007. [6] V. Gogate and P. Domingos. Probabilistic Theorem Proving. In UAI, pages 256–265. AUAI Press, 2011. [7] V. Gogate, A. Jha, and D. Venugopal. Advances in Lifted Importance Sampling. In AAAI, 2012. [8] F. Hadiji and K. Kersting. Reduce and re-lift: Bootstrapped lifted likelihood maximization for MAP. In AAAI, pages 394–400, Seattle, WA, 2013. AAAI Press. [9] Gurobi Optimization Inc. Gurobi Optimizer Reference Manual, 2014. [10] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted Inference from the Other Side: The Tractable Features. In NIPS, pages 973–981, 2010. [11] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy System for Statistical Relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2008. http://alchemy.cs.washington.edu. [12] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted Probabilistic Inference with Counting Formulas. In AAAI, pages 1062–1068, 2008. [13] M. Mladenov, A. Globerson, and K. Kersting. Efficient Lifting of MAP LP Relaxations Using k-Locality. In AISTATS, 2014. [14] F. Niu, C. Ré, A. Doan, and J. Shavlik. Tuffy: Scaling up statistical inference in Markov logic networks using an RDBMS. Proceedings of the VLDB Endowment, 4(6):373–384, 2011. [15] J. Noessner, M. Niepert, and H. Stuckenschmidt. RockIt: Exploiting parallelism and symmetry for MAP inference in statistical relational models.
In AAAI, Seattle, WA, 2013. [16] K. Pipatsrisawat and A. Darwiche. Clone: Solving Weighted Max-SAT in a Reduced Search Space. In AAAI, pages 223–233, 2007. [17] D. Poole. First-Order Probabilistic Inference. In IJCAI, pages 985–991, Acapulco, Mexico, 2003. Morgan Kaufmann. [18] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. Lifted MAP inference for Markov Logic Networks. In AISTATS, 2014. [19] B. Selman, H. Kautz, and B. Cohen. Local Search Strategies for Satisfiability Testing. In Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, pages 521–532. American Mathematical Society, 1996. [20] J. W. Shavlik and S. Natarajan. Speeding up inference in Markov logic networks by preprocessing to reduce the size of the resulting grounded network. In IJCAI, pages 1951–1956, 2009. [21] P. Singla and P. Domingos. Lifted First-Order Belief Propagation. In AAAI, pages 1094–1099, Chicago, IL, 2008. AAAI Press. [22] G. Van den Broeck, A. Choi, and A. Darwiche. Lifted relax, compensate and then recover: From approximate to exact lifted probabilistic inference. In UAI, pages 131–141, 2012. [23] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted Probabilistic Inference by First-Order Knowledge Compilation. In IJCAI, pages 2178–2185, 2011. [24] D. Venugopal and V. Gogate. On Lifting the Gibbs Sampling Algorithm. In NIPS, pages 1655–1663, 2012. [25] L. J. Watters. Reduction of Integer Polynomial Programming Problems to Zero-One Linear Programming Problems. Operations Research, 15(6):1171–1174, 1967.
2014
Hardness of parameter estimation in graphical models Guy Bresler1 David Gamarnik2 Devavrat Shah1 Laboratory for Information and Decision Systems Department of EECS1 and Sloan School of Management2 Massachusetts Institute of Technology {gbresler,gamarnik,devavrat}@mit.edu Abstract We consider the problem of learning the canonical parameters specifying an undirected graphical model (Markov random field) from the mean parameters. For graphical models representing a minimal exponential family, the canonical parameters are uniquely determined by the mean parameters, so the problem is feasible in principle. The goal of this paper is to investigate the computational feasibility of this statistical task. Our main result shows that parameter estimation is in general intractable: no algorithm can learn the canonical parameters of a generic pair-wise binary graphical model from the mean parameters in time bounded by a polynomial in the number of variables (unless RP = NP). Indeed, such a result has been believed to be true (see [1]) but no proof was known. Our proof gives a polynomial time reduction from approximating the partition function of the hard-core model, known to be hard, to learning approximate parameters. Our reduction entails showing that the marginal polytope boundary has an inherent repulsive property, which validates an optimization procedure over the polytope that does not use any knowledge of its structure (as required by the ellipsoid method and others). 1 Introduction Graphical models are a powerful framework for succinct representation of complex high-dimensional distributions. As such, they are at the core of machine learning and artificial intelligence, and are used in a variety of applied fields including finance, signal processing, communications, biology, as well as the modeling of social and other complex networks. In this paper we focus on binary pairwise undirected graphical models, a rich class of models with wide applicability.
This is a parametric family of probability distributions, and for the models we consider, the canonical parameters θ are uniquely determined by the vector µ of mean parameters, which consist of the node-wise and pairwise marginals. Two primary statistical tasks pertaining to graphical models are inference and parameter estimation. A basic inference problem is the computation of marginals (or conditional probabilities) given the model, that is, the forward mapping θ ↦ µ. Conversely, the backward mapping µ ↦ θ corresponds to learning the canonical parameters from the mean parameters. The backward mapping is defined only for µ in the marginal polytope M of realizable mean parameters, and this is important in what follows. The backward mapping captures maximum likelihood estimation of parameters; the study of the statistical properties of maximum likelihood estimation for exponential families is a classical and important subject. In this paper we are interested in the computational tractability of these statistical tasks. A basic question is whether or not these maps can be computed efficiently (namely in time polynomial in the problem size). As far as inference goes, it is well known that approximating the forward map (inference) is computationally hard in general. This was shown by Luby and Vigoda [2] for the hard-core model, a simple pairwise binary graphical model (defined in (2.1)). More recently, remarkably sharp results have been obtained, showing that computing the forward map for the hard-core model is tractable if and only if the system exhibits the correlation decay property [3, 4]. In contrast, to the best of our knowledge, no analogous hardness result exists for the backward mapping (parameter estimation), despite its seeming intractability [1]. Tangentially related hardness results have been previously obtained for the problem of learning the graph structure underlying an undirected graphical model. Bogdanov et al.
[5] showed hardness of determining graph structure when there are hidden nodes, and Karger and Srebro [6] showed hardness of finding the maximum likelihood graph with a given treewidth. Computing the backward mapping, in comparison, requires estimation of the parameters when the graph is known. Our main result, stated precisely in the next section, establishes hardness of approximating the backward mapping for the hard-core model. Thus, despite the problem being statistically feasible, it is computationally intractable. The proof is by reduction, showing that the backward map can be used as a black box to efficiently estimate the partition function of the hard-core model. The reduction, described in Section 4, uses the variational characterization of the log-partition function as a constrained convex optimization over the marginal polytope of realizable mean parameters. The gradient of the function to be minimized is given by the backward mapping, and we use a projected gradient optimization method. Since approximating the partition function of the hard-core model is known to be computationally hard, the reduction implies hardness of approximating the backward map. The main technical difficulty in carrying out the argument arises because the convex optimization is constrained to the marginal polytope, an intrinsically complicated object. Indeed, even determining membership (or evaluating the projection) to within a crude approximation of the polytope is NP-hard [7]. Nevertheless, we show that it is possible to do the optimization without using any knowledge of the polytope structure, as is normally required by ellipsoid, barrier, or projection methods. To this end, we prove that the polytope boundary has an inherent repulsive property that keeps the iterates inside the polytope without actually enforcing the constraint. The consequence of the boundary repulsion property is stated in Proposition 4.6 of Section 4, which is proved in Section 5. 
Our reduction has a close connection to the variational approach to approximate inference [1]. There, the conjugate-dual representation of the log-partition function leads to a relaxed optimization problem defined over a tractable bound for the marginal polytope and with a simple surrogate to the entropy function. What our proof shows is that accurate approximation of the gradient of the entropy obviates the need to relax the marginal polytope. We mention a related work of Kearns and Roughgarden [8] showing a polynomial-time reduction from inference to determining membership in the marginal polytope. Note that such a reduction does not establish hardness of parameter estimation: the empirical marginals obtained from samples are guaranteed to be in the marginal polytope, so an efficient algorithm could hypothetically exist for parameter estimation without contradicting the hardness of marginal polytope membership. After completion of our manuscript, we learned that Montanari [9] has independently and simultaneously obtained similar results showing hardness of parameter estimation in graphical models from the mean parameters. His high-level approach is similar to ours, but the details differ substantially.

2 Main result

In order to establish hardness of learning parameters from marginals for pairwise binary graphical models, we focus on a specific instance of this class of graphical models, the hard-core model. Given a graph G = (V, E) (where V = {1, . . . , p}), the collection of independent set vectors I(G) ⊆ {0, 1}^V consists of vectors σ such that σ_i = 0 or σ_j = 0 (or both) for every edge {i, j} ∈ E. Each vector σ ∈ I(G) is the indicator vector of an independent set. The hard-core model assigns nonzero probability only to independent set vectors, with

P_θ(σ) = exp( Σ_{i∈V} θ_i σ_i − Φ(θ) )   for each σ ∈ I(G).   (2.1)

This is an exponential family with vector of sufficient statistics φ(σ) = (σ_i)_{i∈V} ∈ {0, 1}^p and vector of canonical parameters θ = (θ_i)_{i∈V} ∈ R^p.
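For intuition, the model (2.1) can be simulated exactly on a tiny graph by enumerating I(G) directly. The sketch below is purely illustrative (brute force, exponential in p) and is not part of the reduction:

```python
from itertools import product
from math import exp, log

def independent_sets(p, edges):
    """Yield the indicator vectors in I(G) for G = ([p], edges)."""
    for sigma in product((0, 1), repeat=p):
        if all(not (sigma[i] and sigma[j]) for i, j in edges):
            yield sigma

def forward_map(theta, edges):
    """Forward mapping theta -> mu for model (2.1) by direct summation:
    returns the node-wise marginals mu and the log-partition function."""
    p = len(theta)
    Z, mu = 0.0, [0.0] * p
    for sigma in independent_sets(p, edges):
        w = exp(sum(t * s for t, s in zip(theta, sigma)))
        Z += w
        mu = [m + w * s for m, s in zip(mu, sigma)]
    return [m / Z for m in mu], log(Z)

# Path graph on 3 nodes: I(G) = {000, 100, 010, 001, 101}, so at theta = 0
# the partition function is 5 and mu = (2/5, 1/5, 2/5).
mu, log_Z = forward_map([0.0, 0.0, 0.0], [(0, 1), (1, 2)])
```

The hardness results discussed below say precisely that no efficient analogue of `forward_map` (or of its inverse, the backward mapping) exists for general graphs.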
In the statistical physics literature the model is usually parameterized in terms of node-wise fugacity (or activity) λ_i = e^{θ_i}. The log-partition function

Φ(θ) = log( Σ_{σ∈I(G)} exp( Σ_{i∈V} θ_i σ_i ) )

serves to normalize the distribution; note that Φ(θ) is finite for all θ ∈ R^p. Here and throughout, all logarithms are to the natural base. The set M of realizable mean parameters plays a major role in the paper, and is defined as

M = {μ ∈ R^p : there exists a θ such that E_θ[φ(σ)] = μ}.

For the hard-core model (2.1), the set M is a polytope equal to the convex hull of the independent set vectors I(G) and is called the marginal polytope. The marginal polytope's structure can be rather complex, and one indication of this is that the number of half-space inequalities needed to represent M can be very large, depending on the structure of the graph G underlying the model [10, 11]. The model (2.1) is a regular minimal exponential family, so for each μ in the interior M° of the marginal polytope there corresponds a unique θ(μ) satisfying the dual matching condition E_θ[φ(σ)] = μ. We are concerned with approximation of the backward mapping μ ↦ θ, and we use the following notion of approximation.

Definition 2.1. We say that ŷ ∈ R is a δ-approximation to y ∈ R if y(1 − δ) ≤ ŷ ≤ y(1 + δ). A vector v̂ ∈ R^p is a δ-approximation to v ∈ R^p if each entry v̂_i is a δ-approximation to v_i.

We next define the appropriate notion of efficient approximation algorithm.

Definition 2.2. A fully polynomial randomized approximation scheme (FPRAS) for a mapping f_p : X_p → R is a randomized algorithm that for each δ > 0 and input x ∈ X_p, with probability at least 3/4 outputs a δ-approximation f̂_p(x) to f_p(x), and moreover the running time is bounded by a polynomial Q(p, δ^{−1}).

Our result uses the complexity classes RP and NP, defined precisely in any complexity text (such as [12]).
The class RP consists of problems solvable by efficient (randomized polynomial-time) algorithms, and NP consists of many seemingly difficult problems with no known efficient algorithms. It is widely believed that NP ≠ RP. Assuming this, our result says that there cannot be an efficient approximation algorithm for the backward mapping in the hard-core model (and thus also for the more general class of binary pairwise graphical models). We recall that approximating the backward mapping entails taking a vector μ as input and producing an approximation of the corresponding vector of canonical parameters θ as output. It should be noted that even determining whether a given vector μ belongs to the marginal polytope M is known to be an NP-hard problem [7]. However, our result shows that the problem is NP-hard even if the input vector μ is known a priori to be an element of the marginal polytope M.

Theorem 2.3. Assuming NP ≠ RP, there does not exist an FPRAS for the backward mapping μ ↦ θ.

As discussed in the introduction, Theorem 2.3 is proved by showing that the backward mapping can be used as a black box to efficiently estimate the partition function of the hard-core model, known to be hard. This uses the variational characterization of the log-partition function as well as a projected gradient optimization method. Proving validity of the projected gradient method requires overcoming a substantial technical challenge: we show that the iterates remain within the marginal polytope without explicitly enforcing this (in particular, we do not project onto the polytope). The bulk of the paper is devoted to establishing this fact, which may be of independent interest. In the next section we give necessary background on conjugate duality and the variational characterization, as well as review the result we will use on hardness of computing the log-partition function. The proof of Theorem 2.3 is then given in Section 4.
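The reduction below will call such an FPRAS many times and needs every call to succeed, so the 3/4 success probability in Definition 2.2 must be boosted; this is the standard median amplification trick invoked in the proof of Theorem 2.3. A minimal sketch, with a toy estimator standing in for the assumed black box:

```python
import random
from statistics import median

def amplified(estimator, k):
    """If each call returns a good estimate with probability >= 3/4, the
    median of k independent calls is bad only if more than half the calls
    fail, which by a Chernoff bound happens with probability exp(-Omega(k))."""
    return lambda: median(estimator() for _ in range(k))

# Toy stand-in for an FPRAS (an assumption for illustration): it returns
# the correct value 1.0 with probability 3/4 and a wild outlier otherwise.
rng = random.Random(0)
noisy = lambda: 1.0 if rng.random() < 0.75 else 100.0
boosted = amplified(noisy, 25)
```

Repeating O(log N) inner calls per required success keeps a union bound over N successive calls small, which is exactly how the proof of Theorem 2.3 uses the trick.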
3 Background

3.1 Exponential families and conjugate duality

We now provide background on exponential families (as can be found in the monograph by Wainwright and Jordan [1]) specialized to the hard-core model (2.1) on a fixed graph G = (V, E). General theory on conjugate duality justifying the statements of this subsection can be found in Rockafellar's book [13]. The basic relationship between the canonical and mean parameters is expressed via conjugate (or Fenchel) duality. The conjugate dual of the log-partition function Φ(θ) is

Φ*(μ) := sup_{θ∈R^p} { ⟨μ, θ⟩ − Φ(θ) }.

Note that for our model Φ(θ) is finite for all θ ∈ R^p, and furthermore the supremum is uniquely attained. On the interior M° of the marginal polytope, −Φ* is the entropy function. The log-partition function can then be expressed as

Φ(θ) = sup_{μ∈M} { ⟨θ, μ⟩ − Φ*(μ) },   (3.1)

with

μ(θ) = arg max_{μ∈M} { ⟨θ, μ⟩ − Φ*(μ) }.   (3.2)

The forward mapping θ ↦ μ is specified by the variational characterization (3.2) or alternatively by the gradient map ∇Φ : R^p → M. As mentioned earlier, for each μ in the interior M° there is a unique θ(μ) satisfying the dual matching condition E_{θ(μ)}[φ(σ)] = (∇Φ)(θ(μ)) = μ. For mean parameters μ ∈ M°, the backward mapping μ ↦ θ(μ) to the canonical parameters is given by

θ(μ) = arg max_{θ∈R^p} { ⟨μ, θ⟩ − Φ(θ) }   or by the gradient   ∇Φ*(μ) = θ(μ).

The latter representation will be the more useful one for us.

3.2 Hardness of inference

We describe an existing result on the hardness of inference and state the corollary we will use. The result says that, subject to widely believed conjectures in computational complexity, no efficient algorithm exists for approximating the partition function of certain hard-core models. Recall that the hard-core model with fugacity λ is given by (2.1) with θ_i = ln λ for each i ∈ V.

Theorem 3.1 ([3, 4]). Suppose d ≥ 3 and λ > λ_c(d) = (d−1)^{d−1}/(d−2)^d.
Assuming NP ≠ RP, there exists no FPRAS for computing the partition function of the hard-core model with fugacity λ on regular graphs of degree d. In particular, no FPRAS exists when λ = 1 and d ≥ 5.

We remark that the source of hardness is the long-range dependence property of the hard-core model for λ > λ_c(d). It was shown in [14] that for λ < λ_c(d) the model exhibits decay of correlations and there is an FPRAS for the log-partition function (in fact there is a deterministic approximation scheme as well). We note that a number of hardness results are known for the hard-core and Ising models, including [15, 16, 3, 2, 4, 17, 18, 19]. The result stated in Theorem 3.1 suffices for our purposes. From this section we will need only the following corollary, proved in the Appendix. The proof, standard in the literature, uses the self-reducibility of the hard-core model to express the partition function in terms of marginals computed on subgraphs.

Corollary 3.2. Consider the hard-core model (2.1) on graphs of degree at most d with parameters θ_i = 0 for all i ∈ V. Assuming NP ≠ RP, there exists no FPRAS μ̂(0) for the vector of marginal probabilities μ(0), where error is measured entry-wise as per Definition 2.1.

4 Reduction by optimizing over the marginal polytope

In this section we describe our reduction and prove Theorem 2.3. We define polynomial constants

ε = p^{−8},  q = p^5,  and  s = (ε/2p)^2,   (4.1)

which we will leave as ε, q, and s to clarify the calculations. Also, given the asymptotic nature of the results, we assume that p is larger than a universal constant so that certain inequalities are satisfied.

Proposition 4.1. Fix a graph G on p nodes. Let θ̂ : M° → R^p be a black box giving a γ-approximation for the backward mapping μ ↦ θ for the hard-core model (2.1). Using 1/(εγ^2) calls to θ̂, and computation bounded by a polynomial in p, 1/γ, it is possible to produce a 4γp^{7/2}/(qε^2)-approximation μ̂(0) to the marginals μ(0) corresponding to all-zero parameters.
We first observe that Theorem 2.3 follows almost immediately.

Proof of Theorem 2.3. A standard median amplification trick (see e.g. [20]) allows us to decrease the probability 1/4 of erroneous output of an FPRAS to below (1/4)εγ^2 using O(log(1/(εγ^2))) function calls. Thus the assumed FPRAS for the backward mapping can be made to give a γ-approximation θ̂ to θ on 1/(εγ^2) successive calls, with probability of no erroneous outputs at least 3/4. By taking γ = γ̃qε^2 p^{−7/2}/2 in Proposition 4.1 we get a γ̃-approximation to μ(0) with computation bounded by a polynomial in p, 1/γ̃. In other words, the existence of an FPRAS for the mapping μ ↦ θ gives an FPRAS for the marginals μ(0), and by Corollary 3.2 this is not possible if NP ≠ RP.

We now work towards proving Proposition 4.1, the goal being to estimate the vector of marginals μ(0) for some fixed graph G. The desired marginals are given by the solution to the optimization (3.2) with θ = 0:

μ(0) = arg min_{μ∈M} Φ*(μ).   (4.2)

We know from Section 3 that for x ∈ M° the gradient is ∇Φ*(x) = θ(x); that is, the backward mapping amounts to a first-order (gradient) oracle. A natural approach to solving the optimization problem (4.2) is to use a projected gradient method. For reasons that will become clear later, instead of projecting onto the marginal polytope M, we project onto the shrunken marginal polytope M1 ⊂ M defined as

M1 = {μ ∈ M ∩ [qε, ∞)^p : μ + ε·e_i ∈ M for all i},   (4.3)

where e_i is the i-th standard basis vector. As mentioned before, projecting onto M1 is NP-hard, and this must therefore be avoided if we are to obtain a polynomial-time reduction. Nevertheless, we temporarily assume that it is possible to do the projection and address this difficulty later. With this in mind, we propose to solve the optimization (4.2) by a projected gradient method with fixed step size s,

x_{t+1} = P_{M1}(x_t − s∇Φ*(x_t)) = P_{M1}(x_t − sθ(x_t)).   (4.4)

In order for the method (4.4) to succeed, a first requirement is that the optimum is inside M1.
The following lemma is proved in the Appendix.

Lemma 4.2. Consider the hard-core model (2.1) on a graph G with maximum degree d on p ≥ 2d + 1 nodes and canonical parameters θ = 0. Then the corresponding vector of mean parameters μ(0) is in M1.

One of the benefits of operating within M1 is that the gradient is bounded by a polynomial in p, and this will allow the optimization procedure to converge in a polynomial number of steps. The following lemma amounts to a rephrasing of Lemmas 5.3 and 5.4 in Section 5, and the proof is omitted.

Lemma 4.3. We have the gradient bound ‖∇Φ*(x)‖_∞ = ‖θ(x)‖_∞ ≤ p/ε = p^9 for any x ∈ M1.

Next, we state general conditions under which an approximate projected gradient algorithm converges quickly. Better convergence rates are possible using the strong convexity of Φ* (shown in Lemma 4.5 below), but this lemma suffices for our purposes. The proof is standard (see [21] or Theorem 3.1 in [22] for a similar statement) and is given in the Appendix for completeness.

Lemma 4.4 (Projected gradient method). Let G : C → R be a convex function defined over a compact convex set C with minimizer x* ∈ arg min_{x∈C} G(x). Suppose we have access to an approximate gradient oracle ∇̂G(x) for x ∈ C with error bounded as sup_{x∈C} ‖∇̂G(x) − ∇G(x)‖ ≤ δ/2. Let L = sup_{x∈C} ‖∇̂G(x)‖. Consider the projected gradient method x_{t+1} = P_C(x_t − s∇̂G(x_t)) starting at x_1 ∈ C and with fixed step size s = δ/(2L^2). After T = 4‖x_1 − x*‖^2 L^2/δ^2 iterations the average x̄_T = (1/T) Σ_{t=1}^T x_t satisfies G(x̄_T) − G(x*) ≤ δ.

To translate accuracy in approximating the function value Φ*(x*) into accuracy in approximating x*, we use the fact that Φ* is strongly convex. The proof (in the Appendix) uses the equivalence between strong convexity of Φ* and strong smoothness of the Fenchel dual Φ, the latter being easy to check. Since we only require the implication of the lemma, we defer the definitions of strong convexity and strong smoothness to the appendix where they are used.

Lemma 4.5. The function Φ* : M° →
R is p^{−3/2}-strongly convex. As a consequence, if Φ*(x) − Φ*(x*) ≤ δ for x ∈ M° and x* = arg min_{y∈M°} Φ*(y), then ‖x − x*‖ ≤ 2p^{3/2}δ.

At this point all the ingredients are in place to show that the updates (4.4) rapidly approach μ(0), but a crucial difficulty remains to be overcome. The assumed black box θ̂ for approximating the mapping μ ↦ θ is only defined for μ inside M, and thus it is not at all obvious how to evaluate the projection onto the closely related polytope M1. Indeed, as shown in [7], even approximate projection onto M is NP-hard, and no polynomial-time reduction can require projecting onto M1 (assuming P ≠ NP). The goal of the subsequent Section 5 is to prove Proposition 4.6 below, which states that the optimization procedure can be carried out without any knowledge about M or M1. Specifically, we show that thresholding coordinates suffices; that is, instead of projecting onto M1 we may project onto the translated non-negative orthant [qε, ∞)^p. Writing P_≥ for this projection, we show that the original projected gradient method (4.4) has iterates x_t identical to those of the much simpler update rule

x_{t+1} = P_≥(x_t − sθ(x_t)).   (4.5)

Proposition 4.6. Choose constants as per (4.1). Suppose x_1 ∈ M1, and consider the iterates x_{t+1} = P_≥(x_t − sθ̂(x_t)) for t ≥ 1, where θ̂(x_t) is a γ-approximation of θ(x_t) for all t ≥ 1. Then x_t ∈ M1 for all t ≥ 1, and thus the iterates are the same using either P_≥ or P_{M1}.

The next section is devoted to the proof of Proposition 4.6. We now complete the reduction.

Proof of Proposition 4.1. We start the gradient update procedure x_{t+1} = P_≥(x_t − sθ̂(x_t)) at the point x_1 = (1/2p, 1/2p, . . . , 1/2p), which we claim is within M1 for any graph G for p = |V| large enough. To see this, note that (1/p, 1/p, . . . , 1/p) is in M, because it is a convex combination (with weight 1/p each) of the independent set vectors e_1, . . . , e_p. Hence x_1 + (1/2p)·e_i ∈ M, and additionally x_{1,i} = 1/2p ≥ qε, for all i.
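To make the update rule (4.5) concrete, here is a toy run on the smallest nontrivial example, the hard-core model on a single edge, where the backward mapping happens to have a closed form. The step size, floor, and graph here are illustrative choices, not the constants (4.1):

```python
from math import log

def backward_map_single_edge(mu):
    """Exact backward mapping for the hard-core model on one edge {0, 1}:
    the independent sets are {}, {0}, {1}, so
    mu_i = exp(theta_i) / (1 + exp(theta_0) + exp(theta_1)),
    which inverts in closed form."""
    rest = 1.0 - mu[0] - mu[1]   # probability of the empty independent set
    return [log(mu[0] / rest), log(mu[1] / rest)]

def thresholded_gradient_descent(theta_hat, x, step, floor, iters):
    """Update (4.5): x <- P_>=(x - s * theta_hat(x)), where P_>= clips
    each coordinate to [floor, infinity); no polytope projection is used."""
    for _ in range(iters):
        g = theta_hat(x)
        x = [max(xi - step * gi, floor) for xi, gi in zip(x, g)]
    return x

x = thresholded_gradient_descent(backward_map_single_edge,
                                 [0.25, 0.25], step=0.05, floor=1e-3, iters=2000)
# x approaches mu(0) = (1/3, 1/3), the minimizer of Phi* at theta = 0
```

The point of Proposition 4.6 is that this coordinate-wise clipping, which any implementation can evaluate trivially, coincides with the (NP-hard) projection onto M1 along the actual trajectory of the iterates.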
We establish that x_t ∈ M1 for each t ≥ 1 by induction, having verified the base case t = 1 in the preceding paragraph. Let x_t ∈ M1 for some t ≥ 1. At iteration t of the update rule we make a call to the black box θ̂(x_t), giving a γ-approximation to the backward mapping θ(x_t), compute x_t − sθ̂(x_t), and then project onto [qε, ∞)^p. Proposition 4.6 ensures that x_{t+1} ∈ M1. Therefore, the update x_{t+1} = P_≥(x_t − sθ̂(x_t)) is the same as x_{t+1} = P_{M1}(x_t − sθ̂(x_t)). We can now apply Lemma 4.4 with G = Φ*, C = M1, δ = 2γp^2/ε and L = sup_{x∈C} ‖∇̂G(x)‖ ≤ √(p(p/ε)^2) = p^{3/2}/ε. After T = 4‖x_1 − x*‖^2 L^2/δ^2 ≤ 4p(p^3/ε^2)/(4γ^2 p^4/ε^2) = 1/γ^2 iterations the average x̄_T = (1/T) Σ_{t=1}^T x_t satisfies G(x̄_T) − G(x*) ≤ δ. Lemma 4.5 implies that ‖x̄_T − x*‖ ≤ 2δp^{3/2}, and since x*_i ≥ qε, we get the entry-wise bound |x̄_{T,i} − x*_i| ≤ 2δp^{3/2}·x*_i/(qε) for each i ∈ V. Hence x̄_T is a 4γp^{7/2}/(qε^2)-approximation for x*.

5 Proof of Proposition 4.6

In Subsection 5.1 we prove estimates on the parameters θ corresponding to μ close to the boundary of M1, and then in Subsection 5.2 we use these estimates to show that the boundary of M1 has a certain repulsive property that keeps the iterates inside.

5.1 Bounds on gradient

We start by introducing some helpful notation. For a node i, let N(i) = {j ∈ [p] : (i, j) ∈ E} denote its neighbors. We partition the collection of independent set vectors as I = S_i ∪ S_i^− ∪ S_i^α, where

S_i = {σ ∈ I : σ_i = 1} = {independent sets containing i},
S_i^− = {σ − e_i : σ ∈ S_i} = {independent sets to which i can be added},
S_i^α = {σ ∈ I : σ_j = 1 for some j ∈ N(i)} = {independent sets conflicting with i}.

For a collection of independent set vectors S ⊆ I we write P(S) as shorthand for P_θ(σ ∈ S), and

f(S) = P(S)·e^{Φ(θ)} = Σ_{σ∈S} exp( Σ_{j∈V} θ_j σ_j ).

We can then write the marginal at node i as μ_i = P(S_i), and since S_i, S_i^−, S_i^α partition I, the space of all independent sets of G, 1 = P(S_i) + P(S_i^−) + P(S_i^α). For each i let ν_i = P(S_i^α) = P(a neighbor of i is in σ).
The following lemma specifies a condition on μ_i and ν_i that implies a lower bound on θ_i.

Lemma 5.1. If μ_i + ν_i ≥ 1 − δ and ν_i ≤ 1 − ζδ for ζ > 1, then θ_i ≥ ln(ζ − 1).

Proof. Let α = e^{θ_i}, and observe that f(S_i) = α f(S_i^−). We want to show that α ≥ ζ − 1. The first condition μ_i + ν_i ≥ 1 − δ implies that

f(S_i) + f(S_i^α) ≥ (1 − δ)(f(S_i) + f(S_i^α) + f(S_i^−)) = (1 − δ)(f(S_i) + f(S_i^α) + α^{−1} f(S_i)),

and rearranging gives

f(S_i^α) + f(S_i) ≥ ((1 − δ)/δ) α^{−1} f(S_i).   (5.1)

The second condition ν_i ≤ 1 − ζδ reads f(S_i^α) ≤ (1 − ζδ)(f(S_i) + f(S_i^α) + f(S_i^−)), or

f(S_i^α) ≤ ((1 − ζδ)/(ζδ)) f(S_i)(1 + α^{−1}).   (5.2)

Combining (5.1) and (5.2) and simplifying results in α ≥ ζ − 1.

We now use the preceding lemma to show that if a coordinate is close to the boundary of the shrunken marginal polytope M1, then the corresponding parameter is large.

Lemma 5.2. Let r be a positive real number. If μ ∈ M1 and μ + rε·e_i ∉ M, then θ_i ≥ ln(q/r − 1).

Proof. We would like to apply Lemma 5.1 with ζ = q/r and δ = rε, which requires showing that (a) ν_i ≤ 1 − qε and (b) μ_i + ν_i ≥ 1 − rε. To show (a), note that if μ ∈ M1, then μ_i ≥ qε by definition of M1. It follows that ν_i ≤ 1 − μ_i ≤ 1 − qε. We now show (b). Since μ_i = P(S_i), ν_i = P(S_i^α), and 1 = P(S_i) + P(S_i^α) + P(S_i^−), (b) is equivalent to P(S_i^−) ≤ rε. We assume that μ + rε·e_i ∉ M and suppose for the sake of contradiction that P(S_i^−) > rε. Writing η_σ = P(σ) for σ ∈ I, so that μ = Σ_{σ∈I} η_σ·σ, we define a new probability measure

η′_σ = η_σ + η_{σ−e_i}  if σ ∈ S_i;   η′_σ = 0  if σ ∈ S_i^−;   η′_σ = η_σ  otherwise.

One can check that μ′ = Σ_{σ∈I} η′_σ σ has μ′_j = μ_j for each j ≠ i and μ′_i = μ_i + P(S_i^−) > μ_i + rε. The point μ′, being a convex combination of independent set vectors, must be in M, and hence so must μ + rε·e_i. But this contradicts the hypothesis and completes the proof of the lemma.

The proofs of the next two lemmas are similar in spirit to Lemma 8 in [23] and are proved in the Appendix. The first lemma gives an upper bound on the parameters (θ_i)_{i∈V} corresponding to an arbitrary point in M1.
Lemma 5.3. If μ + ε·e_i ∈ M, then θ_i ≤ p/ε. Hence if μ ∈ M1, then θ_i ≤ p/ε for all i.

The next lemma shows that if a component μ_i is not too small, the corresponding parameter θ_i is also not too negative. As before, this allows us to bound from below the parameters corresponding to an arbitrary point in M1.

Lemma 5.4. If μ_i ≥ qε, then θ_i ≥ −p/(qε). Hence if μ ∈ M1, then θ_i ≥ −p/(qε) for all i.

5.2 Finishing the proof of Proposition 4.6

We sketch the remainder of the proof here; full detail is given in Section D of the Supplement. Starting with an arbitrary x_t in M1, our goal is to show that x_{t+1} = P_≥(x_t − sθ̂(x_t)) remains in M1. The proof will then follow by induction, because our initial point x_1 is in M1 by hypothesis. The argument considers separately each hyperplane constraint for M of the form ⟨h, x⟩ ≤ 1. The distance of x from the hyperplane is 1 − ⟨h, x⟩. Now, the definition of M1 implies that if x ∈ M1, then x + ε·e_i ∈ M for all coordinates i, and thus 1 − ⟨h, x⟩ ≥ ε‖h‖_∞ for all constraints. We call a constraint ⟨h, x⟩ ≤ 1 critical if 1 − ⟨h, x⟩ < ε‖h‖_∞, and active if ε‖h‖_∞ ≤ 1 − ⟨h, x⟩ < 2ε‖h‖_∞. For x_t ∈ M1 there are no critical constraints, but there may be active constraints. We first show that inactive constraints can at worst become active for the next iterate x_{t+1}, which requires only that the step size is not too large relative to the magnitude of the gradient (Lemma 4.3 gives the desired bound). Then we show (using the gradient estimates from Lemmas 5.2, 5.3, and 5.4) that the active constraints have a repulsive property and that x_{t+1} is no closer than x_t to any active constraint, that is, ⟨h, x_{t+1}⟩ ≤ ⟨h, x_t⟩. The argument requires care, because the projection P_≥ may prevent a coordinate i from decreasing despite x_{t,i} − sθ̂_i(x_t) being very negative when x_{t,i} is already small. These arguments together show that x_{t+1} remains in M1, completing the proof.

6 Discussion

This paper addresses the computational tractability of parameter estimation for the hard-core model.
Our main result shows hardness of approximating the backward mapping μ ↦ θ to within a small polynomial factor. This is a fairly stringent form of approximation, and it would be interesting to strengthen the result to show hardness even for a weaker form of approximation. A possible goal would be to show that there exists a universal constant c > 0 such that approximation of the backward mapping to within a factor 1 + c in each coordinate is NP-hard.

Acknowledgments

GB thanks Sahand Negahban for helpful discussions. We also thank Andrea Montanari for sharing his unpublished manuscript [9]. This work was supported in part by NSF grants CMMI-1335155 and CNS-1161964, and by Army Research Office MURI Award W911NF-11-1-0036.

References

[1] M. Wainwright and M. Jordan, “Graphical models, exponential families, and variational inference,” Foundations and Trends in Machine Learning, vol. 1, no. 1-2, pp. 1–305, 2008.
[2] M. Luby and E. Vigoda, “Fast convergence of the Glauber dynamics for sampling independent sets,” Random Structures and Algorithms, vol. 15, no. 3-4, pp. 229–241, 1999.
[3] A. Sly and N. Sun, “The computational hardness of counting in two-spin models on d-regular graphs,” in FOCS, pp. 361–369, IEEE, 2012.
[4] A. Galanis, D. Stefankovic, and E. Vigoda, “Inapproximability of the partition function for the antiferromagnetic Ising and hard-core models,” arXiv preprint arXiv:1203.2226, 2012.
[5] A. Bogdanov, E. Mossel, and S. Vadhan, “The complexity of distinguishing Markov random fields,” Approximation, Randomization and Combinatorial Optimization, pp. 331–342, 2008.
[6] D. Karger and N. Srebro, “Learning Markov networks: Maximum bounded tree-width graphs,” in Symposium on Discrete Algorithms (SODA), pp. 392–401, 2001.
[7] D. Shah, D. N. Tse, and J. N. Tsitsiklis, “Hardness of low delay network scheduling,” IEEE Transactions on Information Theory, vol. 57, no. 12, pp. 7810–7817, 2011.
[8] T. Roughgarden and M.
Kearns, “Marginals-to-models reducibility,” in Advances in Neural Information Processing Systems, pp. 1043–1051, 2013.
[9] A. Montanari, “Computational implications of reducing data to sufficient statistics.” Unpublished manuscript, 2014.
[10] M. Deza and M. Laurent, Geometry of Cuts and Metrics. Springer, 1997.
[11] G. M. Ziegler, “Lectures on 0/1-polytopes,” in Polytopes—Combinatorics and Computation, pp. 1–41, Springer, 2000.
[12] C. H. Papadimitriou, Computational Complexity. John Wiley and Sons Ltd., 2003.
[13] R. T. Rockafellar, Convex Analysis, vol. 28. Princeton University Press, 1997.
[14] D. Weitz, “Counting independent sets up to the tree threshold,” in Proceedings of the Thirty-Eighth Annual ACM Symposium on Theory of Computing, pp. 140–149, ACM, 2006.
[15] M. Dyer, A. Frieze, and M. Jerrum, “On counting independent sets in sparse graphs,” SIAM Journal on Computing, vol. 31, no. 5, pp. 1527–1541, 2002.
[16] A. Sly, “Computational transition at the uniqueness threshold,” in FOCS, pp. 287–296, 2010.
[17] F. Jaeger, D. Vertigan, and D. Welsh, “On the computational complexity of the Jones and Tutte polynomials,” Math. Proc. Cambridge Philos. Soc., vol. 108, no. 1, pp. 35–53, 1990.
[18] M. Jerrum and A. Sinclair, “Polynomial-time approximation algorithms for the Ising model,” SIAM Journal on Computing, vol. 22, no. 5, pp. 1087–1116, 1993.
[19] S. Istrail, “Statistical mechanics, three-dimensionality and NP-completeness: I. Universality of intractability for the partition function of the Ising model across non-planar surfaces,” in STOC, pp. 87–96, ACM, 2000.
[20] M. R. Jerrum, L. G. Valiant, and V. V. Vazirani, “Random generation of combinatorial structures from a uniform distribution,” Theoretical Computer Science, vol. 43, pp. 169–188, 1986.
[21] Y. Nesterov, Introductory Lectures on Convex Optimization: A Basic Course, vol. 87. Springer, 2004.
[22] S. Bubeck, “Theory of convex optimization for machine learning.” Available at http://www.princeton.edu/~sbubeck/pub.html.
[23] L. Jiang, D. Shah, J. Shin, and J. Walrand, “Distributed random access algorithm: scheduling and congestion control,” IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6182–6207, 2010.
[24] D. P. Bertsekas, Nonlinear Programming. Athena Scientific, 1999.
[25] S. M. Kakade, S. Shalev-Shwartz, and A. Tewari, “Regularization techniques for learning with matrices,” Journal of Machine Learning Research, vol. 13, pp. 1865–1890, June 2012.
[26] J. M. Borwein and J. D. Vanderwerff, Convex Functions: Constructions, Characterizations and Counterexamples. No. 109, Cambridge University Press, 2010.
On the Information Theoretic Limits of Learning Ising Models

Karthikeyan Shanmugam1∗, Rashish Tandon2†, Alexandros G. Dimakis1‡, Pradeep Ravikumar2⋆ 1Department of Electrical and Computer Engineering, 2Department of Computer Science, The University of Texas at Austin, USA ∗karthiksh@utexas.edu, †rashish@cs.utexas.edu, ‡dimakis@austin.utexas.edu, ⋆pradeepr@cs.utexas.edu

Abstract

We provide a general framework for computing lower bounds on the sample complexity of recovering the underlying graphs of Ising models, given i.i.d. samples. While there have been recent results for specific graph classes, these involve fairly extensive technical arguments that are specialized to each specific graph class. In contrast, we isolate two key graph-structural ingredients that can then be used to specify sample complexity lower bounds. Presence of these structural properties makes the graph class hard to learn. We derive corollaries of our main result that not only recover existing recent results, but also provide lower bounds for novel graph classes not considered previously. We also extend our framework to the random graph setting and derive corollaries for Erdős-Rényi graphs in a certain dense setting.

1 Introduction

Graphical models provide compact representations of multivariate distributions using graphs that represent Markov conditional independencies in the distribution. They are thus widely used in a number of machine learning domains where there are a large number of random variables, including natural language processing [13], image processing [6, 10, 19], statistical physics [11], and spatial statistics [15]. In many of these domains, a key problem of interest is to recover the underlying dependencies, represented by the graph, given samples, i.e., to estimate the graph of dependencies given instances drawn from the distribution.
A common regime where this graph selection problem is of interest is the high-dimensional setting, where the number of samples n is potentially smaller than the number of variables p. Given the importance of this problem, it is instructive to have lower bounds on the sample complexity of any estimator: they clarify the statistical difficulty of the underlying problem, and moreover they can serve as a certificate of optimality in terms of sample complexity for any estimator that actually achieves the lower bound. We are particularly interested in such lower bounds under the structural constraint that the graph lies within a given class of graphs (such as degree-bounded graphs, bounded-girth graphs, and so on). The simplest approach to obtaining such bounds involves graph counting arguments and an application of Fano's lemma; [2, 17], for instance, derive such bounds for the cases of degree-bounded and power-law graph classes, respectively. This approach however is purely graph-theoretic, and thus fails to capture the interaction of the graphical model parameters with the graph structural constraints, and thus typically provides suboptimal lower bounds (as also observed in [16]). The other standard approach requires a more complicated argument through Fano's lemma that requires finding a subset of graphs such that (a) the subset is large enough in number, and (b) the graphs in the subset are close enough in a suitable metric, typically the KL-divergence of the corresponding distributions. This approach is however much more technically intensive, and even for the simple classes of bounded-degree and bounded-edge graphs for Ising models, [16] required fairly extensive arguments in using the above approach to provide lower bounds. In modern high-dimensional settings, it is becoming increasingly important to incorporate structural constraints in statistical estimation, and graph classes are a key interpretable structural constraint.
But a new graph class would entail an entirely new (and technically intensive) derivation of the corresponding sample complexity lower bounds. In this paper, we are thus interested in isolating the key ingredients required in computing such lower bounds. These key ingredients involve one of the following structural characterizations: (1) connectivity by short paths between pairs of nodes, or (2) existence of many graphs that differ only by an edge. As corollaries of this framework, we not only recover the results in [16] for the simple cases of degree-bounded and edge-bounded graphs, but extend them to several more classes of graphs for which achievability results have already been proposed [1]. Moreover, using structural arguments allows us to bring out the dependence of the edge weights, λ, on the sample complexity. We are able to show the same sample complexity requirements for d-regular graphs as for degree-d-bounded graphs, whilst the former class is much smaller. We also extend our framework to the random graph setting, and as a corollary, establish lower bound requirements for the class of Erdős-Rényi graphs in a dense setting. Here, we show that under a certain scaling of the edge weights λ, G_{p,c/p} requires exponentially many samples, as opposed to the polynomial requirement suggested by earlier bounds [1].

2 Preliminaries and Definitions

Notation: R represents the real line. [p] denotes the set of integers from 1 to p. Let 1_S denote the vector of ones and zeros where S is the set of coordinates containing 1. Let A − B denote A ∩ B^c and A∆B denote the symmetric difference of two sets A and B. In this work, we consider the problem of learning the graph structure of an Ising model. Ising models are a class of graphical model distributions over binary vectors, characterized by the pair (G(V, E), θ̄), where G(V, E) is an undirected graph on p vertices and θ̄ ∈ R^{\binom{p}{2}} with θ̄_{i,j} = 0 ∀(i, j) ∉ E and θ̄_{i,j} ≠ 0 ∀(i, j) ∈ E. Let X = {+1, −1}.
Then, for the pair (G, θ̄), the distribution on X^p is given as

f_{G,θ̄}(x) = (1/Z) exp( Σ_{i,j} θ̄_{i,j} x_i x_j ),

where x ∈ X^p and Z is the normalization factor, also known as the partition function. Thus, we obtain a family of distributions by considering a set of edge-weighted graphs G_θ, where each element of G_θ is a pair (G, θ̄). In other words, every member of the class G_θ is a weighted undirected graph. Let G denote the set of distinct unweighted graphs in the class G_θ. A learning algorithm that learns the graph G (and not the weights θ̄) from n independent samples (each sample is a p-dimensional binary vector) drawn from the distribution f_{G,θ̄}(·) is an efficiently computable map φ : X^{np} → G which maps the input samples {x^1, . . . , x^n} to an undirected graph Ĝ ∈ G, i.e., Ĝ = φ(x^1, . . . , x^n). We now discuss two metrics of reliability for such an estimator φ. For a given (G, θ̄), the probability of error (over the samples drawn) is given by p(G, θ̄) = Pr(Ĝ ≠ G). Given a graph class G_θ, one may consider the maximum probability of error for the map φ, given as:

p_max = max_{(G,θ̄)∈G_θ} Pr(Ĝ ≠ G).   (1)

The goal of any estimator φ would be to achieve as low a p_max as possible. Alternatively, there are random graph classes that come naturally endowed with a probability measure μ(G, θ) of choosing the graphical model. In this case, the quantity we would want to minimize would be the average probability of error of the map φ, given as:

p_avg = E_μ[ Pr(Ĝ ≠ G) ].   (2)

In this work, we are interested in answering the following question: for any estimator φ, what is the minimum number of samples n needed to guarantee an asymptotically small p_max or p_avg? The answer depends on G_θ and μ (when applicable). For the sake of simplicity, we impose the following restrictions: we restrict to the set of zero-field ferromagnetic Ising models, where zero-field refers to a lack of node weights, and ferromagnetic refers to all positive edge weights.
Further, we will restrict all the non-zero edge weights θ̄_{i,j} in the graph classes to be the same, set equal to λ > 0. Therefore, for a given G(V, E), we have θ̄ = λ1_E for some λ > 0. A deterministic graph class is described by a scalar λ > 0 and the family of graphs G. In the case of a random graph class, we describe it by a scalar λ > 0 and a probability measure µ on the structure of the graph alone (i.e. on G). Since we have the same weight λ (> 0) on all edges, henceforth we will skip the reference to it: the graph class will simply be denoted G and, for a given G ∈ G, the distribution will be denoted by f_G(·), with the dependence on λ being implicit. Before proceeding further, we summarize the following additional notation. For any two distributions f_G and f_{G′}, corresponding to the graphs G and G′ respectively, we denote the Kullback–Leibler divergence (KL-divergence) between them as

D(f_G ∥ f_{G′}) = Σ_{x∈X^p} f_G(x) log( f_G(x) / f_{G′}(x) ).

For any subset T ⊆ G, we let C_T(ϵ) denote an ϵ-covering w.r.t. the KL-divergence (of the corresponding distributions), i.e. C_T(ϵ) (⊆ G) is a set of graphs such that for any G ∈ T, there exists a G′ ∈ C_T(ϵ) satisfying D(f_G ∥ f_{G′}) ≤ ϵ. We denote the entropy of any r.v. X by H(X), and the mutual information between any two r.v.s X and Y by I(X; Y). The rest of the paper is organized as follows. Section 3 describes Fano's lemma, a basic tool employed in computing information-theoretic lower bounds. Section 4 identifies key structural properties that lead to large sample requirements. Section 5 applies the results of Sections 3 and 4 to a number of different deterministic graph classes to obtain lower bound estimates. Section 6 obtains lower bound estimates for Erdős–Rényi random graphs in a dense regime. All proofs can be found in the Appendix (see supplementary material).

3 Fano's Lemma and Variants

Fano's lemma [5] is a primary tool for obtaining bounds on the average probability of error, p_avg.
It provides a lower bound on the probability of error of any estimator φ in terms of the entropy H(·) of the output space, the cardinality of the output space, and the mutual information I(·, ·) between the input and the output. The case of p_max is interesting only when we have a deterministic graph class G, and can be handled through Fano's lemma again by considering a uniform distribution on the graph class.

Lemma 1 (Fano's Lemma). Consider a graph class G with measure µ. Let G ∼ µ, and let X^n = {x_1, ..., x_n} be n independent samples such that x_i ∼ f_G, i ∈ [n]. Then, for p_max and p_avg as defined in (1) and (2) respectively,

p_max ≥ p_avg ≥ ( H(G) − I(G; X^n) − log 2 ) / log|G|.   (3)

Thus in order to use this lemma, we need to bound two quantities: the entropy H(G) and the mutual information I(G; X^n). The entropy can typically be obtained or bounded very simply; for instance, with a uniform distribution over the set of graphs G, H(G) = log|G|. The mutual information is a much trickier object to bound, however, and is where the technical complexity largely arises. We can nonetheless obtain the following loose bound: I(G; X^n) ≤ H(X^n) ≤ np. We thus arrive at the following corollary:

Corollary 1. Consider a graph class G. Then, p_max ≥ 1 − (np + log 2)/log|G|.

Remark 1. From Corollary 1, we get: if n ≤ (log|G|/p) ( (1 − δ) − log 2/log|G| ), then p_max ≥ δ. Note that this bound on n is only in terms of the cardinality of the graph class G, and therefore does not involve any dependence on λ (and is consequently very loose). To obtain sharper lower bound guarantees that depend on graphical model parameters, it is useful to consider instead a conditional form of Fano's lemma [1, Lemma 9], which allows us to obtain lower bounds on p_avg in terms of conditional analogs of the quantities in Lemma 1. For the case of p_max, these conditional analogs correspond to uniform measures on subsets of the original class G.
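The quantities in Corollary 1 and Remark 1 are simple enough to evaluate directly. A small sketch (helper names are ours) showing that the two statements are consistent, i.e. that n at the Remark 1 threshold yields an error lower bound of exactly δ:

```python
import math

def fano_sample_threshold(log_card, p, delta):
    """Remark 1: if n is at most this value, any estimator has maximum
    error probability at least delta. log_card = log|G|; p = #variables."""
    return (log_card / p) * ((1 - delta) - math.log(2) / log_card)

def fano_error_lower_bound(n, log_card, p):
    """Corollary 1: p_max >= 1 - (n*p + log 2)/log|G| (clipped at 0)."""
    return max(0.0, 1.0 - (n * p + math.log(2)) / log_card)

# Example with log|G| = 1000 nats, p = 10 variables, target delta = 0.1:
t = fano_sample_threshold(1000.0, 10, 0.1)
err_at_t = fano_error_lower_bound(t, 1000.0, 10)
```

Plugging n = t back into Corollary 1 recovers p_max ≥ δ, confirming the algebra in Remark 1.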
¹Note that a lower bound for a restricted subset of a class of Ising models will also serve as a lower bound for the class without that restriction.

The conditional version allows us to focus on potentially harder-to-learn subsets of the graph class, leading to sharper lower bound guarantees. Also, for a random graph class, the entropy H(G) may be asymptotically much smaller than the log-cardinality of the graph class, log|G| (e.g. Erdős–Rényi random graphs; see Section 6), rendering the bound in Lemma 1 useless. The conditional version allows us to circumvent this issue by focusing on a high-probability subset of the graph class.

Lemma 2 (Conditional Fano's Lemma). Consider a graph class G with measure µ. Let G ∼ µ, and let X^n = {x_1, ..., x_n} be n independent samples such that x_i ∼ f_G, i ∈ [n]. Consider any T ⊆ G and let µ(T) be the measure of this subset, i.e. µ(T) = Pr_µ(G ∈ T). Then, we have

p_avg ≥ µ(T) · ( H(G | G ∈ T) − I(G; X^n | G ∈ T) − log 2 ) / log|T|

and

p_max ≥ ( H(G | G ∈ T) − I(G; X^n | G ∈ T) − log 2 ) / log|T|.

Given Lemma 2, or even Lemma 1, it is the sharpness of an upper bound on the mutual information that governs the sharpness of lower bounds on the probability of error (and, effectively, on the number of samples n). In contrast to the trivial upper bound used in the corollary above, we next use a tighter bound from [20], which relates the mutual information to coverings in terms of the KL-divergence, applied to Lemma 2. Note that, as stated earlier, we simply impose a uniform distribution on G when dealing with p_max. Analogous bounds can be obtained for p_avg.

Corollary 2. Consider a graph class G, and any T ⊆ G. Recall the definition of C_T(ϵ) from Section 2. For any ϵ > 0, we have

p_max ≥ 1 − ( log|C_T(ϵ)| + nϵ + log 2 ) / log|T|.

Remark 2. From Corollary 2, we get: if n ≤ (log|T|/ϵ) ( (1 − δ) − log 2/log|T| − log|C_T(ϵ)|/log|T| ), then p_max ≥ δ. Here ϵ is an upper bound on the radius of the KL-balls in the covering, and usually varies with λ.
But this corollary cannot be immediately used given a graph class: it requires us to specify a subset T of the overall graph class, the term ϵ, and the KL-covering C_T(ϵ). We can simplify the bound above by setting ϵ to be the radius of a single KL-ball w.r.t. some center, covering the whole set T. Suppose this radius is ρ; then the size of the covering set is just 1. In this case, from Remark 2, we get: if n ≤ (log|T|/ρ) ( (1 − δ) − log 2/log|T| ), then p_max ≥ δ. Thus, our goal in the sequel is to provide a general mechanism for deriving such a subset T: one that is large in cardinality and yet has small diameter with respect to the KL-divergence. We note that Fano's lemma and the variants described in this section are standard, and have been applied to a number of problems in statistical estimation [1, 14, 16, 20, 21].

4 Structural Conditions Governing Correlation

As discussed in the previous section, we want to find subsets T that are large in size and yet have a small KL-diameter. In this section, we summarize certain structural properties that result in a small KL-diameter. Thereafter, finding a large set T amounts to finding a large number of graphs in the graph class G that satisfy these structural properties. As a first step, we need to get a sense of when two graphs would have corresponding distributions with a small KL-divergence. To do so, we need a general upper bound on the KL-divergence between the corresponding distributions. A simple strategy is to bound it by the symmetric divergence [16]. In this case, a little calculation shows:

D(f_G ∥ f_{G′}) ≤ D(f_G ∥ f_{G′}) + D(f_{G′} ∥ f_G)
 = Σ_{(s,t)∈E\E′} λ ( E_G[x_s x_t] − E_{G′}[x_s x_t] ) + Σ_{(s,t)∈E′\E} λ ( E_{G′}[x_s x_t] − E_G[x_s x_t] ),   (4)

where E and E′ are the edge sets of the graphs G and G′ respectively, and E_G[·] denotes the expectation under f_G. Also note that the correlation between x_s and x_t satisfies E_G[x_s x_t] = 2 P_G(x_s x_t = +1) − 1. From Eq.
(4), we observe that the only pairs, (s, t), contributing to the KL-divergence are the ones that lie in the symmetric difference, E∆E′. If the number of such pairs is small, and the difference of correlations in G and G′ (i.e. EG [xsxt]−EG′ [xsxt]) for such pairs is small, then the KL-divergence would be small. To summarize the setting so far, to obtain a tight lower bound on sample complexity for a class of graphs, we need to find a subset of graphs T with small KL diameter. The key to this is to identify when KL divergence between (distributions corresponding to) two graphs would be small. And the key to this in turn is to identify when there would be only a small difference in the correlations between a pair of variables across the two graphs G and G′. In the subsequent subsections, we provide two simple and general structural characterizations that achieve such a small difference of correlations across G and G′. 4.1 Structural Characterization with Large Correlation One scenario when there might be a small difference in correlations is when one of the correlations is very large, specifically arbitrarily close to 1, say EG′[xsxt] ≥1 −ϵ, for some ϵ > 0. Then, EG[xsxt] −EG′[xsxt] ≤ϵ, since EG[xsxt] ≤1. Indeed, when s, t are part of a clique[16], this is achieved since the large number of connections between them force a higher probability of agreement i.e. PG(xsxt = +1) is large. In this work we provide a more general characterization of when this might happen by relying on the following key lemma that connects the presence of “many” node disjoint “short” paths between a pair of nodes in the graph to high correlation between them. We define the property formally below. Definition 1. Two nodes a and b in an undirected graph G are said to be (ℓ, d) connected if they have d node disjoint paths of length at most ℓ. Lemma 3. Consider a graph G and a scalar λ > 0. Consider the distribution fG(x) induced by the graph. 
If a pair of nodes a and b are (ℓ, d) connected, then

E_G[x_a x_b] ≥ 1 − 2 / ( 1 + ( (1 + tanh(λ)^ℓ) / (1 − tanh(λ)^ℓ) )^d ).

From the above lemma, we observe that as ℓ gets smaller and d gets larger, E_G[x_a x_b] approaches its maximum value of 1. As an example, in a k-clique, any two vertices s and t are (2, k−1) connected. In this case, the bound from Lemma 3 gives us: E_G[x_s x_t] ≥ 1 − 2/(1 + cosh(2λ)^{k−1}). Of course, a clique enjoys a lot more connectivity (it is also (3, ⌊(k−1)/2⌋) connected etc., albeit with node overlaps), which allows for a stronger bound of ∼ 1 − λk e^λ / e^{λk} (see [16]).² Now, as discussed earlier, a high correlation between a pair of nodes contributes a small term to the KL-divergence. This is stated in the following corollary.

Corollary 3. Consider two graphs G(V, E) and G′(V, E′) and a scalar weight λ > 0 such that E − E′ and E′ − E only contain pairs of nodes that are (ℓ, d) connected in graphs G′ and G respectively. Then the KL-divergence between f_G and f_{G′} satisfies

D(f_G ∥ f_{G′}) ≤ 2λ|E∆E′| / ( 1 + ( (1 + tanh(λ)^ℓ) / (1 − tanh(λ)^ℓ) )^d ).

4.2 Structural Characterization with Low Correlation

Another scenario where there might be a small difference in correlations between an edge pair across two graphs is when the graphs themselves are close in Hamming distance, i.e. they differ by only a few edges. This is formalized below for the situation when they differ by only one edge.

Definition 2 (Hamming Distance). Consider two graphs G(V, E) and G′(V, E′). The Hamming distance between the graphs, denoted by H(G, G′), is the number of edges where the two graphs differ, i.e.

H(G, G′) = |{(s, t) | (s, t) ∈ E∆E′}|.   (5)

Lemma 4. Consider two graphs G(V, E) and G′(V, E′) such that H(G, G′) = 1, and (a, b) ∈ E is the single edge in E∆E′. Then E_{f_G}[x_a x_b] − E_{f_{G′}}[x_a x_b] ≤ tanh(λ). Also, the KL-divergence between the distributions satisfies D(f_G ∥ f_{G′}) ≤ λ tanh(λ).

²Both the bound from [16] and the bound from Lemma 3 have exponential asymptotic behaviour (i.e. as k grows) for constant λ.
For smaller λ, the bound from [16] is strictly better. However, not all graph classes allow for the presence of a large enough clique, e.g., girth-bounded graphs, path-restricted graphs, and Erdős–Rényi graphs.

The above bound is useful in low-λ settings: in this regime, λ tanh(λ) roughly behaves as λ², so a smaller λ corresponds to a smaller KL-divergence.

4.3 Influence of Structure on Sample Complexity

Now, we provide some high-level intuition for why the structural characterizations above are useful for lower bounds, beyond the technical reasons underlying Fano's lemma that we have specified so far. Let us assume that λ > 0 is a positive real constant. In a graph, even when the edge (s, t) is removed, (s, t) being (ℓ, d) connected ensures that the correlation between s and t remains very high (exponentially close to 1). Therefore, resolving the question of the presence or absence of the edge (s, t) is difficult, requiring many samples. This is analogous in principle to the argument in [16] used for establishing hardness of learning for a set of graphs, each of which is obtained by removing a single edge from a clique, still ensuring many short paths between any two vertices. Similarly, if the graphs G and G′ are close in Hamming distance, then their corresponding distributions f_G and f_{G′} also tend to be similar. Again, it becomes difficult to tease apart which of the two distributions the observed samples may have originated from.

5 Application to Deterministic Graph Classes

In this section, we provide lower bound estimates for a number of deterministic graph families. This is done by explicitly finding a subset T of the graph class G, based on the structural properties of the previous section. See the supplementary material for details of these constructions.
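The bounds of Section 4 that drive these constructions can be sanity-checked by brute force on toy models. The sketch below (function names are ours) verifies the symmetric-divergence identity of Eq. (4) on a 4-cycle with one edge removed, and checks the Lemma 3 correlation bound; on the 4-cycle, nodes 0 and 2 are (2, 2) connected and the graph consists of exactly two parallel 2-paths, so the bound is attained with equality:

```python
import itertools, math

def ising(p, edges, lam):
    """Brute-force zero-field Ising distribution with uniform weight lam."""
    xs = list(itertools.product([+1, -1], repeat=p))
    w = [math.exp(lam * sum(x[i] * x[j] for i, j in edges)) for x in xs]
    Z = sum(w)
    return {x: wi / Z for x, wi in zip(xs, w)}

def kl(f, g):
    return sum(px * math.log(px / g[x]) for x, px in f.items())

def corr(f, s, t):
    return sum(px * x[s] * x[t] for x, px in f.items())

def lemma3_bound(lam, ell, d):
    """Lemma 3: lower bound on E_G[x_a x_b] for (ell, d)-connected a, b."""
    u = math.tanh(lam) ** ell
    return 1 - 2 / (1 + ((1 + u) / (1 - u)) ** d)

lam = 0.7
cycle = [(0, 1), (1, 2), (2, 3), (0, 3)]
fG = ising(4, cycle, lam)
fG2 = ising(4, cycle[:-1], lam)   # remove edge (0, 3): E delta E' = {(0, 3)}

# Eq. (4): symmetric divergence = lam * (difference of correlations on (0, 3))
sym = lam * (corr(fG, 0, 3) - corr(fG2, 0, 3))
forward = kl(fG, fG2)

# Lemma 3 for nodes 0 and 2 of the cycle, (ell, d) = (2, 2):
bound = lemma3_bound(lam, 2, 2)
actual = corr(fG, 0, 2)
```

The identity (1 + tanh²λ)/(1 − tanh²λ) = cosh(2λ) also recovers the k-clique form of the bound quoted after Lemma 3.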
A common underlying theme to all is the following: we try to find a graph in G containing many edge pairs (u, v) such that their end vertices u and v have many (possibly node-disjoint) paths between them. Once we have such a graph, we construct a subset T by removing one of the edges for each of these well-connected edge pairs. This ensures that the new graphs differ from the original only in the well-connected pairs. Alternatively, by removing any edge (and not just well-connected pairs), we can get another, larger family T which is 1-Hamming away from the original graph.

5.1 Path Restricted Graphs

Let G_{p,η} be the class of all graphs on p vertices with at most η paths (η = o(p)) between any two vertices. We have the following theorem:

Theorem 1. For the class G_{p,η}, if

n ≤ (1 − δ) max{ log(p/2) / (λ tanh λ),  (1 + cosh(2λ)^{η−1}) / (2λ) · log( p / (2(η+1)) ) },

then p_max ≥ δ.

To understand the scaling, it is useful to think of cosh(2λ) as roughly exponential in λ², i.e. cosh(2λ) ∼ e^{Θ(λ²)}.³ In this case, from the second term, we need n ∼ Ω( (e^{λ²η}/λ) log(p/η) ) samples. If η is scaling with p, this can be prohibitively large (exponential in λ²η). Thus, to have low sample complexity, we must enforce λ = O(1/√η). In this case, the first term gives n = Ω(η log p), since λ tanh(λ) ∼ λ² for small λ. We may also consider a generalization of G_{p,η}. Let G_{p,η,γ} be the set of all graphs on p vertices such that there are at most η paths of length at most γ between any two nodes (with η + γ = o(p)). Note that there may be more paths of length > γ.

Theorem 2. Consider the graph class G_{p,η,γ}. For any ν ∈ (0, 1), let t_ν = (p^{1−ν} − (η+1)) / γ. If

n ≤ (1 − δ) max{ log(p/2) / (λ tanh λ),  [ 1 + cosh(2λ)^{η−1} ( (1 + tanh(λ)^{γ+1}) / (1 − tanh(λ)^{γ+1}) )^{t_ν} ] / (2λ) · ν log(p) },

then p_max ≥ δ.

The parameter ν ∈ (0, 1) in the bound above may be adjusted based on the scaling of η and γ. Also, an approximate way to think of the scaling of (1 + tanh(λ)^{γ+1})/(1 − tanh(λ)^{γ+1}) is ∼ e^{λ^{γ+1}}. As an example, for constant η and γ, we may choose ν = 1/2.
In this case, for some constant c, our bound imposes n ∼ Ω( max{ (log p) / (λ tanh λ),  (e^{cλ^{γ+1}} √p / λ) log p } ). Now, same as earlier, to have low sample complexity, we must have λ = O(1/p^{1/(2(γ+1))}), in which case we get an n ∼ Ω(p^{1/(γ+1)} log p) sample requirement from the first term.

³In fact, for λ ≤ 3, we have e^{λ²/2} ≤ cosh(2λ) ≤ e^{2λ²}; for λ > 3, cosh(2λ) > 200.

We note that the family G_{p,η,γ} is also studied in [1], for which an algorithm is proposed. Under certain assumptions in [1], and the restrictions that η = O(1) and γ is large enough, the algorithm in [1] requires (log p)/λ² samples, which is matched by the first term in our lower bound. Therefore, the algorithm in [1] is optimal for the setting considered.

5.2 Girth Bounded Graphs

The girth of a graph is defined as the length of its shortest cycle. Let G_{p,g,d} be the set of all graphs with girth at least g and maximum degree d. Note that as the girth increases, the learning problem becomes easier, with the extreme case of g = ∞ (i.e. trees) being solved by the well-known Chow–Liu algorithm [3] with O(log p) samples. We have the following theorem:

Theorem 3. Consider the graph class G_{p,g,d}. For any ν ∈ (0, 1), let d_ν = min{ d, p^{1−ν}/g }. If

n ≤ (1 − δ) max{ log(p/2) / (λ tanh λ),  [ 1 + ( (1 + tanh(λ)^{g−1}) / (1 − tanh(λ)^{g−1}) )^{d_ν} ] / (2λ) · ν log(p) },

then p_max ≥ δ.

5.3 Approximate d-Regular Graphs

Let G^{approx}_{p,d} be the set of all graphs whose vertices have degree d or degree d − 1. Note that this class is a subset of the class of graphs with degree at most d. We have:

Theorem 4. Consider the class G^{approx}_{p,d}. If

n ≤ (1 − δ) max{ log(pd/4) / (λ tanh λ),  e^{λd} / (2λ d e^λ) · log(pd/4) },

then p_max ≥ δ.

Note that the second term in the bound above is from [16]. Now, restricting λ to prevent exponential growth in the number of samples, we get a sample requirement of n = Ω(d² log p). This matches the lower bound for degree-d-bounded graphs in [16]. However, note that Theorem 4 is stronger in the sense that the bound holds for a smaller class of graphs, i.e.
only approximately d-regular, and not d-bounded.

5.4 Approximate Edge Bounded Graphs

Let G^{approx}_{p,k} be the set of all graphs whose number of edges lies in [k/2, k]. This class is a subset of the class of graphs with at most k edges. Here, we have:

Theorem 5. Consider the class G^{approx}_{p,k}, and let k ≥ 9. If the number of samples satisfies

n ≤ (1 − δ) max{ log(k/2) / (λ tanh λ),  e^{λ(√(2k)−1)} / ( 2λ e^λ (√(2k)+1) ) · log(k/2) },

then p_max ≥ δ.

Note that the second term in the bound above is from [16]. If we restrict λ to prevent exponential growth in the number of samples, we get a sample requirement of n = Ω(k log k). Again, we match the lower bound for the edge-bounded class in [16], but through a smaller class.

6 Erdős–Rényi graphs G(p, c/p)

In this section, we relate the number of samples required to learn G ∼ G(p, c/p) in the dense case, for guaranteeing a constant average probability of error p_avg. We have the following main result, whose proof can be found in the Appendix.

Theorem 6. Let G ∼ G(p, c/p), c = Ω(p^{3/4 + ϵ′}), ϵ′ > 0. For this class of random graphs, if p_avg ≤ 1/90, then n ≥ max(n_1, n_2), where

n_1 = H(c/p) (3/80) (1 − 80 p_avg − O(1/p)) / [ (4λp/3) exp(−p/36) + 4 exp(−p^{3/2}/144) + 4λ / ( 9 (1 + cosh(2λ)^{c²/(6p)}) ) ],

n_2 = (p/4) H(c/p) (1 − 3 p_avg) − O(1/p).   (6)

Remark 3. In the denominator of the first expression, the dominating term is 4λ / ( 9 (1 + cosh(2λ)^{c²/(6p)}) ). Therefore, we have the following corollary.

Corollary 4. Let G ∼ G(p, c/p), c = Ω(p^{3/4+ϵ′}) for any ϵ′ > 0, and let p_avg ≤ 1/90. Then:

1. λ = Ω(√p/c): Ω( λ H(c/p) cosh(2λ)^{c²/(6p)} ) samples are needed.

2. λ < O(√p/c): Ω(c log p) samples are needed. (This bound is from [1].)

Remark 4. This means that when λ = Ω(√p/c), a huge number (exponential for constant λ) of samples is required. Hence, for any efficient algorithm, we require λ = O(√p/c), and in this regime O(c log p) samples are required to learn.

6.1 Proof Outline

The proof skeleton is based on Lemma 2.
The essence of the proof is to cover a set of graphs T, with large measure, by an exponentially small set, such that the KL-divergence between any covered graph and its covering graph is also very small. For this we use Corollary 3. The key steps in the proof are outlined below:

1. We identify a subclass of graphs T, as in Lemma 2, whose measure is close to 1, i.e. µ(T) = 1 − o(1). A natural candidate is the 'typical' set T^p_ϵ, defined to be the set of graphs each having between cp/2 − cpϵ/2 and cp/2 + cpϵ/2 edges.

2. (Path property) We show that most graphs in T have property R: there are O(p²) pairs of nodes such that every pair is well connected by O(c²/p) node-disjoint paths of length 2, with high probability. The measure µ(R | T) = 1 − δ_1.

3. (Covering with low diameter) Every graph G in R ∩ T is covered by a graph G′ from a covering set C_R(δ_2) such that their edge sets differ only in the O(p²) pairs of nodes that are well connected. Therefore, by Corollary 3, the KL-divergence between G and G′ is very small (δ_2 = O(λ p² cosh(λ)^{−c²/p})).

4. (Efficient covering in size) Further, the covering set C_R is exponentially smaller than T.

5. (Uncovered graphs have exponentially low measure) Then we show that the uncovered graphs can have large KL-divergence (O(p²λ)), but their measure µ(Rᶜ | T) is exponentially small.

6. Using a similar (but more involved) expression for the probability of error as in Corollary 2, we roughly need O( log|T| / (δ_1 + δ_2) ) samples.

The above technique is very general, and could potentially be applied to other random graph classes.

7 Summary

In this paper, we have explored new approaches for computing sample complexity lower bounds for Ising models. By explicitly bringing out the dependence on the weights of the model, we have shown that unless the weights are restricted, the model may be hard to learn. For example, it is hard to learn a graph which has many paths between many pairs of vertices, unless λ is controlled.
For the random graph setting G(p, c/p), while achievability is possible in the c = polylog p case [1], we have shown lower bounds for c > p^{0.75}. Closing this gap remains a problem for future consideration. The application of our approaches to other deterministic/random graph classes, such as the Chung–Lu model [4] (a generalization of Erdős–Rényi graphs) or small-world graphs [18], would also be interesting.

Acknowledgments

R.T. and P.R. acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894, IIS-1447574, and DMS-1264033. K.S. and A.D. acknowledge the support of NSF via CCF 1422549, 1344364, 1344179 and DARPA STTR and an ARO YIP award.

References

[1] Animashree Anandkumar, Vincent Y. F. Tan, Furong Huang, Alan S. Willsky, et al. High-dimensional structure estimation in Ising models: Local separation criterion. The Annals of Statistics, 40(3):1346–1375, 2012.

[2] Guy Bresler, Elchanan Mossel, and Allan Sly. Reconstruction of Markov random fields from samples: Some observations and algorithms. In Proceedings of APPROX '08 / RANDOM '08, pages 343–356. Springer-Verlag, 2008.

[3] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Trans. Inf. Theory, 14(3):462–467, 1968.

[4] Fan Chung and Linyuan Lu. Complex Graphs and Networks. American Mathematical Society, August 2006.

[5] Thomas M. Cover and Joy A. Thomas. Elements of Information Theory (Wiley Series in Telecommunications and Signal Processing). Wiley-Interscience, 2006.

[6] G. Cross and A. Jain. Markov random field texture models. IEEE Trans. PAMI, 5:25–39, 1983.

[7] Amir Dembo and Andrea Montanari. Ising models on locally tree-like graphs. The Annals of Applied Probability, 20(2):565–592, 2010.

[8] Abbas El Gamal and Young-Han Kim.
Network information theory. Cambridge University Press, 2011.

[9] Ashish Goel, Michael Kapralov, and Sanjeev Khanna. Perfect matchings in O(n log n) time in regular bipartite graphs. SIAM Journal on Computing, 42(3):1392–1404, 2013.

[10] M. Hassner and J. Sklansky. Markov random field models of digitized image texture. In ICPR '78, pages 538–540, 1978.

[11] E. Ising. Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik, 31:253–258, 1925.

[12] Stasys Jukna. Extremal Combinatorics, volume 2. Springer, 2001.

[13] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, 1999.

[14] Garvesh Raskutti, Martin J. Wainwright, and Bin Yu. Minimax rates of estimation for high-dimensional linear regression over ℓq-balls. IEEE Trans. Inf. Theory, 57(10):6976–6994, October 2011.

[15] B. D. Ripley. Spatial Statistics. Wiley, New York, 1981.

[16] Narayana P. Santhanam and Martin J. Wainwright. Information-theoretic limits of selecting binary graphical models in high dimensions. IEEE Transactions on Information Theory, 58(7):4117–4134, 2012.

[17] R. Tandon and P. Ravikumar. On the difficulty of learning power law graphical models. In IEEE International Symposium on Information Theory (ISIT), 2013.

[18] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440–442, June 1998.

[19] J. W. Woods. Markov image modeling. IEEE Transactions on Automatic Control, 23:846–850, October 1978.

[20] Yuhong Yang and Andrew Barron. Information-theoretic determination of minimax rates of convergence. Annals of Statistics, pages 1564–1599, 1999.

[21] Yuchen Zhang, John Duchi, Michael Jordan, and Martin J. Wainwright. Information-theoretic lower bounds for distributed statistical estimation with communication constraints. In Advances in Neural Information Processing Systems 26, pages 2328–2336. Curran Associates, Inc., 2013.
Projecting Markov Random Field Parameters for Fast Mixing Xianghang Liu NICTA, The University of New South Wales xianghang.liu@nicta.com.au Justin Domke NICTA, The Australian National University justin.domke@nicta.com.au Abstract Markov chain Monte Carlo (MCMC) algorithms are simple and extremely powerful techniques to sample from almost arbitrary distributions. The flaw in practice is that it can take a large and/or unknown amount of time to converge to the stationary distribution. This paper gives sufficient conditions to guarantee that univariate Gibbs sampling on Markov Random Fields (MRFs) will be fast mixing, in a precise sense. Further, an algorithm is given to project onto this set of fast-mixing parameters in the Euclidean norm. Following recent work, we give an example use of this to project in various divergence measures, comparing univariate marginals obtained by sampling after projection to common variational methods and Gibbs sampling on the original parameters. 1 Introduction Exact inference in Markov Random Fields (MRFs) is generally intractable, motivating approximate algorithms. There are two main classes of approximate inference algorithms: variational methods and Markov chain Monte Carlo (MCMC) algorithms [13]. Among variational methods, mean-field approximations [9] are based on a “tractable” family of distributions, such as the fully-factorized distributions. Inference finds a distribution in the tractable set to minimize the KL-divergence from the true distribution. Other methods, such as loopy belief propagation (LBP), generalized belief propagation [14] and expectation propagation [10] use a less restricted family of target distributions, but approximate the KL-divergence. Variational methods are typically fast, and often produce high-quality approximations. However, when the variational approximations are poor, estimates can be correspondingly worse. 
MCMC strategies, such as Gibbs sampling, simulate a Markov chain whose stationary distribution is the target distribution. Inference queries are then answered using the samples drawn from the Markov chain. In principle, MCMC will be arbitrarily accurate if run long enough. The principal difficulty is that the time for the Markov chain to converge to its stationary distribution, or the "mixing time", can be exponential in the number of variables. This paper is inspired by a recent hybrid approach for Ising models [3]. This approach minimizes the divergence from the true distribution to one in a tractable family. However, the tractable family is a "fast mixing" family, in which Gibbs sampling is guaranteed to quickly converge to the stationary distribution. They observe that an Ising model will be fast mixing if the spectral norm of a matrix containing the absolute values of all interaction strengths is controlled. An algorithm projects onto this fast-mixing parameter set in the Euclidean norm, and projected gradient descent (PGD) can then minimize various divergence measures. This often leads to inference results that are better than either simple variational methods or univariate Gibbs sampling (with a limited time budget). However, this approach is limited to Ising models, and scales poorly in the size of the model, due to the difficulty of projecting onto the spectral-norm constraint set. The principal contributions of this paper are, first, a set of sufficient conditions to guarantee that univariate Gibbs sampling on an MRF will be fast-mixing (Section 4), and, second, an algorithm to project onto this set in the Euclidean norm (Section 5). A secondary contribution of this paper is the consideration of an alternative matrix norm (the induced ∞-norm) that is somewhat looser than the spectral norm, but more computationally efficient.
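As a quick illustration of this trade-off (the snippet is ours, not from the paper): the induced ∞-norm of a matrix is just its maximum absolute row sum, which is cheap to compute, and for a symmetric matrix it always upper-bounds the spectral norm, so a fast-mixing certificate in the ∞-norm implies one in the spectral norm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(50, 50)))
A = (A + A.T) / 2                       # symmetric, entrywise nonnegative

spec = np.linalg.norm(A, 2)             # spectral norm (SVD-based, costly)
norm_inf = np.linalg.norm(A, np.inf)    # induced inf-norm: max abs row sum
```

For symmetric A, ‖A‖₂ ≤ ‖A‖∞ always holds (it is a special case of ‖A‖₂² ≤ ‖A‖₁‖A‖∞ with ‖A‖₁ = ‖A‖∞), so the cheap norm is the conservative one.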
Following previous work [3], these ideas are experimentally validated via a projected gradient descent algorithm to minimize other divergences, comparing the univariate marginals obtained by sampling after projection to common variational methods and to Gibbs sampling on the original parameters. The ability to project onto a fast-mixing parameter set may also be of independent interest. For example, it might be used during maximum likelihood learning to ensure that the gradients estimated through sampling are more accurate.

2 Notation

We consider discrete pairwise MRFs with n variables, where the i-th variable takes values in {1, ..., L_i}, E is the set of edges, and θ are the potentials on each edge. Each edge in E is an ordered pair (i, j) with i ≤ j. The parameters are a set of matrices θ := {θ_ij | θ_ij ∈ R^{L_i×L_j}, ∀(i, j) ∈ E}. When i > j and (j, i) ∈ E, we let θ_ij denote the transpose of θ_ji. The corresponding distribution is

p(x; θ) = exp( Σ_{(i,j)∈E} θ_ij(x_i, x_j) − A(θ) ),   (1)

where A(θ) := log Σ_x exp( Σ_{(i,j)∈E} θ_ij(x_i, x_j) ) is the log-partition function, and θ_ij(x_i, x_j) denotes the entry in the x_i-th row and x_j-th column of θ_ij. It is easy to show that any parametrization of a pairwise MRF can be converted into this form. "Self-edges" (i, i) can be included in E if one wishes to explicitly represent univariate terms. It is sometimes convenient to work with the exponential family representation

p(x; θ) = exp{ f(x) · θ − A(θ) },   (2)

where f(x) is the vector of sufficient statistics for configuration x. If these are indicator functions for all configurations of all pairs in E, then the two representations are equivalent.

3 Background Theory on Rapid Mixing

This section reviews background on mixing times that will be used later in the paper.

Definition 1. Given two finite distributions p and q, the total variation distance ∥·∥_TV is defined as

∥p(X) − q(X)∥_TV = (1/2) Σ_x |p(X = x) − q(X = x)|.

Next, one must define a measure of how fast a Markov chain converges to the stationary distribution. Let the state of the Markov chain after t iterations be X^t.
Given a constant ϵ, this is done by finding some number of iterations τ(ϵ) such that the induced distribution p(X^t | X^0 = x) will always have a distance of less than ϵ from the stationary distribution, irrespective of the starting state x.

Definition 2. Let {X^t} be the sequence of random variables corresponding to running Gibbs sampling on a distribution p. The mixing time τ(ϵ) is defined as τ(ϵ) = min{t : d(t) < ϵ}, where d(t) = max_x ∥P(X^t | X^0 = x) − p(X)∥_TV is the maximum distance at time t when considering all possible starting states x.

Now, we are interested in when Gibbs sampling on a distribution p can be shown to have a fast mixing time. The central property we use is the dependency of one variable on another, defined informally as how much the conditional distribution over X_i can change when all variables other than X_j are held fixed.

Definition 3. Given a distribution p, the dependency matrix R is defined by

R_ij = max_{x, x′ : x_{−j} = x′_{−j}} ∥p(X_i | x_{−i}) − p(X_i | x′_{−i})∥_TV.

Here, the constraint x_{−j} = x′_{−j} indicates that all variables in x and x′ are identical except x_j. The central result on rapid mixing is given by the following theorem, due to Dyer et al. [5], generalizing the work of Hayes [7]. Informally, it states that if ∥R∥ < 1 for some sub-multiplicative norm ∥·∥, then mixing will take on the order of n ln n iterations, where n is the number of variables.

Theorem 4. [5, Lemma 17] If ∥·∥ is any sub-multiplicative matrix norm and ∥R∥ < 1, the mixing time of univariate Gibbs sampling on a system with n variables with random updates is bounded by

τ(ϵ) ≤ ( n / (1 − ∥R∥) ) ln( ∥1_n∥ ∥1_n^T∥ / ϵ ).

Here, ∥1_n∥ denotes the same matrix norm applied to a matrix of ones of size n × 1, and similarly for ∥1_n^T∥. In particular, if ∥·∥ is induced by a vector p-norm, then ∥1_n∥∥1_n^T∥ = n. Since this result is true for a variety of norms, it is natural to ask, for a given matrix R, which norm will give the strongest result.
It can be shown that for symmetric matrices (such as the dependency matrix), the spectral norm ∥·∥_2 is always superior.

Theorem 5. [5, Lemma 13] If A is a symmetric matrix and ∥·∥ is any sub-multiplicative norm, then ∥A∥_2 ≤ ∥A∥.

Unfortunately, as will be discussed below, the spectral norm can be more computationally expensive than other norms. As such, we will also consider the use of the ∞-norm ∥·∥_∞. This leads to additional looseness in the bound in general, but the looseness is limited in some cases. In particular, if R = rG where G is the adjacency matrix of some regular graph with degree d, then for all induced p-norms ∥R∥ = rd, since ∥R∥ = max_{x ≠ 0} ∥Rx∥/∥x∥ = r max_{x ≠ 0} ∥Gx∥/∥x∥ = r∥Go∥/∥o∥ = rd, where o is a vector of ones. Thus, the extra looseness from using, say, ∥·∥_∞ instead of ∥·∥_2 will tend to be minimal when the graph is close to regular and the dependency is close to a constant value. For irregular graphs with highly variable dependency, the looseness can be much larger.

4 Dependency for Markov Random Fields

In order to establish that Gibbs sampling on a given MRF will be fast mixing, it is necessary to compute (a bound on) the dependency matrix R, as done in the following result. The proof of this result is fairly long, and so it is postponed to the Appendix. Note that it follows from several bounds on the dependency that are tighter, but less computationally convenient.

Theorem 6. The dependency matrix for a pairwise Markov random field is bounded by

    R_ij(θ) ≤ max_{a,b} (1/2) ∥θ_ij,·a − θ_ij,·b∥_∞.

Here, θ_ij,·a denotes the a-th column of θ_ij. Note that the MRF can include univariate terms as self-edges with no impact on the dependency bound, regardless of the strength of the univariate terms. It can be seen easily from the definition of R (Definition 3) that for any i the entry R_ii for self-edges (i, i) should always be zero. One can, without loss of generality, set each column of θ_ii to be the same, meaning that R_ii = 0 in the above bound.
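Theorem 6 reduces the dependency bound for one edge to a scan over pairs of columns of its potential table. A small illustrative sketch (the function name and the Ising-style example are ours, not from the paper):

```python
import numpy as np

def dependency_bound(theta_ij):
    """Bound on R_ij from Theorem 6: the maximum over column pairs (a, b) of
    (1/2) * || theta_ij[:, a] - theta_ij[:, b] ||_inf."""
    T = np.asarray(theta_ij, dtype=float)
    best = 0.0
    for a in range(T.shape[1]):
        for b in range(T.shape[1]):
            best = max(best, 0.5 * np.abs(T[:, a] - T[:, b]).max())
    return best

# Hypothetical Ising-style edge table theta_ij(x_i, x_j) = beta * x_i * x_j with
# states {-1, +1}: the two columns differ by 2*beta entrywise, so the bound is beta.
beta = 0.4
T_edge = beta * np.outer([-1.0, 1.0], [-1.0, 1.0])
assert np.isclose(dependency_bound(T_edge), beta)
```

Note that a table with identical columns gives a bound of zero, matching the remark about self-edges above.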
5 Euclidean Projection Operator

The Euclidean distance between two MRFs parameterized respectively by ψ and θ is given by

    ∥θ − ψ∥^2 := \sum_{(i,j) ∈ E} ∥θ_ij − ψ_ij∥_F^2.

This section considers projecting a given vector ψ onto the fast-mixing set or, formally, finding a vector θ with minimum Euclidean distance to ψ, subject to the constraint that a norm ∥·∥_* applied to the bound on the dependency matrix R is less than some constant c. Euclidean projection is considered because, first, it is a straightforward measure of the closeness between two parameter vectors and, second, it is the building block of projected gradient descent for projection in other distance measures. To begin with, we do not specify the matrix norm ∥·∥_*, as it could be any sub-multiplicative norm (Section 3). Thus, in principle, we would like to find θ to solve

    proj_c(ψ) := argmin_{θ : ∥R(θ)∥_* ≤ c} ∥θ − ψ∥^2.    (3)

Unfortunately, while convex, this optimization turns out to be somewhat expensive to solve, due to a lack of smoothness. Instead, we introduce a matrix Z, and constrain that Z_ij ≥ R_ij(θ), where R_ij(θ) is the bound on the dependency in Thm. 6 (taken as an equality). We add an extra quadratic term α∥Z − Y∥_F^2 to the objective, where Y is an arbitrarily given matrix and α > 0 trades off between smoothness and closeness to the original problem (3). The smoothed projection operator is

    proj_C(ψ, Y) := argmin_{(θ,Z) ∈ C} ∥θ − ψ∥^2 + α∥Z − Y∥_F^2,
    C = {(θ, Z) : Z_ij ≥ R_ij(θ), ∥Z∥_* ≤ c}.    (4)

If α = 0, this yields a solution that is identical to that of Eq. 3. However, when α = 0, the objective in Eq. 4 is not strongly convex as a function of Z, which results in a dual function that is nonsmooth, meaning it must be solved with a method like subgradient descent, with a slow convergence rate. In general, of course, the optimal point of Eq. 4 is different from that of Eq. 3. However, the main usage of the Euclidean projection operator is the projection step in the projected gradient descent algorithm for divergence minimization.
In these tasks the smoothed projection operator can be directly used in place of the non-smoothed one without changing the final result. In situations where the exact Euclidean projection is required, it can be obtained by initializing Y_1 arbitrarily and repeating (θ_{k+1}, Y_{k+1}) ← proj_C(ψ, Y_k) for k = 1, 2, ... until convergence.

5.1 Dual Representation

Theorem 7. Eq. 4 has the dual representation

    maximize_{σ, φ, ∆, Γ}  g(σ, φ, ∆, Γ)
    subject to  σ_ij(a, b, c) ≥ 0, φ_ij(a, b, c) ≥ 0, ∀(i, j) ∈ E, a, b, c,    (5)

where

    g(σ, φ, ∆, Γ) = min_Z h1(Z; σ, φ, ∆, Γ) + min_θ h2(θ; σ, φ),
    h1(Z; σ, φ, ∆, Γ) = −tr(ZΛ^T) + I(∥Z∥_* ≤ c) + α∥Z − Y∥_F^2,
    h2(θ; σ, φ) = ∥θ − ψ∥^2 + (1/2) \sum_{(i,j) ∈ E} \sum_{a,b,c} [σ_ij(a, b, c) − φ_ij(a, b, c)] (θ_ij(c, a) − θ_ij(c, b)),

in which Λ_ij := ∆_ij D_ij + Γ̂_ij + \sum_{a,b,c} [σ_ij(a, b, c) + φ_ij(a, b, c)], where Γ̂_ij := Γ_ij if (i, j) ∈ E and Γ̂_ij := −Γ_ij if (j, i) ∈ E, and D is an indicator matrix with D_ij = 0 if (i, j) ∈ E or (j, i) ∈ E, and D_ij = 1 otherwise.

The dual variables σ_ij and φ_ij are arrays of size L_j × L_i × L_i for all pairs (i, j) ∈ E, while ∆ and Γ are of size n × n. The proof of this is in the Appendix. Here, I(·) is the indicator function with I(x) = 0 when x is true and I(x) = ∞ otherwise. Being a smooth optimization problem with simple bound constraints, Eq. 5 can be solved with LBFGS-B [2]. For a gradient-based method like this to be practical, it must be possible to quickly evaluate g and its gradient. This is complicated by the fact that g is defined in terms of the minimization of h1 with respect to Z and of h2 with respect to θ. We discuss how to solve these subproblems now.

We first consider the minimization of h2. This is a quadratic function of θ and can be solved analytically via the condition that ∂h2(θ; σ, φ)/∂θ = 0. The closed-form solution is

    θ_ij(c, a) = ψ_ij(c, a) − (1/4) [ \sum_b σ_ij(a, b, c) − \sum_b σ_ij(b, a, c) − \sum_b φ_ij(a, b, c) + \sum_b φ_ij(b, a, c) ],  ∀(i, j) ∈ E, 1 ≤ a, c ≤ m.

The time complexity is linear in the size of ψ. Minimizing h1 is more involved.
We assume to start that there exists an algorithm to quickly project a matrix onto the set {Z : ∥Z∥_* ≤ c}, i.e., to solve the optimization problem

    min_{∥Z∥_* ≤ c} ∥Z − A∥_F^2.    (6)

Then, we observe that argmin_Z h1 is equal to

    argmin_Z −tr(ZΛ^T) + I(∥Z∥_* ≤ c) + α∥Z − Y∥_F^2 = argmin_{∥Z∥_* ≤ c} ∥Z − (Y + (1/2α)Λ)∥_F^2.

For different norms ∥·∥_*, the projection algorithm will be different and can have a large impact on efficiency. We will discuss in the following sections the choice of ∥·∥_* and an algorithm for the ∞-norm. Finally, once h1 and h2 have been solved, the gradient of g is (by Danskin's theorem [1])

    ∂g/∂∆_ij = −D_ij Ẑ_ij,
    ∂g/∂Γ_ij = Ẑ_ji − Ẑ_ij,
    ∂g/∂σ_ij(a, b, c) = (1/2)(θ̂_ij(c, a) − θ̂_ij(c, b)) − Ẑ_ij,
    ∂g/∂φ_ij(a, b, c) = −∂g/∂σ_ij(a, b, c),

where Ẑ and θ̂ represent the solutions to the subproblems.

5.2 Spectral Norm

When ∥·∥_* is set to the spectral norm, i.e., the largest singular value of a matrix, the projection in Eq. 6 can be performed by thresholding the singular values of A [3]. Theoretically, using the spectral norm will give a tighter bound on Z than other norms (Section 3). However, computing a full singular value decomposition can be impractically slow for a graph with a large number of variables.

5.3 ∞-norm

Here, we consider setting ∥·∥_* to the ∞-norm, ∥A∥_∞ = max_i \sum_j |A_ij|, which measures the maximum ℓ1 norm of the rows of A. This norm has several computational advantages. Firstly, to project a matrix onto the ∞-norm ball {A : ∥A∥_∞ ≤ c}, we can simply project each row a_i of the matrix onto the ℓ1-norm ball {a : ∥a∥_1 ≤ c}. Duchi et al. [4] provide a method linear in the number of nonzeros in a and logarithmic in the length of a. Thus, if Z is an n × n matrix, Eq. 6 for the ∞-norm can be solved in time n^2 and, for sufficiently sparse matrices, in time n log n. A second advantage of the ∞-norm is that (unlike the spectral norm) projection in Eq. 6 preserves the sparsity of the matrix. Thus, one can disregard the matrix D and the dual variables ∆ when solving the optimization in Theorem 7.
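The row-wise reduction of Section 5.3 can be sketched as follows. The ℓ1-ball projection shown is the standard sort-based variant in the spirit of Duchi et al. [4] (O(n log n) per row, rather than their linear-time method), and the function names are ours:

```python
import numpy as np

def project_l1_ball(v, c):
    """Euclidean projection of v onto {a : ||a||_1 <= c} via sorting and
    soft-thresholding (a sort-based variant of Duchi et al. [4])."""
    if np.abs(v).sum() <= c:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - c)[0][-1]
    tau = (css[rho] - c) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def project_inf_norm_ball(A, c):
    """Project a matrix onto {A : ||A||_inf <= c} row by row (Section 5.3)."""
    return np.vstack([project_l1_ball(row, c) for row in A])
```

A zero row stays zero, which is the sparsity-preservation property noted above.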
This means that Z itself can be represented sparsely, i.e., we only need variables for those (i, j) ∈ E. These simplifications significantly improve the efficiency of projection, with some tradeoff in accuracy.

6 Projection in Divergences

In this section, we want to find a distribution p(x; θ) in the fast-mixing family closest to a target distribution p(x; ψ) in some divergence D(ψ, θ). The choice of divergence depends on the convenience of projection, the approximating family and the inference task. We will first present a general algorithmic framework based on projected gradient descent (Algorithm 1), and then discuss the details of several previously proposed divergences [11, 3].

6.1 General algorithm framework for divergence minimization

The problem of projection in divergences is formulated as

    min_{θ ∈ C̄} D(ψ, θ),    (7)

where D(·, ·) is some divergence measure, and C̄ := {θ : ∃Z s.t. (θ, Z) ∈ C}, where C is the feasible set in Eq. 4. Our general strategy is to use projected gradient descent to solve the optimization

    min_{(θ,Z) ∈ C} D(ψ, θ),    (8)

using the joint operator described in Section 5 to project onto C. For different divergences, the only difference in the projection algorithm is the evaluation of the gradient ∇_θ D(ψ, θ). It is clear that if (θ*, Z*) is the solution of Eq. 8, then θ* is the solution of Eq. 7.

6.2 Divergences

Algorithm 1 Projected gradient descent for divergence projection
    Initialize (θ_1, Z_1), k ← 1.
    repeat
        θ′ ← θ_k − λ∇_θ D(ψ, θ_k)
        (θ_{k+1}, Z_{k+1}) ← proj_C(θ′, Z_k)
        k ← k + 1
    until convergence

Figure 1: Mean univariate marginal error on 16 × 16 grids (top) with attractive interactions and median-density random graphs (bottom) with mixed interactions, comparing 30k iterations of Gibbs sampling after projection (onto the ℓ∞ norm) to variational methods. (Methods shown: LBP, TRW, Mean-Field, Original Parameters, Euclidean, Piecewise KL(ψ∥θ) (TW 1 and TW 2), KL(θ∥ψ).) The original parameters also show a lower curve with 10^6 samples.

In this section, we will discuss the different choices of divergences and corresponding projection algorithms.

6.2.1 KL-divergence

The KL-divergence KL(ψ∥θ) := \sum_x p(x; ψ) log [p(x; ψ)/p(x; θ)] is arguably the optimal divergence for marginal inference because it strives to preserve the marginals of p(x; θ) and p(x; ψ). However, projection in KL-divergence is intractable here because the evaluation of the gradient ∇_θ KL(ψ∥θ) requires the marginals of the distribution ψ.

6.2.2 Piecewise KL-divergence

One tractable surrogate of KL(ψ∥θ) is the piecewise KL-divergence [3] defined over some tractable subgraphs. Here, D(ψ, θ) := max_{T ∈ T} KL(ψ_T ∥ θ_T), where T is a set of low-treewidth subgraphs. The gradient can be evaluated as ∇_θ D(ψ, θ) = ∇_θ KL(ψ_{T*} ∥ θ_{T*}), where T* = argmax_{T ∈ T} KL(ψ_T ∥ θ_T). For any T in T, KL(ψ_T ∥ θ_T) and its gradient can be evaluated by the junction-tree algorithm.

6.2.3 Reversed KL-divergence

The "reversed" KL-divergence KL(θ∥ψ) is the divergence minimized by mean-field methods. In general, KL(θ∥ψ) is inferior to KL(ψ∥θ) for marginal inference since it tends to underestimate the support of the distribution [11]. Still, it often works well in practice.
∇_θ KL(θ∥ψ) can be computed as

    ∇_θ KL(θ∥ψ) = \sum_x p(x; θ) [(θ − ψ) · f(x)] (f(x) − µ(θ)),

which can be approximated by samples generated from p(x; θ) [3]. In implementation, we maintain a "pool" of samples, each of which is updated by a single Gibbs step after each iteration of Algorithm 1.

7 Experiments

The experiments below take two stages: first, the parameters are projected (in some divergence), and then we compare the accuracy of sampling with the resulting marginals. We focus on this second aspect. However, we provide a comparison of the computation time for various projection algorithms in Table 1, and when comparing the accuracy of sampling with a given amount of time, provide two curves for sampling with the original parameters, where one curve has an extra amount of sampling effort roughly approximating the time to perform projection in the reversed KL divergence.

7.1 Synthetic MRFs

Figure 2: Examples of the accuracy of obtained marginals vs. the number of samples. Top: Grid graphs. Bottom: Median-Density Random graphs.

Our first experiment follows that of [8, 3] in evaluating the accuracy of approximation methods in marginal inference. In the experiments, we approximate randomly generated MRF models with rapid-mixing distributions using the projection algorithms described previously. Then, the marginals of the fast-mixing approximate distributions are estimated by running a Gibbs chain on each distribution.
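For concreteness, a single-site Gibbs sweep over the pairwise MRF of Eq. 1 might look as follows. This is a hypothetical minimal sketch, not the authors' code; it assumes every variable has the same number of states and that each edge table is stored once per ordered pair (i, j) ∈ E:

```python
import numpy as np

def gibbs_sweep(x, theta, neighbors, rng):
    """One "systematic-scan" Gibbs pass over all variables of the pairwise
    MRF in Eq. 1. theta[(i, j)] is the L_i x L_j table of edge (i, j) in E;
    neighbors[i] lists the edges of E that touch variable i. For simplicity
    this sketch assumes every variable has the same number of states L."""
    L = next(iter(theta.values())).shape[0]
    for i in range(len(x)):
        logits = np.zeros(L)
        for (a, b) in neighbors[i]:
            t = theta[(a, b)]
            # theta_ij(x_i, x_j): slice out the row/column of the fixed neighbor.
            logits += t[:, x[b]] if a == i else t[x[a], :]
        p = np.exp(logits - logits.max())
        x[i] = rng.choice(L, p=p / p.sum())
    return x

# Hypothetical two-variable example with a single (attractive) edge:
theta = {(0, 1): np.array([[0.4, 0.0], [0.0, 0.4]])}
neighbors = {0: [(0, 1)], 1: [(0, 1)]}
rng = np.random.default_rng(0)
x = [0, 1]
for _ in range(100):
    x = gibbs_sweep(x, theta, neighbors, rng)
```

Repeating such sweeps yields the Gibbs chains whose marginal estimates are evaluated in the experiments.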
These are compared against exact marginals as computed by the junction tree algorithm. We use the mean absolute difference of the marginals |p(X_i = 1) − q(X_i = 1)| as the accuracy measure. We compare to naive mean-field (MF), Gibbs sampling on the original parameters (Gibbs), and loopy belief propagation (LBP). Many other methods have been compared against a similar benchmark [6, 8]. While our methods are for general MRFs, we test on Ising potentials because this is a standard benchmark. Two graph topologies are used: two-dimensional 16 × 16 grids and 10-node random graphs, where each edge is independently present with probability p_e ∈ {0.3, 0.5, 0.7}. Node parameters θ_i are uniform from [−d_n, d_n] with fixed field strength d_n = 1.0. Edge parameters θ_ij are uniform from [−d_e, d_e] or [0, d_e] to obtain mixed or attractive interactions respectively, with interaction strengths d_e ∈ {0, 0.5, ..., 4}. Figure 1 shows the average marginal error at different interaction strengths. Error bars show the standard error normalized by the number of samples, which can be interpreted as a 68.27% confidence interval. We also include time-accuracy comparisons in Figure 2. All results are averaged over 50 random trials. We run Gibbs long enough (10^6 samples) to get a fair comparison in terms of running time. Except where otherwise stated, parameters are projected onto the ball {θ : ∥R(θ)∥_∞ ≤ c}, where c = 2.5 is larger than the value of c = 1 suggested by the proofs above. Better results are obtained by using this larger constraint set, presumably because of looseness in the bound. For piecewise projection, grids use simple vertical and horizontal chains of treewidth either one or two. For random graphs, we randomly generate spanning trees until all edges are covered. Gradient descent uses a fixed step size of λ = 0.1. A Gibbs step is one "systematic-scan" pass over all variables.
The reversed KL divergence maintains a pool of 500 samples, each of which is updated by a single Gibbs step in each iteration. We wish to compare the trade-off between computation time and accuracy represented by the choice between the ∞- and spectral norms. We measure the running time on 16 × 16 grids in Table 1, and compare the accuracy in Figure 3. The appendix contains results for a three-state Potts model on an 8 × 8 grid, as a test of the multivariate setting. Here, the intractable divergence KL(ψ∥θ) is included for reference, with the projection computed with the help of the junction tree algorithm for inference.

Table 1: Running times on 16 × 16 grids with attractive interactions. Euclidean projection converges in around 5 LBFGS-B iterations. Piecewise projection (with a treewidth of 1) and reversed KL projection use 60 gradient descent steps. All results use a single core of an Intel i7 860 processor.

               Gibbs                   Euclidean            Piecewise            Reversed-KL
               30k steps  10^6 steps   l∞ norm   l2 norm    l∞ norm   l2 norm    l∞ norm   l2 norm
  d_e = 1.5    0.67s      22.42s       1.50s     25.63s     12.87s    45.26s     13.13s    66.81s
  d_e = 3.0    0.67s      22.42s       3.26s     164.34s    20.73s    211.08s    20.12s    254.25s

7.2 Berkeley binary image denoising

Figure 3: The marginal error using ∞-norm projection (solid lines) and spectral-norm projection (dotted lines) on 16 × 16 Ising grids.

This experiment evaluates various methods for denoising binary images from the Berkeley segmentation dataset, downscaled from 300 × 200 to 120 × 80. The images are binarized by setting Y_i = 1 if pixel i is above the average gray scale in the image, and Y_i = −1 otherwise.
The noisy image X is created by setting

    X_i = (Y_i + 1)/2 · (1 − t_i^{1.25}) + (1 − Y_i)/2 · t_i^{1.25},

in which t_i is sampled uniformly from [0, 1]. For inference purposes, the conditional distribution of Y is modeled as

    P(Y | X) ∝ exp( β \sum_{ij} Y_i Y_j + (α/2) \sum_i (2X_i − 1) Y_i ),

where the pairwise strength β > 0 encourages smoothness. On this attractive-only Ising potential, the Swendsen-Wang method [12] mixes rapidly, and so we use the resulting samples to estimate the ground truth. The parameters α and β are heuristically chosen to be 0.5 and 0.7 respectively.

Figure 4: Average marginal error on the Berkeley segmentation dataset.

Figure 4 shows the decrease of the average marginal error. To compare running time, Euclidean and KL(θ∥ψ) projection cost approximately the same as sampling 10^5 and 4.8 × 10^5 samples respectively. Gibbs sampling on the original parameters converges very slowly, while sampling the approximate distributions from our projection algorithms converges quickly, in less than 10^4 samples.

8 Conclusions

We derived sufficient conditions on the parameters of an MRF to ensure fast mixing of univariate Gibbs sampling, along with an algorithm to project onto this set in the Euclidean norm. As an example use, we explored the accuracy of samples obtained by projecting parameters and then sampling, which is competitive with simple variational methods as well as traditional Gibbs sampling. Other possible applications of fast-mixing parameter sets include constraining parameters during learning.

Acknowledgments

NICTA is funded by the Australian Government through the Department of Communications and the Australian Research Council through the ICT Centre of Excellence Program.

References

[1] Dimitri Bertsekas. Nonlinear Programming. Athena Scientific, 2004.
[2] Richard H.
Byrd, Peihuang Lu, Jorge Nocedal, and Ciyou Zhu. A limited memory algorithm for bound constrained optimization. SIAM J. Sci. Comput., 16(5):1190–1208, 1995.
[3] Justin Domke and Xianghang Liu. Projecting Ising model parameters for fast mixing. In NIPS, 2013.
[4] John C. Duchi, Shai Shalev-Shwartz, Yoram Singer, and Tushar Chandra. Efficient projections onto the l1-ball for learning in high dimensions. In ICML, 2008.
[5] Martin E. Dyer, Leslie Ann Goldberg, and Mark Jerrum. Matrix norms and rapid mixing for spin systems. Ann. Appl. Probab., 19:71–107, 2009.
[6] Amir Globerson and Tommi Jaakkola. Approximate inference using conditional entropy decompositions. In UAI, 2007.
[7] Thomas P. Hayes. A simple condition implying rapid mixing of single-site dynamics on spin systems. In FOCS, pages 39–46, 2006.
[8] Tamir Hazan and Amnon Shashua. Convergent message-passing algorithms for inference over general graphs with convex free energies. In UAI, pages 264–273, 2008.
[9] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[10] Thomas Minka. Expectation propagation for approximate Bayesian inference. In UAI, 2001.
[11] Thomas Minka. Divergence measures and message passing. Technical report, 2005.
[12] Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58:86–88, Jan 1987.
[13] Martin Wainwright and Michael Jordan. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1–305, 2008.
[14] Jonathan Yedidia, William Freeman, and Yair Weiss. Constructing free energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51:2282–2312, 2005.
A Boosting Framework on Grounds of Online Learning

Tofigh Naghibi, Beat Pfister
Computer Engineering and Networks Laboratory
ETH Zurich, Switzerland
naghibi@tik.ee.ethz.ch, pfister@tik.ee.ethz.ch

Abstract

By exploiting the duality between boosting and online learning, we present a boosting framework which proves to be extremely powerful thanks to employing the vast knowledge available in the online learning area. Using this framework, we develop various algorithms to address multiple practically and theoretically interesting questions including sparse boosting, smooth-distribution boosting, agnostic learning and, as a by-product, some generalization to double-projection online learning algorithms.

1 Introduction

A boosting algorithm can be seen as a meta-algorithm that maintains a distribution over the sample space. At each iteration a weak hypothesis is learned and the distribution is updated accordingly. The output (strong hypothesis) is a convex combination of the weak hypotheses. Two dominant views to describe and design boosting algorithms are "weak to strong learner" (WTSL), which is the original viewpoint presented in [1, 2], and boosting by "coordinate-wise gradient descent in the functional space" (CWGD), appearing in later works [3, 4, 5]. A boosting algorithm adhering to the first view guarantees that it only requires a finite number of iterations (equivalently, a finite number of weak hypotheses) to learn a (1−ǫ)-accurate hypothesis. In contrast, an algorithm resulting from the CWGD viewpoint (usually called a potential booster) may not necessarily be a boosting algorithm in the probably approximately correct (PAC) learning sense. However, while it is rather difficult to construct a boosting algorithm based on the first view, the algorithmic frameworks, e.g., AnyBoost [4], resulting from the second viewpoint have proven to be particularly prolific when it comes to developing new boosting algorithms.
Under the CWGD view, the choice of the convex loss function to be minimized is (arguably) the cornerstone of designing a boosting algorithm. This, however, is a severe disadvantage in some applications. In CWGD, the weights are not directly controllable (designable) and are only viewed as the values of the gradient of the loss function. In many applications, some characteristics of the desired distribution are known or given as problem requirements, while finding a loss function that generates such a distribution is likely to be difficult. For instance, what loss functions can generate sparse distributions?[1] What family of loss functions results in a smooth distribution?[2] We can go even further and imagine scenarios in which a loss function needs to put more weight on a given subset of examples than on others, either because that subset has more reliable labels or because it is a problem requirement to have a more accurate hypothesis for that part of the sample space. Then, what loss function can generate such a customized distribution? Moreover, does it result in a provable boosting algorithm? In general, how can we characterize the accuracy of the final hypothesis?

[1] In the boosting terminology, sparsity usually refers to the greedy hypothesis-selection strategy of boosting methods in the functional space. However, sparsity in this paper refers to the sparsity of the distribution (weights) over the sample space.
[2] A smooth distribution is a distribution that does not put too much weight on any single sample or, in other words, a distribution emulated by the booster that does not dramatically diverge from the target distribution [6, 7].
Although, to be fair, the so-called loss-function-hunting approach has given rise to useful boosting algorithms such as LogitBoost, FilterBoost, GiniBoost and MadaBoost [5, 8, 9, 10], which (to some extent) answer some of the above questions, it is an inflexible and relatively unsuccessful approach to addressing boosting problems with distribution constraints. Another approach to designing a boosting algorithm is to directly follow the WTSL viewpoint [11, 6, 12]. The immediate advantages of such an approach are, first, that the resulting algorithms are provable boosting algorithms, i.e., they output a hypothesis of arbitrary accuracy. Second, the booster has direct control over the weights, making it more suitable for boosting problems subject to distribution constraints. However, since the WTSL view does not offer any algorithmic framework (as opposed to the CWGD view), it is rather difficult to come up with a distribution update mechanism resulting in a provable boosting algorithm. There are, however, a few useful, albeit fairly limited, algorithmic frameworks such as TotalBoost [13] that can be used to derive other provable boosting algorithms. The TotalBoost algorithm can maximize the margin by iteratively solving a convex problem with the totally corrective constraint. A more general family of boosting algorithms was later proposed by Shalev-Shwartz et al. [14], where it was shown that weak learnability and linear separability are equivalent, a result following from von Neumann's minmax theorem. Using this theorem, they constructed a family of algorithms that maintain smooth distributions over the sample space, and are consequently noise tolerant. Their proposed algorithms find a (1−ǫ)-accurate solution after performing at most O(log(N)/ǫ²) iterations, where N is the number of training examples.
1.1 Our Results

We present a family of boosting algorithms that can be derived from well-known online learning algorithms, including projected gradient descent [15] and its generalization, mirror descent (both active and lazy updates, see [16]), and composite objective mirror descent (COMID) [17]. We prove the PAC learnability of the algorithms derived from this framework and we show that this framework in fact generates maximum margin algorithms. That is, given a desired accuracy level ν, it outputs a hypothesis of margin γ_min − ν, with γ_min being the minimum edge that the weak classifier guarantees to return.

The duality between (linear) online learning and boosting is by no means new. This duality was first pointed out in [2] and was later elaborated and formalized by using von Neumann's minmax theorem [18]. Following this line, we provide several proof techniques required to show the PAC learnability of the derived boosting algorithms. These techniques are fairly versatile and can be used to translate many other online learning methods into our boosting framework.

To motivate our boosting framework, we derive two practically and theoretically interesting algorithms: (I) the SparseBoost algorithm, which, by maintaining a sparse distribution over the sample space, tries to reduce the space and computation complexity. In fact this problem, i.e., applying batch boosting on successive subsets of data when there is not sufficient memory to store an entire dataset, was first discussed by Breiman in [19], though no algorithm with a theoretical guarantee was suggested. SparseBoost is the first provable batch booster that can (partially) address this problem. By analyzing this algorithm, we show that the tuning parameter of the ℓ1 regularization term at each round t should not exceed (γ_t/2)η_t in order to still have a boosting algorithm, where η_t is the coefficient of the t-th weak hypothesis and γ_t is its edge.
(II) A smooth boosting algorithm that requires only O(log 1/ǫ) rounds to learn a (1−ǫ)-accurate hypothesis. This algorithm can also be seen as an agnostic boosting algorithm[3] due to the fact that smooth distributions provide a theoretical guarantee for noise tolerance in various noisy learning settings, such as agnostic boosting [21, 22].

Furthermore, we provide an interesting theoretical result about MadaBoost [10]. We give a proof (to the best of our knowledge the only available unconditional proof) for the boosting property of (a variant of) MadaBoost and show that, unlike the common presumption, its convergence rate is of O(1/ǫ²) rather than O(1/ǫ).

Finally, we show that our proof technique can be employed to generalize some of the known online learning algorithms. Specifically, consider the lazy update variant of the online mirror descent (LMD) algorithm (see for instance [16]). The standard proof that the LMD update scheme achieves a vanishing regret bound proceeds by showing its equivalence to the FTRL algorithm [16] in the case that they are both linearized, i.e., the cost function is linear. However, this indirect proof is fairly restrictive when it comes to generalizing LMD-type algorithms. Here, we present a direct proof for it, which can easily be adopted to generalize LMD-type algorithms.

[3] Unlike the PAC model, the agnostic learning model allows an arbitrary target function (labeling function) that may not belong to the class studied, and hence can be viewed as a noise-tolerant learning model [20].

2 Preliminaries

Let {(x_i, a_i)}, 1 ≤ i ≤ N, be N training samples, where x_i ∈ X and a_i ∈ {−1, +1}. Assume h ∈ H is a real-valued function mapping X into [−1, 1]. Denote a distribution over the training data by w = [w_1, ..., w_N]⊤ and define a loss vector d = [−a_1 h(x_1), ..., −a_N h(x_N)]⊤. We define γ = −w⊤d as the edge of the hypothesis h under the distribution w, and it is assumed to be positive when h is returned by a weak learner.
In this paper we do not consider branching-program-based boosters and adhere to the typical boosting protocol (described in Section 1). Since a central notion throughout this paper is that of Bregman divergences, we briefly revisit some of their properties. A Bregman divergence is defined with respect to a convex function R as

    B_R(x, y) = R(x) − R(y) − ∇R(y)⊤(x − y)    (1)

and can be interpreted as a distance measure between x and y. Due to the convexity of R, a Bregman divergence is always non-negative, i.e., B_R(x, y) ≥ 0. In this work we consider R to be a β-strongly convex function[4] with respect to a norm ||.||. With this choice of R, the Bregman divergence satisfies B_R(x, y) ≥ (β/2)||x − y||². As an example, if R(x) = (1/2)x⊤x (which is 1-strongly convex with respect to ||.||_2), then B_R(x, y) = (1/2)||x − y||_2² is half the squared Euclidean distance. Another example is the negative entropy function R(x) = \sum_{i=1}^N x_i log x_i (resulting in the KL-divergence), which is known to be 1-strongly convex over the probability simplex with respect to the ℓ1 norm. The Bregman projection is another fundamental concept of our framework.

Definition 1 (Bregman Projection). The Bregman projection of a vector y onto a convex set S with respect to a Bregman divergence B_R is

    Π_S(y) = argmin_{x ∈ S} B_R(x, y)    (2)

Moreover, the following generalized Pythagorean theorem holds for Bregman projections.

Lemma 1 (Generalized Pythagorean) [23, Lemma 11.3]. Given a point y ∈ R^N, a convex set S and ŷ = Π_S(y) as the Bregman projection of y onto S, for all x ∈ S we have

    Exact:   B_R(x, y) ≥ B_R(x, ŷ) + B_R(ŷ, y)    (3)
    Relaxed: B_R(x, y) ≥ B_R(x, ŷ)    (4)

The relaxed version follows from the fact that B_R(ŷ, y) ≥ 0 and thus can be ignored.

Lemma 2. For any vectors x, y, z, we have

    (x − y)⊤(∇R(z) − ∇R(y)) = B_R(x, y) − B_R(x, z) + B_R(y, z)    (5)

The above lemma follows directly from the Bregman divergence definition in (1). Additionally, the following definitions from convex analysis are useful throughout the paper.

Definition 2 (Norm & dual norm).
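The definition in Eq. 1 and both examples above can be checked numerically. A small sketch (the function names are ours):

```python
import numpy as np

def bregman(R, grad_R, x, y):
    """Bregman divergence B_R(x, y) = R(x) - R(y) - <grad R(y), x - y>  (Eq. 1)."""
    return R(x) - R(y) - grad_R(y) @ (x - y)

# Squared Euclidean R gives half the squared distance:
R2 = lambda v: 0.5 * v @ v
x, y = np.array([1.0, 2.0]), np.array([0.0, 0.0])
# B_R(x, y) = 0.5 * ||x - y||^2 = 2.5
assert np.isclose(bregman(R2, lambda v: v, x, y), 2.5)

# Negative entropy gives the KL-divergence on the simplex:
H = lambda w: np.sum(w * np.log(w))
gH = lambda w: np.log(w) + 1.0
p, q = np.array([0.5, 0.5]), np.array([0.25, 0.75])
assert np.isclose(bregman(H, gH, p, q), np.sum(p * np.log(p / q)))
```

The second assertion confirms the claim above that negative entropy generates the KL-divergence.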
Let ||.||A be a norm. Then its dual norm is defined as ||y||A∗= sup{y⊤x, ||x||A ≤1} (6) For instance, the dual norm of ||.||2 = ℓ2 is ||.||2∗= ℓ2 norm and the dual norm of ℓ1 is ℓ∞norm. Further, Lemma 3. For any vectors x, y and any norm ||.||A, the following inequality holds: x⊤y ≤||x||A||y||A∗≤1 2||x||2 A + 1 2||y||2 A∗ (7) 4That is, its second derivative (Hessian in higher dimensions) is bounded away from zero by at least β. 3 Throughout this paper, we use the shorthands ||.||A = ||.|| and ||.||A∗= ||.||∗for the norm and its dual, respectively. Finally, before continuing, we establish our notations. Vectors are lower case bold letters and their entries are non-bold letters with subscripts, such as xi of x, or non-bold letter with superscripts if the vector already has a subscript, such as xi t of xt. Moreover, an N-dimensional probability simplex is denoted by S = {w| PN i=1 wi = 1, wi ≥0}. The proofs of the theorems and the lemmas can be found in the Supplement. 3 Boosting Framework Let R(x) be a 1-strongly convex function with respect to a norm ||.|| and denote its associated Bregman divergence BR. Moreover, let the dual norm of a loss vector dt be upper bounded, i.e., ||dt||∗ ≤ L. It is easy to verify that for dt as defined in MABoost, L = 1 when ||.||∗ = ℓ∞and L = N when ||.||∗ = ℓ2. The following Mirror Ascent Boosting (MABoost) algorithm is our boosting framework. Algorithm 1: Mirror Ascent Boosting (MABoost) Input: R(x) 1-strongly convex function, w1 = [ 1 N , . . . , 1 N ]⊤and z1 = [ 1 N , . . . , 1 N ]⊤ For t = 1, . . . , T do (a) Train classifier with wt and get ht, let dt = [−a1ht(x1), . . . , −aNht(xN)] and γt = −w⊤ t dt. (b) Set ηt = γt L (c) Update weights: ∇R(zt+1) = ∇R(zt) + ηtdt (lazy update) ∇R(zt+1) = ∇R(wt) + ηtdt (active update) (d) Project onto S: wt+1 = argmin w∈S BR(w, zt+1) End Output: The final hypothesis f(x)= sign  PT t=1 ηtht(x)  . 
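Algorithm 1 can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: it instantiates the active update with the quadratic R(w) = (1/2)||w||2 2 (so ∇R is the identity and the Bregman projection is the Euclidean projection onto the simplex, computed with the standard sort-based routine of [25]), and stands in for the weak learner by greedily returning the largest-edge hypothesis from a small fixed pool of hypothetical prediction vectors.

```python
import numpy as np

def project_simplex(z):
    """Euclidean projection onto the probability simplex (sort-based, cf. [25])."""
    u = np.sort(z)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(z)) + 1) > 0)[0][-1]
    return np.maximum(z + (1 - css[rho]) / (rho + 1), 0)

def maboost_active(a, H, T):
    """Active-update MABoost with R(w) = 0.5*||w||^2 (grad R = identity), L = N.
    a: labels in {-1,+1}; H: rows are hypothesis predictions on the N samples.
    The 'weak learner' here just returns the pooled hypothesis of largest edge."""
    N = len(a)
    w = np.full(N, 1.0 / N)
    coef = np.zeros(len(H))              # accumulates eta_t per chosen hypothesis
    for _ in range(T):
        all_edges = H @ (a * w)          # edge of each hypothesis under w
        j = int(np.argmax(all_edges))
        gamma, d = all_edges[j], -a * H[j]
        eta = gamma / N                  # step (b) of Algorithm 1 with L = N
        w = project_simplex(w + eta * d) # active update + projection (steps c, d)
        coef[j] += eta
    return coef                          # f(x) = sign(sum_j coef[j] h_j(x))

# Toy run (hypothetical data): no single pooled hypothesis is perfect,
# but the boosted combination separates the training set.
a = np.array([+1, +1, -1, -1])
H = np.array([[+1, +1, +1, -1],
              [+1, -1, -1, -1],
              [+1, +1, -1, +1]])
coef = maboost_active(a, H, T=200)
margins = a * (coef @ H)                 # positive margin <=> correct prediction
```

For this pool the uniform combination of the three hypotheses has margin 1/3, so every round has edge at least 1/3 and the bound (8) below guarantees zero training error well before T = 200 rounds.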
This algorithm is a variant of the mirror descent algorithm [16], modified to work as a boosting algorithm. The basic principle in this algorithm is quite clear. As in ADABoost, the weight of a wrongly (correctly) classified sample increases (decreases). The weight vector is then projected onto the probability simplex in order to keep the weight sum equal to 1. The distinction between the active and lazy update versions and the fact that the algorithm may behave quite differently under different update strategies should be emphasized. In the lazy update version, the norm of the auxiliary variable zt is unbounded which makes the lazy update inappropriate in some situations. In the active update version, on the other hand, the algorithm always needs to access (compute) the previous projected weight wt to update the weight at round t and this may not be possible in some applications (such as boosting-by-filtering). Due to the duality between online learning and boosting, it is not surprising that MABoost (both the active and lazy versions) is a boosting algorithm. The proof of its boosting property, however, reveals some interesting properties which enables us to generalize the MABoost framework. In the following, only the proof of the active update is given and the lazy update is left to Section 3.4. Theorem 1. Suppose that MABoost generates weak hypotheses h1, . . . , hT whose edges are γ1, . . . , γT . Then the error ǫ of the combined hypothesis f on the training set is bounded and yields for the following R functions: R(w) = 1 2||w||2 2 : ǫ ≤ 1 PT t=1 1 2γ2 t + 1 (8) R(w)= N X i=1 wi log wi : ǫ ≤e−PT t=1 1 2 γ2 t (9) 4 In fact, the first bound (8) holds for any 1-strongly convex R, though for some R (e.g., negative entropy) a much tighter bound as in (9) can be achieved. Proof: Assume w∗= [w∗ 1, . . . , w∗ N]⊤is a distribution vector where w∗ i = 1 Nǫ if f(xi) ̸= ai, and 0 otherwise. 
w∗can be seen as a uniform distribution over the wrongly classified samples by the ensemble hypothesis f. Using this vector and following the approach in [16], we derive the upper bound of PT t=1 ηt(w∗⊤dt−w⊤ t dt) where dt = [d1 t , . . . ,dN t ] is a loss vector as defined in Algorithm 1. (w∗−wt)⊤ηtdt = (w∗−wt)⊤∇R(zt+1) −∇R(wt)  (10a) = BR(w∗, wt) −BR(w∗, zt+1) + BR(wt, zt+1) (10b) ≤BR(w∗, wt) −BR(w∗, wt+1) + BR(wt, zt+1) (10c) where the first equation follows Lemma 2 and inequality (10c) results from the relaxed version of Lemma 1. Note that Lemma 1 can be applied here because w∗∈S. Further, the BR(wt, zt+1) term is bounded. By applying Lemma 3 BR(wt, zt+1) + BR(zt+1, wt) = (zt+1 −wt)⊤ηtdt ≤1 2||zt+1 −wt||2 + 1 2η2 t ||dt||2 ∗ (11) and since BR(zt+1, wt) ≥1 2||zt+1 −wt||2 due to the 1-strongly convexity of R, we have BR(wt, zt+1) ≤1 2η2 t ||dt||2 ∗ (12) Now, replacing (12) into (10c) and summing it up from t = 1 to T , yields T X t=1 w∗⊤ηtdt−w⊤ t ηtdt ≤ T X t=1 1 2η2 t ||dt||2 ∗+ BR(w∗, w1) −BR(w∗, wT +1) (13) Moreover, it is evident from the algorithm description that for mistakenly classified samples −aif(xi)= −aisign  T X t=1 ηtht(xi)  = sign  T X t=1 ηtdi t  ≥0 ∀xi ∈{x|f(xi) ̸= ai} (14) Following (14), the first term in (13) will be w∗⊤PT t=1 ηtdt ≥0 and thus, can be ignored. Moreover, by the definition of γ, the second term is PT t=1 −w⊤ t ηtdt = PT t=1 ηtγt. Putting all these together, ignoring the last term in (13) and replacing ||dt||2 ∗with its upper bound L, yields −BR(w∗, w1) ≤L T X t=1 1 2η2 t − T X t=1 ηtγt (15) Replacing the left side with −BR = −||w∗−w1||2 = ǫ−1 Nǫ for the case of quadratic R, and with −BR = log(ǫ) when R is a negative entropy function, taking the derivative w.r.t ηt and equating it to zero (which yields ηt = γt L ) we achieve the error bounds in (8) and (9). Note that in the case of R being the negative entropy function, Algorithm 1 degenerates into ADABoost with a different choice of ηt. 
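To spell out the final step of the proof (our expansion of the text's argument, keeping the paper's convention −BR(w∗, w1) = −||w∗−w1||2 = (ǫ−1)/(Nǫ) in the quadratic case): minimizing the right-hand side of (15) over each ηt gives ηt = γt/L, and substituting back yields

```latex
-B_R(w^*, w_1) \;\le\; -\sum_{t=1}^{T} \frac{\gamma_t^2}{2L}.
% Quadratic case: L = N and -B_R(w^*, w_1) = (\epsilon-1)/(N\epsilon), so
\frac{\epsilon - 1}{N\epsilon} \le -\sum_{t=1}^{T} \frac{\gamma_t^2}{2N}
\;\Longrightarrow\;
\frac{1}{\epsilon} \ge \sum_{t=1}^{T} \tfrac12\gamma_t^2 + 1
\;\Longrightarrow\;
\epsilon \le \frac{1}{\sum_{t=1}^{T} \tfrac12 \gamma_t^2 + 1}.
% Entropy case: L = 1 and -B_R(w^*, w_1) = \log\epsilon, so
\log \epsilon \le -\sum_{t=1}^{T} \tfrac12 \gamma_t^2
\;\Longrightarrow\;
\epsilon \le e^{-\sum_{t=1}^{T} \tfrac12 \gamma_t^2}.
```

These are exactly the bounds (8) and (9).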
Before continuing our discussion, it is important to mention that the cornerstone concept of the proof is the choice of w∗. For instance, a different choice of w∗results in the following max-margin theorem. Theorem 2. Setting ηt = γt L √ t, MABoost outputs a hypothesis of margin at least γmin −ν, where ν is a desired accuracy level and tends to zero in O( log T √ T ) rounds of boosting. Observations: Two observations follow immediately from the proof of Theorem 1. First, the requirement of using Lemma 1 is w∗∈S, so in the case of projecting onto a smaller convex set Sk ⊆S, as long as w∗∈Sk holds, the proof is intact. Second, only the relaxed version of Lemma 1 is required in the proof (to obtain inequality (10c)). Hence, if there is an approximate projection operator ˆΠS that satisfies the inequality BR(w∗, zt+1) ≥BR w∗, ˆΠS(zt+1)  , it can be substituted 5 for the exact projection operator ΠS and the active update version of the algorithm still works. A practical approximate operator of this type can be obtained by using the double-projection strategy as in Lemma 4. Lemma 4. Consider the convex sets K and S, where S ⊆K. Then for any x ∈S and y ∈RN, ˆΠS(y) = ΠS  ΠK(y)  is an approximate projection operator that satisfies BR(x, y) ≥ BR x, ˆΠS(y)  . These observations are employed to generalize Algorithm 1. However, we want to emphasis that the approximate Bregman projection is only valid for the active update version of MABoost. 3.1 Smooth Boosting Let k > 0 be a smoothness parameter. A distribution w is smooth w.r.t a given distribution D if wi ≤kDi for all 1 ≤i ≤N. Here, we consider the smoothness w.r.t to the uniform distribution, i.e., Di = 1 N . Then, given a desired smoothness parameter k, we require a boosting algorithm that only constructs distributions w such that wi ≤ k N , while guaranteeing to output a (1 −1 k)accurate hypothesis. 
To this end, we only need to replace the probability simplex S with Sk = {w| PN i=1 wi = 1, 0 ≤wi ≤ k N } in MABoost to obtain a smooth distribution boosting algorithm, called smooth-MABoost. That is, the update rule is: wt+1 = argmin w∈Sk BR(w, zt+1). Note that the proof of Theorem 1 holds for smooth-MABoost, as well. As long as ǫ ≥1 k, the error distribution w∗(w∗ i = 1 Nǫ if f(xi) ̸= ai, and 0 otherwise) is in Sk because 1 Nǫ ≤ k N . Thus, based on the first observation, the error bounds achieved in Theorem 1 hold for ǫ≥1 k. In particular, ǫ= 1 k is reached after a finite number of iterations. This projection problem has already appeared in the literature. An entropic projection algorithm (R is negative entropy), for instance, was proposed in [14]. Using negative entropy and their suggested projection algorithm results in a fast smooth boosting algorithm with the following convergence rate. Theorem 3. Given R(w) = PN i=1 wi log wi and a desired ǫ, smooth-MABoost finds a (1 −ǫ)accurate hypothesis in O(log( 1 ǫ)/γ2) of iterations. 3.2 Combining Datasets Let’s assume we have two sets of data. A primary dataset A and a secondary dataset B. The goal is to train a classifier that achieves (1−ǫ) accuracy on A while limiting the error on dataset B to ǫB ≤1 k. This scenario has many potential applications including transfer learning [24], weighted combination of datasets based on their noise level and emphasizing on a particular region of a sample space as a problem requirement (e.g., a medical diagnostic test that should not make a wrong diagnosis when the sample is a pregnant woman). To address this problem, we only need to replace S in MABoost with Sc = {w| PN i=1 wi = 1, 0≤wi ∀i ∈A ∧0≤wi ≤k N ∀i ∈B} where i ∈A shorthands the indices of samples in A. By generating smooth distributions on B, this algorithm limits the weight of the secondary dataset, which intuitively results in limiting its effect on the final hypothesis. 
The proof of its boosting property is quite similar to Theorem 1 and can be found in the Supplement. 3.3 Sparse Boosting Let R(w)= 1 2||w||2 2. Since in this case the projection onto the simplex is in fact an ℓ1-constrained optimization problem, it is plausible that some of the weights are zero (sparse distribution), which is already a useful observation. To promote the sparsity of the weight vector, we want to directly regularize the projection with the ℓ1 norm, i.e., adding ||w||1 to the objective function in the projection step. It is, however, not possible in MABoost, since ||w||1 is trivially constant on the simplex. Therefore, following the second observation, we split the projection step into two consecutive projections. The first projection is onto K, an N-dimensional unit hypercube K = {y|0≤yi ≤1}. This projection is regularized with the ℓ1 norm and the solution is then projected onto a simplex. Note 6 that the second projection can only make the solution sparser (look at the projection onto simplex algorithm in [25]). Algorithm 2: SparseBoost Let K be a hypercube and S a probability simplex; Set w1 = [ 1 N , . . . , 1 N ]⊤; At t = 1, . . . , T , train ht with wt, set ηt = γt N and 0≤αt < γt 2 , and update zt+1 = wt + ηtdt yt+1 = arg min y∈K ||y −zt+1||2 + αtηt||y||1 wt+1 = arg min w∈S ||w −yt+1||2 Output the final hypothesis f(x)= sign  PT t=1 ηtht(x)  . αt is the regularization factor at round t. Since αtηt controls the sparsity of the solution, it is natural to investigate the maximum value that αt can take, provided that the boosting property still holds. This bound is implicit in the following theorem. Theorem 4. Suppose that SparseBoost generates weak hypotheses h1, . . . , hT whose edges are γ1, . . . , γT . Then, as long as αt ≤γt 2 , the error ǫ of the combined hypothesis f on the training set is bounded as follows: ǫ ≤ 1 PT t=1 1 2γt(γt −2αt) + 1 (16) See the Supplement for the proof. 
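Both projections in SparseBoost admit simple closed-form implementations. Below is a minimal sketch (not the paper's code; it assumes the first objective uses the squared Euclidean norm, as in COMID [17]): the ℓ1-regularized projection onto the hypercube K decomposes per coordinate into a shrink-and-clip, and the simplex projection is the sort-based routine of [25]. The numbers in the demo are arbitrary, chosen only to exercise the two projections.

```python
import numpy as np

def project_simplex(z):
    """Euclidean projection onto the probability simplex (sort-based, as in [25])."""
    u = np.sort(z)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(z)) + 1) > 0)[0][-1]
    return np.maximum(z + (1 - css[rho]) / (rho + 1), 0)

def sparseboost_step(w, d, eta, alpha):
    """One round of Algorithm 2 after the weak learner returns loss vector d.
    Minimizing ||y - z||^2 + alpha*eta*||y||_1 over the hypercube [0,1]^N
    separates per coordinate into y_i = clip(z_i - alpha*eta/2, 0, 1)."""
    z = w + eta * d
    y = np.clip(z - alpha * eta / 2.0, 0.0, 1.0)  # shrink toward 0 -> sparsity
    return project_simplex(y)                     # then project onto the simplex

w_next = sparseboost_step(np.full(4, 0.25), np.array([1.0, 1.0, 1.0, -1.0]),
                          eta=0.3, alpha=1.0)
# w_next = [1/3, 1/3, 1/3, 0]: the shrinkage zeroed the last coordinate
```

Here the shrink step maps z = [0.55, 0.55, 0.55, −0.05] to y = [0.4, 0.4, 0.4, 0], and the simplex projection rescales the support while keeping the zero.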
It is noteworthy that SparseBoost can be seen as a variant of the COMID algorithm [17] with the difference that SparseBoost uses a double-projection or as called in Lemma 4, approximate projection strategy. 3.4 Lazy Update Boosting In this section, we present the proof for the lazy update version of MABoost (LAMABoost) in Theorem 1. The proof technique is novel and can be used to generalize several known online learning algorithms such as OMDA in [26] and Meta algorithm in [27]. Moreover, we show that MadaBoost [10] can be presented in the LAMABoost setting. This gives a simple proof for MadaBoost without making the assumption that the edge sequence is monotonically decreasing (as in [10]). Proof: Assume w∗= [w∗ 1, . . . , w∗ N]⊤is a distribution vector where w∗ i = 1 Nǫ if f(xi) ̸= ai, and 0 otherwise. Then, (w∗−wt)⊤ηtdt = (wt+1 −wt)⊤∇R(zt+1) −∇R(zt)  + (zt+1 −wt+1)⊤∇R(zt+1) −∇R(zt)  + (w∗−zt+1)⊤∇R(zt+1) −∇R(zt)  ≤1 2||wt+1 −wt||2 + 1 2η2 t ||dt||2 ∗+ BR(wt+1, zt+1) −BR(wt+1, zt) + BR(zt+1, zt) −BR(w∗, zt+1) + BR(w∗, zt) −BR(zt+1, zt) ≤1 2||wt+1 −wt||2 + 1 2η2 t ||dt||2 ∗−BR(wt+1, wt) + BR(wt+1, zt+1) −BR(wt, zt) −BR(w∗, zt+1) + BR(w∗, zt) (17) where the first inequality follows applying Lemma 3 to the first term and Lemma 2 to the rest of the terms and the second inequality is the result of applying the exact version of Lemma 1 to BR(wt+1, zt). Moreover, since BR(wt+1, wt)−1 2||wt+1 −wt||2 ≥0, they can be ignored in (17). Summing up the inequality (17) from t = 1 to T , yields −BR(w∗, z1) ≤L T X t=1 1 2η2 t − T X t=1 ηtγt (18) where we used the facts that w∗⊤PT t=1 ηtdt ≥0 and PT t=1 −w⊤ t ηtdt = PT t=1 ηtγt. The above inequality is exactly the same as (15), and replacing −BR with ǫ−1 Nǫ or log(ǫ) yields the same 7 error bounds in Theorem 1. Note that, since the exact version of Lemma 1 is required to obtain (17), this proof does not reveal whether LAMABoost can be generalized to employ the doubleprojection strategy. 
In some particular cases, however, we may show that a double-projection variant of LAMABoost is still a provable boosting algorithm. In the following, we briefly show that MadaBoost can be seen as a double-projection LAMABoost. Algorithm 3: Variant of MadaBoost Let R(w) be the negative entropy and K a unit hypercube; Set z1 = [1, . . . , 1]⊤; At t = 1, . . . , T , train ht with wt, set ft(x)= sign  Pt t′=1 ηt′ht′(x)  and calculate ǫt = PN i=1 1 2|ft(xi) −ai| N , set ηt = ǫtγt and update ∇R(zt+1) = ∇R(zt) + ηtdt →zi t+1 = zi teηtdi t yt+1 = arg min y∈K BR(y, zt+1) →yi t+1 = min(1, zi t+1) wt+1 = arg min w∈S BR(w, yt+1) →wi t+1 = yi t+1 ||yt+1||1 Output the final hypothesis f(x)= sign  PT t=1 ηtht(x)  . Algorithm 3 is essentially MadaBoost, only with a different choice of ηt. It is well-known that the entropy projection onto the probability simplex results in the normalization and thus, the second projection of Algorithm 3. The entropy projection onto the unit hypercube, however, maybe less known and thus, its proof is given in the Supplement. Theorem 5. Algorithm 3 yields a (1−ǫ)-accurate hypothesis after at most T = O( 1 ǫ2γ2 ). This is an important result since it shows that MadaBoost seems, at least in theory, to be slower than what we hoped, namely O( 1 ǫγ2 ). 4 Conclusion and Discussion In this work, we provided a boosting framework that can produce provable boosting algorithms. This framework is mainly suitable for designing boosting algorithms with distribution constraints. A sparse boosting algorithm that samples only a fraction of examples at each round was derived from this framework. However, since our proposed algorithm cannot control the exact number of zeros in the weight vector, a natural extension to this algorithm is to develop a boosting algorithm that receives the sparsity level as an input. 
However, this immediately raises the question: what is the maximum number of examples that can be removed at each round from the dataset, while still achieving a (1−ǫ)-accurate hypothesis? The boosting framework derived in this work is essentially the dual of the online mirror descent algorithm. This framework can be generalized in different ways. Here, we showed that replacing the Bregman projection step with the double-projection strategy, or as we call it approximate Bregman projection, still results in a boosting algorithm in the active version of MABoost, though this may not hold for the lazy version. In some special cases (MadaBoost for instance), however, it can be shown that this double-projection strategy works for the lazy version as well. Our conjecture is that under some conditions on the first convex set, the lazy version can also be generalized to work with the approximate projection operator. Finally, we provided a new error bound for the MadaBoost algorithm that does not depend on any assumption. Unlike the common conjecture, the convergence rate of MadaBoost (at least with our choice of η) is of O(1/ǫ2). Acknowledgments This work was partially supported by SNSF. We would like to thank Professor Rocco Servedio for an inspiring email conversation and our colleague Hui Liang for his helpful comments. 8 References [1] R. E. Schapire. The strength of weak learnability. Journal of Machine Learning Research, 1990. [2] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 1997. [3] L. Breiman. Prediction games and arcing algorithms. Neural Computation, 1999. [4] L. Mason, J. Baxter, P. Bartlett, and M. Frean. Boosting algorithms as gradient descent. In NIPS, 1999. [5] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 1998. [6] R. A. Servedio. 
Smooth boosting and learning with malicious noise. Journal of Machine Learning Research, 2003. [7] D. Gavinsky. Optimally-smooth adaptive boosting and application to agnostic learning. Journal of Machine Learning Research, 2003. [8] J. K. Bradley and R. E. Schapire. Filterboost: Regression and classification on large datasets. In NIPS. 2008. [9] K. Hatano. Smooth boosting using an information-based criterion. In Algorithmic Learning Theory. 2006. [10] C. Domingo and O. Watanabe. Madaboost: A modification of AdaBoost. In COLT, 2000. [11] Y. Freund. Boosting a weak learning algorithm by majority. Journal of Information and Computation, 1995. [12] N. H. Bshouty, D. Gavinsky, and M. Long. On boosting with polynomially bounded distributions. Journal of Machine Learning Research, 2002. [13] M. K. Warmuth, J. Liao, and G. R¨atsch. Totally corrective boosting algorithms that maximize the margin. In ICML, 2006. [14] S. Shalev-Shwartz and Y. Singer. On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms. In COLT, 2008. [15] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003. [16] E. Hazan. A survey: The convex optimization approach to regret minimization. Working draft, 2009. [17] J. C. Duchi, S. Shalev-shwartz, Y. Singer, and A. Tewari. Composite objective mirror descent. In COLT, 2010. [18] Y. Freund and R. E. Schapire. Game theory, on-line prediction and boosting. In COLT, 1996. [19] L. Breiman. Pasting bites together for prediction in large data sets and on-line. Technical report, Dept. Statistics, Univ. California, Berkeley, 1997. [20] M. J. Kearns, R. E. Schapire, and L. M. Sellie. Toward efficient agnostic learning. In COLT, 1992. [21] A. Kalai and V. Kanade. Potential-based agnostic boosting. In NIPS. 2009. [22] S. Ben-David, P. Long, and Y. Mansour. Agnostic boosting. In COLT. 2001. [23] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. 
Cambridge University Press, 2006. [24] W. Dai, Q. Yang, G. Xue, and Y. Yong. Boosting for transfer learning. In ICML, 2007. [25] W. Wang and M. A. Carreira-Perpi˜n´an. Projection onto the probability simplex: An efficient algorithm with a simple proof, and an application. arXiv:1309.1541, 2013. [26] A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In COLT, 2013. [27] C. Chiang, T. Yang, C. Lee, M. Mahdavi, C. Lu, R. Jin, and S. Zhu. Online optimization with gradual variations. In COLT, 2012. 9
Near-Optimal Density Estimation in Near-Linear Time Using Variable-Width Histograms Siu-On Chan Microsoft Research sochan@gmail.com Ilias Diakonikolas University of Edinburgh ilias.d@ed.ac.uk Rocco A. Servedio Columbia University rocco@cs.columbia.edu Xiaorui Sun Columbia University xiaoruisun@cs.columbia.edu Abstract Let p be an unknown and arbitrary probability distribution over [0, 1). We consider the problem of density estimation, in which a learning algorithm is given i.i.d. draws from p and must (with high probability) output a hypothesis distribution that is close to p. The main contribution of this paper is a highly efficient density estimation algorithm for learning using a variable-width histogram, i.e., a hypothesis distribution with a piecewise constant probability density function. In more detail, for any k and ε, we give an algorithm that makes Õ(k/ε2) draws from p, runs in Õ(k/ε2) time, and outputs a hypothesis distribution h that is piecewise constant with O(k log2(1/ε)) pieces. With high probability the hypothesis h satisfies dTV(p, h) ≤ C · optk(p) + ε, where dTV denotes the total variation distance (statistical distance), C is a universal constant, and optk(p) is the smallest total variation distance between p and any k-piecewise constant distribution. The sample size and running time of our algorithm are optimal up to logarithmic factors. The "approximation factor" C in our result is inherent in the problem, as we prove that no algorithm with sample size bounded in terms of k and ε can achieve C < 2 regardless of what kind of hypothesis distribution it uses. 1 Introduction Consider the following fundamental statistical task: Given independent draws from an unknown probability distribution, what is the minimum sample size needed to obtain an accurate estimate of the distribution? 
This is the question of density estimation, a classical problem in statistics with a rich history and an extensive literature (see e.g., [BBBB72, DG85, Sil86, Sco92, DL01]). While this broad question has mostly been studied from an information–theoretic perspective, it is an inherently algorithmic question as well, since the ultimate goal is to describe and understand algorithms that are both computationally and information-theoretically efficient. The need for computationally efficient learning algorithms is only becoming more acute with the recent flood of data across the sciences; the “gold standard” in this “big data” context is an algorithm with information-theoretically (near-) optimal sample size and running time (near-) linear in its sample size. In this paper we consider learning scenarios in which an algorithm is given an input data set which is a sample of i.i.d. draws from an unknown probability distribution. It is natural to expect (and can be easily formalized) that, if the underlying distribution of the data is inherently “complex”, it may be hard to even approximately reconstruct the distribution. But what if the underlying distribution is “simple” or “succinct” – can we then reconstruct the distribution to high accuracy in a computationally and sample-efficient way? In this paper we answer this question in the affirmative for the 1 problem of learning “noisy” histograms, arguably one of the most basic density estimation problems in the literature. To motivate our results, we begin by briefly recalling the role of histograms in density estimation. Histograms constitute “the oldest and most widely used method for density estimation” [Sil86], first introduced by Karl Pearson in [Pea95]. Given a sample from a probability density function (pdf) p, the method partitions the domain into a number of intervals (bins) B1, . . . , Bk, and outputs the “empirical” pdf which is constant within each bin. 
A k-histogram is a piecewise constant distribution over bins B1, . . . , Bk, where the probability mass of each interval Bj, j 2 [k], equals the fraction of observations in the interval. Thus, the goal of the “histogram method” is to approximate an unknown pdf p by an appropriate k-histogram. It should be emphasized that the number k of bins to be used and the “width” and location of each bin are unspecified; they are parameters of the estimation problem and are typically selected in an ad hoc manner. We study the following distribution learning question: Suppose that there exists a k-histogram that provides an accurate approximation to the unknown target distribution. Can we efficiently find such an approximation? In this paper, we provide a fairly complete affirmative answer to this basic question. Given a bound k on the number of intervals, we give an algorithm that uses a near-optimal sample size, runs in near-linear time (in its sample size), and approximates the target distribution nearly as accurately as the best k-histogram. To formally state our main result, we will need a few definitions. We work in a standard model of learning an unknown probability distribution from samples, essentially that of [KMR+94], which is a natural analogue of Valiant’s well-known PAC model for learning Boolean functions [Val84] to the unsupervised setting of learning an unknown probability distribution.1 A distribution learning problem is defined by a class C of distributions over a domain ⌦. The algorithm has access to independent draws from an unknown pdf p, and its goal is to output a hypothesis distribution h that is “close” to the target distribution p. We measure the closeness between distributions using the statistical distance or total variation distance. 
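To make the histogram method concrete, here is a minimal sketch (toy sample values and bin choices, not from the paper) that turns a sample into the empirical k-histogram density for a fixed set of bins:

```python
import numpy as np

def histogram_pdf(samples, bin_edges):
    """Empirical k-histogram over [0,1): each bin's mass is the fraction of
    samples it contains; the density on a bin is mass / bin width."""
    samples = np.asarray(samples)
    counts, _ = np.histogram(samples, bins=bin_edges)
    mass = counts / len(samples)
    return mass / np.diff(bin_edges)   # constant density value on each bin

# Toy illustration with hand-picked samples and a 2-histogram (hypothetical values):
samples = [0.05, 0.1, 0.2, 0.7, 0.8, 0.9, 0.95, 0.99]
edges = np.array([0.0, 0.5, 1.0])
dens = histogram_pdf(samples, edges)
# 3 of 8 samples fall in [0, 0.5): density 3/8 / 0.5 = 0.75 there, 1.25 on [0.5, 1)
```

Note that the resulting piecewise constant function integrates to 1, so it is a valid density; the hard algorithmic question addressed here is how to choose the bin boundaries.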
In the “noiseless” setting, we are promised that p 2 C and the goal is to construct a hypothesis h such that (with high probability) the total variation distance dTV(h, p) between h and p is at most ", where " > 0 is the accuracy parameter. The more challenging “noisy” or agnostic model captures the situation of having arbitrary (or even adversarial) noise in the data. In this setting, we do not make any assumptions about the target density p and the goal is to find a hypothesis h that is almost as accurate as the “best” approximation of p by any distribution in C. Formally, given sample access to a (potentially arbitrary) target distribution p and " > 0, the goal of an agnostic learning algorithm for C is to compute a hypothesis distribution h such that dTV(h, p) ↵· optC(p) + ", where optC(p) := infq2C dTV(q, p) – i.e., optC(p) is the statistical distance between p and the closest distribution to it in C – and ↵≥1 is a constant (that may depend on the class C). We will call such a learning algorithm an ↵-agnostic learning algorithm for C; when ↵> 1 we sometimes refer to this as a semi-agnostic learning algorithm. A distribution f over a finite interval I ✓R is called k-flat if there exists a partition of I into k intervals I1, . . . , Ik such that the pdf f is constant within each such interval. We henceforth (without loss of generality for densities with bounded support) restrict ourselves to the case I = [0, 1). Let Ck be the class of all k-flat distributions over [0, 1). For a (potentially arbitrary) distribution p over [0, 1) we will denote by optk(p) := inff2Ck dTV(f, p). In this terminology, our learning problem is exactly the problem of agnostically learning the class of k-flat distributions. Our main positive result is a near-optimal algorithm for this problem, i.e., a semi-agnostic learning algorithm that has near-optimal sample size and near-linear running time. More precisely, we prove the following: Theorem 1 (Main). 
There is an algorithm A with the following property: Given k ≥ 1, ε > 0, and sample access to a target distribution p, algorithm A uses Õ(k/ε2) independent draws from p, runs in time Õ(k/ε2), and outputs an O(k log2(1/ε))-flat hypothesis distribution h that satisfies dTV(h, p) ≤ O(optk(p)) + ε with probability at least 9/10. 1We remark that our model is essentially equivalent to the "minimax rate of convergence under the L1 distance" in statistics [DL01], and our results carry over to this setting as well. Using standard techniques, the confidence probability can be boosted to 1 − δ, for any δ > 0, with a (necessary) overhead of O(log(1/δ)) in the sample size and the running time. We emphasize that the difficulty of our result lies in the fact that the "optimal" piecewise constant decomposition of the domain is both unknown and approximate (in the sense that optk(p) > 0), and that our algorithm is both sample-optimal and runs in (near-)linear time. Even in the (significantly easier) case that the target p ∈ Ck (i.e., optk(p) = 0), and the optimal partition is explicitly given to the algorithm, it is known that a sample of size Ω(k/ε2) is information-theoretically necessary. (This lower bound can, e.g., be deduced from the standard fact that learning an unknown discrete distribution over a k-element set to statistical distance ε requires an Ω(k/ε2) size sample.) Hence, our algorithm has provably optimal sample complexity (up to a logarithmic factor), runs in essentially sample-linear time, and is α-agnostic for a universal constant α > 1. It should be noted that the sample size required for our problem is well-understood; it follows from the VC theorem (Theorem 3) that O(k/ε2) draws from p are information-theoretically sufficient. However, the theorem is non-constructive, and the "obvious" algorithm following from it has running time exponential in k and 1/ε. 
In recent work, Chan et al [CDSS14] presented an approach employing an intricate combination of dynamic programming and linear programming which yields a poly(k/") time algorithm for the above problem. However, the running time of the [CDSS14] algorithm is ⌦(k3) even for constant values of ", making it impractical for applications. As discussed below our algorithmic approach is significantly different from that of [CDSS14], using neither dynamic nor linear programming. Applications. Nonparametric density estimation for shape restricted classes has been a subject of study in statistics since the 1950’s (see [BBBB72] for an early book on the topic and [Gre56, Bru58, Rao69, Weg70, HP76, Gro85, Bir87] for some of the early literature), and has applications to a range of areas including reliability theory (see [Reb05] and references therein). By using the structural approximation results of Chan et al [CDSS13], as an immediate corollary of Theorem 1 we obtain sample optimal and near-linear time estimators for various well-studied classes of shape restricted densities including monotone, unimodal, and multimodal densities (with unknown mode locations), monotone hazard rate (MHR) distributions, and others (because of space constraints we do not enumerate the exact descriptions of these classes or statements of these results here, but instead refer the interested reader to [CDSS13]). Birg´e [Bir87] obtained a sample optimal and linear time estimator for monotone densities, but prior to our work, no linear time and sample optimal estimator was known for any of the other classes. Our algorithm from Theorem 1 is ↵-agnostic for a constant ↵> 1. It is natural to ask whether a significantly stronger accuracy guarantee is efficiently achievable; in particular, is there an agnostic algorithm with similar running time and sample complexity and ↵= 1? Perhaps surprisingly, we provide a negative answer to this question. 
Even in the simplest nontrivial case that k = 2, and the target distribution is defined over a discrete domain [N] = {1, . . . , N}, any ↵-agnostic algorithm with ↵< 2 requires large sample size: Theorem 2 (Lower bound, Informal statement). Any 1.99-agnostic learning algorithm for 2-flat distributions over [N] requires a sample of size ⌦( p N). See Theorem 7 in Section 4 for a precise statement. Note that there is an exact correspondence between distributions over the discrete domain [N] and pdf’s over [0, 1) which are piecewise constant on each interval of the form [k/N, (k + 1)/N) for k 2 {0, 1, . . . , N −1}. Thus, Theorem 2 implies that no finite sample algorithm can 1.99-agnostically learn even 2-flat distributions over [0, 1). (See Corollary 4.1 in Section 4 for a detailed statement.) Related work. A number of techniques for density estimation have been developed in the mathematical statistics literature, including kernels and variants thereof, nearest neighbor estimators, orthogonal series estimators, maximum likelihood estimators (MLE), and others (see Chapter 2 of [Sil86] for a survey of existing methods). The main focus of these methods has been on the statistical rate of convergence, as opposed to the running time of the corresponding estimators. We remark that the MLE does not exist for very simple classes of distributions (e.g., unimodal distributions with an unknown mode, see e.g, [Bir97]). We note that the notion of agnostic learning is related to the literature on model selection and oracle inequalities [MP007], however this work is of a different flavor and is not technically related to our results. 3 Histograms have also been studied extensively in various areas of computer science, including databases and streaming [JKM+98, GKS06, CMN98, GGI+02] under various assumptions about the input data and the precise objective. 
Recently, Indyk et al. [ILR12] studied the problem of learning a k-flat distribution over [N] under the L2 norm and gave an efficient algorithm with sample complexity O(k² log(N)/ε⁴). Since the L1 distance is a stronger metric, Theorem 1 implies an improved sample and time bound of Õ(k/ε²) for their setting.

2 Preliminaries

Throughout the paper we assume that the underlying distributions have Lebesgue measurable densities. For a pdf p : [0, 1) → ℝ₊ and a Lebesgue measurable subset A ⊆ [0, 1), i.e., A ∈ L([0, 1)), we use p(A) to denote ∫_{z∈A} p(z) dz. The statistical distance or total variation distance between two densities p, q : [0, 1) → ℝ₊ is d_TV(p, q) := sup_{A∈L([0,1))} |p(A) − q(A)|. The statistical distance satisfies the identity d_TV(p, q) = (1/2)‖p − q‖₁, where ‖p − q‖₁, the L1 distance between p and q, is ∫_{[0,1)} |p(x) − q(x)| dx; for convenience, in the rest of the paper we work with L1 distance. We refer to a nonnegative function p over an interval (which need not necessarily integrate to one over the interval) as a "sub-distribution." Given a value γ > 0, we say that a (sub-)distribution p over [0, 1) is γ-well-behaved if sup_{x∈[0,1)} Pr_{X∼p}[X = x] ≤ γ, i.e., no individual real value is assigned more than probability γ under p. Any probability distribution with no atoms is γ-well-behaved for all γ > 0. Our results apply to general distributions over [0, 1) which may have an atomic part as well as a non-atomic part. Given m independent draws s₁, …, s_m from a distribution p over [0, 1), the empirical distribution p̂_m over [0, 1) is the discrete distribution supported on {s₁, …, s_m} defined as follows: for all z ∈ [0, 1), Pr_{X∼p̂_m}[X = z] = |{j ∈ [m] : s_j = z}|/m.

The VC inequality. Let p : [0, 1) → ℝ be a Lebesgue measurable function. Given a family of subsets A ⊆ L([0, 1)) over [0, 1), define ‖p‖_A = sup_{A∈A} |p(A)|.
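For intuition, the A-norm just defined specializes to the total variation distance when A is the family of all measurable sets; for discrete distributions on a finite support this is easy to verify numerically. The following small check is our own illustration (function names are ours, not the paper's):

```python
from itertools import chain, combinations

def a_norm(diff, family):
    """||p - q||_A = sup over A in `family` of |(p - q)(A)|, for a discrete diff p - q."""
    return max(abs(sum(diff.get(z, 0.0) for z in A)) for A in family)

def total_variation(p, q):
    """d_TV(p, q) = (1/2) * sum_z |p(z) - q(z)|; equals the A-norm over ALL subsets."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(z, 0.0) - q.get(z, 0.0)) for z in support)

p = {1: 0.5, 2: 0.5}
q = {1: 0.25, 2: 0.25, 3: 0.5}
support = sorted(set(p) | set(q))
all_subsets = list(chain.from_iterable(combinations(support, r)
                                       for r in range(len(support) + 1)))
diff = {z: p.get(z, 0.0) - q.get(z, 0.0) for z in support}
# sup_A |p(A) - q(A)| over all subsets matches (1/2) * ||p - q||_1, here 0.5.
assert a_norm(diff, all_subsets) == total_variation(p, q) == 0.5
```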
The VC dimension of A is the maximum size of a subset X ⊆ [0, 1) that is shattered by A (a set X is shattered by A if for every Y ⊆ X, some A ∈ A satisfies A ∩ X = Y). If there is a shattered subset of size s for all s ∈ ℤ₊, then we say that the VC dimension of A is ∞. The well-known Vapnik-Chervonenkis (VC) inequality states the following:

Theorem 3 (VC inequality, [DL01, p. 31]). Let p : I → ℝ₊ be a probability density function over I ⊆ ℝ and p̂_m be the empirical distribution obtained after drawing m points from p. Let A ⊆ 2^I be a family of subsets with VC dimension d. Then E[‖p − p̂_m‖_A] ≤ O(√(d/m)).

Partitioning into intervals of approximately equal mass. As a basic primitive, given access to a sample drawn from a γ-well-behaved target distribution p over [0, 1), we will need to partition [0, 1) into Θ(1/γ) intervals each of which has probability Θ(γ) under p. There is a simple algorithm, based on order statistics, which does this and has the following performance guarantee (see Appendix A.2 of [CDSS14]):

Lemma 2.1. Given γ ∈ (0, 1) and access to points drawn from a γ/64-well-behaved distribution p over [0, 1), the procedure Approximately-Equal-Partition draws O((1/γ) log(1/γ)) points from p, runs in time Õ(1/γ), and with probability at least 99/100 outputs a partition of [0, 1) into ℓ = Θ(1/γ) intervals such that p(I_j) ∈ [γ/2, 3γ] for all 1 ≤ j ≤ ℓ.

3 The algorithm and its analysis

In this section we prove our main algorithmic result, Theorem 1. Our approach has the following high-level structure: In Section 3.1 we give an algorithm for agnostically learning a target distribution p that is "nice" in two senses: (i) p is well-behaved (i.e., it does not have any heavy atomic elements), and (ii) opt_k(p) is bounded from above by the error parameter ε. In Section 3.2 we give a general efficient reduction showing how the second assumption can be removed, and in Section 3.3 we briefly explain how the first assumption can be removed, thus yielding Theorem 1.
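The order-statistics primitive behind Lemma 2.1 can be sketched as follows. This is a simplified illustration, not the exact [CDSS14] procedure, and `approx_equal_partition` is a name of our own choosing: sort the sample and cut at every ⌈γm⌉-th order statistic, so each resulting interval holds roughly a γ fraction of the empirical mass.

```python
import math

def approx_equal_partition(sample, gamma):
    """Cut [0, 1) at order statistics so each interval gets ~gamma empirical mass.

    Returns interval endpoints 0 = b_0 < b_1 < ... < b_l = 1.
    """
    s = sorted(sample)
    m = len(s)
    step = max(1, math.ceil(gamma * m))   # sample points per interval
    cuts = [0.0]
    for i in range(step, m, step):
        if s[i] > cuts[-1]:               # skip duplicate cut points caused by atoms
            cuts.append(s[i])
    cuts.append(1.0)
    return cuts

# Example: an evenly spread sample; with gamma = 0.25 we expect 4 intervals.
sample = [i / 100 for i in range(100)]
cuts = approx_equal_partition(sample, 0.25)
assert cuts == [0.0, 0.25, 0.5, 0.75, 1.0]
```

Each of the four intervals above contains 25 of the 100 sample points, i.e., empirical mass exactly γ; on a real sample the masses are only approximately γ, which is what the [γ/2, 3γ] guarantee of Lemma 2.1 accounts for.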
3.1 The main algorithm

In this section we give our main algorithmic result, which handles well-behaved distributions p for which opt_k(p) is not too large:

Theorem 4. There is an algorithm Learn-WB-small-opt-k-histogram that given as input Õ(k/ε²) i.i.d. draws from a target distribution p and a parameter ε > 0, runs in time Õ(k/ε²), and has the following performance guarantee: If (i) p is ε/(384k·log(1/ε))-well-behaved, and (ii) opt_k(p) ≤ ε, then with probability at least 19/20, it outputs an O(k·log²(1/ε))-flat distribution h such that d_TV(p, h) ≤ 2·opt_k(p) + 3ε.

We require some notation and terminology. Let r be a distribution over [0, 1), and let P be a set of disjoint intervals that are contained in [0, 1). The P-flattening of r, denoted (r)_P, is the sub-distribution defined as

  (r)_P(v) = r(I)/|I|  if v ∈ I for some I ∈ P,
  (r)_P(v) = 0         if v does not belong to any I ∈ P.

Observe that if P is a partition of [0, 1), then (since r is a distribution) (r)_P is a distribution. We say that two intervals I, I′ are consecutive if I = [a, b) and I′ = [b, c). Given two consecutive intervals I, I′ contained in [0, 1) and a sub-distribution r, we use α_r(I, I′) to denote the L1 distance between (r)_{{I,I′}} and (r)_{{I∪I′}}, i.e., α_r(I, I′) = ∫_{I∪I′} |(r)_{{I,I′}}(x) − (r)_{{I∪I′}}(x)| dx. Note here that {I ∪ I′} is a set that contains one element, the interval [a, c).

3.1.1 Intuition for the algorithm

We begin with a high-level intuitive explanation of the Learn-WB-small-opt-k-histogram algorithm. It starts in Step 1 by constructing a partition of [0, 1) into z = Θ(k/ε′) intervals I₁, …, I_z (where ε′ = Θ̃(ε)) such that p has weight Θ(ε′/k) on each subinterval. In Step 2 the algorithm draws a sample of Õ(k/ε²) points from p and uses them to define an empirical distribution p̂_m. This is the only step in which points are drawn from p.
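To make the flattening operation and the merge cost α_r concrete, here is a small sketch for piecewise-constant (sub-)distributions represented as (left, right, mass) triples; the representation and function names are ours, not the paper's.

```python
def flatten_mass(intervals):
    """P-flattening: spread each interval's mass uniformly, as (left, right, height)."""
    return [(a, b, mass / (b - a)) for (a, b, mass) in intervals]

def alpha(i1, i2):
    """L1 distance between flattening {I, I'} separately and flattening I u I'.

    i1 = (a, b, mass1) and i2 = (b, c, mass2) must be consecutive intervals.
    """
    (a, b, m1), (b2, c, m2) = i1, i2
    assert b == b2, "intervals must be consecutive"
    h1, h2 = m1 / (b - a), m2 / (c - b)
    h = (m1 + m2) / (c - a)          # height of the single merged flattening
    return abs(h1 - h) * (b - a) + abs(h2 - h) * (c - b)

# Equal densities merge at zero cost; unequal densities pay a positive cost.
assert alpha((0.0, 0.5, 0.3), (0.5, 1.0, 0.3)) == 0.0
assert alpha((0.0, 0.5, 0.4), (0.5, 1.0, 0.2)) > 0.0
```

This is exactly the quantity the algorithm thresholds when deciding whether merging two consecutive intervals would noticeably distort the hypothesis.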
For the rest of this intuitive explanation we pretend that the weight p̂_m(I) that the empirical distribution p̂_m assigns to each interval I is actually the same as the true weight p(I) (Lemma 3.1 below shows that this is not too far from the truth). Before continuing with our explanation of the algorithm, let us digress briefly by imagining for a moment that the target distribution p actually is a k-flat distribution (i.e., that opt_k(p) = 0). In this case there are at most k "breakpoints", and hence at most k intervals I_j for which α_{p̂_m}(I_j, I_{j+1}) > 0, so computing the α_{p̂_m}(I_j, I_{j+1}) values would be an easy way to identify the true breakpoints (and given these it is not difficult to construct a high-accuracy hypothesis). In reality, we may of course have opt_k(p) > 0; this means that if we try to use the α_{p̂_m}(I_j, I_{j+1}) criterion to identify "breakpoints" of the optimal k-flat distribution that is closest to p (call this k-flat distribution q), we may sometimes be "fooled" into thinking that q has a breakpoint in an interval I_j where it does not (but rather the value α_{p̂_m}(I_j, I_{j+1}) is large because of the difference between q and p). However, recall that by assumption we have opt_k(p) ≤ ε; this bound can be used to show that there cannot be too many intervals I_j for which a large value of α_{p̂_m}(I_j, I_{j+1}) suggests a "spurious breakpoint" (see the proof of Lemma 3.3). This is helpful, but in and of itself not enough: since our partition I₁, …, I_z divides [0, 1) into k/ε′ intervals, a naive approach based on this would result in a (k/ε′)-flat hypothesis distribution, which in turn would necessitate a sample complexity of Õ(k/ε′³), which is unacceptably high. Instead, our algorithm performs a careful process of iteratively merging consecutive intervals for which the α_{p̂_m}(I_j, I_{j+1}) criterion indicates that a merge will not adversely affect the final accuracy by too much.
As a result of this process we end up with k·polylog(1/ε) intervals for the final hypothesis, which enables us to output a (k·polylog(1/ε′))-flat final hypothesis using Õ(k/ε′²) draws from p. In more detail, this iterative merging is carried out by the main loop of the algorithm in Step 4. Going into the t-th iteration of the loop, the algorithm has a partition P_{t−1} of [0, 1) into disjoint sub-intervals, and a set F_{t−1} ⊆ P_{t−1} (i.e., every interval belonging to F_{t−1} also belongs to P_{t−1}). Initially P₀ contains all the intervals I₁, …, I_z and F₀ is empty. Intuitively, the intervals in P_{t−1} \ F_{t−1} are still being "processed"; such an interval may possibly be merged with a consecutive interval from P_{t−1} \ F_{t−1} if doing so would only incur a small "cost" (see condition (iii) of Step 4(b) of the algorithm). The intervals in F_{t−1} have been "frozen" and will not be altered or used subsequently in the algorithm.

3.1.2 The algorithm

Algorithm Learn-WB-small-opt-k-histogram:

Input: parameters k ≥ 1, ε > 0; access to i.i.d. draws from target distribution p over [0, 1)
Output: If (i) p is ε/(384k·log(1/ε))-well-behaved and (ii) opt_k(p) ≤ ε, then with probability at least 99/100 the output is a distribution q such that d_TV(p, q) ≤ 2·opt_k(p) + 3ε.

1. Let ε′ = ε/log(1/ε). Run Algorithm Approximately-Equal-Partition on input parameter ε′/(6k) to partition [0, 1) into z = Θ(k/ε′) intervals I₁ = [i₀, i₁), …, I_z = [i_{z−1}, i_z), where i₀ = 0 and i_z = 1, such that with probability at least 99/100, for each j ∈ {1, …, z} we have p([i_{j−1}, i_j)) ∈ [ε′/(12k), ε′/(2k)] (assuming p is ε′/(384k)-well-behaved).

2. Draw m = Õ(k/ε′²) points from p and let p̂_m be the resulting empirical distribution.

3. Set P₀ = {I₁, I₂, …, I_z}, and F₀ = ∅.

4. Let s = log₂(1/ε′). Repeat for t = 1, … until t = s:

   (a) Initialize P_t to ∅ and F_t to F_{t−1}.

   (b) Without loss of generality, assume P_{t−1} = {I_{t−1,1}, …, I_{t−1,z_{t−1}}}, where interval I_{t−1,i} is to the left of I_{t−1,i+1} for all i.
Scan left to right across the intervals in P_{t−1} (i.e., iterate over i = 1, …, z_{t−1} − 1). If intervals I_{t−1,i}, I_{t−1,i+1} are (i) both not in F_{t−1}, and (ii) α_{p̂_m}(I_{t−1,i}, I_{t−1,i+1}) > ε′/(2k), then add both I_{t−1,i} and I_{t−1,i+1} into F_t.

   (c) Initialize i to 1, and repeatedly execute one of the following four (mutually exclusive and exhaustive) cases until i > z_{t−1}:

      [Case 1] i ≤ z_{t−1} − 1 and I_{t−1,i} = [a, b), I_{t−1,i+1} = [b, c) are consecutive intervals both not in F_t. Add the merged interval I_{t−1,i} ∪ I_{t−1,i+1} = [a, c) into P_t. Set i ← i + 2.

      [Case 2] i ≤ z_{t−1} − 1 and I_{t−1,i} ∈ F_t. Set i ← i + 1.

      [Case 3] i ≤ z_{t−1} − 1, I_{t−1,i} ∉ F_t and I_{t−1,i+1} ∈ F_t. Add I_{t−1,i} into F_t and set i ← i + 2.

      [Case 4] i = z_{t−1}. Add I_{t−1,z_{t−1}} into F_t if it is not already in F_t, and set i ← i + 1.

   (d) Set P_t ← P_t ∪ F_t.

5. Output the |P_s|-flat hypothesis distribution (p̂_m)_{P_s}.

3.1.3 Analysis of the algorithm and proof of Theorem 4

It is straightforward to verify the claimed running time given Lemma 2.1, which bounds the running time of Approximately-Equal-Partition. Indeed, we note that Step 2, which simply draws Õ(k/ε′²) points and constructs the resulting empirical distribution, dominates the overall running time. In the rest of this subsubsection we prove correctness. We first observe that with high probability the empirical distribution p̂_m defined in Step 2 gives a high-accuracy estimate of the true probability of any union of consecutive intervals from I₁, …, I_z. The following lemma from [CDSS14] follows from the standard multiplicative Chernoff bound:

Lemma 3.1 (Lemma 12, [CDSS14]). With probability 99/100 over the sample drawn in Step 2, for every 0 ≤ a < b ≤ z we have that |p̂_m([i_a, i_b)) − p([i_a, i_b))| ≤ √(ε′(b − a)) · ε′/(10k).

We henceforth assume that this 99/100-likely event indeed takes place, so the above inequality holds for all 0 ≤ a < b ≤ z.
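Under the simplifying assumption that intervals are (left, right, empirical-mass) triples, one round of the freeze-then-merge loop of Step 4 can be sketched as follows. This is our own condensed rendering with an inlined merge-cost helper, not the paper's exact procedure:

```python
def alpha(i1, i2):
    """Merge cost: L1 gap between flattening two intervals separately vs. their union."""
    (a, b, m1), (_, c, m2) = i1, i2
    h = (m1 + m2) / (c - a)
    return abs(m1 / (b - a) - h) * (b - a) + abs(m2 / (c - b) - h) * (c - b)

def one_round(P, frozen, thresh):
    """One iteration of Step 4 on a left-to-right list P of (left, right, mass) triples."""
    F = set(frozen)
    # Step 4(b): freeze both members of any adjacent pair, not already frozen going
    # into this round, whose merge cost exceeds the threshold.
    for I, J in zip(P, P[1:]):
        if I not in frozen and J not in frozen and alpha(I, J) > thresh:
            F.add(I)
            F.add(J)
    # Step 4(c): merge adjacent unfrozen pairs (Case 1), skip frozen intervals
    # (Case 2), freeze an unfrozen interval whose right neighbor is frozen (Case 3),
    # and freeze a trailing unfrozen interval (Case 4).
    newP, i = [], 0
    while i < len(P):
        if i + 1 < len(P) and P[i] not in F and P[i + 1] not in F:
            (a, b, m1), (_, c, m2) = P[i], P[i + 1]
            newP.append((a, c, m1 + m2))
            i += 2
        elif P[i] in F:
            i += 1
        elif i + 1 < len(P):
            F.add(P[i])
            i += 2
        else:
            F.add(P[i])
            i += 1
    newP.extend(sorted(F))    # Step 4(d): P_t also keeps every frozen interval
    return sorted(newP), F

# A flat region merges; the density jump at 0.75 freezes the two intervals around it.
P = [(0.0, 0.25, 0.1), (0.25, 0.5, 0.1), (0.5, 0.75, 0.1), (0.75, 1.0, 0.7)]
newP, F = one_round(P, set(), thresh=0.05)
assert (0.0, 0.5, 0.2) in newP
```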
We use this to show that the α_{p̂_m}(I_{t−1,i}, I_{t−1,i+1}) value that the algorithm uses in Step 4(b) is a good proxy for the actual value α_p(I_{t−1,i}, I_{t−1,i+1}) (which of course is not accessible to the algorithm):

Lemma 3.2. Fix 1 ≤ t ≤ s. Then we have |α_{p̂_m}(I_{t−1,i}, I_{t−1,i+1}) − α_p(I_{t−1,i}, I_{t−1,i+1})| ≤ 2ε′/(5k).

Due to space constraints the proofs of all lemmas in this section are deferred to Appendix A. For the rest of the analysis, let q denote a fixed k-flat distribution that is closest to p, so ‖p − q‖₁ = opt_k(p). (We note that while opt_k(p) is defined as inf_{q∈C} ‖p − q‖₁, standard closure arguments can be used to show that the infimum is actually achieved by some k-flat distribution q.) Let Q be the partition of [0, 1) corresponding to the intervals on which q is piecewise constant. We say that a breakpoint of Q is a value in [0, 1] that is an endpoint of one of the (at most) k intervals in Q. The following important lemma bounds the number of intervals in the final partition P_s:

Lemma 3.3. P_s contains at most O(k log²(1/ε)) intervals.

The following definition will be useful:

Definition 5. Let P denote any partition of [0, 1). We say that partition P is ε′-good for (p, q) if for every breakpoint v of Q, the interval I in P containing v satisfies p(I) ≤ ε′/(2k).

The above definition is justified by the following lemma:

Lemma 3.4. If P is ε′-good for (p, q), then ‖p − (p)_P‖₁ ≤ 2·opt_k(p) + ε′.

We are now in a position to prove the following:

Lemma 3.5. There exists a partition R of [0, 1) that is ε′-good for (p, q) and satisfies ‖(p)_{P_s} − (p)_R‖₁ ≤ ε.

We construct the claimed R based on P_s, P_{s−1}, …, P₀ as follows: (i) if I is an interval in P_s not containing a breakpoint of Q, then I is also in R; (ii) if I is an interval in P_s that does contain a breakpoint of Q, then we further partition I into a set of intervals S in a recursive manner using P_{s−1}, …, P₀ (see Appendix A.4).

Finally, by putting everything together we can prove Theorem 4:

Proof of Theorem 4.
By Lemma 3.4 applied to R, we have that ‖p − (p)_R‖₁ ≤ 2·opt_k(p) + ε′. By Lemma 3.5, we have that ‖(p)_{P_s} − (p)_R‖₁ ≤ ε; thus the triangle inequality gives that ‖p − (p)_{P_s}‖₁ ≤ 2·opt_k(p) + 2ε. By Lemma 3.3 the partition P_s contains at most O(k log²(1/ε)) intervals, so both (p)_{P_s} and (p̂_m)_{P_s} are O(k log²(1/ε))-flat distributions. Thus, ‖(p)_{P_s} − (p̂_m)_{P_s}‖₁ = ‖(p)_{P_s} − (p̂_m)_{P_s}‖_{A_ℓ}, where ℓ = O(k log²(1/ε)) and A_ℓ is the family of all subsets of [0, 1) that consist of unions of up to ℓ intervals (which has VC dimension 2ℓ). Consequently, by the VC inequality (Theorem 3), for a suitable choice of m = Õ(k/ε′²) we have that E[‖(p)_{P_s} − (p̂_m)_{P_s}‖₁] ≤ 4ε′/100. Markov's inequality now gives that with probability at least 96/100 we have ‖(p)_{P_s} − (p̂_m)_{P_s}‖₁ ≤ ε′. Hence, with overall probability at least 19/20 (recall the 1/100 error probability incurred in Lemma 3.1), we have that ‖p − (p̂_m)_{P_s}‖₁ ≤ 2·opt_k(p) + 3ε, and the theorem is proved.

3.2 A general reduction to the case of small opt for semi-agnostic learning

In this section we show that under mild conditions, the general problem of agnostic distribution learning for a class C can be efficiently reduced to the special case when opt_C is not too large compared with ε. While the reduction is simple and generic, we have not previously encountered it in the literature on density estimation, so we provide a proof in Appendix A.5. A precise statement of the reduction follows:

Theorem 6. Let A be an algorithm with the following behavior: A is given as input i.i.d. points drawn from p and a parameter ε > 0. A uses m(ε) = Ω(1/ε) draws from p, runs in time t(ε) = Ω(1/ε), and satisfies the following: if opt_C(p) ≤ 10ε, then with probability at least 19/20 it outputs a hypothesis distribution q such that (i) ‖p − q‖₁ ≤ α·opt_C(p) + ε, where α is an absolute constant, and (ii) given any r ∈ [0, 1), the value q(r) of the pdf of q at r can be efficiently computed in T time steps.
Then there is an algorithm A′ with the following performance guarantee: A′ is given as input i.i.d. draws from p and a parameter ε > 0.² Algorithm A′ uses O(m(ε/10) + log log(1/ε)/ε²) draws from p, runs in time O(t(ε/10)) + T·Õ(1/ε²), and outputs a hypothesis distribution q′ such that with probability at least 39/40 we have ‖p − q′‖₁ ≤ 10(α + 2)·opt_C(p) + ε.

3.3 Dealing with distributions that are not well behaved

The assumption that the target distribution p is Θ̃(ε/k)-well-behaved can be straightforwardly removed by following the approach in Section 3.6 of [CDSS14]. That paper presents a simple linear-time sampling-based procedure, using Õ(k/ε) samples, that with high probability identifies all the "heavy" elements (atoms which cause p to not be well-behaved, if any such points exist). Our overall algorithm first runs this procedure to find the set S of "heavy" elements, and then runs the algorithm presented above (which succeeds for well-behaved distributions, i.e., distributions that have no "heavy" elements) using as its target distribution the conditional distribution of p over [0, 1) \ S (let us denote this conditional distribution by p′). A straightforward analysis given in [CDSS14] shows that (i) opt_k(p) ≥ opt_k(p′), and moreover (ii) d_TV(p, p′) ≤ opt_k(p). Thus, by the triangle inequality, any hypothesis h satisfying d_TV(h, p′) ≤ C·opt_k(p′) + ε will also satisfy d_TV(h, p) ≤ (C + 1)·opt_k(p) + ε, as desired.

4 Lower bounds on agnostic learning

In this section we establish that α-agnostic learning with α < 2 is information-theoretically impossible, thus establishing Theorem 2. Fix any 0 < t < 1/2. We define a probability distribution D_t over a finite set of discrete distributions over the domain [2N] = {1, …, 2N} as follows. (We assume without loss of generality below that t is rational and that tN is an integer.) A draw of p_{S₁,S₂,t} from D_t is obtained as follows:
1. A set S₁ ⊂ [N] is chosen uniformly at random from all subsets of [N] that contain precisely tN elements. For i ∈ [N], the distribution p_{S₁,S₂,t} assigns probability weight as follows:

   p_{S₁,S₂,t}(i) = 1/(4N) if i ∈ S₁,   p_{S₁,S₂,t}(i) = (1/(2N))·(1 + t/(2(1 − t))) if i ∈ [N] \ S₁.

2. A set S₂ ⊂ {N + 1, …, 2N} is chosen uniformly at random from all subsets of {N + 1, …, 2N} that contain precisely tN elements. For i ∈ {N + 1, …, 2N}, the distribution p_{S₁,S₂,t} assigns probability weight as follows:

   p_{S₁,S₂,t}(i) = 3/(4N) if i ∈ S₂,   p_{S₁,S₂,t}(i) = (1/(2N))·(1 − t/(2(1 − t))) if i ∈ {N + 1, …, 2N} \ S₂.

Using a birthday-paradox-type argument, we show that no o(√N)-sample algorithm can successfully distinguish between a distribution p_{S₁,S₂,t} ∼ D_t and the uniform distribution over [2N]. We then leverage this indistinguishability to show that any (2 − δ)-semi-agnostic learning algorithm, even for 2-flat distributions, must use a sample of size Ω(√N) (see Appendix B for these proofs):

Theorem 7. Fix any δ > 0 and any function f(·). There is no algorithm A with the following property: given ε > 0 and access to independent points drawn from an unknown distribution p over [2N], algorithm A makes o(√N)·f(ε) draws from p and with probability at least 51/100 outputs a hypothesis distribution h over [2N] satisfying ‖h − p‖₁ ≤ (2 − δ)·opt₂(p) + ε.

As described in the Introduction, via the obvious correspondence that maps distributions over [N] to distributions over [0, 1), we get the following:

Corollary 4.1. Fix any δ > 0 and any function f(·). There is no algorithm A with the following property: given ε > 0 and access to independent draws from an unknown distribution p over [0, 1), algorithm A makes f(ε) draws from p and with probability at least 51/100 outputs a hypothesis distribution h over [0, 1) satisfying ‖h − p‖₁ ≤ (2 − δ)·opt₂(p) + ε.

² Note that now there is no guarantee that opt_C(p) ≤ ε; indeed, the point here is that opt_C(p) may be arbitrary.

References

[AJOS14] J. Acharya, A. Jafarpour, A.
Orlitsky, and A. T. Suresh. Near-optimal-sample estimators for spherical Gaussian mixtures. Technical report, http://arxiv.org/abs/1402.4746, 19 Feb 2014.

[BBBB72] R. E. Barlow, D. J. Bartholomew, J. M. Bremner, and H. D. Brunk. Statistical Inference under Order Restrictions. Wiley, New York, 1972.

[Bir87] L. Birgé. Estimating a density under order restrictions: Nonasymptotic minimax risk. Annals of Statistics, 15(3):995–1012, 1987.

[Bir97] L. Birgé. Estimation of unimodal densities without smoothness assumptions. Annals of Statistics, 25(3):970–981, 1997.

[Bru58] H. D. Brunk. On the estimation of parameters restricted by inequalities. Annals of Mathematical Statistics, 29(2):437–454, 1958.

[CDSS13] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Learning mixtures of structured distributions over discrete domains. In SODA, pages 1380–1394, 2013.

[CDSS14] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Efficient density estimation via piecewise polynomial approximation. Technical report, http://arxiv.org/abs/1305.3207; conference version in STOC, pages 604–613, 2014.

[CMN98] S. Chaudhuri, R. Motwani, and V. Narasayya. Random sampling for histogram construction: How much is enough? In SIGMOD Conference, pages 436–447, 1998.

[DDS12] A. De, I. Diakonikolas, and R. Servedio. Inverse problems in approximate uniform generation. Available at http://arxiv.org/pdf/1211.1722v1.pdf, 2012.

[DG85] L. Devroye and L. Györfi. Nonparametric Density Estimation: The L1 View. John Wiley & Sons, 1985.

[DK14] C. Daskalakis and G. Kamath. Faster and sample near-optimal algorithms for proper learning mixtures of Gaussians. In COLT, pages 1183–1213, 2014.

[DL01] L. Devroye and G. Lugosi. Combinatorial Methods in Density Estimation. Springer Series in Statistics, Springer, 2001.

[GGI+02] A. Gilbert, S. Guha, P. Indyk, Y. Kotidis, S. Muthukrishnan, and M. Strauss. Fast, small-space algorithms for approximate histogram maintenance.
In STOC, pages 389–398, 2002.

[GKS06] S. Guha, N. Koudas, and K. Shim. Approximation and streaming algorithms for histogram construction problems. ACM Trans. Database Syst., 31(1):396–438, 2006.

[Gre56] U. Grenander. On the theory of mortality measurement. Skand. Aktuarietidskr., 39:125–153, 1956.

[Gro85] P. Groeneboom. Estimating a monotone density. In Proc. of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, pages 539–555, 1985.

[HP76] D. L. Hanson and G. Pledger. Consistency in concave regression. The Annals of Statistics, 4(6):1038–1050, 1976.

[ILR12] P. Indyk, R. Levi, and R. Rubinfeld. Approximating and testing k-histogram distributions in sub-linear time. In PODS, pages 15–22, 2012.

[JKM+98] H. V. Jagadish, N. Koudas, S. Muthukrishnan, V. Poosala, K. Sevcik, and T. Suel. Optimal histograms with quality guarantees. In VLDB, pages 275–286, 1998.

[KMR+94] M. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. Schapire, and L. Sellie. On the learnability of discrete distributions. In Proc. 26th STOC, pages 273–282, 1994.

[MP007] P. Massart and J. Picard. Concentration Inequalities and Model Selection: Ecole d'Eté de Probabilités de Saint-Flour XXXIII, 2003. Lecture Notes in Mathematics, Springer, 2007.

[Pea95] K. Pearson. Contributions to the mathematical theory of evolution. II. Skew variation in homogeneous material. Philosophical Trans. of the Royal Society of London, 186:343–414, 1895.

[Rao69] B. L. S. Prakasa Rao. Estimation of a unimodal density. Sankhya Ser. A, 31:23–36, 1969.

[Reb05] L. Reboul. Estimation of a function under shape restrictions. Applications to reliability. Ann. Statist., 33(3):1330–1356, 2005.

[Sco92] D. W. Scott. Multivariate Density Estimation: Theory, Practice and Visualization. Wiley, New York, 1992.

[Sil86] B. W. Silverman. Density Estimation. Chapman and Hall, London, 1986.

[Val84] L. G. Valiant. A theory of the learnable. In Proc. 16th Annual ACM Symposium on Theory of Computing (STOC), pages 436–445.
ACM Press, 1984.

[Weg70] E. J. Wegman. Maximum likelihood estimation of a unimodal density, I and II. Ann. Math. Statist., 41:457–471 and 2169–2174, 1970.
Efficient Minimax Signal Detection on Graphs

Jing Qian
Division of Systems Engineering
Boston University
Brookline, MA 02446
jingq@bu.edu

Venkatesh Saligrama
Department of Electrical and Computer Engineering
Boston University
Boston, MA 02215
srv@bu.edu

Abstract

Several problems such as network intrusion, community detection, and disease outbreak can be described by observations attributed to nodes or edges of a graph. In these applications the presence of intrusion, community, or disease outbreak is characterized by novel observations on some unknown connected subgraph. These problems can be formulated in terms of optimization of suitable objectives on connected subgraphs, a problem which is generally computationally difficult. We overcome the combinatorics of connectivity by embedding connected subgraphs into linear matrix inequalities (LMIs). Computationally efficient tests are then realized by optimizing convex objective functions subject to these LMI constraints. We prove, by means of a novel Euclidean embedding argument, that our tests are minimax optimal for the exponential family of distributions on 1-D and 2-D lattices. We show that the internal conductance of the connected subgraph family plays a fundamental role in characterizing detectability.

1 Introduction

Signals associated with nodes or edges of a graph arise in a number of applications including sensor network intrusion, disease outbreak detection, and virus detection in communication networks. Many problems in these applications can be framed from the perspective of hypothesis testing between a null and an alternative hypothesis. Observations under the null and the alternative follow different distributions. The alternative is actually composite and identified by sub-collections of connected subgraphs. To motivate the setup, consider the disease outbreak problem described in [1]. Nodes there are associated with counties, and observations associated with each county correspond to reported cases of a disease.
Under the null distribution, observations at each county are assumed to be Poisson distributed and independent across different counties. Under the alternative there is a contiguous sub-collection of counties (a connected subgraph) that each experience elevated case counts on average relative to their normal levels but are otherwise assumed to be independent. The eventual shape of the sub-collection of contiguous counties is highly unpredictable due to uncontrollable factors.

In this paper we develop a novel approach for signal detection on graphs that is both statistically effective and computationally efficient. Our approach is based on optimizing an objective function subject to subgraph connectivity constraints, which is related to generalized likelihood ratio tests (GLRTs). GLRTs maximize likelihood functions over combinatorially many connected subgraphs, which is computationally intractable. On the other hand, statistically, GLRTs have been shown to be asymptotically minimax optimal for the exponential class of distributions on lattice graphs and trees [2], thus motivating our approach. We deal with combinatorial connectivity constraints by obtaining a novel characterization of connected subgraphs in terms of convex linear matrix inequalities (LMIs). In addition we show how our LMI constraints naturally incorporate other features such as shape and size. We show that the resulting tests are essentially minimax optimal for the exponential family of distributions on 1-D and 2-D lattices. The internal conductance of the subgraph, a parameter in our LMI constraint, plays a central role in characterizing detectability.

Related Work: The literature on signal detection on graphs can be organized into parametric and non-parametric methods, which can be further sub-divided into computational and statistical analysis themes. Parametric methods originated in the scan statistics literature [3], with more recent work including that of [4, 5, 6, 1, 7, 8] focusing on graphs.
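As a toy illustration of this elevated-observation setup (our own sketch, not code from the paper; `sample_lattice` is a name we introduce), one can simulate observations on a 2-D lattice under the null, i.i.d. standard normals, and under an alternative where the mean is elevated by μ on a connected subgraph:

```python
import random

def sample_lattice(n, elevated=frozenset(), mu=0.0, rng=random):
    """Observations x_v on an n x n lattice: N(0,1) noise, mean mu on `elevated`."""
    return {(i, j): rng.gauss(0.0, 1.0) + (mu if (i, j) in elevated else 0.0)
            for i in range(n) for j in range(n)}

# H0: no elevated region; H1: a connected row of nodes with elevated mean mu = 3.
rng = random.Random(0)
path = frozenset({(2, j) for j in range(5)})             # a connected subgraph (one row)
x0 = sample_lattice(5, rng=rng)                          # null
x1 = sample_lattice(5, elevated=path, mu=3.0, rng=rng)   # alternative

# Averaging over the true subgraph separates the two hypotheses; the hard part,
# which the paper addresses, is that the subgraph is unknown and combinatorial.
avg0 = sum(x0[v] for v in path) / len(path)
avg1 = sum(x1[v] for v in path) / len(path)
assert avg1 > avg0
```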
Much of this literature develops scanning methods that optimize over rectangles, circles or neighborhood balls [5, 6] across different regions of the graph. However, the drawbacks of simple shapes and the need for non-parametric methods to improve detection power are well recognized. This has led to new approaches such as simulated annealing [5, 4], but these lack statistical analysis. More recent work in the ML literature [9] describes a semi-definite programming algorithm for non-parametric shape detection, which is similar to our work here. However, unlike ours, their method requires a heuristic rounding step, which does not lend itself to statistical analysis. In this context a number of recent papers have focused on statistical analysis [10, 2, 11, 12] with non-parametric shapes. They derive fundamental bounds for signal detection for the elevated-means testing problem in the Gaussian setting on special graphs such as trees and lattices. In this setting, under the null hypothesis the observations are assumed to be independent identically distributed (IID) standard normal random variables. Under the alternative the Gaussian random variables are assumed to be standard normal except on some connected subgraph, where the mean μ is elevated. They show that the GLRT achieves "near"-minimax optimality in a number of interesting scenarios. While this work is interesting, the suggested algorithms are computationally intractable. To the best of our knowledge only [13, 14] explore a computationally tractable approach and also provide statistical guarantees. Nevertheless, this line of work does not explicitly deal with connected subgraphs (complex shapes) but deals with more general clusters. These are graph partitions with small out-degree. Although this appears to be a natural relaxation of connected subgraphs/complex shapes, it turns out to be quite loose¹ and leads to a substantial gap in statistical effectiveness for our problem.
In contrast, we develop a new method for signal detection of complex shapes that is not only statistically effective but also computationally efficient.

2 Problem Formulation

Let G = (V, E) denote an undirected unweighted graph with |V| = n nodes and |E| = m edges. Associated with each node v ∈ V are observations x_v ∈ ℝᵖ. We assume observations are distributed P₀ under the null hypothesis. The alternative is composite, and the observed distribution, P_S, is parameterized by S ⊆ V belonging to a class of subsets Λ ⊆ S, where S is the superset. We denote by S_K ⊆ S the collection of size-K subsets. E_S = {(u, v) ∈ E : u ∈ S, v ∈ S} denotes the induced edge set on S. We let x_S denote the collection of random variables on the subset S ⊆ V. S^c denotes the nodes V − S. Our goal is to design a decision rule, π, that maps observations xⁿ = (x_v)_{v∈V} to {0, 1}, with zero denoting the null hypothesis and one denoting the alternative. We formulate risk following the lines of [12] and combine Type I and Type II errors:

  R(π) = P₀(π(xⁿ) = 1) + max_{S∈Λ} P_S(π(xⁿ) = 0)    (1)

Definition 1 (δ-Separable). We say that the composite hypothesis problem is δ-separable if there exists a test π such that R(π) ≤ δ.

We next describe asymptotic notions of detectability and separability. These notions require us to consider large-graph limits. To this end we index a sequence of graphs G_n = (V_n, E_n) with n → ∞ and an associated sequence of tests π_n.

Definition 2 (Separability). We say that the composite hypothesis problem is asymptotically δ-separable if there is some sequence of tests, π_n, such that R(π_n) ≤ δ for sufficiently large n. It is said to be asymptotically separable if R(π_n) → 0. The composite hypothesis problem is said to be asymptotically inseparable if no such test exists.

Sometimes, additional granular measures of performance are useful to determine the asymptotic behavior of the Type I and Type II errors.
This motivates the following definition:

Definition 3 (δ-Detectability). We say that the composite hypothesis testing problem is δ-detectable if there is a sequence of tests, π_n, such that

  sup_{S∈Λ} P_S(π_n(xⁿ) = 0) → 0 as n → ∞,  and  lim sup_n P₀(π_n(xⁿ) = 1) ≤ δ.

In general δ-detectability does not imply separability. For instance, consider x ∼ N(0, σ²) under H₀ and x ∼ N(μ, σ²/n) under H₁. This problem is δ-detectable for μ/σ ≥ √(2 log(1/δ)) but not separable.

The generalized likelihood ratio test (GLRT) is often used as a statistical test for composite hypothesis testing. Suppose φ₀(xⁿ) and φ_S(xⁿ) are probability density functions associated with P₀ and P_S respectively. The GLRT thresholds the "best-case" likelihood ratio, namely,

  GLRT: ℓ_max(xⁿ) = max_{S∈Λ} ℓ_S(xⁿ), deciding H₁ if ℓ_max(xⁿ) > η and H₀ otherwise, where ℓ_S(xⁿ) = log(φ_S(xⁿ)/φ₀(xⁿ))    (2)

Local Behavior: Without additional structure, the likelihood ratio ℓ_S(x) for a fixed S ∈ Λ is a function of observations across all nodes. Many applications exhibit local behavior, namely, the observations under the two hypotheses behave distinctly only on some small subset of nodes (as in disease outbreaks). This justifies introducing local statistical models in the following section.

Combinatorial: The class Λ is combinatorial, such as collections of connected subgraphs, and the GLRT is not generally computationally tractable. On the other hand, the GLRT is minimax optimal for special classes of distributions and graphs, which motivates the development of tractable algorithms.

2.1 Statistical Models & Subgraph Classes

The foregoing discussion motivates introducing local models, which we present next. Then, informed by existing results on separability, we categorize subgraph classes by shape, size and connectivity.

¹A connected subgraph on a 2-D lattice of size K has out-degree at least Ω(√K), while the set of subgraphs with out-degree Ω(√K) includes disjoint unions of Ω(√K/4) nodes. So statistical requirements with out-degree constraints can be no better than those for arbitrary K-sets.
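For the Gaussian elevated-mean case with unit variance, ℓ_S(x) reduces to Σ_{v∈S} (μ·x_v − μ²/2), so the GLRT over a small, explicitly enumerated class Λ can be computed by brute force. The sketch below is our own illustration; the whole point of the paper is to avoid this combinatorial enumeration when Λ is all connected subgraphs:

```python
def glrt_statistic(x, candidates, mu):
    """Brute-force GLRT for elevated Gaussian means: max_S of the log-likelihood ratio.

    x: dict node -> observation; candidates: iterable of node subsets (the class Lambda).
    For N(mu, 1) vs. N(0, 1) on S, l_S(x) = sum_{v in S} (mu * x_v - mu**2 / 2).
    """
    return max(sum(mu * x[v] - mu**2 / 2 for v in S) for S in candidates)

def glrt_test(x, candidates, mu, eta):
    """Decide H1 (True) iff the best-case likelihood ratio exceeds the threshold eta."""
    return glrt_statistic(x, candidates, mu) > eta

# Tiny example on a 4-node path graph; Lambda = all connected sub-paths of size 2.
x = {0: 0.1, 1: 2.0, 2: 2.2, 3: -0.3}
Lambda = [{0, 1}, {1, 2}, {2, 3}]
best = glrt_statistic(x, Lambda, mu=1.0)
# {1, 2} attains the maximum: (2.0 - 0.5) + (2.2 - 0.5) = 3.2
assert abs(best - 3.2) < 1e-9
```

The number of candidate subsets grows combinatorially with the graph, which is exactly why the paper replaces this enumeration with convex optimization under LMI constraints.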
2.1.1 Local Statistical Models

Signal in Noise Models arise in sensor network (SNET) intrusion [7, 15] and disease outbreak detection [1]. They are modeled with Gaussian (SNET) and Poisson (disease outbreak) distributions:

H_0 : x_v = w_v;  H_1 : x_v = μ α_{uv} 1_S(v) + w_v, for some S ∈ Λ, u ∈ S    (3)

For the Gaussian case we model μ as a constant, w_v as IID standard normal variables, and α_{uv} as the propagation loss from source node u ∈ S to node v. In disease outbreak detection μ = 1, α_{uv} ∼ Pois(λN_v) and w_v ∼ Pois(N_v) are independent Poisson random variables, and N_v is the population of county v. In these cases ℓ_S(x) takes the following local form, where Z_v is a normalizing constant:

ℓ_S(x) = ℓ_S(x_S) ∝ Σ_{v∈V} (Ψ_v(x_v) − log(Z_v)) 1_S(v)    (4)

We characterize μ_0, λ_0 as the minimum values that ensure separability for the different models:

μ_0 = inf{μ ∈ R_+ | ∃π_n, lim_{n→∞} R(π_n) = 0},  λ_0 = inf{λ ∈ R_+ | ∃π_n, lim_{n→∞} R(π_n) = 0}    (5)

Correlated Models arise in textured object detection [16] and protein subnetwork detection [17]. For instance, consider a common random signal z on S, which results in uniform correlation ρ > 0 on S:

H_0 : x_v = w_v;  H_1 : x_v = √(ρ(1 − ρ)^{-1}) z 1_S(v) + w_v, for some S ∈ Λ    (6)

where z and w_v are standard IID normal random variables. Again we obtain ℓ_S(x) = ℓ_S(x_S). These examples motivate the following general setup for local behavior:

Definition 4. The distributions P_0 and P_S are said to exhibit local structure if they satisfy:
(1) Markovianity: The null distribution P_0 satisfies the properties of a Markov Random Field (MRF). Under the distribution P_S the observations x_S are conditionally independent of x_{S_1^c} when conditioned on the annulus S_1 ∩ S^c, where S_1 = {v ∈ V | d(v, w) ≤ 1, w ∈ S} is the 1-neighborhood of S.
(2) Mask: Marginal distributions of observations under P_0 and P_S on nodes in S^c are identical: P_0(x_{S^c} ∈ A) = P_S(x_{S^c} ∈ A) for all A ∈ A, the σ-algebra of measurable sets.

Lemma 1 ([7]). Under conditions (1) and (2) it follows that ℓ_S(x) = ℓ_S(x_{S_1}).
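For intuition, the GLRT of Eq. (2) can be evaluated by brute force when Λ is tiny. The sketch below assumes the Gaussian signal-in-noise model of Eq. (3) with no propagation loss (α_uv = 1), for which the log-likelihood ratio reduces to ℓ_S(x) = Σ_{v∈S} (μ x_v − μ²/2); the class Λ and threshold η below are illustrative assumptions, not the paper's construction.

```python
import itertools
import numpy as np

def loglr(x, S, mu):
    """Gaussian elevated-mean log-likelihood ratio l_S(x): N(mu*1_S, I) vs N(0, I)."""
    return sum(mu * x[v] - mu**2 / 2 for v in S)

def glrt(x, Lambda, mu, eta):
    """Threshold the best-case likelihood ratio over the class Lambda (Eq. 2)."""
    lmax = max(loglr(x, S, mu) for S in Lambda)
    return int(lmax > eta), lmax          # 1 means: declare H1

# Illustrative class: all 3-subsets of 8 nodes (no graph structure imposed yet).
Lambda = list(itertools.combinations(range(8), 3))
rng = np.random.default_rng(1)
x = rng.standard_normal(8)
x[[0, 1, 2]] += 2.0                       # plant the set S = {0, 1, 2}
decision, lmax = glrt(x, Lambda, mu=2.0, eta=3.0)
```

Even here the search is exponential in the subset size, which is exactly the intractability that motivates the convex relaxation of Section 3.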
2.1.2 Structured Subgraphs

Existing works [10, 2, 12] point to the important role of size, shape and connectivity in determining detectability. For concreteness we consider the signal in noise model for the Gaussian distribution and tabulate upper bounds from existing results for μ_0 (Eq. 5). The lower bounds are messier and differ by logarithmic factors, but this suffices for our discussion here.

                Arbitrary K-Set     K-Connected Ball       K-Connected Path
Line Graph      ω(√(2 log n))       ω(√((2/K) log n))      ω(√((2/K) log n))
2-D Lattice     ω(√(2 log n))       ω(√((2/K) log n))      ω(1)
Complete        ω(√(2 log n))       ω(√(2 log n))          ω(√(2 log n))

The table reveals several important points. Larger sets are easier to detect (μ_0 decreases with size); connected K-sets are easier to detect than arbitrary K-sets; for 2-D lattices, "thick" connected shapes are easier to detect than "thin" sets (paths); finally, detectability on complete graphs is equivalent to that for arbitrary K-sets, i.e., shape does not matter. Intuitively, these tradeoffs make sense. For a constant μ, the "signal-to-noise" ratio increases with size. Combinatorially, there are fewer connected K-sets than arbitrary K-sets; fewer connected balls than connected paths; and fewer connected sets in 2-D lattices than in dense graphs. These results point to the need for characterizing the signal detection problem in terms of connectivity, size, shape and the properties of the ambient graph. We also observe that the table is somewhat incomplete. While balls can be viewed as thick shapes and paths as thin shapes, there is a plethora of intermediate shapes. A similar issue arises for sparse vs. dense graphs. We introduce general definitions to categorize shape and graph structures below.

Definition 5 (Internal Conductance). (a.k.a. Cut Ratio) Let H = (S, F_S) denote a subgraph of G = (V, E), where S ⊆ V and F_S ⊆ E_S, written as H ⊆ G.
Define the internal conductance of H as:

φ(H) = min_{A⊂S} |δ_S(A)| / min{|A|, |S − A|};  δ_S(A) = {(u, v) ∈ F_S | u ∈ A, v ∈ S − A}    (7)

Evidently φ(H) = 0 if H is not connected. The internal conductance of a collection of subgraphs Σ is defined as the smallest internal conductance: φ(Σ) = min_{H∈Σ} φ(H). For future reference we denote by C the collection of connected subgraphs and by C_{a,Φ} the subcollection of subgraphs containing node a ∈ V with internal conductance at least Φ:

C = {H ⊆ G : φ(H) > 0},  C_{a,Φ} = {H = (S, F_S) ⊆ G : a ∈ S, φ(H) ≥ Φ}    (8)

In 2-D lattices, for example, φ(B_K) ≈ Ω(1/√K) for connected K-balls B_K or other thick shapes of size K, while φ(C ∩ S_K) ≈ Ω(1/K) due to "snake"-like thin shapes. Thus internal conductance explicitly accounts for the shape of the sets.

3 Convex Programming

We develop a convex optimization framework for generating test statistics for the local statistical models described in Section 2.1. Our approach relaxes the combinatorial constraints and the functional objectives of the GLRT problem of Eq. (2). In the following section we develop a new characterization based on linear matrix inequalities that accounts for size, shape and connectivity of subgraphs. For future reference we denote the Hadamard product A ∘ B ≜ [A_ij B_ij]_{i,j}.

Our first step is to embed subgraphs H of G into matrices. A binary symmetric incidence matrix A is associated with an undirected graph G = (V, E) and encodes edge relationships. Formally, the edge set E is the support of A, namely E = Supp(A). For subgraph correspondences we consider symmetric matrices M with components taking values in the unit interval [0, 1]:

M = {M ∈ [0, 1]^{n×n} | M_uv ≤ M_uu, M symmetric}

Definition 6. M ∈ M is said to correspond to a subgraph H = (S, F_S), written as H ⇌ M, if S = Supp(Diag(M)) and F_S = Supp(A ∘ M).

The constraint M_uv ≤ M_uu in M ensures that if u ∉ S then the corresponding edge entries M_uv are zero. Note that the product A ∘ M in Defn. 6 removes the spurious entries M_uv ≠ 0 for (u, v) ∉ E_S.
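A brute-force evaluation of Eq. (7), feasible only for tiny subgraphs since it enumerates all cuts, illustrates how internal conductance separates thick, thin and disconnected shapes. The toy subgraphs below are illustrative, not from the paper.

```python
from itertools import combinations

def internal_conductance(S, F):
    """phi(H) = min over nonempty proper A in S of |delta_S(A)| / min(|A|, |S-A|), Eq. (7)."""
    S = list(S)
    best = float("inf")
    for k in range(1, len(S)):
        for A in combinations(S, k):
            A = set(A)
            cut = sum(1 for (u, v) in F if (u in A) != (v in A))
            best = min(best, cut / min(len(A), len(S) - len(A)))
    return best

# A 4-node path is "thinner" (smaller phi) than a 4-node star; a disconnected
# subgraph has phi = 0, as noted above.
path = internal_conductance(range(4), [(0, 1), (1, 2), (2, 3)])
star = internal_conductance(range(4), [(0, 1), (0, 2), (0, 3)])
disc = internal_conductance(range(4), [(0, 1), (2, 3)])
```

Here the path attains its minimum cut ratio by splitting in the middle (one edge cut, two nodes per side), while every balanced cut of the star severs one edge per separated leaf.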
Our second step is to characterize connected subgraphs as convex subsets of M. A subgraph H = (S, F_S) is connected if for every u, v ∈ S there is a path consisting only of edges in F_S going from u to v. For two subgraphs H_1, H_2 with corresponding matrices M_1 and M_2, the convex combination M_η = ηM_1 + (1 − η)M_2, η ∈ (0, 1), naturally corresponds to H = H_1 ∪ H_2 in the sense of Defn. 6. However, if H_1 ∩ H_2 = ∅ then H is disconnected, and so is the subgraph corresponding to M_η. This motivates our convex characterization with a common "anchor" node. To this end we consider the following collection of matrices:

M*_a = {M ∈ M | M_aa = 1, M_vv ≤ M_av}

Note that M*_a includes the star graphs induced on subsets S = Supp(Diag(M)) with anchor node a. We now make use of well-known properties [18] of the graph Laplacian to characterize connectivity. The unnormalized Laplacian matrix of an undirected graph G with incidence matrix A is L(A) = diag(A1_n) − A, where 1_n is the all-ones vector.

Lemma 2. Graph G is connected if and only if the number of zero eigenvalues of L(A) is one.

Unfortunately, we cannot apply this fact directly to the subgraph matrix A ∘ M: its Laplacian has many zero eigenvalues, since the rows and columns outside Supp(Diag(M)) are by definition zero. We employ linear matrix inequalities (LMIs) to deal with this issue. A condition of the form [19]

F(x) = F_0 + F_1 x_1 + · · · + F_p x_p ⪰ 0

with symmetric matrices F_j is called a linear matrix inequality in x_j ∈ R, where ⪰ denotes membership in the positive semidefinite cone. Note that the Laplacian of the subgraph, L(A ∘ M), is a linear matrix function of M. We denote the following collection of subgraphs:

C_LMI(a, γ) ≜ {H ⇌ M | M ∈ M*_a, L(A ∘ M) − γL(M) ⪰ 0}    (9)

Theorem 3. The class C_LMI(a, γ) is connected for γ > 0. Furthermore, every connected subgraph can be characterized in this way for some a ∈ V and γ > 0, namely, C = ∪_{a∈V, γ>0} C_LMI(a, γ).

Proof Sketch. M ∈ C_LMI(a, γ) implies M is connected.
By definition of M*_a there must be a star graph that is a subgraph on Supp(Diag(M)). This means that L(M) (and hence L(A ∘ M)) can have only one zero eigenvalue on Supp(Diag(M)). We can now invoke Lemma 2 on Supp(Diag(M)). The other direction is based on hyperplane separation of convex sets.

Note that C_LMI(a, γ) is convex but C is not. This necessitates an anchor: in practice we have to search for connected sets with different anchors. This is similar to scan statistics, the difference being that we can now optimize over arbitrary shapes. We next get a handle on γ.

γ encodes Shape: We relate γ to the internal conductance of the class C. This provides a tool for choosing γ to reflect the type of connected sets we expect under the alternative hypothesis. In particular, thick sets correspond to relatively large γ and thin sets to small γ. In general, for graphs of fixed size, the minimum internal conductance over all connected shapes is strictly positive, and we can set γ to this value if we do not know the shape a priori.

Theorem 4. In a 2-D lattice, it follows that C_{a,Φ} ⊆ C_LMI(a, γ), where γ = Θ(Φ²/log(1/Φ)).

LMI-Test: We are now ready to present our test statistics. We replace indicator variables with the corresponding matrix components in Eq. 4, i.e., 1_S(v) → M_vv and 1_S(u)1_S(v) → M_uv, and obtain:

Elevated Mean: ℓ_M(x) = Σ_{v∈V} (Ψ_v(x_v) − log(Z_v)) M_vv
Correlated Gaussian: ℓ_M(x) ∝ Σ_{(u,v)∈E} Ψ(x_u, x_v) M_uv − Σ_v M_vv log(1 − ρ)    (10)

LMIT_{a,γ}: ℓ_{a,γ}(x) = max_{M∈C_LMI(a,γ)} ℓ_M(x), declaring H_1 when ℓ_{a,γ}(x) > η    (11)

This test explicitly uses the fact that the alternative hypothesis is anchored at a and that the internal conductance parameter γ is known. We refine this test to deal with the completely agnostic case in the following section.

4 Analysis

In this section we analyze LMIT_{a,γ} and the agnostic LMI tests for the Elevated Mean problem for the exponential family of distributions on 2-D lattices.
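As a numeric aside (illustrative code, not from the paper), both Lemma 2 and the LMI membership condition of Eq. (9) can be checked directly with eigenvalue computations. For a 4-cycle with M supported on the whole graph, a short calculation shows L(A∘M) − γL(M) ⪰ 0 holds precisely for γ ≤ 1/2, which the sketch below confirms.

```python
import numpy as np

def laplacian(W):
    """Unnormalized Laplacian L(W) = diag(W @ 1) - W."""
    return np.diag(W.sum(axis=1)) - W

def num_zero_eigs(W, tol=1e-9):
    """Count (near-)zero Laplacian eigenvalues; one per connected component."""
    return int(np.sum(np.linalg.eigvalsh(laplacian(W)) < tol))

def in_lmi_cone(A, M, gamma, tol=1e-9):
    """Check the constraint of Eq. (9): L(A o M) - gamma * L(M) is PSD."""
    Q = laplacian(A * M) - gamma * laplacian(M)
    return bool(np.linalg.eigvalsh(Q).min() >= -tol)

# Incidence matrix of a 4-cycle; M = all-ones selects the whole cycle.
A = np.array([[0., 1., 0., 1.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])
M = np.ones((4, 4))
```

The feasible range of γ here reflects the cycle's algebraic connectivity, consistent with the role of γ as a conductance (shape) surrogate.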
For concreteness we focus on Gaussian and Poisson models and derive lower and upper bounds for μ_0 (see Eq. 5). Our main result states that to guarantee separability, μ_0 ≈ Ω(1/(KΦ)) up to logarithmic factors, where Φ is the internal conductance of the family C_{a,Φ} of connected subgraphs, K is the size of the subgraphs in the family, and a is some node common to all the subgraphs. The reason for our focus on the homogeneous Gaussian/Poisson setting is that we can extend current lower bounds in the literature to our more general setting and demonstrate that they match the bounds obtained from our LMIT analysis. We comment later on how our LMIT analysis extends to other general structures and models.

The proof for the LMIT analysis involves two steps (see Supplementary):
1. Lower Bound: Under H_1 we show that the ground truth is a feasible solution. This allows us to lower bound the objective value ℓ_{a,γ}(x) of Eq. 11.
2. Upper Bound: Under H_0 we consider the dual problem. By weak duality, any feasible solution of the dual is an upper bound for ℓ_{a,γ}(x). A dual feasible solution is then constructed through a novel Euclidean embedding argument.

We then compare the upper and lower bounds to obtain the critical value μ_0. We analyze both non-agnostic and agnostic LMI tests for the homogeneous version of the Gaussian and Poisson models of Eq. 3, for both finite and asymptotic 2-D lattice graphs. For the finite case, the family of subgraphs in Eq. 3 is assumed to belong to the connected family of sets C_{a,Φ} ∩ S_K of size K containing a fixed common node a ∈ V. For the asymptotic case we let the size of the graph approach infinity (n → ∞) and consider a sequence of connected families of sets C^n_{a,Φ_n} ∩ S_{K_n} on graphs G_n = (V_n, E_n) with some fixed anchor node a ∈ V_n. We then describe results for agnostic LMI tests, i.e., tests lacking knowledge of the conductance Φ and the anchor node a.

Poisson Model: In Eq. 3 we let the population N_v be identically equal to one across counties.
We present LMI tests that are agnostic to shape and anchor nodes:

LMIT_A: ℓ(x) = max_{a∈V, γ≥Φ²_min} √γ ℓ_{a,γ}(x), declaring H_1 when ℓ(x) > 0    (12)

where Φ_min denotes the minimum possible conductance of a connected subgraph of size K, which is 2/K.

Theorem 5. The LMIT_{a,γ} test achieves δ-separability for λ = Ω(log(K)/(KΦ)), and the agnostic test LMIT_A for λ = Ω(log(K)√(log n)).

Next we consider the asymptotic case and characterize tight bounds for separability.

Theorem 6. The two hypotheses H_0 and H_1 are asymptotically inseparable if λ_n Φ_n K_n log(K_n) → 0. They are asymptotically separable with LMIT_{a,γ} if λ_n K_n Φ_n / log(K_n) → ∞. The agnostic LMIT_A achieves asymptotic separability if λ_n / (log(K_n)√(log n)) → ∞.

Gaussian Model: We next consider agnostic tests for the Gaussian model of Eq. 3 with no propagation loss, i.e., α_uv = 1.

Theorem 7. The two hypotheses H_0 and H_1 for the Gaussian model are asymptotically inseparable if μ_n Φ_n K_n log(K_n) → 0, are separable with LMIT_{a,γ} if μ_n K_n Φ_n / log(K_n) → ∞, and are separable with LMIT_A if μ_n / (log(K_n)√(log n)) → ∞.

Our inseparability bound matches existing results on 2-D lattices and line graphs by plugging in the appropriate values of Φ for the cases considered in [2, 12]. The lower bound is obtained by specializing to a collection of "non-decreasing band" subgraphs. Yet LMIT_{a,γ} and LMIT_A achieve the lower bound within a logarithmic factor. Furthermore, our analysis extends beyond Poisson and Gaussian models and applies to general graph structures and models. The main reason is that our LMIT analysis is fairly general and provides an observation-dependent bound through convex duality. We briefly describe it here. Consider functions ℓ_S(x) that are positive and separable.

Figure 1: Various shapes of ground-truth anomalous clusters on a fixed 15×10 lattice.
Anomalous cluster size is fixed at 17 nodes. (a) shows a thick cluster with a large internal conductance. (b) shows a relatively thinner shape. (c) shows a snake-like shape, which has the smallest internal conductance. (d) shows the same shape as (b), with the background lattice more densely connected.

Assume for simplicity that these functions are also bounded. By establishing primal feasibility, i.e., that the subgraph S ∈ C_LMI(a, γ) for a suitably chosen γ, we can obtain a lower bound under the alternative hypothesis H_1 and show that E_{H_1} max_{M∈C_LMI(a,γ)} ℓ_M(x) ≥ E_{H_1} Σ_{v∈S} ℓ_S(x_v). On the other hand, for the null hypothesis we can show that E_{H_0} max_{M∈C_LMI(a,γ)} ℓ_M(x) ≤ E_{H_0} Σ_{v∈B(a,Θ(√γ))} ℓ_S(x_v). Here E_{H_1} and E_{H_0} denote expectations with respect to the alternative and null hypotheses, and B(a, Θ(√γ)) is a ball-like thick shape centered at a ∈ V with radius Θ(√γ). Our result then follows by invoking standard concentration inequalities. We can extend our analysis to the non-separable case, such as correlated models, because of the linear objective form in Eq. 10.

5 Experiments

We present several experiments to highlight key properties of LMIT and to compare LMIT against other state-of-the-art parametric and non-parametric tests on synthetic and real-world data. We have shown that agnostic LMIT is near minimax optimal in terms of asymptotic separability. However, separability is an asymptotic notion and only characterizes the special case of zero false alarms (FA) and missed detections (MD), which is often impractical. It is unclear how LMIT behaves on finite graphs when FAs and MDs are prevalent. In this context, incorporating priors could indeed be important. Our goal is to highlight how a shape prior (in terms of thick, thin, or arbitrary shapes) can be incorporated in LMIT using the parameter γ to obtain better AUC performance on finite graphs. Another goal is to demonstrate how LMIT behaves with denser graph structures.
From the practical perspective, our main step is to solve the following SDP problem:

max_M Σ_i y_i M_ii  s.t. M ∈ C_LMI(a, γ), tr(M) ≤ K

We use standard SDP solvers, which scale up to n ∼ 1500 nodes for sparse graphs like lattices and n ∼ 300 nodes for dense graphs with m = Θ(n²) edges.

To understand the impact of shape we consider the test LMIT_{a,γ} for the Gaussian model and manually vary γ. On a 15×10 lattice we fix the size (17 nodes) and the signal strength μ√|S| = 3, and consider three different shapes (see Fig. 1) for the alternative hypothesis. For each shape we synthetically simulate 100 null and 100 alternative hypotheses and plot the AUC performance of LMIT as a function of γ. We observe that the optimal AUC for thick shapes is achieved at large γ and for thin shapes at small γ, confirming our intuition that γ is a good surrogate for shape. In addition, we notice that thick shapes have superior AUC performance relative to thin shapes, again confirming the intuition of our analysis.

To understand the impact of dense graph structures we consider the performance of LMIT as a function of neighborhood size. On the lattice of the previous experiment we connect each node to its 1-hop, 2-hop, and 3-hop neighbors to realize denser structures, with each node having 4, 8 and 12 neighbors respectively. Note that all the different graphs have the same vertex set. This is convenient because we can hold the shape under the alternative fixed across the different graphs. As before, we generate 100 alternative hypotheses using the thin set of the previous experiment with the same mean μ, and 100 nulls. The AUC curves for the different graphs highlight the fact that higher density leads to degradation in performance, as our intuition with complete graphs suggests.
We also see that as density increases a larger γ achieves better performance, confirming our intuition that as density increases the internal conductance of the shape increases.

Figure 2: (a) AUC performance with fixed lattice structure, signal strength μ and size (17 nodes), but different shapes of ground-truth clusters, as shown in Fig. 1 (peak AUC: thick shape 0.952 at γ = 0.2; thin shape 0.899 at γ = 0.05; snake shape 0.865 at γ = 0.02). (b) AUC performance with fixed signal strength μ, size (17 nodes) and shape (Fig. 1(b)), but different lattice structures (peak AUC: 0.899 at γ = 0.05, 0.874 at γ = 0.1, and 0.855 at γ = 0.2 for the 4-, 8- and 12-neighbor lattices respectively).

In this part we compare LMIT against existing state-of-the-art approaches on a 300-node lattice, a 200-node random geometric graph (RGG), and a real-world county map graph (129 nodes) (see Figs. 3, 4). We incorporate shape priors by setting γ (internal conductance) to correspond to thin sets. While this implies some prior knowledge, we note that this is not necessarily the optimal value of γ and we are still agnostic to the actual ground-truth shape (see Figs. 3, 4). For the lattice and RGG we use the elevated-mean Gaussian model. Following [1], we adopt an elevated-rate independent Poisson model for the county map graph. Here N_i is the population of county i. Under the null, the number of cases at county i follows a Poisson distribution with rate N_i λ_0, and under the alternative a rate N_i λ_1 within some connected subgraph. We assume λ_1 > λ_0 and apply a weighted version of the LMIT of Eq. 12, which arises on account of differences in population.
We compare LMIT against several other tests, including simulated annealing (SA) [4], a rectangle test (Rect), a nearest-ball test (NB), and two naive tests: the maximum test (MaxT) and the average test (AvgT). SA is a non-parametric test and works by heuristically adding/removing nodes toward a better normalized GLRT objective while maintaining connectivity. Rect and NB are parametric methods, with Rect scanning rectangles on the lattice and NB scanning nearest-neighbor balls around different nodes for more general graphs (RGG and county map graph). MaxT and AvgT are often used for comparison purposes: MaxT thresholds the maximum observed value, while AvgT thresholds the average value.

We observe that MaxT and AvgT uniformly perform poorly. This makes sense; it is well known that MaxT works well only for alternatives of small size, while AvgT works well for relatively large alternatives [11]. The parametric methods (Rect/NB) perform poorly because the shape of the ground truth under the alternative cannot be well approximated by rectangles or nearest-neighbor balls. The performance of SA requires more explanation. One issue could be that SA does not explicitly incorporate shape and directly searches for the best GLRT solution. We have noticed that this tends to amplify the objective value under the null hypothesis, because SA exhibits poor "regularization" over the shape. LMIT, on the other hand, provides some regularization toward thin shapes and does not admit arbitrary connected sets.

Table 1: AUC performance of various algorithms on a 300-node lattice, a 200-node RGG, and the county map graph. On all three graphs LMIT significantly outperforms the other tests consistently across all SNR levels.
            lattice (μ√|S|/σ)       RGG (μ√|S|/σ)           map (λ1/λ0)
SNR         1.5    2      3         1.5    2      3          1.1    1.3    1.5
LMIT        0.728  0.780  0.882     0.642  0.723  0.816      0.606  0.842  0.948
SA          0.672  0.741  0.827     0.627  0.677  0.756      0.556  0.744  0.854
Rect(NB)    0.581  0.637  0.748     0.584  0.632  0.701      0.514  0.686  0.791
MaxT        0.531  0.547  0.587     0.529  0.562  0.624      0.525  0.559  0.543
AvgT        0.565  0.614  0.705     0.545  0.623  0.690      0.536  0.706  0.747

References
[1] G. P. Patil and C. Taillie. Geographic and network surveillance via scan statistics for critical area detection. Statistical Science, 18(4):457–465, 2003.
[2] E. Arias-Castro, E. J. Candes, H. Helgason, and O. Zeitouni. Searching for a trail of evidence in a maze. The Annals of Statistics, 36(4):1726–1757, 2008.
[3] J. Glaz, J. Naus, and S. Wallenstein. Scan Statistics. Springer, New York, 2001.
[4] L. Duczmal and R. Assuncao. A simulated annealing strategy for the detection of arbitrarily shaped spatial clusters. Computational Statistics and Data Analysis, 45:269–286, 2004.
[5] M. Kulldorff, L. Huang, L. Pickle, and L. Duczmal. An elliptic spatial scan statistic. Statistics in Medicine, 25, 2006.
[6] C. E. Priebe, J. M. Conroy, D. J. Marchette, and Y. Park. Scan statistics on Enron graphs. Computational and Mathematical Organization Theory, 2006.
[7] V. Saligrama and M. Zhao. Local anomaly detection. In Artificial Intelligence and Statistics, volume 22, 2012.
[8] V. Saligrama and Z. Chen. Video anomaly detection based on local statistical aggregates. In IEEE Conference on Computer Vision and Pattern Recognition, pages 2112–2119, 2012.
[9] J. Qian and V. Saligrama. Connected sub-graph detection. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2014.
[10] E. Arias-Castro, D. Donoho, and X. Huo. Near-optimal detection of geometric objects by fast multiscale methods. IEEE Transactions on Information Theory, 51(7):2402–2425, 2005.
[11] Addario-Berry, N. Broutin, L.
Devroye, and G. Lugosi. On combinatorial testing problems. The Annals of Statistics, 38(5):3063–3092, 2010.
[12] E. Arias-Castro, E. J. Candes, and A. Durand. Detection of an anomalous cluster in a network. The Annals of Statistics, 39(1):278–304, 2011.
[13] J. Sharpnack, A. Rinaldo, and A. Singh. Changepoint detection over graphs with the spectral scan statistic. In International Conference on Artificial Intelligence and Statistics, 2013.
[14] J. Sharpnack, A. Krishnamurthy, and A. Singh. Near-optimal anomaly detection in graphs using Lovasz extended scan statistic. In Neural Information Processing Systems, 2013.
[15] Erhan Baki Ermis and Venkatesh Saligrama. Distributed detection in sensor networks with limited range multimodal sensors. IEEE Transactions on Signal Processing, 58(2):843–858, 2010.
[16] G. R. Cross and A. K. Jain. Markov random field texture models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 5:25–39, 1983.
[17] M. Bailly-Bechet, C. Borgs, A. Braunstein, J. T. Chayes, A. Dagkessamanskaia, J. Francois, and R. Zecchina. Finding undetected protein associations in cell signaling by belief propagation. Proceedings of the National Academy of Sciences (PNAS), 108:882–887, 2011.
[18] F. Chung. Spectral Graph Theory. American Mathematical Society, 1996.
[19] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
2014
Magnitude-sensitive preference formation

Nisheeth Srivastava∗
Department of Psychology
University of San Diego
La Jolla, CA 92093
nisheeths@gmail.com

Edward Vul
Department of Psychology
University of San Diego
La Jolla, CA 92093
edwardvul@gmail.com

Paul R. Schrater
Department of Psychology
University of Minnesota
Minneapolis, MN 55455
schrater@umn.edu

Abstract

Our understanding of the neural computations that underlie the ability of animals to choose among options has advanced through a synthesis of computational modeling, brain imaging and behavioral choice experiments. Yet there remains a gulf between theories of preference learning and accounts of the real, economic choices that humans face in daily life, choices that are usually between some amount of money and an item. In this paper, we develop a theory of magnitude-sensitive preference learning that permits an agent to rationally infer its preferences for items compared with money options of different magnitudes. We show how this theory yields classical and anomalous supply-demand curves and predicts choices for a large panel of risky lotteries. Accurate replications of such phenomena without recourse to utility functions suggest that the proposed theory is both psychologically realistic and econometrically viable.

1 Introduction

While value/utility is a useful abstraction for macroeconomic applications, it has little psychological validity [1]. Valuations elicited in laboratory conditions are known to be extremely variable under different elicitation conditions, liable to anchor on arbitrary observations, and extremely sensitive to the set of options presented [2]. This last property constitutes the most straightforward refutation of the existence of object-specific utilities.
Consider, for example, an experiment conducted by [3], where subjects were endowed with a fixed amount of money which they could use across multiple trials to buy out of receiving an electric shock of one of three different magnitudes (see left panel of Figure 1). The large systematic differences in the prices that subjects in this study were willing to pay for different shock magnitudes demonstrate the absence of any fixed psychophysical measurement of value. Thus, while utility maximization is a mathematically useful heuristic in economic applications, it is unlikely that utility functions represent value in any significant psychological sense.

Neurological studies also demonstrate the existence of neuron populations sensitive not to absolute reward values but to one of the presented options being better relative to the others, a phenomenon called comparative coding. Comparative coding was first reported by [4], who observed that activity in the orbito-frontal neurons of monkeys offered varying juice rewards, presented in pairs within separate trial blocks, followed patterns that depended only on whether a particular juice was preferred within its trial block. Elliott et al. [5] found similar results using fMRI in the medial orbitofrontal cortex of human subjects, a brain region known to be involved in value coding. Even more strikingly, Plassmann et al. [6] found that falsely assigning a high price to a particular item (wine) caused both greater self-reported experienced pleasantness (EP) (see right panel of Figure 1) and greater mOFC activity indicative of pleasure. What is causing this pleasure? Where is the 'value' assigned to the pricier wine sample coming from?
∗Corresponding author: nisheeths@gmail.com

Figure 1: Valuations of options elicited in the lab can be notoriously labile. Left: An experiment where subjects had to pay to buy out of receiving electric shock saw subjects losing or gaining value for the price of pain of particular magnitudes both as a function of the amount of money the experimenters initially gave them (endowments of 40 vs. 80) and the relative magnitude of the pair of shock options (low-medium vs. medium-high) they were given experience with (reconstructed from Figure 1(a) in Vlaev, 2011). Right: Subjects asked to rate five (actually three) wines rated artificially highly-priced samples of wine as more preferable. Not only this, imaging data from orbitofrontal cortex showed that they actually experienced these samples as more pleasurable (reconstructed from Figure 1, panels B and D in Plassmann, 2008).

Viewed in light of these various difficulties, making choices among options that involve magnitudes appears to be a formidable challenge. However humans, and even animals [7], are well known to perform such operations easily. Therefore, one of two possibilities holds: one, that it is possible, notwithstanding the evidence laid out above, for humans to directly assess value magnitudes (except in corner cases like the ones we describe); or two, that some alternative set of computations permits them to behave as if they can estimate value magnitudes. This paper formalizes the set of computations that operationalizes this second view. We build upon a framework of preference learning proposed in [8] that avoids the necessity of assuming psychophysical access to value, and we develop a model that can form preferences for quantities of objects directly from a history of past choices.
Since the most common modality of choices involving quantities in the modern world is determining the prices of objects, pricing forms the primary focus of our experiments. Specifically, we derive from our theory (i) classical and anomalous supply-demand curves, and (ii) choice predictions for a large panel of risky lotteries. Hence, in this paper we present a theory of magnitude-sensitive preference formation that, as an important special case, provides an account of how humans learn to value money.

2 Learning to value magnitudes

2.1 Rational preference formation

Traditional treatments of preference learning (e.g. [9]) assume that there is some hidden state function U : X → R_+ such that x ≻ x′ iff U(x) > U(x′) ∀x′ ∈ X, where X is the set of all possible options. Preference learning, in such settings, is reduced to a task of statistically estimating a monotone distortion of U, thereby making two implicit assumptions: (i) that there exists some psychophysical apparatus that can compute hedonic utilities, and (ii) that there exists some psychophysical apparatus capable of representing absolute magnitudes for comparison in the mind. The data we describe above argue against both assumptions. In order to develop a theory of preference formation that avoids commitments to psychophysical value estimation, a novel approach is needed.

Srivastava & Schrater [8] provide the building blocks for such an approach. They propose that the process of learning preferences can be modeled as an ideal Bayesian observer directly learning 'which option among the ones offered is best', retaining memory of which options were presented to it at every choice instance. However, instead of directly remembering option sets, their model allows for the possibility that option set observations map to latent contexts in memory. In practice, this mapping is assumed to be identified in all their demonstrations.
Formally, the computation corresponding to utility in this framework is p(r|x, o), which is obtained by marginalizing over the set of latent contexts C:

D(x) = p(r|x, o) = [Σ_c p(r|x, c) p(x|c) p(c|o)] / [Σ_c p(x|c) p(c|o)],    (1)

where it is understood that the context probability p(c|o) = p(c|{o_1, o_2, · · · , o_{t−1}}) is a distribution on the set of all possible contexts, incrementally inferred from the agent's observation history. Here, p(r|x, c) encodes the probability that the item x was preferred to all other items present in choice instances linked with the context c, p(x|c) encodes the probability that the item x was present in choice sets indexed by the context c, and p(c) encodes the frequency with which the observer encounters these contexts. The observer also continually updates p(c|o) via recursive Bayesian estimation,

p(c^(t)|o^(1:t)) = p(o^(t)|c) p(c|o^(1:t−1)) / Σ_c p(o^(t)|c) p(c|o^(1:t−1)),    (2)

which, in conjunction with the desirability-based state preference update and a simple decision rule (e.g. MAP, softmax), yields a complete decision theory. While this theory is complete in the formal sense that it can make testable predictions of options chosen in the future given options chosen in the past, it is incomplete in its ability to represent options: it will treat a gamble that pays $20 with probability 0.1 against safely receiving $1 and one that pays $20000 with probability 0.1 against safely receiving $1 as equivalent, which is clearly unsatisfactory. This is because it considers only simple cases where options have nominal labels. We now augment it to take into account the information that magnitude labels¹ provide.

2.2 Magnitude-sensitive preference formation

Typically, people will encounter monetary labels m ∈ M in a large number of contexts, often entirely outside the purview of the immediate choice to be made. In the theory of [8], incorporating desirability information related to m involves marginalizing across all these contexts.
Since the set of such contexts across a person's entire observation history is large, explicit marginalization across all contexts would imply explicit marginalization across every observation involving the monetary label m, which is unrealistic. Thus information about contexts must be compressed or summarized². We can resolve this by assuming instead that animals generate contexts as clusters of observations, thereby creating the possibility of learning higher-order abstract relationships between them. Such models of categorization via clustering are widely accepted in cognitive psychology [10]. Now, instead of recalling all possible observations containing m, an animal with a set of observation clusters (contexts) would simply sample a subset of these that would be representative of all contexts wherein observations containing m are statistically typical. In such a setting, p(m|c) would correspond to the observation likelihood of the label m being seen in the cluster c, p(c) would correspond to the relative frequency of context occurrences, and p(r|x, m, c) would correspond to the inferred value for item x when compared against monetary label m while the active context is c. The remaining probability term p(x|m) encodes the probability of seeing transactions involving item x and the particular monetary label m. We define r to take the value 1 when x ≻ x′ ∀x′ ∈ X − {x}. Following a probabilistic calculus similar to that of Equation 1, the inferred value of x becomes p(r|x) and can be calculated as,

p(r|x) = \frac{\sum_{m}^{M} \sum_{c}^{C} p(r|x, m, c)\, p(x|m)\, p(m|c)\, p(c)}{\sum_{m}^{M} \sum_{c}^{C} p(x|m)\, p(m|c)\, p(c)},    (3)

¹Note that taking monetary labels into account is not the same as committing to a direct psychophysical evaluation of money. In our account, value judgments are linked not with magnitudes, but with labels, that just happen to correspond to numbers in common practice.
²Mechanistic considerations of neurobiology also suggest sparse sampling of prior contexts.
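The double marginalization of Equation 3 can be sketched directly; the following toy example (our own illustrative numbers, not the paper's) averages the preference evidence p(r|x, m, c) under weights p(x|m) p(m|c) p(c):

```python
def magnitude_sensitive_value(p_r, p_x_given_m, p_m_given_c, p_c):
    """Eq. (3): weight each (label, context) pair by p(x|m) p(m|c) p(c)
    and average the preference evidence p(r|x,m,c) under those weights."""
    num = den = 0.0
    for m in p_x_given_m:
        for c in p_c:
            w = p_x_given_m[m] * p_m_given_c[c][m] * p_c[c]
            num += p_r[(m, c)] * w
            den += w
    return num / den

# two money labels (1 and 5 units), two latent contexts
p_x_given_m = {1: 0.7, 5: 0.3}
p_m_given_c = {"a": {1: 0.8, 5: 0.2}, "b": {1: 0.3, 5: 0.7}}
p_c = {"a": 0.6, "b": 0.4}

# uniform preference evidence must yield exactly that uniform value
flat = {(m, c): 0.5 for m in p_x_given_m for c in p_c}
v_flat = magnitude_sensitive_value(flat, p_x_given_m, p_m_given_c, p_c)

# evidence concentrated on the large label is pulled toward the small-label
# value here, because the weights favor observations with m = 1
skew = {(m, c): (0.9 if m == 5 else 0.1) for m in p_x_given_m for c in p_c}
v_skew = magnitude_sensitive_value(skew, p_x_given_m, p_m_given_c, p_c)
```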
²(cont.) The memory and computational burden of recalculating preferences for an ever-increasing C would quickly prove insuperable.

Figure 2: Illustrating a choice problem an animal might face in the wild (left) and how the intermediate probability terms in our proposed model would operationalize different forms of information needed to solve such a problem (right). Marginalizing across situation contexts and magnitude labels tells us what the animal will do.

with the difference from the earlier expression arising from an additional summation over the set M of monetary labels that the agent has experience with. Figure 2 illustrates how these computations could be practically instantiated in a general situation involving magnitude-sensitive value inference that animals could face. Our hunter-gatherer ancestor has to choose which berry bush to forage in, and we must infer the choice he will make based on the recorded history of his past behavior. The right panel in this figure illustrates natural interpretations for the intermediate conditional probabilities in Equation 3. The term p(m|c) encodes prior understanding of the fertility differential in the soils that characterize each of the three active contexts. The p(r|x, m, c) term records the history of the forager's choices within each context via empirically observed relative frequencies. What drives the forager to prefer a sparsely laden bush on the hill instead of the densely laden bush in the forest in our example, though, is his calculation of the underlying context probability p(c).
In our story, because he lives near the hill, he encounters the bushes on the hill more frequently, and so they dominate his preference judgment. A wide palette of possible behaviors can be similarly interpreted and rationalized within the framework we have outlined. What exactly is this model telling us, though, that we are not putting into it ourselves? The only strong constraint it currently imposes on the form of preferences is that they will exhibit context-specific consistency, viz. an animal that prefers one option over another in a particular context will continue to do so in future trials. While this constraint itself is only valid if we have some way of pinning down particular contexts, it is congruent with results from marketing research that describe the general form of human preferences as being 'arbitrarily coherent': consumer preferences are labile and sensitive to changes in option sets, framing effects, loss aversion and a host of other treatments, but are longitudinally reliable within these treatments [2]. For our model to make more interesting economic predictions, we must further constrain the form of the preferences it can emit to match those seen in typical monetary transactions; we do this by making further assumptions about the intermediate terms in Equation 3 in the next three sections, which describe economic applications.

3 Living in a world of money

Equation 3 gives us predictions about how people will form preferences for various options that co-occur with money labels. Here we specialize this model to make predictions about the value of options that are money labels, viz. fiat currency. The institutional imperatives of legal tender impose a natural ordering on preferences involving monetary quantities. Ceteris paribus, subjects will prefer a larger quantity of money to a smaller quantity of money.
Thus, while the psychological desirability pointer could assign preferences to monetary labels capriciously (as an infant who prefers the drawings on a $1 bill to those on a $100 bill might), to model desirability behavior corresponding to knowledgeable use of currency, we constrain it to follow arithmetic ordering such that,

x_{m*} ≻ x_m ⇔ m* > m  ∀m ∈ M,    (4)

where the notation x_m denotes an item (currency token) x associated with the money label m. Then, Equation 3 reduces to,

p(r|x_{m*}) = \frac{\sum_{m}^{M'} \sum_{c}^{C} p(x|m)\, p(m|c)\, p(c)}{\sum_{m}^{M} \sum_{c}^{C} p(x|m)\, p(m|c)\, p(c)},    (5)

where max(M') ≤ m*, since the contribution to p(r|x, m, c) for all larger m terms is set to zero by the arithmetic ordering condition; the p(x|m) term binds x to all the m's it has been seen with before. Assuming no uncertainty about which currency token goes with which label, p(x|m) becomes a simple delta function pointing to the m that the subject has experience with, and Equation 5 can be rewritten as,

p(r|x) = \frac{\int_0^{m*} \sum_{c}^{C} p(x|m, c)\, p(m|c)\, p(c)}{\int_0^{\infty} \sum_{c}^{C} p(x|m, c)\, p(m|c)\, p(c)}.    (6)

If we further assume that the model gets to see all possible money labels, i.e. M = R+, this can be further simplified as,

p(r|x) = \frac{\int_0^{m*} \sum_{c}^{C} p(m|c)\, p(c)}{\int_0^{\infty} \sum_{c}^{C} p(m|c)\, p(c)},    (7)

reflecting strong dependence on the shape of p(m), the empirical distribution of monetary outcomes in the world. What can we say about the shape of the general frequency distribution of numbers in the world? Numbers have historically arisen as ways to quantify, which helps plan resource foraging, consumption and conservation. Scarcity of essential resources naturally makes being able to differentiate small magnitudes important for selection fitness. This motivates the development of number systems where objects counted frequently (essential resources) are counted with small numbers (for better discriminability).
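Equation 7 is just a ratio of two integrals of the marginal money density. As a minimal numerical sketch (with a simple exponential stand-in for p(m), an assumption of ours for illustration, not the distribution the paper fits), the value of a token with label m* is the truncated mass below m*:

```python
import math

def money_value(m_star, density, upper=50.0, n=20000):
    """Eq. (7): p(r|x_{m*}) = ∫_0^{m*} p(m) dm / ∫_0^∞ p(m) dm, approximated
    by trapezoidal sums with the upper tail truncated at `upper`."""
    h = upper / n
    total = part = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        seg = 0.5 * (density(a) + density(b)) * h  # trapezoid on [a, b]
        total += seg
        if b <= m_star:
            part += seg
    return part / total

# with p(m) = e^{-m}, the value of label m* should approach 1 - e^{-m*}
v1 = money_value(1.0, lambda m: math.exp(-m))
v2 = money_value(2.0, lambda m: math.exp(-m))
```

The monotonicity of this ratio in m* is what makes the inferred value behave like a (bounded) utility-of-money curve.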
Thus, it is reasonable to assume that, in general, larger numbers will be encountered relatively less frequently than smaller ones in natural environments, and hence, that the functions p(m) and p(c) will be monotone decreasing³. For analytical tractability, we formalize this assumption by setting p(m|c) to be gamma distributed on the domain of monetary labels, and p(c) to be an exponential distribution on the domain of the typical 'wealth' rate of individual contexts. The wealth rate is an empirically accessible index for the set of situation contexts, and represents the typical (average) monetary label we expect to see in observations associated with this context. Thus, for instance, the wealth rate for 'steakhouses' will be higher than that of 'fast food'. For any particular value of the wealth rate, the 'price' distribution p(m|c) will reflect the relative frequencies of seeing various monetary labels in the world in observations typical to context c. The gamma/log-normal shape of real-world prices in specific contexts is well attested empirically. The wealth rate distribution p(c) can always be made monotone decreasing simply by shuffling the order of presentation of contexts in the measure of the distribution. With these distributional assumptions, the marginalized product p(m) is assured to be a Pareto distribution. Data from [12], as well as supporting indirect observations in [13], suggest that we are on relatively safe ground in making such assumptions for the general distribution of monetary units in the world [14]. This set of assumptions further reduces Equation 7 to,

p(r|x) = ψ(x_{m*}),    (8)

where ψ(·) is the Pareto c.d.f.

³Convergent evidence may also be found in the Zipfian principle of communication efficiency [11].
³(cont.) While it might appear incongruous to speak of differential efficiency in communicating numbers, recall that the historical origins of numbers involved tally marks and other explicit token-based representations of numbers, which imposed increasing resource costs in representing larger numbers.

Reduced experience with monetary options will be reflected in a reduced membership of M. Sampling at random from M corresponds to approximating ψ with a limited number of samples. So long as the sampling procedure is not systematically biased away from particular x values, the resulting curve will not be qualitatively different from the true one. Systematic differences will arise, though, if the sampling is biased by, say, the range of values observers are known to encounter. For instance, it is reasonable to assume that the wealth of a person is directly correlated with the upper limit of money values they will see. Substituting this upper limit in Equation 7, we obtain a systematic difference in the curvature of the utility function that subjects with different wealth endowments will have for the same monetary labels. The trend we obtain from a simulation (see gray inset in Figure 3) with three different wealth levels ($1000, $10,000 and $1 million) matches the empirically documented increase in relative risk aversion (curvature of the utility function) with wealth [15]. Observe that the log-concavity of the Pareto c.d.f. has the practical effect of essentially converting our inferred value for money into a classical utility function. Thus, using two assumptions (number ordering and scarcity of essential resources), we have situated economic measurements of preference as a special, fixed case of a more general dynamic process of desirability evaluation.
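The claim that mixing context price distributions over an exponentially distributed wealth rate yields a Pareto-type marginal p(m) can be checked by simulation. The sketch below uses the gamma-with-shape-1 (i.e. exponential) special case, an assumption of ours chosen because the resulting marginal has a closed form, the Lomax (Pareto II) c.d.f. m/(1+m):

```python
import random

random.seed(0)

def sample_money_label():
    """One draw from the marginal p(m): first a context wealth rate
    (exponential p(c)), then a label from that context's price distribution
    (gamma with shape 1, i.e. exponential, for tractability)."""
    rate = max(random.expovariate(1.0), 1e-300)  # guard against a zero rate
    return random.expovariate(rate)

samples = [sample_money_label() for _ in range(100000)]

def ecdf(m):
    """Empirical c.d.f. of the simulated money labels."""
    return sum(s <= m for s in samples) / len(samples)

def lomax_cdf(m):
    """Pareto-II c.d.f. with unit scale and tail index 1: F(m) = m / (1 + m)."""
    return m / (1.0 + m)
```

The empirical c.d.f. of the simulated labels should track the Lomax c.d.f. closely, confirming the heavy-tailed, monotone-decreasing marginal the argument above relies on.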
4 Modeling willingness-to-pay

Figure 3: Illustrating derivations of pricing theory predictions for goods of various kinds from our model. (Panels, left to right: classical demand curve; Veblen demand curve; Giffen substitution; price anchoring.)

Having studied how our model works for choices between items that all have money labels, the logical next step is to study choices involving one item with a money label and one without, i.e., pricing. Note that asking how much someone values an option, as we did in the section above, is different from asking if they would be willing to buy it at a particular price. The former corresponds to the term p(r|x), as defined above. The latter will correspond to p(m|r, x), with m being the price the subject is willing to pay to complete the transaction. Since the contribution of all terms where r = 0, i.e. the transaction is not completed, is identically zero, this term can be computed as,

p(m|x) = \frac{\sum_{c}^{C} p(x|m)\, p(m|c)\, p(c)}{\sum_{m}^{M} \sum_{c}^{C} p(x|m)\, p(m|c)\, p(c)},    (9)

further replacing the sum over M with an integral over the real line, as in Equation 6, for analytical tractability when necessary. What aspects of pricing behavior in the real world can our model explain?
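Equation 9 is a posterior over price labels for a given item, normalized across all labels the agent knows. A minimal sketch with our own toy numbers (reusing the small label/context tables from the earlier example):

```python
def price_posterior(p_x_given_m, p_m_given_c, p_c):
    """Eq. (9): p(m|x) ∝ Σ_c p(x|m) p(m|c) p(c), normalized over labels m."""
    def joint(m):
        return sum(p_x_given_m[m] * p_m_given_c[c][m] * p_c[c] for c in p_c)
    z = sum(joint(m) for m in p_x_given_m)
    return {m: joint(m) / z for m in p_x_given_m}

# the item has been seen more often at the small label, and the small label
# is overall more frequent, so the posterior concentrates on it
p_x_given_m = {1: 0.7, 5: 0.3}
p_m_given_c = {"a": {1: 0.8, 5: 0.2}, "b": {1: 0.3, 5: 0.7}}
p_c = {"a": 0.6, "b": 0.4}
posterior = price_posterior(p_x_given_m, p_m_given_c, p_c)
```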
Interesting variations in pricing arise from assumptions about the money distribution p(m|c) and/or the price distribution p(x|m). Figure 3 illustrates our model's explanation for three prominent variations of classical demand curves documented in the microeconomics literature. Consumers typically reduce their preference for goods when prices rise, and increase it when prices drop. This fact about the structure of preferences involved in money transactions is replicated in our model (see first column in Figure 3) via the reduction/increase of the contribution of the p(m|c) term to the numerator of Equation 9. Marketing research reports anomalous pricing curves that violate this behavior in some cases. One important case comprises Veblen goods, wherein the demand for high-priced exclusive goods drops when prices are lowered. Our model explains this behavior (see second column in Figure 3) via unfamiliarity with the price, reflected in a lower contribution from the price distribution p(x|m) for such low values. Such non-monotonic preference behavior is difficult for utility-based models, but sits comfortably within ours, where familiarity with options at typical price points drives desirability. Another category of anomalous demand curves comes from Giffen goods, which rise in demand upon price increases because another substitute item becomes too expensive. Our approach accounts for this behavior (see third column in Figure 3) under the assumption that price changes affect the Giffen good less because its price distribution has a larger variance, which is in line with empirical reports showing greater price inelasticity of Giffen goods [16]. The last column in Figure 3 addresses an aspect of the temporal dynamics of our model that potentially explains both (i) why behavioral economists can continually find new anchoring results (e.g. [6, 2]) and (ii) why classical economists often consider such results to be marginal and uninteresting [17].
Behavioral scientists running experiments in labs ask subjects to exhibit preferences for which they may not have well-formed price and label distributions, which causes them to anchor and show other forms of preference instability. Economists fail to find similar results in their field studies because they collect data from subjects operating in contexts for which their price and label distributions are well-formed. Both conclusions fall out of our model of sequential preference learning, where initial samples can bias the posterior, but the long-run distribution remains stable. Parenthetically, this demonstration also renders transparent the mechanisms by which consumers process rapid inflationary episodes, stock price volatility, and transfers between multiple currency bases. In all these cases, empirical observations suggest inertia followed by adaptation, which is precisely what our model would predict.

5 Modeling risky monetary choices

Finally, we ask: how well can our model fit the choice behavior of real humans making economic decisions? The simplest economic setup for such a test is predicting choices between risky lotteries, since the human prediction is always treated as a stochastic choice preference that maps directly onto the output of our model. We use a basic expected utility calculation, where the desirability for lottery options is computed as in Equation 8. For a choice between a risky lottery x1 = {m_h, m_l} and a safe choice x2 = m_s, with a win probability q and where m_h > m_s > m_l, the value calculation for the risky option will take the form,

p(r|x) = \frac{\int_{m_s}^{m_h} p(m|c)\, p(c)}{\int_0^{\infty} p(m|c)\, p(c)},    in wins    (10)

p(r|x) = \frac{\int_{m_s}^{m_l} p(m|c)\, p(c)}{\int_0^{\infty} p(m|c)\, p(c)},    in losses    (11)

⇒ EV(risky) = q (ψ_x(m_h) − ψ_x(m_s)) + (1 − q) (ψ_x(m_l) − ψ_x(m_s)),    (12)

where ψ(·) is the c.d.f. of the Pareto distribution on monetary labels m and q is the given lottery win probability.
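Equation 12, together with the Prelec probability weighting and ε-random decision rule used in the simulations below, can be sketched as follows. The Pareto parameters and lottery amounts here are illustrative choices of ours, not the values fitted in the paper:

```python
import math
import random

def pareto_cdf(m, alpha=2.0, x_min=1.0):
    """ψ(·): Pareto c.d.f. on money labels (alpha and x_min are placeholders)."""
    return 0.0 if m <= x_min else 1.0 - (x_min / m) ** alpha

def ev_risky(q, m_h, m_s, m_l, cdf=pareto_cdf):
    """Eq. (12): q(ψ(m_h) - ψ(m_s)) + (1 - q)(ψ(m_l) - ψ(m_s))."""
    return q * (cdf(m_h) - cdf(m_s)) + (1 - q) * (cdf(m_l) - cdf(m_s))

def prelec(p, gamma=0.65):
    """Prelec inverse-S probability weighting: w(p) = exp(-(-ln p)^gamma)."""
    return math.exp(-((-math.log(p)) ** gamma))

def choose_risky(delta_ev, eps=0.25, rng=random):
    """ε-random utility maximization: with probability 1 - ε pick the option
    with the higher expected desirability, otherwise choose at random."""
    if rng.random() < eps:
        return rng.random() < 0.5
    return delta_ev > 0

# a gamble whose monetary EV beats the sure amount can still have negative
# relative desirability: the concavity of ψ produces risk aversion
edge = ev_risky(prelec(0.5), 20.0, 5.0, 1.0)

random.seed(1)
# fraction of simulated choices taking a gamble with a positive desirability edge
frac = sum(choose_risky(0.3) for _ in range(20000)) / 20000  # expected ≈ 1 - ε/2
```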
Using Equation 12, where ψ is the c.d.f. of a Pareto distribution (θ = {2.9, 0.1, 1}, fitted empirically), assuming that subjects distort perceived probabilities [18] via an inverse-S shaped weighting function⁴, and using an ϵ-random utility maximization decision rule⁵, we obtain choice predictions that match human performance (see Figure 4) on a large and comprehensive panel of risky choice experiments obtained from [19] to within statistical confidence⁶.

⁴We use Prelec's version of this function, with the slope parameter γ distributed N(0.65, 0.2) across our agent population. The quantitative values for γ are taken from (Zhang & Maloney, 2012).
⁵ϵ-random decision utility maximization is a simple way of introducing stochasticity into the decision rule, and is a common econometric practice when modeling population-level data. It predicts that subjects pick the option with higher computed expected utility with a probability 1 − ϵ, and predict randomly with a probability

Figure 4: Comparing the proportion of subjects selecting risky options predicted by our theory with data obtained in a panel of 35 different risky choice experiments. The x-axis plots the probability of the risky gamble; the y-axis plots the expected value of gambles scaled to the smallest-EV gamble. Left: Choice probabilities for the risky option plotted for 7 p values and 5 expected value levels. Each of the 35 choice experiments was conducted using between 70-100 subjects. Right: Choice probabilities predicted by relative-desirability-computing agents in the same 35 choice experiments. Results are compiled by averaging over 1000 artificial agents.

6 Conclusion

The idea that preferences about options can be directly determined psychophysically is strongly embedded in traditional computational treatments of human preferences, e.g. reinforcement learning [20].
Considerable evidence, some of which we have discussed, suggests that the brain does not, in fact, compute value [3]. In search of a viable alternative, we have demonstrated a variety of behaviors typical of value-based theories using a stochastic latent variable model that simply tracks the frequency with which options are seen to be preferred in latent contexts and then compiles this evidence in a rational Bayesian manner to emit preferences. This proposal, and its success in explaining fundamental economic concepts, situates the computation of value (as it is generally measured) within the range of abilities of neural architectures that can only represent relative frequencies, not absolute magnitudes. While our demonstrations are computationally simple, they are substantially novel. In fact, computational models explaining any of these effects even in isolation are difficult to find [1]. While the results we demonstrate are preliminary, and while some of the radical implications of our predictions about the effects of choice history on preferences ("you will hesitate in buying a Macbook for $100 because that is an unfamiliar price for it"⁷) remain to be verified, the plain ability to describe these economic concepts within an inductively rational framework, without having to invoke a psychophysical value construct, by itself constitutes a non-trivial success and forms the essential contribution of this work.

Acknowledgments

NS and PRS acknowledge funding from the Institute for New Economic Thinking. EV acknowledges funding from NSF CPS Grant #1239323.

⁵(cont.) ϵ. The value of ϵ is fitted to the data; we used ϵ = 0.25, the value that maximized our fit to the endpoints of the data. Since we are computing risk attitudes over a population, we should ideally also model stochasticity in utility computation.
⁶While [19] do not give standard deviations for their data, we assume that binary choice probabilities can be modeled by a binomial distribution, which gives us a theoretical estimate for the standard deviation expected in the data. Our optimal fits lie within 1 SD of the raw data for 34 of 35 payoff-probability combinations, yielding a fit in probability.
⁷You will! You'll think there's something wrong with it.

References

[1] M. Rabin. Psychology and economics. Journal of Economic Literature, 36(1):11–46, 1998.
[2] Dan Ariely. Predictably Irrational: The Hidden Forces That Shape Our Decisions. Harper Collins, 2009.
[3] I. Vlaev, N. Chater, N. Stewart, and G. Brown. Does the brain calculate value? Trends in Cognitive Sciences, 15(11):546–554, 2011.
[4] L. Tremblay and W. Schultz. Relative reward preference in primate orbitofrontal cortex. Nature, 398:704–708, 1999.
[5] R. Elliott, Z. Agnew, and J. F. W. Deakin. Medial orbitofrontal cortex codes relative rather than absolute value of financial rewards in humans. European Journal of Neuroscience, 27(9):2213–2218, 2008.
[6] Hilke Plassmann, John O'Doherty, Baba Shiv, and Antonio Rangel. Marketing actions can modulate neural representations of experienced pleasantness. Proceedings of the National Academy of Sciences, 105(3):1050–1054, 2008.
[7] M. Keith Chen, Venkat Lakshminarayanan, and Laurie R. Santos. How basic are behavioral biases? Evidence from capuchin monkey trading behavior. Journal of Political Economy, 114(3):517–537, 2006.
[8] N. Srivastava and P. R. Schrater. Rational inference of relative preferences. In Proceedings of Advances in Neural Information Processing Systems 25, 2012.
[9] A. Jern, C. Lucas, and C. Kemp. Evaluating the inverse decision-making approach to preference learning. In NIPS, pages 2276–2284, 2011.
[10] J. Anderson. The Adaptive Character of Thought. Erlbaum Press, 1990.
[11] John Z. Sun, Grace I. Wang, Vivek K. Goyal, and Lav R. Varshney.
A framework for Bayesian optimality of psychophysical laws. Journal of Mathematical Psychology, 56(6):495–501, 2012.
[12] Neil Stewart, Nick Chater, and Gordon D. A. Brown. Decision by sampling. Cognitive Psychology, 53(1):1–26, 2006.
[13] Christian Kleiber and Samuel Kotz. Statistical Size Distributions in Economics and Actuarial Sciences, volume 470. Wiley-Interscience, 2003.
[14] Adrian Dragulescu and Victor M. Yakovenko. Statistical mechanics of money. The European Physical Journal B: Condensed Matter and Complex Systems, 17(4):723–729, 2000.
[15] Daniel Paravisini, Veronica Rappoport, and Enrichetta Ravina. Risk aversion and wealth: Evidence from person-to-person lending portfolios. Technical report, National Bureau of Economic Research, 2010.
[16] Kris De Jaegher. Giffen behaviour and strong asymmetric gross substitutability. In New Insights into the Theory of Giffen Goods, pages 53–67. Springer, 2012.
[17] Faruk Gul and Wolfgang Pesendorfer. The case for mindless economics. The Foundations of Positive and Normative Economics, pages 3–39, 2008.
[18] D. Kahneman and A. Tversky. Prospect theory: An analysis of decision under risk. Econometrica, 47:263–291, 1979.
[19] Pedro Bordalo, Nicola Gennaioli, and Andrei Shleifer. Salience theory of choice under risk. The Quarterly Journal of Economics, 127(3):1243–1285, 2012.
[20] Richard S. Sutton and Andrew G. Barto. Introduction to Reinforcement Learning. MIT Press, 1998.
On a Theory of Nonparametric Pairwise Similarity for Clustering: Connecting Clustering to Classification

Yingzhen Yang1 Feng Liang1 Shuicheng Yan2 Zhangyang Wang1 Thomas S. Huang1
1 University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA {yyang58,liangf,zwang119,t-huang1}@illinois.edu
2 National University of Singapore, Singapore, 117576 eleyans@nus.edu.sg

Abstract

Pairwise clustering methods partition the data space into clusters using the pairwise similarity between data points. The success of pairwise clustering largely depends on the pairwise similarity function defined over the data points, where kernel similarity is broadly used. In this paper, we present a novel pairwise clustering framework that bridges the gap between clustering and multi-class classification. This pairwise clustering framework learns an unsupervised nonparametric classifier from each data partition, and searches for the optimal partition of the data by minimizing the generalization error of the learned classifiers associated with the data partitions. We consider two nonparametric classifiers in this framework, i.e. the nearest neighbor classifier and the plug-in classifier. Modeling the underlying data distribution by nonparametric kernel density estimation, the generalization error bounds for both unsupervised nonparametric classifiers are sums of nonparametric pairwise similarity terms between the data points, suitable for the purpose of clustering. Under a uniform distribution, the nonparametric similarity terms induced by both unsupervised classifiers exhibit a well-known form of kernel similarity. We also prove that the generalization error bound for the unsupervised plug-in classifier is asymptotically equal to the weighted volume of cluster boundary [1] for Low Density Separation, a widely used criterion for semi-supervised learning and clustering.
Based on the derived nonparametric pairwise similarity using the plug-in classifier, we propose a new nonparametric exemplar-based clustering method with enhanced discriminative capability, whose superiority is evidenced by the experimental results.

1 Introduction

Pairwise clustering methods partition the data into a set of self-similar clusters based on the pairwise similarity between the data points. Representative clustering methods include K-means [2], which minimizes the within-cluster dissimilarities; spectral clustering [3], which identifies clusters of more complex shapes lying on low-dimensional manifolds; and the pairwise clustering method [4], which uses a message-passing algorithm to infer the cluster labels in a pairwise undirected graphical model. By utilizing pairwise similarity, these pairwise clustering methods often avoid estimating complex hidden variables or parameters, which is difficult for high-dimensional data. However, most pairwise clustering methods assume that the pairwise similarity is given [2, 3], or they learn a more complicated similarity measure based on several given base similarities [4]. In this paper, we present a new framework for pairwise clustering where the pairwise similarity is derived as the generalization error bound for an unsupervised nonparametric classifier. The unsupervised classifier is learned from unlabeled data and a hypothetical labeling. The quality of the hypothetical labeling is measured by the associated generalization error of the learned classifier, and the hypothetical labeling with the minimum associated generalization error bound is preferred. We consider two nonparametric classifiers, i.e. the nearest neighbor classifier (NN) and the plug-in classifier (or the kernel density classifier). The generalization error bounds for both unsupervised classifiers are expressed as sums of pairwise terms between the data points, which can be interpreted as a nonparametric pairwise similarity measure between the data points.
Under a uniform distribution, both nonparametric similarity measures exhibit a well-known form of kernel similarity. We also prove that the generalization error bound for the unsupervised plug-in classifier is asymptotically equal to the weighted volume of cluster boundary [1] for Low Density Separation, a widely used criterion for semi-supervised learning and clustering. Our work is closely related to discriminative clustering methods based on unsupervised classification, which search for cluster boundaries with the help of an unsupervised classifier. For example, [5] learns a max-margin two-class classifier to group unlabeled data in an unsupervised manner, known as unsupervised SVM, whose theoretical properties are further analyzed in [6]. Also, [7] learns a kernel logistic regression classifier, and uses the entropy of the posterior distribution of the class label under the classifier to measure the quality of the learned classifier. More recent work presented in [8] learns an unsupervised classifier by maximizing the mutual information between cluster labels and the data, where the Squared-Loss Mutual Information is employed to produce a convex optimization problem. Although such discriminative methods produce satisfactory empirical results, the optimization of complex parameters hampers their application to high-dimensional data. Following the same principle of unsupervised classification with nonparametric classifiers, we derive a nonparametric pairwise similarity and eliminate the need to estimate complicated parameters of the unsupervised classifier. As an application, we develop a new nonparametric exemplar-based clustering method with the derived nonparametric pairwise similarity induced by the plug-in classifier, and our new method demonstrates better empirical clustering results than existing exemplar-based clustering methods. It should be emphasized that our generalization bounds are essentially different from those in the literature.
As nonparametric classification methods, the generalization properties of the nearest neighbor classifier (NN) and the plug-in classifier have been extensively studied. Previous research focuses on the average generalization error of the NN [9, 10], which is the average error of the NN over all random training data sets, or on the excess risk of the plug-in classifier [11, 12]. In [9], it is shown that the average generalization error of the NN is bounded by twice the Bayes error. Assuming that the class of regression functions has a smoothness parameter β, [11] proves that the excess risk of the plug-in classifier converges to 0 at the rate n^{−β/(2β+d)}, where d is the dimension of the data. [12] further shows that the plug-in classifier attains a faster convergence rate of the excess risk, namely n^{−1/2}, under a margin assumption on the data distribution. All these generalization error bounds depend on the unknown Bayes error. By virtue of kernel density estimation and generalized kernel density estimation [13], our generalization bounds are expressed mostly in terms of the data, leading to the pairwise similarities for clustering.

2 Formulation of Pairwise Clustering by Unsupervised Nonparametric Classification

The discriminative clustering literature [5, 7] has demonstrated the potential of multi-class classification for the clustering problem. Inspired by this natural connection between clustering and classification, we model the clustering problem as a multi-class classification problem: a classifier is learned from training data built from a hypothetical labeling, which is a possible cluster labeling. The optimal hypothetical labeling is taken to be the one such that its associated classifier has the minimum generalization error bound. To study the generalization bound for the classifier learned from the hypothetical labeling, we define the concept of a classification model.
Given unlabeled data {x_l}_{l=1}^n, a classification model M_Y is constructed for any hypothetical labeling Y = {y_l}_{l=1}^n as below:

Definition 1. The classification model corresponding to the hypothetical labeling Y = {y_l}_{l=1}^n is defined as M_Y = (S, P_{XY}, {π^{(i)}, f^{(i)}}_{i=1}^Q, F). S = {x_l, y_l}_{l=1}^n are the data labeled by the hypothetical labeling, and S are assumed to be i.i.d. samples drawn from the joint distribution P_{XY} = P_{X|Y} P_Y, where (X, Y) is a random couple, X ∈ IR^d represents the data, Y ∈ {1, 2, ..., Q} is the class label of X, and Q is the number of classes determined by the hypothetical labeling. Furthermore, P_{XY} is specified by {π^{(i)}, f^{(i)}}_{i=1}^Q as follows: π^{(i)} is the class prior for class i, i.e. Pr[Y = i] = π^{(i)}; the conditional distribution P_{X|Y=i} has probability density function f^{(i)}, i = 1, ..., Q. F is a classifier trained using the training data S.

The generalization error of the classification model M_Y is defined as the generalization error of the classifier F in M_Y. In this paper, we study two types of classification models, with the nearest neighbor classifier and the plug-in classifier respectively, and derive their generalization error bounds as sums of pairwise similarities between the data. Given a specific type of classification model, the optimal hypothetical labeling corresponds to the classification model with the minimum generalization error bound. The optimal hypothetical labeling also generates a data partition where the sum of pairwise similarity between data from different clusters is minimized, which is a common criterion for discriminative clustering. In the following text, we derive the generalization error bounds for the two types of classification models. Before that, we introduce more notation and assumptions for the classification model. Denote by P_X the induced marginal distribution of X, and let f be the probability density function of P_X, which is a mixture of Q class-conditional densities:

f = \sum_{i=1}^{Q} π^{(i)} f^{(i)}.
$\eta^{(i)}(x)$ is the regression function of $Y$ on $X = x$, i.e.
$$\eta^{(i)}(x) = \Pr[Y = i \mid X = x] = \frac{\pi^{(i)} f^{(i)}(x)}{f(x)}.$$
For the sake of the consistency of the kernel density estimators used in the sequel, we make further assumptions on the marginal density and the class-conditional densities in the classification model for any hypothetical labeling:

(A) $f$ is bounded from below, i.e. $f \ge f_{\min} > 0$;
(B) $\{f^{(i)}\}$ are bounded from above, i.e. $f^{(i)} \le f^{(i)}_{\max}$, and $f^{(i)} \in \Sigma_{\gamma, c_i}$, $1 \le i \le Q$,

where $\Sigma_{\gamma,c}$ is the class of Hölder-$\gamma$ smooth functions with Hölder constant $c$:
$$\Sigma_{\gamma,c} \triangleq \{f \colon \mathbb{R}^d \to \mathbb{R} \mid \forall x, y,\ |f(x) - f(y)| \le c\|x - y\|^{\gamma}\}, \quad \gamma > 0.$$
It follows from assumption (B) that $f \in \Sigma_{\gamma,c}$ with $c = \sum_i \pi^{(i)} c_i$. Assumptions (A) and (B) are mild. The upper bound on the density functions is widely required for the consistency of kernel density estimators [14, 15]; Hölder-$\gamma$ smoothness is required to bound the bias of such estimators, and it also appears in [12] for estimating the excess risk of the plug-in classifier. The lower bound on the marginal density is used to derive the consistency of the estimator of the regression function $\eta^{(i)}$ (Lemma 2) and the consistency of the generalized kernel density estimator (Lemma 3). We denote by $\mathcal{P}_X$ the collection of marginal distributions that satisfy assumption (A), and by $\mathcal{P}_{X|Y}$ the collection of class-conditional distributions that satisfy assumption (B).
We then define the collection $\mathcal{P}_{XY}$ of joint distributions to which $P_{XY}$ belongs, which requires the marginal density and the class-conditional densities to satisfy assumptions (A)-(B):
$$\mathcal{P}_{XY} \triangleq \{P_{XY} \mid P_X \in \mathcal{P}_X,\ \{P_{X|Y=i}\} \in \mathcal{P}_{X|Y},\ \min_i \{\pi^{(i)}\} > 0\}. \quad (1)$$
Given the joint distribution $P_{XY}$, the generalization error of the classifier $F$ learned from the training data $S$ is
$$R(F_S) \triangleq \Pr[(X, Y) : F(X) \ne Y]. \quad (2)$$
The nonparametric kernel density estimator (KDE) serves as the primary tool for estimating the underlying probability density functions in our generalization analysis; the KDE of $f$ is
$$\hat{f}_{n,h_n}(x) = \frac{1}{n} \sum_{l=1}^n K_{h_n}(x - x_l), \quad (3)$$
where $K_h(x) = \frac{1}{h^d} K(\frac{x}{h})$ is the isotropic Gaussian kernel with bandwidth $h$ and $K(x) \triangleq \frac{1}{(2\pi)^{d/2}} e^{-\frac{\|x\|^2}{2}}$. We have the following VC property of the Gaussian kernel $K$. Define the class of functions
$$\mathcal{F} \triangleq \left\{K\left(\tfrac{t - \cdot}{h}\right),\ t \in \mathbb{R}^d,\ h \ne 0\right\}. \quad (4)$$
The VC property appears in [14, 15, 16, 17, 18], where it is proved that $\mathcal{F}$ is a bounded VC class of measurable functions with respect to an envelope function $F$ such that $|u| \le F$ for any $u \in \mathcal{F}$ (e.g. $F \equiv (2\pi)^{-d/2}$). It follows that there exist positive numbers $A$ and $v$ such that for every probability measure $P$ on $\mathbb{R}^d$ for which $\int F^2 \, dP < \infty$ and any $0 < \tau < 1$,
$$N\left(\mathcal{F}, \|\cdot\|_{L_2(P)}, \tau \|F\|_{L_2(P)}\right) \le \left(\frac{A}{\tau}\right)^v, \quad (5)$$
where $N(\mathcal{T}, \hat{d}, \epsilon)$ is defined as the minimal number of open $\hat{d}$-balls of radius $\epsilon$ required to cover $\mathcal{T}$ in the metric space $(\mathcal{T}, \hat{d})$. $A$ and $v$ are called the VC characteristics of $\mathcal{F}$. The VC property of $K$ is required for the consistency of the kernel density estimators shown in Lemma 2. Also, we adopt the following kernel estimator of $\eta^{(i)}$:
$$\hat{\eta}^{(i)}_{n,h_n}(x) = \frac{\sum_{l=1}^n K_{h_n}(x - x_l)\, \mathbb{1}_{\{y_l = i\}}}{n \hat{f}_{n,h_n}(x)}. \quad (6)$$
Before stating Lemma 2, we introduce several frequently used quantities. Let $L, C > 0$ be constants which depend only on the VC characteristics of the Gaussian kernel $K$.
We define
$$f_0 \triangleq \sum_{i=1}^Q \pi^{(i)} f^{(i)}_{\max}, \qquad \sigma_0^2 \triangleq \|K\|_2^2 f_0. \quad (7)$$
Also, for all positive numbers $\lambda \ge C$ and $\sigma > 0$, we define
$$E_{\sigma^2} \triangleq \frac{\log(1 + \lambda/4L)}{\lambda L \sigma^2}. \quad (8)$$
Based on Corollary 2.2 in [14], Lemma 2 and Lemma 3 in the Appendix (a more complete version is in the supplementary) show the strong consistency (almost sure uniform convergence) of several kernel density estimators, namely $\hat{f}_{n,h_n}$, $\{\hat{\eta}^{(i)}_{n,h_n}\}$ and the generalized kernel density estimator; they form the basis for the derivation of the generalization error bounds for the two types of classification models.

3 Generalization Bounds

We derive the generalization error bounds for the two types of classification models, with the nearest neighbor classifier and the plug-in classifier respectively. Substituting kernel density estimators for the corresponding true density functions, Theorem 1 and Theorem 2 present the generalization error bounds for the classification models with the plug-in classifier and the nearest neighbor classifier. The dominant terms of both bounds are expressed as sums of pairwise similarities depending solely on the data, which facilitates the application to clustering. We also show the connection between the error bound for the plug-in classifier and Low Density Separation in this section. The detailed proofs are included in the supplementary.

3.1 Generalization Bound for the Classification Model with Plug-In Classifier

The plug-in classifier resembles the Bayes classifier but uses the kernel density estimator of the regression function $\eta^{(i)}$ instead of the true $\eta^{(i)}$. It has the form
$$PI(X) = \arg\max_{1 \le i \le Q} \hat{\eta}^{(i)}_{n,h_n}(X), \quad (9)$$
where $\hat{\eta}^{(i)}_{n,h_n}$ is the nonparametric kernel estimator of the regression function $\eta^{(i)}$ given by (6). The generalization capability of the plug-in classifier has been studied in the literature [11, 12].
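To make the estimators concrete, the KDE (3), the regression estimator (6), and the plug-in rule (9) can be sketched numerically. This is only a minimal illustration under the paper's Gaussian-kernel assumption, not the authors' implementation; all function names below are mine.

```python
import numpy as np

def kde(x, data, h):
    """KDE (3): f_hat_{n,h}(x) = (1/n) * sum_l K_h(x - x_l), isotropic Gaussian K."""
    n, d = data.shape
    z = (x - data) / h
    k = np.exp(-0.5 * np.sum(z**2, axis=1)) / (2 * np.pi)**(d / 2)
    return np.mean(k) / h**d

def eta_hat(x, data, labels, i, h):
    """Regression estimator (6); the common (2*pi)^{d/2} h^d factor cancels
    between numerator and denominator, leaving a kernel-weighted class fraction."""
    z = (x - data) / h
    k = np.exp(-0.5 * np.sum(z**2, axis=1))
    return np.sum(k * (labels == i)) / np.sum(k)

def plug_in(x, data, labels, h):
    """Plug-in rule (9): predict the class with the largest eta_hat."""
    classes = np.unique(labels)
    scores = [eta_hat(x, data, labels, c, h) for c in classes]
    return classes[int(np.argmax(scores))]
```

Note that the estimates $\hat\eta^{(i)}$ sum to one over the classes by construction, consistent with their interpretation as estimates of $\Pr[Y = i \mid X = x]$.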
Let $F^*$ be the Bayes classifier. It is proved that the excess risk of $PI_S$, namely $\mathbb{E}_S R(PI_S) - R(F^*)$, converges to $0$ at the rate $n^{-\beta/(2\beta+d)}$ under a complexity assumption, with smoothness parameter $\beta$, on the class of regression functions to which $\{\eta^{(i)}\}$ belong [11, 12]. However, this result cannot be used to derive a generalization error bound for the plug-in classifier comprising nonparametric pairwise similarities in our setting. We give an upper bound for the generalization error of $PI_S$ in Lemma 1.

Lemma 1. For any $P_{XY} \in \mathcal{P}_{XY}$, there exists an $n_0$ which depends on $\sigma_0$ and the VC characteristics of $K$ such that, for $n > n_0$, with probability greater than $1 - 2QL h_n^{E_{\sigma_0^2}}$, the generalization error of the plug-in classifier satisfies
$$R(PI_S) \le R^{PI}_n + O\left(\sqrt{\frac{\log h_n^{-1}}{n h_n^d}} + h_n^{\gamma}\right), \quad (10)$$
$$R^{PI}_n = \sum_{\substack{i,j = 1,\ldots,Q \\ i \ne j}} \mathbb{E}_X\left[\hat{\eta}^{(i)}_{n,h_n}(X)\, \hat{\eta}^{(j)}_{n,h_n}(X)\right], \quad (11)$$
where $E_{\sigma^2}$ is defined by (8), $h_n$ is chosen such that $h_n \to 0$ and $\frac{\log h_n^{-1}}{n h_n^d} \to 0$, and $\hat{\eta}^{(i)}_{n,h_n}$ is the kernel estimator of the regression function. Moreover, the equality in (10) holds when $\hat{\eta}^{(i)}_{n,h_n} \equiv \frac{1}{Q}$ for $1 \le i \le Q$.

Based on Lemma 1, we can bound the error of the plug-in classifier from above by $R^{PI}_n$. Theorem 1 then gives the bound for the error of the plug-in classifier in the corresponding classification model, using the generalized kernel density estimator of Lemma 3. The bound takes the form of a sum of pairwise similarities between data from different classes. Theorem 1.
(Error of the Plug-In Classifier) Given the classification model $\mathcal{M}_Y = (S, P_{XY}, \{\pi^{(i)}, f^{(i)}\}_{i=1}^Q, PI)$ such that $P_{XY} \in \mathcal{P}_{XY}$, there exists an $n_1$ which depends on $\sigma_0$, $\sigma_1$ and the VC characteristics of $K$ such that, for $n > n_1$, with probability greater than $1 - 2QL h_n^{E_{\sigma_0^2}} - QL h_n^{E_{\sigma_1^2}}$, the generalization error of the plug-in classifier satisfies
$$R(PI_S) \le \hat{R}_n(PI_S) + O\left(\sqrt{\frac{\log h_n^{-1}}{n h_n^d}} + h_n^{\gamma}\right), \quad (12)$$
where $\hat{R}_n(PI_S) = \frac{1}{n^2} \sum_{l,m} \theta_{lm} G_{lm,\sqrt{2}h_n}$, $\sigma_1^2 = \frac{\|K\|_2^2 f_{\max}}{f_{\min}}$, $\theta_{lm} = \mathbb{1}_{\{y_l \ne y_m\}}$ is a class indicator function, and
$$G_{lm,h} = G_h(x_l, x_m), \qquad G_h(x, y) = \frac{K_h(x - y)}{\hat{f}^{1/2}_{n,h}(x)\, \hat{f}^{1/2}_{n,h}(y)}, \quad (13)$$
$E_{\sigma^2}$ is defined by (8), $h_n$ is chosen such that $h_n \to 0$ and $\frac{\log h_n^{-1}}{n h_n^d} \to 0$, and $\hat{f}_{n,h_n}$ is the kernel density estimator of $f$ defined by (3). $\hat{R}_n$ is the dominant term, determined solely by the data, and the excess error $O\big(\sqrt{\frac{\log h_n^{-1}}{n h_n^d}} + h_n^{\gamma}\big)$ goes to $0$ as $n \to \infty$. In the following subsection, we show the close connection between the error bound for the plug-in classifier and the weighted volume of the cluster boundary; the latter is proposed by [1] for Low Density Separation.

3.1.1 Connection to Low Density Separation

Low Density Separation [19], a well-known criterion for clustering, requires that the cluster boundary pass through regions of low density. It has been extensively studied in unsupervised and semi-supervised learning [20, 21, 22]. Suppose the data $\{x_l\}_{l=1}^n$ lie on a domain $\Omega \subseteq \mathbb{R}^d$. Let $f$ be the probability density function on $\Omega$ and $S$ be the cluster boundary separating $\Omega$ into two parts $S_1$ and $S_2$. Following the Low Density Separation assumption, [1] suggests that a cluster boundary $S$ with low weighted volume $\int_S f(s)\,ds$ should be preferred. [1] also proves that a particular type of cut function converges to the weighted volume of $S$. Based on their study, we obtain the following result relating the error of the plug-in classifier to the weighted volume of the cluster boundary. Corollary 1.
Under the assumptions of Theorem 1, for any kernel bandwidth sequence $\{h_n\}_{n=1}^{\infty}$ such that $\lim_{n \to \infty} h_n = 0$ and $h_n > n^{-\frac{1}{4d+4}}$, with probability $1$,
$$\lim_{n \to \infty} \frac{\sqrt{\pi}}{2 h_n} \hat{R}_n(PI_S) = \int_S f(s)\,ds. \quad (14)$$

3.2 Generalization Bound for the Classification Model with Nearest Neighbor Classifier

Theorem 2 gives the generalization error bound for the classification model with the nearest neighbor classifier (NN), which has a form similar to (12).

Theorem 2. (Error of the NN) Given the classification model $\mathcal{M}_Y = (S, P_{XY}, \{\pi^{(i)}, f^{(i)}\}_{i=1}^Q, NN)$ such that $P_{XY} \in \mathcal{P}_{XY}$ and the support of $P_X$ is bounded by $[-M_0, M_0]^d$, there exists an $n_0$ which depends on $\sigma_0$ and the VC characteristics of $K$ such that, for $n > n_0$, with probability greater than $1 - 2QL h_n^{E_{\sigma_0^2}} - (2M_0)^d n^{d d_0} e^{-n^{1 - d d_0} f_{\min}}$, the generalization error of the NN satisfies
$$R(NN_S) \le \hat{R}_n(NN_S) + c_0 (\sqrt{d})^{\gamma} n^{-d_0 \gamma} + O\left(\sqrt{\frac{\log h_n^{-1}}{n h_n^d}} + h_n^{\gamma}\right), \quad (15)$$
where $\hat{R}_n(NN_S) = \frac{1}{n} \sum_{1 \le l < m \le n} H_{lm,h_n} \theta_{lm}$,
$$H_{lm,h_n} = K_{h_n}(x_l - x_m) \left( \frac{\int_{V_l} \hat{f}_{n,h_n}(x)\,dx}{\hat{f}_{n,h_n}(x_l)} + \frac{\int_{V_m} \hat{f}_{n,h_n}(x)\,dx}{\hat{f}_{n,h_n}(x_m)} \right), \quad (16)$$
$E_{\sigma^2}$ is defined by (8), $d_0$ is a constant such that $d d_0 < 1$, $\hat{f}_{n,h_n}$ is the kernel density estimator of $f$ defined by (3) with bandwidth $h_n$ satisfying $h_n \to 0$ and $\frac{\log h_n^{-1}}{n h_n^d} \to 0$, $V_l$ is the Voronoi cell associated with $x_l$, $c_0$ is a constant, and $\theta_{lm} = \mathbb{1}_{\{y_l \ne y_m\}}$ is a class indicator function such that $\theta_{lm} = 1$ if $x_l$ and $x_m$ belong to different classes and $0$ otherwise. Moreover, the equality in (15) holds when $\eta^{(i)} \equiv \frac{1}{Q}$ for $1 \le i \le Q$.

$G_{lm,\sqrt{2}h_n}$ in (13) and $H_{lm,h_n}$ in (16) are the new pairwise similarity functions induced by the plug-in classifier and the nearest neighbor classifier respectively. According to the proofs of Theorem 1 and Theorem 2, the kernel density estimator $\hat{f}$ can be replaced by the true density $f$ in the denominators of (13) and (16), and the conclusions of Theorems 1 and 2 still hold.
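As a concrete illustration of the plug-in similarity (13) and the dominant term $\hat{R}_n(PI_S)$ in (12), the following is a minimal sketch with my own function names; the Gaussian kernel is coded directly from its definition, and the NN similarity (16), which additionally needs Voronoi-cell integrals, is omitted.

```python
import numpy as np

def plugin_similarity(data, h):
    """Pairwise similarity G_{lm,h} of (13): Gaussian kernel K_h(x_l - x_m)
    normalized by the square roots of the KDE at the two endpoints."""
    n, d = data.shape
    sq = np.sum((data[:, None, :] - data[None, :, :])**2, axis=-1)
    K = np.exp(-0.5 * sq / h**2) / ((2 * np.pi)**(d / 2) * h**d)
    f_hat = K.mean(axis=1)                      # KDE (3) evaluated at each x_l
    return K / np.sqrt(np.outer(f_hat, f_hat))

def plugin_bound(data, labels, h):
    """Dominant term R_hat_n(PI_S) = (1/n^2) sum_{l,m} theta_lm * G_{lm,h}."""
    G = plugin_similarity(data, h)
    theta = (labels[:, None] != labels[None, :]).astype(float)
    return float((theta * G).sum()) / len(data)**2
```

Consistent with the discussion above, a labeling that cuts only low-density regions yields a small value of this dominant term, while a labeling that splits dense regions yields a large one.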
Therefore, both $G_{lm,\sqrt{2}h_n}$ and $H_{lm,h_n}$ reduce to ordinary Gaussian kernels (up to scale), with different kernel bandwidths, under a uniform distribution, which explains the widely used kernel similarity in data clustering from the angle of supervised learning.

4 Application to Exemplar-Based Clustering

We propose a nonparametric exemplar-based clustering algorithm using the nonparametric pairwise similarity derived from the plug-in classifier. In exemplar-based clustering, each $x_l$ is associated with a cluster indicator $e_l$ ($l \in \{1, 2, \ldots, n\}$, $e_l \in \{1, 2, \ldots, n\}$), indicating that $x_l$ takes $x_{e_l}$ as its cluster exemplar. Data from the same cluster share the same cluster exemplar. We define $e \triangleq \{e_l\}_{l=1}^n$. A configuration of the cluster indicators $e$ is consistent iff $e_l = l$ whenever $e_m = l$ for any $l, m \in \{1, \ldots, n\}$, meaning that $x_l$ must take itself as its exemplar if any $x_m$ takes $x_l$ as its exemplar. The cluster indicators $e$ are required to always be consistent. Affinity Propagation (AP) [23], a representative exemplar-based clustering method, solves the following optimization problem:
$$\min_e \sum_{l=1}^n S_{l,e_l} \quad \text{s.t. } e \text{ is consistent}, \quad (17)$$
where $S_{l,e_l}$ is the dissimilarity between $x_l$ and $x_{e_l}$; note that $S_{l,l}$ is set to be nonzero to avoid the trivial minimizer of (17). We now aim to improve the discriminative capability of the exemplar-based clustering (17) using the nonparametric pairwise similarity derived from the unsupervised plug-in classifier. As mentioned before, the quality of a hypothetical labeling $\hat{y}$ is evaluated by the generalization error bound for the nonparametric plug-in classifier trained on $S_{\hat{y}}$, and the hypothetical labeling $\hat{y}$ with the minimum associated error bound is preferred, i.e. $\arg\min_{\hat{y}} \hat{R}_n(PI_S) = \arg\min_{\hat{y}} \sum_{l,m} \theta_{lm} G_{lm,\sqrt{2}h_n}$, where $\theta_{lm} = \mathbb{1}_{\{\hat{y}_l \ne \hat{y}_m\}}$ and $G_{lm,\sqrt{2}h_n}$ is defined in (13). By Lemma 3, minimizing $\sum_{l,m} \theta_{lm} G_{lm,\sqrt{2}h_n}$ also asymptotically minimizes the weighted volume of the cluster boundary.
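The consistency constraint on $e$ and the objective of (17) can be sketched in a few lines; this is a toy illustration with my own function names, where `S` is a given dissimilarity matrix.

```python
def is_consistent(e):
    """Consistency of (17): e[m] = l implies e[l] = l, i.e. every
    chosen exemplar must choose itself."""
    return all(e[e[l]] == e[l] for l in range(len(e)))

def ap_objective(e, S):
    """AP objective (17): sum_l S[l][e[l]] over a consistent assignment e."""
    assert is_consistent(e), "assignment violates the consistency constraint"
    return sum(S[l][e[l]] for l in range(len(e)))
```

For example, `e = [0, 0, 2, 2]` is consistent (points 0 and 2 are exemplars, each choosing itself), while `e = [1, 0]` is not, since each point names the other as exemplar.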
To avoid the trivial clustering where all data are grouped into a single cluster, we use the sum of within-cluster dissimilarities, $\sum_{l=1}^n \exp(-G_{l e_l, \sqrt{2}h_n})$, to control the size of the clusters. The objective function of our pairwise clustering method is therefore
$$\Psi(e) = \sum_{l=1}^n \exp\left(-G_{l e_l, \sqrt{2}h_n}\right) + \lambda \sum_{l,m} \left( \tilde{\theta}_{lm} G_{lm,\sqrt{2}h_n} + \rho_{lm}(e_l, e_m) \right), \quad (18)$$
where $\rho_{lm}$ is a function enforcing the consistency of the cluster indicators,
$$\rho_{lm}(e_l, e_m) = \begin{cases} \infty & e_m = l, e_l \ne l \ \text{ or } \ e_l = m, e_m \ne m, \\ 0 & \text{otherwise}, \end{cases}$$
and $\lambda$ is a balancing parameter. Owing to the form of (18), we construct a pairwise Markov Random Field (MRF) representing the unary term $u_l$ and the pairwise term $\tilde{\theta}_{lm} G_{lm,\sqrt{2}h_n} + \rho_{lm}$ as the data likelihood and the prior respectively. The variables $e$ are modeled as nodes, and the unary and pairwise terms in (18) are modeled as potential functions in the pairwise MRF. The minimization of the objective function is then converted to a MAP (maximum a posteriori) problem in the pairwise MRF. (18) is minimized by max-product Belief Propagation (BP). The computational complexity of our clustering algorithm is $O(TEN)$, where $E$ is the number of edges in the pairwise MRF and $T$ is the number of iterations of message passing in the BP algorithm. We call our new algorithm Plug-In Exemplar Clustering (PIEC), and compare it to representative exemplar-based clustering methods, namely AP and Convex Clustering with Exemplar-Based Model (CEB) [24], on three real data sets from the UCI repository: Iris, Vertebral Column (VC) and Breast Tissue (BT). We record the average clustering accuracy (AC) and its standard deviation for each exemplar-based clustering method whenever it produces the correct number of clusters, for each data set over different values of $h_n$ and $\lambda$; the results are shown in Table 1. Although AP produces better clustering accuracy on the VC data set, PIEC generates the correct cluster number far more often.
A dash in Table 1 indicates that the corresponding clustering method cannot produce the correct cluster number. The default value for the kernel bandwidth $h_n$ is $h_n^*$, set to the variance of the pairwise distances between data points, $\{\|x_l - x_m\|\}_{l<m}$. The default value for the balancing parameter $\lambda$ is $1$. We let $h_n = \alpha h_n^*$; $\lambda$ varies over $[0.2, 1]$ and $\alpha$ over $[0.2, 1.9]$, with steps $0.2$ and $0.05$ respectively, resulting in $170$ different parameter settings. We also generate the same number of parameter settings for AP and CEB.

Table 1: Comparison between exemplar-based clustering methods. The number in brackets is the number of times the corresponding algorithm produces the correct cluster number.

Data sets | Iris                 | VC                   | BT
AP        | 0.8933 ± 0.0138 (16) | 0.6677 (14)          | 0.4906 (1)
CEB       | 0.6929 ± 0.0168 (15) | 0.4748 ± 0.0014 (5)  | 0.3868 ± 0.08 (2)
PIEC      | 0.9089 ± 0.0033 (15) | 0.5263 ± 0.0173 (35) | 0.6585 ± 0.0103 (5)

5 Conclusion

We propose a new pairwise clustering framework in which a nonparametric pairwise similarity is derived by minimizing the generalization error bound of an unsupervised nonparametric classifier. Our framework bridges the gap between clustering and multi-class classification, and explains the widely used kernel similarity for clustering. In addition, we prove that the generalization error bound for the unsupervised plug-in classifier is asymptotically equal to the weighted volume of the cluster boundary for Low Density Separation. Based on the nonparametric pairwise similarity derived using the plug-in classifier, we propose a new nonparametric exemplar-based clustering method with enhanced discriminative capability compared to existing exemplar-based clustering methods.

Appendix

Lemma 2. (Consistency of the Kernel Density Estimator) Let the kernel bandwidth $h_n$ of the Gaussian kernel $K$ be chosen such that $h_n \to 0$ and $\frac{\log h_n^{-1}}{n h_n^d} \to 0$.
For any $P_X \in \mathcal{P}_X$, there exists an $n_0$ which depends on $\sigma_0$ and the VC characteristics of $K$ such that, for $n > n_0$, with probability greater than $1 - L h_n^{E_{\sigma_0^2}}$ over the data $\{x_l\}$,
$$\left\|\hat{f}_{n,h_n}(x) - f(x)\right\|_{\infty} = O\left(\sqrt{\frac{\log h_n^{-1}}{n h_n^d}} + h_n^{\gamma}\right), \quad (19)$$
where $\hat{f}_{n,h_n}$ is the kernel density estimator of $f$. Furthermore, for any $P_{XY} \in \mathcal{P}_{XY}$, when $n > n_0$, with probability greater than $1 - 2L h_n^{E_{\sigma_0^2}}$ over the data $\{x_l\}$,
$$\left\|\hat{\eta}^{(i)}_{n,h_n}(x) - \eta^{(i)}(x)\right\|_{\infty} = O\left(\sqrt{\frac{\log h_n^{-1}}{n h_n^d}} + h_n^{\gamma}\right) \quad (20)$$
for each $1 \le i \le Q$.

Lemma 3. (Consistency of the Generalized Kernel Density Estimator) Suppose $f$ is the probability density function of $P_X \in \mathcal{P}_X$. Let $g$ be a bounded function defined on $\mathcal{X}$ with $g \in \Sigma_{\gamma, g_0}$ and $0 < g_{\min} \le g \le g_{\max}$, and let $e = \frac{f}{g}$. Define the generalized kernel density estimator of $e$ as
$$\hat{e}_{n,h} \triangleq \frac{1}{n} \sum_{l=1}^n \frac{K_h(x - x_l)}{g(x_l)}. \quad (21)$$
Let $\sigma_g^2 = \frac{\|K\|_2^2 f_{\max}}{g_{\min}^2}$. There exists an $n_g$ which depends on $\sigma_g$ and the VC characteristics of $K$ such that, for $n > n_g$, with probability greater than $1 - L h_n^{E_{\sigma_g^2}}$ over the data $\{x_l\}$,
$$\|\hat{e}_{n,h_n}(x) - e(x)\|_{\infty} = O\left(\sqrt{\frac{\log h_n^{-1}}{n h_n^d}} + h_n^{\gamma}\right), \quad (22)$$
where $h_n$ is chosen such that $h_n \to 0$ and $\frac{\log h_n^{-1}}{n h_n^d} \to 0$.

Sketch of proof: For fixed $h \ne 0$, consider the class of functions
$$\mathcal{F}_g \triangleq \left\{K\left(\tfrac{t - \cdot}{h}\right) / g(\cdot),\ t \in \mathbb{R}^d\right\}.$$
It can be verified that $\mathcal{F}_g$ is also a bounded VC class with envelope function $F_g = \frac{F}{g_{\min}}$ and
$$N\left(\mathcal{F}_g, \|\cdot\|_{L_2(P)}, \tau \|F_g\|_{L_2(P)}\right) \le \left(\frac{A}{\tau}\right)^v. \quad (23)$$
Then (22) follows by an argument similar to that in the proof of Lemma 2 and Corollary 2.2 in [14]. The generalized kernel density estimator (21) is also used in [13] to estimate the Laplacian PDF distance between two probability density functions; the authors of [13] provide only a proof of pointwise weak consistency of this estimator. Under mild conditions, our Lemma 3 and Lemma 2 show the strong consistency of the generalized kernel density estimator and the traditional kernel density estimator under the same theoretical framework of the VC property of the kernel.

Acknowledgements. This material is based upon work supported by the National Science Foundation under Grant No.
1318971.

References

[1] Hariharan Narayanan, Mikhail Belkin, and Partha Niyogi. On the relation between low density separation, spectral clustering and graph cuts. In NIPS, pages 1025–1032, 2006.
[2] J. A. Hartigan and M. A. Wong. A K-means clustering algorithm. Applied Statistics, 28:100–108, 1979.
[3] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001.
[4] Noam Shental, Assaf Zomet, Tomer Hertz, and Yair Weiss. Pairwise clustering and graphical models. In NIPS, 2003.
[5] Linli Xu, James Neufeld, Bryce Larson, and Dale Schuurmans. Maximum margin clustering. In NIPS, 2004.
[6] Zohar Karnin, Edo Liberty, Shachar Lovett, Roy Schwartz, and Omri Weinstein. Unsupervised SVMs: On the complexity of the furthest hyperplane problem. Journal of Machine Learning Research - Proceedings Track, 23:2.1–2.17, 2012.
[7] Ryan Gomes, Andreas Krause, and Pietro Perona. Discriminative clustering by regularized information maximization. In NIPS, pages 775–783, 2010.
[8] Masashi Sugiyama, Makoto Yamada, Manabu Kimura, and Hirotaka Hachiya. On information-maximization clustering: Tuning parameter selection and analytic solution. In ICML, pages 65–72, 2011.
[9] T. Cover and P. Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, January 1967.
[10] Luc Devroye. A Probabilistic Theory of Pattern Recognition, volume 31. Springer, 1996.
[11] Yuhong Yang. Minimax nonparametric classification - part I: Rates of convergence. IEEE Transactions on Information Theory, 45(7):2271–2284, 1999.
[12] Jean-Yves Audibert and Alexandre B. Tsybakov. Fast learning rates for plug-in classifiers. The Annals of Statistics, 35(2):608–633, 2007.
[13] Robert Jenssen, Deniz Erdogmus, José Carlos Príncipe, and Torbjørn Eltoft. The Laplacian PDF distance: A cost function for clustering in a kernel feature space. In NIPS, 2004.
[14] Evarist Giné and Armelle Guillou.
Rates of strong uniform consistency for multivariate kernel density estimators. Ann. Inst. H. Poincaré Probab. Statist., 38(6):907–921, November 2002.
[15] Uwe Einmahl and David M. Mason. Uniform in bandwidth consistency of kernel-type function estimators. The Annals of Statistics, 33:1380–1403, 2005.
[16] R. M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, 1999.
[17] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer Series in Statistics. Springer, 1996.
[18] Deborah Nolan and David Pollard. U-processes: Rates of convergence. The Annals of Statistics, 15(2), 1987.
[19] Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In AISTATS, 2005.
[20] Markus Maier, Ulrike von Luxburg, and Matthias Hein. Influence of graph construction on graph-based clustering measures. In NIPS, pages 1025–1032, 2008.
[21] Zenglin Xu, Rong Jin, Jianke Zhu, Irwin King, Michael R. Lyu, and Zhirong Yang. Adaptive regularization for transductive support vector machine. In NIPS, pages 2125–2133, 2009.
[22] Xiaojin Zhu, John Lafferty, and Ronald Rosenfeld. Semi-supervised learning with graphs. PhD thesis, Carnegie Mellon University, Language Technologies Institute, School of Computer Science, 2005.
[23] Brendan J. Frey and Delbert Dueck. Clustering by passing messages between data points. Science, 315:972–977, 2007.
[24] Danial Lashkari and Polina Golland. Convex clustering with exemplar-based models. In NIPS, 2007.
2014
Accelerated Mini-batch Randomized Block Coordinate Descent Method

Tuo Zhao†§* Mo Yu‡* Yiming Wang† Raman Arora† Han Liu§
†Johns Hopkins University ‡Harbin Institute of Technology §Princeton University
{tour,myu25,freewym,arora}@jhu.edu, hanliu@princeton.edu

Abstract

We consider regularized empirical risk minimization problems. In particular, we minimize the sum of a smooth empirical risk function and a nonsmooth regularization function. When the regularization function is block separable, we can solve the minimization problem in a randomized block coordinate descent (RBCD) manner. Existing RBCD methods usually decrease the objective value by exploiting the partial gradient of a randomly selected block of coordinates in each iteration. Thus they need all data to be accessible so that the partial gradient of the selected block can be obtained exactly. However, such a "batch" setting may be computationally expensive in practice. In this paper, we propose a mini-batch randomized block coordinate descent (MRBCD) method, which estimates the partial gradient of the selected block based on a mini-batch of randomly sampled data in each iteration. We further accelerate the MRBCD method by exploiting a semi-stochastic optimization scheme, which effectively reduces the variance of the partial gradient estimators. Theoretically, we show that for strongly convex functions, the MRBCD method attains lower overall iteration complexity than existing RBCD methods. As an application, we further tailor the MRBCD method to regularized sparse learning problems. Our numerical experiments show that the MRBCD method naturally exploits the sparsity structure and achieves better computational performance than existing methods.

1 Introduction

Big data analysis challenges both statistics and computation.
*Both authors contributed equally.

In the past decade, researchers have developed a large family of sparse regularized M-estimators, such as sparse linear regression [17, 24], group sparse linear regression [22], sparse logistic regression [9], and sparse support vector machines [23, 19]. These estimators are usually formulated as regularized empirical risk minimization problems of the generic form [10]
$$\hat{\theta} = \arg\min_{\theta} P(\theta) = \arg\min_{\theta} F(\theta) + R(\theta), \quad (1.1)$$
where $\theta$ is the parameter of the working model. Here we assume the empirical risk function $F(\theta)$ is smooth and the regularization function $R(\theta)$ is non-differentiable. Some first-order algorithms, mostly variants of proximal gradient methods [11], have been proposed for solving (1.1). For strongly convex $P(\theta)$, these methods achieve linear rates of convergence [1]. The proximal gradient methods, though simple, are not necessarily efficient for large problems. Note that the empirical risk function $F(\theta)$ is usually composed of many smooth component functions:
$$F(\theta) = \frac{1}{n} \sum_{i=1}^n f_i(\theta) \quad \text{and} \quad \nabla F(\theta) = \frac{1}{n} \sum_{i=1}^n \nabla f_i(\theta),$$
where each $f_i$ is associated with a few samples of the whole data set. Since the proximal gradient methods need to calculate the gradient of $F$ in every iteration, the computational complexity scales linearly with the sample size (i.e. the number of component functions). Thus the overall computation can be expensive, especially when the sample size is very large in such a "batch" setting [16]. To overcome the above drawback, recent work has focused on stochastic proximal gradient (SPG) methods, which exploit the additive nature of the empirical risk function $F(\theta)$. In particular, the SPG methods randomly sample only a few $f_i$'s to estimate the gradient $\nabla F(\theta)$: given an index set $B$, also known as a mini-batch [16], whose elements are independently sampled from $\{1, \ldots, n\}$ with replacement, we consider the gradient estimator $\frac{1}{|B|} \sum_{i \in B} \nabla f_i(\theta)$.
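As a sanity check on this estimator, the following sketch (my own setup, with least-squares components $f_i(\theta) = \frac{1}{2}(a_i^T\theta - b_i)^2$, which is not necessarily the paper's test problem) averages many mini-batch gradients and recovers the full gradient, illustrating its unbiasedness empirically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)
theta = rng.normal(size=d)

def full_grad(theta):
    """Gradient of F(theta) = (1/n) * sum_i 0.5 * (a_i^T theta - b_i)^2."""
    return A.T @ (A @ theta - b) / n

def minibatch_grad(theta, batch):
    """Unbiased estimate (1/|B|) * sum_{i in B} grad f_i(theta)."""
    Ab = A[batch]
    return Ab.T @ (Ab @ theta - b[batch]) / len(batch)

# Averaging many independent mini-batch estimates recovers the full gradient.
est = np.mean([minibatch_grad(theta, rng.integers(0, n, size=10))
               for _ in range(5000)], axis=0)
```

A single mini-batch estimate is cheap but noisy; the average over many draws matches `full_grad(theta)` closely, which is exactly the unbiasedness property the convergence analyses rely on.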
Thus calculating such a "stochastic" gradient can be far less expensive per iteration than the proximal gradient methods. The existing literature has established global convergence results for the stochastic proximal gradient methods [3, 7] based on the unbiasedness of the gradient estimator, i.e.
$$\mathbb{E}_B\left[\frac{1}{|B|} \sum_{i \in B} \nabla f_i(\theta)\right] = \nabla F(\theta) \quad \text{for all } \theta \in \mathbb{R}^d.$$
However, owing to the variance of the gradient estimator introduced by the stochastic sampling, SPG methods achieve only sublinear rates of convergence, even when $P(\theta)$ is strongly convex [3, 7]. A second line of research has focused on randomized block coordinate descent (RBCD) methods. These methods exploit the block separability of the regularization function $R$: given a partition $\{G_1, \ldots, G_k\}$ of the $d$ coordinates, with $v_{G_j}$ denoting the subvector of $v$ with all indices in $G_j$, we can write
$$R(\theta) = \sum_{j=1}^k r_j(\theta_{G_j}) \quad \text{with} \quad \theta = (\theta_{G_1}^T, \ldots, \theta_{G_k}^T)^T.$$
Accordingly, randomized block coordinate descent (RBCD) methods have been developed. In particular, these methods randomly select a block of coordinates in each iteration, and then calculate only the gradient of $F$ with respect to the selected block [15, 13]. Since the variance introduced by the block selection asymptotically goes to zero, the RBCD methods attain linear rates of convergence when $P(\theta)$ is strongly convex. For sparse learning problems, the RBCD methods have a natural advantage over the proximal gradient methods: because many blocks of coordinates stay at zero throughout most iterations, we can integrate an active set strategy into the computation. The active set strategy maintains and iterates over only a small subset of all blocks [2], which greatly boosts computational performance. Recent work has corroborated the empirical advantage of RBCD methods over the proximal gradient method [4, 20, 8].
The RBCD methods, however, still require that all component functions be accessible within every iteration so that the partial gradient can be obtained exactly. To address this issue, we propose a stochastic variant of the RBCD methods, which shares the advantages of both the SPG and RBCD methods. More specifically, we randomly select a block of coordinates in each iteration and estimate the corresponding partial gradient based on a mini-batch of $f_i$'s sampled from all component functions. To address the variance introduced by stochastic sampling, we exploit the semi-stochastic optimization scheme proposed in [5, 6]. The semi-stochastic optimization scheme contains two nested loops: in each iteration of the outer loop, we calculate an exact gradient; then, in the follow-up inner loop, we adjust all estimated partial gradients using the obtained exact gradient. Such a modification, though simple, has a profound impact: the amortized computational complexity of each iteration is similar to that of stochastic optimization, but the rate of convergence is not compromised. Theoretically, we show that when $P(\theta)$ is strongly convex, the MRBCD method attains better overall iteration complexity than existing RBCD methods. We then apply the MRBCD method, combined with the active set strategy, to regularized sparse learning problems. Our numerical experiments show that the MRBCD method achieves much better computational performance than existing methods. A closely related method is the stochastic proximal variance reduced gradient method proposed in [21]. Their method is a variant of the stochastic proximal gradient methods using the same semi-stochastic optimization scheme as ours, but it inherits the drawback of the proximal gradient method and does not fully exploit the underlying sparsity structure of large sparse learning problems. We will compare its computational performance with the MRBCD method in our numerical experiments.
Note that their method can be viewed as a special case of the MRBCD method with a single block. While this paper was under review, we learned that a similar method had been independently proposed by [18]. They also apply the variance reduction technique to the randomized block coordinate descent method, and obtain theoretical results similar to ours.

2 Notations and Assumptions

Given a vector $v = (v_1, \ldots, v_d)^T \in \mathbb{R}^d$, we define the vector norms $\|v\|_1 = \sum_j |v_j|$, $\|v\|_2^2 = \sum_j v_j^2$, and $\|v\|_{\infty} = \max_j |v_j|$. Let $\{G_1, \ldots, G_k\}$ be a partition of all $d$ coordinates with $|G_j| = p_j$ and $\sum_{j=1}^k p_j = d$. We use $v_{G_j}$ to denote the subvector of $v$ with all indices in $G_j$, and $v_{\backslash G_j}$ to denote the subvector of $v$ with all indices in $G_j$ removed. Throughout the rest of the paper, unless specified otherwise, we make the following assumptions on $P(\theta)$.

Assumption 2.1. Each $f_i(\theta)$ is convex and differentiable. Given the partition $\{G_1, \ldots, G_k\}$, all $\nabla_{G_j} f_i(\theta) = [\nabla f_i(\theta)]_{G_j}$ are Lipschitz continuous, i.e. there exists a positive constant $L_{\max}$ such that for all $\theta, \theta' \in \mathbb{R}^d$ with $\theta_{G_j} \ne \theta'_{G_j}$,
$$\|\nabla_{G_j} f_i(\theta) - \nabla_{G_j} f_i(\theta')\| \le L_{\max} \|\theta_{G_j} - \theta'_{G_j}\|.$$
Moreover, $\nabla f_i(\theta)$ is Lipschitz continuous, i.e. there exists a positive constant $T_{\max}$ such that for all $\theta, \theta' \in \mathbb{R}^d$ with $\theta \ne \theta'$,
$$\|\nabla f_i(\theta) - \nabla f_i(\theta')\| \le T_{\max} \|\theta - \theta'\|.$$
Assumption 2.1 also implies that $\nabla F(\theta)$ is Lipschitz continuous, and given the tightest $T_{\max}$ and $L_{\max}$ in Assumption 2.1, we have $T_{\max} \le k L_{\max}$.

Assumption 2.2. $F(\theta)$ is strongly convex, i.e. for all $\theta$ and $\theta'$, there exists a positive constant $\mu$ such that
$$F(\theta') - F(\theta) - \nabla F(\theta)^T (\theta' - \theta) \ge \frac{\mu}{2} \|\theta' - \theta\|^2.$$
Note that Assumption 2.2 also implies that $P(\theta)$ is strongly convex.

Assumption 2.3. $R(\theta)$ is a simple convex nonsmooth function such that, for a given positive constant $\eta$, we can obtain a closed-form solution to the following optimization problem:
$$\mathcal{T}^j_{\eta}(\theta'_{G_j}) = \arg\min_{\theta_{G_j} \in \mathbb{R}^{p_j}} \frac{1}{2\eta} \|\theta_{G_j} - \theta'_{G_j}\|^2 + r_j(\theta_{G_j}).$$
Assumptions 2.1-2.3 are satisfied by many popular regularized empirical risk minimization problems.
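For instance, for the $\ell_1$ regularizer $r_j(\theta_{G_j}) = \lambda\|\theta_{G_j}\|_1$ and the group penalty $r_j(\theta_{G_j}) = \lambda\|\theta_{G_j}\|_2$, the map $\mathcal{T}^j_{\eta}$ of Assumption 2.3 has the familiar soft-thresholding closed forms sketched below (my own function names, not the paper's code):

```python
import numpy as np

def prox_l1(v, eta, lam):
    """Closed form of Assumption 2.3 for r_j = lam * ||.||_1:
    argmin_t (1/(2*eta)) * ||t - v||^2 + lam * ||t||_1  (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - eta * lam, 0.0)

def prox_group(v, eta, lam):
    """Closed form for the group penalty r_j = lam * ||.||_2
    (block soft-thresholding: shrink the whole block toward zero)."""
    nrm = np.linalg.norm(v)
    if nrm <= eta * lam:
        return np.zeros_like(v)
    return (1.0 - eta * lam / nrm) * v
```

Both maps set small (blocks of) coordinates exactly to zero, which is what makes the active set strategy discussed later effective for sparse problems.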
We give some examples in the experiments section.

3 Method

The MRBCD method is doubly stochastic, in the sense that we not only randomly select a block of coordinates, but also randomly sample a mini-batch of component functions from all the $f_i$'s. The partial gradient of the selected block is estimated based on the selected component functions, which yields a much lower computational complexity per iteration than existing RBCD methods. A naive implementation of the MRBCD method is summarized in Algorithm 1. Since the variance introduced by stochastic sampling over the component functions does not go to zero as the iteration count increases, we have to choose a sequence of diminishing step sizes (e.g. $\eta_t = \mu^{-1} t^{-1}$) to ensure convergence. When $t$ is large, we gain only very limited descent in each iteration. Thus the MRBCD-I method can attain only a sublinear rate of convergence.

Algorithm 1 Mini-batch Randomized Block Coordinate Descent Method-I: a naive implementation. The stochastic sampling over component functions introduces variance into the partial gradient estimator. To ensure convergence, we adopt a sequence of diminishing step sizes, which eventually leads to sublinear rates of convergence.

Parameter: step sizes $\eta_t$
Initialize: $\theta^{(0)}$
For $t = 1, 2, \ldots$
  Randomly sample a mini-batch $B$ from $\{1, \ldots, n\}$ with equal probability
  Randomly sample $j$ from $\{1, \ldots, k\}$ with equal probability
  $\theta^{(t)}_{G_j} \leftarrow \mathcal{T}^j_{\eta_t}\big(\theta^{(t-1)}_{G_j} - \eta_t \nabla_{G_j} f_B(\theta^{(t-1)})\big)$, $\quad \theta^{(t)}_{\backslash G_j} \leftarrow \theta^{(t-1)}_{\backslash G_j}$
End for

3.1 MRBCD with Variance Reduction

A recent line of work shows how to reduce the variance in the gradient estimate without deteriorating the rate of convergence, using a semi-stochastic optimization scheme [5, 6]. The semi-stochastic optimization scheme contains two nested loops: in each iteration of the outer loop, we calculate an exact gradient; then, within the follow-up inner loop, we use the obtained exact gradient to adjust all estimated partial gradients.
These adjustments guarantee that the variance introduced by stochastic sampling over component functions asymptotically goes to zero (see [5]).

Algorithm 2 Mini-batch Randomized Block Coordinate Descent Method-II: MRBCD + variance reduction. We periodically calculate the exact gradient at the beginning of each outer loop, and then use it to adjust all follow-up estimated partial gradients. These adjustments guarantee that the variance introduced by stochastic sampling over component functions asymptotically goes to zero, and help the MRBCD-II method attain linear rates of convergence.
  Parameter: update frequency m and step size η
  Initialize: θ̃^(0)
  For s = 1, 2, ...
    θ̃ ← θ̃^(s-1), μ̃ ← ∇F(θ̃^(s-1)), θ^(0) ← θ̃^(s-1)
    For t = 1, 2, ..., m
      Randomly sample a mini-batch B from {1, ..., n} with equal probability
      Randomly sample j from {1, ..., k} with equal probability
      θ^(t)_{Gj} ← T^j_η(θ^(t-1)_{Gj} − η[∇_{Gj} f_B(θ^(t-1)) − ∇_{Gj} f_B(θ̃) + μ̃_{Gj}]),  θ^(t)_{\Gj} ← θ^(t-1)_{\Gj}
    End for
    θ̃^(s) ← (1/m) ∑_{l=1}^m θ^(l)
  End for

The MRBCD method with variance reduction is summarized in Algorithm 2. In the next section, we show that the MRBCD-II method attains linear rates of convergence, while the amortized computational complexity of each iteration is almost the same as that of the MRBCD-I method.

Remark 3.1. Another option for variance reduction is the stochastic averaging scheme proposed in [14], which stores the gradients of the most recently subsampled component functions. But the MRBCD method iterates randomly over different blocks of coordinates, which makes the stochastic averaging scheme inapplicable.

3.2 MRBCD with Variance Reduction and Active Set Strategy

When applying the MRBCD-II method to regularized sparse learning problems, we further incorporate the active set strategy to boost the empirical performance. Different from existing RBCD methods, which usually identify the active set by cyclic search, we exploit a proximal gradient pilot to identify the active set.
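The adjusted partial gradient used in the inner loop of Algorithm 2 can be sketched for least-squares components (an illustrative choice of loss, not the only one the algorithm covers); averaging the estimator over all batches recovers the exact partial gradient, which is the unbiasedness property the variance-reduction scheme relies on:

```python
import numpy as np

def vr_partial_gradient(X, y, theta, theta_snap, mu_snap, batch, block):
    """Variance-reduced partial gradient of Algorithm 2 for the
    illustrative least-squares components f_i(t) = 0.5*(y_i - x_i^T t)^2:
        v = grad_Gj f_B(theta) - grad_Gj f_B(theta_snap) + mu_snap[Gj].
    `mu_snap` is the exact full gradient at the snapshot theta_snap;
    `block` holds the coordinate indices of block Gj."""
    Xb = X[batch]
    g_cur = Xb.T @ (Xb @ theta - y[batch]) / len(batch)
    g_snap = Xb.T @ (Xb @ theta_snap - y[batch]) / len(batch)
    return (g_cur - g_snap + mu_snap)[block]
```

Averaging this estimator over every singleton batch {i} reproduces ∇_{Gj} F(θ) exactly, while its variance shrinks as θ and the snapshot approach the optimum.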
More specifically, within each iteration of the outer loop, we conduct a proximal gradient descent step and select the support of the resulting solution as the active set. This is very natural for the MRBCD-II method: at the beginning of each outer loop we always calculate an exact gradient, so delivering a proximal gradient pilot does not introduce much additional computational cost. Once the active set is identified, all randomized block coordinate descent steps within the follow-up inner loop iterate only over blocks of coordinates in the active set.

Algorithm 3 Mini-batch Randomized Block Coordinate Descent Method-III: MRBCD with variance reduction and active set. To take full advantage of the obtained exact gradient, we adopt a proximal gradient pilot θ^(0) to identify the active set at each iteration of the outer loop. All randomized coordinate descent steps within the follow-up inner loop then iterate only over blocks of coordinates in the active set.
  Parameter: update frequency m and step size η
  Initialize: θ̃^(0)
  For s = 1, 2, ...
    θ̃ ← θ̃^(s-1), μ̃ ← ∇F(θ̃^(s-1))
    For j = 1, 2, ..., k
      θ^(0)_{Gj} ← T^j_{η/k}(θ̃_{Gj} − η μ̃_{Gj}/k)
    End for
    A ← { j | θ^(0)_{Gj} ≠ 0 }, |B| = |A|
    For t = 1, 2, ..., m|A|/k
      Randomly sample a mini-batch B from {1, ..., n} with equal probability
      Randomly sample j from the active set A with equal probability
      θ^(t)_{Gj} ← T^j_η(θ^(t-1)_{Gj} − η[∇_{Gj} f_B(θ^(t-1)) − ∇_{Gj} f_B(θ̃) + μ̃_{Gj}]),  θ^(t)_{\Gj} ← θ^(t-1)_{\Gj}
    End for
    θ̃^(s) ← (1/m) ∑_l θ^(l)
  End for

The MRBCD method with variance reduction and the active set strategy is summarized in Algorithm 3. Since we integrate the active set into the computation, |A| successive coordinate descent iterations in MRBCD-III have performance similar to k iterations in MRBCD-II. Therefore we change the maximum number of iterations within each inner loop to m|A|/k. Moreover, since the support covers only |A| blocks of coordinates, we only need to take |B| = |A| to guarantee sufficient variance reduction.
These modifications further boost the computational performance of MRBCD-III.

Remark 3.2. The exact gradient can also be used to determine the convergence of the MRBCD-III method. We terminate the iteration when the approximate KKT condition is satisfied:
min_{ξ ∈ ∂R(θ̃)} ‖μ̃ + ξ‖ ≤ ε,
where ε is a preset positive convergence parameter. Since evaluating the approximate KKT condition is based on the exact gradient obtained at each iteration of the outer loop, it does not introduce much additional computational cost either.

4 Theory

Before we proceed with our main results for the MRBCD-II method, we first introduce the key lemma for controlling the variance introduced by stochastic sampling.

Lemma 4.1. Let B be a mini-batch sampled from {1, ..., n}. Define
v_B = (1/|B|) ∑_{i∈B} ∇f_i(θ^(t-1)) − (1/|B|) ∑_{i∈B} ∇f_i(θ̃) + μ̃.
Conditioning on θ^(t-1), we have E_B v_B = ∇F(θ^(t-1)) and
E_B ‖v_B − ∇F(θ^(t-1))‖^2 ≤ (4 T_max / |B|) [P(θ^(t-1)) − P(θ̂) + P(θ̃) − P(θ̂)].

The proof of Lemma 4.1 is provided in Appendix A. Lemma 4.1 guarantees that v_B is an unbiased estimator of ∇F(θ^(t-1)), and that its variance is bounded by the objective value gap. Therefore we do not need to choose a sequence of diminishing step sizes to reduce the variance.

4.1 Strongly Convex Functions

We now present concrete rates of convergence of MRBCD-II when P is strongly convex.

Theorem 4.2. Suppose that Assumptions 2.1-2.3 hold. Let θ̃^(s) be a random point generated by the MRBCD-II method in Algorithm 2. Given a large enough batch B and a small enough learning rate η such that |B| ≥ T_max/L_max and η < L_max^{-1}/4, we have
E P(θ̃^(s)) − P(θ̂) ≤ ( k/(μη(1 − 4ηL_max)m) + 4ηL_max(m + 1)/((1 − 4ηL_max)m) )^s [P(θ̃^(0)) − P(θ̂)].

Here we only present a sketch; the detailed proof is provided in Appendix B.
The expected successive descent of the objective value is composed of two terms: the first is the same as the expected successive descent of the "batch" RBCD methods; the second is the variance introduced by stochastic sampling. The descent term can be bounded by averaging the successive descent over all blocks of coordinates. The variance term can be bounded using Lemma 4.1. The mini-batch sampling and the adjustments by μ̃ guarantee that the variance asymptotically goes to zero at a proper scale. By taking expectation over the randomness of component functions and blocks of coordinates throughout all iterations, we derive a geometric rate of convergence. The next corollary presents the concrete iteration complexity of the MRBCD-II method.

Corollary 4.3. Suppose that Assumptions 2.1-2.3 hold. Let |B| = T_max/L_max, m = 65kL_max/μ, and η = L_max^{-1}/16. Given a target accuracy ε and some ρ ∈ (0, 1), for any
s ≥ 3 log([P(θ̃^(0)) − P(θ̂)]/ρ) + 3 log(1/ε),
we have P(θ̃^(s)) − P(θ̂) ≤ ε with probability at least 1 − ρ.

Corollary 4.3 is a direct result of Theorem 4.2 and the Markov inequality. The detailed proof is provided in Appendix C. To characterize the overall iteration complexity, we count the number of partial gradients we estimate. In each iteration of the outer loop, we calculate an exact gradient; thus the number of estimated partial gradients is O(nk). Within each iteration of the inner loop (m in total), we estimate the partial gradients based on a mini-batch B; thus the number of estimated partial gradients is O(m|B|). If we choose η, m, and B as in Corollary 4.3 and treat ρ as a constant, then the iteration complexity of the MRBCD-II method with respect to the number of estimated partial gradients is
O((nk + kT_max/μ) · log(1/ε)),
which is much lower than that of existing "batch" RBCD methods, O(nkL_max/μ · log(1/ε)).

Remark 4.4 (Connection to the MRBCD-III method).
There still exists a gap between the theory and the empirical success of the active set strategy in the existing literature, even for the "batch" RBCD methods. It is known that incorporating the active set strategy into RBCD-style methods can greatly boost empirical performance, but how to precisely characterize the theoretical speedup is still largely unknown. Therefore Theorem 4.2 and Corollary 4.3 can only serve as an imprecise characterization of the MRBCD-III method. A rough understanding is that if the solution has at most q nonzero entries throughout all iterations, then the MRBCD-III method should have an approximate overall iteration complexity of
O((nk + qT_max/μ) · log(1/ε)).

4.2 Nonstrongly Convex Functions

When P(θ) is not strongly convex, we can adopt a perturbation approach. Instead of solving (1.1), we consider the following minimization problem:
θ̄ = argmin_{θ ∈ ℝ^d} F(θ) + γ‖θ^(0) − θ‖^2 + R(θ),    (4.1)
where γ is some positive perturbation parameter and θ^(0) is the initial value. If we consider F̃(θ) = F(θ) + γ‖θ^(0) − θ‖^2 in (4.1) as the smooth empirical risk function, then F̃(θ) is strongly convex. Thus Corollary 4.3 can be applied to (4.1): when B, m, η, and ρ are suitably chosen, given
s ≥ 3 log([P(θ^(0)) − P(θ̄) − γ‖θ^(0) − θ̄‖^2]/ρ) + 3 log(2/ε),
we have P(θ̃^(s)) − P(θ̄) − γ‖θ^(0) − θ̄‖^2 ≤ ε/2 with probability at least 1 − ρ. We then have
P(θ̃^(s)) − P(θ̂) = P(θ̃^(s)) − P(θ̂) − γ‖θ^(0) − θ̂‖^2 + γ‖θ^(0) − θ̂‖^2
  ≤ P(θ̃^(s)) − P(θ̄) − γ‖θ^(0) − θ̄‖^2 + γ‖θ^(0) − θ̂‖^2
  ≤ ε/2 + γ‖θ^(0) − θ̂‖^2,
where the second inequality comes from the fact that P(θ̄) + γ‖θ^(0) − θ̄‖^2 ≤ P(θ̂) + γ‖θ^(0) − θ̂‖^2, because θ̄ is the minimizer of (4.1). If we choose γ = ε/(2‖θ^(0) − θ̂‖^2), we have P(θ̃^(s)) − P(θ̂) ≤ ε. Since γ depends on the desired accuracy ε, the number of estimated partial gradients also depends on ε. Thus if we treat ‖θ^(0) − θ̂‖^2 as a constant, the overall iteration complexity of the perturbation approach becomes O((nk + kT_max/ε) · log(1/ε)).
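The perturbation approach only changes the gradient oracles: each component picks up the gradient of γ‖θ^(0) − θ‖^2, which is 2γ(θ − θ^(0)). A minimal sketch of that wrapper (the function names are illustrative, not from the paper):

```python
def perturbed_components(grads, theta0, gamma):
    """Perturbation approach of Section 4.2 (sketch): given gradient
    oracles for the original components f_i, return oracles for
    f_i(theta) + gamma * ||theta0 - theta||^2, which are strongly
    convex, so the strongly convex analysis applies."""
    def make(g):
        # d/dtheta of gamma * ||theta0 - theta||^2 is 2*gamma*(theta - theta0)
        return lambda theta: g(theta) + 2.0 * gamma * (theta - theta0)
    return [make(g) for g in grads]
```

Running MRBCD-II on the wrapped oracles solves (4.1); the accuracy/complexity trade-off in the text comes from letting γ shrink with ε.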
5 Numerical Simulations

The first sparse learning problem of interest is the Lasso, which solves
θ̂ = argmin_{θ ∈ ℝ^d} (1/n) ∑_{i=1}^n f_i(θ) + λ‖θ‖_1 with f_i(θ) = (1/2)(y_i − x_i^T θ)^2.    (5.1)
We set n = 2000 and d = 1000, and all covariate vectors x_i are independently sampled from a 1000-dimensional Gaussian distribution with mean 0 and covariance matrix Σ, where Σ_jj = 1 and Σ_jk = 0.5 for all k ≠ j. The first 50 entries of the regression coefficient vector θ are independently sampled from a uniform distribution over the support (−2, −1) ∪ (+1, +2). The responses y_i are generated by the linear model y_i = x_i^T θ + ε_i, where the ε_i are independently sampled from a standard Gaussian distribution N(0, 1). We choose λ = √(log d / n), and compare the proposed MRBCD-I and MRBCD-II methods with the "batch" proximal gradient (BPG) method [11], the stochastic proximal variance reduced gradient (SPVRG) method [21], and the "batch" randomized block coordinate descent (BRBCD) method [12]. We set k = 100; all blocks are of the same size (10 coordinates). For BPG, the step size is 1/T, where T is the largest singular value of (1/n) ∑_{i=1}^n x_i x_i^T. For BRBCD, the step size is 1/L, where L is the maximum over all blocks of the largest singular value of (1/n) ∑_{i=1}^n [x_i]_{Gj} [x_i]_{Gj}^T. For SPVRG, we choose m = n, and the step size is 1/(4T). For MRBCD-I, the step size is 1/(L⌈t/8000⌉), where t is the iteration index. For MRBCD-II, we choose m = n, and the step size is 1/(4L). Note that the step size and the number of inner-loop iterations m for MRBCD-II and SPVRG are tuned over a refined grid so that the best computational performance is obtained.
(a) Comparison between different methods for a single regularization parameter.
(b) Comparison between different methods for a sequence of regularization parameters.
Figure 5.1: [a] The vertical axis corresponds to objective value gaps P(θ) − P(θ̂) in log scale; the horizontal axis corresponds to numbers of partial gradient estimates. [b] The horizontal axis corresponds to indices of regularization parameters; the vertical axis corresponds to numbers of partial gradient estimates in log scale. We see that MRBCD attains the best performance among all methods in both settings.
We evaluate the computational performance by the number of estimated partial gradients; the results averaged over 100 replications are presented in Figure 5.1 [a]. As can be seen, MRBCD-II outperforms SPVRG and attains the best performance among all methods. BRBCD and BPG perform worse than MRBCD-II and SPVRG due to the high computational complexity of each iteration. MRBCD-I is actually the fastest among all methods over the first few iterations, and then falls behind BPG and SPVRG due to its sublinear rate of convergence. We then compare the proposed MRBCD-III method with SPVRG and BRBCD over a sequence of regularization parameters. The sequence contains 21 regularization parameters {λ_0, ..., λ_20}. We set λ_0 = ‖(1/n) ∑_i y_i x_i‖_∞, which yields a null solution (all entries are zero), and λ_20 = √(log d / n). For K = 1, ..., 19, we set λ_K = αλ_{K−1}, where α = (λ_20/λ_0)^{1/20}. When solving (5.1) with respect to λ_K, we use the output solution for λ_{K−1} as the initial solution. This setting is often referred to as the warm start scheme in the existing literature, and it is very natural for sparse learning problems, since we always need to tune the regularization parameter λ to secure good finite-sample performance. For each regularization parameter, the algorithm terminates when the approximate KKT condition is satisfied with ε = 10^{−10}.
The results over 50 replications are presented in Figure 5.1 [b]. As can be seen, MRBCD-III outperforms SPVRG and BRBCD, and attains the best performance among all methods. Since BRBCD is also combined with the active set strategy, it attains better performance than SPVRG. See more detailed results in Table E.1 in Appendix E.

6 Real Data Example

The second sparse learning problem is the elastic-net regularized logistic regression, which solves
θ̂ = argmin_{θ ∈ ℝ^d} (1/n) ∑_{i=1}^n f_i(θ) + λ_1‖θ‖_1 with f_i(θ) = log(1 + exp(−y_i x_i^T θ)) + (λ_2/2)‖θ‖^2.
We adopt the rcv1 dataset with n = 20242 and d = 47236. We set k = 200, so each block contains approximately 237 coordinates. We choose λ_2 = 10^{−4} and λ_1 = 10^{−4}, and compare MRBCD-II with SPVRG and BRBCD. For BRBCD, the step size is 1/(4L), where L = (1/n) max_j ∑_{i=1}^n [x_i]_j^2. For SPVRG, m = n and the step size is 1/(16T), where T is the largest singular value of (1/n) ∑_{i=1}^n x_i x_i^T. For MRBCD-II, m = n and the step size is 1/(16T). Note that the step size and the number of inner-loop iterations m for MRBCD-II and SPVRG are tuned over a refined grid so that the best computational performance is obtained. The results averaged over 30 replications are presented in Figure F.1 [a] of Appendix F. As can be seen, MRBCD-II outperforms SPVRG and attains the best performance among all methods. BRBCD performs worse than MRBCD-II and SPVRG due to the high computational complexity of each iteration. We then compare the proposed MRBCD-III method with SPVRG and BRBCD over a sequence of regularization parameters. The sequence contains 11 regularization parameters {λ_0, ..., λ_10}. We set λ_0 = ‖(1/n) ∑_i ∇f_i(0)‖_∞, which yields a null solution (all entries are zero), and λ_10 = 10^{−4}. For K = 1, ..., 9, we set λ_K = αλ_{K−1}, where α = (λ_10/λ_0)^{1/10}.
For each regularization parameter, we set ε = 10^{−7} for the approximate KKT condition. The results over 30 replications are presented in Figure F.1 [b] of Appendix F. As can be seen, MRBCD-III outperforms SPVRG and BRBCD, and attains the best performance among all methods. Since BRBCD is also combined with the active set strategy, it attains better performance than SPVRG.

Acknowledgements

This work is partially supported by the grants NSF IIS1408910, NSF IIS1332109, NIH R01MH102339, NIH R01GM083084, and NIH R01HG06841. Yu is supported by the China Scholarship Council and by NSFC 61173073.

References
[1] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2009.
[3] John Duchi and Yoram Singer. Efficient online and batch learning using forward backward splitting. The Journal of Machine Learning Research, 10:2899–2934, 2009.
[4] Jerome Friedman, Trevor Hastie, Holger Höfling, and Robert Tibshirani. Pathwise coordinate optimization. The Annals of Applied Statistics, 1(2):302–332, 2007.
[5] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
[6] Jakub Konečný and Peter Richtárik. Semi-stochastic gradient descent methods. arXiv preprint arXiv:1312.1666, 2013.
[7] John Langford, Lihong Li, and Tong Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10:777–801, 2009.
[8] Han Liu, Mark Palatucci, and Jian Zhang. Blockwise coordinate descent procedures for the multi-task lasso, with applications to neural semantic basis discovery. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 649–656, 2009.
[9] L. Meier, S. Van De Geer, and P. Bühlmann. The group lasso for logistic regression.
Journal of the Royal Statistical Society: Series B, 70(1):53–71, 2008.
[10] Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[11] Yu Nesterov. Gradient methods for minimizing composite objective function. Technical report, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2007.
[12] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. arXiv preprint arXiv:1107.2848, 2011.
[13] Peter Richtárik and Martin Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, pages 1–38, 2012.
[14] Nicolas L. Roux, Mark Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2012.
[15] Shai Shalev-Shwartz and Ambuj Tewari. Stochastic methods for ℓ1-regularized loss minimization. The Journal of Machine Learning Research, 12:1865–1892, 2011.
[16] Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright. Optimization for Machine Learning. MIT Press, 2012.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[18] Huahua Wang and Arindam Banerjee. Randomized block coordinate descent for online and stochastic optimization. CoRR, abs/1407.0107, 2014.
[19] Li Wang, Ji Zhu, and Hui Zou. The doubly regularized support vector machine. Statistica Sinica, 16(2):589, 2006.
[20] Tong Tong Wu and Kenneth Lange. Coordinate descent algorithms for lasso penalized regression. The Annals of Applied Statistics, 2:224–244, 2008.
[21] Lin Xiao and Tong Zhang.
A proximal stochastic gradient method with progressive variance reduction. arXiv preprint arXiv:1403.4699, 2014.
[22] Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007.
[23] Ji Zhu, Saharon Rosset, Trevor Hastie, and Robert Tibshirani. 1-norm support vector machines. In NIPS, volume 15, pages 49–56, 2003.
[24] Hui Zou and Trevor Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005.
2014
Multiscale Fields of Patterns Pedro F. Felzenszwalb Brown University Providence, RI 02906 pff@brown.edu John G. Oberlin Brown University Providence, RI 02906 john oberlin@brown.edu Abstract We describe a framework for defining high-order image models that can be used in a variety of applications. The approach involves modeling local patterns in a multiscale representation of an image. Local properties of a coarsened image reflect non-local properties of the original image. In the case of binary images local properties are defined by the binary patterns observed over small neighborhoods around each pixel. With the multiscale representation we capture the frequency of patterns observed at different scales of resolution. This framework leads to expressive priors that depend on a relatively small number of parameters. For inference and learning we use an MCMC method for block sampling with very large blocks. We evaluate the approach with two example applications. One involves contour detection. The other involves binary segmentation. 1 Introduction Markov random fields are widely used as priors for solving a variety of vision problems such as image restoration and stereo [5, 8]. Most of the work in the area has concentrated on low-order models involving pairs of neighboring pixels. However, it is clear that realistic image priors need to capture higher-order properties of images. In this paper we describe a general framework for defining high-order image models that can be used in a variety of applications. The approach involves modeling local properties in a multiscale representation of an image. This leads to a natural low-dimensional representation of a high-order model. We concentrate on the problem of estimating binary images. In this case local image properties can be captured by the binary patterns in small neighborhoods around each pixel. 
We define a Field of Patterns (FoP) model using an energy function that assigns a cost to each 3x3 pattern observed in an image pyramid. The cost of a pattern depends on the scale where it appears. Figure 1 shows a binary image corresponding to a contour map from the Berkeley segmentation dataset (BSD) [12, 2] and a pyramid representation obtained by repeated coarsening. The 3x3 patterns we observe after repeated coarsening depend on large neighborhoods of the original image. These coarse 3x3 patterns capture non-local image properties. We train models using a maximum-likelihood criterion. This involves selecting pattern costs so that the expected frequency of patterns in a random sample from the model matches the average frequency of patterns in the training images. Using the pyramid representation, the model matches frequencies of patterns at each resolution. In practice we use MCMC methods for inference and learning. In Section 3 we describe an MCMC sampling algorithm that can update a very large area of an image (a horizontal or vertical band of pixels) in a single step, by combining the forward-backward algorithm for one-dimensional Markov models with a Metropolis-Hastings procedure. We evaluated our models and algorithms on two different applications. One involves contour detection. The other involves binary segmentation. These two applications require very different image priors. For contour detection the prior should encourage a network of thin contours, while for binary segmentation the prior should encourage spatially coherent masks.

Figure 1: (a) Multiscale/pyramid representation of a contour map. (b) Coarsest image scaled up for better visualization, with a 3x3 pattern highlighted. The leftmost object in the original image appears as a 3x3 "circle" pattern in the coarse image. (c) Patches of contour maps (top) that coarsen to a particular 3x3 pattern (bottom) after reducing their resolution by a factor of 8.
In both cases we can design effective models using maximum-likelihood estimation.

1.1 Related Work

FRAME models [24] and, more recently, Fields of Experts (FoE) [15] defined high-order energy models using the responses of linear filters. FoP models are closely related. The detection of 3x3 patterns at different resolutions corresponds to using non-linear filters of increasing size. In FoP we have a fixed set of pre-defined non-linear filters that detect common patterns at different resolutions. This avoids filter learning, which leads to a non-convex optimization problem in FoE. A restricted set of 3x3 binary patterns was considered in [6] to define priors for image restoration. Binary patterns were also used in [17] to model the curvature of a binary shape. There has been recent work on inference algorithms for CRFs defined by binary patterns [19], and it may be possible to develop efficient inference algorithms for FoP models using those techniques. The work in [23] defined a variety of multiresolution models for images based on a quad-tree representation. The quad-tree leads to models that support efficient learning and inference via dynamic programming, but such models also suffer from artifacts due to the underlying tree structure. The work in [7] defined binary image priors using deep Boltzmann machines. Those models are based on a hierarchy of hidden variables that is related to our multiscale representation. However, in our case the multiscale representation is a deterministic function of the image and does not involve extra hidden variables as in [7]. The approach we take to define a multiscale model is similar to [9], where local properties of subsampled signals were used to model curves. One of our motivating applications involves detecting contours in noisy images. This problem has a long history in computer vision, going back at least to [16], who used a type of Markov model for detecting salient contours.
Related approaches include the stochastic completion field in [22, 21], spectral methods [11], the curve indicator random field [3], and the more recent work in [1].

2 Fields of Patterns (FoP)

Let G = [n] × [m] be the grid of pixels in an n by m image. Let x = {x(i, j) | (i, j) ∈ G} be a hidden binary image and y = {y(i, j) | (i, j) ∈ G} be a set of observations (such as a grayscale or color image). Our goal is to estimate x from y. We define p(x|y) using an energy function that is a sum of two terms,
p(x|y) = (1/Z(y)) exp(−E(x, y)),  E(x, y) = E_FoP(x) + E_data(x, y).    (1)
It is sometimes useful to think of E_FoP(x) as a model for binary images and E_data(x, y) as a data model, even though technically there is no such distinction in a conditional model.

2.1 Singlescale FoP Model

The singlescale FoP model is one of the simplest energy models that can capture the basic properties of contour maps or other images that contain thin objects. We use x[i, j] to denote the binary pattern defined by x in the 3x3 window centered at pixel (i, j), treating values outside of the image as 0. A singlescale FoP model is defined by the local patterns in x,
E_FoP(x) = ∑_{(i,j)∈G} V(x[i, j]).    (2)
Here V is a potential function assigning costs (or energies) to binary patterns. Note that there are 512 possible binary patterns in a 3x3 window. We can make the model invariant to rotations and mirror symmetries by tying parameters together. The resulting model has 102 parameters (some patterns have more symmetries than others) and can be learned from smaller datasets. We used invariant models for all of the experiments reported in this paper.

2.2 Multiscale FoP Model

To capture non-local statistics we look at local patterns in a multiscale representation of x. For a model with K scales, let σ(x) = (x^0, ..., x^{K−1}) be an image pyramid where x^0 = x and x^{k+1} is a coarsening of x^k. Here x^k is a binary image defined over a grid G_k = [n/2^k] × [m/2^k].
The coarsening we use in practice is defined by a logical OR operation,
x^{k+1}(i, j) = x^k(2i, 2j) ∨ x^k(2i+1, 2j) ∨ x^k(2i, 2j+1) ∨ x^k(2i+1, 2j+1).    (3)
This particular coarsening maps connected objects at one scale of resolution to connected objects at the next scale, but other coarsenings may be appropriate in different applications. A multiscale FoP model is defined by the local patterns in σ(x),
E_FoP(x) = ∑_{k=0}^{K−1} ∑_{(i,j)∈G_k} V^k(x^k[i, j]).    (4)
This model is parameterized by K potential functions V^k, one for each scale in the pyramid σ(x). In many applications we expect the frequencies of a 3x3 pattern to be different at each scale. The potential functions can encourage or discourage specific patterns at specific scales. Note that σ(x) is a deterministic function and the pyramid representation does not introduce new random variables. The pyramid simply defines a convenient way to specify potential functions over large regions of x. A single potential function in a multiscale model can depend on a large area of x due to the coarsenings. For large enough K (proportional to the log of the image size), the Markov blanket of a pixel can be the whole image. While the experiments in Section 5 use the conditional modeling approach specified by Equation (1), we can also use E_FoP to define priors over binary images. Samples from these priors illustrate the information that is captured by a FoP model, especially the added benefit of the multiscale representation. Figure 2 shows samples from FoP priors trained on contour maps of natural images. The empirical studies in [14] suggest that low-order Markov models cannot capture the empirical length distribution of contours in natural images. A multiscale FoP model can control the size distribution of objects much better than a low-order MRF. After coarsening, the diameter of an object goes down by a factor of approximately two, and eventually the object is mapped to a single pixel.
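The OR-coarsening of Equation (3) and the resulting pyramid σ(x) can be sketched in a few lines (this assumes even image dimensions at each level and drops any odd remainder, a detail the paper does not spell out):

```python
import numpy as np

def coarsen_or(x):
    """One level of the OR-coarsening in Equation (3): each pixel of the
    coarse image is the logical OR of a 2x2 block of the fine image.
    Odd remainder rows/columns are dropped (an assumption of this
    sketch)."""
    n, m = x.shape
    x = x[: n - n % 2, : m - m % 2]
    blocks = x.reshape(x.shape[0] // 2, 2, x.shape[1] // 2, 2)
    return blocks.any(axis=(1, 3)).astype(x.dtype)

def pyramid(x, K):
    """Multiscale representation sigma(x) = (x^0, ..., x^{K-1})."""
    levels = [x]
    for _ in range(K - 1):
        levels.append(coarsen_or(levels[-1]))
    return levels
```

As described in the text, a single "on" pixel survives every level of coarsening: after enough levels an isolated object collapses to one pixel, which is what lets the per-scale potentials price objects roughly by size.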
The scale at which this happens can be captured by a 3x3 pattern with an "on" pixel surrounded by "off" pixels (this assumes there are no other objects nearby). Since the cost of a pattern depends on the scale at which it appears, we can assign a cost to an object that is based loosely upon its size.

2.3 Data Model

Let y be an input image and σ(y) an image pyramid computed from y. Our data models are defined by sums over pixels in the two pyramids σ(x) and σ(y). In our experiments y is a graylevel image with values in {0, ..., M − 1}.

Figure 2: (a) Examples of training images T extracted from the BSD. (b) Samples from a singlescale FoP prior trained on T. (c) Samples from a multiscale FoP prior trained on T. The multiscale model is better at capturing the lengths of contours and the relationships between them.

The pyramid σ(y) is defined in analogy to σ(x), except that we use a local average for coarsening instead of the logical OR,
y^{k+1}(i, j) = ⌊(y^k(2i, 2j) + y^k(2i+1, 2j) + y^k(2i, 2j+1) + y^k(2i+1, 2j+1))/4⌋.    (5)
The data model is parameterized by K vectors D^0, ..., D^{K−1} ∈ ℝ^M,
E_data(x, y) = ∑_{k=0}^{K−1} ∑_{(i,j)∈G_k} x^k(i, j) D^k(y^k(i, j)).    (6)
Here D^k(y^k(i, j)) is an observation cost incurred when x^k(i, j) = 1. There is no need to include an observation cost when x^k(i, j) = 0 because only energy differences affect the posterior p(x|y). We note that it would be interesting to consider data models that capture complex relationships between local patterns in σ(x) and σ(y). For example, a local maximum in y^k(i, j) might give evidence for x^k(i, j) = 1, or for a particular 3x3 pattern in x^k[i, j].

2.4 Log-Linear Representation

The energy function E(x, y) of a FoP model can be expressed by a dot product between a vector of model parameters w and a feature vector φ(x, y). The vector φ(x, y) has one block for each scale.
In the k-th block we have: (1) 512 entries (or 102 for invariant models) counting the number of times each 3x3 pattern occurs in x^k; and (2) M entries counting the number of times each possible value of y^k(i, j) occurs where x^k(i, j) = 1. The vector w specifies the cost of each pattern at each scale (V^k) and the parameters of the data model (D^k). We then have E(x, y) = w · φ(x, y). This log-linear form is useful for learning the model parameters, as described in Section 4.

3 Inference with a Band Sampler

In inference we have a set of observations y and want to estimate x. We use MCMC methods [13] to draw samples from p(x|y) and estimate the posterior marginal probabilities p(x(i, j) = 1|y). Sampling is also used for learning model parameters, as described in Section 4. In a block Gibbs sampler we repeatedly update x by picking a block of pixels B and sampling new values for x_B from p(x_B | y, x_B̄), where B̄ denotes the pixels outside B. If the blocks are selected appropriately, this defines a Markov chain with stationary distribution p(x|y). We can implement a block Gibbs sampler for a multiscale FoP model by keeping track of the image pyramid σ(x) as we update x. To sample from p(x_B | y, x_B̄) we consider each possible configuration for x_B. We can efficiently update σ(x) to reflect a possible configuration for x_B and evaluate the terms in E(x, y) that depend on x_B. This takes O(K|B|) time per configuration, which in turn leads to an O(K|B|2^{|B|}) time algorithm for sampling from p(x_B | y, x_B̄). The running time can be reduced to O(K2^{|B|}) using Gray codes to iterate over configurations for x_B.
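The Gray-code trick works because consecutive configurations differ in exactly one pixel, so each energy evaluation is an O(K) incremental update rather than an O(K|B|) recomputation. A minimal sketch of the enumeration order (reflected binary Gray code, with the flipped bit reported so the caller can do the incremental update):

```python
def gray_code_flips(nbits):
    """Visit all 2**nbits block configurations so that consecutive
    configurations differ in exactly one bit.  Yields (configuration,
    flipped_bit); flipped_bit is None for the initial all-zero
    configuration.  Sketch of the enumeration used in Section 3."""
    config = [0] * nbits
    yield list(config), None
    for t in range(1, 2 ** nbits):
        # reflected Gray code: flip the bit at the position of the
        # lowest set bit of t
        bit = (t & -t).bit_length() - 1
        config[bit] ^= 1
        yield list(config), bit
```

A sampler would keep a running energy for the current configuration and adjust only the terms touched by `flipped_bit` at each step.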
However, for an Ising model we can generate samples from p(x_B | y, x_B̄) in O(m 2^{2h}) time using the forward-backward algorithm for Markov models. We simply treat each column of B as a single variable with 2^h possible states. A similar idea can be used for FoP models.

Let S be a state space where a state specifies a joint configuration of binary values for the pixels in a column of B. Note that |S| = 2^h. Let z_1, . . . , z_m be a representation of x_B in terms of the state of each column. For a single-scale FoP model the distribution p(z_1, . . . , z_m | y, x_B̄) is a 2nd-order Markov model. This allows for efficient sampling using forward weights computed via dynamic programming. Such an algorithm takes O(m 2^{3h}) time to generate a sample from p(x_B | y, x_B̄), which is efficient for moderate values of h.

In a multiscale FoP model the 3x3 patterns in the upper levels of σ(x) depend on many columns of B. This means p(z_1, . . . , z_m | x_B̄) is no longer 2nd-order. Therefore instead of sampling x_B directly we use a Metropolis-Hastings approach. Let p be a multiscale FoP model we would like to sample from. Let q be a single-scale FoP model that approximates p. Let x be the current state of the Markov chain and x′ be a proposal generated by the single-scale band sampler for q. We accept x′ with probability min(1, (p(x′|y) q(x|y)) / (p(x|y) q(x′|y))).

Efficient computation of acceptance probabilities can be done using the pyramid representations of x and y. For each proposal we update σ(x) to σ(x′) and compute the difference in energy due to the change under both p and q. One problem with the Metropolis-Hastings approach is that if proposals are rejected very often the resulting Markov chain mixes slowly. We can avoid this problem by noting that most of the work required to generate a sample from the proposal distribution involves computing forward weights that can be re-used to generate other samples.
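Because p(x|y) ∝ exp(−E(x, y)), the acceptance ratio above reduces to a difference of energy changes under the target p and the proposal model q. A hedged sketch (the function name and argument names are ours; dE_p and dE_q stand for E(x′) − E(x) under the multiscale and single-scale models respectively):

```python
import math
import random

def mh_accept(dE_p, dE_q, rng=random):
    """Accept/reject a band-sampler proposal x' drawn from the single-scale model q.

    With p(x|y) proportional to exp(-E_p(x, y)) and similarly for q, the
    Metropolis-Hastings ratio p(x'|y) q(x|y) / (p(x|y) q(x'|y)) equals
    exp(-dE_p + dE_q), so only energy *differences* are needed.
    """
    log_ratio = -dE_p + dE_q
    return log_ratio >= 0 or rng.random() < math.exp(log_ratio)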
Each step of our band sampler for a multiscale FoP model picks a band B (horizontal or vertical) and generates many proposals for x_B, accepting each one with the appropriate acceptance probability. As long as one of the proposals is accepted the work done in computing forward weights is not wasted.

4 Learning

We can learn models using maximum-likelihood and stochastic gradient descent. This is similar to what was done in [24, 15, 20]. But in our case we have a conditional model, so we maximize the conditional likelihood of the training examples. Let T = {(x_1, y_1), . . . , (x_N, y_N)} be a training set with N examples. We define the training objective using the negative log-likelihood of the data plus a regularization term. The regularization ensures no pattern is too costly. This helps the Markov chains used during learning and inference to mix reasonably fast. Let L(x_i, y_i) = −log p(x_i | y_i). The training objective is given by

  O(w) = (λ/2) ||w||² + Σ_{i=1}^{N} L(x_i, y_i).   (7)

This objective is convex and

  ∇O(w) = λw + Σ_{i=1}^{N} ( φ(x_i, y_i) − E_{p(x|y_i)}[φ(x, y_i)] ).   (8)

Here E_{p(x|y_i)}[φ(x, y_i)] is the expectation of φ(x, y_i) under the posterior p(x|y_i) defined by the current model parameters w. A stochastic approximation to the gradient ∇O(w) can be obtained by sampling x′_i from p(x|y_i). Let η be a learning rate. In each stochastic gradient descent step we sample x′_i from p(x|y_i) and update w as follows

  w := w − η ( λw + Σ_{i=1}^{N} ( φ(x_i, y_i) − φ(x′_i, y_i) ) ).   (9)

To sample the x′_i we run N Markov chains, one for each training example, using the band sampler from Section 3. After each model update we advance each Markov chain for a small number of steps using the latest model parameters to obtain new samples x′_i.

5 Applications

To evaluate the ability of FoP to adapt to different problems we consider two different applications. In both cases we estimate hidden binary images x from grayscale input images y.
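The stochastic update of Equation (9) in Section 4 is a one-liner once the feature vectors are in hand; a minimal sketch (the function name is ours), where the model-side features φ(x′_i, y_i) come from the persistent band-sampler chains:

```python
import numpy as np

def sgd_step(w, feats_data, feats_model, lam, eta):
    """One update of Eq. (9): w := w - eta * (lam*w + sum_i (phi(x_i,y_i) - phi(x'_i,y_i))).

    feats_data[i]  = phi(x_i, y_i) for training example i, and
    feats_model[i] = phi(x'_i, y_i) for the sample x'_i drawn from the current
    posterior p(x | y_i) by the i-th persistent Markov chain.
    """
    grad = lam * w + np.sum(feats_data - feats_model, axis=0)
    return w - eta * grad
```

When the sampled statistics match the data statistics (and λ = 0), the gradient vanishes and the parameters stop moving, which is the fixed point maximum-likelihood training aims for.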
We used ground truth binary images obtained from standard datasets and synthetic observations. For the experiments described here we generate y by sampling a value y(i, j) for each pixel independently from a normal distribution with standard deviation σ_y and mean µ_0 or µ_1, depending on x(i, j),

  y(i, j) ∼ N(µ_{x(i,j)}, σ_y²).   (10)

We have also done experiments with more complex data models but the results we obtained were similar to the results described here.

5.1 Contour Detection

The BSD [12, 2] contains images of natural scenes and manual segmentations of the most salient objects in those images. We used one manual segmentation for each image in the BSD500. From each image we generated a contour map x indicating the location of boundaries between segments in the image. To generate the observations y we used µ_0 = 150, µ_1 = 100 and σ_y = 40 in Equation (10). Our training and test sets each have 200 examples.

We first trained a 1-level FoP model. We then trained a 4-level FoP model using the 1-level model as a proposal distribution for the band sampler (see Section 3). Training each model took 2 days on a 20-core machine. During training and testing we used the band sampler with h = 3 rows. Inference involves estimating posterior marginal probabilities for each pixel by sampling from p(x|y). Inference on each image took 20 minutes on an 8-core machine.

For comparison we implemented a baseline technique using linear filters. Following [10] we used the second derivative of an elongated Gaussian filter together with its Hilbert transform. The filters had an elongation factor of 4 and we experimented with different values for the base standard deviation σ_b of the Gaussian. The sum of squared responses of both filters defines an oriented energy map. We evaluated the filters at 16 orientations and took the maximum response at each pixel. We performed non-maximum suppression along the dominant orientations to obtain a thin contour map.
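The observation model of Equation (10) used in both experiments is straightforward to sample; a sketch (the function name is ours, with default parameters taken from the contour-detection experiment):

```python
import numpy as np

def synth_observation(x, mu0=150.0, mu1=100.0, sigma_y=40.0, rng=None):
    """Sample y(i, j) ~ N(mu_{x(i,j)}, sigma_y^2) independently per pixel (Eq. 10)."""
    rng = np.random.default_rng() if rng is None else rng
    mu = np.where(x == 1, mu1, mu0)  # per-pixel mean chosen by the hidden image
    return mu + sigma_y * rng.standard_normal(x.shape)
```

Setting sigma_y = 0 recovers the noiseless means, which is a convenient sanity check before running the noisier settings used in the experiments.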
Figure 3 illustrates our results on 3 examples from the test set. Results on more examples are available in the supplemental material. For the FoP models we show the posterior marginal probabilities p(x(i, j) = 1|y). The darkness of a pixel is proportional to the marginal probability. The FoP models do a good job suppressing noise and localizing the contours. The multiscale FoP model in particular gives fairly clean results despite the highly noisy inputs. The baseline results at lower σ_b values suffer from significant noise, detecting many spurious edges. The baseline at higher σ_b values suppresses noise at the expense of having poor localization and missing high-curvature boundaries.

Figure 3: Contour detection results. Top-to-bottom: hidden contour map x, input image y, output of the oriented filter baseline with σ_b = 1 and σ_b = 4, output of the 1-level and 4-level FoP models.

For a quantitative evaluation we compute precision-recall curves for the different models by thresholding the estimated contour maps at different values. Figure 4 shows the precision-recall curves. The average precision (AP) was found by calculating the area under the precision-recall curves. The 1-level FoP model AP was 0.73. The 4-level FoP model AP was 0.78. The best baseline AP was 0.18, obtained with σ_b = 1. We have also done experiments using lower observation noise levels σ_y. With low observation noise the 1-level and 4-level FoP results become similar and the baseline results improve significantly, approaching the FoP results.

5.2 Binary Segmentation

For this experiment we obtained binary images from the Swedish Leaf Dataset [18]. We focused on the class of Rowan leaves because they have complex shapes. Each image defines a segmentation mask x. To generate the observations y we used µ_0 = 150, µ_1 = 100 and σ_y = 100 in Equation (10). We used a higher σ_y compared to the previous experiment because the 2D nature of masks makes it possible to recover them under higher noise. We used 50 examples for training and 25 examples for testing. We trained FoP models with the same procedure and parameters used for the contour detection experiment.

For a baseline, we used graph-cuts [5, 4] to perform MAP inference with an Ising model. We set the data term using our knowledge of the observation model and picked the pairwise discontinuity cost minimizing the per-pixel error rate in the test set.

Figure 5 illustrates the results of the different methods. Results on other images are available in the supplemental material. The precision-recall curves are in Figure 4. Graph-cuts yields a single precision-recall point, with precision 0.893 and recall 0.916. The 1-level FoP model has a higher precision of 0.915 at the same recall. The 4-level FoP model raises the precision to 0.929 at the same recall.

Figure 4: (a) Precision-recall curves for the contour detection experiment. (b) Precision-recall curves for the segmentation experiment (the graph-cuts baseline yields a single precision-recall point).

Figure 5: Binary segmentation examples. Columns: mask x, observation y, graph-cuts, FoP 1, FoP 4. The 4-level FoP model does a better job recovering pixels near the object boundary and the stem of the leaves.

The differences in precision are small because they are due to pixels near the object boundary, but those are the hardest pixels to get right. There are clear differences that can be seen by visual inspection.

6 Conclusion

We described a general framework for defining high-order image models. The idea involves modeling local properties in a multiscale representation of an image. This leads to a natural low-dimensional parameterization for high-order models that exploits standard pyramid representations of images.
Our experiments demonstrate that the approach yields good results on two applications that require very different image priors, illustrating the broad applicability of our models. An interesting direction for future work is to consider FoP models for non-binary images.

Acknowledgements

We would like to thank Alexandra Shapiro for helpful discussions and initial experiments related to this project. This material is based upon work supported by the National Science Foundation under Grant No. 1161282.

References

[1] S. Alpert, M. Galun, B. Nadler, and R. Basri. Detecting faint curved edges in noisy images. In ECCV, 2010.
[2] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. PAMI, 33(5):898–916, May 2011.
[3] J. August and S. W. Zucker. Sketches with curvature: The curve indicator random field and Markov processes. PAMI, 25(4):387–400, April 2003.
[4] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. PAMI, 26(9):1124–1137, Sep 2004.
[5] Y. Boykov, O. Veksler, and R. Zabih. Efficient approximate energy minimization via graph cuts. PAMI, 20(12):1222–1239, Nov 2001.
[6] X. Descombes, J. F. Mangin, E. Pechersky, and M. Sigelle. Fine structures preserving Markov model for image processing. In SCIA, 1995.
[7] S. M. Eslami, N. Heess, and J. Winn. The shape Boltzmann machine: a strong model of object shape. In CVPR, 2012.
[8] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient belief propagation for early vision. IJCV, 70(1), 2006.
[9] P. F. Felzenszwalb and J. Schwartz. Hierarchical matching of deformable shapes. In CVPR, 2007.
[10] T. Leung and J. Malik. Contour continuity in region-based image segmentation. In ECCV, pages 544–559, 1998.
[11] Shyjan Mahamud, Lance R. Williams, Karvel K. Thornber, and Kanglin Xu. Segmentation of multiple salient closed contours from real images. PAMI, 25(4):433–444, 2003.
[12] David R. Martin, Charless C.
Fowlkes, and Jitendra Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. PAMI, 26(5):530–549, 2004.
[13] R. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, Computer Science, University of Toronto, 1993.
[14] Xiaofeng Ren, Charless Fowlkes, and Jitendra Malik. Learning probabilistic models for contour completion in natural images. IJCV, 77(1-3):47–63, 2008.
[15] Stefan Roth and Michael J. Black. Fields of experts. IJCV, 82(2):205–229, 2009.
[16] A. Shashua and S. Ullman. Structural saliency: The detection of globally salient structures using a locally connected network. In ICCV, pages 321–327, 1988.
[17] A. Shekhovtsov, P. Kohli, and C. Rother. Curvature prior for MRF-based segmentation and shape inpainting. In DAGM, 2012.
[18] O. J. O. Söderkvist. Computer vision classification of leaves from Swedish trees. Master's thesis, Linköping University, September 2001.
[19] Rustem Takhanov and Vladimir Kolmogorov. Inference algorithms for pattern-based CRFs on sequence data. In ICML, 2013.
[20] T. Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008.
[21] Lance R. Williams and David W. Jacobs. Local parallel computation of stochastic completion fields. Neural Computation, 9(4):859–881, 1997.
[22] Lance R. Williams and David W. Jacobs. Stochastic completion fields: A neural model of illusory contour shape and salience. Neural Computation, 9(4):837–858, 1997.
[23] A. S. Willsky. Multiresolution Markov models for signal and image processing. Proceedings of the IEEE, 90(8):1396–1458, 2002.
[24] S. C. Zhu, Y. N. Wu, and D. B. Mumford. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. IJCV, 27(2):1–20, 1998.
“How hard is my MDP?” The distribution-norm to the rescue

Odalric-Ambrym Maillard, The Technion, Haifa, Israel, odalric-ambrym.maillard@ens-cachan.org
Timothy A. Mann, The Technion, Haifa, Israel, mann.timothy@gmail.com
Shie Mannor, The Technion, Haifa, Israel, shie@ee.technion.ac.il

Abstract

In Reinforcement Learning (RL), state-of-the-art algorithms require a large number of samples per state-action pair to estimate the transition kernel p. In many problems, a good approximation of p is not needed. For instance, if from one state-action pair (s, a), one can only transit to states with the same value, learning p(·|s, a) accurately is irrelevant (only its support matters). This paper aims at capturing such behavior by defining a novel hardness measure for Markov Decision Processes (MDPs) based on what we call the distribution-norm. The distribution-norm w.r.t. a measure ν is defined on zero ν-mean functions f by the standard deviation of f with respect to ν. We first provide a concentration inequality for the dual of the distribution-norm. This allows us to replace the problem-free, loose || · ||_1 concentration inequalities used in most previous analyses of RL algorithms with a tighter problem-dependent hardness measure. We then show that several common RL benchmarks have low hardness when measured using the new norm. The distribution-norm captures finer properties than the number of states or the diameter and can be used to assess the difficulty of MDPs.

1 Introduction

The motivation for this paper started with a question: Why is the number of samples needed for Reinforcement Learning (RL) in practice so much smaller than the number given by theory? Can we improve this?
In Markov Decision Processes (MDPs, Puterman (1994)), when performance is measured by (1) the sample complexity (Kearns and Singh, 2002; Kakade, 2003; Strehl and Littman, 2008; Szita and Szepesvári, 2010) or (2) the regret (Bartlett and Tewari, 2009; Jaksch, 2010; Ortner, 2012), algorithms have been developed that achieve provably near-optimal performance. Despite this, one can often solve MDPs in practice with far fewer samples than required by current theory. One possible reason for this disconnect between theory and practice is that the analysis of RL algorithms has focused on bounds that hold for the most difficult MDPs. While it is interesting to know how an RL algorithm will perform on the hardest MDPs, most MDPs we want to solve in practice are far from pathological. Thus, we want algorithms (and analyses) that perform appropriately with respect to the hardness of the MDP they are facing. A natural way to fill this gap is to formalize a “hardness” metric for MDPs and show that MDPs from the literature that were solved with few samples are not “hard” according to this metric. For finite-state MDPs, the usual metrics appearing in performance bounds include the number of states and actions, the maximum of the value function in the discounted setting, and the diameter or sometimes the span of the bias function in the undiscounted setting. These only capture limited properties of the MDP. Our goal in this paper is to propose a more refined notion of hardness.

Previous work. Despite the rich literature on MDPs, there has been surprisingly little work on metrics capturing the difficulty of learning MDPs. In Jaksch (2010), the authors introduce the UCRL algorithm for undiscounted MDPs, whose regret scales with the diameter D of the MDP, a quantity that captures the time needed to reach any state from any other.
In Bartlett and Tewari (2009), the authors modify UCRL to achieve regret that scales with the span of the bias function, which can be arbitrarily smaller than D. The resulting algorithm, REGAL, achieves smaller regret, but it is an open question whether the algorithm can be implemented. Closely related to our proposed solution, in Filippi et al. (2010) the authors provide a modified version of UCRL, called KL-UCRL, that uses modified confidence intervals on the transition kernel based on the Kullback-Leibler divergence rather than an || · ||_1 control on the error. The resulting algorithm is reported to work better in practice, although this is not reflected in the theoretical bounds. Farahmand (2011) introduced a metric for MDPs called the action-gap. This work is the closest in spirit to our approach. The action-gap captures the difficulty of distinguishing the optimal policy from near-optimal policies, and is complementary to the notion of hardness proposed here. However, the action-gap has mainly been used for planning, instead of learning, which is our main focus. In the discounted setting, several works have improved the bounds with respect to the number of states (Szita and Szepesvári, 2010) and the discount factor (Lattimore and Hutter, 2012). However, these analyses focus on worst-case bounds that do not scale with the hardness of the MDP, missing an opportunity to help bridge the gap between theory and practice.

Contributions. Our main contribution is a refined metric for the hardness of MDPs that captures the observed “easiness” of common benchmark MDPs. To accomplish this we first introduce a norm induced by a distribution ν, aka the distribution-norm. For functions f with zero ν-expectation, ||f||_ν is the standard deviation of f under ν. We define the dual of this norm in Lemma 1, and then study its concentration properties in Theorem 1. This central result is of independent interest beyond its application in RL.
More precisely, for a discrete probability measure p and its empirical version p_n built from n i.i.d. samples, we control ||p − p_n||_{*,p} at rate O((np_0)^{-1/2}), where p_0 is the minimum mass of p on its support. Second, we define a hardness measure for MDPs based on the distribution-norm. This measure captures the stochasticity along the value function. This quantity is naturally small in MDPs that are nearly deterministic, but it can also be small in MDPs with highly stochastic transition kernels. For instance, this is the case when all states reachable from a state have the same value. We show that some common benchmark MDPs have a small hardness measure. This illustrates that our proposed norm is a useful tool for the analysis and design of existing and future RL algorithms.

Outline. In Section 2, we formalize the distribution-norm and give intuition about the interplay with its dual. We compare to distribution-independent norms. Theorem 1 provides a concentration inequality for the dual of this norm that is of independent interest beyond the MDP setting. Section 3 uses these insights to define a problem-dependent hardness metric for both undiscounted and discounted MDPs (Definition 2, Definition 1), which we call the environmental norm. Importantly, we show in Section 3.2 that common benchmark MDPs have small environmental norm C in this sense, and compare our bound to approaches bounding the problem-free || · ||_1 norm.

2 The distribution-norm and its dual

In Machine Learning (ML), norms often play a crucial role in obtaining performance bounds. One typical example is the following. Let X be a measurable space equipped with an unknown probability measure ν ∈ M_1(X) with density p. Based on some procedure, an algorithm produces a candidate measure ν̃ ∈ M_1(X) with density p̃. One is then interested in the loss with respect to a continuous function f. It is natural to look at the mismatch between ν and ν̃ on f. That is,

  (ν − ν̃, f) = ∫_X f(x) (ν − ν̃)(dx) = ∫_X f(x) (p(x) − p̃(x)) dx .
A typical bound on this quantity is obtained by applying a Hölder inequality to f and p − p̃, which gives (ν − ν̃, f) ≤ ||p − p̃||_1 ||f||_∞. Assuming a bound is known for ||f||_∞, this inequality can be controlled with a bound on ||p − p̃||_1. When X is finite and p̃ is the empirical distribution p_n estimated from n i.i.d. samples of p, results such as Weissman et al. (2003) can be applied to bound this term with high probability. However, in this learning problem, what matters is not f but the way f behaves with respect to ν. Thus, trying to capture the properties of f via the distribution-free ||f||_∞ bound is not satisfactory. So we propose, instead, a norm || · ||_ν driven by ν. Well-behaving f will have small norm ||f||_ν, whereas badly-behaving f will have large norm ||f||_ν. Every distribution has a natural norm associated with it that measures the quadratic variations of f with respect to ν. This quantity is at the heart of many key results in mathematical statistics, and is formally defined by

  ||f||_ν = ( ∫_X ( f(x) − E_ν f )² ν(dx) )^{1/2} .   (1)

To get a norm, we restrict C(X) to the space of continuous functions

  E_ν = { f ∈ C(X) : ||f||_ν < ∞, supp(ν) ⊂ supp(f), E_ν f = 0 } .

We then define the corresponding dual space in a standard way by E*_ν = { µ : ||µ||_{*,ν} < ∞ }, where

  ||µ||_{*,ν} = sup_{f ∈ E_ν} ( ∫_X f(x) µ(dx) ) / ||f||_ν .

Note that for f ∈ E_ν, using the fact that ν(X) = ν̃(X) = 1 and that x ↦ f(x) − E_ν f is a zero-mean function, we immediately have

  (ν − ν̃, f) = (ν − ν̃, f − E_ν f) ≤ ||p − p̃||_{*,ν} ||f − E_ν f||_ν .   (2)

The key difference with the generic Hölder inequality is that || · ||_ν now captures the behavior of f with respect to ν, as opposed to || · ||_∞. Conceptually, using a quadratic norm instead of an L1 norm, as we do here, is analogous to moving from Hoeffding's inequality to Bernstein's inequality in the framework of concentration inequalities. We are interested in situations where ||f||_ν is much smaller than ||f||_∞. That is, f is well-behaving with respect to ν.
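For a discrete ν the norm in Equation (1) is just a weighted standard deviation; a minimal sketch (the function name is ours):

```python
import numpy as np

def dist_norm(f, nu):
    """||f||_nu = sqrt( sum_x nu(x) * (f(x) - E_nu f)^2 ), Eq. (1) for discrete nu."""
    f, nu = np.asarray(f, float), np.asarray(nu, float)
    mean = np.dot(nu, f)                     # E_nu f
    return float(np.sqrt(np.dot(nu, (f - mean) ** 2)))
```

A function that is constant on the support of ν has ||f||_ν = 0 even when ||f||_∞ is large, which is exactly the situation the distribution-norm is designed to exploit.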
In such cases, we can get an improved bound ||p − p̃||_{*,ν} ||f − E_ν f||_ν instead of the best possible generic bound inf_{c∈R} ||p − p̃||_1 ||f − c||_∞. Simply controlling either ||p − p̃||_{*,ν} (respectively ||p − p̃||_1) or ||f||_ν (respectively ||f||_∞) is not enough. What matters is the product of these quantities. For our choice of norm, we show that ||p − p̃||_{*,ν} concentrates at essentially the same speed as ||p − p̃||_1, but ||f||_∞ is typically much larger than ||f||_ν for the typical functions met in the analysis of MDPs. We do not claim that the norm defined in equation (1) is the best norm that leads to a minimal ||p − p̃||_{*,ν} ||f − E_ν f||_ν, but we show that it is an interesting candidate. We proceed in two steps. First, we design in Section 2 a concentration bound for ||p − p_n||_{*,ν} that is not much larger than the Weissman et al. (2003) bound on ||p − p_n||_1. (Note that ||p − p_n||_{*,ν} must be larger than ||p − p_n||_1, as it captures a refined property.) Second, in Section 3, we consider RL in an MDP where p represents the transition kernel of a state-action pair and f represents the value function of the MDP for a policy. The value function and p are strongly linked by construction, and the distribution-norm helps us capture their interplay. We show in Section 3.2 that common benchmark MDPs have optimal value functions with small || · ||_ν norm. This naturally introduces a new way to capture the hardness of MDPs, besides the diameter (Jaksch, 2010) or the span (Bartlett and Tewari, 2009). Our formal notion of MDP hardness is summarized in Definitions 1 and 2, for discounted and undiscounted MDPs, respectively.

2.1 A dual-norm concentration inequality

For convenience we consider a finite space X = {1, . . . , S} with S points. We focus on the first term on the right-hand side of (2), which corresponds to the dual norm when p̃ = p_n is the empirical mean built from n i.i.d. samples from the distribution ν. We denote by p the probability vector corresponding to ν. The following lemma, whose proof is in the supplementary material, provides a convenient way to compute the dual norm.

Lemma 1 Assume that X = {1, . . . , S}, and, without loss of generality, that supp(p) = {1, . . . , K}, with K ≤ S. Then the following equality holds true:

  ||p_n − p||_{*,p} = ( Σ_{s=1}^{K} ( p_{n,s}² − p_s² ) / p_s )^{1/2} .

Now we provide a finite-sample bound on our proposed norm.
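Since Σ_s p_{n,s} = Σ_s p_s = 1, the closed form in Lemma 1 coincides with the more familiar chi-square-style quantity ( Σ_s (p_{n,s} − p_s)²/p_s )^{1/2}, which makes it easy to check numerically. A sketch (the function name is ours; we assume supp(p_n) ⊆ supp(p)):

```python
import numpy as np

def dual_norm(pn, p):
    """Lemma 1: ||p_n - p||_{*,p} = sqrt( sum_s (p_n[s]^2 - p[s]^2) / p[s] ) over supp(p)."""
    pn, p = np.asarray(pn, float), np.asarray(p, float)
    supp = p > 0
    return float(np.sqrt(np.sum((pn[supp] ** 2 - p[supp] ** 2) / p[supp])))
```

The equivalence with the chi-square form follows by expanding (p_n − p)²/p and using that both vectors sum to one.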
The following lemma, whose proof is in the supplementary material, provides a convenient way to compute the dual norm. Lemma 1 Assume that X = {1, . . . , S}, and, without loss of generality, that supp(p) = {1, . . . , K}, with K  S. Then the following equality holds true ||pn −p||,p =    K s=1 p2n,s −p2s ps . Now we provide a finite-sample bound on our proposed norm. 3 Theorem 1 (Main result) Assume that supp(p) = {1, . . . , K}, with K  S. Then for all δ ∈ (0, 1), with probability higher than 1 −δ, ||pn −p||,p  min  1 p(K) −1, K −1 n + 2  (2n −1) ln(1/δ) n2 1 p(K) − 1 p(1)  ,(3) where p(K) is the smallest non zero component of p = (p1, . . . , pS), and p(1) the largest one. The proof follows an adaptation of Maurer and Pontil (2009) for empirical Bernstein bounds, and uses results for self-bounded functions from the same paper. This gives tighter bounds than naive concentration inequalities (Hoeffding, Bernstein, etc.). We indeed get a O(n−1/2) scaling, whereas using simpler techniques would lead to a weak O(n−1/4) scaling. Proof We will apply Theorem 7 of Maurer and Pontil (2009). Using the notation of this theorem, we denote the sample by X = (X1, . . . , Xn) and the function we want to control by V(X) = ||pn −p||2 ,p . We now introduce, for any s ∈S the modified sample Xi0,s = (X1, . . . , Xi0−1, s, Xi0+1, . . . , Xn). We are interested in the quantity V(X)−V(Xi0,s). To apply Theorem 7 of Maurer and Pontil (2009), we need to identify constants a, b such that ∀i ∈[n], V(X) −infs∈S V(Xi,s)  b n i=1  V(X) −infs∈S V(Xi,s) 2  aV(X) . The two following lemmas enable us to identify a and b. They follow from simple algebra and are proved in Appendix A in the supplementary material. Lemma 2 V(X) satisfies Ep  V(X)  = K−1 n . Moreover, for all i ∈{1, . . . , n} we have that V(X) −inf s∈S V(Xi,s)  b , where b = 2n −1 n2 1 p(K) − 1 p(1)  . Lemma 3 V(X) = ||pn −p||2 ,p satisfies n i=1  V(X) −inf s∈S V(Xi,s) 2  2bV(X) . Thus, we can choose a = 2b. 
By application of Theorem 7 of Maurer and Pontil (2009) to Ṽ(X) = V(X)/b, we deduce that for all ε > 0,

  P( Ṽ(X) − E Ṽ(X) > ε ) ≤ exp( − ε² / ( 4 E Ṽ(X) + 2ε ) ) .

Plugging back in the definition of Ṽ(X), we obtain

  P( ||p_n − p||²_{*,p} > (K − 1)/n + ε ) ≤ exp( − (ε²/b) / ( 4 (K − 1)/n + 2ε ) ) .

After inverting this bound in ε and using the fact that √(a + b) ≤ √a + √b for non-negative a, b, we deduce that for all δ ∈ (0, 1), with probability higher than 1 − δ,

  ||p_n − p||²_{*,p} ≤ E V(X) + 2 √( E V(X) b ln(1/δ) ) + 2b ln(1/δ) = ( √(E V(X)) + √(b ln(1/δ)) )² + b ln(1/δ) .

Thus, we deduce from this inequality that

  ||p_n − p||_{*,p} ≤ √(E V(X)) + 2 √( b ln(1/δ) ) = √((K − 1)/n) + 2 √( ((2n − 1) ln(1/δ)/n²) ( 1/p_{(K)} − 1/p_{(1)} ) ) ,

which concludes the proof. We recover here an O(n^{-1/2}) behavior, more precisely an O(p_{(K)}^{-1} n^{-1/2}) scaling, where p_{(K)} is the smallest non-zero probability mass of p. □

3 Hardness measure in Reinforcement Learning using the distribution-norm

In this section, we apply the insights from Section 2 on the distribution-norm to learning in Markov Decision Processes (MDPs). We start by defining a formal notion of hardness C for discounted MDPs and undiscounted MDPs with average reward, which we call the environmental norm. Then, we show in Section 3.2 that several benchmark MDPs have small environmental norm. In Section 3.1, we present a regret bound for a modification of UCRL whose regret scales with C, without having to know C in advance.

Definition 1 (Discounted MDP) Let M = ⟨S, A, r, p, γ⟩ be a γ-discounted MDP, with reward function r and transition kernel p. We denote by V^π the value function corresponding to a policy π (Puterman, 1994). We define the environmental-value norm of policy π in MDP M by

  C^π_M = max_{(s,a) ∈ S×A} ||V^π||_{p(·|s,a)} .

Definition 2 (Undiscounted MDP) Let M = ⟨S, A, r, p⟩ be an undiscounted MDP, with reward function r and transition kernel p. We denote by h^π the bias function for policy π (Puterman, 1994; Jaksch, 2010).
We define the environmental-value norm of policy π in MDP M by the quantity

  C^π_M = max_{(s,a) ∈ S×A} ||h^π||_{p(·|s,a)} .

In the discounted setting with bounded rewards in [0, 1], V^π ≤ 1/(1 − γ) and thus C^π_M ≤ 1/(1 − γ) as well. In the undiscounted setting, ||h^π||_{p(·|s,a)} ≤ span(h^π), and thus C^π_M ≤ span(h^π). We define the class of C-“hard” MDPs by

  M_C = { M : C^{π*}_M ≤ C } .

That is, the class of MDPs whose optimal policy has a low environmental-value norm, or for short, MDPs with low environmental norm.

Important note: It may be tempting to think that, since the above definition captures a notion of variance, an MDP that is very noisy will have a high environmental norm. However this reasoning is incorrect. The environmental norm of an MDP is not the variance of a roll-out trajectory, but rather captures the variations of the value (or bias) function with respect to the transition kernel. For example, consider a fully connected MDP whose transition kernel transits to every state uniformly at random, but with a constant reward function. In this trivial MDP, C^π_M = 0 for all policies π, even though the MDP is extremely noisy, because the value function is constant. In general MDPs, the environmental norm depends on how varied the value function is at the possible next states and on the distribution over next states. Note also that we use the term hardness rather than complexity to avoid confusion with such concepts as Rademacher or VC complexity.

3.1 “Easy” MDPs and algorithms

In this section, we demonstrate how the dual norm (instead of the usual || · ||_1 norm) can lead to improved bounds for learning in MDPs with small environmental norm.

Discounted MDPs. Due to space constraints, we only report one proposition that illustrates the kind of achievable results.
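Definitions 1 and 2 are exact maxima over per-pair standard deviations of the value (or bias) function, so they are easy to compute when the kernel is known. A minimal sketch (the function name is ours; P is assumed to have shape (S, A, S) with P[s, a] = p(·|s, a)), which also reproduces the constant-value example from the note above:

```python
import numpy as np

def environmental_norm(V, P):
    """C^pi_M = max_{s,a} ||V||_{p(.|s,a)}: the largest standard deviation of V
    under any next-state distribution p(.|s,a) of the kernel P (shape (S, A, S))."""
    V = np.asarray(V, float)
    means = P @ V                  # E_{p(.|s,a)} V,   shape (S, A)
    second = P @ (V ** 2)          # E_{p(.|s,a)} V^2, shape (S, A)
    return float(np.sqrt(np.maximum(second - means ** 2, 0.0)).max())
```

A constant V gives norm 0 no matter how stochastic P is, which is the "noisy but easy" situation discussed in the important note.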
Indeed, our goal is not to derive a modified version of each existing algorithm for the discounted scenario, but rather to instill the key idea of using a refined hardness measure when deriving the core lemmas underlying the analysis of previous (and future) algorithms. The analysis of most RL algorithms for the discounted case uses a “simulation lemma” (Kearns and Singh, 2002); see also Strehl and Littman (2008) for a refined version. A simulation lemma bounds the error, in the MDP the samples were taken from, of the value of a policy planned on an estimated MDP. This effectively controls the number of samples needed from each state-action pair to derive a near-optimal policy. The following result is a simulation lemma exploiting our proposed notion of hardness (the environmental norm).

Proposition 1 Let M be a γ-discounted MDP with deterministic rewards. For a policy π, denote by V^π its corresponding value. We denote by p the transition kernel of M, and for convenience write p^π(s′|s) for p(s′|s, π(s)). Now, let p̂ be an estimate of the transition kernel such that max_{s∈S} ||p̂^π(·|s) − p^π(·|s)||_{*,p^π(·|s)} ≤ ε, and let V̂^π denote the corresponding value in the MDP with kernel p̂. Then the maximal expected error between the two values is bounded by

  E^π_err := max_{s_0∈S} | E_{p^π(·|s_0)}[ V̂^π ] − E_{p^π(·|s_0)}[ V^π ] | ≤ ε C^π / (1 − γ) ,

where C^π = max_{(s,a) ∈ S×A} ||V^π||_{p(·|s,a)}. In particular, for the optimal policy π*, C^{π*} ≤ C.
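Proposition 1 compares the value of a fixed policy under the true and estimated kernels. Both values solve a linear system, V^π = r^π + γ P^π V^π, which makes the proposition easy to probe numerically; a sketch of the policy-evaluation step (the function name is ours, not the paper's notation):

```python
import numpy as np

def policy_value(P_pi, r_pi, gamma):
    """Policy evaluation: solve V = r + gamma * P V for a fixed policy,
    where P_pi[s, s'] = p^pi(s'|s) and r_pi[s] is the (deterministic) reward."""
    S = P_pi.shape[0]
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
```

Evaluating the same policy under p and under an estimate p̂, and then taking the maximal gap of expected values over start states, gives exactly the left-hand side of Proposition 1, which one can compare against ε C^π / (1 − γ).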
Now, in most analyses, one only needs to bound the hardness with respect to the optimal policy and to the optimistic/greedy policies actually used by the algorithm. For an optimal policy $\tilde\pi$ computed from an (ε, ε')-approximate model (see Lemma 4 for details), it is not difficult to show that $C^{\tilde\pi} \le C^{\pi} + (\varepsilon' C^{\pi} + \varepsilon)/(1-\gamma)$, which thus allows for a tighter analysis. We do not report further results here, to avoid distracting the reader from the main message of the paper, which is the introduction of a distribution-dependent hardness metric for MDPs. Likewise, we do not detail the steps that lead from this result to the various sample-complexity bounds one can find in the abundant literature on the topic, as it would not be more illuminating than Proposition 1.

Undiscounted MDPs. In the undiscounted setting, with the average-reward criterion, it is natural to consider the UCRL algorithm of Jaksch (2010). We modify the definition of plausible MDPs used in the algorithm as follows. Using the same notation as Jaksch (2010), we replace the admissibility condition for a candidate transition kernel $\tilde p$ at the beginning of episode k at time $t_k$,

$$\|\hat p_k(\cdot|s,a) - \tilde p(\cdot|s,a)\|_1 \le \sqrt{\frac{14\, S \log(2At_k/\delta)}{\max\{1, N_k(s,a)\}}},$$

with the following condition involving the result of Theorem 1:

$$\|\hat p_k(\cdot|s,a) - \tilde p(\cdot|s,a)\|_{\star,\tilde p(\cdot|s,a)} \le B_k(s,a) := \min\left\{\sqrt{\frac{1}{p_0}-1},\;\sqrt{\frac{K-1}{\max\{1,N_k(s,a)\}}} + 2\sqrt{\frac{(2N_k(s,a)-1)\ln(t_k SA/\delta)}{\max\{1,N_k(s,a)\}^2}\left(\frac{1}{\tilde p_{(K)}} - \frac{1}{\tilde p_{(1)}}\right)}\right\}, \quad (4)$$

where $\tilde p_{(K)}$ is the smallest non-zero component of $\tilde p(\cdot|s,a)$, $\tilde p_{(1)}$ the largest one, and K is the size of the support of $\tilde p(\cdot|s,a)$. For simplicity, we assume here that the transition kernel p of the MDP always puts at least $p_0$ mass on each point of its support, and thus constrain an admissible kernel $\tilde p$ to satisfy the same condition. One restriction of the current (simple) analysis is that the algorithm needs to know a bound on $p_0$ in advance.
We believe it is possible to remove this assumption by estimating $p_0$ and accounting for the additional low-probability event corresponding to the estimation error. As this comes at the price of a more complicated algorithm and analysis, we do not report this extension here for clarity. Note that the optimization problem corresponding to Extended Value Iteration with (4) can still be solved by optimizing over the simplex. We refer to Jaksch (2010) for implementation details. Naturally, similar modifications apply also to REGAL and other UCRL variants introduced in the MDP literature. In order to assess the performance of the policy chosen by UCRL, it is useful to show the following:

Lemma 4. Let M and $\tilde M$ be two communicating MDPs over the same state-action space such that one is an (ε, ε')-approximation of the other, in the sense that for all s, a,

$$|r(s,a) - \tilde r(s,a)| \le \varepsilon \quad\text{and}\quad \|\tilde p(\cdot|s,a) - p(\cdot|s,a)\|_{\star,p(\cdot|s,a)} \le \varepsilon'.$$

Let ρ(M) denote the average value function of M. Then

$$\|\rho(M) - \rho(\tilde M)\|_p \le \varepsilon' \min\{C_M, C_{\tilde M}\} + \varepsilon.$$

Lemma 4 is a simple adaptation from Ortner et al. (2014). We now provide a bound on the regret of this modified UCRL algorithm. The regret bound turns out to be a bit better than that of UCRL in the case of an MDP M ∈ $\mathcal{M}_C$ with small C.

Proposition 2. Consider a finite-state MDP with S states, low environmental norm (M ∈ $\mathcal{M}_C$), and diameter D. Assume moreover that the transition kernel always puts at least $p_0$ mass on each point of its support. Then the modified UCRL algorithm run with condition (4) is such that, for all δ, with probability higher than 1 − δ, for all T, the regret after T steps is bounded by

$$R_T = O\left(DC\sqrt{SA}\left(\sqrt{\frac{\log(TSA/\delta)}{p_0}} + \sqrt{S}\right) + D\sqrt{\frac{T}{p_0}\log(TSA/\delta)}\right).$$

The regret bound for the original UCRL of Jaksch (2010) scales as $O\big(DS\sqrt{AT\log(TSA/\delta)}\big)$. Since we used some crude upper bounds in parts of the proof of Proposition 2, we believe the right scaling for the bound of Proposition 2 is $O\left(C\sqrt{\frac{TSA}{p_0}\log(TSA/\delta)}\right)$.
The cruder factors come from second-order terms that we controlled trivially to avoid technical and not very illuminating considerations. What matters here is that C appears as a factor of the leading term. Indeed, Proposition 2 is here mostly to illustrate what one can achieve; improving the other terms is technical and goes beyond the scope of this paper. Comparing the two regret bounds, the result of Proposition 2 provides a qualitative improvement over the result of Jaksch (2010) whenever $C < D\sqrt{S p_0}$ (respectively $C < \sqrt{S p_0}$) for the conjectured (resp. current) result.

Note. The modified UCRL algorithm does not need to know the environmental norm C of the MDP in advance. It only appears in the analysis and in the final regret bound. This property is similar to that of UCRL with respect to the diameter D.

3.2 The hardness of benchmark MDPs

In this section, we consider the hardness of a set of MDPs that have appeared in past literature. Table 1 summarizes the results for six MDPs that were chosen both to be representative of typical finite-state MDPs and to cover a diverse range of tasks. These MDPs are also significant in the sense that good solutions for them have been learned with far fewer samples than suggested by existing theoretical bounds. The metrics we report include the number of states S, the number of actions A, the maximum of $V^\star$ (denoted $V^\star_{\mathrm{MAX}}$), the span of $V^\star$, the norm $C^{\pi^\star}_M$, and

$$p_0 = \min_{s\in S,\, a\in A}\ \min_{s'\in \mathrm{supp}(p(\cdot|s,a))} p(s'|s,a),$$

that is, the minimum non-zero probability mass given by the transition kernel of the MDP. While we cannot compute the hardness for all policies, the hardness with respect to $\pi^\star$ is significant because it indicates how hard it is to learn the value function $V^\star$ of the optimal policy. Notice that $C^{\pi^\star}_M$ is significantly smaller than both $V^\star_{\mathrm{MAX}}$ and $\mathrm{span}(V^\star)$ in all the MDPs.
This suggests that a model accurately representing the optimal value function can be derived with a small number of samples (and a bound based on $\|\cdot\|_1 V^\star_{\mathrm{MAX}}$ is overly conservative).

MDP (reference)                               S     A   V*_MAX   Span(V*)   C^{pi*}_M   p_0
bottleneck (McGovern and Barto, 2001)         231   4   19.999   19.999     0.526       0.1
red herring (Hester and Stone, 2009)          121   4   17.999   17.999     4.707       0.1
taxi † (Dietterich, 1998)                     500   6   7.333    0.885      0.055       0.043
inventory † (Mankowitz et al., 2014)          101   2   19.266   0.963      0.263       < 10^-3
mountain car †‡§ (Sutton and Barto, 1998)     150   3   19.999   19.999     1.296       0.322
pinball †‡§ (Konidaris and Barto, 2009)       2304  5   19.999   19.991     0.059       < 10^-3

Table 1: MDPs marked with † indicate that the true MDP was not available and so it was estimated from samples; we estimated these MDPs with 10,000 samples from each state-action pair. MDPs marked with ‡ indicate that the original MDP is deterministic and we therefore added noise to the transition dynamics. For the Mountain Car problem, we added a small amount of noise to the vehicle's velocity during each step ($pos_{t+1} = pos_t + vel_t(1 + X)$, where X is a random variable with equally probable events $\{-vel_{\mathrm{MAX}}, 0, vel_{\mathrm{MAX}}\}$). For the pinball domain, we added noise similar to Tamar et al. (2013). MDPs marked with § were discretized to create a finite-state MDP. The rewards of all MDPs were normalized to [0, 1] and a discount factor γ = 0.95 was used.

To understand the environmental-value norm of near-optimal policies π in an MDP, we ran policy iteration on each of the benchmark MDPs from Table 1 for 100 iterations (see supplementary material for further details). We computed the environmental-value norm of all encountered policies and selected the policy π with maximal norm and its corresponding worst-case distribution. Figure 1 compares the Weissman et al. (2003) bound × $V_{\mathrm{MAX}}$ to the bound given by Theorem 1 × $C^\pi_M$ as the number of samples increases.
It is indeed the comparison of these products that matters for the learning regret, rather than that of either factor alone. In each MDP, we see an order-of-magnitude improvement by exploiting the distribution-norm. This is particularly significant because the Weissman et al. (2003) bound is quite close to the behavior observed in experiments. The result in Figure 1 strengthens support for our theoretical findings, suggesting that bounds based on the distribution-norm scale with the MDP's hardness.

Figure 1: Comparison of the Weissman et al. (2003) bound times $V_{\mathrm{MAX}}$ to (3) of Theorem 1 times $C^\pi_M$ in the benchmark MDPs (error on a log scale versus number of samples, from 0 to 1000, for Bottleneck, Red Herring, Taxi, Inventory Management, Mountain Car, and Pinball). In each MDP, we selected the policy π (from the policies encountered during policy iteration) that gave the largest $C^\pi$ and the worst next-state distribution for our bound. In each MDP, the improvement with the distribution-norm is an order of magnitude (or more) better than using the distribution-free Weissman et al. (2003) bound.

4 Discussion and conclusion

In the early days of learning theory, sample-independent quantities such as the VC dimension and, later, the Rademacher complexity were used to derive generalization bounds for supervised learning. Later on, data-dependent bounds (empirical VC or empirical Rademacher) replaced these quantities to obtain better bounds.
In a similar spirit, we proposed the first analysis in RL where, instead of considering generic a priori bounds, one can use stronger MDP-specific bounds. Just as generalization bounds in supervised learning have been used to drive model selection algorithms and structural risk minimization, our proposed distribution-dependent norm suggests a similar approach to solving RL problems. Although we do not claim to close the gap between theoretical and empirical bounds, this paper opens an interesting direction of research towards this goal, and achieves a significant first step. It inspires at least a modification of the whole family of UCRL-based algorithms, and could potentially also benefit other fundamental problems in RL such as basis-function adaptation or model selection, though efficient implementation should not be overlooked. We chose a natural weighted L2 norm induced by a distribution, due to its simplicity of interpretation, and showed that several benchmark MDPs have low hardness. A natural question is how much benefit can be obtained by studying other Lp or Orlicz distribution-norms. Further, one may wish to create other distribution-dependent norms that emphasize certain areas of the state space in order to better capture desired (or undesired) phenomena. This is left for future work. In the analysis, we essentially showed how to adapt existing algorithms to use the new distribution-dependent hardness measure. We believe this is only the beginning of what is possible, and that new algorithms will be developed to best utilize distribution-dependent norms in MDPs.

Acknowledgements

This work was supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL) and the Technion.

References

Bartlett, P. L. and Tewari, A. (2009). REGAL: A regularization based algorithm for reinforcement learning in weakly communicating MDPs.
In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 35–42.

Dietterich, T. G. (1998). The MAXQ method for hierarchical reinforcement learning. In International Conference on Machine Learning, pages 118–126.

Farahmand, A. M. (2011). Action-gap phenomenon in reinforcement learning. In Shawe-Taylor, J., Zemel, R. S., Bartlett, P. L., Pereira, F. C. N., and Weinberger, K. Q., editors, Proceedings of the 25th Annual Conference on Neural Information Processing Systems, pages 172–180, Granada, Spain.

Filippi, S., Cappé, O., and Garivier, A. (2010). Optimism in reinforcement learning and Kullback-Leibler divergence. In Communication, Control, and Computing (Allerton), 2010 48th Annual Allerton Conference on, pages 115–122. IEEE.

Hester, T. and Stone, P. (2009). Generalized model learning for reinforcement learning in factored domains. In The Eighth International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Jaksch, T. (2010). Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11:1563–1600.

Kakade, S. M. (2003). On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London.

Kearns, M. and Singh, S. (2002). Near-optimal reinforcement learning in polynomial time. Machine Learning, 49:209–232.

Konidaris, G. and Barto, A. (2009). Skill discovery in continuous reinforcement learning domains using skill chaining. In Bengio, Y., Schuurmans, D., Lafferty, J., Williams, C. K. I., and Culotta, A., editors, Advances in Neural Information Processing Systems 22, pages 1015–1023.

Lattimore, T. and Hutter, M. (2012). PAC bounds for discounted MDPs. In Algorithmic Learning Theory, pages 320–334. Springer.

Mankowitz, D. J., Mann, T. A., and Mannor, S. (2014). Time-regularized interrupting options (TRIO). In Proceedings of the 31st International Conference on Machine Learning.

Maurer, A. and Pontil, M. (2009).
Empirical Bernstein bounds and sample-variance penalization. In Conference on Learning Theory (COLT).

McGovern, A. and Barto, A. G. (2001). Automatic discovery of subgoals in reinforcement learning using diverse density. In Proceedings of the 18th International Conference on Machine Learning, pages 361–368, San Francisco, USA.

Ortner, R. (2012). Online regret bounds for undiscounted continuous reinforcement learning. In Neural Information Processing Systems 25, pages 1772–1780.

Ortner, R., Maillard, O.-A., and Ryabko, D. (2014). Selecting near-optimal approximate state representations in reinforcement learning. Technical report, Montanuniversitaet Leoben.

Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc.

Strehl, A. L. and Littman, M. L. (2008). An analysis of model-based interval estimation for Markov decision processes. Journal of Computer and System Sciences, 74(8):1309–1331.

Sutton, R. and Barto, A. (1998). Reinforcement Learning: An Introduction. MIT Press.

Szita, I. and Szepesvári, C. (2010). Model-based reinforcement learning with nearly tight exploration complexity bounds. In Proceedings of the 27th International Conference on Machine Learning.

Tamar, A., Castro, D. D., and Mannor, S. (2013). TD methods for the variance of the reward-to-go. In Proceedings of the 30th International Conference on Machine Learning.

Weissman, T., Ordentlich, E., Seroussi, G., Verdu, S., and Weinberger, M. J. (2003). Inequalities for the L1 deviation of the empirical distribution. Technical report, Hewlett-Packard Labs.
Spectral Learning of Mixture of Hidden Markov Models

Y. Cem Sübakan♭, Johannes Traa♯, Paris Smaragdis♭,♯,♮
♭Department of Computer Science, University of Illinois at Urbana-Champaign
♯Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
♮Adobe Systems, Inc.
{subakan2, traa2, paris}@illinois.edu

Abstract

In this paper, we propose a learning approach for the Mixture of Hidden Markov Models (MHMM) based on the Method of Moments (MoM). Computational advantages of MoM make MHMM learning amenable for large data sets. It is not possible to directly learn an MHMM with existing learning approaches, mainly due to a permutation ambiguity in the estimation process. We show that it is possible to resolve this ambiguity using the spectral properties of a global transition matrix even in the presence of estimation noise. We demonstrate the validity of our approach on synthetic and real data.

1 Introduction

Method of Moments (MoM) based algorithms [1, 2, 3] for learning latent variable models have recently become popular in the machine learning community. They provide uniqueness guarantees in parameter estimation and are a computationally lighter alternative compared to more traditional maximum likelihood approaches. The main reason behind the computational advantage is that once the moment expressions are acquired, the rest of the learning work amounts to factorizing a moment matrix whose size is independent of the number of data items. However, it is unclear how to use these algorithms for more complicated models such as the Mixture of Hidden Markov Models (MHMM). The MHMM [4] is a useful model for clustering sequences, and has various applications [5, 6, 7]. The E-step of the Expectation Maximization (EM) algorithm for an MHMM requires running forward-backward message passing along the latent state chain for each sequence in the dataset in every EM iteration.
For this reason, if the number of sequences in the dataset is large, EM can be computationally prohibitive. In this paper, we propose a learning algorithm based on the method of moments for the MHMM. We use the fact that an MHMM can be expressed as an HMM with a block-diagonal transition matrix. Having made that observation, we use an existing MoM algorithm to learn the parameters up to a permutation ambiguity. However, this doesn't recover the parameters of the individual HMMs. We exploit the spectral properties of the global transition matrix to estimate a de-permutation mapping that enables us to recover the parameters of the individual HMMs. We also specify a method that can recover the number of HMMs under several spectral conditions.

2 Model Definitions

2.1 Hidden Markov Model

In a Hidden Markov Model (HMM), an observed sequence $x = x_{1:T} = \{x_1, \dots, x_t, \dots, x_T\}$ with $x_t \in \mathbb{R}^L$ is generated conditioned on a latent Markov chain $r = r_{1:T} = \{r_1, \dots, r_t, \dots, r_T\}$, with $r_t \in \{1, \dots, M\}$. The HMM is parameterized by an emission matrix $O \in \mathbb{R}^{L\times M}$, a transition matrix $A \in \mathbb{R}^{M\times M}$ and an initial state distribution $\nu \in \mathbb{R}^M$. Given the model parameters $\theta = (O, A, \nu)$, the likelihood of an observation sequence $x_{1:T}$ is defined as follows:

$$p(x_{1:T}|\theta) = \sum_{r_{1:T}} p(x_{1:T}, r_{1:T}|\theta) = \sum_{r_{1:T}} \prod_{t=1}^{T} p(x_t|r_t, O)\, p(r_t|r_{t-1}, A) = 1_M^\top A\,\mathrm{diag}(p(x_T|:, O)) \cdots A\,\mathrm{diag}(p(x_1|:, O))\,\nu = 1_M^\top \left(\prod_{t=1}^{T} A\,\mathrm{diag}(O(x_t))\right)\nu, \quad (1)$$

where $1_M \in \mathbb{R}^M$ is a column vector of ones, we have switched from index notation to matrix notation in the second line so that summations are embedded in matrix multiplications, and we use the MATLAB colon notation to pick a row/column of a matrix. Note that $O(x_t) := p(x_t|:, O)$.
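As an aside not in the paper, equation (1) can be evaluated directly as a running product of matrices. This sketch uses the isotropic Gaussian emission model defined below (means in the columns of O); all parameter values are hypothetical.

```python
import numpy as np

def hmm_likelihood(x, O, A, nu, sigma2=1.0):
    """p(x_{1:T}) = 1_M^T (prod_t A diag(O(x_t))) nu, Gaussian emissions."""
    alpha = nu.copy()
    for x_t in x:
        # O(x_t)(u) = N(x_t; O[:, u], sigma2 * I)
        d2 = ((x_t[:, None] - O) ** 2).sum(axis=0)
        emit = np.exp(-0.5 * d2 / sigma2) / (2 * np.pi * sigma2) ** (O.shape[0] / 2)
        alpha = A @ (emit * alpha)      # one factor A diag(O(x_t))
    return alpha.sum()                  # left-multiply by 1_M^T

rng = np.random.default_rng(0)
M, L, T = 3, 2, 10
A = rng.dirichlet(np.ones(M), size=M).T   # columns sum to 1: A(u, v) = p(u | v)
nu = np.ones(M) / M
O = rng.normal(size=(L, M))
x = rng.normal(size=(T, L))

p = hmm_likelihood(x, O, A, nu)
print(p > 0)  # a valid density value is strictly positive
```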
The model parameters are defined as follows:

• $\nu(u) = p(r_1 = u|r_0) = p(r_1 = u)$: initial latent state distribution
• $A(u, v) = p(r_t = u|r_{t-1} = v)$, $t \ge 2$: latent state transition matrix
• $O(:, u) = \mathbb{E}[x_t|r_t = u]$: emission matrix

The choice of the observation model $p(x_t|r_t)$ determines what the columns of O correspond to:

• Gaussian: $p(x_t|r_t = u) = \mathcal{N}(x_t; \mu_u, \sigma^2) \Rightarrow O(:, u) = \mathbb{E}[x_t|r_t = u] = \mu_u$.
• Poisson: $p(x_t|r_t = u) = \mathcal{PO}(x_t; \lambda_u) \Rightarrow O(:, u) = \mathbb{E}[x_t|r_t = u] = \lambda_u$.
• Multinomial: $p(x_t|r_t = u) = \mathrm{Mult}(x_t; p_u, S) \Rightarrow O(:, u) = \mathbb{E}[x_t|r_t = u] = p_u$.

The first model is a multivariate, isotropic Gaussian with mean $\mu_u \in \mathbb{R}^L$ and covariance $\sigma^2 I \in \mathbb{R}^{L\times L}$. The second distribution is Poisson with intensity parameter $\lambda_u \in \mathbb{R}^L$. This choice is particularly useful for counts data. The last density is a multinomial distribution with parameter $p_u \in \mathbb{R}^L$ and number of draws S.

2.2 Mixture of HMMs

The Mixture of HMMs (MHMM) is a useful model for clustering sequences where each sequence is modeled by one of K HMMs. It is parameterized by K emission matrices $O_k \in \mathbb{R}^{L\times M}$, K transition matrices¹ $A_k \in \mathbb{R}^{M\times M}$, and K initial state distributions $\nu_k \in \mathbb{R}^M$, as well as a cluster prior probability distribution $\pi \in \mathbb{R}^K$. Given the model parameters $\theta_{1:K} = (O_{1:K}, A_{1:K}, \nu_{1:K}, \pi)$, the likelihood of an observation sequence $x_n = \{x_{1,n}, x_{2,n}, \dots, x_{T_n,n}\}$ is computed as a convex combination of the likelihoods of the K HMMs:

$$p(x_n|\theta_{1:K}) = \sum_{k=1}^{K} p(h_n = k)\, p(x_n|h_n = k, \theta_k) = \sum_{k=1}^{K} \pi_k \sum_{r_{1:T_n,n}} p(x_n, r_n|h_n = k, \theta_k) = \sum_{k=1}^{K} \pi_k \sum_{r_{1:T_n,n}} \prod_{t=1}^{T_n} p(x_{t,n}|r_{t,n}, h_n = k, O_k)\, p(r_{t,n}|r_{t-1,n}, h_n = k, A_k) = \sum_{k=1}^{K} \pi_k \left\{ 1_M^\top \left( \prod_{t=1}^{T_n} A_k\,\mathrm{diag}(O_k(x_{t,n})) \right) \nu_k \right\}, \quad (2)$$

where $h_n \in \{1, 2, \dots, K\}$ is the latent cluster indicator, $r_n = \{r_{1,n}, r_{2,n}, \dots, r_{T_n,n}\}$ is the latent state sequence for the observed sequence $x_n$, and $O_k(x_{t,n})$ is shorthand for $p(x_{t,n}|:, h_n = k, O_k)$. Note that if a sequence is assigned to the kth cluster ($h_n = k$), the corresponding HMM parameters $\theta_k = (A_k, O_k, \nu_k)$ are used to generate it.
¹Without loss of generality, the number of hidden states for each HMM is taken to be M to keep the notation uncluttered.

3 Spectral Learning for MHMMs

Traditionally, the parameters of an MHMM are learned with the Expectation-Maximization (EM) algorithm. One drawback of EM is that it requires a good initialization. Another issue is its computational requirements. In every iteration, one has to perform forward-backward message passing for every sequence, resulting in a computationally expensive process, especially when dealing with large datasets. The proposed MoM approach avoids the issues associated with EM by leveraging the information in various moments computed from the data. Given these moments, which can be computed efficiently, the computation time of the learning algorithm is independent of the amount of data (number of sequences and their lengths). Our approach is mainly based on the observation that an MHMM can be seen as a single HMM with a block-diagonal transition matrix. We will first establish this proposition and discuss its implications. Then, we will describe the proposed learning algorithm.

3.1 MHMM as an HMM with a special structure

Lemma 1: An MHMM with local parameters $\theta_{1:K} = (O_{1:K}, A_{1:K}, \nu_{1:K}, \pi)$ is an HMM with global parameters $\bar\theta = (\bar O, \bar A, \bar\nu)$, where:

$$\bar O = [O_1\ O_2\ \dots\ O_K], \qquad \bar A = \begin{bmatrix} A_1 & 0 & \dots & 0 \\ 0 & A_2 & \dots & 0 \\ & & \ddots & \\ 0 & 0 & \dots & A_K \end{bmatrix}, \qquad \bar\nu = \begin{bmatrix} \pi_1\nu_1 \\ \pi_2\nu_2 \\ \vdots \\ \pi_K\nu_K \end{bmatrix}. \quad (3)$$

Proof: Consider the MHMM likelihood for a sequence $x_n$:

$$p(x_n|\theta_{1:K}) = \sum_{k=1}^{K} \pi_k \left\{ 1_M^\top \left( \prod_{t=1}^{T_n} A_k\,\mathrm{diag}(O_k(x_t)) \right) \nu_k \right\} \quad (4)$$
$$= 1_{MK}^\top \left( \prod_{t=1}^{T_n} \begin{bmatrix} A_1 & 0 & \dots & 0 \\ 0 & A_2 & \dots & 0 \\ & & \ddots & \\ 0 & 0 & \dots & A_K \end{bmatrix} \mathrm{diag}\big([O_1\ O_2\ \dots\ O_K](x_t)\big) \right) \begin{bmatrix} \pi_1\nu_1 \\ \pi_2\nu_2 \\ \vdots \\ \pi_K\nu_K \end{bmatrix} = 1_{MK}^\top \left( \prod_{t=1}^{T_n} \bar A\,\mathrm{diag}(\bar O(x_t)) \right) \bar\nu,$$

where $[O_1\ O_2\ \dots\ O_K](x_t) := \bar O(x_t)$. We conclude that the MHMM and an HMM with parameters $\bar\theta$ describe equivalent probabilistic models. □

We see that the state space of an MHMM consists of K disconnected regimes.
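A quick numerical sketch of Lemma 1 (illustrative only; the random parameters are hypothetical): build the global HMM of equation (3) and check that its likelihood matches the mixture of per-cluster likelihoods. The Gaussian emission factor omits the normalizing constant on both sides, which does not affect the equality.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M, L, T = 2, 3, 2, 5

# Hypothetical per-cluster parameters (columns of each A_k sum to 1).
As = [rng.dirichlet(np.ones(M), size=M).T for _ in range(K)]
Os = [rng.normal(size=(L, M)) for _ in range(K)]
nus = [rng.dirichlet(np.ones(M)) for _ in range(K)]
pi = rng.dirichlet(np.ones(K))

# Global parameters of Lemma 1: block-diagonal A, concatenated O, stacked nu.
A_bar = np.zeros((M * K, M * K))
for k in range(K):
    A_bar[k*M:(k+1)*M, k*M:(k+1)*M] = As[k]
O_bar = np.hstack(Os)
nu_bar = np.concatenate([pi[k] * nus[k] for k in range(K)])

def likelihood(x, O, A, nu, sigma2=1.0):
    # 1^T (prod_t A diag(emit_t)) nu, unnormalized Gaussian emissions.
    alpha = nu.copy()
    for x_t in x:
        d2 = ((x_t[:, None] - O) ** 2).sum(axis=0)
        alpha = A @ (np.exp(-0.5 * d2 / sigma2) * alpha)
    return alpha.sum()

x = rng.normal(size=(T, L))
mix = sum(pi[k] * likelihood(x, Os[k], As[k], nus[k]) for k in range(K))
glob = likelihood(x, O_bar, A_bar, nu_bar)
print(np.isclose(mix, glob))  # True
```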
For each sequence sampled from the MHMM, the first latent state $r_1$ determines which region the entire latent state sequence lies in.

3.2 Learning an MHMM by learning an HMM

In the previous section, we showed the equivalence between the MHMM and an HMM with a block-diagonal transition matrix. Therefore, it should be possible to use an HMM learning algorithm such as spectral learning for HMMs [1, 2] to find the parameters of an MHMM. However, the true global parameters $\bar\theta$ are recovered inexactly, due to noise ε: $\bar\theta \to \bar\theta_\epsilon$, and up to a state indexing ambiguity via a permutation mapping P: $\bar\theta_\epsilon \to \bar\theta^P_\epsilon$. Consequently, the parameters $\bar\theta^P_\epsilon = (\bar O^P_\epsilon, \bar A^P_\epsilon, \bar\nu^P_\epsilon)$ obtained from the learning algorithm are of the following form:

$$\bar O^P_\epsilon = \bar O_\epsilon P^\top, \qquad \bar A^P_\epsilon = P \bar A_\epsilon P^\top, \qquad \bar\nu^P_\epsilon = P \bar\nu_\epsilon, \quad (5)$$

where P is the permutation matrix corresponding to the permutation mapping P. The presence of the permutation is a fundamental nuisance for MHMM learning since it causes parameter mixing between the individual HMMs. The global parameters are permuted such that it becomes impossible to identify individual cluster parameters. A brute-force search for P requires (MK)! trials, which is infeasible for anything but very small MK. Nevertheless, it is possible to efficiently find a depermutation mapping $\tilde P$ using the spectral properties of the global transition matrix $\bar A$. Our ultimate goal in this section is to undo the effect of P by estimating a $\tilde P$ that makes $\bar A^P_\epsilon$ block diagonal despite the presence of the estimation noise ε.

3.2.1 Spectral properties of the global transition matrix

Lemma 2: Assuming that each of the local transition matrices $A_{1:K}$ has only one eigenvalue equal to 1, the global transition matrix $\bar A$ has K eigenvalues equal to 1.

Proof:

$$\bar A = \begin{bmatrix} V_1\Lambda_1 V_1^{-1} & & 0 \\ & \ddots & \\ 0 & & V_K\Lambda_K V_K^{-1} \end{bmatrix} = \underbrace{\begin{bmatrix} V_1 & & 0 \\ & \ddots & \\ 0 & & V_K \end{bmatrix}}_{\bar V} \underbrace{\begin{bmatrix} \Lambda_1 & & 0 \\ & \ddots & \\ 0 & & \Lambda_K \end{bmatrix}}_{\bar\Lambda} \underbrace{\begin{bmatrix} V_1 & & 0 \\ & \ddots & \\ 0 & & V_K \end{bmatrix}^{-1}}_{\bar V^{-1}},$$

where $V_k \Lambda_k V_k^{-1}$ is the eigenvalue decomposition of $A_k$, with $V_k$ the eigenvectors and $\Lambda_k$ a diagonal matrix with the eigenvalues on the diagonal. The eigenvalues of $A_{1:K}$ appear unaltered in the eigenvalue decomposition of $\bar A$, and consequently $\bar A$ has K eigenvalues equal to 1. □

Corollary 1:

$$\lim_{e\to\infty} \bar A^e = \big[\, \bar v_1 1_M^\top \ \dots \ \bar v_k 1_M^\top \ \dots \ \bar v_K 1_M^\top \,\big], \quad (6)$$

where $\bar v_k = [0^\top \dots v_k^\top \dots 0^\top]^\top$ and $v_k$ is the stationary distribution of $A_k$, for all $k \in \{1, \dots, K\}$.

Proof:

$$\lim_{e\to\infty}\big(V_k \Lambda_k V_k^{-1}\big)^e = \lim_{e\to\infty} V_k \Lambda_k^e V_k^{-1} = V_k \begin{bmatrix} 1 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 \\ & & \ddots & \\ 0 & 0 & \dots & 0 \end{bmatrix} V_k^{-1} = v_k 1_M^\top.$$

The third step follows because there is only one eigenvalue with magnitude 1. Since multiplying $\bar A$ by itself amounts to multiplying the corresponding diagonal blocks, we have the structure in (6). □

Note that equation (6) points out that the matrix $\lim_{e\to\infty}\bar A^e$ consists of K blocks of size M × M, where the kth block is $v_k 1_M^\top$. A straightforward algorithm can now be developed for making $\bar A^P$ block diagonal. Since the eigenvalue decomposition is invariant under permutation, $\bar A$ and $\bar A^P$ have the same eigenvalues and eigenvectors. As $e \to \infty$, K clusters of columns appear in $(\bar A^P)^e$. Thus, $\bar A^P$ can be made block diagonal by clustering the columns of $(\bar A^P)^\infty$. This idea is illustrated in the middle row of Figure 1. Note that, in an actual implementation, one would use a low-rank reconstruction, zeroing out the eigenvalues not equal to 1 in $\bar\Lambda$ to form $(\bar A^P)^r := \bar V^P (\bar\Lambda^P)^r (\bar V^P)^{-1} = (\bar A^P)^\infty$, where $(\bar\Lambda^P)^r \in \mathbb{R}^{MK\times MK}$ is a diagonal matrix with only K non-zero entries, corresponding to the eigenvalues equal to 1. This algorithm corresponds to the noiseless case $\bar A^P$. In practice, the output of the learning algorithm is $\bar A^P_\epsilon$, and the clear structure in Equation (6) no longer holds in $(\bar A^P_\epsilon)^e$ as $e \to \infty$, as illustrated in the bottom row of Figure 1. We can see that the three-cluster structure no longer holds for large e.
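In the noiseless case, the column-clustering idea above can be checked numerically. This sketch (illustrative, with hypothetical random parameters) exponentiates a permuted block-diagonal transition matrix and counts the distinct column values, which recover the number of HMMs.

```python
import numpy as np

rng = np.random.default_rng(2)
K, M = 3, 4

# Block-diagonal global transition matrix (columns sum to 1), then permute.
A_bar = np.zeros((M * K, M * K))
for k in range(K):
    A_bar[k*M:(k+1)*M, k*M:(k+1)*M] = rng.dirichlet(np.ones(M), size=M).T
perm = rng.permutation(M * K)
P = np.eye(M * K)[perm]
A_perm = P @ A_bar @ P.T

# Exponentiate: columns belonging to the same HMM converge to the same
# embedded stationary distribution, so they form K clusters of columns.
A_inf = np.linalg.matrix_power(A_perm, 500)

# Group columns by near-equality and count the distinct clusters.
cols = {tuple(np.round(A_inf[:, j], 6)) for j in range(M * K)}
n_clusters = len(cols)
print(n_clusters)  # 3
```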
Instead, the columns of the transition matrix converge to a global stationary distribution.

Figure 1: (Top left) Block-diagonal transition matrix after e-fold exponentiation, for e ∈ {1, 5, 10, 20}. Each block converges to its own stationary distribution. (Top right) Same as above with permutation. (Bottom) Corrupted and permuted transition matrix after exponentiation. The true number K = 3 of HMMs is clear for intermediate values of e, but as e → ∞, the columns of the matrix converge to a global stationary distribution.

3.2.2 Estimating the permutation in the presence of noise

In the general case with noise ε, we lose the spectral property that the global transition matrix has K eigenvalues equal to 1. Consequently, the algorithm described in Section 3.2.1 cannot be applied directly to make $\bar A^P_\epsilon$ block diagonal. In practice, the estimated transition matrix has only one eigenvalue with unit magnitude, and $\lim_{e\to\infty}(\bar A^P_\epsilon)^e$ converges to a global stationary distribution. However, if the noise ε is sufficiently small, a depermutation mapping $\tilde P$ and the number of HMM clusters K can be successfully estimated. We now specify the spectral conditions for this.

Definition 1: We denote by $\lambda^G_k := \alpha_k \lambda_{1,k}$, for $k \in \{1, \dots, K\}$, the global, noisy eigenvalues, with $|\lambda^G_k| \ge |\lambda^G_{k+1}|$ for all $k \in \{1, \dots, K-1\}$, where $\lambda_{1,k}$ is the original eigenvalue of the kth cluster with magnitude 1 and $\alpha_k$ is the noise acting on that eigenvalue (note that $\alpha_1 = 1$). We denote by $\lambda^L_{j,k} := \beta_{j,k} \lambda_{j,k}$, for $j \in \{2, \dots, M\}$ and $k \in \{1, \dots, K\}$, the local, noisy eigenvalues, with $|\lambda^L_{j,k}| \ge |\lambda^L_{j+1,k}|$ for all $k \in \{1, \dots, K\}$ and $j \in \{1, \dots, M-1\}$, where $\lambda_{j,k}$ is the original eigenvalue with the jth largest magnitude in the kth cluster, and $\beta_{j,k}$ is the noise acting on that eigenvalue.
Definition 2: The low-rank eigendecomposition of the estimated transition matrix $\bar A^P_\epsilon$ is defined as $A^r_\epsilon := V \Lambda^r V^{-1}$, where V is a matrix with the eigenvectors in its columns and $\Lambda^r$ is a diagonal matrix with the eigenvalues $\lambda^G_{1:K}$ in its first K entries.

Conjecture 1: If $|\lambda^G_K| > \max_{k\in\{1,\dots,K\}} |\lambda^L_{2,k}|$, then $A^r$ can be formed using the eigendecomposition of $\bar A^P_\epsilon$. Then, with high probability, $\|A^r_\epsilon - A^r\|_F \le O(1/\sqrt{TN})$, where TN is the total number of observed vectors.

Justification:

$$\|A^r_\epsilon - A^r\|_F = \|A^r_\epsilon - A + A - A^r\|_F \le \|A^r_\epsilon - A\|_F + \|A - A^r\|_F = \|A - A^r\|_F + \|A - A_\epsilon + A^{\bar r}_\epsilon\|_F \le \|A - A^r\|_F + \|A^{\bar r}_\epsilon\|_F + \|A - A_\epsilon\|_F \le 2KM + O(1/\sqrt{TN}) = O(1/\sqrt{TN}), \ \text{w.h.p.},$$

where A is used for $\bar A^P$ to reduce notation clutter (and similarly $A^r$ for $(\bar A^P)^r$ and so on); we used the triangle inequality for the first and second inequalities, and $A^{\bar r}_\epsilon = V \Lambda^{\bar r} V^{-1}$, where $\Lambda^{\bar r}$ is a diagonal matrix of eigenvalues with the first K diagonal entries set to zero (the complement of $\Lambda^r$). For the last inequality, we used the fact that $A \in \mathbb{R}^{MK\times MK}$ has entries in the interval [0, 1], and we used the sample complexity result from [1]. The bound specified in [1] is for a mixture model, but since the two models are similar and the estimation procedure is almost identical, we reuse it. We believe that further analysis of the spectral learning algorithm is out of the scope of this paper, so we leave this proposition as a conjecture. □

Conjecture 1 asserts that, given enough data, we should obtain an estimate $A^r_\epsilon$ close to $A^r$ in the squared error sense. Furthermore, if the following mixing-rate condition is satisfied, we will be able to identify the number of clusters K from the data.

Figure 2: (Left) Number of significant eigenvalues across exponentiations e. (Right) Spectral longevity $L_{\tilde\lambda_{K'}}$ with respect to the eigenvalue index K'.
Definition 3: Let $\tilde\lambda_k$ denote the kth largest eigenvalue (in decreasing order of magnitude) of the estimated transition matrix $\bar A^P_\epsilon$. We define the quantity

$$L_{\tilde\lambda_{K'}} := \sum_{e=1}^{\infty} \left( \left[ \frac{\sum_{l=1}^{K'} |\tilde\lambda_l|^e}{\sum_{l'=1}^{MK} |\tilde\lambda_{l'}|^e} > 1 - \gamma \right] - \left[ \frac{\sum_{l=1}^{K'-1} |\tilde\lambda_l|^e}{\sum_{l'=1}^{MK} |\tilde\lambda_{l'}|^e} > 1 - \gamma \right] \right), \quad (7)$$

as the spectral longevity of $\tilde\lambda_{K'}$. The square brackets [·] denote an indicator function, which outputs 1 if the argument is true and 0 otherwise, and γ is a small number such as machine epsilon.

Lemma 3: If $|\lambda^G_K| > \max_{k\in\{1,\dots,K\}} |\lambda^L_{2,k}|$ and $\arg\max_{K'} \frac{|\tilde\lambda_{K'}|^2}{|\tilde\lambda_{K'+1}|\,|\tilde\lambda_{K'-1}|} = K$ for $K' \in \{2, 3, \dots, MK-1\}$, then $\arg\max_{K'} L_{\tilde\lambda_{K'}} = K$.

Proof: The first condition ensures that the top K eigenvalues are the global eigenvalues. The second condition concerns the convergence rates of the two ratios in equation (7). The first indicator function contains the following summation:

$$\frac{\sum_{l=1}^{K'} |\tilde\lambda_l|^e}{\sum_{l'=1}^{MK} |\tilde\lambda_{l'}|^e} = \frac{\sum_{l=1}^{K'-1} |\tilde\lambda_l|^e + |\tilde\lambda_{K'}|^e}{\sum_{l'=1}^{K'-1} |\tilde\lambda_{l'}|^e + |\tilde\lambda_{K'}|^e + |\tilde\lambda_{K'+1}|^e + \sum_{l'=K'+2}^{MK} |\tilde\lambda_{l'}|^e}.$$

The rate at which this term goes to 1 is determined by the spectral gap $|\tilde\lambda_{K'}|/|\tilde\lambda_{K'+1}|$. The larger this ratio is, the faster the term (which is non-decreasing with respect to e) converges to 1. For the second indicator function inside $L_{\tilde\lambda_{K'}}$, the same analysis shows that its convergence rate is determined by the gap $|\tilde\lambda_{K'-1}|/|\tilde\lambda_{K'}|$. The ratio of the two spectral gaps determines the spectral longevity. Hence, for the K' with the largest ratio $\frac{|\tilde\lambda_{K'}|^2}{|\tilde\lambda_{K'+1}|\,|\tilde\lambda_{K'-1}|}$, we have $\arg\max_{K'} L_{\tilde\lambda_{K'}} = K$. □

Lemma 3 tells us the following: if the estimated transition matrix $\bar A^P_\epsilon$ is not too noisy, we can determine the number of clusters by choosing the value of K' that maximizes $L_{\tilde\lambda_{K'}}$. This corresponds to exponentiating the sorted eigenvalues over a finite range of e and recording the number of non-negligible eigenvalues. This is depicted in Figure 2.
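The longevity computation of equation (7) can be sketched directly, truncating the sum over e to a finite range. The eigenvalue spectrum below is hypothetical, shaped like a noisy K = 3, M = 4 global transition matrix: three near-unit "global" eigenvalues well separated from the rest.

```python
import numpy as np

def spectral_longevity(eigvals, gamma=1e-12, e_max=500):
    """Longevity L(K') from eq. (7), with the sum over e truncated to e_max."""
    lam = np.sort(np.abs(eigvals))[::-1]
    MK = len(lam)
    longevity = np.zeros(MK + 1)
    for e in range(1, e_max + 1):
        powers = lam ** e
        frac = np.cumsum(powers) / powers.sum()   # frac[i] = sum of top i+1 / total
        for Kp in range(1, MK):
            ind_K = frac[Kp - 1] > 1 - gamma                          # first indicator
            ind_Km1 = (frac[Kp - 2] if Kp >= 2 else 0.0) > 1 - gamma  # second indicator
            longevity[Kp] += int(ind_K) - int(ind_Km1)
    return longevity

eigvals = np.array([1.0, 0.97, 0.94,
                    0.5, 0.45, 0.4, 0.35, 0.3, 0.25, 0.2, 0.15, 0.1])
K_hat = int(np.argmax(spectral_longevity(eigvals)))
print(K_hat)  # 3
```

The third eigenvalue wins because the gap below it (0.94 vs 0.5) is large, so its indicator turns on early, while the gap above it (0.97 vs 0.94) is small, so the competing indicator turns on late.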
3.3 Proposed Algorithm

In the previous sections, we have shown that the permutation caused by the MoM estimation procedure can be undone, and we have proposed a way to estimate the number of clusters K. We summarize the whole procedure in Algorithm 1.

Algorithm 1 Spectral Learning for Mixture of Hidden Markov Models
Inputs: x_{1:N}: sequences; MK: total number of states of the global HMM.
Output: θ̂ = (Ô_{1:K̂}, Â_{1:K̂}): MHMM parameters.
Method of Moments Parameter Estimation:
  (Ō^P_ϵ, Ā^P_ϵ) = HMM_MethodOfMoments(x_{1:N}, MK)
Depermutation:
  Find the eigenvalues of Ā^P_ϵ.
  Exponentiate the eigenvalues for each discrete value e in a sufficiently large range.
  Identify K̂ as the eigenvalue index with the largest longevity.
  Compute the rank-K̂ reconstruction A^r_ϵ via eigendecomposition.
  Cluster the columns of A^r_ϵ into K̂ clusters to find a depermutation mapping P̃ via the cluster labels.
  Depermute Ō^P_ϵ and Ā^P_ϵ according to P̃.
  Form θ̂ by choosing the corresponding blocks from the depermuted Ō^P_ϵ and Ā^P_ϵ.
  Return θ̂.

4 Experiments

4.1 Effect of noise on the depermutation algorithm

We tested the algorithm's performance with respect to the amount of data. We used the parameters K = 3, M = 4, L = 20, with 2 sequences of length T per cluster. We used a Gaussian observation model with unit observation variance, and the columns of the emission matrices O_{1:K} were drawn from a zero-mean spherical Gaussian with variance 2. Results for 10 uniformly spaced sequence lengths from 10 to 1000 are shown in Figure 3. On the top row, we plot the total error (from centroid to point) obtained after fitting k-means with the true number of HMM clusters.

Figure 3: Top row: Euclidean distance vs. T. Second row: noisy input matrix. Third row: noisy reconstruction A^r_ϵ. Bottom row: depermuted matrix; the numbers at the bottom indicate the estimated number of clusters.
We can see that the correct number of clusters K = 3, as well as the block-diagonal structure of the transition matrix, is correctly recovered even in the case where T = 20.

4.2 Amount of data vs. accuracy and speed

We compared the clustering accuracies of EM and our approach on data sampled from a Gaussian-emission MHMM. The mean of each state of each cluster is drawn from a zero-mean, unit-variance Gaussian, and the observation covariance is spherical with variance 2. We set L = 20, K = 5, M = 3, and used uniform mixing proportions and a uniform initial state distribution. We evaluated the clustering accuracies for 10 uniformly spaced sequence lengths (every sequence has the same length) between 20 and 200, and 10 uniformly spaced numbers of sequences between 1 and 100 for each cluster. The results are shown in Figure 4. Although EM seems to provide higher accuracy in regions where we have less data, the spectral algorithm is much faster. Note that for the spectral algorithm we include the time spent on moment computation. We used four restarts for EM, took the result with the highest likelihood, and used an automatic stopping criterion.

Figure 4: Clustering accuracy and run time results for the synthetic data experiments (accuracy (%) and run time (s) of the spectral and EM algorithms as a function of T and N/K).

4.3 Real data experiment

We ran an experiment on the handwritten character trajectory dataset from the UCI machine learning repository [8]. We formed pairs of characters and compared the clustering results for three algorithms: the proposed spectral learning approach, EM initialized at random, and EM initialized with the MoM algorithm, as explored in [9]. In the third row of Table 1, we take the maximum accuracy of EM over 5 random initializations. We set the algorithm parameters to K = 2 and M = 4. There are 140 sequences of average length 100 per class. In the original data L = 3, but to apply MoM learning we require MK < L. To achieve this, we transformed the data vectors with a cubic polynomial feature transformation such that L = 10 (this is the same transformation that corresponds to a polynomial kernel). The results from these trials are shown in Table 1.

Table 1: Clustering accuracies for the handwritten digit dataset.

Algorithm             | 1v2 | 1v3 | 1v4 | 2v3 | 2v4 | 2v5
Spectral              | 100 |  70 |  54 |  83 |  99 |  99
EM init. w/ Spectral  | 100 |  99 | 100 |  96 | 100 | 100
EM init. at Random    |  96 |  99 |  98 |  83 | 100 | 100
We can see that although spectral learning doesn't always surpass randomly initialized EM on its own, it does serve as a very good initialization scheme.

5 Conclusions and future work

We have developed a method-of-moments based algorithm for learning mixtures of HMMs. Our experimental results show that our approach is computationally much cheaper than EM, while being comparable in accuracy. Our real data experiment also shows that our approach can be used as a good initialization scheme for EM. As future work, it would be interesting to apply the proposed approach to other hierarchical latent variable models.

Acknowledgements: We would like to thank Taylan Cemgil, David Forsyth and John Hershey for valuable discussions. This material is based upon work supported by the National Science Foundation under Grant No. 1319708.

References

[1] A. Anandkumar, D. Hsu, and S.M. Kakade. A method of moments for mixture models and hidden Markov models. In COLT, 2012.
[2] A. Anandkumar, R. Ge, D. Hsu, S.M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. arXiv:1210.7559v2, 2012.
[3] Daniel Hsu, Sham M. Kakade, and Tong Zhang. A spectral algorithm for learning hidden Markov models. Journal of Computer and System Sciences, pages 1460–1480, 2009.
[4] P. Smyth. Clustering sequences with hidden Markov models. In Advances in Neural Information Processing Systems, 1997.
[5] Yuting Qi, J.W. Paisley, and L. Carin. Music analysis using hidden Markov mixture models. IEEE Transactions on Signal Processing, 55(11):5209–5224, Nov. 2007.
[6] A. Jonathan, S. Sclaroff, G. Kollios, and V. Pavlovic. Discovering clusters in motion time-series data. In CVPR, 2003.
[7] Tim Oates, Laura Firoiu, and Paul R. Cohen. Clustering time series with hidden Markov models and dynamic time warping.
In Proceedings of the IJCAI-99 Workshop on Neural, Symbolic and Reinforcement Learning Methods for Sequence Learning, pages 17–21, 1999.
[8] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[9] Arun Chaganty and Percy Liang. Spectral experts for estimating mixtures of linear regressions. In International Conference on Machine Learning (ICML), 2013.
Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation

Emily Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun and Rob Fergus
Dept. of Computer Science, Courant Institute, New York University
{denton, zaremba, bruna, lecun, fergus}@cs.nyu.edu

Abstract

We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy, but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the redundancy present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2×, while keeping the accuracy within 1% of the original model.

1 Introduction

Large neural networks have recently demonstrated impressive performance on a range of speech and vision tasks. However, the size of these models can make their deployment at test time problematic. For example, mobile computing platforms are limited in their CPU speed, memory and battery life. At the other end of the spectrum, Internet-scale deployment of these models requires thousands of servers to process hundreds of millions of images per day, and the electrical and cooling costs of these servers are significant. Training large neural networks can take weeks, or even months. This hinders research, and consequently there have been extensive efforts devoted to speeding up the training procedure. However, there are relatively few efforts aimed at improving the test-time performance of the models. We consider convolutional neural networks (CNNs) used for computer vision tasks, since they are large and widely used in commercial applications.
These networks typically require a huge number of parameters (∼10^8 in [1]) to produce state-of-the-art results. While these networks tend to be hugely over-parameterized [2], this redundancy seems necessary in order to overcome a highly non-convex optimization [3]. As a byproduct, the resulting network wastes computing resources. In this paper we show that this redundancy can be exploited with linear compression techniques, resulting in significant speedups for the evaluation of trained large-scale networks, with minimal compromise to performance.

We follow a relatively simple strategy: we start by compressing each convolutional layer by finding an appropriate low-rank approximation, and then we fine-tune the upper layers until the prediction performance is restored. We consider several elementary tensor decompositions based on singular value decompositions, as well as filter clustering methods that take advantage of similarities between learned features. Our main contributions are the following: (1) We present a collection of generic methods to exploit the redundancy inherent in deep CNNs. (2) We report experiments on state-of-the-art ImageNet CNNs, showing empirical speedups of convolutional layers by a factor of 2–3× and a reduction of parameters in fully connected layers by a factor of 5–10×.

Notation: Convolution weights can be described as a 4-dimensional tensor W ∈ R^{C×X×Y×F}: C is the number of input channels, X and Y are the spatial dimensions of the kernel, and F is the target number of feature maps. It is common for the first convolutional layer to have a stride associated with the kernel, which we denote by ∆. Let I ∈ R^{C×N×M} denote an input signal, where C is the number of input maps, and N and M are the spatial dimensions of the maps.
The target value T = I ∗ W of a generic convolutional layer, with ∆ = 1, for a particular output feature f and spatial location (x, y), is

T(f, x, y) = Σ_{c=1}^{C} Σ_{x′=1}^{X} Σ_{y′=1}^{Y} I(c, x − x′, y − y′) W(c, x′, y′, f).

If W is a tensor, ∥W∥ denotes its operator norm sup_{∥x∥=1} ∥Wx∥_F, and ∥W∥_F denotes its Frobenius norm.

2 Related Work

Vanhoucke et al. [4] explored the properties of CPUs to speed up execution. They present many solutions specific to Intel and AMD CPUs, and some of their techniques are general enough to be used for any type of processor. They describe how to align memory and use SIMD operations (vectorized operations on CPU) to boost the efficiency of matrix multiplication. Additionally, they propose linear quantization of the network weights and input. This involves representing weights as 8-bit integers (range [−127, 128]) rather than 32-bit floats. This approximation is similar in spirit to our approach, but differs in that it is applied to each weight element independently. By contrast, our approximation approach models the structure within each filter. Potentially, the two approaches could be used in conjunction.

The most expensive operations in CNNs are the convolutions in the first few layers. The complexity of this operation is linear in the area of the receptive field of the filters, which is relatively large for these layers. However, Mathieu et al. [5] have shown that convolution can be efficiently computed in the Fourier domain, where it becomes element-wise multiplication (and there is no cost associated with the size of the receptive field). They report a forward-pass speedup of around 2× for convolution layers in state-of-the-art models. Importantly, the FFT method can be used jointly with most of the techniques presented in this paper.

The use of low-rank approximations in our approach is inspired by the work of Denil et al. [2], who demonstrate the redundancies in neural network parameters.
They show that the weights within a layer can be accurately predicted from a small (e.g. ∼5%) subset of them. This indicates that neural networks are heavily over-parametrized. All the methods presented here focus on exploiting the linear structure of this over-parametrization. Finally, a recent preprint [6] also exploits low-rank decompositions of convolutional tensors to speed up the evaluation of CNNs, applied to scene text character recognition. This work was developed simultaneously with ours, and provides further evidence that such techniques can be applied to a variety of architectures and tasks. Our work differs in several ways. First, we consider a significantly larger model. This makes it more challenging to compute efficient approximations since there are more layers to propagate through and thus a greater opportunity for error to accumulate. Second, we present different compression techniques for the hidden convolutional layers and provide a method of compressing the first convolutional layer. Finally, we present GPU results in addition to CPU results. 3 Convolutional Tensor Compression In this section we describe techniques for compressing 4 dimensional convolutional weight tensors and fully connected weight matrices into a representation that permits efficient computation and storage. Section 3.1 describes how to construct a good approximation criteria. Section 3.2 describes 2 techniques for low-rank tensor approximations. Sections 3.3 and 3.4 describe how to apply these techniques to approximate weights of a convolutional neural network. 3.1 Approximation Metric Our goal is to find an approximation, ˜W, of a convolutional tensor W that facilitates more efficient computation while maintaining the prediction performance of the network. A natural choice for an approximation criterion is to minimize ∥˜W −W∥F . 
This criterion yields efficient compression schemes using elementary linear algebra, and also controls the operator norm of each linear convolutional layer. However, this criterion assumes that all directions in the space of weights equally affect prediction performance. We now present two methods of improving this criterion while keeping the same efficient approximation algorithms.

Mahalanobis distance metric: The first distance metric we propose seeks to emphasize coordinates more prone to produce prediction errors over coordinates whose effect is less harmful for the overall system. We can obtain such measurements as follows. Let Θ = {W_1, ..., W_S} denote the set of all parameters of the S-layer network, and let U(I; Θ) denote the output after the softmax layer for input image I. We consider a given input training set (I_1, ..., I_N) with known labels (y_1, ..., y_N). For each pair (I_n, y_n), we compute the forward propagation pass U(I_n, Θ), and define as {β_n} the indices of the h largest values of U(I_n, Θ) different from y_n. Then, for a given layer s, we compute

d_{n,l,s} = ∇_{W_s}(U(I_n, Θ) − δ(i − l)),  n ≤ N, l ∈ {β_n}, s ≤ S,  (1)

where δ(i − l) is the Dirac distribution centered at l. In other words, for each input we back-propagate the difference between the current prediction and the h "most dangerous" mistakes. The Mahalanobis distance is defined from the covariance of d:

∥W∥²_maha = w Σ^{-1} w^T,

where w is the vector containing all the coordinates of W, and Σ is the covariance of (d_{n,l,s})_{n,l}. We do not report results using this metric, since it requires inverting a matrix of size equal to the number of parameters, which can be prohibitively expensive in large networks. Instead we use an approximation that considers only the diagonal of the covariance matrix. In particular, we propose the following approximate Mahalanobis distance metric:

∥W∥_m̂aha := Σ_p α_p W(p),  where α_p = (Σ_{n,l} d_{n,l,s}(p)²)^{1/2},  (2)

where the sum runs over the tensor coordinates.
Since (2) is a reweighted Euclidean metric, we can simply compute W′ = α .∗ W, where .∗ denotes element-wise multiplication, then compute the approximation W̃′ of W′ using the standard L2 norm, and finally output W̃ = α^{-1} .∗ W̃′.

Data covariance distance metric: One can view the Frobenius norm of W as ∥W∥²_F = E_{x∼N(0,I)} ∥Wx∥²_F. Another alternative, similar to the one considered in [6], is to replace the isotropic covariance assumption by the empirical covariance of the input of the layer. If W ∈ R^{C×X×Y×F} is a convolutional layer, and Σ̂ ∈ R^{CXY×CXY} is the empirical estimate of the input data covariance, the metric can be efficiently computed as

∥W∥_data = ∥Σ̂^{1/2} W_F∥_F,  (3)

where W_F is the matrix obtained by folding the first three dimensions of W. As opposed to [6], this approach adapts to the input distribution without the need to iterate through the data.

3.2 Low-rank Tensor Approximations

3.2.1 Matrix Decomposition

Matrices are 2-tensors which can be linearly compressed using the singular value decomposition. If W ∈ R^{m×k} is a real matrix, the SVD is defined as W = U S V^⊤, where U ∈ R^{m×m}, S ∈ R^{m×k}, V ∈ R^{k×k}. S is a diagonal matrix with the singular values on the diagonal, and U, V are orthogonal matrices. If the singular values of W decay rapidly, W can be well approximated by keeping only the t largest entries of S, resulting in the approximation W̃ = Ũ S̃ Ṽ^⊤, where Ũ ∈ R^{m×t}, S̃ ∈ R^{t×t}, Ṽ ∈ R^{k×t}. Then, for I ∈ R^{n×m}, the approximation error ∥I W̃ − I W∥_F satisfies

∥I W̃ − I W∥_F ≤ s_{t+1} ∥I∥_F,

and thus is controlled by the decay along the diagonal of S. Now the computation I W̃ can be done in O(nmt + nt² + ntk), which, for sufficiently small t, is significantly smaller than O(nmk).

3.2.2 Higher Order Tensor Approximations

SVD can be used to approximate a tensor W ∈ R^{m×n×k} by first folding all but two dimensions together to convert it into a 2-tensor, and then considering the SVD of the resulting matrix. For example, we can approximate W_m ∈ R^{m×(nk)} as W̃_m ≈ Ũ S̃ Ṽ^⊤.
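A short numpy sketch of the truncated SVD approximation of Section 3.2.1 (the matrix here is synthetic, built to have rapidly decaying singular values):

```python
import numpy as np

def truncated_svd(W, t):
    """Rank-t approximation W_tilde = U_t S_t V_t^T of W, plus the full
    singular value spectrum for inspecting the error."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :t] @ np.diag(s[:t]) @ Vt[:t, :], s

rng = np.random.default_rng(0)
m, k, t = 50, 40, 5
# Exactly rank-t matrix plus a little noise: singular values decay sharply.
W = rng.standard_normal((m, t)) @ rng.standard_normal((t, k))
W += 1e-6 * rng.standard_normal((m, k))
W_tilde, s = truncated_svd(W, t)
# By Eckart-Young, the Frobenius error equals sqrt(sum of the tail s[t:]^2).
err = np.linalg.norm(W - W_tilde)
```

Multiplying an input I by W_tilde in factored form then costs O(nmt + nt² + ntk) instead of O(nmk), as noted above.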
W can be compressed even further by applying SVD to Ṽ. We refer to this approximation as the SVD decomposition, and use K1 and K2 to denote the ranks used in the first and second applications of SVD, respectively.

Alternatively, we can approximate a 3-tensor W_S ∈ R^{m×n×k} by a rank-1 3-tensor by finding a decomposition that minimizes

∥W − α ⊗ β ⊗ γ∥_F,  (4)

where α ∈ R^m, β ∈ R^n, γ ∈ R^k, and ⊗ denotes the outer product operation. Problem (4) is solved efficiently by performing alternating least squares on α, β and γ respectively, although more efficient algorithms can also be considered [7]. This easily extends to a rank-K approximation using a greedy algorithm: given a tensor W, we compute (α, β, γ) using (4), and update W^{(k+1)} ← W^{(k)} − α ⊗ β ⊗ γ. Repeating this operation K times results in

W̃_S = Σ_{k=1}^{K} α_k ⊗ β_k ⊗ γ_k.  (5)

We refer to this approximation as the outer product decomposition and use K to denote the rank of the approximation.

Figure 1: A visualization of the monochromatic and biclustering approximation structures. (a) The monochromatic approximation, used for the first layer: input color channels are projected onto a set of intermediate color channels; after this transformation, each output feature need only look at one intermediate color channel. (b) The biclustering approximation, used for higher convolutional layers: input and output features are clustered into equal-sized groups, and the weight tensor corresponding to each pair of input and output clusters is then approximated. (c) The weight tensors for each input-output pair in (b) are approximated by a sum of rank-1 tensors using the techniques described in Section 3.2.2.

3.3 Monochromatic Convolution Approximation

Let W ∈ R^{C×X×Y×F} denote the weights of the first convolutional layer of a trained network.
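The alternating least squares step for (4) and the greedy deflation of (5) can be sketched as follows (a toy orthogonal tensor is used so the greedy procedure recovers it exactly; real convolutional tensors will not behave this cleanly):

```python
import numpy as np

def rank1_als(W, iters=50):
    """Alternating least squares for min ||W - a (x) b (x) c||_F, problem (4):
    each update is the exact least-squares solution with the other two fixed."""
    a = np.ones(W.shape[0]); b = np.ones(W.shape[1]); c = np.ones(W.shape[2])
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', W, b, c) / ((b @ b) * (c @ c))
        b = np.einsum('ijk,i,k->j', W, a, c) / ((a @ a) * (c @ c))
        c = np.einsum('ijk,i,j->k', W, a, b) / ((a @ a) * (b @ b))
    return a, b, c

def greedy_outer_product(W, K):
    """Greedy rank-K decomposition (5): fit a rank-1 term, subtract, repeat."""
    residual = W.copy(); terms = []
    for _ in range(K):
        a, b, c = rank1_als(residual)
        terms.append((a, b, c))
        residual = residual - np.einsum('i,j,k->ijk', a, b, c)
    return terms, residual

# Toy tensor: 3*e1(x)e1(x)e1 + e2(x)e2(x)e2 with orthogonal factors.
e = np.eye(3)
W = 3 * np.einsum('i,j,k->ijk', e[0], e[0], e[0]) \
    + np.einsum('i,j,k->ijk', e[1], e[1], e[1])
terms, residual = greedy_outer_product(W, K=2)
```

On this orthogonally decomposable example the residual vanishes after two deflation steps; in general, greedy deflation only gives an approximation.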
We found that the color components of trained CNNs tend to have low dimensional structure. In particular, the weights can be well approximated by projecting the color dimension down to a 1D subspace. The low-dimensional structure of the weights is illustrated in Figure 2. The monochromatic approximation exploits this structure and is computed as follows. First, for every output feature f, we consider the matrix W_f ∈ R^{C×(XY)}, where the spatial dimensions of the filter corresponding to the output feature have been combined, and find the SVD

W_f = U_f S_f V_f^⊤,

where U_f ∈ R^{C×C}, S_f ∈ R^{C×XY}, and V_f ∈ R^{XY×XY}. We then take the rank-1 approximation of W_f, W̃_f = Ũ_f S̃_f Ṽ_f^⊤, where Ũ_f ∈ R^{C×1}, S̃_f ∈ R, and Ṽ_f ∈ R^{XY×1}. We can further exploit the regularity in the weights by sharing the color component basis between different output features. We do this by clustering the F left singular vectors Ũ_f of the output features into C′ clusters, for C′ < F. We constrain the clusters to be of equal size, as discussed in Section 3.4. Then each output feature f assigned to cluster c_f is approximated by W̃_f = U_{c_f} S̃_f Ṽ_f^⊤, where U_{c_f} ∈ R^{C×1} is the cluster center for cluster c_f, and S̃_f and Ṽ_f are as before. This monochromatic approximation is illustrated in the left panel of Figure 1(c). Table 1 shows the number of operations required for the standard and monochromatic versions.

Table 1: Number of operations required for various approximation methods.

Approximation technique                     | Number of operations
No approximation                            | XYCFNM ∆^{-2}
Monochromatic                               | C′CNM + XYFNM ∆^{-2}
Biclustering + outer product decomposition  | GHK((C/G)NM + XYNM ∆^{-2} + (F/H)NM ∆^{-2})
Biclustering + SVD                          | GHNM((C/G)K1 + K1 XY K2 ∆^{-2} + K2 (F/H))

3.4 Biclustering Approximations

We exploit the redundancy within the 4-D weight tensors in the higher convolutional layers by clustering the filters, such that each cluster can be accurately approximated by a low-rank factorization.
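Before moving on, the monochromatic scheme of Section 3.3 can be sketched in numpy as follows (synthetic weights built from exactly C′ = 2 color directions; plain Lloyd iterations initialized from the first C′ components stand in for the balanced clustering the paper uses):

```python
import numpy as np

def monochromatic_approx(W, C_prime):
    """Rank-1 color approximation per output feature, then share the
    color components across features by clustering them."""
    C, X, Y, F = W.shape
    colors = np.zeros((F, C))       # per-feature color direction u_f
    spatial = np.zeros((F, X * Y))  # per-feature s_f * v_f
    for f in range(F):
        u, s, vt = np.linalg.svd(W[:, :, :, f].reshape(C, X * Y),
                                 full_matrices=False)
        sign = 1.0 if u[0, 0] >= 0 else -1.0   # fix SVD sign ambiguity
        colors[f] = sign * u[:, 0]
        spatial[f] = sign * s[0] * vt[0]
    centers = colors[:C_prime].copy()          # naive init (not balanced)
    for _ in range(20):                        # a few Lloyd iterations
        d = ((colors[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(C_prime):
            if np.any(labels == c):
                centers[c] = colors[labels == c].mean(0)
    W_approx = np.zeros_like(W)
    for f in range(F):   # W_f ~ (shared cluster color) outer (s_f v_f)
        W_approx[:, :, :, f] = np.outer(centers[labels[f]],
                                        spatial[f]).reshape(C, X, Y)
    return W_approx

# Synthetic first-layer weights whose color structure is exactly 2-dimensional:
u_a = np.array([0.6, 0.8, 0.0]); u_b = np.array([0.8, 0.0, 0.6])
rng = np.random.default_rng(1)
W = np.zeros((3, 2, 2, 6))
for f in range(6):
    color = u_a if f % 2 == 0 else u_b
    W[:, :, :, f] = np.outer(color, rng.standard_normal(4)).reshape(3, 2, 2)
W_approx = monochromatic_approx(W, C_prime=2)
rel_err = np.linalg.norm(W_approx - W) / np.linalg.norm(W)
```

Because the synthetic weights lie exactly in two color subspaces, the reconstruction error here is negligible; for real filters the error grows as C′ shrinks.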
We start by clustering the rows of W_C ∈ R^{C×(XYF)}, which results in clusters C_1, ..., C_a. Then we cluster the columns of W_F ∈ R^{(CXY)×F}, producing clusters F_1, ..., F_b. These two operations break the original weight tensor W into ab sub-tensors {W_{C_i,F_j}}, i = 1, ..., a, j = 1, ..., b, as shown in Figure 1(b). Each sub-tensor contains similar elements, and thus is easier to fit with a low-rank approximation.

In order to exploit the parallelism inherent in CPU and GPU architectures, it is useful to constrain the clusters to be of equal size. We therefore perform the biclustering operations (or the clustering for monochromatic filters in Section 3.3) using a modified version of the k-means algorithm which balances the cluster counts at each iteration. It is implemented with Lloyd's algorithm, modifying the Euclidean distance with a subspace projection distance.

After the input and output clusters have been obtained, we find a low-rank approximation of each sub-tensor using either the SVD decomposition or the outer product decomposition, as described in Section 3.2.2. We concatenate the X and Y spatial dimensions of the sub-tensors so that the decomposition is applied to the 3-tensor W_S ∈ R^{C×(XY)×F}. While we could look for a separable approximation along the spatial dimensions as well, we found the resulting gain to be minimal. Using these approximations, the target output can be computed with significantly fewer operations. The number of operations required is a function of the number of input clusters G, the number of output clusters H, and the rank of the sub-tensor approximations (K1, K2 for the SVD decomposition; K for the outer product decomposition). The number of operations required for each approximation is given in Table 1.

3.5 Fine-tuning

Many of the approximation techniques presented here can efficiently compress the weights of a CNN with negligible degradation of classification performance, provided the approximation is not too harsh.
Alternatively, one can use a harsher approximation that gives greater speedup gains but hurts the performance of the network. In this case, the approximated layer and all those below it can be fixed, and the upper layers can be fine-tuned until the original performance is restored.

4 Experiments

We use the 15 layer convolutional architecture of [8], trained on the ImageNet 2012 dataset [9]. The network contains 5 convolutional layers, 3 fully connected layers and a softmax output layer. We evaluated the network on both CPU and GPU platforms. All measurements of prediction performance are with respect to the 50K validation images from the ImageNet12 dataset.

Figure 2: Visualization of the 1st layer filters. (Left) Each component of the 96 7x7 filters is plotted in RGB space. Points are colored based on the output filter they belong to; hence, there are 96 colors and 72 points of each color. The leftmost plot shows the original filters and the right plot shows the filters after the monochromatic approximation, where each filter has been projected down to a line in colorspace. (Right) Original and approximate versions of a selection of 1st layer filters.

We present results showing the performance of the approximations described in Section 3 in terms of prediction accuracy, speedup gains and reduction in memory overhead. All of our fine-tuning results were achieved by training with less than 2 passes over the ImageNet12 training dataset. Unless stated otherwise, classification numbers refer to those of fine-tuned models.

4.1 Speedup

The majority of forward propagation time is spent on the first two convolutional layers (see Supplementary Material for a breakdown of time across all layers). Because of this, we restrict our attention to the first and second convolutional layers in our speedup experiments. However, our approximations could easily be applied to convolutions in the upper layers as well.
We implemented several CPU and GPU approximation routines in an effort to achieve empirical speedups. Both the baseline and approximation CPU code are implemented in C++ using the Eigen3 library [10] compiled with Intel MKL; we also use Intel's OpenMP implementation and multithreading. The baseline gives comparable performance to highly optimized MATLAB convolution routines, and all of our CPU speedup results are computed relative to it. We used Alex Krizhevsky's CUDA convolution routines¹ as a baseline for GPU comparisons. The approximation versions are written in CUDA. All GPU code was run on a standard nVidia Titan card.

We have found that in practice it is often difficult to achieve speedups close to the theoretical gains based on the number of arithmetic operations (see Supplementary Material for a discussion of theoretical gains). Moreover, different computer architectures and CNN architectures afford different optimization strategies, making most implementations highly specific. However, regardless of implementation details, all of the approximations we present reduce both the number of operations and the number of weights required to compute the output by at least a factor of two, often more.

4.1.1 First Layer

The first convolutional layer has 3 input channels, 96 output channels and 7x7 filters. We approximated the weights in this layer using the monochromatic approximation described in Section 3.3. The monochromatic approximation works well if the color components span a small number of one-dimensional subspaces. Figure 2 illustrates the effect of the monochromatic approximation on the first layer filters. The only parameter in the approximation is C′, the number of color channels used for the intermediate representation. As expected, the network performance begins to degrade as C′ decreases.
The number of floating point operations required to compute the output of the monochromatic convolution is reduced by a factor of 2–3×, with the larger gains resulting from smaller C′. Figure 3 shows the empirical speedups we achieved on CPU and GPU and the corresponding network performance for various numbers of colors used in the monochromatic approximation. Our CPU and GPU implementations achieve empirical speedups of 2–2.5× relative to the baseline with less than 1% drop in classification performance.

¹https://code.google.com/p/cuda-convnet/

Figure 3: Empirical speedups on (Left) CPU and (Right) GPU for the first layer. C′ is the number of colors used in the approximation.

4.1.2 Second Layer

The second convolutional layer has 96 input channels, 256 output channels and 5x5 filters. We approximated the weights using the techniques described in Section 3.4. We explored various configurations of the approximations by varying the number of input clusters G, the number of output clusters H, and the rank of the approximation (denoted by K1 and K2 for the SVD decomposition, and K for the outer product decomposition). Figure 4 shows our empirical speedups on CPU and GPU and the corresponding network performance for various approximation configurations. For the CPU implementation we used the biclustering with SVD approximation; for the GPU implementation we used the biclustering with outer product decomposition approximation. We achieved promising results and present speedups of 2–2.5× relative to the baseline with less than a 1% drop in performance.

Figure 4: Empirical speedups for the second convolutional layer. (Left) Speedups on CPU using biclustering (G = 2 and H = 2) with the SVD approximation. (Right) Speedups on GPU using biclustering (G = 48 and H = 2) with the outer product decomposition approximation.

4.2 Combining approximations

The approximations can also be cascaded to provide greater speedups. The procedure is as follows. Compress the first convolutional layer weights and then fine-tune all the layers above until performance is restored. Next, compress the second convolutional layer weights that result from the fine-tuning. Fine-tune all the layers above until performance is restored, and then continue the process. We applied this procedure to the first two convolutional layers.
Using the monochromatic approximation with 6 colors for the first layer and the biclustering with outer product decomposition approximation for the second layer (G = 48; H = 2; K = 8) and fine-tuning with a single pass through the training set, we are able to keep accuracy within 1% of the original model. This procedure could be applied to each convolutional layer, in this sequential manner, to achieve overall speedups much greater than any individual layer can provide. A more comprehensive summary of these results can be found in the Supplementary Material.

Table 2: Number of parameters expressed as a function of hyperparameters for various approximation methods and empirical reduction in parameters with corresponding network performance.

Approximation method | Number of parameters in weights | Approximation hyperparameters | Reduction | Increase in error
Standard convolution | CXYF | | |
Conv layer 1: Monochromatic | CC′ + XYF | C′ = 6 | 3× | 0.43%
Conv layer 2: Biclustering + outer product decomposition | GHK(C/G + XY + F/H) | G = 48; H = 2; K = 6 | 5.3× | 0.68%
Conv layer 2: Biclustering + SVD | GH((C/G)K1 + K1 XY K2 + K2 (F/H)) | G = 2; H = 2; K1 = 19; K2 = 24 | 3.9× | 0.9%
Standard FC | NM | | |
FC layer 1: Matrix SVD | NK + KM | K = 250 | 13.4× | 0.8394%
 | | K = 950 | 3.5× | 0.09%
FC layer 2: Matrix SVD | NK + KM | K = 350 | 5.8× | 0.19%
 | | K = 650 | 3.14× | 0.06%
FC layer 3: Matrix SVD | NK + KM | K = 250 | 8.1× | 0.67%
 | | K = 850 | 2.4× | 0.02%

4.3 Reduction in memory overhead

In many commercial applications memory conservation and storage are a central concern. This mainly applies to embedded systems (e.g. smartphones), where available memory is limited, and users are reluctant to download large files. In these cases, being able to compress the neural network is crucial for the viability of the product. In addition to requiring fewer operations, our approximations require significantly fewer parameters when compared to the original model.
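The convolutional parameter counts in Table 2 are easy to check numerically. In the sketch below, the first-layer shape (C = 3 input channels, 7×7 filters, F = 96 output filters) is our illustrative assumption, not a dimension stated in this section; note that the monochromatic reduction CXYF/(CC′ + XYF) is close to C whenever XYF ≫ CC′, so the ~3× figure is insensitive to the exact filter size:

```python
# Parameter counts from Table 2. The first-layer shape below (C=3 input
# channels, 7x7 filters, F=96 output filters) is an illustrative assumption.
def conv_params(C, X, Y, F):
    """Standard convolution: one X-by-Y filter per (input, output) channel pair."""
    return C * X * Y * F

def monochromatic_params(C, X, Y, F, C_prime):
    """Monochromatic approximation: a C x C' color transform plus
    single-channel X-by-Y filters for the F outputs."""
    return C * C_prime + X * Y * F

C, X, Y, F = 3, 7, 7, 96
full = conv_params(C, X, Y, F)                      # 14112
mono = monochromatic_params(C, X, Y, F, C_prime=6)  # 4722
reduction = full / mono                             # ~3x, matching Table 2
```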
Since the majority of parameters come from the fully connected layers, we include these layers in our analysis of memory overhead. We compress the fully connected layers using standard SVD as described in Section 3.2.2, using K to denote the rank of the approximation. Table 2 shows the number of parameters for various approximation methods as a function of hyperparameters for the approximation techniques. The table also shows the empirical reduction of parameters and the corresponding network performance for specific instantiations of the approximation parameters.

5 Discussion

In this paper we have presented techniques that can speed up the bottleneck convolution operations in the first layers of a CNN by a factor of 2-3×, with negligible loss of performance. We also show that our methods reduce the memory footprint of weights in the first two layers by a factor of 2-3× and the fully connected layers by a factor of 5-13×. Since the vast majority of weights reside in the fully connected layers, compressing only these layers translates into significant savings, which would facilitate mobile deployment of convolutional networks. These techniques are orthogonal to other approaches for efficient evaluation, such as quantization or working in the Fourier domain. Hence, they can potentially be used together to obtain further gains. An interesting avenue of research to explore in further work is the ability of these techniques to aid in regularization either during or post training. The low-rank projections effectively decrease the number of learnable parameters, suggesting that they might improve generalization ability. The regularization potential of the low-rank approximations is further motivated by two observations. The first is that the approximated filters for the first convolutional layer appear to be cleaned-up versions of the original filters. Additionally, we noticed that we sporadically achieve better test error with some of the more conservative approximations.
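The fully-connected compression above stores the two truncated SVD factors (NK + KM parameters) in place of the full N×M weight matrix. A minimal NumPy sketch with illustrative sizes (not the network's actual layer dimensions):

```python
import numpy as np

# Illustrative sizes; the real network's FC layers are much larger.
N, M, K = 512, 512, 32
rng = np.random.default_rng(0)
W = rng.standard_normal((N, M))

# Truncated SVD: W ~= A @ B with A = U_K (N x K), B = diag(S_K) V_K^T (K x M).
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :K]
B = S[:K, None] * Vt[:K]

# Storage drops from N*M to N*K + K*M parameters.
compression = (N * M) / (N * K + K * M)  # 8.0 for these sizes

# At run time the layer becomes two cheaper matrix products.
x = rng.standard_normal(N)
y_approx = (x @ A) @ B
```

The truncated SVD is the best rank-K approximation in Frobenius norm, which is why the same construction serves both as a speedup and as the memory compression analysed in Table 2.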
Acknowledgments

The authors are grateful for support from ONR #N00014-13-1-0646, NSF #1116923, #1149633 and Microsoft Research.

References

[1] Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., LeCun, Y.: Overfeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229 (2013)
[2] Denil, M., Shakibi, B., Dinh, L., Ranzato, M., de Freitas, N.: Predicting parameters in deep learning. arXiv preprint arXiv:1306.0543 (2013)
[3] Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012)
[4] Vanhoucke, V., Senior, A., Mao, M.Z.: Improving the speed of neural networks on CPUs. In: Proc. Deep Learning and Unsupervised Feature Learning NIPS Workshop (2011)
[5] Mathieu, M., Henaff, M., LeCun, Y.: Fast training of convolutional networks through FFTs. arXiv preprint arXiv:1312.5851 (2013)
[6] Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866 (2014)
[7] Zhang, T., Golub, G.H.: Rank-one approximation to high order tensors. SIAM J. Matrix Anal. Appl. 23(2) (2001) 534-550
[8] Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901 (2013)
[9] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: ImageNet: A large-scale hierarchical image database. In: CVPR (2009)
[10] Guennebaud, G., Jacob, B., et al.: Eigen v3. http://eigen.tuxfamily.org (2010)
[11] Zeiler, M.D., Taylor, G.W., Fergus, R.: Adaptive deconvolutional networks for mid and high level feature learning. In: ICCV, IEEE (2011) 2018-2025
[12] Le, Q.V., Ngiam, J., Chen, Z., Chia, D., Koh, P.W., Ng, A.Y.: Tiled convolutional neural networks. In: Advances in Neural Information Processing Systems (2010)
[13] Le, Q.V., Ranzato, M., Monga, R., Devin, M., Chen, K., Corrado, G.S., Dean, J., Ng, A.Y.: Building high-level features using large scale unsupervised learning. arXiv preprint arXiv:1112.6209 (2011)
[14] Lowe, D.G.: Object recognition from local scale-invariant features. In: ICCV, IEEE (1999) 1150-1157
[15] Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems 25 (2012) 1106-1114
2014
64
5,552
Tree-structured Gaussian Process Approximations

Thang Bui tdb40@cam.ac.uk
Richard Turner ret26@cam.ac.uk
Computational and Biological Learning Lab, Department of Engineering
University of Cambridge, Trumpington Street, Cambridge, CB2 1PZ, UK

Abstract

Gaussian process regression can be accelerated by constructing a small pseudo-dataset to summarize the observed data. This idea sits at the heart of many approximation schemes, but such an approach requires the number of pseudo-datapoints to be scaled with the range of the input space if the accuracy of the approximation is to be maintained. This presents problems in time-series settings or in spatial datasets where large numbers of pseudo-datapoints are required since computation typically scales quadratically with the pseudo-dataset size. In this paper we devise an approximation whose complexity grows linearly with the number of pseudo-datapoints. This is achieved by imposing a tree or chain structure on the pseudo-datapoints and calibrating the approximation using a Kullback-Leibler (KL) minimization. Inference and learning can then be performed efficiently using the Gaussian belief propagation algorithm. We demonstrate the validity of our approach on a set of challenging regression tasks including missing data imputation for audio and spatial datasets. We trace out the speed-accuracy trade-off for the new method and show that the frontier dominates those obtained from a large number of existing approximation techniques.

1 Introduction

Gaussian Processes (GPs) provide a flexible nonparametric prior over functions which can be used as a probabilistic module in both supervised and unsupervised machine learning problems. The applicability of GPs is, however, severely limited by a burdensome computational complexity. For example, this paper will consider non-linear regression on a dataset of size N for which training scales as O(N³) and prediction as O(N²).
This represents a prohibitively large computational cost for many applications. Consequently, a substantial research effort has sought to develop efficient approximation methods that side-step these significant computational demands [1-9]. Many of these approximation methods are based upon an intuitive idea, which is to use a smaller pseudo-dataset of size M ≪ N to summarize the observed dataset, reducing the cost for training and prediction (typically to O(NM²) and O(M²)). The methods can be usefully categorized into two non-exclusive classes according to the way in which they arrive at the pseudo-dataset. Indirect posterior approximations employ a modified generative model that is carefully constructed to be calibrated to the original, but for which inference is computationally cheaper. In practice this leads to parametric probabilistic models that inherit some of the GP's robustness to over-fitting. Direct posterior approximations, on the other hand, cut to the chase and directly calibrate an approximate posterior distribution, chosen to have favourable computational properties, to the true posterior distribution. In other words, the non-parametric model is retained, but the pseudo-datapoints provide a bottleneck at the inference stage, rather than at the modelling stage. Pseudo-datapoint approximations have enabled GPs to be deployed in a far wider range of problems than was previously possible. However, they have a severe limitation which means many challenging datasets still remain far out of their reach. The problem arises from the fact that pseudo-dataset methods are functionally local in the sense that each pseudo-datapoint sculpts out the approximate posterior in a small region of the input space around it [10]. Consequently, when the range of the inputs is large compared to the range of the dependencies in the posterior, many pseudo-datapoints are required to maintain the accuracy of the approximation.
In time-series settings [11-13], such as the audio denoising and missing data imputation considered later in the paper, this means that the number of pseudo-datapoints must grow with the number of datapoints if restoration accuracy is to be maintained. In other words, M must be scaled with N and so pseudo-datapoint schemes have not reduced the scaling of the computational complexity. In this context, approximation methods built from a series of local GPs are perhaps more appropriate, but they suffer from discontinuities at the boundaries that are problematic in many contexts; in the audio restoration example they lead to audible artifacts. The limitations of pseudo-datapoint approximations are not restricted to the time-series setting. Many datasets in geostatistics, climate science, astronomy and other fields have large, and possibly growing, spatial extent compared to the posterior dependency length. This puts them well out of the reach of all current pseudo-datapoint approximation methods. The purpose of this paper is to develop a new pseudo-datapoint approximation scheme which can be applied to these challenging datasets. Since the need to scale the number of pseudo-datapoints with the range of the inputs appears to be unavoidable, the approach instead focuses on reducing the computational cost of training and inference so that it is truly linear in N. This reduction in computational complexity comes from an indirect posterior approximation method which imposes additional structural restrictions on the pseudo-dataset so that it has a chain or tree structure. The paper is organized as follows: In the next section we will briefly review GP regression together with some well known pseudo-datapoint approximation methods. The tree-structured approximation is then proposed, related to previous methods, and developed in section 2.
We demonstrate that this new approximation is able to tractably handle far larger datasets whilst maintaining the accuracy of prediction and learning in section 3.

1.1 Regression using Gaussian Processes

This section provides a concise introduction to GP regression [14]. Suppose we have a training set comprising N D-dimensional input vectors {x_n}_{n=1}^N and corresponding real valued scalar observations {y_n}_{n=1}^N. The GP regression model assumes that each observation y_n is formed from an unknown function f(·), evaluated at input x_n, which is corrupted by independent Gaussian noise. That is, y_n = f(x_n) + ϵ_n where p(ϵ_n) = N(ϵ_n; 0, σ²). Typically a zero mean GP is used to specify a prior over the function f so that any finite set of function values are distributed under the prior according to a multivariate Gaussian p(f) = N(f; 0, K_ff).¹ The covariance of this Gaussian is specified by a covariance function or kernel, (K_ff)_{n,n′} = k_θ(x_n, x_{n′}), which depends upon a small number of hyper-parameters θ. The form of the covariance function and the values of the hyper-parameters encapsulate prior knowledge about the unknown function. Having specified the probabilistic model, we now consider regression tasks which typically involve predicting the function value f∗ at some unseen input x∗ (also known as missing data imputation) or estimating the function value f at a training input x_n (also known as denoising). Both of these prediction problems can be handled elegantly in the GP regression framework by noting that the posterior distribution over the function values is another Gaussian process with a mean and covariance function given by

m_f(x) = K_{xf}(K_ff + σ²I)^{-1} y,   k_f(x, x′) = k(x, x′) − K_{xf}(K_ff + σ²I)^{-1} K_{fx′}.   (1)

Here K_ff is the covariance matrix on the training set defined above and K_{xf} is the covariance function evaluated at pairs of test and training inputs.
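Equation (1) translates directly into a few lines of NumPy using a Cholesky factorization of K_ff + σ²I (the kernel hyperparameters and noise level below are illustrative):

```python
import numpy as np

def k_se(X1, X2, sf=1.0, ell=1.0):
    """Exponentiated quadratic covariance for 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xstar, noise=1e-3):
    """Posterior mean and covariance of eq. (1); the Cholesky step is O(N^3)."""
    Kff = k_se(X, X)
    Kxf = k_se(Xstar, X)
    L = np.linalg.cholesky(Kff + noise**2 * np.eye(len(X)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Kxf @ alpha
    V = np.linalg.solve(L, Kxf.T)
    cov = k_se(Xstar, Xstar) - V.T @ V
    return mean, cov

# With small observation noise the posterior mean passes (almost) through the data.
X = np.linspace(0.0, 5.0, 20)
y = np.sin(X)
mean, cov = gp_posterior(X, y, X)
```

Solving with the Cholesky factor rather than forming (K_ff + σ²I)^{-1} explicitly is both cheaper and numerically safer, which is the standard practice this paper's cost analysis assumes.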
¹Here and in what follows, the dependence on the input values x has been suppressed to lighten the notation.

The hyperparameters θ and the noise variance σ² can be learnt by finding a (local) maximum of the marginal likelihood of the parameters, p(y|θ, σ) = N(y; 0, K_ff + σ²I). The origin of the cubic computational cost of GP regression is the need to compute the Cholesky decomposition of the matrix K_ff + σ²I. Once this step has been performed a subsequent prediction can be made in O(N²).

1.2 Review of Gaussian process approximation methods

There are a plethora of methods for accelerating learning and inference in GP regression. Here we provide a brief and inexhaustive survey that focuses on indirect posterior approximation schemes based on pseudo-datasets. These approximations can be understood in terms of a three stage process. In the first stage the generative model is augmented with pseudo-datapoints, that is a set of pseudo-input points {x̄_m}_{m=1}^M and (noiseless) pseudo-observations {u_m}_{m=1}^M. In the second stage some of the dependencies in the model prior distribution are removed so that inference becomes computationally tractable. In the third stage the parameterisation of the new model is chosen in such a way that it is calibrated to the old one. This last stage can seem mysterious, but it can often be usefully understood as a KL divergence minimization between the true and the modified model. Perhaps the simplest example of this general approach is the Fully Independent Training Conditional (FITC) approximation [4] (see table 1). FITC removes direct dependencies between the function values f (see fig. 1) and calibrates the modified prior using the KL divergence KL(p(f, u)||q(f, u)), yielding q(f, u) = p(u) ∏_{n=1}^N p(f_n|u). That this model leads to computational advantages can perhaps most easily be seen by recognising that it is essentially a factor analysis model, with an admittedly clever parameterisation in terms of the covariance function.
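The marginal prior that FITC implies over f is the low-rank Nyström term Q_ff = K_{fu} K_{uu}^{-1} K_{uf} plus a diagonal correction that restores the exact prior variances, cov_q(f) = Q_ff + diag(K_ff − Q_ff); this standard form follows from q(f, u) = p(u) ∏_n p(f_n|u) above. A small numerical check with illustrative toy inputs:

```python
import numpy as np

def k_se(X1, X2, ell=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

X = np.linspace(0.0, 4.0, 30)   # training inputs
Z = np.linspace(0.0, 4.0, 5)    # M = 5 pseudo-inputs
Kff = k_se(X, X)
Kfu = k_se(X, Z)
Kuu = k_se(Z, Z) + 1e-10 * np.eye(len(Z))   # jitter for stability

Qff = Kfu @ np.linalg.solve(Kuu, Kfu.T)     # low-rank Nystrom term
Kfitc = Qff + np.diag(np.diag(Kff - Qff))   # FITC prior covariance
```

The diagonal correction means FITC reproduces the exact prior marginal variances while the off-diagonal structure is rank-M; this "low rank plus diagonal" covariance is exactly what makes the implied model essentially a factor analysis model.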
FITC has since been extended so that the pseudo-datapoints can have a different covariance function to the data [6] and so that some subset of the direct dependencies between the function values f are retained, as in the Partially Independent Conditional (PIC) approximation [3,5] which generalizes the Bayesian Committee Machine [15]. There are indirect approximation methods which do not naturally fall into this general scheme. Stationary covariance functions can be approximated using a sum of M cosines, which leads to the Sparse Spectrum Gaussian Process (SSGP) [7] which has identical computational cost to FITC. An alternative prior approximation method for stationary covariance functions in the multi-dimensional time-series setting designs a linear Gaussian state space model (LGSSM) so that it approximates the prior power spectrum using a connection to stochastic differential equations (SDEs) [16]. The Kalman smoother can then be used to perform inference and learning in the new representation with a linear complexity. This technique, however, only reduces the computational complexity for the temporal axis and the spatial complexity is still cubic; moreover, the extension beyond the time-series setting requires a second layer of approximations, such as variational free-energy methods [17] which are known to introduce significant biases [18]. In contrast to the methods mentioned above, direct posterior approximation methods do not alter the generative model, but rather seek computational savings through a simplified representation of the posterior distribution. Examples of this type of approach include the Projected Process (PP) method [1,2], which has since been interpreted as the expectation step in a variational free energy (VFE) optimisation scheme [8], enabling stochastic versions [19]. Similarly, the Expectation Propagation (EP) framework can also be used to devise posterior approximations with an associated hyper-parameter learning scheme [9].
All of these methods employ a pseudo-dataset to parameterize the approximate posterior.

Table 1: GP approximations as KL minimization. C_k and B_k are disjoint subsets of the function values and pseudo-datapoints respectively. Indirect posterior approximations are indicated by ∗.

Method | KL minimization | Result
FITC∗ | KL(p(f, u) || q(u) ∏_n q(f_n|u)) | q(u) = p(u), q(f_n|u) = p(f_n|u)
PIC∗ | KL(p(f, u) || q(u) ∏_k q(f_{C_k}|u)) | q(u) = p(u), q(f_{C_k}|u) = p(f_{C_k}|u)
PP | KL((1/Z) p(u)p(f|u)q(y|u) || p(f, u|y)) | q(y|u) = N(y; K_{fu}K_{uu}^{-1}u, σ²I)
VFE | KL(p(f|u)q(u) || p(f, u|y)) | q(u) ∝ p(u) exp(⟨log p(y|f)⟩_{p(f|u)})
EP | KL(q(f; u)p(y_n|f_n)/q_n(f; u) || q(f; u)) | q(f; u) ∝ p(f) ∏_m p(u_m|f_m)
Tree∗ | KL(p(f, u) || ∏_k q(f_{C_k}|u_{B_k}) q(u_{B_k}|u_{par(B_k)})) | q(f_{C_k}|u_{B_k}) = p(f_{C_k}|u_{B_k}), q(u_{B_k}|u_{par(B_k)}) = p(u_{B_k}|u_{par(B_k)})

1.3 Limitations of current pseudo-dataset approximations

There is a conflict at the heart of current pseudo-dataset approximations. Whilst the effect of each pseudo-datapoint is local, the computations involving them are global. The local characteristic means that large numbers of pseudo-datapoints are required to accurately approximate complex posterior distributions. If l_d is the range of the dependencies in the posterior in dimension d and L_d is the data-range in each dimension then approximation accuracy will be retained when M ⪆ ∏_{d=1}^D L_d/l_d. Critically, for many applications this condition means that large numbers of pseudo-points are required, such as time series (L_1 ∝ N) and large spatial datasets (L_d ≫ l_d). Unfortunately, the global graphical structure means that it is computationally costly to handle such large pseudo-datasets. The obvious solution to this conflict is to use the so-called local approximation, which splits the observations into disjoint blocks and models each one with a GP.
This is a severe approach and this paper proposes a more elegant and accurate alternative that retains more of the graphical structure whilst still enabling local computation.

Figure 1: Graphical models of the GP model and different prior approximation schemes using pseudo-datapoints: (a) Full GP, (b) FITC, (c) PIC, (d) Tree (chain). Thick edges indicate full pairwise connections and boldface fonts denote sets of variables. The chain structured version of the new approximation is shown for clarity.

2 Tree-structured prior approximations

In this section we develop an indirect posterior approximation in the same family as FITC and PIC. In order to reduce the computational overhead of these approximations, the global graphical structure is replaced by a local one via two modifications. First, the M pseudo-datapoints are divided into K disjoint blocks of potentially different cardinality {u_{B_k}}_{k=1}^K and the blocks are then arranged into a tree. Second, the function values are also divided into K disjoint blocks of potentially different cardinality {f_{C_k}}_{k=1}^K and the blocks are assumed to be conditionally independent given the corresponding subset of pseudo-datapoints. The new graphical model is shown in fig. 1d and it can be described mathematically as follows,

q(u) = ∏_{k=1}^K q(u_{B_k}|u_{par(B_k)}),   q(f|u) = ∏_{k=1}^K q(f_{C_k}|u_{B_k}),   p(y|f) = ∏_{n=1}^N p(y_n; f_n, σ²).   (2)

Here u_{par(B_k)} denotes the pseudo-datapoints in the parent node of u_{B_k}. This is an example of prior approximation as the original likelihood function has been retained. The next step is to calibrate the new approximate model by choosing suitable values for the distributions {q(u_{B_k}|u_{par(B_k)}), q(f_{C_k}|u_{B_k})}_{k=1}^K. Taking an identical approach to that employed by FITC and PIC, we minimize a forward KL divergence between the true model prior and the approximation, KL(p(f, u) || ∏_k q(f_{C_k}|u_{B_k}) q(u_{B_k}|u_{par(B_k)})) (see table 1).
The optimal distributions are found to be the corresponding conditional distributions in the unapproximated augmented model,

q(u_{B_k}|u_{par(B_k)}) = p(u_{B_k}|u_{par(B_k)}) = N(u_{B_k}; A_k u_{par(B_k)}, Q_k),   (3)
q(f_{C_k}|u_{B_k}) = p(f_{C_k}|u_{B_k}) = N(f_{C_k}; C_k u_{B_k}, R_k).   (4)

The parameters depend upon the covariance function. Letting u_k = u_{B_k}, u_l = u_{par(B_k)} and f_k = f_{C_k} we find that,

A_k = K_{u_k u_l} K_{u_l u_l}^{-1},   Q_k = K_{u_k u_k} − K_{u_k u_l} K_{u_l u_l}^{-1} K_{u_l u_k},   (5)
C_k = K_{f_k u_k} K_{u_k u_k}^{-1},   R_k = K_{f_k f_k} − K_{f_k u_k} K_{u_k u_k}^{-1} K_{u_k f_k}.   (6)

As shown in the graphical model, the local pseudo-data separate test and training latent functions. The marginal posterior distribution of the local pseudo-data is then sufficient to obtain the approximate predictive distribution: p(f∗|y) = ∫ du_{B_k} p(f∗, u_{B_k}|y) = ∫ du_{B_k} p(f∗|u_{B_k}) p(u_{B_k}|y). In other words, once inference has been performed, prediction is local and therefore fast. The important question of how to assign test and training points to blocks is discussed in the next section. We note that the tree-based prior approximation includes as special cases the full GP, PIC, FITC, the local method and local versions of PIC and FITC (see table 1 in the supplementary material). Importantly, in a time-series setting the blocks can be organized into a chain and the approximate model becomes an LGSSM. This provides a new method for approximating GPs using LGSSMs in which the state is a set of pseudo-observations, rather than, for instance, the derivatives of function values at the input locations [16]. Exact inference in this approximate model proceeds efficiently using the up-down algorithm for Gaussian beliefs (see [20, Ch. 14]). The inference scheme has the same complexity as forming the model, O(KD³) ≈ O(ND²) (where D is the average number of observations per block).

2.1 Inference and learning

Selecting the pseudo-inputs and constructing the tree. First we consider the method for dividing the observed data into blocks and selecting the pseudo-inputs.
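The conditional parameters in equations (3)-(6) are straightforward to compute, and for a chain with only two blocks the approximation q(u) = p(u_{B_1}) p(u_{B_2}|u_{B_1}) recovers the exact joint over the pseudo-datapoints, which gives a handy numerical check. A sketch with a toy kernel and illustrative block inputs:

```python
import numpy as np

def k_se(X1, X2, ell=1.0):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

# Two blocks of pseudo-inputs: u1 is the parent of u2.
Z1 = np.array([0.0, 0.5, 1.0])
Z2 = np.array([2.0, 2.5, 3.0])
K11 = k_se(Z1, Z1)
K22 = k_se(Z2, Z2)
K21 = k_se(Z2, Z1)

# Eq. (5): A2 = K21 K11^-1, Q2 = K22 - K21 K11^-1 K12.
A2 = K21 @ np.linalg.inv(K11)
Q2 = K22 - A2 @ K21.T

# Joint covariance of (u1, u2) implied by p(u1) p(u2|u1).
K_chain = np.block([[K11, (A2 @ K11).T],
                    [A2 @ K11, A2 @ K11 @ A2.T + Q2]])

# Exact joint covariance over all pseudo-inputs.
Z = np.concatenate([Z1, Z2])
K_exact = k_se(Z, Z)
```

With more than two blocks the chain drops the direct dependencies between non-adjacent blocks, which is exactly where the approximation (and its linear cost) comes from.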
Typically, the block sizes will be chosen to be fairly small in order to accelerate learning and inference. For data which are on a grid, such as the regularly sampled time-series considered later in the paper, it may be simplest to use regular blocks. An alternative, which might be more appropriate for non-regularly sampled data, is to use a k-means algorithm with the Euclidean distance score. Having blocked the observations, a random subset of the data in each block is chosen to set the pseudo-inputs. Whilst it would be possible in principle to optimize the locations of the pseudo-inputs, in practice the new approach can tractably handle a very large number of pseudo-datapoints (e.g. M ≈ N), and so optimisation is less critical than for previous approaches. Once the blocks are formed, they are fixed during hyperparameter training and prediction. Second, we consider how to construct the tree. The pair-wise distances between the cluster centers are used to define the weights between candidate edges in a graph. Kruskal's algorithm uses this information to construct an acyclic graph. The algorithm starts with a fully disconnected graph and recursively adds the edge with the smallest weight that does not introduce loops. A tree is formed from this acyclic subgraph by randomly choosing one node to be the root. This choice is arbitrary and does not affect the results of inference. The parameters of the model {A_k, Q_k, C_k, R_k}_{k=1}^K (state transitions and noise) are computed by traversing down the tree from the root to the leaves. These matrices must be recomputed at each step during learning.

Inference. It is straightforward to marginalize out the latent functions f in the graphical model, in which case the effective local likelihood becomes p(y_k|u_k) = N(y_k; C_k u_k, R_k + σ²I). The model can be recognized from the graphical model as a tree-structured Gaussian model with latent variables u and observations y.
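The tree-construction step above (Kruskal's algorithm on the pairwise distances between cluster centres) fits in a few lines with a union-find structure; the centres below are illustrative:

```python
import numpy as np
from itertools import combinations

def kruskal_mst(centers):
    """Return minimum-spanning-tree edges (i, j) over the given points."""
    n = len(centers)
    # Candidate edges sorted by Euclidean distance between centres.
    edges = sorted(combinations(range(n), 2),
                   key=lambda e: np.linalg.norm(centers[e[0]] - centers[e[1]]))
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    mst = []
    for i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # smallest edge that does not introduce a loop
            parent[ri] = rj
            mst.append((i, j))
    return mst

centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
tree_edges = kruskal_mst(centers)
```

Rooting the resulting spanning tree at any node then fixes the parent structure par(B_k) used in equation (2).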
As is shown in the supplementary, the posterior distribution can be found by using the Gaussian belief propagation algorithm (for more see [20]). The passing of messages can be scheduled so the marginals can be found after two passes (asynchronous scheduling: upwards from leaves to root and then downwards). For chain structures inference can be performed using the Kalman smoother at the same cost.

Hyperparameter learning. The marginal likelihood can be efficiently computed by the same belief propagation algorithms due to its recursive form, p(y_{1:K}|θ) = ∏_{k=1}^K p(y_k|y_{1:k−1}, θ). The derivatives can also be tractably computed as they involve only local moments:

(d/dθ) log p(y|θ) = Σ_{k=1}^K [ ⟨(d/dθ) log p(u_k|u_l)⟩_{p(u_k,u_l|y)} + ⟨(d/dθ) log p(y_k|u_k)⟩_{p(u_k|y)} ].   (7)

For concreteness, the explicit form of the marginal likelihood and its derivative are included in the supplementary material. We obtain point estimates of the hyperparameters by finding a (local) maximum of the marginal likelihood using the BFGS algorithm.

3 Experiments

We test the new approximation method on three challenging real-world prediction tasks² via a speed-accuracy trade-off as recommended in [21]. Following that work, we did not investigate the effects of pseudo-input optimisation. We used different datasets that had less limited spatial/temporal extent.

Experiment 1: Audio sub-band data (exponentiated quadratic kernel). In the first experiment we consider imputation of missing data in a sub-band of a speech signal. The speech signal was taken from the TIMIT database (see fig. 4), a short time Fourier transform was applied (20ms Gaussian window), and the real part of the 152Hz channel selected for the experiments. The signal was T = 50000 samples long and 25 sections of length 80 samples were removed. An exponentiated quadratic kernel, k_θ(t, t′) = σ² exp(−(t − t′)²/(2l²)), was used for prediction. We compare the chain-structured pseudo-datapoint approximation to FITC, VFE, SSGP, local versions of PIC (corresponding to setting A_k = 0, Q_k = K_{u_k u_k} in the tree-structured approximation) and the SDE method.³ Only 20000 datapoints were used for the SDE method due to the long run times. The size of the pseudo-dataset and the number of blocks in the chain and local approximations, and the order of approximation in SDE, were varied to trace out speed-accuracy frontiers. Accuracy of the imputation was quantified using the standardized mean squared errors (SMSEs) (for other metrics, see the supplementary material). Hyperparameter learning proceeded until a convergence criterion or a maximum number of function evaluations was reached. Learning and prediction (imputation) times were recorded. We found that the chain structured method outperforms all of the other methods (see fig. 2). For example, for a fixed training time of 100s, the best performing chain provided a three-fold increase in accuracy over the local method, which was the next best. A typical imputation is shown in fig. 4 (left hand side). The chain structured method was able to accurately impute the missing data whilst the local method is less accurate and more uncertain as information is not propagated between the blocks.

²Synthetic data experiments can be found in the supplementary material.

Figure 2: Experiment 1.
Audio sub-band reconstruction error as a function of training time (a) and test time (b) for different approximations. The numerical labels for the chain and local methods are the number of pseudo-datapoints per block and the number of observations per block respectively, and for the SDE method are the order of approximation. For the other methods they are the size of the pseudo-dataset. Faster and more accurate approximations are located towards the bottom left hand corners of the plots.

Experiment 2: Audio filter data (spectral mixture). The second experiment tested the performance of the chain based approximation when more complex kernels are employed. We filtered the same speech signal using a 152Hz filter with a 50Hz bandwidth, producing a signal of length T = 50000 samples from which missing sections of length 150 samples were removed. Since the complete signal had a complex bandpass spectrum we used a spectral mixture kernel containing two components [22], k_θ(t, t′) = Σ_{k=1}^2 σ_k² cos(ω_k(t − t′)) exp(−(t − t′)²/(2l_k²)). We compared a chain based approximation to FITC, VFE and the local PIC method, finding it to be substantially more accurate than these methods (see fig. 3 for SMSE results and the right hand side of fig. 4 for a typical example). Results with more components showed identical trends (see supplementary material).

Experiment 3: Terrain data (two dimensional input space, exponentiated quadratic kernel). In the final experiment we tested the tree based approximation using a spatial dataset in which terrain altitude was measured as a function of geographical position.⁴ We considered a 20km by 30km region (400×600 datapoints) and tested prediction on 80 randomly positioned missing blocks of size 1km by 1km (20×20 datapoints). In total, this translates into about 200k/40k training/test points.
We used an exponentiated quadratic kernel with different length-scales in the two input dimensions, comparing a tree-based approximation, which was constructed as described in section 2.1, to the pseudo-point approximation methods considered in the first experiment. Figure 5 shows the speed-accuracy trade-off for the various approximation methods at the test and training stages.

³Code is available at http://www.gaussianprocess.org/gpml/code/matlab/doc/ [FITC], http://www.tsc.uc3m.es/˜miguel/downloads.php [SSGP], http://becs.aalto.fi/en/research/bayes/gpstuff/ [SDE] and http://mlg.eng.cam.ac.uk/thang/ [Tree+VFE].
⁴Dataset is available at http://data.gov.uk/dataset/os-terrain-50-dtm.

Figure 3: Experiment 2. Filtered audio signal reconstruction error as a function of training time (a) and test time (b) for different approximations. See caption of fig. 2 for full details.

Figure 4: Missing data imputation for experiment 1 (audio sub-band data, (a)) and experiment 2 (filtered audio data, (b)). Imputation using the chain-structured approximation (top) is more accurate and less uncertain than the predictions obtained from the local method (bottom). Blocks consisted of 5 pseudo-datapoints and 50 observations respectively.
We found that the global approximation techniques such as FITC or SSGP could not tractably handle a sufficient number of pseudo-datapoints to support accurate imputation. The local variant of our method outperformed the other techniques, but compared poorly to the tree. Typical reconstructions from the tree, local and FITC approximations are shown in fig. 6.

Summary of experimental results The speed-accuracy frontier for the new approximation scheme dominates those produced by the other methods over a wide range for each of the three datasets. Similar results were found for additional datasets (see supplementary material). It is perhaps not surprising that the tree approximation performs so favourably. Consider the rule-of-thumb estimate for the number of pseudo-datapoints required. Using the length-scales $l_d$ learned by the tree-approximation as a proxy for the posterior dependency length, the estimated pseudo-dataset size required for the three datasets is $M \gtrapprox \prod_d L_d / l_d \approx \{1400, 1000, 5000\}$. This is at the upper end of what can be tractably handled using standard approximations. Moreover, these approximation schemes can be made arbitrarily poor by expanding the region further. The most accurate tree-structured approximation for the three datasets used {2500, 10000, 20000} datapoints respectively. The local PIC method performs more favourably than the standard approximations and is generally faster than the tree since it involves a single pass through the dataset and simpler matrix computations. However, blocking the data into independent chunks results in artifacts at the block boundaries which reduce the approximation's accuracy significantly when compared to the tree (e.g. if they happen to coincide with a missing region).
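The rule-of-thumb estimate above translates directly into a few lines of code. A minimal sketch (the function name is ours, and the example lengthscales are hypothetical, chosen so the 20km by 30km terrain region yields an estimate of about 5000):

```python
def pseudo_points_estimate(region_sizes, lengthscales):
    # Rule of thumb from the text: M >~ prod_d (L_d / l_d), i.e. roughly one
    # pseudo-datapoint per posterior dependency length in each input dimension.
    m = 1.0
    for L, l in zip(region_sizes, lengthscales):
        m *= L / l
    return m

# Hypothetical example: a 20km x 30km region with learned lengthscales of
# 0.4km and 0.3km (illustrative values, not taken from the paper) gives
# (20 / 0.4) * (30 / 0.3) = 5000 pseudo-datapoints.
```

Estimates of this size are what motivate the linear (rather than quadratic) scaling in the pseudo-dataset size discussed in the conclusion.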
Figure 5: Experiment 3. Terrain data reconstruction. SMSE as a function of training time (a) and test time (b). See caption of fig. 2 for full details.

Figure 6: Experiment 3. Terrain data reconstruction. The blocks in this region of input space are organized into a tree-structure (a) with missing regions shown by the black squares. The complete terrain altitude data for the region (b). Prediction errors from three methods (c).

4 Conclusion

This paper has presented a new pseudo-datapoint approximation scheme for Gaussian process regression problems which imposes a tree or chain structure on the pseudo-dataset that is calibrated using a KL divergence. Inference and learning in the resulting approximate model proceed efficiently via Gaussian belief propagation. The computational cost of the approximation is linear in the pseudo-dataset size, improving upon the quadratic scaling of typical approaches, and opening the door to more challenging datasets than have previously been considered. Importantly, the method does not require the input data or the covariance function to have special structure (stationarity, regular sampling, time-series settings etc. are not a requirement). We showed that the approximation obtained superior performance in both predictive accuracy and runtime complexity on challenging regression tasks which included audio missing data imputation and spatial terrain prediction. There are several directions for future work.
First, the new approximation scheme should be tested on datasets that have higher dimensional input spaces, since it is not clear how well the approximation will generalize to this setting. Second, the tree structure naturally leads to (possibly distributed) online stochastic inference procedures in which gradients computed at a local block, or a collection of local blocks, are used to update hyperparameters directly, as opposed to waiting for a full pass up and down the tree. Third, the tree structure used for prediction can be decoupled from the tree structure used for training, whilst still employing the same pseudo-datapoints, potentially improving prediction.

Acknowledgements We would like to thank the EPSRC (grant numbers EP/G050821/1 and EP/L000776/1) and Google for funding.

References
[1] M. Seeger, C. K. I. Williams, and N. D. Lawrence, “Fast forward selection to speed up sparse Gaussian process regression,” in International Conference on Artificial Intelligence and Statistics, 2003.
[2] M. Seeger, Bayesian Gaussian process models: PAC-Bayesian generalisation error bounds and sparse approximations. PhD thesis, University of Edinburgh, 2003.
[3] J. Quiñonero-Candela and C. E. Rasmussen, “A unifying view of sparse approximate Gaussian process regression,” The Journal of Machine Learning Research, vol. 6, pp. 1939–1959, 2005.
[4] E. Snelson and Z. Ghahramani, “Sparse Gaussian processes using pseudo-inputs,” in Advances in Neural Information Processing Systems 19, pp. 1257–1264, MIT Press, 2006.
[5] E. Snelson and Z. Ghahramani, “Local and global sparse Gaussian process approximations,” in International Conference on Artificial Intelligence and Statistics, pp. 524–531, 2007.
[6] M. Lázaro-Gredilla and A. R. Figueiras-Vidal, “Inter-domain Gaussian processes for sparse inference using inducing features,” in Advances in Neural Information Processing Systems 22, pp. 1087–1095, Curran Associates, Inc., 2009.
[7] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal, “Sparse spectrum Gaussian process regression,” The Journal of Machine Learning Research, vol. 11, pp. 1865–1881, 2010.
[8] M. K. Titsias, “Variational learning of inducing variables in sparse Gaussian processes,” in International Conference on Artificial Intelligence and Statistics, pp. 567–574, 2009.
[9] Y. Qi, A. H. Abdel-Gawad, and T. P. Minka, “Sparse-posterior Gaussian processes for general likelihoods,” in Proceedings of the Twenty-Sixth Annual Conference on Uncertainty in Artificial Intelligence, pp. 450–457, AUAI Press, 2010.
[10] E. Snelson, Flexible and efficient Gaussian process models for machine learning. PhD thesis, Gatsby Computational Neuroscience Unit, University College London, 2007.
[11] R. E. Turner and M. Sahani, “Time-frequency analysis as probabilistic inference,” IEEE Transactions on Signal Processing, vol. Early Access, 2014.
[12] R. E. Turner and M. Sahani, “Probabilistic amplitude and frequency demodulation,” in Advances in Neural Information Processing Systems 24, pp. 981–989, 2011.
[13] R. E. Turner, Statistical Models for Natural Sounds. PhD thesis, Gatsby Computational Neuroscience Unit, UCL, 2010.
[14] C. E. Rasmussen and C. K. I. Williams, Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005.
[15] V. Tresp, “A Bayesian committee machine,” Neural Computation, vol. 12, no. 11, pp. 2719–2741, 2000.
[16] S. Särkkä, A. Solin, and J. Hartikainen, “Spatiotemporal learning via infinite-dimensional Bayesian filtering and smoothing: A look at Gaussian process regression through Kalman filtering,” IEEE Signal Processing Magazine, vol. 30, pp. 51–61, July 2013.
[17] E. Gilboa, Y. Saatçi, and J. Cunningham, “Scaling multidimensional inference for structured Gaussian processes,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. Early Access, 2013.
[18] R. E. Turner and M. Sahani, “Two problems with variational expectation maximisation for time-series models,” in Bayesian Time Series Models (D. Barber, T. Cemgil, and S. Chiappa, eds.), ch. 5, pp. 109–130, Cambridge University Press, 2011.
[19] J. Hensman, N. Fusi, and N. Lawrence, “Gaussian processes for big data,” in Proceedings of the Twenty-Ninth Annual Conference on Uncertainty in Artificial Intelligence (UAI-13), (Corvallis, Oregon), pp. 282–290, AUAI Press, 2013.
[20] D. Koller and N. Friedman, Probabilistic Graphical Models: Principles and Techniques (Adaptive Computation and Machine Learning). The MIT Press, 2009.
[21] K. Chalupka, C. K. Williams, and I. Murray, “A framework for evaluating approximation methods for Gaussian process regression,” The Journal of Machine Learning Research, vol. 14, no. 1, pp. 333–350, 2013.
[22] A. G. Wilson and R. P. Adams, “Gaussian process kernels for pattern discovery and extrapolation,” in Proceedings of the 30th International Conference on Machine Learning, pp. 1067–1075, 2013.
Optimal decision-making with time-varying evidence reliability

Jan Drugowitsch¹, Rubén Moreno-Bote², Alexandre Pouget¹
¹Dépt. des Neurosciences Fondamentales, Université de Genève, CH-1211 Genève 4, Switzerland. jdrugo@gmail.com, alexandre.pouget@unige.ch
²Research Unit, Parc Sanitari Sant Joan de Déu and University of Barcelona, 08950 Barcelona, Spain. rmoreno@fsjd.org

Abstract

Previous theoretical and experimental work on optimal decision-making was restricted to the artificial setting of a reliability of the momentary sensory evidence that remained constant within single trials. The work presented here describes the computation and characterization of optimal decision-making in the more realistic case of an evidence reliability that varies across time even within a trial. It shows that, in this case, the optimal behavior is determined by a bound on the decision maker's belief that depends only on the current, but not the past, reliability. We furthermore demonstrate that simpler heuristics fail to match the optimal performance for certain characteristics of the process that determines the time-course of this reliability, causing a drop in reward rate of more than 50%.

1 Introduction

Optimal decision-making means making optimal use of sensory information to maximize one's overall reward, given the current task contingencies. Examples of decision-making are the decision to cross the road based on the percept of incoming traffic, or the decision of an eagle to dive for prey based on uncertain information about the prey's presence and location. Any kind of decision-making based on sensory information requires some temporal accumulation of this information, which makes such accumulation the first integral component of decision-making. Accumulating evidence for a longer duration causes higher certainty about the stimulus but comes at the cost of spending more time to commit to a decision.
Thus, the second integral component of such decision-making is to decide when enough information has been accumulated to commit to a decision. Previous work has established that, if the reliability of the momentary evidence is constant within a trial but might vary across trials, optimal decision-making can be implemented by a class of models known as diffusion models [1, 2, 3]. Furthermore, it has been shown that the behavior of humans and other animals at least qualitatively follows that predicted by such diffusion models [4, 5, 6, 3]. Our work significantly extends this work by moving from the rather artificial case of constant evidence reliability to allowing the reliability of evidence to change within single trials. Based on a principled formulation of this problem, we describe optimal decision-making with time-varying evidence reliability. Furthermore, a comparison to simpler decision-making heuristics demonstrates when such heuristics fail to achieve comparable performance. In particular, we derive Bayes-optimal evidence accumulation for our task setup, and compute the optimal policy for such cases by dynamic programming. To do so, we borrow concepts from continuous-time stochastic control to keep the computational complexity linear in the process space size (rather than quadratic for the naïve approach). Finally, we characterize how the optimal policy depends on parameters that determine the evidence reliability time-course, and show that simpler, heuristic policies fail to match the optimal performance for particular sub-regions of this parameter space.

2 Perceptual decision-making with time-varying reliability

Within a single trial, the decision maker's task is to identify the state of a binary hidden variable, z ∈ {−1, 1} (with units s⁻¹, if time is measured in seconds), based on a stream of momentary evidence dx(t), t ≥ 0.
This momentary evidence provides uncertain information about z by
$$ dx = z\,dt + \frac{1}{\sqrt{\tau(t)}}\,dW, \qquad d\tau = \eta\,(\mu - \tau)\,dt + \sigma\sqrt{\frac{2\eta}{\mu}}\sqrt{\tau}\,dB, \qquad (1) $$
where dW and dB are independent Wiener processes. In the above, τ(t) controls how informative the momentary evidence dx(t) is about z, such that τ(t) is the reliability of this momentary evidence. We assume its time-course to be described by the Cox-Ingersoll-Ross (CIR) process (τ(t) in Eq. (1)) [7]. Despite the simplicity of this model and its low number of parameters, it is sufficiently flexible in modeling how the evidence reliability changes with time, and ensures that τ ≥ 0, always¹. It is parameterized by the mean reliability, µ, its variance, σ², and its speed of change, η, all of which we assume to be known to the decision maker. At the beginning of each trial, at t = 0, τ(0) is drawn from the process' steady-state distribution, which is gamma with shape µ²/σ² and scale σ²/µ [7]. It can be shown that, upon observing some momentary evidence, τ(t) can be immediately estimated with infinite precision, such that it is known for all t ≥ 0 (see supplement). Optimal decision-making requires in each trial computing the posterior over z, given all evidence dx_{0:t} from trial onset to some time t. Assuming a uniform prior over z's, this posterior is given by
$$ g(t) \equiv p(z = 1 \mid dx_{0:t}) = \frac{1}{1 + e^{-2X(t)}}, \qquad \text{where } X(t) = \int_0^t \tau(s)\,dx(s), \qquad (2) $$
(this has already been established in [8]; see supplement for derivation). Thus, at time t, the decision maker's belief g(t) that z = 1 is the sigmoid of the accumulated, reliability-weighted, momentary evidence up until that time. We consider two possible tasks. In the ER task, the decision maker is faced with a single trial in which correct (incorrect) decisions are rewarded by r₊ (r₋), and the accumulation of evidence comes at a constant cost (for example, attentional effort) of c per unit time. The decision maker's aim is then to maximize her expected reward, ER, including the cost for accumulating evidence.
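Eqs. (1) and (2) are simple to simulate. A minimal Euler-Maruyama sketch of a single trial (step size, horizon, and the crude guard keeping τ positive are our own implementation choices, not taken from the paper):

```python
import numpy as np

def simulate_trial(z, mu, sigma, eta, dt=1e-3, T=2.0, seed=0):
    """Simulate the CIR reliability tau(t) and the belief g(t) of Eqs. (1)-(2)."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    # tau(0) ~ steady-state gamma distribution with shape mu^2/sigma^2, scale sigma^2/mu
    tau = rng.gamma(shape=mu**2 / sigma**2, scale=sigma**2 / mu)
    X = 0.0
    taus = np.empty(n)
    gs = np.empty(n)
    for i in range(n):
        # momentary evidence: dx = z dt + (1 / sqrt(tau)) dW
        dx = z * dt + np.sqrt(dt / tau) * rng.standard_normal()
        X += tau * dx  # X(t) = int_0^t tau(s) dx(s)
        # CIR reliability: dtau = eta (mu - tau) dt + sigma sqrt(2 eta / mu) sqrt(tau) dB
        dtau = eta * (mu - tau) * dt \
            + sigma * np.sqrt(2.0 * eta / mu) * np.sqrt(tau * dt) * rng.standard_normal()
        tau = max(tau + dtau, 1e-12)  # crude guard: keep tau strictly positive
        taus[i] = tau
        gs[i] = 1.0 / (1.0 + np.exp(-2.0 * X))  # belief g(t), Eq. (2)
    return taus, gs
```

Trajectories generated this way correspond to the grey (g, τ) paths shown later in Fig. 2.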
In the RR task, we consider a long sequence of trials, separated on average by the inter-trial interval t_i, which might be extended by the penalty time t_p for wrong decisions. Maximizing reward in such a sequence equals maximizing the reward rate, RR, per unit time [9]. Thus, the objective function for either task is given by
$$ \mathrm{ER}(PC, DT) = PC\,r_+ + (1 - PC)\,r_- - c\,DT, \qquad \mathrm{RR}(PC, DT) = \frac{\mathrm{ER}(PC, DT)}{DT + t_i + (1 - PC)\,t_p}, \qquad (3) $$
where PC is the probability of performing a correct decision, and DT is the expected decision time. For notational convenience we assume r₊ = 1 and r₋ = 0. The work can be easily generalized to any choice of r₊ and r₋.

3 Finding the optimal policy by Dynamic Programming

3.1 Dynamic Programming formulation

Focusing first on the ER task of maximizing the expected reward in a single trial, the optimal policy can be described by bounds in belief² at g_θ(τ) and 1 − g_θ(τ) as functions of the current reliability, τ. Once either of these bounds is crossed, the decision maker chooses z = 1 (for g_θ(τ)) or z = −1 (for 1 − g_θ(τ)). The bounds are found by solving Bellman's equation [10, 9],
$$ V(g, \tau) = \max\left\{ V_d(g),\ \left\langle V(g + \delta g, \tau + \delta\tau) \right\rangle_{p(\delta g, \delta\tau \mid g, \tau)} - c\,\delta t \right\}, \qquad (4) $$
where V_d(g) = max{g, 1 − g}. Here, the value function V(g, τ) denotes the expected return for the current state (g, τ) (i.e. holding belief g, and current reliability τ), which is the expected reward at this state within a trial, given that optimal choices are performed in all future states. The right-hand side of Bellman's equation is the maximum of the expected returns for either making a decision immediately, or continuing to accumulate more evidence and deciding later. When deciding immediately, one expects reward g (or 1 − g) when choosing z = 1 (or z = −1), such that the expected return for this choice is V_d(g). Continuing to accumulate evidence for another small time step δt comes at cost c δt, but promises future expected return ⟨V(g + δg, τ + δτ)⟩_{p(δg, δτ | g, τ)}, as expressed by the second term in max{·, ·} in Eq. (4). Given a V(g, τ) that satisfies Bellman's equation, it is easy to see that the optimal policy is to accumulate evidence until the expected return for doing so is exceeded by that for making immediate decisions. The belief g at which this happens differs for different reliabilities τ, such that the optimal policy is determined by a bound in belief, g_θ(τ), that depends on the current reliability. We find the solution to Bellman's equation itself by value iteration on a discretized (g, τ)-space, as illustrated in Fig. 1(a). Value iteration is based on a sequence of value functions V⁰(g, τ), V¹(g, τ), ..., where Vⁿ(g, τ) is given by the solution to the right-hand side of Eq. (4) with ⟨V(g + δg, τ + δτ)⟩ based on the previous value function Vⁿ⁻¹(g, τ). With n → ∞, this procedure guarantees convergence to the solution of Eq. (4). In practice, we terminate value iteration once max_{g,τ} |Vⁿ(g, τ) − Vⁿ⁻¹(g, τ)| drops below a pre-defined threshold. The only remaining difficulty is how to compute the expected future return ⟨V(·, ·)⟩ on the discretized (g, τ)-space, which we describe in more detail in the next section.

¹We restrict ourselves to µ > σ, in which case τ(t) > 0 (excluding τ = 0) is guaranteed for all t ≥ 0.
²The subscript ·_θ indicates the relation to the optimal decision bound θ.

Figure 1: Finding the optimal policy by dynamic programming. (a) illustrates the approach for the ER task. Here, V_d(g) and V_c(g, τ) denote the expected return for immediate decisions and that for continuing to accumulate evidence, respectively. (b) shows the same approach for RR tasks, in which, in an outer loop, the reward rate ρ is found by root finding.
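The value-iteration scheme just described can be sketched generically. In the code below, `expect_step` is a placeholder for whatever operator computes ⟨V(g + δg, τ + δτ)⟩ on the grid (in the paper, the PDE solver of section 3.2); the one-dimensional toy usage with a local-averaging operator is purely illustrative and not the paper's implementation:

```python
import numpy as np

def value_iteration(V_d, expect_step, c, dt, tol=1e-5, max_iter=50000):
    # Iterate V^n = max{ V_d, <V^{n-1}> - c dt } until max |V^n - V^{n-1}| < tol,
    # mirroring Eq. (4) and the termination criterion described in the text.
    V = V_d.copy()
    for _ in range(max_iter):
        V_new = np.maximum(V_d, expect_step(V) - c * dt)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

# Toy usage: 1-D belief grid with a local-averaging stand-in for the expectation.
g = np.linspace(0.0, 1.0, 101)
V_d = np.maximum(g, 1.0 - g)  # expected return for deciding immediately
smooth = lambda V: np.convolve(np.pad(V, 1, mode="edge"), [0.25, 0.5, 0.25], mode="valid")
V = value_iteration(V_d, smooth, c=0.1, dt=0.01)
```

Because each iterate takes a maximum with V_d, the value function never falls below the immediate-decision return, and the region where the two coincide is exactly where the optimal policy stops accumulating.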
The RR task, in which the aim is to maximize the reward rate, requires the use of average-reward Dynamic Programming [9, 11], based on the average-adjusted expected return, Ṽ(g, τ). If ρ denotes the reward rate (avg. reward per unit time, RR in Eq. (3)), this expected return penalizes the passage of some time δt by −ρδt, and can be interpreted as how much better or worse the current state is than the average. It is relative to an arbitrary baseline, such that adding a constant to this return for all states does not change the resulting policy [11]. We remove this additional degree of freedom by fixing the average Ṽ(·, ·) at the beginning of a trial (where g = 1/2) to ⟨Ṽ(1/2, τ)⟩_{p(τ)} = 0, where the expectation is with respect to the steady-state distribution of τ. Overall, this leads to Bellman's equation,
$$ \tilde V(g, \tau) = \max\left\{ \tilde V_d(g),\ \left\langle \tilde V(g + \delta g, \tau + \delta\tau) \right\rangle_{p(\delta g, \delta\tau \mid g, \tau)} - (c + \rho)\,\delta t \right\}, \qquad (5) $$
with the average-adjusted expected return for immediate decisions given by
$$ \tilde V_d(g) = \max\left\{ g - \rho\,(t_i + (1 - g)\,t_p),\ 1 - g - \rho\,(t_i + g\,t_p) \right\}. \qquad (6) $$
The latter results from a decision being followed by the inter-trial interval t_i and an eventual penalty time t_p for incorrect choices, after which the average-adjusted expected return is ⟨Ṽ(1/2, τ)⟩ = 0, as previously chosen. The value function is again computed by value iteration, assuming a known ρ. The correct ρ itself is found in an outer loop, by root-finding on the consistency condition ⟨Ṽ(1/2, τ)⟩ = 0, as illustrated in Fig. 1(b).

3.2 Finding ⟨V(g + δg, τ + δτ)⟩ as the solution to a PDE

Performing value iteration on Eq. (4) requires computing the expectation ⟨V(g + δg, τ + δτ)⟩_{p(δg, δτ | g, τ)} on a discretized (g, τ) space. Naïvely, we could perform the required integration by the rectangle method or related methods, but this has several disadvantages. First, the method scales quadratically in the size of the (g, τ) space.
Second, with δt → 0, p(δg, δτ | g, τ) becomes singular, such that small time discretization requires even smaller state discretization. Third, it requires explicit computation of p(δg, δτ | g, τ), which might be cumbersome. Instead, we borrow methods from stochastic optimal control [12] to find the expectation as the solution to a partial differential equation (PDE). To do so, we link V(g, τ) to ⟨V(g + δg, τ + δτ)⟩ by considering how g and τ evolve from some time t to time t + δt. Defining u(g, τ, t) ≡ V(g, τ) and u(g, τ, t + δt) ≡ ⟨V(g + δg, τ + δτ)⟩, and replacing this expectation by its second-order Taylor expansion around (g, τ), we find that, with δt → 0, we have
$$ \frac{\partial u}{\partial t} = \left( \frac{\langle dg \rangle}{dt}\frac{\partial}{\partial g} + \frac{\langle d\tau \rangle}{dt}\frac{\partial}{\partial \tau} + \frac{\langle dg^2 \rangle}{2\,dt}\frac{\partial^2}{\partial g^2} + \frac{\langle d\tau^2 \rangle}{2\,dt}\frac{\partial^2}{\partial \tau^2} + \frac{\langle dg\,d\tau \rangle}{dt}\frac{\partial^2}{\partial g\,\partial \tau} \right) u, \qquad (7) $$
with all expectations implicitly conditional on g and τ. If we approximate the partial derivatives with respect to g and τ by their central finite differences, and denote uⁿ_{kj} ≡ u(g_k, τ_j, t) and uⁿ⁺¹_{kj} ≡ u(g_k, τ_j, t + δt) (g_k and τ_j are the discretized state nodes), applying the Crank-Nicolson method [13] to the above PDE results in the linear system
$$ L^{n+1} u^{n+1} = L^n u^n, \qquad (8) $$
where both Lⁿ and Lⁿ⁺¹ are sparse matrices, and the u's are vectors that contain all u_{kj}. Computing ⟨V(g + δg, τ + δτ)⟩ now conforms to solving the above linear system with respect to uⁿ⁺¹. As the process on g and τ only appears through its infinitesimal moments in Eq. (7), this approach neither requires explicit computation of p(δg, δτ | g, τ) nor suffers from singularities in this density. It still scales quadratically with the state space discretization, but we achieve linear scaling by switching from the Crank-Nicolson to the Alternating Direction Implicit (ADI) method [13] (see supplement for details). This method splits the computation into two steps of size δt/2, in each of which the partial derivatives are only implicit with respect to one of the two state space dimensions.
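Because each ADI half-step is implicit in only one state dimension, every one-dimensional slice yields a tridiagonal linear system, which the classic Thomas algorithm solves in O(n) time. A self-contained sketch (our own code, not the paper's implementation):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (length n-1),
    b the diagonal (length n), c the super-diagonal (length n-1),
    d the right-hand side. Forward elimination + back substitution, O(n)."""
    n = len(b)
    cp = np.empty(n - 1)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]   # eliminate the sub-diagonal entry
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

This O(n) cost per slice is what turns the quadratic scaling of a dense Crank-Nicolson solve into the linear scaling claimed for the ADI scheme.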
This results in a tri-diagonal structure of the linear system, and an associated reduction of the computational complexity, while preserving the numerical robustness of the Crank-Nicolson method [13]. The PDE approach requires us to specify how V (and thus u) behaves at the boundaries, g ∈ {0, 1} and τ ∈ {0, ∞}. Beliefs g ∈ {0, 1} imply complete certainty about the latent variable z, such that a decision is imminent. This implies that, at these beliefs, we have V(g, τ) = V_d(g) for all τ. With τ → ∞, the reliability of the momentary evidence becomes overwhelming, such that the latent variable z is again immediately known, resulting in V(g, τ) → V_d(1) (= V_d(0)) for all g. For τ = 0, the infinitesimal moments are ⟨dg⟩ = ⟨dg²⟩ = ⟨dτ²⟩ = 0 and ⟨dτ⟩ = ηµ dt, such that g remains unchanged and τ drifts deterministically towards positive values. Thus, there is no leakage of V towards τ < 0, which makes this lower boundary well-defined.

4 Results

We first provide an example of an optimal policy and how it shapes behavior, followed by how different parameters of the process on the evidence reliability τ and different task parameters influence the shape of the optimal bound g_θ(τ). Then, we compare the performance of these bounds to the performance that can be achieved by simple heuristics, like the diffusion model with a constant bound, or a bound in belief independent of τ. In all cases, we computed the optimal bounds by dynamic programming on a 200 × 200 grid on (g, τ), using δt = 0.005. g spanned its whole [0, 1] range, and τ ranged from 0 to twice the 99th percentile of its steady-state distribution. We used max_{g,τ} |Vⁿ(g, τ) − Vⁿ⁻¹(g, τ)| ≤ 10⁻³ δt as the convergence criterion for value iteration.

4.1 Decision-making with reliability-dependent bounds

Figure 2(a) shows one example of an optimal policy (black lines) for an ER task with evidence accumulation cost of c = 0.1 and τ-process parameters µ = 0.4, σ = 0.2, and η = 1. This policy can be understood as follows.
Figure 2: Decision-making with the optimal policy. (a) shows the optimal bounds, at g_θ(τ) and 1 − g_θ(τ) (black), and an example trajectory (grey). The dashed curve shows the steady-state distribution of the τ-process. (b) shows the τ-component (evidence reliability) of this example trajectory over time. Even though not a jump-diffusion process, the CIR process can feature jump-like transitions, here at around 1s. (c) shows the g-component (belief) of this trajectory over time (grey), and how the change in evidence reliability changes the bounds on this belief (black). Note that the bound fluctuates rapidly due to the rapid fluctuation of τ, even though the bound itself is continuous in τ.

At the beginning of each trial, the decision maker starts at g(0) = 1/2 and some τ(0) drawn from the steady-state distribution over τ's (dashed curve in Fig. 2(a)). When accumulating evidence, the decision maker's belief g(t) starts diffusing and drifting towards either 1 or 0, following the dynamics described in Eqs. (1) and (2). At the same time, the reliability τ(t) changes according to the CIR process, Eq. (1) (Fig. 2(b)). In combination, this leads to a two-dimensional trajectory in the (g, τ) space (Fig. 2(a), grey line). A decision is reached once this trajectory reaches either g_θ(τ) or 1 − g_θ(τ) (Fig. 2(a), black lines). In belief space, this corresponds to a bound that changes with the current reliability. For the example trajectory in Fig. 2, this reliability jumps to higher values after around 1s (Fig. 2(b)), which leads to a corresponding jump of the bound to higher levels of confidence (black line in Fig. 2(c)). In general, the optimal bound is an increasing function of τ.
Thus, the larger the current reliability of the momentary evidence, the more sense it makes to accumulate evidence to a higher level of confidence before committing to a choice. This is because a low evidence reliability implies that, at least in the near future, this reliability will remain low, such that it does not make sense to pay the cost for accumulating evidence without the associated gain in choice accuracy. A higher evidence reliability implies that high levels of confidence, and associated choice accuracy, are reached more quickly, and thus at a lower cost. This also indicates that a decision bound increasing in τ does not imply that high-reliability evidence will lead to slower choices. In fact, the opposite is true, as a faster move towards higher confidence for high reliability causes faster decisions in such cases.

4.2 Optimal bounds for different reliability/task parameters

To see how different parameters of the CIR process on the reliability influence the optimal decision bound, we compared bounds while systematically varying one parameter at a time. In all cases, we assumed an ER task with c = 0.1, and default CIR process parameters µ = 0.4, σ = 0.2, η = 2. Figure 3(a) shows how the bound differs for different means µ of the CIR process. A lower mean implies that, on average, the task will be harder, such that more evidence needs to be accumulated to reach the same level of performance. This accumulation comes at a cost, such that the optimal policy is to stop accumulating earlier in harder tasks. This causes lower decision bounds for smaller µ. Fig. 3(b) shows that the optimal bound only very weakly depends on the standard deviation σ of the reliability process. This standard deviation determines how far τ can deviate from its mean, µ. The weak dependence of the bound on this parameter shows that it is not that important to which degree τ fluctuates, as long as it fluctuates with the same speed, η.
This speed has a strong influence on the optimal bound, as shown in Fig. 3(c). For a slowly changing τ (low η), the current τ is likely to remain the same in the future, such that the optimal bound strongly depends on τ. For a rapidly changing τ, in contrast, the current τ does not provide much information about future reliabilities, such that the optimal bound features only a very weak dependence on the current evidence reliability. Similar observations can be made for changes in task parameters. Figure 3(d) illustrates that a larger cost c generally causes lower bounds, as it pays less to accumulate evidence.

Figure 3: Optimal bounds for different reliability process / task parameters. In the top row, we vary (a) the mean µ, (b) the standard deviation σ, or (c) the speed η of the CIR process that describes the reliability time-course. In the bottom row, we vary (d) the momentary cost c in an ER task, and, in an RR task, (e) the inter-trial interval t_i, or (f) the penalty time t_p. In all panels, solid lines show optimal bounds, and dashed lines show steady-state densities of τ (vertically re-scaled).

In RR tasks, the inter-trial timing also plays an important role. If the inter-trial interval t_i is long, performing well in single trials is more important, as there are fewer opportunities per unit time to gather reward. In fact, for t_i → ∞, the optimal bound in RR tasks becomes equivalent to that of an ER task [3].
For short t_i's, in contrast, quick, uninformed decisions are better, as many of them can be performed in quick succession, and they are bound to be correct in at least half of the trials. This is reflected in optimal bounds that are significantly lower for shorter t_i's (Fig. 3(e)). A larger penalty time, t_p, in contrast, causes a rise in the optimal bound (Fig. 3(f)), as it is better to make slower but more accurate decisions if incorrect decisions are penalized by longer waits between consecutive trials.

4.3 Performance comparison with alternative heuristics

As the previous examples have shown, the optimal policy is, due to its two-dimensional nature, not only hard to compute but might also be hard to implement. For these reasons we investigated whether simpler, one-dimensional heuristics were able to achieve comparable performance. We focused on two heuristics in particular. First, we considered standard diffusion models [1, 2] that trigger decisions as soon as the accumulated evidence, x(t) (Eq. (1)), not weighted by τ, reaches one of the time-invariant bounds at x_θ and −x_θ. These models have been shown to feature optimal performance when the evidence reliability is constant within single trials [2, 3], and electrophysiological recordings have provided support for their implementation in neural substrate [14, 15]. Diffusion models use the unweighted x(t) in Eq. (1) and thus do not perform Bayes-optimal inference if the evidence reliability varies within single trials. For this reason, we considered a second heuristic that performs Bayes-optimal inference by Eq. (2), with time-invariant bounds X_θ and −X_θ on X(t). This heuristic deviates from the optimal policy only by not taking into account the bound's dependence on the current reliability, τ. We compared the performance of the optimal bound with the two heuristics exhaustively by discretizing a subspace of all possible reliability process parameters.
The comparison is shown only for the ER task with accumulation cost c = 0.1, but we observed qualitatively similar results for other accumulation costs, and for RR tasks with various combinations of c, t_i and t_p. For a fair comparison, we tuned, for each set of reliability process parameters, the bound of each of the heuristics such that it maximized the associated ER / RR. This optimization was performed by the Subplex algorithm [16] in the NLopt toolkit [17], where the ER / RR was found by Monte Carlo simulations.

Figure 4: Expected reward comparison between optimal bound and heuristics. (a) shows the reward rate difference (white = no difference, dark green = optimal bound ≥ 2× higher expected reward) between optimal bound and diffusion model for different τ-process parameters. The process SD is shown as a fraction of the mean (e.g. µ = 1.4, σ̃ = 0.8 implies σ = 1.4 × 0.8 = 1.12). (b) The optimal bound (black, for η = 0 independent of µ and σ) and effective tuned diffusion model bounds (blue, dotted curves) for speed η = 0 and two different mean / SD combinations (blue, dotted rectangles in (a)). The dashed curves show the associated τ steady-state distributions. (c) same as (a), but comparing the optimal bound to a constant bound on belief. (d) The optimal bounds (solid curves) and tuned constant bounds (dotted curves) for different η and the same µ / σ combination (red rectangles in (c)). The dashed curve shows the steady-state distribution of τ.

4.3.1 Comparison to diffusion models

Figure 4(a) shows that for very slow process speeds (e.g. η = 0), the diffusion model performance is comparable to that of the optimal bound found by dynamic programming. At higher speeds (e.g. η = 16), however, diffusion models are no match for the optimal bound anymore.
Their performance degrades most strongly when the reliability SD is large and close to the reliability’s mean (dark green area for η = 16, large ˜σ, in Fig. 4(a)). This pattern can be explained as follows. In the extreme case of η = 0, the evidence reliability remains unchanged within single trials. Then, by Eq. (2), we have X(t) = τx(t), such that a constant bound xθ on x(t) corresponds to a τ-dependent bound Xθ = τxθ on X(t). Mapped into belief by Eq. (2), this results in a sigmoidal bound that closely follows the similarly rising optimal bound. Figure 4(b) illustrates that, depending on the steady-state distribution of τ, the tuned diffusion model bound focuses on approximating different regions of the optimal bound. For a non-stationary evidence reliability, η > 0, the relation between X(t) and x(t) changes for different trajectories of τ(t). In this case, the diffusion model bounds cannot be directly related to a bound in X(t) (or, equivalently, in belief g(t)). As a result, the effective diffusion model bound in belief fluctuates strongly, causing potentially large deviations from the optimal bound. This is illustrated in Fig. 4(a) by a significant loss in performance for larger process speeds. This loss is most pronounced for large spreads of τ (i.e. a large σ). For small spreads, in contrast, τ(t) remains mostly stationary, which is again well approximated by a stationary τ whose associated optimal policy is well captured by a diffusion model bound. To summarize, diffusion models approximate the optimal bound well as long as the reliability within single trials is close to stationary. As soon as this reliability starts to fluctuate significantly within single trials (e.g. large η and σ), the performance of diffusion models deteriorates.

4.3.2 Comparison to a bound that does not depend on evidence reliability

In contrast to diffusion models, a heuristic, constant bound in belief (i.e.
either in X(t) or g(t)), as used in [8], causes a drop in performance for slow rather than fast changes of the evidence reliability. This is illustrated in Fig. 4(c), where the performance loss is largest for η = 0 and large σ, and drops with an increase in η, σ, and µ. Figure 4(d) shows why this performance loss is particularly pronounced for slow changes in evidence reliability (i.e. low η). As can be seen, the optimal bound becomes flatter as a function of τ when the process speed η increases. As previously mentioned, for large η, this is due to the current reliability providing little information about future reliability. As a consequence, the optimal bound is in these cases well approximated by a constant bound in belief that completely ignores the current reliability. For smaller η, the optimal bound becomes more strongly dependent on the current reliability τ, such that a constant bound provides a worse approximation, and thus a larger loss in performance. The dependence of the performance loss on the mean µ and standard deviation σ of the steady-state reliability arises similarly. As has been shown in Fig. 3(a), a larger mean reliability µ causes the optimal bound to become flatter as a function of the current reliability, such that a constant-bound approximation performs better for larger µ, as confirmed in Fig. 4(c). The smaller performance loss for smaller spreads of τ (i.e. smaller σ) is not explained by a change in the optimal bound, which is mostly independent of the exact value of σ (Fig. 3(b)). Instead, it arises from the constant bound focusing its approximation on regions of the optimal bound where the steady-state distribution of τ has high density (dashed curves in Fig. 3(b)). The size of this region shrinks with shrinking σ, thus improving the constant-bound approximation of the optimal bound and the associated performance.
Overall, a constant bound in belief features competitive performance compared to the optimal bound if the evidence reliability changes rapidly (large η), if the task is generally easy (large µ), and if the reliability does not fluctuate strongly within single trials (small σ). For widely and slowly changing evidence reliability τ in difficult tasks, in contrast, a constant bound in belief provides a poor approximation to the optimal bound.

5 Discussion

Our work offers the following contributions. First, it pushes the boundaries of the theory of optimal human and animal decision-making by moving towards more realistic tasks in which the reliability changes over time within single trials. Second, it shows how to derive the optimal policy while avoiding the methodological caveats that have plagued previous, related approaches [3]. Third, it demonstrates that optimal behavior is achieved by a bound on the decision maker’s belief that depends on the current evidence reliability. Fourth, it explains how the shape of the bound depends on task contingencies and on the parameters that determine how the evidence reliability changes with time (in contrast to, e.g., [18], where the utilized heuristic policy is independent of the τ process). Fifth, it shows that alternative decision-making heuristics can match the optimal bound’s performance only for a particular subset of these parameters, outside of which their performance deteriorates. As derived in Eq. (2), optimal evidence accumulation with time-varying reliability is achieved by weighting the momentary evidence by its current reliability [8]. Previous work has shown that humans and other animals optimally accumulate evidence if its reliability remains constant within a trial [5, 3], or changes with a known time-course [8]. It remains to be clarified whether humans and other animals can optimally accumulate evidence if the time-course of its reliability is not known in advance.
They have the ability to estimate this reliability on a trial-by-trial basis [19, 20], but how quickly this estimate is formed remains unclear. In this respect, our model predicts that access to the momentary evidence is sufficient to estimate its reliability immediately and with high precision. This property arises from the Wiener process being only an approximation of physical realism. Further work will extend our approach to processes in which this reliability is not known with absolute certainty, and that can feature jumps. We do not expect such process modifications to induce qualitative changes to our predictions. Our theory predicts that, for optimal decision-making, the decision bounds need to be a function of the current evidence reliability that depends on the parameters describing the reliability time-course. This prediction can be used to guide the design of experiments that test whether humans and other animals are optimal in the increasingly realistic scenarios addressed in this work. While we do not expect our quantitative prediction to be a perfect match to the observed behavior, we expect decision makers to qualitatively change their decision strategies according to the optimal strategy for different reliability process parameters. Having shown in which cases simpler heuristics fail to match the optimal performance then allows us to focus on such cases to validate our theory.

References

[1] Roger Ratcliff. A theory of memory retrieval. Psychological Review, 85(2):59–108, 1978.
[2] Rafal Bogacz, Eric Brown, Jeff Moehlis, Philip J. Holmes, and Jonathan D. Cohen. The physics of optimal decision making: A formal analysis of models of performance in two-alternative forced-choice tasks. Psychological Review, 113(4):700–765, 2006.
[3] Jan Drugowitsch, Rubén Moreno-Bote, Anne K. Churchland, Michael N. Shadlen, and Alexandre Pouget. The cost of accumulating evidence in perceptual decision making. The Journal of Neuroscience, 32(11):3612–3628, 2012.
[4] John Palmer, Alexander C. Huk, and Michael N. Shadlen. The effect of stimulus strength on the speed and accuracy of a perceptual decision. Journal of Vision, 5:376–404, 2005.
[5] Roozbeh Kiani, Timothy D. Hanks, and Michael N. Shadlen. Bounded integration in parietal cortex underlies decisions even when viewing duration is dictated by the environment. The Journal of Neuroscience, 28(12):3017–3029, 2008.
[6] Rafal Bogacz, Peter T. Hu, Philip J. Holmes, and Jonathan D. Cohen. Do humans produce the speed-accuracy trade-off that maximizes reward rate? The Quarterly Journal of Experimental Psychology, 63(5):863–891, 2010.
[7] John C. Cox, Jonathan E. Ingersoll Jr., and Stephen A. Ross. A theory of the term structure of interest rates. Econometrica, 53(2):385–408, 1985.
[8] Jan Drugowitsch, Gregory C. DeAngelis, Eliana M. Klier, Dora E. Angelaki, and Alexandre Pouget. Optimal multisensory decision-making in a reaction-time task. eLife, 2014. doi: 10.7554/eLife.03005.
[9] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley Series in Probability and Statistics. John Wiley & Sons, Inc., 2005.
[10] Richard E. Bellman. Dynamic Programming. Princeton University Press, 1957.
[11] Sridhar Mahadevan. Average reward reinforcement learning: Foundations, algorithms, and empirical results. Machine Learning, 22:159–195, 1996.
[12] Wendell H. Fleming and Raymond W. Rishel. Deterministic and Stochastic Optimal Control. Stochastic Modelling and Applied Probability. Springer-Verlag, 1975.
[13] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 3rd edition, 2007.
[14] Jamie D. Roitman and Michael N. Shadlen. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. The Journal of Neuroscience, 22(21):9475–9489, 2002.
[15] Mark E. Mazurek, Jamie D.
Roitman, Jochen Ditterich, and Michael N. Shadlen. A role for neural integrators in perceptual decision making. Cerebral Cortex, 13:1257–1269, 2003.
[16] Thomas Harvey Rowan. Functional Stability Analysis of Numerical Algorithms. PhD thesis, Department of Computer Sciences, University of Texas at Austin, 1990.
[17] Steven G. Johnson. The NLopt nonlinear-optimization package. URL http://ab-initio.mit.edu/nlopt.
[18] Sophie Deneve. Making decisions with unknown sensory reliability. Frontiers in Neuroscience, 6(75), 2012. ISSN 1662-453X. doi: 10.3389/fnins.2012.00075.
[19] Marc O. Ernst and Martin S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415:429–433, 2002.
[20] Christopher R. Fetsch, Amanda H. Turner, Gregory C. DeAngelis, and Dora E. Angelaki. Dynamic reweighting of visual and vestibular cues during self-motion perception. The Journal of Neuroscience, 29(49):15601–15612, 2009.
An Autoencoder Approach to Learning Bilingual Word Representations

Sarath Chandar A P1∗, Stanislas Lauly2∗, Hugo Larochelle2, Mitesh M Khapra3, Balaraman Ravindran1, Vikas Raykar3, Amrita Saha3
1Indian Institute of Technology Madras, 2Université de Sherbrooke, 3IBM Research India
apsarathchandar@gmail.com, {stanislas.lauly,hugo.larochelle}@usherbrooke.ca, {mikhapra,viraykar,amrsaha4}@in.ibm.com, ravi@cse.iitm.ac.in
∗Both authors contributed equally

Abstract

Cross-language learning allows one to use training data from one language to build models for a different language. Many approaches to bilingual learning require that we have word-level alignment of sentences from parallel corpora. In this work we explore the use of autoencoder-based methods for cross-language learning of vectorial word representations that are coherent between two languages, while not relying on word-level alignments. We show that by simply learning to reconstruct the bag-of-words representations of aligned sentences, within and between languages, we can in fact learn high-quality representations and do without word alignments. We empirically investigate the success of our approach on the problem of cross-language text classification, where a classifier trained on a given language (e.g., English) must learn to generalize to a different language (e.g., German). In experiments on 3 language pairs, we show that our approach achieves state-of-the-art performance, outperforming a method exploiting word alignments and a strong machine translation baseline.

1 Introduction

The accuracy of Natural Language Processing (NLP) tools for a given language depends heavily on the availability of annotated resources in that language. For example, high-quality POS taggers [1], parsers [2], and sentiment analyzers [3] are readily available for English. However, this is not the case for many other languages such as Hindi, Marathi, Bodo, Farsi, and Urdu, for which annotated data is scarce.
This situation was acceptable in the past when only a few languages dominated the digital content available online and elsewhere. However, the ever increasing number of languages on the web today has made it important to accurately process natural language data in such resource-deprived languages as well. An obvious solution to this problem is to improve the annotated inventory of these languages, but the cost, time and effort required act as a natural deterrent to this. Another option is to exploit the unlabeled data available in a language. In this context, vectorial text representations have proven useful for multiple NLP tasks [4, 5]. It has been shown that meaningful representations, capturing syntactic and semantic similarity, can be learned from unlabeled data. While the majority of previous work on vectorial text representations has concentrated on the monolingual case, there has also been considerable interest in learning word and document representations that are aligned across languages [6, 7, 8, 9, 10, 11, 12]. Such aligned representations allow the use of resources from a resource-fortunate language to develop NLP capabilities in a resource-deprived language. One approach to cross-lingual exploitation of resources is to project parameters learned from the annotated data of one language to another language [13, 14, 15, 16, 17]. These approaches rely on a bilingual resource such as a Machine Translation (MT) system. Recent attempts at learning common bilingual representations [9, 10, 11] aim to eliminate the need for such an MT system. A common property of these approaches is that a word-level alignment of translated sentences is leveraged to derive a regularization term relating word embeddings across languages. Such methods not only eliminate the need for an MT system but also outperform MT-based projection approaches.
In this paper, we experiment with methods that learn bilingual word representations without word-to-word alignments of bilingual corpora during training. Unlike previous approaches, we only require aligned sentences and do not rely on word-level alignments (e.g., extracted using GIZA++, as is usual), simplifying the learning procedure. To do so, we propose and investigate bilingual autoencoder models that learn hidden encoder representations of paired bag-of-words sentences that are not only informative of the original bag-of-words but also predictive of the other language. Word representations can then easily be extracted from the encoder and used in the context of a supervised NLP task. Specifically, we demonstrate the quality of these representations for the task of cross-language document classification, where a labeled data set can be available in one language, but not in another one. As we’ll see, our approach is able to reach state-of-the-art performance, outperforming a method exploiting word alignments and a strong machine translation baseline.

2 Autoencoder for Bags-of-Words

Let x be the bag-of-words representation of a sentence. Specifically, each xi is a word index from a fixed vocabulary of V words. As this is a bag-of-words, the order of the words within x does not correspond to the word order in the original sentence. We wish to learn a D-dimensional vectorial representation of our words from a training set of sentence bags-of-words {x(t)}, t = 1, . . . , T. We propose to achieve this by using an autoencoder model that encodes an input bag-of-words x with a sum of the representations (embeddings) of the words present in x, followed by a non-linearity. Specifically, let matrix W be the D × V matrix whose columns are the vector representations for each word. The encoder’s computation will involve summing over the columns of W for each word in the bag-of-words. We will denote this encoder function φ(x).
Then, using a decoder, the autoencoder will be trained to optimize a loss function that measures how predictive the encoder representation φ(x) is of the original bag-of-words. There are different variations we can consider in the design of the encoder/decoder and the choice of loss function. One must be careful however, as certain choices can be inappropriate for training on word observations, which are intrinsically sparse and high-dimensional. In this paper, we explore and compare two different approaches, described in the next two sub-sections.

2.1 Binary bag-of-words reconstruction training with merged bags-of-words

In the first approach, we start from the conventional autoencoder architecture, which minimizes a cross-entropy loss that compares a binary vector observation with a decoder reconstruction. We thus convert the bag-of-words x into a fixed-size but sparse binary vector v(x), which is such that v(x)_{xi} is 1 if word xi is present in x and otherwise 0. From this representation, we obtain an encoder representation by multiplying v(x) with the word representation matrix W:

a(x) = c + W v(x),  φ(x) = h(a(x))  (1)

where h(·) is an element-wise non-linearity such as the sigmoid or hyperbolic tangent, and c is a D-dimensional bias vector. Encoding thus involves summing the word representations of the words present at least once in the bag-of-words. To produce a reconstruction, we parametrize the decoder using the following non-linear form:

v̂(x) = sigm(V φ(x) + b)  (2)

where V = W^T, b is the bias vector of the reconstruction layer and sigm(a) = 1/(1 + exp(−a)) is the sigmoid non-linearity. Then, the reconstruction is compared to the original binary bag-of-words as follows:

ℓ(v(x)) = − Σ_{i=1}^{V} [ v(x)_i log(v̂(x)_i) + (1 − v(x)_i) log(1 − v̂(x)_i) ]  (3)

Training proceeds by optimizing the sum of reconstruction cross-entropies across the training set, e.g., using stochastic or mini-batch gradient descent.
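The forward pass and loss of Eqs. (1)–(3) can be sketched in a few lines of NumPy. This is our illustration, not the authors' code: we store the word representations as a V × D array (rows, rather than the paper's D × V columns), use tanh for h, and tie the decoder weights as V = W^T as in the text.

```python
import numpy as np

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

V, D = 1000, 40                          # vocabulary size and embedding size
rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((V, D))   # row i = D-dim embedding of word i
c = np.zeros(D)                          # encoder bias
b = np.zeros(V)                          # reconstruction bias

def encode(v):
    """Eq. (1): phi(x) = h(c + W v(x)), with h = tanh."""
    return np.tanh(c + W.T @ v)

def reconstruct(phi):
    """Eq. (2) with tied weights V = W^T."""
    return sigm(W @ phi + b)

def cross_entropy(v, v_hat, eps=1e-12):
    """Eq. (3): binary bag-of-words reconstruction cross-entropy."""
    return -np.sum(v * np.log(v_hat + eps)
                   + (1 - v) * np.log(1 - v_hat + eps))

# a sentence containing words {3, 17, 42} as a sparse binary vector
v = np.zeros(V)
v[[3, 17, 42]] = 1.0
loss = cross_entropy(v, reconstruct(encode(v)))
```

Gradient updates of W, c and b against this loss (omitted here) give the stochastic or mini-batch training loop described above.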
Note that, since the binary bags-of-words are very high-dimensional (the dimensionality corresponds to the size of the vocabulary, which is typically large), the above training procedure, which aims at reconstructing the complete binary bag-of-words, will be slow. Since we will later be training on millions of sentences, training on each individual sentence bag-of-words will be expensive. Thus, we propose a simple trick, which exploits the bag-of-words structure of the input. Assuming we are performing mini-batch training (where a mini-batch contains a list of the bags-of-words of adjacent sentences), we simply propose to merge the bags-of-words of the mini-batch into a single bag-of-words and perform an update based on that merged bag-of-words. The resulting effect is that each update is as efficient as in stochastic gradient descent, but the number of updates per training epoch is divided by the mini-batch size. As we’ll see in the experimental section, this trick produces good word representations, while sufficiently reducing training time. We note that, additionally, we could have used the stochastic approach proposed by Dauphin et al. [18] for reconstructing binary bag-of-words representations of documents, to further improve the efficiency of training. They use importance sampling to avoid reconstructing the whole V-dimensional input vector.

2.2 Tree-based decoder training

The previous autoencoder architecture worked with a binary vectorial representation of the input bag-of-words. In the second autoencoder architecture we investigate, we consider an architecture that instead works with the bag (unordered list) representation more directly. First, the encoder representation will now involve a sum of the representations of all words, reflecting the relative frequency of each word:

a(x) = c + Σ_{i=1}^{|x|} W_{·,xi},  φ(x) = h(a(x))
(4) Moreover, decoder training will assume that, from the decoder’s output, we can obtain a probability distribution p(x̂|φ(x)) over any word x̂ observed at the reconstruction output layer. Then, we can treat the input bag-of-words as a |x|-trials multinomial sample from that distribution and use as the reconstruction loss its negative log-likelihood:

ℓ(x) = − Σ_{i=1}^{|x|} log p(x̂ = xi|φ(x))  (5)

We now must ensure that the decoder can compute p(x̂ = xi|φ(x)) efficiently from φ(x). Specifically, we’d like to avoid a procedure scaling linearly with the vocabulary size V, since V will be very large in practice. This precludes any procedure that would compute the numerator of p(x̂ = w|φ(x)) for each possible word w separately and normalize it so it sums to one. We instead opt for an approach borrowed from the work on neural network language models [19, 20]. Specifically, we use a probabilistic tree decomposition of p(x̂ = xi|φ(x)). Let’s assume each word has been placed at a leaf of a binary tree. We can then treat the sampling of a word as a stochastic path from the root of the tree to one of the leaves. We denote as l(x) the sequence of internal nodes in the path from the root to a given word x, with l(x)_1 always corresponding to the root. We will denote as π(x) the vector of associated left/right branching choices on that path, where π(x)_k = 0 means the path branches left at internal node l(x)_k and otherwise branches right if π(x)_k = 1. Then, the probability p(x̂ = x|φ(x)) of reconstructing a certain word x observed in the bag-of-words is computed as

p(x̂|φ(x)) = Π_{k=1}^{|π(x̂)|} p(π(x̂)_k|φ(x))  (6)

where p(π(x̂)_k|φ(x)) is output by the decoder. By using a full binary tree of words, the number of different decoder outputs required to compute p(x̂|φ(x)) will be logarithmic in the vocabulary size V. Since there are |x| words in the bag-of-words, at most O(|x| log V) outputs are required from the decoder.
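The tree decomposition of Eq. (6) can be sketched as follows. The encoding of a path as two index lists is our own convention for illustration; V_dec and b_dec play the role of the (V−1) × D decoder matrix and (V−1)-dimensional bias of the logistic-regression branch model.

```python
import numpy as np

def sigm(a):
    return 1.0 / (1.0 + np.exp(-a))

def word_log_prob(phi, path_nodes, path_bits, V_dec, b_dec):
    """log p(xhat | phi(x)) as in Eq. (6): the product of left/right
    branching probabilities along the word's root-to-leaf path.
    path_nodes holds the internal-node indices l(x)_k on the path,
    path_bits the branching choices pi(x)_k (0 = left, 1 = right);
    each branch is a logistic regression on phi."""
    logp = 0.0
    for node, bit in zip(path_nodes, path_bits):
        p_right = sigm(b_dec[node] + V_dec[node] @ phi)
        logp += np.log(p_right if bit == 1 else 1.0 - p_right)
    return logp
```

Since each path in a full binary tree over V words has length about log2(V), a bag of |x| words touches at most O(|x| log V) decoder outputs, as stated above.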
This is of course a worst-case scenario, since words will share internal nodes between their paths, for which the decoder output can be computed just once. As for organizing words into a tree, as in Larochelle and Lauly [21] we used a random assignment of words to the leaves of the full binary tree, which we have found to work well in practice. Finally, we need to choose a parametrized form for the decoder. We choose the following form:

p(π(x̂)_k = 1|φ(x)) = sigm(b_{l(x̂)_k} + V_{l(x̂)_k,·} φ(x))  (7)

where b is a (V−1)-dimensional bias vector and V is a (V−1) × D matrix. Each left/right branching probability is thus modeled with a logistic regression model applied on the encoder representation of the input bag-of-words φ(x).

3 Bilingual autoencoders

Let’s now assume that for each sentence bag-of-words x in some source language X, we have an associated bag-of-words y for this sentence translated in some target language Y by a human expert. Assuming we have a training set of such (x, y) pairs, we’d like to use it to learn representations in both languages that are aligned, such that pairs of translated words have similar representations. To achieve this, we propose to augment the regular autoencoder proposed in Section 2 so that, from the sentence representation in a given language, a reconstruction can be attempted of the original sentence in the other language. Specifically, we now define language-specific word representation matrices W^X and W^Y, corresponding to the languages of the words in x and y respectively. Let V^X and V^Y also be the number of words in the vocabulary of both languages, which can be different. The word representations however are of the same size D in both languages. For the binary reconstruction autoencoder, the bag-of-words representations extracted by the encoder become

φ(x) = h(c + W^X v(x)),  φ(y) = h(c + W^Y v(y))

and are similarly extended for the tree-based autoencoder.
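The pair of encoders with a shared bias can be sketched directly. This is an illustrative fragment (shapes follow the paper's D × V convention; the sizes and names are ours):

```python
import numpy as np

D, V_X, V_Y = 40, 1000, 1200             # embedding size, vocab sizes
rng = np.random.default_rng(0)
Wx = 0.01 * rng.standard_normal((D, V_X))  # W^X: columns = language-X words
Wy = 0.01 * rng.standard_normal((D, V_Y))  # W^Y: columns = language-Y words
c = np.zeros(D)                            # bias shared across languages

def phi_x(v):
    """phi(x) = h(c + W^X v(x)) for a binary bag-of-words v in language X."""
    return np.tanh(c + Wx @ v)

def phi_y(v):
    """phi(y) = h(c + W^Y v(y)) for language Y."""
    return np.tanh(c + Wy @ v)
```

Sharing c across the two encoders is the mechanism, described next, that encourages both languages to produce representations on the same scale.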
Notice that we share the bias c before the non-linearity across encoders, to encourage the encoders in both languages to produce representations on the same scale. From the sentence in either language, we want to be able to perform a reconstruction of the original sentence in both languages. In particular, given a representation in any language, we’d like a decoder that can perform a reconstruction in language X and another decoder that can reconstruct in language Y. Again, we use decoders of the form proposed in either Section 2.1 or 2.2 (see Figure 1), but let the decoders of each language have their own parameters (b^X, V^X) and (b^Y, V^Y). This encoder/decoder decomposition structure allows us to learn a mapping within each language and across the languages. Specifically, for a given pair (x, y), we can train the model to (1) construct y from x (loss ℓ(x, y)), (2) construct x from y (loss ℓ(y, x)), (3) reconstruct x from itself (loss ℓ(x)) and (4) reconstruct y from itself (loss ℓ(y)). We follow this approach in our experiments and optimize the sum of the corresponding 4 losses during training.

3.1 Joint reconstruction and cross-lingual correlation

We also considered incorporating two additional terms into the loss function, in an attempt to favour even more meaningful bilingual representations:

ℓ(x, y) + ℓ(y, x) + ℓ(x) + ℓ(y) + β ℓ([x, y], [x, y]) − λ · cor(a(x), a(y))  (8)

The term ℓ([x, y], [x, y]) is simply a joint reconstruction term, where both languages are simultaneously presented as input and reconstructed. The second term cor(a(x), a(y)) encourages correlation between the representations of each language. It is the sum of the scalar correlations between each pair a(x)_k, a(y)_k, across all dimensions k of the vectors a(x), a(y).¹ To obtain a stochastic estimate of the correlation, during training, small mini-batches are used.
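The mini-batch estimate of the correlation term in Eq. (8) can be sketched as follows. Treating rows of a mini-batch as paired sentences is our reading of "stochastic estimate", and the eps term is ours for numerical safety:

```python
import numpy as np

def correlation_term(A_x, A_y, eps=1e-8):
    """cor(a(x), a(y)) of Eq. (8): the sum over dimensions k of the sample
    Pearson correlation between a(x)_k and a(y)_k, estimated over a
    mini-batch whose rows are paired (translated) sentences."""
    Ax = A_x - A_x.mean(axis=0)          # center each dimension
    Ay = A_y - A_y.mean(axis=0)
    num = (Ax * Ay).sum(axis=0)
    den = np.sqrt((Ax ** 2).sum(axis=0) * (Ay ** 2).sum(axis=0)) + eps
    return float((num / den).sum())
```

Since the term enters Eq. (8) with a negative sign scaled by λ, maximizing it pushes the pre-activation vectors of translated sentence pairs to co-vary dimension by dimension.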
¹While we could have applied the correlation term on φ(x), φ(y) directly, applying it to the pre-activation vectors was found to be more numerically stable.

Figure 1: Left: Bilingual autoencoder based on the binary reconstruction error. Right: Tree-based bilingual autoencoder. In this example, they both reconstruct the bag-of-words for the English sentence “the dog barked” from its French translation “le chien a jappé”.

3.2 Document representations

Once we learn the language-specific word representation matrices W^X and W^Y as described above, we can use them to construct document representations, by using their columns as word vector representations. Given a document d written in language Z ∈ {X, Y} and containing m words z1, z2, . . . , zm, we represent it as the tf-idf weighted sum of its words’ representations, ψ(d) = Σ_{i=1}^{m} tf-idf(zi) · W^Z_{·,zi}. We use the document representations thus obtained to train our document classifiers, in the cross-lingual document classification task described in Section 5.

4 Related Work

Recent work that has considered the problem of learning bilingual representations of words has usually relied on word-level alignments. Klementiev et al. [9] propose to train simultaneously two neural network language models, along with a regularization term that encourages pairs of frequently aligned words to have similar word embeddings. Thus, the use of this regularization term requires one to first obtain word-level alignments from parallel corpora. Zou et al. [10] use a similar approach, with a different form for the regularizer and neural network language models as in [5]. In our work, we specifically investigate whether a method that does not rely on word-level alignments can learn comparably useful multilingual embeddings in the context of document classification. Looking more generally at neural networks that learn multilingual representations of words or phrases, we mention the work of Gao et al.
[22], which showed that a useful linear mapping between separately trained monolingual skip-gram language models could be learned. They too, however, rely on the specification of pairs of words in the two languages to align. Mikolov et al. [11] also propose a method for training a neural network to learn useful representations of phrases, in the context of a phrase-based translation model. In this case, phrase-level alignments (usually extracted from word-level alignments) are required. Recently, Hermann and Blunsom [23, 24] proposed neural network architectures and a margin-based training objective that, as in this work, does not rely on word alignments. We will briefly discuss this work in the experiments section.

5 Experiments

The techniques proposed in this paper enable us to learn bilingual embeddings which capture cross-language similarity between words. We propose to evaluate the quality of these embeddings by using them for the task of cross-language document classification. We followed closely the setup used by Klementiev et al. [9] and compare with their method, for which word representations are publicly available² (²http://people.mmci.uni-saarland.de/~aklement/data/distrib/). The setup is as follows. A labeled data set of documents in some language X is available to train a classifier; however, we are interested in classifying documents in a different language Y at test time. To achieve this, we leverage some bilingual corpora, which are not labeled with any document-level categories. This bilingual corpus is used to learn document representations that are coherent between languages X and Y. The hope is thus that we can successfully apply the classifier trained on document representations for language X directly to the document representations for language Y. Following this setup, we performed experiments on 3 data sets of language pairs: English/German (EN/DE), English/French (EN/FR) and English/Spanish (EN/ES).
5.1 Data

For learning the bilingual embeddings, we used sections of the Europarl corpus [25], which contains roughly 2 million parallel sentences. We considered 3 language pairs. We used the same preprocessing as used by Klementiev et al. [9]. We tokenized the sentences using NLTK [26], removed punctuation and lowercased all words. We did not remove stopwords. As for the labeled document classification data sets, they were extracted from sections of the Reuters RCV1/RCV2 corpora, again for the 3 pairs considered in our experiments. Following Klementiev et al. [9], we consider only documents which were assigned exactly one of the 4 top-level categories in the topic hierarchy (CCAT, ECAT, GCAT and MCAT). These documents are also pre-processed using a similar procedure as that used for the Europarl corpus. We used the same vocabularies as those used by Klementiev et al. [9] (varying in size between 35,000 and 50,000). For each pair of languages, our overall procedure for cross-language classification can be summarized as follows:

Train representation: Train bilingual word representations W^X and W^Y on sentence pairs extracted from Europarl for languages X and Y. Optionally, we also use the monolingual documents from RCV1/RCV2 to reinforce the monolingual embeddings (this choice is cross-validated). These non-parallel documents can be used through the losses ℓ(x) and ℓ(y) (i.e. by reconstructing x from x or y from y). Note that Klementiev et al. [9] also used this data when training word representations.

Train classifier: Train a document classifier on the Reuters training set for language X, where documents are represented using the word representations W^X (see Section 3.2). As in Klementiev et al. [9], we used an averaged perceptron trained for 10 epochs, for all the experiments.

Test-time classification: Use the classifier trained in the previous step on the Reuters test set for language Y, using the word representations W^Y to represent the documents.
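The classification half of this pipeline can be sketched end-to-end in NumPy. This is our illustrative code, not the authors': doc_representation implements the tf-idf weighted sum of Section 3.2, using raw term count × idf as the weight (one common tf-idf variant, as the exact variant is not specified here), and the classifier is a standard multiclass averaged perceptron, the classifier named in the setup.

```python
import numpy as np
from collections import Counter

def doc_representation(word_ids, W, idf):
    """psi(d) = sum_i tf-idf(z_i) * W[:, z_i]: tf-idf weighted sum of word
    vectors (columns of the language's D x V embedding matrix W)."""
    psi = np.zeros(W.shape[0])
    for z, count in Counter(word_ids).items():
        psi += count * idf[z] * W[:, z]
    return psi

def train_averaged_perceptron(X, y, n_classes, epochs=10):
    """Multiclass averaged perceptron: on a mistake, move the true class's
    weights toward the document vector and the predicted class's away."""
    n, d = X.shape
    W_clf = np.zeros((n_classes, d))
    W_sum = np.zeros_like(W_clf)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = int(np.argmax(W_clf @ xi))
            if pred != yi:
                W_clf[yi] += xi
                W_clf[pred] -= xi
            W_sum += W_clf          # running sum for averaging
    return W_sum / (epochs * n)

def predict(W_clf, X):
    return np.argmax(X @ W_clf.T, axis=1)
```

Because the embeddings of the two languages are aligned, the weights learned on language-X document representations can be applied unchanged to language-Y representations at test time.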
We trained the following autoencoders³: BAE-cr, which uses reconstruction-error-based decoder training (see Section 2.1), and BAE-tr, which uses tree-based decoder training (see Section 2.2). Models were trained for up to 20 epochs using the same data as described earlier. BAE-cr used mini-batch (of size 20) stochastic gradient descent, while BAE-tr used regular stochastic gradient. All results are for word embeddings of size D = 40, as in Klementiev et al. [9]. Further, to speed up the training for BAE-cr we merged each 5 adjacent sentence pairs into a single training instance, as described in Section 2.1. For all language pairs, the joint reconstruction weight β was fixed to 1 and the cross-lingual correlation factor λ to 4 for BAE-cr. For BAE-tr, none of these additional terms were found to be particularly beneficial, so we set their weights to 0 for all tasks. The other hyperparameters were tuned to each task using a training/validation set split of 80% and 20% and using the performance on the validation set of an averaged perceptron trained on the smaller training set portion (notice that this corresponds to a monolingual classification experiment, since the general assumption is that no labeled data is available in the test set language).

5.2 Comparison of the performance of different models

We now present the cross-language classification results obtained by using the embeddings produced by our two autoencoders. We also compare our models with the following approaches:

Klementiev et al.: This model uses word embeddings learned by a multitask neural network language model with a regularization term that encourages pairs of frequently aligned words to have similar word embeddings. From these embeddings, document representations are computed as described in Section 3.2.

³Our word representations and code are available at http://www.sarathchandar.in/crl.html

Table 1: Cross-lingual classification accuracy for 3 language pairs, with 1000 labeled examples.
                   EN→DE  DE→EN  EN→FR  FR→EN  EN→ES  ES→EN
BAE-tr              81.8   60.1   70.4   61.8   59.4   60.4
BAE-cr              91.8   74.2   84.6   74.2   49.0   64.4
Klementiev et al.   77.6   71.1   74.5   61.9   31.3   63.0
MT                  68.1   67.4   76.3   71.1   52.0   58.4
Majority Class      46.8   46.8   22.5   25.0   15.3   22.2

Table 2: Example English words along with the closest words both in English (EN) and German (DE), using the Euclidean distance between the embeddings learned by BAE-cr.

january    EN: january, march, october        DE: januar, märz, oktober
oil        EN: oil, supply, supplies, gas     DE: öl, boden, befindet, gerät
president  EN: president, i, mr, presidents   DE: präsident, präsidentin
microsoft  EN: microsoft, cds, insider        DE: microsoft, cds, warner
said       EN: said, told, say, believe       DE: gesagt, sagte, sehr, heute
market     EN: market, markets, single        DE: markt, marktes, märkte

MT: Here, test documents are translated to the language of the training documents using a standard phrase-based MT system, MOSES (http://www.statmt.org/moses/), which was trained using default parameters and a 5-gram language model on the Europarl corpus (the same as the one used for inducing our bilingual embeddings).

Majority Class: Test documents are simply assigned the most frequent class in the training set.

For the EN/DE language pairs, we directly report the results from Klementiev et al. [9]. For the other pairs (not reported in Klementiev et al. [9]), we used the embeddings available online and performed the classification experiment ourselves. Similarly, we generated the MT baseline ourselves. Table 1 summarizes the results, obtained using 1000 RCV training examples. We report results in both directions, i.e. from language X to Y and vice versa. The best performing method is always either BAE-cr or BAE-tr, with BAE-cr having the best performance overall. In particular, BAE-cr often outperforms the approach of Klementiev et al. [9] by a large margin.
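As noted above, BAE-cr training merged each 5 adjacent sentence pairs into a single training instance to speed things up. A minimal sketch of that bag-of-words merging step (the helper name and toy data are ours, not the paper's code):

```python
from collections import Counter

def merge_into_bags(sentence_pairs, merge_size=5):
    """Merge each group of `merge_size` adjacent (x, y) sentence pairs
    into one bag-of-words instance per language (sketch)."""
    instances = []
    for i in range(0, len(sentence_pairs), merge_size):
        group = sentence_pairs[i:i + merge_size]
        bag_x = Counter(w for x, _ in group for w in x.split())
        bag_y = Counter(w for _, y in group for w in y.split())
        instances.append((bag_x, bag_y))
    return instances

pairs = [("the cat sat", "die katze sass"),
         ("the dog ran", "der hund lief"),
         ("a cat ran", "eine katze lief")]
merged = merge_into_bags(pairs, merge_size=3)
print(len(merged), merged[0][0]["cat"])  # one merged instance; "cat" counted twice
```

Because the model only sees word counts, merging reduces the number of training instances by a factor of `merge_size` while keeping the same total word statistics, at the cost of a coarser alignment between the two languages.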
We also mention the recent work of Hermann and Blunsom [23], who proposed two neural network architectures for learning word and document representations using sentence-aligned data only. Instead of an autoencoder paradigm, they propose a margin-based objective that aims to make the representations of aligned sentences closer than those of non-aligned sentences. While their trained embeddings are not publicly available, they report results for the EN/DE classification experiments, with representations of the same size as here (D = 40) and trained on 500K EN/DE sentence pairs. Their best model reaches accuracies of 83.7% and 71.4% for the EN → DE and DE → EN tasks, respectively. One clear advantage of our model is that, unlike theirs, it can use additional monolingual data. Indeed, when we train BAE-cr with 500K EN/DE sentence pairs plus monolingual RCV documents (which come at no additional cost), we get accuracies of 87.9% (EN → DE) and 76.7% (DE → EN), still improving on their best model. If we do not use the monolingual data, BAE-cr's performance is worse but still competitive at 86.1% for EN → DE and 68.8% for DE → EN.

We also evaluate the effect of varying the amount of supervised training data for training the classifier. For brevity, we report only the results for the EN/DE pair, which are summarized in Figure 2. We observe that BAE-cr clearly outperforms the other models at almost all data sizes. More importantly, it performs remarkably well at very low data sizes (100), suggesting it learns very meaningful embeddings, though the method can still benefit from more labeled data (as in the DE → EN case). Table 2 also illustrates the properties captured within and across languages, for the EN/DE pair (see also the supplementary material for a t-SNE visualization of the word representations). For a few English words, the words with the closest word representations (in Euclidean distance) are shown, for both English and German.
We observe that words that form a translation pair are close, but also that close words within a language are syntactically/semantically similar as well.

Figure 2: Cross-lingual classification accuracy results, from EN → DE (left) and DE → EN (right).

The excellent performance of BAE-cr suggests that merging several sentences into single bags-of-words can still yield good word embeddings. In other words, not only do we not need to rely on word-level alignments, but exact sentence-level alignment is also not essential to reach good performance. We experimented with merging 5, 25 and 50 adjacent sentences (see the supplementary material). Generally speaking, these experiments also confirm that even coarser merges are sometimes not detrimental. However, for certain language pairs, there can be an important decrease in performance. On the other hand, when using 5-sentence merges with BAE-tr, no substantial impact on performance is observed.

6 Conclusion and Future Work

We presented evidence that meaningful bilingual word representations can be learned without relying on word-level alignments, or even when using only fairly coarse sentence-level alignments. In particular, we showed that even though our model does not use word-level alignments, it is able to reach state-of-the-art performance, even compared to a method that exploits word-level alignments. In addition, it also outperforms a strong machine translation baseline. For future work, we would like to investigate extensions of our bag-of-words bilingual autoencoder to bags-of-n-grams, where the model would also have to learn representations for short phrases. Such a model should be particularly useful in the context of a machine translation system.
We would also like to explore the possibility of converting our bilingual model to a multilingual model which can learn common representations for multiple languages given different amounts of parallel data between these languages.

Acknowledgement

We would like to thank Alexander Klementiev and Ivan Titov for providing the code for the classifier and data indices. This work was supported in part by Google.

References

[1] Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics, NAACL '03, pages 173–180, 2003.
[2] Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 455–465, Sofia, Bulgaria, August 2013.
[3] Bing Liu. Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, 2012.
[4] Joseph Turian, Lev Ratinov, and Yoshua Bengio. Word representations: A simple and general method for semi-supervised learning. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 384–394, 2010.
[5] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12, 2011.
[6] Susan T. Dumais, Todd A. Letsche, Michael L. Littman, and Thomas K. Landauer. Automatic cross-language retrieval using latent semantic indexing. AAAI Spring Symposium on Cross-Language Text and Speech Retrieval, 15:21, 1997.
[7] John C. Platt, Kristina Toutanova, and Wen-tau Yih. Translingual document representations from discriminative projections.
In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, EMNLP '10, pages 251–261, Stroudsburg, PA, USA, 2010.
[8] Wen-tau Yih, Kristina Toutanova, John C. Platt, and Christopher Meek. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, CoNLL '11, pages 247–256, Stroudsburg, PA, USA, 2011.
[9] Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. Inducing Crosslingual Distributed Representations of Words. In Proceedings of the International Conference on Computational Linguistics, 2012.
[10] Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. Bilingual Word Embeddings for Phrase-Based Machine Translation. In Empirical Methods in Natural Language Processing, 2013.
[11] Tomas Mikolov, Quoc Le, and Ilya Sutskever. Exploiting Similarities among Languages for Machine Translation. Technical report, arXiv, 2013.
[12] Manaal Faruqui and Chris Dyer. Improving vector space word representations using multilingual correlation. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 462–471, Gothenburg, Sweden, April 2014.
[13] David Yarowsky and Grace Ngai. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, pages 1–8, Pennsylvania, 2001.
[14] Dipanjan Das and Slav Petrov. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 600–609, Portland, Oregon, USA, June 2011.
[15] Rada Mihalcea, Carmen Banea, and Janyce Wiebe. Learning multilingual subjective language via cross-lingual projections.
In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 976–983, Prague, Czech Republic, June 2007.
[16] Xiaojun Wan. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 235–243, Suntec, Singapore, August 2009.
[17] Sebastian Padó and Mirella Lapata. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research (JAIR), 36:307–340, 2009.
[18] Yann Dauphin, Xavier Glorot, and Yoshua Bengio. Large-Scale Learning of Embeddings with Reconstruction Sampling. In Proceedings of the 28th International Conference on Machine Learning (ICML 2011), pages 945–952. Omnipress, 2011.
[19] Frederic Morin and Yoshua Bengio. Hierarchical Probabilistic Neural Network Language Model. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AISTATS 2005), pages 246–252. Society for Artificial Intelligence and Statistics, 2005.
[20] Andriy Mnih and Geoffrey E. Hinton. A Scalable Hierarchical Distributed Language Model. In Advances in Neural Information Processing Systems 21 (NIPS 2008), pages 1081–1088, 2009.
[21] Hugo Larochelle and Stanislas Lauly. A Neural Autoregressive Topic Model. In Advances in Neural Information Processing Systems 25 (NIPS 25), 2012.
[22] Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. Learning continuous phrase representations for translation modeling. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 699–709, Baltimore, Maryland, June 2014.
[23] Karl Moritz Hermann and Phil Blunsom. Multilingual models for compositional distributed semantics.
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22–27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 58–68, 2014.
[24] Karl Moritz Hermann and Phil Blunsom. Multilingual Distributed Representations without Word Alignment. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
[25] Philipp Koehn. Europarl: A parallel corpus for statistical machine translation. In MT Summit, 2005.
[26] Steven Bird, Ewan Klein, and Edward Loper. Natural Language Processing with Python. O'Reilly Media Inc., 2009.
Testing Unfaithful Gaussian Graphical Models

De Wen Soh, Department of Electrical Engineering, Yale University, 17 Hillhouse Ave, New Haven, CT 06511, dewen.soh@yale.edu
Sekhar Tatikonda, Department of Electrical Engineering, Yale University, 17 Hillhouse Ave, New Haven, CT 06511, sekhar.tatikonda@yale.edu

Abstract

The global Markov property for Gaussian graphical models ensures that graph separation implies conditional independence. Specifically, if a node set S graph-separates nodes u and v, then Xu is conditionally independent of Xv given XS. The opposite direction need not be true, that is, Xu ⊥ Xv | XS need not imply that S is a node separator of u and v. When it does, the relation Xu ⊥ Xv | XS is called faithful. In this paper we provide a characterization of faithful relations and then provide an algorithm to test faithfulness based only on knowledge of other conditional independence relations of the form Xi ⊥ Xj | XS.

1 Introduction

Graphical models [1, 2, 3] are a popular and important means of representing certain conditional independence relations between random variables. In a Gaussian graphical model, each variable is associated with a node in a graph, and any two nodes are connected by an undirected edge if and only if their two corresponding variables are dependent conditioned on the rest of the variables. An edge between two nodes therefore corresponds directly to a non-zero entry of the precision matrix Ω = Σ^{-1}, where Σ is the covariance matrix of the multivariate Gaussian distribution in question. With the graphical model defined in this way, the Gaussian distribution satisfies the global Markov property: for any pair of nodes i and j, if all paths between the two pass through a set of nodes S, then the variables associated with i and j are conditionally independent given the variables associated with S. The converse of the global Markov property does not always hold. When it does hold for a conditional independence relation, that relation is called faithful.
If it holds for all relations in a model, that model is faithful. Faithfulness is important in structural estimation of graphical models, that is, in identifying the zeros of Ω. It can be challenging to simply invert Σ. With faithfulness, to determine whether there is an edge between nodes i and j, one could run through all possible separator sets S and test for conditional independence. If S is small, the computation becomes more accurate. In the work of [4, 5, 6, 7], different assumptions are used to bound S to this end. The main problem of faithfulness in graphical models is one of identifiability: can we distinguish between a faithful graphical model and an unfaithful one? The idea of faithfulness was first explored for conditional independence relations that are satisfied in a family of graphs, using the notion of θ-Markov perfectness [8, 9]. For Gaussian graphical models with a tree topology, the distribution has been shown to be faithful [10, 11]. In directed graphical models, the class of unfaithful distributions has been studied in [12, 13]. In [14, 15], a notion of strong-faithfulness is defined as a means of relaxing the conditions of faithfulness.

In this paper, we study the identifiability of a conditional independence relation. In [6], the authors restrict their study of Gaussians to walk-summable ones. In [7], the authors restrict their class of distributions to loosely connected Markov random fields. These restrictions are such that the local conditional independence relations imply something about the global structure of the graphical model. In our discussion, we assume no such restrictions. We provide a testable condition for the faithfulness of a conditional independence relation in a Gaussian undirected graphical model. Checking this condition requires only other conditional independence relations in the graph. We can think of these conditional independence relations as local patches of the covariance matrix Σ.
To check if a local patch reflects the global graph (that is, whether a local patch is faithful), we have to make use of other local patches. Our algorithm is, to the best of our knowledge, the first that is able to distinguish between faithful and unfaithful conditional independence relations without any restrictions on the topology or assumptions on spatial mixing of the Gaussian graphical model.

This paper is structured as follows: In Section 2, we discuss some preliminaries. In Section 3, we state our main theorem and proofs, as well as key lemmas used in the proofs. In Section 4, we lay out an algorithm that detects unfaithful conditional independence relations in Gaussian graphical models using only local patches of the covariance matrix. We also describe a graph learning algorithm for unfaithful graphical models. In Section 5, we discuss possible future directions of research.

2 Preliminaries

We first define some linear algebra and graph notation. For a matrix M, let M^T denote its transpose and let |M| denote its determinant. If I is a subset of its row indices and J a subset of its column indices, then we define the submatrix M_IJ as the |I| × |J| matrix of elements with row indices from I and column indices from J. If I = J, we use the notation M_I for convenience. Let M(−i, −j) be the submatrix of M with the i-th row and j-th column deleted. Let M(−I, −J) be the submatrix with rows with indices from I and columns with indices from J removed. In the same way, for a vector v, we define v_I to be the subvector of v with indices from I. Similarly, we define v(−I) to be the subvector of v with indices not from I. For two vectors v and w, we denote the usual dot product by v · w. Let G = (W, E) be an undirected graph, where W = {1, . . . , n} is the set of nodes and E is the set of edges, namely, a subset of the set of all unordered pairs {(u, v) | u, v ∈ W}.
In our paper we are dealing with graphs that have no self-loops and no multiple edges between the same pair of nodes. For I ⊆ W, we denote the induced subgraph on nodes I by G_I. For any two distinct nodes u and v, we say that the node set S ⊆ W \ {u, v} is a node separator of u and v if all paths from u to v must pass through some node in S.

Let X = (X1, . . . , Xn) be a multivariate Gaussian distribution with mean µ and covariance matrix Σ. Let Ω = Σ^{-1} be the precision or concentration matrix. For any set S ⊂ W, we define XS = {Xi | i ∈ S}. We note here that Σuv = 0 if and only if Xu is independent of Xv, which we denote by Xu ⊥ Xv. If Xu is independent of Xv conditioned on some random variable Z, we denote this independence relation by Xu ⊥ Xv | Z. Note that Ωuv = 0 if and only if Xu ⊥ Xv | XW\{u,v}. For any set S ⊆ W, the conditional distribution of XW\S given XS = xS follows a multivariate Gaussian distribution with conditional mean µ_{W\S} + Σ_{(W\S)S} Σ_S^{-1} (xS − µS) and conditional covariance matrix Σ_{W\S} − Σ_{(W\S)S} Σ_S^{-1} Σ_{S(W\S)}. For distinct nodes u, v ∈ W and any set S ⊆ W \ {u, v}, the following property easily follows.

Proposition 1 Xu ⊥ Xv | XS if and only if Σuv = Σ_{uS} Σ_S^{-1} Σ_{Sv}.

The concentration graph GΣ = (W, E) of a multivariate Gaussian distribution X is defined as follows: we have node set W = {1, . . . , n}, with random variable Xu associated with node u, and edge set E where the unordered pair (u, v) is in E if and only if Ωuv ≠ 0. The multivariate Gaussian distribution, along with its concentration graph, is also known as a Gaussian graphical model. Any Gaussian graphical model satisfies the global Markov property, that is, if S is a node separator of nodes u and v in GΣ, then Xu ⊥ Xv | XS. The converse is not necessarily true, and this motivates us to define faithfulness in a graphical model.

Definition 1 The conditional independence relation Xu ⊥ Xv | XS is said to be faithful if S is a node separator of u and v in the concentration graph GΣ.
Otherwise, it is unfaithful.

Figure 1: Even though Σ_{S∪{u,v}} is a submatrix of Σ, G_{Σ_{S∪{u,v}}} need not be a subgraph of GΣ. Edge properties do not translate as well: the local patch Σ_{S∪{u,v}} need not reflect the edge properties of the global graph structure of Σ.

A multivariate Gaussian distribution is faithful if all its conditional independence relations are faithful. The distribution is unfaithful if it is not faithful.

Example 1 (Example of an unfaithful Gaussian distribution) Consider the multivariate Gaussian distribution X = (X1, X2, X3, X4) with zero mean and positive definite covariance matrix

Σ = [ 3 2 1 2
      2 4 2 1
      1 2 7 1
      2 1 1 6 ].   (1)

By Proposition 1, we have X1 ⊥ X3 | X2, since Σ13 = Σ12 Σ22^{-1} Σ23. However, the precision matrix Ω = Σ^{-1} has no zero entries, so the concentration graph is a complete graph. This means that node 2 is not a node separator of nodes 1 and 3. The independence relation X1 ⊥ X3 | X2 is thus not faithful, and the distribution X is not faithful either.

We can think of the submatrix Σ_{S∪{u,v}} as a local patch of the covariance matrix Σ. When Xu ⊥ Xv | XS, nodes u and v are not connected by an edge in the concentration graph of the local patch Σ_{S∪{u,v}}, that is, we have (Σ_{S∪{u,v}}^{-1})_{uv} = 0. This does not imply that u and v are not connected in the concentration graph GΣ. If Xu ⊥ Xv | XS is faithful, then the implication follows. If Xu ⊥ Xv | XS is unfaithful, then u and v may be connected in GΣ (see Figure 1).

Faithfulness is important in structural estimation, especially in high-dimensional settings. If we assume faithfulness, then finding a node set S such that Xu ⊥ Xv | XS would imply that there is no edge between u and v in the concentration graph. When we have access only to the sample covariance instead of the population covariance matrix, if the size of S is small compared to n, the error of computing Xu ⊥ Xv | XS is much less than the error of inverting the entire covariance matrix.
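Example 1 can be checked numerically via Proposition 1. A small sketch (the helper function is ours, not part of the paper):

```python
import numpy as np

def cond_indep(Sigma, u, v, S, tol=1e-10):
    """Test X_u independent of X_v given X_S via Proposition 1:
    Sigma_uv = Sigma_uS Sigma_S^{-1} Sigma_Sv."""
    S = list(S)
    lhs = Sigma[u, v]
    rhs = (Sigma[np.ix_([u], S)] @
           np.linalg.solve(Sigma[np.ix_(S, S)], Sigma[np.ix_(S, [v])]))[0, 0]
    return abs(lhs - rhs) < tol

# Covariance matrix (1) from Example 1, with 0-indexed nodes.
Sigma = np.array([[3., 2., 1., 2.],
                  [2., 4., 2., 1.],
                  [1., 2., 7., 1.],
                  [2., 1., 1., 6.]])

print(cond_indep(Sigma, 0, 2, [1]))                     # X1 perp X3 | X2 holds
print(np.all(np.abs(np.linalg.inv(Sigma)) > 1e-10))     # yet Omega has no zeros
print(abs(np.linalg.inv(Sigma[:3, :3])[0, 2]) < 1e-10)  # local patch has no (1,3) edge
```

The three checks together reproduce the phenomenon in Figure 1: the local patch Σ_{{1,2,3}} shows no edge between nodes 1 and 3, while the global concentration graph is complete.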
This method of searching through all possible node separator sets of a certain size is employed in [6, 7]. As mentioned before, these authors impose other restrictions on their models to overcome the problem of unfaithfulness. We do not place any restrictions on the Gaussian models. However, we do not provide probabilistic bounds when dealing with samples, which they do.

3 Main Result

In this section, we state our main theoretical result. This result is the backbone of our algorithm that differentiates a faithful conditional independence relation from an unfaithful one. Our main goal is to decide if a conditional independence relation Xu ⊥ Xv | XS is faithful or not. For convenience, we will denote GΣ simply by G = (W, E) for the rest of this paper. Now let us suppose that it is faithful; then S is a node separator for u and v in G, and we should not be able to find a path from u to v in the induced subgraph G_{W\S}. The main idea therefore is to search for a path between u and v in G_{W\S}. If this fails, then we know that the conditional independence relation is faithful. By the global Markov property, for any two distinct nodes i, j ∈ W \ S, if Xi ̸⊥ Xj | XS, then we know that there is a path between i and j in G_{W\S}. Thus, if we find some w ∈ W \ (S ∪ {u, v}) such that Xu ̸⊥ Xw | XS and Xv ̸⊥ Xw | XS, then a path exists from u to w and another from v to w, so u and v are connected in G_{W\S}. This would imply that Xu ⊥ Xv | XS is unfaithful.

However, testing for paths this way does not necessarily rule out all possible paths in G_{W\S}. The problem is that some paths may be obscured by other unfaithful conditional independence relations. There may be some w for which Xu ̸⊥ Xw | XS and Xv ⊥ Xw | XS, but the latter relation is unfaithful. This path from u to v through w is thus not detected by these two independence relations. We will show, however, that if there is no path from u to v in G_{W\S}, then we cannot find a series of distinct nodes w1, . . .
, wt ∈ W \ (S ∪ {u, v}) for some natural number t > 0 such that Xu ̸⊥ Xw1 | XS, Xw1 ̸⊥ Xw2 | XS, . . ., Xwt−1 ̸⊥ Xwt | XS, Xwt ̸⊥ Xv | XS. This is to be expected because of the global Markov property. What is more surprising about our result is that the converse is true: if we cannot find such nodes w1, . . . , wt, then u and v are not connected by a path in G_{W\S}. This means that if there is a path from u to v in G_{W\S}, even though it may be hidden by some unfaithful conditional independence relations, ultimately there are enough conditional dependence relations to reveal that u and v are connected by a path in G_{W\S}. This gives us an equivalent condition for faithfulness that is stated purely in terms of conditional independence relations. Not being able to find a series of nodes w1, . . . , wt that form a string of conditional dependencies from u to v as described in the previous paragraph is equivalent to the following: we can find a partition (U, V) of W \ S with u ∈ U and v ∈ V such that for all i ∈ U and j ∈ V, we have Xi ⊥ Xj | XS. Our main result uses the existence of this partition as a test for faithfulness.

Theorem 1 Let X = (X1, . . . , Xn) be a Gaussian distribution with mean zero, covariance matrix Σ and concentration matrix Ω. Let u, v be two distinct elements of W and S ⊂ W \ {u, v} such that Xu ⊥ Xv | XS. Then Xu ⊥ Xv | XS is faithful if and only if there exists a partition of W \ S into two disjoint sets U and V such that u ∈ U, v ∈ V, and Xi ⊥ Xj | XS for any i ∈ U and j ∈ V.

Proof of Theorem 1. One direction is easy. Suppose Xu ⊥ Xv | XS is faithful and S separates u and v in G. Let U be the set of all nodes reachable from u in G_{W\S}, including u. Let V = (W \ S) \ U. Then v ∈ V since S separates u and v in G. Also, for any i ∈ U and j ∈ V, S separates i and j in G, and by the global Markov property, Xi ⊥ Xj | XS. Next, we prove the opposite direction. Suppose that there exists a partition of W \ S into two sets U and V such that u ∈ U, v ∈ V, and Xi ⊥ Xj | XS for any i ∈ U and j ∈ V.
Our goal is to show that S separates u and v in the concentration graph G of X. Let Ω′ = Ω_{W\S}, the submatrix of the precision matrix Ω indexed by W \ S. Let the h-th column vector of Ω′ be ω(h), for h = 1, . . . , |W \ S|.

Step 1: We first handle the trivial case where |U| = |V| = 1. In this case, S = W \ {u, v}, and trivially, Xu ⊥ Xv | XW\{u,v} implies that S separates u and v, and we are done. Thus, we assume for the rest of the proof that U and V cannot both have size one.

Step 2: We deal with a second trivial case, where ω(i)(−i) is identically zero for some i ∈ U. In the case where i = u, we have Ωuj = 0 for all j ∈ W \ (S ∪ {u}). This implies that u is an isolated node in G_{W\S}, so trivially S must separate u and v, and we are done. In the case where i ≠ u, we can manipulate the sets U and V so that ω(i)(−i) is not identically zero for any i ∈ U, i ≠ u. If there is some i′ ∈ U, i′ ≠ u, such that Xi′ ⊥ Xh | XS for all h ∈ U, h ≠ i′, then we can simply move i′ from U into V to form a new partition (U′, V′) of W \ S. This new partition still satisfies u ∈ U′, v ∈ V′, and Xi ⊥ Xj | XS for all i ∈ U′ and j ∈ V′. We can therefore shift nodes one by one from U to V until either |U| = 1, or for every i ∈ U, i ≠ u, there exists an h ∈ U such that Xi ̸⊥ Xh | XS. By the global Markov property, this assumption implies that every node i ∈ U, i ≠ u, is connected by a path to some node in U, which means it must be connected to some node in W \ (S ∪ {i}) by an edge. Thus, for all i ∈ U, i ≠ u, the vector ω(i)(−i) is non-zero.

Step 3: We can express the conditional independence relations in terms of entries of the precision matrix Ω, since the topology of G can be read off the non-zero entries of Ω. The proof of the following Lemma 1 uses the matrix block inversion formula and is omitted due to space.

Lemma 1 Xi ⊥ Xj | XS if and only if |Ω′(−i, −j)| = 0.
From Lemma 1, observe that the conditional independence relations Xi ⊥ Xj | XS are all statements about the cofactors of the matrix Ω′. It follows immediately from Lemma 1 that the vector sets {ω(h)(−i) : h ∈ W \ S, h ≠ j} are linearly dependent for all i ∈ U and j ∈ V. Each of these vector sets consists of the column vectors of Ω′ truncated in the i-th entry, with the j-th column vector excluded. Assume that the matrix Ω′ is partitioned as

Ω′ = [ Ω_UU  Ω_UV
       Ω_VU  Ω_VV ].   (2)

The strategy of this proof is to use these linear dependencies to show that the submatrix Ω_VU has to be zero. This would imply that no node in U is connected to any node in V by an edge. Therefore, S is a node separator of u and v in G, which is our goal.

Step 4: Let us fix i ∈ U. Consider the vector sets of the form {ω(h)(−i) : h ∈ W \ S, h ≠ j}, j ∈ V. There are |V| such sets. The intersection of these sets is the vector set {ω(h)(−i) : h ∈ U}. We want to use the |V| linearly dependent vector sets to say something about the linear dependency of {ω(h)(−i) : h ∈ U}. With that in mind, we have the following lemma.

Lemma 2 The vector set {ω(h)(−i) : h ∈ U} is linearly dependent for any i ∈ U.

Step 5: Our final step is to show that these linear dependencies imply that Ω_UV = 0. We now have |U| vector sets {ω(h)(−i) : h ∈ U} that are linearly dependent. These sets are truncated versions of the vector set {ω(h) : h ∈ U}, and they are specifically truncated by taking out entries only in U and not in V. The set {ω(h) : h ∈ U} must be linearly independent since Ω′ is invertible. Observe that the entries of Ω_VU are contained in {ω(h)(−i) : h ∈ U} for all i ∈ U. We can now use these vector sets to say something about the entries of Ω_VU.

Lemma 3 The vector components ω(i)_j = Ωij are zero for all i ∈ U and j ∈ V.

This implies that no node in U is connected to any node in V by an edge. Therefore, S separates u and v in G and the relation Xu ⊥ Xv | XS is faithful.
□

4 Algorithm for Testing Unfaithfulness

In this section, we describe a novel algorithm for testing the faithfulness of a conditional independence relation Xu ⊥ Xv | XS. The algorithm tests the necessary and sufficient condition for faithfulness, namely, that we can find a partition (U, V) of W \ S such that u ∈ U, v ∈ V, and Xi ⊥ Xj | XS for all i ∈ U and j ∈ V.

Algorithm 1 (Testing Faithfulness) Input covariance matrix Σ.
1. Define a new graph ¯G = ( ¯W, ¯E), where ¯W = W \ S and ¯E = {(i, j) : i, j ∈ W \ S, Xi ̸⊥ Xj | XS, i ≠ j}.
2. Generate the set U of all nodes in ¯W that are connected to u by a path in ¯G, including u. (A breadth-first search can be used.)
3. If v ∈ U, there exists a path from u to v in ¯G; output Xu ⊥ Xv | XS as unfaithful.
4. If v ∉ U, let V = ¯W \ U. Output Xu ⊥ Xv | XS as faithful.

If we consider each test of whether two nodes are conditionally independent given XS as one step, the running time of the algorithm is that of the algorithm used to determine the set U. If a breadth-first search is used, the running time is O(|W \ S|^2).

Theorem 2 Suppose Xu ⊥ Xv | XS. If S is a node separator of u and v in the concentration graph, then Algorithm 1 will classify Xu ⊥ Xv | XS as faithful. Otherwise, Algorithm 1 will classify Xu ⊥ Xv | XS as unfaithful.

Figure 2: The concentration graph of the distribution in Example 4.

Proof. If Algorithm 1 determines that Xu ⊥ Xv | XS is faithful, it has found a partition (U, V) of W \ S such that u ∈ U, v ∈ V, and Xi ⊥ Xj | XS for any i ∈ U and j ∈ V. By Theorem 1, this implies that Xu ⊥ Xv | XS is faithful, so Algorithm 1 is correct. If Algorithm 1 decides that Xu ⊥ Xv | XS is unfaithful, it does so by finding a series of nodes wℓ1, . . . , wℓt ∈ W \ (S ∪ {u, v}) for some natural number t > 0, with ℓ1, . . . , ℓt distinct indices, such that Xu ̸⊥ Xwℓ1 | XS, Xwℓ1 ̸⊥ Xwℓ2 | XS, . . ., Xwℓt−1 ̸⊥ Xwℓt | XS, Xwℓt ̸⊥ Xv | XS.
By the global Markov property, this means that u is connected to v by a path in G, so Xu ⊥ Xv | XS is unfaithful and Algorithm 1 is correct. □

Example 2 (Testing an Unfaithful Distribution (1)) Let us look again at the 4-dimensional Gaussian distribution in Example 1. Suppose we want to test whether X1 ⊥ X3 | X2 is faithful. From the covariance matrix, Σ14 − Σ12 Σ22^{-1} Σ24 = 2 − 2 · (1/4) · 1 = 3/2 ≠ 0, so X1 ̸⊥ X4 | X2. Similarly, X3 ̸⊥ X4 | X2. So there exists a path from X1 to X3 in G_{1,3,4} (it is trivially the edge (1, 3), since the concentration graph is complete), and the relation X1 ⊥ X3 | X2 is unfaithful.

Example 3 (Testing an Unfaithful Distribution (2)) Consider a 6-dimensional Gaussian distribution X = (X1, . . . , X6) with covariance matrix

Σ = [ 7    1    2    2    3    4
      1    8    2    1    2.25 3
      2    2    10   4    3    8
      2    1    4    9    1    6
      3    2.25 3    1    11   9
      4    3    8    6    9    12 ].   (3)

We want to test whether the relation X1 ⊥ X2 | X6 is faithful or unfaithful. Working out the necessary conditional independence relations to obtain ¯G with S = {6}, we observe that (1, 3), (3, 5), (5, 4), (4, 2) ∈ ¯E. This means that 2 is reachable from 1 in ¯G, so the relation is unfaithful. In fact, the concentration graph is the complete graph K6, and 6 is not a node separator of 1 and 2.

Example 4 (Testing a Faithful Distribution) We consider a 6-dimensional Gaussian distribution X = (X1, . . . , X6) with a covariance matrix similar to that of Example 3,

Σ = [ 7    1    2    2    3    4
      1    8    2    1    2.25 3
      2    2    10   4    6    8
      2    1    4    9    1    6
      3    2.25 6    1    11   9
      4    3    8    6    9    12 ].   (4)

Observe that only Σ35 is changed. We again test the relation X1 ⊥ X2 | X6. Running the algorithm produces a viable partition with U = {1, 3} and V = {2, 4, 5}. This agrees with the concentration graph, as shown in Figure 2.

We now describe an algorithm that learns the topology of a class of (possibly) unfaithful Gaussian graphical models using local patches. Let us fix a natural number K < n − 2.
We consider graphical models that satisfy the following assumption: for any nodes i and j that are not connected by an edge in G, there exists a vertex set S with |S| ≤ K such that S is a vertex separator of i and j. Certain graphs have this property, including graphs with bounded degree and, with high probability, some random graphs such as the Erdős–Rényi graph. The following algorithm learns the edges of a graphical model that satisfies the above assumption.

Algorithm 2 (Edge Learning) Input: covariance matrix Σ.

For each node pair (i, j):
1. Let F = {S ⊂ W \ {i, j} : |S| = K, Xi ⊥ Xj | XS, and the relation is faithful}.
2. If F ≠ ∅, output (i, j) ∉ E. If F = ∅, output (i, j) ∈ E.
3. After all pairs have been processed, output E.

Again, counting a computation of a conditional independence relation as one step, the running time of the algorithm is O(n^(K+4)). This comes from exhaustively checking all (n−2 choose K) possible separating sets S for each of the (n choose 2) pairs (i, j). Each time a conditional independence relation holds, we check it for faithfulness using Algorithm 1, whose running time is O(n^2). The novelty of the algorithm is its ability to learn graphical models that are unfaithful.

Theorem 3 Algorithm 2 recovers the concentration graph G.

Proof. If F ≠ ∅, then there exists an S such that Xi ⊥ Xj | XS is faithful. Therefore, S separates i and j in G, and (i, j) ∉ E. If F = ∅, then for any S ⊆ W with |S| ≤ K, we have either Xi ̸⊥ Xj | XS, or Xi ⊥ Xj | XS but the relation is unfaithful. In both cases, S does not separate i and j in G. By the assumption on the graphical model, (i, j) must be in E. This shows that Algorithm 2 correctly outputs the edges of G. □

5 Conclusion

We have presented an equivalence condition for faithfulness in Gaussian graphical models and an algorithm to test whether a conditional independence relation is faithful or not.
Gaussian distributions are special because their conditional independence relations are determined by the covariance matrix, whose inverse, the precision matrix, provides us with a graph structure. The question of faithfulness in other Markov random fields, like Ising models, is an area of study that has much to be explored. The same questions can be asked, such as when unfaithful conditional independence relations occur, and whether they can be identified. In the future, we plan to extend some of these results to other Markov random fields. Determining statistical guarantees is another important direction to explore.

6 Appendix

6.1 Proof of Lemma 2

Case 1: |V| = 1. In this case, |U| > 1, since |U| and |V| cannot both be one. Then the vector set {ω^(h)(−i) : h ∈ W \ S, h ≠ j} is exactly the vector set {ω^(h)(−i) : h ∈ U}.

Case 2: |V| > 1. Let us fix i ∈ U. Note that ω^(i)(−j) ≠ 0 for all j ∈ W \ (S ∪ {i}), since the diagonal entries of a positive definite matrix are non-zero, that is, ω^(i)_i ≠ 0. Also, ω^(i)(−i) ≠ 0 for all i ∈ U, by Step 2 of the proof of Theorem 1. As such, the linear dependency of {ω^(h)(−i) : h ∈ W \ S, h ≠ j} for any i ∈ U and j ∈ V implies that there exist scalars c^(i,j)_1, . . . , c^(i,j)_{j−1}, c^(i,j)_{j+1}, . . . , c^(i,j)_{|W\S|} such that

∑_{1 ≤ h ≤ |W\S|, h ≠ j} c^(i,j)_h ω^(h)(−i) = 0.   (5)

If c^(i,j)_i = 0, the vector set {ω^(h)(−i) : 1 ≤ h ≤ |W \ S|, h ≠ i, j} is linearly dependent. This implies that the principal submatrix Ω′(−i, −i) has zero determinant, which contradicts Ω′ being positive definite. Thus, we have c^(i,j)_i ≠ 0 for all i ∈ U and j ∈ V. For each i ∈ U and j ∈ V, this allows us to rearrange (5) so that ω^(i)(−i) is expressed in terms of the other vectors in (5). More precisely, let

¯c^(i,j) = [c^(i,j)_i]^{−1} (c^(i,j)_1, . . . , c^(i,j)_{i−1}, c^(i,j)_{i+1}, . . . , c^(i,j)_{j−1}, c^(i,j)_{j+1}, . . . , c^(i,j)_{|W\S|}),

for i ∈ U and j ∈ V. Note that Ω′(−j, −{i, j}) has the form [ω^(1)(−i), . . . , ω^(i−1)(−i), ω^(i+1)(−i), . . . , ω^(j−1)(−i), ω^(j+1)(−i), . . .
, ω^(|W\S|)(−i)], where the vectors in the notation described above are column vectors. From (5), for any distinct j1, j2 ∈ V, we can generate the equations

ω^(i)(−i) = Ω′(−j1, −{i, j1}) ¯c^(i,j1) = Ω′(−j2, −{i, j2}) ¯c^(i,j2),   (6)

or, effectively,

Ω′(−j1, −{i, j1}) ¯c^(i,j1) − Ω′(−j2, −{i, j2}) ¯c^(i,j2) = 0.   (7)

This is a linear equation in terms of the column vectors {ω^(h)(−i) : h ≠ i, h ∈ W}. These vectors must be linearly independent, since otherwise |Ω′(−i, −i)| = 0. Therefore, the coefficient of each of the vectors must be zero. Specifically, the coefficient of ω^(j2)(−i) in (7), which is c^(i,j1)_{j2} / c^(i,j1)_i, is zero, which implies that c^(i,j1)_{j2} is zero, as required. Similarly, c^(i,j2)_{j1} is zero as well. Since this holds for any j1, j2 ∈ V, it follows that for any j ∈ V, c^(i,j)_h = 0 for all h ∈ V, h ≠ j.

There are now two cases to consider. The first is |U| = 1. Here, i = u. Then, by (5), c^(u,j)_h = 0 for all distinct j, h ∈ V implies that ω^(u)(−u) = 0, which is a contradiction. Therefore |U| ≠ 1, so |U| must be greater than 1. We then substitute c^(i,j)_h = 0, for all distinct j, h ∈ V, into (5) to deduce that {ω^(h)(−i) : h ∈ U} is indeed linearly dependent for any i ∈ U. □

6.2 Proof of Lemma 3

Let |U| = k > 1. We arrange the indices of the column vectors of Ω′ so that U = {1, . . . , k}. For each i ∈ U, since {ω^(h)(−i) : h ∈ U} is linearly dependent and {ω^(h) : h ∈ U} is linearly independent, there exists a non-zero vector d^(i) = (d^(i)_1, . . . , d^(i)_k) ∈ R^k such that ∑_{h=1}^{k} d^(i)_h ω^(h)(−i) = 0. Let y^(i) = (ω^(1)_i, . . . , ω^(k)_i) ∈ R^k. Note that y^(i) = ω^(i)_U, since Ω′ is symmetric, and so y^(i) is a non-zero vector for all i = 1, . . . , k. Because ω^(1), . . . , ω^(k) are linearly independent, for each i = 1, . . . , k, we have d^(i) · y^(h) = 0 for all h ≠ i, h ∈ U, and d^(i) · y^(i) ≠ 0. We next show that the vectors d^(1), . . . , d^(k) are linearly independent. Suppose that they are not. Then there exist some index i ∈ U and scalars a_1, . . . , a_{i−1}, a_{i+1}, . . .
, a_k, not all zero, such that d^(i) = ∑_{1 ≤ j ≤ k, j ≠ i} a_j d^(j). We then have 0 ≠ d^(i) · y^(i) = ∑_{1 ≤ j ≤ k, j ≠ i} a_j d^(j) · y^(i) = 0, a contradiction. Therefore, d^(1), . . . , d^(k) are linearly independent. For each j such that k + 1 ≤ j ≤ |W \ S| (that is, j ∈ V), let us define y_j = (ω^(1)_j, . . . , ω^(k)_j). Let us fix j. Observe that d^(h) · y_j = 0 for all h = 1, . . . , k. Since d^(1), . . . , d^(k) are linearly independent, this implies that y_j is the zero vector. Since this holds for all j with k + 1 ≤ j ≤ |W \ S|, we conclude that ω^(i)_j = 0 for all 1 ≤ i ≤ k and k + 1 ≤ j ≤ |W \ S|. □

References

[1] J. Pearl, Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[2] S. L. Lauritzen, Graphical Models. New York: Oxford University Press, 1996.
[3] J. Whittaker, Graphical Models in Applied Multivariate Statistics. Wiley, 1990.
[4] N. Meinshausen and P. Bühlmann, "High dimensional graphs and variable selection with the lasso," Annals of Statistics, vol. 34, no. 3, pp. 1436–1462, 2006.
[5] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu, "High-dimensional covariance estimation by minimizing ℓ1-penalized log-determinant divergence," Electronic Journal of Statistics, vol. 4, pp. 935–980, 2011.
[6] A. Anandkumar, V. Tan, F. Huang, and A. Willsky, "High-dimensional Gaussian graphical model selection: walk-summability and local separation criterion," J. Machine Learning Research, vol. 13, pp. 2293–2337, Aug 2012.
[7] R. Wu, R. Srikant, and J. Ni, "Learning loosely connected Markov random fields," Stochastic Systems, vol. 3, 2013.
[8] M. Frydenberg, "Marginalisation and collapsibility in graphical interaction models," Annals of Statistics, vol. 18, pp. 790–805, 1990.
[9] G. Kauermann, "On a dualization of graphical Gaussian models," Scandinavian Journal of Statistics, vol. 23, no. 1, pp. 105–116, 1996.
[10] A. Becker, D. Geiger, and C. Meek, "Perfect tree-like Markovian distributions," Probability and Mathematical Statistics, vol. 25, no. 2, pp. 231–239, 2005.
[11] D. Malouche and B. Rajaratnam, "Gaussian covariance faithful Markov trees," Technical report, Department of Statistics, Stanford University, 2009.
[12] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction, and Search. New York: Springer-Verlag, 1993.
[13] C. Meek, "Strong completeness and faithfulness in Bayesian networks," in Proceedings of the Eleventh International Conference on Uncertainty in Artificial Intelligence, 1995.
[14] C. Uhler, G. Raskutti, P. Bühlmann, and B. Yu, "Geometry of the faithfulness assumption in causal inference," Annals of Statistics, vol. 41, pp. 436–463, 2013.
[15] S. Lin, C. Uhler, B. Sturmfels, and P. Bühlmann, "Hypersurfaces and their singularities in partial correlation testing," Preprint.
Deep Recursive Neural Networks for Compositionality in Language

Ozan İrsoy
Department of Computer Science
Cornell University
Ithaca, NY 14853
oirsoy@cs.cornell.edu

Claire Cardie
Department of Computer Science
Cornell University
Ithaca, NY 14853
cardie@cs.cornell.edu

Abstract

Recursive neural networks comprise a class of architectures that can operate on structured input. They have previously been applied successfully to model compositionality in natural language using parse-tree-based structural representations. Even though these architectures are deep in structure, they lack the capacity for hierarchical representation that exists in conventional deep feed-forward networks as well as in recently investigated deep recurrent neural networks. In this work we introduce a new architecture — a deep recursive neural network (deep RNN) — constructed by stacking multiple recursive layers. We evaluate the proposed model on the task of fine-grained sentiment classification. Our results show that deep RNNs outperform associated shallow counterparts that employ the same number of parameters. Furthermore, our approach outperforms previous baselines on the sentiment analysis task, including a multiplicative RNN variant as well as the recently introduced paragraph vectors, achieving new state-of-the-art results. We provide exploratory analyses of the effect of multiple layers and show that they capture different aspects of compositionality in language.

1 Introduction

Deep connectionist architectures involve many layers of nonlinear information processing [1]. This allows them to incorporate meaning representations such that each succeeding layer potentially has a more abstract meaning. Recent advancements in efficiently training deep neural networks have enabled their application to many problems, including those in natural language processing (NLP).
A key advance for applying deep networks to NLP tasks was the invention of word embeddings, which represent a single word as a dense, low-dimensional vector in a meaning space [2], and from which numerous problems have benefited [3, 4]. Recursive neural networks comprise a class of architectures that operate on structured inputs, and in particular, on directed acyclic graphs. A recursive neural network can be seen as a generalization of the recurrent neural network [5], which has a specific type of skewed tree structure (see Figure 1). They have been applied to parsing [6], sentence-level sentiment analysis [7, 8], and paraphrase detection [9]. Given the structural representation of a sentence, e.g. a parse tree, they recursively generate parent representations in a bottom-up fashion, by combining tokens to produce representations for phrases, eventually producing the whole sentence. The sentence-level representation (or, alternatively, its phrases) can then be used to make a final classification for a given input sentence — e.g. whether it conveys a positive or a negative sentiment. Similar to how recurrent neural networks are deep in time, recursive neural networks are deep in structure, because of the repeated application of recursive connections.

Figure 1: Operation of a recursive net (a), untied recursive net (b) and a recurrent net (c) on an example sentence ("that movie was cool"). Black, orange and red dots represent input, hidden and output layers, respectively. Directed edges having the same color-style combination denote shared connections.

Recently, the notions of depth in time — the result of recurrent connections — and depth in space — the result of stacking multiple layers on top of one another — are distinguished for recurrent neural networks. In order to combine these concepts, deep recurrent networks were proposed [10, 11, 12].
They are constructed by stacking multiple recurrent layers on top of each other, which allows this extra notion of depth to be incorporated into temporal processing. Empirical investigations showed that this results in a natural hierarchy for how the information is processed [12]. Inspired by these recent developments, we make a similar distinction between depth in structure and depth in space, and, to combine these concepts, propose the deep recursive neural network, which is constructed by stacking multiple recursive layers. The architecture we study in this work is essentially a deep feedforward neural network with additional structural processing within each layer (see Figure 2). During forward propagation, information travels through the structure within each layer (because of the recursive nature of the network, weights regarding structural processing are shared). In addition, every node in the structure (i.e. in the parse tree) feeds its own hidden state to its counterpart in the next layer. This can be seen as a combination of feedforward and recursive nets. In a shallow recursive neural network, a single layer is responsible for learning a representation of composition that is both useful and sufficient for the final decision. In a deep recursive neural network, a layer can learn some parts of the composition to apply, and pass this intermediate representation to the next layer for further processing of the remaining parts of the overall composition. To evaluate the performance of the architecture and make exploratory analyses, we apply deep recursive neural networks to the task of fine-grained sentiment detection on the recently published Stanford Sentiment Treebank (SST) [8]. SST includes a supervised sentiment label for every node in the binary parse tree, not just at the root (sentence) level.
This is especially important for deep learning, since it allows a richer supervised error signal to be backpropagated across the network, potentially alleviating the vanishing gradients associated with deep neural networks [13]. We show that our deep recursive neural networks outperform shallow recursive nets of the same size in the fine-grained sentiment prediction task on the Stanford Sentiment Treebank. Furthermore, our models outperform multiplicative recursive neural network variants, achieving new state-of-the-art performance on the task. We conduct qualitative experiments that suggest that each layer handles a different aspect of compositionality, and representations at each layer capture different notions of similarity.

2 Methodology

2.1 Recursive Neural Networks

Recursive neural networks (RNNs) (e.g. [6]) comprise an architecture in which the same set of weights is recursively applied within a structural setting: given a positional directed acyclic graph, the network visits the nodes in topological order, and recursively applies transformations to generate further representations from previously computed representations of children. In fact, a recurrent neural network is simply a recursive neural network with a particular structure (see Figure 1c). Even though RNNs can be applied to any positional directed acyclic graph, we limit our attention to RNNs over positional binary trees, as in [6]. Given a binary tree structure with leaves having the initial representations, e.g. a parse tree with word vector representations at the leaves, a recursive neural network computes the representations at each internal node η as follows (see also Figure 1a):

x_η = f(W_L x_{l(η)} + W_R x_{r(η)} + b)   (1)

where l(η) and r(η) are the left and right children of η, W_L and W_R are the weight matrices that connect the left and right children to the parent, and b is a bias vector.
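As an illustration, the bottom-up computation in (1) can be sketched as follows (a minimal Python sketch with made-up dimensions and random weights; f is taken to be tanh here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # shared dimensionality of word and phrase vectors
W_L, W_R = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b = np.zeros(d)

def compose(node, vecs):
    # x_eta = f(W_L x_{l(eta)} + W_R x_{r(eta)} + b); a node is either
    # a word index (leaf) or a (left, right) pair (internal node).
    if isinstance(node, tuple):
        left, right = compose(node[0], vecs), compose(node[1], vecs)
        return np.tanh(W_L @ left + W_R @ right + b)
    return vecs[node]

# parse tree of "that movie was cool": ((that movie) (was cool))
words = rng.normal(size=(4, d))          # toy word vectors
root = compose(((0, 1), (2, 3)), words)  # sentence representation
```

Note that every phrase vector produced this way lives in the same d-dimensional space as the word vectors, matching the interpretation discussed next.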
Given that W_L and W_R are square matrices, and not distinguishing whether l(η) and r(η) are leaf or internal nodes, this definition has an interesting interpretation: initial representations at the leaves and intermediate representations at the nonterminals lie in the same space. In the parse tree example, a recursive neural network combines the representations of two subphrases to generate a representation for the larger phrase, in the same meaning space [6]. We then have a task-specific output layer above the representation layer:

y_η = g(U x_η + c)   (2)

where U is the output weight matrix and c is the bias vector to the output layer. In a supervised task, y_η is simply the prediction (class label or response value) for the node η, and supervision occurs at this layer. As an example, for the task of sentiment classification, y_η is the predicted sentiment label of the phrase given by the subtree rooted at η. Thus, during supervised learning, initial external errors are incurred on y, and backpropagated from the root, toward leaves [14].

2.2 Untying Leaves and Internals

Even though the aforementioned definition, which treats the leaf nodes and internal nodes the same, has some attractive properties (such as mapping individual words and larger phrases into the same meaning space), in this work we use an untied variant that distinguishes between a leaf and an internal node. We do this by a simple parametrization of the weights W with respect to whether the incoming edge emanates from a leaf or an internal node (see Figure 1b in contrast to 1a; the colors of the edges emanating from leaves and internal nodes differ):

h_η = f(W_L^{l(η)} h_{l(η)} + W_R^{r(η)} h_{r(η)} + b)   (3)

where h_η = x_η ∈ X if η is a leaf and h_η ∈ H otherwise, and W_·^η = W_·^{xh} if η is a leaf and W_·^η = W_·^{hh} otherwise. X and H are vector spaces of words and phrases, respectively. The weights W_·^{xh} act as a transformation from word space to phrase space, and W_·^{hh} as a transformation from phrase space to itself.
With this untying, a recursive network becomes a generalization of the Elman-type recurrent neural network, with h being analogous to the hidden layer of the recurrent network (memory) and x being analogous to the input layer (see Figure 1c). Benefits of this untying are twofold: (1) The weight matrices W_·^{xh} and W_·^{hh} are now of size |h| × |x| and |h| × |h|, which means that we can use large pretrained word vectors and a small number of hidden units without a quadratic dependence on the word vector dimensionality |x|. Therefore, small but powerful models can be trained by using pretrained word vectors with a large dimensionality. (2) Since words and phrases are represented in different spaces, we can use rectifier activation units for f, which have previously been shown to yield good results when training deep neural networks [15]. Word vectors are dense and generally have positive and negative entries, whereas rectifier activation causes the resulting intermediate vectors to be sparse and nonnegative. Thus, when leaves and internals are represented in the same space, a discrepancy arises: the same weight matrix is applied to both leaves and internal nodes and is expected to handle both sparse and dense cases, which might be difficult. Separating leaves and internal nodes therefore allows the use of rectifiers in a more natural manner.

2.3 Deep Recursive Neural Networks

Recursive neural networks are deep in structure: with the recursive application of the nonlinear information processing they become as deep as the depth of the tree (or in general, the DAG). However, this notion of depth is unlikely to involve a hierarchical interpretation of the data.

Figure 2: Operation of a 3-layer deep recursive neural network. Red and black points denote output and input vectors, respectively; other colors denote intermediate memory representations. Connections denoted by the same color-style combination are shared (i.e. share the same set of weights).

By applying the same computation recursively to compute the contribution of children to their parents, and the same computation to produce an output response, we are, in fact, representing every internal node (phrase) in the same space [6, 8]. However, in the more conventional stacked deep learners (e.g. deep feedforward nets), an important benefit of depth is the hierarchy among hidden representations: every hidden layer conceptually lies in a different representation space and potentially is a more abstract representation of the input than the previous layer [1]. To address these observations, we propose the deep recursive neural network, which is constructed by stacking multiple layers of individual recursive nets:

h_η^(i) = f(W_L^(i) h_{l(η)}^(i) + W_R^(i) h_{r(η)}^(i) + V^(i) h_η^(i−1) + b^(i))   (4)

where i indexes the multiple stacked layers, W_L^(i), W_R^(i), and b^(i) are defined as before within each layer i, and V^(i) is the weight matrix that connects the (i−1)th hidden layer to the ith hidden layer. Note that the untying that we described in Section 2.2 is only necessary for the first layer, since we can map both x ∈ X and h^(1) ∈ H^(1) in the first layer to h^(2) ∈ H^(2) in the second layer using separate V^(2) matrices for leaves and internals (V^{xh(2)} and V^{hh(2)}). Therefore every node is represented in the same space at layers above the first, regardless of their "leafness". Figure 2 provides a visualization of weights that are untied or shared. For prediction, we connect the output layer to only the final hidden layer:

y_η = g(U h_η^(ℓ) + c)   (5)

where ℓ is the total number of layers. Intuitively, connecting the output layer to only the last hidden layer forces the network to represent enough high-level information at the final layer to support the supervised decision.
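A minimal sketch of the stacked computation in (4) and (5) follows (illustrative Python with hypothetical dimensions; the untied first-layer treatment of leaves is simplified here by feeding the word vector through V only at the leaf):

```python
import numpy as np

rng = np.random.default_rng(1)
d, L, C = 4, 3, 5                      # hidden width, layers, classes
relu = lambda z: np.maximum(0.0, z)    # rectifier activation f
W_L = 0.1 * rng.standard_normal((L, d, d))
W_R = 0.1 * rng.standard_normal((L, d, d))
V = 0.1 * rng.standard_normal((L, d, d))   # layer (i-1) -> layer i, per node
b = np.zeros((L, d))
U, c = 0.1 * rng.standard_normal((C, d)), np.zeros(C)

def forward(node, vecs):
    # Returns the stack of hidden states h^(1..L) at this node, following
    # h^(i) = f(W_L^(i) h^(i)_l + W_R^(i) h^(i)_r + V^(i) h^(i-1) + b^(i)).
    if isinstance(node, tuple):
        hl, hr = forward(node[0], vecs), forward(node[1], vecs)
        below = np.zeros(d)
    else:
        hl = hr = np.zeros((L, d))
        below = vecs[node]             # leaf: the word vector feeds layer 1
    h = np.empty((L, d))
    for i in range(L):
        h[i] = relu(W_L[i] @ hl[i] + W_R[i] @ hr[i] + V[i] @ below + b[i])
        below = h[i]
    return h

def predict(node, vecs):
    # y = g(U h^(L) + c) with g = softmax, as in (5).
    z = U @ forward(node, vecs)[-1] + c
    e = np.exp(z - z.max())
    return e / e.sum()

probs = predict(((0, 1), (2, 3)), rng.standard_normal((4, d)))
```

Only the last layer's hidden state reaches the output, mirroring the design choice discussed around (5).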
Connecting the output layer to all hidden layers is another option; however, in that case multiple hidden layers can have synergistic effects on the output and make it more difficult to qualitatively analyze each layer. Learning a deep RNN can be conceptualized as interleaved applications of the conventional backpropagation across multiple layers, and backpropagation through structure within a single layer. During backpropagation, a node η receives error terms from both its parent (through structure) and from its counterpart in the higher layer (through space). It then further backpropagates that error signal to both of its children, as well as to its counterpart in the lower layer.

3 Experiments

3.1 Setting

Data. For experimental evaluation of our models, we use the recently published Stanford Sentiment Treebank (SST) [8], which includes labels for 215,154 phrases in the parse trees of 11,855 sentences, with an average sentence length of 19.1 tokens. Real-valued sentiment labels are converted to an integer ordinal label in {0, . . . , 4} by simple thresholding. Therefore the supervised task is posed as a 5-class classification problem. We use the single training-validation-test set partitioning provided by the authors.

Baselines. In addition to experimenting with deep RNNs of varying width and depth, we compare our models to previous work on the same data. We use baselines from [8]: a naive Bayes classifier that operates on bigram counts (BINB); a shallow RNN [6, 7] that learns the word vectors from the supervised data and uses tanh units, in contrast to our shallow RNNs (RNN); a matrix-vector RNN in which every word is assigned a matrix-vector pair instead of a vector, and composition is defined with matrix-vector multiplications (MV-RNN) [16]; and the multiplicative recursive net (or recursive neural tensor network), in which the composition is defined as a bilinear tensor product (RNTN) [8].
Additionally, we use a method that is capable of generating representations for larger pieces of text (PARAGRAPH VECTORS) [17], and the dynamic convolutional neural network (DCNN) [18]. We use the previously published results for comparison, using the same training-development-test partitioning of the data.

Activation Units. For the output layer, we employ the standard softmax activation: g(x)_i = e^{x_i} / ∑_j e^{x_j}. For the hidden layers we use the rectifier linear activation: f(x) = max{0, x}. Experimentally, rectifier activation gives better performance, faster convergence, and sparse representations. Previous work with rectifier units reported good results when training deep neural networks, with no pre-training step [15].

Word Vectors. In all of our experiments, we keep the word vectors fixed and do not fine-tune them, for simplicity of our models. We use the publicly available 300-dimensional word vectors of [19], trained on part of the Google News dataset (∼100B words).

Regularizer. For regularization of the networks, we use the recently proposed dropout technique, in which we randomly set entries of hidden representations to 0 with a probability called the dropout rate [20]. The dropout rate is tuned over the development set out of {0, 0.1, 0.3, 0.5}. Dropout prevents learned features from co-adapting, and it has been reported to yield good results when training deep neural networks [21, 22]. Note that dropped units are shared: for a single sentence and a layer, we drop the same units of the hidden layer at each node. Since we are using a non-saturating activation function, intermediate representations are not bounded from above; hence they can explode even with a strong regularization over the connections, which is confirmed by preliminary experiments. Therefore, for stability reasons, we use a small fixed additional L2 penalty (10^−5) over both the connection weights and the unit activations, which resolves the explosion problem.

Network Training.
We use stochastic gradient descent with a fixed learning rate (0.01), with a diagonal variant of AdaGrad for parameter updates [23]. AdaGrad yields smooth and fast convergence. Furthermore, it can be seen as a natural tuning of individual learning rates per parameter. This is beneficial in our case, since different layers have gradients at different scales because of the scale of the non-saturating activations at each layer (which grows larger at higher layers). We update weights after minibatches of 20 sentences, and run 200 epochs of training. Recursive weights within a layer (W^{hh}) are initialized as 0.5I + ϵ, where I is the identity matrix and ϵ is a small uniformly random noise. This means that initially, the representation of each node is approximately the mean of its two children. All other weights are initialized as ϵ. We experiment with networks of various sizes; however, we keep the same number of hidden units across the multiple layers of a single RNN. When we increase the depth, we keep the overall number of parameters constant; therefore deeper networks become narrower. We do not employ a pre-training step; deep architectures are trained with the supervised error signal, even when the output layer is connected to only the final hidden layer.

ℓ   |h|   Fine-grained   Binary
1   50    46.1           85.3
2   45    48.0           85.5
3   40    43.1           83.5
1   340   48.1           86.4
2   242   48.3           86.4
3   200   49.5           86.7
4   174   49.8           86.6
5   157   49.0           85.5

(a) Results for RNNs. ℓ and |h| denote the depth and width of the networks, respectively.

Method              Fine-grained   Binary
Bigram NB           41.9           83.1
RNN                 43.2           82.4
MV-RNN              44.4           82.9
RNTN                45.7           85.4
DCNN                48.5           86.8
Paragraph Vectors   48.7           87.8
DRNN (4, 174)       49.8           86.6

(b) Results for previous work and our best model (DRNN).

Table 1: Accuracies for 5-class predictions over SST, at the sentence level.

Additionally, we employ early stopping: out of all iterations, the model with the best development set performance is picked as the final model to be evaluated.
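The diagonal AdaGrad update and the 0.5I + ϵ initialization described above can be sketched generically (hypothetical Python, not the authors' implementation):

```python
import numpy as np

def adagrad_step(param, grad, hist, lr=0.01, eps=1e-8):
    # Diagonal AdaGrad: accumulate squared gradients and scale the fixed
    # learning rate per parameter by 1 / sqrt(accumulated history).
    hist += grad ** 2
    param -= lr * grad / (np.sqrt(hist) + eps)
    return param, hist

# recursive-weight initialization described in the text: 0.5 I + small noise
rng = np.random.default_rng(0)
d = 4
W_hh = 0.5 * np.eye(d) + 0.01 * rng.standard_normal((d, d))

# toy usage: descend on f(w) = ||w||^2 / 2, whose gradient is w itself
w, hist = np.array([1.0, -2.0]), np.zeros(2)
for _ in range(500):
    w, hist = adagrad_step(w, w.copy(), hist)
```

Because the history only grows, early large gradients permanently damp a parameter's effective learning rate, which is what makes the scheme well suited to layers whose gradients live at different scales.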
3.2 Results

Quantitative Evaluation. We evaluate on both fine-grained sentiment score prediction (5-class classification) and binary (positive-negative) classification. For binary classification, we do not train a separate network; we use the network trained for fine-grained prediction, and then decode the 5-dimensional posterior probability vector into a binary decision, which also effectively discards the neutral cases from the test set. This approach solves a harder problem, so there might be room for improvement on the binary results by separately training a binary classifier. Experimental results of our models and previous work are given in Table 1. Table 1a shows our models with varying depth and width (while keeping the overall number of parameters constant within each group). ℓ denotes the depth and |h| the width of the networks (i.e. the number of hidden units in a single hidden layer). We observe that shallow RNNs get an improvement just by using pretrained word vectors, rectifiers, and dropout, compared to previous work (48.1 vs. 43.2 for the fine-grained task; see our shallow RNN with |h| = 340 in Table 1a and the RNN from [8] in Table 1b). This suggests a validation for untying leaves and internal nodes in the RNN as described in Section 2.2, and for using pre-trained word vectors. Results on RNNs of various depths and sizes show that deep RNNs outperform single-layer RNNs with approximately the same number of parameters, which quantitatively validates the benefits of deep networks over shallow ones (see Table 1a). We see a consistent improvement as we use deeper and narrower networks, up to a certain depth. The 2-layer RNN for the smaller networks and the 4-layer RNN for the larger networks give the best performance with respect to the fine-grained score. Increasing the depth further starts to cause a degradation. An explanation for this might be the decrease in width dominating the gains from an increased depth.
Furthermore, our best deep RNN outperforms previous work on both the fine-grained and binary prediction tasks, and outperforms Paragraph Vectors on the fine-grained score, achieving a new state-of-the-art (see Table 1b). We attribute an important part of the improvement to dropout. In a preliminary experiment with simple L2 regularization, a 3-layer RNN with 200 hidden units per layer achieved a fine-grained score of 46.06 (not shown here), compared to our current score of 49.5 with the dropout regularizer.

Input Perturbation. In order to assess the scale at which different layers operate, we investigate the response of all layers to a perturbation in the input. One way of perturbing the input might be the addition of some noise; however, with a large amount of noise, the resulting noisy input vector may lie outside the manifold of meaningful word vectors. Therefore, instead, we simply pick a word from the sentence that carries positive sentiment, and alter it to a set of words whose sentiment values shift toward the negative direction.

Figure 3: An example sentence ("Roger Dodger is one of the [best] variations on this theme") with its parse tree (left) and the response measure of every layer (right) in a three-layered deep recursive net. We change the word "best" in the input to one of the words "coolest", "good", "average", "bad", "worst" (denoted by blue, light blue, black, orange and red, respectively) and measure the change of hidden layer representations in one-norm for every node in the path.
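The perturbation probe itself is simple to sketch (a self-contained toy in Python; the tree, weights, and word vectors are all made up, and only the one-norm response measurement mirrors the experiment):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
W_L = 0.5 * np.eye(d) + 0.05 * rng.standard_normal((d, d))
W_R = 0.5 * np.eye(d) + 0.05 * rng.standard_normal((d, d))

def path_states(leaf_vec, context_vecs):
    # Hidden states along the leaf-to-root path of a toy right-branching tree:
    # the (possibly perturbed) leaf is composed with one context vector per level.
    h, states = leaf_vec, []
    for ctx in context_vecs:
        h = np.tanh(W_L @ h + W_R @ ctx)
        states.append(h)
    return states

context = [rng.standard_normal(d) for _ in range(5)]
best = rng.standard_normal(d)
worst = best + 0.5 * rng.standard_normal(d)   # stand-in for a substituted word

# one-norm response at each node on the path (cf. Figure 3)
response = [np.abs(a - b).sum()
            for a, b in zip(path_states(best, context), path_states(worst, context))]
```

Plotting such a response curve per layer is exactly how the per-layer traces in Figure 3 are produced.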
Query: "charming results"
    Layer 1                                  Layer 2                           Layer 3
1   charming ,                               interesting results               charming chemistry
2   charming and                             riveting performances             perfect ingredients
3   appealingly manic and energetic          gripping performances             brilliantly played
4   refreshingly adult take on adultery      joyous documentary                perfect medium
5   unpretentious , sociologically pointed   an amazing slapstick instrument   engaging film

Query: "not great"
    Layer 1           Layer 2                         Layer 3
1   as great          nothing good                    not very informative
2   a great           not compelling                  not really funny
3   is great          only good                       not quite satisfying
4   Is n't it great   too great                       thrashy fun
5   be great          completely numbing experience   fake fun

Table 2: Example shortest phrases and their nearest neighbors across three layers.

In Figure 3, we give an example sentence, "Roger Dodger is one of the best variations on this theme", with its parse tree. We change the word "best" into the set of words "coolest", "good", "average", "bad", "worst", and measure the response of this change along the path that connects the leaf to the root (labeled from 1 to 8). Note that all other nodes retain the same representations, since a node is completely determined by its subtree. For each node, the response is measured as the change of its hidden representation in one-norm, for each of the three layers in the network, with respect to the hidden representations using the original word ("best"). In the first layer (bottom) we observe a shared trend of change as we go up in the tree. Note that "good" and "bad" are almost on top of each other, which suggests that there is not necessarily enough information captured in the first layer yet to make the correct sentiment decision. In the second layer (middle) an interesting phenomenon occurs: paths with "coolest" and "good" start close together, as do "worst" and "bad". However, as we move up in the tree, the paths with "worst" and "coolest" come closer together, as do the paths with "good" and "bad". This suggests that the second layer remembers the intensity of the sentiment, rather than its direction.
The third layer (top) is the most consistent one as we traverse upward in the tree, and correct sentiment decisions persist across the path.

Nearest Neighbor Phrases. In order to evaluate the different notions of similarity in the meaning space captured by multiple layers, we look at nearest neighbors of short phrases. For a three-layer deep recursive neural network we compute hidden representations for all phrases in our data. Then, for a given phrase, we find its nearest neighbor phrases at each layer, using the one-norm distance. Two examples are given in Table 2. For the first layer, we observe that similarity is dominated by one of the words that is composed, i.e. "charming" for the phrase "charming results" (and "appealing", "refreshing" for some neighbors), and "great" for the phrase "not great". This effect is so strong that it even discards the negation in the second case: "as great" and "is great" are considered similar to "not great". In the second layer, we observe a more semantically diverse set of phrases. On the other hand, this layer seems to take syntactic similarity more into account: in the first example, the nearest neighbors of "charming results" consist of adjective-noun combinations that also exhibit some similarity in meaning (e.g. "interesting results", "riveting performances"). The account is similar for "not great": its nearest neighbors are adverb-adjective combinations in which the adjectives exhibit some semantic overlap (e.g. "good", "compelling"). Sentiment is still not properly captured in this layer, however, as seen with the neighbor "too great" for the phrase "not great". In the third and final layer, we see a higher level of semantic similarity, in the sense that phrases are mostly related to one another in terms of sentiment. Note that since this is a supervised task on sentiment detection, it is sufficient for the network to capture only the sentiment (and how it is composed in context) in the last layer.
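The nearest-neighbor search behind Table 2 reduces to an L1 distance over per-layer hidden representations. The following sketch is not the paper's code; the function name and the toy vectors are made up for illustration, but the distance measure (one-norm) matches the one stated above.

```python
import numpy as np

def nearest_neighbors(query_vec, phrase_vecs, phrases, k=5):
    """Return the k phrases whose hidden representations are closest to
    query_vec in one-norm, the distance used for Table 2."""
    dists = np.abs(np.asarray(phrase_vecs) - np.asarray(query_vec)).sum(axis=1)
    return [phrases[i] for i in np.argsort(dists)[:k]]

# Toy usage with made-up 4-dimensional "hidden representations".
phrases = ["interesting results", "riveting performances", "too great", "fake fun"]
vecs = np.array([[0.9, 0.1, 0.0, 0.2],
                 [0.8, 0.2, 0.1, 0.1],
                 [0.1, 0.9, 0.8, 0.0],
                 [0.0, 0.8, 0.9, 0.1]])
neighbors = nearest_neighbors([1.0, 0.0, 0.0, 0.2], vecs, phrases, k=2)
# neighbors == ["interesting results", "riveting performances"]
```

Running this once per layer, with that layer's representations as `phrase_vecs`, yields one column of neighbors per layer as in Table 2.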
Therefore, we should expect an even more diverse set of neighbors, connected only through sentiment.

4 Conclusion

In this work we propose the deep recursive neural network, constructed by stacking multiple recursive layers on top of each other. We apply this architecture to the task of fine-grained sentiment classification using binary parse trees as the structure. We empirically evaluate our models against shallow recursive nets. Additionally, we compare with previous work on the task, including a multiplicative RNN and the more recent Paragraph Vectors method. Our experiments show that deep models outperform their shallow counterparts of the same size. Furthermore, the deep RNN outperforms the baselines, achieving state-of-the-art performance on the task. We further investigate our models qualitatively by performing input perturbations and examining the nearest neighbors of given example phrases. These results suggest that adding depth to a recursive net is different from adding width: each layer captures a different aspect of compositionality, and phrase representations focus on different aspects of meaning at each layer, as seen in the nearest neighbor examples. Since our task was supervised, the learned representations are focused on sentiment, as in previous work. An important future direction is the application of the deep RNN to a broader, more general task, even an unsupervised one (e.g. as in [9]). This might provide better insight into the operation of the different layers and their contributions, under a more general notion of composition. The effects of fine-tuning the word vectors on the performance of the deep RNN are also open to investigation.

Acknowledgments

This work was supported in part by NSF grant IIS-1314778 and DARPA DEFT FA8750-13-2-0015.
The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of NSF, DARPA or the U.S. Government.

References

[1] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[2] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. In Advances in Neural Information Processing Systems, 2001.
[3] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, pages 160–167. ACM, 2008.
[4] Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, November 2011.
[5] Jeffrey L Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990.
[6] Richard Socher, Cliff C Lin, Andrew Ng, and Chris Manning. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 129–136, 2011.
[7] Richard Socher, Jeffrey Pennington, Eric H Huang, Andrew Y Ng, and Christopher D Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics, 2011.
[8] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '13, 2013.
[9] Richard Socher, Eric H Huang, Jeffrey Pennington, Christopher D Manning, and Andrew Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809, 2011.
[10] Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, 1992.
[11] Salah El Hihi and Yoshua Bengio. Hierarchical recurrent neural networks for long-term dependencies. In Advances in Neural Information Processing Systems, pages 493–499, 1995.
[12] Michiel Hermans and Benjamin Schrauwen. Training and analysing deep recurrent neural networks. In Advances in Neural Information Processing Systems, pages 190–198, 2013.
[13] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
[14] Christoph Goller and Andreas Kuchler. Learning task-dependent distributed representations by backpropagation through structure. In Proceedings of the 1996 IEEE International Conference on Neural Networks, volume 1, pages 347–352. IEEE, 1996.
[15] Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, JMLR W&CP volume 15, pages 315–323, 2011.
[16] Richard Socher, Brody Huval, Christopher D Manning, and Andrew Y Ng. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Association for Computational Linguistics, 2012.
[17] Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. arXiv preprint arXiv:1405.4053, 2014.
[18] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, June 2014.
[19] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119, 2013.
[20] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012.
[21] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, volume 1, page 4, 2012.
[22] George E Dahl, Tara N Sainath, and Geoffrey E Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8609–8613. IEEE, 2013.
[23] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
Predictive Entropy Search for Efficient Global Optimization of Black-box Functions

José Miguel Hernández-Lobato (jmh233@cam.ac.uk), University of Cambridge
Matthew W. Hoffman (mwh30@cam.ac.uk), University of Cambridge
Zoubin Ghahramani (zoubin@eng.cam.ac.uk), University of Cambridge

Abstract

We propose a novel information-theoretic approach for Bayesian optimization called Predictive Entropy Search (PES). At each iteration, PES selects the next evaluation point that maximizes the expected information gained with respect to the global maximum. PES codifies this intractable acquisition function in terms of the expected reduction in the differential entropy of the predictive distribution. This reformulation allows PES to obtain approximations that are both more accurate and more efficient than other alternatives such as Entropy Search (ES). Furthermore, PES can easily perform a fully Bayesian treatment of the model hyperparameters, while ES cannot. We evaluate PES in both synthetic and real-world applications, including optimization problems in machine learning, finance, biotechnology, and robotics. We show that the increased accuracy of PES leads to significant gains in optimization performance.

1 Introduction

Bayesian optimization techniques form a successful approach for optimizing black-box functions [5]. The goal of these methods is to find the global maximizer of a nonlinear and generally nonconvex function f whose derivatives are unavailable. Furthermore, the evaluations of f are usually corrupted by noise, and the process that queries f can be computationally or economically very expensive. To address these challenges, Bayesian optimization devotes additional effort to modeling the unknown function f and its behavior. These additional computations aim to minimize the number of evaluations needed to find the global optimum. Optimization problems are widespread in science and engineering, and as a result so are Bayesian approaches to this problem.
Bayesian optimization has successfully been used in robotics to adjust the parameters of a robot's controller to maximize gait speed and smoothness [16], as well as for parameter tuning in computer graphics [6]. Another example application, in drug discovery, is to find the chemical derivative of a particular molecule that best treats a given disease [20]. Finally, Bayesian optimization can also be used to find optimal hyper-parameter values for statistical [29] and machine learning techniques [24].

As described above, we are interested in finding the global maximizer x⋆ = arg max_{x∈X} f(x) of a function f over some bounded domain, typically X ⊂ R^d. We assume that f(x) can only be evaluated via queries to a black-box that provides noisy outputs of the form y_i ∼ N(f(x_i), σ²). We note, however, that our framework can be extended to other non-Gaussian likelihoods. In this setting, we describe a sequential search algorithm that, after n iterations, proposes to evaluate f at some location x_{n+1}. To make this decision the algorithm conditions on all previous observations D_n = {(x_1, y_1), ..., (x_n, y_n)}. After N iterations the algorithm makes a final recommendation x̃_N for the global maximizer of the latent function f.

We take a Bayesian approach to the problem described above and use a probabilistic model for the latent function f to guide the search and to select x̃_N. In this work we use a zero-mean Gaussian process (GP) prior for f [22].

Algorithm 1 Generic Bayesian optimization
Input: a black-box with unknown mean f
1: for n = 1, ..., N do
2:   select x_n = arg max_{x∈X} α_{n−1}(x)
3:   query the black-box at x_n to obtain y_n
4:   augment data D_n = D_{n−1} ∪ {(x_n, y_n)}
5: end for
6: return x̃_N = arg max_{x∈X} μ_N(x)

Algorithm 2 PES acquisition function
Input: a candidate x; data D_n
1: sample M hyperparameter values {ψ^(i)}
2: for i = 1, ..., M do
3:   sample f^(i) ∼ p(f | D_n, φ, ψ^(i))
4:   set x⋆^(i) ← arg max_{x∈X} f^(i)(x)
5:   compute m_0^(i), V_0^(i) and m̃^(i), ṽ^(i)
6:   compute v_n^(i)(x) and v_n^(i)(x | x⋆^(i))
7: end for
8: return α_n(x) as in (10)
(Steps 1 to 5 can be precomputed, independently of the candidate x.)

This prior is specified by a positive-definite kernel function k(x, x′). Given any finite collection of points {x_1, ..., x_n}, the values of f at these points are jointly zero-mean Gaussian with covariance matrix K_n, where [K_n]_{ij} = k(x_i, x_j). For the Gaussian likelihood described above, the vector of concatenated observations y_n is also jointly Gaussian with zero mean. Therefore, at any location x, the latent function f(x) conditioned on past observations D_n is Gaussian with marginal mean μ_n(x) and variance v_n(x) given by

μ_n(x) = k_n(x)^T (K_n + σ²I)^{−1} y_n ,
v_n(x) = k(x, x) − k_n(x)^T (K_n + σ²I)^{−1} k_n(x) ,   (1)

where k_n(x) is a vector of cross-covariance terms between x and {x_1, ..., x_n}.

Bayesian optimization techniques use the above predictive distribution p(f(x) | D_n) to guide the search for the global maximizer x⋆. In particular, p(f(x) | D_n) is used during the computation of an acquisition function α_n(x) that is optimized at each iteration to determine the next evaluation location x_{n+1}. This process is shown in Algorithm 1. Intuitively, the acquisition function α_n(x) should be high in areas where the maximum is most likely to lie given the current data. However, α_n(x) should also encourage exploration of the search space to guarantee that the recommendation x̃_N is a global optimum of f, not just a global optimum of the posterior mean. Several acquisition functions have been proposed in the literature. Some examples are the probability of improvement (PI) [14], the expected improvement (EI) [19, 13] and upper confidence bounds (UCB) [26]. Alternatively, one can combine several of these acquisition functions [10].
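The GP predictive equations in (1) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation; it assumes an isotropic squared-exponential kernel (the ARD version appears in Section 3), and the function names are hypothetical.

```python
import numpy as np

def sq_exp_kernel(A, B, gamma2=1.0, ell2=0.1):
    """Squared-exponential kernel k(x, x') = γ² exp(−0.5 ||x − x'||² / ℓ²)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return gamma2 * np.exp(-0.5 * d2 / ell2)

def gp_posterior(X, y, Xstar, sigma2=1e-6, **kw):
    """Marginal posterior mean μ_n(x) and variance v_n(x) of equation (1)."""
    Kn = sq_exp_kernel(X, X, **kw)
    kx = sq_exp_kernel(X, Xstar, **kw)                  # cross-covariances k_n(x)
    solve = np.linalg.solve(Kn + sigma2 * np.eye(len(X)),
                            np.column_stack([y, kx]))   # (K_n + σ²I)⁻¹ [y | k_n]
    mu = kx.T @ solve[:, 0]
    v = sq_exp_kernel(Xstar, Xstar, **kw).diagonal() - np.einsum('ij,ij->j', kx, solve[:, 1:])
    return mu, v

# With near-noiseless data the posterior interpolates the observations;
# far from the data it reverts to the prior (mean 0, variance γ²).
X = np.array([[0.2], [0.8]])
y = np.array([1.0, -1.0])
mu, v = gp_posterior(X, y, X)
```

A single linear solve is shared between the mean and variance terms, mirroring how both expressions in (1) reuse (K_n + σ²I)⁻¹.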
The acquisition functions described above are based on probabilistic measures of improvement (PI and EI) or on optimistic estimates of the latent function (UCB), which implicitly trade off between exploiting the posterior mean and exploring based on the uncertainty. An alternative approach, introduced by [28], proposes maximizing the expected posterior information gain about the global maximizer x⋆, evaluated over a grid in the input space. A similar strategy was later employed by [9], which, although it requires no such grid, relies on a difficult-to-evaluate approximation. In Section 2 we derive a rearrangement of this information-based acquisition function which leads to a more straightforward approximation that we call Predictive Entropy Search (PES). In Section 3 we show empirically that our approximation is more accurate than that of [9]. We evaluate this claim on both synthetic and real-world problems and further show that this leads to real gains in performance.

2 Predictive entropy search

We propose to follow the information-theoretic method for active data collection described in [17]. We are interested in maximizing information about the location x⋆ of the global maximum, whose posterior distribution is p(x⋆ | D_n). Our current information about x⋆ can be measured in terms of the negative differential entropy of p(x⋆ | D_n). Therefore, our strategy is to select x_{n+1} to maximize the expected reduction in this quantity. The corresponding acquisition function is

α_n(x) = H[p(x⋆ | D_n)] − E_{p(y|D_n,x)}[ H[p(x⋆ | D_n ∪ {(x, y)})] ] ,   (2)

where H[p(x)] = −∫ p(x) log p(x) dx represents the differential entropy of its argument, and the expectation above is taken with respect to the posterior predictive distribution of y given x. The exact evaluation of (2) is infeasible in practice. The main difficulties are that i) p(x⋆ | D_n ∪ {(x, y)}) must be computed for many different values of x and y during the optimization of (2), and ii) the entropy computations themselves are not analytical.
In practice, a direct evaluation of (2) is only possible after performing many approximations [9]. To avoid this, we follow the approach described in [11] by noting that (2) can be equivalently written as the mutual information between x⋆ and y given D_n. Since the mutual information is a symmetric function, α_n(x) can be rewritten as

α_n(x) = H[p(y | D_n, x)] − E_{p(x⋆|D_n)}[ H[p(y | D_n, x, x⋆)] ] ,   (3)

where p(y | D_n, x, x⋆) is the posterior predictive distribution for y given the observed data D_n and the location of the global maximizer of f. Intuitively, conditioning on the location x⋆ pushes the posterior predictions up in locations around x⋆ and down in regions away from x⋆. Note that, unlike the previous formulation, this objective is based on the entropies of predictive distributions, which are analytic or can easily be approximated, rather than on the entropies of distributions over x⋆, whose approximation is more challenging. The first term in (3) can be computed analytically using the posterior marginals for f(x) in (1), that is, H[p(y | D_n, x)] = 0.5 log[2πe (v_n(x) + σ²)], where we add σ² to v_n(x) because y is obtained by adding Gaussian noise with variance σ² to f(x). The second term, on the other hand, must be approximated. We first approximate the expectation in (3) by averaging over samples x⋆^(i) drawn approximately from p(x⋆ | D_n). For each of these samples, we then approximate the corresponding entropy H[p(y | D_n, x, x⋆^(i))] using expectation propagation [18]. The code for all these operations is publicly available at http://jmhl.org.

2.1 Sampling from the posterior over global maxima

In this section we show how to approximately sample from the conditional distribution of the global maximizer x⋆ given the observed data D_n, that is,

p(x⋆ | D_n) = p( f(x⋆) = max_{x∈X} f(x) | D_n ) .   (4)

If the domain X is restricted to some finite set of m points, the latent function f takes the form of an m-dimensional vector f.
The probability that the ith element of f is optimal can then be written as ∫ p(f | D_n) ∏_{j≤m} I[f_i ≥ f_j] df. This suggests the following generative process: i) draw a sample from the posterior distribution p(f | D_n), and ii) return the index of the maximum element in the sampled vector. This process is known as Thompson sampling or probability matching when used as an arm-selection strategy in multi-armed bandits [8]. This same approach could be used for sampling the maximizer over a continuous domain X. At first glance this would require constructing an infinite-dimensional object representing the function f. To avoid this, one could sequentially construct f while it is being optimized. However, evaluating such an f would ultimately have cost O(m³), where m is the number of function evaluations necessary to find the optimum. Instead, we propose to sample and optimize an analytic approximation to f. We briefly derive this approximation below; more detail is given in Appendix A. Given a shift-invariant kernel k, Bochner's theorem [4] asserts the existence of its Fourier dual s(w), which is equal to the spectral density of k. Letting p(w) = s(w)/α be the associated normalized density, we can write the kernel as the expectation

k(x, x′) = α E_{p(w)}[e^{−iw^T(x−x′)}] = 2α E_{p(w,b)}[cos(w^T x + b) cos(w^T x′ + b)] ,   (5)

where b ∼ U[0, 2π]. Let φ(x) = √(2α/m) cos(Wx + b) denote an m-dimensional feature mapping, where W and b consist of m stacked samples from p(w, b). The kernel k can then be approximated by the inner product of these features, k(x, x′) ≈ φ(x)^T φ(x′). This approach was used by [21] as an approximation method in the context of kernel methods. The feature mapping φ(x) allows us to approximate the Gaussian process prior for f with a linear model f(x) = φ(x)^T θ, where θ ∼ N(0, I) is standard Gaussian. By conditioning on D_n, the posterior for θ is also multivariate Gaussian, θ | D_n ∼ N(A^{−1}Φ^T y_n, σ²A^{−1}), where A = Φ^TΦ + σ²I and Φ^T = [φ(x_1) ... φ(x_n)].
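The random-feature construction above can be sketched end to end for a 1-D problem. This is an illustrative sketch under stated assumptions, not the authors' code: it uses the squared-exponential kernel (whose normalized spectral density is Gaussian, as noted in Section 3), maximizes the sampled function over a dense grid rather than with a proper optimizer, and all function names are hypothetical.

```python
import numpy as np

def sample_argmax_rff(X, y, m=500, gamma2=1.0, ell2=0.1, sigma2=1e-6, seed=0):
    """Draw one approximate sample x⋆ ~ p(x⋆ | D_n) for a 1-D problem on [0, 1].

    For the squared-exponential kernel the normalized spectral density p(w) is
    Gaussian, so φ(x) = sqrt(2α/m) cos(Wx + b) with α = γ² approximates the
    kernel, k(x, x') ≈ φ(x)ᵀφ(x').  Sampling θ ~ N(A⁻¹Φᵀy, σ²A⁻¹) with
    A = ΦᵀΦ + σ²I yields the analytic sample f(x) = φ(x)ᵀθ, which we then
    maximize over a dense grid for simplicity.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.normal(scale=1.0 / np.sqrt(ell2), size=(m, d))
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    feats = lambda Z: np.sqrt(2.0 * gamma2 / m) * np.cos(Z @ W.T + b)
    Phi = feats(X)
    A = Phi.T @ Phi + sigma2 * np.eye(m)
    mean = np.linalg.solve(A, Phi.T @ y)
    # If z ~ N(0, I) and A = LLᵀ, then L⁻ᵀz has covariance A⁻¹.
    L = np.linalg.cholesky(A)
    theta = mean + np.sqrt(sigma2) * np.linalg.solve(L.T, rng.normal(size=m))
    grid = np.linspace(0.0, 1.0, 401)[:, None]
    return float(grid[np.argmax(feats(grid) @ theta), 0])

# Posterior mass for x⋆ should concentrate near the peak of the data.
X = np.linspace(0.0, 1.0, 15)[:, None]
y = np.exp(-(X[:, 0] - 0.3) ** 2 / 0.02)
x_star = sample_argmax_rff(X, y)
```

Repeated calls with different seeds yield the i.i.d. samples x⋆^(i) used to approximate the expectation in (3).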
Let φ^(i) and θ^(i) be a random set of features and the corresponding posterior weights, both sampled according to the generative process given above. They can then be used to construct the function f^(i)(x) = φ^(i)(x)^T θ^(i), which is an approximate posterior sample of f, albeit one with a finite parameterization. We can then maximize this function to obtain x⋆^(i) = arg max_{x∈X} f^(i)(x), which is approximately distributed according to p(x⋆ | D_n). Note that for early iterations, when n < m, we can efficiently sample θ^(i) with cost O(n²m) using the method described in Appendix B.2 of [23]. This allows us to use a large number of features in φ^(i)(x).

2.2 Approximating the predictive entropy

We now show how to approximate H[p(y | D_n, x, x⋆)] in (3). Note that we can write the argument to H in this expression as p(y | D_n, x, x⋆) = ∫ p(y | f(x)) p(f(x) | D_n, x⋆) df(x). Here p(f(x) | D_n, x⋆) is the posterior distribution on f(x) given D_n and the location x⋆ of the global maximizer of f. When the likelihood p(y | f(x)) is Gaussian, p(f(x) | D_n) is analytically tractable since it is the predictive distribution of a Gaussian process. However, by further conditioning on the location x⋆ of the global maximizer we are introducing additional constraints, namely that f(z) ≤ f(x⋆) for all z ∈ X. These constraints make p(f(x) | D_n, x⋆) intractable. To circumvent this difficulty, we instead use the following simplified constraints:

C1. x⋆ is a local maximum. This is achieved by letting ∇f(x⋆) = 0 and ensuring that ∇²f(x⋆) is negative definite. We further assume that the non-diagonal elements of ∇²f(x⋆), denoted upper[∇²f(x⋆)], are known; for example, they could all be zero. This simplifies the negative-definite constraint. We denote by C1.1 the constraint given by ∇f(x⋆) = 0 and upper[∇²f(x⋆)] = 0, and by C1.2 the constraint that forces the elements of diag[∇²f(x⋆)] to be negative.

C2. f(x⋆) is larger than past observations. We also assume that f(x⋆) ≥ f(x_i) for all i ≤ n.
However, we only observe f(x_i) noisily via y_i. To avoid making inference on these latent function values, we approximate the above hard constraints with the soft constraint f(x⋆) > y_max + ϵ, where ϵ ∼ N(0, σ²) and y_max is the largest y_i seen so far.

C3. f(x) is smaller than f(x⋆). This simplified constraint only conditions on the given x, rather than requiring f(z) ≤ f(x⋆) for all z ∈ X.

We incorporate these simplified constraints into p(f(x) | D_n) to approximate p(f(x) | D_n, x⋆). This is achieved by multiplying p(f(x) | D_n) with specific factors that encode the above constraints. In what follows we briefly show how to construct these factors; more detail is given in Appendix B. Consider the latent variable z = [f(x⋆); diag[∇²f(x⋆)]]. To incorporate constraint C1.1 we can condition on the data and on the "observations" given by the constraints ∇f(x⋆) = 0 and upper[∇²f(x⋆)] = 0. Since f is distributed according to a GP, the joint distribution between z and these observations is multivariate Gaussian. The covariance between the noisy observations y_n and the extra noise-free derivative observations can be easily computed [25]. The resulting conditional distribution is also multivariate Gaussian, with mean m_0 and covariance V_0. These computations are similar to those performed in (1). Constraints C1.2 and C2 can then be incorporated by writing

p(z | D_n, C1, C2) ∝ Φ_{σ²}(f(x⋆) − y_max) [ ∏_{i=1}^d I([∇²f(x⋆)]_{ii} ≤ 0) ] N(z | m_0, V_0) ,   (6)

where Φ_{σ²} is the cdf of a zero-mean Gaussian distribution with variance σ². The first new factor in this expression guarantees that f(x⋆) > y_max + ϵ, where we have marginalized ϵ out, and the second set of factors guarantees that the entries in diag[∇²f(x⋆)] are negative. Later integrals that make use of p(z | D_n, C1, C2), however, will not admit a closed-form expression. As a result we compute a Gaussian approximation q(z) to this distribution using Expectation Propagation (EP) [18].
The resulting algorithm is similar to the implementation of EP for binary classification with Gaussian processes [22]. EP approximates each non-Gaussian factor in (6) with a Gaussian factor whose mean and variance are m̃_i and ṽ_i, respectively. The EP approximation can then be written as q(z) ∝ [∏_{i=1}^{d+1} N(z_i | m̃_i, ṽ_i)] N(z | m_0, V_0). Note that these computations have so far not depended on x, so we can compute {m_0, V_0, m̃, ṽ} once and store them for later use, where m̃ = (m̃_1, ..., m̃_{d+1}) and ṽ = (ṽ_1, ..., ṽ_{d+1}). We now describe how to compute the predictive variance of some latent function value f(x) given these constraints. Let f = [f(x); f(x⋆)] be the vector given by the concatenation of the values of the latent function at x and x⋆. The joint distribution between f, z, the evaluations y_n collected so far, and the derivative "observations" ∇f(x⋆) = 0 and upper[∇²f(x⋆)] = 0 is multivariate Gaussian. Using q(z), we then obtain the following approximation:

p(f | D_n, C1, C2) ≈ ∫ p(f | z, D_n, C1.1) q(z) dz = N(f | m_f, V_f) .   (7)

Implicitly we are assuming above that f depends on our observations and constraint C1.1, but is independent of C1.2 and C2 given z. The computations necessary to obtain m_f and V_f are similar
to those used above and in (1). The required quantities are similar to the ones used by EP to make predictions in the Gaussian process binary classifier [22]. We can then incorporate C3 by multiplying N(f | m_f, V_f) with a factor that guarantees f(x) < f(x⋆). The predictive distribution for f(x) given D_n and all the constraints can be approximated as

p(f(x) | D_n, C1, C2, C3) ≈ Z^{−1} ∫ I(f_1 < f_2) N(f | m_f, V_f) df_2 ,   (8)

where Z is a normalization constant. The variance of the right-hand side of (8) is given by

v_n(x | x⋆) = [V_f]_{1,1} − v^{−1} β(β + α) { [V_f]_{1,1} − [V_f]_{1,2} }² ,   (9)

where v = [−1, 1]^T V_f [−1, 1], α = m/√v, m = [−1, 1]^T m_f, β = φ(α)/Φ(α), and φ(·) and Φ(·) are the standard Gaussian density function and cdf, respectively. By further approximating (8) by a Gaussian distribution with the same mean and variance, we can write the entropy as H[p(y | D_n, x, x⋆)] ≈ 0.5 log[2πe (v_n(x | x⋆) + σ²)]. The computation of (9) can be numerically unstable when v is very close to zero. This occurs when [V_f]_{1,1} is very similar to [V_f]_{1,2}. To avoid these numerical problems, we multiply [V_f]_{1,2} by the largest 0 ≤ κ ≤ 1 that guarantees v > 10^{−10}. This can be understood as slightly reducing the amount of dependence between f(x) and f(x⋆) when x is very close to x⋆. Finally, fixing upper[∇²f(x⋆)] to be zero can also produce poor predictions when the actual f does not satisfy this constraint. To avoid this, we instead fix this quantity to upper[∇²f^(i)(x⋆)], where f^(i) is the ith sample function optimized in Section 2.1 to sample x⋆^(i).

2.3 Hyperparameter learning and the PES acquisition function

We now show how the previous approximations are integrated to compute the acquisition function used by predictive entropy search (PES). This acquisition function performs a formal treatment of the hyperparameters. Let ψ denote a vector of hyperparameters, which includes any kernel parameters as well as the noise variance σ². Let p(ψ | D_n) ∝ p(ψ) p(D_n | ψ) denote the posterior distribution over these parameters, where p(ψ) is a hyperprior and p(D_n | ψ) is the GP marginal likelihood. For a fully Bayesian treatment of ψ we must marginalize the acquisition function (3) with respect to this posterior. The corresponding integral has no analytic expression and must be approximated using Monte Carlo. This approach is also taken in [24]. We draw M samples {ψ^(i)} from p(ψ | D_n) using slice sampling [27]. Let x⋆^(i) denote a sampled global maximizer drawn from p(x⋆ | D_n, ψ^(i)) as described in Section 2.1. Furthermore, let v_n^(i)(x) and v_n^(i)(x | x⋆^(i)) denote the predictive variances computed as described in Section 2.2 when the model hyperparameters are fixed to ψ^(i).
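Once m_f and V_f are available, the conditional variance (9) and the subsequent Gaussian entropy approximation are cheap to evaluate. The sketch below uses hypothetical helper names and is not the authors' code; it follows the truncated-Gaussian moment formula stated above.

```python
import math
import numpy as np

def conditional_variance(m_f, V_f):
    """Variance v_n(x | x⋆) of f(x) given f(x) < f(x⋆), equation (9).

    m_f, V_f: mean and covariance of f = [f(x); f(x⋆)] from the Gaussian
    approximation (7).  Uses β = φ(α)/Φ(α) with α = m/√v,
    m = [−1, 1]ᵀ m_f and v = [−1, 1]ᵀ V_f [−1, 1].
    """
    u = np.array([-1.0, 1.0])
    v = float(u @ V_f @ u)
    alpha = float(u @ m_f) / math.sqrt(v)
    pdf = math.exp(-0.5 * alpha ** 2) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(alpha / math.sqrt(2.0)))
    beta = pdf / cdf
    return V_f[0, 0] - beta * (beta + alpha) * (V_f[0, 0] - V_f[0, 1]) ** 2 / v

def entropy_term(v_cond, sigma2):
    """Gaussian entropy approximation H[p(y | D_n, x, x⋆)] ≈ 0.5 log(2πe (v + σ²))."""
    return 0.5 * math.log(2.0 * math.pi * math.e * (v_cond + sigma2))

# For f(x) and f(x⋆) i.i.d. standard normal, (9) gives exactly 1 − 1/π.
vc = conditional_variance(np.zeros(2), np.eye(2))
```

As expected, conditioning on f(x) < f(x⋆) shrinks the variance below its unconditioned value, which in turn lowers the entropy term subtracted in the acquisition function.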
We then write the marginalized acquisition function as

α_n(x) = (1/M) Σ_{i=1}^M { 0.5 log[v_n^(i)(x) + σ²] − 0.5 log[v_n^(i)(x | x⋆^(i)) + σ²] } .   (10)

Note that PES is effectively marginalizing the original acquisition function (2) over p(ψ | D_n). This is a significant advantage with respect to other methods that optimize the same information-theoretic acquisition function but do not marginalize over the hyperparameters. For example, the approach of [9] approximates (2) only for fixed ψ. The resulting approximation is computationally very expensive, and recomputing it to average over multiple samples from p(ψ | D_n) is infeasible in practice. Algorithm 2 shows pseudo-code for computing the PES acquisition function. Note that most of the computations necessary for evaluating (10) can be done independently of the input x, as noted in the pseudo-code. This initial cost is dominated by the matrix inversion necessary to pre-compute V_0 for each hyperparameter sample. The resulting complexity is O(M(n + d + d(d−1)/2)³). This cost can be reduced to O(M(n + d)³) by ignoring the derivative observations imposed on upper[∇²f(x⋆)] by constraint C1.1. Nevertheless, in the problems that we consider d is very small (less than 20). After these precomputations are done, each evaluation of (10) costs O(M(n + d + d(d−1)/2)).

3 Experiments

In our experiments, we use Gaussian process priors for f with squared-exponential kernels k(x, x′) = γ² exp{−0.5 Σ_i (x_i − x′_i)²/ℓ_i²}. The corresponding spectral density is zero-mean Gaussian with covariance diag([ℓ_i^{−2}]) and normalizing constant α = γ². The model hyperparameters are {γ, ℓ_1, ..., ℓ_d, σ²}. We use broad, uninformative Gamma hyperpriors.
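Given the per-sample variances, evaluating (10) is essentially a one-liner. The sketch below uses a hypothetical function name and assumes the variances have already been computed as described in Section 2.2.

```python
import numpy as np

def pes_acquisition(v_n, v_cond, sigma2):
    """PES acquisition function, equation (10): the average over M hyperparameter
    samples of 0.5 log(v_n^(i)(x) + σ²) − 0.5 log(v_n^(i)(x | x⋆^(i)) + σ²).

    v_n, v_cond: length-M arrays with the predictive variances of Section 2.2,
    one entry per hyperparameter sample ψ^(i).
    """
    v_n = np.asarray(v_n, dtype=float)
    v_cond = np.asarray(v_cond, dtype=float)
    return float(np.mean(0.5 * np.log(v_n + sigma2) - 0.5 * np.log(v_cond + sigma2)))

# Conditioning on x⋆ can only shrink the predictive variance, so the score is
# non-negative and is largest where knowing x⋆ is most informative.
score = pes_acquisition([1.0, 0.8], [0.5, 0.4], sigma2=0.0)
```

Each summand is the difference of two Gaussian differential entropies, so (10) is a Monte Carlo estimate of the expected information gain in (3), averaged over the hyperparameter posterior.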
Figure 1: Comparison of different estimates of the objective function α_n(x) given by (2). Left, ground truth obtained by the rejection sampling method RS. Middle, approximation produced by the ES method. Right, approximation produced by the proposed PES method. These plots show that the PES objective is much more similar to the RS ground truth than the ES objective.

First, we analyze the accuracy of PES in the task of approximating the differential entropy (2). We compare the PES approximation (10) with the approximation used by the entropy search (ES) method [9]. We also compare with the ground truth for (2), obtained using a rejection sampling (RS) algorithm based on (3). For this experiment we generate the data D_n using an objective function f sampled from the Gaussian process prior, as in [9]. The domain X of f is fixed to be [0, 1]² and the data are generated using γ² = 1, σ² = 10⁻⁶, and ℓ_i² = 0.1. To compute (10) we avoid sampling the hyperparameters and use the known values directly. We further fix M = 200 and m = 1000. The ground truth rejection sampling scheme works as follows. First, X is discretized using a uniform grid. The expectation with respect to p(x⋆ | D_n) in (3) is then approximated using sampling. For this, we sample x⋆ by evaluating a random sample from p(f | D_n) on each grid cell and then selecting the cell with the highest value. Given x⋆, we then approximate H[p(y | D_n, x, x⋆)] by rejection sampling. We draw samples from p(f | D_n) and reject those whose grid cell with the highest value is not x⋆.
Finally, we approximate H[p(y | D_n, x, x⋆)] by first adding zero-mean Gaussian noise with variance σ² to the evaluations at x of the functions not rejected in the previous step, and then estimating the differential entropy of the resulting samples using kernels [1]. Figure 1 shows the objective functions produced by RS, ES and PES for a particular D_n with 10 measurements whose locations are selected uniformly at random in [0, 1]². The locations of the collected measurements are displayed with an "x" in the plots. The particular objective function used to generate the measurements in D_n is displayed in the left part of Figure 2. The plots in Figure 1 show that the PES approximation to (2) is more similar to the ground truth given by RS than the approximation produced by ES. In this figure we also see a discrepancy between RS and PES at locations near x = (0.572, 0.687). This difference is an artifact of the discretization used in RS. By zooming in and drawing many more samples we would see the same behavior in both plots. We now evaluate the performance of PES in the task of finding the optimum of synthetic black-box objective functions. For this, we reproduce the within-model comparison experiment described in [9]. In this experiment we optimize objective functions defined on the 2-dimensional unit domain X = [0, 1]². Each objective function is generated by first sampling 1024 function values from the GP prior assumed by PES, using the same γ², ℓ_i and σ² as in the previous experiment. The objective function is then given by the resulting GP posterior mean. We generated a total of 1000 objective functions by following this procedure. The left plot in Figure 2 shows an example function. In these experiments we compared the performance of PES with that of ES [9] and expected improvement (EI) [13], a widely used acquisition function in the Bayesian optimization literature. We again assume that the optimal hyperparameter values are known to all methods.
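The mechanics of the rejection-sampling ground truth can be illustrated on a toy stand-in for the posterior. The sketch below is not the paper's RS implementation: it replaces the GP posterior with an explicit Gaussian on a tiny grid, and the kernel entropy estimator of [1] with a simple histogram, so it only mirrors the scheme's structure (sample, reject by arg max, add noise, estimate entropy).

```python
import numpy as np

def rs_conditional_entropy(mean, cov, x_idx, star_idx, sigma2=1e-6,
                           n=20000, seed=0):
    """Toy version of the rejection-sampling ground truth for H[p(y|D_n,x,x⋆)].

    mean, cov stand in for the Gaussian posterior p(f | D_n) on a small grid.
    Samples whose arg max is not star_idx are rejected; observation noise is
    added to the survivors' values at x_idx; the differential entropy of the
    result is estimated with a histogram (the paper uses a kernel estimator).
    """
    rng = np.random.default_rng(seed)
    F = rng.multivariate_normal(np.asarray(mean, float), np.asarray(cov, float), size=n)
    accepted = F[F.argmax(axis=1) == star_idx, x_idx]
    ys = accepted + rng.normal(scale=np.sqrt(sigma2), size=accepted.size)
    dens, edges = np.histogram(ys, bins=40, density=True)
    widths = np.diff(edges)
    mask = dens > 0
    return float(-np.sum(dens[mask] * np.log(dens[mask]) * widths[mask]))

# With a single grid cell there is no rejection, so we recover (approximately)
# the entropy of a standard normal, 0.5 log(2πe) ≈ 1.42; conditioning on the
# maximum being elsewhere lowers the entropy at x.
h_uncond = rs_conditional_entropy([0.0], [[1.0]], x_idx=0, star_idx=0)
h_cond = rs_conditional_entropy([0.0, 0.0], np.eye(2), x_idx=0, star_idx=1)
```

In the actual experiment the Gaussian stand-in is replaced by the GP posterior p(f | D_n) evaluated on the uniform grid over [0, 1]².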
Predictive performance is then measured in terms of the immediate regret (IR) |f(x̃n) − f(x⋆)|, where x⋆ is the known location of the global maximum and x̃n is the recommendation of each algorithm had we stopped at step n; for all methods this is given by the maximizer of the posterior mean. The right plot in Figure 2 shows the decimal logarithm of the median of the IR obtained by each method across the 1000 different objective functions. Confidence bands equal to one standard deviation are obtained using the bootstrap method. Note that while averaging these results is also interesting, corresponding to the expected performance averaged over the prior, here we report the median IR.

Figure 2: Left: an example objective function f. Right: median of the immediate regret (IR) for the methods PES, ES and EI in the experiments with synthetic objective functions.
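The reported curves (log10 of the median IR with one-standard-deviation bootstrap bands) can be computed as in the following sketch; the array layout and function name are illustrative, not from the paper.

```python
import numpy as np

def median_ir_bands(ir, n_boot=1000, seed=0):
    """log10 median immediate regret across runs, with one-std bootstrap bands.

    ir: (R, N) array of IR values, R independent runs by N evaluation steps.
    Returns (median, lower, upper) arrays of length N on the log10 scale.
    """
    rng = np.random.default_rng(seed)
    R, N = ir.shape
    log_med = np.log10(np.median(ir, axis=0))
    boots = np.empty((n_boot, N))
    for b in range(n_boot):
        idx = rng.integers(0, R, size=R)              # resample runs with replacement
        boots[b] = np.log10(np.median(ir[idx], axis=0))
    sd = boots.std(axis=0)
    return log_med, log_med - sd, log_med + sd
```

Bootstrapping the runs (rather than individual IR values) respects the fact that each run's IR trajectory is internally correlated across steps.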
Figure 3: Median of the immediate regret (IR) for the methods EI, ES, PES and PES-NB in the experiments with well-known synthetic benchmark functions.

We report the median because the empirical distribution of IR values is very heavy-tailed; in this case, the median is more representative of the location of the bulk of the data. These results indicate that the best method in this setting is PES, which significantly outperforms ES and EI. The plot also shows that in this case ES is significantly better than EI. We perform another series of experiments in which we optimize well-known synthetic benchmark functions, including a mixture of cosines [2] and Branin-Hoo (both defined on [0, 1]^2), as well as Hartmann-6 (defined on [0, 1]^6) [15]. In all instances, we fix the measurement noise to σ^2 = 10^{-3}.
For both PES and EI we marginalize the hyperparameters ψ using the approach described in Section 2.3. ES, by contrast, cannot average its approximation of (2) over the posterior on ψ. Instead, ES works by fixing ψ to an estimate of its posterior mean (obtained using slice sampling) [27]. To evaluate the gains produced by the fully Bayesian treatment of ψ in PES, we also compare with a version of PES (PES-NB) which performs the same non-Bayesian (NB) treatment of ψ as ES. In PES-NB we use a single fixed hyperparameter, as in previous sections, with value given by the posterior mean of ψ. All the methods are initialized with three random measurements collected using Latin hypercube sampling [5]. The plots in Figure 3 show the median IR obtained by each method on each function across 250 random initializations. Overall, PES is better than PES-NB and ES. Furthermore, PES-NB is also significantly better than ES in most of the cases. These results show that the fully Bayesian treatment of ψ in PES is advantageous and that PES can produce better approximations than ES. Note that PES performs better than EI on the Branin and cosines functions, while EI is significantly better on the Hartmann problem. This appears to be because entropy-based strategies explore more aggressively, which in higher-dimensional spaces takes more iterations. The Hartmann problem, however, is relatively simple, so the comparatively greedy behavior of EI has no significant adverse consequences. Note also that the synthetic functions optimized in the previous experiment were much more multimodal than the ones considered here.

3.1 Experiments with real-world functions

We finally optimize different real-world cost functions. The first one (NNet) returns the predictive accuracy of a neural network on a random train/test partition of the Boston Housing dataset [3].
Figure 4: Median of the immediate regret (IR) for the methods PES, PES-NB, ES and EI in the experiments with non-analytic real-world cost functions (NNet, Hydrogen, Portfolio, Walker A, Walker B).

The variables to optimize are the weight-decay parameter and the number of training iterations for the neural network. The second function (Hydrogen) returns the amount of hydrogen produced by a particular bacterium as a function of the pH and nitrogen levels of the growth medium [7].
The third one (Portfolio) returns the ratio of the mean and the standard deviation (the Sharpe ratio) of the 1-year-ahead returns generated by simulations from a multivariate time-series model that is fitted to the daily returns of the stocks AXP, BA and HD. The time-series model is formed by univariate GARCH models connected with a Student's t copula [12]. These three functions (NNet, Hydrogen and Portfolio) have domain [0, 1]^2. Furthermore, in these examples the ground truth function that we want to optimize is unknown and is only available through noisy measurements. To obtain a ground truth, we approximate each cost function by the predictive distribution of a GP that is fitted to data sampled from the original function (1000 uniform samples for NNet and Portfolio, and all the available data for Hydrogen [7]). Finally, we also consider another real-world function that returns the walking speed of a bipedal robot [30]. This function is defined on [0, 1]^8 and its inputs are the parameters of the robot's controller. In this case the ground truth function is noiseless and can be exactly evaluated through expensive numerical simulation. We consider two versions of this problem: (Walker A) with zero-mean additive noise of σ = 0.01, and (Walker B) with σ = 0.1. Figure 4 shows the median IR values obtained by each method on each function across 250 random initializations, except for Hydrogen, where we used 500 due to its higher level of noise. Overall, PES, ES and PES-NB perform similarly on NNet, Hydrogen and Portfolio. EI performs rather poorly on these first three functions: this method seems to make excessively greedy decisions and fails to explore the search space enough. This strategy is advantageous on Walker A, where EI obtains the best results. By contrast, PES, ES and PES-NB tend to explore more on this latter dataset, which leads to worse results than those of EI.
Nevertheless, PES is significantly better than PES-NB and ES on both Walker datasets, and better than EI on the noisier Walker B. In this case, the fully Bayesian treatment of hyperparameters performed by PES produces improvements in performance.

4 Conclusions

We have proposed a novel information-theoretic approach for Bayesian optimization. Our method, predictive entropy search (PES), greedily maximizes the amount of one-step information on the location x⋆ of the global maximum, measured by its posterior differential entropy. Since this objective function is intractable, PES approximates the original objective using a reparameterization that measures entropy in the posterior predictive distribution of the function evaluations. PES produces more accurate approximations than entropy search (ES), a method based on the original, non-transformed acquisition function. Furthermore, PES can easily marginalize its approximation with respect to the posterior distribution of its hyperparameters, while ES cannot. Experiments with synthetic and real-world functions show that PES often outperforms ES in terms of immediate regret. In these experiments, we also observe that PES often produces better results than expected improvement (EI), a popular heuristic for Bayesian optimization. EI often seems to make excessively greedy decisions, while PES tends to explore more. As a result, EI seems to perform better for simple objective functions, while often getting stuck on noisier objectives or on functions with many modes.

Acknowledgements

J.M.H.L. acknowledges support from the Rafael del Pino Foundation.

References

[1] I. Ahmad and P.-E. Lin. A nonparametric estimation of the entropy for absolutely continuous distributions. IEEE Transactions on Information Theory, 22(3):372–375, 1976.
[2] B. S. Anderson, A. W. Moore, and D. Cohn. A nonparametric approach to noisy and costly optimization. In ICML, pages 17–24, 2000.
[3] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[4] S. Bochner. Lectures on Fourier Integrals. Princeton University Press, 1959.
[5] E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Technical Report UBC TR-2009-23 and arXiv:1012.2599v1, Dept. of Computer Science, University of British Columbia, 2009.
[6] E. Brochu, N. de Freitas, and A. Ghosh. Active preference learning with discrete choice data. In NIPS, pages 409–416, 2007.
[7] E. H. Burrows, W.-K. Wong, X. Fern, F. W. R. Chaplen, and R. L. Ely. Optimization of pH and nitrogen for enhanced hydrogen production by Synechocystis sp. PCC 6803 via statistical and machine learning methods. Biotechnology Progress, 25(4):1009–1017, 2009.
[8] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In NIPS, pages 2249–2257, 2011.
[9] P. Hennig and C. J. Schuler. Entropy search for information-efficient global optimization. Journal of Machine Learning Research, 13, 2012.
[10] M. W. Hoffman, E. Brochu, and N. de Freitas. Portfolio allocation for Bayesian optimization. In UAI, pages 327–336, 2011.
[11] N. Houlsby, J. M. Hernández-Lobato, F. Huszár, and Z. Ghahramani. Collaborative Gaussian processes for preference learning. In NIPS, pages 2096–2104, 2012.
[12] E. Jondeau and M. Rockinger. The copula-GARCH model of conditional dependencies: An international stock market application. Journal of International Money and Finance, 25(5):827–853, 2006.
[13] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.
[14] H. Kushner. A new method of locating the maximum of an arbitrary multipeak curve in the presence of noise. Journal of Basic Engineering, 86, 1964.
[15] D. Lizotte. Practical Bayesian Optimization. PhD thesis, University of Alberta, Canada, 2008.
[16] D. Lizotte, T. Wang, M. Bowling, and D. Schuurmans.
Automatic gait optimization with Gaussian process regression. In IJCAI, pages 944–949, 2007.
[17] D. J. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590–604, 1992.
[18] T. P. Minka. A family of algorithms for approximate Bayesian inference. PhD thesis, Massachusetts Institute of Technology, 2001.
[19] J. Močkus, V. Tiesis, and A. Žilinskas. The application of Bayesian methods for seeking the extremum. In L. Dixon and G. Szego, editors, Toward Global Optimization, volume 2. Elsevier, 1978.
[20] D. M. Negoescu, P. I. Frazier, and W. B. Powell. The knowledge-gradient algorithm for sequencing experiments in drug discovery. INFORMS Journal on Computing, 23(3):346–363, 2011.
[21] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177–1184, 2007.
[22] C. E. Rasmussen and C. K. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[23] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. Journal of Machine Learning Research, 9:759–813, 2008.
[24] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, pages 2960–2968, 2012.
[25] E. Solak, R. Murray-Smith, W. E. Leithead, D. J. Leith, and C. E. Rasmussen. Derivative observations in Gaussian process models of dynamic systems. In NIPS, pages 1057–1064, 2003.
[26] N. Srinivas, A. Krause, S. M. Kakade, and M. Seeger. Gaussian process optimization in the bandit setting: No regret and experimental design. In ICML, pages 1015–1022, 2010.
[27] J. Vanhatalo, J. Riihimäki, J. Hartikainen, P. Jylänki, V. Tolvanen, and A. Vehtari. Bayesian modeling with Gaussian processes using the MATLAB toolbox GPstuff (v3.3). CoRR, abs/1206.5754, 2012.
[28] J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44(4):509–534, 2009.
[29] Z.
Wang, S. Mohamed, and N. de Freitas. Adaptive Hamiltonian and Riemann Monte Carlo samplers. In ICML, 2013.
[30] E. Westervelt and J. Grizzle. Feedback Control of Dynamic Bipedal Robot Locomotion. Control and Automation Series. CRC Press, 2007.
Signal Aggregate Constraints in Additive Factorial HMMs, with Application to Energy Disaggregation

Mingjun Zhong, Nigel Goddard, Charles Sutton
School of Informatics, University of Edinburgh, United Kingdom
{mzhong,nigel.goddard,csutton}@inf.ed.ac.uk

Abstract

Blind source separation problems are difficult because they are inherently unidentifiable, yet the entire goal is to identify meaningful sources. We introduce a way of incorporating domain knowledge into this problem, called signal aggregate constraints (SACs). SACs encourage the total signal for each of the unknown sources to be close to a specified value. This is based on the observation that the total signal often varies widely across the unknown sources, and we often have a good idea of what total values to expect. We incorporate SACs into an additive factorial hidden Markov model (AFHMM) to formulate the energy disaggregation problem, in which only one mixture signal is observed. A convex quadratic program for approximate inference is then used to recover the source signals. On a real-world energy disaggregation data set, we show that the use of SACs dramatically improves the original AFHMM, and significantly improves over a recent state-of-the-art approach.

1 Introduction

Many learning tasks require separating a time series into a linear combination of a larger number of "source" signals. This general problem of blind source separation (BSS) arises in many application domains, including audio processing [17, 2], computational biology [1], and modelling electricity usage [8, 12]. This problem is difficult because it is inherently underdetermined and unidentifiable, as there are many more sources than dimensions in the original time series. The unidentifiability problem is especially serious because often the main goal of interest is for people to interpret the resulting source signals. For example, consider the application of energy disaggregation.
In this application, the goal is to help people understand what appliances in their home use the most energy; the time at which the appliance is used is of less importance. Placing an electricity monitor on every appliance in a household is expensive and intrusive, so instead researchers have proposed performing BSS on the total household electricity usage [8, 22, 15]. If this is to be effective, we must deal with the issue of identifiability: it will not engender confidence to show the householder a "franken-appliance" whose electricity usage looks like a toaster from 8am to 10am, a hot water heater until 12pm, and a television until midnight. To address this problem, we need to incorporate domain knowledge regarding what sorts of sources we are hoping to find. Recently a number of general frameworks have been proposed for incorporating prior constraints into general-purpose probabilistic models. These include posterior regularization [4], the generalized expectation criterion [14], and measurement-based learning [13]. However, all of these approaches leave open the question of what types of domain knowledge we should include. This paper considers precisely that research issue, namely, how to identify classes of constraints for which we often have prior knowledge, which are general across a wide variety of domains, and for which we can perform efficient computation. In this paper we observe that in many applications of BSS, the total signal often varies widely across the different unknown sources, and we often have a good idea of what total values to expect. We introduce signal aggregate constraints (SACs) that encourage the aggregate values, such as the sums, of the source signals to be close to some specified values. For example, in the energy disaggregation problem, we know in advance that a toaster might use 50 Wh in a day and will be most unlikely to use as much as 1000 Wh.
We incorporate these constraints into an additive factorial hidden Markov model (AFHMM), a commonly used model for BSS [17]. SACs raise difficult inference issues, because each constraint is a function of the entire state sequence of one chain of the AFHMM, and does not decompose according to the Markov structure of the model. We instead solve a relaxed problem and transform the optimization problem into a convex quadratic program which is computationally efficient. On real-world data from the electricity disaggregation domain (Section 7.2.2), we show that the use of SACs significantly improves performance, resulting in a 45% decrease in normalized disaggregation error compared to the original AFHMM, and a significant improvement (29%) in performance compared to a recent state-of-the-art approach to the disaggregation problem [12]. To summarize, the contributions of this paper are: (a) introducing signal aggregate constraints for blind source separation problems (Section 4), (b) a convex quadratic program for the relaxed AFHMM with SACs (Section 5), and (c) an evaluation (Section 7) of the use of SACs on a real-world problem in energy disaggregation.

2 Related Work

The problem of energy disaggregation, also called non-intrusive load monitoring, was introduced by [8] and has since been the subject of intense research interest. Reviews on energy disaggregation can be found in [22] and [24]. Various approaches have been proposed to improve the basic AFHMM by constraining the states of the HMMs. The additive factorial approximate maximum a posteriori (AFAMAP) algorithm in [12] introduces the constraint that at most one chain can change state at any one time point. Another approach [21] proposed non-homogeneous HMMs combined with the constraint that at most one chain changes at a time. Alternatively, semi-Markov models represent duration distributions on the hidden states and are another approach to constrain the hidden states.
These have been applied to the disaggregation problems by [11] and [10]. Both [12] and [16] employ other kinds of additional information to improve the AFHMM. Other approaches could also be applicable for constraining the AFHMM, e.g., the k-segment constraints introduced for HMMs [19]. Some work in probabilistic databases has considered aggregate constraints [20], but that work considers only models with very simple graphical structure, namely, independent discrete variables.

3 Problem Setting

Suppose we have observed a time series of sensor readings, for example the energy measured in watt hours by an electricity meter, denoted by Y = (Y_1, Y_2, · · · , Y_T) where Y_t ∈ R_+. It is assumed that this signal was aggregated from some component signals, for example the energy consumption of individual appliances used by the household. Suppose there were I components, and for each component, the signal is represented as X_i = (x_{i1}, x_{i2}, · · · , x_{iT}) where x_{it} ∈ R_+. Therefore, the observation signal can be represented as the summation of the component signals,

Y_t = \sum_{i=1}^{I} x_{it} + ϵ_t,   (1)

where ϵ_t is assumed Gaussian noise with zero mean and variance σ_t^2. The disaggregation problem is then to recover the unknown time series X_i given only the observed data Y. This is essentially the BSS problem [3] where only one mixture signal was observed. As discussed earlier, there is no unique solution for this model, due to the identifiability problem: component signals are exchangeable.

4 Models

Our models in this paper will assume that the component signals X_i can be modelled by a hidden Markov chain, in common with much work in BSS. For simplicity, each Markov chain is assumed to have a finite set of states such that for chain i, x_{it} ≈ μ_{it} for some μ_{it} ∈ {μ_{i1}, · · · , μ_{iK_i}}, where K_i denotes the number of states in chain i. The idea of the SAC is fairly general, however, and could be easily incorporated into other models of the hidden sources.
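As an illustration, the generative model (1) with Markov-chain components as just described can be simulated as follows. This is a toy sketch, not the paper's code; the function name and argument layout are assumptions.

```python
import numpy as np

def simulate_aggregate(state_means, trans, init, T, sigma, seed=0):
    """Sample I component chains and mix them as in (1): Y_t = sum_i x_it + eps_t.

    state_means[i]: (K_i,) state values mu_{i1..iK_i} of chain i;
    trans[i][j, k] = P(Z_t = j | Z_{t-1} = k); init[i]: initial distribution pi_i.
    """
    rng = np.random.default_rng(seed)
    I = len(state_means)
    X = np.zeros((I, T))
    for i in range(I):
        z = rng.choice(len(init[i]), p=init[i])          # Z_{i1} ~ pi_i
        for t in range(T):
            X[i, t] = state_means[i][z]                  # x_it = mu_{i, z_it}
            z = rng.choice(trans[i].shape[0], p=trans[i][:, z])
    Y = X.sum(axis=0) + sigma * rng.standard_normal(T)   # additive Gaussian noise
    return Y, X
```

The disaggregation task is the inverse problem: recover X from Y alone, which is exactly where the exchangeability of the component chains causes trouble.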
4.1 The Additive Factorial HMM

Our baseline model will be the AFHMM. The AFHMM is a natural model for the generation of an aggregated signal Y, where the component signals X_i are each assumed to be a hidden Markov chain with states Z_{it} ∈ {1, 2, · · · , K_i} over time t. In the AFHMM, and variants such as AFAMAP, the model parameters, denoted by θ, are unknown. These parameters are the μ_{ik}; the initial probabilities π_i = (π_{i1}, · · · , π_{iK_i})^T for each chain, where π_{ik} = P(Z_{i1} = k); and the transition probabilities p^{(i)}_{jk} = P(Z_{it} = j | Z_{i,t−1} = k). These parameters can be estimated using approximation methods such as the structured variational approximation [5]. In this paper we focus on inferring the sequence over time of hidden states Z_{it} for each hidden Markov chain; θ is assumed known. We are interested in maximum a posteriori (MAP) inference, and the posterior distribution has the form

P(Z|Y) ∝ \prod_{i=1}^{I} P(Z_{i1}) \prod_{t=1}^{T} p(Y_t|Z_t) \prod_{t=2}^{T} \prod_{i=1}^{I} P(Z_{it}|Z_{i,t−1}),   (2)

where p(Y_t|Z_t) = N(\sum_{i=1}^{I} μ_{i,z_{it}}, σ_t^2) is a Gaussian distribution. An alternative representation of the posterior uses a binary vector S_{it} = (S_{it1}, S_{it2}, · · · , S_{itK_i})^T for the discrete variable Z_{it}, such that S_{itk} = 1 when Z_{it} = k, and S_{itj} = 0 for all j ≠ k. The logarithm of the posterior distribution over S then has the form

\log P(S|Y) ∝ \sum_{i=1}^{I} S_{i1}^T \log π_i + \sum_{t=2}^{T} \sum_{i=1}^{I} S_{it}^T (\log P^{(i)}) S_{i,t−1} − \frac{1}{2} \sum_{t=1}^{T} \frac{1}{σ_t^2} \Big(Y_t − \sum_{i=1}^{I} S_{it}^T μ_i\Big)^2,   (3)

where P^{(i)} = (p^{(i)}_{jk}) is the transition probability matrix and μ_i = (μ_{i1}, μ_{i2}, · · · , μ_{iK_i})^T. Exact inference is not tractable as the numbers of chains and states increase. A MAP value can conveniently be found using the chainwise Viterbi algorithm [18], which optimizes jointly over each chain S_{i1} . . . S_{iT} in sequence, holding the other chains constant. However, the chainwise Viterbi algorithm can get stuck in local optima.
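The chainwise Viterbi scheme just mentioned can be sketched as follows: each sweep re-runs a standard Viterbi pass on one chain's residual signal while holding the other chains fixed. This is a minimal illustration under the model of (2)-(3), not the authors' code; function names and the all-zeros initialization are assumptions.

```python
import numpy as np

def viterbi(log_emit, log_pi, log_P):
    """Standard Viterbi. log_emit: (T, K); log_P[j, k] = log p(z_t=j | z_{t-1}=k)."""
    T, K = log_emit.shape
    delta = np.zeros((T, K)); back = np.zeros((T, K), dtype=int)
    delta[0] = log_pi + log_emit[0]
    for t in range(1, T):
        scores = log_P + delta[t - 1][None, :]        # (new state, old state)
        back[t] = np.argmax(scores, axis=1)
        delta[t] = scores[np.arange(K), back[t]] + log_emit[t]
    z = np.zeros(T, dtype=int)
    z[-1] = np.argmax(delta[-1])
    for t in range(T - 1, 0, -1):                     # backtrack
        z[t - 1] = back[t, z[t]]
    return z

def chainwise_viterbi(Y, mus, log_pis, log_Ps, sigma2=1.0, n_sweeps=10):
    """Coordinate ascent on (3): Viterbi on each chain's residual in turn."""
    I, T = len(mus), len(Y)
    z = [np.zeros(T, dtype=int) for _ in range(I)]    # assumed initialization
    for _ in range(n_sweeps):
        changed = False
        for i in range(I):
            rest = sum(mus[j][z[j]] for j in range(I) if j != i)
            resid = Y - rest                          # explainable by chain i alone
            log_emit = -(resid[:, None] - mus[i][None, :]) ** 2 / (2 * sigma2)
            z_new = viterbi(log_emit, log_pis[i], log_Ps[i])
            changed |= not np.array_equal(z_new, z[i])
            z[i] = z_new
        if not changed:
            break
    return z
```

Each sweep cannot decrease the log posterior, but the procedure can converge to a local optimum that depends on the initialization, which is the weakness the SAC approach below is designed to mitigate.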
Instead, in this paper we solve a convex quadratic program for a relaxed version of the MAP problem (see Section 5). However, this solution is not guaranteed to be optimal, due to the identifiability problem. Many efforts have been made to provide tractable solutions to this problem by constraining the states of the hidden Markov chains. In the next section we introduce signal aggregate constraints, which will help to address this problem.

4.2 The Additive Factorial HMM with Signal Aggregate Constraints

Now we add signal aggregate constraints to the AFHMM, yielding a new model, AFHMM+SAC. The AFHMM+SAC assumes that the aggregate value of each component signal i over the entire sequence is expected to be a certain value μ_{i0}, which is known in advance. In other words, the SAC assumes \sum_{t=1}^{T} x_{it} ≈ μ_{i0}. The constraint values μ_{i0} (i = 1, 2, · · · , I) could be obtained from expert knowledge or by experiments. For example, in the energy disaggregation domain, extensive research has been undertaken to estimate the average national consumption of different appliances [23]. Incorporating this constraint into the AFHMM, using the formulation from (3), results in the following optimization problem for MAP inference:

maximize_S  \log P(S|Y)  subject to  \Big(\sum_{t=1}^{T} μ_i^T S_{it} − μ_{i0}\Big)^2 ≤ δ_i,  i = 1, 2, · · · , I,   (4)

where the μ_{i0} (i = 1, 2, · · · , I) are assumed known, and δ_i ≥ 0 is a tuning parameter with a role similar to the ones used in ridge regression and the LASSO [9]. Instead of solving this optimization problem directly, we equivalently solve the penalized objective

maximize_S  L(S) = \log P(S|Y) − \sum_{i=1}^{I} λ_i \Big(\sum_{t=1}^{T} μ_i^T S_{it} − μ_{i0}\Big)^2,   (5)

where λ_i ≥ 0 is a complexity parameter with a one-to-one correspondence to the tuning parameter δ_i. From the Bayesian point of view, the constraint terms can be viewed as the logarithm of a prior distribution over the states S, so the objective can be viewed as a log posterior distribution over S.
Now the Viterbi algorithm is not directly applicable, since at any time t the state S_{it} depends on the states at all time steps, because the regularization terms are inherently non-Markovian. Therefore, in the following section we transform the optimization problem (5) into a convex quadratic program which can be solved efficiently. Note that the constraints in equation (4) could be generalised. Rather than making only one constraint on each chain over the time period [0, T] (as described above), a series of constraints could be made. We could define J constraints such that, for j = 1, 2, · · · , J, the jth constraint for chain i is

\Big(\sum_{τ = t^a_{ij}}^{t^b_{ij}} μ_i^T S_{i,τ} − μ^j_{i0}\Big)^2 ≤ δ_{ij},

where [t^a_{ij}, t^b_{ij}] denotes the time period of the constraint. This could be reasonable particularly for household energy data, to represent the fact that some appliances are commonly used during the daytime and are unlikely to be used between 2am and 5am. This is a straightforward extension that does not complicate the algorithms, so for presentational simplicity we use only a single constraint per chain, as shown in (4), in the rest of this paper.

5 Convex Quadratic Programming for AFHMM+SAC

In this section we derive a convex quadratic program (CQP) for the relaxed version of problem (5). The problem (5) is not convex even if the constraint S_{itk} ∈ {0, 1} is relaxed, because \log P(S|Y) is not convex. By adding an additional set of variables, we obtain a convex problem. Similarly to [12], we define a new K_i × K_i variable matrix H_{it} = (h^{it}_{jk}) such that h^{it}_{jk} = 1 when S_{i,t−1,k} = 1 and S_{itj} = 1, and h^{it}_{jk} = 0 otherwise. To state the CQP, we define the following notation. Denote by 1_T a column vector of size T × 1 with all elements equal to 1. Denote μ*_i = 1_T ⊗ μ_i, of size TK_i × 1, where ⊗ is the Kronecker product; then Λ_i = λ_i μ*_i μ*_i^T and μ̃_i = 2 λ_i μ_{i0} μ*_i. Denote by e_T a T × 1 vector with the first element equal to 1 and all others zero.
Denote π̃_i = e_T ⊗ \log π_i, of size TK_i × 1. We write \vec{μ} = (μ_1^T, μ_2^T, · · · , μ_I^T)^T, of size \sum_i K_i × 1, and denote V_t = σ_t^{−2} \vec{μ} \vec{μ}^T and u_t = σ_t^{−2} Y_t \vec{μ}. We also denote S_i = (S_{i1}^T, · · · , S_{iT}^T)^T, of size TK_i × 1, and S_t = (S_{1t}^T, · · · , S_{It}^T)^T, of size \sum_i K_i × 1. Denote by H^{it}_{·l} and H^{it}_{l·} the column and row vectors of the matrix H_{it}, respectively. The objective function in equation (5) can then be equivalently written as

L(S, H) = \sum_{i=1}^{I} S_i^T π̃_i + \sum_{i,t,k,j} h^{it}_{jk} \log p^{(i)}_{jk} − \sum_{i=1}^{I} (S_i^T Λ_i S_i − S_i^T μ̃_i) − \frac{1}{2} \sum_{t=1}^{T} (S_t^T V_t S_t − 2 u_t^T S_t) + C
        = \sum_{i,t,k,j} h^{it}_{jk} \log p^{(i)}_{jk} − \sum_{i=1}^{I} (S_i^T Λ_i S_i − S_i^T (μ̃_i + π̃_i)) − \frac{1}{2} \sum_{t=1}^{T} (S_t^T V_t S_t − 2 u_t^T S_t) + C,

where C is a constant. Our aim is to solve

maximize_{S,H}  L(S, H)
subject to  \sum_{k=1}^{K_i} S_{itk} = 1,  S_{itk} ∈ {0, 1},  i = 1, · · · , I;  t = 1, · · · , T,
            \sum_{l=1}^{K_i} H^{it}_{l·} = S_{i,t−1}^T,  \sum_{l=1}^{K_i} H^{it}_{·l} = S_{it},  h^{it}_{jk} ∈ {0, 1}.   (6)

This problem is equivalent to the problem in equation (5). Note that the matrices Λ_i and V_t are positive semidefinite (PSD). The problem is therefore an integer quadratic program (IQP), which is hard to solve. Instead we solve the relaxed problem where S_{itk} ∈ [0, 1] and h^{it}_{jk} ∈ [0, 1]; the relaxed problem is a CQP. To solve it we used CVX, a package for specifying and solving convex programs [7, 6]. Note that a relaxed problem for the plain AFHMM can be obtained by setting λ_i = 0, which is also a CQP. Concerning computational complexity, the CQP for AFHMM+SAC is polynomial in the number of time steps times the total number of states of the HMMs. In practice, our implementations of AFHMM, AFAMAP, and AFHMM+SAC scale similarly (see Section 7.2).

6 Relation to Posterior Regularization

In this section we show that the objective function in (5) can also be derived from the posterior regularization framework [4].
The posterior regularization framework guides the model towards desired behavior by constraining the space of model posteriors. The distribution defined in (3) is the model posterior distribution of the AFHMM. However, the desired distribution P̃ we are interested in lies in the constrained space {P̃ | E_{P̃}(φ_i(S, Y)) ≤ δ_i}, where φ_i(S, Y) = (\sum_{t=1}^{T} μ_i^T S_{it} − μ_{i0})^2. To ensure that P̃ is a valid distribution, one optimizes

minimize_{P̃}  KL(P̃(S) ∥ P(S|Y))  subject to  E_{P̃}(φ_i(S, Y)) ≤ δ_i,  i = 1, 2, · · · , I,   (7)

where KL(· ∥ ·) denotes the KL-divergence. According to [4], the unique optimal solution for the desired distribution is P̃*(S) = (1/Z) P(S|Y) \exp\{−\sum_{i=1}^{I} λ_i φ_i(S, Y)\}. This is exactly the distribution in equation (5).

7 Results

In this section AFHMM+SAC is evaluated by applying it to disaggregation problems on a toy data set and on energy data, comparing with the performance of AFHMM and AFAMAP.

7.1 Toy Data

In this section AFHMM+SAC was applied to a toy data set to evaluate the robustness of the method. Two chains were generated with state values μ_1 = (0, 24, 280)^T and μ_2 = (0, 300, 500)^T. The initial and transition probabilities were randomly generated. Suppose the generated chains were x_i = x_{i1}, x_{i2}, · · · , x_{iT} (i = 1, 2), with T = 100. The aggregated data were generated by the equation Y_t = x_{1t} + x_{2t} + ϵ_t, where ϵ_t follows a Gaussian distribution with zero mean and variance σ^2 = 0.01. AFHMM+SAC was applied to these data to disaggregate Y into component signals. Note that we simply set λ_i = 1 for all the experiments, including the energy data, though in practice these hyperparameters could be tuned using cross validation. Denote by x̂_i the estimated signal for x_i. The disaggregation performance was evaluated by the normalized disaggregation error (NDE)

NDE = \frac{\sum_{i,t} (x̂_{it} − x_{it})^2}{\sum_{i,t} x_{it}^2}.   (8)

For the energy data we are also particularly interested in recovering the total energy used by each appliance [16, 10].
Therefore, another objective of the disaggregation is to estimate the total energy consumed by each appliance over a period of time. To measure this, we employ the following signal aggregate error (SAE)

$$\mathrm{SAE} = \frac{1}{I}\sum_{i=1}^{I} \frac{\left|\sum_{t=1}^{T}\hat{x}_{it} - \sum_{t'=1}^{T} x_{it'}\right|}{\sum_{t=1}^{T} Y_t}. \qquad (9)$$

In order to assess how the SAC regularizer affects the results, various values for $\mu_0 = (\mu_{10}, \mu_{20})^T$ were used for the AFHMM+SAC algorithm. Figure 1 shows the NDE and SAE results. It shows that as the Euclidean distance between the input vector $\mu_0$ and the true signal aggregate vector $\left(\sum_t x_{1t}, \sum_t x_{2t}\right)^T$ increases, both the NDE and SAE increase. This shows how the SACs affect the performance of AFHMM+SAC.

Figure 1: Normalized disaggregation error and signal aggregate error computed by AFHMM+SAC using various input vectors $\mu_{i0}$. The x-axis shows the Euclidean distance between the input vector $(\mu_{10}, \mu_{20})^T$ and the true signal aggregate vector $\left(\sum_t x_{1t}, \sum_t x_{2t}\right)^T$.

7.2 Energy Disaggregation

In this section, the AFHMM, AFAMAP, and AFHMM+SAC were applied to electrical energy disaggregation problems. We use the Household Electricity Survey (HES) data. HES was a recent study commissioned by the UK Department for Environment, Food and Rural Affairs, which monitored a total of 251 owner-occupied households across England from May 2010 to July 2011 [23]. The study monitored 26 households for an entire year, while the remaining 225 were monitored for one month during the year, with periods selected to be representative of the different seasons. Individual appliances as well as the overall electricity consumption were monitored. The households were carefully selected to be representative of the overall population. The data were recorded every 2 or 10 minutes, depending on the household.
This ultra-low frequency data presents a challenge for disaggregation techniques; typically studies rely on much higher data rates, e.g., the REDD data [12]. Both the data measured without and with a mains reading were used to compare the models. The model parameters θ defined in AFHMM, AFAMAP and AFHMM+SAC for every appliance were estimated by using 15-30 days' data for each household. We simply assume 3 states for all the appliances, though more states could be assumed at greater computational cost. The µi was estimated by using k-means clustering on each appliance's signals in the training data.

7.2.1 Energy Data without Mains Readings

In the first experiment, we generated the aggregate data by adding up the appliance signals, since no mains reading had been measured for most of the households. One hundred households were studied, and one day's usage was used as test data for each household. The model parameters were estimated by using 15-26 days' data as the training data.

Table 1: Normalized disaggregation error (NDE), signal aggregate error (SAE), and computing time obtained by AFHMM, AFAMAP, and AFHMM+SAC on the energy data for 100 houses without mains. Shown are the mean±std values over days. NTC: national total consumption, the average consumption of each appliance over the training days; TTC: true total consumption for each appliance for that day and household in the test data.

METHODS            NDE          SAE             TIME (SECONDS)
AFHMM              0.98±0.68    0.144±0.067     206±114
AFAMAP [12]        0.96±0.42    0.083±0.004     325±177
AFHMM+SAC (NTC)    0.64±0.37    0.069±0.004     356±262
AFHMM+SAC (TTC)    0.36±0.28    0.0015±0.0089   260±108

In future work, it would be straightforward to incorporate the SAC into unsupervised disaggregation approaches [11], by using prior information such as national surveys to estimate µ0. The AFHMM, AFAMAP and AFHMM+SAC were applied to the aggregated signal to recover the component appliances.
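The per-appliance state means µi were estimated by k-means clustering on each appliance's training signal. A minimal 1-D Lloyd's-iteration sketch (three states mirror the paper's 3-state assumption; the deterministic quantile initialization is our choice, as the exact k-means variant used is not specified):

```python
import numpy as np

def kmeans_1d(signal, k=3, iters=100):
    """Estimate k state means from a 1-D appliance signal via Lloyd's algorithm.
    Centers are initialized at evenly spaced quantiles (our choice)."""
    x = np.asarray(signal, float)
    centers = np.quantile(x, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute the means.
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep the old center if a cluster empties
                centers[j] = x[labels == j].mean()
    return np.sort(centers)
```

On a signal switching between off, standby, and on power levels, this recovers the three levels as the state means.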
For the AFHMM+SAC, two kinds of total consumption vectors were used as the vector µ0. The first, the national total consumption (NTC), was the average consumption of each appliance over the training days across all households in the data set. The second, for comparison, was the true total consumption (TTC) for each appliance for that day and household. Obviously, TTC is the optimal value for the regularizer in AFHMM+SAC, so this gives us an oracle result which indicates the largest possible benefit from including this kind of SAC. Table 1 shows the NDE and SAE when the three methods were applied to one day's data for 100 households. We see that AFHMM+SAC outperformed the AFHMM in terms of both NDE and SAE. The AFAMAP outperformed the AFHMM in terms of SAE, but the two performed similarly in terms of NDE. Unsurprisingly, the AFHMM+SAC using TTC performs best among these methods. This shows the difference the constraints made, even though we would never be able to obtain the TTC in reality. By looking at the mean values in Table 1, we also conclude that AFHMM+SAC using NTC improved by 33% and 16% over the state-of-the-art AFAMAP in terms of NDE and SAE, respectively. This was verified by a paired t-test, which showed that the mean NDE and SAE obtained by AFHMM+SAC and AFAMAP differ at the 5% significance level. To demonstrate the computational efficiency, the computing time is also shown in Table 1. It indicates that AFHMM, AFAMAP and AFHMM+SAC require similar time for inference.

7.2.2 Energy Data with Mains Readings

We studied 9 houses in which the mains as well as the appliances were measured. In this experiment we applied the models directly to the measured mains signal. This scenario is more difficult than that of the previous section, because the mains power will also include the demand of some appliances which are not included in the training data, but it is also the most realistic.
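The significance comparisons above rely on a paired t-test over per-day errors. A minimal sketch of the test statistic (only the statistic and degrees of freedom are computed here; in practice `scipy.stats.ttest_rel` would also return the p-value):

```python
import numpy as np

def paired_t(errors_a, errors_b):
    """Paired t statistic for per-day errors of two methods on the same days.
    Returns (t, degrees of freedom); |t| is compared against the t-distribution
    critical value at the 5% level."""
    d = np.asarray(errors_a, float) - np.asarray(errors_b, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```

A consistently positive difference over many days yields a large positive t, i.e. a significant gap between the methods.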
The summary of the 9 houses is shown in Table 2. The training data were used to estimate the model parameters. The number of appliances corresponds to the number of the HMMs in the model. The mains readings measured on the test days are input into the models to recover the consumption of those appliances. We computed the NTC by using the training data for the AFHMM+SAC. The NDE and SAE were computed for every house and each method. The results are shown in Figure 2. For each house we also computed the paired t-test for the NDE and SAE computed by AFAMAP and AFHMM+SAC (NTC), which shows that the mean errors are different at the 5% significance level. This indicates that across all the houses AFHMM+SAC has improved over AFAMAP. The overall results for all the test days are shown in Table 3, which shows that AFHMM+SAC has improved over both AFHMM and AFAMAP. In terms of computing time, AFHMM+SAC is similar to AFHMM and AFAMAP. It should be noted that, comparing Tables 1 and 3, all three methods require more time for the data with mains than for the data without. This is because the algorithms take more time to converge on realistic data. These results indicate the value of signal aggregate constraints for this problem.

Table 2: Summary of the 9 houses with mains

HOUSE                       1    2    3    4    5    6    7    8    9
NUMBER OF TRAINING DAYS    17   16   15   29   27   28   27   15   30
NUMBER OF TEST DAYS         9    9   10    8    9    9    9   10   10
NUMBER OF APPLIANCES       21   25   24   15   24   22   23   20   25

Table 3: The normalized disaggregation error (NDE), signal aggregate error (SAE), and computing time obtained by AFHMM, AFAMAP, and AFHMM+SAC using mains as the input. Shown are the mean±std values computed from all the test days of the 9 houses. NTC: national total consumption, the average consumption of each appliance over the training days; TTC: true total consumption for each appliance for that day and household in the test data.
METHODS            NDE          SAE             TIME (SECONDS)
AFHMM              1.36±0.75    0.069±0.039     1008±269
AFAMAP [12]        1.05±0.29    0.043±0.012     1327±453
AFHMM+SAC (NTC)    0.74±0.34    0.030±0.014     1101±342
AFHMM+SAC (TTC)    0.57±0.28    0.001±0.0048    1276±410

Figure 2: Mean and std plots for NDE and SAE computed by AFHMM, AFAMAP and AFHMM+SAC using mains as the input for 9 houses.

8 Conclusions

In this paper, we have proposed an additive factorial HMM with signal aggregate constraints. The regularizer was derived from a prior distribution over the chain states. We also showed that the objective function can be derived in the framework of posterior regularization. We focused on finding the MAP configuration for the posterior distribution with the constraints. Since dynamic programming is not directly applicable, we pose the optimization problem as a convex quadratic program and solve the relaxed problem. On simulated data, we showed that the AFHMM+SAC is robust to errors in the specification of the constraint value. On real-world data from the energy disaggregation problem, we showed that the AFHMM+SAC performed better than both a simple AFHMM and previously published approaches.

Acknowledgments

This work is supported by the Engineering and Physical Sciences Research Council (grant number EP/K002732/1).

References

[1] H.M.S. Asif and G. Sanguinetti. Large-scale learning of combinatorial transcriptional dynamics from gene expression. Bioinformatics, 27(9):1277–1283, 2011. [2] F. Bach and M. I. Jordan. Blind one-microphone speech separation: A spectral learning approach. In Neural Information Processing Systems, pages 65–72, 2005. [3] P. Comon and C. Jutten, editors. Handbook of Blind Source Separation: Independent Component Analysis and Applications.
Academic Press, First Edition, 2010. [4] K. Ganchev, J. Grac¸a, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001–2049, 2010. [5] Z. Ghahramani and M.I. Jordan. Factorial hidden Markov models. Machine Learning, 27:245–273, 1997. [6] M. Grant and S. Boyd. Graph implementations for nonsmooth convex programs. In V. Blondel, S. Boyd, and H. Kimura, editors, Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences, pages 95–110. Springer-Verlag Limited, 2008. http://stanford.edu/˜boyd/ graph_dcp.html. [7] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.1. http: //cvxr.com/cvx, March 2014. [8] G.W. Hart. Nonintrusive appliance load monitoring. Proceedings of the IEEE, 80(12):1870 –1891, Dec 1992. [9] T. Hastie, R. Tibshirani, and J. Friedman, editors. The Elements of Statistical Learning, Second Edition. Springer, 2009. [10] M.J. Johnson and A.S. Willsky. Bayesian nonparametric hidden semi-Markov models. Journal of Machine Learning Research, 14:673–701, 2013. [11] H. Kim, M. Marwah, M. Arlitt, G. Lyon, and J. Han. Unsupervised disaggregation of low frequency power measurements. In Proceedings of the SIAM Conference on Data Mining, pages 747–758, 2011. [12] J. Z. Kolter and T. Jaakkola. Approximate inference in additive factorial HMMs with application to energy disaggregation. In Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS-12), volume 22, pages 1472–1482, 2012. [13] P. Liang, M.I. Jordan, and D. Klein. Learning from measurements in exponential families. In The 26th Annual International Conference on Machine Learning, pages 641–648, 2009. [14] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning of conditional random fields. 
In Proceedings of Association for Computational Linguistics (ACL-08), pages 870–878, Columbus, Ohio, June 2008. [15] O. Parson. Unsupervised Training Methods for Non-intrusive Appliance Load Monitoring from Smart Meter Data. PhD thesis, University of Southampton, April 2014. [16] O. Parson, S. Ghosh, M. Weal, and A. Rogers. Non-intrusive load monitoring using prior models of general appliance types. In Proceedings of the Twenty-Sixth Conference on Artificial Intelligence (AAAI12), pages 356–362, July 2012. [17] S. T. Roweis. One microphone source separation. In Advances in Neural Information Processing, pages 793–799, 2001. [18] L.K. Saul and M.I. Jordan. Mixed memory Markov chains: Decomposing complex stochastic processes as mixtures of simpler ones. Machine Learning, 37:75–87, 1999. [19] M.K. Titsias, C. Yau, and C.C. Holmes. Statistical inference in hidden Markov models using k-segment constraints. Eprint arXiv:1311.1189, 2013. [20] M. Yang, H. Wang, H. Chen, and W. Ku. Querying uncertain data with aggregate constraints. In Proceedings of the 2011 ACM SIGMOD International Conference on Management of Data, SIGMOD ’11, pages 817–828, New York, NY, USA, 2011. [21] M. Zhong, N. Goddard, and C. Sutton. Interleaved factorial non-homogeneous hidden Markov models for energy disaggregation. In Neural Information Processing Systems, Workshop on Machine Learning for Sustainability, Lake Tahoe, Nevada, USA, 2013. [22] M. Ziefman and K. Roth. Nonintrusive appliance load monitoring: review and outlook. IEEE transactions on Consumer Electronics, 57:76–84, 2011. [23] J.-P. Zimmermann, M. Evans, J. Griggs, N. King, L. Harding, P. Roberts, and C. Evans. Household electricity survey, 2012. [24] A. Zoha, A. Gluhak, M.A. Imran, and S. Rajasegarar. Non-intrusive load monitoring approaches for disaggregated energy sensing: a survey. Sensors, 12:16838–16866, 2012. 9
Poisson Process Jumping between an Unknown Number of Rates: Application to Neural Spike Data Florian Stimberg Computer Science, TU Berlin Florian.Stimberg@tu-berlin.de Andreas Ruttor Computer Science, TU Berlin Andreas.Ruttor@tu-berlin.de Manfred Opper Computer Science, TU Berlin Manfred.Opper@tu-berlin.de Abstract We introduce a model where the rate of an inhomogeneous Poisson process is modified by a Chinese restaurant process. Applying a MCMC sampler to this model allows us to do posterior Bayesian inference about the number of states in Poisson-like data. Our sampler is shown to get accurate results for synthetic data and we apply it to V1 neuron spike data to find discrete firing rate states depending on the orientation of a stimulus. 1 Introduction Event time data is often modeled as an inhomogeneous Poisson process, whose rate λ(t) as a function of time t has to be learned from the data. Poisson processes have been used to model a wide variety of data, ranging from network traffic [25] to photon emission data [12]. Although neuronal spikes are in general not perfectly modeled by a Poisson process [17], there has been extensive work based on the simplified Poisson assumption [e.g. 19, 20]. Prior assumptions about the rate process strongly influence the result of inference. Some models assume that the rate λ(t) changes continuously [1, 7, 22], but for certain applications it is more useful to model it as a piecewise constant function of time, which switches between a finite number of distinct states. Such an assumption could be of interest, when one tries to relate the change of the rate to sudden changes of certain external experimental conditions, e.g. changes of neural spike activity when external stimuli are switched. An example for a discrete state rate process is the Markov modulated Poisson process (MMPP) [10, 18], where changes between the states of the rate follow a continuous time Markov jump process (MJP). 
For the MMPP one has to specify the number of states beforehand, and it is often not clear how this number should be chosen. Comparing models with different numbers of states by computing Bayes factors can be cumbersome and time consuming. On the other hand, nonparametric Bayesian methods for models with an unknown number of model parameters, based on Dirichlet or Chinese restaurant processes, have been highly popular in recent years [e.g. 24, 26]. However—to our knowledge—such an idea has not yet been applied to the conceptually simpler Poisson process scenario. In this paper, we present a computationally efficient MCMC approach to this model, which utilizes its feature that given the jump process the observed Poisson events are independent. This property makes computing the data likelihood very fast in each iteration of our sampler and leads to a highly efficient estimation of the rate. This allows us to apply our sampler to large data sets.

Figure 1: Generative model.

2 Model

We assume that the data comes from an inhomogeneous Poisson process, which has rate λ(t) at time t. In our model λ(t) is a latent, piecewise constant process. The likelihood of the data given a path λ(0:T) with s distinct states then becomes [8]

$$P(Y \mid \lambda_{(0:T)}) \propto \prod_{i=1}^{s} \lambda_i^{n_i} e^{-\tau_i \lambda_i}, \qquad (1)$$

where $\tau_i$ is the overall time spent in state i (defined as $\lambda(t) = \lambda_i$) and $n_i$ is the number of Poisson events in the data Y while the system is in this state. A trajectory of λ(0:T) is generated by drawing c jump times from a Poisson process with rate f. This means λ(0:T) is separated into c + 1 segments, during each of which it remains in one state $\lambda_i$. To deal with an unknown number of discrete states and their unknown probability π of being visited, we assume that the distribution π is drawn from a Dirichlet process with concentration parameter α and base distribution pλ. By integrating out π we get a Chinese restaurant process (CRP) with the same parameters as the Dirichlet process.
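Given the per-state sufficient statistics (event counts $n_i$ and occupancy times $\tau_i$), the log of the likelihood (1) is a single vectorized expression, which is what makes each sampler iteration cheap. A minimal numpy sketch:

```python
import numpy as np

def path_log_likelihood(n, tau, lam):
    """Log of eq. (1) up to an additive constant:
    sum_i n_i * log(lambda_i) - tau_i * lambda_i,
    where n_i counts observed events while the path is in state i and
    tau_i is the total time the path spends in state i."""
    n, tau, lam = (np.asarray(a, float) for a in (n, tau, lam))
    return float(np.sum(n * np.log(lam) - tau * lam))
```

For a single state this reduces to the usual Poisson log-likelihood of $n$ events in time $\tau$; states contribute additively, so likelihood ratios over changed segments only need local recomputation.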
For a derivation of this result see [27]. Let us assume we already have i segments and draw the next jump time from an exponential distribution with rate f. The next segment gets a new λ-value sampled from pλ with probability α/(α + i); otherwise one of the previous segments is chosen with equal probability and its λ-value is also used for the new segment. This leads to the following prior probability of a path λ(0:T):

$$P(\lambda_{(0:T)} \mid f, \alpha, p_\lambda) \propto f^c e^{-fT}\, \frac{\alpha^s \prod_{j=1}^{s} p_\lambda(\lambda_j)\,(\#_j - 1)!}{\prod_{i=0}^{c}(\alpha + i)}, \qquad (2)$$

where s is the number of distinct values of λ. To summarize, we have f as the rate of jumps, pλ as a prior distribution over the values of λ, $\#_j$ as the number of segments assigned to state j, and α as a hyperparameter which determines how likely a jump will lead to a completely new value for λ. If there are c jumps in the path λ(0:T), then a priori the expected number of distinct λ-values is [28]

$$E[s \mid c] = \sum_{i=1}^{c+1} \frac{\alpha}{\alpha + i - 1}. \qquad (3)$$

We choose a gamma distribution for pλ with shape a and scale b,

$$p_\lambda(\lambda) = \mathrm{Gamma}(\lambda; a, b) \propto \lambda^{a-1} e^{-\lambda/b}, \qquad (4)$$

which is conjugate to the likelihood (1). The generative model is visualized in figure 1.

3 MCMC Sampler

We use a Metropolis-within-Gibbs sampler with two main steps: First, we change the path of the Chinese restaurant process conditioned on the current parameters with a Metropolis-Hastings random walk. In the second step, the times of the jumps and the states are held fixed, and we directly sample the λ-values and f from their conditional posteriors.

3.1 Random Walk on the many-state Markov jump process

To generate a proposal path λ∗(0:T) (for the remainder of this paper ∗ will always denote a variable concerning the proposal path) we manipulate the current path λ(0:T) by one of the following actions: shifting one of the jumps in time, adding a jump, removing one of the existing jumps, switching the state of a segment, joining two states, or dividing one state into two.
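The first of these moves, shifting a jump, draws the proposed time from a Gaussian centered at the current jump time and truncated at the two neighboring jumps. A minimal sketch (rejection sampling is our implementation choice; the paper does not specify how the truncation is realized):

```python
import numpy as np

def shift_jump(t, t_left, t_right, sigma_t, rng):
    """Propose a new time for a jump currently at t, drawn from N(t, sigma_t^2)
    truncated to the open interval (t_left, t_right) bounded by the
    neighboring jumps. Simple rejection sampling."""
    while True:
        t_new = rng.normal(t, sigma_t)
        if t_left < t_new < t_right:
            return t_new
```

As noted in the paper, σt should be on the scale of the typical time between Poisson events; a large σt makes the truncated distribution nearly uniform over the interval.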
This is similar to the birth-death approach, which has been used before for other types of MJPs [e.g. 5]. We shift a jump by drawing the new time from a Gaussian distribution centered at the current time with standard deviation σt and truncated at the neighboring jumps. σt is a parameter of the sampler, which we chose by hand and which should be in the same scale as the typical time between Poisson events. If in doubt, a high value should be chosen, so that the truncated distribution becomes more uniform. When adding a jump the time of the new jump is drawn from a uniform distribution over the whole time interval. With probability qn a new value of λ is added, otherwise we reuse an old one. The parameter qn was chosen by hand to be 0.1, which worked well for all data sets we tested the sampler on. To remove a jump we choose one of the jumps with equal probability. Switching the state of a segment is done by choosing one of the segments at random and either assigning it to an existing value or introducing a value which was not used before, again with probability qn. When adding a new value of λ, both when adding a jump or when switching the state of a segment, we draw it from the conditional density P(λ∗ s+1|Y, λ(0:T )) ∝ Gamma(λ∗ s+1; a, b) Gamma(λ∗ s+1; ns+1 + 1, 1/τs+1) ∝ Gamma λ∗ s+1; a + ns+1, b/(τs+1b + 1)  . (5) If we instead reuse an already existing λ, we choose which state to use by drawing it from a discrete distribution with probabilities proportional to (5), but with n and τ being the number of Poisson events and the time in this segment, respectively. Changing the number of states through adding and removing jumps or switching the states of segments is sufficient to guarantee that the sampler converges to the posterior density. However, the sampler is very unlikely to reduce the number of states through these actions, if all states are used in multiple segments, so that convergence might take a very long time in this case. 
Therefore, we introduce the option to join all segments assigned to a neighboring (when ordered by their λ-values) pair of states into one state. Here the geometric mean $\lambda^*_j = \sqrt{\lambda_{i_1}\lambda_{i_2}}$ of both λ-values is used for the joined state. Because we added the join action, we need an inverted action, which divides a state into two new ones, in order to guarantee reversibility and therefore fulfill detailed balance. The state to divide is randomly chosen among the states which have at least two segments assigned to them. Then a small factor ε > 1 is drawn from a shifted exponential distribution, and the λ-value of the chosen state is multiplied by and divided by ε, respectively, to get the λ-values $\lambda^*_{j_1} = \lambda_i \varepsilon$ and $\lambda^*_{j_2} = \lambda_i/\varepsilon$ of the two new states. The distribution over ε is bounded, so that the new λ-values are assured to lie between the neighboring ones. After this, the segments of the old state are randomly assigned to the two new states with probability proportional to the data likelihood (1). If by the last segment only one of the two states has been chosen for all segments, the last segment is set to the other state. This method assures that every possible assignment (where both states are used) of the two states to the segments of the old state can occur. Additionally, there is exactly one way for each assignment to be drawn, allowing a simple calculation of the Metropolis-Hastings acceptance probability for both the join and the divide action. Figure 2 shows how these actions work on the path. A proposed path λ∗(0:T) is accepted with probability

$$p_{\mathrm{MH}} = \min\left(1,\ \frac{P(Y \mid \lambda^*_{(0:T)})}{P(Y \mid \lambda_{(0:T)})}\, \frac{Q(\lambda_{(0:T)} \mid \lambda^*_{(0:T)})}{Q(\lambda^*_{(0:T)} \mid \lambda_{(0:T)})}\, \frac{P(\lambda^*_{(0:T)} \mid f, \alpha, p_\lambda)}{P(\lambda_{(0:T)} \mid f, \alpha, p_\lambda)}\right). \qquad (6)$$

Figure 2: Example showing how the proposal actions (switch, shift, remove, add, join, divide) modify the path of the Chinese restaurant process. The new path is drawn in dark blue, the old one in light blue.
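The join and divide moves form an exact inverse pair around the geometric mean; a minimal sketch (the scale of the shifted-exponential draw for ε is illustrative, and the additional bounding of ε between neighboring rates described above is omitted):

```python
import numpy as np

def join_states(lam1, lam2):
    """Join two neighboring states into one at their geometric mean."""
    return np.sqrt(lam1 * lam2)

def divide_state(lam, eps):
    """Divide a state with rate lam into two states lam*eps and lam/eps for a
    factor eps > 1; joining the pair recovers lam exactly."""
    return lam * eps, lam / eps

def draw_eps(rng, scale=0.1):
    """Shifted exponential: eps = 1 + Exp(scale) > 1 (scale is illustrative)."""
    return 1.0 + rng.exponential(scale)
```

Because the divide factor preserves the geometric mean, applying join after divide is the identity, which keeps the acceptance-ratio bookkeeping simple.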
While the data likelihood ratio is the same for all proposal actions and follows from (1), the proposal and prior ratios Ψ = Q(λ(0:T )|λ∗ (0:T )) Q(λ∗ (0:T )|λ(0:T )) P(λ∗ (0:T )|f, α, pλ) P(λ(0:T )|f, α, pλ) (7) depend on the chosen proposal action. The acceptance probability for each action (provided in the supplementary material) can be calculated based on its description and the probability of a path (2). Because our proposal process is a simple random walk, the major contribution to the computation time comes from calculating the data likelihood. Luckily, this can be done very efficiently, because we only need to know how many Poisson events occur during the segments of λ∗ (0:T ) and λ(0:T ), how often the process changes state, and how much time it spends in each state. In order to avoid iterating over all the data for each proposal, we compute the index of the next event in the data for a fine time grid before the sampler starts. This ensures that the computational time is linear in the number of jumps in λ(0:T ), while the number of Poisson events in the data only introduces onetime costs for calculating the grid, which are negligible in practice. Additionally, we only need to compute the likelihood ratio over those segments which are changed in the proposal, because the unchanged parts cancel each other out. 3.2 Sampling the parameters As we use a gamma prior Gamma(λi; a, b) for each λi, it is easy to see from (1) that this leads to gamma posteriors Gamma (λi; a + ni, b/(τib + 1)) (8) over λi. Thus a Gibbs sampling step is used to update each λi. As for the rate f of change points, if we assume a gamma prior for f ∼Gamma(af, bf), the posterior becomes a gamma distribution, too: Gamma (f; af + c, bf/(Tbf + 1)) . (9) 4 Experiments We first validate our sampler on synthetic data sets, then we test our Chinese restaurant approach on neural spiking data from a cat’s primary visual cortex. 
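The conjugate updates (8) and (9) used in the second sampler step are plain gamma draws; a minimal sketch (numpy's `Generator.gamma` uses the same shape/scale parameterization as the paper's Gamma(λ; a, b)):

```python
import numpy as np

def gibbs_lambda(n, tau, a, b, rng):
    """Eq. (8): draw each state rate lambda_i ~ Gamma(a + n_i, b / (tau_i b + 1)),
    given per-state event counts n_i and occupancy times tau_i."""
    n, tau = np.asarray(n, float), np.asarray(tau, float)
    return rng.gamma(a + n, b / (tau * b + 1.0))

def gibbs_f(c, T, a_f, b_f, rng):
    """Eq. (9): draw the jump rate f ~ Gamma(a_f + c, b_f / (T b_f + 1)),
    given the number of jumps c and the total observation time T."""
    return rng.gamma(a_f + c, b_f / (T * b_f + 1.0))
```

With shape a + n and scale b/(τb + 1), the posterior mean (a + n)b/(τb + 1) approaches the empirical rate n/τ as the data dominates the prior.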
4.1 Synthetic Data

We sampled 100 data sets from the prior with f = 0.02 and α = 3.0. Figure 3 compares the true values for the number of states and the number of jumps with the posterior mean after 1.1 million samples, with the first 100,000 dropped as burn-in. On average the sampler took around 25 seconds to generate the samples on an Intel Xeon CPU with 2.40 GHz. The numbers of both jumps and states seem to be captured well, but for a large number of distinct states the mean seems to underestimate the true value. This is not surprising, because the λ parameters are drawn from the same base distribution.

Figure 3: Posterior mean vs. true number of states (left) and jumps (right) for 100 data sets drawn from the prior. The red line shows the identity function.

Figure 4: Posterior of λ over t for the first 4 toy data sets. The black line is the true path, while the posterior mean is drawn as a dashed green line surrounded by a 95% confidence interval.

Figure 5: Stimulus and data for a part of the recordings from the first neuron. (top) Mean rates computed by using a moving triangle function. (middle) Spiking times. (bottom) Orientation of the stimulus.

Figure 6: (left) Posterior mean number of states vs. number of spikes in the data for all neurons. (right) Posterior mean number of states over the posterior mean number of jumps.

For a large number of states the probability that two
states are very similar becomes high, which makes them indistinguishable without observing more data. For four of the 100 data sets the posterior distribution over λ(t) is compared to the true path in figure 4. While we used the true value of α for our simulations, the model seems to be robust against different choices of the parameter. This is shown in the supplementary material.

4.2 Bursting of Cat V1 Neurons

Poisson processes are not an ideal model for single-neuron spiking times [3]. The two main reasons for this are the refractory period of neurons and bursting [14]. Despite this, Poisson processes have been used extensively to analyze spiking data [e.g. 19, 20]. Additionally, both reasons should not be a problem for us. The refractory period is not as important for inference, since spiking during it will not be observed. Bursting, on the other hand, is exactly what models with jumping Poisson rates are made to explain: sudden changes in the spiking rate. The data set used in this paper was obtained from multi-site silicon electrodes in the primary visual cortex of an anesthetized cat. For further information on the experimental setup see [4]. The data set contains spike trains from 10 different neurons, which were recorded while bars of varying orientation moved through the visual field of the cat.

Figure 7: Detail of the results for one of the neurons. The black lines at the bottom represent the spike data, while the colors indicate the state with the highest posterior probability, which is represented by the height of the area. The states are ordered by increasing rate λ.

Since the stimulus is discrete (the orientation
ranges from 0° to 340° in steps of 20°), we expect to find discrete states in the response of the neurons. The recording lasted for 720 seconds and, while the orientation of the stimulus changed randomly, each orientation was shown 8 times for 5 seconds each over the whole experiment. In figure 5, a section of the spiking times of one neuron is shown together with the orientation of the stimulus. When computing a mean spiking rate by sliding a triangle function over the data, it is crucial to select a good width for the triangle function. A small width makes it possible to find short phases of very high spiking rate (so-called bursts), but also leads to jumps in the rate even for single spikes. A larger width, on the other hand, smoothes the bursts out. Using our sampler for Bayesian inference based on our model allows us to find bursts and cluster them by their spiking rate, while at the same time the spikes between bursts are explained by one of the ground states, which have lower rates but longer durations. We used an exponential prior for f with mean rate $10^{-4}$ and a low value of α = 0.1 to prevent overfitting. A second simulation run with a ten times higher prior mean for f and α = 0.5 led to almost the same posterior number of states and only a slightly higher number of jumps, of which a larger fraction had no impact, because the state was not changed.

Figure 8: Probability distribution of the orientation of the stimulus conditioned on the active state. The states are ordered by increasing rate λ and the results are taken from samples at the MAP number of states.
The base distribution pλ was chosen to be exponential with mean 106, which is a fairly uninformative prior, because the duration of a single spike is in the order of magnitude of 1ms [11] resulting in an upper bound for the rate at around 1000/s. The posterior number of states for all of the 10 neurons is in the same region, as shown in figure 6, even though the number of spikes differs widely (from 725 to 13244). Although there seem to be more states if more jumps are found, the posterior differs strongly from the prior—a priori the expected number of states is under 2—indicating that the posterior is dominated by the data likelihood. For a small time frame of the spiking data from one of the neurons figure 7 shows which state had the highest posterior probability at each time and how high this probability was. It can be seen that the bursting states, which have high rates, are only active for a short time. Figure 8 shows that these burst states are clearly orientation dependent (see the supplementary material for results of all 10 neurons). Over the whole experiment all orientations were shown for exactly the same amount of time. While the highest state is always clearly concentrated on a range of about 60◦, the lower bursting states cover neighboring orientations. Often a smaller reaction can be seen for bars rotated by 180◦from the favored angle. The lowest state might indicate inhibition, because it is mostly active between the favored state and the one rotated by 180◦. As we can see in figure 9, some of the rates of the states are pretty similar over all the neurons, although it has to be noted that the orientation is probably not the only feature of the stimulus the 6 neurons are receptive to. Especially the position of the bar in the visual field should be important and could explain, why only some of the neurons reach the highest burst rate. It may seem that finding bursts is a simple task, but there has been extensive work in this field [e.g. 
6, 13, 16] and naive approaches, like looking at the mean rate of events over time, fail easily if the time resolution is not chosen well (as seen in figure 5). Additionally, our sampler not only distinguishes between burst and non-burst phases, but also uncovers discrete intensities, which are associated with features of the stimulus.

4.3 Comparison to a continuous rate model

While our model assumes that the Poisson rates are discrete values, there have been other approaches applying continuous functions to estimate the rate. [1] use a Gaussian process prior over λ(t) and present a Markov chain Monte Carlo sampler to sample from the posterior. Since the sampler is very slow for our neuron data, we restricted the inference task to a small time window of the spike train from only one of the neurons. In figure 10 the results from the Sigmoidal Gaussian Cox Process (SGCP) model of [1] are shown for different values of the length scale hyperparameter and contrasted with the results from our model. Similar to the naive approach of computing a moving average of the rate (as in figure 5), the GP seems to either smooth out the bursts or become so sensitive that even single spikes change the rate function significantly, depending on the choice of the GP hyperparameters. Our neural data seems to be especially bad for the performance of this algorithm, because it is based on the principle of uniformization. Uniformization was introduced by [9] and allows one to sample from an inhomogeneous Poisson process by first sampling from a homogeneous one. If the rate of the homogeneous process is an upper bound on the rate function of the inhomogeneous Poisson process, then a sample of the latter can be generated by thinning out the events, where each event is omitted with a certain probability. The sampler for the SGCP model performs inference using this method, so events are sampled at the current estimate of the maximum rate for the whole data set and thinned out afterwards.
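The thinning principle just described can be sketched in a few lines; function and parameter names are ours, and a bound `rate_max` ≥ λ(t) on the whole interval is assumed:

```python
import numpy as np

def thinned_poisson_sample(rate_fn, rate_max, t_end, seed=0):
    """Sample an inhomogeneous Poisson process on [0, t_end] by thinning.

    First draw a homogeneous process at the bounding rate `rate_max`,
    then keep each candidate event t independently with probability
    rate_fn(t) / rate_max.
    """
    rng = np.random.default_rng(seed)
    n = rng.poisson(rate_max * t_end)                  # homogeneous event count
    times = np.sort(rng.uniform(0.0, t_end, size=n))   # candidate event times
    accept = rng.uniform(size=n) * rate_max < np.array([rate_fn(t) for t in times])
    return times[accept]
```

This sketch also makes the slowdown discussed next visible: if `rate_max` must match the strongest burst rate, almost every candidate drawn during the long quiet periods is thinned away, so most of the sampling work is wasted.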
For our neural data the maximum rate would have to be the spiking rate during the strongest bursts, but this would lead to a very large number of (later thinned out) event times being sampled in the long periods between bursts, which slows down the algorithm severely. This problem only occurs if uniformization is applied to λ(t); other approaches, like [21], use it on the rate of an MJP with a fixed number of states. When we use a practically flat prior for the sampling of the maximum rate, it will be very low compared to the bursting rates our algorithm finds (see figure 10). On the other hand, if we use a very peaked prior around our burst rates, the algorithm becomes extremely slow (taking hours for just 100 samples) even when used on less than a tenth of the data for one neuron.

5 Conclusion

We have introduced an inhomogeneous Poisson process model with a flexible number of states. Our inference is based on an MCMC sampler which detects recurring states in the data set and joins them in the posterior. Thus the number of distinct event rates is estimated directly during MCMC sampling. Clearly, sampling the number of states together with the jump times and rates needs considerably more samples to fully converge than an MJP with a fixed number of states. For our application to neural data in section 4.2 we generated 110 million samples for each neuron, which took between 80 and 325 minutes on an Intel Xeon CPU at 2.4 GHz. For all neurons the posterior had converged at the latest after a tenth of that time. It has to be remembered that to obtain similar results without the Chinese restaurant process, we would need to compute Bayes factors for different numbers of states. This is a more complicated task than just doing posterior inference for a fixed number of states and would require more computationally demanding approaches, e.g. a bridge sampler, in order to get reasonably good estimates.
Additionally, it would be hard to decide for what range of state dimensionalities the samplers should be run. In contrast, our sampler typically gave a good estimate of the number of states in the data set after just a few seconds of sampling.

Figure 9: Posterior mean rates λ_i for the MAP number of states.

Figure 10: Results of the SGCP sampler on a small part of the data of one neuron. The black dashed line shows the posterior mean from our sampler. The spiking times are drawn as black vertical lines below.

Longer run times are only needed for a higher-accuracy estimate of the posterior distribution over the number of states. Although our prior for the transition rates of the MJP is state-independent, which facilitates the integration over the maximum number of states and gives rise to the Chinese restaurant process, this does not hold for the posterior. We can indeed compute the full posterior state transition matrix, with state-dependent jump rates, from the samples. A huge advantage of our algorithm is that its computation time scales linearly in the number of jumps in the hidden process, while the influence of the number of events can be neglected in practice. This has been shown to speed up inference for MMPPs [23], but our more flexible model makes it possible to find simple underlying structures in huge data sets (e.g. network access data with millions of events) in reasonable time without the need to fix the number of states beforehand. In contrast to other MCMC algorithms [2, 8, 15] for MMPPs, our sampler is very flexible and can be easily adapted to e.g.
Gamma processes generating the data or semi-Markov jump processes, which have non-exponentially distributed waiting times for the change of the rate. For Gamma process data the computation time to calculate the likelihood would no longer be independent of the number of events, but it might lead to better results for data which is strongly non-Poissonian. We showed that our model can be applied to neural spike trains and that our MCMC sampler finds discrete states in the data which are linked to the discreteness of the stimulus. In general, our model should yield the best results when applied to data with many events and a discrete structure of unknown dimensionality influencing the rate.

Acknowledgments

Neural data were recorded by Tim Blanche in the laboratory of Nicholas Swindale, University of British Columbia, and downloaded from the NSF-funded CRCNS Data Sharing website.

References

[1] Ryan Prescott Adams, Iain Murray, and David J. C. MacKay. Tractable nonparametric Bayesian inference in Poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 9–16, New York, NY, USA, 2009. ACM.
[2] Elja Arjas and Dario Gasbarra. Nonparametric Bayesian inference from right censored survival data, using the Gibbs sampler. Statistica Sinica, 4:505–524, 1994.
[3] R. Barbieri, M. C. Quirk, L. M. Frank, M. A. Wilson, and E. N. Brown. Construction and analysis of non-Poisson stimulus-response models of neural spiking activity. J. Neurosci. Methods, 105(1):25–37, January 2001.
[4] Timothy J. Blanche, Martin A. Spacek, Jamille F. Hetke, and Nicholas V. Swindale. Polytrodes: High-density silicon electrode arrays for large-scale multiunit recording. Journal of Neurophysiology, 93(5):2987–3000, 2005.
[5] R. J. Boys, D. J. Wilkinson, and T. B. Kirkwood. Bayesian inference for a discretely observed stochastic kinetic model. Statistics and Computing, 18(2):125–135, June 2008.
[6] M. Chiappalone, A. Novellino, I. Vajda, A. Vato, S. Martinoia, and J. van Pelt. Burst detection algorithms for the analysis of spatio-temporal patterns in cortical networks of neurons. Neurocomputing, 65–66:653–662, 2005.
[7] John P. Cunningham, Vikash Gilja, Stephen I. Ryu, and Krishna V. Shenoy. Methods for estimating neural firing rates, and their application to brain–machine interfaces. Neural Networks, 22(9):1235–1246, November 2009.
[8] Paul Fearnhead and Chris Sherlock. An exact Gibbs sampler for the Markov-modulated Poisson process. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(5):767–784, November 2006.
[9] W. K. Grassmann. Transient solutions in Markovian queueing systems. Computers & Operations Research, 4(1):47–53, 1977.
[10] H. Heffes and D. Lucantoni. A Markov modulated characterization of packetized voice and data traffic and related statistical multiplexer performance. IEEE Journal on Selected Areas in Communications, 4(6):856–868, 1986.
[11] Peter R. Huttenlocher. Development of cortical neuronal activity in the neonatal cat. Experimental Neurology, 17(3):247–262, 1967.
[12] Mark Jäger, Alexander Kiel, Dirk-Peter Herten, and Fred A. Hamprecht. Analysis of single-molecule fluorescence spectroscopic data with a Markov-modulated Poisson process. ChemPhysChem, 10(14):2486–2495, 2009.
[13] Y. Kaneoke and J. L. Vitek. Burst and oscillation as disparate neuronal properties. Journal of Neuroscience Methods, 68(2):211–223, 1996.
[14] R. E. Kass, V. Ventura, and E. N. Brown. Statistical issues in the analysis of neuronal data. J Neurophysiol, 94(1):8–25, July 2005.
[15] S. C. Kou, X. Sunney Xie, and Jun S. Liu. Bayesian analysis of single-molecule experimental data. Journal of the Royal Statistical Society: Series C (Applied Statistics), 54(3):469–506, June 2005.
[16] C. R. Legéndy and M. Salcman. Bursts and recurrences of bursts in the spike trains of spontaneously active striate cortex neurons. Journal of Neurophysiology, 53(4):926–939, April 1985.
[17] Gaby Maimon and John A. Assad. Beyond Poisson: Increased spike-time regularity across primate parietal cortex. Neuron, 62(3):426–440, 2009.
[18] K. S. Meier-Hellstern. A fitting algorithm for Markov-modulated Poisson processes having two arrival rates. European Journal of Operational Research, 29(3):370–377, 1987.
[19] Martin Nawrot, Ad Aertsen, and Stefan Rotter. Single-trial estimation of neuronal firing rates: From single-neuron spike trains to population activity. Journal of Neuroscience Methods, 94:81–92, 1999.
[20] D. H. Perkel, G. L. Gerstein, and G. P. Moore. Neuronal spike trains and stochastic point processes. I. The single spike train. Biophysical Journal, 7(4):391–418, July 1967.
[21] V. A. Rao. Markov chain Monte Carlo for continuous-time discrete-state systems. PhD thesis, University College London, 2012.
[22] V. A. Rao and Y. W. Teh. Gaussian process modulated renewal processes. In J. Shawe-Taylor, R. S. Zemel, P. Bartlett, F. C. N. Pereira, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 2474–2482. 2011.
[23] V. A. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and extensions. Journal of Machine Learning Research, 14:3207–3232, 2013.
[24] Ardavan Saeedi and Alexandre Bouchard-Côté. Priors over recurrent continuous time processes. In Advances in Neural Information Processing Systems (NIPS), volume 24, 2011.
[25] K. Sriram and W. Whitt. Characterizing superposition arrival processes in packet multiplexers for voice and data. IEEE Journal on Selected Areas in Communications, 4(6):833–846, September 1986.
[26] Florian Stimberg, Andreas Ruttor, and Manfred Opper. Bayesian inference for change points in dynamical systems with reusable states—a Chinese restaurant process approach. Journal of Machine Learning Research, Proceedings Track, 22:1117–1124, 2012.
[27] Yee Whye Teh. Dirichlet processes. In Encyclopedia of Machine Learning. Springer, 2010.
[28] Xinhua Zhang. A very gentle note on the construction of Dirichlet process. Technical report, Canberra, Australia, September 2008.
Low-Rank Time-Frequency Synthesis

Cédric Févotte
Laboratoire Lagrange (CNRS, OCA & Université de Nice)
Nice, France
cfevotte@unice.fr

Matthieu Kowalski∗
Laboratoire des Signaux et Systèmes (CNRS, Supélec & Université Paris-Sud)
Gif-sur-Yvette, France
kowalski@lss.supelec.fr

Abstract

Many single-channel signal decomposition techniques rely on a low-rank factorization of a time-frequency transform. In particular, nonnegative matrix factorization (NMF) of the spectrogram – the (power) magnitude of the short-time Fourier transform (STFT) – has been considered in many audio applications. In this setting, NMF with the Itakura-Saito divergence was shown to underlie a generative Gaussian composite model (GCM) of the STFT, a step forward from more empirical approaches based on ad hoc transform and divergence specifications. Still, the GCM is not yet a generative model of the raw signal itself, but only of its STFT. The work presented in this paper fills in this ultimate gap by proposing a novel signal synthesis model with low-rank time-frequency structure. In particular, our new approach opens doors to multi-resolution representations, which were not possible in the traditional NMF setting. We describe two expectation-maximization algorithms for estimation in the new model and report audio signal processing results with music decomposition and speech enhancement.

1 Introduction

Matrix factorization methods currently enjoy a large popularity in machine learning and signal processing. In the latter field, the input data is usually a time-frequency transform of some original time series x(t). For example, in the audio setting, nonnegative matrix factorization (NMF) is commonly used to decompose magnitude or power spectrograms into elementary components [1]; the spectrogram, say S, is approximately factorized into WH, where W is the dictionary matrix collecting spectral patterns in its columns and H is the activation matrix.
The approximation WH is generally of lower rank than S, unless additional constraints are imposed on the factors. NMF was originally designed in a deterministic setting [2]: a measure of fit between S and WH is minimized with respect to (w.r.t.) W and H. Choosing the "right" measure for a specific type of data and task is not straightforward. Furthermore, NMF-based spectral decompositions often arbitrarily discard phase information: only the magnitude of the complex-valued short-time Fourier transform (STFT) is considered. To remedy these limitations, a generative probabilistic latent factor model of the STFT was proposed in [3]. Denoting by {y_fn} the complex-valued coefficients of the STFT of x(t), where f and n index frequencies and time frames, respectively, the so-called Gaussian Composite Model (GCM) introduced in [3] writes simply

y_fn ∼ N_c(0, [WH]_fn),    (1)

where N_c refers to the circular complex-valued normal distribution.¹ As shown by Eq. (1), in the GCM the STFT is assumed centered (reflecting an equivalent assumption in the time domain which is valid for many signals such as audio signals) and its variance has a low-rank structure. Under these assumptions, the negative log-likelihood −log p(Y|W, H) of the STFT matrix Y and parameters W and H is equal, up to a constant, to the Itakura-Saito (IS) divergence D_IS(S|WH) between the power spectrogram S = |Y|² and WH [3]. The GCM is a step forward from traditional NMF approaches that fail to provide a valid generative model of the STFT itself; other approaches have only considered probabilistic models of the magnitude spectrogram under Poisson or multinomial assumptions, see [1] for a review.

∗Authorship based on alphabetical order to reflect an equal contribution.
¹A random variable x has distribution N_c(x|µ, λ) = (πλ)⁻¹ exp(−|x − µ|²/λ) if and only if its real and imaginary parts are independent and with distribution N(Re(µ), λ/2) and N(Im(µ), λ/2), respectively.
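The Itakura-Saito divergence invoked above, with elementwise cost d_IS(x|y) = x/y − log(x/y) − 1 (as defined later in Section 3 of the paper), is simple to state in code. This is a generic sketch, not the authors' implementation; the small `eps` guard is our own addition:

```python
import numpy as np

def is_divergence(S, V, eps=1e-12):
    """Itakura-Saito divergence D_IS(S|V) = sum_ij (s/v - log(s/v) - 1)
    between nonnegative arrays of equal shape."""
    ratio = (np.asarray(S, dtype=float) + eps) / (np.asarray(V, dtype=float) + eps)
    return float(np.sum(ratio - np.log(ratio) - 1.0))
```

A property worth noting is scale invariance, d_IS(λx|λy) = d_IS(x|y), which makes the IS divergence well suited to audio spectrograms whose entries span many orders of magnitude.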
Still, the GCM is not yet a generative model of the raw signal x(t) itself, but of its STFT. The work reported in this paper fills in this ultimate gap. It describes a novel signal synthesis model with low-rank time-frequency structure. Besides improved accuracy of representation thanks to modeling at the lowest level, our new approach opens doors to multi-resolution representations that were not possible in the traditional NMF setting. Because of the synthesis approach, we may represent the signal as a sum of layers, each with its own time resolution and its own latent low-rank structure.

The paper is organized as follows. Section 2 introduces the new low-rank time-frequency synthesis (LRTFS) model. Section 3 addresses estimation in LRTFS. We present two maximum likelihood estimation approaches with companion EM algorithms. Section 4 describes how LRTFS can be adapted to multiple-resolution representations. Section 5 reports experiments with audio applications, namely music decomposition and speech enhancement. Section 6 concludes.

2 The LRTFS model

2.1 Generative model

The LRTFS model is defined by the following set of equations. For t = 1, . . . , T, f = 1, . . . , F, n = 1, . . . , N:

x(t) = Σ_fn α_fn φ_fn(t) + e(t)    (2)
α_fn ∼ N_c(0, [WH]_fn)    (3)
e(t) ∼ N_c(0, λ)    (4)

For generality and simplicity of presentation, all the variables in Eq. (2) are assumed complex-valued. In the real case, the Hermitian symmetry of the time-frequency (t-f) frame can be exploited: one only needs to consider the atoms relative to positive frequencies, generate the corresponding complex signal and then generate the real signal satisfying the Hermitian symmetry on the coefficients. W and H are nonnegative matrices of dimensions F × K and K × N, respectively.² For a fixed t-f point (f, n), the signal φ_fn = {φ_fn(t)}_t, referred to as an atom, is the element of an arbitrary t-f basis, for example a Gabor frame (a collection of tapered oscillating functions with short temporal support).
e(t) is an independently and identically distributed (i.i.d.) Gaussian residual term. The variables {α_fn} are synthesis coefficients, assumed conditionally independent. Loosely speaking, they are dual to the analysis coefficients, defined by y_fn = Σ_t x(t) φ*_fn(t). The coefficients of the STFT can be interpreted as analysis coefficients obtained with a Gabor frame. The synthesis coefficients are assumed centered, ensuring that x(t) has zero expectation as well. A low-rank latent structure is imposed on their variance. This is in contrast with the GCM introduced at Eq. (1), which instead imposes a low-rank structure on the variance of the analysis coefficients.

2.2 Relation to sparse Bayesian learning

Eq. (2) may be written in matrix form as

x = Φα + e,    (5)

where x and e are column vectors of dimension T with coefficients x(t) and e(t), respectively. Given an arbitrary mapping from (f, n) ∈ {1, . . . , F} × {1, . . . , N} to m ∈ {1, . . . , M}, where M = FN, α is a column vector of dimension M with coefficients {α_fn}_fn and Φ is a matrix of size T × M with columns {φ_fn}_fn. In the following we will sometimes slightly abuse notation by indexing the coefficients of α (and other variables) by either m or (f, n). It should be understood that m and (f, n) are in one-to-one correspondence and the notation should be clear from the context. Let us denote by v the column vector of dimension M with coefficients v_fn = [WH]_fn. Then, from Eq. (3), we may write the prior distribution for α as

p(α|v) = N_c(α|0, diag(v)).    (6)

Ignoring the low-rank constraint, Eqs. (5)-(6) resemble sparse Bayesian learning (SBL), as introduced in [4, 5], where it is shown that marginal likelihood estimation of the variance induces sparse solutions of v and thus α.

²In the general unsupervised setting where both W and H are estimated, WH must be low-rank such that K < F and K < N. However, in supervised settings where W is known, we may have K > F.
The essential difference between our model and SBL is that the coefficients are no longer unstructured in LRTFS. Indeed, in SBL, each coefficient α_m has a free variance parameter v_m. This property is fundamental to the sparsity-inducing effect of SBL [4]. In contrast, in LRTFS, the variances are tied together such that v_m = v_fn = [WH]_fn.

2.3 Latent components reconstruction

As its name suggests, the GCM described by Eq. (1) is a composite model, in the following sense. We may introduce independent complex-valued latent components y_kfn ∼ N_c(0, w_fk h_kn) and write y_fn = Σ_{k=1}^K y_kfn. Marginalizing the components from this simple Gaussian additive model leads to Eq. (1). In this perspective, the GCM implicitly assumes the data STFT Y to be a sum of elementary STFT components Y_k = {y_kfn}_fn. In the GCM, the components can be reconstructed after estimation of W and H, using any statistical estimator. In particular, the minimum mean square error (MMSE) estimator, given by the posterior mean, reduces to so-called Wiener filtering:

ŷ_kfn = (w_fk h_kn / [WH]_fn) y_fn.    (7)

The components may then be STFT-inverted to obtain temporal reconstructions that form the output of the overall signal decomposition approach. Of course, the same principle applies to LRTFS. The synthesis coefficients α_fn may equally be written as a sum of latent components, such that α_fn = Σ_k α_kfn, with α_kfn ∼ N_c(0, w_fk h_kn). Denoting by α_k the column vector of dimension M with coefficients {α_kfn}_fn, Eq. (5) may be written as

x = Σ_k Φα_k + e = Σ_k c_k + e,    (8)

where c_k = Φα_k. The component c_k is the "temporal expression" of spectral pattern w_k, the kth column of W. Given estimates of W and H, the components may be reconstructed in various ways. The equivalent of the Wiener filtering approach used traditionally with the GCM would consist in computing ĉ_k^MMSE = Φα̂_k^MMSE, with α̂_k^MMSE = E{α_k|x, W, H}.
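The Wiener gain of Eq. (7) is just an elementwise mask that splits the STFT into K components. A sketch, with array names of our choosing:

```python
import numpy as np

def wiener_components(W, H, Y):
    """Eq. (7): split a complex STFT Y (F x N) into K latent components
    via the gains w_fk h_kn / [WH]_fn. Returns an array of shape (K, F, N).

    The gains sum to 1 over k, so the components add back up to Y exactly.
    """
    V = W @ H            # model variance [WH]_fn, shape (F, N)
    K = W.shape[1]
    return np.stack([np.outer(W[:, k], H[k, :]) / V * Y for k in range(K)])
```

Because Σ_k w_fk h_kn = [WH]_fn, this decomposition is conservative: summing the components over k recovers Y without error, which is the usual appeal of Wiener-style masking.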
Though the expression of α̂_k^MMSE is available in closed form, it requires the inversion of a prohibitively large matrix, of dimensions T × T (see also Section 3.2). We will instead use ĉ_k = Φα̂_k with α̂_k = E{α_k|α̂, W, H}, where α̂ is the available estimate of α. In this case, the coefficients of α̂_k are given by

α̂_kfn = (w_fk h_kn / [WH]_fn) α̂_fn.    (9)

3 Estimation in LRTFS

We now consider two approaches to estimation of W, H and α in the LRTFS model defined by Eqs. (2)-(4). The first approach, described in the next section, is maximum joint likelihood estimation (MJLE). It relies on the minimization of −log p(x, α|W, H, λ). The second approach is maximum marginal likelihood estimation (MMLE), described in Section 3.2. It relies on the minimization of −log p(x|W, H, λ), i.e., it involves the marginalization of α from the joint likelihood, following the principle of SBL. Though we present MMLE for the sake of completeness, our current implementation does not scale with the dimensions involved in the audio signal processing applications presented in Section 5, and large-scale algorithms for MMLE are left as future work.

3.1 Maximum joint likelihood estimation (MJLE)

Objective. MJLE relies on the optimization of

C_JL(α, W, H, λ) := −log p(x, α|W, H, λ)    (10)
= (1/λ) ‖x − Φα‖_2^2 + D_IS(|α|² | v) + log(|α|²) + M log π,    (11)

where we recall that v is the vectorized version of WH and where D_IS(A|B) = Σ_ij d_IS(a_ij|b_ij) is the IS divergence between nonnegative matrices (or vectors, as a special case), with d_IS(x|y) = (x/y) − log(x/y) − 1. The first term in Eq. (11) measures the discrepancy between the raw signal and its approximation. The second term ensures that the synthesis coefficients are approximately low-rank. Unexpectedly, a third term that favors sparse solutions of α, thanks to the log function, naturally appears from the derivation of the joint likelihood. The objective function (11) is not convex and the EM algorithm described next may only ensure convergence to a local solution.
EM algorithm. In order to minimize C_JL, we employ an EM algorithm based on the architecture proposed by Figueiredo & Nowak [6]. It consists of rewriting Eq. (5) as

z = α + √β e₁,    (12)
x = Φz + e₂,    (13)

where z acts as a hidden variable, e₁ ∼ N_c(0, I), e₂ ∼ N_c(0, λI − βΦΦ*), with the operator ·* denoting Hermitian transpose. Provided that β ≤ λ/δ_Φ, where δ_Φ is the largest eigenvalue of ΦΦ*, the likelihood function p(x|α, λ) under Eqs. (12)-(13) is the same as under Eq. (5). Denoting the set of parameters by θ_JL = {α, W, H, λ}, the EM algorithm relies on the iterative minimization of

Q(θ_JL|θ̃_JL) = −∫_z log p(x, α, z|W, H, λ) p(z|x, θ̃_JL) dz,    (14)

where θ̃_JL acts as the current parameter value. Loosely speaking, the EM algorithm relies on the idea that if z were known, then the estimation of α and of the other parameters would boil down to the mere white noise denoising problem described by Eq. (12). As z is not known, the posterior mean value w.r.t. z of the joint likelihood is considered instead. The complete likelihood in Eq. (14) may be decomposed as

log p(x, α, z|W, H, λ) = log p(x|z, λ) + log p(z|α) + log p(α|WH).    (15)

The hidden-variable posterior simplifies to p(z|x, θ_JL) = p(z|x, λ). From there, using standard manipulations with Gaussian distributions, the (i + 1)th iteration of the resulting algorithm writes as follows.

E-step:
z^(i) = E{z|x, λ^(i)} = α^(i) + (β/λ^(i)) Φ*(x − Φα^(i))    (16)

M-step:
∀(f, n), α^(i+1)_fn = v^(i)_fn / (v^(i)_fn + β) · z^(i)_fn    (17)
(W^(i+1), H^(i+1)) = arg min_{W,H≥0} Σ_fn D_IS(|α^(i+1)_fn|² | [WH]_fn)    (18)
λ^(i+1) = (1/T) ‖x − Φα^(i+1)‖_2^2    (19)

In Eq. (17), v^(i)_fn is a shorthand for [W^(i)H^(i)]_fn. Eq. (17) is simply the application of Wiener filtering to Eq. (12) with z = z^(i). Eq. (18) amounts to solving an NMF problem with the IS divergence; it may be solved using majorization-minimization, resulting in the standard multiplicative update rules given in [3].
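One iteration of Eqs. (16)-(18) can be sketched as below. This is our own illustration, not the authors' code: Φ is left abstract as a pair of synthesis/analysis callables, W and H are updated in place, and the inner IS-NMF step uses the standard multiplicative updates (which [3] derives but the text does not spell out):

```python
import numpy as np

def em_iteration(x, alpha, W, H, lam, beta, synth, analy, nmf_iters=10):
    """One MJLE EM iteration (Eqs. (16)-(18) of the paper, sketched).

    synth(alpha) plays the role of Phi @ alpha, analy(r) of Phi^H @ r,
    with alpha of shape (F, N). Requires beta <= lam / (largest
    eigenvalue of Phi Phi^H) for the augmentation to be valid.
    """
    # E-step, Eq. (16): gradient-like step toward the data residual.
    z = alpha + (beta / lam) * analy(x - synth(alpha))
    # M-step, Eq. (17): Wiener shrinkage of z by the low-rank variance.
    V = W @ H
    alpha = V / (V + beta) * z
    # M-step, Eq. (18): IS-NMF on |alpha|^2 via multiplicative updates.
    P = np.abs(alpha) ** 2
    for _ in range(nmf_iters):
        V = W @ H
        W *= ((P / V**2) @ H.T) / ((1.0 / V) @ H.T)
        V = W @ H
        H *= (W.T @ (P / V**2)) / (W.T @ (1.0 / V))
    return alpha, W, H
```

Eq. (19), the λ update, is omitted here since the authors themselves treat λ as a hyperparameter in their experiments (Section 5).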
This approach may only reach a local solution, but it still decreases the negative log-likelihood at every iteration. The update rule for λ is not the one that exactly derives from the EM procedure (that one has a more complicated expression), but it also decreases the negative log-likelihood at every iteration, as explained in [6]. Note that the overall algorithm is rather computationally friendly as no matrix inversion is required. The Φα and Φ*x operations in Eq. (16) correspond to synthesis and analysis operations that can be realized efficiently using optimized packages, such as the Large Time-Frequency Analysis Toolbox (LTFAT) [7].

3.2 Maximum marginal likelihood estimation (MMLE)

Objective. The second estimation method relies on the optimization of

C_ML(W, H, λ) := −log p(x|W, H, λ)    (20)
= −log ∫_α p(x|α, λ) p(α|WH) dα    (21)

It corresponds to the "type-II" maximum likelihood procedure employed in [4, 5]. By treating α as a nuisance parameter, the number of parameters involved in the data likelihood is significantly reduced, yielding more robust estimation with fewer local minima in the objective function [5].

EM algorithm. In order to minimize C_ML, we may use the EM architecture described in [4, 5], which quite naturally uses α as the hidden data. Denoting the set of parameters by θ_ML = {W, H, λ}, the EM algorithm relies on the iterative minimization of

Q(θ_ML|θ̃_ML) = −∫_α log p(x, α|W, H, λ) p(α|x, θ̃_ML) dα,    (22)

where θ̃_ML acts as the current parameter value. As the derivations closely follow [4, 5], we skip details for brevity. Using rather standard results about Gaussian distributions, the (i + 1)th iteration of the algorithm writes as follows.
E-step:
Σ^(i) = (Φ*Φ/λ^(i) + diag(v^(i−1))^(−1))^(−1)    (23)
α^(i) = Σ^(i) Φ*x / λ^(i)    (24)
v^(i) = E{|α|² | x, v^(i−1), λ^(i)} = diag(Σ^(i)) + |α^(i)|²    (25)

M-step:
(W^(i+1), H^(i+1)) = arg min_{W,H≥0} Σ_fn D_IS(v^(i)_fn | [WH]_fn)    (26)
λ^(i+1) = (1/T) (‖x − Φα^(i)‖_2^2 + λ^(i) Σ_{m=1}^M (1 − Σ^(i)_mm / v^(i)_m))    (27)

The complexity of this algorithm can be problematic as it involves the computation of the inverse of a matrix of size M in the expression of Σ^(i). M is typically at least twice as large as T, the signal length. Using the Woodbury matrix identity, the expression of Σ^(i) can be reduced to the inversion of a matrix of size T, but this is still too large for most signal processing applications (e.g., 3 min of music sampled at CD quality makes T in the order of 10^6). As such, we will discard MMLE in the experiments of Section 5, but the methodology presented in this section can be relevant to other problems with smaller dimensions.

4 Multi-resolution LRTFS

Besides the advantage of modeling the raw signal itself, and not its STFT, another major strength of LRTFS is that it offers the possibility of multi-resolution modeling. The latter consists of representing a signal as a sum of t-f atoms with different temporal (and thus frequency) resolutions. This is for example relevant in audio, where transients, such as the attacks of musical notes, are much shorter than sustained parts such as the tonal components (the steady, harmonic part of musical notes). Another example is speech, where different classes of phonemes can have different resolutions. At an even higher level, stationarity of female speech holds at shorter resolution than male speech. Because traditional spectral factorization approaches work on the transformed data, the time resolution is set once and for all at feature computation and cannot be adapted during decomposition. In contrast, LRTFS can accommodate multiple t-f bases in the following way.
Assume for simplicity that x is to be expanded on the union of two frames Φ_a and Φ_b, with common column size T and with t-f grids of sizes F_a × N_a and F_b × N_b, respectively. Φ_a may be for example a Gabor frame with short time resolution and Φ_b a Gabor frame with larger resolution; such a setting has been considered in many audio applications, e.g., [8, 9], together with sparse synthesis coefficient models. The multi-resolution LRTFS model becomes

x = Φ_a α_a + Φ_b α_b + e    (28)

with

∀(f, n) ∈ {1, . . . , F_a} × {1, . . . , N_a}, α_{a,fn} ∼ N_c(0, [W_a H_a]_fn),    (29)
∀(f, n) ∈ {1, . . . , F_b} × {1, . . . , N_b}, α_{b,fn} ∼ N_c(0, [W_b H_b]_fn),    (30)

and where {α_{a,fn}}_fn and {α_{b,fn}}_fn are the coefficients of α_a and α_b, respectively. By stacking the bases and synthesis coefficients into Φ = [Φ_a Φ_b] and α = [α_a^T α_b^T]^T and introducing a latent variable z = [z_a^T z_b^T]^T, the negative joint log-likelihood −log p(x, α|W_a, H_a, W_b, H_b, λ) in the multi-resolution LRTFS model can be optimized using the EM algorithm described in Section 3.1. The resulting algorithm at iteration (i + 1) writes as follows.

E-step: for ℓ = {a, b},
z_ℓ^(i) = α_ℓ^(i) + (β/λ) Φ_ℓ*(x − Φ_a α_a^(i) − Φ_b α_b^(i))    (31)

M-step: for ℓ = {a, b}, ∀(f, n) ∈ {1, . . . , F_ℓ} × {1, . . . , N_ℓ},
α_{ℓ,fn}^(i+1) = v_{ℓ,fn}^(i) / (v_{ℓ,fn}^(i) + β) · z_{ℓ,fn}^(i)    (32)
(W_ℓ^(i+1), H_ℓ^(i+1)) = arg min_{W_ℓ,H_ℓ≥0} Σ_fn D_IS(|α_{ℓ,fn}^(i+1)|² | [W_ℓ H_ℓ]_fn)    (33)
λ^(i+1) = ‖x − Φ_a α_a^(i+1) − Φ_b α_b^(i+1)‖_2^2 / T    (34)

The complexity of the algorithm remains fully compatible with signal processing applications. Of course, the proposed setting can be extended to more than two bases.

5 Experiments

We illustrate the effectiveness of our approach with two experiments. The first one, purely illustrative, decomposes a jazz excerpt into two layers (tonal and transient), plus a residual layer, according to the hybrid/morphological model presented in [8, 10]. The second one is a speech enhancement problem, based on a semi-supervised source separation approach in the spirit of [11].
Even though we provided update rules for λ for the sake of completeness, this parameter was not estimated in our experiments, but instead treated as a hyperparameter, as in [5, 6]. Indeed, the estimation of λ with all the other parameters free was found to perform poorly in practice, a phenomenon observed with SBL as well.

5.1 Hybrid decomposition of music

We consider a 6 s jazz excerpt sampled at 44.1 kHz corrupted with additive white Gaussian noise at 20 dB input signal-to-noise ratio (SNR). The hybrid model aims to decompose the signal as

x = x_tonal + x_transient + e = Φ_tonal α_tonal + Φ_transient α_transient + e,    (35)

using the multi-resolution LRTFS method described in Section 4. As already mentioned, a classical design consists of working with Gabor frames. We use a 2048-sample-long (~46 ms) Hann window for the tonal layer, and a 128-sample-long (~3 ms) Hann window for the transient layer, both with 50% time overlap. The number of latent components in the two layers is set to K = 3. We experimented with several values for the hyperparameter λ and selected the results leading to the best output SNR (about 26 dB). The estimated components are shown in Fig. 1. When listening to the signal components (available in the supplementary material), one can identify the hi-hat in the first and second components of the transient layer, and the bass and piano attacks in the third component. In the tonal layer, one can identify the bass and some piano in the first component, some piano in the second component, and some hi-hat "ring" in the third component.
Figure 1: Top: spectrogram of the original signal (left), estimated transient coefficients log |αtransient| (center), estimated tonal coefficients log |αtonal| (right). Middle: the 3 latent components (of rank 1) from the transient layer. Bottom: the 3 latent components (of rank 1) from the tonal layer.

5.2 Speech enhancement

The second experiment considers a semi-supervised speech enhancement example (treated as a single-channel source separation problem). The goal is to recover a speech signal corrupted by a texture sound, namely applause. The synthesis model considered is given by

x = Φtonal (α_tonal^speech + α_tonal^noise) + Φtransient (α_transient^speech + α_transient^noise) + e,  (36)

with

α_tonal^speech ∼ Nc(0, W_tonal^train H_tonal^speech), α_tonal^noise ∼ Nc(0, W_tonal^noise H_tonal^noise),  (37)
α_transient^speech ∼ Nc(0, W_transient^train H_transient^speech), α_transient^noise ∼ Nc(0, W_transient^noise H_transient^noise).  (38)

W_tonal^train and W_transient^train are fixed pre-trained dictionaries of dimension K = 500, obtained from 30 min of training speech containing male and female speakers. The training data, with sampling rate 16 kHz, is extracted from the TIMIT database [12]. The noise dictionaries W_tonal^noise and W_transient^noise are learnt from the noisy data, using K = 2. The two t-f bases are Gabor frames with Hann windows of length 512 samples (∼32 ms) for the tonal layer and 32 samples (∼2 ms) for the transient layer, both with 50% overlap.
The hyperparameter λ is gradually decreased to a negligible value during the iterations (resulting in a negligible residual e), a form of warm-restart strategy [13]. We considered 10 test signals composed of 10 different speech excerpts (from the TIMIT dataset as well, among excerpts not used for training) mixed in the middle of a 7 s-long applause sample. For every test signal, the estimated speech signal is computed as

x̂ = Φtonal α̂_tonal^speech + Φtransient α̂_transient^speech,  (39)

and an SNR improvement is computed as the difference between the output and input SNRs. With our approach, the average SNR improvement over the 10 test signals was 6.6 dB.

Figure 2: Time-frequency representations of the noisy data (top: long- and short-window STFT analyses) and of the estimated tonal and transient layers from the speech (bottom).

Fig. 2 displays the spectrograms of one noisy test signal with short and long windows, and the clean speech synthesis coefficients estimated in the two layers. As a baseline, we applied IS-NMF in a similar setting using one Gabor transform with a window of intermediate length (256 samples, ∼16 ms). The average SNR improvement was 6 dB in that case. We also applied the standard OMLSA speech enhancement method [14] (using the implementation available from the author with default parameters) and the average SNR improvement was 4.6 dB with this approach. Other experiments with other noise types (such as helicopter and train sounds) gave similar trends of results. Sound examples are provided in the supplementary material.
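The SNR improvement metric used above is straightforward to compute. A minimal sketch (the toy signals below are our own stand-ins, not the TIMIT data):

```python
import numpy as np

def snr_db(ref, est):
    """Signal-to-noise ratio of estimate `est` against reference `ref`, in dB."""
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - est)**2))

# Toy stand-ins for clean speech, noisy mixture and enhanced output.
rng = np.random.default_rng(1)
n = 1000
clean = np.sin(2 * np.pi * 0.01 * np.arange(n))
noisy = clean + 0.30 * rng.standard_normal(n)
enhanced = clean + 0.15 * rng.standard_normal(n)   # pretend separator output

# SNR improvement = output SNR minus input SNR, as in the experiments above.
improvement = snr_db(clean, enhanced) - snr_db(clean, noisy)
```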
6 Conclusion

We have presented a new model that bridges the gap between t-f synthesis and traditional NMF approaches. The proposed algorithm for maximum joint likelihood estimation of the synthesis coefficients and their low-rank variance can be viewed as an iterative shrinkage algorithm with an additional Itakura-Saito NMF penalty term. In [15], Elad explains in the context of sparse representations that soft thresholding of analysis coefficients corresponds to the first iteration of the forward-backward algorithm for LASSO/basis pursuit denoising. Similarly, Itakura-Saito NMF followed by Wiener filtering corresponds to the first iteration of the proposed EM algorithm for MJLE. As opposed to traditional NMF, LRTFS accommodates multi-resolution representations very naturally, with no extra difficulty at the estimation level. The model can be extended in a straightforward manner to various additional penalties on the matrices W or H (such as smoothness or sparsity). Future work will include the design of a scalable algorithm for MMLE, using for example message passing [16], and a comparison of MJLE and MMLE for LRTFS. Moreover, our generative model can be considered for more general inverse problems such as multichannel audio source separation [17]. More extensive experimental studies are planned in this direction.

Acknowledgments

The authors are grateful to the organizers of the Modern Methods of Time-Frequency Analysis Semester held at the Erwin Schrödinger Institute in Vienna in December 2012, for arranging a very stimulating event where the presented work was initiated.

References

[1] P. Smaragdis, C. Févotte, G. Mysore, N. Mohammadiha, and M. Hoffman. Static and dynamic source separation using nonnegative factorizations: A unified view. IEEE Signal Processing Magazine, 31(3):66–75, May 2014.
[2] D. D. Lee and H. S. Seung. Learning the parts of objects with nonnegative matrix factorization. Nature, 401:788–791, 1999.
[3] C. Févotte, N. Bertin, and J.-L.
Durrieu. Nonnegative matrix factorization with the Itakura-Saito divergence. With application to music analysis. Neural Computation, 21(3):793–830, Mar. 2009.
[4] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.
[5] D. P. Wipf and B. D. Rao. Sparse Bayesian learning for basis selection. IEEE Transactions on Signal Processing, 52(8):2153–2164, Aug. 2004.
[6] M. Figueiredo and R. Nowak. An EM algorithm for wavelet-based image restoration. IEEE Transactions on Image Processing, 12(8):906–916, Aug. 2003.
[7] Z. Průša, P. Søndergaard, P. Balazs, and N. Holighaus. LTFAT: A Matlab/Octave toolbox for sound processing. In Proc. 10th International Symposium on Computer Music Multidisciplinary Research (CMMR), pages 299–314, Marseille, France, Oct. 2013.
[8] L. Daudet and B. Torrésani. Hybrid representations for audiophonic signal encoding. Signal Processing, 82(11):1595–1617, 2002.
[9] M. Kowalski and B. Torrésani. Sparsity and persistence: mixed norms provide simple signal models with dependent coefficients. Signal, Image and Video Processing, 3(3):251–264, 2009.
[10] M. Elad, J.-L. Starck, D. L. Donoho, and P. Querre. Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). Journal on Applied and Computational Harmonic Analysis, 19:340–358, Nov. 2005.
[11] P. Smaragdis, B. Raj, and M. V. Shashanka. Supervised and semi-supervised separation of sounds from single-channel mixtures. In Proc. 7th International Conference on Independent Component Analysis and Signal Separation (ICA), London, UK, Sep. 2007.
[12] TIMIT: acoustic-phonetic continuous speech corpus. Linguistic Data Consortium, 1993.
[13] A. Hale, W. Yin, and Y. Zhang. Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimization, 19(3):1107–1130, 2008.
[14] I. Cohen.
Noise spectrum estimation in adverse environments: Improved minima controlled recursive averaging. IEEE Transactions on Speech and Audio Processing, 11(5):466–475, 2003.
[15] M. Elad. Why simple shrinkage is still relevant for redundant representations? IEEE Transactions on Information Theory, 52(12):5559–5569, 2006.
[16] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. Journal of Machine Learning Research, 9:759–813, 2008.
[17] A. Ozerov and C. Févotte. Multichannel nonnegative matrix factorization in convolutive mixtures for audio source separation. IEEE Transactions on Audio, Speech and Language Processing, 18(3):550–563, Mar. 2010.
Spike Frequency Adaptation Implements Anticipative Tracking in Continuous Attractor Neural Networks

Yuanyuan Mi
State Key Laboratory of Cognitive Neuroscience & Learning, Beijing Normal University, Beijing 100875, China
miyuanyuan0102@bnu.edu.cn

C. C. Alan Fung, K. Y. Michael Wong
Department of Physics, The Hong Kong University of Science and Technology, Hong Kong
phccfung@ust.hk, phkywong@ust.hk

Si Wu
State Key Laboratory of Cognitive Neuroscience & Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
wusi@bnu.edu.cn

Abstract

To extract motion information, the brain needs to compensate for time delays that are ubiquitous in neural signal transmission and processing. Here we propose a simple yet effective mechanism to implement anticipative tracking in neural systems. The proposed mechanism utilizes the property of spike-frequency adaptation (SFA), a feature widely observed in neuronal responses. We employ continuous attractor neural networks (CANNs) as the model to describe the tracking behaviors in neural systems. Incorporating SFA, a CANN exhibits intrinsic mobility, manifested by the ability of the CANN to support self-sustained travelling waves. In tracking a moving stimulus, the interplay between the external drive and the intrinsic mobility of the network determines the tracking performance. Interestingly, we find that the regime of anticipation effectively coincides with the regime where the intrinsic speed of the travelling wave exceeds that of the external drive. Depending on the SFA amplitude, the network can achieve either perfect tracking, with zero lag to the input, or perfect anticipative tracking, with a constant leading time to the input. Our model successfully reproduces experimentally observed anticipative tracking behaviors, and sheds light on our understanding of how the brain processes motion information in a timely manner.
1 Introduction

Over the past decades, our knowledge of how neural systems process static information has advanced considerably, as is well documented by the receptive field properties of neurons. The equally important issue of how neural systems process motion information remains much less understood. A main challenge in processing motion information is to compensate for the time delays that are pervasive in neural systems. For instance, a visual signal transmitting from the retina to the primary visual cortex takes about 50-80 ms [1], and the time constant of single neurons responding to synaptic input is of the order of 10-20 ms [2]. If these delays are not compensated properly, our perception of a fast moving object will lag significantly behind its true position in the external world, impairing our vision and motor control.

A straightforward way to compensate for time delays is to anticipate the future position of a moving object, covering the distance the object will travel through during the delay period. Experimental data suggests that our brain does employ such a strategy. For instance, it was found that in spatial navigation, the internal head-direction encoded by anterior dorsal thalamic nuclei (ADN) cells in a rodent was leading the instant position of the rodent’s head by ∼25 ms [3, 4, 5], i.e., it was the direction the rodent’s head would turn into ∼25 ms later. Anticipation also explains the well-known flash-lag phenomenon [6], that is, the perception that a moving object leads a flash, although they coincide with each other at the same physical location. This is due to the brain's anticipation of the future position of the continuously moving object, in contrast to the lack of anticipation for intermittent flashes. Although it is clear that the brain does anticipate the animal's head direction, it remains unclear how neural systems implement appropriate anticipations against the various forms of delays.
Depending on the available information, the brain may employ different strategies to implement anticipation. In the case of self-generated motion, the brain may use an efference copy of the motor command responsible for the motion to predict the motion's consequence in advance [7]; and in the case when there is an external visual cue, such as the speed of a moving object, the neural system may dynamically select a transmission route which sends the object information directly to the future cortical location during the delay [8]. These two strategies work well in their own feasible conditions, but they may not compensate for all kinds of neural delays, especially when the internal motor command and visual cues are not available. Notably, it was found that when a rodent was moving passively, i.e., a situation where no internal motor command is available, the head-direction encoded by ADN cells was still leading the actual position of the rodent’s head by ∼50 ms, even longer than in the free-moving condition [5]. Thus, extra anticipation strategies may exist in neural systems.

Here, we propose a novel mechanism to generate anticipative responses when a neural system is tracking a moving stimulus. This strategy does not depend on motor command information or external visual cues, but rather relies on an intrinsic property of neurons, namely spike-frequency adaptation (SFA). SFA is a dynamical feature commonly observed in the activities of neurons after prolonged firing. It may be generated by a number of mechanisms [10]. In one mechanism, neural firing elevates the intracellular calcium level of a neuron, which activates a hyperpolarizing potassium current and subsequently lowers the neuronal membrane potential [11]. In other words, a strong neuronal response induces a negative feedback that counterbalances itself. In the present study, we use continuous attractor neural networks (CANNs) to model the tracking behaviors in neural systems.
It was known that SFA can give rise to travelling waves in CANNs [12], analogous to the effect of asymmetric neuronal interactions; here we will show that its interplay with external moving stimuli determines the tracking performance of the network. Interestingly, we find that when the intrinsic speed of the network is greater than that of the external drive, anticipative tracking occurs for sufficiently weak stimuli, and different SFA amplitudes result in different anticipative times.

2 The Model

2.1 Continuous attractor neural networks

We employ CANNs as the model to investigate the tracking behaviors in neural systems. CANNs have been successfully applied to describe the encoding of continuous stimuli in neural systems, including orientation [13], head-direction [14], moving direction [15] and self location [16]. Recent experimental data strongly indicated that CANNs capture some fundamental features of neural information representation [17]. Consider a one-dimensional continuous stimulus x encoded by an ensemble of neurons (Fig. 1). The value of x is in the range (−π, π], with the periodic condition imposed. Denote by U(x, t) the synaptic input at time t to the neurons whose preferred stimulus is x, and by r(x, t) the corresponding firing rate. The dynamics of U(x, t) is determined by the recurrent input from other neurons, its own relaxation and the external input Iext(x, t), and is written as

τ dU(x, t)/dt = −U(x, t) + ρ ∫ J(x, x′) r(x′, t) dx′ + Iext(x, t),  (1)

Figure 1: A CANN encodes a continuous stimulus, e.g., head-direction. Neurons are aligned in the network according to their preferred stimuli. The neuronal interaction J(x, x′) is translation-invariant in the space of stimulus values. The network is able to track a moving stimulus, but the response bump U(x, t) is always lagging behind the external input Iext(x, t) due to the neural response delay.
where τ is the synaptic time constant, typically of the order 2–5 ms, ρ is the neural density, and

J(x, x′) = [J0/(√(2π) a)] exp[−(x − x′)²/(2a²)]

is the neural interaction from x′ to x, where the Gaussian width a controls the neuronal interaction range. We will consider a ≪ π. Under this condition, the neuronal responses are localized and we can effectively treat −∞ < x < ∞ in the following analysis. The nonlinear relationship between r(x, t) and U(x, t) is given by

r(x, t) = U(x, t)² / [1 + kρ ∫ U(x′, t)² dx′],  (2)

where the divisive normalization could be realized by shunting inhibition. r(x, t) first increases with U(x, t) and then saturates gradually when the total network activity is sufficiently large. The parameter k controls the strength of the divisive normalization. This choice of global normalization simplifies our analysis and should not alter our main conclusion if localized inhibition is considered instead. It can be checked that when Iext = 0, the network supports a continuous family of Gaussian-shaped stationary states, called bumps:

U(x) = U0 exp[−(x − z)²/(4a²)], r(x) = r0 exp[−(x − z)²/(2a²)], ∀z,  (3)

where the peak position of the bump z is a free parameter, r0 = √2 U0/(ρJ0) and U0 = [1 + √(1 − 8√(2π) a k/(ρJ0²))]/(2√(2π) a k ρ). The bumps are stable for 0 < k < kc, with kc = ρJ0²/(8√(2π) a). The bump states of a CANN form a sub-manifold in the state space of the network, on which the network is neutrally stable. This property enables a CANN to track a moving stimulus smoothly, provided that the stimulus speed is not too large [18]. However, during tracking, the network bump always lags behind the instant position of the moving stimulus due to the delay in neuronal responses (Fig. 1).

2.2 CANNs with the asymmetrical neuronal interaction

It is instructive to look at the dynamical properties of a CANN when the asymmetrical neuronal interaction is included.
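Before adding asymmetric couplings, the symmetric model of Eqs. (1)-(3) can be checked numerically. The following is a minimal Euler sketch (the grid size, time step and seed bump are our own assumptions, not the paper's simulation settings); after relaxation, the state is a bump whose peak rate satisfies the amplitude relation r0 = √2 U0/(ρJ0) quoted below Eq. (3):

```python
import numpy as np

# Minimal Euler simulation of the CANN of Eqs. (1)-(3) with I_ext = 0.
N = 200
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
rho = N / (2 * np.pi)                    # neural density
J0, k, a, tau = 1.0, 0.1, 0.5, 1.0       # k < k_c = rho*J0^2/(8*sqrt(2*pi)*a)

D = (x[:, None] - x[None, :] + np.pi) % (2 * np.pi) - np.pi   # periodic distance
J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-D**2 / (2 * a**2))

U = np.exp(-x**2 / (4 * a**2))           # seed a bump at z = 0
dt = 0.05
for _ in range(4000):
    r = U**2 / (1 + k * rho * np.sum(U**2) * dx)      # Eq. (2)
    U += dt / tau * (-U + rho * dx * (J @ r))         # Eq. (1) with I_ext = 0
```

The peak stays where it was seeded, illustrating the neutral stability of the bump family along z.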
In an influential study [14], Zhang proposed the idea of adding asymmetrical interactions between neurons in a CANN, such that the network can support travelling waves, i.e., spontaneously moving bumps. The modified model describes well the experimental finding that in tracking the rotation of a rodent, the internal representation of head-direction constructed by ADN cells also rotates, and the bump of neural population activity remains largely invariant in the rotating frame. With the asymmetrical neuronal interaction included, the CANN model presented above also supports a travelling wave state. The new neuronal recurrent interaction is written as

˜J(x, x′) = [J0/(√(2π) a)] exp[−(x − x′)²/(2a²)] + [γτ J0/(√(2π) a³)] (x − x′) exp[−(x − x′)²/(2a²)],  (4)

where γ is a constant controlling the strength of the asymmetrical interaction. It is straightforward to check that the network supports the following travelling wave solution,

U(x, t) = U0 exp{−[x − (z + vt)]²/(4a²)}, r(x, t) = r0 exp{−[x − (z + vt)]²/(2a²)},

where v is the speed of the travelling wave, and v = γ, i.e., the asymmetrical interaction strength determines the speed of the travelling wave (see Supplementary Information).

2.3 CANNs with SFA

The aim of the present study is to explore the effect of SFA on the tracking behaviors of a CANN. Incorporating SFA, the dynamics of a CANN is written as

τ dU(x, t)/dt = −U(x, t) + ρ ∫ J(x, x′) r(x′, t) dx′ − V(x, t) + Iext(x, t),  (5)

where the synaptic current V(x, t) represents the effect of SFA, whose dynamics is given by [12]

τv dV(x, t)/dt = −V(x, t) + m U(x, t),  (6)

where τv is the time constant of SFA, typically of the order 40–120 ms, and the parameter m controls the SFA amplitude. Eq. (6) gives rise to V(x, t) = (m/τv) ∫_{−∞}^{t} exp[−(t − t′)/τv] U(x, t′) dt′, that is, V(x, t) is an integration of the neuronal synaptic input (and hence the neuronal activity) over an effective period of τv.
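Extending the same kind of Euler simulation to Eqs. (5)-(6) exhibits the self-sustained travelling wave directly. This is again only a sketch: the grid, time step and the slightly offset initial V (which picks the wave direction) are our assumptions; with m = 2.5 τ/τv, above the wave onset, the bump drifts at a roughly constant speed:

```python
import numpy as np

# Euler simulation of the CANN with SFA, Eqs. (5)-(6), with I_ext = 0.
N = 128
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
rho = N / (2 * np.pi)
J0, k, a, tau, tau_v = 1.0, 0.1, 0.5, 1.0, 60.0
m = 2.5 * tau / tau_v                        # above the wave onset m = tau/tau_v

D = (x[:, None] - x[None, :] + np.pi) % (2 * np.pi) - np.pi
J = J0 / (np.sqrt(2 * np.pi) * a) * np.exp(-D**2 / (2 * a**2))

U = np.exp(-x**2 / (4 * a**2))
V = m * np.exp(-(x + 0.3)**2 / (4 * a**2))   # V seeded slightly behind U -> rightward wave
dt, steps = 0.05, 24000
peaks = np.empty(steps)
for t in range(steps):
    r = U**2 / (1 + k * rho * np.sum(U**2) * dx)
    dU = (-U + rho * dx * (J @ r) - V) / tau
    dV = (-V + m * U) / tau_v
    U, V = U + dt * dU, V + dt * dV
    peaks[t] = x[np.argmax(U)]

track = np.unwrap(peaks, period=2 * np.pi)   # continuous peak trajectory
```

The drift speed, estimated from the second half of `track`, can be compared with the intrinsic speed given later in Eq. (10) (≈ 0.016/τ for these parameters).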
The negative current −V(x, t) is subsequently fed back to the neuron to suppress its response (Fig. 2A). The higher the neuronal activity level, the larger the negative feedback. The time constant τv ≫ τ indicates that SFA is slow compared to neural firing.

Figure 2: A. The inputs to a single neuron in a CANN with SFA, which include a recurrent input Σj Jij rj from other neurons, an external input Iext(x, t) containing the stimulus information, and a negative feedback current −V(x, t) representing the SFA effect. The feedback of SFA is effectively delayed by time τv. B. The intrinsic speed of the network vint (in units of 1/τ) increases with the SFA amplitude m. The network starts to have a travelling wave state at m = τ/τv. The parameters are: τ = 1, τv = 60, a = 0.5. Obtained by Eq. (13).

3 Travelling Wave in a CANN with SFA

We find that SFA has the same effect as the asymmetrical neuronal interaction in sustaining travelling waves in a CANN. The underlying mechanism is intuitively understandable. Suppose that a bump emerges at an arbitrary position in the network. Due to SFA, the neurons which are most active receive the strongest negative feedback, and their activities are suppressed accordingly. Under the competition (mediated by recurrent connections and divisive normalization) from the neighboring neurons, which are less affected by SFA, the bump tends to shift to the neighborhood; and at the new location, SFA starts to destabilize the neuronal responses again. Consequently, the bump keeps moving through the network like a travelling wave.

The condition for the network to support a travelling wave state can be analyzed theoretically. In simulations, we observe that in a travelling wave state, the profiles of U(x, t) and V(x, t) have approximately a Gaussian shape if m is small enough. We therefore consider the following Gaussian
We therefore consider the following Gaussian 4 ansatz for the travelling wave state, U(x, t) = Au exp { −[x −z(t)]2 4a2 } , (7) r(x, t) = Ar exp { −[x −z(t)]2 2a2 } , (8) V (x, t) = Av exp { −[x −(z(t) −d)]2 4a2 } , (9) where dz(t)/dt is the speed of the travelling wave and d is the separation between U(x, t) and V (x, t). Without loss of generality, we assume that the bump moves from left to right, i.e., dz(t)/dt > 0. Since V (x, t) lags behind U(x, t) due to slow SFA, d > 0 normally holds. To solve the network dynamics, we utilize an important property of CANNs, that is, the dynamics of a CANN are dominated by a few motion modes corresponding to different distortions in the shape of a bump [18]. We can project the network dynamics onto these dominating modes and simplify the network dynamics significantly. The first two dominating motion modes used in the present study correspond to the distortions in the height and position of the Gaussian bump, which are given by ϕ0(x|z) = exp [ −(x −z)2/(4a2) ] and ϕ1(x|z) = (x −z) exp [ −(x −z)2/(4a2) ] . By projecting a function f(x) onto a mode ϕn(x), we mean computing ∫ x f(x)ϕn(x)dx/ ∫ x ϕn(x)2dx. Applying the projection method, we solve the network dynamics and obtain the travelling wave state. The speed of the travelling wave and the bumps’ separation are calculated to be (see Supplementary Information) d = 2a √ 1 − √τ mτv , vint ≡dz(t) dt = 2a τv √ mτv τ − √mτv τ . (10) The speed of the travelling wave reflects the intrinsic mobility of the network, and its value is fully determined by the network parameters (see Eq. (10)). Hereafter, we call it the intrinsic speed of the network, referred to as vint. vint increases with the SFA amplitude m (Fig. 2B). The larger the value of vint, the higher the mobility of the network. From the above equations, we see that the condition for the network to support a travelling wave state is m > τ/τv. We note that SFA effects can reduce the firing rate of neurons significantly [11]. 
Since the ratio τ/τv is small, this condition can realistically be fulfilled.

3.1 Analogy to the asymmetrical neuronal interaction

Both SFA and the asymmetrical neuronal interaction have the same capacity to generate a travelling wave in a CANN. We compare their dynamics to unveil the underlying cause. Consider that the network state is given by Eq. (8). The contribution of the asymmetrical neuronal interaction can be written as (substituting the asymmetrical component of Eq. (4) into the second term on the right-hand side of Eq. (1))

[J0 ρ γτ r0/(√(2π) a³)] ∫ (x − x′) exp[−(x − x′)²/(2a²)] exp[−(x′ − z)²/(2a²)] dx′ = [ρJ0 r0 γτ (x − z)/(2√2 a²)] exp[−(x − z)²/(4a²)].  (11)

In a CANN with SFA, when the separation d is sufficiently small, the synaptic current induced by SFA can be approximately expressed as (a first-order Taylor expansion of Eq. (9))

−V(x, t) ≈ −Av exp[−(x − z)²/(4a²)] + dAv [(x − z)/(2a²)] exp[−(x − z)²/(4a²)],  (12)

which consists of two terms: the first has the same form as U(x, t), and the second has the same form as the contribution of the asymmetrical interaction (compare with Eq. (11)). Thus, SFA has a similar effect on the network dynamics as the asymmetrical neuronal interaction.

The notion of the asymmetrical neuronal interaction is appealing for sustaining a travelling wave in a CANN, but its biological basis has not been properly justified. Here, we show that SFA may provide an effective way to realize the effect of the asymmetrical neuronal interaction without recruiting hard-wired asymmetrical synapses between neurons. Furthermore, SFA can implement travelling waves in either direction, whereas hard-wired asymmetrical neuronal connections can only support a travelling wave in one direction, along the orientation of the asymmetry. Consequently, a CANN with asymmetric coupling can only anticipatively track moving objects in one direction.
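The first-order expansion in Eq. (12) can be checked numerically. The sketch below (parameter values are illustrative) compares the shifted Gaussian −V against the two-term approximation; the residual error is second order in d:

```python
import numpy as np

# Numerical check of Eq. (12): a Gaussian shifted by a small d splits into a
# symmetric part plus an asymmetric (x - z) part, to first order in d.
a, z, d, Av = 0.5, 0.0, 0.05, 1.0
x = np.linspace(-3, 3, 4001)
u = x - z

exact = -Av * np.exp(-(u + d)**2 / (4 * a**2))        # -V(x), peaked at z - d
approx = (-Av * np.exp(-u**2 / (4 * a**2))
          + d * Av * u / (2 * a**2) * np.exp(-u**2 / (4 * a**2)))
err = np.max(np.abs(exact - approx))
```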
4 Tracking Behaviors of a CANN with SFA

SFA induces intrinsic mobility in the bump states of a CANN, manifested by the ability of the network to support self-sustained travelling waves. When the network receives an external input from a moving stimulus, the tracking behavior of the network is determined by two competing factors: the intrinsic speed of the network (vint) and the speed of the external drive (vext). Interestingly, we find that when vint > vext, the network bump leads the instant position of the moving stimulus for sufficiently weak stimuli, achieving anticipative tracking.

Without loss of generality, we set the external input to be Iext(x, t) = α exp{−[x − z0(t)]²/(4a²)}, where α represents the input strength, z0(t) is the stimulus position at time t, and the speed of the moving stimulus is vext = dz0(t)/dt. Define s = z(t) − z0(t) as the displacement of the network bump relative to the external drive. We consider the case where the network is able to track the moving stimulus, i.e., the network dynamics reaches a stationary state with dz(t)/dt = dz0(t)/dt and s a constant. Since the stimulus moves from left to right, s > 0 means that the network tracking is leading the moving input, whereas s < 0 means that the network tracking is lagging behind. Using the Gaussian ansatz for the network state given by Eqs. (7-9) and applying the projection method, we solve the network dynamics and obtain (see Supplementary Information)

d = 2a [−a + √(a² + (vext τv)²)]/(vext τv),  (13)

s exp(−s²/(8a²)) = [1/(αAu)] (τ/vext) [m d²/(ττv) − v²ext].  (14)

Combining Eqs. (10, 13, 14), it can be checked that when vext = vint, v²ext = md²/(ττv), which gives s = 0; and when vext < vint, v²ext < md²/(ττv), which gives s > 0, i.e., the bump is leading the external drive (for details, see Supplementary Information). Fig. 3A presents the simulation result.
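Eqs. (13)-(14) can be solved for s by simple bisection, since the left-hand side s·exp(−s²/(8a²)) is monotone increasing on [−2a, 2a]. The sketch below uses the parameter values of Fig. 3 with an assumed bump height Au (in the full theory Au is determined self-consistently; the value here is a stand-in). It reproduces the sign change of s at vext = vint:

```python
import numpy as np

tau, tau_v, a, alpha, A_u = 1.0, 60.0, 0.5, 0.5, 5.0   # A_u is an assumed stand-in
m = 2.5 * tau / tau_v

def displacement(v_ext):
    """Solve Eq. (14) for the bump-stimulus displacement s at drive speed v_ext."""
    d = 2 * a * (-a + np.sqrt(a**2 + (v_ext * tau_v)**2)) / (v_ext * tau_v)  # Eq. (13)
    C = (m * d**2 / (tau * tau_v) - v_ext**2) * tau / (alpha * A_u * v_ext)
    lo, hi = -2 * a, 2 * a        # s*exp(-s^2/(8a^2)) is increasing on this interval
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if mid * np.exp(-mid**2 / (8 * a**2)) < C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

R = m * tau_v / tau
v_int = 2 * a / tau_v * np.sqrt(R - np.sqrt(R))        # Eq. (10)
```

For drive speeds below v_int the solver returns s > 0 (anticipation); above v_int it returns s < 0 (lag); at v_int it returns s = 0.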
There is a minor discrepancy between the theoretical prediction and the simulation result: the separation s = 0 occurs when the stimulus speed vext is slightly smaller than the intrinsic speed of the network vint. This discrepancy arises from the distortion of the bump shape away from a Gaussian when the input strength is strong, the stimulus speed is high and m is large, in which case the Gaussian ansatz for the network state is not accurate. Nevertheless, for sufficiently weak stimuli, the theoretical prediction is correct.

4.1 Perfect tracking and perfect anticipative tracking

As observed in experiments, neural systems can compensate for time delays in two different ways: 1) perfect tracking, in which the network bump has zero lag with respect to the external drive, i.e., s = 0; and 2) perfect anticipative tracking, in which the network bump leads the external drive by an approximately constant time tant = s/vext. In both cases, the tracking performance of the neural system is largely independent of the stimulus speed. We check whether a CANN with SFA exhibits these appealing properties.

Define a scaled speed variable v̄ext ≡ τv vext/a. In a normal situation, v̄ext ≪ 1. For instance, taking the biologically plausible parameters τv = 100 ms and a = 50°, v̄ext = 0.1 corresponds to vext = 500°/s, which is a rather high speed for a rodent rotating its head in ordinary life. In terms of the scaled speed variable, Eq. (14) becomes

s exp(−s²/(8a²)) = [1/(αAu a)] [4m(−1 + √(1 + v̄²ext))²/v̄³ext − (τ/τv) v̄ext].  (15)

Figure 3: A. The separation s vs. the speed of the external input vext. Anticipative tracking (s > 0) occurs when vext < vint. The simulation was done with a network of N = 1000 neurons. The parameters are: J0 = 1, k = 0.1, a = 0.5, τ = 1, τv = 60, α = 0.5 and m = 2.5τ/τv. B.
An example of anticipative tracking in the reference frame of the external drive. C. An example of delayed tracking. In both cases, the profile of V(x, t) lags behind the bump U(x, t) due to the slow SFA.

In the limit v̄ext ≪ 1, and considering s/(2√2 a) ≪ 1 (which holds in practice), we get s ≈ Au τv vext (m − τ/τv)/α. Thus, we have the following two observations:

• Perfect tracking. When m ≈ τ/τv, s ≈ 0 holds, and perfect tracking is effectively achieved. Notably, in the absence of a stimulus, m = τ/τv is the condition for the network to start to have a travelling wave state.

• Perfect anticipative tracking. When m > τ/τv, s increases linearly with vext, and the anticipative time tant is approximately a constant.

These two properties hold for a wide range of stimulus speeds, as long as the approximation v̄ext ≪ 1 is applicable. We carried out simulations to confirm the theoretical analysis, and the results are presented in Fig. 4. We see that: (1) when SFA is weak, i.e., m < τ/τv, the network tracking lags behind the external drive, i.e., s < 0 (Fig. 4A); (2) when the amplitude of SFA increases to the critical value m = τ/τv, s becomes effectively zero for a wide range of stimulus speeds, and perfect tracking is achieved (Fig. 4B); (3) when SFA is large enough, satisfying m > τ/τv, s increases linearly with vext for a wide range of stimulus speeds, achieving perfect anticipative tracking (Fig. 4C); and (4) with increasing SFA amplitude, the anticipative time of the network also increases (Fig. 4D). Notably, by choosing the parameters properly, our model can replicate the experimental finding of a constant leading time of around 25 ms when a rodent tracks head-direction with its ADN cells (the red points in Fig. 4D for τ = 5 ms) [19].

5 Conclusions

In the present study, we have proposed a simple yet effective mechanism to implement anticipative tracking in neural systems.
The proposed strategy utilizes the property of SFA, a general feature of neuronal responses, whose contribution is to destabilize spatially localized attractor states in a network. Analogous to asymmetrical neuronal interactions, SFA induces self-sustained travelling waves in a CANN. Compared with the former, SFA has the advantage of not requiring hard-wired asymmetrical synapses between neurons. We systematically explored how the intrinsic mobility of a CANN induced by SFA affects the network's tracking performance, and found that: (1) when the intrinsic speed of the network (i.e., the speed of the travelling wave the network can support) is larger than that of the external drive, anticipative tracking occurs; (2) an increase in the SFA amplitude enhances the capability of a CANN to achieve anticipative tracking with a longer anticipative time; and (3) with proper SFA amplitudes, the network can achieve either perfect tracking or perfect anticipative tracking over a wide range of stimulus speeds.

Figure 4: Tracking performances of a CANN with SFA. A. An example of delayed tracking for m < τ/τv. B. An example of perfect tracking for m = τ/τv; s = 0 roughly holds for a wide range of stimulus speeds. C. An example of perfect anticipative tracking for m > τ/τv; s increases linearly with vext for a wide range of stimulus speeds. D. Anticipative time increases with the SFA amplitude m. The other parameters are the same as those in Fig. 3.

The key point for SFA achieving anticipative tracking in a CANN is that it provides a negative feedback modulation to destabilize strong localized neuronal responses. Thus, other negative feedback
modulation processes, such as short-term synaptic depression (STD) [20, 21] and negative feedback connections (NFC) from other networks [22], should also be able to realize anticipative tracking. Indeed, previous studies found that a CANN with STD or NFC can produce leading behaviors in response to moving inputs. The three mechanisms, however, have different time scales and operate at different levels: SFA has a time scale of about one hundred milliseconds and functions at the single-neuron level; STD has a time scale of hundreds to thousands of milliseconds and functions at the synapse level; and NFC has a time scale of tens of milliseconds and functions at the network level. The brain may employ them for different computational tasks in conjunction with different brain functions. It was known previously that a CANN with SFA can sustain a travelling wave [12]. But, to our knowledge, our study is the first to link this intrinsic mobility of the network to the tracking performance of the neural system. We demonstrate that, by regulating the SFA amplitude, a neural system can implement anticipative tracking with a range of anticipative times. This provides a flexible mechanism to compensate for a range of delay times, serving different computational purposes; e.g., by adjusting the SFA amplitudes, neural circuits along the hierarchy of a signal-transmission pathway can produce increasing anticipative times, which compensate for the accumulated time delays. Our study sheds light on our understanding of how the brain processes motion information in a timely manner.
Acknowledgments
This work is supported by grants from the National Key Basic Research Program of China (No. 2014CB846101, S.W.), the National Natural Science Foundation of China (No. 11305112, Y.Y.M.; No. 31261160495, S.W.), the Fundamental Research Funds for the Central Universities (No. 31221003, S.W.), SRFDP (No. 20130003110022, S.W.), and the Research Grants Council of Hong Kong (Nos.
605813, 604512 and N HKUST606/12, C.C.A.F. and K.Y.W.), and the Natural Science Foundation of Jiangsu Province (BK20130282).
References
[1] L. G. Nowak, M. H. J. Munk, P. Girard & J. Bullier. Visual Latencies in Areas V1 and V2 of the Macaque Monkey. Vis. Neurosci., 12, 371 – 384 (1995).
[2] C. Koch, M. Rapp & I. Segev. A Brief History of Time (Constants). Cereb. Cortex, 6, 93 – 101 (1996).
[3] H. T. Blair & P. E. Sharp. Anticipatory Head Direction Signals in Anterior Thalamus: Evidence for a Thalamocortical Circuit that Integrates Angular Head Motion to Compute Head Direction. J. Neurosci., 15(9), 6260 – 6270 (1995).
[4] J. S. Taube & R. U. Muller. Comparisons of Head Direction Cell Activity in the Postsubiculum and Anterior Thalamus of Freely Moving Rats. Hippocampus, 8, 87 – 108 (1998).
[5] J. P. Bassett, M. B. Zugaro, G. M. Muir, E. J. Golob, R. U. Muller & J. S. Taube. Passive Movements of the Head Do Not Abolish Anticipatory Firing Properties of Head Direction Cells. J. Neurophysiol., 93, 1304 – 1316 (2005).
[6] R. Nijhawan. Motion Extrapolation in Catching. Nature, 370, 256 – 257 (1994).
[7] J. R. Duhamel, C. L. Colby & M. E. Goldberg. The Updating of the Representation of Visual Space in Parietal Cortex by Intended Eye Movements. Science, 255, 90 – 92 (1992).
[8] R. Nijhawan & S. Wu. Compensating Time Delays with Neural Predictions: Are Predictions Sensory or Motor? Phil. Trans. R. Soc. A, 367, 1063 – 1078 (2009).
[9] P. E. Sharp, A. Tinkelman & J. Cho. Angular Velocity and Head Direction Signals Recorded from the Dorsal Tegmental Nucleus of Gudden in the Rat: Implications for Path Integration in the Head Direction Cell Circuit. Behav. Neurosci., 115, 571 – 588 (2001).
[10] B. Gutkin & F. Zeldenrust. Spike Frequency Adaptation. Scholarpedia, 9, 30643 (2014).
[11] J. Benda & A. V. M. Herz. A Universal Model for Spike-Frequency Adaptation. Neural Comput., 15, 2523 – 2564 (2003).
[12] P. C. Bressloff. Spatiotemporal Dynamics of Continuum Neural Fields. J. Phys.
A, 45, 033001 (2012).
[13] R. Ben-Yishai, R. L. Bar-Or & H. Sompolinsky. Theory of Orientation Tuning in Visual Cortex. Proc. Natl. Acad. Sci. U.S.A., 92, 3844 – 3848 (1995).
[14] K. Zhang. Representation of Spatial Orientation by the Intrinsic Dynamics of the Head-Direction Cell Ensemble: a Theory. J. Neurosci., 16, 2112 – 2126 (1996).
[15] A. P. Georgopoulos, M. Taira & A. Lukashin. Cognitive Neurophysiology of the Motor Cortex. Science, 260, 47 – 52 (1993).
[16] A. Samsonovich & B. L. McNaughton. Path Integration and Cognitive Mapping in a Continuous Attractor Neural Network Model. J. Neurosci., 17, 5900 – 5920 (1997).
[17] K. Wimmer, D. Q. Nykamp, C. Constantinidis & A. Compte. Bump Attractor Dynamics in Prefrontal Cortex Explains Behavioral Precision in Spatial Working Memory. Nature, 17(3), 431 – 439 (2014).
[18] C. C. A. Fung, K. Y. M. Wong & S. Wu. A Moving Bump in a Continuous Manifold: a Comprehensive Study of the Tracking Dynamics of Continuous Attractor Neural Networks. Neural Comput., 22, 752 – 792 (2010).
[19] J. P. Goodridge & D. S. Touretzky. Modeling Attractor Deformation in the Rodent Head Direction System. J. Neurophysiol., 83, 3402 – 3410 (2000).
[20] C. C. A. Fung, K. Y. M. Wong, H. Wang & S. Wu. Dynamical Synapses Enhance Neural Information Processing: Gracefulness, Accuracy, and Mobility. Neural Comput., 24, 1147 – 1185 (2012).
[21] C. C. A. Fung, K. Y. M. Wong & S. Wu. Delay Compensation with Dynamical Synapses. Adv. in NIPS 25, P. Bartlett, F. C. N. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger (eds), 1097 – 1105 (2012).
[22] W. Zhang & S. Wu. Neural Information Processing with Feedback Modulations. Neural Comput., 24(7), 1695 – 1721 (2012).
Learning to Optimize via Information-Directed Sampling
Daniel Russo
Stanford University
Stanford, CA 94305
djrusso@stanford.edu
Benjamin Van Roy
Stanford University
Stanford, CA 94305
bvr@stanford.edu
Abstract
We propose information-directed sampling – a new algorithm for online optimization problems in which a decision-maker must balance between exploration and exploitation while learning from partial feedback. Each action is sampled in a manner that minimizes the ratio between the square of expected single-period regret and a measure of information gain: the mutual information between the optimal action and the next observation. We establish an expected regret bound for information-directed sampling that applies across a very general class of models and scales with the entropy of the optimal action distribution. For the widely studied Bernoulli and linear bandit models, we demonstrate simulation performance surpassing popular approaches, including upper confidence bound algorithms, Thompson sampling, and knowledge gradient. Further, we present simple analytic examples illustrating that information-directed sampling can dramatically outperform upper confidence bound algorithms and Thompson sampling due to the way it measures information gain.
1 Introduction
There has been significant recent interest in extending multi-armed bandit techniques to address problems with more complex information structures, in which sampling one action can inform the decision-maker's assessment of other actions. Effective algorithms must take advantage of the information structure to learn more efficiently. Recent work has extended popular algorithms for the classical multi-armed bandit problem, such as upper confidence bound (UCB) algorithms and Thompson sampling, to address such contexts. For some cases, such as classical and linear bandit problems, strong performance guarantees have been established for UCB algorithms (e.g. [4, 8, 9, 13, 21, 23, 29]) and Thompson sampling (e.g.
[1, 15, 19, 24]). However, as we will demonstrate through simple analytic examples, these algorithms can perform very poorly when faced with more complex information structures. The shortcoming lies in the fact that these algorithms do not adequately assess the information gain from selecting an action. In this paper, we propose a new algorithm – information-directed sampling (IDS) – that preserves numerous guarantees of Thompson sampling for problems with simple information structures while offering strong performance in the face of more complex problems that daunt alternatives like Thompson sampling or UCB algorithms. IDS quantifies the amount learned by selecting an action through an information-theoretic measure: the mutual information between the true optimal action and the next observation. Each action is sampled in a manner that minimizes the ratio between squared expected single-period regret and this measure of information gain. As we will show through simple analytic examples, the way in which IDS assesses information gain allows it to dramatically outperform UCB algorithms and Thompson sampling. Further, we establish an expected regret bound for IDS that applies across a very general class of models and scales with the entropy of the optimal action distribution. We then specialize this bound to several widely studied problem classes. Finally, we benchmark the performance of IDS through simulations of the widely studied Bernoulli and linear bandit problems, for which UCB algorithms and Thompson sampling are known to be very effective. We find that even in these settings, IDS outperforms UCB algorithms, Thompson sampling, and knowledge gradient. IDS solves a single-period optimization problem as a proxy to an intractable multi-period problem. Solution of this single-period problem can itself be computationally demanding, especially in cases where the number of actions is enormous or mutual information is difficult to evaluate.
To carry out computational experiments, we develop numerical methods for particular classes of online optimization problems. More broadly, we feel this work provides a compelling proof of concept, and we hope that our development and analysis of IDS facilitate the future design of efficient algorithms that capture its benefits.
Related literature. Two other papers [17, 30] have used the mutual information between the optimal action and the next observation to guide action selection. Both focus on the optimization of expensive-to-evaluate, black-box functions. Each proposes sampling points so as to maximize the mutual information between the algorithm's next observation and the true optimizer. Several features distinguish our work. First, these papers focus on pure exploration problems: the objective is simply to learn about the optimum, not to attain high cumulative reward. Second, and more importantly, they focus only on problems with Gaussian process priors and continuous action spaces. For such problems, simpler approaches like UCB algorithms, Probability of Improvement, and Expected Improvement are already extremely effective (see [6]). By contrast, a major motivation of our work is that a richer information measure is needed in order to address problems with more complicated information structures. Finally, we provide a variety of general theoretical guarantees for IDS, whereas Villemonteix et al. [30] and Hennig and Schuler [17] propose their algorithms only as heuristics. The full-length version of this paper [26] shows that our theoretical guarantees extend to pure exploration problems.
The knowledge gradient (KG) algorithm uses a different measure of information to guide action selection: the algorithm computes the impact of a single observation on the quality of the decision made by a greedy algorithm, which simply selects the action with the highest posterior expected reward. This measure has been thoroughly studied (see e.g. [22, 27]).
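To make this one-step measure concrete, here is a sketch of a knowledge-gradient computation for an independent Beta-Bernoulli bandit – an illustrative setting of our own choosing, not the Gaussian setting developed in [27]. The value of sampling an arm is the expected improvement in the best posterior mean after one observation.

```python
import numpy as np

def knowledge_gradient(alpha, beta):
    """One-step value of information for each arm of a Beta-Bernoulli bandit.

    KG(a) = E[max_i posterior mean after one pull of arm a] - max_i current mean.
    Illustrative sketch only; the KG literature develops this for Gaussian rewards.
    """
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    means = alpha / (alpha + beta)
    best_now = means.max()
    kg = np.zeros(len(alpha))
    for a in range(len(alpha)):
        # Average over the two possible Bernoulli outcomes of pulling arm a.
        for reward, prob in ((1.0, means[a]), (0.0, 1.0 - means[a])):
            na, nb = alpha.copy(), beta.copy()
            na[a] += reward
            nb[a] += 1.0 - reward
            kg[a] += prob * (na / (na + nb)).max()  # expected best mean after update
        kg[a] -= best_now
    return kg
```

Because posterior means form a martingale, Jensen's inequality implies KG(a) ≥ 0 for every arm; a KG-style policy then trades off current posterior mean against this one-step value.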
KG seems natural since it explicitly seeks information that improves decision quality. Computational studies suggest that for problems with Gaussian priors, Gaussian rewards, and relatively short time horizons, KG performs very well. However, even in some simple settings, KG may not converge to optimality. In fact, it may select a suboptimal action in every period, even as the time horizon tends to infinity.
Our work also connects to a much larger literature on Bayesian experimental design (see [10] for a review). Recent work has demonstrated the effectiveness of greedy or myopic policies that always maximize the information gain from the next sample. Jedynak et al. [18] consider problem settings in which this greedy policy is optimal. Another recent line of work [14] shows that information-gain-based objectives sometimes satisfy a decreasing-returns property known as adaptive submodularity, implying the greedy policy is competitive with the optimal policy. Our algorithm also considers only the information gain due to the next sample, even though the goal is to acquire information over many periods. Our results establish that the manner in which IDS encourages information gain leads to an effective algorithm, even for the different objective of maximizing cumulative reward.
2 Problem formulation
We consider a general probabilistic, or Bayesian, formulation in which uncertain quantities are modeled as random variables. The decision-maker sequentially chooses actions (At)t∈N from the finite action set A and observes the corresponding outcomes (Yt(At))t∈N. There is a random outcome Yt(a) ∈ Y associated with each a ∈ A and time t ∈ N. Let Yt ≡ (Yt(a))a∈A be the vector of outcomes at time t ∈ N. The "true outcome distribution" p∗ is a distribution over Y^|A| that is itself randomly drawn from the family of distributions P. We assume that, conditioned on p∗, (Yt)t∈N is an iid sequence with each element Yt distributed according to p∗.
Let p∗a be the marginal distribution corresponding to Yt(a). The agent associates a reward R(y) with each outcome y ∈ Y, where the reward function R : Y → R is fixed and known. We assume R(y) − R(y′) ≤ 1 for any y, y′ ∈ Y. Uncertainty about p∗ induces uncertainty about the true optimal action, which we denote by A∗ ∈ arg max_{a∈A} E_{y∼p∗a}[R(y)]. The T-period regret is the random variable
Regret(T) := Σ_{t=1}^{T} [R(Yt(A∗)) − R(Yt(At))], (1)
which measures the cumulative difference between the reward earned by an algorithm that always chooses the optimal action and the reward actually accumulated up to time T. In this paper we study expected regret E[Regret(T)], where the expectation is taken over the randomness in the actions At and the outcomes Yt, and over the prior distribution over p∗. This measure of performance is sometimes called Bayesian regret or Bayes risk.
Randomized policies. We define all random variables with respect to a probability space (Ω, F, P). Fix the filtration (Ft)t∈N where Ft−1 ⊂ F is the sigma-algebra generated by the history of observations (A1, Y1(A1), ..., At−1, Yt−1(At−1)). Actions are chosen based on the history of past observations, and possibly some external source of randomness1. It is useful to think of the actions as being chosen by a randomized policy π, which is an Ft-predictable sequence (πt)t∈N. An action is chosen at time t by randomizing according to πt(·) = P(At = ·|Ft−1), which specifies a probability distribution over A. We denote the set of probability distributions over A by D(A). We explicitly display the dependence of regret on the policy π, letting E[Regret(T, π)] denote the expected value of (1) when the actions (A1, ..., AT) are chosen according to π.
Further notation. We set αt(a) = P(A∗ = a|Ft−1) to be the posterior distribution of A∗. For a probability distribution P over a finite set X, the Shannon entropy of P is defined as H(P) = −Σ_{x∈X} P(x) log P(x).
For two probability measures P and Q over a common measurable space, if P is absolutely continuous with respect to Q, the Kullback-Leibler divergence between P and Q is
DKL(P||Q) = ∫ log(dP/dQ) dP, (2)
where dP/dQ is the Radon-Nikodym derivative of P with respect to Q. The mutual information under the posterior distribution between random variables X1 : Ω → X1 and X2 : Ω → X2, denoted by
It(X1; X2) := DKL(P((X1, X2) ∈ ·|Ft−1) || P(X1 ∈ ·|Ft−1) P(X2 ∈ ·|Ft−1)), (3)
is the Kullback-Leibler divergence between the joint posterior distribution of X1 and X2 and the product of the marginal distributions. Note that It(X1; X2) is a random variable because of its dependence on the conditional probability measure P(·|Ft−1). To simplify notation, we define the information gain from an action a to be gt(a) := It(A∗; Yt(a)). As shown for example in Lemma 5.5.6 of Gray [16], this is equal to the expected reduction in entropy of the posterior distribution of A∗ due to observing Yt(a):
gt(a) = E[H(αt) − H(αt+1)|Ft−1, At = a], (4)
which plays a crucial role in our results. Let ∆t(a) := E[R(Yt(A∗)) − R(Yt(a))|Ft−1] denote the expected instantaneous regret of action a at time t. We overload the notation gt(·) and ∆t(·): for π ∈ D(A), define gt(π) = Σ_{a∈A} π(a)gt(a) and ∆t(π) = Σ_{a∈A} π(a)∆t(a).
1Formally, At is measurable with respect to the sigma-algebra generated by (Ft−1, ξt), where (ξt)t∈N are random variables representing this external source of randomness, jointly independent of p∗ and (Yt)t∈N.
3 Information-directed sampling
IDS explicitly balances between having low expected regret in the current period and acquiring new information about which action is optimal. It does this by minimizing over all action sampling distributions π ∈ D(A) the ratio between the square of expected regret ∆t(π)² and information gain gt(π) about the optimal action A∗. In particular, the policy πIDS = (πIDS_1, πIDS_2, ...) is defined by
πIDS_t ∈ arg min_{π∈D(A)} { Ψt(π) := ∆t(π)² / gt(π) }.
(5)
We call Ψt(π) the information ratio of a sampling distribution π, and Ψ∗t = minπ Ψt(π) = Ψt(πIDS_t) the minimal information ratio. Each roughly measures the "cost" per bit of information acquired.
Optimization problem. Suppose that there are K = |A| actions, and that the posterior expected regret and information gain are stored in the vectors ∆ ∈ R^K_+ and g ∈ R^K_+. Assume g ≠ 0, so that the optimal action is not known with certainty. The optimization problem (5) can be written as
minimize Ψ(π) := (πᵀ∆)² / πᵀg subject to πᵀe = 1, π ≥ 0. (6)
The following result shows this is a convex optimization problem and, surprisingly, that it has an optimal solution with only two non-zero components. Therefore, while IDS is a randomized policy, it randomizes over at most two actions. Algorithm 1, presented in the supplementary material, solves (6) by looping over all pairs of actions and solving a one-dimensional convex optimization problem for each pair.
Proposition 1. The function Ψ : π ↦ (πᵀ∆)² / πᵀg is convex on {π ∈ R^K : πᵀg > 0}. Moreover, there is an optimal solution π∗ to (6) with |{i : π∗_i > 0}| ≤ 2.
4 Regret bounds
This section establishes regret bounds for IDS that scale with the entropy of the optimal action distribution. The next proposition shows that bounds on a policy's information ratio imply bounds on expected regret. We then provide several bounds on the information ratio of IDS.
Proposition 2. Fix a deterministic λ ∈ R and a policy π = (π1, π2, ...) such that Ψt(πt) ≤ λ almost surely for each t ∈ {1, ..., T}. Then, E[Regret(π, T)] ≤ √(λ H(α1) T).
Bounds on the information ratio. We establish upper bounds on the minimal information ratio Ψ∗t = Ψt(πIDS_t) in several important settings. These bounds show that, in any period, the algorithm's expected regret can only be large if it is expected to acquire a lot of information about which action is optimal. It effectively balances between exploration and exploitation in every period.
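The pairwise structure from Proposition 1 suggests a simple numerical scheme. The sketch below is not the paper's Algorithm 1: a grid search stands in for the one-dimensional convex minimization, but it likewise loops over action pairs to minimize (πᵀ∆)²/πᵀg.

```python
import numpy as np

def ids_distribution(delta, gain, n_grid=10001):
    """Approximate the IDS sampling distribution minimizing (pi^T delta)^2 / pi^T g.

    By Proposition 1, an optimal solution is supported on at most two actions,
    so it suffices to search over pairs (i, j) and mixing weights q in [0, 1].
    Assumes every entry of `gain` is strictly positive.
    """
    delta, gain = np.asarray(delta, float), np.asarray(gain, float)
    K = len(delta)
    q = np.linspace(0.0, 1.0, n_grid)
    best_ratio, best_pi = np.inf, None
    for i in range(K):
        for j in range(i + 1, K):
            reg = q * delta[i] + (1 - q) * delta[j]   # expected regret of the mixture
            info = q * gain[i] + (1 - q) * gain[j]    # expected information gain
            ratio = reg**2 / info
            k = int(np.argmin(ratio))
            if ratio[k] < best_ratio:
                pi = np.zeros(K)
                pi[i], pi[j] = q[k], 1 - q[k]
                best_ratio, best_pi = float(ratio[k]), pi
    return best_pi, best_ratio
```

Because q = 0 and q = 1 lie on the grid, deterministic actions are covered as special cases, so the returned ratio is never worse than the best single action's ∆(a)²/g(a).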
The proofs of these bounds essentially follow from a very recent analysis of Thompson sampling, and the implied regret bounds are the same as those established for Thompson sampling. In particular, since Ψ∗t ≤ Ψt(πTS), where πTS is the Thompson sampling policy, it is enough to bound Ψt(πTS). Several such bounds were provided by Russo and Van Roy [25].² While the analysis is similar in the cases considered here, IDS outperforms Thompson sampling in simulation and, as we will highlight in the next section, is sometimes provably much more informationally efficient. We briefly describe each of these bounds below and then provide a more complete discussion for linear bandit problems. For each of the other cases, more formal propositions, their proofs, and a discussion of lower bounds can be found in the supplementary material or the full version of this paper [26].
Finite action space: With no additional assumptions, we show Ψ∗t ≤ |A|/2.
Linear bandit: Each action is associated with a d-dimensional feature vector, and the mean reward generated by an action is the inner product between its known feature vector and some unknown parameter vector. We show Ψ∗t ≤ d/2.
Full information: Upon choosing an action, the agent observes the reward she would have received had she chosen any other action. We show Ψ∗t ≤ 1/2.
Combinatorial action sets: At time t, project i ∈ {1, ..., d} yields a random reward θt,i, and the reward from selecting a subset of projects a ∈ A ⊂ {a′ ⊂ {0, 1, ..., d} : |a′| ≤ m} is m⁻¹ Σ_{i∈a} θt,i. The outcome of each selected project (θt,i : i ∈ a) is observed, which is sometimes called "semi-bandit" feedback [3]. We show Ψ∗t ≤ d/(2m²).
²Ψt(πTS) is exactly equal to the term Γ²t that is bounded in [25].
Linear optimization under bandit feedback. The stochastic linear bandit problem has been widely studied (e.g.
[13, 23]) and is one of the most important examples of a multi-armed bandit problem with "correlated arms." In this setting, each action is associated with a finite-dimensional feature vector, and the mean reward generated by an action is the inner product between its known feature vector and some unknown parameter vector. The next result bounds Ψ∗t for such problems.
Proposition 3. If A ⊂ R^d and for each p ∈ P there exists θp ∈ R^d such that for all a ∈ A, E_{y∼pa}[R(y)] = aᵀθp, then for all t ∈ N, Ψ∗t ≤ d/2 almost surely.
This result shows that E[Regret(T, πIDS)] ≤ √(½ H(α1) d T) ≤ √(½ log(|A|) d T) for linear bandit problems. Dani et al. [12] show this bound is order optimal, in the sense that for any time horizon T and dimension d, if the action set is A = {0, 1}^d, there exists a prior distribution over p∗ such that infπ E[Regret(T, π)] ≥ c0 √(log(|A|) d T), where c0 is a constant that is independent of d and T. The bound here improves upon this worst-case bound since H(α1) can be much smaller than log(|A|).
5 Beyond UCB and Thompson sampling
Upper confidence bound (UCB) algorithms and Thompson sampling are two of the most popular approaches to balancing between exploration and exploitation. In some cases, these algorithms are empirically effective and have strong theoretical guarantees. But we will show that, because they do not quantify the information provided by sampling actions, they can be grossly suboptimal in other cases. We demonstrate this through two examples, each designed to be simple and transparent. To set the stage for our discussion, we now introduce UCB algorithms and Thompson sampling.
Thompson sampling. The Thompson sampling algorithm simply samples actions according to the posterior probability that they are optimal. In particular, actions are chosen randomly at time t according to the sampling distribution πTS_t = αt. By definition, this means that for each a ∈ A, P(At = a|Ft−1) = P(A∗ = a|Ft−1) = αt(a).
This algorithm is sometimes called probability matching because the action selection distribution is matched to the posterior distribution of the optimal action. Note that Thompson sampling draws actions only from the support of the posterior distribution of A∗. That is, it never selects an action a if P(A∗ = a) = 0. Put differently, this implies that it only selects actions that are optimal under some p ∈ P.
UCB algorithms. UCB algorithms select actions through two steps. First, for each action a ∈ A an upper confidence bound Bt(a) is constructed. Then, an action At ∈ arg max_{a∈A} Bt(a) with maximal upper confidence bound is chosen. Roughly, Bt(a) represents the greatest mean reward value that is statistically plausible. In particular, Bt(a) is typically constructed so that Bt(a) → E_{y∼p∗a}[R(y)] as data about action a accumulates, but such that E_{y∼p∗a}[R(y)] ≤ Bt(a) holds with high probability.
Like Thompson sampling, many UCB algorithms only select actions that are optimal under some p ∈ P. Consider an algorithm that constructs at each time t a confidence set Pt ⊂ P containing the set of distributions that are statistically plausible given observed data (e.g. [13]). Upper confidence bounds are then set to be the highest expected reward attainable under one of the plausible distributions: Bt(a) = max_{p∈Pt} E_{y∼pa}[R(y)]. Any action At ∈ arg maxa Bt(a) must be optimal under one of the outcome distributions p ∈ Pt. An alternative method involves choosing Bt(a) to be a particular quantile of the posterior distribution of the action's mean reward under p∗ [20]. In each of the examples we construct, such an algorithm chooses actions from the support of A∗ unless the quantiles are so low that maxa∈A Bt(a) < E[R(Yt(A∗))].
5.1 Example: sparse linear bandits
Consider a linear bandit problem where A ⊂ R^d and the reward from an action a ∈ A is aᵀθ∗. The true parameter θ∗ is known to be drawn uniformly at random from the set of 1-sparse vectors Θ = {θ ∈ {0, 1}^d : ∥θ∥0 = 1}.
For simplicity, assume d = 2^m for some m ∈ N. The action set is taken to be the set of vectors in {0, 1}^d normalized to be unit vectors in the L1 norm: A = {x/∥x∥1 : x ∈ {0, 1}^d, x ≠ 0}. We will show that the expected number of time steps for Thompson sampling (or a UCB algorithm) to identify the optimal action grows linearly with d, whereas IDS requires only log2(d) time steps.
When an action a is selected and y = aᵀθ∗ ∈ {0, 1/∥a∥0} is observed, each θ ∈ Θ with aᵀθ ≠ y is ruled out. Let Θt denote the parameters in Θ that are consistent with the observations up to time t, and let It = {i ∈ {1, ..., d} : θi = 1, θ ∈ Θt} be the set of possible positive components.
For this problem, A∗ = θ∗. That is, if θ∗ were known, the optimal action would be to choose the action θ∗. Thompson sampling and UCB algorithms only choose actions from the support of A∗ and therefore will only sample actions a ∈ A that have a single positive component. Unless that is also the positive component of θ∗, the algorithm will observe a reward of zero and rule out only one possible value for θ∗. The algorithm may require d samples to identify the optimal action.
Consider an application of IDS to this problem. It essentially performs binary search: it selects a ∈ A with ai > 0 for half of the components i ∈ It and ai = 0 for the other half as well as for any i ∉ It. After just log2(d) time steps the true support of θ∗ is identified.
To see why this is the case, first note that all parameters in Θt are equally likely, and hence the expected reward of an action a is (1/|It|) Σ_{i∈It} ai. Since ai ≥ 0 and Σi ai = 1 for each a ∈ A, every action whose positive components are in It yields the highest possible expected reward of 1/|It|. Therefore, binary search minimizes expected regret in period t for this problem. At the same time, binary search is assured to rule out half of the parameter vectors in Θt at each time t.
This is the largest possible expected reduction, and it also leads to the largest possible information gain about A∗. Since binary search both minimizes expected regret in period t and uniquely maximizes expected information gain in period t, it is the sampling strategy followed by IDS.
5.2 Example: recommending products to a customer of unknown type
Consider the problem of repeatedly recommending an assortment of products to a customer. The customer has unknown type c∗ ∈ C where |C| = n. Each product is geared toward customers of a particular type, and the assortment a ∈ A = C^m of m products offered is characterized by the vector of product types a = (c1, ..., cm). We model customer responses through a random utility model in which customers are a priori more likely to derive high value from a product geared toward their type. When offered an assortment of products a, the customer associates with the i-th product the utility U(t)_{ci}(a) = β·1{ai=c} + W(t)_{ci}, where W(t)_{ci} follows an extreme-value distribution and β ∈ R is a known constant. This is a standard multinomial logit discrete choice model. The probability a customer of type c chooses product i is given by exp{β·1{ai=c}} / Σ_{j=1}^{m} exp{β·1{aj=c}}.
When an assortment a is offered at time t, the customer makes a choice It = arg maxi U(t)_{ci}(a) and leaves a review U(t)_{cIt}(a) indicating the utility derived from the product, both of which are observed by the recommendation system. The system's reward is the normalized utility of the customer, (1/β) U(t)_{cIt}(a).
If the type c∗ of the customer were known, then the optimal recommendation would be A∗ = (c∗, c∗, ..., c∗), which consists only of products targeted at the customer's type. Therefore, both Thompson sampling and UCB algorithms would only offer assortments consisting of a single type of product. Because of this, each type of algorithm requires order n samples to learn the customer's true type.
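As a sketch of the choice model above, the snippet below computes the multinomial logit probabilities exp{β·1{ai=c}} / Σj exp{β·1{aj=c}} for a small assortment; the type labels are hypothetical.

```python
import math

def choice_probabilities(assortment, customer_type, beta):
    """Multinomial logit choice probabilities for a customer of the given type.

    A product matching the customer's type gets a deterministic utility bump of beta.
    """
    weights = [math.exp(beta if product_type == customer_type else 0.0)
               for product_type in assortment]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical assortment of m = 4 product types offered to a type-"c1" customer.
probs = choice_probabilities(["c1", "c2", "c3", "c4"], "c1", beta=3.0)
```

At β = 0 the customer chooses uniformly at random, and as β → ∞ the probability of choosing a matching product tends to 1, which is exactly the limiting regime analyzed below.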
IDS will instead offer a diverse assortment of products to the customer, allowing it to learn much more quickly. To make the presentation more transparent, suppose that c∗ is drawn uniformly at random from C, and consider the behavior of each type of algorithm in the limiting case where β → ∞. In this regime, the probability a customer chooses a product of type c∗, if one is available, tends to 1, and the review U(t)_{cIt}(a) tends to 1{aIt = c∗}, an indicator for whether the chosen product had type c∗. The initial assortment offered by IDS will consist of m different and previously untested product types. Such an assortment maximizes both the algorithm's expected reward in the next period and the algorithm's information gain, since it has the highest probability of containing a product of type c∗. The customer's response almost perfectly indicates whether one of those items was of type c∗. The algorithm continues offering assortments containing m unique, untested product types until a review near U(t)_{cIt}(a) ≈ 1 is received. With extremely high probability, this takes at most ⌈n/m⌉ time periods. By diversifying the m products in the assortment, the algorithm learns m times faster.
6 Computational experiments
Section 5 showed that, for some complicated information structures, popular approaches like UCB algorithms and Thompson sampling are provably outperformed by IDS. Our computational experiments focus instead on simpler settings where these algorithms are extremely effective. We find that even in these widely studied settings, IDS displays performance exceeding the state of the art. For each experiment, the algorithm used to implement IDS is presented in Appendix C.
Mean-based IDS. Some of our numerical experiments use an approximate form of IDS that is suitable for some problems with bandit feedback, satisfies our regret bounds for such problems, and can sometimes facilitate the design of more efficient numerical methods.
More details can be found in the appendix, or in the full version of this paper [26].
Beta-Bernoulli experiment. Our first experiment involves a multi-armed bandit problem with independent arms. The action ai ∈ {a1, ..., aK} yields in each time period a reward that is 1 with probability θi and 0 otherwise. The θi are drawn independently from Beta(1, 1), which is the uniform distribution. Figure 1a presents the results of 1000 independent trials of an experiment with 10 arms and a time horizon of 1000. We compare IDS to six other algorithms, and find that it has the lowest average regret of 18.16. Our results indicate that the variation of IDS, πIDSME, presented in Section 6 has extremely similar performance to standard IDS for this problem.
[Figure 1: cumulative regret vs. time period. (a) Binary rewards: Knowledge Gradient, IDS, Mean-based IDS, Thompson Sampling, Bayes UCB, UCB Tuned, MOSS, KL UCB. (b) Asymptotic performance: IDS, Thompson Sampling, Bayes UCB, Lower Bound.]
In this experiment, the famous UCB1 algorithm of Auer et al. [4] had average regret 131.3, which is dramatically larger than that of IDS. For this reason UCB1 is omitted from Figure 1a. The confidence bounds of UCB1 are constructed to facilitate theoretical analysis. For practical performance, Auer et al. [4] proposed using a heuristic algorithm called UCB-Tuned. The MOSS algorithm of Audibert and Bubeck [2] is similar to UCB1 and UCB-Tuned, but uses slightly different confidence bounds. It is known to satisfy regret bounds for this problem that are minimax optimal up to a constant factor.
In previous numerical experiments [11, 19, 20, 28], Thompson sampling and Bayes UCB exhibited state-of-the-art performance for this problem. Unsurprisingly, they are the closest competitors to IDS. The Bayes UCB algorithm, studied in Kaufmann et al.
[20], uses upper confidence bounds at time step t that are the 1 − 1/t quantile of the posterior distribution of each action. (Their theoretical guarantees require choosing a somewhat higher quantile, but the authors suggest this quantile and use it in their own numerical experiments.) The knowledge gradient (KG) policy of Ryzhov et al. [27] uses the one-step value of information to incentivize exploration. However, for this problem KG does not explore sufficiently to identify the optimal arm, and therefore its expected regret grows linearly with time. It should be noted that KG is particularly poorly suited to problems with discrete observations and long time horizons; it can perform very well in other types of experiments.

Asymptotic optimality. That IDS outperforms Bayes UCB and Thompson sampling in our last experiment is particularly surprising, as each of these algorithms is known, in a sense we will soon formalize, to be asymptotically optimal for these problems. We now present simulation results over a much longer time horizon that suggest IDS scales in the same asymptotically optimal way. The seminal work of Lai and Robbins [21] provides the following asymptotic frequentist lower bound on the regret of any policy π; when applied with an independent uniform prior over θ, both Bayes UCB and Thompson sampling are known to attain this lower bound [19, 20]:

$$\liminf_{T \to \infty} \frac{\mathbb{E}[\mathrm{Regret}(T, \pi) \mid \theta]}{\log T} \;\ge\; \sum_{a \neq A^*} \frac{\theta_{A^*} - \theta_a}{D_{\mathrm{KL}}(\theta_{A^*} \,\|\, \theta_a)} \;=:\; c(\theta).$$

Our next numerical experiment fixes a problem with three actions and with θ = (.3, .2, .1). We compare algorithms over 10,000 time periods. Due to the computational expense of this experiment, we only ran 200 independent trials. Each algorithm uses a uniform prior over θ. Our results, along with the asymptotic lower bound of c(θ) log(T), are presented in Figure 1b.

Linear bandit problems. Our final numerical experiment treats a linear bandit problem.
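The constant c(θ) in the bound above is easy to evaluate for Bernoulli rewards. A minimal sketch (our own, following the displayed formula term for term, with D_KL the Bernoulli KL divergence):

```python
import math

def bernoulli_kl(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def lai_robbins_constant(theta):
    """c(theta): sum over suboptimal arms of gap / KL, as in the display."""
    best = max(theta)
    return sum((best - t) / bernoulli_kl(best, t) for t in theta if t != best)

c = lai_robbins_constant([0.3, 0.2, 0.1])  # ~ 4.85 for the experiment above
```

This is the slope of the c(θ) log(T) reference line plotted in Figure 1b.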
Each action a ∈ R^5 is defined by a 5-dimensional feature vector. The reward of action a at time t is a^T θ + ε_t, where θ ∼ N(0, 10I) is drawn from a multivariate Gaussian prior distribution and ε_t ∼ N(0, 1) is independent Gaussian noise. In each period, only the reward of the selected action is observed. In our experiment, the action set A contains 30 actions, each with features drawn uniformly at random from [−1/√5, 1/√5]. The results displayed in Figure 1 are averaged over 1000 independent trials.

[Figure 1: Regret in the linear-Gaussian model: Bayes UCB, Knowledge Gradient, Thompson Sampling, Mean-based IDS, GP UCB, GP UCB Tuned.]

We compare the regret of five algorithms. Three of these (GP-UCB, Thompson sampling, and IDS) satisfy strong regret bounds for this problem. Both GP-UCB and Thompson sampling are significantly outperformed by IDS. Bayes UCB [20] and a version of GP-UCB that was tuned to minimize its average regret are each competitive with IDS. These algorithms are heuristics, in the sense that their confidence bounds differ significantly from those of linear UCB algorithms known to satisfy theoretical guarantees.

7 Conclusion

This paper has proposed information-directed sampling, a new algorithm for balancing between exploration and exploitation. We establish a general regret bound for the algorithm, and specialize this bound to several widely studied classes of online optimization problems. We show that the way in which IDS assesses information gain allows it to dramatically outperform UCB algorithms and Thompson sampling in some settings. Finally, for two simple and widely studied classes of multi-armed bandit problems we demonstrate state-of-the-art performance in simulation experiments. In these ways, we feel this work provides a compelling proof of concept. Many important open questions remain, however.
IDS solves a single-period optimization problem as a proxy to an intractable multi-period problem. Solution of this single-period problem can itself be computationally demanding, especially in cases where the number of actions is enormous or mutual information is difficult to evaluate. An important direction for future research concerns the development of computationally efficient procedures to implement IDS in important cases. Even when the algorithm cannot be directly implemented, however, one may hope to develop simple algorithms that capture its main benefits. Proposition 2 shows that any algorithm with a small information ratio satisfies strong regret bounds. Thompson sampling is a very tractable algorithm that, we conjecture, sometimes has nearly minimal information ratio. Perhaps simple schemes with small information ratio could be developed for other important problem classes, like the sparse linear bandit problem.

(Regret analyses of GP-UCB can be found in [29], and of Thompson sampling in [1, 24, 25].)

References

[1] S. Agrawal and N. Goyal. Thompson sampling for contextual bandits with linear payoffs. In ICML, 2013.
[2] J.-Y. Audibert and S. Bubeck. Minimax policies for bandits games. In COLT, 2009.
[3] J.-Y. Audibert, S. Bubeck, and G. Lugosi. Regret in online combinatorial optimization. Mathematics of Operations Research, 2013.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2):235–256, 2002.
[5] S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[6] E. Brochu, V.M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[7] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
[8] S. Bubeck, R.
Munos, G. Stoltz, and Cs. Szepesvári. X-armed bandits. JMLR, 12:1655–1695, June 2011.
[9] O. Cappé, A. Garivier, O.-A. Maillard, R. Munos, and G. Stoltz. Kullback-Leibler upper confidence bounds for optimal sequential allocation. Annals of Statistics, 41(3):1516–1541, 2013.
[10] K. Chaloner and I. Verdinelli. Bayesian experimental design: A review. Statistical Science, 10(3):273–304, 1995.
[11] O. Chapelle and L. Li. An empirical evaluation of Thompson sampling. In NIPS, 2011.
[12] V. Dani, S.M. Kakade, and T.P. Hayes. The price of bandit information for online optimization. In NIPS, pages 345–352, 2007.
[13] V. Dani, T.P. Hayes, and S.M. Kakade. Stochastic linear optimization under bandit feedback. In COLT, pages 355–366, 2008.
[14] D. Golovin and A. Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 42(1):427–486, 2011.
[15] A. Gopalan, S. Mannor, and Y. Mansour. Thompson sampling for complex online problems. In ICML, 2014.
[16] R.M. Gray. Entropy and Information Theory. Springer, 2011.
[17] P. Hennig and C.J. Schuler. Entropy search for information-efficient global optimization. JMLR, 13:1809–1837, 2012.
[18] B. Jedynak, P.I. Frazier, and R. Sznitman. Twenty questions with noise: Bayes optimal policies for entropy loss. Journal of Applied Probability, 49(1):114–136, 2012.
[19] E. Kaufmann, N. Korda, and R. Munos. Thompson sampling: an asymptotically optimal finite time analysis. In ALT, 2012.
[20] E. Kaufmann, O. Cappé, and A. Garivier. On Bayesian upper confidence bounds for bandit problems. In AISTATS, 2012.
[21] T.L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[22] W.B. Powell and I.O. Ryzhov. Optimal Learning, volume 841. John Wiley & Sons, 2012.
[23] P. Rusmevichientong and J.N. Tsitsiklis. Linearly parameterized bandits.
Mathematics of Operations Research, 35(2):395–411, 2010.
[24] D. Russo and B. Van Roy. Learning to optimize via posterior sampling. CoRR, abs/1301.2609, 2013.
[25] D. Russo and B. Van Roy. An information-theoretic analysis of Thompson sampling. arXiv preprint arXiv:1403.5341, 2014.
[26] D. Russo and B. Van Roy. Learning to optimize via information-directed sampling. arXiv preprint arXiv:1403.5556, 2014.
[27] I.O. Ryzhov, W.B. Powell, and P.I. Frazier. The knowledge gradient algorithm for a general class of online learning problems. Operations Research, 60(1):180–195, 2012.
[28] S.L. Scott. A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658, 2010.
[29] N. Srinivas, A. Krause, S.M. Kakade, and M. Seeger. Information-theoretic regret bounds for Gaussian process optimization in the bandit setting. IEEE Transactions on Information Theory, 58(5):3250–3265, May 2012.
[30] J. Villemonteix, E. Vazquez, and E. Walter. An informational approach to the global optimization of expensive-to-evaluate functions. Journal of Global Optimization, 44(4):509–534, 2009.
Distributed Estimation, Information Loss and Exponential Families Qiang Liu Alexander Ihler Department of Computer Science, University of California, Irvine qliu1@uci.edu ihler@ics.uci.edu Abstract Distributed learning of probabilistic models from multiple data repositories with minimum communication is increasingly important. We study a simple communication-efficient learning framework that first calculates the local maximum likelihood estimates (MLE) based on the data subsets, and then combines the local MLEs to achieve the best possible approximation to the global MLE given the whole dataset. We study this framework’s statistical properties, showing that the efficiency loss compared to the global setting relates to how much the underlying distribution families deviate from full exponential families, drawing connection to the theory of information loss by Fisher, Rao and Efron. We show that the “full-exponential-family-ness” represents the lower bound of the error rate of arbitrary combinations of local MLEs, and is achieved by a KL-divergence-based combination method but not by a more common linear combination method. We also study the empirical properties of both methods, showing that the KL method significantly outperforms linear combination in practical settings with issues such as model misspecification, non-convexity, and heterogeneous data partitions. 1 Introduction Modern data-science applications increasingly require distributed learning algorithms to extract information from many data repositories stored at different locations with minimal interaction. Such distributed settings are created due to high communication costs (for example in sensor networks), or privacy and ownership issues (such as sensitive medical or financial data). Traditional algorithms often require access to the entire dataset simultaneously, and are not suitable for distributed settings. 
We consider a straightforward two-step procedure for distributed learning that follows a “divide and conquer” strategy: (i) local learning, which involves learning probabilistic models based on the local data repositories separately, and (ii) model combination, where the local models are transmitted to a central node (the “fusion center”) and combined to form a global model that integrates the information in the local repositories. This framework only requires transmitting the local model parameters to the fusion center once, yielding significant advantages in terms of both communication and privacy constraints. However, the two-step procedure may not fully extract all the information in the data, and may be less (statistically) efficient than a corresponding centralized learning algorithm that operates globally on the whole dataset. This raises important challenges in understanding the fundamental statistical limits of the local learning framework, and in proposing optimal combination methods to best approximate the global learning algorithm. In this work, we study these problems in the setting of estimating generative model parameters from a distribution family via the maximum likelihood estimator (MLE). We show that the loss of statistical efficiency caused by using the local learning framework is related to how much the underlying distribution families deviate from full exponential families: local learning can be as efficient as (in fact exactly equivalent to) global learning on full exponential families, but is less efficient on non-exponential families, depending on how nearly “full exponential family” they are. The “full-exponential-family-ness” is formally captured by the statistical curvature originally defined by Efron (1975), and is a measure of the minimum loss of Fisher information when summarizing the data using first order efficient estimators (e.g., Fisher, 1925, Rao, 1963).
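The two-step procedure can be sketched in a few lines. The example below is our own illustration, using a Bernoulli family for concreteness: it splits the data evenly, fits each shard locally, and combines the local estimates. For this full exponential family, linearly averaging the local MLEs (the mean parameters) already recovers the global MLE exactly, previewing the results of Section 3:

```python
import random

def local_mle_bernoulli(xs):
    # The MLE of a Bernoulli parameter is the sample mean.
    return sum(xs) / len(xs)

def distributed_mle(data, d, combine):
    """Two-step framework: split into d equal shards, fit locally, combine."""
    n = len(data)
    shards = [data[k * n // d:(k + 1) * n // d] for k in range(d)]
    return combine([local_mle_bernoulli(s) for s in shards])

random.seed(0)
data = [1 if random.random() < 0.3 else 0 for _ in range(1000)]
theta_linear = distributed_mle(data, 10, lambda ts: sum(ts) / len(ts))
theta_global = sum(data) / len(data)
# With equal shards, the average of the local Bernoulli MLEs coincides with
# the global MLE; for curved families this exactness no longer holds.
```

Only the `combine` step changes between the linear and KL methods studied below; the communication pattern (one round, d parameter vectors) is identical.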
Specifically, we show that arbitrary combinations of the local MLEs on the local datasets can approximate the global MLE on the whole dataset at most up to an asymptotic error rate proportional to the square of the statistical curvature. In addition, a KL-divergence-based combination of the local MLEs achieves this minimum error rate in general, and exactly recovers the global MLE on full exponential families. In contrast, a more widely-used linear combination method does not achieve the optimal error rate, and makes mistakes even on full exponential families. We also study the two methods empirically, examining their robustness against practical issues such as model mis-specification, heterogeneous data partitions, and the existence of hidden variables (e.g., in the Gaussian mixture model). These issues often cause the likelihood to have multiple local optima, and can easily degrade the linear combination method. On the other hand, the KL method remains robust in these practical settings. Related Work. Our work is related to Zhang et al. (2013a), which includes a theoretical analysis for linear combination. Merugu and Ghosh (2003, 2006) proposed the KL combination method in the setting of Gaussian mixtures, but without theoretical analysis. There are many recent theoretical works on distributed learning (e.g., Predd et al., 2007, Balcan et al., 2012, Zhang et al., 2013b, Shamir, 2013), but most focus on discrimination tasks like classification and regression. There are also many works on distributed clustering (e.g., Merugu and Ghosh, 2003, Forero et al., 2011, Liang et al., 2013) and distributed MCMC (e.g., Scott et al., 2013, Wang and Dunson, 2013, Neiswanger et al., 2013). An orthogonal setting of distributed learning is when the data is split across the variable dimensions, instead of the data instances; see e.g., Liu and Ihler (2012), Meng et al. (2013). 2 Problem Setting Assume we have an i.i.d. 
sample X = {x^i : i = 1, ..., n}, partitioned into d sub-samples X^k = {x^i : i ∈ α_k} that are stored in different locations, where ∪_{k=1}^d α_k = [n]. For simplicity, we assume the data are equally partitioned, so that each group has n/d instances; the extension to the more general case is straightforward. Assume X is drawn i.i.d. from a distribution with an unknown density from a distribution family {p(x|θ) : θ ∈ Θ}. Let θ* be the true unknown parameter. We are interested in estimating θ* via the maximum likelihood estimator (MLE) based on the whole sample,

$$\hat\theta_{\mathrm{mle}} = \arg\max_{\theta\in\Theta} \sum_{i\in[n]} \log p(x^i \mid \theta).$$

However, directly calculating the global MLE often requires distributed optimization algorithms (such as ADMM (Boyd et al., 2011)) that need iterative communication between the local repositories and the fusion center, which can significantly slow down the algorithm regardless of the amount of information communicated at each iteration. We instead approximate the global MLE by a two-stage procedure that calculates the local MLEs separately for each sub-sample, then sends the local MLEs to the fusion center and combines them. Specifically, the k-th sub-sample's local MLE is

$$\hat\theta_k = \arg\max_{\theta\in\Theta} \sum_{i\in\alpha_k} \log p(x^i \mid \theta),$$

and we want to construct a combination function f(θ̂_1, ..., θ̂_d) → θ̂_f to form the best approximation to the global MLE θ̂_mle. Perhaps the most straightforward combination is the linear average,

Linear-Averaging: $\hat\theta_{\mathrm{linear}} = \frac{1}{d}\sum_k \hat\theta_k.$

However, this method is obviously limited to continuous and additive parameters; in the sequel, we illustrate that it also tends to degenerate in the presence of practical issues such as non-convexity and non-i.i.d. data partitions. A better combination method is to average the models w.r.t. some distance metric, instead of the parameters. In particular, we consider a KL-divergence based averaging,

KL-Averaging: $\hat\theta_{\mathrm{KL}} = \arg\min_{\theta\in\Theta} \sum_k \mathrm{KL}\big(p(x\mid\hat\theta_k)\,\|\,p(x\mid\theta)\big).$
(1)

The estimate θ̂_KL can also be motivated by a parametric bootstrap procedure that first draws a sample X'_k from each local model p(x|θ̂_k), and then estimates a global MLE based on all the combined bootstrap samples X' = {X'_k : k ∈ [d]}. We can readily show that this reduces to θ̂_KL as the size of the bootstrapped samples X'_k grows to infinity. Other combination methods based on different distance metrics are also possible, but may not have a similarly natural interpretation.

3 Exactness on Full Exponential Families

In this section, we analyze the KL and linear combination methods on full exponential families. We show that the KL combination of the local MLEs exactly equals the global MLE, while the linear average does not in general, but can be made exact by using a special parameterization. This suggests that distributed learning is in some sense “easy” on full exponential families.

Definition 3.1. (1). A family of distributions is said to be a full exponential family if its density can be represented in a canonical form (up to one-to-one transforms of the parameters),

$$p(x\mid\theta) = \exp\big(\theta^T\phi(x) - \log Z(\theta)\big), \qquad \theta \in \Theta \equiv \Big\{\theta\in\mathbb{R}^m : \int_x \exp(\theta^T\phi(x))\,dH(x) < \infty\Big\},$$

where θ = [θ_1, ..., θ_m]^T and φ(x) = [φ_1(x), ..., φ_m(x)]^T are called the natural parameters and the natural sufficient statistics, respectively. The quantity Z(θ) is the normalization constant, and H(x) is the reference measure. An exponential family is said to be minimal if [1, φ_1(x), ..., φ_m(x)]^T is linearly independent, that is, there is no non-zero constant vector α such that α^T φ(x) = 0 for all x.

Theorem 3.2. If P = {p(x|θ) : θ ∈ Θ} is a full exponential family, then the KL-average θ̂_KL always exactly recovers the global MLE, that is, θ̂_KL = θ̂_mle. Further, if P is minimal, we have

$$\hat\theta_{\mathrm{KL}} = \mu^{-1}\Big(\frac{\mu(\hat\theta_1) + \cdots + \mu(\hat\theta_d)}{d}\Big), \qquad (2)$$

where µ : θ ↦ E_θ[φ(x)] is the one-to-one map from the natural parameters to the moment parameters, and µ^{-1} is the inverse map of µ. Note that we have µ(θ) = ∂ log Z(θ)/∂θ.

Proof.
Directly verify that the KL objective in (1) equals the global negative log-likelihood.

The nonlinear average in (2) gives an intuitive interpretation of why θ̂_KL equals θ̂_mle on full exponential families: it first calculates the local empirical moment parameters µ(θ̂_k) = (d/n) Σ_{i∈α_k} φ(x^i); averaging them gives the empirical moment parameter on the whole data, µ̂_n = (1/n) Σ_{i∈[n]} φ(x^i), which then exactly maps to the global MLE. Eq. (2) also suggests that θ̂_linear would be exact only if µ(·) were an identity map. Therefore, one may make θ̂_linear exact by using the special parameterization ϑ = µ(θ). In contrast, KL-averaging makes this reparameterization automatically (µ is different on different exponential families). Note that both KL-averaging and the global MLE are invariant w.r.t. one-to-one transforms of the parameter θ, but linear averaging is not.

Example 3.3 (Variance Estimation). Consider estimating the variance σ² of a zero-mean Gaussian distribution. Let ŝ_k = (d/n) Σ_{i∈α_k} (x^i)² be the empirical variance on the k-th sub-sample and ŝ = Σ_k ŝ_k / d the overall empirical variance. Then, θ̂_linear corresponds to a different power mean of the ŝ_k, depending on the choice of parameterization:

    θ = σ² (variance):           θ̂_linear = (1/d) Σ_k ŝ_k
    θ = σ (standard deviation):  θ̂_linear = (1/d) Σ_k (ŝ_k)^{1/2}
    θ = σ⁻² (precision):         θ̂_linear = (1/d) Σ_k (ŝ_k)^{-1}

Only the linear average of the ŝ_k (when θ = σ²) matches the overall empirical variance ŝ and equals the global MLE. In contrast, θ̂_KL always corresponds to the linear average of the ŝ_k, equaling the global MLE regardless of the parameterization.

4 Information Loss in Distributed Learning

The exactness of θ̂_KL in Theorem 3.2 is due to the beauty (or simplicity) of exponential families.
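The table in Example 3.3 is easy to check numerically. In the sketch below (our own illustrative data, d = 4 shards of m = 250 points each), only the variance parameterization reproduces the global MLE, while the other two linear averages are strictly smaller by the power-mean and harmonic-mean inequalities:

```python
import random

random.seed(1)
d, m = 4, 250
shards = [[random.gauss(0.0, 2.0) for _ in range(m)] for _ in range(d)]
s_k = [sum(x * x for x in s) / m for s in shards]  # local empirical variances
s_hat = sum(s_k) / d        # overall empirical variance = global MLE of sigma^2

theta_var = sum(s_k) / d                      # linear average, theta = sigma^2
theta_std = sum(v ** 0.5 for v in s_k) / d    # linear average, theta = sigma
theta_prec = sum(1.0 / v for v in s_k) / d    # linear average, theta = sigma^{-2}
# theta_std**2 and 1/theta_prec both understate s_hat unless all s_k coincide.
```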
Following Efron's intuition, full exponential families can be viewed as “straight lines” or “linear subspaces” in the space of distributions, while other distribution families correspond to “curved” sets of distributions, whose deviation from full exponential families can be measured by their statistical curvatures as defined by Efron (1975). That work shows that statistical curvature is closely related to Fisher and Rao's theory of second order efficiency (Fisher, 1925, Rao, 1963), and represents the minimum information loss when summarizing the data using first order efficient estimators. In this section, we connect this classical theory with the local learning framework, and show that the statistical curvature also represents the minimum asymptotic deviation of arbitrary combinations of the local MLEs from the global MLE, and that this deviation is achieved by the KL combination method, but not in general by the simpler linear combination method.

4.1 Curved Exponential Families and Statistical Curvature

We follow the convention in Efron (1975), and illustrate the idea of statistical curvature using curved exponential families, which are smooth sub-families of full exponential families. The theory can be naturally extended to more general families (see e.g., Efron, 1975, Kass and Vos, 2011).

Definition 4.1. A family of distributions {p(x|θ) : θ ∈ Θ} is said to be a curved exponential family if its density can be represented as

$$p(x\mid\theta) = \exp\big(\eta(\theta)^T\phi(x) - \log Z(\eta(\theta))\big), \qquad (3)$$

where the dimension of θ = [θ_1, ..., θ_q] is assumed to be smaller than that of η = [η_1, ..., η_m] and φ = [φ_1, ..., φ_m], that is, q < m.

Following Kass and Vos (2011), we assume some regularity conditions for our asymptotic analysis. Assume Θ is an open set in R^q, and the mapping η : Θ → η(Θ) is one-to-one, infinitely differentiable, and of rank q, meaning that the q × m matrix η̇(θ) has rank q everywhere. In addition, if a sequence {η(θ_i)} converges to a point η(θ_0), then {θ_i ∈ Θ} must converge to θ_0.
In geometric terminology, such a map η : Θ → η(Θ) is called a q-dimensional embedding in R^m. Obviously, a curved exponential family can be treated as a smooth subset of a full exponential family p(x|η) = exp(η^T φ(x) − log Z(η)), with η constrained in η(Θ). If η(θ) is a linear function, then the curved exponential family can be rewritten as a full exponential family in lower dimensions; otherwise, η(θ) is a curved subset of the η-space, whose curvature (its deviation from planes or straight lines) represents its deviation from full exponential families.

[Figure: the curve η(θ), together with the locally best-fitting circle of radius 1/γ_θ.]

Consider the case when θ is a scalar, so that η(θ) is a curve; the geometric curvature γ_θ of η(θ) at point θ is defined to be the reciprocal of the radius of the circle that fits η(θ) best locally at θ. Accordingly, the curvature of a circle of radius r is the constant 1/r. In general, elementary calculus shows that

$$\gamma_\theta^2 = (\dot\eta_\theta^T\dot\eta_\theta)^{-3}\big(\ddot\eta_\theta^T\ddot\eta_\theta \cdot \dot\eta_\theta^T\dot\eta_\theta - (\ddot\eta_\theta^T\dot\eta_\theta)^2\big).$$

The statistical curvature of a curved exponential family is defined similarly, except equipped with an inner product defined via its Fisher information metric.

Definition 4.2 (Statistical Curvature). Consider a curved exponential family P = {p(x|θ) : θ ∈ Θ}, whose parameter θ is a scalar (q = 1). Let Σ_θ = cov_θ[φ(x)] be the m × m Fisher information of the corresponding full exponential family p(x|η). The statistical curvature of P at θ is defined as

$$\gamma_\theta^2 = (\dot\eta_\theta^T\Sigma_\theta\dot\eta_\theta)^{-3}\big[(\ddot\eta_\theta^T\Sigma_\theta\ddot\eta_\theta)\cdot(\dot\eta_\theta^T\Sigma_\theta\dot\eta_\theta) - (\ddot\eta_\theta^T\Sigma_\theta\dot\eta_\theta)^2\big].$$

The definition can be extended to general multi-dimensional parameters, but requires involved notation; we give the full definition and our general results in the appendix.

Example 4.3 (Bivariate Normal on Ellipse). Consider a bivariate normal distribution with diagonal covariance matrix and mean vector restricted to an ellipse η(θ) = [a cos(θ), b sin(θ)], that is,

$$p(x\mid\theta) \propto \exp\Big[-\tfrac{1}{2}(x_1^2 + x_2^2) + a\cos\theta\, x_1 + b\sin\theta\, x_2\Big], \qquad \theta\in(-\pi,\pi),\; x\in\mathbb{R}^2.$$
We have that Σ_θ equals the identity matrix in this case, and the statistical curvature equals the geometric curvature of the ellipse in Euclidean space, γ_θ = ab(a² sin²(θ) + b² cos²(θ))^{−3/2}.

The statistical curvature was originally defined by Efron (1975) as the minimum amount of information loss when summarizing the sample using first order efficient estimators. Efron (1975) showed that, extending the result of Fisher (1925) and Rao (1963),

$$\lim_{n\to\infty}\big[I^X_{\theta^*} - I^{\hat\theta_{\mathrm{mle}}}_{\theta^*}\big] = \gamma^2_{\theta^*}\, I_{\theta^*}, \qquad (4)$$

where I_{θ*} is the Fisher information (per data instance) of the distribution p(x|θ) at the true parameter θ*, I^X_{θ*} = n I_{θ*} is the total information included in a sample X of size n, and I^{θ̂_mle}_{θ*} is the Fisher information included in θ̂_mle based on X. Intuitively speaking, we lose about γ²_{θ*} units of Fisher information when summarizing the data using the ML estimator. Fisher (1925) also interpreted γ²_{θ*} as the effective number of data instances lost in MLE, easily seen by rewriting I^{θ̂_mle}_{θ*} ≈ (n − γ²_{θ*}) I_{θ*}, as compared to I^X_{θ*} = n I_{θ*}. Moreover, this is the minimum possible information loss in the class of “first order efficient” estimators T(X), those which satisfy the weaker condition lim_{n→∞} I^T_{θ*} / I^X_{θ*} = 1. Rao coined the term “second order efficiency” for this property of the MLE.

The intuition here has direct implications for our distributed setting, since θ̂_f depends on the data only through {θ̂_k}, each of which summarizes its sub-sample with a loss of γ²_{θ*} units of information. The total information loss is d·γ²_{θ*}, in contrast with the global MLE, which only loses γ²_{θ*} overall. Therefore, the additional loss due to the distributed setting is (d − 1)·γ²_{θ*}. We will see that our results in the sequel closely match this intuition.

4.2 Lower Bound

The extra information loss (d − 1)γ²_{θ*} turns out to be the asymptotic lower bound of the mean square error rate n² E_{θ*}[I_{θ*}(θ̂_f − θ̂_mle)²] for any arbitrary combination function f(θ̂_1, ..., θ̂_d).

Theorem 4.4 (Lower Bound).
For an arbitrary measurable function θ̂_f = f(θ̂_1, ..., θ̂_d), we have

$$\liminf_{n\to+\infty}\, n^2\, \mathbb{E}_{\theta^*}\big[\|f(\hat\theta_1,\ldots,\hat\theta_d) - \hat\theta_{\mathrm{mle}}\|^2\big] \;\ge\; (d-1)\,\gamma^2_{\theta^*}\, I^{-1}_{\theta^*}.$$

Sketch of Proof. Note that

$$\mathbb{E}_{\theta^*}\big[\|\hat\theta_f - \hat\theta_{\mathrm{mle}}\|^2\big] = \mathbb{E}_{\theta^*}\big[\|\hat\theta_f - \mathbb{E}_{\theta^*}(\hat\theta_{\mathrm{mle}}\mid\hat\theta_1,\ldots,\hat\theta_d)\|^2\big] + \mathbb{E}_{\theta^*}\big[\|\hat\theta_{\mathrm{mle}} - \mathbb{E}_{\theta^*}(\hat\theta_{\mathrm{mle}}\mid\hat\theta_1,\ldots,\hat\theta_d)\|^2\big]$$
$$\ge \mathbb{E}_{\theta^*}\big[\|\hat\theta_{\mathrm{mle}} - \mathbb{E}_{\theta^*}(\hat\theta_{\mathrm{mle}}\mid\hat\theta_1,\ldots,\hat\theta_d)\|^2\big] = \mathbb{E}_{\theta^*}\big[\mathrm{var}_{\theta^*}(\hat\theta_{\mathrm{mle}}\mid\hat\theta_1,\ldots,\hat\theta_d)\big],$$

where the lower bound is achieved when θ̂_f = E_{θ*}(θ̂_mle | θ̂_1, ..., θ̂_d). The conclusion follows by showing that lim_{n→+∞} n² E_{θ*}[var_{θ*}(θ̂_mle | θ̂_1, ..., θ̂_d)] = (d − 1)γ²_{θ*} I^{-1}_{θ*}; this requires involved asymptotic analysis, and is presented in the Appendix.

[Figure: the projection of θ̂_mle onto the set F of random variables of the form f(θ̂_1, ..., θ̂_d); the squared distance is ≈ (d − 1)·γ²_{θ*}/n².]

The proof above highlights a geometric interpretation via the projection of random variables (e.g., Van der Vaart, 2000). Let F be the set of all random variables of the form f(θ̂_1, ..., θ̂_d). The optimal consensus function should be the projection of θ̂_mle onto F, and the minimum mean square error is the distance between θ̂_mle and F. The conditional expectation θ̂_f = E_{θ*}(θ̂_mle | θ̂_1, ..., θ̂_d) is the exact projection and ideally the best combination function; however, it is intractable to calculate due to its dependence on the unknown true parameter θ*. We show in the sequel that θ̂_KL gives an efficient approximation and achieves the same asymptotic lower bound.

4.3 General Consistent Combination

We now analyze the performance of a general class of θ̂_f, which includes both the KL average θ̂_KL and the linear average θ̂_linear; we show that θ̂_KL matches the lower bound in Theorem 4.4, while θ̂_linear is not optimal even on full exponential families. We start by defining conditions which any “reasonable” f(θ̂_1, ..., θ̂_d) should satisfy.

Definition 4.5. (1). We say f(·) is consistent if, for any θ ∈ Θ, θ_k → θ for all k ∈ [d] implies f(θ_1, ..., θ_d) → θ. (2). f(·) is symmetric if f(θ̂_1, ..., θ̂_d) = f(θ̂_{σ(1)}, ..., θ̂_{σ(d)}) for any permutation σ on [d].
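Condition (2) can always be ensured: given any combination function f, averaging it over all permutations of its arguments yields a symmetric version. A minimal sketch (our own; explicit enumeration is practical only for small d):

```python
from itertools import permutations

def symmetrize(f):
    """Symmetric version of a combination function f, obtained by averaging
    f over all permutations of its arguments (Definition 4.5 (2))."""
    def f_sym(*thetas):
        perms = list(permutations(thetas))
        return sum(f(*p) for p in perms) / len(perms)
    return f_sym

# An asymmetric combination: a weighted average favoring the first argument.
f = lambda a, b, c: 0.5 * a + 0.3 * b + 0.2 * c
f_sym = symmetrize(f)
# f_sym is invariant under permutation of the local estimates and, for this
# particular f, reduces to the plain average (each weight appears in each
# slot equally often).
```

For the mean-square criterion, convexity (Jensen's inequality) implies the symmetrized version performs at least as well as f.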
The consistency condition guarantees that if all the θ̂_k are consistent estimators, then θ̂_f should also be consistent. The symmetry condition is also natural, due to the symmetry of the data partition {X^k}. In fact, if f(·) is not symmetric, one can always construct a symmetric version that performs better or at least the same (see Appendix for details). We are now ready to present the main result.

Theorem 4.6. (1). Consider a consistent and symmetric θ̂_f = f(θ̂_1, ..., θ̂_d) as in Definition 4.5, whose first three orders of derivatives exist. Then, for curved exponential families as in Definition 4.1,

$$\mathbb{E}_{\theta^*}[\hat\theta_f - \hat\theta_{\mathrm{mle}}] = \frac{d-1}{n}\,\beta^f_{\theta^*} + o(n^{-1}),$$
$$\mathbb{E}_{\theta^*}\big[\|\hat\theta_f - \hat\theta_{\mathrm{mle}}\|^2\big] = \frac{d-1}{n^2}\cdot\big[\gamma^2_{\theta^*} I^{-1}_{\theta^*} + (d+1)(\beta^f_{\theta^*})^2\big] + o(n^{-2}),$$

where β^f_{θ*} is a term that depends on the choice of the combination function f(·). Note that the mean square error is consistent with the lower bound in Theorem 4.4, and is tight if β^f_{θ*} = 0.

(2). The KL average θ̂_KL has β^f_{θ*} = 0, and hence achieves the minimum bias and mean square error,

$$\mathbb{E}_{\theta^*}[\hat\theta_{\mathrm{KL}} - \hat\theta_{\mathrm{mle}}] = o(n^{-1}), \qquad \mathbb{E}_{\theta^*}\big[\|\hat\theta_{\mathrm{KL}} - \hat\theta_{\mathrm{mle}}\|^2\big] = \frac{d-1}{n^2}\cdot\gamma^2_{\theta^*} I^{-1}_{\theta^*} + o(n^{-2}).$$

In particular, note that the bias of θ̂_KL is smaller in magnitude than that of a general θ̂_f with β^f_{θ*} ≠ 0.

(4). The linear averaging θ̂_linear, however, does not achieve the lower bound in general. We have

$$\beta^{\mathrm{linear}}_{\theta^*} = I^{-2}_{\theta^*}\Big(\ddot\eta^T_{\theta^*}\Sigma_{\theta^*}\dot\eta_{\theta^*} + \frac{1}{2}\,\mathbb{E}_{\theta^*}\Big[\frac{\partial^3 \log p(x\mid\theta^*)}{\partial\theta^3}\Big]\Big),$$

which is in general non-zero even for full exponential families.

(5). The MSE w.r.t. the global MLE θ̂_mle can be related to the MSE w.r.t. the true parameter θ*, by

$$\mathbb{E}_{\theta^*}\big[\|\hat\theta_{\mathrm{KL}} - \theta^*\|^2\big] = \mathbb{E}_{\theta^*}\big[\|\hat\theta_{\mathrm{mle}} - \theta^*\|^2\big] + \frac{d-1}{n^2}\cdot\gamma^2_{\theta^*} I^{-1}_{\theta^*} + o(n^{-2}),$$
$$\mathbb{E}_{\theta^*}\big[\|\hat\theta_{\mathrm{linear}} - \theta^*\|^2\big] = \mathbb{E}_{\theta^*}\big[\|\hat\theta_{\mathrm{mle}} - \theta^*\|^2\big] + \frac{d-1}{n^2}\cdot\big[\gamma^2_{\theta^*} I^{-1}_{\theta^*} + 2(\beta^{\mathrm{linear}}_{\theta^*})^2\big] + o(n^{-2}).$$

Proof. See Appendix for the proof and the general results for multi-dimensional parameters.

Theorem 4.6 suggests that θ̂_f − θ̂_mle = O_p(1/n) for any consistent f(·), which is smaller in magnitude than θ̂_mle − θ* = O_p(1/√n).
Therefore, any consistent θ̂_f is first order efficient, in that its difference from the global MLE θ̂_mle is asymptotically negligible compared to θ̂_mle − θ*. This also suggests that the KL and linear methods perform roughly the same asymptotically in terms of recovering the true parameter θ*. However, we need to treat this claim with caution because, as we demonstrate empirically, the linear method may degenerate significantly in the non-asymptotic regime or when the conditions in Theorem 4.6 do not hold.

5 Experiments and Practical Issues

We present numerical experiments to demonstrate the correctness of our theoretical analysis. More importantly, we also study empirical properties of the linear and KL combination methods that are not illuminated by the asymptotic analysis. We find that the linear average tends to degrade significantly when its local models (θ̂_k) are not already close, for example due to small sample sizes, heterogeneous data partitions, or non-convex likelihoods (so that different local models find different local optima). In contrast, the KL combination is much more robust in practice.

[Figure 1 panels, plotted against the total sample size n: (a) E(||θ_f − θ̂_mle||²), (b) |E(θ_f − θ̂_mle)|, (c) E(||θ_f − θ*||²), (d) |E(θ_f − θ*)|, comparing Linear-Avg, KL-Avg, the theoretical prediction, and the global MLE.]

Figure 1: Result on the toy model in Example 4.3. (a)-(d): the mean square errors and biases of the linear average θ̂_linear and the KL average θ̂_KL w.r.t. the global MLE θ̂_mle and the true parameter θ*, respectively. The y-axes are shown on logarithmic (base 10) scales.

5.1 Bivariate Normal on Ellipse

We start with the toy model in Example 4.3 to verify our theoretical results. We draw samples from the true model (assuming θ* = π/4, a = 1, b = 5), and partition the samples randomly into 10 subgroups (d = 10). Fig.
1 shows that the empirical biases and MSEs match the theoretical predictions closely when the sample size is large (e.g., n ≥ 250), and θ̂_KL is consistently better than θ̂_linear in terms of recovering both the global MLE and the true parameters. Fig. 1(b) shows that the bias of θ̂_KL decreases faster than that of θ̂_linear, as predicted in Theorem 4.6 (2). Fig. 1(c) shows that all algorithms perform similarly in terms of the asymptotic MSE w.r.t. the true parameter θ*, but the linear average degrades significantly in the non-asymptotic regime (e.g., n < 250).

Model Misspecification. Model misspecification is unavoidable in practice, and may create multiple local modes in the likelihood objective, leading to poor behavior from the linear average. We illustrate this phenomenon using the toy model in Example 4.3, assuming the true model is N([0, 1/2], I_{2×2}), outside of the assumed parametric family. This is illustrated in the figure at right [Figure: the ellipse represents the parametric family, the black square denotes the true model, and two red circles mark its projections at θ = ±π/2]. The MLE will concentrate on the projection of the true model onto the ellipse, in one of the two locations (θ = ±π/2) indicated by the red circles. Depending on the random data sample, the global MLE will concentrate on one or the other of these two values; see Fig. 2(a). Given a sufficient number of samples (n > 250), the probability that the MLE is at θ ≈ −π/2 (the less favorable mode) goes to zero. Fig. 2(b) shows that KL averaging mimics the bi-modal distribution of the global MLE across data samples; the less likely mode vanishes slightly more slowly. In contrast, the linear average takes the arithmetic average of local models from both of these two local modes, giving unreasonable parameter estimates that are close to neither (Fig. 2(c)).
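The toy experiment of this subsection can be reproduced in a few lines. The sketch below is our own simplified version (grid search in place of a proper optimizer); it uses the fact that, with identity covariance, the MLE on the ellipse is the projection of the sample mean onto η(Θ), and KL-averaging reduces to projecting the average of the local η(θ̂_k) back onto the ellipse, since the KL divergence between two identity-covariance Gaussians is half the squared distance between their means:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, theta_star, d, n = 1.0, 5.0, np.pi / 4, 10, 1000

def eta(t):
    # mean vector on the ellipse
    return np.stack([a * np.cos(t), b * np.sin(t)], axis=-1)

grid = np.linspace(-np.pi, np.pi, 20001)

def mle_on_ellipse(x):
    # With identity covariance, the log-likelihood is maximized by
    # minimizing ||mean(x) - eta(theta)||^2; we search over a fine grid.
    m = x.mean(axis=0)
    return grid[np.argmin(((eta(grid) - m) ** 2).sum(axis=1))]

x = rng.normal(size=(n, 2)) + eta(theta_star)
local = [mle_on_ellipse(s) for s in np.split(x, d)]

theta_linear = np.mean(local)
# KL-averaging: project the average of the local eta's back onto the ellipse.
eta_bar = eta(np.array(local)).mean(axis=0)
theta_kl = grid[np.argmin(((eta(grid) - eta_bar) ** 2).sum(axis=1))]
theta_mle = mle_on_ellipse(x)
```

All three estimates land near θ* = π/4 here; the differences between the combination methods emerge in the biases and MSEs over repeated trials, as in Fig. 1.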
Figure 2: Result on the toy model in Example 4.3 with model misspecification: scatter plots of the estimated parameters vs. the total sample size n (with 10,000 random trials for each fixed n), for (a) the global MLE ˆθmle, (b) the KL average ˆθKL, and (c) the linear average ˆθlinear. The inside figures are the densities of the estimated parameters with fixed n = 10. Both the global MLE and the KL average concentrate on two locations (±π/2), and the less favorable one (−π/2) vanishes when the sample sizes are large (e.g., n > 250). In contrast, the linear approach averages local MLEs from the two modes, giving unreasonable estimates spread across the full interval. Figure 3: Learning Gaussian mixture models on MNIST: training and test log-likelihoods of different methods with varying training size n, for (a) training LL and (b) test LL under a random partition, and (c) training LL and (d) test LL under a label-wise partition. In (a)-(b), the data are partitioned into 10 sub-groups uniformly at random (ensuring sub-samples are i.i.d.); in (c)-(d) the data are partitioned according to their digit labels. The number of mixture components is fixed to be 10. Figure 4: Learning Gaussian mixture models on the YearPredictionMSD data set.
The data are randomly partitioned into 10 sub-groups, and we use 10 mixture components. 5.2 Gaussian Mixture Models on Real Datasets We next consider learning Gaussian mixture models. Because component indexes may be arbitrarily switched, naïve linear averaging is problematic; we consider a matched linear average that first matches indices by minimizing the sum of the symmetric KL divergences of the different mixture components. The KL average is also difficult to calculate exactly, since the KL divergence between Gaussian mixtures is intractable. We approximate the KL average using Monte Carlo sampling (with 500 samples per local model), corresponding to the parametric bootstrap discussed in Section 2. We experiment on the MNIST dataset and the YearPredictionMSD dataset in the UCI repository, where the training data is partitioned into 10 sub-groups randomly and evenly. In both cases, we use the original training/test split; we use the full testing set, and vary the number of training examples n by randomly sub-sampling from the full training set (averaging over 100 trials). We take the first 100 principal components when using MNIST. Fig. 3(a)-(b) and 4(a)-(b) show the training and test likelihoods. As a baseline, we also show the average of the log-likelihoods of the local models (marked as local MLEs in the figures); this corresponds to randomly selecting a local model as the combined model. We see that the KL average tends to perform as well as the global MLE, and remains stable even with small sample sizes. The naïve linear average performs badly even with large sample sizes. The matched linear average performs as badly as the naïve linear average when the sample size is small, but improves toward the global MLE as the sample size increases. For MNIST, we also consider a severely heterogeneous data partition by splitting the images into 10 groups according to their digit labels.
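The index-matching step used by the matched linear average (aligning mixture components across local models by minimizing the summed symmetric KL divergence before averaging) can be sketched as follows. This is a hedged illustration for one-dimensional Gaussian components with made-up parameters; the paper's experiments match full multivariate components, and the brute-force permutation search here is our simplification:

```python
import itertools
import math

def kl_gauss(m1, v1, m2, v2):
    """KL( N(m1, v1) || N(m2, v2) ) for 1-D Gaussians with variances v1, v2."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def match_components(ref, other):
    """Reorder `other`'s components (a list of (mean, var) pairs) to minimize
    the summed symmetric KL divergence to `ref`'s components."""
    def sym_kl(c1, c2):
        return kl_gauss(*c1, *c2) + kl_gauss(*c2, *c1)

    best = min(itertools.permutations(range(len(other))),
               key=lambda p: sum(sym_kl(ref[i], other[p[i]])
                                 for i in range(len(ref))))
    return [other[i] for i in best]

# Two hypothetical local models whose components agree up to index switching.
model_a = [(0.0, 1.0), (5.0, 2.0), (-3.0, 0.5)]
model_b = [(5.1, 2.1), (-2.9, 0.6), (0.1, 1.1)]
matched = match_components(model_a, model_b)
# Matched linear average of the component parameters:
avg = [((ma + mb) / 2, (va + vb) / 2)
       for (ma, va), (mb, vb) in zip(model_a, matched)]
```

The factorial-time permutation search becomes expensive as the number of components grows; in practice a bipartite assignment solver (e.g., the Hungarian algorithm) would be used, and mixture weights would be matched and averaged the same way.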
In this setup, each partition learns a local model only over its own digit, with no information about the other digits. Fig. 3(c)-(d) shows that the KL average still performs as well as the global MLE, but both the naïve and matched linear averages are much worse even with large sample sizes, due to the dissimilarity of the local models. 6 Conclusion and Future Directions We study communication-efficient algorithms for learning generative models with distributed data. Analyzing both a common linear averaging technique and a less common KL-averaging technique provides both theoretical and empirical insights. Our analysis opens many important future directions, including extensions to high-dimensional inference and efficient approximations for complex machine learning models, such as LDA and neural networks. Acknowledgements. This work was sponsored in part by NSF grants IIS-1065618 and IIS-1254071, and the US Air Force under Contract No. FA8750-14-C-0011 under DARPA's PPAML program.
A* Sampling Chris J. Maddison Dept. of Computer Science University of Toronto cmaddis@cs.toronto.edu Daniel Tarlow, Tom Minka Microsoft Research {dtarlow,minka}@microsoft.com Abstract The problem of drawing samples from a discrete distribution can be converted into a discrete optimization problem [1, 2, 3, 4]. In this work, we show how sampling from a continuous distribution can be converted into an optimization problem over continuous space. Central to the method is a stochastic process recently described in mathematical statistics that we call the Gumbel process. We present a new construction of the Gumbel process and A* Sampling, a practical generic sampling algorithm that searches for the maximum of a Gumbel process using A* search. We analyze the correctness and convergence time of A* Sampling and demonstrate empirically that it makes more efficient use of bound and likelihood evaluations than the most closely related adaptive rejection sampling-based algorithms. 1 Introduction Drawing samples from arbitrary probability distributions is a core problem in statistics and machine learning. Sampling methods are used widely when training, evaluating, and predicting with probabilistic models. In this work, we introduce a generic sampling algorithm that returns exact independent samples from a distribution of interest. This line of work is important as we seek to include probabilistic models as subcomponents in larger systems, and as we seek to build probabilistic modelling tools that are usable by non-experts; in these cases, guaranteeing the quality of inference is highly desirable. There are a range of existing approaches for exact sampling. Some are specialized to specific distributions [5], but exact generic methods are based either on (adaptive) rejection sampling [6, 7, 8] or Markov chain Monte Carlo (MCMC) methods where convergence to the stationary distribution can be guaranteed [9, 10, 11]. This work approaches the problem from a different perspective.
Specifically, it is inspired by an algorithm for sampling from a discrete distribution that is known as the Gumbel-Max trick. The algorithm works by adding independent Gumbel perturbations to each configuration of a discrete negative energy function and returning the argmax configuration of the perturbed negative energy function. The result is an exact sample from the corresponding Gibbs distribution. Previous work [1, 3] has used this property to motivate samplers based on optimizing random energy functions, but has been forced to resort to approximate sampling due to the fact that in structured output spaces, exact sampling appears to require instantiating exponentially many Gumbel perturbations. Our first key observation is that we can apply the Gumbel-Max trick without instantiating all of the (possibly exponentially many) Gumbel perturbations. The same basic idea then allows us to extend the Gumbel-Max trick to continuous spaces where there will be infinitely many independent perturbations. Intuitively, for any given random energy function, there are many perturbation values that are irrelevant to determining the argmax so long as we have an upper bound on their values. We will show how to instantiate the relevant ones and bound the irrelevant ones, allowing us to find the argmax — and thus an exact sample. There are a number of challenges that must be overcome along the way, which are addressed in this work. First, what does it mean to independently perturb space in a way analogous to perturbations in the Gumbel-Max trick? We introduce the Gumbel process, a special case of a stochastic process recently defined in mathematical statistics [12], which generalizes the notion of perturbation over space. Second, we need a method for working with a Gumbel process that does not require instantiating infinitely many random variables. This leads to our novel construction of the Gumbel process, which draws perturbations according to a top-down ordering of their values.
Just as the stick-breaking construction of the Dirichlet process gives insight into algorithms for the Dirichlet process, our construction gives insight into algorithms for the Gumbel process. We demonstrate this by developing A* sampling, which leverages the construction to draw samples from arbitrary continuous distributions. We study the relationship between A* sampling and adaptive rejection sampling-based methods and identify a key difference that leads to more efficient use of bound and likelihood computations. We investigate the behaviour of A* sampling on a variety of illustrative and challenging problems. 2 The Gumbel Process The Gumbel-Max trick is an algorithm for sampling from a categorical distribution over classes i ∈ {1, . . . , n} with probability proportional to exp(φ(i)). The algorithm proceeds by adding independent Gumbel-distributed noise to the log-unnormalized mass φ(i) and returns the optimal class of the perturbed distribution. In more detail, G ∼ Gumbel(m) is a Gumbel with location m if P(G ≤ g) = exp(−exp(−g + m)). The Gumbel-Max trick follows from the structure of Gumbel distributions and basic properties of order statistics; if the G(i) are i.i.d. Gumbel(0), then argmax_i {G(i) + φ(i)} ∼ exp(φ(i)) / Σ_i exp(φ(i)). Further, for any B ⊆ {1, . . . , n}, max_{i∈B} {G(i) + φ(i)} ∼ Gumbel(log Σ_{i∈B} exp(φ(i))) (1) and argmax_{i∈B} {G(i) + φ(i)} ∼ exp(φ(i)) / Σ_{i∈B} exp(φ(i)) (2). Eq. 1 is known as max-stability: the highest order statistic of a sample of independent Gumbels also has a Gumbel distribution, with a location that is the log partition function [13]. Eq. 2 is a consequence of the fact that Gumbels satisfy Luce's choice axiom [14]. Moreover, the max and argmax are independent random variables; see Appendix for proofs. We would like to generalize this interpretation to continuous distributions as maximizing over the perturbation of a density p(x) ∝ exp(φ(x)) on R^d.
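The discrete identities in Eqs. 1 and 2 are easy to verify numerically. The sketch below, with arbitrary illustrative φ values, draws categorical samples by perturbing the log-unnormalized masses with Gumbel(0) noise and taking the argmax, then compares the empirical frequencies against the softmax probabilities exp(φ(i)) / Σ_i exp(φ(i)):

```python
import math
import random

def gumbel_max_sample(phi, rng):
    """Draw one sample from p(i) ∝ exp(phi[i]) by perturb-and-argmax."""
    # G(i) ~ Gumbel(0) via inverse CDF: G = -log(-log U), U ~ Uniform(0, 1)
    return max(range(len(phi)),
               key=lambda i: phi[i] - math.log(-math.log(rng.random())))

rng = random.Random(0)
phi = [0.0, 1.0, 2.0]          # arbitrary log-unnormalized masses
n = 20000
counts = [0] * len(phi)
for _ in range(n):
    counts[gumbel_max_sample(phi, rng)] += 1

z = sum(math.exp(p) for p in phi)
freqs = [c / n for c in counts]
target = [math.exp(p) / z for p in phi]
```

With enough draws, `freqs` should agree with `target` up to Monte Carlo error, illustrating Eq. 2.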
The perturbed density should have properties analogous to the discrete case, namely that the max in B ⊆ R^d should be distributed as Gumbel(log ∫_{x∈B} exp(φ(x))) and the distribution of the argmax in B should be distributed ∝ 1(x ∈ B) exp(φ(x)). The Gumbel process is a generalization satisfying these properties. Definition 1 (adapted from [12]). Let µ(B) be a sigma-finite measure on sample space Ω, B ⊆ Ω measurable, and Gµ(B) a random variable. Gµ = {Gµ(B) | B ⊆ Ω} is a Gumbel process if: 1. (marginal distributions) Gµ(B) ∼ Gumbel(log µ(B)); 2. (independence of disjoint sets) Gµ(B) ⊥ Gµ(B^c); 3. (consistency constraints) for measurable A, B ⊆ Ω, Gµ(A ∪ B) = max(Gµ(A), Gµ(B)). The marginal distributions condition ensures that the Gumbel process satisfies the requirement on the max. The consistency requirement ensures that a realization of a Gumbel process is consistent across space. Together with independence, these ensure the argmax requirement. In particular, if Gµ(B) is the optimal value of some perturbed density restricted to B, then the event that the optimum over Ω is contained in B is equivalent to the event that Gµ(B) ≥ Gµ(B^c). The conditions ensure that P(Gµ(B) ≥ Gµ(B^c)) is a probability measure proportional to µ(B) [12]. Thus, we can use the Gumbel process for a continuous measure µ(B) = ∫_{x∈B} exp(φ(x)) on R^d to model a perturbed density function where the optimum is distributed ∝ exp(φ(x)). Notice that this definition is a generalization of the finite case; if Ω is finite, then the collection Gµ corresponds exactly to maxes over subsets of independent Gumbels. 3 Top-Down Construction for the Gumbel Process While [12] defines and constructs a general class of stochastic processes that includes the Gumbel process, the construction that proves their existence gives little insight into how to execute a continuous version of the Gumbel-Max trick.
Here we give an alternative algorithmic construction that will form the foundation of our practical sampling algorithm. In this section we assume log µ(Ω) can be computed tractably; this assumption will be lifted in Section 4. To explain the construction, we consider the discrete case as an introductory example.

Algorithm 1 Top-Down Construction
input: sample space Ω, measure µ(B) = ∫_B exp(φ) dm
(B1, Q) ← (Ω, Queue)
G1 ∼ Gumbel(log µ(Ω)); X1 ∼ exp(φ(x))/µ(Ω)
Q.push(1); k ← 1
while !Q.empty() do
  p ← Q.pop()
  L, R ← partition(Bp − {Xp})
  for C ∈ {L, R} do
    if C ≠ ∅ then
      k ← k + 1; Bk ← C
      Gk ∼ TruncGumbel(log µ(Bk), Gp)
      Xk ∼ 1(x ∈ Bk) exp(φ(x))/µ(Bk)
      Q.push(k); yield (Gk, Xk)

Suppose Gµ(i) ∼ Gumbel(φ(i)) is a set of independent Gumbel random variables for i ∈ {1, . . . , n}. It would be straightforward to sample the variables, then build a heap of the Gµ(i) values and also have heap nodes store the index i associated with their value. Let Bi be the set of indices that appear in the subtree rooted at the node with index i. A property of the heap is that the root (Gµ(i), i) pair is the max and argmax of the set of Gumbels with index in Bi. The key idea of our construction is to sample the independent set of random variables by instantiating this heap from root to leaves. That is, we will first sample the root node, which is the global max and argmax, then we will recurse, sampling the root's two children conditional upon the root. At the end, we will have sampled a heap full of values and indices; reading off the value associated with each index will yield a draw of independent Gumbels from the target distribution. We sketch an inductive argument. For the base case, sample the max and its index i* using their distributions, which we know from Eq. 1 and Eq. 2. Note the max and argmax are independent. Also let Bi* = {1, . . . , n} be the set of all indices. Now, inductively, suppose we have sampled a partial heap and would like to recurse downward starting at (Gµ(p), p).
Partition the remaining indices to be sampled Bp − {p} into two subsets L and R, and let l ∈ L be the left argmax and r ∈ R be the right argmax. Let [≥p] be the indices that have been sampled already. Then

p(Gµ(l) = g_l, Gµ(r) = g_r, {Gµ(k) = g_k}_{k∈[≥p]} | [≥p]) ∝ p(max_{i∈L} Gµ(i) = g_l) p(max_{i∈R} Gµ(i) = g_r) Π_{k∈[≥p]} p_k(Gµ(k) = g_k) 1(g_k ≥ g_{L(k)} ∧ g_k ≥ g_{R(k)})   (3)

where L(k) and R(k) denote the left and right children of k, and the constraints should only be applied amongst nodes [≥p] ∪ {l, r}. This implies

p(Gµ(l) = g_l, Gµ(r) = g_r | {Gµ(k) = g_k}_{k∈[≥p]}, [≥p]) ∝ p(max_{i∈L} Gµ(i) = g_l) p(max_{i∈R} Gµ(i) = g_r) 1(g_p > g_l) 1(g_p > g_r).   (4)

Eq. 4 is the joint density of two independent Gumbels truncated at Gµ(p). We could sample the children maxes and argmaxes by sampling the independent Gumbels in L and R respectively and computing their maxes, rejecting those that exceed the known value of Gµ(p). Better, the truncated Gumbel distributions can be sampled efficiently via CDF inversion (G ∼ TruncGumbel(φ, b) if G has CDF exp(−exp(−min(g, b) + φ))/exp(−exp(−b + φ)); to sample efficiently, return G = −log(exp(−b − γ + φ) − log(U)) − γ + φ where U ∼ Uniform[0, 1]), and the independent argmaxes within L and R can be sampled using Eq. 2. Note that any choice of partitioning strategy for L and R leads to the same distribution over the set of Gumbel values. The basic structure of this top-down sampling procedure allows us to deal with infinite spaces; we can still generate an infinite descending heap of Gumbels and locations as if we had made a heap from an infinite list. The algorithm (which appears as Algorithm 1) begins by sampling the optimal value G1 ∼ Gumbel(log µ(Ω)) over sample space Ω and its location X1 ∼ exp(φ(x))/µ(Ω). X1 is removed from the sample space, and the remaining sample space is partitioned into L and R. The optimal Gumbel values for L and R are sampled from Gumbels with locations equal to the log measures of their respective sets, but truncated at G1.
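The truncated-Gumbel sampler can be sketched by inverting its CDF directly. This is a hedged reading in the location parameterization, solving exp(−exp(−g + φ) + exp(−b + φ)) = u for g, rather than the authors' exact implementation:

```python
import math
import random

def trunc_gumbel(phi, b, rng):
    """Sample G ~ TruncGumbel(phi, b): a Gumbel with location phi,
    conditioned on G <= b.  Derived by inverting the CDF
    exp(-exp(-min(g, b) + phi)) / exp(-exp(-b + phi))."""
    u = rng.random()
    return phi - math.log(math.exp(phi - b) - math.log(u))

rng = random.Random(1)
b = 0.5
samples = [trunc_gumbel(0.0, b, rng) for _ in range(10000)]
```

Since −log(u) > 0, the argument of the outer log always exceeds exp(φ − b), so every sample lies strictly below the truncation point b, as required.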
The locations are sampled independently from their sets, and the procedure recurses. As in the discrete case, this yields a stream of (Gk, Xk) pairs, which we can think of as being nodes in a heap of the Gk's. If Gµ(x) is the value of the perturbed negative energy at x, then Algorithm 1 instantiates this function at countably many points by setting Gµ(Xk) = Gk. In the discrete case we eventually sample the complete perturbed density, but in the continuous case we simply generate an infinite stream of locations and values. The sense in which Algorithm 1 constructs a Gumbel process is that the collection {max{Gk | Xk ∈ B} | B ⊆ Ω} satisfies Definition 1. The intuition should be provided by the introductory argument; a full proof appears in the Appendix. An important note is that because the Gk's are sampled in descending order along a path in the tree, when the first Xk lands in set B, the value of max{Gk | Xk ∈ B} will not change as the algorithm continues. 4 A* Sampling Figure 1: Illustration of A* sampling.

Algorithm 2 A* Sampling
input: log density i(x), difference o(x), bounding function M(B), and partition
(LB, X*, k) ← (−∞, null, 1)
Q ← PriorityQueue
G1 ∼ Gumbel(log ν(R^d)); X1 ∼ exp(i(x))/ν(R^d)
M1 ← M(R^d)
Q.pushWithPriority(1, G1 + M1)
while !Q.empty() and LB < Q.topPriority() do
  p ← Q.popHighest()
  LBp ← Gp + o(Xp)
  if LB < LBp then
    LB ← LBp; X* ← Xp
  L, R ← partition(Bp, Xp)
  for C ∈ {L, R} do
    if C ≠ ∅ then
      k ← k + 1; Bk ← C
      Gk ∼ TruncGumbel(log ν(Bk), Gp)
      Xk ∼ 1(x ∈ Bk) exp(i(x))/ν(Bk)
      if LB < Gk + Mp then
        Mk ← M(Bk)
        if LB < Gk + Mk then
          Q.pushWithPriority(k, Gk + Mk)
output: (LB, X*)

The Top-Down construction is not executable in general, because it assumes log µ(Ω) can be computed efficiently. A* sampling is an algorithm that executes the Gumbel-Max trick without this assumption by exploiting properties of the Gumbel process. Henceforth, A* sampling refers exclusively to the continuous version.
A* sampling is possible because we can transform one Gumbel process into another by adding the difference in their log densities. Suppose we have two continuous measures µ(B) = ∫_{x∈B} exp(φ(x)) and ν(B) = ∫_{x∈B} exp(i(x)). Let the pairs (Gk, Xk) be draws from the Top-Down construction for Gν. If o(x) = φ(x) − i(x) is bounded, then we can recover Gµ by adding the difference o(Xk) to every Gk; i.e., {max{Gk + o(Xk) | Xk ∈ B} | B ⊆ R^d} is a Gumbel process with measure µ. As an example, if ν were a prior and o(x) a bounded log-likelihood, then we could simulate the Gumbel process corresponding to the posterior by adding o(Xk) to every Gk from a run of the construction for ν. This "linearity" allows us to decompose a target log density function into a tractable i(x) and a boundable o(x). The tractable component is analogous to the proposal distribution in a rejection sampler. A* sampling searches for argmax{Gk + o(Xk)} within the heap of (Gk, Xk) pairs from the Top-Down construction of Gν. The search is an A* procedure: nodes in the search tree correspond to increasingly refined regions in space, and the search is guided by upper and lower bounds that are computed for each region. Lower bounds for region B come from drawing the max Gk and argmax Xk of Gν within B and evaluating Gk + o(Xk). Upper bounds come from the fact that max{Gk + o(Xk) | Xk ∈ B} ≤ max{Gk | Xk ∈ B} + M(B), where M(B) is a bounding function for a region, M(B) ≥ o(x) for all x ∈ B. M(B) is not random and can be implemented using methods from, e.g., convex duality or interval analysis. The first term on the RHS is the Gk value used in the lower bound. The algorithm appears in Algorithm 2, and an execution is illustrated in Fig. 1. The algorithm begins with a global upper bound (dark blue dashed). G1 and X1 are sampled, and the first lower bound LB1 = G1 + o(X1) is computed. Space is split, upper bounds are computed for the new children regions (medium blue dashed), and the new nodes are put on the queue.
The region with the highest upper bound is chosen, the maximum Gumbel in the region, (G2, X2), is sampled, and LB2 is computed. The current region is split at X2 (producing light blue dashed bounds), after which LB2 is greater than the upper bound for any region on the queue, so LB2 is guaranteed to be the max over the infinite tree of Gk + o(Xk). Because max{Gk + o(Xk) | Xk ∈ B} is a Gumbel process with measure µ, this means that X2 is an exact sample from p(x) ∝ exp(φ(x)) and LB2 is an exact sample from Gumbel(log µ(R^d)). Proofs of termination and correctness are in the Appendix. A* Sampling Variants. There are several variants of A* sampling. When more than one sample is desired, bound information can be reused across runs of the sampler. In particular, suppose we have a partition of R^d with bounds on o(x) for each region. A* sampling could use this by running a search independently for each region and returning the max Gumbel. The maximization can be done lazily by using A* search, only expanding nodes in regions that are needed to determine the global maximum. The second variant trades bound computations for likelihood computations by drawing more than one sample from the auxiliary Gumbel process at each node in the search tree. In this way, more lower bounds are computed (costing more likelihood evaluations), but if this leads to better lower bounds, then more regions of space can be pruned, leading to fewer bound evaluations. Finally, an interesting special case of A* sampling can be implemented when o(x) is unimodal in 1D. In this case, at every split of a parent node, one child can immediately be pruned, so the "search" can be executed without a queue. It simply maintains the currently active node and drills down until it has provably found the optimum.
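As a concrete illustration, the following is a hedged one-dimensional sketch of Algorithm 2, specialized to the target p(x) ∝ exp(−x)(1 + x)^(−a) on [0, ∞) used later in Section 6.1, with exp(−x) as the tractable proposal so that o(x) = −a·log(1 + x) is decreasing and a constant bound over an interval [l, h) is simply o(l). It is a simplified reading of the algorithm, not the authors' implementation:

```python
import heapq
import math
import random

def a_star_sample(a, rng):
    """One exact sample from p(x) ∝ exp(-x) * (1 + x)**(-a) on [0, inf)
    via A* sampling: proposal density exp(-x), so o(x) = -a*log(1 + x)."""

    def o(x):
        return -a * math.log1p(x)

    def nu_mass(l, h):                        # proposal mass of [l, h)
        return math.exp(-l) - math.exp(-h)    # math.exp(-inf) == 0.0

    def sample_x(l, h):                       # Exp(1) truncated to [l, h)
        lo, hi = math.exp(-l), math.exp(-h)
        return -math.log(lo - rng.random() * (lo - hi))

    def trunc_gumbel(loc, b):                 # Gumbel(loc) conditioned on <= b
        return loc - math.log(math.exp(loc - b) - math.log(rng.random()))

    def bound(l):                             # o decreasing => sup over [l, h) is o(l)
        return o(l)

    LB, X_star = -math.inf, None
    G1 = -math.log(-math.log(rng.random()))   # Gumbel(log nu([0, inf))) = Gumbel(0)
    heap = [(-(G1 + bound(0.0)), 0.0, math.inf, G1, sample_x(0.0, math.inf))]
    while heap and LB < -heap[0][0]:          # stop once LB beats best upper bound
        _, l, h, Gp, Xp = heapq.heappop(heap)
        if Gp + o(Xp) > LB:                   # lower bound from this region
            LB, X_star = Gp + o(Xp), Xp
        for cl, ch in ((l, Xp), (Xp, h)):     # split the region at Xp
            mass = nu_mass(cl, ch)
            if mass > 0.0:
                Gk = trunc_gumbel(math.log(mass), Gp)
                if Gk + bound(cl) > LB:       # prune intervals that cannot win
                    heapq.heappush(heap, (-(Gk + bound(cl)), cl, ch,
                                          Gk, sample_x(cl, ch)))
    return X_star

rng = random.Random(0)
samples = [a_star_sample(1.0, rng) for _ in range(2000)]
```

Each popped interval contributes a lower bound Gp + o(Xp); children are pushed only if their optimistic value Gk + M(Bk) can still beat the incumbent, and the search stops as soon as the incumbent exceeds the best queued upper bound.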
5 Comparison to Rejection Samplers Our first result relating A* sampling to rejection sampling is that if the same global bound M = M(R^d) is used at all nodes within A* sampling, then the runtime of A* sampling is equivalent to that of standard rejection sampling. That is, the number of iterations is distributed as a Geometric distribution with rate parameter µ(R^d)/(exp(M)ν(R^d)). A proof is in the Appendix as part of the proof of termination. When bounds are refined, A* sampling bears similarity to adaptive rejection sampling-based algorithms. In particular, while it appears only to have been applied in discrete domains, OS* [7] is a general class of adaptive rejection sampling methods that maintain piecewise bounds on the target distribution. If piecewise constant bounds are used (henceforth we assume OS* uses only constant bounds), the procedure can be described as follows: at each step, (1) a region B with bound M(B) is sampled with probability proportional to ν(B) exp(M(B)); (2) a point is drawn from the proposal distribution restricted to the chosen region; (3) standard accept/reject computations are performed using the regional bound; and (4) if the point is rejected, a region is chosen to be split into two, and new bounds are computed for the two regions that were created by the split. This process repeats until a point is accepted. Steps (2) and (4) are performed identically in A* when sampling argmax Gumbel locations and when splitting a parent node. A key difference is how regions are chosen in step (1). In OS*, a region is drawn according to the volume of the region under the proposal. Note that piece selection could be implemented using the Gumbel-Max trick, in which case we would choose the piece with maximum GB + M(B) where GB ∼ Gumbel(log ν(B)). In A* sampling, the region with the highest upper bound is chosen, where the upper bound is GB + M(B).
The difference is that the GB values are reset after each rejection in OS*, while they persist in A* sampling until a sample is returned. The effect of this difference is that A* sampling more tightly couples together where the accepted sample will be and which regions are refined. Unlike OS*, it can go so far as to prune a region from the search, meaning there is zero probability that the returned sample will be from that region, and that region will never be refined further. OS*, on the other hand, is blind towards where the sample that will eventually be accepted comes from, and will on average waste more computation refining regions that ultimately are not useful in drawing the sample. In experiments, we will see that A* consistently dominates OS*, refining the function less while also using fewer likelihood evaluations. This is possible because the persistence inside A* sampling focuses the refinement on the regions that are important for accepting the current sample. Figure 2: (a) Drill-down algorithm performance on p(x) = exp(−x)/(1 + x)^a as a function of a (peakiness). (b) Effect of different bounding strategies as a function of the number of data points; numbers of likelihood and bound evaluations are reported. (c) Results of varying observation noise in several nonlinear regression problems (problem-dependent scaling). 6 Experiments There are three main aims in this section. First, understand the empirical behavior of A* sampling as parameters of the inference problem and o(x) bounds vary. Second, demonstrate generality by showing that A* sampling algorithms can be instantiated in just a few lines of model-specific code by expressing o(x) symbolically, and then using a branch and bound library to automatically compute bounds. Finally, compare to OS* and an MCMC method (slice sampling).
In all experiments, regions in the search trees are hyperrectangles (possibly with infinite extent); to split a region A, choose the dimension with the largest side length and split the dimension at the sampled Xk point. 6.1 Scaling versus Peakiness and Dimension In the first experiment, we sample from p(x) = exp(−x)/(1 + x)^a for x > 0, a > 0, using exp(−x) as the proposal distribution. In this case, o(x) = −a log(1 + x), which is unimodal, so the drill-down variant of A* sampling can be used. As a grows, the function becomes peakier; while this presents significant difficulty for vanilla rejection sampling, the cost to A* is just the cost of locating the peak, which is essentially binary search. Results averaged over 1000 runs appear in Fig. 2 (a). In the second experiment, we run A* sampling on the clutter problem [15], which estimates the mean of a fixed-covariance isotropic Gaussian under the assumption that some points are outliers. We put a Gaussian prior on the inlier mean and set i(x) to be equal to the prior, so o(x) contains just the likelihood terms. To compute bounds on the total log likelihood, we compute upper bounds on the log likelihood of each point independently, then sum up these bounds. We will refer to these as "constant" bounds. In D dimensions, we generated 20 data points with half within [−5, −3]^D and half within [2, 4]^D, which ensures that the posterior is sharply bimodal, making vanilla MCMC quickly inappropriate as D grows. The cost of drawing an exact sample as a function of D (averaged over 100 runs) grows exponentially in D, but the problem remains reasonably tractable as D grows (D = 3 requires 900 likelihood evaluations, D = 4 requires 4000). The analogous OS* algorithm run on the same set of problems requires 16% to 40% more computation on average over the runs. 6.2 Bounding Strategies Here we investigate alternative strategies for bounding o(x) in the case where o(x) is a sum of per-instance log likelihoods.
To allow easy implementation of a variety of bounding strategies, we choose the simple problem of estimating the mean of a 1D Gaussian given N observations. We use three types of bounds: constant bounds, as in the clutter problem; linear bounds, where we compute linear upper bounds on each term of the sum, then sum the linear functions and take the max over the region; and quadratic bounds, which are the same as linear except that quadratic bounds are computed on each term. In this problem, quadratic bounds are tight. We evaluate A* sampling using each of the bounding strategies, varying N. See Fig. 2 (b) for results. For N = 1, all bound types are equivalent when each expands around the same point. For larger N, the looseness of each per-point bound becomes important. The figure shows that, for large N, using linear bounds multiplies the number of evaluations by 3, compared to tight bounds. Using constant bounds multiplies the number of evaluations by O(√N). The Appendix explains why this happens and shows that this behavior is expected for any estimation problem where the width of the posterior shrinks with N. 6.3 Using Generic Interval Bounds Here we study the use of bounds that are derived automatically by means of interval methods [16]. This suggests how A* sampling (or OS*) could be used within a more general purpose probabilistic programming setting. We chose a number of nonlinear regression models inspired by problems in physics, computational ecology, and biology. For each, we use FuncDesigner [17] to symbolically construct o(x) and automatically compute the bounds needed by the samplers. Several expressions for y = f(x) appear in the legend of Fig. 2 (c), where letters a through f denote parameters that we wish to sample. The model in all cases is yn = f(xn) + εn, where n is the data point index and εn is Gaussian noise.
We set uniform priors over a reasonable range for all parameters (see Appendix) and generated a small (N = 3) set of training data from the model so that posteriors are multimodal. The peakiness of the posterior can be controlled by the magnitude of the observation noise; we varied this from large to small to produce problems over a range of difficulties. We use A* sampling to sample from the posterior five times for each model and noise setting and report the average number of likelihood evaluations needed in Fig. 2 (c) (y-axis). To establish the difficulty of the problems, we estimate the expected number of likelihood evaluations needed by a rejection sampler to accept a sample. The savings over rejection sampling are often exponentially large, but they vary per problem and are not necessarily tied to the dimension. In the example where savings are minimal, there are many symmetries in the model, which leads to uninformative bounds. We also compared to OS* on the same class of problems. Here we generated 20 random instances with a fixed intermediate observation noise value for each problem and drew 50 samples, resetting the bounds after each sample. The average cost (heuristically set to # likelihood evaluations plus 2 × # bound evaluations) of OS* for the five models in Fig. 2 (c) was, respectively, 21%, 30%, 11%, 21%, and 27% greater than for A*.

6.4 Robust Bayesian Regression

Here our aim is to do Bayesian inference in a robust linear regression model y_n = w^T x_n + ε_n, where the noise ε_n is distributed as standard Cauchy and w has an isotropic Gaussian prior. Given a dataset D = {x_n, y_n}_{n=1}^N, our goal is to draw samples from the posterior P(w | D). This is a challenging problem because the heavy-tailed noise model can lead to multimodality in the posterior over w. The log likelihood is L(w) = −Σ_n log(1 + (w^T x_n − y_n)²).
We generated N data points with input dimension D in such a way that the posterior is bimodal and symmetric, by setting w* = [2, ..., 2]^T, generating X0 ~ randn(N/2, D) and y0 = X0 w* + 0.1 × randn(N/2), then setting X = [X0; X0] and y = [y0; −y0]. There are then equally-sized modes near w* and −w*. We decompose the posterior into a uniform i(·) within the interval [−10, 10]^D and put all of the prior and likelihood terms into o(·). Bounds are computed per point; in some regions the per-point bounds are linear, and in others they are quadratic. Details appear in the Appendix. We compare to OS*, using two refinement strategies that are discussed in [7]. The first is directly analogous to A* sampling and is the method we have used in the earlier OS* comparisons: when a point is rejected, refine the piece that was proposed from at the sampled point, splitting the dimension with the largest side length. The second method splits the region with the largest probability under the proposal. We ran experiments on several random draws of the data and report performance along the two axes that are the dominant costs: how many bound computations and how many likelihood evaluations were used. To weigh the tradeoff between the two, we did a rough asymptotic calculation of the costs of bounds versus likelihood computations and set the cost of a bound computation to be D + 1 times the cost of a likelihood computation. In the first experiment, we ask each algorithm to draw a single exact sample from the posterior. Here, we also report results for the variants of A* sampling and OS* that trade off likelihood computations for bound computations, as discussed in Section 4. A representative result appears in Fig. 3 (left). Across operating points, A* consistently uses fewer bound evaluations and fewer likelihood evaluations than both OS* refinement strategies.
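The symmetric data construction and the Cauchy log likelihood above can be sketched as follows (a minimal illustration; `make_bimodal_data` is a hypothetical helper name, not from the paper). Because each point (x, y) is paired with its mirror (x, −y), L(w) = L(−w) holds exactly:

```python
import math
import random

def cauchy_loglik(w, X, y):
    """L(w) = -sum_n log(1 + (w . x_n - y_n)^2), the Cauchy-noise
    log likelihood up to additive constants."""
    total = 0.0
    for xn, yn in zip(X, y):
        r = sum(wi * xi for wi, xi in zip(w, xn)) - yn
        total -= math.log(1.0 + r * r)
    return total

def make_bimodal_data(N, D, seed=0):
    """X = [X0; X0], y = [y0; -y0]: the posterior has equal modes
    near w* = [2, ..., 2] and -w*."""
    rng = random.Random(seed)
    w_star = [2.0] * D
    X0 = [[rng.gauss(0.0, 1.0) for _ in range(D)] for _ in range(N // 2)]
    y0 = [sum(a * b for a, b in zip(x, w_star)) + 0.1 * rng.gauss(0.0, 1.0)
          for x in X0]
    return X0 + X0, y0 + [-v for v in y0]
```

The mirror symmetry means no classifier of the two modes is preferred, which is what makes single-mode MCMC samplers struggle on this posterior.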
In the second experiment, we ask each algorithm to draw 200 samples from the posterior and experiment with the variants that reuse bound information across samples. A representative result appears in Fig. 3 (right). Here we see that the extra refinement done by OS* early on allows it to use fewer likelihood evaluations at the expense of more bound computations, but A* sampling operates at a point that is not achievable by OS*. For all of these problems, we ran a random-direction slice sampler [18] that was given 10 times the computational budget that A* sampling used to draw 200 samples. The slice sampler had trouble mixing when D > 1. Across the five runs for D = 2, the sampler switched modes once, and it never switched modes when D > 2.

Figure 3: A* (circles) versus OS* (squares and diamonds) computational costs on Cauchy regression experiments of varying dimension. Squares mark the refinement strategy that splits the node where the rejected point was sampled; diamonds mark the strategy that refines the region with the largest mass under the proposal distribution. Red lines denote lines of equal total computational cost and are spaced on a log scale in 10% increments. Marker color denotes the rate of refinement, ranging from (darkest) refining on every rejection (for OS*) or one lower bound evaluation per node expansion (for A*) to (lightest) refining on 10% of rejections (for OS*) or performing Poisson(1/0.1 − 1) + 1 lower bound evaluations per node expansion (for A*). (left) Cost of drawing a single sample, averaged over 20 random data sets. (right) Drawing 200 samples, averaged over 5 random data sets. Results are similar over a range of N's and D = 1, ..., 4.

7 Discussion

This work answers a natural question: is there a Gumbel-Max trick for continuous spaces, and can it be leveraged to develop tractable algorithms for sampling from continuous distributions?
In the discrete case, recent work on "Perturb and MAP" (P&M) methods [1, 19, 2] that draw samples as the argmaxes of random energy functions has shown value in developing approximate, correlated perturbations. It is natural to think about continuous analogs in which exactness is abandoned in favor of more efficient computation. A question is whether the approximations can be developed in a principled way, as in [3], where a particular form of correlated discrete perturbation was shown to give rise to bounds on the log partition function. Can analogous rigorous approximations be established in the continuous case? We hope this work is a starting point for exploring that question. We do not solve the problem of high dimensions. There are simple examples where bounds become uninformative in high dimensions, such as when sampling a density that is uniform over a hypersphere while using hyperrectangular search regions. In this case, little is gained over vanilla rejection sampling. An open question is whether the split between i(·) and o(·) can be adapted to be node-specific during the search. An adaptive rejection sampler would be able to do this, which would allow leveraging parameter-varying bounds in the proposal distributions. This might be an important degree of freedom to exercise, particularly when scaling up to higher dimensions. There are several possible follow-ons, including the discrete version of A* sampling and evaluating A* sampling as an estimator of the log partition function. In future work, we would like to explore taking advantage of conditional independence structure to perform more intelligent search, hopefully helping the method scale to larger dimensions. Example starting points might be ideas from AND/OR search [20] or branch and bound algorithms that only branch on a subset of dimensions [21].

Acknowledgments

This research was supported by NSERC.
We thank James Martens and Radford Neal for helpful discussions, Elad Mezuman for help developing early ideas related to this work, and Roger Grosse for suggestions that greatly improved this work.

References

[1] G. Papandreou and A. Yuille. Perturb-and-MAP Random Fields: Using Discrete Optimization to Learn and Sample from Energy Models. In ICCV, pages 193–200, November 2011.
[2] Daniel Tarlow, Ryan Prescott Adams, and Richard S. Zemel. Randomized Optimum Models for Structured Prediction. In AISTATS, pages 21–23, 2012.
[3] Tamir Hazan and Tommi S. Jaakkola. On the Partition Function and Random Maximum A-Posteriori Perturbations. In ICML, pages 991–998, 2012.
[4] Stefano Ermon, Carla P. Gomes, Ashish Sabharwal, and Bart Selman. Embed and Project: Discrete Sampling with Universal Hashing. In NIPS, pages 2085–2093, 2013.
[5] George Papandreou and Alan L. Yuille. Gaussian Sampling by Local Perturbations. In NIPS, pages 1858–1866, 2010.
[6] W. R. Gilks and P. Wild. Adaptive Rejection Sampling for Gibbs Sampling. Applied Statistics, 41(2):337–348, 1992.
[7] Marc Dymetman, Guillaume Bouchard, and Simon Carter. The OS* Algorithm: a Joint Approach to Exact Optimization and Sampling. arXiv preprint arXiv:1207.0742, 2012.
[8] V. Mansinghka, D. Roy, E. Jonas, and J. Tenenbaum. Exact and Approximate Sampling by Systematic Stochastic Search. JMLR, 5:400–407, 2009.
[9] James Gary Propp and David Bruce Wilson. Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics. Random Structures and Algorithms, 9(1-2):223–252, 1996.
[10] Antonietta Mira, Jesper Moller, and Gareth O. Roberts. Perfect Slice Samplers. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(3):593–606, 2001.
[11] Faheem Mitha. Perfect Sampling on Continuous State Spaces. PhD thesis, University of North Carolina, Chapel Hill, 2003.
[12] Hannes Malmberg. Random Choice over a Continuous Set of Options.
Master's thesis, Department of Mathematics, Stockholm University, 2013.
[13] E. J. Gumbel and J. Lieblein. Statistical Theory of Extreme Values and Some Practical Applications: a Series of Lectures. US Govt. Print. Office, 1954.
[14] John I. Yellott Jr. The Relationship between Luce's Choice Axiom, Thurstone's Theory of Comparative Judgment, and the Double Exponential Distribution. Journal of Mathematical Psychology, 15(2):109–144, 1977.
[15] Thomas P. Minka. Expectation Propagation for Approximate Bayesian Inference. In UAI, pages 362–369. Morgan Kaufmann Publishers Inc., 2001.
[16] Eldon Hansen and G. William Walster. Global Optimization Using Interval Analysis: Revised and Expanded, volume 264. CRC Press, 2003.
[17] Dmitrey Kroshko. FuncDesigner. http://openopt.org/FuncDesigner, June 2014.
[18] Radford M. Neal. Slice Sampling. Annals of Statistics, pages 705–741, 2003.
[19] Tamir Hazan, Subhransu Maji, and Tommi Jaakkola. On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations. In NIPS, pages 1268–1276, 2013.
[20] Robert Eugeniu Mateescu. AND/OR Search Spaces for Graphical Models. PhD thesis, University of California, 2007.
[21] Manmohan Chandraker and David Kriegman. Globally Optimal Bilinear Programming for Computer Vision Applications. In CVPR, pages 1–8, 2008.
Consistent Binary Classification with Generalized Performance Metrics

Oluwasanmi Koyejo* Department of Psychology, Stanford University sanmi@stanford.edu
Nagarajan Natarajan* Department of Computer Science, University of Texas at Austin naga86@cs.utexas.edu
Pradeep Ravikumar Department of Computer Science, University of Texas at Austin pradeepr@cs.utexas.edu
Inderjit S. Dhillon Department of Computer Science, University of Texas at Austin inderjit@cs.utexas.edu

Abstract

Performance metrics for binary classification are designed to capture tradeoffs between four fundamental population quantities: true positives, false positives, true negatives and false negatives. Despite significant interest from theoretical and applied communities, little is known about either optimal classifiers or consistent algorithms for optimizing binary classification performance metrics beyond a few special cases. We consider a fairly large family of performance metrics given by ratios of linear combinations of the four fundamental population quantities. This family includes many well-known binary classification metrics such as classification accuracy, the AM measure, the F-measure and the Jaccard similarity coefficient as special cases. Our analysis identifies the optimal classifiers as the sign of the thresholded conditional probability of the positive class, with a performance-metric-dependent threshold. The optimal threshold can be constructed using simple plug-in estimators when the performance metric is a linear combination of the population quantities, but alternative techniques are required for the general case. We propose two algorithms for estimating the optimal classifiers, and prove their statistical consistency. Both algorithms are straightforward modifications of standard approaches that address the key challenge of optimal threshold selection, and thus are simple to implement in practice.
The first algorithm combines a plug-in estimate of the conditional probability of the positive class with optimal threshold selection. The second algorithm leverages recent work on calibrated asymmetric surrogate losses to construct candidate classifiers. We present empirical comparisons between these algorithms on benchmark datasets.

*Equal contribution to the work.

1 Introduction

Binary classification performance is often measured using metrics designed to address the shortcomings of classification accuracy. For instance, it is well known that classification accuracy is an inappropriate metric for rare event classification problems such as medical diagnosis, fraud detection, click rate prediction and text retrieval applications [1, 2, 3, 4]. Instead, alternative metrics better tuned to imbalanced classification (such as the F1 measure) are employed. Similarly, cost-sensitive metrics may be useful for addressing asymmetry in the real-world costs associated with specific classes. An important theoretical question concerning metrics employed in binary classification is the characterization of the optimal decision functions. For example, the decision function that maximizes the accuracy metric (or equivalently minimizes the "0-1 loss") is well known to be sign(P(Y = 1|x) − 1/2). A similar result holds for cost-sensitive classification [5]. Recently, [6] showed that the optimal decision function for the F1 measure can also be characterized as sign(P(Y = 1|x) − δ*) for some δ* ∈ (0, 1). As we show in the paper, it is not a coincidence that the optimal decision functions for these different metrics have a similarly simple characterization. We make the observation that the different metrics used in practice belong to a fairly general family of performance metrics given by ratios of linear combinations of the four population quantities associated with the confusion matrix.
We consider a family of performance metrics given by ratios of linear combinations of the four population quantities. Measures in this family include classification accuracy, false positive rate, false discovery rate, precision, the AM measure and the F-measure, among others. Our analysis shows that the optimal classifiers for all such metrics can be characterized as the sign of the thresholded conditional probability of the positive class, with a threshold that depends on the specific metric. This result unifies and generalizes known special cases including the AM measure analysis by Menon et al. [7], and the Fβ measure analysis by Ye et al. [6]. It is known that minimizing (convex) surrogate losses, such as the hinge and the logistic loss, provably also minimizes the underlying 0-1 loss or equivalently maximizes the classification accuracy [8]. This motivates the next question we address in the paper: can one obtain algorithms that (a) can be used in practice for maximizing metrics from our family, and (b) are consistent with respect to the metric? To this end, we propose two algorithms for consistent empirical estimation of decision functions. The first algorithm combines a plug-in estimate of the conditional probability of the positive class with optimal threshold selection. The second leverages the asymmetric surrogate approach of Scott [9] to construct candidate classifiers. Both algorithms are simple modifications of standard approaches that address the key challenge of optimal threshold selection. Our analysis identifies why simple heuristics such as classification using class-weighted loss functions and logistic regression with threshold search are effective practical algorithms for many generalized performance metrics, and furthermore, that when implemented correctly, such apparent heuristics are in fact asymptotically consistent. Related Work. Binary classification accuracy and its cost-sensitive variants have been studied extensively. 
Here we highlight a few of the key results. The seminal work of [8] showed that minimizing certain surrogate loss functions enables us to control the probability of misclassification (the expected 0-1 loss). An appealing corollary of the result is that convex loss functions such as the hinge and logistic losses satisfy the surrogacy conditions, which establishes the statistical consistency of the resulting algorithms. Steinwart [10] extended this work to derive surrogate losses for other scenarios, including asymmetric classification accuracy. More recently, Scott [9] characterized the optimal decision function for the weighted 0-1 loss in cost-sensitive learning and extended the risk bounds of [8] to weighted surrogate loss functions. A similar result regarding the use of a threshold different from 1/2, and appropriately rebalancing the training data in cost-sensitive learning, was shown by [5]. Surrogate regret bounds for proper losses applied to class probability estimation were analyzed by Reid and Williamson [11] for differentiable loss functions. Extensions to the multi-class setting have also been studied (for example, Zhang [12] and Tewari and Bartlett [13]). Analysis of performance metrics beyond classification accuracy is limited. The optimal classifier remains unknown for many binary classification performance metrics of interest, and few results exist for identifying consistent algorithms for optimizing these metrics [7, 6, 14, 15]. Of particular relevance to our work are the AM measure maximization by Menon et al. [7] and the Fβ measure maximization by Ye et al. [6].

2 Generalized Performance Metrics

Let X be either a countable set, or a complete separable metric space equipped with the standard Borel σ-algebra of measurable sets. Let X ∈ X and Y ∈ {0, 1} represent input and output random variables respectively. Further, let Θ represent the set of all classifiers, Θ = {θ : X → [0, 1]}.
We assume the existence of a fixed unknown distribution P, and data is generated as iid samples (X, Y) ~ P. Define the quantities π = P(Y = 1) and γ(θ) = P(θ = 1). The components of the confusion matrix are the fundamental population quantities for binary classification. They are the true positives (TP), false positives (FP), true negatives (TN) and false negatives (FN), given by:

TP(θ; P) = P(Y = 1, θ = 1),  FP(θ; P) = P(Y = 0, θ = 1),   (1)
FN(θ; P) = P(Y = 1, θ = 0),  TN(θ; P) = P(Y = 0, θ = 0).

These quantities may be further decomposed as:

FP(θ; P) = γ(θ) − TP(θ),  FN(θ; P) = π − TP(θ),  TN(θ; P) = 1 − γ(θ) − π + TP(θ).   (2)

Let L : Θ × P → R be a performance metric of interest. Without loss of generality, we assume that L is a utility metric, so that larger values are better. The Bayes utility L* is the optimal value of the performance metric, i.e., L* = sup_{θ ∈ Θ} L(θ; P). The Bayes classifier θ* is the classifier that optimizes the performance metric, so L* = L(θ*), where θ* = arg max_{θ ∈ Θ} L(θ; P). We consider a family of classification metrics computed as the ratio of linear combinations of the fundamental population quantities (1). In particular, given constants (representing costs or weights) {a11, a10, a01, a00, a0} and {b11, b10, b01, b00, b0}, we consider the measure:

L(θ; P) = (a0 + a11·TP + a10·FP + a01·FN + a00·TN) / (b0 + b11·TP + b10·FP + b01·FN + b00·TN),   (3)

where, for clarity, we have suppressed the dependence of the population quantities on θ and P. Examples of performance metrics in this family include the AM measure [7], the Fβ measure [6], the Jaccard similarity coefficient (JAC) [16] and Weighted Accuracy (WA):

AM = (1/2)(TP/π + TN/(1−π)) = ((1−π)·TP + π·TN) / (2π(1−π)),
Fβ = (1+β²)·TP / ((1+β²)·TP + β²·FN + FP) = (1+β²)·TP / (β²·π + γ),
JAC = TP / (TP + FN + FP) = TP / (π + FP) = TP / (γ + FN),
WA = (w1·TP + w2·TN) / (w1·TP + w2·TN + w3·FP + w4·FN).

Note that we allow the constants to depend on P.
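As a quick numerical check of the decompositions (2), a metric of the form (3) can be evaluated either from all four confusion quantities or from (TP, γ, π) alone. The sketch below (our own illustration, with F1 as the Fβ measure at β = 1) shows the two routes agree:

```python
def metric_from_confusion(TP, FP, FN, TN, a, b):
    """Form (3); a = (a0, a11, a10, a01, a00) and b likewise."""
    num = a[0] + a[1] * TP + a[2] * FP + a[3] * FN + a[4] * TN
    den = b[0] + b[1] * TP + b[2] * FP + b[3] * FN + b[4] * TN
    return num / den

def metric_from_tp_gamma(TP, gamma, pi, a, b):
    """Same metric after substituting the decompositions (2):
    FP = gamma - TP, FN = pi - TP, TN = 1 - gamma - pi + TP."""
    FP, FN, TN = gamma - TP, pi - TP, 1.0 - gamma - pi + TP
    return metric_from_confusion(TP, FP, FN, TN, a, b)

# F1 = 2*TP / (2*TP + FN + FP): a11 = 2; b11 = 2, b10 = b01 = 1; rest 0.
a_f1 = (0.0, 2.0, 0.0, 0.0, 0.0)
b_f1 = (0.0, 2.0, 1.0, 1.0, 0.0)
```

With π = 0.3, γ = 0.35 and TP = 0.2, both routes give 2·0.2/(0.3 + 0.35), illustrating that the metric depends on the classifier only through TP(θ) and γ(θ).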
Other examples in this class include commonly used ratios such as the true positive rate (also known as recall, TPR), true negative rate (TNR), precision (Prec), false negative rate (FNR) and negative predictive value (NPV):

TPR = TP/(TP + FN),  TNR = TN/(FP + TN),  Prec = TP/(TP + FP),  FNR = FN/(FN + TP),  NPV = TN/(TN + FN).

Interested readers are referred to [17] for a list of additional metrics in this class. By decomposing the population measures (1) using (2), we see that any performance metric in the family (3) has the equivalent representation:

L(θ) = (c0 + c1·TP(θ) + c2·γ(θ)) / (d0 + d1·TP(θ) + d2·γ(θ))   (4)

with the constants:

c0 = a01·π + a00 − a00·π + a0,  c1 = a11 − a10 − a01 + a00,  c2 = a10 − a00,
d0 = b01·π + b00 − b00·π + b0,  d1 = b11 − b10 − b01 + b00,  d2 = b10 − b00.

Thus, it is clear from (4) that the family of performance metrics depends on the classifier θ only through the quantities TP(θ) and γ(θ).

Optimal Classifier. We now characterize the optimal classifier for the family of performance metrics defined in (4). Let ν represent the dominating measure on X. For the rest of this manuscript, we make the following assumption:

Assumption 1. The marginal distribution P(X) is absolutely continuous with respect to the dominating measure ν on X, so there exists a density µ that satisfies dP = µ dν.

To simplify notation, we use the standard dν(x) = dx. We also define the conditional probability η_x = P(Y = 1|X = x). Applying Assumption 1, we can expand the terms TP(θ) = ∫_{x∈X} η_x θ(x) µ(x) dx and γ(θ) = ∫_{x∈X} θ(x) µ(x) dx, so the performance metric (4) may be represented as:

L(θ; P) = (c0 + ∫_{x∈X} (c1·η_x + c2) θ(x) µ(x) dx) / (d0 + ∫_{x∈X} (d1·η_x + d2) θ(x) µ(x) dx).

Our first main result identifies the Bayes classifier for all utility functions in the family (3), showing that they take the form θ*(x) = sign(η_x − δ*), where δ* is a metric-dependent threshold, and the sign function is given by sign : R → {0, 1} with sign(t) = 1 if t ≥ 0 and sign(t) = 0 otherwise.

Theorem 2.
Let P be a distribution on X × [0, 1] that satisfies Assumption 1, and let L be a performance metric in the family (3). Given the constants {c0, c1, c2} and {d0, d1, d2}, define:

δ* = (d2·L* − c2) / (c1 − d1·L*).   (5)

1. When c1 > d1·L*, the Bayes classifier θ* takes the form θ*(x) = sign(η_x − δ*).
2. When c1 < d1·L*, the Bayes classifier takes the form θ*(x) = sign(δ* − η_x).

The proof of the theorem involves examining the first-order optimality condition (see Appendix B).

Remark 3. The specific form of the optimal classifier depends on the sign of c1 − d1·L*, and L* is often unknown. In practice, one can often estimate loose upper and lower bounds on L* to determine the classifier.

A number of useful results can be evaluated directly as instances of Theorem 2. For the Fβ measure, we have c1 = 1 + β² and d2 = 1, with all other constants zero. Thus, δ*_{Fβ} = L*/(1+β²). This matches the optimal threshold for the F1 metric specified by Zhao et al. [14]. For precision, we have c1 = 1, d2 = 1 and all other constants zero, so δ*_{Prec} = L*. This clarifies the observation that, in practice, precision can be maximized by predicting only high-confidence positives. For the true positive rate (recall), we have c1 = 1, d0 = π and all other constants zero, so δ*_{TPR} = 0, recovering the known result that recall is maximized by predicting all examples as positives. For the Jaccard similarity coefficient, c1 = 1, d1 = −1, d2 = 1, d0 = π and all other constants zero, so δ*_{JAC} = L*/(1+L*). When d1 = d2 = 0, the generalized metric is simply a linear combination of the four fundamental quantities. With this form, we can recover the optimal classifier outlined by Elkan [5] for cost-sensitive classification.

Corollary 4. Let P be a distribution on X × [0, 1] that satisfies Assumption 1, and let L be a performance metric in the family (3). Given the constants {c0, c1, c2} and {d0, d1 = 0, d2 = 0}, the optimal threshold (5) is δ* = −c2/c1.
Classification accuracy is in this family, with c1 = 2, c2 = −1, and it is well known that δ*_{ACC} = 1/2. Another case of interest is the AM metric, where c1 = 1, c2 = −π, so δ*_{AM} = π, as shown in Menon et al. [7].

3 Algorithms

The characterization of the Bayes classifier for the family of performance metrics (4) given in Theorem 2 enables the design of practical classification algorithms with strong theoretical properties. In particular, the algorithms that we propose are intuitive and easy to implement. Despite their simplicity, we show that the proposed algorithms are consistent with respect to the measure of interest, a desirable property for a classification algorithm. We begin with a description of the algorithms, followed by a detailed analysis of consistency. Let {Xi, Yi}_{i=1}^n denote iid training instances drawn from a fixed unknown distribution P. For a given θ : X → {0, 1}, we define the following empirical quantities based on their population analogues: TP_n(θ) = (1/n)·Σ_{i=1}^n θ(Xi)·Yi and γ_n(θ) = (1/n)·Σ_{i=1}^n θ(Xi). It is clear that TP_n(θ) → TP(θ; P) and γ_n(θ) → γ(θ; P) as n → ∞. Consider the empirical measure:

L_n(θ) = (c1·TP_n(θ) + c2·γ_n(θ) + c0) / (d1·TP_n(θ) + d2·γ_n(θ) + d0),   (6)

corresponding to the population measure L(θ; P) in (4). It is expected that L_n(θ) will be close to L(θ; P) when the sample is sufficiently large (see Proposition 8). For the rest of this manuscript, we assume that L* ≤ c1/d1, so θ*(x) = sign(η_x − δ*); the case where L* > c1/d1 is solved identically. Our first approach (Two-Step Expected Utility Maximization) is quite intuitive (Algorithm 1): obtain an estimator η̂_x for η_x = P(Y = 1|x) by performing ERM on the sample using a proper loss function [11]; then maximize L_n defined in (6) with respect to the threshold δ ∈ (0, 1). The optimization required in the third step is one-dimensional, thus a global minimizer can be computed efficiently in many cases [18].
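The one-dimensional threshold search over the empirical measure (6) can be sketched as follows. This is a minimal illustration under our own assumptions (probability estimates η̂ already fit on a separate split; the observed η̂ values serve as the candidate thresholds), not the paper's implementation:

```python
def empirical_L(preds, ys, c, d):
    """Empirical measure (6) from 0/1 predictions and labels;
    c = (c0, c1, c2), d = (d0, d1, d2)."""
    n = len(ys)
    TPn = sum(p * y for p, y in zip(preds, ys)) / n
    gamman = sum(preds) / n
    return (c[0] + c[1] * TPn + c[2] * gamman) / (d[0] + d[1] * TPn + d[2] * gamman)

def tune_threshold(eta_hat, ys, c, d):
    """Pick the threshold on estimated probabilities that maximizes L_n;
    only the distinct eta_hat values need to be tried."""
    best = None
    for delta in sorted(set(eta_hat)):
        preds = [1 if e >= delta else 0 for e in eta_hat]
        Ln = empirical_L(preds, ys, c, d)
        if best is None or Ln > best[1]:
            best = (delta, Ln)
    return best
```

For F1, for example, the constants would be c = (0, 2, 0) and d = (π̂, 0, 1), with π̂ the empirical positive rate on the validation split.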
In experiments, we use (regularized) logistic regression on a training sample to obtain η̂.

Algorithm 1: Two-Step EUM
Input: Training examples S = {Xi, Yi}_{i=1}^n and the utility measure L.
1. Split the training data S into two sets S1 and S2.
2. Estimate η̂_x using S1; define θ̂_δ = sign(η̂_x − δ).
3. Compute δ̂ = arg max_{δ ∈ (0,1)} L_n(θ̂_δ) on S2.
Return: θ̂_{δ̂}

Our second approach (Weighted Empirical Risk Minimization) is based on the observation that empirical risk minimization (ERM) with suitably weighted loss functions yields a classifier that thresholds η_x appropriately (Algorithm 2). Given a convex surrogate ℓ(t, y) of the 0-1 loss, where t is a real-valued prediction and y ∈ {0, 1}, the δ-weighted loss is given by [9]:

ℓ_δ(t, y) = (1 − δ)·1{y=1}·ℓ(t, 1) + δ·1{y=0}·ℓ(t, 0).

Denote the set of real-valued functions as Φ; we then define θ̂_δ via:

φ̂_δ = arg min_{φ ∈ Φ} (1/n)·Σ_{i=1}^n ℓ_δ(φ(Xi), Yi)   (7)

and set θ̂_δ(x) = sign(φ̂_δ(x)). Scott [9] showed that such an estimated θ̂_δ is consistent with θ_δ = sign(η_x − δ). With the classifier defined, maximize L_n defined in (6) with respect to the threshold δ ∈ (0, 1).

Algorithm 2: Weighted ERM
Input: Training examples S = {Xi, Yi}_{i=1}^n and the utility measure L.
1. Split the training data S into two sets S1 and S2.
2. Compute δ̂ = arg max_{δ ∈ (0,1)} L_n(θ̂_δ) on S2. Sub-algorithm: define θ̂_δ(x) = sign(φ̂_δ(x)), where φ̂_δ(x) is computed using (7) on S1.
Return: θ̂_{δ̂}

Remark 5. When d1 = d2 = 0, the optimal threshold does not depend on L* (Corollary 4). We may then employ simple sample-based plug-in estimates δ̂_S. A benefit of using such plug-in estimates is that the classification algorithms can be simplified while maintaining consistency. Given such a sample-based plug-in estimate δ̂_S, Algorithm 1 reduces to estimating η̂_x and then setting θ̂_{δ̂_S} = sign(η̂_x − δ̂_S); Algorithm 2 reduces to a single ERM (7) to estimate φ̂_{δ̂_S}(x) and then setting θ̂_{δ̂_S}(x) = sign(φ̂_{δ̂_S}(x)). In the case of the AM measure, the threshold is given by δ* = π.
A consistent estimator for π is all that is required (see [7]).

3.1 Consistency of the proposed algorithms

An algorithm is said to be L-consistent if the learned classifier θ̂ satisfies L* − L(θ̂) →p 0, i.e., for every ε > 0, P(|L* − L(θ̂)| < ε) → 1 as n → ∞. We begin the analysis with the simplest case, when δ* is independent of L* (Corollary 4). The following proposition, which generalizes Lemma 1 of [7], shows that maximizing L is equivalent to minimizing the δ*-weighted risk. As a consequence, it suffices to minimize a suitable surrogate loss ℓ_{δ*} on the training data to guarantee L-consistency.

Proposition 6. Assume δ* ∈ (0, 1) and δ* is independent of L*, but may depend on the distribution P. Define the δ*-weighted risk of a classifier θ as

R_{δ*}(θ) = E_{(x,y)~P}[(1 − δ*)·1{y=1}·1{θ(x)=0} + δ*·1{y=0}·1{θ(x)=1}];

then R_{δ*}(θ) − min_θ R_{δ*}(θ) = (1/c1)·(L* − L(θ)).

The proof is simple, and we defer it to Appendix B. The key consequence of Proposition 6 is that if we know δ*, then simply optimizing a weighted surrogate loss as detailed in the proposition suffices to obtain a consistent classifier. In the more practical setting where δ* is not known exactly, we can compute a sample-based estimate δ̂_S. We briefly mentioned in the previous section how the proposed Algorithms 1 and 2 simplify in this case. Using a plug-in estimate δ̂_S such that δ̂_S →p δ* in the algorithms directly guarantees consistency, under mild assumptions on P (see Appendix A for details). The proof for this setting essentially follows the arguments in [7], given Proposition 6. Now, we turn to the general case, i.e., when L is an arbitrary measure in the class (4) such that δ* is difficult to estimate directly. In this case, both proposed algorithms estimate δ to optimize the empirical measure L_n. We employ the following proposition, which establishes bounds on L.

Proposition 7. Let the constants aij, bij for i, j ∈ {0, 1}, a0, and b0 be non-negative and, without loss of generality, take values in [0, 1].
Then, we have:

1. −2 ≤ c1, d1 ≤ 2, −1 ≤ c2, d2 ≤ 1, and 0 ≤ c0, d0 ≤ 2(1 + π).
2. L is bounded, i.e., for any θ, 0 ≤ L(θ) ≤ L̄ := (a0 + max_{i,j ∈ {0,1}} aij) / (b0 + min_{i,j ∈ {0,1}} bij).

The proofs of the main results in Theorems 10 and 11 rely on the following Lemmas 8 and 9, which control how the empirical measure converges to the population measure. We defer the proofs to Appendix B.

Lemma 8. For any ε > 0, lim_{n→∞} P(|L_n(θ) − L(θ)| < ε) = 1. Furthermore, with probability at least 1 − ρ, |L_n(θ) − L(θ)| < (C + L̄·D)·r(n, ρ) / (B − D·r(n, ρ)), where r(n, ρ) = sqrt((1/2n)·ln(4/ρ)), L̄ is an upper bound on L(θ), and B ≥ 0, C ≥ 0, D ≥ 0 are constants that depend on L (i.e., on c0, c1, c2, d0, d1 and d2).

Now, we show a uniform convergence result for L_n with respect to maximization over the threshold δ ∈ (0, 1).

Lemma 9. Consider the function class of all thresholded decisions Θ = {1{φ(x) > δ} : δ ∈ (0, 1)} for a [0, 1]-valued function φ : X → [0, 1]. Define r̃(n, ρ) = sqrt((32/n)·[ln(en) + ln(16/ρ)]). If r̃(n, ρ) < B/D (where B and D are defined as in Lemma 8) and ε = (C + L̄·D)·r̃(n, ρ) / (B − D·r̃(n, ρ)), then with probability at least 1 − ρ, sup_{θ ∈ Θ} |L_n(θ) − L(θ)| < ε.

We are now ready to state our main results concerning the consistency of the two proposed algorithms.

Theorem 10. (Main Result 2) If the estimate η̂_x satisfies η̂_x →p η_x, Algorithm 1 is L-consistent.

Note that we can obtain an estimate η̂_x with the guarantee η̂_x →p η_x by using a strongly proper loss function [19] (e.g., the logistic loss); see Appendix B.

Theorem 11. (Main Result 3) Let ℓ : R → [0, ∞) be a classification-calibrated convex (margin) loss (i.e., ℓ′(0) < 0), and let ℓ_δ be the corresponding weighted loss for a given δ used in the weighted ERM (7). Then, Algorithm 2 is L-consistent.

Note that loss functions used in practice, such as the hinge and logistic losses, are classification-calibrated [8].

4 Experiments

We present experiments on synthetic data where we observe that measures from our family are indeed maximized by thresholding η_x.
We also compare the two proposed algorithms on benchmark datasets for two specific measures from the family.

4.1 Synthetic data: Optimal decisions

We evaluate the Bayes optimal classifiers for common performance metrics to empirically verify the results of Theorem 2. We fix a domain X = {1, 2, ..., 10}, set μ(x) by drawing random values uniformly in (0, 1) and normalizing them, and set the conditional probability using a sigmoid function, η_x = 1/(1 + exp(−wx)), where w is a random value drawn from a standard Gaussian. As the optimal threshold depends on the Bayes risk L*, the Bayes classifier cannot be evaluated using plug-in estimates. Instead, the Bayes classifier θ* was obtained by an exhaustive search over all 2^10 possible classifiers. The results are presented in Fig. 1. For different metrics, we plot η_x, the predicted optimal threshold δ* (which depends on P), and the Bayes classifier θ*. The results are consistent with Theorem 2: the (exhaustively computed) Bayes optimal classifier matches the thresholded classifier detailed in the theorem.

Figure 1: Simulated results showing η_x, the optimal threshold δ*, and the Bayes classifier θ*. Panels: (a) Precision, (b) F1, (c) Weighted Accuracy, (d) Jaccard.

4.2 Benchmark data: Performance of the proposed algorithms

We evaluate the two algorithms on several benchmark datasets for classification. We consider two measures: F1, defined as in Section 2, and Weighted Accuracy, defined as 2(TP + TN) / (2(TP + TN) + FP + FN). We split the training data S into two sets S1 and S2: S1 is used for estimating η̂_x and S2 for selecting δ. For Algorithm 1, we fit an L2-regularized logistic regression on the samples to obtain the estimate η̂_x. We then use the fitted model to compute η̂_x for x ∈ S2, and use these values as the candidate choices of δ when selecting the optimal threshold (note that the empirical best lies among these choices).
Similarly, for Algorithm 2, we use a weighted logistic regression, where the weights depend on the threshold as detailed in our algorithm description. Here, we grid the space [0, 1] to find the best threshold on S2. Notice that this step is embarrassingly parallelizable. The granularity of the grid depends primarily on the class imbalance in the data and varies across datasets. We also compare the two algorithms with standard empirical risk minimization (ERM): regularized logistic regression with threshold 1/2.

First, we optimize the F1 measure on four benchmark datasets: (1) REUTERS, consisting of 8293 news articles categorized into 65 topics (we obtained the processed dataset from [20]). For each topic, we obtain a highly imbalanced binary classification dataset with the topic as the positive class and the rest as negative. We report the average F1 measure over all topics (also known as the macro-F1 score). Following the analysis in [6], we present results averaged over topics that have at least C positives in both the training (5946 articles) and the test (2347 articles) data. (2) LETTERS, consisting of 20000 handwritten letters (16000 training and 4000 test instances) from the English alphabet (26 classes, each with at least 100 positive training instances). (3) SCENE (a UCI benchmark), consisting of 2230 images (1137 training and 1093 test instances) categorized into 6 scene types (each class with at least 100 positive instances). (4) WEBPAGE, a binary text categorization dataset obtained from [21], consisting of 34780 web pages (6956 train and 27824 test), with only about 182 positive instances in the training set. All the datasets except SCENE have a high class imbalance. We use our algorithms to optimize the F1 measure on these datasets. The results are presented in Table 1. We see that both algorithms perform similarly in many cases.
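The threshold-selection step of Algorithm 1 described above can be sketched as follows. This is our own minimal illustration, not the paper's code: candidate thresholds are the predicted η̂ values on the validation split S2, and we keep the one maximizing empirical F1.

```python
import numpy as np

def f1_score(y, pred):
    """Empirical F1 = 2*TP / (2*TP + FP + FN)."""
    tp = np.sum((y == 1) & (pred == 1))
    fp = np.sum((y == 0) & (pred == 1))
    fn = np.sum((y == 1) & (pred == 0))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def select_threshold(eta_hat, y):
    """Pick the threshold delta maximizing empirical F1 on S2;
    the candidates are the predicted eta values themselves."""
    best_delta, best_f1 = 0.5, -1.0
    for delta in np.unique(eta_hat):
        f1 = f1_score(y, (eta_hat > delta).astype(int))
        if f1 > best_f1:
            best_delta, best_f1 = delta, f1
    return best_delta
```

For Algorithm 2 the loop would instead be over a fixed grid of [0, 1], re-fitting a δ-weighted surrogate loss at each grid point; as noted above, the grid iterations are independent and trivially parallelizable.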
A noticeable exception is the SCENE dataset, where Algorithm 1 is better by a large margin. On the REUTERS dataset, we observe that as the number of positive instances C in the training data increases, the methods perform significantly better, and our results align with those of [6] on this dataset. We also find, albeit surprisingly, that using a threshold of 1/2 performs competitively on this dataset.

Table 1: Comparison of methods: F1 measure. The first three are multi-class datasets: F1 is computed individually for each class that has at least C positive instances (in both the train and the test sets) and then averaged over classes (macro-F1).

DATASET               C     ERM      Algorithm 1   Algorithm 2
REUTERS (65 classes)  1     0.5151   0.4980        0.4855
                      10    0.7624   0.7600        0.7449
                      50    0.8428   0.8510        0.8560
                      100   0.9675   0.9670        0.9670
LETTERS (26 classes)  1     0.4827   0.5742        0.5686
SCENE (6 classes)     1     0.3953   0.6891        0.5916
WEB PAGE (binary)     1     0.6254   0.6269        0.6267

Next, we optimize the Weighted Accuracy measure on datasets with less class imbalance. In this case, Theorem 2 gives δ* = 1/2. We use four benchmark datasets: SCENE (same as earlier), IMAGE (2068 images: 1300 train, 1010 test) [22], BREAST CANCER (683 instances: 463 train, 220 test), and SPAMBASE (4601 instances: 3071 train, 1530 test) [23]. Note that the last three are binary datasets. The results are presented in Table 2. Here, we observe that all methods perform similarly, which conforms to our theoretical guarantees of consistency.

Table 2: Comparison of methods: Weighted Accuracy, defined as 2(TP + TN) / (2(TP + TN) + FP + FN). Here δ* = 1/2 (ERM thresholds at 1/2), and we observe that the two algorithms are consistent.

DATASET         ERM      Algorithm 1   Algorithm 2
SCENE           0.9000   0.9000        0.9105
IMAGE           0.9060   0.9063        0.9025
BREAST CANCER   0.9860   0.9910        0.9910
SPAMBASE        0.9463   0.9550        0.9430
5 Conclusions and Future Work

Despite the importance of binary classification, theoretical results identifying optimal classifiers and consistent algorithms for many performance metrics used in practice remain open questions. Our goal in this paper is to begin to answer these questions. We have considered a large family of generalized performance measures that includes many measures used in practice. Our analysis shows that the optimal classifier for such measures can be characterized as the sign of the thresholded conditional probability of the positive class, with a threshold that depends on the specific metric. This result unifies and generalizes known special cases. We have proposed two algorithms for consistent estimation of the optimal classifiers. While the results presented are an important first step, many open questions remain. It would be interesting to characterize the rate of convergence of L(θ̂) →p L(θ*) as θ̂ →p θ*, using surrogate losses, similar in spirit to how excess 0-1 risk is controlled through excess surrogate risk in [8]. Another important direction is to characterize the entire family of measures for which the optimal classifier is given by thresholding P(Y = 1|x). We would also like to extend our analysis to the multi-class and multi-label domains.

Acknowledgments: This research was supported by NSF grant CCF-1117055 and NSF grant CCF-1320746. P.R. acknowledges the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894.

References

[1] David D. Lewis and William A. Gale. A sequential algorithm for training text classifiers. In Proceedings of the 17th Annual International ACM SIGIR Conference, pages 3–12. Springer-Verlag New York, Inc., 1994.

[2] Chris Drummond and Robert C. Holte. Severe class imbalance: Why better algorithms aren't the answer? In Machine Learning: ECML 2005, pages 539–546. Springer, 2005.

[3] Qiong Gu, Li Zhu, and Zhihua Cai. Evaluation measures of the classification performance of imbalanced data sets.
In Computational Intelligence and Intelligent Systems, pages 461–471. Springer, 2009. [4] Haibo He and Edwardo A Garcia. Learning from imbalanced data. Knowledge and Data Engineering, IEEE Transactions on, 21(9):1263–1284, 2009. [5] Charles Elkan. The foundations of cost-sensitive learning. In International Joint Conference on Artificial Intelligence, volume 17, pages 973–978. Citeseer, 2001. [6] Nan Ye, Kian Ming A Chai, Wee Sun Lee, and Hai Leong Chieu. Optimizing F-measures: a tale of two approaches. In Proceedings of the International Conference on Machine Learning, 2012. [7] Aditya Menon, Harikrishna Narasimhan, Shivani Agarwal, and Sanjay Chawla. On the statistical consistency of algorithms for binary classification under class imbalance. In Proceedings of The 30th International Conference on Machine Learning, pages 603–611, 2013. [8] Peter L Bartlett, Michael I Jordan, and Jon D McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006. [9] Clayton Scott. Calibrated asymmetric surrogate losses. Electronic J. of Stat., 6:958–992, 2012. [10] Ingo Steinwart. How to compare different loss functions and their risks. Constructive Approximation, 26 (2):225–287, 2007. [11] Mark D Reid and Robert C Williamson. Composite binary losses. The Journal of Machine Learning Research, 9999:2387–2422, 2010. [12] Tong Zhang. Statistical analysis of some multi-category large margin classification methods. The Journal of Machine Learning Research, 5:1225–1251, 2004. [13] Ambuj Tewari and Peter L Bartlett. On the consistency of multiclass classification methods. The Journal of Machine Learning Research, 8:1007–1025, 2007. [14] Ming-Jie Zhao, Narayanan Edakunni, Adam Pocock, and Gavin Brown. Beyond Fano’s inequality: bounds on the optimal F-score, BER, and cost-sensitive risk and their implications. The Journal of Machine Learning Research, 14(1):1033–1090, 2013. 
[15] Zachary Chase Lipton, Charles Elkan, and Balakrishnan Narayanaswamy. Thresholding classifiers to maximize F1 score. arXiv, abs/1402.1892, 2014.

[16] Marina Sokolova and Guy Lapalme. A systematic analysis of performance measures for classification tasks. Information Processing & Management, 45(4):427–437, 2009.

[17] Seung-Seok Choi and Sung-Hyuk Cha. A survey of binary similarity and distance measures. Journal of Systemics, Cybernetics and Informatics, pages 43–48, 2010.

[18] Yaroslav D. Sergeyev. Global one-dimensional optimization using smooth auxiliary functions. Mathematical Programming, 81(1):127–146, 1998.

[19] Mark D. Reid and Robert C. Williamson. Surrogate regret bounds for proper losses. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 897–904. ACM, 2009.

[20] Deng Cai, Xuanhui Wang, and Xiaofei He. Probabilistic dyadic data analysis with local and global consistency. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 105–112. ACM, 2009.

[21] John C. Platt. Fast training of support vector machines using sequential minimal optimization. 1999.

[22] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41–48. IEEE, 1999.

[23] Steve Webb, James Caverlee, and Calton Pu. Introducing the Webb Spam Corpus: Using email spam to identify web spam automatically. In CEAS, 2006.

[24] Stephen Poythress Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

[25] Luc Devroye. A Probabilistic Theory of Pattern Recognition, volume 31. Springer, 1996.

[26] Aditya Menon, Harikrishna Narasimhan, Shivani Agarwal, and Sanjay Chawla. On the statistical consistency of algorithms for binary classification under class imbalance: Supplementary material.
In Proceedings of The 30th International Conference on Machine Learning, pages 603–611, 2013.
Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS) Anshumali Shrivastava Department of Computer Science Computing and Information Science Cornell University Ithaca, NY 14853, USA anshu@cs.cornell.edu Ping Li Department of Statistics and Biostatistics Department of Computer Science Rutgers University Piscataway, NJ 08854, USA pingli@stat.rutgers.edu Abstract We present the first provably sublinear time hashing algorithm for approximate Maximum Inner Product Search (MIPS). Searching with (un-normalized) inner product as the underlying similarity measure is a known difficult problem and finding hashing schemes for MIPS was considered hard. While the existing Locality Sensitive Hashing (LSH) framework is insufficient for solving MIPS, in this paper we extend the LSH framework to allow asymmetric hashing schemes. Our proposal is based on a key observation that the problem of finding maximum inner products, after independent asymmetric transformations, can be converted into the problem of approximate near neighbor search in classical settings. This key observation makes efficient sublinear hashing scheme for MIPS possible. Under the extended asymmetric LSH (ALSH) framework, this paper provides an example of explicit construction of provably fast hashing scheme for MIPS. Our proposed algorithm is simple and easy to implement. The proposed hashing scheme leads to significant computational savings over the two popular conventional LSH schemes: (i) Sign Random Projection (SRP) and (ii) hashing based on p-stable distributions for L2 norm (L2LSH), in the collaborative filtering task of item recommendations on Netflix and Movielens (10M) datasets. 1 Introduction and Motivation The focus of this paper is on the problem of Maximum Inner Product Search (MIPS). In this problem, we are given a giant data vector collection S of size N, where S ⊂RD, and a given query point q ∈RD. 
We are interested in searching for p ∈ S which maximizes (or approximately maximizes) the inner product q^T p. Formally, we are interested in efficiently computing

p = arg max_{x∈S} q^T x    (1)

The MIPS problem is related to near neighbor search (NNS), which instead requires computing

p = arg min_{x∈S} ||q − x||_2^2 = arg min_{x∈S} (||x||_2^2 − 2 q^T x)    (2)

These two problems are equivalent if the norm of every element x ∈ S is constant. Note that the value of the norm ||q||_2 has no effect, as it is a constant and does not change the identity of the arg max or arg min. There are many scenarios in which MIPS arises naturally, where the norms of the elements in S have significant variations [13] and cannot be controlled, e.g., (i) recommender systems, (ii) large-scale object detection with DPM, and (iii) multi-class label prediction.

Recommender systems: Recommender systems are often based on collaborative filtering, which relies on the past behavior of users, e.g., past purchases and ratings. Latent factor modeling based on matrix factorization [14] is a popular approach for collaborative filtering. In a typical matrix factorization model, a user i is associated with a latent user characteristic vector u_i, and similarly, an item j is associated with a latent item characteristic vector v_j. The rating r_{i,j} of item j by user i is modeled as the inner product between the corresponding characteristic vectors. In this setting, given a user i and the corresponding learned latent vector u_i, finding the right item j to recommend to this user involves computing

j = arg max_{j′} r_{i,j′} = arg max_{j′} u_i^T v_{j′}    (3)

which is an instance of the standard MIPS problem. It should be noted that we do not have control over the norm of the learned vector, i.e., ||v_j||_2, which often has a wide range in practice [13]. If there are N items to recommend, solving (3) requires computing N inner products. Recommender systems are typically deployed in online web applications, where the number N is huge.
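The exact baseline that the rest of the paper aims to beat is this brute-force linear scan over eq. (1): N inner products per query. A minimal sketch (names are ours):

```python
import numpy as np

def mips_linear_scan(V, q):
    """Exact MIPS by brute force: compute all N inner products of
    eq. (1) (equivalently eq. (3) with V the item matrix) and take
    the arg max. Costs O(N * D) per query."""
    return int(np.argmax(V @ q))
```

It is this O(ND) per-query cost, with N in the millions for recommender systems, that motivates a sublinear hashing scheme.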
A brute force linear scan over all items, for computing the arg max, would be prohibitively expensive.

Large-scale object detection with DPM: Deformable Part Model (DPM) based representation of images is the state of the art in object detection tasks [8]. In the DPM model, a set of part filters is first learned from the training dataset. During detection, the activations of these learned filters over various patches of the test image are used to score the test image. The activation of a filter on an image patch is an inner product between them. Typically, the number of possible filters is large (e.g., millions), and so scoring the test image is costly. Recently, it was shown that scoring based only on filters with high activations performs well in practice [7]. Identifying the filters having high activations on a given image patch requires computing top inner products. Consequently, an efficient solution to the MIPS problem will benefit large-scale object detection based on DPM.

Multi-class (and/or multi-label) prediction: Models for multi-class SVM (or logistic regression) learn a weight vector w_i for each class label i. After the weights are learned, given a new test data vector x_test, predicting its class label is basically an MIPS problem:

y_test = arg max_{i∈L} x_test^T w_i    (4)

where L is the set of possible class labels. Note that the norms of the vectors ||w_i||_2 are not constant. The size |L| of the set of class labels differs across applications. Classification with a large number of possible class labels is common in multi-label learning and fine-grained object classification; for instance, prediction tasks with |L| = 100,000 appear in [7]. Computing such high-dimensional vector multiplications for predicting the class label of a single instance can be expensive in, e.g., user-facing applications.

1.1 The Need for Hashing Inner Products

Solving the MIPS problem can have significant practical impact.
[19, 13] proposed solutions based on tree data structures combined with branch-and-bound space partitioning techniques, similar to k-d trees [9]. Later, the same method was generalized to general max-kernel search [5], where the runtime guarantees, like those of other space partitioning methods, depend heavily on the dimensionality and the expansion constants. In fact, it is well known that techniques based on space partitioning (such as k-d trees) suffer from the curse of dimensionality. For example, [24] showed that techniques based on space partitioning degrade to linear search even for dimensions as small as 10 or 20. Locality Sensitive Hashing (LSH) [12] based randomized techniques are common and successful in industrial practice for efficiently solving NNS (near neighbor search). Unlike space partitioning techniques, both the running time and the accuracy guarantee of LSH-based NNS are essentially independent of the dimensionality of the data. This makes LSH suitable for large-scale processing systems dealing with ultra-high dimensional datasets, which are common in modern applications. Furthermore, LSH-based schemes are massively parallelizable, which makes them ideal for modern "Big" datasets. The prime focus of this paper is on efficient hashing-based algorithms for MIPS which do not suffer from the curse of dimensionality.

1.2 Our Contributions

We develop Asymmetric LSH (ALSH), an extended LSH scheme for efficiently solving the approximate MIPS problem. Finding hashing-based algorithms for MIPS was considered hard [19, 13]. We formally show that, under the current framework of LSH, there cannot exist any LSH for solving MIPS. Despite this negative result, we show that it is possible to relax the current LSH framework to allow asymmetric hash functions which can efficiently solve MIPS. This generalization comes at no extra cost, and the ALSH framework inherits all the theoretical guarantees of LSH.
Our construction of asymmetric LSH is based on an interesting fact: the original MIPS problem, after asymmetric transformations, reduces to the problem of approximate near neighbor search in classical settings. Based on this key observation, we provide an example of an explicit construction of an asymmetric hash function, leading to the first provably sublinear query time hashing algorithm for approximate similarity search with (un-normalized) inner product as the similarity. The new ALSH framework is of independent theoretical interest. We report other explicit constructions in [22, 21]. We also provide experimental evaluations on the task of recommending top-ranked items with collaborative filtering, on the Netflix and Movielens (10M) datasets. The evaluations not only support our theoretical findings but also quantify the benefit of the proposed scheme in a useful task.

2 Background

2.1 Locality Sensitive Hashing (LSH)

A commonly adopted formalism for approximate near-neighbor search is the following:

Definition: (c-Approximate Near Neighbor or c-NN) Given a set P of points in a D-dimensional space R^D, and parameters S0 > 0, δ > 0, construct a data structure which, given any query point q, does the following with probability 1 − δ: if there exists an S0-near neighbor of q in P, it reports some cS0-near neighbor of q in P.

In the definition, an S0-near neighbor of point q is a point p with Sim(q, p) ≥ S0, where Sim is the similarity of interest. Popular techniques for c-NN are often based on Locality Sensitive Hashing (LSH) [12], a family of functions with the nice property that more similar objects in the domain of these functions have a higher probability of colliding in the range space than less similar ones. In formal terms, consider a family H of hash functions mapping R^D to a set I.
Definition: (Locality Sensitive Hashing (LSH)) A family H is called (S0, cS0, p1, p2)-sensitive if, for any two points x, y ∈ R^D and h chosen uniformly from H:

• if Sim(x, y) ≥ S0, then Pr_H(h(x) = h(y)) ≥ p1;
• if Sim(x, y) ≤ cS0, then Pr_H(h(x) = h(y)) ≤ p2.

For efficient approximate nearest neighbor search, p1 > p2 and c < 1 are needed.

Fact 1 [12]: Given a family of (S0, cS0, p1, p2)-sensitive hash functions, one can construct a data structure for c-NN with O(n^ρ log n) query time and space O(n^{1+ρ}), where ρ = log p1 / log p2 < 1.

2.2 LSH for L2 Distance (L2LSH)

[6] presented a novel LSH family for all Lp (p ∈ (0, 2]) distances. In particular, when p = 2, this scheme provides an LSH family for L2 distances. Formally, given a fixed (real) number r, we choose a random vector a with each component generated from an i.i.d. normal, i.e., a_i ∼ N(0, 1), and a scalar b generated uniformly at random from [0, r]. The hash function is defined as

h^{L2}_{a,b}(x) = ⌊(a^T x + b) / r⌋    (5)

where ⌊·⌋ is the floor operation. The collision probability under this scheme can be shown to be

Pr(h^{L2}_{a,b}(x) = h^{L2}_{a,b}(y)) = F_r(d);  F_r(d) = 1 − 2Φ(−r/d) − (2 / (√(2π)(r/d))) (1 − e^{−(r/d)^2/2})    (6)

where Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t^2/2} dt is the cumulative distribution function (cdf) of the standard normal distribution and d = ||x − y||_2 is the Euclidean distance between the vectors x and y. This collision probability F_r(d) is a monotonically decreasing function of the distance d, and hence h^{L2}_{a,b} is an LSH for L2 distances. This scheme is also part of the LSH package [1]. Here r is a parameter. As argued previously, ||x − y||_2 = √(||x||_2^2 + ||y||_2^2 − 2x^T y) is not monotonic in the inner product x^T y unless the given data has a constant norm. Hence, h^{L2}_{a,b} is not suitable for MIPS. The recent work on coding for random projections [16] showed that L2LSH can be improved when the data are normalized for building large-scale linear classifiers as well as for near neighbor search [17].
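The L2LSH hash of eq. (5) and its collision probability F_r of eq. (6) are short enough to write out directly. The sketch below (function names are ours) implements both; the test checks the claimed monotone decrease of F_r in the distance d:

```python
import math
import numpy as np

def h_l2(x, a, b, r):
    """L2LSH hash of eq. (5): floor((a.x + b) / r)."""
    return math.floor((a @ x + b) / r)

def collision_prob(r, d):
    """F_r(d) of eq. (6): collision probability of h_l2 at L2 distance d."""
    t = r / d
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (1 - 2 * Phi(-t)
            - 2 / (math.sqrt(2 * math.pi) * t) * (1 - math.exp(-t * t / 2)))
```

In a full c-NN data structure, one concatenates several such hashes per table and builds O(n^ρ) tables, per Fact 1; the sketch above is only the single-hash primitive.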
In particular, [17] showed that 1-bit coding (i.e., sign random projections (SRP) [10, 3]) or 2-bit coding are often better than using more bits. It is known that SRP is designed for retrieval with cosine similarity: Sim(x, y) = x^T y / (||x||_2 ||y||_2). Again, ordering under this similarity can be very different from the ordering under inner product, and hence SRP is also unsuitable for solving MIPS.

3 Hashing for MIPS

3.1 A Negative Result

We first show that, under the current LSH framework, it is impossible to obtain a locality sensitive hashing scheme for MIPS. In [19, 13], the authors also argued that finding locality sensitive hashing for inner products could be hard, but to the best of our knowledge we have not seen a formal proof.

Theorem 1 There cannot exist any LSH family for MIPS.

Proof: Suppose there exists such a hash function h. For un-normalized inner products, the self similarity of a point x with itself is Sim(x, x) = x^T x = ||x||_2^2, and there may exist another point y such that Sim(x, y) = y^T x > ||x||_2^2 + C, for any constant C. Under any single randomized hash function h, the collision probability of the event {h(x) = h(x)} is always 1. So if h is an LSH for inner product, then the event {h(x) = h(y)} should have higher probability than the event {h(x) = h(x)}, since we can always choose y with Sim(x, y) = S0 + δ > S0 and cS0 > Sim(x, x), for all S0 and c < 1. This is not possible because a probability cannot be greater than 1. This completes the proof. ◻

3.2 Our Proposal: Asymmetric LSH (ALSH)

The basic idea of LSH is probabilistic bucketing, and it is more general than the requirement of having a single hash function h. The classical LSH algorithms use the same hash function h for both the preprocessing step and the query step: one assigns buckets in the hash table to all the candidates x ∈ S using h, then uses the same h on the query q to identify relevant buckets.
The only requirement for the proof of Fact 1 to work is that the collision probability of the event {h(q) = h(x)} increases with the similarity Sim(q, x). The theory [11] behind LSH still works if we use a hash function h1 for preprocessing x ∈ S and a different hash function h2 for querying, as long as the probability of the event {h2(q) = h1(x)} increases with Sim(q, x), and there exist p1 and p2 with the required property. The traditional LSH definition does not allow this asymmetry, but it is not a required condition in the proof. For this reason, we can relax the definition of c-NN without losing runtime guarantees. [20] used a related (asymmetric) idea for solving 3-way similarity search. We first define a modified locality sensitive hashing in a form which will be useful later.

Definition: (Asymmetric Locality Sensitive Hashing (ALSH)) A family H, along with the two vector functions Q : R^D → R^{D′} (Query Transformation) and P : R^D → R^{D′} (Preprocessing Transformation), is called (S0, cS0, p1, p2)-sensitive if, for a given c-NN instance with query q and any x in the collection S, the hash function h chosen uniformly from H satisfies the following:

• if Sim(q, x) ≥ S0, then Pr_H(h(Q(q)) = h(P(x))) ≥ p1;
• if Sim(q, x) ≤ cS0, then Pr_H(h(Q(q)) = h(P(x))) ≤ p2.

When Q(x) = P(x) = x, we recover the vanilla LSH definition with h(·) as the required hash function. Coming back to the problem of MIPS, if Q and P are different, the event {h(Q(x)) = h(P(x))} will not have probability equal to 1 in general. Thus, Q ≠ P can counter the fact that self similarity is not the highest with inner products. We just need the probability of the new collision event {h(Q(q)) = h(P(y))} to satisfy the conditions in the definition of c-NN for Sim(q, y) = q^T y. Note that the query transformation Q is only applied to the query, and the preprocessing transformation P is applied to x ∈ S while creating the hash tables. It is this asymmetry which will allow us to solve MIPS efficiently.
In Section 3.3, we explicitly show a construction (and hence the existence) of an asymmetric locality sensitive hash function for solving MIPS. The source of randomization h for both q and x ∈ S is the same. Formally, it is not difficult to show a result analogous to Fact 1.

Theorem 2 Given a family of hash functions H and the associated query and preprocessing transformations Q and P, which is (S0, cS0, p1, p2)-sensitive, one can construct a data structure for c-NN with O(n^ρ log n) query time and space O(n^{1+ρ}), where ρ = log p1 / log p2.

3.3 From MIPS to Near Neighbor Search (NNS)

Without loss of any generality, let U < 1 be a number such that ||x_i||_2 ≤ U < 1 for all x_i ∈ S. If this is not the case, then define a scaling transformation

S(x) = (U/M) × x;   M = max_{x_i∈S} ||x_i||_2    (7)

Note that we are allowed one-time preprocessing and asymmetry; S is part of the asymmetric transformation. For simplicity of the arguments, let us assume that ||q||_2 = 1; the arg max is anyway independent of the norm of the query. Later, in Section 3.6, we show that this assumption can easily be removed. We are now ready to describe the key step in our algorithm. First, we define two vector transformations P : R^D → R^{D+m} and Q : R^D → R^{D+m} as follows:

P(x) = [x; ||x||_2^2; ||x||_2^4; ...; ||x||_2^{2^m}];   Q(x) = [x; 1/2; 1/2; ...; 1/2]    (8)

where [;] denotes concatenation. P(x) appends m scalars of the form ||x||_2^{2^i} to the end of the vector x, while Q(x) simply appends m values of "1/2" to the end of the vector x. By observing that

Q(q)^T P(x_i) = q^T x_i + (1/2)(||x_i||_2^2 + ||x_i||_2^4 + ... + ||x_i||_2^{2^m});
||P(x_i)||_2^2 = ||x_i||_2^2 + ||x_i||_2^4 + ... + ||x_i||_2^{2^{m+1}},

we obtain the following key equality:

||Q(q) − P(x_i)||_2^2 = (1 + m/4) − 2 q^T x_i + ||x_i||_2^{2^{m+1}}    (9)

Since ||x_i||_2 ≤ U < 1, the term ||x_i||_2^{2^{m+1}} → 0 at the tower rate (exponential to exponential). The term (1 + m/4) is a fixed constant.
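The key equality (9) is easy to check numerically. A minimal sketch of the transformations of eq. (8), with our own function names, verifying (9) for a unit-norm query and a shrunken data point:

```python
import numpy as np

def P(x, m):
    """P(x) = [x; ||x||^2; ||x||^4; ...; ||x||^(2^m)], eq. (8)."""
    norms = [np.linalg.norm(x) ** (2 ** i) for i in range(1, m + 1)]
    return np.concatenate([x, norms])

def Q(x, m):
    """Q(x) = [x; 1/2; ...; 1/2], eq. (8)."""
    return np.concatenate([x, np.full(m, 0.5)])

# Numerical check of eq. (9): ||q||_2 = 1, ||x||_2 <= U < 1.
rng = np.random.default_rng(0)
q = rng.normal(size=5); q /= np.linalg.norm(q)
x = rng.normal(size=5); x *= 0.8 / np.linalg.norm(x)
m = 3
lhs = np.linalg.norm(Q(q, m) - P(x, m)) ** 2
rhs = (1 + m / 4) - 2 * (q @ x) + np.linalg.norm(x) ** (2 ** (m + 1))
assert np.isclose(lhs, rhs)
```

With m = 3 and ||x||_2 ≤ 0.8, the residual term ||x||_2^{16} is already below 0.03, which is the tower-rate decay the text describes.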
As long as m is not too small (e.g., m ≥ 3 would suffice), we have

arg max_{x∈S} q^T x ≃ arg min_{x∈S} ||Q(q) − P(x)||_2    (10)

This gives us the connection between solving un-normalized MIPS and approximate near neighbor search. The transformations P and Q, when the norms are less than 1, provide a correction to the L2 distance ||Q(q) − P(x_i)||_2, making it rank correlate with the (un-normalized) inner product. This works only after shrinking the norms, as norms greater than 1 would instead blow up the term ||x_i||_2^{2^{m+1}}.

3.4 Fast Algorithms for MIPS

Eq. (10) shows that MIPS reduces to the standard approximate near neighbor search problem, which can be efficiently solved. As the error term ||x_i||_2^{2^{m+1}} < U^{2^{m+1}} goes to zero at a tower rate, it quickly becomes negligible for any practical purposes. In fact, from a theoretical perspective, since we are interested in guarantees for c-approximate solutions, this additional error can be absorbed in the approximation parameter c. Formally, we can state the following theorem.

Theorem 3 Consider a c-approximate instance of MIPS, i.e., Sim(q, x) = q^T x, with a query q such that ||q||_2 = 1 and a collection S having ||x||_2 ≤ U < 1 for all x ∈ S. Let P and Q be the vector transformations defined in (8). We have the following two conditions for the hash function h^{L2}_{a,b} of (5):

1) if q^T x ≥ S0, then Pr[h^{L2}_{a,b}(Q(q)) = h^{L2}_{a,b}(P(x))] ≥ F_r(√(1 + m/4 − 2S0 + U^{2^{m+1}}));
2) if q^T x ≤ cS0, then Pr[h^{L2}_{a,b}(Q(q)) = h^{L2}_{a,b}(P(x))] ≤ F_r(√(1 + m/4 − 2cS0)),

where the function F_r is defined in (6). Thus, we have obtained p1 = F_r(√((1 + m/4) − 2S0 + U^{2^{m+1}})) and p2 = F_r(√((1 + m/4) − 2cS0)). Applying Theorem 2, we can construct data structures with worst case O(n^ρ log n) query time guarantees for c-approximate MIPS, where

ρ = log F_r(√(1 + m/4 − 2S0 + U^{2^{m+1}})) / log F_r(√(1 + m/4 − 2cS0))    (11)

We need p1 > p2 in order for ρ < 1. This requires −2S0 + U^{2^{m+1}} < −2cS0, which boils down to the condition c < 1 − U^{2^{m+1}}/(2S0).
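The exponent ρ of eq. (11) is a closed-form function of (S0, c, U, m, r) and can be evaluated directly. A sketch (our own function names; F_r as in eq. (6)), valid when the condition c < 1 − U^{2^{m+1}}/(2S0) holds so that p1 > p2:

```python
import math

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def Fr(r, d):
    """Collision probability of L2LSH, eq. (6)."""
    t = r / d
    return (1 - 2 * Phi(-t)
            - 2 / (math.sqrt(2 * math.pi) * t) * (1 - math.exp(-t * t / 2)))

def rho(S0, c, U, m, r):
    """Query-time exponent of eq. (11); requires c < 1 - U**(2**(m+1)) / (2*S0)."""
    p1 = Fr(r, math.sqrt(1 + m / 4 - 2 * S0 + U ** (2 ** (m + 1))))
    p2 = Fr(r, math.sqrt(1 + m / 4 - 2 * c * S0))
    return math.log(p1) / math.log(p2)
```

For instance, with the parameters recommended later in the paper (m = 3, U = 0.83, r = 2.5) and S0 = 0.9U, c = 0.5, this evaluates to a value strictly between 0 and 1, i.e., a sublinear query-time exponent.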
Note that U^{2^{m+1}}/(2S0) can be made arbitrarily close to zero with an appropriate value of m. For any given c < 1, there always exist U < 1 and m such that ρ < 1. This way, we obtain a sublinear query time algorithm for MIPS. We also have one more parameter, r, for the hash function h_{a,b}. Recall the definition of F_r in Eq. (6): F_r(d) = 1 − 2Φ(−r/d) − (2/(√(2π)(r/d)))(1 − e^{−(r/d)^2/2}). Thus, given a c-approximate MIPS instance, ρ is a function of 3 parameters: U, m, r. The algorithm with the best query time chooses U, m, and r to minimize the value of ρ. For convenience, we define

ρ* = min_{U,m,r} log F_r(√(1 + m/4 − 2S0 + U^{2^{m+1}})) / log F_r(√(1 + m/4 − 2cS0))
s.t. U^{2^{m+1}}/(2S0) < 1 − c, m ∈ N+, 0 < U < 1.    (12)

See Figure 1 for the plots of ρ*. With this best value of ρ, we can state our main result in Theorem 4.

Figure 1: Left panel: optimal values of ρ* with respect to the approximation ratio c for different S0 (S0 = 0.5U to 0.9U). The optimization of Eq. (12) was conducted by a grid search over the parameters r, U, and m, given S0 and c. Right panel: ρ values (dashed curves) for m = 3, U = 0.83, and r = 2.5; the solid curves are the ρ* values. See more details about parameter recommendations in arXiv:1405.5869.

Theorem 4 (Approximate MIPS is Efficient) For the problem of c-approximate MIPS with ||q||_2 = 1, one can construct a data structure having O(n^{ρ*} log n) query time and space O(n^{1+ρ*}), where ρ* < 1 is the solution to the constrained optimization (12).

3.5 Practical Recommendation of Parameters

Just like in the typical LSH framework, the value of ρ* in Theorem 4 depends on the c-approximate instance we aim to solve, which requires knowing the similarity threshold S0 and the approximation ratio c. Since ||q||_2 = 1 and ||x||_2 ≤ U < 1 for all x ∈ S, we have q^T x ≤ U.
A reasonable choice of the threshold S₀ is a high fraction of U, for example, S₀ = 0.9U or S₀ = 0.8U. The computation of ρ* and the optimal values of the corresponding parameters can be conducted via a grid search over the possible values of U, m and r. We compute ρ* in Figure 1 (left panel). For convenience, we recommend m = 3, U = 0.83, and r = 2.5. With this choice of parameters, Figure 1 (right panel) shows that the resulting ρ values are very close to the ρ* values.

3.6 Removing the Condition $\|q\|_2 = 1$
Changing the norm of the query does not affect $\arg\max_{x \in S} q^T x$. Thus, in practice, normalizing the query should not affect the performance of retrieving top-ranked items. But for theoretical purposes, we want the runtime guarantee to be independent of $\|q\|_2$. We are interested in the c-approximate instance, which, being a threshold-based approximation, changes if the query is normalized. Previously, the transformations P and Q were precisely meant to remove the dependency on the norms of x. Realizing that we are allowed asymmetry, we can use the same idea to get rid of the norm of q. Let M be the upper bound on all the norms, i.e., the radius of the space, and let the transformation $S : \mathbb{R}^D \to \mathbb{R}^D$ be the one defined in Eq. (7). Define asymmetric transformations $P' : \mathbb{R}^D \to \mathbb{R}^{D+2m}$ and $Q' : \mathbb{R}^D \to \mathbb{R}^{D+2m}$ as
$$P'(x) = [x;\ \|x\|_2^2;\ \|x\|_2^4;\ \ldots;\ \|x\|_2^{2^m};\ 1/2;\ \ldots;\ 1/2];\quad Q'(x) = [x;\ 1/2;\ \ldots;\ 1/2;\ \|x\|_2^2;\ \|x\|_2^4;\ \ldots;\ \|x\|_2^{2^m}].$$
Given the query q and a data point x, our new asymmetric transformations are Q'(S(q)) and P'(S(x)), respectively. We observe that
$$\|Q'(S(q)) - P'(S(x))\|_2^2 = \frac{m}{2} + \|S(x)\|_2^{2^{m+1}} + \|S(q)\|_2^{2^{m+1}} - 2\,q^T x \left(\frac{U^2}{M^2}\right) \quad (13)$$
Both $\|S(x)\|_2^{2^{m+1}}$ and $\|S(q)\|_2^{2^{m+1}}$ are at most $U^{2^{m+1}} \to 0$.
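The identity (13) can likewise be verified numerically. Eq. (7) is not reproduced in this excerpt; the sketch below assumes S is the scaling $S(x) = (U/M)\,x$, which maps all norms below $U < 1$ (an assumption consistent with the factor $U^2/M^2$ appearing in (13)):

```python
import numpy as np

def S(x, U, M):
    # assumed form of the scaling in Eq. (7): shrink all norms below U < 1
    return (U / M) * x

def P_prime(x, m):
    n = np.linalg.norm(x)
    powers = [n ** (2 ** i) for i in range(1, m + 1)]
    return np.concatenate([x, powers, np.full(m, 0.5)])

def Q_prime(q, m):
    n = np.linalg.norm(q)
    powers = [n ** (2 ** i) for i in range(1, m + 1)]
    return np.concatenate([q, np.full(m, 0.5), powers])

rng = np.random.default_rng(1)
m, U, M = 3, 0.83, 5.0
q = rng.normal(size=4); q *= 3.0 / np.linalg.norm(q)   # ||q|| = 3 <= M, no unit-norm requirement
x = rng.normal(size=4); x *= 4.0 / np.linalg.norm(x)   # ||x|| = 4 <= M

sq, sx = S(q, U, M), S(x, U, M)
lhs = np.sum((Q_prime(sq, m) - P_prime(sx, m)) ** 2)
rhs = (m / 2
       + np.linalg.norm(sx) ** (2 ** (m + 1))
       + np.linalg.norm(sq) ** (2 ** (m + 1))
       - 2 * (q @ x) * (U ** 2 / M ** 2))
assert abs(lhs - rhs) < 1e-10   # Eq. (13)
```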
Using exactly the same arguments as before, we obtain

Theorem 5 (Unconditional Approximate MIPS is Efficient) For the problem of c-approximate MIPS in a bounded space, one can construct a data structure having $O(n^{\rho^*_u} \log n)$ query time and space $O(n^{1+\rho^*_u})$, where $\rho^*_u < 1$ is the solution to the constrained optimization (14):
$$\rho^*_u = \min_{0 < U < 1,\ m \in \mathbb{N},\ r} \frac{\log F_r\big(\sqrt{m/2 - 2S_0\,(U^2/M^2) + 2U^{2^{m+1}}}\big)}{\log F_r\big(\sqrt{m/2 - 2cS_0\,(U^2/M^2)}\big)} \quad \text{s.t. } \frac{U^{2^{m+1}-2}\,M^2}{S_0} < 1 - c. \quad (14)$$

Again, for any c-approximate MIPS instance with S₀ and c, we can always choose m big enough such that $\rho^*_u < 1$. The theoretical guarantee only depends on the radius M of the space.

3.7 A Generic Recipe for Constructing Asymmetric LSHs
We are allowed any asymmetric transformation on x and q. This gives us a lot of flexibility to construct ALSHs for new similarities of interest. The generic idea is to take a particular similarity Sim(x, q) for which an LSH or ALSH is already known, and then construct transformations P and Q such that Sim(P(x), Q(q)) is monotonic in the similarity S that we are interested in. The other observation that makes it easier to construct P and Q is that LSH-based guarantees are independent of the dimension; thus, we can expand the dimensions as we did for P and Q. This paper focuses on using L2LSH to convert near neighbor search under the L2 distance into an ALSH (i.e., L2-ALSH) for MIPS. We can devise new ALSHs for MIPS using other similarities and hash functions. For instance, utilizing sign random projections (SRP), the known LSH for correlation, we can construct different P and Q leading to a better ALSH (i.e., Sign-ALSH) for MIPS [22]. We are aware of another work [18] which performs very similarly to Sign-ALSH. Utilizing minwise hashing [2, 15], which is the LSH for resemblance and is known to outperform SRP on sparse data [23], we can construct an even better ALSH (i.e., MinHash-ALSH) for MIPS over binary data [21].

4 Evaluations
Datasets.
We evaluate the proposed ALSH scheme for the MIPS problem on two popular collaborative filtering datasets, on the task of item recommendation: (i) Movielens (10M) and (ii) Netflix. Each dataset forms a sparse user-item matrix R, where the value of R(i, j) indicates the rating of user i for movie j. Given the user-item ratings matrix R, we follow the standard PureSVD procedure [4] to generate user and item latent vectors. This procedure generates a latent vector $u_i$ for each user i and a latent vector $v_j$ for each item j, in some chosen fixed dimension f. The PureSVD method returns the top-ranked items based on the inner products $u_i^T v_j$, ∀j. Despite its simplicity, PureSVD outperforms other popular recommendation algorithms [4]. Following [4], we use the same choices for the latent dimension f, i.e., f = 150 for Movielens and f = 300 for Netflix.

4.1 Ranking Experiment for Hash Code Quality Evaluations
We are interested in how well the different hash functions correlate with the top-10 inner products. For this task, given a user i and the corresponding user vector $u_i$, we compute the top-10 gold-standard items based on the actual inner products $u_i^T v_j$, ∀j. We then compute K different hash codes of the vector $u_i$ and of all the item vectors $v_j$. For every item $v_j$, we compute the number of times its hash values match (collide) with the hash values of the query, i.e., of user $u_i$: we compute $\text{Matches}_j = \sum_{t=1}^{K} \mathbb{1}(h_t(u_i) = h_t(v_j))$, and rank all the items by this count. Figure 2 reports the precision-recall curves in our ranking experiments for top-10 items, comparing our proposed method with two baselines: the original L2LSH and the original sign random projections (SRP). These results confirm the substantial advantage of our proposed method.
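The Matches statistic can be sketched as follows, applying the standard L2-LSH $h_{a,b}(x) = \lfloor (a^T x + b)/r \rfloor$ of [6] (the form we assume for (5)) to the transformed vectors. The data here are synthetic placeholders, not the Movielens/Netflix latent vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, K, r, m, U = 500, 8, 64, 2.5, 3, 0.83

# synthetic item vectors with norms at most U, and a unit-norm query (user) vector
items = rng.normal(size=(n, d))
items *= (U * rng.uniform(0.2, 1.0, size=(n, 1))
          / np.linalg.norm(items, axis=1, keepdims=True))
query = rng.normal(size=d)
query /= np.linalg.norm(query)

# asymmetric transformations: P on items, Q on the query
norms = np.linalg.norm(items, axis=1, keepdims=True)
P_items = np.hstack([items] + [norms ** (2 ** i) for i in range(1, m + 1)])
Q_query = np.concatenate([query, np.full(m, 0.5)])

# K independent L2-LSH hashes h(x) = floor((a.x + b) / r)
A = rng.normal(size=(d + m, K))
B = rng.uniform(0, r, size=K)
item_codes = np.floor((P_items @ A + B) / r).astype(int)
query_codes = np.floor((Q_query @ A + B) / r).astype(int)

matches = (item_codes == query_codes).sum(axis=1)   # Matches_j per item
ranking = np.argsort(-matches)                      # hash-based ranking of items
```

Items are then ranked by `matches` and compared against the gold-standard top-10 computed from the exact inner products.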
4.2 LSH Bucketing Experiment
We implemented the standard (K, L)-parameterized bucketing algorithm [1] (where L is the number of hash tables) for retrieving top-50 items based on the PureSVD procedure, using the proposed ALSH hash function and the two baselines: SRP and L2LSH. We plot the recall vs. the mean ratio of inner products required to achieve that recall, the ratio being computed relative to the number of inner products required by a brute-force linear scan. In order to remove the effect of the algorithm parameters (K, L) on the evaluations, we report the result from the best performing K and L chosen from K ∈ {5, 6, ..., 30} and L ∈ {1, 2, ..., 200} for each query. We use m = 3, U = 0.83, and r = 2.5 for our hashing scheme. For L2LSH, we observe that r = 4 usually performs well, so we show results for r = 4. The results are summarized in Figure 3, confirming that the proposed ALSH leads to significant savings compared to the baseline hash functions.

Figure 2: Ranking. Precision-recall curves (higher is better) for retrieving top-10 items on Movielens and Netflix, with the number of hashes K ∈ {16, 64, 256}. The proposed algorithm (solid, red if color is available) significantly outperforms L2LSH. We fix the parameters m = 3, U = 0.83, and r = 2.5 for our proposed method, and we present the results of L2LSH for all r values in {1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5}.

Figure 3: Bucketing. Mean number of inner products per query, relative to a linear scan, evaluated by the different hashing schemes at different recall levels, for generating top-50 recommendations on Movielens and Netflix (lower is better). The results corresponding to the best performing K and L (over a wide range of K and L) at a given recall value are shown separately for all three hashing schemes.

5 Conclusion
MIPS (maximum inner product search) naturally arises in numerous practical scenarios, e.g., collaborative filtering. This problem is challenging and, prior to our work, there existed no provably sublinear time hashing algorithms for MIPS. Moreover, the existing framework of classical LSH (locality sensitive hashing) is not sufficient for solving MIPS. In this study, we develop ALSH (asymmetric LSH), which generalizes the existing LSH framework by applying (appropriately chosen) asymmetric transformations to the input query vector and the data vectors in the repository. We present an implementation of ALSH by proposing a novel transformation which converts the original inner products into L2 distances in the transformed space. We demonstrate, both theoretically and empirically, that this implementation of ALSH provides a provably efficient as well as practical solution to MIPS. Other explicit constructions of ALSH, for example, ALSH through cosine similarity or ALSH through resemblance (for binary data), will be presented in follow-up technical reports.

Acknowledgments
The research is partially supported by NSF-DMS-1444124, NSF-III-1360971, NSF-Bigdata-1419210, ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137. We appreciate the constructive comments from the program committees of KDD 2014 and NIPS 2014.
Shrivastava would also like to thank Thorsten Joachims and the class of CS6784 (Spring 2014) for valuable feedback.

References
[1] A. Andoni and P. Indyk. E2LSH: Exact Euclidean locality sensitive hashing. Technical report, 2004.
[2] A. Z. Broder. On the resemblance and containment of documents. In the Compression and Complexity of Sequences, pages 21–29, Positano, Italy, 1997.
[3] M. S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pages 380–388, Montreal, Quebec, Canada, 2002.
[4] P. Cremonesi, Y. Koren, and R. Turrin. Performance of recommender algorithms on top-n recommendation tasks. In Proceedings of the Fourth ACM Conference on Recommender Systems, pages 39–46. ACM, 2010.
[5] R. R. Curtin, A. G. Gray, and P. Ram. Fast exact max-kernel search. In SDM, pages 1–9, 2013.
[6] M. Datar, N. Immorlica, P. Indyk, and V. S. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In SCG, pages 253–262, Brooklyn, NY, 2004.
[7] T. Dean, M. A. Ruzon, M. Segal, J. Shlens, S. Vijayanarasimhan, and J. Yagnik. Fast, accurate detection of 100,000 object classes on a single machine. In CVPR, pages 1814–1821. IEEE, 2013.
[8] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, 2010.
[9] J. H. Friedman and J. W. Tukey. A projection pursuit algorithm for exploratory data analysis. IEEE Transactions on Computers, 23(9):881–890, 1974.
[10] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, 1995.
[11] S. Har-Peled, P. Indyk, and R. Motwani. Approximate nearest neighbor: Towards removing the curse of dimensionality. Theory of Computing, 8(14):321–350, 2012.
[12] P. Indyk and R.
Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604–613, Dallas, TX, 1998.
[13] N. Koenigstein, P. Ram, and Y. Shavitt. Efficient retrieval of recommendations in a matrix factorization framework. In CIKM, pages 535–544, 2012.
[14] Y. Koren, R. Bell, and C. Volinsky. Matrix factorization techniques for recommender systems.
[15] P. Li and A. C. König. Theory and applications of b-bit minwise hashing. Commun. ACM, 2011.
[16] P. Li, M. Mitzenmacher, and A. Shrivastava. Coding for random projections. In ICML, 2014.
[17] P. Li, M. Mitzenmacher, and A. Shrivastava. Coding for random projections and approximate near neighbor search. Technical report, arXiv:1403.8144, 2014.
[18] B. Neyshabur and N. Srebro. A simpler and better LSH for maximum inner product search (MIPS). Technical report, arXiv:1410.5518, 2014.
[19] P. Ram and A. G. Gray. Maximum inner-product search using cone trees. In KDD, pages 931–939, 2012.
[20] A. Shrivastava and P. Li. Beyond pairwise: Provably fast algorithms for approximate k-way similarity search. In NIPS, Lake Tahoe, NV, 2013.
[21] A. Shrivastava and P. Li. Asymmetric minwise hashing. Technical report, 2014.
[22] A. Shrivastava and P. Li. An improved scheme for asymmetric LSH. Technical report, arXiv:1410.5410, 2014.
[23] A. Shrivastava and P. Li. In defense of MinHash over SimHash. In AISTATS, 2014.
[24] R. Weber, H.-J. Schek, and S. Blott. A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces. In VLDB, pages 194–205, 1998.
Graphical Models for Recovering Probabilistic and Causal Queries from Missing Data
Karthika Mohan and Judea Pearl
Cognitive Systems Laboratory, Computer Science Department, University of California, Los Angeles, CA 90024
{karthika,judea}@cs.ucla.edu

Abstract
We address the problem of deciding whether a causal or probabilistic query is estimable from data corrupted by missing entries, given a model of the missingness process. We extend the results of Mohan et al. [2013] by presenting more general conditions for recovering probabilistic queries of the form P(y|x) and P(y, x) as well as causal queries of the form P(y|do(x)). We show that causal queries may be recoverable even when the factors in their identifying estimands are not recoverable. Specifically, we derive graphical conditions for recovering causal effects of the form P(y|do(x)) when Y and its missingness mechanism are not d-separable. Finally, we apply our results to problems of attrition and characterize the recovery of causal effects from data corrupted by attrition.

1 Introduction
All branches of experimental science are plagued by missing data. Improper handling of missing data can bias outcomes and potentially distort the conclusions drawn from a study. Therefore, accurate diagnosis of the causes of missingness is crucial for the success of any research. We employ a formal representation called 'Missingness Graphs' (m-graphs, for short) to explicitly portray the missingness process as well as the dependencies among variables in the available dataset (Mohan et al. [2013]). Apart from determining whether recoverability is feasible, namely, whether there exists any theoretical impediment to estimability of the queries of interest, m-graphs also provide a means for communication and refinement of assumptions about the missingness process. Furthermore, m-graphs permit us to detect violations of modeling assumptions even when the dataset is contaminated with missing entries (Mohan and Pearl [2014]).
In this paper, we extend the results of Mohan et al. [2013] by presenting general conditions under which probabilistic queries such as joint and conditional distributions can be recovered. We show that causal queries of the type P(y|do(x)) can be recovered even when the associated probabilistic relations such as P(y, x) and P(y|x) are not recoverable. In particular, causal effects may be recoverable even when Y is not separable from its missingness mechanism. Finally, we apply our results to recover causal effects when the available dataset is tainted by attrition. This paper is organized as follows. Section 2 provides an overview of missingness graphs and reviews the notion of recoverability, i.e., obtaining consistent estimates of a query given a dataset and an m-graph. Section 3 refines the sequential factorization theorem presented in Mohan et al. [2013] and extends its applicability to a wider range of problems in which missingness mechanisms may influence each other. In section 4, we present general algorithms to recover joint distributions from the class of problems for which the sequential factorization theorem fails. In section 5, we introduce new graphical criteria that preclude recoverability of joint and conditional distributions. In section 6, we discuss recoverability of causal queries and show that, unlike probabilistic queries, P(y|do(x)) may be recovered even when Y and its missingness mechanism (Ry) are not d-separable.

Figure 1: Typical m-graph over Sex (S), Experience (X), Qualification (Q) and Income (I), with missingness mechanism RI and proxy variable I* for Income, where Vo = {S, X}, Vm = {I, Q}, V* = {I*, Q*}, R = {RI, RQ} and U is the latent common cause. Members of Vo and Vm are represented by full and hollow circles, respectively. The associated missingness process and assumptions are elaborated in appendix 10.1.
In section 7, we demonstrate how we can apply our results to problems of attrition, in which missingness is a severe obstacle to sound inference. Related works are discussed in section 8 and conclusions are drawn in section 9. Proofs of all theoretical results in this paper are provided in the appendix.

2 Missingness Graph and Recoverability
Missingness graphs, as discussed below, were first defined in Mohan et al. [2013], and we adopt the same notation. Let G(V, E) be the causal DAG where V = V ∪ U ∪ V* ∪ R. V is the set of observable nodes; nodes in the graph correspond to variables in the data set. U is the set of unobserved nodes (also called latent variables). E is the set of edges in the DAG. We use bi-directed edges as a shorthand notation to denote the existence of a U variable as a common parent of two variables in V ∪ R. V is partitioned into Vo and Vm such that Vo ⊆ V is the set of variables that are observed in all records in the population and Vm ⊆ V is the set of variables that are missing in at least one record. A variable X is termed fully observed if X ∈ Vo, partially observed if X ∈ Vm, and substantive if X ∈ Vo ∪ Vm. Associated with every partially observed variable Vi ∈ Vm are two other variables Rvi and V*i, where V*i is a proxy variable that is actually observed, and Rvi represents the status of the causal mechanism responsible for the missingness of V*i; formally,
$$v^*_i = f(r_{v_i}, v_i) = \begin{cases} v_i & \text{if } r_{v_i} = 0 \\ m & \text{if } r_{v_i} = 1 \end{cases} \quad (1)$$
V* is the set of all proxy variables and R is the set of all causal mechanisms that are responsible for missingness. R variables may not be parents of variables in V ∪ U. We call this graphical representation a Missingness Graph (or m-graph). An example of an m-graph is given in Figure 1. We use the following shorthand. For any variable X, let X' be a shorthand for X = 0. For any set W ⊆ Vm ∪ Vo ∪ R, let Wr, Wo and Wm be shorthand for W ∩ R, W ∩ Vo and W ∩ Vm, respectively. Let Rw be a shorthand for $R_{V_m \cap W}$, i.e.,
Rw is the set containing the missingness mechanisms of all partially observed variables in W. Note that Rw and Wr are not the same. $G_{\underline{X}}$ and $G_{\overline{X}}$ represent the graphs formed by removing from G all edges leaving and entering X, respectively. A manifest distribution P(Vo, V*, R) is the distribution that governs the available dataset. An underlying distribution P(Vo, Vm, R) is said to be compatible with a given manifest distribution P(Vo, V*, R) if the latter can be obtained from the former using equation 1. A manifest distribution Pm is compatible with a given underlying distribution Pu if, for all X ⊆ Vm and Y = Vm \ X, the following equality holds:
$$P_m(R'_x, R_y, X^*, Y^*, V_o) = P_u(R'_x, R_y, X, V_o)$$
where R'x denotes Rx = 0 and Ry denotes Ry = 1. Refer to Appendix 10.2 for an example.

Figure 2: (a) m-graph in which P(V) is recoverable by the sequential factorization; (b) & (c) m-graphs for which no admissible sequence exists.

2.1 Recoverability
Given a manifest distribution P(V*, Vo, R) and an m-graph G that depicts the missingness process, a query Q is recoverable if we can compute a consistent estimate of Q as if no data were missing. Formally,

Definition 1 (Recoverability (Mohan et al. [2013])). Given an m-graph G and a target relation Q defined on the variables in V, Q is said to be recoverable in G if there exists an algorithm that produces a consistent estimate of Q for every dataset D such that P(D) is (1) compatible with G and (2) strictly positive¹ over complete cases, i.e., P(Vo, Vm, R = 0) > 0.

For an introduction to the notion of recoverability see Pearl and Mohan [2013] and Mohan et al. [2013].

3 Recovering Probabilistic Queries by Sequential Factorization
Mohan et al. [2013] (theorem 4) presented a sufficient condition for recovering probabilistic queries, such as joint and conditional distributions, by using ordered factorizations.
However, the theorem is not applicable to certain classes of problems, such as those in longitudinal studies in which edges exist between R variables. The general ordered factorization defined below broadens the concept of ordered factorization (Mohan et al. [2013]) to include the set of R variables. Subsequently, the modified theorem (stated below as theorem 1) permits us to handle cases in which R variables are contained in separating sets that d-separate partially observed variables from their respective missingness mechanisms (example: X⊥⊥Rx|Ry in figure 2 (a)).

Definition 2 (General Ordered Factorization). Given a graph G and a set O of ordered V ∪ R variables Y1 < Y2 < ... < Yk, a general ordered factorization relative to G, denoted by f(O), is a product of conditional probabilities $f(O) = \prod_i P(Y_i \mid X_i)$ where Xi ⊆ {Yi+1, ..., Yn} is a minimal set such that Yi⊥⊥({Yi+1, ..., Yn} \ Xi)|Xi holds in G.

Theorem 1 (Sequential Factorization). A sufficient condition for recoverability of a relation Q defined over substantive variables is that Q be decomposable into a general ordered factorization, or a sum of such factorizations, such that every factor Qi = P(Yi|Xi) satisfies (1) Yi⊥⊥(Ryi, Rxi)|Xi \ {Ryi, Rxi} if Yi ∈ (Vo ∪ Vm), and (2) Z ∉ Xi, Xr ∩ R_{Xm} = ∅ and Rz⊥⊥R_{Xi}|Xi if Yi = Rz for any Z ∈ Vm.

An ordered factorization that satisfies the condition in Theorem 1 is called an admissible sequence. The following example illustrates the use of theorem 1 for recovering the joint distribution. Additionally, it sheds light on the need for the notion of minimality in definition 2.

¹An extension to datasets that are not strictly positive over complete cases is sometimes feasible (Mohan et al. [2013]).

Example 1. We are interested in recovering P(X, Y, Z) given the m-graph in Figure 2 (a).
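The recovery this example targets can also be checked numerically: under any parameterization consistent with the independences of figure 2 (a) (the conditional tables below are illustrative choices of ours), an estimand assembled only from complete-case quantities and the fully observed R variables reproduces the true P(X, Y, Z):

```python
import numpy as np

rng = np.random.default_rng(4)
# conditionals consistent with figure 2(a): Z, Ry, Rz exogenous,
# X depends on Ry, Y on (X, Z), Rx on Ry
pZ = np.array([0.6, 0.4])
pRy = np.array([0.7, 0.3])
pX_Ry = rng.dirichlet([1, 1], size=2)         # pX_Ry[ry, x]
pY_XZ = rng.dirichlet([1, 1], size=(2, 2))    # pY_XZ[x, z, y]
pRx_Ry = rng.dirichlet([1, 1], size=2)        # pRx_Ry[ry, rx]
pRz = np.array([0.8, 0.2])

# full joint, axes (x, y, z, rx, ry, rz)
J = np.einsum('z,s,sx,xzy,st,u->xyztsu', pZ, pRy, pX_Ry, pY_XZ, pRx_Ry, pRz)
true = J.sum(axis=(3, 4, 5))                  # true P(x, y, z)

# quantities estimable from the manifest distribution:
complete = J[:, :, :, 0, 0, 0]                              # P(x, y, z, R = 0)
pY_given = complete / complete.sum(axis=1, keepdims=True)   # P(y | x, z, R = 0)
jz = J.sum(axis=(0, 1, 3, 4))
pZ_given = jz[:, 0] / jz[:, 0].sum()                        # P(z | Rz = 0)
jx = J.sum(axis=(1, 2, 5))
pX_given = jx[:, 0, :] / jx[:, 0, :].sum(axis=0, keepdims=True)  # P(x | Rx = 0, ry)
pRy_obs = J.sum(axis=(0, 1, 2, 3, 5))                       # P(ry), R fully observed

# the estimand of Example 1
mix = pX_given @ pRy_obs                      # sum_ry P(x | Rx = 0, ry) P(ry)
recovered = pY_given * pZ_given[None, None, :] * mix[:, None, None]
assert np.allclose(recovered, true)
```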
We discern from the graph that definition 2 is satisfied because: (1) P(Y|X, Z, Ry) = P(Y|X, Z), and (X, Z) is a minimal set such that Y⊥⊥({X, Z, Ry} \ (X, Z))|(X, Z); (2) P(X|Ry, Z) = P(X|Ry), and Ry is a minimal set such that X⊥⊥({Ry, Z} \ Ry)|Ry; and (3) P(Z|Ry) = P(Z), and ∅ is a minimal set such that Z⊥⊥Ry|∅. Therefore, the order Y < X < Z < Ry induces a general ordered factorization P(X, Y, Z, Ry) = P(Y|X, Z) P(X|Ry) P(Z) P(Ry). We now rewrite P(X, Y, Z) as follows:
$$P(X, Y, Z) = \sum_{R_y} P(Y, X, Z, R_y) = P(Y|X, Z)\,P(Z) \sum_{R_y} P(X|R_y)\,P(R_y)$$
Since Y⊥⊥Ry|X, Z, Z⊥⊥Rz, and X⊥⊥Rx|Ry, by theorem 1 we have
$$P(X, Y, Z) = P(Y|X, Z, R'_x, R'_y, R'_z)\,P(Z|R'_z) \sum_{R_y} P(X|R'_x, R_y)\,P(R_y)$$
Indeed, equation 1 permits us to rewrite this as
$$P(X, Y, Z) = P(Y^*|X^*, Z^*, R'_x, R'_y, R'_z)\,P(Z^*|R'_z) \sum_{R_y} P(X^*|R'_x, R_y)\,P(R_y)$$
P(X, Y, Z) is recoverable because every term on the right hand side is consistently estimable from the available dataset. Had we ignored the minimality requirement in definition 2 and chosen to factorize Y < X < Z < Ry using the chain rule, we would have obtained P(X, Y, Z, Ry) = P(Y|X, Z, Ry) P(X|Z, Ry) P(Z|Ry) P(Ry), which is not admissible since X⊥⊥(Rz, Rx)|Z does not hold in the graph. In other words, the existence of one admissible sequence based on an order O of variables does not guarantee that every factorization based on O is admissible; it is for this reason that we impose the condition of minimality in definition 2. The recovery procedure presented in example 1 requires that we introduce Ry into the order. Indeed, there is no ordered factorization over the substantive variables {X, Y, Z} that permits recoverability of P(X, Y, Z) in figure 2 (a). This extension of Mohan et al. [2013] thus permits the recovery of probabilistic queries from problems in which the missingness mechanisms interact with one another.

4 Recoverability in the Absence of an Admissible Sequence
Mohan et al.
[2013] presented a theorem (refer to appendix 10.4) stating a necessary and sufficient condition for recovering the joint distribution for the class of problems in which the parent set of every R variable is a subset of Vo ∪ Vm. In contrast to Theorem 1, their theorem can handle problems for which no admissible sequence exists. The following theorem gives a generalization applicable to any given semi-Markovian model (for example, the m-graphs in figure 2 (b) & (c)). It relies on the notion of a collider path and two new constructs: Rpart, a partition of the R variables, and Mb(R(i)), the substantive variables related to R(i), both defined after the theorem statement.

Theorem 2. Given an m-graph G in which no element in Vm is either a neighbor of its missingness mechanism or connected to its missingness mechanism by a collider path, P(V) is recoverable if no Mb(R(i)) contains a partially observed variable X such that Rx ∈ R(i), i.e., ∀i, R(i) ∩ R_{Mb(R(i))} = ∅. Moreover, if recoverable, P(V) is given by
$$P(V) = \frac{P(V, R = 0)}{\prod_i P(R^{(i)} = 0 \mid Mb(R^{(i)}), R_{Mb(R^{(i)})} = 0)}$$
In theorem 2: (i) a collider path p between two nodes X and Y is a path in which every intermediate node is a collider, for example X → Z ↔ Y; (ii) Rpart = {R(1), R(2), ..., R(N)} is a partition of the R variables such that for every pair of elements Rx and Ry belonging to distinct partitions, the following conditions hold true: (i) Rx and
The following corollary yields a sufficient condition for recovering the joint distribution from the class of problems in which no bi-directed edge exists between variables in sets R and Vo ∪Vm (for example, the m-graph described in Figure 2 (c)). These problems form a subset of the class of problems covered in theorem 2. Subset Pasub(R(i)) used in the corollary is the set of all substantive variables that are parents of variables in R(i). In figure 2 (b): Pasub(R(1)) = ∅and Pasub(R(2)) = {Z, W}. Corollary 1. Let G be an m-graph such that (i) ∀X ∈Vm ∪Vo, no latent variable is a common parent of X and any member of R, and (ii) ∀Y ∈Vm, Y is not a parent of Ry. If ∀i, Pasub(R(i)) does not contain a partially observed variables whose missing mechanism is in R(i) i.e. R(i) ∩RP asub(R(i)) = ∅, then P(V ) is recoverable and is given by, P(v) = P (R=0,V ) Q i P (R(i)=0|P asub(R(i)),RP asub(R(i))=0) 5 Non-recoverability Criteria for Joint and Conditional Distributions Up until now, we dealt with sufficient conditions for recoverability. It is important however to supplement these results with criteria for non-recoverability in order to alert the user to the fact that the available assumptions are insufficient to produce a consistent estimate of the target query. Such criteria have not been treated formally in the literature thus far. In the following theorem we introduce two graphical conditions that preclude recoverability. Theorem 3 (Non-recoverability of P(V )). Given a semi-markovian model G, the following conditions are necessary for recoverability of the joint distribution: (i) ∀X ∈Vm, X and Rx are not neighbors and (ii) ∀X ∈Vm, there does not exist a path from X to Rx in which every intermediate node is both a collider and a substantive variable. In the following corollary, we leverage theorem 3 to yield necessary conditions for recovering conditional distributions. Corollary 2. [Non-recoverability of P(Y |X)] Let X and Y be disjoint subsets of substantive variables. 
P(Y|X) is non-recoverable in m-graph G if one of the following conditions holds: (1) Y and Ry are neighbors, or (2) G contains a collider path p connecting Y and Ry such that all intermediate nodes in p are in X.

6 Recovering Causal Queries
Given a causal query and a causal Bayesian network, a complete algorithm exists for deciding whether the query is identifiable (Shpitser and Pearl [2006]). Obviously, a query that is not identifiable in the substantive model is not recoverable from missing data. Therefore, a necessary condition for recoverability of a causal query is its identifiability, which we assume in the rest of our discussion.

Definition 3 (Trivially Recoverable Query). A causal query Q is said to be trivially recoverable given an m-graph G if it has an estimand (in terms of substantive variables) in which every factor is recoverable.

Figure 3: m-graph in which Y and Ry are not separable but P(Y|do(Z)) is still recoverable.

Classes of problems that fall into the MCAR (Missing Completely At Random) and MAR (Missing At Random) categories are much discussed in the literature (Rubin [1976]), because in such categories probabilistic queries are recoverable by graph-blind algorithms. An immediate but important implication of trivial recoverability is that if data are MAR or MCAR and the query is identifiable, then the query is also recoverable by model-blind algorithms.

Example 2. In the gender wage-gap study example in Figure 1, the effect of sex on income, P(I|do(S)), is identifiable and is given by P(I|S). By theorem 2, P(S, X, Q, I) is recoverable. Hence P(I|do(S)) is recoverable.

6.1 Recovering P(y|do(z)) when Y and Ry are Inseparable
The recoverability of P(V) hinges on the separability of a partially observed variable from its missingness mechanism (a condition established in theorem 3). Remarkably, causal queries may circumvent this requirement.
The following example demonstrates that P(y|do(z)) is recoverable even when Y and Ry are not separable.

Example 3. Examine Figure 3. By the backdoor criterion, $P(y|do(z)) = \sum_w P(y|z, w)\,P(w)$. One might be tempted to conclude that the causal relation is non-recoverable, because P(w, z, y) is non-recoverable (by theorem 2) and P(y|z, w) is not recoverable (by corollary 2). However, P(y|do(z)) is recoverable, as demonstrated below:
$$P(y|do(z)) = P(y|do(z), R'_y) = \sum_w P(y|do(z), w, R'_y)\,P(w|do(z), R'_y) \quad (2)$$
$$P(y|do(z), w, R'_y) = P(y|z, w, R'_y) \quad \text{(by Rule 2 of do-calculus (Pearl [2009]))} \quad (3)$$
$$P(w|do(z), R'_y) = P(w|R'_y) \quad \text{(by Rule 3 of do-calculus)} \quad (4)$$
Substituting (3) and (4) in (2), we get
$$P(y|do(z)) = \sum_w P(y|z, w, R'_y)\,P(w|R'_y) = \sum_w P(y^*|z, w, R'_y)\,P(w|R'_y)$$
The recoverability of P(y|do(z)) in the previous example follows from the notions of d*-separability and dormant independence (Shpitser and Pearl [2008]).

Definition 4 (d*-separation (Shpitser and Pearl [2008])). Let G be a causal diagram. Variable sets X, Y are d*-separated in G given Z, W (written X ⊥w Y | Z) if we can find sets Z, W such that X ⊥ Y | Z in $G_{\overline{W}}$, and P(y, x|z, do(w)) is identifiable.

Definition 5 (Inducing Path (Verma and Pearl [1991])). A path p between X and Y is called an inducing path if every node on the path is a collider and an ancestor of either X or Y.

Theorem 4. Given an m-graph in which |Vm| = 1 and Y and Ry are connected by an inducing path, P(y|do(x)) is recoverable if there exist Z, W such that Y ⊥w Ry | Z and, for W1 = W \ X, the following conditions hold: (1) Y⊥⊥W1|X, Z in $G_{\overline{X}, \overline{W_1}}$, and (2) P(W1, Z|do(X)) and P(Y|do(W1), do(X), Z, R'y) are identifiable. Moreover, if recoverable,
$$P(y|do(x)) = \sum_{W_1, Z} P(Y|do(W_1), do(X), Z, R'_y)\,P(Z, W_1|do(X))$$
We can quickly conclude that P(y|do(z)) is recoverable in the m-graph in figure 3 by verifying that the conditions in theorem 4 hold in the m-graph.
Figure 4: (a) m-graph (over nodes X, Y, Z, RY) in which P(y|do(x)) is not recoverable; (b) m-graph in which P(y|do(x)) is recoverable.

7 Attrition
Attrition (i.e., participants dropping out of a study or experiment) is a ubiquitous phenomenon, especially in longitudinal studies. In this section, we shall discuss a special case of attrition called 'Simple Attrition' (Garcia [2013]). In this problem, a researcher conducts a randomized trial, measures a set of variables (X, Y, Z) and obtains a dataset where the outcome (Y) is corrupted by missing values (due to attrition). Clearly, due to randomization, the effect of treatment (X) on outcome (Y), P(y|do(x)), is identifiable and is given by P(Y|X). We shall now demonstrate the usefulness of our previous discussion in recovering P(y|do(x)). Typical attrition problems are depicted in Figure 4. In Figure 4 (b) we can apply Theorem 1 to recover P(y|do(x)) as given below:

P(Y|X) = Σ_Z P(Y*|X, Z, R'_y) P(Z|X).

In Figure 4 (a), we observe that Y and Ry are connected by a collider path. Therefore by Corollary 2, P(Y|X) is not recoverable; hence P(y|do(x)) is also not recoverable.

7.1 Recovering Joint Distributions under Simple Attrition
The following theorem yields the necessary and sufficient condition for recovering joint distributions from semi-Markovian models with a single partially observed variable (i.e., |Vm| = 1), which includes models afflicted by simple attrition.
Theorem 5. Let Y ∈ Vm and |Vm| = 1. P(V) is recoverable in m-graph G if and only if Y and Ry are not neighbors and Y and Ry are not connected by a path in which all intermediate nodes are colliders. If both conditions are satisfied, then P(V) is given by

P(V) = P(Y|VO, Ry = 0) P(VO)

7.2 Recovering Causal Effects under Simple Attrition
Theorem 6. P(y|do(x)) is recoverable in the simple attrition case (with one partially observed variable) if and only if Y and Ry are neither neighbors nor connected by an inducing path.
Moreover, if recoverable,

P(Y|X) = Σ_z P(Y*|X, Z, R'_y) P(Z|X)   (5)

where Z is the separating set that d-separates Y from Ry. These results rectify prevailing opinion in the literature. For example, according to Garcia [2013] (Theorem 3), a necessary condition for non-recoverability of a causal effect under simple attrition is that X be an ancestor of Ry. In Figure 4 (a), X is not an ancestor of Ry and still P(Y|X) is non-recoverable (due to the collider path between Y and Ry).

8 Related Work
Deletion-based methods such as listwise deletion, which are easy to understand as well as to implement, guarantee consistent estimates only for certain categories of missingness such as MCAR (Rubin [1976]). The maximum likelihood (ML) method is known to yield consistent estimates under the MAR assumption; the expectation-maximization algorithm and gradient-based algorithms are widely used for searching for ML estimates under incomplete data (Lauritzen [1995], Dempster et al. [1977], Darwiche [2009], Koller and Friedman [2009]). Most work in machine learning assumes MAR and proceeds with ML or Bayesian inference. However, there are exceptions, such as recent work on collaborative filtering and recommender systems which develops probabilistic models that explicitly incorporate the missing data mechanism (Marlin et al. [2011], Marlin and Zemel [2009], Marlin et al. [2007]). Other methods for handling missing data can be classified into two categories: (a) inverse probability weighted methods and (b) imputation based methods (Rothman et al. [2008]). Inverse probability weighting methods analyze and assign weights to complete records based on estimated probabilities of completeness (Van der Laan and Robins [2003], Robins et al. [1994]). Imputation based methods substitute a reasonable guess in the place of a missing value (Allison [2002]); Multiple Imputation (Little and Rubin [2002]) is a widely used imputation method.
Missing data is a special case of coarsened data, and data are said to be coarsened at random (CAR) if the coarsening mechanism is only a function of the observed data (Heitjan and Rubin [1991]). Robins and Rotnitzky [1992] introduced a methodology for parameter estimation from data structures for which full data has a non-zero probability of being fully observed, and their methodology was later extended to deal with censored data in which complete data on subjects are never observed (Van Der Laan and Robins [1998]). The use of graphical models for handling missing data is a relatively new development. Daniel et al. [2012] used graphical models for analyzing missing information in the form of missing cases (due to sample selection bias). Attrition is a common occurrence in longitudinal studies and arises when subjects drop out of the study (Twisk and de Vente [2002], Shadish [2002]); Garcia [2013] analyzed the problem of attrition using causal graphs. Thoemmes and Rose [2013] cautioned practitioners that, contrary to popular belief, not all auxiliary variables reduce bias. Both Garcia [2013] and Thoemmes and Rose [2013] associate missingness with a single variable and leave interactions among several missingness mechanisms unexplored. Mohan et al. [2013] employed a formal representation called missingness graphs to depict the missingness process, defined the notion of recoverability and derived conditions under which queries would be recoverable when datasets are categorized as Missing Not At Random (MNAR). Tests to detect misspecifications in the m-graph are discussed in Mohan and Pearl [2014].

9 Conclusion
Graphical models play a critical role in portraying the missingness process, encoding and communicating assumptions about missingness, and deciding recoverability given a dataset afflicted with missingness. We presented graphical conditions for recovering joint and conditional distributions and sufficient conditions for recovering causal queries.
We demonstrated the recoverability of causal queries of the form P(y|do(x)) despite the existence of an inseparable path between Y and Ry, which is an insurmountable obstacle to the recovery of P(Y). We applied our results to problems of attrition and presented necessary and sufficient graphical conditions for recovering causal effects in such problems.

Acknowledgements
This paper has benefited from discussions with Ilya Shpitser. This research was supported in part by grants from NSF #IIS1249822 and #IIS1302448, and ONR #N00014-13-1-0153 and #N00014-10-1-0933.

References
P.D. Allison. Missing data series: Quantitative applications in the social sciences, 2002.
R.M. Daniel, M.G. Kenward, S.N. Cousens, and B.L. De Stavola. Using causal diagrams to guide analysis in missing data problems. Statistical Methods in Medical Research, 21(3):243–256, 2012.
A. Darwiche. Modeling and reasoning with Bayesian networks. Cambridge University Press, 2009.
A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological), pages 1–38, 1977.
F.M. Garcia. Definition and diagnosis of problematic attrition in randomized controlled experiments. Working paper, April 2013. Available at SSRN: http://ssrn.com/abstract=2267120.
D.F. Heitjan and D.B. Rubin. Ignorability and coarse data. The Annals of Statistics, pages 2244–2253, 1991.
D. Koller and N. Friedman. Probabilistic graphical models: principles and techniques. MIT Press, 2009.
S.L. Lauritzen. The EM algorithm for graphical association models with missing data. Computational Statistics & Data Analysis, 19(2):191–201, 1995.
R.J.A. Little and D.B. Rubin. Statistical analysis with missing data. Wiley, 2002.
B.M. Marlin and R.S. Zemel. Collaborative prediction and ranking with non-random missing data. In Proceedings of the Third ACM Conference on Recommender Systems, pages 5–12. ACM, 2009.
B.M. Marlin, R.S. Zemel, S. Roweis, and M. Slaney.
Collaborative filtering and the missing at random assumption. In UAI, 2007.
B.M. Marlin, R.S. Zemel, S.T. Roweis, and M. Slaney. Recommender systems: missing data and statistical model estimation. In IJCAI, 2011.
K. Mohan and J. Pearl. On the testability of models with missing data. Proceedings of AISTATS, 2014.
K. Mohan, J. Pearl, and J. Tian. Graphical models for inference with missing data. In Advances in Neural Information Processing Systems 26, pages 1277–1285, 2013.
J. Pearl. Causality: models, reasoning and inference. Cambridge University Press, New York, 2009.
J. Pearl and K. Mohan. Recoverability and testability of missing data: Introduction and summary of results. Technical Report R-417, UCLA, 2013. Available at http://ftp.cs.ucla.edu/pub/stat_ser/r417.pdf.
T. Richardson. Markov properties for acyclic directed mixed graphs. Scandinavian Journal of Statistics, 30(1):145–157, 2003.
J.M. Robins and A. Rotnitzky. Recovery of information and adjustment for dependent censoring using surrogate markers. In AIDS Epidemiology, pages 297–331. Springer, 1992.
J.M. Robins, A. Rotnitzky, and L.P. Zhao. Estimation of regression coefficients when some regressors are not always observed. Journal of the American Statistical Association, 89(427):846–866, 1994.
K.J. Rothman, S. Greenland, and T.L. Lash. Modern epidemiology. Lippincott Williams & Wilkins, 2008.
D.B. Rubin. Inference and missing data. Biometrika, 63:581–592, 1976.
W.R. Shadish. Revisiting field experimentation: field notes for the future. Psychological Methods, 7(1):3, 2002.
I. Shpitser and J. Pearl. Identification of conditional interventional distributions. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence, pages 437–444, 2006.
I. Shpitser and J. Pearl. Dormant independence. In AAAI, pages 1081–1087, 2008.
F. Thoemmes and N. Rose. Selection of auxiliary variables in missing data problems: Not all auxiliary variables are created equal. Technical Report R-002, Cornell University, 2013.
J. Twisk and W. de Vente. Attrition in longitudinal studies: how to deal with missing data. Journal of Clinical Epidemiology, 55(4):329–337, 2002.
M.J. Van Der Laan and J.M. Robins. Locally efficient estimation with current status data and time-dependent covariates. Journal of the American Statistical Association, 93(442):693–701, 1998.
M.J. Van der Laan and J.M. Robins. Unified methods for censored longitudinal data and causality. Springer Verlag, 2003.
T.S. Verma and J. Pearl. Equivalence and synthesis of causal models. In Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence, pages 220–227. Association for Uncertainty in AI, 1991.
Discriminative Unsupervised Feature Learning with Convolutional Neural Networks
Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller and Thomas Brox
Department of Computer Science, University of Freiburg, 79110 Freiburg im Breisgau, Germany
{dosovits,springj,riedmiller,brox}@cs.uni-freiburg.de

Abstract
Current methods for training convolutional neural networks depend on large amounts of labeled samples for supervised training. In this paper we present an approach for training a convolutional neural network using only unlabeled data. We train the network to discriminate between a set of surrogate classes. Each surrogate class is formed by applying a variety of transformations to a randomly sampled 'seed' image patch. We find that this simple feature learning algorithm is surprisingly successful when applied to visual object recognition. The feature representation learned by our algorithm achieves classification results matching or outperforming the current state-of-the-art for unsupervised learning on several popular datasets (STL-10, CIFAR-10, Caltech-101).

1 Introduction
Convolutional neural networks (CNNs) trained via backpropagation were recently shown to perform well on image classification tasks with millions of training images and thousands of categories [1, 2]. The feature representation learned by these networks achieves state-of-the-art performance not only on the classification task for which the network was trained, but also on various other visual recognition tasks, for example: classification on Caltech-101 [2, 3], Caltech-256 [2] and the Caltech-UCSD birds dataset [3]; scene recognition on the SUN-397 database [3]; detection on the PASCAL VOC dataset [4]. This capability to generalize to new datasets makes supervised CNN training an attractive approach for generic visual feature learning. The downside of supervised training is the need for expensive labeling, as the number of labeled samples required grows quickly with the size of the model.
The large performance increase achieved by methods based on the work of Krizhevsky et al. [1] was, for example, only possible due to massive efforts in manually annotating millions of images. For this reason, unsupervised learning – although currently underperforming – remains an appealing paradigm, since it can make use of raw unlabeled images and videos. Furthermore, on vision tasks outside classification it is not even certain whether training based on object class labels is advantageous. For example, unsupervised feature learning is known to be beneficial for image restoration [5], and recent results show that it outperforms supervised feature learning also on descriptor matching [6]. In this work we combine the power of a discriminative objective with the major advantage of unsupervised feature learning: cheap data acquisition. We introduce a novel training procedure for convolutional neural networks that does not require any labeled data. It instead relies on an automatically generated surrogate task. The task is created by taking the idea of data augmentation – which is commonly used in supervised learning – to the extreme. Starting with trivial surrogate classes consisting of one random image patch each, we augment the data by applying a random set of transformations to each patch. Then we train a CNN to classify these surrogate classes. We refer to this method as exemplar training of convolutional neural networks (Exemplar-CNN). The feature representation learned by Exemplar-CNN is, by construction, discriminative and invariant to typical transformations. We confirm this both theoretically and empirically, showing that this approach matches or outperforms all previous unsupervised feature learning methods on the standard image classification benchmarks STL-10, CIFAR-10, and Caltech-101.

1.1 Related Work
Our approach is related to a large body of work on unsupervised learning of invariant features and training of convolutional neural networks.
Convolutional training is commonly used in both supervised and unsupervised methods to exploit the invariance of image statistics to translations (e.g. LeCun et al. [7], Kavukcuoglu et al. [8], Krizhevsky et al. [1]). Similar to our approach, the current surge of successful methods employing convolutional neural networks for object recognition often rely on data augmentation to generate additional training samples for their classification objective (e.g. Krizhevsky et al. [1], Zeiler and Fergus [2]). While we share the architecture (a convolutional neural network) with these approaches, our method does not rely on any labeled training data. In unsupervised learning, several studies on learning invariant representations exist. Denoising autoencoders [9], for example, learn features that are robust to noise by trying to reconstruct data from randomly perturbed input samples. Zou et al. [10] learn invariant features from video by enforcing a temporal slowness constraint on the feature representation learned by a linear autoencoder. Sohn and Lee [11] and Hui [12] learn features invariant to local image transformations. In contrast to our discriminative approach, all these methods rely on directly modeling the input distribution and are typically hard to use for jointly training multiple layers of a CNN. The idea of learning features that are invariant to transformations has also been explored for supervised training of neural networks. The research most similar to ours is early work on tangent propagation [13] (and the related double backpropagation [14]), which aims to learn invariance to small predefined transformations in a neural network by directly penalizing the derivative of the output with respect to the magnitude of the transformations. In contrast, our algorithm does not regularize the derivative explicitly, and is thus less sensitive to the magnitude of the applied transformation.
This work is also loosely related to the use of unlabeled data for regularizing supervised algorithms, for example self-training [15] or entropy regularization [16]. In contrast to these semi-supervised methods, Exemplar-CNN training does not require any labeled data. Finally, the idea of creating an auxiliary task in order to learn a good data representation was used by Ahmed et al. [17] and Collobert et al. [18].

2 Creating Surrogate Training Data
The input to the training procedure is a set of unlabeled images, which come from roughly the same distribution as the images to which we later aim to apply the learned features. We randomly sample N ∈ [50, 32000] patches of size 32×32 pixels from different images at varying positions and scales, forming the initial training set X = {x_1, . . . , x_N}. We are interested in patches containing objects or parts of objects, hence we sample only from regions containing considerable gradients. We define a family of transformations {T_α | α ∈ A} parameterized by vectors α ∈ A, where A is the set of all possible parameter vectors.
Each transformation T_α is a composition of elementary transformations from the following list:
• translation: vertical or horizontal translation by a distance within 0.2 of the patch size;
• scaling: multiplication of the patch scale by a factor between 0.7 and 1.4;
• rotation: rotation of the image by an angle up to 20 degrees;
• contrast 1: multiply the projection of each patch pixel onto the principal components of the set of all pixels by a factor between 0.5 and 2 (factors are independent for each principal component and the same for all pixels within a patch);
• contrast 2: raise saturation and value (S and V components of the HSV color representation) of all pixels to a power between 0.25 and 4 (same for all pixels within a patch), multiply these values by a factor between 0.7 and 1.4, add to them a value between −0.1 and 0.1;
• color: add a value between −0.1 and 0.1 to the hue (H component of the HSV color representation) of all pixels in the patch (the same value is used for all pixels within a patch).

All numerical parameters of elementary transformations, when concatenated together, form a single parameter vector α. For each initial patch x_i ∈ X we sample K ∈ [1, 300] random parameter vectors {α_i^1, . . . , α_i^K} and apply the corresponding transformations T_i = {T_{α_i^1}, . . . , T_{α_i^K}} to the patch x_i. This yields the set of its transformed versions S_{x_i} = T_i x_i = {T x_i | T ∈ T_i}. Afterwards we subtract the mean of each pixel over the whole resulting dataset. We do not apply any other preprocessing. Exemplary patches sampled from the STL-10 unlabeled dataset are shown in Fig. 1.

Figure 1: Exemplary patches sampled from the STL unlabeled dataset which are later augmented by various transformations to obtain surrogate data for the CNN training.
Figure 2: Several random transformations applied to one of the patches extracted from the STL unlabeled dataset. The original ('seed') patch is in the top left corner.
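The surrogate-class generation described above can be sketched in a few lines. This is our own simplified illustration, not the authors' pipeline: it works on grayscale patches, uses nearest-neighbour resampling, and implements only a subset of the elementary transformations (rotation, scaling, translation, and a crude contrast change in place of the PCA- and HSV-based variants).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(patch, rng):
    """One random composition of elementary transformations from Section 2,
    implemented by inverse-mapping each output pixel to a source pixel.
    Parameter ranges follow the text; HSV colour transformations are omitted."""
    h, w = patch.shape
    angle = np.deg2rad(rng.uniform(-20, 20))     # rotation up to 20 degrees
    scale = rng.uniform(0.7, 1.4)                # scaling factor
    ty, tx = rng.uniform(-0.2, 0.2, 2) * h       # translation <= 0.2 of patch size
    cy, cx = (h - 1) / 2, (w - 1) / 2
    ys, xs = np.mgrid[0:h, 0:w]
    yr = (ys - cy - ty) / scale                  # undo translation and scaling
    xr = (xs - cx - tx) / scale
    ysrc = np.cos(angle) * yr - np.sin(angle) * xr + cy   # undo rotation
    xsrc = np.sin(angle) * yr + np.cos(angle) * xr + cx
    ysrc = np.clip(np.rint(ysrc), 0, h - 1).astype(int)   # nearest neighbour,
    xsrc = np.clip(np.rint(xsrc), 0, w - 1).astype(int)   # edge replication
    out = patch[ysrc, xsrc]
    return np.clip(out * rng.uniform(0.7, 1.4), 0.0, 1.0) # crude contrast change

seed = rng.random((32, 32))      # stand-in for a sampled 'seed' patch
K = 8                            # samples per surrogate class
surrogate_class = np.stack([random_transform(seed, rng) for _ in range(K)])
print(surrogate_class.shape)     # one surrogate class of K transformed patches
```

In the actual method, N such seed patches each yield one surrogate class of K transformed versions, and the per-pixel mean over the resulting dataset is then subtracted.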
Examples of transformed versions of one patch are shown in Fig. 2.

3 Learning Algorithm
Given the sets of transformed image patches, we declare each of these sets to be a class by assigning label i to the class S_{x_i}. We next train a CNN to discriminate between these surrogate classes. Formally, we minimize the following loss function:

L(X) = Σ_{x_i ∈ X} Σ_{T ∈ T_i} l(i, T x_i),   (1)

where l(i, T x_i) is the loss on the transformed sample T x_i with (surrogate) true label i. We use a CNN with a softmax output layer and optimize the multinomial negative log likelihood of the network output, hence in our case

l(i, T x_i) = M(e_i, f(T x_i)),   M(y, f) = −⟨y, log f⟩ = −Σ_k y_k log f_k,   (2)

where f(·) denotes the function computing the values of the output layer of the CNN given the input data, and e_i is the i-th standard basis vector. We note that in the limit of an infinite number of transformations per surrogate class, the objective function (1) takes the form

L̂(X) = Σ_{x_i ∈ X} E_α[l(i, T_α x_i)],   (3)

which we shall analyze in the next section. Intuitively, the classification problem described above serves to ensure that different input samples can be distinguished. At the same time, it enforces invariance to the specified transformations. In the following sections we provide a foundation for this intuition. We first present a formal analysis of the objective, separating it into a well defined classification problem and a regularizer that enforces invariance (resembling the analysis in Wager et al. [19]). We then discuss the derived properties of this classification problem and compare it to common practices for unsupervised feature learning.
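The surrogate loss of Eqs. (1)-(2) is an ordinary softmax cross-entropy in which the "label" of a transformed patch is the index of its seed patch. A minimal NumPy sketch, with toy logits standing in for the network outputs f(T x_i):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def surrogate_loss(logits, labels):
    """Negative log-likelihood of Eqs. (1)-(2): each transformed patch
    T(x_i) is classified into the surrogate class i of its seed patch."""
    f = softmax(logits)
    return -np.log(f[np.arange(len(labels)), labels]).sum()

# toy batch: network outputs for 4 transformed patches, 3 surrogate classes
logits = np.array([[4.0, 0.0, 0.0],
                   [0.0, 3.0, 1.0],
                   [0.0, 0.5, 2.5],
                   [2.0, 2.0, 2.0]])
labels = np.array([0, 1, 2, 0])   # surrogate label i of each seed patch
loss = surrogate_loss(logits, labels)
print(loss)
```

The last row, with uniform logits, contributes exactly log 3 to the loss; confident correct rows contribute almost nothing.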
3.1 Formal Analysis
We denote by α ∈ A the random vector of transformation parameters, by g(x) the vector of activations of the second-to-last layer of the network when presented the input patch x, by W the matrix of the weights of the last network layer, by h(x) = W g(x) the last layer activations before applying the softmax, and by f(x) = softmax(h(x)) the output of the network. By plugging in the definition of the softmax activation function

softmax(z) = exp(z) / ‖exp(z)‖_1   (4)

the objective function (3) with loss (2) takes the form

Σ_{x_i ∈ X} E_α[ −⟨e_i, h(T_α x_i)⟩ + log ‖exp(h(T_α x_i))‖_1 ].   (5)

With ĝ_i = E_α[g(T_α x_i)] being the average feature representation of transformed versions of the image patch x_i, we can rewrite Eq. (5) as

Σ_{x_i ∈ X} [ −⟨e_i, W ĝ_i⟩ + log ‖exp(W ĝ_i)‖_1 ] + Σ_{x_i ∈ X} [ E_α[log ‖exp(h(T_α x_i))‖_1] − log ‖exp(W ĝ_i)‖_1 ].   (6)

The first sum is the objective function of a multinomial logistic regression problem with input-target pairs (ĝ_i, e_i). This objective falls back to the transformation-free instance classification problem L(X) = Σ_{x_i ∈ X} l(i, x_i) if g(x_i) = E_α[g(T_α x_i)]. In general, this equality does not hold and thus the first sum enforces correct classification of the average representation E_α[g(T_α x_i)] for a given input sample. For a truly invariant representation, however, the equality is achieved. Similarly, if we suppose that T_α x = x for α = 0, that for small values of α the feature representation g(T_α x_i) is approximately linear with respect to α, and that the random variable α is centered, i.e. E_α[α] = 0, then

ĝ_i = E_α[g(T_α x_i)] ≈ E_α[ g(x_i) + ∇_α g(T_α x_i)|_{α=0} · α ] = g(x_i).

The second sum in Eq. (6) can be seen as a regularizer enforcing all h(T_α x_i) to be close to their average value, i.e., the feature representation is sought to be approximately invariant to the transformations T_α. To show this we use the convexity of the function log ‖exp(·)‖_1 and Jensen's inequality, which yields (proof in supplementary material)

E_α[log ‖exp(h(T_α x_i))‖_1] − log ‖exp(W ĝ_i)‖_1 ≥ 0.   (7)
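Inequality (7) is Jensen's inequality applied to the convex function log‖exp(·)‖_1 (logsumexp); since h is linear in g, W ĝ_i = E_α[h(T_α x_i)]. A quick numerical sanity check, with random Gaussian vectors standing in for the last-layer activations h(T_α x_i):

```python
import numpy as np

rng = np.random.default_rng(0)

def logsumexp(z, axis):
    m = z.max(axis=axis, keepdims=True)    # stabilized log||exp(z)||_1
    return np.squeeze(m + np.log(np.exp(z - m).sum(axis=axis, keepdims=True)), axis)

# h[k] stands in for h(T_{alpha_k} x_i): last-layer activations under
# 1000 sampled transformations of one patch, with 10 surrogate classes.
h = rng.normal(size=(1000, 10))

lhs = logsumexp(h, axis=1).mean()          # E_alpha[ log||exp(h(T_alpha x_i))||_1 ]
rhs = logsumexp(h.mean(axis=0, keepdims=True), axis=1)[0]  # log||exp(W g_hat_i)||_1
print(lhs, rhs, lhs - rhs)                 # the gap (the regularizer) is nonnegative
```

The gap shrinks to zero as the h(T_α x_i) become identical, i.e. as the representation becomes invariant to the transformations, which is exactly the behavior the regularizer rewards.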
If the feature representation is perfectly invariant, then h(T_α x_i) = W ĝ_i and inequality (7) turns into an equality, meaning that the regularizer reaches its global minimum.

3.2 Conceptual Comparison to Previous Unsupervised Learning Methods
Suppose we want to learn, without supervision, a feature representation useful for a recognition task such as classification. The mapping from input images x to a feature representation g(x) should then satisfy two requirements: (1) there must be at least one feature that is similar for images of the same category y (invariance); (2) there must be at least one feature that is sufficiently different for images of different categories (ability to discriminate). Most unsupervised feature learning methods aim to learn such a representation by modeling the input distribution p(x). This is based on the assumption that a good model of p(x) contains information about the category distribution p(y|x). That is, if a representation is learned from which a given sample can be reconstructed perfectly, then the representation is expected to also encode information about the category of the sample (ability to discriminate). Additionally, the learned representation should be invariant to variations in the samples that are irrelevant for the classification task, i.e., it should adhere to the manifold hypothesis (see e.g. Rifai et al. [20] for a recent discussion). Invariance is classically achieved by regularization of the latent representation, e.g., by enforcing sparsity [8] or robustness to noise [9]. In contrast, the discriminative objective in Eq. (1) does not directly model the input distribution p(x) but learns a representation that discriminates between input samples. The representation is not required to reconstruct the input, which is unnecessary in a recognition or matching task. This leaves more degrees of freedom to model the desired variability of a sample. As shown in our analysis (see Eq.
(7)), we achieve partial invariance to transformations applied during surrogate data creation by forcing the representation g(T_α x_i) of the transformed image patch to be predictive of the surrogate label assigned to the original image patch x_i. It should be noted that this approach assumes that the transformations T_α do not change the identity of the image content. If we, for example, use a color transformation we will force the network to be invariant to this change and cannot expect the extracted features to perform well in a task relying on color information (such as differentiating black panthers from pumas)¹.

4 Experiments
To compare our discriminative approach to previous unsupervised feature learning methods, we report classification results on the STL-10 [21], CIFAR-10 [22] and Caltech-101 [23] datasets. Moreover, we assess the influence of the augmentation parameters on the classification performance and study the invariance properties of the network.

4.1 Experimental Setup
The datasets we test on differ in the number of classes (10 for CIFAR and STL, 101 for Caltech) and the number of samples per class. STL is especially well suited for unsupervised learning as it contains a large set of 100,000 unlabeled samples. In all experiments (except for the dataset transfer experiment in the supplementary material) we extracted surrogate training data from the unlabeled subset of STL-10. When testing on CIFAR-10, we resized the images from 32×32 pixels to 64×64 pixels so that the scale of depicted objects roughly matches the two other datasets. We worked with two network architectures. A "small" network was used to evaluate the influence of different components of the augmentation procedure on classification performance. It consists of two convolutional layers with 64 filters each followed by a fully connected layer with 128 neurons. This last layer is succeeded by a softmax layer, which serves as the network output.
A "large" network, consisting of three convolutional layers with 64, 128 and 256 filters respectively followed by a fully connected layer with 512 neurons, was trained to compare our method to the state-of-the-art. In both models all convolutional filters are connected to a 5×5 region of their input. 2×2 max-pooling was performed after the first and second convolutional layers. Dropout [24] was applied to the fully connected layers. We trained the networks using an implementation based on Caffe [25]. Details on the training, the hyperparameter settings, and an analysis of the performance depending on the network architecture are provided in the supplementary material. Our code and training data are available at http://lmb.informatik.uni-freiburg.de/resources. We applied the feature representation to images of arbitrary size by convolutionally computing the responses of all the network layers except the top softmax. To each feature map, we applied the pooling method that is commonly used for the respective dataset: 1) 4-quadrant max-pooling, resulting in 4 values per feature map, which is the standard procedure for STL-10 and CIFAR-10 [26, 10, 27, 12]; 2) 3-layer spatial pyramid, i.e. max-pooling over the whole image as well as within 4 quadrants and within the cells of a 4 × 4 grid, resulting in 1 + 4 + 16 = 21 values per feature map, which is the standard for Caltech-101 [28, 10, 29]. Finally, we trained a linear support vector machine (SVM) on the pooled features. On all datasets we used the standard training and test protocols. On STL-10 the SVM was trained on 10 pre-defined folds of the training data. We report the mean and standard deviation achieved on the fixed test set. For CIFAR-10 we report two results: (1) training the SVM on the whole CIFAR-10 training set ('CIFAR-10'); (2) the average over 10 random selections of 400 training samples per class ('CIFAR-10(400)').
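The pooling step described above can be sketched as follows. This is a stand-alone illustration of the 3-level spatial pyramid (1 + 4 + 16 = 21 max-pooled values per feature map); the 4-quadrant pooling used for STL-10 and CIFAR-10 is the special case levels=(2,). The function name and the random stand-in feature map are our own.

```python
import numpy as np

def pyramid_max_pool(fmap, levels=(1, 2, 4)):
    """3-level spatial pyramid max-pooling: the max over the whole map,
    over the 2x2 quadrants, and over the cells of a 4x4 grid, giving
    1 + 4 + 16 = 21 values per feature map."""
    h, w = fmap.shape
    feats = []
    for g in levels:
        ys = np.linspace(0, h, g + 1).astype(int)   # cell boundaries
        xs = np.linspace(0, w, g + 1).astype(int)
        for i in range(g):
            for j in range(g):
                feats.append(fmap[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max())
    return np.array(feats)

fmap = np.random.default_rng(0).random((24, 24))    # one feature map, arbitrary size
feats = pyramid_max_pool(fmap)
print(feats.shape)    # 21 pooled values for this feature map
```

The full descriptor concatenates these pooled values over all feature maps of the chosen layer before the linear SVM is trained.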
For Caltech-101 we followed the usual protocol of selecting 30 random samples per class for training and not more than 50 samples per class for testing. This was repeated 10 times.

4.2 Classification Results
In Table 1 we compare Exemplar-CNN to several unsupervised feature learning methods, including the current state-of-the-art on each dataset. We also list the state-of-the-art for supervised learning (which is not directly comparable). Additionally we show the dimensionality of the feature vectors produced by each method before final pooling.

¹ Such cases could be covered either by careful selection of applied transformations or by combining features from multiple networks trained with different sets of transformations and letting the final classifier choose which features to use.

Table 1: Classification accuracies on several datasets (in percent). † Average per-class accuracy² 78.0% ± 0.4%. ‡ Average per-class accuracy 84.4% ± 0.6%.

Algorithm                                | STL-10     | CIFAR-10(400) | CIFAR-10  | Caltech-101 | #features
Convolutional K-means Network [26]       | 60.1 ± 1   | 70.7 ± 0.7    | 82.0      | —           | 8000
Multi-way local pooling [28]             | —          | —             | —         | 77.3 ± 0.6  | 1024 × 64
Slowness on videos [10]                  | 61.0       | —             | —         | 74.6        | 556
Hierarchical Matching Pursuit (HMP) [27] | 64.5 ± 1   | —             | —         | —           | 1000
Multipath HMP [29]                       | —          | —             | —         | 82.5 ± 0.5  | 5000
View-Invariant K-means [12]              | 63.7       | 72.6 ± 0.7    | 81.9      | —           | 6400
Exemplar-CNN (64c5-64c5-128f)            | 67.1 ± 0.3 | 69.7 ± 0.3    | 75.7      | 79.8 ± 0.5† | 256
Exemplar-CNN (64c5-128c5-256c5-512f)     | 72.8 ± 0.4 | 75.3 ± 0.2    | 82.0      | 85.5 ± 0.4‡ | 960
Supervised state of the art              | 70.1 [30]  | —             | 91.2 [31] | 91.44 [32]  | —

The small network was trained on 8000 surrogate classes containing 150 samples each and the large one on 16000 classes with 100 samples each. The features extracted from the larger network match or outperform the best prior result on all datasets. This is despite the fact that the dimensionality of the feature vector is smaller than that of most other approaches and that the networks are trained on the STL-10 unlabeled dataset (i.e.
they are used in a transfer learning manner when applied to CIFAR-10 and Caltech-101). The increase in performance is especially pronounced when only few labeled samples are available for training the SVM (as is the case for all the datasets except full CIFAR-10). This is in agreement with previous evidence that with increasing feature vector dimensionality and number of labeled samples, training an SVM becomes less dependent on the quality of the features [26, 12]. Remarkably, on STL-10 we achieve an accuracy of 72.8%, which is a large improvement over all previously reported results.

4.3 Detailed Analysis
We performed additional experiments (using the "small" network) to study the effect of three design choices in Exemplar-CNN training and validate the invariance properties of the learned features. Experiments on sampling 'seed' patches from different datasets can be found in the supplementary material.

4.3.1 Number of Surrogate Classes
We varied the number N of surrogate classes between 50 and 32000. As a sanity check, we also tried classification with random filters. The results are shown in Fig. 3. Clearly, the classification accuracy increases with the number of surrogate classes until it reaches an optimum at about 8000 surrogate classes, after which it did not change or even decreased. This is to be expected: the larger the number of surrogate classes, the more likely it is to draw very similar or even identical samples, which are hard or impossible to discriminate. A few such cases are not detrimental to the classification performance, but as soon as such collisions dominate the set of surrogate labels, the discriminative loss is no longer reasonable and training the network on the surrogate task no longer succeeds. To check the validity of this explanation we also plot in Fig. 3 the classification error on the validation set (taken from the surrogate data) computed after training the network. It rapidly grows as the number of surrogate classes increases.
We also observed that the optimal number of surrogate classes increases with the size of the network (not shown in the figure), but eventually saturates. This demonstrates the main limitation of our approach to randomly sample ’seed’ patches: it does not scale to arbitrarily large amounts of unlabeled data. However, we do not see this as a fundamental restriction and discuss possible solutions in Section 5.

Figure 3: Influence of the number of surrogate training classes. The validation error on the surrogate data is shown in red. Note the different y-axes for the two curves.

4.3.2 Number of Samples per Surrogate Class

Fig. 4 shows the classification accuracy when the number K of training samples per surrogate class varies between 1 and 300. The performance improves with more samples per surrogate class and saturates at around 100 samples. This indicates that this amount is sufficient to approximate the formal objective from Eq. (3), hence further increasing the number of samples does not significantly change the optimization problem.

Figure 4: Classification performance on STL for different numbers of samples per class. Random filters can be seen as ’0 samples per class’.

2 On Caltech-101 one can either measure average accuracy over all samples (average overall accuracy) or calculate the accuracy for each class and then average these values (average per-class accuracy). These differ, as some classes contain fewer than 50 test samples. Most researchers in ML use average overall accuracy.
On the other hand, if the number of samples is too small, there is insufficient data to learn the desired invariance properties.

4.3.3 Types of Transformations

Figure 5: Influence of removing groups of transformations during generation of the surrogate training data. Baseline (’0’ value) is applying all transformations. Each group of three bars corresponds to removing some of the transformations.

We varied the transformations used for creating the surrogate data to analyze their influence on the final classification performance. The set of ’seed’ patches was fixed. The result is shown in Fig. 5. The value ’0’ corresponds to applying random compositions of all elementary transformations: scaling, rotation, translation, color variation, and contrast variation. Different columns of the plot show the difference in classification accuracy as we discarded some types of elementary transformations. Several tendencies can be observed. First, rotation and scaling have only a minor impact on the performance, while translations, color variations and contrast variations are significantly more important. Secondly, the results on STL-10 and CIFAR-10 consistently show that spatial invariance and color-contrast invariance are approximately of equal importance for the classification performance. This indicates that variations in color and contrast, though often neglected, may also improve performance in a supervised learning scenario. Thirdly, on Caltech-101 color and contrast transformations are much more important compared to spatial transformations than on the two other datasets. This is not surprising, since Caltech-101 images are often well aligned, and this dataset bias makes spatial invariance less useful.
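The surrogate-data pipeline discussed in this section — take a ’seed’ patch, apply random compositions of elementary transformations, and treat all transformed versions of one patch as one class — can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation: the helper names are ours, and the parameter ranges (shifts of ±4 pixels, per-channel scaling in [0.7, 1.3], a power-law contrast change) are illustrative stand-ins for the transformations listed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(patch):
    """Apply one random composition of elementary transformations
    (translation, color, contrast) to a 32x32x3 patch in [0, 1].
    Parameter ranges are illustrative, not those of the paper."""
    out = patch.copy()
    # translation: shift by up to +/-4 pixels in each direction
    dy, dx = rng.integers(-4, 5, size=2)
    out = np.roll(out, (dy, dx), axis=(0, 1))
    # color: rescale each channel independently
    out = out * rng.uniform(0.7, 1.3, size=3)
    # contrast: raise to a random power around 1
    out = np.clip(out, 0.0, 1.0) ** rng.uniform(0.5, 2.0)
    return np.clip(out, 0.0, 1.0)

def make_surrogate_classes(seeds, k):
    """Each seed patch becomes one surrogate class with k transformed samples."""
    X, y = [], []
    for label, seed in enumerate(seeds):
        for _ in range(k):
            X.append(random_transform(seed))
            y.append(label)
    return np.stack(X), np.array(y)

seeds = rng.random((8, 32, 32, 3))      # 8 'seed' patches -> 8 surrogate classes
X, y = make_surrogate_classes(seeds, k=100)
print(X.shape, y.shape)                  # (800, 32, 32, 3) (800,)
```

A discriminative classifier trained on (X, y) then plays the role of the Exemplar-CNN surrogate task; the collision issue discussed above corresponds to two seeds whose transformed versions overlap.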
4.3.4 Invariance Properties of the Learned Representation

In a final experiment, we analyzed to which extent the representation learned by the network is invariant to the transformations applied during training. We randomly sampled 500 images from the STL-10 test set and applied a range of transformations (translation, rotation, contrast, color) to each image. To avoid empty regions beyond the image boundaries when applying spatial transformations, we cropped the central 64×64 pixel sub-patch from each 96×96 pixel image. We then applied two measures of invariance to these patches.

First, as an explicit measure of invariance, we calculated the normalized Euclidean distance between normalized feature vectors of the original image patch and the transformed one [10] (see the supplementary material for details). The downside of this approach is that the distance between extracted features does not take into account how informative and discriminative they are. We therefore evaluated a second measure – classification performance depending on the magnitude of the transformation applied to the classified patches – which does not come with this problem.

Figure 6: Invariance properties of the feature representation learned by Exemplar-CNN. (a): Normalized Euclidean distance between feature vectors of the original and the translated image patches vs. the magnitude of the translation, (b)-(d): classification performance on transformed image patches vs. the magnitude of the transformation for various magnitudes of transformations applied for creating surrogate data. (b): rotation, (c): additive color change, (d): multiplicative contrast change.

To compute the classification accuracy, we trained an SVM on the central 64 × 64 pixel patches from one fold of the STL-10 training set and measured classification performance on all transformed versions of 500 samples from the test set. The results of both experiments are shown in Fig. 6. Due to space restrictions we show only a few representative plots. Overall the experiment empirically confirms that the Exemplar-CNN objective leads to learning invariant features. Features in the third layer and the final pooled feature representation compare favorably to a HOG baseline (Fig. 6 (a)). Furthermore, adding stronger transformations in the surrogate training data leads to more invariant classification with respect to these transformations (Fig. 6 (b)-(d)). However, adding too much contrast variation may deteriorate classification performance (Fig. 6 (d)). One possible reason is that the level of contrast can be a useful feature: for example, strong edges in an image are usually more important than weak ones.

5 Discussion

We have proposed a discriminative objective for unsupervised feature learning by training a CNN without class labels. The core idea is to generate a set of surrogate labels via data augmentation. The features learned by the network yield a large improvement in classification accuracy compared to features obtained with previous unsupervised methods. These results strongly indicate that a discriminative objective is superior to objectives previously used for unsupervised feature learning.
One potential shortcoming of the proposed method is that in its current state it does not scale to arbitrarily large datasets. Two probable reasons for this are that (1) as the number of surrogate classes grows larger, many of them become similar, which contradicts the discriminative objective, and (2) the surrogate task we use is relatively simple and does not allow the network to learn invariance to complex variations, such as 3D viewpoint changes or inter-instance variation. We hypothesize that the presented approach could learn more powerful higher-level features, if the surrogate data were more diverse. This could be achieved by using additional weak supervision, for example, by means of video or a small number of labeled samples. Another possible way of obtaining richer surrogate training data and at the same time avoiding similar surrogate classes would be (unsupervised) merging of similar surrogate classes. We see these as interesting directions for future work. Acknowledgements We acknowledge funding by the ERC Starting Grant VideoLearn (279401); the work was also partly supported by the BrainLinks-BrainTools Cluster of Excellence funded by the German Research Foundation (DFG, grant number EXC 1086). References [1] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012. 8 [2] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. In ECCV, 2014. [3] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML, 2014. [4] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. [5] K. Cho. Simple sparsification improves sparse denoising autoencoders in denoising highly corrupted images. In ICML. JMLR Workshop and Conference Proceedings, 2013. [6] P. 
Fischer, A. Dosovitskiy, and T. Brox. Descriptor matching with convolutional neural networks: a comparison to SIFT. 2014. pre-print, arXiv:1405.5769v1 [cs.CV]. [7] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989. [8] K. Kavukcuoglu, P. Sermanet, Y. Boureau, K. Gregor, M. Mathieu, and Y. LeCun. Learning convolutional feature hierachies for visual recognition. In NIPS, 2010. [9] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, pages 1096–1103, 2008. [10] W. Y. Zou, A. Y. Ng, S. Zhu, and K. Yu. Deep learning of invariant features via simulated fixations in video. In NIPS, pages 3212–3220, 2012. [11] K. Sohn and H. Lee. Learning invariant representations with local transformations. In ICML, 2012. [12] K. Y. Hui. Direct modeling of complex invariances for visual object features. In ICML, 2013. [13] P. Simard, B. Victorri, Y. LeCun, and J. S. Denker. Tangent Prop - A formalism for specifying selected invariances in an adaptive network. In NIPS, 1992. [14] H. Drucker and Y. LeCun. Improving generalization performance using double backpropagation. IEEE Transactions on Neural Networks, 3(6):991–997, 1992. [15] M.-R. Amini and P. Gallinari. Semi supervised logistic regression. In ECAI, pages 390–394, 2002. [16] Y. Grandvalet and Y. Bengio. Entropy regularization. In Semi-Supervised Learning, pages 151–168. MIT Press, 2006. [17] A. Ahmed, K. Yu, W. Xu, Y. Gong, and E. Xing. Training hierarchical feed-forward visual recognition models using transfer learning from pseudo-tasks. In ECCV (3), pages 69–82, 2008. [18] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, 2011. [19] S. Wager, S. Wang, and P. Liang. 
Dropout training as adaptive regularization. In NIPS. 2013. [20] S. Rifai, Y. N. Dauphin, P. Vincent, Y. Bengio, and X. Muller. The manifold tangent classifier. In NIPS. 2011. [21] A. Coates, H. Lee, and A. Y. Ng. An analysis of single-layer networks in unsupervised feature learning. AISTATS, 2011. [22] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Master’s thesis, Department of Computer Science, University of Toronto, 2009. [23] L. Fei-Fei, R. Fergus, and P. Perona. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In CVPR WGMBV, 2004. [24] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. 2012. pre-print, arxiv:cs/1207.0580v3. [25] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. [26] A. Coates and A. Y. Ng. Selecting receptive fields in deep networks. In NIPS, pages 2528–2536, 2011. [27] L. Bo, X. Ren, and D. Fox. Unsupervised feature learning for RGB-D based object recognition. In ISER, June 2012. [28] Y. Boureau, N. Le Roux, F. Bach, J. Ponce, and Y. LeCun. Ask the locals: multi-way local pooling for image recognition. In ICCV’11. IEEE, 2011. [29] L. Bo, X. Ren, and D. Fox. Multipath sparse coding using hierarchical matching pursuit. In CVPR, pages 660–667, 2013. [30] K. Swersky, J. Snoek, and R. P. Adams. Multi-task bayesian optimization. In NIPS, 2013. [31] M. Lin, Q. Chen, and S. Yan. Network in network. In ICLR, 2014. [32] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014. 9
Restricted Boltzmann machines modeling human choice

Takayuki Osogami, IBM Research - Tokyo, osogami@jp.ibm.com
Makoto Otsuka, IBM Research - Tokyo, motsuka@ucla.edu

Abstract

We extend the multinomial logit model to represent some of the empirical phenomena that are frequently observed in the choices made by humans. These phenomena include the similarity effect, the attraction effect, and the compromise effect. We formally quantify the strength of these phenomena that can be represented by our choice model, which illuminates the flexibility of our choice model. We then show that our choice model can be represented as a restricted Boltzmann machine and that its parameters can be learned effectively from data. Our numerical experiments with real data of human choices suggest that we can train our choice model in such a way that it represents the typical phenomena of choice.

1 Introduction

Choice is a fundamental behavior of humans and has been studied extensively in Artificial Intelligence and related areas. The prior work suggests that the choices made by humans can significantly depend on available alternatives, or the choice set, in rather complex but systematic ways [13]. The empirical phenomena that result from such dependency on the choice set include the similarity effect, the attraction effect, and the compromise effect. Informally, the similarity effect refers to the phenomenon that a new product, S, reduces the share of a similar product, A, more than a dissimilar product, B (see Figure 1 (a)). With the attraction effect, a new dominated product, D, increases the share of the dominant product, A (see Figure 1 (b)). With the compromise effect, a product, C, has a relatively larger share when two extreme products, A and B, are in the market than when only one of A and B is in the market (see Figure 1 (c)). We call these three empirical phenomena the typical choice phenomena.
However, the standard choice model of the multinomial logit model (MLM) and its variants cannot represent at least one of the typical choice phenomena [13]. More descriptive models have been proposed to represent the typical choice phenomena in some representative cases [14, 19]. However, it is unclear when and to what degree the typical choice phenomena can be represented. Also, no algorithms have been proposed for training these descriptive models from data.

Figure 1: Choice sets that cause typical choice phenomena: (a) similarity (items A, B, S), (b) attraction (items A, B, D), (c) compromise (items A, B, C).

We extend the MLM to represent the typical choice phenomena, which is our first contribution. We show that our choice model can be represented as a restricted Boltzmann machine (RBM). Our choice model is thus called the RBM choice model. An advantage of this representation as an RBM is that training algorithms for RBMs are readily available. See Section 2. We then formally define the measure of the strength for each typical choice phenomenon and quantify the strength of each typical choice phenomenon that the RBM choice model can represent. Our analysis not only gives a guarantee on the flexibility of the RBM choice model but also illuminates why the RBM choice model can represent the typical choice phenomena. These definitions and analysis constitute our second contribution and are presented in Section 3. Our experiments suggest that we can train the RBM choice model in such a way that it represents the typical choice phenomena. We show that the trained RBM choice model can then adequately predict real human choice on the means of transportation [2]. These experimental results constitute our third contribution and are presented in Section 4.

2 Choice model with restricted Boltzmann machine

We extend the MLM to represent the typical choice phenomena. Let $\mathcal{I}$ be the set of items. For $A \in \mathcal{X} \subseteq \mathcal{I}$, we study the probability that an item, A, is selected from a choice set, $\mathcal{X}$.
This probability is called the choice probability. The model of choice, equipped with the choice probability, is called a choice model. We use A, B, C, D, S, or X to denote an item and $\mathcal{X}$, $\mathcal{Y}$, or a set such as $\{A, B\}$ to denote a choice set. For the MLM, the choice probability of A from $\mathcal{X}$ can be represented by

$$p(A|\mathcal{X}) = \frac{\lambda(A|\mathcal{X})}{\sum_{X\in\mathcal{X}} \lambda(X|\mathcal{X})}, \qquad (1)$$

where we refer to $\lambda(X|\mathcal{X})$ as the choice rate of X from $\mathcal{X}$. The choice rate of the MLM is given by

$$\lambda_{\mathrm{MLM}}(X|\mathcal{X}) = \exp(b_X), \qquad (2)$$

where $b_X$ can be interpreted as the attractiveness of X. One could define $b_X$ through $u_X$, the vector of the utilities of the attributes for X, and $\alpha$, the vector of the weight on each attribute (i.e., $b_X \equiv \alpha \cdot u_X$). Observe that $\lambda_{\mathrm{MLM}}(X|\mathcal{X})$ is independent of $\mathcal{X}$ as long as $X \in \mathcal{X}$. This independence causes the incapability of the MLM in representing the typical choice phenomena.

We extend the choice rate of (2) but keep the choice probability in the form of (1). Specifically, we consider the following choice rate:

$$\lambda(X|\mathcal{X}) \equiv \exp(b_X) \prod_{k\in\mathcal{K}} \left(1 + \exp\left(T^k_{\mathcal{X}} + U^k_X\right)\right), \qquad (3)$$

where we define

$$T^k_{\mathcal{X}} \equiv \sum_{Y\in\mathcal{X}} T^k_Y. \qquad (4)$$

Our choice model has parameters, $b_X$, $T^k_X$, $U^k_X$ for $X \in \mathcal{X}$, $k \in \mathcal{K}$, that take values in $(-\infty, \infty)$. Equation (3) modifies $\exp(b_X)$ by multiplying factors. Each factor is associated with an index, k, and has parameters, $T^k_X$ and $U^k_X$, that depend on k. The set of these indices is denoted by $\mathcal{K}$.

We now show that our choice model can be represented as a restricted Boltzmann machine (RBM). This means that we can use existing algorithms for RBMs to learn the parameters of the RBM choice model (see Appendix A.1). An RBM consists of a layer of visible units, $i \in \mathcal{V}$, and a layer of hidden units, $k \in \mathcal{H}$. A visible unit, i, and a hidden unit, k, are connected with weight, $W^k_i$. The units within each layer are disconnected from each other. Each unit is associated with a bias. The bias of a visible unit, i, is denoted by $b^{\mathrm{vis}}_i$. The bias of a hidden unit, k, is denoted by $b^{\mathrm{hid}}_k$.
A visible unit, i, is associated with a binary variable, $z_i$, and a hidden unit, k, is associated with a binary variable, $h_k$, which takes a value in $\{0, 1\}$. For a given configuration of binary variables, the energy of the RBM is defined as

$$E_\theta(z, h) \equiv -\left(\sum_{i\in\mathcal{V}} \sum_{k\in\mathcal{H}} z_i\, W^k_i\, h_k + \sum_{i\in\mathcal{V}} b^{\mathrm{vis}}_i z_i + \sum_{k\in\mathcal{H}} b^{\mathrm{hid}}_k h_k\right), \qquad (5)$$

where $\theta \equiv \{W, b^{\mathrm{vis}}, b^{\mathrm{hid}}\}$ denotes the parameters of the RBM. The probability of realizing a particular configuration of (z, h) is given by

$$P_\theta(z, h) \equiv \frac{\exp(-E_\theta(z, h))}{\sum_{z'} \sum_{h'} \exp(-E_\theta(z', h'))}. \qquad (6)$$

The summation with respect to a binary vector (i.e., $\sum_{z'}$ or $\sum_{h'}$) denotes the summation over all of the possible binary vectors of a given length. The length of $z'$ is $|\mathcal{V}|$, and the length of $h'$ is $|\mathcal{H}|$.

Figure 2: RBM choice model.

The RBM choice model can be represented as an RBM having the structure in Figure 2. Here, the layer of visible units is split into two parts: one for the choice set and the other for the selected item. The corresponding binary vector is denoted by $z = (v, w)$. Here, v is a binary vector associated with the part for the choice set. Specifically, v has length $|\mathcal{I}|$, and $v_X = 1$ denotes that X is in the choice set. Analogously, w has length $|\mathcal{I}|$, and $w_A = 1$ denotes that A is selected. We use $T^k_X$ to denote the weight between a hidden unit, k, and a visible unit, X, for the choice set. We use $U^k_A$ to denote the weight between a hidden unit, k, and a visible unit, A, for the selected item. The bias is zero for all of the hidden units and for all of the visible units for the choice set. The bias for a visible unit, A, for the selected item is denoted by $b_A$. Finally, let $\mathcal{H} = \mathcal{K}$. The choice rate (3) of the RBM choice model can then be represented by

$$\lambda(A|\mathcal{X}) = \sum_{h} \exp\left(-E_\theta\left((v^{\mathcal{X}}, w^A), h\right)\right), \qquad (7)$$

where we define the binary vectors, $v^{\mathcal{X}}, w^A$, such that $v^{\mathcal{X}}_i = 1$ iff $i \in \mathcal{X}$ and $w^A_j = 1$ iff $j = A$.
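The claimed equality between the closed-form choice rate (3) and the sum over hidden configurations in (7) can be checked numerically with generic random parameters. The sketch below uses our own variable names (it is not the authors' implementation) and evaluates both sides for every item in a small choice set:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_items, n_hidden = 4, 3
b = rng.normal(size=n_items)               # item biases b_X
T = rng.normal(size=(n_hidden, n_items))   # choice-set weights T^k_X
U = rng.normal(size=(n_hidden, n_items))   # selected-item weights U^k_A

def rate_closed_form(A, X):
    """Choice rate (3): exp(b_A) * prod_k (1 + exp(T^k_X + U^k_A))."""
    TkX = T[:, list(X)].sum(axis=1)        # (4): T^k_X = sum over items in X
    return np.exp(b[A]) * np.prod(1.0 + np.exp(TkX + U[:, A]))

def rate_hidden_sum(A, X):
    """Choice rate (7): sum over all hidden configurations h of exp(-E)."""
    TkX = T[:, list(X)].sum(axis=1)
    total = 0.0
    for h in itertools.product([0, 1], repeat=n_hidden):
        h = np.array(h)
        total += np.exp(b[A] + h @ (TkX + U[:, A]))
    return total

X = (0, 1, 2)
for A in X:
    assert np.isclose(rate_closed_form(A, X), rate_hidden_sum(A, X))

# choice probability (1): normalize the rates over the choice set
rates = np.array([rate_closed_form(A, X) for A in X])
print(rates / rates.sum())
```

The inner loop is the brute-force marginalization over all $2^{|\mathcal{K}|}$ hidden vectors; the closed form factorizes it exactly as the derivation following (7) shows.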
Observe that the right-hand side of (7) is

$$\sum_h \exp\left(-E_\theta\left((v^{\mathcal{X}}, w^A), h\right)\right) = \sum_h \exp\left(\sum_{X\in\mathcal{X}} \sum_k T^k_X h_k + \sum_k U^k_A h_k + b_A\right) \qquad (8)$$
$$= \exp(b_A) \sum_h \prod_k \exp\left(\left(T^k_{\mathcal{X}} + U^k_A\right) h_k\right) \qquad (9)$$
$$= \exp(b_A) \prod_k \sum_{h_k\in\{0,1\}} \exp\left(\left(T^k_{\mathcal{X}} + U^k_A\right) h_k\right), \qquad (10)$$

which is equivalent to (3). The RBM choice model assumes that one item from a choice set is selected. In the context of the RBM, this means that $w_A = 1$ for only one $A \in \mathcal{X} \subseteq \mathcal{I}$. Using (6), our choice probability (1) can be represented by

$$p(A|\mathcal{X}) = \frac{\sum_h P_\theta\left((v^{\mathcal{X}}, w^A), h\right)}{\sum_{X\in\mathcal{X}} \sum_h P_\theta\left((v^{\mathcal{X}}, w^X), h\right)}. \qquad (11)$$

This is the conditional probability of realizing the configuration, $(v^{\mathcal{X}}, w^A)$, given that the realized configuration is either of the $(v^{\mathcal{X}}, w^X)$ for $X \in \mathcal{X}$. See Appendix A.2 for an extension of the RBM choice model.

3 Flexibility of the RBM choice model

In this section, we formally study the flexibility of the RBM choice model. Recall that $\lambda(X|\mathcal{X})$ in (3) is modified from $\lambda_{\mathrm{MLM}}(X|\mathcal{X})$ in (2) by a factor,

$$1 + \exp\left(T^k_{\mathcal{X}} + U^k_X\right), \qquad (12)$$

for each k in $\mathcal{K}$, so that $\lambda(X|\mathcal{X})$ can depend on $\mathcal{X}$ through $T^k_{\mathcal{X}}$. We will see how this modification allows the RBM choice model to represent each of the typical choice phenomena.

The similarity effect refers to the following phenomenon [14]:

$$p(A|\{A, B\}) > p(B|\{A, B\}) \quad\text{and}\quad p(A|\{A, B, S\}) < p(B|\{A, B, S\}). \qquad (13)$$

Motivated by (13), we define the strength of the similarity effect as follows:

Definition 1. For $A, B \in \mathcal{X}$, the strength of the similarity effect of S on A relative to B with $\mathcal{X}$ is defined as follows:

$$\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}} \equiv \frac{p(A|\mathcal{X})}{p(B|\mathcal{X})} \cdot \frac{p(B|\mathcal{X}\cup\{S\})}{p(A|\mathcal{X}\cup\{S\})}. \qquad (14)$$

When $\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}} = 1$, adding S into $\mathcal{X}$ does not change the ratio between $p(A|\mathcal{X})$ and $p(B|\mathcal{X})$. Namely, there is no similarity effect. When $\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}} > 1$, we can increase $p(B|\mathcal{X})/p(A|\mathcal{X})$ by a factor of $\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}}$ by the addition of S into $\mathcal{X}$. This corresponds to the similarity effect of (13). When $\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}} < 1$, this ratio decreases by an analogous factor.
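Definition 1 can be evaluated directly from the choice rates. The following sketch (our own helper names, random illustrative parameters) computes $\psi^{(\mathrm{sim})}$ for an MLM, where it is identically 1 by IIA, and for an RBM choice model with hidden units, where it generally differs from 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, n_hidden = 4, 3
b = rng.normal(size=n_items)
T = rng.normal(size=(n_hidden, n_items))
U = rng.normal(size=(n_hidden, n_items))

def rate(A, X, hidden=True):
    """RBM choice rate (3); with hidden=False it degenerates to the MLM rate (2)."""
    if not hidden:
        return np.exp(b[A])
    TkX = T[:, list(X)].sum(axis=1)
    return np.exp(b[A]) * np.prod(1.0 + np.exp(TkX + U[:, A]))

def psi_sim(A, B, S, X, hidden=True):
    """Strength of the similarity effect (14), written via choice rates,
    which is valid because the probabilities share a common denominator."""
    Xs = tuple(X) + (S,)
    return (rate(A, X, hidden) / rate(B, X, hidden)) * \
           (rate(B, Xs, hidden) / rate(A, Xs, hidden))

A, B, S, X = 0, 1, 3, (0, 1, 2)
print(psi_sim(A, B, S, X, hidden=False))  # MLM: exactly 1 (IIA)
print(psi_sim(A, B, S, X, hidden=True))   # RBM: generally differs from 1
```

With generic parameters the RBM value deviates from 1, which is the numerical counterpart of the claim that the hidden-unit factors break independence from irrelevant alternatives.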
We will study the strength of this (rather general) similarity effect without the restriction that S is “similar” to A (see Figure 1 (a)). Because $p(X|\mathcal{X})$ has a common denominator for X = A and X = B, we have

$$\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}} = \frac{\lambda(A|\mathcal{X})}{\lambda(B|\mathcal{X})} \cdot \frac{\lambda(B|\mathcal{X}\cup\{S\})}{\lambda(A|\mathcal{X}\cup\{S\})}. \qquad (15)$$

The MLM cannot represent the similarity effect, because the $\lambda_{\mathrm{MLM}}(X|\mathcal{X})$ in (2) is independent of $\mathcal{X}$. For any choice sets, $\mathcal{X}$ and $\mathcal{Y}$, we must have

$$\frac{\lambda_{\mathrm{MLM}}(A|\mathcal{X})}{\lambda_{\mathrm{MLM}}(B|\mathcal{X})} = \frac{\lambda_{\mathrm{MLM}}(A|\mathcal{Y})}{\lambda_{\mathrm{MLM}}(B|\mathcal{Y})}. \qquad (16)$$

The equality (16) is known as the independence from irrelevant alternatives (IIA).

The RBM choice model can represent an arbitrary strength of the similarity effect. Specifically, by adding an element, $\hat{k}$, into $\mathcal{K}$ of (3), we can set $\lambda(A|\mathcal{X}\cup\{S\})/\lambda(A|\mathcal{X})$ at an arbitrary value without affecting the value of $\lambda(B|\mathcal{Y})$, $\forall B \neq A$, for any $\mathcal{Y}$. We prove the following theorem in Appendix C:

Theorem 1. Consider an RBM choice model where the choice rate of X from $\mathcal{X}$ is given by (3). Let $\hat\lambda(X|\mathcal{X})$ be the corresponding choice rate after adding $\hat{k}$ into $\mathcal{K}$. Namely,

$$\hat\lambda(X|\mathcal{X}) = \lambda(X|\mathcal{X}) \left(1 + \exp\left(T^{\hat{k}}_{\mathcal{X}} + U^{\hat{k}}_X\right)\right). \qquad (17)$$

Consider an item $A \in \mathcal{X}$ and an item $S \notin \mathcal{X}$. For any $c \in (0, \infty)$ and $\varepsilon > 0$, we can then choose $T^{\hat{k}}_\cdot$ and $U^{\hat{k}}_\cdot$ such that

$$c = \frac{\hat\lambda(A|\mathcal{X}\cup\{S\})}{\hat\lambda(A|\mathcal{X})}; \qquad \varepsilon > \left|\frac{\hat\lambda(B|\mathcal{Y})}{\lambda(B|\mathcal{Y})} - 1\right|, \quad \forall \mathcal{Y}, B \text{ s.t. } B \neq A. \qquad (18)$$

By (15) and Theorem 1, the strength of the similarity effect after adding $\hat{k}$ into $\mathcal{K}$ is

$$\hat\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}} = \frac{\hat\lambda(A|\mathcal{X})}{\hat\lambda(A|\mathcal{X}\cup\{S\})} \cdot \frac{\hat\lambda(B|\mathcal{X}\cup\{S\})}{\hat\lambda(B|\mathcal{X})} \approx \frac{1}{c} \cdot \frac{\lambda(B|\mathcal{X}\cup\{S\})}{\lambda(B|\mathcal{X})}. \qquad (19)$$

Because c can take an arbitrary value in $(0, \infty)$, the additional factor, (12) with $k = \hat{k}$, indeed allows $\hat\psi^{(\mathrm{sim})}_{A,B,S,\mathcal{X}}$ to take any positive value without affecting the value of $\lambda(B|\mathcal{Y})$, $\forall B \neq A$, for any $\mathcal{Y}$. The first part of (18) guarantees that this additional factor does not change $p(X|\mathcal{Y})$ for any X if $A \notin \mathcal{Y}$. Note that what we have shown is not limited to the similarity effect of (13). The RBM choice model can represent an arbitrary phenomenon where the choice set affects the ratio of the choice rate.
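The mechanism behind Theorem 1 can be illustrated numerically. Assuming one reading of the construction (the proof is in an appendix not reproduced here): give the added hidden unit $\hat{k}$ a nonzero weight T only on S, a very negative U for every item other than A, and solve the resulting ratio equation $(1 + e^{t+u})/(1 + e^u) = c$ for t. All constants below are ours, chosen for illustration:

```python
import numpy as np

def factor(t_set, u_item):
    """The extra factor (12): 1 + exp(T_set + U_item)."""
    return 1.0 + np.exp(t_set + u_item)

# Target ratio c for lambda(A | X ∪ {S}) / lambda(A | X); any c > 0 works.
c = 0.3
M = 50.0                              # large constant: -M is U for items other than A
u_A = max(0.0, np.log(2.0 / c))       # ensures c * (1 + e^{u_A}) > 1 below
t_S = np.log(c * (1.0 + np.exp(u_A)) - 1.0) - u_A   # solves the ratio equation
# T for items other than S is 0, so T^k_X = t_S exactly when S is in the set.

# Ratio of the new factors for item A, with and without S in the choice set:
ratio = factor(t_S, u_A) / factor(0.0, u_A)
assert np.isclose(ratio, c)

# For any item B != A, the factor stays within a vanishing distance of 1,
# so the rates lambda(B | Y) are essentially unchanged (second part of (18)):
assert factor(t_S, -M) - 1.0 < 1e-12
assert factor(0.0, -M) - 1.0 < 1e-12
print(ratio)
```

The same one-unit trick, applied with different targets, underlies the constructions for Theorems 2 and 3.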
According to [14], the attraction effect is represented by

$$p(A|\{A, B\}) < p(A|\{A, B, D\}). \qquad (20)$$

The MLM cannot represent the attraction effect, because the $\lambda_{\mathrm{MLM}}(X|\mathcal{Y})$ in (2) is independent of $\mathcal{Y}$, and we must have $\sum_{X\in\mathcal{X}} \lambda_{\mathrm{MLM}}(X|\mathcal{X}) \le \sum_{X\in\mathcal{Y}} \lambda_{\mathrm{MLM}}(X|\mathcal{Y})$ for $\mathcal{X} \subset \mathcal{Y}$, which in turn implies the regularity principle: $p(X|\mathcal{X}) \ge p(X|\mathcal{Y})$ for $\mathcal{X} \subset \mathcal{Y}$. Motivated by (20), we define the strength of the attraction effect as the magnitude of the change in the choice probability of an item when another item is added into the choice set. Formally,

Definition 2. For $A \in \mathcal{X}$, the strength of the attraction effect of D on A with $\mathcal{X}$ is defined as follows:

$$\psi^{(\mathrm{att})}_{A,D,\mathcal{X}} \equiv \frac{p(A|\mathcal{X}\cup\{D\})}{p(A|\mathcal{X})}. \qquad (21)$$

When there is no attraction effect, adding D into $\mathcal{X}$ can only decrease $p(A|\mathcal{X})$; hence, $\psi^{(\mathrm{att})}_{A,D,\mathcal{X}} \le 1$. The standard definition of the attraction effect (20) implies $\psi^{(\mathrm{att})}_{A,D,\mathcal{X}} > 1$. We study the strength of this attraction effect without the restriction that A “dominates” D (see Figure 1 (b)). We prove the following theorem in Appendix C:

Theorem 2. Consider the two RBM choice models in Theorem 1. The first RBM choice model has the choice rate given by (3), and the second RBM choice model has the choice rate given by (17). Let $p(\cdot|\cdot)$ denote the choice probability for the first RBM choice model and $\hat p(\cdot|\cdot)$ denote the choice probability for the second RBM choice model. Consider an item $A \in \mathcal{X}$ and an item $D \notin \mathcal{X}$. For any $r \in \left(p(A|\mathcal{X}\cup\{D\}),\, 1/p(A|\mathcal{X})\right)$ and $\varepsilon > 0$, we can choose $T^{\hat{k}}_\cdot$, $U^{\hat{k}}_\cdot$ such that

$$r = \frac{\hat p(A|\mathcal{X}\cup\{D\})}{\hat p(A|\mathcal{X})}; \qquad \varepsilon > \left|\frac{\hat\lambda(B|\mathcal{Y})}{\lambda(B|\mathcal{Y})} - 1\right|, \quad \forall \mathcal{Y}, B \text{ s.t. } B \neq A. \qquad (22)$$

We expect that the range, $\left(p(A|\mathcal{X}\cup\{D\}),\, 1/p(A|\mathcal{X})\right)$, of r in the theorem covers the attraction effect in practice. Also, this range is the widest possible in the following sense. The factor (12) can only increase $\lambda(X|\mathcal{Y})$ for any X, $\mathcal{Y}$. The form of (1) then implies that, to decrease $p(A|\mathcal{Y})$, we must increase $\lambda(X|\mathcal{Y})$ for $X \neq A$. However, increasing $\lambda(X|\mathcal{Y})$ for $X \neq A$ is not allowed due to the second part of (22) with $\varepsilon \to 0$.
Namely, the additional factor, (12) with $k = \hat{k}$, can only increase $p(A|\mathcal{Y})$ for any $\mathcal{Y}$ under the condition of the second part of (22). The lower limit, $p(A|\mathcal{X}\cup\{D\})$, is achieved when $\hat p(A|\mathcal{X}) \to 1$, while keeping $\hat p(A|\mathcal{X}\cup\{D\}) \approx p(A|\mathcal{X}\cup\{D\})$. The upper limit, $1/p(A|\mathcal{X})$, is achieved when $\hat p(A|\mathcal{X}\cup\{D\}) \to 1$, while keeping $\hat p(A|\mathcal{X}) \approx p(A|\mathcal{X})$.

According to [18], the compromise effect is formally represented by

$$\frac{p(C|\{A, B, C\})}{\sum_{X\in\{A,C\}} p(X|\{A, B, C\})} > p(C|\{A, C\}) \quad\text{and}\quad \frac{p(C|\{A, B, C\})}{\sum_{X\in\{B,C\}} p(X|\{A, B, C\})} > p(C|\{B, C\}). \qquad (23)$$

The MLM cannot represent the compromise effect, because the $\lambda_{\mathrm{MLM}}(X|\mathcal{Y})$ in (2) is independent of $\mathcal{Y}$, which in turn makes the inequalities in (23) equalities. Motivated by (23), we define the strength of the compromise effect as the magnitude of the change in the conditional probability of selecting an item, C, given that either C or another item, A, is selected when yet another item, B, is added into the choice set. More precisely, we also exchange the roles of A and B, and study the minimum magnitude of those changes:

Definition 3. For a choice set, $\mathcal{X}$, and items, A, B, C, such that $A, B, C \in \mathcal{X}$, let

$$\varphi_{A,B,C,\mathcal{X}} \equiv \frac{q_{AC}(C|\mathcal{X})}{q_{AC}(C|\mathcal{X}\setminus\{B\})}, \qquad (24)$$

where, for $\mathcal{Y}$ such that $A, C \in \mathcal{Y}$, we define

$$q_{AC}(C|\mathcal{Y}) \equiv \frac{p(C|\mathcal{Y})}{\sum_{X\in\{A,C\}} p(X|\mathcal{Y})}. \qquad (25)$$

The strength of the compromise effect of A and B on C with $\mathcal{X}$ is then defined as

$$\psi^{(\mathrm{com})}_{A,B,C,\mathcal{X}} \equiv \min\left\{\varphi_{A,B,C,\mathcal{X}},\, \varphi_{B,A,C,\mathcal{X}}\right\}. \qquad (26)$$

Here, we do not have the restriction that C is a “compromise” between A and B (see Figure 1 (c)). In Appendix C we prove the following theorem:

Theorem 3. Consider a choice set, $\mathcal{X}$, and three items, $A, B, C \in \mathcal{X}$. Consider the two RBM choice models in Theorem 2. Let $\hat\psi^{(\mathrm{com})}_{A,B,C,\mathcal{X}}$ be defined analogously to (26) but with $\hat p(\cdot|\cdot)$. Let

$$\overline{q} \equiv \max\left\{q_{AC}(C|\mathcal{X}\setminus\{B\}),\, q_{BC}(C|\mathcal{X}\setminus\{A\})\right\} \qquad (27)$$
$$\underline{q} \equiv \min\left\{q_{AC}(C|\mathcal{X}),\, q_{BC}(C|\mathcal{X})\right\}. \qquad (28)$$

Then, for any $r \in (\underline{q},\, 1/\overline{q})$ and $\varepsilon > 0$, we can choose $T^{\hat{k}}_\cdot$, $U^{\hat{k}}_\cdot$ such that

$$r = \hat\psi^{(\mathrm{com})}_{A,B,C,\mathcal{X}}; \qquad \varepsilon > \left|\frac{\hat\lambda(X|\mathcal{Y})}{\lambda(X|\mathcal{Y})} - 1\right|, \quad \forall \mathcal{Y}, X \text{ s.t. } X \neq C.$$
(29)

We expect that the range of r in the theorem covers the compromise effect in practice. Also, this range is best possible in the sense analogous to what we have discussed with the range in Theorem 2. Because the additional factor, (12) with $k = \hat{k}$, can only increase $p(C|\mathcal{Y})$ for any $\mathcal{Y}$ under the condition of the second part of (29), it can only increase $q_{XC}(C|\mathcal{Y})$ for $X \in \{A, B\}$. The lower limit, $\underline{q}$, is achieved when $q_{XC}(C|\mathcal{X}\setminus\{X'\}) \to 1$ (where $X'$ denotes the other item of $\{A, B\}$), while keeping $q_{XC}(C|\mathcal{X})$ approximately unchanged, for $X \in \{A, B\}$. The upper limit, $1/\overline{q}$, is achieved when $q_{XC}(C|\mathcal{X}) \to 1$, while keeping $q_{XC}(C|\mathcal{X}\setminus\{X'\})$ approximately unchanged, for $X \in \{A, B\}$.

4 Numerical experiments

We now validate the effectiveness of the RBM choice model in predicting the choices made by humans. Here we use the dataset from [2], which is based on the survey conducted in Switzerland, where people are asked to choose a means of transportation from given options. A subset of the dataset is used to train the RBM choice model, which is then used to predict the choice in the remaining dataset. In Appendix B.2, we also conduct an experiment with an artificial dataset and show that the RBM choice model can indeed be trained to represent each of the typical choice phenomena. This flexibility in the representation is the basis of the predictive accuracy of the RBM choice model to be presented in this section. All of our experiments are run on a single core of a Windows PC with main memory of 8 GB and Core i5 CPU of 2.6 GHz.

The dataset [2] consists of 10,728 choices that 1,192 people have made from a varying choice set. For those who own a car, the choice set has three items: a train, a maglev, and a car. For those who do not own a car, the choice set consists of a train and a maglev. The train can operate at the interval of 30, 60, or 120 minutes. The maglev can operate at the interval of 10, 20, or 30 minutes. The trains (or maglevs) with different intervals are considered to be distinct items in our experiment.
Figure 3 (a) shows the empirical choice probability for each choice set. Each choice set consists of a train with a particular interval (blue, shaded) and a maglev with a particular interval (red, mesh), possibly with a car (yellow, circles). The interval of the maglev varies as is indicated at the bottom of the figure. The interval of the train is indicated at the left side of the figure. For each combination of the intervals of the train and the maglev, there are two choice sets, with or without a car. We evaluate the accuracy of the RBM choice model in predicting the choice probability for an arbitrary choice set, when the RBM choice model is trained with the data of the choice for the remaining 17 choice sets (i.e., we have 18 test cases). We train the RBM choice model (or the MLM) by the use of discriminative training with stochastic gradient descent, using the mini-batch of size 50 and the learning rate of $\eta = 0.1$ (see Appendix A.1). Each run of the evaluation uses the entire training dataset 50 times for training, and the evaluation is repeated five times by varying the initial values of the parameters. The elements of T and U are initialized independently with samples from the uniform distribution on $\left[-10/\sqrt{\max(|\mathcal{I}|, |\mathcal{K}|)},\; 10/\sqrt{\max(|\mathcal{I}|, |\mathcal{K}|)}\right]$, where $|\mathcal{I}| = 7$ is the number of items under consideration, and $|\mathcal{K}|$ is the number of hidden nodes. The elements of b are initialized with samples from the uniform distribution on $[-1, 1]$. Figure 3 (b) shows the Kullback-Leibler (KL) divergence between the predicted distribution of the choice and the corresponding true distribution.
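The KL divergence used for this evaluation can be computed in a few lines. The probabilities below are illustrative numbers of our own, not the actual survey frequencies:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) between two choice distributions over the same choice set."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0                      # terms with p = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Illustrative empirical vs. predicted distributions over {train, maglev, car}:
empirical = [0.45, 0.35, 0.20]
predicted = [0.40, 0.38, 0.22]
print(round(kl_divergence(empirical, predicted), 4))   # → 0.0052
```

Averaging this quantity over the 18 held-out choice sets (and the five initializations) gives the numbers reported in Figure 3 (b).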
Figure 3: Dataset (a), the predictive error of the RBM choice model against the number of hidden units (b), and the choice probabilities learned by the RBM choice model (c) and the MLM (d). [Plot panels omitted: panels (a), (c), and (d) show choice probabilities for each combination of train interval (Train30/Train60/Train120) and maglev interval (Maglev10/Maglev20/Maglev30), with and without a car; panel (b) plots the average KL divergence for training and test data against the number of hidden units.]

The dots connected with a solid line show the average KL divergence over all of the 18 test cases and five runs with varying initialization. The average KL divergence is also evaluated on training data and is shown with a dashed line. The confidence interval represents the corresponding standard deviation. The wide confidence interval is largely due to the variance between test instances (see Figure 4 in the appendix). The horizontal axis shows the number of hidden units in the RBM choice model, where zero hidden units corresponds to the MLM. The average KL divergence is reduced from 0.12 for the MLM to 0.02 for the RBM choice model with 16 hidden units, an improvement by a factor of six. Figure 3 (c)-(d) shows the choice probabilities given by (c) the RBM choice model with 16 hidden units and (d) the MLM, after these models are trained for the test case where the choice set consists of the train with 30-minute interval (Train30) and the maglev with 20-minute interval (Maglev20).
Observe that the RBM choice model gives choice probabilities that are close to the true choice probabilities shown in Figure 3 (a), while the MLM has difficulty fitting them. Taking a closer look at Figure 3 (a), we can observe that the MLM is fundamentally incapable of learning this dataset. For example, Train30 is more popular than Maglev20 for people who do not own cars, while the preference is reversed for car owners (i.e., the attraction effect). The attraction effect can also be seen for the combination of Maglev30 and Train60. As we have discussed in Section 3, the MLM cannot represent such attraction effects, but the RBM choice model can.

5 Related work

We now review the prior work related to our contributions. We will see that all of the existing choice models either cannot represent at least one of the typical choice phenomena or do not have systematic training algorithms. We will also see that the prior work has analyzed choice models with respect to whether a choice model can represent the typical choice phenomena (or other phenomena), but only in specific cases of specific strength. In contrast, our analysis shows that the RBM choice model can represent the typical choice phenomena for all cases of the specified strength. A majority of the prior work on choice models concerns the MLM and its variants, such as the hierarchical MLM [5], the multinomial probit model [6], and, more generally, random utility models [17]. In particular, the attraction effect cannot be represented by these variants of the MLM [13]. In general, when the choice probability depends only on values that are determined independently for each item (e.g., the models of [3, 7]), none of the typical choice phenomena can be represented [18]. Recently, Hruschka has proposed a choice model based on an RBM [9], but his choice model cannot represent any of the typical choice phenomena, because the corresponding choice rate is independent of the choice set.
It is thus nontrivial how we use the RBM as a choice model in such a way that the typical choice phenomena can be represented. In [11], a hierarchical Bayesian choice model is shown to represent the attraction effect in a specific case. There also exist choice models that have been numerically shown to represent all of the typical choice phenomena for some specific cases. For example, sequential sampling models, including the decision field theory [4] and the leaky competing accumulator model [19], are meant to directly mimic the cognitive process of the human making a choice [12]. However, no paper has shown an algorithm that can train a sequential sampling model in such a way that the trained model exhibits the typical choice phenomena. Shenoy and Yu propose a hierarchical Bayesian model to represent the three typical choice phenomena [16]. Although they perform inferences of the posterior distributions that are needed to compute the choice probabilities with their model, they do not show how to train their model to fit the choice probabilities to given data. Their experiments show that their model represents the typical choice phenomena in particular cases, where the parameters of the model are set manually. Rieskamp et al. classify choice models according to whether a choice model can never represent a certain phenomenon or can do so in some cases to some degree [13]. The phenomena studied in [13] are not limited to the typical choice phenomena, but they list the typical choice phenomena as the ones that are robust and significant. Also, Otter et al. exclusively study all of the typical choice phenomena [12]. Luce is a pioneer of the formal analysis of choice models, which however is largely qualitative [10]. For example, Lemma 3 of [10] can tell us whether a given choice model satisfies the IIA in (16) for all cases or it violates the IIA for some cases to some degree. 
We address the new question of to what degree a choice model can represent each of the typical choice phenomena (e.g., to what degree the RBM choice model can violate the IIA). Finally, our theorems can be contrasted with the universal approximation theorem for RBMs, which states that an arbitrary distribution can be approximated arbitrarily closely with a sufficient number of hidden units [15, 8]. Our theorems, in contrast, show that a single hidden unit suffices to represent the typical choice phenomena of the strength specified in the theorems.

6 Conclusion

The RBM choice model is developed to represent the typical choice phenomena that have been reported frequently in the literature of cognitive psychology and related areas. Our work motivates a new direction of research on using RBMs to model such complex human behavior. Particularly interesting behavior includes behavior that is considered irrational or that results from cognitive biases (see e.g. [1]). The advantages of the RBM choice model demonstrated in this paper include its flexibility in representing complex behavior and the availability of effective training algorithms. The RBM choice model can incorporate the attributes of the items in its parameters. Specifically, one can represent the parameters of the RBM choice model as functions of u_X, the attributes of X ∈ I, analogously to the MLM, where b_X can be represented as b_X = α · u_X, as we have discussed after (2). The focus of this paper is on designing the fundamental structure of the RBM choice model and analyzing its fundamental properties; a study of the RBM choice model with attributes will be reported elsewhere. Although the attributes are important for generalizing the RBM choice model to unseen items, our experiments suggest that the RBM choice model, without attributes, can learn the typical choice phenomena from given choice sets and generalize them to unseen choice sets.
Acknowledgements

A part of this research is supported by JST, CREST.

References

[1] D. Ariely. Predictably Irrational: The Hidden Forces That Shape Our Decisions. Harper Perennial, revised and expanded edition, 2010.
[2] M. Bierlaire, K. Axhausen, and G. Abay. The acceptance of modal innovation: The case of Swissmetro. In Proceedings of the First Swiss Transportation Research Conference, March 2001.
[3] E. Bonilla, S. Guo, and S. Sanner. Gaussian process preference elicitation. In Advances in Neural Information Processing Systems 23, pages 262–270, 2010.
[4] J. R. Busemeyer and J. T. Townsend. Decision field theory: A dynamic cognition approach to decision making. Psychological Review, 100:432–459, 1993.
[5] O. Chapelle and Z. Harchaoui. A machine learning approach to conjoint analysis. In Advances in Neural Information Processing Systems 17, pages 257–264, 2005.
[6] B. Eric, N. de Freitas, and A. Ghosh. Active preference learning with discrete choice data. In Advances in Neural Information Processing Systems 20, pages 409–416, 2008.
[7] V. F. Farias, S. Jagabathula, and D. Shah. A nonparametric approach to modeling choice with limited data. Management Science, 59(2):305–322, 2013.
[8] Y. Freund and D. Haussler. Unsupervised learning of distributions on binary vectors using two layer networks. Technical Report UCSC-CRL-94-25, University of California, Santa Cruz, June 1994.
[9] H. Hruschka. Analyzing market baskets by restricted Boltzmann machines. OR Spectrum, pages 1–22, 2012.
[10] R. D. Luce. Individual choice behavior: A theoretical analysis. John Wiley and Sons, New York, NY, 1959.
[11] T. Osogami and T. Katsuki. A hierarchical Bayesian choice model with visibility.
In Proceedings of the 22nd International Conference on Pattern Recognition (ICPR 2014), pages 3618–3623, August 2014.
[12] T. Otter, J. Johnson, J. Rieskamp, G. M. Allenby, J. D. Brazell, A. Diederich, J. W. Hutchinson, S. MacEachern, S. Ruan, and J. Townsend. Sequential sampling models of choice: Some recent advances. Marketing Letters, 19(3-4):255–267, 2008.
[13] J. Rieskamp, J. R. Busemeyer, and B. A. Mellers. Extending the bounds of rationality: Evidence and theories of preferential choice. Journal of Economic Literature, 44:631–661, 2006.
[14] R. M. Roe, J. R. Busemeyer, and J. T. Townsend. Multialternative decision field theory: A dynamic connectionist model of decision making. Psychological Review, 108(2):370–392, 2001.
[15] N. L. Roux and Y. Bengio. Representational power of restricted Boltzmann machines and deep belief networks. Neural Computation, 20(6):1631–1649, 2008.
[16] P. Shenoy and A. J. Yu. Rational preference shifts in multi-attribute choice: What is fair? In Proceedings of the Annual Meeting of the Cognitive Science Society (CogSci 2013), pages 1300–1305, 2013.
[17] K. Train. Discrete Choice Methods with Simulation. Cambridge University Press, second edition, 2009.
[18] A. Tversky and I. Simonson. Context-dependent preferences. Management Science, 39(10):1179–1189, 1993.
[19] M. Usher and J. L. McClelland. Loss aversion and inhibition in dynamical models of multialternative choice. Psychological Review, 111(3):757–769, 2004.
On the Statistical Consistency of Plug-in Classifiers for Non-decomposable Performance Measures Harikrishna Narasimhan†, Rohit Vaish†, Shivani Agarwal Department of Computer Science and Automation Indian Institute of Science, Bangalore – 560012, India {harikrishna, rohit.vaish, shivani}@csa.iisc.ernet.in Abstract We study consistency properties of algorithms for non-decomposable performance measures that cannot be expressed as a sum of losses on individual data points, such as the F-measure used in text retrieval and several other performance measures used in class imbalanced settings. While there has been much work on designing algorithms for such performance measures, there is limited understanding of the theoretical properties of these algorithms. Recently, Ye et al. (2012) showed consistency results for two algorithms that optimize the F-measure, but their results apply only to an idealized setting, where precise knowledge of the underlying probability distribution (in the form of the ‘true’ posterior class probability) is available to a learning algorithm. In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable ‘estimate’ of the class probability, and provide a general methodology to show consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability function thresholded suitably. We use this template to derive consistency results for plug-in algorithms for the F-measure and for the geometric mean of TPR and precision; to our knowledge, these are the first such results for these measures. In addition, for continuous distributions, we show consistency of plug-in algorithms for any performance measure that is a continuous and monotonically increasing function of TPR and TNR. 
Experimental results confirm our theoretical findings.

1 Introduction

In many real-world applications, the performance measure used to evaluate a learning model is non-decomposable and cannot be expressed as a summation or expectation of losses on individual data points; this includes, for example, the F-measure used in information retrieval [1], and several combinations of the true positive rate (TPR) and true negative rate (TNR) used in class-imbalanced classification settings [2–5] (see Table 1). While there has been much work in the last two decades on designing learning algorithms for such performance measures [6–14], our understanding of the statistical consistency of these methods is rather limited. Recently, Ye et al. (2012) showed consistency results for two algorithms for the F-measure [15] that use the 'true' posterior class probability to make predictions on instances. These results implicitly assume that the given learning algorithm has precise knowledge of the underlying probability distribution (in the form of the true posterior class probability); this assumption does not, however, hold in most real-world settings. In this paper, we consider a family of methods that construct a plug-in classifier by applying an empirically determined threshold to a suitable 'estimate' of the class probability (obtained using a model learned from a sample drawn from the underlying distribution). We provide a general methodology to show statistical consistency of these methods (under a mild assumption on the underlying distribution) for any performance measure that can be expressed as a continuous function of the TPR, the TNR and the class proportion, and for which the Bayes optimal classifier is the class probability function thresholded at a suitable point. We use our proof template to derive consistency results for the F-measure (using a recent result by [15] on the Bayes optimal classifier for F-measure), and the geometric mean of TPR and precision; to our knowledge, these are the first such results for these performance measures. Using our template, we also obtain a recent consistency result by Menon et al. [16] for the arithmetic mean of TPR and TNR. In addition, we show that for continuous distributions, the optimal classifier for any performance measure that is a continuous and monotonically increasing function of TPR and TNR is necessarily of the requisite thresholded form, thus establishing consistency of the plug-in algorithms for all such performance measures. Experiments on real and synthetic data confirm our theoretical findings, and show that the plug-in methods considered here are competitive with the state-of-the-art SVMperf method [12] for non-decomposable measures.

†Both authors contributed equally to this paper.

Table 1: Performance measures considered in our study. Here β ∈ (0, ∞) and p = P(y = 1). Each performance measure here can be expressed as P^Ψ_D[h] = Ψ(TPR_D[h], TNR_D[h], p). The last column contains the assumption on the distribution D under which the plug-in algorithm considered in this work is statistically consistent w.r.t. the performance measure (details in Sections 3 and 5).

Measure       | Definition                        | Ref.    | Ψ(u, v, p)                                  | Assumption on D
AM (1-BER)    | (TPR + TNR)/2                     | [17–19] | (u + v)/2                                   | Assumption A
Fβ-measure    | (1 + β²) / (β²/Prec + 1/TPR)      | [1,19]  | (1 + β²)pu / (p + β²(pu + (1 − p)(1 − v)))  | Assumption A
G-TP/PR       | √(TPR · Prec)                     | [3]     | √(pu² / (pu + (1 − p)(1 − v)))              | Assumption A
G-Mean (GM)   | √(TPR · TNR)                      | [2,3]   | √(uv)                                       | Assumption B
H-Mean (HM)   | 2 / (1/TPR + 1/TNR)               | [4]     | 2uv / (u + v)                               | Assumption B
Q-Mean (QM)   | 1 − ((1 − TPR)² + (1 − TNR)²)/2   | [5]     | 1 − ((1 − u)² + (1 − v)²)/2                 | Assumption B

Related Work.
Much of the work on non-decomposable performance measures in binary classification settings has focused on the F-measure; this work includes the empirical plug-in algorithm considered here [6], cost-weighted versions of SVM [9], methods that optimize convex and non-convex approximations to the F-measure [10–14], and decision-theoretic methods that learn a class probability estimate and compute predictions that maximize the expected F-measure on a test set [7–9]. While there has been a considerable amount of work on consistency of algorithms for univariate performance measures [16,20–22], theoretical results on non-decomposable measures have been limited to characterizing the Bayes optimal classifier for the F-measure [15, 23, 24], and some consistency results for the F-measure for certain idealized versions of the empirical plug-in and decision-theoretic methods that have access to the true class probability [15]. There has also been some work on algorithms that optimize the F-measure in multi-label classification settings [25, 26] and consistency results for these methods [26,27], but these results do not apply to the binary classification setting that we consider here; in particular, in a binary classification setting, the F-measure that one seeks to optimize is a single number computed over the entire training set, while in a multi-label setting, the goal is to optimize the mean F-measure computed over multiple labels on individual instances.

Organization. We start with some preliminaries in Section 2. Section 3 presents our main result on consistency of plug-in algorithms for non-decomposable performance measures that are functions of TPR and TNR. Section 4 contains applications of our proof template to the AM, Fβ and G-TP/PR measures, and Section 5 contains results under continuous distributions for performance measures that are monotonic in TPR and TNR. Section 6 describes our experimental results on real and synthetic data sets.
Proofs not provided in the main text can be found in the Appendix.

2 Preliminaries

Problem Setup. Let X be any instance space. Given a training sample S = ((x_1, y_1), ..., (x_n, y_n)) ∈ (X × {±1})^n, our goal is to learn a binary classifier ĥ_S : X → {±1} to make predictions for new instances drawn from X. Assume all examples (both training and test) are drawn iid according to some unknown probability distribution D on X × {±1}. Let η(x) = P(y = 1 | x) and p = P(y = 1) (both under D). We will be interested in settings where the performance of ĥ_S is measured via a non-decomposable performance measure P : {±1}^X → R_+, which cannot be expressed as a sum or expectation of losses on individual examples.

Non-decomposable performance measures. Let us first define the following quantities associated with a binary classifier h : X → {±1}:

  True Positive Rate / Recall:  TPR_D[h] = P(h(x) = 1 | y = 1)
  True Negative Rate:           TNR_D[h] = P(h(x) = −1 | y = −1)
  Precision:                    Prec_D[h] = P(y = 1 | h(x) = 1) = p · TPR_D[h] / (p · TPR_D[h] + (1 − p)(1 − TNR_D[h])).

In this paper, we will consider non-decomposable performance measures that can be expressed as a function of the TPR and TNR and the class proportion p. Specifically, let Ψ : [0, 1]^3 → R_+; then the Ψ-performance of h w.r.t. D, which we will denote P^Ψ_D[h], is defined as

  P^Ψ_D[h] = Ψ(TPR_D[h], TNR_D[h], p).

For example, for β > 0, the Fβ-measure of h can be defined through the function Ψ_{Fβ} : [0, 1]^3 → R_+ given by Ψ_{Fβ}(u, v, p) = (1 + β²)pu / (p + β²(pu + (1 − p)(1 − v))), which gives

  P^{Fβ}_D[h] = (1 + β²) / (β²/Prec_D[h] + 1/TPR_D[h]).

Table 1 gives several examples of non-decomposable performance measures that are used in practice.
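To make these definitions concrete, here is a small sketch (not from the paper) that computes the empirical TPR, TNR and class proportion from ±1-valued labels and predictions, together with the function Ψ_{Fβ} defined above:

```python
def empirical_rates(y_true, y_pred):
    """Empirical TPR, TNR and class proportion p from ±1 labels/predictions."""
    n_pos = sum(1 for y in y_true if y == 1)
    n_neg = len(y_true) - n_pos
    tp = sum(1 for y, h in zip(y_true, y_pred) if y == 1 and h == 1)
    tn = sum(1 for y, h in zip(y_true, y_pred) if y == -1 and h == -1)
    return tp / n_pos, tn / n_neg, n_pos / len(y_true)

def psi_fbeta(u, v, p, beta=1.0):
    """Psi_{F_beta}(u, v, p) = (1+b^2) p u / (p + b^2 (p u + (1-p)(1-v)))."""
    b2 = beta ** 2
    return (1 + b2) * p * u / (p + b2 * (p * u + (1 - p) * (1 - v)))

u, v, p = empirical_rates([1, 1, 1, -1, -1, -1], [1, 1, -1, 1, -1, -1])
print(u, v, p, psi_fbeta(u, v, p))  # TPR = TNR = 2/3, p = 1/2, F1 = 2/3
```

On this toy sample, precision and recall are both 2/3, and Ψ_{F1}(2/3, 2/3, 1/2) reproduces the familiar F1 value 2·P·R/(P + R) = 2/3, as expected from the equivalence noted above.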
We will also find it useful to consider empirical versions of these performance measures calculated from a sample S, which we will denote P̂^Ψ_S[h]:

  P̂^Ψ_S[h] = Ψ(TPR̂_S[h], TNR̂_S[h], p̂_S),   (1)

where p̂_S = (1/n) Σ_{i=1}^n 1(y_i = 1) is an empirical estimate of p, and

  TPR̂_S[h] = (1/(p̂_S n)) Σ_{i=1}^n 1(h(x_i) = 1, y_i = 1);   TNR̂_S[h] = (1/((1 − p̂_S) n)) Σ_{i=1}^n 1(h(x_i) = −1, y_i = −1)

are the empirical TPR and TNR respectively.¹

Ψ-consistency. We will be interested in the optimum value of P^Ψ_D over all classifiers:

  P^{Ψ,*}_D = sup_{h : X → {±1}} P^Ψ_D[h].

In particular, one can define the Ψ-regret of a classifier h as

  regret^Ψ_D[h] = P^{Ψ,*}_D − P^Ψ_D[h].

A learning algorithm is then said to be Ψ-consistent if the Ψ-regret of the classifier ĥ_S output by the algorithm on seeing training sample S converges in probability² to 0: regret^Ψ_D[ĥ_S] →_P 0.

Class of Threshold Classifiers. We will find it useful to define, for any function f : X → [0, 1], the set of classifiers obtained by assigning a threshold to f:

  T_f = {sign ∘ (f − t) | t ∈ [0, 1]},

where sign(u) = 1 if u > 0 and −1 otherwise. For a given f, we shall also define the thresholds corresponding to maximum population and empirical measures respectively (when they exist) as

  t*_{D,f,Ψ} ∈ argmax_{t ∈ [0,1]} P^Ψ_D[sign ∘ (f − t)];   t̂_{S,f,Ψ} ∈ argmax_{t ∈ [0,1]} P̂^Ψ_S[sign ∘ (f − t)].

Plug-in Algorithms and Result of Ye et al. (2012). In this work, we consider a family of plug-in algorithms, which divide the input sample S into samples (S1, S2), use a suitable class probability estimation (CPE) algorithm to learn a class probability estimator η̂_{S1} : X → [0, 1] from S1, and output a classifier ĥ_S(x) = sign(η̂_{S1}(x) − t̂_{S2,η̂_{S1},Ψ}), where t̂_{S2,η̂_{S1},Ψ} is a threshold that maximizes the empirical performance measure on S2 (see Algorithm 1). We note that this approach is different from the idealized plug-in method analyzed by Ye et al.
(2012) in the context of F-measure optimization, where a classifier is learned by assigning an empirical threshold to the 'true' class probability function η [15]; the consistency result therein is useful only if precise knowledge of η is available to a learning algorithm, which is not the case in most practical settings.

L1-consistency of a CPE algorithm. Let C be a CPE algorithm, and for any sample S, denote η̂_S = C(S). We will say C is L1-consistent w.r.t. a distribution D if E_x[ |η̂_S(x) − η(x)| ] →_P 0.

¹In the setting considered here, the goal is to maximize a (non-decomposable) function of expectations; we note that this is different from the decision-theoretic setting in [15], where one looks at the expectation of a non-decomposable performance measure on n examples, and seeks to maximize its limiting value as n → ∞.
²We say φ(S) converges in probability to a ∈ R, written φ(S) →_P a, if for all ϵ > 0, P_{S∼D^n}(|φ(S) − a| ≥ ϵ) → 0 as n → ∞.

Algorithm 1: Plug-in with Empirical Threshold for Performance Measure P^Ψ : 2^X → R_+
1: Input: S = ((x_1, y_1), ..., (x_n, y_n)) ∈ (X × {±1})^n
2: Parameter: α ∈ (0, 1)
3: Let S1 = ((x_1, y_1), ..., (x_{n1}, y_{n1})), S2 = ((x_{n1+1}, y_{n1+1}), ..., (x_n, y_n)), where n1 = ⌈nα⌉
4: Learn η̂_{S1} = C(S1), where C : ∪_{n=1}^∞ (X × {±1})^n → [0, 1]^X is a suitable CPE algorithm
5: t̂_{S2,η̂_{S1},Ψ} ∈ argmax_{t ∈ [0,1]} P̂^Ψ_{S2}[sign ∘ (η̂_{S1} − t)]
6: Output: Classifier ĥ_S(x) = sign(η̂_{S1}(x) − t̂_{S2,η̂_{S1},Ψ})

3 A Generic Proof Template for Ψ-consistency of Plug-in Algorithms

We now give a general result for showing consistency of the plug-in method in Algorithm 1 for any performance measure that can be expressed as a continuous function of TPR and TNR, and for which the Bayes optimal classifier is obtained by suitably thresholding the class probability function.

Assumption A. We will say that a probability distribution D on X × {±1} satisfies Assumption A w.r.t.
Ψ if t*_{D,η,Ψ} exists and is in (0, 1), and the cumulative distribution functions of the random variable η(x) conditioned on y = 1 and on y = −1, P(η(x) ≤ z | y = 1) and P(η(x) ≤ z | y = −1), are continuous at z = t*_{D,η,Ψ}.³

Note that this assumption holds for any distribution D for which η(x) conditioned on y = 1 and on y = −1 is continuous, and also for any D for which η(x) conditioned on y = 1 and on y = −1 is mixed, provided the optimum threshold t*_{D,η,Ψ} for P^Ψ exists and is not a point of discontinuity. Under the above assumption, and assuming that the CPE algorithm used in Algorithm 1 is L1-consistent (which holds for any algorithm that uses a regularized empirical risk minimization of a proper loss [16,28]), we have our main consistency result.

Theorem 1 (Ψ-consistency of Algorithm 1). Let Ψ : [0, 1]^3 → R_+ be continuous in each argument. Let D be a probability distribution on X × {±1} that satisfies Assumption A w.r.t. Ψ, and for which the Bayes optimal classifier is of the form h^{Ψ,*}(x) = sign ∘ (η(x) − t*_{D,η,Ψ}). If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is Ψ-consistent w.r.t. D.

Before we prove the above theorem, we will find it useful to state the following lemmas. In our first lemma, we state that the TPR and TNR of a classifier constructed by thresholding a suitable class probability estimate at a fixed c ∈ (0, 1) converge respectively to the TPR and TNR of the classifier obtained by thresholding the true class probability function η at c.

Lemma 2 (Convergence of TPR and TNR for fixed threshold). Let D be a distribution on X × {±1}. Let η̂_S : X → [0, 1] be generated by an L1-consistent CPE algorithm. Let c ∈ (0, 1) be an a-priori fixed constant such that the cumulative distribution functions P(η(x) ≤ z | y = 1) and P(η(x) ≤ z | y = −1) are continuous at z = c. We then have

  TPR_D[sign ∘ (η̂_S − c)] →_P TPR_D[sign ∘ (η − c)];   TNR_D[sign ∘ (η̂_S − c)] →_P TNR_D[sign ∘ (η − c)].

As a corollary to the above lemma, we have a similar result for P^Ψ.
Lemma 3 (Convergence of P^Ψ for fixed threshold). Let Ψ : [0, 1]^3 → R_+ be continuous in each argument. Under the statement of Lemma 2, we have P^Ψ_D[sign ∘ (η̂_S − c)] →_P P^Ψ_D[sign ∘ (η − c)].

We next state a result showing convergence of the empirical performance measure to its population value for a fixed classifier, and a uniform convergence result over a class of thresholded classifiers.

Lemma 4 (Concentration result for P^Ψ). Let Ψ : [0, 1]^3 → R_+ be continuous in each argument. Then for any fixed h : X → {±1} and ϵ > 0, P_{S∼D^n}( |P^Ψ_D[h] − P̂^Ψ_S[h]| ≥ ϵ ) → 0 as n → ∞.

³For simplicity, we assume that t*_{D,η,Ψ} is in (0, 1); our results easily extend to the case when t*_{D,η,Ψ} ∈ [0, 1].

Lemma 5 (Uniform convergence of P^Ψ over threshold classifiers). Let Ψ : [0, 1]^3 → R_+ be continuous in each argument. For any f : X → [0, 1] and ϵ > 0,

  P_{S∼D^n}( ∪_{θ ∈ T_f} { |P^Ψ_D[θ] − P̂^Ψ_S[θ]| ≥ ϵ } ) → 0 as n → ∞.

We are now ready to prove our main theorem.

Proof of Theorem 1. Recall that t*_{D,η,Ψ} ∈ argmax_{t ∈ [0,1]} P^Ψ_D[sign ∘ (η − t)] exists by Assumption A. In the following, we shall use t* in place of t*_{D,η,Ψ} and t̂_{S2,S1} in place of t̂_{S2,η̂_{S1},Ψ}. We have

  regret^Ψ_D[ĥ_S] = regret^Ψ_D[sign ∘ (η̂_{S1} − t̂_{S2,S1})] = P^{Ψ,*}_D − P^Ψ_D[sign ∘ (η̂_{S1} − t̂_{S2,S1})] = P^Ψ_D[sign ∘ (η − t*)] − P^Ψ_D[sign ∘ (η̂_{S1} − t̂_{S2,S1})],

which follows from the assumption on the Bayes optimal classifier for P^Ψ. Adding and subtracting empirical and population versions of P^Ψ computed on certain classifiers,

  regret^Ψ_D[sign ∘ (η̂_{S1} − t̂_{S2,S1})]
    = P^Ψ_D[sign ∘ (η − t*)] − P^Ψ_D[sign ∘ (η̂_{S1} − t*)]                           (term1)
    + P^Ψ_D[sign ∘ (η̂_{S1} − t*)] − P̂^Ψ_{S2}[sign ∘ (η̂_{S1} − t̂_{S2,S1})]          (term2)
    + P̂^Ψ_{S2}[sign ∘ (η̂_{S1} − t̂_{S2,S1})] − P^Ψ_D[sign ∘ (η̂_{S1} − t̂_{S2,S1})]  (term3).

We now show convergence for each of the above terms. Applying Lemma 3 with c = t* (by Assumption A, t* ∈ (0, 1) and satisfies the necessary continuity assumption), we have term1 →_P 0. For term2, from the definition of the threshold t̂_{S2,S1} (see Algorithm 1), we have

  term2 ≤ P^Ψ_D[sign ∘ (η̂_{S1} − t*)] − P̂^Ψ_{S2}[sign ∘ (η̂_{S1} − t*)].
(2)

Then for any ϵ > 0,

  P_{S∼D^n}(term2 ≥ ϵ) = P_{S1∼D^{n1}, S2∼D^{n−n1}}(term2 ≥ ϵ) = E_{S1}[ P_{S2|S1}(term2 ≥ ϵ) ]
    ≤ E_{S1}[ P_{S2|S1}( P^Ψ_D[sign ∘ (η̂_{S1} − t*)] − P̂^Ψ_{S2}[sign ∘ (η̂_{S1} − t*)] ≥ ϵ ) ] → 0 as n → ∞,

where the third step follows from Eq. (2), and the last step follows by applying, for a fixed S1, the concentration result in Lemma 4 with h = sign ∘ (η̂_{S1} − t*) (given continuity of Ψ). Finally, for term3, we have for any ϵ > 0,

  P_S(term3 ≥ ϵ) = E_{S1}[ P_{S2|S1}( P̂^Ψ_{S2}[sign ∘ (η̂_{S1} − t̂_{S2,S1})] − P^Ψ_D[sign ∘ (η̂_{S1} − t̂_{S2,S1})] ≥ ϵ ) ]
    ≤ E_{S1}[ P_{S2|S1}( ∪_{θ ∈ T_{η̂_{S1}}} { |P̂^Ψ_{S2}[θ] − P^Ψ_D[θ]| ≥ ϵ } ) ] → 0 as n → ∞,

where the last step follows by applying the uniform convergence result in Lemma 5 over the class of thresholded classifiers T_{η̂_{S1}} = {sign ∘ (η̂_{S1} − t) | t ∈ [0, 1]} (for a fixed S1).

4 Consistency of Plug-in Algorithms for AM, Fβ, and G-TP/PR

We now use the result in Theorem 1 to establish consistency of the plug-in algorithms for the arithmetic mean of TPR and TNR, the Fβ-measure, and the geometric mean of TPR and precision.

4.1 Consistency for AM-measure

The arithmetic mean of TPR and TNR (AM), or one minus the balanced error rate (BER), is a widely used performance measure in class-imbalanced binary classification settings [17–19]:

  P^AM_D[h] = (TPR_D[h] + TNR_D[h]) / 2.

It can be shown that the Bayes optimal classifier for the AM-measure is of the form h^{AM,*}(x) = sign ∘ (η(x) − p) (see for example [16]), and that the threshold chosen by the plug-in method in Algorithm 1 for the AM-measure is an empirical estimate of p. In recent work, Menon et al. show that this plug-in method is consistent w.r.t. the AM-measure [16]; their proof makes use of a decomposition of the AM-measure in terms of a certain cost-sensitive error and a result of [22] on regret bounds for cost-sensitive classification. We now use our result in Theorem 1 to give an alternate route to showing AM-consistency of this plug-in method.⁴

Theorem 6 (Consistency of Algorithm 1 w.r.t. AM-measure). Let Ψ = Ψ_AM.
Let D be a distribution on X × {±1} that satisfies Assumption A w.r.t. Ψ_AM. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is AM-consistent w.r.t. D.

Proof. We apply Theorem 1, noting that Ψ_AM(u, v, p) = (u + v)/2 is continuous in all its arguments, and that the Bayes optimal classifier for P^AM is of the requisite thresholded form.

4.2 Consistency for Fβ-measure

The Fβ-measure, or the (weighted) harmonic mean of TPR and precision, is a popular performance measure used in information retrieval [1]:

  P^{Fβ}_D[h] = (1 + β²) TPR_D[h] Prec_D[h] / (β² TPR_D[h] + Prec_D[h]) = (1 + β²) p TPR_D[h] / (p + β²(p TPR_D[h] + (1 − p)(1 − TNR_D[h]))),

where β ∈ (0, ∞) controls the trade-off between TPR and precision. In a recent work, Ye et al. [15] show that the optimal classifier for the Fβ-measure is the class probability η thresholded suitably.

Lemma 7 (Optimality of threshold classifiers for Fβ-measure; Ye et al. (2012) [15]). For any distribution D over X × {±1} that satisfies Assumption A w.r.t. Ψ, the Bayes optimal classifier for P^{Fβ} is of the form h^{Fβ,*}(x) = sign ∘ (η(x) − t*_{D,η,Fβ}).

As noted earlier, the authors of [15] show that an idealized plug-in method that applies an empirically determined threshold to the 'true' class probability η is consistent w.r.t. the Fβ-measure. This result is, however, useful only when the true class probability is available to a learning algorithm, which is not the case in most practical settings. On the other hand, the plug-in method considered in our work constructs a classifier by assigning an empirical threshold to a suitable 'estimate' of the class probability. Using Theorem 1, we now show that this method is consistent w.r.t. the Fβ-measure.

Theorem 8 (Consistency of Algorithm 1 w.r.t. Fβ-measure). Let Ψ = Ψ_{Fβ} in Algorithm 1. Let D be a distribution on X × {±1} that satisfies Assumption A w.r.t. Ψ_{Fβ}. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is Fβ-consistent w.r.t. D.

Proof.
We apply Theorem 1, noting that Ψ_{Fβ}(u, v, p) = (1 + β²)pu / (p + β²(pu + (1 − p)(1 − v))) is continuous in each argument, and that (by Lemma 7) the Bayes optimal classifier for P^{Fβ} is of the requisite form.

4.3 Consistency for G-TP/PR

The geometric mean of TPR and precision (G-TP/PR) is another performance measure proposed for class-imbalanced classification problems [3]:

  P^{G-TP/PR}_D[h] = √(TPR_D[h] · Prec_D[h]) = √( p TPR_D[h]² / (p TPR_D[h] + (1 − p)(1 − TNR_D[h])) ).

⁴Note that the plug-in classification threshold chosen for the AM-measure is the same independent of the class probability estimate used; our consistency results will therefore apply in this case even if one uses, as in [16], the same sample both for learning a class probability estimate and for estimating the plug-in threshold.

We first show that the optimal classifier for G-TP/PR is obtained by thresholding the class probability function η at a suitable point; our proof uses a technique similar to the one for the Fβ-measure in [15].

Lemma 9 (Optimality of threshold classifiers for G-TP/PR). For any distribution D on X × {±1} that satisfies Assumption A w.r.t. Ψ, the Bayes optimal classifier for P^{G-TP/PR} is of the form h^{G-TP/PR,*}(x) = sign(η(x) − t*_{D,η,G-TP/PR}).

Theorem 10 (Consistency of Algorithm 1 w.r.t. G-TP/PR). Let Ψ = Ψ_{G-TP/PR}. Let D be a distribution on X × {±1} that satisfies Assumption A w.r.t. Ψ_{G-TP/PR}. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is G-TP/PR-consistent w.r.t. D.

Proof. We apply Theorem 1, noting that Ψ_{G-TP/PR}(u, v, p) = √(pu² / (pu + (1 − p)(1 − v))) is continuous in each argument, and that (by Lemma 9) the Bayes optimal classifier for P^{G-TP/PR} is of the requisite form.
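For any of the measures above, the empirical-threshold step of Algorithm 1 is a one-dimensional search over [0, 1]. The following is a minimal illustrative sketch, not the authors' implementation: the CPE algorithm C is stood in for by a caller-supplied `fit_cpe`, and the argmax over thresholds is approximated on a finite grid.

```python
def plugin_classifier(S, fit_cpe, psi, alpha=0.5, grid=101):
    """Sketch of the plug-in algorithm with an empirical threshold.
    S: list of (x, y) pairs with y in {+1, -1}.
    fit_cpe(S1): returns a function x -> estimated P(y = 1 | x) (the CPE step).
    psi(u, v, p): the performance measure as a function of TPR, TNR and p."""
    n1 = max(1, int(alpha * len(S)))
    S1, S2 = S[:n1], S[n1:]          # split the sample
    eta_hat = fit_cpe(S1)            # class probability estimate from S1

    def empirical_psi(t):
        # empirical Psi on S2 for the classifier sign(eta_hat - t)
        ys = [y for _, y in S2]
        preds = [1 if eta_hat(x) > t else -1 for x, _ in S2]
        n_pos = sum(1 for y in ys if y == 1) or 1   # guard against empty class
        n_neg = sum(1 for y in ys if y == -1) or 1
        u = sum(1 for h, y in zip(preds, ys) if h == y == 1) / n_pos
        v = sum(1 for h, y in zip(preds, ys) if h == y == -1) / n_neg
        return psi(u, v, n_pos / len(ys))

    t_hat = max((i / (grid - 1) for i in range(grid)), key=empirical_psi)
    return lambda x: 1 if eta_hat(x) > t_hat else -1
```

As a toy usage, with x uniform on [0, 1), labels y = 1 exactly when x > 0.7, a perfect estimator η̂(x) = x and Ψ_AM(u, v, p) = (u + v)/2, the learned threshold recovers 0.7 and the classifier separates the classes on fresh points.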
5 Consistency of Plug-in Algorithms for Non-decomposable Performance Measures that are Monotonic in TPR and TNR

The consistency results seen so far apply to any distribution that satisfies a mild continuity condition at the optimal threshold for a performance measure, and have crucially relied on the specific functional form of the measure. In this section, we shall see that under a stricter continuity assumption on the distribution, the empirical plug-in algorithm can be shown to be consistent w.r.t. any performance measure that is a continuous and monotonically increasing function of TPR and TNR.

Assumption B. We will say that a probability distribution D on X × {±1} satisfies Assumption B w.r.t. Ψ if t*_{D,η,Ψ} exists and is in (0, 1), and the cumulative distribution function of the random variable η(x), P(η(x) ≤ z), is continuous at all z ∈ (0, 1).

Distributions that satisfy the above assumption also satisfy Assumption A. We show that under this assumption, the optimal classifier for any performance measure that is monotonically increasing in TPR and TNR is obtained by thresholding η, and this holds irrespective of the specific functional form of the measure. An application of Theorem 1 then gives us the desired consistency result.

Lemma 11 (Optimality of threshold classifiers for monotonic Ψ under distributional assumption). Let Ψ : [0, 1]³ → R₊ be monotonically increasing in its first two arguments. Then for any distribution D on X × {±1} that satisfies Assumption B, the Bayes optimal classifier for P^Ψ is of the form h_{Ψ,*}(x) = sign(η(x) − t*_{D,η,Ψ}).

Theorem 12 (Consistency of Algorithm 1 for monotonic Ψ under distributional assumption). Let Ψ : [0, 1]³ → R₊ be continuous in each argument, and monotonically increasing in its first two arguments. Let D be a distribution on X × {±1} that satisfies Assumption B. If the CPE algorithm C in Algorithm 1 is L1-consistent, then Algorithm 1 is Ψ-consistent w.r.t. D.

Proof.
We apply Theorem 1 by using the continuity assumption on Ψ, and noting that, by Lemma 11 and monotonicity of Ψ, the Bayes optimal classifier for P^Ψ is of the requisite form.

The above result applies to all performance measures listed in Table 1, and in particular to the geometric, harmonic, and quadratic means of TPR and TNR [2–5], for which the Bayes optimal classifier need not be of the requisite thresholded form under a general distribution (see Appendix C).

6 Experiments

We performed two types of experiments. The first involved synthetic data, where we demonstrate diminishing regret of the plug-in method in Algorithm 1 with growing sample size for different performance measures; since the data is generated from a known distribution, exact calculation of regret is possible here. The second involved real data, where we show that the plug-in algorithm is competitive with the state-of-the-art SVMperf algorithm for non-decomposable measures (SVMPerf) [12]; we also include for comparison a plug-in method with a fixed threshold of 0.5 (Plug-in (0-1)). We consider three performance measures here: F1-measure, G-TP/PR and G-Mean (see Table 1).

Figure 1: Experiments on synthetic data with p = 0.5: regret as a function of number of training examples using various methods for the F1, G-TP/PR and G-Mean performance measures.

Synthetic data. We generated data from a known distribution (class conditionals are multivariate Gaussians with mixing ratio p and equal covariance matrices) for which the optimal classifier for
each performance measure considered here is linear, making it sufficient to learn a linear model; the distribution satisfies Assumption B w.r.t. each performance measure. We used regularized logistic regression as the CPE method in Algorithm 1 in order to satisfy the L1-consistency condition in Theorem 1 (see Appendix A.1 and A.4 for details). The experimental results are shown in Figures 1 and 2 for p = 0.5 and p = 0.1 respectively.

Figure 2: Experiments on synthetic data with p = 0.1: regret as a function of number of training examples using various methods for the F1, G-TP/PR and G-Mean performance measures.

Figure 3: Experiments on real data: results for various methods (using linear models) on four data sets, car (N = 1728, d = 21, p = 0.038), chemo (N = 2111, d = 1021, p = 0.024), nursery (N = 12960, d = 27, p = 0.025) and pendigits (N = 10992, d = 17, p = 0.096), in terms of F1, G-TP/PR and G-Mean performance measures. Here N, d, p refer to the number of instances, number of features and fraction of positives in the data set respectively.
In each case, the regret of the empirical plug-in methods (Plug-in (F1), Plug-in (G-TP/PR) and Plug-in (GM)) goes to zero with increasing training set size, validating our consistency results; SVMperf fails to exhibit diminishing regret for p = 0.1; and, as expected, Plug-in (0-1), with its a priori fixed threshold, fails to be consistent in most cases.

Real data. We ran the three algorithms described earlier on data sets drawn from the UCI ML repository [29] and a cheminformatics data set obtained from [30], and report their performance on separately held test sets. Figure 3 contains results for four data sets averaged over 10 random train-test splits of the original data (see Appendix A.2 for details and A.3 for additional results). Clearly, in most cases, the empirical plug-in method performs comparably to SVMperf and outperforms the Plug-in (0-1) method. Moreover, the empirical plug-in was found to run faster than SVMperf.

7 Conclusions

We have presented a general method for proving consistency of plug-in algorithms that assign an empirical threshold to a suitable class probability estimate, for a variety of non-decomposable performance measures for binary classification that can be expressed as a continuous function of TPR and TNR, and for which the Bayes optimal classifier is the class probability function thresholded suitably. We use our template to show consistency for the AM, Fβ and G-TP/PR measures, and, under a continuity assumption on the distribution, for any performance measure that is continuous and monotonic in TPR and TNR. Our experiments suggest that these algorithms are competitive with the SVMperf method.

Acknowledgments

HN thanks support from a Google India PhD Fellowship. SA gratefully acknowledges support from DST, Indo-US Science and Technology Forum, and an unrestricted gift from Yahoo.

References

[1] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008.
[2] M. Kubat and S. Matwin.
Addressing the curse of imbalanced training sets: One-sided selection. In ICML, 1997.
[3] S. Daskalaki, I. Kopanas, and N. Avouris. Evaluation of classifiers for an uneven class distribution problem. Applied Artificial Intelligence, 20:381–417, 2006.
[4] K. Kennedy, B. M. Namee, and S. J. Delany. Learning without default: a study of one-class classification and the low-default portfolio problem. In ICAICS, 2009.
[5] S. Lawrence, I. Burns, A. Back, A-C. Tsoi, and C. L. Giles. Neural network classification and prior class probabilities. In Neural Networks: Tricks of the Trade (LNCS 1524), pages 299–313, 1998.
[6] Y. Yang. A study of thresholding strategies for text categorization. In SIGIR, 2001.
[7] D. D. Lewis. Evaluating and optimizing autonomous text classification systems. In SIGIR, 1995.
[8] K. M. A. Chai. Expectation of F-measures: Tractable exact computation and some empirical observations of its properties. In SIGIR, 2005.
[9] D. R. Musicant, V. Kumar, and A. Ozgur. Optimizing F-measure with support vector machines. In FLAIRS, 2003.
[10] S. Gao, W. Wu, C-H. Lee, and T-S. Chua. A maximal figure-of-merit learning approach to text categorization. In SIGIR, 2003.
[11] M. Jansche. Maximum expected F-measure training of logistic regression models. In HLT, 2005.
[12] T. Joachims. A support vector method for multivariate performance measures. In ICML, 2005.
[13] Z. Liu, M. Tan, and F. Jiang. Regularized F-measure maximization for feature selection and classification. BioMed Research International, 2009, 2009.
[14] P. M. Chinta, P. Balamurugan, S. Shevade, and M. N. Murty. Optimizing F-measure with non-convex loss and sparse linear classifiers. In IJCNN, 2013.
[15] N. Ye, K. M. A. Chai, W. S. Lee, and H. L. Chieu. Optimizing F-measures: A tale of two approaches. In ICML, 2012.
[16] A. K. Menon, H. Narasimhan, S. Agarwal, and S. Chawla. On the statistical consistency of algorithms for binary classification under class imbalance. In ICML, 2013.
[17] J. Cheng, C. Hatzis, H. Hayashi, M-A.
Krogel, S. Morishita, D. Page, and J. Sese. KDD Cup 2001 report. ACM SIGKDD Explorations Newsletter, 3(2):47–64, 2002.
[18] R. Powers, M. Goldszmidt, and I. Cohen. Short term performance forecasting in enterprise systems. In KDD, 2005.
[19] Q. Gu, L. Zhu, and Z. Cai. Evaluation measures of the classification performance of imbalanced data sets. In Computational Intelligence and Intelligent Systems, volume 51, pages 461–471, 2009.
[20] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 32:56–134, 2004.
[21] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[22] C. Scott. Calibrated asymmetric surrogate losses. Electronic Journal of Statistics, 6:958–992, 2012.
[23] M. Zhao, N. Edakunni, A. Pocock, and G. Brown. Beyond Fano's inequality: Bounds on the optimal F-score, BER, and cost-sensitive risk and their implications. Journal of Machine Learning Research, 14(1):1033–1090, 2013.
[24] Z. C. Lipton, C. Elkan, and B. Naryanaswamy. Optimal thresholding of classifiers to maximize F1 measure. In ECML/PKDD, 2014.
[25] J. Petterson and T. Caetano. Reverse multi-label learning. In NIPS, 2010.
[26] K. Dembczynski, W. Waegeman, W. Cheng, and E. Hüllermeier. An exact algorithm for F-measure maximization. In NIPS, 2011.
[27] K. Dembczynski, A. Jachnik, W. Kotlowski, W. Waegeman, and E. Hüllermeier. Optimizing the F-measure in multi-label classification: Plug-in rule approach versus structured loss minimization. In ICML, 2013.
[28] S. Agarwal. Surrogate regret bounds for the area under the ROC curve via strongly proper losses. In COLT, 2013.
[29] A. Frank and A. Asuncion. UCI machine learning repository, 2010. URL: http://archive.ics.uci.edu/ml.
[30] Robert N. Jorissen and Michael K. Gilson. Virtual screening of molecular databases using a support vector machine.
Journal of Chemical Information and Modeling, 45:549–561, 2005.
Clustering from Labels and Time-Varying Graphs

Shiau Hong Lim, National University of Singapore, mpelsh@nus.edu.sg
Yudong Chen, EECS, University of California, Berkeley, yudong.chen@eecs.berkeley.edu
Huan Xu, National University of Singapore, mpexuh@nus.edu.sg

Abstract

We present a general framework for graph clustering in which a label is observed for each pair of nodes. This allows a very rich encoding of various types of pairwise interactions between nodes. We propose a new tractable approach to this problem based on the maximum likelihood estimator and convex optimization. We analyze our algorithm under a general generative model, and provide both necessary and sufficient conditions for successful recovery of the underlying clusters. Our theoretical results cover and subsume a wide range of existing graph clustering results, including planted partition, weighted clustering and partially observed graphs. Furthermore, the results apply to novel settings, including time-varying graphs, so that new insights can be gained on solving these problems. Our theoretical findings are further supported by empirical results on both synthetic and real data.

1 Introduction

In the standard formulation of graph clustering, we are given an unweighted graph and seek a partitioning of the nodes into disjoint groups such that members of the same group are more densely connected than those in different groups. Here, the presence of an edge represents some sort of affinity or similarity between the nodes, and the absence of an edge represents the lack thereof. In many applications, from chemical interactions to social networks, the interactions between nodes are much richer than a simple "edge" or "non-edge". Such extra information may be used to improve the clustering quality. We may represent each type of interaction by a label. One simple setting of this type is weighted graphs, where instead of a 0-1 graph, we have edge weights representing the strength of the pairwise interaction.
In this case the observed label between each pair is a real number. In a more general setting, the label need not be a number. For example, on social networks like Facebook, the label between two persons may be "they are friends", "they went to different schools", "they liked 21 common pages", or the concatenation of these. In such cases different labels carry different information about the underlying community structure. Standard approaches convert these pairwise interactions into a simple edge/non-edge and then apply standard clustering algorithms, which might lose much of the information. Even in the case of a standard weighted/unweighted graph, it is not immediately clear how the graph should be used. For example, should the absence of an edge be interpreted as a neutral observation carrying no information, or as a negative observation which indicates dissimilarity between the two nodes? We emphasize that the forms of labels can be very general. In particular, a label can take the form of a time series, i.e., the record of a time-varying interaction such as "A and B messaged each other on June 1st, 4th, 15th and 21st", or "they used to be friends, but stopped seeing each other in 2012". Thus, the labeled graph model is an immediate tool for analyzing time-varying graphs.

In this paper, we present a new and principled approach for graph clustering that is directly based on pairwise labels. We assume that between each pair of nodes i and j, a label Lij ∈ L is observed, where the label set L may be discrete or continuous, and need not have any structure. The standard graph model corresponds to a binary label set L = {edge, non-edge}, and a weighted graph corresponds to L = R. Given the observed labels L = (Lij) ∈ L^{n×n}, the goal is to partition the n nodes into disjoint clusters. Our approach is based on finding a partition that optimizes a weighted objective appropriately constructed from the observed labels.
This leads to a combinatorial optimization problem, and our algorithm uses its convex relaxation. To systematically evaluate clustering performance, we consider a generalization of the stochastic block model [1] and the planted partition model [2]. Our model assumes that the observed labels are generated based on an underlying set of ground-truth clusters, where pairs from the same cluster generate labels using a distribution µ over L and pairs from different clusters use a different distribution ν. The standard stochastic block model corresponds to the case where µ and ν are two-point distributions with µ(edge) = p and ν(edge) = q. We provide theoretical guarantees for our algorithm under this generalized model. Our results cover a wide range of existing clustering settings—with equal or stronger theoretical guarantees—including the standard stochastic block model, partially observed graphs and weighted graphs. Perhaps surprisingly, our framework allows us to handle new classes of problems that are not a priori obvious to be special cases of our model, including the clustering of time-varying graphs.

1.1 Related work

The planted partition model/stochastic block model [1, 2] are standard models for studying graph clustering. Variants of these models cover partially observed graphs [3, 4] and weighted graphs [5, 6]. All these models are special cases of ours. Various algorithms have been proposed and analyzed under these models, such as spectral clustering [7, 8, 1], convex optimization approaches [9, 10, 11] and tensor decomposition methods [12]. Ours is based on convex optimization; we build upon and extend the approach in [13], which is designed for clustering unweighted graphs whose edges have different levels of uncertainty, a special case of our problem (cf. Section 4.2 for details). Most related to our setting is the labeled stochastic block model proposed in [14] and [15].
A main difference in their model is that they assume each observation is a two-step process: first an edge/non-edge is observed; if it is an edge, then a label is associated with it. In our model all observations are in the form of labels; in particular, an edge or non-edge is also a label. This covers their setting as a special case. Our model is therefore more general and natural—as a result our theory covers a broad class of subproblems, including time-varying graphs. Moreover, their analysis is mainly restricted to the two-cluster setting with edge probabilities on the order of Θ(1/n), while we allow for an arbitrary number of clusters and a wide range of edge/label distributions. In addition, we consider the setting where the distributions of the labels are not precisely known. Algorithmically, they use belief propagation [14] and spectral methods [15]. Clustering time-varying graphs has been studied in various contexts; see [16, 17, 18, 19, 20] and the references therein. Most existing algorithms use heuristics and lack theoretical analysis. Our approach is based on a generative model and has provable performance guarantees.

2 Problem setup and algorithms

We assume n nodes are partitioned into r disjoint clusters of size at least K, which are unknown and considered as the ground truth. For each pair (i, j) of nodes, a label Lij ∈ L is observed, where L is the set of all possible labels.¹ These labels are generated independently across pairs according to the distributions µ and ν. In particular, the probability of observing the label Lij is µ(Lij) if i and j are in the same cluster, and ν(Lij) otherwise. The goal is to recover the ground-truth clusters given the labels. Let L = (Lij) ∈ L^{n×n} be the matrix of observed labels. We represent the true clusters by an n × n cluster matrix Y*, where Y*_ij = 1 if nodes i and j belong to the same cluster and Y*_ij = 0 otherwise (we use the convention Y*_ii = 1 for all i). The problem is therefore to find Y* given L.
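As a small illustration (not from the paper), the ground-truth cluster matrix Y* can be built directly from a cluster assignment; note the result is a block-diagonal 0-1 matrix (up to node relabeling) whose trace equals n:

```python
def cluster_matrix(assignment):
    """Build the n x n cluster matrix Y*: Y*_ij = 1 iff nodes i and j
    are in the same cluster (so Y*_ii = 1 for all i by convention)."""
    n = len(assignment)
    return [[1 if assignment[i] == assignment[j] else 0 for j in range(n)]
            for i in range(n)]
```

For example, `cluster_matrix([0, 0, 1])` returns `[[1, 1, 0], [1, 1, 0], [0, 0, 1]]`.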
¹Note that L does not have to be finite. Although some of the results are presented for finite L, they can easily be adapted to the other cases, for instance by replacing summation with integration.

We take an optimization approach to this problem. To motivate our algorithm, first consider the case of clustering a weighted graph, where all labels are real numbers. Positive weights indicate in-cluster interaction while negative weights indicate cross-cluster interaction. A natural approach is to cluster the nodes in a way that maximizes the total weight inside the clusters (this is equivalent to correlation clustering [21]). Mathematically, this is to find a clustering, represented by a cluster matrix Y, such that $\sum_{i,j} L_{ij} Y_{ij}$ is maximized. For the case of general labels, we pick a weight function w : L → R, which assigns a number W_ij = w(L_ij) to each label, and then solve the following max-weight problem:

max_Y ⟨W, Y⟩ s.t. Y is a cluster matrix;  (1)

here ⟨W, Y⟩ := $\sum_{ij} W_{ij} Y_{ij}$ is the standard trace inner product. Note that this effectively converts the problem of clustering from labels into a weighted clustering problem. The program (1) is non-convex due to the constraint. Our algorithm is based on a convex relaxation of (1), using the now well-known fact that a cluster matrix is a block-diagonal 0-1 matrix and thus has nuclear norm² equal to n [22, 3, 23]. This leads to the following convex optimization problem:

max_Y ⟨W, Y⟩  (2)
s.t. ∥Y∥_* ≤ n; 0 ≤ Y_ij ≤ 1, ∀(i, j).

We say that this program succeeds if it has a unique optimal solution equal to the true cluster matrix Y*. We note that a related approach is considered in [13], which is discussed in Section 4. One has the freedom of choosing the weight function w. Intuitively, w should assign w(Lij) > 0 to a label Lij with µ(Lij) > ν(Lij), so that the program (2) is encouraged to place i and j in the same cluster, the more likely possibility; similarly we should have w(Lij) < 0 if µ(Lij) < ν(Lij).
A good weight function should further reflect the information in µ and ν. Our theoretical results in Section 3 characterize the performance of the program (2) for any given weight function; building on this, we further derive the optimal choice of the weight function.

3 Theoretical results

In this section, we provide theoretical analysis of the performance of the convex program (2) under the probabilistic model described in Section 2. The proofs are given in the supplementary materials. Our main result is a general theorem that gives sufficient conditions for (2) to recover the true cluster matrix Y*. The conditions are stated in terms of the label distributions µ and ν, the minimum size K of the true clusters, and any given weight function w. Define $E_\mu w := \sum_{l \in L} w(l)\mu(l)$ and $\mathrm{Var}_\mu w := \sum_{l \in L} [w(l) - E_\mu w]^2 \mu(l)$; $E_\nu w$ and $\mathrm{Var}_\nu w$ are defined similarly.

Theorem 1 (Main). Suppose b is any number that satisfies |w(l)| ≤ b, ∀l ∈ L almost surely. There exists a universal constant c > 0 such that if

$$-E_\nu w \ge c\,\frac{b \log n + \sqrt{K \log n}\,\sqrt{\mathrm{Var}_\nu w}}{K}, \qquad (3)$$
$$E_\mu w \ge c\,\frac{b \log n + \sqrt{n \log n}\,\sqrt{\max(\mathrm{Var}_\mu w, \mathrm{Var}_\nu w)}}{K}, \qquad (4)$$

then Y* is the unique solution to (2) with probability at least 1 − n^{−10}.³

The theorem holds for any given weight function w. In the next two subsections, we show how to choose w optimally, and then address the case where w deviates from the optimal choice.

3.1 Optimal weights

A good candidate for the weight function w can be derived from the maximum likelihood estimator (MLE) of Y*.

²The nuclear norm of a matrix is defined as the sum of its singular values. A cluster matrix is positive semidefinite, so its nuclear norm is equal to its trace.
³In all our results, the choice n^{−10} is arbitrary. In particular, the constant c scales linearly with the exponent.

Given the observed labels L, the log-likelihood of the true cluster matrix taking
the value Y is

$$\log \Pr(L \mid Y^* = Y) = \sum_{i,j} \log\big(\mu(L_{ij})^{Y_{ij}}\,\nu(L_{ij})^{1 - Y_{ij}}\big) = \langle W, Y \rangle + c,$$

where c is independent of Y and W is given by the weight function $w(l) = w_{\mathrm{MLE}}(l) := \log \frac{\mu(l)}{\nu(l)}$. The MLE thus corresponds to using the log-likelihood ratio w_MLE(·) as the weight function. The following theorem is a consequence of Theorem 1 and characterizes the performance of using the MLE weights. In the sequel, we use D(·∥·) to denote the KL divergence between two distributions.

Theorem 2 (MLE). Suppose w_MLE is used, and b and ζ are any numbers which satisfy D(ν∥µ) ≤ ζ D(µ∥ν) and $\left|\log \frac{\mu(l)}{\nu(l)}\right| \le b$, ∀l ∈ L. There exists a universal constant c > 0 such that Y* is the unique solution to (2) with probability at least 1 − n^{−10} if

$$D(\nu \| \mu) \ge c(b+2)\,\frac{\log n}{K}, \qquad (5)$$
$$D(\mu \| \nu) \ge c(\zeta+1)(b+2)\,\frac{n \log n}{K^2}. \qquad (6)$$

Moreover, we always have D(ν∥µ) ≤ (2b + 3) D(µ∥ν), so we can take ζ = 2b + 3.

Note that the theorem has the intuitive interpretation that the in-cluster and cross-cluster label distributions µ and ν should be sufficiently different, as measured by their KL divergence. Using a classical result in information theory [24], we may replace the KL divergences with a quantity that is often easier to work with, as summarized below. The LHS of (7) is sometimes called the triangle discrimination [24].

Corollary 1 (MLE 2). Suppose w_MLE is used, and b, ζ are defined as in Theorem 2. There exists a universal constant c such that Y* is the unique solution to (2) with probability at least 1 − n^{−10} if

$$\sum_{l \in L} \frac{(\mu(l) - \nu(l))^2}{\mu(l) + \nu(l)} \ge c(\zeta+1)(b+2)\,\frac{n \log n}{K^2}. \qquad (7)$$

We may take ζ = 2b + 3.

The MLE weight w_MLE turns out to be near-optimal, at least in the two-cluster case, in the sense that no other weight function (in fact, no other algorithm) has significantly better performance. This is shown by establishing a necessary condition for any algorithm to recover Y*. Here, an algorithm is a measurable function Ŷ that maps the data L to a clustering (represented by a cluster matrix).

Theorem 3 (Converse).
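For a finite label set, the MLE weights and the KL-divergence quantities appearing in Theorem 2 are straightforward to compute. The sketch below is illustrative only (the three-label distributions are hypothetical, not from the paper); note that an uninformative label, one with µ(l) = ν(l), receives weight zero:

```python
from math import log

def mle_weights(mu, nu):
    """Log-likelihood-ratio weights w_MLE(l) = log(mu(l)/nu(l))
    for a finite label set given as two probability dictionaries."""
    return {l: log(mu[l] / nu[l]) for l in mu}

def kl(mu, nu):
    """KL divergence D(mu || nu) over a finite label set."""
    return sum(mu[l] * log(mu[l] / nu[l]) for l in mu)
```

With, say, `mu = {'edge': 0.6, 'non-edge': 0.3, '?': 0.1}` and `nu = {'edge': 0.2, 'non-edge': 0.7, '?': 0.1}`, the weight on `'?'` is zero, `b` can be taken as `max(abs(w) for w in mle_weights(mu, nu).values())`, and the bound D(ν∥µ) ≤ (2b + 3) D(µ∥ν) from Theorem 2 can be checked numerically.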
The following holds for some universal constants c, c′ > 0. Suppose K = n/2, and b defined in Theorem 2 satisfies b ≤ c′. If

$$\sum_{l \in L} \frac{(\mu(l) - \nu(l))^2}{\mu(l) + \nu(l)} \le c\,\frac{\log n}{n}, \qquad (8)$$

then $\inf_{\hat{Y}} \sup_{Y^*} P(\hat{Y} \ne Y^*) \ge \frac{1}{2}$, where the supremum is over all possible cluster matrices.

Under the assumption of Theorem 3, the conditions (7) and (8) match up to a constant factor.

Remark. The MLE weight |w_MLE(l)| becomes large if µ(l) = o(ν(l)) or ν(l) = o(µ(l)), i.e., when the in-cluster probability is negligible compared to the cross-cluster one (or the other way around). It can be shown that in this case the MLE weight is actually order-wise better than a bounded weight function. We give this result in the supplementary material due to space constraints.

3.2 Monotonicity

We sometimes do not know the exact true distributions µ and ν needed to compute w_MLE. Instead, we might compute the weight using the log-likelihood ratios of some "incorrect" distributions µ̄ and ν̄. Our algorithm has a nice monotonicity property: as long as the divergence of the true µ and ν is larger than that of µ̄ and ν̄ (hence an "easier" problem), the problem has the same, if not better, probability of success, even though the wrong weights are used. We say that (µ, ν) is more divergent than (µ̄, ν̄) if, for each l ∈ L, we have that either

$$\frac{\mu(l)}{\nu(l)} \ge \frac{\mu(l)}{\bar{\nu}(l)} \ge \frac{\bar{\mu}(l)}{\bar{\nu}(l)} \ge 1 \quad \text{or} \quad \frac{\nu(l)}{\mu(l)} \ge \frac{\nu(l)}{\bar{\mu}(l)} \ge \frac{\bar{\nu}(l)}{\bar{\mu}(l)} \ge 1.$$

Theorem 4 (Monotonicity). Suppose we use the weight function $w(l) = \log \frac{\bar{\mu}(l)}{\bar{\nu}(l)}$, ∀l, while the actual label distributions are µ and ν. If the conditions in Theorem 2 or Corollary 1 hold with µ, ν replaced by µ̄, ν̄, and (µ, ν) is more divergent than (µ̄, ν̄), then with probability at least 1 − n^{−10}, Y* is the unique solution to (2).

This result suggests that one way to choose the weight function is to use the log-likelihood ratio based on a "conservative" estimate (i.e., a less divergent one) of the true label distribution pair.
3.3 Using inaccurate weights

In the previous subsection we considered using a conservative log-likelihood ratio as the weight. We now consider a more general weight function w which need not be conservative, but is only required to be not too far from the true log-likelihood ratio w_MLE. Let $\varepsilon(l) := w(l) - w_{\mathrm{MLE}}(l) = w(l) - \log \frac{\mu(l)}{\nu(l)}$ be the error for each label l ∈ L. Accordingly, let $\Delta_\mu := \sum_{l \in L} \mu(l)\varepsilon(l)$ and $\Delta_\nu := \sum_{l \in L} \nu(l)\varepsilon(l)$ be the average errors with respect to µ and ν. Note that ∆µ and ∆ν can be either positive or negative. The following characterizes the performance of using such a w.

Theorem 5 (Inaccurate Weights). Let b and ζ be defined as in Theorem 2. Suppose the weight w satisfies

$$|w(l)| \le \lambda \left|\log \frac{\mu(l)}{\nu(l)}\right|, \ \forall l \in L, \qquad |\Delta_\mu| \le \gamma D(\mu \| \nu), \qquad |\Delta_\nu| \le \gamma D(\nu \| \mu)$$

for some γ < 1 and λ > 0. Then Y* is the unique solution to (2) with probability at least 1 − n^{−10} if

$$D(\nu \| \mu) \ge c\,\frac{\lambda^2}{(1-\gamma)^2}\,(b+2)\,\frac{\log n}{K} \quad \text{and} \quad D(\mu \| \nu) \ge c\,\frac{\lambda^2}{(1-\gamma)^2}\,(\zeta+1)(b+2)\,\frac{n \log n}{K^2}.$$

Therefore, as long as the errors ∆µ and ∆ν in w are not too large, the condition for recovery is order-wise similar to that in Theorem 2 for the MLE weight. The numbers λ and γ measure the amount of inaccuracy in w w.r.t. w_MLE. The last two conditions in Theorem 5 thus quantify the relation between the inaccuracy in w and the price we pay for using such a weight.

4 Consequences and applications

We apply the general results of the last section to different special cases. In Sections 4.1 and 4.2, we consider two simple settings and show that two immediate corollaries of our main theorems recover, and in fact improve upon, existing results. In Sections 4.3 and 4.4, we turn to the more complicated setting of clustering time-varying graphs and derive several novel results.
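The inaccuracy parameters (λ, γ) of Theorem 5 can be checked numerically for a candidate weight function. The helper below is our own illustrative sketch (finite label set, distributions given as dictionaries), not part of the paper's algorithm:

```python
from math import log

def inaccuracy_params(w, mu, nu):
    """Compute (lambda, gamma) for Theorem 5: lambda bounds the ratio
    |w(l)| / |log(mu(l)/nu(l))|, and gamma bounds the averaged errors
    |Delta_mu| / D(mu||nu) and |Delta_nu| / D(nu||mu)."""
    w_mle = {l: log(mu[l] / nu[l]) for l in mu}
    eps = {l: w[l] - w_mle[l] for l in mu}
    d_mu = sum(mu[l] * w_mle[l] for l in mu)    # D(mu || nu)
    d_nu = -sum(nu[l] * w_mle[l] for l in nu)   # D(nu || mu)
    lam = max(abs(w[l]) / abs(w_mle[l]) for l in mu if w_mle[l] != 0)
    gam = max(abs(sum(mu[l] * eps[l] for l in mu)) / d_mu,
              abs(sum(nu[l] * eps[l] for l in nu)) / d_nu)
    return lam, gam
```

Using w = w_MLE itself gives (λ, γ) = (1, 0), while doubling every weight gives λ = 2 and γ = 1, which already violates the γ < 1 requirement of Theorem 5, so such a weight is "too inaccurate" for the theorem to apply.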
4.1 Clustering a Gaussian matrix with partial observations

Analogous to the planted partition model for unweighted graphs, the bi-clustering [5] or submatrix-localization [6, 23] problem concerns a weighted graph whose adjacency matrix has Gaussian entries. We consider a generalization of this problem where some of the entries are unobserved. Specifically, we observe a matrix L ∈ (R ∪ {?})^{n×n}, which has r submatrices of size K × K with disjoint row and column support, such that Lij = ? (meaning unobserved) with probability 1 − s, and otherwise Lij ∼ N(uij, 1). Here the means of the Gaussians satisfy uij = ū if (i, j) is inside the submatrices and uij = u if outside, where ū > u ≥ 0. Clustering is equivalent to locating these submatrices with elevated mean, given the large Gaussian matrix L with partial observations.⁴ This is a special case of our labeled framework with L = R ∪ {?}. Computing the log-likelihood ratios for two Gaussians, we obtain w_MLE(Lij) = 0 if Lij = ?, and w_MLE(Lij) ∝ Lij − (ū + u)/2 otherwise. This problem is interesting only when ū − u ≲ √(log n) (otherwise simple element-wise thresholding [5, 6] finds the submatrices), which we assume to hold. Clearly D(µ∥ν) = D(ν∥µ) = ¼ s(ū − u)². The following can be proved using our main theorems (proof in the appendix).

⁴Here for simplicity we consider the clustering setting instead of bi-clustering. The latter setting corresponds to rectangular L and submatrices. Extending our results to this setting is relatively straightforward.
4.2 Planted Partition with non-uniform edge densities The work in [13] considers a variant of the planted partition model with non-uniform edge densities, where each pair (i, j) has an edge with probability 1 −uij > 1/2 if they are in the same cluster, and with probability uij < 1/2 otherwise. The number uij can be considered as a measure of the level of uncertainty in the observation between i and j, and is known or can be estimated in applications like cloud-clustering. They show that using the knowledge of {uij} improves clustering performance, and such a setting covers clustering of partially observed graphs that is considered in [11, 3, 4]. Here we consider a more general setting that does not require the in/cross-cluster edge density to be symmetric around 1 2. Suppose each pair (i, j) is associated with two numbers pij and qij, such that if i and j are in the same cluster (different clusters, resp.), then there is an edge with probability pij (qij, resp.); we know pij and qij but not which of them is the probability that generates the edge. The values of pij and qij are generated i.i.d. randomly as (pij, qij) ∼D by some distribution D on [0, 1] × [0, 1]. The goal is to find the clusters given the graph adjacency matrix A, (pij) and (qij). This model is a special case of our labeled framework. The labels have the form Lij = (Aij, pij, qij) ∈L = {0, 1} × [0, 1] × [0, 1], generated by the distributions µ(l) = pD(p, q), l = (1, p, q) (1 −p)D(p, q), l = (0, p, q) ν(l) = qD(p, q), l = (1, p, q) (1 −q)D(p, q), l = (0, p, q). The MLE weight has the form wMLE(Lij) = Aij log pij qij +(1−Aij) log 1−pij 1−qij . It turns out it is more convenient to use a conservative weight in which we replace pij and qij with ¯pij = 3 4pij + 1 4qij and ¯qij = 1 4pij + 3 4qij. Applying Theorem 4 and Corollary 1, we immediately obtain the following. Corollary 3 (Non-uniform Density). 
Program (2) recovers Y* with probability at least 1 − n^{−10} if

E_D [ (pij − qij)^2 / (pij(1 − qij)) ] ≥ c n log n / K^2,  ∀(i, j).

Here E_D is the expectation w.r.t. the distribution D, and the LHS above is in fact independent of (i, j). Corollary 3 improves upon existing results in several settings.

• Clustering partially observed graphs. Suppose D is such that pij = p and qij = q with probability s, and pij = qij otherwise, where p > q. This extends the standard planted partition model: each pair is unobserved with probability 1 − s. For this setting we require

s(p − q)^2 / (p(1 − q)) ≳ n log n / K^2.

When s = 1, this matches the best existing bounds for standard planted partition [9, 12] up to a log factor. For the partial observation setting with s ≤ 1, the work in [4] gives a similar bound under the additional assumption p > 0.5 > q, which is not required by our result. For general p and q, the best existing bound is given in [3, 9], which replaces unobserved entries with 0 and requires the condition s(p − q)^2 / (p(1 − sq)) ≳ n log n / K^2. Our result is tighter when p and q are close to 1.

• Planted partition with non-uniformity. The model and algorithm in [13] are a special case of ours with symmetric densities pij ≡ 1 − qij, for which we recover their result E_D[(1 − 2qij)^2] ≳ n log n / K^2. Corollary 3 is more general as it removes the symmetry assumption.

4.3 Clustering time-varying multiple-snapshot graphs

Standard graph clustering concerns a single, static graph. We now consider a setting where the graph can be time-varying. Specifically, we assume that for each time interval t = 1, 2, . . . , T, we observe a snapshot of the graph L(t) ∈ L^{n×n}. We assume each snapshot is generated by the distributions µ and ν, independently of the other snapshots. We can map this problem into our original labeled framework by considering the whole time sequence ¯Lij := (L(1)_ij, . . . , L(T)_ij) observed at the pair (i, j) as a single label.
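The recovery condition for the partially observed setting can be turned into a quick back-of-the-envelope calculator (a sketch; the constant c is unspecified in the corollary, so we default it to 1 here):

```python
import math

def min_observation_rate(p, q, n, K, c=1.0):
    """Smallest s with s (p - q)^2 / (p (1 - q)) >= c n log(n) / K^2,
    i.e. the observation probability required by the Corollary 3 condition.
    A return value > 1 means even full observation does not satisfy it."""
    return c * (n * math.log(n) / K**2) * p * (1 - q) / (p - q)**2
```

For example, enlarging the gap p − q reduces the required observation rate quadratically.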
In this case the label set is thus the set of all possible sequences, i.e., ¯L = L^T, and the label distributions are (with a slight abuse of notation) µ(¯Lij) = µ(L(1)_ij) · · · µ(L(T)_ij), with ν(·) given similarly. The MLE weight (normalized by T) is thus the average log-likelihood ratio:

wMLE(¯Lij) = (1/T) log [ µ(L(1)_ij) · · · µ(L(T)_ij) / ( ν(L(1)_ij) · · · ν(L(T)_ij) ) ] = (1/T) Σ_{t=1}^{T} log ( µ(L(t)_ij) / ν(L(t)_ij) ).

Since wMLE(¯Lij) is the average of T independent random variables, its variance scales with 1/T. Applying Theorem 1, with an almost identical proof to that of Theorem 2, we obtain the following:

Corollary 4 (Independent Snapshots). Suppose |log(µ(l)/ν(l))| ≤ b, ∀l ∈ L, and D(ν∥µ) ≤ ζD(µ∥ν). The program (2) with the MLE weights given above recovers Y* with probability at least 1 − n^{−10} provided

D(ν∥µ) ≥ c(b + 2) log n / K,   (9)
D(µ∥ν) ≥ c(b + 2) max{ log n / K, (ζ + 1) n log n / (TK^2) }.   (10)

Setting T = 1 recovers Theorem 2. When the second term in (10) dominates, the corollary says that the problem becomes easier as we observe more snapshots, with the tradeoff quantified precisely.

4.4 Markov sequence of snapshots

We now consider the more general and useful setting where the snapshots form a Markov chain. For simplicity we assume that the Markov chain is time-invariant and has a unique stationary distribution, which is also the initial distribution. Therefore, the observations L(t)_ij at each (i, j) are generated by first drawing a label from the stationary distribution ¯µ (or ¯ν) at t = 1, then applying a one-step transition to obtain the label at each subsequent t. In particular, given the previously observed label l, let the intra-cluster and inter-cluster conditional distributions be µ(·|l) and ν(·|l).
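For finite label alphabets, the average log-likelihood-ratio weight is a one-liner (illustrative; labels are encoded as integer indices into the pmfs µ and ν):

```python
import numpy as np

def average_llr_weight(snapshots, mu, nu):
    """w_MLE(Lbar_ij) = (1/T) sum_t log( mu(L_ij^(t)) / nu(L_ij^(t)) ),
    for a length-T sequence of integer-coded labels at one pair (i, j)."""
    s = np.asarray(snapshots)
    return float(np.mean(np.log(mu[s] / nu[s])))
```

Averaging T i.i.d. log-likelihood ratios leaves the mean unchanged but shrinks the variance by 1/T, which is exactly the effect exploited in Corollary 4.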
We assume that the Markov chains with respect to both µ and ν are geometrically ergodic, such that for any τ ≥ 1 and label-pair (L(1), L(τ+1)),

|Pr_µ(L(τ+1)|L(1)) − ¯µ(L(τ+1))| ≤ κγ^τ  and  |Pr_ν(L(τ+1)|L(1)) − ¯ν(L(τ+1))| ≤ κγ^τ

for some constants κ ≥ 1 and γ < 1 that depend only on µ and ν. Let D_l(µ∥ν) be the KL-divergence between µ(·|l) and ν(·|l); D_l(ν∥µ) is similarly defined. Let E_¯µ D_l(µ∥ν) = Σ_{l∈L} ¯µ(l) D_l(µ∥ν), and similarly for E_¯ν D_l(ν∥µ). As in the previous subsection, we use the average log-likelihood ratio as the weight. Define λ = κ / ((1 − γ) min_l {¯µ(l), ¯ν(l)}). Applying Theorem 1 gives the following corollary. See sections H–I in the supplementary material for the proof and additional discussion.

Corollary 5 (Markov Snapshots). Under the above setting, suppose for each label-pair (l, l′), |log(¯µ(l)/¯ν(l))| ≤ b, |log(µ(l′|l)/ν(l′|l))| ≤ b, D(¯ν∥¯µ) ≤ ζD(¯µ∥¯ν) and E_¯ν D_l(ν∥µ) ≤ ζ E_¯µ D_l(µ∥ν). The program (2) with MLE weights recovers Y* with probability at least 1 − n^{−10} provided

(1/T) D(¯ν∥¯µ) + (1 − 1/T) E_¯ν D_l(ν∥µ) ≥ c(b + 2) log n / K,   (11)
(1/T) D(¯µ∥¯ν) + (1 − 1/T) E_¯µ D_l(µ∥ν) ≥ c(b + 2) max{ log n / K, (ζ + 1)λ n log n / (TK^2) }.   (12)

As an illuminating example, consider the case where ¯µ ≈ ¯ν, i.e., the marginal distributions of the individual snapshots are identical or very close. This means that the information is contained in the changes of labels, not in the individual labels, as is made evident on the LHSs of (11) and (12). In this case, it is necessary to use temporal information in order to perform clustering. Such information would be lost if we disregarded the ordering of the snapshots, for example by aggregating or averaging the snapshots and then applying a single-snapshot clustering algorithm. This highlights an essential difference between clustering time-varying graphs and static graphs.

5 Empirical results

To solve the convex program (2), we follow [13, 9] and adapt the ADMM algorithm of [25].
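For the Markov case, the averaged log-likelihood ratio scores the first label under the stationary distributions and every subsequent label under the one-step transitions (a sketch with hypothetical argument names):

```python
import numpy as np

def markov_llr_weight(seq, mu0, nu0, P_mu, P_nu):
    """(1/T)[ log(mu0(l_1)/nu0(l_1)) + sum_t log( mu(l_{t+1}|l_t) / nu(l_{t+1}|l_t) ) ]
    for an integer-coded label sequence; P_mu[a, b] = mu(b | a)."""
    w = np.log(mu0[seq[0]] / nu0[seq[0]])
    for a, b in zip(seq[:-1], seq[1:]):
        w += np.log(P_mu[a, b] / P_nu[a, b])
    return float(w / len(seq))
```

When ¯µ ≈ ¯ν the first term vanishes, and all discriminative information sits in the transition terms, matching the discussion following Corollary 5.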
We perform 100 trials for each experiment and report the success rate, i.e., the fraction of trials in which the ground-truth clustering is fully recovered. Error bars show 95% confidence intervals. Additional empirical results are provided in the supplementary material.

We first test the planted partition model with partial observations under the challenging sparse (p and q close to 0) and dense (p and q close to 1) settings; cf. section 4.2. Figures 1 and 2 show the results for n = 1000 with 4 equal-size clusters. In both cases, each pair is observed with probability 0.5. For comparison, we include results for the MLE weights as well as the linear weights (based on a linear approximation of the log-likelihood ratio), uniform weights, and an imputation scheme in which all unobserved entries are assumed to be “no-edge”.

[Figure 1: Sparse graphs (success rate vs. p − q, with q = 0.02, s = 0.5). Figure 2: Dense graphs (success rate vs. p − q, with p = 0.98, s = 0.5). Figure 3: Reality Mining dataset (accuracy vs. fraction of data used in estimation, 14 weeks).]

Corollary 3 predicts more success as the ratio s(p − q)^2 / (p(1 − q)) gets larger. All else being equal, distributions with small ζ (sparse) are “easier” to solve. Both predictions are consistent with the empirical results in Figures 1 and 2. The results also show that the MLE weights outperform the other weights.

For real data, we use the Reality Mining dataset [26], which contains individuals from two main groups, the MIT Media Lab and the Sloan Business School, which we use as the ground-truth clusters. The dataset records when two individuals interact, i.e., become proximate to each other or make a phone call, over a 9-month period. We choose a window of 14 weeks (the Fall semester) where most individuals have non-empty interaction data.
These consist of 85 individuals with 25 of them from Sloan. We represent the data as a time-varying graph with 14 snapshots (one per week) and two labels—an “edge” if a pair of individuals interact within the week, and “no-edge” otherwise. We compare three models: Markov sequence, independent snapshots, and the aggregate (union) graphs. In each trial, the in/cross-cluster distributions are estimated from a fraction of randomly selected pairwise interaction data. The vertical axis in Figure 3 shows the fraction of pairs whose cluster relationship are correctly identified. From the results, we infer that the interactions between individuals are likely not independent across time, and are better captured by the Markov model. Acknowledgments S.H. Lim and H. Xu were supported by the Ministry of Education of Singapore through AcRF Tier Two grant R-265-000-443-112. Y. Chen was supported by NSF grant CIF-31712-23800 and ONR MURI grant N00014-11-1-0688. 8 References [1] K. Rohe, S. Chatterjee, and B. Yu. Spectral clustering and the high-dimensional stochastic block model. Annals of Statistics, 39:1878–1915, 2011. [2] A. Condon and R. M. Karp. Algorithms for graph partitioning on the planted partition model. Random Structures and Algorithms, 18(2):116–140, 2001. [3] S. Oymak and B. Hassibi. Finding dense clusters via low rank + sparse decomposition. arXiv:1104.5186v1, 2011. [4] Y. Chen, A. Jalali, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. Journal of Machine Learning Research, 15:2213–2238, June 2014. [5] Sivaraman Balakrishnan, Mladen Kolar, Alessandro Rinaldo, Aarti Singh, and Larry Wasserman. Statistical and computational tradeoffs in biclustering. In NIPS Workshop on Computational Trade-offs in Statistical Learning, 2011. [6] Mladen Kolar, Sivaraman Balakrishnan, Alessandro Rinaldo, and Aarti Singh. Minimax localization of structural information in large noisy matrices. In NIPS, pages 909–917, 2011. [7] F. McSherry. 
Spectral partitioning of random graphs. In FOCS, pages 529–537, 2001. [8] K. Chaudhuri, F. Chung, and A. Tsiatas. Spectral clustering of graphs with general degrees in the extended planted partition model. COLT, 2012. [9] Y. Chen, S. Sanghavi, and H. Xu. Clustering sparse graphs. In NIPS 2012., 2012. [10] B. Ames and S. Vavasis. Nuclear norm minimization for the planted clique and biclique problems. Mathematical Programming, 129(1):69–89, 2011. [11] C. Mathieu and W. Schudy. Correlation clustering with noisy input. In SODA, page 712, 2010. [12] Anima Anandkumar, Rong Ge, Daniel Hsu, and Sham M Kakade. A tensor spectral approach to learning mixed membership community models. arXiv preprint arXiv:1302.2684, 2013. [13] Y. Chen, S. H. Lim, and H. Xu. Weighted graph clustering with non-uniform uncertainties. In ICML, 2014. [14] Simon Heimlicher, Marc Lelarge, and Laurent Massouli´e. Community detection in the labelled stochastic block model. In NIPS Workshop on Algorithmic and Statistical Approaches for Large Social Networks, 2012. [15] Marc Lelarge, Laurent Massouli´e, and Jiaming Xu. Reconstruction in the Labeled Stochastic Block Model. In IEEE Information Theory Workshop, Seville, Spain, September 2013. [16] S. Fortunato. Community detection in graphs. Physics Reports, 486(3-5):75–174, 2010. [17] Jimeng Sun, Christos Faloutsos, Spiros Papadimitriou, and Philip S. Yu. Graphscope: parameter-free mining of large time-evolving graphs. In ACM KDD, 2007. [18] D. Chakrabarti, R. Kumar, and A. Tomkins. Evolutionary clustering. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 554–560. ACM, 2006. [19] Vikas Kawadia and Sameet Sreenivasan. Sequential detection of temporal communities by estrangement confinement. Scientific Reports, 2, 2012. [20] N.P. Nguyen, T.N. Dinh, Y. Xuan, and M.T. Thai. Adaptive algorithms for detecting community structure in dynamic social networks. In INFOCOM, pages 2282–2290. IEEE, 2011. [21] N. Bansal, A. 
Blum, and S. Chawla. Correlation clustering. Machine Learning, 56(1), 2004. [22] A. Jalali, Y. Chen, S. Sanghavi, and H. Xu. Clustering partially observed graphs via convex optimization. In ICML, 2011. [23] Brendan P.W. Ames. Guaranteed clustering and biclustering via semidefinite programming. Mathematical Programming, pages 1–37, 2013. [24] F. Topsoe. Some inequalities for information divergence and related measures of discrimination. IEEE Transactions on Information Theory, 46(4):1602–1609, Jul 2000. [25] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical Report UILU-ENG-09-2215, UIUC, 2009. [26] Nathan Eagle and Alex (Sandy) Pentland. Reality mining: Sensing complex social systems. Personal Ubiquitous Comput., 10(4):255–268, March 2006. [27] Alexandre B. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2009. 9
Unsupervised learning of an efficient short-term memory network Pietro Vertechi Wieland Brendel ∗ Christian K. Machens Champalimaud Neuroscience Programme Champalimaud Centre for the Unknown Lisbon, Portugal first.last@neuro.fchampalimaud.org Abstract Learning in recurrent neural networks has been a topic fraught with difficulties and problems. We here report substantial progress in the unsupervised learning of recurrent networks that can keep track of an input signal. Specifically, we show how these networks can learn to efficiently represent their present and past inputs, based on local learning rules only. Our results are based on several key insights. First, we develop a local learning rule for the recurrent weights whose main aim is to drive the network into a regime where, on average, feedforward signal inputs are canceled by recurrent inputs. We show that this learning rule minimizes a cost function. Second, we develop a local learning rule for the feedforward weights that, based on networks in which recurrent inputs already predict feedforward inputs, further minimizes the cost. Third, we show how the learning rules can be modified such that the network can directly encode non-whitened inputs. Fourth, we show that these learning rules can also be applied to a network that feeds a time-delayed version of the network output back into itself. As a consequence, the network starts to efficiently represent both its signal inputs and their history. We develop our main theory for linear networks, but then sketch how the learning rules could be transferred to balanced, spiking networks. 1 Introduction Many brain circuits are known to maintain information over short periods of time in the firing of their neurons [15]. Such “persistent activity” is likely to arise through reverberation of activity due to recurrent synapses. 
While many recurrent network models have been designed that remain active after transient stimulation, such as hand-designed attractor networks [21, 14] or randomly generated reservoir networks [10, 13], how neural networks can learn to remain active is less well understood. The problem of learning to remember the input history has mostly been addressed in supervised learning of recurrent networks. The classical approaches are based on backpropagation through time [22, 6]. However, apart from convergence issues, backpropagation through time is not a feasible method for biological systems. More recent work has drawn attention to random recurrent neural networks, which already provide a reservoir of time constants that allows them to store and read out memories [10, 13]. Several studies have focused on the question of how to optimize such networks for the task at hand (see [12] for a review); however, the generality of the underlying learning rules is often not fully understood, since many rules are not based on analytical results or convergence proofs.

∗current address: Centre for Integrative Neuroscience, University of Tübingen, Germany

The unsupervised learning of short-term memory systems, on the other hand, is largely uncharted territory. While there have been several “bottom-up” studies that use biologically realistic learning rules and simulations (see e.g. [11]), we are not aware of any analytical results based on local learning rules. Here we report substantial progress in following through a normative, “top-down” approach that results in a recurrent neural network with local synaptic plasticity. This network learns how to efficiently remember an input and its history. The learning rules are largely Hebbian or covariance-based, but separate recurrent and feedforward inputs.
Based on recent progress in deriving integrate-and-fire neurons from optimality principles [3, 4], we furthermore sketch how an equivalent spiking network with local learning rules could be derived. Our approach generalizes analogous work in the setting of efficient coding of an instantaneous signal, as developed in [16, 19, 23, 4, 1].

2 The autoencoder revisited

We start by recapitulating the autoencoder network shown in Fig. 1a. The autoencoder transforms a K-dimensional input signal, x, into a set of N firing rates, r, while obeying two constraints. First, the input signal should be reconstructible from the output firing rates. A common assumption is that the input can be recovered through a linear decoder, D, so that

x ≈ ˆx = Dr.   (1)

Second, the output firing rates, r, should provide an optimal or efficient representation of the input signals. This optimality can be measured by defining a cost C(r) for the representation r. For simplicity, we will in the following assume that the costs are quadratic (L2), although linear (L1) costs in the firing rates could easily be accounted for as well. We note that autoencoder networks are sometimes assumed to reduce the dimensionality of the input (undercomplete case, N < K) and sometimes assumed to increase it (overcomplete case, N > K). Our results apply to both cases. The optimal set of firing rates for a given input signal can then be found by minimizing the loss function

L = (1/2) ∥x − Dr∥^2 + (µ/2) ∥r∥^2,   (2)

with respect to the firing rates r. Here, the first term is the error between the reconstructed input signal, ˆx = Dr, and the actual stimulus, x, while the second term corresponds to the “cost” of the signal representation. The minimization can be carried out via gradient descent, resulting in the differential equation

ṙ = −∂L/∂r = −µr + D⊤x − D⊤Dr.
(3)

This differential equation can be interpreted as a neural network with a ‘leak’, −µr, feedforward connections, F = D⊤, and recurrent connections, Ω = D⊤D. The derivation of neural networks from quadratic loss functions was first introduced by Hopfield [7, 8], and the link to the autoencoder was pointed out in [19]. Here, we have chosen a quadratic cost term, which results in a linear differential equation. Depending on the precise nature of the cost term, one can also obtain nonlinear differential equations, such as the Cowan-Wilson equations [19, 8]. Here, we will first focus on linear networks, in which case ‘firing rates’ can be both positive and negative. Further below, we will also show how our results can be generalized to networks with positive firing rates and to networks in which neurons spike.

In the case of arbitrarily small costs, the network can be understood as implementing predictive coding [17]. The reconstructed (“predicted”) input signal, ˆx = Dr, is subtracted from the actual input signal, x, see Fig. 1b. Predictive coding here enforces a cancellation or ‘balance’ between the feedforward and recurrent synaptic inputs. If we assume that the actual input is excitatory, for instance, then the predicted input is mediated through recurrent lateral inhibition. Recent work has shown that this cancellation can be mediated by the detailed balance of currents in spiking networks [3, 1], a result we will return to later on.

Figure 1: Autoencoders. (a) Feedforward network. The input signal x is multiplied with the feedforward weights F. The network generates output firing rates r. (b) Recurrent network. The left panel shows how the reconstructed input signal ˆx = Dr is fed back and subtracted from the original input signal x. The right panel shows that this subtraction can also be performed through recurrent connections FD. For the optimal network, we set F = D⊤.
(c) Recurrent network with delayed feedback. Here, the output firing rates are fed back with a delay. This delayed feedback acts as just another input signal and is thereby re-used, thus generating short-term memory.

3 Unsupervised learning of the autoencoder with local learning rules

The transformation of the input signal, x, into the output firing rate, r, is largely governed by the decoder, D, as can be seen in Eq. (3). When the inputs are drawn from a particular distribution, p(x), such as the distribution of natural images or natural sounds, some decoders will lead to a smaller average loss and better performance. The average loss is given by

⟨L⟩ = (1/2) ⟨ ∥x − Dr∥^2 + µ ∥r∥^2 ⟩   (4)

where the angular brackets denote an average over many signal presentations. In practice, x will generally be centered and whitened. While it is straightforward to minimize this average loss with respect to the decoder, D, biological networks face a different problem.^1 A general recurrent neural network is governed by the firing rate dynamics

ṙ = −µr + Fx − Ωr,   (5)

and therefore has no access to the decoder, D, but only to its feedforward weights, F, and its recurrent weights, Ω. Furthermore, any change in F and Ω must rely solely on information that is locally available to each synapse. We will assume that the matrix Ω is initially chosen such that the dynamical system is stable, in which case its equilibrium state is given by

Fx = Ωr + µr.   (6)

If the dynamics of the input signal x are slow compared to the firing rate dynamics of the autoencoder, the network will generally operate close to equilibrium. We will assume that this is the case, and show that this assumption helps us to bridge from these firing rate networks to spiking networks further below.

A priori, it is not clear how to change the feedforward weights, F, or the recurrent weights, Ω, since neither appears in the average loss function, Eq. (4). We might be inclined to solve Eq. (6) for r and plug the result into Eq. (4).
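As a numerical aside (a minimal sketch with hypothetical dimensions, not the paper's simulation), one can integrate Eq. (5) with the optimal weights F = D⊤ and Ω = D⊤D and verify that the network settles into the equilibrium of Eq. (6), with ˆx = Dr ≈ x for a small cost µ:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, mu = 3, 10, 0.01
D = rng.standard_normal((K, N)) / np.sqrt(N)   # hypothetical decoder
F, Omega = D.T, D.T @ D                        # optimal weights from Sec. 2
x = rng.standard_normal(K)

r = np.zeros(N)
dt = 0.01
for _ in range(20000):                         # Euler integration of Eq. (5)
    r += dt * (-mu * r + F @ x - Omega @ r)

x_hat = D @ r                                  # linear readout, Eq. (1)
# at equilibrium, F x = (Omega + mu I) r, and x_hat is close to x for small mu
```

The remaining reconstruction error is of order µ relative to the signal, consistent with the predictive-coding picture in which recurrent inputs cancel the feedforward drive up to the cost term.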
However, we then have to operate on matrix inverses, the resulting gradients imply heavily non-local synaptic operations, and we would still need to somehow eliminate the decoder, D, from the picture. Here, we follow a different approach. We note that the optimal target network in the previous section implements a form of predictive coding. We therefore suggest a two-step approach to the learning problem. First, we fix the feedforward weights and we set up learning rules for the recurrent weights such that the network moves into a regime where the inputs, Fx, are predicted or ‘balanced’ by the recurrent weights, Ωr, see Fig. 1b. In this case, Ω= FD, and this will be our first target for learning. Second, once Ωis learnt, we change the feedforward weights F to decrease the average loss even further. We then return to step 1 and iterate. 1Note that minimization of the average loss with respect to D requires either a hard or a soft normalization constraint on D. 3 Since F is assumed constant in step 1, we can reach the target Ω= FD by investigating how the decoder D needs to change. The respective learning equation for D can then be translated into a learning equation for Ω, which will directly link the learning of Ωto the minimization of the loss function, Eq. (4). One thing to keep in mind, however, is that any change in Ωwill cause a compensatory change in r such that Eq. (6) remains fulfilled. These changes are related through the equation ˙Ωr + (Ω+ µI)˙r = 0 (7) which is obtained by taking the derivative of Eq. (6) and remembering that x changes on much slower time scales, and can therefore be considered a constant. In consequence, we have to consider the combined change of the recurrent weights, Ω, and the equilibrium firing rate, r, in order to reduce the average loss. Let us assume a small change of D in the direction ∆D = ϵxr⊤, which is equivalent to simply decreasing x in the first term of Eq. (4). 
Such a small change can be translated into the following learning rule for D,

Ḋ = ϵ(xr⊤ − αD),   (8)

where ϵ is sufficiently small to make the learning slow compared to the dynamics of the input signals x = x(t). The ‘weight decay’ term, −αD, acts as a soft normalization or regularizer on D. In turn, to have the recurrent weights Ω move towards FD, we multiply with F from the left to obtain the learning rule^2

Ω̇ = ϵ(Fxr⊤ − αΩ).   (9)

Importantly, this learning rule is completely local: it rests only on information that is available to each synapse, namely the presynaptic firing rates, r, and the postsynaptic input signal, Fx.

Finally, we show that the ‘unnormalized’ learning rule decreases the loss function. As noted above, any change of Ω causes a change in the equilibrium firing rate, see Eq. (7). By plugging the unnormalized learning rule for Ω, namely ϵFxr⊤, into Eq. (7), and by remembering that Fx = Ωr + µr, we obtain

ṙ = −ϵ∥r∥^2 r.   (10)

So, to first order, the firing rates decay in the direction of r. In turn, the temporal derivative of the loss function,

d⟨L⟩/dt = ⟨ (−Ḋr − Dṙ)⊤(x − Dr) + µṙ⊤r ⟩   (11)
        = ⟨ −ϵ∥r∥^2 (x − Dr)⊤(x − Dr) − µϵ∥r∥^4 ⟩,   (12)

is always negative, so that the unnormalized learning rule for Ω decreases the error. We then subtract the term −ϵαΩ (thus reducing the norm of the matrix but not changing the direction) as a ‘soft normalisation’ to prevent it from growing to infinity. Note that the argument here rests on the parallelism of the learning of D and Ω. The decoder, D, however, is merely a hypothetical quantity that does not have a physical counterpart in the network.

In step 2, we assume that the recurrent weights have reached their target, Ω = FD, and we learn the feedforward weights. For that we notice that at the absolute minimum, as shown in the previous section, the feedforward weights become F = D⊤. Hence, the target for the feedforward weights should be the transpose of the decoder.
Over long time intervals, the expected decoder is simply D = ⟨xr⊤⟩/α, since that is the fixed point of the decoder learning rule, Eq. (8). Hence, we suggest learning the feedforward weights on a yet slower time scale β ≪ ϵ, according to

Ḟ = β(rx⊤ − λF),   (13)

where the term −λF is once more a soft normalization. The fixed point of the learning rule is then F = D⊤. We emphasize that this learning rule is also local, based solely on the presynaptic input signal and postsynaptic firing rates.

In summary, we note that the autoencoder operates on four separate time scales. On a very fast, almost instantaneous time scale, the firing rates run into equilibrium for a given input signal, Eq. (6). On a slower time scale, the input signal, x, changes. On a yet slower time scale, the recurrent weights, Ω, are learnt, and their learning therefore uses many input signal values. On the final and slowest time scale, the feedforward weights, F, are optimized.

^2 Note that the fixed point of the decoder learning rule is D = ⟨xr⊤⟩/α. Hence, the fixed point of the recurrent learning is Ω = FD.

4 Unsupervised learning for non-whitened inputs

Algorithms for efficient coding are generally applied to whitened and centered data (see e.g. [2, 16]). Indeed, if the data are not centered, the read-out of the neurons will concentrate in the direction of the mean input signal in order to represent it, even though the mean may not carry any relevant information about the actual, time-varying signal. If the data are not whitened, the choice of the decoder will be dominated by second-order statistical dependencies, at the cost of representing higher-order dependencies. The latter are often more interesting to represent, as shown by applications of efficient or sparse coding algorithms to the visual system [20]. While whitening and centering are therefore common pre-processing steps, we note that, with a simple correction, our autoencoder network can take care of these pre-processing steps autonomously.
This extra step will be crucial later on, when we feed the time-delayed (and non-whitened) network activity back into the network. The main idea is simple: we suggest using a cost function that is invariant under affine transformations and equals the cost function we have been using until now in the case of centered and whitened data. To do so, we introduce the short-hands xc = x − ⟨x⟩ and rc = r − ⟨r⟩ for the centered input and the centered firing rates, and we write C = cov(x, x) for the covariance matrix of the input signal. The corrected loss function is then

L = (1/2) (xc − Drc)⊤ C^{−1} (xc − Drc) + (µ/2) ∥r∥^2.   (14)

The loss function reduces to Eq. (2) if the data are centered and if C = I. Furthermore, the value of the loss function remains constant if we apply any affine transformation x → Ax + b.^3 In turn, we can interpret the loss function as the likelihood function of a Gaussian.

From here on, we can follow through exactly the same derivations as in the previous sections. We first notice that the optimal firing rate dynamics become

V = D⊤C^{−1}x − D⊤C^{−1}Dr − µr   (15)
ṙ = V − ⟨V⟩   (16)

where V is a placeholder for the overall input. The dynamics differ in two ways from those in Eq. (3). First, the dynamics now require the subtraction of the averaged input, ⟨V⟩. Biophysically, this subtraction could correspond to a slower intracellular process, such as adaptation through hyperpolarization. Second, the optimal feedforward weights are now F = D⊤C^{−1}, and the optimal recurrent weights become Ω = D⊤C^{−1}D.

The derivation of the learning rules follows the outline of the previous section. Initially, the network starts with some random connectivity and obeys the dynamical equations

V = Fx − Ωr − µr   (17)
ṙ = V − ⟨V⟩.   (18)

We then apply the following modified learning rules for D and Ω,

Ḋ = ϵ(xr⊤ − ⟨x⟩⟨r⟩⊤ − αD)   (19)
Ω̇ = ϵ(Fxr⊤ − ⟨Fx⟩⟨r⟩⊤ − αΩ).   (20)

We note that in both cases, the learning remains local.
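The fixed-point structure of the rules (19)–(20) can be checked numerically. In the sketch below (hypothetical dimensions; for brevity the rates are a fixed surrogate r = Fx rather than the equilibrium of Eqs. (17)–(18)), the Ω-update equals F times the D-update, so the mismatch Ω − FD contracts by a factor (1 − ϵα) per step regardless of the input statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
K, N = 3, 8
F = rng.standard_normal((N, K)) * 0.3      # fixed feedforward weights
D = rng.standard_normal((K, N))            # hypothetical decoder
Omega = rng.standard_normal((N, N))        # recurrent weights
eps, alpha = 0.05, 1.0
x_avg, r_avg = np.zeros(K), np.zeros(N)    # slow running averages <x>, <r>

gap0 = np.linalg.norm(Omega - F @ D)
for _ in range(500):
    x = rng.standard_normal(K) + 1.0       # inputs need not be centered
    r = F @ x                              # surrogate rates (stand-in only)
    x_avg += 0.1 * (x - x_avg)
    r_avg += 0.1 * (r - r_avg)
    hebb = np.outer(x, r) - np.outer(x_avg, r_avg)
    D += eps * (hebb - alpha * D)                  # Eq. (19)
    Omega += eps * (F @ hebb - alpha * Omega)      # Eq. (20): only Fx and r
gap = np.linalg.norm(Omega - F @ D)
```

After 500 steps the mismatch has shrunk by (1 − ϵα)^500, i.e. by many orders of magnitude, illustrating why the recurrent weights track FD even while D itself keeps changing.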
However, similar to the rate dynamics, the dynamics of learning now require a slower synaptic process that computes the averaged signal inputs and presynaptic firing rates. Synapses are well known to operate on a large range of time scales (e.g., [5]), so that such slower processes are in broad agreement with physiology.

The target for learning the feedforward weights becomes F → D⊤C^{−1}. The matrix inverse can be eliminated by noticing that the differential equation Ḟ = ϵ(−FC + D⊤) has the required target as its fixed point. The covariance matrix C can be estimated by averaging over xc xc⊤, and the decoder D⊤ can be estimated by averaging over xc rc⊤, just as in the previous section, or as follows from Eq. (19). Hence, the learning of the feedforward weights becomes

Ḟ = β((r − Fx)x⊤ − ⟨r − Fx⟩⟨x⊤⟩ − αF).   (21)

As for the recurrent weights, the learning rests on local information, but requires a slower time scale that computes the mean input signal and presynaptic firing rates.

^3 Under an affine transformation, y = Ax + b and ˆy = Aˆx + b, we obtain: (y − ˆy)⊤ cov(y, y)^{−1} (y − ˆy) = (Ax − Aˆx)⊤ cov(Ax, Ax)^{−1} (Ax − Aˆx) = (x − ˆx)⊤ cov(x, x)^{−1} (x − ˆx).

5 The autoencoder with memory

We are finally in a position to tackle the problem we started out with: how to build a recurrent network that efficiently represents not just its present input, but also its past inputs. The objective function used so far, however, completely neglects the input history: even if the dimensionality of the input is much smaller than the number of neurons available to code it, the network will not try to use the extra ‘space’ available to remember the input history.

5.1 An objective function for short-term memory

Ideally, we would want to be able to read out both the present input and the past inputs, such that xt−n ≈ Dn rt, where n is an elementary time step and Dn are appropriately chosen readouts. We will in the following assume that there is a matrix M such that Dn M = Dn+1 for all n.
In other words, the input history should be accessible via ˆxt−n = Dn r = D0 M^n rt. Then the cost function we would like to minimize is a straightforward generalization of Eq. (2),

L = (1/2) Σ_{n≥0} γ^n ∥xt−n − DM^n rt∥^2 + (µ/2) ∥rt∥^2,   (22)

where we have set D = D0. We tacitly assume that x and r are centered and that the L2 norm is defined with respect to the input signal covariance matrix C, so that we can work in the full generality of Eq. (14) without keeping the additional notational baggage. Unfortunately, the direct minimization of this objective is impossible, since the network has no access to the past inputs xt−n for n ≥ 1. Rather, information about past inputs will have to be retrieved from the network activity itself. We can enforce that by replacing the past input signal at time t with its estimate in the previous time step, which we will denote by a prime. In other words, instead of asking that xt−n ≈ ˆxt−n, we ask that ˆx′_{(t−1)−(n−1)} ≈ ˆxt−n, so that the estimates of the input (and its history) are properly propagated through the network. Given the iterative character of the respective errors, ∥ˆx′_{(t−1)−(n−1)} − ˆxt−n∥ = ∥DM^{n−1}(rt−1 − Mrt)∥, we can define a loss function for one time step only,

L = (1/2) ∥xt − Drt∥^2 + (γ/2) ∥rt−1 − Mrt∥^2 + (µ/2) ∥rt∥^2.   (23)

Here, the first term enforces that the instantaneous input signal is properly encoded, while the second term ensures that the network remembers past information. The last term is a cost term that makes the system more stable and efficient. Note that a network which minimizes this loss function is maximizing its information content, even if the number of neurons, N, far exceeds the input dimension K, so that N ≫ K. As becomes clear from inspecting the loss function, the network is trying to code an N + K dimensional signal with only N neurons. Consequently, just as in the undercomplete autoencoder, all of its information capacity will be used.

5.2 Dynamics and learning

Conceptually, the loss function in Eq.
(23) is identical to Eq. (2), or rather, to Eq. (14), if we keep full generality. We only need to vertically stack the feedforward input and the delayed recurrent input into a single high-dimensional vector x′ = (x_t ; γ r_{t−1}). Similarly, we can horizontally combine the decoder D and the ‘time travel’ matrix M into a single decoder matrix D′ = (D  γM). The above loss function then reduces to

L = ∥x′_t − D′ r_t∥² + µ ∥r_t∥²,   (24)

and all of our derivations, including the learning rules, can be directly applied to this system. Note that the ‘input’ to the network now combines the actual input signal, x_t, and the delayed recurrent input, r_{t−1}. Consequently, this extended input is neither white nor centered, and we will need to work with the generalized dynamics and generalized learning rules derived in the previous section.

Figure 2: Emergence of working memory in a network of 10 neurons with random initial connectivity. (A) Rates of all neurons for the first 50 inputs at the beginning of learning. (B) Same as (A), but after learning. (C) Distance of fast recurrent weights to optimal configuration, F⊤F + M⊤M, relative to L2-norm of optimal weights.
(D) Squared error of optimal linear reconstruction of inputs at time t − k from rates at time t, relative to variance of the input, before learning; for k ∈ [0, . . . , 20]. (E) Same as (D), but after learning. (F) Scatter plot of fast recurrent weights after learning against optimal configuration, F⊤F + M⊤M.

The network dynamics will initially follow the differential equation⁴

V = F x_t + Ω_d r_{t−1} − Ω_f r_t − µ r_t   (25)
ṙ = V − ⟨V⟩.   (26)

Compared to our previous network, we now have effectively three inputs into the network: the feedforward inputs with weight F, a delayed recurrent input with weight Ω_d, and a fast recurrent input with weight Ω_f, see Fig. 1c. The optimal connectivities can be derived from the loss function and are (see also Fig. 1c)

F = D⊤   (27)
Ω_d = M⊤   (28)
Ω_f = D⊤D + M⊤M.   (29)

Consequently, there are also three learning rules: one for the fast recurrent weights, which follows Eq. (20), one for the feedforward weights, which follows Eq. (21), and one for the delayed recurrent weights, which also follows Eq. (21). In summary,

Ω̇_f = ϵ(F x_t r_t⊤ − ⟨F x_t⟩⟨r⟩⊤ − αΩ_f)   (30)
Ḟ = β((r_t − F x_t) x_t⊤ − ⟨r_t − F x_t⟩⟨x_t⊤⟩ − αF)   (31)
Ω̇_d = β((r_t − Ω_d r_{t−1}) r_{t−1}⊤ − ⟨r_t − Ω_d r_{t−1}⟩⟨r_{t−1}⊤⟩ − αΩ_d).   (32)

We note that the learning of the slow connections does not strictly minimize the expected loss in every time step, due to potential non-stationarities in the distribution of firing rates throughout the course of learning. In practice, we therefore find that the improvement in memory performance is often dominated by the learning of the fast connectivity (see example below).

⁴ We are now dealing with a delay-differential equation, which may be obscured by our notation. In practice, the term r_{t−1} would be replaced by a term of the form r(t − τ), where τ is the actual value of the ‘time step’.

6 Simulations

We simulated a firing rate network of ten neurons that learn to remember a one-dimensional, temporally uncorrelated white noise stimulus (Fig. 2). Firing rates were constrained to be positive.
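A minimal discrete-time sketch of the dynamics, Eqs. (25)–(26), with the initialization described in Section 6. The Euler time step, simulation length, and the reading of ⟨V⟩ as a population mean are our own illustrative assumptions, and the learning rules (30)–(32) would be interleaved with this loop at a slower rate:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 10, 1                 # ten neurons, one-dimensional stimulus (as in Section 6)
dt, mu = 0.1, 0.1            # hypothetical Euler step and cost weight

F = np.ones((N, K))                         # feedforward weights initialized to one
Of = rng.normal(0, 1.0, (N, N)) / N**2      # fast recurrent weights Omega_f
Od = rng.normal(0, 0.2, (N, N)) / N**2      # delayed recurrent weights Omega_d

r = np.zeros(N)
r_prev = np.zeros(N)
for t in range(500):
    x = rng.normal(size=K)                       # white-noise input
    V = F @ x + Od @ r_prev - Of @ r - mu * r    # Eq. (25)
    r_dot = V - V.mean()                         # Eq. (26); <V> read as population mean
    r_prev = r.copy()
    r = np.maximum(r + dt * r_dot, 0.0)          # Euler step; rates kept positive
```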
We initialized all feedforward weights to one, whereas the matrices Ω_f and Ω_d were initialised by drawing numbers from centered Gaussian distributions with variance 1 and 0.2, respectively. All matrices were then divided by N² = 100. At the onset, the network has some memory, similar to random networks based on reservoir computing. However, the recurrent inputs are generally not cancelling out the feedforward inputs. The effects of such imprecise balance are initially high firing rates and poor coding properties (Fig. 2A,D). At the end of learning, neurons are firing less, and the coding properties are close to the information-theoretic limit (10 time steps), see Fig. 2B,E. We note that, although the signal input was white noise for simplicity, the total input into the network (i.e., including the delayed firing rates) is neither white nor zero-mean, due to the positivity constraint on the firing rates. The network converges to the derived connectivity (Fig. 2C,F); we note, however, that the bulk of the improvements is due to the learning of the fast connections.

7 Towards learning in spiking recurrent networks

While we have shown how a recurrent network can learn to efficiently represent an input and its history using only local learning rules, our network is still far from being biologically realistic. A quite obvious discrepancy with biological networks is that the neurons are not spiking, but rather emit ‘firing rates’ that can be both positive and negative. How can we make the connection to spiking networks? Standard solutions have bridged from rate to spiking networks using mean-field approaches [18]. However, more recent work has shown that there is a direct link from the types of loss functions considered in this paper to balanced spiking networks. Recently, Hu et al. pointed out that the minimization of Eq.
(2) can be done by a network of neurons that fires both positive and negative spikes [9], and then argued that these networks can be translated into real spiking networks. A similar, but more direct, approach was introduced in [3, 1], which suggested minimizing the loss function, Eq. (2), under the constraint that r ≥ 0. The resulting networks consist of recurrently connected integrate-and-fire neurons that balance their feedforward and recurrent inputs [3, 1, 4]. Importantly, Eq. (2) remains a convex function of r, and Eq. (3) still applies (except that r cannot become negative). The precise match between the spiking network implementation and the firing rate minimization [1] opens up the possibility of applying our learning rules to the spiking networks. We note, though, that this holds strictly only in the regime where the spiking networks are balanced. (For unbalanced networks, there is no direct link to the firing rate formalism.) If the initial network is not balanced, we need to first learn how to bring it into the balanced state. For white-noise Gaussian inputs, [4] showed how this can be done. For more general inputs, this problem will have to be solved in the future.

8 Discussion

In summary, we have shown how a recurrent neural network can learn to efficiently represent both its present and past inputs. A key insight has been the link between the balancing of feedforward and recurrent inputs and the minimization of the cost function. If neurons can compensate both external feedforward and delayed recurrent excitation with lateral inhibition, then, to some extent, they must be coding the temporal trajectory of the stimulus. Indeed, in order to be able to compensate an input, the network must be coding it at some level. Furthermore, if synapses are linear, then so must be the decoder.
We have shown that this ‘balance’ can be learnt through local synaptic plasticity of the lateral connections, based only on the presynaptic input signals and postsynaptic firing rates of the neurons. Performance can then be further improved by learning the feedforward connections (as well as the ‘time travel’ matrix), which thereby take the input statistics into account. In our network simulations, these connections only played a minor role in the overall improvements. Since the learning rules for the time-travel matrix do not strictly minimize the expected loss (see above), there may still be room for future improvements.

References

[1] D. G. Barrett, S. Denève, and C. K. Machens. “Firing rate predictions in optimal balanced networks”. In: Advances in Neural Information Processing Systems 26. 2013, pp. 1538–1546.
[2] A. J. Bell and T. J. Sejnowski. “An information-maximization approach to blind separation and blind deconvolution”. In: Neural Computation 7 (1995), pp. 1129–1159.
[3] M. Boerlin, C. K. Machens, and S. Denève. “Predictive coding of dynamical variables in balanced spiking networks”. In: PLoS Computational Biology 9.11 (2013), e1003258.
[4] R. Bourdoukan et al. “Learning optimal spike-based representations”. In: Advances in Neural Information Processing Systems 25. MIT Press, 2012, epub.
[5] S. Fusi, P. J. Drew, and L. F. Abbott. “Cascade models of synaptically stored memories”. In: Neuron 45.4 (2005), pp. 599–611.
[6] S. Hochreiter and J. Schmidhuber. “Long short-term memory”. In: Neural Computation 9.8 (1997), pp. 1735–1780.
[7] J. J. Hopfield. “Neural networks and physical systems with emergent collective computational abilities”. In: Proceedings of the National Academy of Sciences 79.8 (1982), pp. 2554–2558.
[8] J. J. Hopfield. “Neurons with graded response have collective computational properties like those of two-state neurons”. In: Proc. Natl. Acad. Sci. USA 81 (1984), pp. 3088–3092.
[9] T. Hu, A. Genkin, and D. B. Chklovskii.
“A network of spiking neurons for computing sparse representations in an energy-efficient way”. In: Neural Computation 24.11 (2012), pp. 2852–2872.
[10] H. Jaeger. “The ‘echo state’ approach to analysing and training recurrent neural networks”. In: German National Research Center for Information Technology. Vol. 48. 2001.
[11] A. Lazar, G. Pipa, and J. Triesch. “SORN: a self-organizing recurrent neural network”. In: Frontiers in Computational Neuroscience 3 (2009), p. 23.
[12] M. Lukoševičius and H. Jaeger. “Reservoir computing approaches to recurrent neural network training”. In: Computer Science Review 3.3 (2009), pp. 127–149.
[13] W. Maass, T. Natschläger, and H. Markram. “Real-time computing without stable states: A new framework for neural computation based on perturbations”. In: Neural Computation 14.11 (2002), pp. 2531–2560.
[14] C. K. Machens, R. Romo, and C. D. Brody. “Flexible control of mutual inhibition: A neural model of two-interval discrimination”. In: Science 307 (2005), pp. 1121–1124.
[15] G. Major and D. Tank. “Persistent neural activity: prevalence and mechanisms”. In: Curr. Opin. Neurobiol. 14 (2004), pp. 675–684.
[16] B. A. Olshausen and D. J. Field. “Sparse coding with an overcomplete basis set: A strategy employed by V1?” In: Vision Research 37.23 (1997), pp. 3311–3325.
[17] R. P. N. Rao and D. H. Ballard. “Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects”. In: Nature Neuroscience 2.1 (1999), pp. 79–87.
[18] A. Renart, N. Brunel, and X.-J. Wang. “Mean-field theory of irregularly spiking neuronal populations and working memory in recurrent cortical networks”. In: Computational Neuroscience: A Comprehensive Approach (2004), pp. 431–490.
[19] C. J. Rozell et al. “Sparse coding via thresholding and local competition in neural circuits”. In: Neural Computation 20.10 (2008), pp. 2526–2563.
[20] E. P. Simoncelli and B. A. Olshausen.
“Natural image statistics and neural representation”. In: Ann. Rev. Neurosci. 24 (2001), pp. 1193–1216.
[21] X.-J. Wang. “Probabilistic decision making by slow reverberation in cortical circuits”. In: Neuron 36.5 (2002), pp. 955–968.
[22] P. J. Werbos. “Backpropagation through time: what it does and how to do it”. In: Proceedings of the IEEE 78.10 (1990), pp. 1550–1560.
[23] J. Zylberberg, J. T. Murphy, and M. R. DeWeese. “A sparse coding model with synaptically local plasticity and spiking neurons can account for the diverse shapes of V1 simple cell receptive fields”. In: PLoS Computational Biology 7.10 (2011), e1002250.
Projective dictionary pair learning for pattern classification

Shuhang Gu1, Lei Zhang1, Wangmeng Zuo2, Xiangchu Feng3
1Dept. of Computing, The Hong Kong Polytechnic University, Hong Kong, China
2School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
3Dept. of Applied Mathematics, Xidian University, Xi'an, China
{cssgu, cslzhang}@comp.polyu.edu.hk, cswmzuo@gmail.com, xcfeng@mail.xidian.edu.cn

Abstract

Discriminative dictionary learning (DL) has been widely studied in various pattern classification problems. Most of the existing DL methods aim to learn a synthesis dictionary to represent the input signal while enforcing the representation coefficients and/or representation residual to be discriminative. However, the ℓ0- or ℓ1-norm sparsity constraint on the representation coefficients adopted in most DL methods makes the training and testing phases time consuming. We propose a new discriminative DL framework, namely projective dictionary pair learning (DPL), which learns a synthesis dictionary and an analysis dictionary jointly to achieve the goal of signal representation and discrimination. Compared with conventional DL methods, the proposed DPL method can not only greatly reduce the time complexity in the training and testing phases, but also lead to very competitive accuracies in a variety of visual classification tasks.

1 Introduction

Sparse representation represents a signal as the linear combination of a small number of atoms chosen out of a dictionary, and it has achieved great success in various image processing and computer vision applications [1, 2]. The dictionary plays an important role in the signal representation process [3]. By using a predefined analytical dictionary (e.g., wavelet dictionary, Gabor dictionary) to represent a signal, the representation coefficients can be produced by simple inner product operations.
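To make the "simple inner product" point concrete: for an orthonormal analytical dictionary, coding is a single matrix–vector product and reconstruction is exact. The random orthonormal basis below is only a stand-in for a wavelet or Gabor dictionary:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 8
# Stand-in orthonormal analytical dictionary (columns form an orthonormal basis).
Phi, _ = np.linalg.qr(rng.normal(size=(p, p)))

x = rng.normal(size=p)
a = Phi.T @ x        # coding: plain inner products, no optimization involved
x_rec = Phi @ a      # reconstruction is exact for an orthonormal basis
```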
Such a fast and explicit coding makes analytical dictionaries very attractive in image representation; however, they are less effective at modeling the complex local structures of natural images. Sparse representation with a synthesis dictionary has been widely studied in recent years [2, 4, 5]. With a synthesis dictionary, the representation coefficients of a signal are usually obtained via an ℓp-norm (p ≤ 1) sparse coding process, which is computationally more expensive than analytical dictionary based representation. However, synthesis based sparse representation can better model the complex image local structures and it has led to many state-of-the-art results in image restoration [6]. Another important advantage lies in that the synthesis based sparse representation model allows us to easily learn a desired dictionary from the training data. The seminal work of KSVD [1] tells us that an over-complete dictionary can be learned from example natural images, and it can lead to much better image reconstruction results than the analytically designed off-the-shelf dictionaries. Inspired by KSVD, many dictionary learning (DL) methods have been proposed and have achieved state-of-the-art performance in image restoration tasks. The success of DL in image restoration problems triggers its applications in image classification tasks. Different from image restoration, assigning the correct class label to the test sample is the goal of classification problems; therefore, the discrimination capability of the learned dictionary is of major concern. To this end, supervised dictionary learning methods have been proposed to promote the discriminative power of the learned dictionary [4, 5, 7, 8, 9]. By encoding the query sample over the learned dictionary, both the coding coefficients and the coding residual can be used for classification, depending on the employed DL model. Discriminative DL has led to many state-of-the-art results in pattern recognition problems.
One popular strategy of discriminative DL is to learn a shared dictionary for all classes while enforcing the coding coefficients to be discriminative [4, 5, 7]. A classifier on the coding coefficients can be trained simultaneously to perform classification. Mairal et al. [7] proposed to learn a dictionary and a corresponding linear classifier in the coding vector space. In the label consistent KSVD (LC-KSVD) method, Jiang et al. [5] introduced a binary class label sparse code matrix to encourage samples from the same class to have similar sparse codes. In [4], Mairal et al. proposed a task driven dictionary learning (TDDL) framework, which minimizes different risk functions of the coding coefficients for different tasks. Another popular line of research in DL attempts to learn a structured dictionary to promote discrimination between classes [2, 8, 9, 10]. The atoms in the structured dictionary have class labels, and the class-specific representation residual can be computed for classification. Ramirez et al. [8] introduced an incoherence promotion term to encourage the sub-dictionaries of different classes to be independent. Yang et al. [9] proposed a Fisher discrimination dictionary learning (FDDL) method which applies the Fisher criterion to both the representation residual and the representation coefficients. Wang et al. [10] proposed a max-margin dictionary learning (MMDL) algorithm from the large margin perspective. In most of the existing DL methods, the ℓ0-norm or ℓ1-norm is used to regularize the representation coefficients, since sparser coefficients are more likely to produce better classification results. Hence a sparse coding step is generally involved in the iterative DL process. Although numerous algorithms have been proposed to improve the efficiency of sparse coding [11, 12], the use of ℓ0-norm or ℓ1-norm sparsity regularization is still a big computational burden and makes the training and testing inefficient.
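The cost gap described here can be seen by comparing a linear code a = Px (one matrix–vector product) with iterative ℓ1 sparse coding. The ISTA routine below is a generic sketch of such an iterative coder, not the specific solver used by any of the cited methods:

```python
import numpy as np

def ista(D, x, lam, n_iter=100):
    """Iterative soft-thresholding for min_a 0.5*||x - D a||^2 + lam*||a||_1 --
    the kind of per-sample iteration that sparse-coding-based DL pays at test time."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - (D.T @ (D @ a - x)) / L      # gradient step on the quadratic term
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return a
```

By contrast, an analysis code is `a = P @ x`: one matrix product and no iterations. For an orthonormal D, the ISTA fixed point is the soft-thresholded correlation soft(D⊤x, λ), which gives a convenient correctness check.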
It is interesting to investigate whether we can learn discriminative dictionaries without the costly ℓ0-norm or ℓ1-norm sparsity regularization. In particular, it would be very attractive if the representation coefficients could be obtained by linear projection instead of nonlinear sparse coding. To this end, in this paper we propose a projective dictionary pair learning (DPL) framework to learn a synthesis dictionary and an analysis dictionary jointly for pattern classification. The analysis dictionary is trained to generate discriminative codes by efficient linear projection, while the synthesis dictionary is trained to achieve class-specific discriminative reconstruction. The idea of using functions to predict the representation coefficients is not new, and fast approximate sparse coding methods have been proposed to train nonlinear functions to generate sparse codes [13, 14]. However, there are clear differences between our DPL model and these methods. First, in DPL the synthesis dictionary and analysis dictionary are trained jointly, which ensures that the representation coefficients can be approximated by a simple linear projection function. Second, DPL utilizes class label information and promotes the discriminative power of the representation codes. One related work to this paper is the analysis-based sparse representation prior learning [15, 16], which represents a signal from a dual viewpoint of the commonly used synthesis model. Analysis prior learning tries to learn a group of analysis operators which have sparse responses to the latent clean signal. Sprechmann et al. [17] proposed to train a group of analysis operators for classification; however, in the testing phase a costly sparsity-constrained optimization problem is still required. Feng et al. [18] jointly trained a dimensionality reduction transform and a dictionary for face recognition.
The discriminative dictionary is trained in the transformed space, and sparse coding is needed in both the training and testing phases. The contribution of our work is two-fold. First, we introduce a new DL framework, which extends the conventional discriminative synthesis dictionary learning to discriminative synthesis and analysis dictionary pair learning (DPL). Second, the DPL utilizes an analytical coding mechanism and it largely improves the efficiency in both the training and testing phases. Our experiments on various visual classification datasets show that DPL achieves very competitive accuracy with state-of-the-art DL algorithms, while it is significantly faster in both training and testing.

2 Projective Dictionary Pair Learning

2.1 Discriminative dictionary learning

Denote by X = [X_1, . . . , X_k, . . . , X_K] a set of p-dimensional training samples from K classes, where X_k ∈ R^{p×n} is the training sample set of class k, and n is the number of samples of each class. Discriminative DL methods aim to learn an effective data representation model from X for classification tasks by exploiting the class label information of training data. Most of the state-of-the-art discriminative DL methods [5, 7, 9] can be formulated under the following framework:

min_{D,A} ∥X − DA∥²_F + λ∥A∥_p + Ψ(D, A, Y),   (1)

where λ ≥ 0 is a scalar constant, Y represents the class label matrix of samples in X, D is the synthesis dictionary to be learned, and A is the coding coefficient matrix of X over D. In the training model (1), the data fidelity term ∥X − DA∥²_F ensures the representation ability of D; ∥A∥_p is the ℓp-norm regularizer on A; and Ψ(D, A, Y) stands for some discrimination promotion function, which ensures the discrimination power of D and A.
As we introduced in Section 1, some DL methods [4, 5, 7] learn a shared dictionary for all classes and a classifier on the coding coefficients simultaneously, while some DL methods [8, 9, 10] learn a structured dictionary to promote discrimination between classes. However, they all employ an ℓ0- or ℓ1-norm sparsity regularizer on the coding coefficients, making the training stage and the consequent testing stage inefficient. In this work, we extend the conventional DL model in (1), which learns a discriminative synthesis dictionary, to a novel DPL model, which learns a pair of synthesis and analysis dictionaries. No costly ℓ0- or ℓ1-norm sparsity regularizer is required in the proposed DPL model, and the coding coefficients can be explicitly obtained by linear projection. Fortunately, DPL does not sacrifice the classification accuracy while achieving significant improvement in the efficiency, as demonstrated by our extensive experiments in Section 3.

2.2 The dictionary pair learning model

The conventional discriminative DL model in (1) aims to learn a synthesis dictionary D to sparsely represent the signal X, and a costly ℓ1-norm sparse coding process is needed to resolve the code A. Suppose that we can find an analysis dictionary, denoted by P ∈ R^{mK×p}, such that the code A can be analytically obtained as A = PX; then the representation of X would become very efficient. Based on this idea, we propose to learn such an analysis dictionary P together with the synthesis dictionary D, leading to the following DPL model:

{P*, D*} = arg min_{P,D} ∥X − DPX∥²_F + Ψ(D, P, X, Y),   (2)

where Ψ(D, P, X, Y) is some discrimination function. D and P form a dictionary pair: the analysis dictionary P is used to analytically code X, and the synthesis dictionary D is used to reconstruct X. The discrimination power of the DPL model depends on the suitable design of Ψ(D, P, X, Y). We propose to learn a structured synthesis dictionary D = [D_1, . . . , D_k, . . .
, D_K] and a structured analysis dictionary P = [P_1; . . . ; P_k; . . . ; P_K], where {D_k ∈ R^{p×m}, P_k ∈ R^{m×p}} forms a sub-dictionary pair corresponding to class k. Recent studies on sparse subspace clustering [19] have proved that a sample can be represented by its corresponding dictionary if the signals satisfy certain incoherence conditions. With the structured analysis dictionary P, we want the sub-dictionary P_k to project the samples from class i, i ≠ k, to a nearly null space, i.e.,

P_k X_i ≈ 0, ∀k ≠ i.   (3)

Clearly, with (3) the coefficient matrix PX will be nearly block diagonal. On the other hand, with the structured synthesis dictionary D, we want the sub-dictionary D_k to well reconstruct the data matrix X_k from its projective code matrix P_k X_k; that is, the dictionary pair should minimize the reconstruction error:

min_{P,D} Σ_{k=1}^{K} ∥X_k − D_k P_k X_k∥²_F.   (4)

Based on the above analysis, we can readily have the following DPL model:

{P*, D*} = arg min_{P,D} Σ_{k=1}^{K} ∥X_k − D_k P_k X_k∥²_F + λ∥P_k X̄_k∥²_F,  s.t. ∥d_i∥²₂ ≤ 1,   (5)

Algorithm 1 Discriminative synthesis & analysis dictionary pair learning (DPL)
Input: Training samples for K classes X = [X_1, X_2, . . . , X_K], parameters λ, τ, m;
1: Initialize D^(0) and P^(0) as random matrices with unit Frobenius norm, t = 0;
2: while not converged do
3:   t ← t + 1;
4:   for k = 1 : K do
5:     Update A^(t)_k by (8);
6:     Update P^(t)_k by (10);
7:     Update D^(t)_k by (12);
8:   end for
9: end while
Output: Analysis dictionary P, synthesis dictionary D.

where X̄_k denotes the complementary data matrix of X_k in the whole training set X, λ > 0 is a scalar constant, and d_i denotes the ith atom of the synthesis dictionary D. We constrain the energy of each atom d_i in order to avoid the trivial solution of P_k = 0 and make the DPL more stable. The DPL model in (5) is not a sparse representation model, but it enforces group sparsity on the code matrix PX (i.e., PX is nearly block diagonal).
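The objective in Eq. (5) is straightforward to evaluate; the helper below is a sketch (function name and data layout are our own), useful for checking that a perfectly block-structured dictionary pair drives the loss to zero:

```python
import numpy as np

def dpl_objective(X_list, D_list, P_list, lam):
    """Eq. (5): per-class reconstruction from the projective codes P_k X_k,
    plus suppression of projections of the complementary data Xbar_k."""
    K = len(X_list)
    loss = 0.0
    for k in range(K):
        Xbar = np.hstack([X_list[j] for j in range(K) if j != k])
        loss += np.linalg.norm(X_list[k] - D_list[k] @ (P_list[k] @ X_list[k]), 'fro')**2
        loss += lam * np.linalg.norm(P_list[k] @ Xbar, 'fro')**2
    return loss
```

For instance, if two classes occupy orthogonal coordinate subspaces and each (D_k, P_k) is the corresponding orthogonal basis/projector pair, both terms vanish.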
Actually, the role of sparse coding in classification is still an open problem, and some researchers have argued that sparse coding may not be crucial to classification tasks [20, 21]. Our findings in this work are supportive of this argument. The DPL model leads to very competitive classification performance with those sparse coding based DL models, but it is much faster.

2.3 Optimization

The objective function in (5) is generally non-convex. We introduce a variable matrix A and relax (5) to the following problem:

{P*, A*, D*} = arg min_{P,A,D} Σ_{k=1}^{K} ∥X_k − D_k A_k∥²_F + τ∥P_k X_k − A_k∥²_F + λ∥P_k X̄_k∥²_F,  s.t. ∥d_i∥²₂ ≤ 1,   (6)

where τ is a scalar constant. All terms in the above objective function are characterized by the Frobenius norm, and (6) can be easily solved. We initialize the analysis dictionary P and synthesis dictionary D as random matrices with unit Frobenius norm, and then alternatively update A and {D, P}. The minimization can be alternated between the following two steps.

(1) Fix D and P, update A:

A* = arg min_A Σ_{k=1}^{K} ∥X_k − D_k A_k∥²_F + τ∥P_k X_k − A_k∥²_F.   (7)

This is a standard least squares problem and we have the closed-form solution:

A*_k = (D_k⊤ D_k + τI)⁻¹ (τP_k X_k + D_k⊤ X_k).   (8)

(2) Fix A, update D and P:

P* = arg min_P Σ_{k=1}^{K} τ∥P_k X_k − A_k∥²_F + λ∥P_k X̄_k∥²_F;
D* = arg min_D Σ_{k=1}^{K} ∥X_k − D_k A_k∥²_F,  s.t. ∥d_i∥²₂ ≤ 1.   (9)

The closed-form solution of P can be obtained as:

P*_k = τA_k X_k⊤ (τX_k X_k⊤ + λX̄_k X̄_k⊤ + γI)⁻¹,   (10)

where γ = 10e−4 is a small number. The D problem can be optimized by introducing a variable S:

min_{D,S} Σ_{k=1}^{K} ∥X_k − D_k A_k∥²_F,  s.t. D = S, ∥s_i∥²₂ ≤ 1.   (11)

The optimal solution of (11) can be obtained by the ADMM algorithm:

D^(r+1) = arg min_D Σ_{k=1}^{K} ∥X_k − D_k A_k∥²_F + ρ∥D_k − S^(r)_k + T^(r)_k∥²_F,
S^(r+1) = arg min_S Σ_{k=1}^{K} ρ∥D^(r+1)_k − S_k + T^(r)_k∥²_F,  s.t. ∥s_i∥²₂ ≤ 1,
T^(r+1) = T^(r) + D^(r+1)_k − S^(r+1)_k,  update ρ if appropriate.
(12)

Figure 1: (a) The representation codes ∥P*_k y∥²₂ and (b) reconstruction error ∥y − D*_k P*_k y∥²₂ on the Extended YaleB dataset.

In each step of optimization, we have closed-form solutions for the variables A and P, and the ADMM based optimization of D converges rapidly. The training of the proposed DPL model is much faster than most of the previous discriminative DL methods. The proposed DPL algorithm is summarized in Algorithm 1. When the difference between the energy in two adjacent iterations is less than 0.01, the iteration stops. The analysis dictionary P and the synthesis dictionary D are then output for classification. One can see that the first sub-objective function in (9) is a discriminative analysis dictionary learner, focusing on promoting the discriminative power of P; the second sub-objective function in (9) is a representative synthesis dictionary learner, aiming to minimize the reconstruction error of the input signal with the coding coefficients generated by the analysis dictionary P. When the minimization process converges, a balance between the discrimination and representation power of the model can be achieved.

2.4 Classification scheme

In the DPL model, the analysis sub-dictionary P*_k is trained to produce small coefficients for samples from classes other than k, and it can only generate significant coding coefficients for samples from class k. Meanwhile, the synthesis sub-dictionary D*_k is trained to reconstruct the samples of class k from their projective coefficients P*_k X_k; that is, the residual ∥X_k − D*_k P*_k X_k∥²_F will be small. On the other hand, since P*_k X_i, i ≠ k, will be small and D*_k is not trained to reconstruct X_i, the residual ∥X_i − D*_k P*_k X_i∥²_F will be much larger than ∥X_k − D*_k P*_k X_k∥²_F.
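This residual comparison translates directly into the classifier formalized as Eq. (13): assign y to the class whose pair (D_k, P_k) reconstructs it best. A minimal sketch (function name and toy dictionaries are our own):

```python
import numpy as np

def dpl_classify(y, D_list, P_list):
    """Assign y to the class minimizing ||y - D_k P_k y||_2, as in Eq. (13)."""
    residuals = [np.linalg.norm(y - D @ (P @ y)) for D, P in zip(D_list, P_list)]
    return int(np.argmin(residuals))
```

Evaluating P_k y first and then D_k (P_k y) costs O(mp) per class, matching the O(Kmp) total quoted in Section 2.5.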
In the testing phase, if the query sample y is from class k, its projective coding vector by P*_k (i.e., P*_k y) will be more likely to be significant, while its projective coding vectors by P*_i, i ≠ k, tend to be small. Consequently, the reconstruction residual ∥y − D*_k P*_k y∥²₂ tends to be much smaller than the residuals ∥y − D*_i P*_i y∥²₂, i ≠ k. Let us use the Extended YaleB face dataset [22] to illustrate this. (The detailed experimental setting can be found in Section 3.) Fig. 1(a) shows the ℓ2-norm of the coefficients P*_k y, where the horizontal axis refers to the index of y and the vertical axis refers to the index of P*_k. One can clearly see that ∥P*_k y∥²₂ has a nearly block diagonal structure, and the diagonal blocks are produced by the query samples which have the same class labels as P*_k. Fig. 1(b) shows the reconstruction residual ∥y − D*_k P*_k y∥²₂. One can see that ∥y − D*_k P*_k y∥²₂ also has a block diagonal structure, and only the diagonal blocks have small residuals. Clearly, the class-specific reconstruction residual can be used to identify the class label of y, and we can naturally have the following classifier associated with the DPL model:

identity(y) = arg min_i ∥y − D_i P_i y∥₂.   (13)

2.5 Complexity and Convergence

Complexity. In the training phase of DPL, A_k, P_k and D_k are updated alternatively. In each iteration, the time complexities of updating A_k, P_k and D_k are O(mpn + m³ + m²n), O(mnp + p³ + mp²) and O(W(pmn + m³ + m²p + p²m)), respectively, where W is the iteration number in the ADMM algorithm for updating D. We experimentally found that in most cases W is less than 20. In many applications, the number of training samples and the number of dictionary atoms for each class are much smaller than the dimension p. Thus the major computational burden in the training phase of DPL is on updating P_k, which involves an inverse of a p × p matrix (τX_k X_k⊤ + λX̄_k X̄_k⊤ + γI).
Fortunately, this matrix will not change in the iteration, and thus its inverse can be pre-computed. This greatly accelerates the training process. In the testing phase, our classification scheme is very efficient. The computation of the class-specific reconstruction error ∥y − D*_k P*_k y∥₂ only has a complexity of O(mp). Thus, the total complexity of our model to classify a query sample is O(Kmp).

Figure 2: The convergence curve of DPL on the AR database.

Convergence. The objective function in (6) is a bi-convex problem for {(D, P), (A)}: by fixing A the function is convex in D and P, and by fixing D and P the function is convex in A. The convergence of such a problem has already been intensively studied [23], and the proposed optimization algorithm is actually an alternate convex search (ACS) algorithm. Since we have the optimal solutions of updating A, P and D, and our objective function has a general lower bound 0, our algorithm is guaranteed to converge to a stationary point. A detailed convergence analysis can be found in our supplementary file. It is empirically found that the proposed DPL algorithm converges rapidly. Fig. 2 shows the convergence curve of our algorithm on the AR face dataset [24]. One can see that the energy drops quickly and becomes very small after 10 iterations. In most of our experiments, our algorithm will converge in less than 20 iterations.

3 Experimental Results

We evaluate the proposed DPL method on various visual classification datasets, including two face databases (Extended YaleB [22] and AR [24]), one object categorization database (Caltech101 [25]), and one action recognition database (UCF 50 action [26]). These datasets are widely used in previous works [5, 9] to evaluate DL algorithms. Besides the classification accuracy, we also report the training and testing time of the competing algorithms in the experiments.
All the competing algorithms are implemented in Matlab except for SVM, which is implemented in C. All experiments are run on a desktop PC with a 3.5GHz Intel CPU and 8 GB memory. The testing time is calculated as the average processing time to classify a single query sample.

3.1 Parameter setting

There are three parameters, m, λ and τ, in the proposed DPL model. To achieve the best performance, in the face recognition and object recognition experiments we set the number of dictionary atoms to its maximum (i.e., the number of training samples) for all competing DL algorithms, including the proposed DPL. In the action recognition experiment, since the number of samples per class is relatively large, we set the number of dictionary atoms of each class to 50 for all the DL algorithms. Parameter τ is an algorithm parameter, and the regularization parameter λ controls the discriminative property of P. In all the experiments, we choose λ and τ by 10-fold cross validation on each dataset. For all the competing methods, we tune their parameters for the best performance.

3.2 Competing methods

We compare the proposed DPL method with the following methods: the baseline nearest subspace classifier (NSC) and linear support vector machine (SVM), sparse representation based classification (SRC) [2] and collaborative representation based classification (CRC) [21], and the state-of-the-art DL algorithms DLSI [8], FDDL [9] and LC-KSVD [5]. The original DLSI represents the test sample by each class-specific sub-dictionary. The results in [9] have shown that by coding the test sample collaboratively over the whole dictionary, the classification performance can be greatly improved. Therefore, we follow the use of DLSI in [9] and denote this method as DLSI(C). Of the two variants of LC-KSVD proposed in [5], we adopt LC-KSVD2 since it consistently produces better classification accuracy.

Figure 3: Sample images in the (a) Extended YaleB and (b) AR databases.
3.3 Face recognition

We first evaluate our algorithm on two widely used face datasets: Extended YaleB [22] and AR [24]. The Extended YaleB database has large variations in illumination and expression, as illustrated in Fig. 3(a). The AR database involves many variations such as illumination, expression, and sunglass and scarf occlusion, as illustrated in Fig. 3(b). We follow the experimental settings in [5] for fair comparison with the state of the art. A set of 2,414 face images of 38 persons is extracted from the Extended YaleB database. We randomly select half of the images per subject for training and use the other half for testing. For the AR database, a set of 2,600 images of 50 female and 50 male subjects is extracted; 20 images of each subject are used for training and the remaining 6 images for testing. We use the features provided by Jiang et al. [5] to represent the face images. The feature dimension is 504 for Extended YaleB and 540 for AR. The parameter τ is set to 0.05 on both datasets, and λ is set to 3e-3 and 5e-3 on the Extended YaleB and AR datasets, respectively. In these two experiments, we also compare with the max-margin dictionary learning (MMDL) [10] algorithm, whose recognition accuracy is quoted from the original paper; its training/testing time is not available.

Table 1: Results on the Extended YaleB database.

            Accuracy (%)   Training time (s)   Testing time (s)
  NSC       94.7           no need             1.41e-3
  SVM       95.6           0.70                3.49e-5
  CRC       97.0           no need             1.92e-3
  SRC       96.5           no need             2.16e-2
  DLSI(C)   97.0           567.47              4.30e-2
  FDDL      96.7           6,574.6             1.43
  LC-KSVD   96.7           412.58              4.22e-4
  MMDL      97.3           --                  --
  DPL       97.5           4.38                1.71e-4

Table 2: Results on the AR database.
            Accuracy (%)   Training time (s)   Testing time (s)
  NSC       92.0           no need             3.29e-3
  SVM       96.5           3.42                6.16e-5
  CRC       98.0           no need             5.08e-3
  SRC       97.5           no need             3.42e-2
  DLSI(C)   97.5           2,470.5             0.16
  FDDL      97.5           61,709              2.50
  LC-KSVD   97.8           1,806.3             7.72e-4
  MMDL      97.3           --                  --
  DPL       98.3           11.30               3.93e-4

Extended YaleB database: The recognition accuracies and training/testing times of the different algorithms on the Extended YaleB database are summarized in Table 1. The proposed DPL algorithm achieves the best accuracy, slightly higher than MMDL, DLSI(C), LC-KSVD and FDDL. Moreover, DPL has an obvious advantage in efficiency over the other DL algorithms.

AR database: The recognition accuracies and running times on the AR database are shown in Table 2. DPL achieves the best results among all the competing algorithms. Compared with the experiment on Extended YaleB, this experiment has more training samples and a higher feature dimension, and DPL's efficiency advantage is much more obvious: in training, it is more than 159 times faster than DLSI and LC-KSVD, and 5,460 times faster than FDDL.

3.4 Object recognition

In this section we test DPL on object categorization using the Caltech101 database [25], which includes 9,144 images from 102 classes (101 common object classes and a background class). The number of samples per category varies from 31 to 800. Following the experimental settings in [5, 27], 30 samples per category are used for training and the rest for testing. We use the standard bag-of-words (BOW) + spatial pyramid matching (SPM) framework [27] for feature extraction.

Table 3: Recognition accuracy (%) & running time (s) on the Caltech101 database.

            Accuracy   Training time   Testing time
  NSC       70.1       no need         1.79e-2
  SVM       64.6       14.6            1.81e-4
  CRC       68.2       no need         1.38e-2
  SRC       70.7       no need         1.09
  DLSI(C)   73.1       97,200          1.46
  FDDL      73.2       104,000         12.86
  LC-KSVD   73.6       12,700          4.17e-3
  DPL       73.9       134.6           1.29e-3
Dense SIFT descriptors are extracted on three grids of sizes 1×1, 2×2, and 4×4 to calculate the SPM features. For a fair comparison with [5], we use the vector quantization based coding method to extract the mid-level features and the standard max pooling approach to build the high-dimensional pooled features. Finally, the original 21,504-dimensional data are reduced to 3,000 dimensions by PCA. The parameters τ and λ used in our algorithm are 0.05 and 1e-4, respectively. The experimental results are listed in Table 3. Again, DPL achieves the best performance. Though its classification accuracy is only slightly better than that of the DL methods, its advantage in terms of training/testing time is huge.

3.5 Action recognition

Action recognition is an important yet very challenging task that has attracted great research interest in recent years. We test our algorithm on the UCF50 action database [26], which includes 6,680 human action videos from YouTube in 50 categories. We use the action bank features [28] and five-fold data splitting to evaluate our algorithm. For all the comparison methods, the feature dimension is reduced to 5,000 by PCA. The parameters τ and λ used in our algorithm are both 0.01. The results of the different methods are reported in Table 4. Our DPL algorithm achieves much higher accuracy than its competitors. FDDL has the second highest accuracy; however, it is 1,666 times slower than DPL in training and 83,317 times slower in testing.

Table 4: Recognition accuracy (%) & running time (s) on the UCF50 action database.

            Accuracy   Training time   Testing time
  NSC       51.8       no need         6.11e-2
  SVM       57.9       59.8            5.02e-4
  CRC       60.3       no need         6.76e-2
  SRC       59.6       no need         8.92
  DLSI(C)   60.0       397,000         10.11
  FDDL      61.1       415,000         89.15
  LC-KSVD   53.6       9,272.4         0.12
  DPL       62.9       249.0           1.07e-3

4 Conclusion

We proposed a novel projective dictionary pair learning (DPL) model for pattern classification tasks.
Different from conventional dictionary learning (DL) methods, which learn a single synthesis dictionary, DPL jointly learns a synthesis dictionary and an analysis dictionary. Such a pair of dictionaries works together to perform representation and discrimination simultaneously. Compared with previous DL methods, DPL employs projective coding, which largely reduces the computational burden in learning and testing. Performance evaluation was conducted on publicly accessible visual classification datasets. DPL exhibits highly competitive classification accuracy with state-of-the-art DL methods, while showing significantly higher efficiency, e.g., hundreds to thousands of times faster than LC-KSVD and FDDL in training and testing.

References

[1] Aharon, M., Elad, M., Bruckstein, A.: K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. on Signal Processing 54(11) (2006) 4311–4322
[2] Wright, J., Yang, A.Y., Ganesh, A., Sastry, S.S., Ma, Y.: Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence 31(2) (2009) 210–227
[3] Rubinstein, R., Bruckstein, A.M., Elad, M.: Dictionaries for sparse representation modeling. Proceedings of the IEEE 98(6) (2010) 1045–1057
[4] Mairal, J., Bach, F., Ponce, J.: Task-driven dictionary learning. IEEE Trans. Pattern Anal. Mach. Intelligence 34(4) (2012) 791–804
[5] Jiang, Z., Lin, Z., Davis, L.: Label consistent K-SVD: learning a discriminative dictionary for recognition. IEEE Trans. on Pattern Anal. Mach. Intelligence 35(11) (2013) 2651–2664
[6] Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Transactions on Image Processing 15(12) (2006) 3736–3745
[7] Mairal, J., Bach, F., Ponce, J., Sapiro, G., Zisserman, A., et al.: Supervised dictionary learning. In: NIPS.
(2008)
[8] Ramirez, I., Sprechmann, P., Sapiro, G.: Classification and clustering via dictionary learning with structured incoherence and shared features. In: CVPR. (2010)
[9] Yang, M., Zhang, L., Feng, X., Zhang, D.: Fisher discrimination dictionary learning for sparse representation. In: ICCV. (2011)
[10] Wang, Z., Yang, J., Nasrabadi, N., Huang, T.: A max-margin perspective on sparse representation-based classification. In: ICCV. (2013)
[11] Lee, H., Battle, A., Raina, R., Ng, A.Y.: Efficient sparse coding algorithms. In: NIPS. (2007)
[12] Hale, E.T., Yin, W., Zhang, Y.: Fixed-point continuation for ℓ1-minimization: Methodology and convergence. SIAM Journal on Optimization 19(3) (2008) 1107–1130
[13] Gregor, K., LeCun, Y.: Learning fast approximations of sparse coding. In: ICML. (2010)
[14] Ranzato, M., Poultney, C., Chopra, S., Cun, Y.L.: Efficient learning of sparse representations with an energy-based model. In: NIPS. (2006)
[15] Yunjin, C., Thomas, P., Bischof, H.: Learning l1-based analysis and synthesis sparsity priors using bilevel optimization. NIPS workshop (2012)
[16] Elad, M., Milanfar, P., Rubinstein, R.: Analysis versus synthesis in signal priors. Inverse Problems 23(3) (2007) 947
[17] Sprechmann, P., Litman, R., Yakar, T.B., Bronstein, A., Sapiro, G.: Efficient supervised sparse analysis and synthesis operators. In: NIPS. (2013)
[18] Feng, Z., Yang, M., Zhang, L., Liu, Y., Zhang, D.: Joint discriminative dimensionality reduction and dictionary learning for face recognition. Pattern Recognition 46(8) (2013) 2134–2143
[19] Soltanolkotabi, M., Elhamifar, E., Candes, E.: Robust subspace clustering. arXiv preprint arXiv:1301.2603 (2013)
[20] Coates, A., Ng, A.Y.: The importance of encoding versus training with sparse coding and vector quantization. In: ICML. (2011)
[21] Zhang, L., Yang, M., Feng, X.: Sparse representation or collaborative representation: Which helps face recognition? In: ICCV.
(2011)
[22] Georghiades, A., Belhumeur, P., Kriegman, D.: From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Patt. Anal. Mach. Intel. 23(6) (2001) 643–660
[23] Gorski, J., Pfeuffer, F., Klamroth, K.: Biconvex sets and optimization with biconvex functions: a survey and extensions. Mathematical Methods of Operations Research 66(3) (2007) 373–407
[24] Martinez, A., Benavente, R.: The AR face database. CVC Technical Report (1998)
[25] Fei-Fei, L., Fergus, R., Perona, P.: Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. Computer Vision and Image Understanding 106(1) (2007) 59–70
[26] Reddy, K.K., Shah, M.: Recognizing 50 human action categories of web videos. Machine Vision and Applications 24(5) (2013) 971–981
[27] Lazebnik, S., Schmid, C., Ponce, J.: Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In: CVPR. (2006)
[28] Sadanand, S., Corso, J.J.: Action bank: A high-level representation of activity in video. In: CVPR. (2012)
Analysis of Learning from Positive and Unlabeled Data

Marthinus C. du Plessis, The University of Tokyo, Tokyo, 113-0033, Japan, christo@ms.k.u-tokyo.ac.jp
Gang Niu, Baidu Inc., Beijing, 100085, China, niugang@baidu.com
Masashi Sugiyama, The University of Tokyo, Tokyo, 113-0033, Japan, sugi@k.u-tokyo.ac.jp

Abstract

Learning a classifier from positive and unlabeled data is an important class of classification problems that are conceivable in many practical applications. In this paper, we first show that this problem can be solved by cost-sensitive learning between positive and unlabeled data. We then show that convex surrogate loss functions such as the hinge loss may lead to a wrong classification boundary due to an intrinsic bias, but the problem can be avoided by using non-convex loss functions such as the ramp loss. We next analyze the excess risk when the class prior is estimated from data, and show that the classification accuracy is not sensitive to class-prior estimation if the unlabeled data is dominated by the positive data (this is naturally satisfied in inlier-based outlier detection because inliers are dominant in the unlabeled dataset). Finally, we provide generalization error bounds and show that, for an equal number of labeled and unlabeled samples, the generalization error of learning only from positive and unlabeled samples is no worse than 2√2 times that of the fully supervised case. These theoretical findings are also validated through experiments.

1 Introduction

Let us consider the problem of learning a classifier from positive and unlabeled data (PU classification), which is aimed at assigning labels to the unlabeled dataset [1]. PU classification is conceivable in various applications such as land-cover classification [2], where positive samples (built-up urban areas) can be easily obtained, but negative samples (rural areas) are too diverse to be labeled.
Outlier detection in unlabeled data based on inlier data can also be regarded as PU classification [3, 4]. In this paper, we first explain that, if the class prior in the unlabeled dataset is known, PU classification can be reduced to the problem of cost-sensitive classification [5] between positive and unlabeled data. Thus, in principle, the PU classification problem can be solved by a standard cost-sensitive classifier such as the weighted support vector machine [6]. The goal of this paper is to give new insight into this PU classification algorithm. Our contributions are threefold:

• The use of convex surrogate loss functions such as the hinge loss may potentially lead to a wrong classification boundary being selected, even when the underlying classes are completely separable. To obtain the correct classification boundary, the use of non-convex loss functions such as the ramp loss is essential.

• When the class prior in the unlabeled dataset is estimated from data, the classification error is governed by what we call the effective class prior, which depends both on the true class prior and the estimated class prior. In addition to giving intuition about the classification error incurred in PU classification, a practical outcome of this analysis is that the classification error is not sensitive to class-prior estimation error if the unlabeled data is dominated by positive data. This is useful in, e.g., inlier-based outlier detection scenarios where inlier samples are dominant in the unlabeled dataset [3, 4]. This analysis can be regarded as an extension of the traditional analysis of class priors in ordinary classification scenarios [7, 8] to PU classification.

• We establish generalization error bounds for PU classification. For an equal number of positive and unlabeled samples, the convergence rate is no worse than 2√2 times that of the fully supervised case.

Finally, we numerically illustrate the above theoretical findings through experiments.
2 PU classification as cost-sensitive classification

In this section, we show that the problem of PU classification can be cast as cost-sensitive classification.

Ordinary classification: The Bayes optimal classifier corresponds to the decision function f(X) ∈ {1, −1} that minimizes the expected misclassification rate w.r.t. a class prior of π:

R(f) := πR1(f) + (1 − π)R−1(f),

where R−1(f) and R1(f) denote the expected false positive rate and expected false negative rate:

R−1(f) = P−1(f(X) ≠ −1)  and  R1(f) = P1(f(X) ≠ 1),

and P1 and P−1 denote the marginal probabilities of positive and negative samples. In the empirical risk minimization framework, the above risk is replaced with its empirical version obtained from fully labeled data, leading to practical classifiers [9].

Cost-sensitive classification: A cost-sensitive classifier selects a function f(X) ∈ {1, −1} in order to minimize the weighted expected misclassification rate:

R(f) := πc1R1(f) + (1 − π)c−1R−1(f),    (1)

where c1 and c−1 are the per-class costs [5]. Since scaling does not matter in (1), it is often useful to interpret the per-class costs as reweighting the problem according to new class priors proportional to πc1 and (1 − π)c−1.

PU classification: In PU classification, a classifier is learned using labeled data drawn from the positive class P1 and unlabeled data that is a mixture of positive and negative samples with unknown class prior π:

PX = πP1 + (1 − π)P−1.

Since negative samples are not available, let us train a classifier to minimize the expected misclassification rate between positive and unlabeled samples. Since we do not have negative samples in the PU classification setup, we cannot directly estimate R−1(f), and thus we rewrite the risk R(f) so as not to include R−1(f). More specifically, let RX(f) be the probability that the function f(X) gives the positive label over PX [10]:

RX(f) = PX(f(X) = 1) = πP1(f(X) = 1) + (1 − π)P−1(f(X) = 1) = π(1 − R1(f)) + (1 − π)R−1(f).
(2)

Then the risk R(f) can be written as

R(f) = πR1(f) + (1 − π)R−1(f)
     = πR1(f) − π(1 − R1(f)) + RX(f)
     = 2πR1(f) + RX(f) − π.    (3)

Let η be the proportion of samples from P1 compared to PX, which is empirically estimated by n/(n + n′), where n and n′ denote the numbers of positive and unlabeled samples, respectively. The risk R(f) can then be expressed as

R(f) = c1 η R1(f) + cX (1 − η) RX(f) − π,  where  c1 = 2π/η  and  cX = 1/(1 − η).

Comparing this expression with (1), we can confirm that the PU classification problem is solved by cost-sensitive classification between positive and unlabeled data with costs c1 and cX. Some implementations of support vector machines, such as libsvm [6], allow for assigning weights to classes. In practice, the unknown class prior π may be estimated by the methods proposed in [10, 1, 11]. In the following sections, we analyze this algorithm.

3 Necessity of non-convex loss functions in PU classification

In this section, we show that solving the PU classification problem with a convex loss function may lead to a biased solution, and that the use of a non-convex loss function is essential to avoid this problem.

Loss functions in ordinary classification: We first consider ordinary classification problems where samples from both classes are available. Instead of a binary decision function f(X) ∈ {−1, 1}, a continuous decision function g(X) ∈ R such that sign(g(X)) = f(X) is learned. The loss function then becomes

J0-1(g) = πE1[ℓ0-1(g(X))] + (1 − π)E−1[ℓ0-1(−g(X))],

where Ey is the expectation over Py and ℓ0-1(z) is the zero-one loss:

ℓ0-1(z) = 0 if z > 0;  1 if z ≤ 0.

Since the zero-one loss is hard to optimize in practice due to its discontinuous nature, it may be replaced with a ramp loss (as illustrated in Figure 1):

ℓR(z) = (1/2) max(0, min(2, 1 − z)),

giving an objective function of

JR(g) = πE1[ℓR(g(X))] + (1 − π)E−1[ℓR(−g(X))].
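The rewriting (3) can be sanity-checked numerically: for any sample set and any classifier f, the ordinary risk πR1(f) + (1 − π)R−1(f) must coincide with 2πR1(f) + RX(f) − π, where RX(f) is measured on the unlabeled mixture and needs no negative labels. A small sketch (the random data and the threshold classifier are illustrative only; the identity holds exactly when the empirical class prior is used):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
y = np.where(rng.random(n) < 0.3, 1, -1)   # labels with class prior ~0.3
x = rng.normal(loc=-2.0 * y)               # positives near -2, negatives near +2
f = lambda x: np.where(x < 0, 1, -1)       # an arbitrary threshold classifier

pi = np.mean(y == 1)                       # (empirical) class prior
R1 = np.mean(f(x[y == 1]) != 1)            # false negative rate
Rm1 = np.mean(f(x[y == -1]) != -1)         # false positive rate
RX = np.mean(f(x) == 1)                    # P_X(f(X) = 1) on the mixture

lhs = pi * R1 + (1 - pi) * Rm1             # ordinary risk R(f)
rhs = 2 * pi * R1 + RX - pi                # PU form (3): R_{-1}(f) not needed
assert abs(lhs - rhs) < 1e-12
```

This is exactly why PU classification can be solved without negative samples: the right-hand side only involves quantities computable from positive and unlabeled data.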
(4)

To avoid the non-convexity of the ramp loss, the hinge loss is often preferred in practice:

ℓH(z) = (1/2) max(1 − z, 0),

giving an objective of

JH(g) = πE1[ℓH(g(X))] + (1 − π)E−1[ℓH(−g(X))].    (5)

One practical motivation to use the convex hinge loss instead of the non-convex ramp loss is that separability (i.e., min_g JR(g) = 0) implies ℓR(z) = 0 everywhere, and for all values of z for which ℓR(z) = 0, we have ℓH(z) = 0. Therefore, the convex hinge loss will give the same decision boundary as the non-convex ramp loss in the ordinary classification setup, under the assumption that the positive and negative samples are non-overlapping.

Figure 1: ℓR(z) denotes the ramp loss and ℓH(z) the hinge loss. (a) Loss functions. (b) Resulting penalties: ℓR(z) + ℓR(−z) is constant, but ℓH(z) + ℓH(−z) is not and therefore causes a superfluous penalty.

Ramp loss function in PU classification: An important question is whether the same interpretation holds for PU classification: can the PU classification problem be solved using the convex hinge loss? As we show below, the answer to this question is unfortunately "no". In PU classification, the risk is given by (3), and its ramp-loss version is given by

JPU-R(g) = 2πR1(f) + RX(f) − π    (6)
         = 2πE1[ℓR(g(X))] + [πE1[ℓR(−g(X))] + (1 − π)E−1[ℓR(−g(X))]] − π    (7)
         = πE1[ℓR(g(X))] + πE1[ℓR(g(X)) + ℓR(−g(X))] + (1 − π)E−1[ℓR(−g(X))] − π,    (8)

where (6) comes from (3) and (7) is due to the substitution of (2). Since the ramp loss is symmetric in the sense that ℓR(−z) + ℓR(z) = 1, (8) yields

JPU-R(g) = πE1[ℓR(g(X))] + (1 − π)E−1[ℓR(−g(X))].    (9)

(9) is essentially the same as (4), meaning that learning with the ramp loss in the PU classification setting will give the same classification boundary as in the ordinary classification setting. For non-convex optimization with the ramp loss, see [12, 13].
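The symmetry ℓR(z) + ℓR(−z) = 1, which makes the middle term in (8) collapse to the constant π, is easy to verify directly, and the hinge loss visibly fails it; that failure is precisely the superfluous penalty discussed next:

```python
import numpy as np

# Ramp and hinge losses as defined in the text.
ramp  = lambda z: 0.5 * np.maximum(0.0, np.minimum(2.0, 1.0 - z))
hinge = lambda z: 0.5 * np.maximum(0.0, 1.0 - z)

z = np.linspace(-5, 5, 1001)
# Ramp loss: symmetric, so the penalty ramp(z) + ramp(-z) is identically 1.
assert np.allclose(ramp(z) + ramp(-z), 1.0)
# Hinge loss: the penalty grows with |z| for |z| > 1, e.g. at z = 5 it is 3.
assert not np.allclose(hinge(z) + hinge(-z), 1.0)
```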
Hinge loss function in PU classification: On the other hand, using the hinge loss to minimize (3) for PU learning gives

JPU-H(g) = 2πE1[ℓH(g(X))] + [πE1[ℓH(−g(X))] + (1 − π)E−1[ℓH(−g(X))]] − π    (10)
         = πE1[ℓH(g(X))] + (1 − π)E−1[ℓH(−g(X))]   (ordinary error term, cf. (5))
           + πE1[ℓH(g(X)) + ℓH(−g(X))]   (superfluous penalty)
           − π.

We see that the hinge loss has a term that corresponds to (5), but it also has a superfluous penalty term (see also Figure 1). This penalty term may cause an incorrect classification boundary to be selected. Indeed, even if g(X) perfectly separates the data, it may not minimize JPU-H(g) due to the superfluous penalty. To obtain the correct decision boundary, the loss function should be symmetric (and therefore non-convex). Alternatively, since the superfluous penalty term can be evaluated, it can be subtracted from the objective function. Note that, for the problem of label noise, an identical symmetry condition has been obtained [14].

Illustration: We illustrate the failure of the hinge loss on a toy PU classification problem with class-conditional densities

p(x|y = 1) = N(−3, 1²)  and  p(x|y = −1) = N(3, 1²),

where N(µ, σ²) is a normal distribution with mean µ and variance σ². The hinge-loss objective function for PU classification, JPU-H(g), is minimized with a model of g(x) = wx + b (the expectations in the objective function are computed via numerical integration). The optimal decision threshold and the threshold for the hinge loss are plotted in Figure 2(b) for a range of class priors. Note that the threshold for the ramp loss corresponds to the optimal threshold. From this figure, we note that the hinge-loss threshold differs from the optimal threshold. The difference is especially severe for larger class priors, due to the fact that the superfluous penalty is weighted by the class prior. When the class prior is large enough, the large hinge-loss threshold causes all samples to be positively labeled. In such a case, the false negative rate is R1 = 0 but the false positive rate is R−1 = 1. Therefore, the overall misclassification rate for the hinge loss will be 1 − π.

Figure 2: Illustration of the failure of the hinge loss for PU classification. (a) Class-conditional densities of the problem. (b) Optimal threshold and threshold using the hinge loss: the two differ significantly, causing the difference in misclassification rates shown in (c). The threshold for the ramp loss agrees with the optimal threshold.

4 Effect of inaccurate class-prior estimation

To solve the PU classification problem by the cost-sensitive learning described in Section 2, the true class prior π is needed. However, since it is often unknown in practice, it needs to be estimated, e.g., by the methods proposed in [10, 1, 11]. Since many of the estimation methods are biased [1, 11], it is important to understand the influence of inaccurate class-prior estimation on the classification performance. In this section, we elucidate how the error in the estimated class prior π̂ affects the classification accuracy in the PU classification setting.

Risk with true class prior in ordinary classification: In ordinary classification scenarios with positive and negative samples, the risk of a classifier f on a dataset with class prior π is given as follows ([8, pp. 26–29] and [7]):

R(f, π) = πR1(f) + (1 − π)R−1(f).
The risk of the optimal classifier according to the class prior π is therefore

R*(π) = min_{f∈F} R(f, π).

Note that R*(π) is concave, since it is the minimum of a set of functions that are linear w.r.t. π. This is illustrated in Figure 3(a).

Excess risk with class prior estimation in ordinary classification: Suppose we have a classifier f̂ that minimizes the risk for an estimated class prior π̂:

f̂ := arg min_{f∈F} R(f, π̂).

The risk when applying the classifier f̂ to a dataset with true class prior π then lies on the line tangent to the concave function R*(π) at π = π̂, as illustrated in Figure 3(a):

R̂(π) = πR1(f̂) + (1 − π)R−1(f̂).

The function f̂ is suboptimal at π, and results in the excess risk [8]:

Eπ = R̂(π) − R*(π).

Figure 3: Learning in the PU framework with an estimated class prior π̂ is equivalent to selecting a classifier which minimizes the risk according to an effective class prior π̃. (a) Selecting a classifier to minimize (11) and applying it to a dataset with class prior π leads to an excess risk of Eπ; the difference between the effective class prior π̃ and the true class prior π causes this excess risk. (b) The effective class prior π̃ vs. the estimated class prior π̂ for different true class priors π.

Excess risk with class prior estimation in PU classification: We wish to select a classifier that minimizes the risk in (3). In practice, however, we only know an estimated class prior π̂. Therefore, a classifier is selected to minimize

R(f) = 2π̂R1(f) + RX(f) − π̂.    (11)

Expanding the above risk based on (2) gives

R(f) = 2π̂R1(f) + π(1 − R1(f)) + (1 − π)R−1(f) − π̂
     = (2π̂ − π)R1(f) + (1 − π)R−1(f) + π − π̂.
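The expansion of (11) can likewise be checked numerically: minimizing the plug-in PU risk with an estimated prior π̂ is equivalent, up to the constant π − π̂, to a cost-sensitive risk with weights 2π̂ − π and 1 − π. A sketch (random data and an arbitrary classifier, illustrative only; the identity is exact when the empirical class prior is used for π):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
y = np.where(rng.random(n) < 0.6, 1, -1)
x = rng.normal(loc=-2.0 * y)
f = lambda x: np.where(x < 0.5, 1, -1)     # an arbitrary threshold classifier

pi = np.mean(y == 1)                       # (empirical) true class prior
pi_hat = 0.7                               # a deliberately erroneous estimate
R1 = np.mean(f(x[y == 1]) != 1)            # false negative rate
Rm1 = np.mean(f(x[y == -1]) != -1)         # false positive rate
RX = np.mean(f(x) == 1)                    # positive-label rate on the mixture

lhs = 2 * pi_hat * R1 + RX - pi_hat                          # plug-in PU risk (11)
rhs = (2 * pi_hat - pi) * R1 + (1 - pi) * Rm1 + pi - pi_hat  # expanded form
assert abs(lhs - rhs) < 1e-12
```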
Thus, the estimated class prior affects the risk through 2π̂ − π and 1 − π. This result immediately shows that PU classification cannot be performed when the estimated class prior is less than half of the true class prior: π̂ ≤ π/2. We define the effective class prior π̃ so that 2π̂ − π and 1 − π are normalized to sum to one:

π̃ = (2π̂ − π) / ((2π̂ − π) + (1 − π)) = (2π̂ − π) / (2π̂ − 2π + 1).

Figure 3(b) shows the profile of the effective class prior π̃ for different π. The graph shows that when the true class prior π is large, π̃ tends to be flat around π. When the true class prior is known to be large (such as the proportion of inliers in inlier-based outlier detection), a rough class-prior estimate is therefore sufficient for good classification performance. On the other hand, if the true class prior is small, PU classification tends to be hard and an accurate class-prior estimator is necessary. We also see that when the true class prior is large, overestimation of the class prior is attenuated more strongly. This may explain why some class-prior estimation methods [1, 11] still give good practical performance in spite of having a positive bias.

5 Generalization error bounds for PU classification

In this section, we analyze the generalization error for PU classification, where the training samples are clearly not identically distributed. More specifically, we derive error bounds for classification functions f(x) of the form

f(x) = Σ_{i=1}^{n} αi k(xi, x) + Σ_{j=1}^{n′} α′j k(x′j, x),

where x1, . . . , xn are positive training data and x′1, . . . , x′n′ are positive and negative test data. Let

A = {(α1, . . . , αn, α′1, . . . , α′n′) | x1, . . . , xn ∼ p(x | y = +1), x′1, . . . , x′n′ ∼ p(x)}

be the set of all possible optimal solutions returned by the algorithm given some training data and test data distributed according to p(x | y = +1) and p(x).
Then define the constants

Cα = sup_{α∈A, x1,...,xn ∼ p(x|y=+1), x′1,...,x′n′ ∼ p(x)} ( Σ_{i,i′=1}^{n} αi αi′ k(xi, xi′) + 2 Σ_{i=1}^{n} Σ_{j=1}^{n′} αi α′j k(xi, x′j) + Σ_{j,j′=1}^{n′} α′j α′j′ k(x′j, x′j′) )^{1/2},

Ck = sup_{x∈R^d} √k(x, x),

and define the function class

F = {f : x ↦ Σ_{i=1}^{n} αi k(xi, x) + Σ_{j=1}^{n′} α′j k(x′j, x) | α ∈ A, x1, . . . , xn ∼ p(x | y = +1), x′1, . . . , x′n′ ∼ p(x)}.    (12)

Let ℓη(z) be a surrogate for the zero-one loss:

ℓη(z) = 0 if z > η;  1 − z/η if 0 < z ≤ η;  1 if z ≤ 0.

For any η > 0, ℓη(z) is lower bounded by ℓ0-1(z) and approaches ℓ0-1(z) as η approaches zero. Moreover, let

ℓ̃(yf(x)) = (2/(y + 3)) ℓ0-1(yf(x))  and  ℓ̃η(yf(x)) = (2/(y + 3)) ℓη(yf(x)).

Then we have the following theorems (proofs are provided in Appendix A). Our key idea is to decompose the generalization error as

E_{p(x,y)}[ℓ0-1(yf(x))] = π* E_{p(x|y=+1)}[ℓ̃(f(x))] + E_{p(x,y)}[ℓ̃(yf(x))],

where π* := p(y = 1) is the true class prior of the positive class.

Theorem 1. Fix f ∈ F. Then, for any 0 < δ < 1, with probability at least 1 − δ over the repeated sampling of {x1, . . . , xn} and {(x′1, y′1), . . . , (x′n′, y′n′)} for evaluating the empirical error,¹

E_{p(x,y)}[ℓ0-1(yf(x))] − (1/n′) Σ_{j=1}^{n′} ℓ̃(y′j f(x′j)) ≤ (π*/n) Σ_{i=1}^{n} ℓ̃(f(xi)) + ( π*/(2√n) + 1/√n′ ) √(ln(2/δ)/2).    (13)

Theorem 2. Fix η > 0. Then, for any 0 < δ < 1, with probability at least 1 − δ over the repeated sampling of {x1, . . . , xn} and {(x′1, y′1), . . . , (x′n′, y′n′)} for evaluating the empirical error, every f ∈ F satisfies

E_{p(x,y)}[ℓ0-1(yf(x))] − (1/n′) Σ_{j=1}^{n′} ℓ̃η(y′j f(x′j)) ≤ (π*/n) Σ_{i=1}^{n} ℓ̃η(f(xi)) + ( π*/√n + 2/√n′ ) CαCk/η + ( π*/(2√n) + 1/√n′ ) √(ln(2/δ)/2).

In both theorems, the generalization error bounds are of order O(1/√n + 1/√n′). This order is optimal for PU classification, where we have n i.i.d. data from one distribution and n′ i.i.d. data from another distribution. The error bounds for fully supervised classification, assuming these n + n′ data are all i.i.d., would be of order O(1/√(n + n′)).
However, this assumption is unreasonable for PU classification, and we cannot train fully supervised classifiers using these n + n′ samples. Although the orders (and the losses) differ slightly, O(1/√n + 1/ √ n′) for PU classification is no worse than 2 √ 2 times O(1/ √ n + n′) for fully supervised classification (assuming n and n′ are equal). To the best of our knowledge, no previous work has provided such generalization error bounds for PU classification. 1The empirical error that we cannot evaluate in practice is in the left-hand side of (13), and the empirical error and confidence terms that we can evaluate in practice are in the right-hand side of (13). 7 Table 1: Misclassification rate (in percent) for PU classification on the USPS dataset. The best, and equivalent by 95% t-test, is indicated in bold. π 0.2 0.4 0.6 0.8 0.9 0.95 Ramp Hinge Ramp Hinge Ramp Hinge Ramp Hinge Ramp Hinge Ramp Hinge 0 vs 1 3.36 4.40 4.85 4.78 5.48 5.18 4.16 4.00 2.68 9.86 1.71 4.94 0 vs 2 5.15 6.20 6.96 8.67 7.22 8.79 5.90 14.60 4.12 9.92 2.80 4.94 0 vs 3 3.49 5.52 4.72 8.08 5.02 8.52 4.06 16.51 2.89 9.92 2.12 4.94 0 vs 4 1.68 2.83 2.05 4.00 2.21 3.99 2.00 3.03 1.70 9.92 1.42 4.94 0 vs 5 5.21 7.42 7.22 11.16 7.46 12.04 6.16 19.78 4.36 9.92 3.21 4.94 0 vs 6 11.47 11.61 19.87 19.59 22.58 22.94 15.13 19.83 8.86 9.92 5.29 4.94 0 vs 7 1.89 3.55 2.55 4.61 2.64 3.70 2.31 2.49 1.78 9.92 1.39 4.94 0 vs 8 3.98 5.09 4.81 7.00 4.75 6.85 3.74 11.34 2.79 9.92 2.11 4.94 0 vs 9 1.22 2.76 1.60 3.86 1.73 3.56 1.61 2.24 1.38 9.92 1.13 4.94 −2 −1 0 1 2 0 1 2 3 z Loss Positive loss Negative loss (a) Loss functions −6 −4 −2 0 2 4 6 −5 0 5 x1 x2 Positive Negative Hinge Ramp (b) Class prior is π = 0.2 −6 −4 −2 0 2 4 6 −5 0 5 x1 x2 (c) Class prior is π = 0.6 −6 −4 −2 0 2 4 6 −5 0 5 x1 x2 (d) Class prior is π = 0.9. . Figure 4: Examples of the classification boundary for the “0” vs. “7” digits, obtained by PU learning. The unlabeled dataset and the underlying (latent) class labels are given. 
Since the discriminant function for the hinge loss is the constant 1 when π = 0.9, no decision boundary can be drawn and all negative samples are misclassified.

6 Experiments

In this section, we experimentally compare the performance of the ramp loss and the hinge loss in PU classification (weighting was performed w.r.t. the true class prior, and the ramp loss was optimized with [12]). We used the USPS dataset, with the dimensionality reduced to 2 via principal component analysis to enable illustration. 550 samples were used for the positive and mixture datasets. From the results in Table 1, it is clear that the ramp loss gives a much higher classification accuracy than the hinge loss, especially for large class priors. This is because the effect of the superfluous penalty term in (10) becomes larger, since it scales with π. When the class prior is large, the misclassification rate for the hinge loss is often close to 1 − π. This can be explained by (10): collecting the terms for the positive expectation, we get an effective loss function for the positive samples (illustrated in Figure 4(a)). When π is large enough, this positive loss is minimized by the constant discriminant function 1. The misclassification rate becomes 1 − π, since it is a combination of the false negative rate and the false positive rate according to the class prior. Examples of the discrimination boundary for digits "0" vs. "7" are given in Figure 4. When the class prior is low (Figure 4(b) and Figure 4(c)), the misclassification rate of the hinge loss is slightly higher. For large class priors (Figure 4(d)), the hinge loss causes all samples to be classified as positive (inspection showed that w = 0 and b = 1).

7 Conclusion

In this paper we discussed the problem of learning a classifier from positive and unlabeled data. We showed that PU learning can be solved using a cost-sensitive classifier if the class prior of the unlabeled dataset is known.
We showed, however, that a non-convex loss must be used in order to prevent a superfluous penalty term in the objective function. In practice, the class prior is unknown and must be estimated from data. We showed that the excess risk is actually controlled by an effective class prior which depends on both the estimated class prior and the true class prior. Finally, generalization error bounds for the problem were provided.

Acknowledgments

MCdP is supported by the JST CREST program, GN is supported by the 973 Program, No. 2014CB340505, and MS is supported by KAKENHI 23120004.

References

[1] C. Elkan and K. Noto. Learning classifiers from only positive and unlabeled data. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD2008), pages 213–220, 2008.
[2] W. Li, Q. Guo, and C. Elkan. A positive and unlabeled learning algorithm for one-class classification of remote-sensing data. IEEE Transactions on Geoscience and Remote Sensing, 49(2):717–725, 2011.
[3] S. Hido, Y. Tsuboi, H. Kashima, M. Sugiyama, and T. Kanamori. Inlier-based outlier detection via direct density ratio estimation. In F. Giannotti, D. Gunopulos, F. Turini, C. Zaniolo, N. Ramakrishnan, and X. Wu, editors, Proceedings of the IEEE International Conference on Data Mining (ICDM2008), pages 223–232, Pisa, Italy, Dec. 15–19, 2008.
[4] C. Scott and G. Blanchard. Novelty detection: Unlabeled data definitely help. In Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS2009), pages 464–471, Clearwater Beach, Florida, USA, Apr. 16–18, 2009.
[5] C. Elkan. The foundations of cost-sensitive learning. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence (IJCAI2001), pages 973–978, 2001.
[6] C.C. Chang and C.J. Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.
[7] H.L. Van Trees.
Detection, Estimation, and Modulation Theory, Part I. Detection, Estimation, and Modulation Theory. John Wiley and Sons, New York, NY, USA, 1968.
[8] R. Duda, P. Hart, and D. Stork. Pattern Classification. John Wiley & Sons, 2nd edition, 2001.
[9] V. Vapnik. The Nature of Statistical Learning Theory. Springer, 2000.
[10] G. Blanchard, G. Lee, and C. Scott. Semi-supervised novelty detection. The Journal of Machine Learning Research, 11:2973–3009, 2010.
[11] M. C. du Plessis and M. Sugiyama. Class prior estimation from positive and unlabeled data. IEICE Transactions on Information and Systems, E97-D:1358–1362, 2014.
[12] R. Collobert, F.H. Sinz, J. Weston, and L. Bottou. Trading convexity for scalability. In Proceedings of the 23rd International Conference on Machine Learning (ICML2006), pages 201–208, 2006.
[13] S. Suzumura, K. Ogawa, M. Sugiyama, and I. Takeuchi. Outlier path: A homotopy algorithm for robust SVM. In Proceedings of the 31st International Conference on Machine Learning (ICML2014), pages 1098–1106, Beijing, China, Jun. 21–26, 2014.
[14] A. Ghosh, N. Manwani, and P. S. Sastry. Making risk minimization tolerant to label noise. CoRR, abs/1403.3610, 2014.
[15] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
Learning with Fredholm Kernels Qichao Que Mikhail Belkin Yusu Wang Department of Computer Science and Engineering The Ohio State University Columbus, OH 43210 {que,mbelkin,yusu}@cse.ohio-state.edu Abstract In this paper we propose a framework for supervised and semi-supervised learning based on reformulating the learning problem as a regularized Fredholm integral equation. Our approach fits naturally into the kernel framework and can be interpreted as constructing new data-dependent kernels, which we call Fredholm kernels. We proceed to discuss the “noise assumption” for semi-supervised learning and provide both theoretical and experimental evidence that Fredholm kernels can effectively utilize unlabeled data under the noise assumption. We demonstrate that methods based on Fredholm learning show very competitive performance in the standard semi-supervised learning setting. 1 Introduction Kernel methods and methods based on integral operators have become one of the central areas of machine learning and learning theory. These methods combine rich mathematical foundations with strong empirical performance. In this paper we propose a framework for supervised and unsupervised learning as an inverse problem based on solving the integral equation known as the Fredholm problem of the first kind. We develop regularization based algorithms for solving these systems leading to what we call Fredholm kernels. In the basic setting of supervised learning we are given the data set (xi, yi), where xi ∈X, yi ∈R. We would like to construct a function f : X →R, such that f(xi) ≈yi and f is “nice enough” to generalize to new data points. This is typically done by choosing f from a class of functions (a Reproducing Kernel Hilbert Space (RKHS) corresponding to a positive definite kernel for the kernel methods) and optimizing a certain loss function, such as the square loss or hinge loss. 
In this paper we formulate a new framework for learning based on interpreting the learning problem as a Fredholm integral equation. This formulation shares some similarities with the usual kernel learning framework but, unlike the standard methods, also allows for easy incorporation of unlabeled data. We also show how to interpret the resulting algorithm as a standard kernel method with a non-standard data-dependent kernel (somewhat resembling the approach taken in [13]). We discuss reasons why incorporation of unlabeled data may be desirable, concentrating in particular on what may be termed "the noise assumption" for semi-supervised learning, which is related to but distinct from the manifold and cluster assumptions popular in the semi-supervised learning literature. We provide both theoretical and empirical results showing that the Fredholm formulation allows for efficient denoising of classifiers. To summarize, the main contributions of the paper are as follows: (1) We formulate a new framework based on solving a regularized Fredholm equation. The framework naturally combines labeled and unlabeled data. We show how this framework can be expressed as a kernel method with a non-standard data-dependent kernel. (2) We discuss "the noise assumption" in semi-supervised learning and provide some theoretical evidence that Fredholm kernels are able to improve the performance of classifiers under this assumption. More specifically, we analyze the behavior of several versions of Fredholm kernels, based on combining linear and Gaussian kernels. We demonstrate that for some models of the noise assumption, the Fredholm kernel provides better estimators than the traditional data-independent kernel, and thus unlabeled data provably improves inference. (3) We show that Fredholm kernels perform well on synthetic examples designed to illustrate the noise assumption as well as on a number of real-world datasets. Related work.
Kernel and integral methods in machine learning have a large and diverse literature (e.g., [12, 11]). The work most directly related to our approach is [10], where Fredholm integral equations were introduced to address the problem of density ratio estimation and covariate shift. In that work the problem of density ratio estimation was expressed as a Fredholm integral equation and solved using regularization in an RKHS. This setting also relates to a line of work on kernel mean embedding, where data points are embedded in Reproducing Kernel Hilbert Spaces using integral operators, with applications to density ratio estimation and other tasks [5, 6, 7]. A very interesting recent work [9] explores a shrinkage estimator for estimating means in an RKHS, following the James-Stein estimator originally used for estimating the mean in a Euclidean space. The results obtained in [9] show how such estimators can reduce variance. There is some similarity between that work and our theoretical results presented in Section 4, which also show variance reduction for certain estimators of the kernel, although in a different setting. Another line of related work is the class of semi-supervised learning techniques (see [15, 2] for a comprehensive overview) related to manifold regularization [1], where an additional graph Laplacian regularizer is added to take advantage of the geometric/manifold structure of the data. Our reformulation of Fredholm learning as a kernel, addressing what we call "noise assumptions", parallels the data-dependent kernels for manifold regularization proposed in [13].

2 Fredholm Kernels

We start by formulating the learning framework proposed in this paper. Suppose we are given l labeled pairs (x1, y1), . . . , (xl, yl) from the data distribution p(x, y) defined on X × Y and u unlabeled points xl+1, . . . , xl+u from the marginal distribution pX(x) on X.
For simplicity we will assume that the feature space $X$ is a Euclidean space $\mathbb{R}^D$, and the label set $Y$ is either $\{-1, 1\}$ for binary classification or the real line $\mathbb{R}$ for regression. Semi-supervised learning algorithms aim to construct a (predictor) function $f : X \to Y$ by incorporating the information of the unlabeled data distribution. To this end, we introduce the integral operator $\mathcal{K}_{p_X}$ associated with a kernel function $k(x, z)$. In our setting $k(x, z)$ does not have to be a positive semi-definite (or even symmetric) kernel. $\mathcal{K}_{p_X} : L_2 \to L_2$ and

$$\mathcal{K}_{p_X} f(x) = \int k(x, z) f(z)\, p_X(z)\, dz, \quad (1)$$

where $L_2$ is the space of square-integrable functions. By the law of large numbers, the above operator can be approximated using unlabeled data from $p_X$ as

$$\hat{\mathcal{K}}_{p_X} f(x) = \frac{1}{l+u} \sum_{i=1}^{l+u} k(x, x_i) f(x_i).$$

This approximation provides a natural way of incorporating unlabeled data into algorithms. In our Fredholm learning framework, we will use functions in $\mathcal{K}_{p_X}\mathcal{H} = \{\mathcal{K}_{p_X} f : f \in \mathcal{H}\}$, where $\mathcal{H}$ is an appropriate Reproducing Kernel Hilbert Space (RKHS), as classification or regression functions. Note that unlike an RKHS, this space of functions, $\mathcal{K}_{p_X}\mathcal{H}$, is density dependent. In particular, this now allows us to formulate the following optimization problem for semi-supervised classification/regression in a way similar to many supervised learning algorithms. The Fredholm learning framework solves the following optimization problem¹:

$$f^* = \arg\min_{f \in \mathcal{H}} \frac{1}{l} \sum_{i=1}^{l} \big((\hat{\mathcal{K}}_{p_X} f)(x_i) - y_i\big)^2 + \lambda \|f\|^2_{\mathcal{H}}, \quad (2)$$

¹We will be using the square loss to simplify the exposition. Other loss functions can also be used in Eqn 2.
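The Monte Carlo approximation of the integral operator is easy to check on a case where the integral has a closed form; the kernel, density, and evaluation point below are assumptions chosen purely for illustration (for $k(x,z) = e^{-(x-z)^2/2}$, $f \equiv 1$, and $p_X = N(0,1)$, a Gaussian convolution gives $\mathcal{K}_{p_X} f(x) = \sqrt{1/2}\, e^{-x^2/4}$):

```python
# Empirical approximation of the integral operator:
# (K_hat f)(x) = (1/m) * sum_i k(x, x_i) f(x_i), with x_i ~ p_X.
import math
import random

random.seed(0)
m = 100_000
samples = [random.gauss(0.0, 1.0) for _ in range(m)]  # draws from p_X = N(0, 1)

def k(x, z):
    """Gaussian kernel with width t = 1 (illustrative choice)."""
    return math.exp(-((x - z) ** 2) / 2.0)

def K_hat(f, x):
    """Law-of-large-numbers estimate of (K_{p_X} f)(x)."""
    return sum(k(x, xi) * f(xi) for xi in samples) / m

x = 0.7
approx = K_hat(lambda z: 1.0, x)                   # empirical operator on f = 1
exact = math.sqrt(0.5) * math.exp(-x * x / 4.0)    # closed-form value
assert abs(approx - exact) < 0.01
```

With 100k unlabeled draws the Monte Carlo error is well below the asserted tolerance, which is the mechanism the framework relies on when plugging unlabeled data into Eqn 2.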
Even though at first glance this setting looks similar to conventional kernel methods, the extra layer introduced by $\hat{\mathcal{K}}_{p_X}$ makes a significant difference, in particular by allowing the integration of information from the unlabeled data distribution. In contrast, solutions to standard kernel methods for most kernels, e.g., linear, polynomial or Gaussian kernels, are completely independent of the unlabeled data. We note that our approach is closely related to [10], where a Fredholm equation is used to estimate the density ratio of two probability distributions. The Fredholm learning framework is a generalization of the standard kernel framework. In fact, if the kernel $k$ is the $\delta$-function, then our formulation above is equivalent to the Regularized Kernel Least Squares equation $f^* = \arg\min_{f \in \mathcal{H}} \frac{1}{l}\sum_{i=1}^{l} (f(x_i) - y_i)^2 + \lambda \|f\|^2_{\mathcal{H}}$. We could also replace the $L_2$ loss in Eqn 2 by other loss functions, such as the hinge loss, resulting in an SVM-like classifier. Finally, even though Eqn 2 is an optimization problem in a potentially infinite-dimensional function space $\mathcal{H}$, a standard derivation using the Representer Theorem (see the full version for details) yields a computationally accessible solution as follows:

$$f^*(x) = \frac{1}{l+u} \sum_{j=1}^{l+u} k_{\mathcal{H}}(x, x_j)\, v_j, \qquad v = \big(K_{l+u}^T K_{l+u} K_{\mathcal{H}} + \lambda I\big)^{-1} K_{l+u}^T y, \quad (3)$$

where $(K_{l+u})_{ij} = k(x_i, x_j)$ for $1 \le i \le l$, $1 \le j \le l+u$, and $(K_{\mathcal{H}})_{ij} = k_{\mathcal{H}}(x_i, x_j)$ for $1 \le i, j \le l+u$. Note that $K_{l+u}$ is an $l \times (l+u)$ matrix.

Fredholm kernels: a convenient reformulation. In fact we will see that the Fredholm learning problem induces a new data-dependent kernel, which we will refer to as the Fredholm kernel². To show this connection, we use the following identity, which can be easily verified:

$$\big(K_{l+u}^T K_{l+u} K_{\mathcal{H}} + \lambda I\big)^{-1} K_{l+u}^T = K_{l+u}^T \big(K_{l+u} K_{\mathcal{H}} K_{l+u}^T + \lambda I\big)^{-1}.$$

Define $K_F = K_{l+u} K_{\mathcal{H}} K_{l+u}^T$ to be the $l \times l$ kernel matrix associated with a new kernel defined by

$$\hat{k}_F(x, z) = \frac{1}{(l+u)^2} \sum_{i,j=1}^{l+u} k(x, x_i)\, k_{\mathcal{H}}(x_i, x_j)\, k(z, x_j), \quad (4)$$

and we consider the unlabeled data as fixed for computing this new kernel.
Using this new kernel $\hat{k}_F$, the final classifying function from Eqn 3 can be rewritten as

$$c^*(x) = \frac{1}{l+u} \sum_{i=1}^{l+u} k(x, x_i) f^*(x_i) = \sum_{s=1}^{l} \hat{k}_F(x, x_s)\, \alpha_s, \qquad \alpha = (K_F + \lambda I)^{-1} y.$$

Because of Eqn 4 we will sometimes refer to the kernels $k_{\mathcal{H}}$ and $k$ as the "inner" and "outer" kernels respectively. It can be observed that this solution is equivalent to a standard kernel method, but using a new data-dependent kernel $\hat{k}_F$, which we will call the Fredholm kernel, since it is induced from the Fredholm problem formulated in Eqn 2.

Proposition 1. The Fredholm kernel defined in Eqn 4 is positive semi-definite as long as $K_{\mathcal{H}}$ is positive semi-definite, for any set of data $x_1, \dots, x_{l+u}$.

The proof is given in the full version. The "outer" kernel $k$ does not have to be either positive definite or even symmetric. When using a Gaussian kernel for $k$, the discrete approximation in Eqn 4 might be unstable when the kernel width is small, so we also introduce the normalized Fredholm kernel,

$$\hat{k}^N_F(x, z) = \sum_{i,j=1}^{l+u} \frac{k(x, x_i)}{\sum_n k(x, x_n)}\, k_{\mathcal{H}}(x_i, x_j)\, \frac{k(z, x_j)}{\sum_n k(z, x_n)}. \quad (5)$$

It is easy to check that the resulting Fredholm kernel $\hat{k}^N_F$ is still symmetric positive semi-definite. Even though the Fredholm kernel was derived using the $L_2$ loss here, it can also be derived when the hinge loss is used, as will be explained in the full version.

²We note that the term Fredholm kernel has been used in mathematics ([8], page 103) and also in a different learning context [14]. Our usage represents a different object.

3 The Noise Assumption and Semi-supervised Learning

In order for unlabeled data to be useful in classification tasks, it is necessary for the marginal distribution of the unlabeled data to contain information about the conditional distribution of the labels. Several ways in which such information can be encoded have been proposed, including the "cluster assumption" [3] and the "manifold assumption" [1].
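The Fredholm-kernel reformulation (Eqn 4 with the induced classifier) can be sketched directly in NumPy. Everything below, the toy two-class data, the shared kernel width, and the regularization value, is an assumed illustration rather than the paper's experimental setup:

```python
# Sketch of the Fredholm kernel (Eqn 4) and the induced classifier,
# using a Gaussian kernel for both the outer kernel k and inner kernel k_H.
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, t):
    """Gram matrix exp(-||a - b||^2 / (2t)) for rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * t))

# Toy data: l labeled and u unlabeled points in R^2.
l, u, t = 10, 100, 0.5
X_lab = rng.normal(size=(l, 2)) + np.array([[2.0, 0.0]]) * (np.arange(l) % 2)[:, None]
y = np.where(np.arange(l) % 2 == 0, -1.0, 1.0)
X_all = np.vstack([X_lab, rng.normal(size=(u, 2))])

K_out = gauss_kernel(X_lab, X_all, t)           # l x (l+u), outer kernel k
K_H = gauss_kernel(X_all, X_all, t)             # (l+u) x (l+u), inner kernel k_H
K_F = K_out @ K_H @ K_out.T / (l + u) ** 2      # l x l Fredholm kernel matrix (Eqn 4)

lam = 1e-3
alpha = np.linalg.solve(K_F + lam * np.eye(l), y)   # alpha = (K_F + lam I)^{-1} y

def classify(x):
    """Evaluate c*(x) = sum_s k_F(x, x_s) alpha_s at a new point x."""
    kx = gauss_kernel(x[None, :], X_all, t)           # 1 x (l+u)
    k_F_x = kx @ K_H @ K_out.T / (l + u) ** 2         # hat{k}_F(x, x_s), s = 1..l
    return (k_F_x @ alpha).item()

# K_F should be symmetric positive semi-definite (Proposition 1).
assert np.allclose(K_F, K_F.T)
assert np.linalg.eigvalsh(K_F).min() > -1e-10
```

Note that the unlabeled points enter only through `K_out` and `K_H`, which is exactly the density dependence that distinguishes the Fredholm kernel from a data-independent kernel.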
The cluster assumption states that a cluster (or a high-density area) contains only (or mostly) points belonging to the same class. That is, if x1 and x2 belong to the same cluster, the corresponding labels y1, y2 should be the same. The manifold assumption assumes that the regression function is smooth with respect to the underlying manifold structure of the data, which can be interpreted as saying that the geodesic distance should be used instead of the ambient distance for optimal classification. The success of algorithms based on these ideas indicates that these assumptions do capture certain characteristics of real data. Still, a better understanding of unlabeled data may yet lead to further progress in data analysis.

Figure 1: Left: only labeled points. Right: with unlabeled points.

The noise assumption. We propose a new assumption, the "noise assumption": in the neighborhood of every point, the directions with low variance (for the unlabeled data) are uninformative with respect to the class labels and can be regarded as noise. While intuitive, as far as we know it has not been explicitly formulated in the context of semi-supervised learning algorithms, nor applied to theoretical analysis. Note that even if the noise variance is small along a single direction, it could still significantly decrease the performance of a supervised learning algorithm if the noise is high-dimensional. These accumulated non-informative variations in particular increase the difficulty of learning a good classifier when the amount of labeled data is small. The first figure on the right illustrates the issue of noise with two labeled points. The seemingly optimal classification boundary (the red line) differs from the correct one (in black) due to the noisy variation along the y axis for the two labeled points.
Intuitively, the unlabeled data shown in the right panel of Figure 1 can be helpful in this setting, since low-variance directions can be estimated locally so that algorithms can suppress the influence of the noisy variation when learning a classifier.

Connection to cluster and manifold assumptions. The noise assumption is compatible with the manifold assumption within the manifold+noise model. Specifically, we can assume that the functions of interest vary along the manifold and are constant in the orthogonal direction. Alternatively, we can think of directions with high variance as "signal/manifold" and directions with low variance as "noise". We note that the noise assumption does not require the data to conform to a low-dimensional manifold in the strict mathematical sense of the word. The noise assumption is orthogonal to the cluster assumption. For example, Figure 1 illustrates a situation where the data has no clusters but the noise assumption applies.

4 Theoretical Results for Fredholm Kernels

Non-informative variation in the data can degrade traditional supervised learning algorithms. We will now show that Fredholm kernels can be used in place of traditional kernels to inject them with "noise-suppression" power with the help of unlabeled data. In this section we present two views illustrating how such noise suppression is achieved. Specifically, in Section 4.1 we show that, in a certain setting, the linear Fredholm kernel suppresses principal components with small variance. In Section 4.2 we prove that under certain conditions we can obtain good approximations to the "true" kernel on the hidden underlying space. To make our arguments clear, we assume that there is an infinite amount of unlabeled data; that is, we know the marginal distribution of the data exactly.
We will then consider the following continuous versions of the un-normalized and normalized Fredholm kernels from Eqns 4 and 5:

$$k^U_F(x, z) = \int\!\!\int k(x, u)\, k_{\mathcal{H}}(u, v)\, k(z, v)\, p(u)\, p(v)\, du\, dv \quad (6)$$

$$k^N_F(x, z) = \int\!\!\int \frac{k(x, u)}{\int k(x, w)\, p(w)\, dw}\, k_{\mathcal{H}}(u, v)\, \frac{k(z, v)}{\int k(z, w)\, p(w)\, dw}\, p(u)\, p(v)\, du\, dv. \quad (7)$$

Note that in the above equations, and in what follows, we sometimes write $p$ instead of $p_X$ for the marginal distribution when the choice is clear from context. We will typically use $k_F$ to denote the appropriately normalized or unnormalized kernel depending on the context.

4.1 Linear Fredholm kernels and inner products

For this section, we consider the un-normalized Fredholm kernel, that is, $k_F = k^U_F$. If the "outer" kernel $k(u, v)$ is linear, i.e. $k(u, v) = \langle u, v \rangle$, the resulting Fredholm kernel can be viewed as an inner product. Specifically, the un-normalized Fredholm kernel from Eqn 6 can be rewritten as $k_F(x, z) = x^T \Sigma_F z$, where

$$\Sigma_F = \int\!\!\int u\, k_{\mathcal{H}}(u, v)\, v^T p(u)\, p(v)\, du\, dv.$$

Thus $k_F(x, z)$ is simply an inner product which depends on both the unlabeled data distribution $p(x)$ and the "inner" kernel $k_{\mathcal{H}}$. This inner product re-weights the standard norm in feature space based on the variances along the principal directions of the matrix $\Sigma_F$. We show that, for the model where the unlabeled data is sampled from a normal distribution, this kernel can be viewed as a "soft-thresholding" PCA, suppressing the directions with low variance. Specifically, we have the following³

Theorem 2. Let $k_{\mathcal{H}}(x, z) = \exp\left(-\frac{\|x - z\|^2}{2t}\right)$ and assume the distribution $p_X$ for the unlabeled data is a single multivariate normal distribution, $N(\mu, \mathrm{diag}(\sigma_1^2, \dots, \sigma_D^2))$. We have

$$\Sigma_F = \prod_{d=1}^{D} \sqrt{\frac{t}{2\sigma_d^2 + t}}\, \left( \mu\mu^T + \mathrm{diag}\left( \frac{\sigma_1^4}{2\sigma_1^2 + t}, \dots, \frac{\sigma_D^4}{2\sigma_D^2 + t} \right) \right).$$

Assuming that the data is mean-subtracted, i.e. $\mu = 0$, we see that $x^T \Sigma_F z$ re-scales the projections along the principal components when computing the inner product; that is, up to the common constant factor $\prod_{d=1}^{D} \sqrt{t/(2\sigma_d^2 + t)}$, the rescaling factor for the $i$-th principal direction is $\frac{\sigma_i^4}{2\sigma_i^2 + t}$.
Note that this rescaling factor $\frac{\sigma_i^4}{2\sigma_i^2 + t} \approx 0$ when $\sigma_i^2 \ll t$. On the other hand, when $\sigma_i^2 \gg t$, we have $\frac{\sigma_i^4}{2\sigma_i^2 + t} \approx \frac{\sigma_i^2}{2}$. Hence $t$ can be considered a soft threshold that eliminates the effects of principal components with small variances. When $t$ is small, the rescaling factors are approximately proportional to $\mathrm{diag}(\sigma_1^2, \sigma_2^2, \dots, \sigma_D^2)$, in which case $\Sigma_F$ is proportional to the covariance matrix of the data $XX^T$.

4.2 Kernel Approximation with Noise

We have seen that one special case of the Fredholm kernel achieves the effect of re-scaling principal components, by using a linear kernel as the "outer" kernel $k$. In this section we give a more general interpretation of noise suppression by the Fredholm kernel. First, we give a simple scenario to provide some intuition behind the definition of the Fredholm kernel. Consider a standard supervised learning setting which uses the solution $f^* = \arg\min_{f \in \mathcal{H}} \frac{1}{l}\sum_{i=1}^{l} (f(x_i) - y_i)^2 + \lambda \|f\|^2_{\mathcal{H}}$ as the classifier. Let $k^{target}_{\mathcal{H}}$ denote the ideal kernel that we intend to use on the clean data, which we call the target kernel from now on. Now suppose what we have are two noisy labeled points $x^e$ and $z^e$ for "true" data $\bar{x}$ and $\bar{z}$, i.e. $x^e = \bar{x} + \varepsilon_x$, $z^e = \bar{z} + \varepsilon_z$. The evaluation of $k^{target}_{\mathcal{H}}(x^e, z^e)$ can be quite different from the true signal $k^{target}_{\mathcal{H}}(\bar{x}, \bar{z})$, leading to a suboptimal final classifier (the red line in Figure 1(a)). On the other hand, consider the Fredholm kernel from Eqn 6 (or similarly from Eqn 7), $k_F(x^e, z^e) = \int\!\!\int k(x^e, u)\, p(u)\, k_{\mathcal{H}}(u, v)\, k(z^e, v)\, p(v)\, du\, dv$, with the outer kernel $k$ set to the Gaussian kernel and the inner kernel $k_{\mathcal{H}}$ the same as the target kernel $k^{target}_{\mathcal{H}}$. We can think of $k_F(x^e, z^e)$ as an averaging of $k_{\mathcal{H}}(u, v)$ over all possible pairs of data $u, v$, weighted by $k(x^e, u) p(u)$ and $k(z^e, v) p(v)$ respectively. Specifically, points that are close to $x^e$ (resp. $z^e$) with high density will receive larger weights.

³The proof of this and other results can be found in the full version.
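The soft-threshold reading of Theorem 2 can be checked from the closed form alone; the variances and the width below are arbitrary illustrative values:

```python
# Soft-thresholding behavior of the rescaling factor from Theorem 2:
# rho(var) = var^2 / (2*var + t).  Directions with variance far below t
# are suppressed (~ var^2 / t); for variance far above t, rho ~ var / 2.
def rho(var, t):
    return var ** 2 / (2 * var + t)

t = 1.0
low, high = 0.01, 100.0            # a low-variance and a high-variance direction

assert rho(low, t) < 1e-3                                 # suppressed
assert abs(rho(high, t) - high / 2) / (high / 2) < 0.01   # approx. var / 2
```

In other words, varying $t$ sweeps a threshold between discarding a direction entirely and keeping roughly half of its variance, which is the "soft PCA" behavior described above.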
Hence the weighted averages will be biased towards $\bar{x}$ and $\bar{z}$ respectively (which presumably lie in high-density regions around $x^e$ and $z^e$). The value of $k_F(x^e, z^e)$ thus tends to provide a more accurate estimate of $k_{\mathcal{H}}(\bar{x}, \bar{z})$. See the right figure for an illustration, where the arrows indicate points with stronger influence on the computation of $k_F(x^e, z^e)$ than on $k_{\mathcal{H}}(x^e, z^e)$. As a result, the classifier obtained using the Fredholm kernel will also be more resilient to noise and closer to the optimum. The Fredholm learning framework is rather flexible in terms of the choices of the kernels $k$ and $k_{\mathcal{H}}$. In the remainder of this section, we consider a few specific scenarios and provide quantitative analysis showing the noise robustness of the Fredholm kernel.

Problem setup. Assume that we have a ground-truth distribution over the subspace spanned by the first $d$ dimensions of the Euclidean space $\mathbb{R}^D$. We will assume that this distribution is a single Gaussian $N(0, \lambda^2 I_d)$. Suppose this distribution is corrupted with Gaussian noise along the orthogonal subspace of dimension $D - d$. That is, for any "true" point $\bar{x}$ drawn from $N(0, \lambda^2 I_d)$, its observation $x^e$ is drawn from $N(\bar{x}, \sigma^2 I_{D-d})$. Since the noise lies in a space orthogonal to the data distribution, this means that any observed point, labeled or unlabeled, is sampled from $p_X = N(0, \mathrm{diag}(\lambda^2 I_d, \sigma^2 I_{D-d}))$. We will show that the Fredholm kernel, given unlabeled data, provides a better approximation to the "original" kernel than simply computing the kernel on the noisy points. We choose this basic setting to be able to state the theoretical results in a clean manner. Even though this is a Gaussian distribution over a linear subspace with noise, the framework has more general implications, since local neighborhoods of manifolds are (almost) linear spaces.

Note: In this section we use the normalized Fredholm kernel given in Eqn 7, that is, $k_F = k^N_F$ from now on. The un-normalized Fredholm kernel displays similar behavior, but the bounds are trickier.
Linear Kernel. First we consider the case where the target kernel $k^{target}_{\mathcal{H}}(u, v)$ is the linear kernel, $k^{target}_{\mathcal{H}}(u, v) = u^T v$. We set $k_{\mathcal{H}}$ in the Fredholm kernel to also be linear, and $k$ to be the Gaussian kernel $k(u, v) = e^{-\frac{\|u - v\|^2}{2t}}$. We will compare $k_F(x^e, z^e)$ with the target kernel on the two observed points, that is, with $k^{target}_{\mathcal{H}}(x^e, z^e)$. The goal is to estimate $k^{target}_{\mathcal{H}}(\bar{x}, \bar{z})$. We will see that (1) both $k_F(x^e, z^e)$ and (appropriately scaled) $k_{\mathcal{H}}(x^e, z^e)$ are unbiased estimators of $k^{target}_{\mathcal{H}}(\bar{x}, \bar{z})$, but (2) the variance of $k_F(x^e, z^e)$ is smaller than that of $k^{target}_{\mathcal{H}}(x^e, z^e)$, making it a more precise estimator.

Theorem 3. Suppose the probability distribution for the unlabeled data is $p_X = N(0, \mathrm{diag}(\lambda^2 I_d, \sigma^2 I_{D-d}))$. For the Fredholm kernel defined in Eqn 7, we have

$$\mathbb{E}_{x^e, z^e}\big(k^{target}_{\mathcal{H}}(x^e, z^e)\big) = \mathbb{E}_{x^e, z^e}\left( \left(\frac{t + \lambda^2}{\lambda^2}\right)^2 k_F(x^e, z^e) \right) = \bar{x}^T \bar{z}.$$

Moreover, when $\lambda > \sigma$, $\mathrm{Var}_{x^e, z^e}\left( \left(\frac{t + \lambda^2}{\lambda^2}\right)^2 k_F(x^e, z^e) \right) < \mathrm{Var}_{x^e, z^e}\big(k^{target}_{\mathcal{H}}(x^e, z^e)\big)$.

Remark: Note that we use a normalization constant for the Fredholm kernel to make it an unbiased estimator of $\bar{x}^T \bar{z}$. In practice, choosing the normalization is subsumed in selecting the regularization parameter for kernel methods. Thus we see that the Fredholm kernel provides an approximation of the "true" linear kernel, but with smaller variance than the actual linear kernel on the noisy data.

Gaussian Kernel. We now consider the case where the target kernel is the Gaussian kernel $k^{target}_{\mathcal{H}}(u, v) = \exp\left(-\frac{\|u - v\|^2}{2r}\right)$. To approximate this kernel, we set both $k$ and $k_{\mathcal{H}}$ to be Gaussian kernels. To simplify the presentation of the results, we assume that $k$ and $k_{\mathcal{H}}$ have the same kernel width $t$. The resulting Fredholm kernel turns out to also be a Gaussian kernel, whose kernel width depends on the choice of $t$. Our main result is the following.
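One concrete way to see the denoising effect in this linear case: plugging $k_{\mathcal{H}}(u, v) = u^T v$ into Eqn 7 factorizes the normalized Fredholm kernel as $k^N_F(x, z) = m(x)^T m(z)$, where $m(x) = \mathbb{E}_p[k(x, u)\, u] / \mathbb{E}_p[k(x, u)]$ is a kernel-smoothed surrogate for $x$. (This factorization is our restatement of Eqn 7, not spelled out in the text.) A Monte Carlo sketch, with toy variances chosen only for illustration:

```python
# With k_H(u, v) = u^T v, the normalized Fredholm kernel equals
# m(x)^T m(z), where m(x) = E[k(x, U) U] / E[k(x, U)] is a smoothed
# version of x.  The smoothing shrinks the low-variance (noise)
# coordinate much more strongly than the signal coordinate.
import numpy as np

rng = np.random.default_rng(1)
lam2, sig2, t = 1.0, 0.01, 1.0             # signal var, noise var, kernel width
U = rng.normal(size=(200_000, 2)) * np.sqrt([lam2, sig2])   # samples from p_X

def m(x):
    """Kernel-weighted mean E[k(x, U) U] / E[k(x, U)], estimated by Monte Carlo."""
    w = np.exp(-((U - x) ** 2).sum(1) / (2 * t))   # Gaussian weights k(x, u)
    return (w[:, None] * U).sum(0) / w.sum()

xe = np.array([1.0, 0.3])    # observed point: signal coord 1.0, noise offset 0.3
mx = m(xe)

# Noise coordinate is strongly shrunk; signal coordinate much less so.
assert abs(mx[1]) < 0.1 * abs(xe[1])
assert abs(mx[0]) > 0.3
```

For a Gaussian density this smoothing acts per coordinate like the shrinkage $x_i \mapsto x_i\, s_i^2/(s_i^2 + t)$, so with $\lambda > \sigma$ the noise coordinate is nearly eliminated while the signal survives, which is exactly the variance reduction Theorem 3 formalizes.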
Again, similarly to the case of the linear kernel, the Fredholm estimate $k_F(x^e, z^e)$ and $k^{target}_{\mathcal{H}}(x^e, z^e)$ are both unbiased estimators of the target $k^{target}_{\mathcal{H}}(\bar{x}, \bar{z})$ up to a constant, but $k_F(x^e, z^e)$ has a smaller variance.

Theorem 4. Suppose the probability distribution for the unlabeled data is $p_X = N(0, \mathrm{diag}(\lambda^2 I_d, \sigma^2 I_{D-d}))$. Given the target kernel $k^{target}_{\mathcal{H}}(u, v) = \exp\left(-\frac{\|u - v\|^2}{2r}\right)$ with kernel width $r > 0$, we can choose $t$, given by the equation $\frac{t(t + \lambda^2)(t + 3\lambda^2)}{\lambda^4} = r$, and two scaling constants $c_1, c_2$, such that

$$\mathbb{E}_{x^e, z^e}\big(c_1^{-1} k^{target}_{\mathcal{H}}(x^e, z^e)\big) = \mathbb{E}_{x^e, z^e}\big(c_2^{-1} k_F(x^e, z^e)\big) = k^{target}_{\mathcal{H}}(\bar{x}, \bar{z}),$$

and when $\lambda > \sigma$, we have $\mathrm{Var}_{x^e, z^e}\big(c_1^{-1} k^{target}_{\mathcal{H}}(x^e, z^e)\big) > \mathrm{Var}_{x^e, z^e}\big(c_2^{-1} k_F(x^e, z^e)\big)$.

Remark. In practice, when applying kernel methods to real-world applications, the optimal kernel width $r$ is usually unknown and is chosen by cross-validation or other methods. Similarly, for our Fredholm kernel, one can use cross-validation to choose the optimal $t$ for $k_F$.

5 Experiments

Using the linear and Gaussian kernels for $k$ and $k_{\mathcal{H}}$ respectively, we define three instances of the Fredholm kernel as follows.

[Figure 2 appears here.]

Figure 2: Noise but not cluster assumption. Gaussian noise in $\mathbb{R}^{100}$ is added. Linear (above) and non-linear (below) class boundaries.

(1) FredLin1: $k(x, z) = x^T z$ and $k_{\mathcal{H}}(x, z) = \exp\left(-\frac{\|x - z\|^2}{2r}\right)$.
(2) FredLin2: $k(x, z) = \exp\left(-\frac{\|x - z\|^2}{2r}\right)$ and $k_{\mathcal{H}}(x, z) = x^T z$.
(3) FredGauss: $k(x, z) = k_{\mathcal{H}}(x, z) = \exp\left(-\frac{\|x - z\|^2}{2r}\right)$.

For the kernels in (2) and (3), which use the Gaussian kernel as the outer kernel $k$, we can also define normalized versions, which we denote by FredLin2(N) and FredGauss(N) respectively.

Synthetic examples: noise and cluster assumptions. To isolate the ability of Fredholm kernels to deal with noise from the cluster assumption, we construct two synthetic examples that violate the cluster assumption, shown in Figure 2.
The figures show the first two dimensions; multivariate Gaussian noise with variance σ² = 0.01 in R¹⁰⁰ is added. The classification boundaries are indicated by color. For each class, we provide several labeled points and a large amount of unlabeled data. Note that the classification boundary in the "circle" example is non-linear. We compare the Fredholm-kernel-based classifier with RLSC (Regularized Least Squares Classifier) and two widely used semi-supervised methods, the transductive support vector machine (TSVM) and LapRLSC. Since the examples violate the cluster assumption, the two existing semi-supervised learning algorithms, TSVM and LapRLSC, should not gain much from the unlabeled data. For TSVM, we use the primal TSVM proposed in [4], and we use the implementation of LapRLSC given in [1]. Different numbers of labeled points are given for each class, together with another 2000 unlabeled points. To choose the optimal parameters for each method, we pick the parameters based on their performance on the validation set, while the final classification error is computed on the held-out testing set. Results are reported in Tables 1 and 2, in which Fredholm kernels show a clear improvement over the other methods on the synthetic examples in terms of classification error.

Real-world Data Sets. Unlike with the artificial examples, it is usually difficult to verify whether certain assumptions are satisfied in real-world problems. In this section, we examine the performance of Fredholm kernels on several real-world data sets and compare it with the baseline algorithms mentioned above.

Linear Kernels. Here we consider text categorization and sentiment analysis, where linear methods are known to perform well. We use the following data (represented by TF-IDF features): (1) 20 news group: it has 11269 documents with 20 classes, and we select the first 10 categories for our experiment. (2) Webkb: the original data set contains 7746 documents with 7 unbalanced classes, and we pick the two largest classes, with 1511 and 1079 instances respectively. (3) IMDB movie review: it has 1000 positive reviews and 1000 negative reviews of movies on IMDB.com. (4) Twitter sentiment data from SemEval 2013: it contains 5173 tweets with positive, neutral and negative sentiment. We combine the neutral and negative classes to set up a binary classification problem. Results are reported in Table 3. In Table 4, we use Webkb as an example to illustrate how performance changes as the number of labeled points increases.
(2) Webkb: the original data set contains 7746 documents with 7 unbalanced classes, and we pick the two largest classes with 1511 and 1079 instances respectively. (3) IMDB movie review: it has 1000 positive reviews and 1000 negative reviews of movie on IMDB.com. (4) Twitter sentiment data from Sem-Eval 2013: it contains 5173 tweets, with positive, neural and negative sentiment. We combine neutral and negative classes to set up a binary classification problem. Results are reported in Table 3. In Table4, we use WebKB as an example to illustrate the change of the performance as number of labeled points increases. 7 Number of Labeled Methods(Linear) RLSC TSVM LapRLSC FredLin1 FredLin2(N) 8 10.0(± 3.9) 5.2(± 2.2) 10.0(± 3.5) 3.7(± 2.6) 4.5(± 2.1) 16 9.1(± 1.9) 5.1(± 1.1) 9.1(± 2.2) 2.9(± 2.0) 3.6(± 1.9) 32 5.8(± 3.2) 4.5(± 0.8) 6.0(± 3.2) 2.3(± 2.3) 2.6(± 2.2) Table 1: Prediction error of different classifiers for the“two lines” example. Number of Labeled Methods(Gaussian) K-RLSC TSVM LapRLSC FredGauss(N) 16 17.4(± 5.0) 32.2(± 5.2) 17.0(± 4.6) 7.1(± 2.4) 32 16.5(± 7.1) 29.9(± 9.3) 18.0(± 6.8) 6.0(± 1.6) 64 8.7(± 1.7) 20.3(± 4.2) 9.7(± 2.0) 5.5(± 0.7) Table 2: Prediction error of different classifiers for the “circle” example. Gaussian Kernel. We test our methods on hand-written digit recognition. The experiment use subsets of two handwriting digits data sets MNIST and USPS: (1) the one from MNIST contains 10k digits in total with balanced examples for each class, and the one for USPS is the original testing set containing about 2k images. The pixel values are normalized to [0, 1] as features. Results are reported in Table 5. In Table 6, we show that as we add additional Gaussian noise to MNIST data, Fredholm kernels start to show significant improvement. 
Table 3: The error of various methods on the text data sets. 20 labeled points per class are given, with the rest of the data set as unlabeled points. Optimal parameters for each method are used.
Data Set | RLSC | TSVM | FredLin1 | FredLin2 | FredLin2(N)
Webkb   | 16.9 (±1.4) | 12.7 (±0.8) | 13.0 (±1.3) | 12.0 (±1.6) | 12.0 (±1.6)
20news  | 22.2 (±1.0) | 21.0 (±0.9) | 20.5 (±0.7) | 20.5 (±0.7) | 20.5 (±0.7)
IMDB    | 30.0 (±2.0) | 20.2 (±2.6) | 19.9 (±2.3) | 21.7 (±2.9) | 21.7 (±2.7)
Twitter | 38.7 (±1.1) | 37.6 (±1.4) | 37.4 (±1.2) | 37.4 (±1.2) | 37.5 (±1.2)

Table 4: Prediction error on Webkb with different numbers of labeled points.
Number of Labeled | RLSC | TSVM | FredLin1 | FredLin2 | FredLin2(N)
10 | 20.7 (±2.4) | 13.5 (±0.5) | 14.8 (±2.4) | 14.6 (±2.4) | 14.6 (±2.3)
20 | 16.9 (±1.4) | 12.7 (±0.8) | 13.0 (±1.3) | 12.0 (±1.6) | 12.0 (±1.6)
80 | 10.9 (±1.4) |  9.7 (±1.0) |  8.1 (±1.0) |  7.9 (±0.9) |  7.9 (±0.9)

Table 5: Prediction error of nonlinear classifiers on MNIST and USPS. 20 labeled points per class are given, with the rest of the data set as unlabeled points. Optimal parameters for each method are used.
Data Set | K-RLSC | LapRLSC | FredGauss | FredGauss(N)
USPST | 11.8 (±1.4) | 10.2 (±0.5) | 12.4 (±1.8) | 10.8 (±1.1)
MNIST | 14.3 (±1.2) |  8.6 (±1.2) | 12.2 (±1.0) | 13.0 (±0.9)

Table 6: Prediction error of nonlinear classifiers on MNIST corrupted with Gaussian noise of standard deviation 0.3, with different numbers of labeled points, from 10 to 80. Optimal parameters for each method are used.
Number of Labeled | K-RLSC | LapRLSC | FredGauss | FredGauss(N)
10 | 34.1 (±2.1) | 35.6 (±3.5) | 27.9 (±1.6) | 29.0 (±1.5)
20 | 27.2 (±1.1) | 27.3 (±1.8) | 21.9 (±1.2) | 22.9 (±1.2)
40 | 20.0 (±0.7) | 20.3 (±0.8) | 17.3 (±0.5) | 18.4 (±0.4)
80 | 15.6 (±0.4) | 15.6 (±0.5) | 14.8 (±0.6) | 15.4 (±0.5)

Acknowledgments. The work was partially supported by NSF Grants CCF-1319406 and RI 1117707. We thank the anonymous NIPS reviewers for insightful comments.

References
[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani.
Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[3] Olivier Chapelle, Jason Weston, and Bernhard Schölkopf. Cluster kernels for semi-supervised learning. In Advances in Neural Information Processing Systems 17, pages 585–592, 2003.
[4] Olivier Chapelle and Alexander Zien. Semi-supervised classification by low density separation. In Robert G. Cowell and Zoubin Ghahramani, editors, AISTATS, pages 57–64, 2005.
[5] Arthur Gretton, Alex Smola, Jiayuan Huang, Marcel Schmittfull, Karsten Borgwardt, and Bernhard Schölkopf. Covariate shift by kernel mean matching. Dataset Shift in Machine Learning, pages 131–160, 2009.
[6] S. Grünewälder, G. Lever, L. Baldassarre, S. Patterson, A. Gretton, and M. Pontil. Conditional mean embeddings as regressors. In Proceedings of the 29th International Conference on Machine Learning, volume 2, pages 1823–1830, 2012.
[7] Steffen Grünewälder, Arthur Gretton, and John Shawe-Taylor. Smooth operators. In Proceedings of the 30th International Conference on Machine Learning, pages 1184–1192, 2013.
[8] Michiel Hazewinkel. Encyclopaedia of Mathematics, volume 4. Springer, 1989.
[9] Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, Arthur Gretton, and Bernhard Schölkopf. Kernel mean shrinkage estimators. arXiv preprint arXiv:1405.5505, 2014.
[10] Qichao Que and Mikhail Belkin. Inverse density as an inverse problem: the Fredholm equation approach. In Advances in Neural Information Processing Systems 26, pages 1484–1492, 2013.
[11] Bernhard Schölkopf and Alexander J Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2001.
[12] John Shawe-Taylor and Nello Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[13] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. Beyond the point cloud: from transductive to semi-supervised learning. In Proceedings of the 22nd International Conference on Machine Learning, pages 824–831, New York, NY, USA, 2005. ACM Press.
[14] SVN Vishwanathan, Alexander J Smola, and René Vidal. Binet-Cauchy kernels on dynamical systems and its application to the analysis of dynamic scenes. International Journal of Computer Vision, 73(1):95–119, 2007.
[15] Xiaojin Zhu. Semi-supervised learning literature survey. Technical report, Computer Science, University of Wisconsin-Madison, 2005.
How transferable are features in deep neural networks? Jason Yosinski,1 Jeff Clune,2 Yoshua Bengio,3 and Hod Lipson4 1 Dept. Computer Science, Cornell University 2 Dept. Computer Science, University of Wyoming 3 Dept. Computer Science & Operations Research, University of Montreal 4 Dept. Mechanical & Aerospace Engineering, Cornell University Abstract Many deep neural networks trained on natural images exhibit a curious phenomenon in common: on the first layer they learn features similar to Gabor filters and color blobs. Such first-layer features appear not to be specific to a particular dataset or task, but general in that they are applicable to many datasets and tasks. Features must eventually transition from general to specific by the last layer of the network, but this transition has not been studied extensively. In this paper we experimentally quantify the generality versus specificity of neurons in each layer of a deep convolutional neural network and report a few surprising results. Transferability is negatively affected by two distinct issues: (1) the specialization of higher layer neurons to their original task at the expense of performance on the target task, which was expected, and (2) optimization difficulties related to splitting networks between co-adapted neurons, which was not expected. In an example network trained on ImageNet, we demonstrate that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also document that the transferability of features decreases as the distance between the base task and target task increases, but that transferring features even from distant tasks can be better than using random features. A final surprising result is that initializing a network with transferred features from almost any number of layers can produce a boost to generalization that lingers even after fine-tuning to the target dataset. 
1 Introduction
Modern deep neural networks exhibit a curious phenomenon: when trained on images, they all tend to learn first-layer features that resemble either Gabor filters or color blobs. The appearance of these filters is so common that obtaining anything else on a natural image dataset causes suspicion of poorly chosen hyperparameters or a software bug. This phenomenon occurs not only for different datasets, but even with very different training objectives, including supervised image classification (Krizhevsky et al., 2012), unsupervised density learning (Lee et al., 2009), and unsupervised learning of sparse representations (Le et al., 2011). Because finding these standard features on the first layer seems to occur regardless of the exact cost function and natural image dataset, we call these first-layer features general. On the other hand, we know that the features computed by the last layer of a trained network must depend greatly on the chosen dataset and task. For example, in a network with an N-dimensional softmax output layer that has been successfully trained toward a supervised classification objective, each output unit will be specific to a particular class. We thus call the last-layer features specific. These are intuitive notions of general and specific for which we will provide more rigorous definitions below. If first-layer features are general and last-layer features are specific, then there must be a transition from general to specific somewhere in the network. This observation raises a few questions:
• Can we quantify the degree to which a particular layer is general or specific?
• Does the transition occur suddenly at a single layer, or is it spread out over several layers?
• Where does this transition take place: near the first, middle, or last layer of the network?
We are interested in the answers to these questions because, to the extent that features within a network are general, we will be able to use them for transfer learning (Caruana, 1995; Bengio et al., 2011; Bengio, 2011). In transfer learning, we first train a base network on a base dataset and task, and then we repurpose the learned features, or transfer them, to a second target network to be trained on a target dataset and task. This process will tend to work if the features are general, meaning suitable to both base and target tasks, instead of specific to the base task. When the target dataset is significantly smaller than the base dataset, transfer learning can be a powerful tool to enable training a large target network without overfitting. Recent studies have taken advantage of this fact to obtain state-of-the-art results when transferring from higher layers (Donahue et al., 2013a; Zeiler and Fergus, 2013; Sermanet et al., 2014), collectively suggesting that these layers of neural networks do indeed compute features that are fairly general. These results further emphasize the importance of studying the exact nature and extent of this generality. The usual transfer learning approach is to train a base network and then copy its first n layers to the first n layers of a target network. The remaining layers of the target network are then randomly initialized and trained toward the target task. One can choose to backpropagate the errors from the new task into the base (copied) features to fine-tune them to the new task, or the transferred feature layers can be left frozen, meaning that they do not change during training on the new task. The choice of whether or not to fine-tune the first n layers of the target network depends on the size of the target dataset and the number of parameters in the first n layers. If the target dataset is small and the number of parameters is large, fine-tuning may result in overfitting, so the features are often left frozen.
On the other hand, if the target dataset is large or the number of parameters is small, so that overfitting is not a problem, then the base features can be fine-tuned to the new task to improve performance. Of course, if the target dataset is very large, there would be little need to transfer because the lower level filters could just be learned from scratch on the target dataset. We compare results from each of these two techniques — fine-tuned features or frozen features — in the following sections. In this paper we make several contributions:
1. We define a way to quantify the degree to which a particular layer is general or specific, namely, how well features at that layer transfer from one task to another (Section 2). We then train pairs of convolutional neural networks on the ImageNet dataset and characterize the layer-by-layer transition from general to specific (Section 4), which yields the following four results.
2. We experimentally show two separate issues that cause performance degradation when using transferred features without fine-tuning: (i) the specificity of the features themselves, and (ii) optimization difficulties due to splitting the base network between co-adapted neurons on neighboring layers. We show how each of these two effects can dominate at different layers of the network. (Section 4.1)
3. We quantify how the performance benefit of transferring features decreases as the base task and target task become more dissimilar. (Section 4.2)
4. On the relatively large ImageNet dataset, we find lower performance than has been previously reported for smaller datasets (Jarrett et al., 2009) when using features computed from random lower-layer weights vs. trained weights. We compare random weights to transferred weights — both frozen and fine-tuned — and find the transferred weights perform better. (Section 4.3)
5.
Finally, we find that initializing a network with transferred features from almost any number of layers can produce a boost to generalization performance after fine-tuning to a new dataset. This is particularly surprising because the effect of having seen the first dataset persists even after extensive fine-tuning. (Section 4.1)
2 Generality vs. Specificity Measured as Transfer Performance
We have noted the curious tendency of Gabor filters and color blobs to show up in the first layer of neural networks trained on natural images. In this study, we define the degree of generality of a set of features learned on task A as the extent to which the features can be used for another task B. It is important to note that this definition depends on the similarity between A and B. We create pairs of classification tasks A and B by constructing pairs of non-overlapping subsets of the ImageNet dataset.1 These subsets can be chosen to be similar to or different from each other. To create tasks A and B, we randomly split the 1000 ImageNet classes into two groups each containing 500 classes and approximately half of the data, or about 645,000 examples each. We train one eight-layer convolutional network on A and another on B. These networks, which we call baseA and baseB, are shown in the top two rows of Figure 1. We then choose a layer n from {1, 2, . . . , 7} and train several new networks. In the following explanation and in Figure 1, we use layer n = 3 as the example layer chosen. First, we define and train the following two networks:
• A selffer network B3B: the first 3 layers are copied from baseB and frozen. The five higher layers (4–8) are initialized randomly and trained on dataset B. This network is a control for the next transfer network. (Figure 1, row 3)
• A transfer network A3B: the first 3 layers are copied from baseA and frozen. The five higher layers (4–8) are initialized randomly and trained toward dataset B.
Intuitively, here we copy the first 3 layers from a network trained on dataset A and then learn higher layer features on top of them to classify a new target dataset B. If A3B performs as well as baseB, there is evidence that the third-layer features are general, at least with respect to B. If performance suffers, there is evidence that the third-layer features are specific to A. (Figure 1, row 4) We repeated this process for all n in {1, 2, . . . , 7}2 and in both directions (i.e. AnB and BnA). In the above two networks, the transferred layers are frozen. We also create versions of the above two networks where the transferred layers are fine-tuned: • A selffer network B3B+: just like B3B, but where all layers learn. • A transfer network A3B+: just like A3B, but where all layers learn. To create base and target datasets that are similar to each other, we randomly assign half of the 1000 ImageNet classes to A and half to B. ImageNet contains clusters of similar classes, particularly dogs and cats, like these 13 classes from the biological family Felidae: {tabby cat, tiger cat, Persian cat, Siamese cat, Egyptian cat, mountain lion, lynx, leopard, snow leopard, jaguar, lion, tiger, cheetah}. On average, A and B will each contain approximately 6 or 7 of these felid classes, meaning that base networks trained on each dataset will have features at all levels that help classify some types of felids. When generalizing to the other dataset, we would expect that the new high-level felid detectors trained on top of old low-level felid detectors would work well. Thus A and B are similar when created by randomly assigning classes to each, and we expect that transferred features will perform better than when A and B are less similar. Fortunately, in ImageNet we are also provided with a hierarchy of parent classes. 
This information allowed us to create a special split of the dataset into two halves that are as semantically different from each other as possible: with dataset A containing only man-made entities and B containing natural entities. The split is not quite even, with 551 classes in the man-made group and 449 in the natural group. Further details of this split and the classes in each half are given in the supplementary material. In Section 4.2 we will show that features transfer more poorly (i.e. they are more specific) when the datasets are less similar.
1The ImageNet dataset, as released in the Large Scale Visual Recognition Challenge 2012 (ILSVRC2012) (Deng et al., 2009), contains 1,281,167 labeled training images and 50,000 test images, with each image labeled with one of 1000 classes.
2Note that n = 8 doesn’t make sense in either case: B8B is just baseB, and A8B would not work because it is never trained on B.
[Figure 1 diagram: rows of weight layers WA1–WA8 (baseA) and WB1–WB8 (baseB), and the derived B3B/B3B+ and A3B/A3B+ networks.]
Figure 1: Overview of the experimental treatments and controls. Top two rows: The base networks are trained using standard supervised backprop on only half of the ImageNet dataset (first row: A half, second row: B half). The labeled rectangles (e.g. WA1) represent the weight vector learned for that layer, with the color indicating which dataset the layer was originally trained on. The vertical, ellipsoidal bars between weight vectors represent the activations of the network at each layer. Third row: In the selffer network control, the first n weight layers of the network (in this example, n = 3) are copied from a base network (e.g. one trained on dataset B), the upper 8 − n layers are randomly initialized, and then the entire network is trained on that same dataset (in this example, dataset B).
The first n layers are either locked during training (“frozen” selffer treatment B3B) or allowed to learn (“fine-tuned” selffer treatment B3B+). This treatment reveals the occurrence of fragile co-adaptation, when neurons on neighboring layers co-adapt during training in such a way that it cannot be rediscovered when one layer is frozen. Fourth row: The transfer network experimental treatment is the same as the selffer treatment, except that the first n layers are copied from a network trained on one dataset (e.g. A) and then the entire network is trained on the other dataset (e.g. B). This treatment tests the extent to which the features on layer n are general or specific.
3 Experimental Setup
Since Krizhevsky et al. (2012) won the ImageNet 2012 competition, there has been much interest and work toward tweaking hyperparameters of large convolutional models. However, in this study we aim not to maximize absolute performance, but rather to study transfer results on a well-known architecture. We use the reference implementation provided by Caffe (Jia et al., 2014) so that our results will be comparable, extensible, and useful to a large number of researchers. Further details of the training setup (learning rates, etc.) are given in the supplementary material, and code and parameter files to reproduce these experiments are available at http://yosinski.com/transfer.
4 Results and Discussion
We performed three sets of experiments. The main experiment has random A/B splits and is discussed in Section 4.1. Section 4.2 presents an experiment with the man-made/natural split. Section 4.3 describes an experiment with random weights.
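The AnB/BnB construction described in Section 2 reduces to copying the first n weight layers from a base network and marking them as frozen or trainable. A toy NumPy sketch of that bookkeeping (the tiny fully-connected network and the name `transfer_first_n` are our own illustrations, not the paper's Caffe setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(layer_sizes):
    """Random weight matrices for a toy fully-connected network."""
    return [rng.normal(0, 0.1, (m, k)) for m, k in zip(layer_sizes[:-1], layer_sizes[1:])]

def transfer_first_n(base_net, target_sizes, n, frozen=True):
    """Copy the first n weight layers from base_net; reinitialize the rest.

    Returns (weights, trainable_mask): frozen=True corresponds to the AnB/BnB
    treatments, frozen=False to the fine-tuned AnB+/BnB+ treatments.
    """
    target = init_net(target_sizes)
    weights = [base_net[i].copy() if i < n else target[i] for i in range(len(target))]
    trainable = [(not frozen) if i < n else True for i in range(len(target))]
    return weights, trainable

sizes = [8, 16, 16, 4]  # toy stand-in for the paper's eight-layer model
baseA = init_net(sizes)
A2B, mask = transfer_first_n(baseA, sizes, n=2, frozen=True)
```

In a real training loop, the `trainable` mask would gate which layers receive gradient updates; fine-tuning simply lifts the mask on the copied layers.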
[Figure 2 plots: top-1 accuracy (higher is better) vs. layer n at which the network is chopped and retrained, for baseB, selffer BnB, selffer BnB+, transfer AnB, and transfer AnB+; annotated lines read “2: Performance drops due to fragile co-adaptation”, “3: Fine-tuning recovers co-adapted interactions”, “4: Performance drops due to representation specificity”, and “5: Transfer + fine-tuning improves generalization”.]
Figure 2: The results from this paper’s main experiment. Top: Each marker in the figure represents the average accuracy over the validation set for a trained network. The white circles above n = 0 represent the accuracy of baseB. There are eight points, because we tested on four separate random A/B splits. Each dark blue dot represents a BnB network. Light blue points represent BnB+ networks, or fine-tuned versions of BnB. Dark red diamonds are AnB networks, and light red diamonds are the fine-tuned AnB+ versions. Points are shifted slightly left or right for visual clarity. Bottom: Lines connecting the means of each treatment. Numbered descriptions above each line refer to which interpretation from Section 4.1 applies.
4.1 Similar Datasets: Random A/B splits
The results of all A/B transfer learning experiments on randomly split (i.e. similar) datasets are shown3 in Figure 2. The results yield many different conclusions. In each of the following interpretations, we compare the performance to the base case (white circles and dotted line in Figure 2).
3AnA networks and BnB networks are statistically equivalent, because in both cases a network is trained on 500 random classes. To simplify notation we label these BnB networks. Similarly, we have aggregated the statistically identical BnA and AnB networks and just call them AnB.
1. The white baseB circles show that a network trained to classify a random subset of 500 classes attains a top-1 accuracy of 0.625, or 37.5% error.
This error is lower than the 42.5% top-1 error attained on the 1000-class network. While error might have been higher because the network is trained on only half of the data, which could lead to more overfitting, the net result is that error is lower because there are only 500 classes, so there are only half as many ways to make mistakes. 2. The dark blue BnB points show a curious behavior. As expected, performance at layer one is the same as the baseB points. That is, if we learn eight layers of features, save the first layer of learned Gabor features and color blobs, reinitialize the whole network, and retrain it toward the same task, it does just as well. This result also holds true for layer 2. However, layers 3, 4, 5, and 6, particularly 4 and 5, exhibit worse performance. This performance drop is evidence that the original network contained fragile co-adapted features on successive layers, that is, features that interact with each other in a complex or fragile way such that this co-adaptation could not be relearned by the upper layers alone. Gradient descent was able to find a good solution the first time, but this was only possible because the layers were jointly trained. By layer 6 performance is nearly back to the base level, as is layer 7. As we get closer and closer to the final, 500-way softmax output layer 8, there is less to relearn, and apparently relearning these one or two layers is simple enough for gradient descent to find a good solution. Alternately, we may say that there is less co-adaptation of features between layers 6 & 7 and between 7 & 8 than between previous layers. To our knowledge it has not been previously observed in the literature that such optimization difficulties may be worse in the middle of a network than near the bottom or top. 3. The light blue BnB+ points show that when the copied, lower-layer features also learn on the target dataset (which here is the same as the base dataset), performance is similar to the base case. 
Such fine-tuning thus prevents the performance drop observed in the BnB networks. 4. The dark red AnB diamonds show the effect we set out to measure in the first place: the transferability of features from one network to another at each layer. Layers one and two transfer almost perfectly from A to B, giving evidence that, at least for these two tasks, not only are the first-layer Gabor and color blob features general, but the second layer features are general as well. Layer three shows a slight drop, and layers 4-7 show a more significant drop in performance. Thanks to the BnB points, we can tell that this drop is from a combination of two separate effects: the drop from lost co-adaptation and the drop from features that are less and less general. On layers 3, 4, and 5, the first effect dominates, whereas on layers 6 and 7 the first effect diminishes and the specificity of representation dominates the drop in performance. Although examples of successful feature transfer have been reported elsewhere in the literature (Girshick et al., 2013; Donahue et al., 2013b), to our knowledge these results have been limited to noticing that transfer from a given layer is much better than the alternative of training strictly on the target task, i.e. noticing that the AnB points at some layer are much better than training all layers from scratch. We believe this is the first time that (1) the extent to which transfer is successful has been carefully quantified layer by layer, and (2) that these two separate effects have been decoupled, showing that each effect dominates in part of the regime. 5. The light red AnB+ diamonds show a particularly surprising effect: that transferring features and then fine-tuning them results in networks that generalize better than those trained directly on the target dataset. 
Previously, the reason one might want to transfer learned features is to enable training without overfitting on small target datasets, but this new result suggests that transferring features will boost generalization performance even if the target dataset is large. Note that this effect should not be attributed to the longer total training time (450k base iterations + 450k fine-tuned iterations for AnB+ vs. 450k for baseB), because the BnB+ networks are also trained for the same longer length of time and do not exhibit this same performance improvement. Thus, a plausible explanation is that even after 450k iterations of fine-tuning (beginning with completely random top layers), the effects of having seen the base dataset still linger, boosting generalization performance. It is surprising that this effect lingers through so much retraining. This generalization improvement seems not to depend much on how much of the first network we keep to initialize the second network: keeping anywhere from one to seven layers produces improved performance, with slightly better performance as we keep more layers. The average boost across layers 1 to 7 is 1.6% over the base case, and the average if we keep at least five layers is 2.1%.4 The degree of performance boost is shown in Table 1.
4We aggregate performance over several layers because each point is computationally expensive to obtain (9.5 days on a GPU), so at the time of publication we have few data points per layer.
Table 1: Performance boost of AnB+ over controls, averaged over different ranges of layers.
layers aggregated | mean boost over baseB | mean boost over selffer BnB+
1-7 | 1.6% | 1.4%
3-7 | 1.8% | 1.4%
5-7 | 2.1% | 1.7%
4.2 Dissimilar Datasets: Splitting Man-made and Natural Classes Into Separate Datasets
As mentioned previously, the effectiveness of feature transfer is expected to decline as the base and target tasks become less similar.
We test this hypothesis by comparing transfer performance on similar datasets (the random A/B splits discussed above) to that on dissimilar datasets, created by assigning man-made object classes to A and natural object classes to B. This man-made/natural split creates datasets as dissimilar as possible within the ImageNet dataset. The upper-left subplot of Figure 3 shows the accuracy of a baseA and baseB network (white circles) and BnA and AnB networks (orange hexagons). Lines join common target tasks. The upper of the two lines contains those networks trained toward the target task containing natural categories (baseB and AnB). These networks perform better than those trained toward the man-made categories, which may be due to having only 449 classes instead of 551, or simply being an easier task, or both. 4.3 Random Weights We also compare to random, untrained weights because Jarrett et al. (2009) showed — quite strikingly — that the combination of random convolutional filters, rectification, pooling, and local normalization can work almost as well as learned features. They reported this result on relatively small networks of two or three learned layers and on the smaller Caltech-101 dataset (Fei-Fei et al., 2004). It is natural to ask whether or not the nearly optimal performance of random filters they report carries over to a deeper network trained on a larger dataset. The upper-right subplot of Figure 3 shows the accuracy obtained when using random filters for the first n layers for various choices of n. Performance falls off quickly in layers 1 and 2, and then drops to near-chance levels for layers 3+, which suggests that getting random weights to work in convolutional neural networks may not be as straightforward as it was for the smaller network size and smaller dataset used by Jarrett et al. (2009). However, the comparison is not straightforward. Whereas our networks have max pooling and local normalization on layers 1 and 2, just as Jarrett et al. 
(2009) did, we use a different nonlinearity (relu(x) instead of abs(tanh(x))), different layer sizes and number of layers, as well as other differences. Additionally, their experiment only considered two layers of random weights. The hyperparameter and architectural choices of our network collectively provide one new datapoint, but it may well be possible to tweak layer sizes and random initialization details to enable much better performance for random weights.5
The bottom subplot of Figure 3 shows the results of the experiments of the previous two sections after subtracting the performance of their individual base cases. These normalized performances are plotted across the number of layers n that are either random or were trained on a different, base dataset. This comparison makes two things apparent. First, the transferability gap when using frozen features grows more quickly as n increases for dissimilar tasks (hexagons) than similar tasks (diamonds), with a drop by the final layer for similar tasks of only 8% vs. 25% for dissimilar tasks. Second, transferring even from a distant task is better than using random filters. One possible reason this latter result may differ from Jarrett et al. (2009) is because their fully-trained (non-random) networks were overfitting more on the smaller Caltech-101 dataset than ours on the larger ImageNet dataset, making their random filters perform better by comparison. In the supplementary material, we provide an extra experiment indicating the extent to which our networks are overfit.
4(continued) The aggregation is informative, however, because the performance at each layer is based on different random draws of the upper layer initialization weights. Thus, the fact that layers 5, 6, and 7 result in almost identical performance across random draws suggests that multiple runs at a given layer would result in similar performance.
5For example, the training loss of the network with three random layers failed to converge, producing only chance-level validation performance. Much better convergence may be possible with different hyperparameters.
[Figure 3 plots: top-1 accuracy vs. layer n at which the network is chopped and retrained, for the man-made/natural split (top left) and random, untrained filters (top right), and relative top-1 accuracy (higher is better) for mean AnB on random splits, mean AnB on the m/n split, and random features (bottom).]
Figure 3: Performance degradation vs. layer. Top left: Degradation when transferring between dissimilar tasks (from man-made classes of ImageNet to natural classes or vice versa). The upper line connects networks trained to the “natural” target task, and the lower line connects those trained toward the “man-made” target task. Top right: Performance when the first n layers consist of random, untrained weights. Bottom: The top two plots compared to the random A/B split from Section 4.1 (red diamonds), all normalized by subtracting their base level performance.
5 Conclusions
We have demonstrated a method for quantifying the transferability of features from each layer of a neural network, which reveals their generality or specificity. We showed how transferability is negatively affected by two distinct issues: optimization difficulties related to splitting networks in the middle of fragilely co-adapted layers and the specialization of higher layer features to the original task at the expense of performance on the target task. We observed that either of these two issues may dominate, depending on whether features are transferred from the bottom, middle, or top of the network. We also quantified how the transferability gap grows as the distance between tasks increases, particularly when transferring higher layers, but found that even features transferred from distant tasks are better than random weights.
Finally, we found that initializing with transferred features can improve generalization performance even after substantial fine-tuning on a new task, which could be a generally useful technique for improving deep neural network performance.

Acknowledgments

The authors would like to thank Kyunghyun Cho and Thomas Fuchs for helpful discussions, Joost Huizinga, Anh Nguyen, and Roby Velez for editing, as well as funding from the NASA Space Technology Research Fellowship (JY), DARPA project W911NF-12-1-0449, NSERC, Ubisoft, and CIFAR (YB is a CIFAR Fellow).

References

Bengio, Y. (2011). Deep learning of representations for unsupervised and transfer learning. In JMLR W&CP: Proc. Unsupervised and Transfer Learning.
Bengio, Y., Bastien, F., Bergeron, A., Boulanger-Lewandowski, N., Breuel, T., Chherawala, Y., Cisse, M., Côté, M., Erhan, D., Eustache, J., Glorot, X., Muller, X., Pannetier Lebeuf, S., Pascanu, R., Rifai, S., Savard, F., and Sicard, G. (2011). Deep learners benefit more from out-of-distribution examples. In JMLR W&CP: Proc. AISTATS'2011.
Caruana, R. (1995). Learning many related tasks at the same time with backpropagation. Pages 657–664, Cambridge, MA. MIT Press.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. In CVPR09.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. (2013a). DeCAF: A deep convolutional activation feature for generic visual recognition. Technical report, arXiv preprint arXiv:1310.1531.
Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. (2013b). DeCAF: A deep convolutional activation feature for generic visual recognition. arXiv preprint arXiv:1310.1531.
Fei-Fei, L., Fergus, R., and Perona, P. (2004). Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories.
In Conference on Computer Vision and Pattern Recognition Workshop (CVPR 2004), page 178.
Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2013). Rich feature hierarchies for accurate object detection and semantic segmentation. arXiv preprint arXiv:1311.2524.
Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. (2009). What is the best multi-stage architecture for object recognition? In Proc. International Conference on Computer Vision (ICCV'09), pages 2146–2153. IEEE.
Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., and Darrell, T. (2014). Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093.
Krizhevsky, A., Sutskever, I., and Hinton, G. (2012). ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (NIPS'2012).
Le, Q. V., Karpenko, A., Ngiam, J., and Ng, A. Y. (2011). ICA with reconstruction cost for efficient overcomplete feature learning. In J. Shawe-Taylor, R. Zemel, P. Bartlett, F. Pereira, and K. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 1017–1025.
Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y. (2009). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. Montreal, Canada.
Sermanet, P., Eigen, D., Zhang, X., Mathieu, M., Fergus, R., and LeCun, Y. (2014). OverFeat: Integrated recognition, localization and detection using convolutional networks. In International Conference on Learning Representations (ICLR 2014). CBLS.
Zeiler, M. D. and Fergus, R. (2013). Visualizing and understanding convolutional networks. Technical report, arXiv 1311.2901.
2014
87
5,577
Concavity of reweighted Kikuchi approximation

Po-Ling Loh
Department of Statistics, The Wharton School, University of Pennsylvania
loh@wharton.upenn.edu

Andre Wibisono
Computer Science Division, University of California, Berkeley
wibisono@berkeley.edu

Abstract

We analyze a reweighted version of the Kikuchi approximation for estimating the log partition function of a product distribution defined over a region graph. We establish sufficient conditions for the concavity of our reweighted objective function in terms of weight assignments in the Kikuchi expansion, and show that a reweighted version of the sum product algorithm applied to the Kikuchi region graph will produce global optima of the Kikuchi approximation whenever the algorithm converges. When the region graph has two layers, corresponding to a Bethe approximation, we show that our sufficient conditions for concavity are also necessary. Finally, we provide an explicit characterization of the polytope of concavity in terms of the cycle structure of the region graph. We conclude with simulations that demonstrate the advantages of the reweighted Kikuchi approach.

1 Introduction

Undirected graphical models are a familiar framework in diverse application domains such as computer vision, statistical physics, coding theory, social science, and epidemiology. In certain settings of interest, one is provided with potential functions defined over nodes and (hyper)edges of the graph. A crucial step in probabilistic inference is to compute the log partition function of the distribution based on these potential functions for a given graph structure. However, computing the log partition function either exactly or approximately is NP-hard in general [2, 17]. An active area of research involves finding accurate approximations of the log partition function and characterizing the graph structures for which such approximations may be computed efficiently [29, 22, 7, 19, 25, 18].
When the underlying graph is a tree, the log partition function may be computed exactly via the sum product algorithm in time linear in the number of nodes [15]. However, when the graph contains cycles, a generalized version of the sum product algorithm known as loopy belief propagation may either fail to converge or terminate in local optima of a nonconvex objective function [26, 20, 8, 13].

In this paper, we analyze the Kikuchi approximation method, which is constructed from a variational representation of the log partition function by replacing the entropy with an expression that decomposes with respect to a region graph. Kikuchi approximations were previously introduced in the physics literature [9] and reformalized by Yedidia et al. [28, 29] and others [1, 14] in the language of graphical models. The Bethe approximation, which is a special case of the Kikuchi approximation when the region graph has only two layers, has been studied by various authors [3, 28, 5, 25]. In addition, a reweighted version of the Bethe approximation was proposed by Wainwright et al. [22, 16]. As described in Vontobel [21], computing the global optimum of the Bethe variational problem may in turn be used to approximate the permanent of a nonnegative square matrix.

The particular objective function that we study generalizes the Kikuchi objective appearing in previous literature by assigning arbitrary weights to individual terms in the Kikuchi entropy expansion. We establish necessary and sufficient conditions under which this class of objective functions is concave, so a global optimum may be found efficiently. Our theoretical results synthesize known results on Kikuchi and Bethe approximations, and our main theorem concerning concavity conditions for the reweighted Kikuchi entropy recovers existing results when specialized to the unweighted Kikuchi [14] or reweighted Bethe [22] case.
Furthermore, we provide a valuable converse result in the reweighted Bethe case, showing that when our concavity conditions are violated, the entropy function cannot be concave over the whole feasible region. As demonstrated by our experiments, a message-passing algorithm designed to optimize the Kikuchi objective may terminate in local optima for weights outside the concave region. Watanabe and Fukumizu [24, 25] provide a similar converse in the unweighted Bethe case, but our proof is much simpler and our result is more general. In the reweighted Bethe setting, we also present a useful characterization of the concave region of the Bethe entropy function in terms of the geometry of the graph. Specifically, we show that if the region graph consists of only singleton vertices and pairwise edges, then the region of concavity coincides with the convex hull of incidence vectors of single-cycle forest subgraphs of the original graph. When the region graph contains regions with cardinality greater than two, the latter region may be strictly contained in the former; however, our result provides a useful way to generate weight vectors within the region of concavity. Whereas Wainwright et al. [22] establish the concavity of the reweighted Bethe objective on the spanning forest polytope, that region is contained within the single-cycle forest polytope, and our simulations show that generating weight vectors in the latter polytope may yield closer approximations to the log partition function. The remainder of the paper is organized as follows: In Section 2, we review background information about the Kikuchi and Bethe approximations. In Section 3, we provide our main results on concavity conditions for the reweighted Kikuchi approximation, including a geometric characterization of the region of concavity in the Bethe case. Section 4 outlines the reweighted sum product algorithm and proves that fixed points correspond to global optima of the Kikuchi approximation. 
Section 5 presents experiments showing the improved accuracy of the reweighted Kikuchi approximation over the region of concavity. Technical proofs and additional simulations are contained in the Appendix.

2 Background and problem setup

In this section, we review basic concepts of the Kikuchi approximation and establish some terminology to be used in the paper. Let G = (V, R) denote a region graph defined over the vertex set V, where each region r ∈ R is a subset of V. Directed edges correspond to inclusion, so r → s is an edge of G if s ⊆ r. We use the following notation, for r ∈ R:

A(r) := {s ∈ R : r ⊊ s}  (ancestors of r)
F(r) := {s ∈ R : r ⊆ s}  (forebears of r)
N(r) := {s ∈ R : r ⊆ s or s ⊆ r}  (neighbors of r).

For R′ ⊆ R, we define A(R′) = ∪_{r∈R′} A(r), and we define F(R′) and N(R′) similarly. We consider joint distributions x = (x_s)_{s∈V} that factorize over the region graph; i.e.,

p(x) = (1/Z(α)) ∏_{r∈R} α_r(x_r),  (1)

for potential functions α_r > 0. Here, Z(α) is the normalization factor, or partition function, which is a function of the potential functions α_r, and each variable x_s takes values in a finite discrete set X. One special case of the factorization (1) is the pairwise Ising model, defined over a graph G = (V, E), where the distribution is given by

p_γ(x) = exp( ∑_{s∈V} γ_s(x_s) + ∑_{(s,t)∈E} γ_st(x_s, x_t) − A(γ) ),  (2)

and X = {−1, +1}. Our goal is to analyze the log partition function

log Z(α) = log { ∑_{x∈X^{|V|}} ∏_{r∈R} α_r(x_r) }.  (3)

2.1 Variational representation

It is known from the theory of graphical models [14] that the log partition function (3) may be written in the variational form

log Z(α) = sup_{{τ_r(x_r)}∈Δ_R} { ∑_{r∈R} ∑_{x_r} τ_r(x_r) log α_r(x_r) + H(p_τ) },  (4)

where p_τ is the maximum-entropy distribution with marginals {τ_r(x_r)} and H(p) := −∑_x p(x) log p(x) is the usual entropy. Here, Δ_R denotes the R-marginal polytope; i.e., {τ_r(x_r) : r ∈ R} ∈ Δ_R if and only if there exists a distribution τ(x) such that τ_r(x_r) = ∑_{x_{∖r}} τ(x_r, x_{∖r}) for all r.
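For tiny instances, the log partition function (3) can be computed exactly by brute-force enumeration, which is a useful reference point for the approximations developed below. The sketch assumes the pairwise Ising special case (2); the function and argument names are ours, not the paper's notation.

```python
import itertools
import math

def log_partition_ising(gamma_s, gamma_st):
    """Exact log Z for a binary pairwise Ising model, by enumerating all
    assignments x in {-1, +1}^|V|.  gamma_s maps node -> field and
    gamma_st maps (s, t) edge -> coupling.  Exponential in |V|, so this
    is only a sanity check for small graphs."""
    nodes = sorted(gamma_s)
    total = 0.0
    for x_vals in itertools.product([-1, 1], repeat=len(nodes)):
        x = dict(zip(nodes, x_vals))
        score = sum(gamma_s[s] * x[s] for s in nodes)
        score += sum(g * x[s] * x[t] for (s, t), g in gamma_st.items())
        total += math.exp(score)
    return math.log(total)
```

For example, two nodes with zero fields and coupling g have Z = 2e^g + 2e^{−g}, which the enumeration reproduces.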
For ease of notation, we also write τ ≡ {τ_r(x_r) : r ∈ R}. Let θ ≡ θ(x) denote the collection of log potential functions {log α_r(x_r) : r ∈ R}. Then equation (4) may be rewritten as

log Z(θ) = sup_{τ∈Δ_R} { ⟨θ, τ⟩ + H(p_τ) }.  (5)

Specializing to the Ising model (2), equation (5) gives the variational representation

A(γ) = sup_{µ∈M} { ⟨γ, µ⟩ + H(p_µ) },  (6)

which appears in Wainwright and Jordan [23]. Here, M ≡ M(G) denotes the marginal polytope, corresponding to the collection of mean parameter vectors of the sufficient statistics in the exponential family representation (2), ranging over different values of γ, and p_µ is the maximum-entropy distribution with mean parameters µ.

2.2 Reweighted Kikuchi approximation

Although the set Δ_R appearing in the variational representation (5) is a convex polytope, it may have exponentially many facets [23]. Hence, we replace Δ_R with the set

Δ^K_R = { τ : ∀ t, u ∈ R s.t. t ⊆ u, ∑_{x_{u∖t}} τ_u(x_t, x_{u∖t}) = τ_t(x_t), and ∀ u ∈ R, ∑_{x_u} τ_u(x_u) = 1 }

of locally consistent R-pseudomarginals. Note that Δ_R ⊆ Δ^K_R, and the latter set has only polynomially many facets, making optimization more tractable. In the case of the pairwise Ising model (2), we let L ≡ L(G) denote the polytope Δ^K_R. Then L is the collection of nonnegative functions τ = (τ_s, τ_st) satisfying the marginalization constraints

∑_{x_s} τ_s(x_s) = 1, ∀ s ∈ V,
∑_{x_t} τ_st(x_s, x_t) = τ_s(x_s) and ∑_{x_s} τ_st(x_s, x_t) = τ_t(x_t), ∀ (s, t) ∈ E.

Recall that M(G) ⊆ L(G), with equality achieved if and only if the underlying graph G is a tree. In the general case, we have Δ_R = Δ^K_R when the Hasse diagram of the region graph admits a minimal representation that is loop-free (cf. Theorem 2 of Pakzad and Anantharam [14]). Given a collection of R-pseudomarginals τ, we also replace the entropy term H(p_τ), which is difficult to compute in general, by the approximation

H(p_τ) ≈ ∑_{r∈R} ρ_r H_r(τ_r) := H(τ; ρ),  (7)

where H_r(τ_r) := −∑_{x_r} τ_r(x_r) log τ_r(x_r) is the entropy computed over region r, and {ρ_r : r ∈ R} are weights assigned to the regions.
Note that in the pairwise Ising case (2), with p := p_γ, we have the equality

H(p) = ∑_{s∈V} H_s(p_s) − ∑_{(s,t)∈E} I_st(p_st)

when G is a tree, where I_st(p_st) = H_s(p_s) + H_t(p_t) − H_st(p_st) denotes the mutual information and p_s and p_st denote the node and edge marginals. Hence, the approximation (7) is exact with ρ_st = 1, ∀ (s, t) ∈ E, and ρ_s = 1 − deg(s), ∀ s ∈ V. Using the approximation (7), we arrive at the following reweighted Kikuchi approximation:

B(θ; ρ) := sup_{τ∈Δ^K_R} B_{θ,ρ}(τ), where B_{θ,ρ}(τ) := ⟨θ, τ⟩ + H(τ; ρ).  (8)

Note that when {ρ_r} are the overcounting numbers {c_r}, defined recursively by

c_r = 1 − ∑_{s∈A(r)} c_s,  (9)

the expression (8) reduces to the usual (unweighted) Kikuchi approximation considered in Pakzad and Anantharam [14].

3 Main results and consequences

In this section, we analyze the concavity of the Kikuchi variational problem (8). We derive a sufficient condition under which the function B_{θ,ρ}(τ) is concave over the set Δ^K_R, so global optima of the reweighted Kikuchi approximation may be found efficiently. In the Bethe case, we also show that the condition is necessary for B_{θ,ρ}(τ) to be concave over the entire region Δ^K_R, and we provide a geometric characterization of the region of concavity in terms of the edge and cycle structure of the graph.

3.1 Sufficient conditions for concavity

We begin by establishing sufficient conditions for the concavity of B_{θ,ρ}(τ). Clearly, this is equivalent to establishing conditions under which H(τ; ρ) is concave. Our main result is the following:

Theorem 1. If ρ ∈ ℝ^{|R|} satisfies

∑_{s∈F(S)} ρ_s ≥ 0, ∀ S ⊆ R,  (10)

then the Kikuchi entropy H(τ; ρ) is strictly concave on Δ^K_R.

The proof of Theorem 1 is contained in Appendix A.1, and makes use of a generalization of Hall's marriage lemma for weighted graphs (cf. Lemma 1 in Appendix A.2). The condition (10) depends heavily on the structure of the region graph.
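The tree-exactness noted at the start of this section, H(p) = ∑_s H_s(p_s) − ∑_{(s,t)} I_st(p_st) when G is a tree, can be checked numerically. The sketch below enumerates the joint distribution of a 3-node chain Ising model with arbitrary (hypothetical) parameters and compares the true entropy with the Bethe expression:

```python
import itertools
import math

def bethe_entropy_gap_on_chain():
    """Brute-force check of H(p) = sum_s H_s - sum_(s,t) I_st on a tree:
    a 3-node chain Ising model with made-up parameters.  Returns the
    absolute gap between the true entropy and the Bethe expression,
    which should be ~0 because a chain is a tree."""
    gamma_s = {0: 0.3, 1: -0.2, 2: 0.1}
    gamma_st = {(0, 1): 0.7, (1, 2): -0.4}

    # exact joint distribution by enumeration
    weights = {}
    for x in itertools.product([-1, 1], repeat=3):
        score = sum(gamma_s[s] * x[s] for s in range(3))
        score += sum(g * x[s] * x[t] for (s, t), g in gamma_st.items())
        weights[x] = math.exp(score)
    Z = sum(weights.values())
    joint = {x: w / Z for x, w in weights.items()}

    def marginal(nodes):
        m = {}
        for x, q in joint.items():
            key = tuple(x[s] for s in nodes)
            m[key] = m.get(key, 0.0) + q
        return m

    def H(dist):
        return -sum(q * math.log(q) for q in dist.values() if q > 0)

    H_s = {s: H(marginal((s,))) for s in range(3)}
    bethe = sum(H_s.values())
    for (s, t) in gamma_st:
        bethe -= H_s[s] + H_s[t] - H(marginal((s, t)))  # subtract I_st
    return abs(H(joint) - bethe)
```

On a graph with cycles the same computation generally returns a nonzero gap, which is exactly why the weighted approximation (7) is needed.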
For the sake of interpretability, we now specialize to the case where the region graph has only two layers, with the first layer corresponding to vertices and the second layer corresponding to hyperedges. In other words, for r, s ∈ R, we have r ⊆ s only if |r| = 1, and R = V ∪ F, where F is the set of hyperedges and V denotes the set of singleton vertices. This is the Bethe case, and the entropy

H(τ; ρ) = ∑_{s∈V} ρ_s H_s(τ_s) + ∑_{α∈F} ρ_α H_α(τ_α)  (11)

is consequently known as the Bethe entropy. The following result is proved in Appendix A.3:

Corollary 1. Suppose ρ_α ≥ 0 for all α ∈ F, and the following condition also holds:

∑_{s∈U} ρ_s + ∑_{α∈F : α∩U≠∅} ρ_α ≥ 0, ∀ U ⊆ V.  (12)

Then the Bethe entropy H(τ; ρ) is strictly concave over Δ^K_R.

3.2 Necessary conditions for concavity

We now establish a converse to Corollary 1 in the Bethe case, showing that condition (12) is also necessary for the concavity of the Bethe entropy. When ρ_α = 1 for α ∈ F and ρ_s = 1 − |N(s)| for s ∈ V, we recover the result of Watanabe and Fukumizu [25] for the unweighted Bethe case. However, our proof technique is significantly simpler and avoids the complex machinery of graph zeta functions. Our approach proceeds by considering the Bethe entropy H(τ; ρ) on appropriate slices of the domain Δ^K_R so as to extract condition (12) for each U ⊆ V. The full proof is provided in Appendix B.1.

Theorem 2. If the Bethe entropy H(τ; ρ) is concave over Δ^K_R, then ρ_α ≥ 0 for all α ∈ F, and condition (12) holds.

Indeed, as demonstrated in the simulations of Section 5, the Bethe objective function B_{θ,ρ}(τ) may have multiple local optima if ρ does not satisfy condition (12).

3.3 Polytope of concavity

We now characterize the polytope defined by the inequalities (12). We show that in the pairwise Bethe case, the polytope may be expressed geometrically as the convex hull of single-cycle forests formed by the edges of the graph.
In the more general (non-pairwise) Bethe case, however, the polytope of concavity may strictly contain the latter set. Note that the Bethe entropy (11) may be written in the alternative form

H(τ; ρ) = ∑_{s∈V} ρ′_s H_s(τ_s) − ∑_{α∈F} ρ_α Ĩ_α(τ_α),  (13)

where Ĩ_α(τ_α) := {∑_{s∈α} H_s(τ_s)} − H_α(τ_α) is the KL divergence between the joint distribution τ_α and the product distribution ∏_{s∈α} τ_s, and the weights ρ′_s are defined appropriately. We show that the polytope of concavity has a nice geometric characterization when ρ′_s = 1 for all s ∈ V, and ρ_α ∈ [0, 1] for all α ∈ F. Note that this assignment produces the expression for the reweighted Bethe entropy analyzed in Wainwright et al. [22] (when all elements of F have cardinality two). Equation (13) then becomes

H(τ; ρ) = ∑_{s∈V} ( 1 − ∑_{α∈N(s)} ρ_α ) H_s(τ_s) + ∑_{α∈F} ρ_α H_α(τ_α),  (14)

and the inequalities (12) defining the polytope of concavity are

∑_{α∈F : α∩U≠∅} (|α ∩ U| − 1) ρ_α ≤ |U|, ∀ U ⊆ V.  (15)

Consequently, we define

C := { ρ ∈ [0, 1]^{|F|} : ∑_{α∈F : α∩U≠∅} (|α ∩ U| − 1) ρ_α ≤ |U|, ∀ U ⊆ V }.

By Theorem 2, the set C is the region of concavity for the Bethe entropy (14) within [0, 1]^{|F|}. We also define the set

F := { 1_{F′} : F′ ⊆ F and F′ ∪ N(F′) is a single-cycle forest in G } ⊆ {0, 1}^{|F|},

where a single-cycle forest is defined to be a subset of edges of a graph such that each connected component contains at most one cycle. (We disregard the directions of edges in G.) The following theorem gives our main result. The proof is contained in Appendix C.1.

Theorem 3. In the Bethe case (i.e., the region graph G has two layers), we have the containment conv(F) ⊆ C. If in addition |α| = 2 for all α ∈ F, then conv(F) = C.

The significance of Theorem 3 is that it provides us with a convenient graph-based method for constructing vectors ρ ∈ C. From the inequalities (15), it is not even clear how to efficiently verify whether a given ρ ∈ [0, 1]^{|F|} lies in C, since it involves testing 2^{|V|} inequalities.
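While testing ρ ∈ C directly involves exponentially many inequalities, membership of an edge set in the single-cycle forest family is easy to check: each connected component may carry at most one edge beyond a spanning tree. A union-find sketch of this test (our own helper, not from the paper):

```python
def is_single_cycle_forest(n, edges):
    """True iff every connected component of the undirected graph on
    vertices 0..n-1 with the given edge list contains at most one cycle,
    i.e. has at most one edge beyond a spanning tree of the component."""
    parent = list(range(n))

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path compression
            u = parent[u]
        return u

    extra = [0] * n  # per-root count of cycle-closing edges
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            extra[ru] += 1          # this edge closes a cycle
            if extra[ru] > 1:       # second cycle in one component
                return False
        else:
            parent[ru] = rv
            extra[rv] += extra[ru]  # merge the cycle counts
    return True
```

A triangle passes (one cycle per component), two disjoint triangles pass, but a triangle with a chord fails, matching the definition above.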
Comparing Theorem 3 with known results, note that in the pairwise case (|α| = 2 for all α ∈ F), Theorem 1 of Wainwright et al. [22] states that the Bethe entropy is concave over conv(T), where T ⊆ {0, 1}^{|E|} is the set of edge indicator vectors for spanning forests of the graph. It is trivial to check that T ⊆ F, since every spanning forest is also a single-cycle forest. Hence, Theorems 2 and 3 together imply a stronger result than in Wainwright et al. [22], characterizing the precise region of concavity for the Bethe entropy as a superset of the polytope conv(T) analyzed there. In the unweighted Kikuchi case, it is also known [1, 14] that the Kikuchi entropy is concave for the assignment ρ = 1_F when the region graph G is connected and has at most one cycle. Clearly, 1_F ∈ C in that case, so this result is a consequence of Theorems 2 and 3, as well. However, our theorems show that a much more general statement is true.

It is tempting to posit that conv(F) = C holds more generally in the Bethe case. However, as the following example shows, settings arise where conv(F) ⊊ C. Details are contained in Appendix C.2.

Example 1. Consider a two-layer region graph with vertices V = {1, 2, 3, 4, 5} and factors α_1 = {1, 2, 3}, α_2 = {2, 3, 4}, and α_3 = {3, 4, 5}. Then (1, 1/2, 1) ∈ C ∖ conv(F).

In fact, Example 1 is a special case of a more general statement, which we state in the following proposition. Here, F := {F′ ⊆ F : 1_{F′} ∈ F}, and an element F* ∈ F is maximal if it is not contained in another element of F.

Proposition 1. Suppose (i) G is not a single-cycle forest, and (ii) there exists a maximal element F* ∈ F such that the induced subgraph F* ∪ N(F*) is a forest. Then conv(F) ⊊ C.

The proof of Proposition 1 is contained in Appendix C.3. Note that if |α| = 2 for all α ∈ F, then condition (ii) is violated whenever condition (i) holds, so Proposition 1 provides a partial converse to Theorem 3.
4 Reweighted sum product algorithm

In this section, we provide an iterative message passing algorithm to optimize the Kikuchi variational problem (8). As in the case of the generalized belief propagation algorithm for the unweighted Kikuchi approximation [28, 29, 11, 14, 12, 27] and the reweighted sum product algorithm for the Bethe approximation [22], our message passing algorithm searches for stationary points of the Lagrangian version of the problem (8). When ρ satisfies condition (10), Theorem 1 implies that the problem (8) is strictly concave, so the unique fixed point of the message passing algorithm globally maximizes the Kikuchi approximation.

Let G = (V, R) be a region graph defining our Kikuchi approximation. Following Pakzad and Anantharam [14], for r, s ∈ R, we write r ≺ s if r ⊊ s and there does not exist t ∈ R such that r ⊊ t ⊊ s. For r ∈ R, we define the parent set of r to be P(r) = {s ∈ R : r ≺ s} and the child set of r to be C(r) = {s ∈ R : s ≺ r}. With this notation, τ = {τ_r(x_r) : r ∈ R} belongs to the set Δ^K_R if and only if ∑_{x_{s∖r}} τ_s(x_r, x_{s∖r}) = τ_r(x_r) for all r ∈ R, s ∈ P(r).

The message passing algorithm we propose is as follows: For each r ∈ R and s ∈ P(r), let M_sr(x_r) denote the message passed from s to r at assignment x_r. Starting with an arbitrary positive initialization of the messages, we repeatedly perform the following updates for all r ∈ R, s ∈ P(r):

M_sr(x_r) ← C [ ( ∑_{x_{s∖r}} exp(θ_s(x_s)/ρ_s) ∏_{v∈P(s)} M_vs(x_s)^{ρ_v/ρ_s} ∏_{w∈C(s)∖r} M_sw(x_w)^{−1} ) / ( exp(θ_r(x_r)/ρ_r) ∏_{u∈P(r)∖s} M_ur(x_r)^{ρ_u/ρ_r} ∏_{t∈C(r)} M_rt(x_t)^{−1} ) ]^{ρ_r/(ρ_r+ρ_s)}.  (16)

Here, C > 0 may be chosen to ensure a convenient normalization condition; e.g., ∑_{x_r} M_sr(x_r) = 1. Upon convergence of the updates (16), we compute the pseudomarginals according to

τ_r(x_r) ∝ exp(θ_r(x_r)/ρ_r) ∏_{s∈P(r)} M_sr(x_r)^{ρ_s/ρ_r} ∏_{t∈C(r)} M_rt(x_t)^{−1},  (17)

and we obtain the corresponding Kikuchi approximation by computing the objective function (8) with these pseudomarginals.
We have the following result, which is proved in Appendix D:

Theorem 4. The pseudomarginals τ specified by the fixed points of the messages {M_sr(x_r)} via the updates (16) and (17) correspond to the stationary points of the Lagrangian associated with the Kikuchi approximation problem (8).

As with the standard belief propagation and reweighted sum product algorithms, we have several options for implementing the above message passing algorithm in practice. For example, we may perform the updates (16) using serial or parallel schedules. To improve the convergence of the algorithm, we may damp the updates by taking a convex combination of new and previous messages using an appropriately chosen step size. As noted by Pakzad and Anantharam [14], we may also use a minimal graphical representation of the Hasse diagram to lower the complexity of the algorithm.

Finally, we remark that although our message passing algorithm proceeds in the same spirit as classical belief propagation algorithms by operating on the Lagrangian of the objective function, our algorithm as presented above does not immediately reduce to the generalized belief propagation algorithm for unweighted Kikuchi approximations or the reweighted sum product algorithm for tree-reweighted pairwise Bethe approximations. Previous authors use algebraic relations between the overcounting numbers (9) in the Kikuchi case [28, 29, 11, 14] and the two-layer structure of the Hasse diagram in the Bethe case [22] to obtain a simplified form of the updates. Since the coefficients ρ in our problem lack the same algebraic relations, following the message-passing protocol used in previous work [11, 28] leads to more complicated updates, so we present a slightly different algorithm that still optimizes the general reweighted Kikuchi objective.

5 Experiments

In this section, we present empirical results to demonstrate the advantages of the reweighted Kikuchi approximation that support our theoretical results.
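The damping heuristic mentioned above, a convex combination of newly proposed and previous messages, might look as follows in code; the data layout (a dict keyed by directed edge and assignment) and the parameter name `lam` are hypothetical:

```python
def damped_messages(old, proposed, lam=0.5):
    """Damped message update: replace each message M_sr(x_r) by
    lam * proposed + (1 - lam) * old.  `old` and `proposed` map
    ((s, r), x_r)-style keys to positive message values; lam = 0.5
    matches the step size used in the experiments of Section 5."""
    return {k: lam * proposed[k] + (1.0 - lam) * old[k] for k in old}
```

With lam = 1 this reduces to an undamped update; smaller lam slows the iteration but often helps it converge when distinct fixed points are close together.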
For simplicity, we focus on the binary pairwise Ising model given in equation (2). Without loss of generality, we may take the potentials to be γ_s(x_s) = γ_s x_s and γ_st(x_s, x_t) = γ_st x_s x_t for some γ = (γ_s, γ_st) ∈ ℝ^{|V|+|E|}. We run our experiments on two types of graphs: (1) K_n, the complete graph on n vertices, and (2) T_n, the √n × √n toroidal grid graph where every vertex has degree four.

Bethe approximation. We consider the pairwise Bethe approximation of the log partition function A(γ) with weights ρ_st ≥ 0 and ρ_s = 1 − ∑_{t∈N(s)} ρ_st. Because of the regularity structure of K_n and T_n, we take ρ_st = ρ ≥ 0 for all (s, t) ∈ E and study the behavior of the Bethe approximation as ρ varies. For this particular choice of weight vector ρ1_E, we define

ρ_tree = max{ρ ≥ 0 : ρ1_E ∈ conv(T)}, and ρ_cycle = max{ρ ≥ 0 : ρ1_E ∈ conv(F)}.

It is easily verified that for K_n, we have ρ_tree = 2/n and ρ_cycle = 2/(n−1); while for T_n, we have ρ_tree = (n−1)/(2n) and ρ_cycle = 1/2. Our results in Section 3 imply that the Bethe objective function B_{γ,ρ}(τ) in equation (8) is concave if and only if ρ ≤ ρ_cycle, and Wainwright et al. [22] show that we have the bound A(γ) ≤ B(γ; ρ) for ρ ≤ ρ_tree. Moreover, since the Bethe entropy may be written in terms of the edge mutual information (13), the function B(γ; ρ) is decreasing in ρ. In our results below, we observe that we may obtain a tighter approximation to A(γ) by moving from the upper bound region ρ ≤ ρ_tree to the concavity region ρ ≤ ρ_cycle. In addition, for ρ > ρ_cycle, we observe multiple local optima of B_{γ,ρ}(τ).

Procedure. We generate a random potential γ = (γ_s, γ_st) ∈ ℝ^{|V|+|E|} for the Ising model (2) by sampling each potential {γ_s}_{s∈V} and {γ_st}_{(s,t)∈E} independently. We consider two types of models:

Attractive: γ_st ∼ Uniform[0, ω_st], and Mixed: γ_st ∼ Uniform[−ω_st, ω_st].

In each case, γ_s ∼ Uniform[0, ω_s]. We set ω_s = 0.1 and ω_st = 2.
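The stated values ρ_tree = 2/n and ρ_cycle = 2/(n−1) for K_n can be sanity-checked with a short symmetry argument: for an edge-transitive graph, the largest uniform edge weight inside the convex hull of indicator vectors equals the maximum number of edges of a member divided by |E|. A spanning tree of K_n has n−1 edges, a maximal single-cycle forest has n, and |E| = n(n−1)/2. This shortcut is our own reasoning, not the paper's derivation:

```python
from fractions import Fraction

def uniform_rho_limits_Kn(n):
    """Largest uniform edge weight rho with rho*1_E in conv(T) (spanning
    forests) and conv(F) (single-cycle forests) for the complete graph
    K_n, via the averaging/symmetry shortcut described in the lead-in:
    (max edges of a member) / |E|."""
    num_edges = Fraction(n * (n - 1), 2)
    rho_tree = Fraction(n - 1) / num_edges   # simplifies to 2/n
    rho_cycle = Fraction(n) / num_edges      # simplifies to 2/(n-1)
    return rho_tree, rho_cycle
```

For K_5 this gives (2/5, 1/2), and for K_15 it gives (2/15, 1/7), matching the thresholds marked in Figure 1.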
Intuitively, the attractive model encourages variables in adjacent nodes to assume the same value, and it has been shown [18, 19] that the ordinary Bethe approximation (ρ_st = 1) in an attractive model lower-bounds the log partition function. For ρ ∈ [0, 2], we compute stationary points of B_{γ,ρ}(τ) by running the reweighted sum product algorithm of Wainwright et al. [22]. We use a damping factor of λ = 0.5, a convergence threshold of 10^{−10} for the average change of messages, and at most 2500 iterations. We repeat this process with at least 8 random initializations for each value of ρ. Figure 1 shows the scatter plots of ρ and the Bethe approximation B_{γ,ρ}(τ). In each plot, the two vertical lines are the boundaries ρ = ρ_tree and ρ = ρ_cycle, and the horizontal line is the value of the true log partition function A(γ).

Results. Figures 1(a)–1(d) show the results of our experiments on small graphs (K_5 and T_9) for both attractive and mixed models. We see that the Bethe approximation with ρ ≤ ρ_cycle generally provides a better approximation to A(γ) than the Bethe approximation computed over ρ ≤ ρ_tree. However, in general we cannot guarantee whether B(γ; ρ) will give an upper or lower bound for A(γ) when ρ ≤ ρ_cycle. As noted above, we have B(γ; 1) ≤ A(γ) for attractive models. We also observe from Figures 1(a)–1(d) that shortly after ρ leaves the concavity region {ρ ≤ ρ_cycle}, multiple local optima emerge for the Bethe objective function. The presence of the point clouds near ρ = 1 in Figures 1(a) and 1(c) arises because the sum product algorithm has not converged after 2500 iterations. Indeed, the same phenomenon is true for all our results: in the region where multiple local optima begin to appear, it is more difficult for the algorithm to converge. See Figure 2 and the accompanying text in Appendix E for a plot of the points (ρ, log_10(Δ)), where Δ is the final average change in the messages at termination of the algorithm.
From Figure 2, we see that the values of Δ are significantly higher for the values of ρ near where multiple local optima emerge. We suspect that for these values of ρ, the sum product algorithm fails to converge since distinct local optima are close together, so messages oscillate between the optima. For larger values of ρ, the local optima become sufficiently separated and the algorithm converges to one of them. However, it is interesting to note that this point cloud phenomenon does not appear for attractive models, despite the presence of distinct local optima.

Figure 1: Values of the reweighted Bethe approximation as a function of ρ. Panels: (a) K5, mixed; (b) K5, attractive; (c) T9, mixed; (d) T9, attractive; (e) K15, mixed; (f) K15, attractive; (g) T25, mixed; (h) T25, attractive. In each panel, the two vertical lines mark ρ_tree and ρ_cycle, and the horizontal line marks A(γ). See text for details.

Simulations for larger graphs are shown in Figures 1(e)–1(h). If we zoom into the region near ρ ≈ ρ_cycle, we still observe the same behavior that ρ ≤ ρ_cycle generally provides a better Bethe approximation than ρ ≤ ρ_tree. Moreover, the presence of the point clouds and multiple local optima are more pronounced, and we see from Figures 1(c), 1(g), and 1(h) that new local optima with even worse Bethe values arise for larger values of ρ.
Finally, we note that the same qualitative behavior also occurs in all the other graphs that we have tried (K_n for n ∈ {5, 10, 15, 20, 25} and T_n for n ∈ {9, 16, 25, 36, 49, 64}), with multiple random instances of the Ising model p_γ.

6 Discussion

In this paper, we have analyzed the reweighted Kikuchi approximation method for estimating the log partition function of a distribution that factorizes over a region graph. We have characterized necessary and sufficient conditions for the concavity of the variational objective function, generalizing existing results in the literature. Our simulations demonstrate the advantages of using the reweighted Kikuchi approximation and show that multiple local optima may appear outside the region of concavity.

An interesting future research direction is to obtain a better understanding of the approximation guarantees of the reweighted Bethe and Kikuchi methods. In the Bethe case with attractive potentials θ, several recent results [22, 19, 18] establish that the Bethe approximation B(θ; ρ) is an upper bound to the log partition function A(θ) when ρ lies in the spanning tree polytope, whereas B(θ; ρ) ≤ A(θ) when ρ = 1_F. By continuity, we must have B(θ; ρ*) = A(θ) for some values of ρ*, and it would be interesting to characterize such values where the reweighted Bethe approximation is exact.

Another interesting direction is to extend our theoretical results on properties of the reweighted Kikuchi approximation, which currently depend solely on the structure of the region graph and the weights ρ, to incorporate the effect of the model potentials θ. For example, several authors [20, 6] present conditions under which loopy belief propagation applied to the unweighted Bethe approximation has a unique fixed point. The conditions for uniqueness of fixed points slightly generalize the conditions for convexity, and they involve both the graph structure and the strength of the potentials.
We suspect that similar results would hold for the reweighted Kikuchi approximation.

Acknowledgments. The authors thank Martin Wainwright for introducing the problem to them and providing helpful guidance. The authors also thank Varun Jog for discussions regarding the generalization of Hall's lemma. The authors thank the anonymous reviewers for feedback that improved the clarity of the paper. PL was partly supported by a Hertz Foundation Fellowship and an NSF Graduate Research Fellowship while at Berkeley.

References

[1] S. M. Aji and R. J. McEliece. The generalized distributive law and free energy minimization. In Proceedings of the 39th Allerton Conference, 2001.
[2] F. Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical and General, 15(10):3241, 1982.
[3] H. A. Bethe. Statistical theory of superlattices. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 150(871):552–575, 1935.
[4] P. Hall. On representatives of subsets. Journal of the London Mathematical Society, 10:26–30, 1935.
[5] T. Heskes. Stable fixed points of loopy belief propagation are minima of the Bethe free energy. In Advances in Neural Information Processing Systems 15, 2002.
[6] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Computation, 16(11):2379–2413, 2004.
[7] T. Heskes. Convexity arguments for efficient minimization of the Bethe and Kikuchi free energies. Journal of Artificial Intelligence Research, 26:153–190, 2006.
[8] A. T. Ihler, J. W. Fisher III, and A. S. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6:905–936, December 2005.
[9] R. Kikuchi. A theory of cooperative phenomena. Phys. Rev., 81:988–1003, March 1951.
[10] B. Korte and J. Vygen. Combinatorial Optimization: Theory and Algorithms. Springer, 4th edition, 2007.
[11] R. J. McEliece and M. Yildirim.
Belief propagation on partially ordered sets. In Mathematical Systems Theory in Biology, Communications, Computation, and Finance, pages 275–300, 2002.
[12] T. Meltzer, A. Globerson, and Y. Weiss. Convergent message passing algorithms: a unifying view. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, UAI '09, 2009.
[13] J. M. Mooij and H. J. Kappen. Sufficient conditions for convergence of the sum-product algorithm. IEEE Transactions on Information Theory, 53(12):4422–4437, December 2007.
[14] P. Pakzad and V. Anantharam. Estimation and marginalization using Kikuchi approximation methods. Neural Computation, 17:1836–1873, 2003.
[15] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1988.
[16] T. Roosta, M. J. Wainwright, and S. S. Sastry. Convergence analysis of reweighted sum-product algorithms. IEEE Transactions on Signal Processing, 56(9):4293–4305, 2008.
[17] D. Roth. On the hardness of approximate reasoning. Artificial Intelligence, 82(1–2):273–302, 1996.
[18] N. Ruozzi. The Bethe partition function of log-supermodular graphical models. In Advances in Neural Information Processing Systems 25, 2012.
[19] E. B. Sudderth, M. J. Wainwright, and A. S. Willsky. Loop series and Bethe variational bounds in attractive graphical models. In Advances in Neural Information Processing Systems 20, 2007.
[20] S. C. Tatikonda and M. I. Jordan. Loopy belief propagation and Gibbs measures. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, UAI '02, 2002.
[21] P. O. Vontobel. The Bethe permanent of a nonnegative matrix. IEEE Transactions on Information Theory, 59(3):1866–1901, 2013.
[22] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
[23] M. J. Wainwright and M. I. Jordan.
Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, January 2008.
[24] Y. Watanabe and K. Fukumizu. Graph zeta function in the Bethe free energy and loopy belief propagation. In Advances in Neural Information Processing Systems 22, 2009.
[25] Y. Watanabe and K. Fukumizu. Loopy belief propagation, Bethe free energy and graph zeta function. arXiv preprint arXiv:1103.0605, 2011.
[26] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1–41, 2000.
[27] T. Werner. Primal view on belief propagation. In UAI 2010: Proceedings of the Conference on Uncertainty in Artificial Intelligence, pages 651–657, Corvallis, Oregon, July 2010. AUAI Press.
[28] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural Information Processing Systems 13, 2000.
[29] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. IEEE Transactions on Information Theory, 51:2282–2312, 2005.
The Bayesian Case Model: A Generative Approach for Case-Based Reasoning and Prototype Classification

Been Kim, Cynthia Rudin and Julie Shah
Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
{beenkim, rudin, julie a shah}@csail.mit.edu

Abstract

We present the Bayesian Case Model (BCM), a general framework for Bayesian case-based reasoning (CBR) and prototype classification and clustering. BCM brings the intuitive power of CBR to a Bayesian generative framework. BCM learns prototypes, the “quintessential” observations that best represent clusters in a dataset, by performing joint inference on cluster labels, prototypes and important features. Simultaneously, BCM pursues sparsity by learning subspaces, the sets of features that play important roles in the characterization of the prototypes. The prototype and subspace representation provides quantitative benefits in interpretability while preserving classification accuracy. Human subject experiments verify statistically significant improvements to participants’ understanding when using explanations produced by BCM, compared to those given by prior art.

1 Introduction

People like to look at examples. Through advertising, marketers present examples of people we might want to emulate in order to lure us into making a purchase. We might ignore recommendations made by Amazon.com and look instead at an Amazon customer’s Listmania to find an example of a customer like us. We might ignore medical guidelines computed from a large number of patients in favor of medical blogs where we can get examples of individual patients’ experiences. Numerous studies have demonstrated that exemplar-based reasoning, involving various forms of matching and prototyping, is fundamental to our most effective strategies for tactical decision-making ([26, 9, 21]).
For example, naturalistic studies have shown that skilled decision makers in the fire service use recognition-primed decision making, in which new situations are matched to typical cases where certain actions are appropriate and usually successful [21]. To assist humans in leveraging large data sources to make better decisions, we desire that machine learning algorithms provide output in forms that are easily incorporated into the human decision-making process. Studies of human decision-making and cognition provided the key inspiration for artificial intelligence Case-Based Reasoning (CBR) approaches [2, 28]. CBR relies on the idea that a new situation can be well-represented by the summarized experience of previously solved problems [28]. CBR has been used in important real-world applications [24, 4], but is fundamentally limited, in that it does not learn the underlying complex structure of data in an unsupervised fashion and may not scale to datasets with high-dimensional feature spaces (as discussed in [29]). In this work, we introduce a new Bayesian model, called the Bayesian Case Model (BCM), for prototype clustering and subspace learning. In this model, the prototype is the exemplar that is most representative of the cluster. The subspace representation is a powerful output of the model because we neither need nor want the best exemplar to be similar to the current situation in all possible ways: for instance, a moviegoer who likes the same horror films as we do might be useful for identifying good horror films, regardless of their cartoon preferences. We model the underlying data using a mixture model, and infer sets of features that are important within each cluster (i.e., subspace). This type of model can help to bridge the gap between machine learning methods and humans, who use examples as a fundamental part of their decision-making strategies. We show that BCM produces prediction accuracy comparable to or better than prior art for standard datasets.
We also verify through human subject experiments that the prototypes and subspaces present as meaningful feedback for the characterization of important aspects of a dataset. In these experiments, the exemplar-based output of BCM resulted in statistically significant improvements to participants’ performance of a task requiring an understanding of clusters within a dataset, as compared to outputs produced by prior art.

2 Background and Related Work

People organize and interpret information through exemplar-based reasoning, particularly when they are solving problems ([26, 7, 9, 21]). AI Case-Based Reasoning approaches are motivated by this insight, and provide example cases along with the machine-learned solution. Studies show that example cases significantly improve user confidence in the resulting solutions, as compared to providing the solution alone or also displaying a rule that was used to find the solution [11]. However, CBR requires solutions (i.e., labels) for previous cases, and does not learn the underlying structure of the data in an unsupervised fashion. Maintaining transparency in complex situations also remains a challenge [29]. CBR models designed explicitly to produce explanations [1] rely on the backward chaining of the causal relation from a solution, which does not scale as complexity increases. The cognitive load of the user also increases with the complexity of the similarity measure used for comparing cases [14]. Other CBR models for explanations require the model to be manually crafted in advance by experts [25]. Alternatively, the mixture model is a powerful tool for discovering cluster distributions in an unsupervised fashion. However, this approach does not provide intuitive explanations for the learned clusters (as pointed out in [8]). Sparse topic models are designed to improve interpretability by reducing the number of words per topic [32, 13].
However, using the number of features as a proxy for interpretability is problematic, as sparsity is often not a good or complete measure of interpretability [14]. Explanations produced by mixture models are typically presented as distributions over features. Even users with technical expertise in machine learning may have a difficult time interpreting such output, especially when the cluster is distributed over a large number of features [14]. Our approach, the Bayesian Case Model (BCM), simultaneously performs unsupervised clustering and learns both the most representative cases (i.e., prototypes) and important features (i.e., subspaces). BCM preserves the power of CBR in generating interpretable output, where interpretability comes not only from sparsity but from the prototype exemplars. In our view, there are at least three widely known types of interpretable models: sparse linear classifiers ([30, 8, 31]); discretization methods, such as decision trees and decision lists (e.g., [12, 32, 13, 23, 15]); and prototype- or case-based classifiers (e.g., nearest neighbors [10] or a supervised optimization-based method [5]). (See [14] for a review of interpretable classification.) BCM is intended as the third model type, but uses unsupervised generative mechanisms to explain clusters, rather than supervised approaches [16] or a myopic focus on neighboring points [3].

3 The Bayesian Case Model

Intuitively, BCM generates each observation using the important pieces of related prototypes. The model might generate a movie profile made of the horror movies from a quintessential horror movie watcher, and action movies from a quintessential action moviegoer. BCM begins with a standard discrete mixture model [18, 6] to represent the underlying structure of the observations. It augments the standard mixture model with prototypes and subspace feature indicators that characterize the clusters.
We show in Section 4.2 that prototypes and subspace feature indicators improve human interpretability as compared to the standard mixture model output. The graphical model for BCM is depicted in Figure 1.

[Figure 1: Graphical model for the Bayesian Case Model.]

We start with N observations, denoted by x = {x1, x2, . . . , xN}, with each xi represented as a random mixture over clusters. There are S clusters, where S is assumed to be known in advance. (This assumption can easily be relaxed through extension to a non-parametric mixture model.) The vector πi gives the mixture weights over these clusters for the ith observation xi, πi ∈ RS+. Each observation has P features, and we denote the jth feature of the ith observation as xij. Each feature j of the observation xi comes from one of the clusters; the index of the cluster for xij is denoted by zij, and the full set of cluster assignments for observation-feature pairs is denoted by z. Each zij takes on the value of a cluster index between 1 and S. Hyperparameters q, λ, c, and α are assumed to be fixed. The explanatory power of BCM results from how the clusters are characterized. While a standard mixture model assumes that each cluster takes the form of a predefined parametric distribution (e.g., normal), BCM characterizes each cluster by a prototype, ps, and a subspace feature indicator, ωs. Intuitively, the subspace feature indicator selects only a few features that play an important role in identifying the cluster and prototype (hence, BCM clusters are subspace clusters). We intuitively define these latent variables below.
Note that more than one maximum may exist per cluster; in this case, one prototype is arbitrarily chosen. Intuitively, the prototype is the “quintessential” observation that best represents the cluster. Subspace feature indicator ωs: Intuitively, ωs ‘turns on’ the features that are important for characterizing cluster s and selecting the prototype, ps. Here, ωs ∈{0, 1}P is an indicator variable that is 1 on the subset of features that maximizes p(ωs|ps, z, x), with the probability for ωs as defined below. Here, ωs is a binary vector of size P, where each element is an indicator of whether or not feature j belongs to subspace s. The generative process for BCM is as follows: First, we generate the subspace clusters. A subspace cluster can be fully described by three components: 1) a prototype, ps, generated by sampling uniformly over all observations, 1 . . . N; 2) a feature indicator vector, ωs, that indicates important features for that subspace cluster, where each element of the feature indicator (ωsj) is generated according to a Bernoulli distribution with hyperparameter q; and 3) the distribution of feature outcomes for each feature, φs, for subspace s, which we now describe. Distribution of feature outcomes φs for cluster s: Here, φs is a data structure wherein each “row” φsj is a discrete probability distribution of possible outcomes for feature j. Explicitly, φsj is a vector of length Vj, where Vj is the number of possible outcomes of feature j. Let us define Θ as a vector of the possible outcomes of feature j (e.g., for feature ‘color’, Θ = [red, blue, yellow]), where Θv represents a particular outcome for that feature (e.g., Θv = blue). We will generate φs so that it mostly takes outcomes from the prototype ps for the important dimensions of the cluster. 
We do this by considering the vector g, indexed by possible outcomes v, as follows: gpsj,ωsj,λ(v) = λ(1 + c·1[ωsj = 1 and psj = Θv]), where c and λ are constant hyperparameters that indicate how much we will copy the prototype in order to generate the observations. The distribution of feature outcomes will be determined by g through φsj ∼ Dirichlet(gpsj,ωsj,λ). To explain at an intuitive level: First, consider the irrelevant dimensions j in subspace s, which have ωsj = 0. In that case, φsj will look like a uniform distribution over all possible outcomes for feature j; the feature values for the unimportant dimensions are generated arbitrarily according to the prior. Next, consider relevant dimensions where ωsj = 1. In this case, φsj will generally take on a larger parameter value λ(1 + c) for the feature value that prototype ps has on feature j, which is called Θv. All of the other possible outcomes are taken with lower probability λ. As a result, we will be more likely to select the outcome Θv that agrees with the prototype ps. In the extreme case where c is very large, we can copy the cluster’s prototype directly within the cluster’s relevant subspace and assign the rest of the feature values randomly. An observation is then a mix of different prototypes, wherein we take the most important pieces of each prototype. To do this, mixture weights πi are generated according to a Dirichlet distribution, parameterized by hyperparameter α. From there, to select a cluster and obtain the cluster index zij for each xij, we sample from a multinomial distribution with parameters πi. Finally, each feature for an observation, xij, is sampled from the feature distribution of the assigned subspace cluster (φzij). (Note that Latent Dirichlet Allocation (LDA) [6] also begins with a standard mixture model, though our feature values exist in a discrete set that is not necessarily binary.)
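The generative process just described can be sketched as a short simulation. This is an illustrative sketch rather than the authors' implementation; the dimensions (N, P, S, V) and the synthetic pool from which prototypes are drawn are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and hyperparameters (assumptions for this sketch)
N, P, S, V = 240, 6, 3, 4          # observations, features, clusters, outcomes per feature
q, lam, c, alpha = 0.5, 1.0, 50.0, 0.1

# Synthetic pool of observations from which prototypes are drawn
base = rng.integers(0, V, size=(N, P))

# Subspace indicators omega_sj ~ Bernoulli(q) and prototypes p_s ~ Uniform over observations
omega = rng.binomial(1, q, size=(S, P))
proto = base[rng.integers(0, N, size=S)]

# Feature-outcome distributions phi_sj ~ Dirichlet(g), where
# g(v) = lam * (1 + c * 1[omega_sj = 1 and p_sj = v])
phi = np.empty((S, P, V))
for s in range(S):
    for j in range(P):
        g = lam * np.ones(V)
        if omega[s, j] == 1:
            g[proto[s, j]] *= 1 + c    # boost the prototype's outcome on important features
        phi[s, j] = rng.dirichlet(g)

# Observations: pi_i ~ Dirichlet(alpha), z_ij ~ Multinomial(pi_i), x_ij ~ Multinomial(phi_{z_ij, j})
x = np.empty((N, P), dtype=int)
for i in range(N):
    pi_i = rng.dirichlet(alpha * np.ones(S))
    z_i = rng.choice(S, size=P, p=pi_i)
    for j in range(P):
        x[i, j] = rng.choice(V, p=phi[z_i[j], j])
```

With a large c, each important feature of an observation is very likely to copy the corresponding prototype value, which is exactly the coupling between features that distinguishes BCM from an independence-based model.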
Here is the full model, with hyperparameters c, λ, q, and α:

ωsj ∼ Bernoulli(q) ∀s, j
ps ∼ Uniform(1, N) ∀s
φsj ∼ Dirichlet(gpsj,ωsj,λ) ∀s, j, where gpsj,ωsj,λ(v) = λ(1 + c·1[ωsj = 1 and psj = Θv])
πi ∼ Dirichlet(α) ∀i
zij ∼ Multinomial(πi) ∀i, j
xij ∼ Multinomial(φzij,j) ∀i, j.

Our model can be readily extended to different similarity measures, such as standard kernel methods or domain-specific similarity measures, by modifying the function g. For example, we can use the least squares loss, i.e., for a fixed threshold ε, gpsj,ωsj,λ(v) = λ(1 + c·1[ωsj = 1 and (psj − Θv)² ≤ ε]); or, more generally, gpsj,ωsj,λ(v) = λ(1 + c·1[ωsj = 1 and ℓ(psj, Θv) ≤ ε]). In terms of setting hyperparameters, there is a natural setting for α (all entries being 1). This means that there are three real-valued parameters to set, which can be done through cross-validation, another layer of hierarchy with more diffuse hyperparameters, or plain intuition. To use BCM for classification, the vector πi is used as S features for a classifier, such as an SVM.

3.1 Motivating example

This section provides an illustrative example for prototypes, subspace feature indicators and subspace clusters, using a dataset composed of a mixture of smiley faces. The feature set for a smiley face is composed of types, shapes and colors of eyes and mouths. For the purpose of this example, assume that the ground truth is that there are three clusters, each of which has two features that are important for defining that cluster. In Table 1, we show the first cluster, with a subspace defined by the color (green) and shape (square) of the face; the rest of the features are not important for defining the cluster. For the second cluster, color (orange) and eye shape define the subspace. We generated 240 smiley faces from BCM’s prior with α = 0.1 for all entries, and q = 0.5, λ = 1 and c = 50.

[Table 1: The mixture of smiley faces for LDA and BCM. For each cluster, the table shows the top 3 words (feature outcomes) and their probabilities under LDA, and the prototype and subspaces under BCM: cluster 1, color and shape are important (LDA probabilities 0.26, 0.23, 0.12); cluster 2, color and eye are important (0.26, 0.24, 0.16); cluster 3, eye and mouth are important (0.35, 0.27, 0.15). The feature-outcome images are omitted here.]

BCM works differently from Latent Dirichlet Allocation (LDA) [6], which presents its output in a very different form. Table 1 depicts the representation of clusters in both LDA (middle column) and BCM (right column). This dataset is particularly simple, and we chose this comparison because the two most important features that both LDA and BCM learn are identical for each cluster. However, LDA does not learn prototypes, and represents information differently. To convey cluster information using LDA (i.e., to define a topic), we must record several probability distributions – one for each feature. For BCM, we need only to record a prototype (e.g., the green face depicted in the top row, right column of the figure), and state which features were important for that cluster’s subspace (e.g., shape and color). For this reason, BCM is more succinct than LDA with regard to what information must be recorded in order to define the clusters. One could define a “special” constrained version of LDA with topics having uniform weights over a subset of features, and with “word” distributions centered around a particular value. This would require a similar amount of memory; however, it loses information relative to BCM, which carries a full prototype within it for each cluster. A major benefit of BCM over LDA is that the “words” in each topic (the choice of feature values) are coupled and not assumed to be independent – correlations can be controlled depending on the choice of parameters. The independence assumption of LDA can be very strong, and this may be crippling for its use in many important applications. Given our example of images, one could easily generate an image with eyes and a nose that cannot physically occur on a single person (perhaps overlapping).
BCM can also generate this image, but it would be unlikely, as the model would generally prefer to copy the important features from a prototype. BCM performs joint inference on prototypes, subspace feature indicators and cluster labels for observations. This encourages the inference step to achieve solutions where clusters are better represented by prototypes. We will show that this is beneficial in terms of predictive accuracy in Section 4.1. We will also show through an experiment involving human subjects that BCM’s succinct representation is very effective for communicating the characteristics of clusters in Section 4.2.

3.2 Inference: collapsed Gibbs sampling

We use collapsed Gibbs sampling to perform inference, as this has been observed to converge quickly, particularly in mixture models [17]. We sample ωsj, zij, and ps, where φ and π are integrated out. Note that we can recover φ by simply counting the number of feature values assigned to each subspace. Integrating out φ and π results in the following expression for sampling zij:

p(zij = s | zi¬j, x, p, ω, α, λ) ∝ [(α/S + n(s,i,¬j,·)) / (α + n)] × [(gpsj,ωsj,λ(xij) + n(s,·,j,xij)) / (Σv gpsj,ωsj,λ(v) + n(s,·,j,·))],   (1)

where n(s,i,j,v) = 1(zij = s, xij = v). In other words, if xij takes feature value v for feature j and is assigned to cluster s, then n(s,i,j,v) = 1, or 0 otherwise. Notation n(s,·,j,v) is the number of times that the jth feature of an observation takes feature value v and that observation is assigned to subspace cluster s (i.e., n(s,·,j,v) = Σi 1(zij = s, xij = v)). Notation n(s,·,j,·) means summing over i and v. We use n(s,i,¬j,v) to denote a count that does not include feature j. The derivation is similar to the standard collapsed Gibbs sampling for LDA mixture models [17].
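A single collapsed Gibbs update for zij of this form can be sketched as follows. This is an illustrative sketch, not the authors' code: the counts are recomputed from scratch for clarity (a real sampler would maintain them incrementally), and reading the first denominator's n as the number of remaining features of observation i is an assumption, harmless because that term is constant in s and cancels after normalization.

```python
import numpy as np

def g_val(p_sj, omega_sj, lam, c, v):
    """g(v) = lam * (1 + c * 1[omega_sj = 1 and p_sj = v])."""
    return lam * (1.0 + c * float(omega_sj == 1 and p_sj == v))

def sample_z_ij(i, j, x, z, proto, omega, alpha, lam, c, S, V, rng):
    """Sample a new cluster index for feature j of observation i (collapsed Gibbs)."""
    P = x.shape[1]
    probs = np.empty(S)
    for s in range(S):
        # n(s,i,-j,.): features of observation i in cluster s, excluding feature j
        n_i_s = np.sum(z[i] == s) - int(z[i, j] == s)
        left = (alpha / S + n_i_s) / (alpha + (P - 1))
        # Counts over other observations whose feature j is assigned to cluster s
        mask = z[:, j] == s
        mask[i] = False                                      # exclude the current assignment
        n_s_j_v = int(np.sum(mask & (x[:, j] == x[i, j])))   # n(s,.,j,x_ij)
        n_s_j = int(np.sum(mask))                            # n(s,.,j,.)
        num = g_val(proto[s, j], omega[s, j], lam, c, x[i, j]) + n_s_j_v
        den = sum(g_val(proto[s, j], omega[s, j], lam, c, v) for v in range(V)) + n_s_j
        probs[s] = left * num / den
    probs /= probs.sum()
    return int(rng.choice(S, p=probs))
```

Because g is strictly positive, every cluster keeps nonzero probability, so the normalization is always well defined.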
Similarly, integrating out φ results in the following expression for sampling ωsj:

p(ωsj = b | q, psj, λ, x, z, α) ∝
  q × B(g(psj, 1, λ) + n(s,·,j,·)) / B(g(psj, 1, λ))           if b = 1,
  (1 − q) × B(g(psj, 0, λ) + n(s,·,j,·)) / B(g(psj, 0, λ))     if b = 0,   (2)

where B is the (multivariate) Beta function, n(s,·,j,·) here denotes the vector of counts (n(s,·,j,v))v over outcomes v, and the Beta functions come from integrating out the φ variables, which are sampled from Dirichlet distributions.

4 Results

In this section, we show that BCM produces prediction accuracy comparable to or better than LDA for standard datasets. We also verify the interpretability of BCM through human subject experiments involving a task that requires an understanding of clusters within a dataset. We show statistically significant improvements in objective measures of task performance using prototypes produced by BCM, compared to the output of LDA.

[Figure 2: Prediction test accuracy reported for the Handwritten Digit [19] and 20 Newsgroups datasets [22]. (a) Accuracy and standard deviation with SVM for both LDA and BCM; (b) unsupervised accuracy of BCM for Handwritten Digit (top) and 20 Newsgroups (bottom); (c) sensitivity analysis for BCM hyperparameters on the Handwritten Digit dataset. Datasets were produced by randomly sampling 10 to 70 observations of each digit for the Handwritten Digit dataset, and 100–450 documents per document class for the 20 Newsgroups dataset. The Handwritten Digit pixel values (range from 0 to 255) were rescaled into seven bins (range from 0 to 6). Each 16-by-16 pixel picture was represented as a 1D vector of pixel values, with a length of 256. Both BCM and LDA were randomly initialized with the same seed (one half of the labels were incorrect and randomly mixed). The number of iterations was set at 1,000. S = 4 for 20 Newsgroups and S = 10 for Handwritten Digit; α = 0.01, λ = 1, c = 50, q = 0.8.]
Finally, we visually illustrate that the learned prototypes and subspaces present as meaningful feedback for the characterization of important aspects of the dataset.

4.1 BCM maintains prediction accuracy

We show that BCM output produces prediction accuracy comparable to or better than LDA, which uses the same mixture model (Section 3) to learn the underlying structure but does not learn explanations (i.e., prototypes and subspaces). We validate this through use of two standard datasets: Handwritten Digit [19] and 20 Newsgroups [22]. We use the implementation of LDA available from [27], which incorporates Gibbs sampling, the same inference technique used for BCM. Figure 2a depicts the ratio of correctly assigned cluster labels for BCM and LDA. In order to compare the prediction accuracy with LDA, the learned cluster labels are provided as features to a support vector machine (SVM) with a linear kernel, as is often done in the LDA literature on clustering [6]. The improved accuracy of BCM over LDA, as depicted in the figures, is explained in part by the ability of BCM to capture dependencies among features via prototypes, as described in Section 3. We also note that the prediction accuracy acquired by LDA when using the full 20 Newsgroups dataset (accuracy: 0.68 ± 0.01) matches that reported previously for this dataset when using a combined LDA and SVM approach [33]. Also, LDA accuracy for the full Handwritten Digit dataset (accuracy: 0.76 ± 0.017) is comparable to that produced by BCM using the subsampled dataset (70 samples per digit; accuracy: 0.77 ± 0.03). As indicated by Figure 2b, BCM achieves high unsupervised clustering accuracy as a function of iterations. We can compute this measure for BCM because each cluster is characterized by a prototype – a particular data point with a label in the given datasets. (Note that this is not possible for LDA.)
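The classification pipeline described here, mixture weights πi fed to a linear-kernel SVM, can be sketched with scikit-learn. The π vectors below are synthetic stand-ins generated to be sparse, as the text describes, since the inferred weights from Gibbs sampling are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for inferred mixture weights pi_i over S = 10 clusters,
# tilted toward each observation's true cluster (hypothetical data).
S, n_per_class = 10, 70
labels = np.repeat(np.arange(S), n_per_class)
noise = rng.dirichlet(0.1 * np.ones(S), size=labels.size)
pi = 0.7 * np.eye(S)[labels] + 0.3 * noise

# Use pi_i as the S-dimensional feature vector for a linear-kernel SVM
X_tr, X_te, y_tr, y_te = train_test_split(pi, labels, test_size=0.3, random_state=0)
acc = SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te)
```

Because each stand-in π places most of its mass on the true cluster's coordinate, a linear separator recovers the labels easily; the point of the sketch is only the plumbing from mixture weights to classifier features.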
We set α to prefer each πi to be sparse, so only one prototype generates each observation, and we use that prototype’s label for the observation. Sensitivity analysis in Figure 2c indicates that the additional parameters introduced to learn prototypes and subspaces (i.e., q, λ and c) are not too sensitive within the range of reasonable choices.

4.2 Verifying the interpretability of BCM

We verified the interpretability of BCM by performing human subject experiments that incorporated a task requiring an understanding of clusters within a dataset. This task required each participant to assign 16 recipes, described only by a set of required ingredients (recipe names and instructions were withheld), to one cluster representation out of a set of four to six. (This approach is similar to those used in prior work to measure comprehensibility [20].) We chose a recipe dataset¹ for this task because such a dataset requires clusters to be well-explained in order for subjects to be able to perform classification, but does not require special expertise or training.

[Figure 3: Web-interface for the human subject experiment.]

Our experiment incorporated a within-subjects design, which allowed for more powerful statistical testing and mitigated the effects of inter-participant variability. To account for possible learning effects, we blocked the BCM and LDA questions and balanced the assignment of participants into the two ordering groups: half of the subjects were presented with all eight BCM questions first, while the other half first saw the eight LDA questions. Twenty-four participants (10 females, 14 males, average age 27 years) performed the task, answering a total of 384 questions. Subjects were encouraged to answer the questions as quickly and accurately as possible, but were instructed to take a 5-second break every four questions in order to mitigate the potential effects of fatigue.
Cluster representations (i.e., explanations) from LDA were presented as the set of top ingredients for each recipe topic cluster. For BCM, we presented the ingredients of the prototype without the name of the recipe and without subspaces. The number of top ingredients shown for LDA was set to the number of ingredients in the corresponding BCM prototype, and we ran Gibbs sampling for LDA with different initializations until the ground truth clusters were visually identifiable. Using explanations from BCM, the average classification accuracy was 85.9%, which was statistically significantly higher (χ²(1, N = 24) = 12.15, p ≪ 0.001) than that of LDA (71.3%). For both LDA and BCM, each ground truth label was manually coded by two domain experts: the first author and one independent analyst (kappa coefficient: 1). These manually produced ground truth labels were identical to those that LDA and BCM predicted for each recipe. There was no statistically significant difference between BCM and LDA in the amount of time spent on each question (t(24) = 0.89, p = 0.37); the overall average was 32 seconds per question, with 3% more time spent on BCM than on LDA. Subjective evaluation using Likert-style questionnaires produced no statistically significant differences between reported preferences for LDA versus BCM. Interestingly, this suggests that participants did not have insight into their superior performance using output from BCM versus that from LDA.
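A χ² test of this form is easy to reproduce with scipy. The per-condition correct counts below are hypothetical reconstructions from the reported accuracies (85.9% and 71.3% of 192 questions per condition), not the study's raw data; with these counts the uncorrected statistic lands near the reported 12.15.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical correct/incorrect counts per condition, reconstructed from the
# reported accuracies over 192 questions each (not the raw study data).
correct = np.array([165, 137])                     # BCM: 165/192 ~ 85.9%; LDA: 137/192 ~ 71.3%
table = np.column_stack([correct, 192 - correct])  # 2x2 contingency table
chi2, pval, dof, expected = chi2_contingency(table, correction=False)
```

Passing `correction=False` disables the Yates continuity correction, matching a plain Pearson χ² statistic on the 2×2 table.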
¹Computer Cooking Contest: http://liris.cnrs.fr/ccc/ccc2014/

[Figure 4: Learned prototypes and subspaces for the Handwritten Digit (a) and Recipe (b) datasets. Panel (b) lists prototype recipes and their ingredients, with subspace ingredients highlighted in the original figure: Herbs and Tomato in Pasta (basil, garlic, Italian seasoning, oil, pasta, pepper, salt, tomato); Generic chili recipe (beer, chili powder, cumin, garlic, meat, oil, onion, pepper, salt, tomato); Microwave brownies (baking powder, sugar, butter, chocolate, chopped pecans, eggs, flour, salt, vanilla); Spiced punch (cinnamon stick, lemon juice, orange juice, pineapple juice, sugar, water, whole cloves).]

Overall, the experiment demonstrated substantial improvement to participants’ classification accuracy when using BCM compared with LDA, with no degradation to other objective or subjective measures of task performance.

4.3 Learning subspaces

Figure 4a illustrates the learned prototypes and subspaces as a function of sampling iterations for the Handwritten Digit dataset. For the later iterations, shown on the right of the figure, the BCM output effectively characterizes the important aspects of the data. In particular, the subspaces learned by BCM are pixels that define the digit for the cluster’s prototype. Interestingly, the subspace highlights the absence of writing in certain areas. This makes sense: for example, one can define a ‘7’ by showing the absence of pixels on the left of the image where the loop of a ‘9’ might otherwise appear. The pixels located where there is variability among digits of the same cluster are not part of the defining subspace for the cluster. Because we initialized randomly, in early iterations the subspaces tend to identify features common to the observations that were randomly initialized to the cluster. This is because ωs assigns higher likelihood to features with the most similar values across observations within a given cluster.
For example, most digits 'agree' (i.e., have the same zero pixel value) near the borders; thus, these are the first areas that are refined, as shown in Figure 4a. Over iterations, the third row of Figure 4a shows how BCM learns to separate the digits "3" and "5," which tend to share many pixel values in similar locations. Note that the sparsity of the subspaces can be customized via the hyperparameter q. Next, we show results for BCM using the Computer Cooking Contest dataset in Figure 4b. Each prototype consists of a set of ingredients for a recipe, and the subspace is a set of important ingredients that define that cluster, highlighted in red boxes. For instance, BCM found a "chili" cluster defined by the subspace "beer," "chili powder," and "tomato." A recipe called "Generic Chili Recipe" was chosen as the prototype for the cluster. (Note that beer is indeed a typical ingredient in chili recipes.)

5 Conclusion

The Bayesian Case Model provides a generative framework for case-based reasoning and prototype-based modeling. Its clusters come with natural explanations; namely, a prototype (a quintessential exemplar for the cluster) and a set of defining features for that cluster. We showed the quantitative advantages in prediction quality and interpretability resulting from the use of BCM. Exemplar-based modeling (nearest neighbors, case-based reasoning) has historical roots dating back to the beginning of artificial intelligence; this method offers a fresh perspective on this topic, and a new way of thinking about the balance of accuracy and interpretability in predictive modeling.
Expectation Backpropagation: Parameter-Free Training of Multilayer Neural Networks with Continuous or Discrete Weights

Daniel Soudry (1), Itay Hubara (2), Ron Meir (2)
(1) Department of Statistics, Columbia University
(2) Department of Electrical Engineering, Technion, Israel Institute of Technology
daniel.soudry@gmail.com, itayhubara@gmail.com, rmeir@ee.technion.ac.il

Abstract

Multilayer Neural Networks (MNNs) are commonly trained using gradient descent-based methods, such as BackPropagation (BP). Inference in probabilistic graphical models is often done using variational Bayes methods, such as Expectation Propagation (EP). We show how an EP based approach can also be used to train deterministic MNNs. Specifically, we approximate the posterior of the weights given the data using a "mean-field" factorized distribution, in an online setting. Using online EP and the central limit theorem we find an analytical approximation to the Bayes update of this posterior, as well as the resulting Bayes estimates of the weights and outputs. Despite a different origin, the resulting algorithm, Expectation BackPropagation (EBP), is very similar to BP in form and efficiency. However, it has several additional advantages: (1) Training is parameter-free, given initial conditions (prior) and the MNN architecture. This is useful for large-scale problems, where parameter tuning is a major challenge. (2) The weights can be restricted to have discrete values. This is especially useful for implementing trained MNNs in precision-limited hardware chips, thus improving their speed and energy efficiency by several orders of magnitude. We test the EBP algorithm numerically in eight binary text classification tasks. In all tasks, EBP outperforms (1) standard BP with the optimal constant learning rate, and (2) the previously reported state of the art.
Interestingly, EBP-trained MNNs with binary weights usually perform better than MNNs with continuous (real) weights, if we average the MNN output using the inferred posterior.

1 Introduction

Recently, Multilayer Neural Networks (MNNs, i.e., networks with more than a single layer of adjustable weights) with deep architecture have achieved state-of-the-art performance in various supervised learning tasks [11, 14, 8]. Such networks are often massive and require large computational and energetic resources. A dense, fast and energetically efficient hardware implementation of trained MNNs could be built if the weights were restricted to discrete values. For example, with binary weights, the chip in [13] can perform 10^12 operations per second with 1mW power efficiency. Such performance would enable the integration of massive MNNs into small and low-power electronic devices. Traditionally, MNNs are trained by minimizing some error function using BackPropagation (BP) or related gradient descent methods [15]. However, such an approach cannot be directly applied if the weights are restricted to binary values. Moreover, crude discretization of the weights is usually quite destructive [20]. Other methods have been suggested in the 90's (e.g., [23, 3, 18]), but it is not clear whether these approaches are scalable. The most efficient methods developed for training Single-layer Neural Networks (SNNs, i.e., networks with only a single layer of adjustable weights) with binary weights use approximate Bayesian inference, either implicitly [6, 1] or explicitly [24, 22]. In theory, given a prior, the Bayes estimate of the weights can be found from their posterior given the data. However, storing or updating the full posterior is usually intractable. To circumvent this problem, these previous works used a factorized "mean-field" form for the posterior of the weights given the data.
As explained in [22], this was done using a special case of the widely applicable Expectation Propagation (EP) algorithm [19], with an additional approximation that the fan-in of all neurons is large, so their inputs are approximately Gaussian. Thus, given an error function, one can analytically obtain the Bayes estimate of the weights or the outputs, using the factorized approximation of the posterior. However, to the best of our knowledge, it is still unknown whether such an approach could be generalized to MNNs, which are more relevant for practical applications. In this work we derive such a generalization, using similar approximations (section 3). The end result is the Expectation BackPropagation (EBP, section 4) algorithm for online training of MNNs where the weight values can be either continuous (i.e., real numbers) or discrete (e.g., ±1 binary). Notably, the training is parameter-free (with no learning rate), and insensitive to the magnitude of the input. This algorithm is very similar to BP. Like BP, it is very efficient in each update, having a linear computational complexity in the number of weights. We test the EBP algorithm (section 5) on various supervised learning tasks: eight high dimensional tasks of classifying text into one of two semantic classes, and one low dimensional medical discrimination task. Using MNNs with two or three weight layers, EBP outperforms both standard BP and the previously reported state of the art for these tasks [7]. Interestingly, the best performance of EBP is usually achieved using the Bayes estimate of the output of MNNs with binary weights. This estimate can be calculated analytically, or by averaging the output of several such MNNs, with weights sampled from the inferred posterior.

2 Preliminaries

General Notation. A non-capital boldfaced letter x denotes a column vector with components x_i; a boldfaced capital letter X denotes a matrix with components X_ij.
Also, if indexed, the components of x_l are denoted x_{i,l} and those of X_l are denoted X_{ij,l}. We denote by P(x) the probability distribution (in the discrete case) or density (in the continuous case) of a random variable X, with P(x|y) = P(x, y)/P(y), ⟨x⟩ = ∫ x P(x) dx, ⟨x|y⟩ = ∫ x P(x|y) dx, Cov(x, y) = ⟨xy⟩ − ⟨x⟩⟨y⟩ and Var(x) = Cov(x, x). Integration is exchanged with summation in the discrete case. For any condition A, we make use of I{A}, the indicator function (i.e., I{A} = 1 if A holds, and zero otherwise), and δ_{ij} = I{i = j}, Kronecker's delta function. If x ∼ N(µ, Σ) then it is Gaussian with mean µ and covariance matrix Σ, and we denote its density by N(x|µ, Σ). Furthermore, we use the cumulative distribution function Φ(x) = ∫_{−∞}^{x} N(u|0, 1) du.

Model. We consider a general feedforward Multilayer Neural Network (MNN) with connections between adjacent layers (Figure 2.1). For analytical simplicity, we focus here on deterministic binary (±1) neurons. However, the framework can be straightforwardly extended to other types of neurons (deterministic or stochastic). The MNN has L layers, where V_l is the width of the l-th layer, and W = {W_l}_{l=1}^{L} is the collection of V_l × V_{l−1} synaptic weight matrices which connect neuronal layers sequentially. The outputs of the layers are {v_l}_{l=0}^{L}, where v_0 is the input layer, {v_l}_{l=1}^{L−1} are the hidden layers and v_L is the output layer. In each layer,

v_l = sign(W_l v_{l−1}),    (2.1)

where each sign "activation function" (a neuronal layer) operates component-wise (i.e., ∀i: (sign(x))_i = sign(x_i)). The output of the network is therefore

v_L = g(v_0, W) = sign(W_L sign(W_{L−1} sign(· · · W_1 v_0))).    (2.2)

Figure 2.1: Our MNN model (Eq. 2.2). [figure omitted]

We assume that the weights are constrained to some set S, with the specific restrictions on each weight denoted by S_{ij,l}, so W_{ij,l} ∈ S_{ij,l} and W ∈ S. If S_{ij,l} = {0}, then we say that W_{ij,l} is "disconnected".
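To make the model concrete, the composition of sign layers in Eq. 2.2 can be sketched in a few lines of pure Python. The weights and layer sizes below are hypothetical, and mapping sign(0) to +1 is a convention we assume for illustration:

```python
def sign(x):
    # sign activation; we map 0 to +1 for definiteness (an assumed convention)
    return 1.0 if x >= 0 else -1.0

def forward(v0, weights):
    """Eq. 2.2: v_L = sign(W_L sign(W_{L-1} ... sign(W_1 v_0))).
    `weights` is a list of matrices, each given as a list of rows."""
    v = v0
    for W in weights:
        v = [sign(sum(w_ij * v_j for w_ij, v_j in zip(row, v))) for row in W]
    return v

# hypothetical 2 -> 2 -> 1 network with +-1 binary weights
W1 = [[1.0, -1.0], [-1.0, 1.0]]
W2 = [[1.0, 1.0]]
print(forward([1.0, -1.0], [W1, W2]))  # -> [1.0]
```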
For simplicity, we assume that in each layer the "fan-in" K_l = |{j | S_{ij,l} ≠ {0}}| is constant for all i. Biases can be optionally included in the standard way, by adding a constant output v_{0,l} = 1 to each layer.

Task. We examine a supervised classification learning task, in a Bayesian framework. We are given a fixed set of sequentially labeled data pairs D_N = {(x^(n), y^(n))}_{n=1}^{N} (so D_0 = ∅), where each x^(n) ∈ R^{V_0} is a data point, and each y^(n) is a label taken from a binary set Y ⊂ {−1, +1}^{V_L}. For brevity, we will sometimes suppress the sample index n, where it is clear from the context. As is common for supervised learning with MNNs, we assume that for all n the relation x^(n) → y^(n) can be represented by a MNN with known architecture (the 'hypothesis class') and unknown weights W ∈ S. This is a reasonable assumption, since a MNN can approximate any deterministic function given a sufficient number of neurons [12] (if L ≥ 2). Specifically, there exists some W* ∈ S such that y^(n) = g(x^(n), W*) (see Eq. 2.2). Our goals are: (1) estimate the most probable W* for this MNN, and (2) estimate the most probable y given some (possibly unseen) x.

3 Theory

In this section we explain how a specific learning algorithm for MNNs (described in section 4) arises from approximate (mean-field) Bayesian inference, used in this context (described in section 2).

3.1 Online Bayesian learning in MNNs

We approach this task within a Bayesian framework, where we assume some prior distribution on the weights, P(W|D_0). Our aim is to find P(W|D_N), the posterior probability for the configuration of the weights W, given the data. With this posterior, one can select the most probable weight configuration - the Maximum A Posteriori (MAP) weight estimate

W* = argmax_{W ∈ S} P(W|D_N),    (3.1)

minimizing the expected zero-one loss over the weights (I{W* ≠ W}). This weight estimate can be implemented in a single MNN, which can provide an estimate of the label y for (possibly unseen) data points x through y = g(x, W*).
Alternatively, one can aim to minimize the expected loss over the output, as more commonly done in the MNN literature. For example, if the aim is to reduce classification error then one should use the MAP output estimate

y* = argmax_{y ∈ Y} Σ_W I{g(x, W) = y} P(W|D_N),    (3.2)

which minimizes the zero-one loss (I{y* ≠ g(x, W)}) over the outputs. The resulting estimator does not generally have the form of a MNN (i.e., y = g(x, W) with W ∈ S), but can be approximated by averaging the output over many such MNNs with W values sampled from the posterior. Note that averaging the output of several MNNs is a common method to improve performance. We aim to find the posterior P(W|D_N) in an online setting, where samples arrive sequentially. After the n-th sample is received, the posterior is updated according to Bayes rule:

P(W|D_n) ∝ P(y^(n)|x^(n), W) P(W|D_{n−1}),    (3.3)

for n = 1, . . . , N. Note that the MNN is deterministic, so the likelihood (per data point) has the following simple form (a MNN with stochastic activation functions would have a "smoothed out" version of this):

P(y^(n)|x^(n), W) = I{g(x^(n), W) = y^(n)}.    (3.4)

Therefore, the Bayes update in Eq. 3.3 simply makes sure that P(W|D_n) = 0 in any "illegal" configuration (i.e., any W0 such that g(x^(k), W0) ≠ y^(k) for some 1 ≤ k ≤ n). In other words, the posterior is equal to the prior, restricted to the "legal" weight domain, and re-normalized appropriately. Unfortunately, this update is generally intractable for large networks, mainly because we need to store and update an exponential number of values for P(W|D_n). Therefore, some approximation is required.

3.2 Approximation 1: mean-field

In order to reduce computational complexity, instead of storing P(W|D_n), we will store its factorized ('mean-field') approximation P̂(W|D_n), for which

P̂(W|D_n) = Π_{i,j,l} P̂(W_{ij,l}|D_n),    (3.5)

where each factor must be normalized. Notably, it is easy to find the MAP estimate of the weights (Eq.
3.1) under this factorized approximation:

∀i, j, l:  W*_{ij,l} = argmax_{W_{ij,l} ∈ S_{ij,l}} P̂(W_{ij,l}|D_N).    (3.6)

The factors P̂(W_{ij,l}|D_n) can be found using a standard variational approach [5, 24]. For each n, we first perform the Bayes update in Eq. 3.3 with P̂(W|D_{n−1}) instead of P(W|D_{n−1}). Then, we project the resulting posterior onto the family of distributions factorized as in Eq. 3.5, by minimizing the reverse Kullback-Leibler divergence (similarly to EP [19, 22]). A straightforward calculation shows that the optimal factor is just a marginal of the posterior (appendix A, available in the supplementary material). Performing this marginalization on the Bayes update and re-arranging terms, we obtain a Bayes-like update to the marginals:

∀i, j, l:  P̂(W_{ij,l}|D_n) ∝ P̂(y^(n)|x^(n), W_{ij,l}, D_{n−1}) P̂(W_{ij,l}|D_{n−1}),    (3.7)

where

P̂(y^(n)|x^(n), W_{ij,l}, D_{n−1}) = Σ_{W′: W′_{ij,l} = W_{ij,l}} P(y^(n)|x^(n), W′) Π_{{k,r,m} ≠ {i,j,l}} P̂(W′_{kr,m}|D_{n−1})    (3.8)

is the marginal likelihood. Thus we can directly update the factor P̂(W_{ij,l}|D_n) in a single step. However, the last equation is still problematic, since it contains a generally intractable summation over an exponential number of values, and therefore requires simplification. For simplicity, from now on we replace any P̂ with P, in a slight abuse of notation (keeping in mind that the distributions are approximated).

3.3 Simplifying the marginal likelihood

In order to be able to use the update rule in Eq. 3.7, we must first calculate the marginal likelihood P(y^(n)|x^(n), W_{ij,l}, D_{n−1}) using Eq. 3.8. For brevity, we suppress the index n and the dependence on D_{n−1} and x, obtaining

P(y|W_{ij,l}) = Σ_{W′: W′_{ij,l} = W_{ij,l}} P(y|W′) Π_{{k,r,m} ≠ {i,j,l}} P(W′_{kr,m}),    (3.9)

where we recall that P(y|W′) is simply an indicator function (Eq. 3.4). Since, by assumption, P(y|W′) arises from a feed-forward MNN with input v_0 = x and output v_L = y, we can perform the summations in Eq. 3.9 in a more convenient way - layer by layer.
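Before the mean-field simplification, the exact update of Eqs. 3.3-3.4 (zero out the "illegal" configurations, then renormalize) can be illustrated by brute force on a tiny hypothetical weight space - here a single binary-weight neuron. This is only a didactic sketch of the intractable exact computation, not the EBP approximation:

```python
from itertools import product

def g(x, w):
    # single-layer "network" with one output: y = sign(w . x), sign(0) := +1
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

# Binary weight space for a 2-input neuron: only 4 configurations.
configs = list(product([-1, 1], repeat=2))
posterior = {w: 1.0 / len(configs) for w in configs}  # uniform prior P(W|D_0)

def bayes_update(posterior, x, y):
    # Eqs. 3.3-3.4: keep only configurations consistent with (x, y), renormalize
    new = {w: (p if g(x, w) == y else 0.0) for w, p in posterior.items()}
    z = sum(new.values())
    return {w: p / z for w, p in new.items()}

# One labeled example rules out every configuration that misclassifies it.
posterior = bayes_update(posterior, x=(1, -1), y=1)
```

Here only w = (-1, 1) misclassifies the example, so its mass drops to zero and the remaining three configurations share the mass equally. For real networks the number of configurations grows exponentially, which is exactly why the paper resorts to the factorized approximation.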
To do this, we define

P(v_m|v_{m−1}) = Σ_{W′_m} Π_{k=1}^{V_m} [ I{ v_{k,m} Σ_{r=1}^{V_{m−1}} v_{r,m−1} W′_{kr,m} > 0 } Π_{r=1}^{V_{m−1}} P(W′_{kr,m}) ]    (3.10)

and P(v_l|v_{l−1}, W_{ij,l}), which is defined identically to P(v_l|v_{l−1}), except that the summation is performed over all configurations in which W_{ij,l} is fixed (i.e., W′_l : W′_{ij,l} = W_{ij,l}) and we set P(W_{ij,l}) = 1. Now we can write recursively

P(v_1) = P(v_1|v_0 = x)
∀m ∈ {2, . . . , l−1}:  P(v_m) = Σ_{v_{m−1}} P(v_m|v_{m−1}) P(v_{m−1})    (3.11)
P(v_l|W_{ij,l}) = Σ_{v_{l−1}} P(v_l|v_{l−1}, W_{ij,l}) P(v_{l−1})    (3.12)
∀m ∈ {l+1, l+2, . . . , L}:  P(v_m|W_{ij,l}) = Σ_{v_{m−1}} P(v_m|v_{m−1}) P(v_{m−1}|W_{ij,l})    (3.13)

Thus we obtain the result of Eq. 3.9, through P(y|W_{ij,l}) = P(v_L = y|W_{ij,l}). However, this computation is still generally intractable, since all of the above summations (Eqs. 3.10-3.13) are still over an exponential number of values. Therefore, we need to make one additional approximation.

3.4 Approximation 2: large fan-in

Next we simplify the above summations (Eqs. 3.10-3.13) assuming that the neuronal fan-in is "large". We keep in mind that i, j and l are the specific indices of the fixed weight W_{ij,l}. All the other weights besides W_{ij,l} can be treated as independent random variables, due to the mean field approximation (Eq. 3.5). Therefore, in the limit of an infinite neuronal fan-in (∀m : K_m → ∞) we can use the Central Limit Theorem (CLT) and say that the normalized input to each neuronal layer is distributed according to a Gaussian distribution:

∀m:  u_m = W_m v_{m−1} / √K_m ∼ N(µ_m, Σ_m).    (3.14)

Since K_m is actually finite, this is only an approximation - though a quite common and effective one (e.g., [22]). Using the approximation in Eq. 3.14 together with v_m = sign(u_m) (Eq. 2.1) we can calculate (appendix B) the distribution of u_m and v_m sequentially for all the layers m ∈ {1, . . . , L}, for any given value of v_0 and W_{ij,l}. These effectively simplify the summations in Eqs. 3.10-3.13 using Gaussian integrals (appendix B).
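As a sanity check on the CLT approximation of Eq. 3.14, the sketch below compares the analytic mean and variance of u = Σ_r W_r v_r / √K for a single neuron (with independent ±1 weights drawn from a hypothetical factorized posterior) against Monte Carlo samples. All numbers here are made up for illustration:

```python
import random
random.seed(0)

K = 500
v = [random.choice([-1.0, 1.0]) for _ in range(K)]   # fixed input to the neuron
p = [random.uniform(0.2, 0.8) for _ in range(K)]     # hypothetical P(W_r = +1)

# Analytic moments of u = (1/sqrt(K)) * sum_r W_r v_r under the factorized posterior:
# <W_r> = 2 p_r - 1 and Var(W_r) = 1 - <W_r>^2 for +-1 weights.
mean_w = [2.0 * pr - 1.0 for pr in p]
var_w = [1.0 - m * m for m in mean_w]
mu = sum(m * vr for m, vr in zip(mean_w, v)) / K ** 0.5
sigma2 = sum(s * vr * vr for s, vr in zip(var_w, v)) / K

# Monte Carlo check: sample weight draws and compare the empirical moments.
samples = []
for _ in range(2000):
    u = sum((1.0 if random.random() < pr else -1.0) * vr for pr, vr in zip(p, v))
    samples.append(u / K ** 0.5)
emp_mu = sum(samples) / len(samples)
emp_var = sum((s - emp_mu) ** 2 for s in samples) / len(samples)
```

For a fan-in of 500 the empirical mean and variance closely match the analytic ones, which is the regime in which the paper's Gaussian approximation is accurate.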
At the end of this "forward pass" we will be able to find P(y|W_{ij,l}) = P(v_L = y|W_{ij,l}) for all i, j, l. This takes a polynomial number of steps (appendix B.3), instead of a direct calculation through Eqs. 3.11-3.13, which is exponentially hard. Using P(y|W_{ij,l}) and Eq. 3.7 we can now update the distribution P(W_{ij,l}). This immediately gives the Bayes estimate of the weights (Eq. 3.6) and outputs (Eq. 3.2). As we note in appendix B.3, the computational complexity of the forward pass is significantly lower if Σ_m is diagonal. This is true exactly only in special cases. For example, this is true if all hidden neurons have a fan-out of one - such as in a 2-layer network with a single output. However, in order to reduce the computational complexity in cases where Σ_m is not diagonal, we approximate the distribution of u_m with its factorized ('mean-field') version. Recall that the optimal factor is the marginal of the distribution (appendix A). Therefore, we can now find P(y|W_{ij,l}) easily (appendix B.1), as all the off-diagonal components in Σ_m are zero, so Σ_{kk′,m} = σ²_{k,m} δ_{kk′}. A direct calculation of P(v_L = y|W_{ij,l}) for every i, j, l would be computationally wasteful, since we would repeat similar calculations many times. In order to improve the algorithm's efficiency, we again exploit the fact that K_l is large. We approximate ln P(v_L = y|W_{ij,l}) using a Taylor expansion of W_{ij,l} around its mean, ⟨W_{ij,l}⟩, to first order in K_l^{−1/2}. The first order terms in this expansion can be calculated using backward propagation of the derivative terms

∆_{k,m} = ∂ ln P(v_L = y) / ∂µ_{k,m},    (3.15)

similarly to the BP algorithm (appendix C). Thus, after a forward pass for m = 1, . . . , L, and a backward pass for l = L, . . . , 1, we obtain P(v_L = y|W_{ij,l}) for all W_{ij,l} and update P(W_{ij,l}).

4 The Expectation Backpropagation Algorithm

Using our results we can efficiently update the posterior distribution P(W_{ij,l}|D_n) for all the weights with O(|W|) operations, according to Eq. 3.7.
Next, we summarize the resulting general algorithm - the Expectation BackPropagation (EBP) algorithm. In appendix D, we exemplify how to apply the algorithm in the special cases of MNNs with binary, ternary or real (continuous) weights. Similarly to the original BP algorithm (see review in [16]), given input x and desired output y, first we perform a forward pass to calculate the mean output ⟨v_l⟩ for each layer. Then we perform a backward pass to update P(W_{ij,l}|D_n) for all the weights.

Forward pass. In this pass we perform the forward calculation of probabilities, as in Eq. 3.11. Recall that ⟨W_{kr,m}⟩ is the mean of the posterior distribution P(W_{kr,m}|D_n). We first initialize the MNN input ⟨v_{k,0}⟩ = x_k for all k, and calculate recursively the following quantities for m = 1, . . . , L and all k:

µ_{k,m} = (1/√K_m) Σ_{r=1}^{V_{m−1}} ⟨W_{kr,m}⟩ ⟨v_{r,m−1}⟩ ;  ⟨v_{k,m}⟩ = 2Φ(µ_{k,m}/σ_{k,m}) − 1,    (4.1)

σ²_{k,m} = (1/K_m) Σ_{r=1}^{V_{m−1}} [ ⟨W²_{kr,m}⟩ ( δ_{m,1} (⟨v_{r,m−1}⟩² − 1) + 1 ) − ⟨W_{kr,m}⟩² ⟨v_{r,m−1}⟩² ],    (4.2)

where µ_m and σ²_m are, respectively, the mean and variance of u_m, the input of layer m (Eq. 3.14), and ⟨v_m⟩ is the resulting mean of the output of layer m.

Backward pass. In this pass we perform the Bayes update of the posterior (Eq. 3.7) using a Taylor expansion. Recall Eq. 3.15. We first initialize, for all i,

∆_{i,L} = y_i N(0|µ_{i,L}, σ²_{i,L}) / Φ(y_i µ_{i,L}/σ_{i,L}).    (4.3)

(Due to numerical inaccuracy, calculating ∆_{i,L} using Eq. 4.3 can generate nonsensical values (±∞, NaN) if |µ_{i,L}/σ_{i,L}| becomes too large. If this happens, we use instead the asymptotic form in that limit, ∆_{i,L} = −(µ_{i,L}/σ²_{i,L}) √K_L I{y_i µ_{i,L} < 0}.) Then, for l = L, . . . , 1 and all i, j, we calculate

∆_{i,l−1} = (2/√K_l) N(0|µ_{i,l−1}, σ²_{i,l−1}) Σ_{j=1}^{V_l} ⟨W_{ji,l}⟩ ∆_{j,l},    (4.4)

ln P(W_{ij,l}|D_n) = ln P(W_{ij,l}|D_{n−1}) + (1/√K_l) W_{ij,l} ∆_{i,l} ⟨v_{j,l−1}⟩ + C,    (4.5)

where C is some unimportant constant (which does not depend on W_{ij,l}).

Output. Using the posterior distribution, the optimal configuration can be immediately found through the MAP weight estimate (Eq. 3.6):

∀i, j, l:  W*_{ij,l} = argmax_{W_{ij,l} ∈ S_{ij,l}} ln P(W_{ij,l}|D_n).    (4.6)

The output of a MNN implementing these weights would be g(x, W*) (see Eq. 2.2). We define this to be the 'deterministic' EBP output (EBP-D). Additionally, the MAP output (Eq. 3.2) can be calculated directly,

y* = argmax_{y ∈ Y} ln P(v_L = y) = argmax_{y ∈ Y} Σ_k ln[ (1 + ⟨v_{k,L}⟩) / (1 − ⟨v_{k,L}⟩) ] y_k,    (4.7)

using ⟨v_{k,L}⟩ from Eq. 4.1, or as an ensemble average over the outputs of all possible MNNs with the weights W_{ij,l} sampled from the estimated posterior P(W_{ij,l}|D_n). We define the output in Eq. 4.7 to be the Probabilistic EBP output (EBP-P). Note that in the case of a single output, Y = {−1, 1}, so this output simplifies to y = sign(⟨v_{k,L}⟩).

5 Numerical Experiments

We use several high dimensional text datasets to assess the performance of the EBP algorithm in a supervised binary classification task. The datasets (taken from [7]) contain eight binary tasks from four dataset collections: 'Amazon (sentiment)', '20 Newsgroups', 'Reuters' and 'Spam or Ham'. Data specification (N = #examples and M = #features) and results (for each algorithm) are given in Table 1. More details on the data, including data extraction and labeling, can be found in [7]. We test the performance of EBP on MNNs with a 2-layer architecture of M → 120 → 1, and bias weights. We examine two special cases: (1) MNNs with real weights, and (2) MNNs with binary weights (and real bias). Recall that the motivation for the latter (section 1) is that they can be efficiently implemented in hardware (real bias has negligible costs). Recall also that for each type of MNN, the algorithm gives two outputs - EBP-D (deterministic) and EBP-P (probabilistic), as explained near Eqs. 4.6-4.7. To evaluate our results we compare EBP to (1) the AROW algorithm, which reports state-of-the-art results on the tested datasets [7], and (2) the traditional BackPropagation (BP) algorithm, used to train an M → 120 → 1 MNN with real weights.
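The forward pass of Eqs. 4.1-4.2 can be sketched as follows. The layer sizes and posterior moments below are hypothetical, and the small variance floor is our own numerical guard, not part of the paper's derivation:

```python
import math

def Phi(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ebp_forward(x, mean_W, sq_W):
    """Sketch of Eqs. 4.1-4.2: propagate input means mu_{k,m}, variances
    sigma^2_{k,m} and mean outputs <v_{k,m}> layer by layer.
    mean_W[m][k][r] = <W_kr,m>, sq_W[m][k][r] = <W_kr,m^2>."""
    v = list(x)
    all_mu, all_sig2 = [], []
    for m, (Wm, W2m) in enumerate(zip(mean_W, sq_W)):
        K = len(Wm[0])                       # fan-in of this layer
        mu, sig2, v_out = [], [], []
        for row, row2 in zip(Wm, W2m):
            mu_k = sum(w * vr for w, vr in zip(row, v)) / math.sqrt(K)
            if m == 0:
                # delta_{m,1} term of Eq. 4.2: the input layer is deterministic
                s2 = sum((w2 - w * w) * vr * vr
                         for w, w2, vr in zip(row, row2, v)) / K
            else:
                s2 = sum(w2 - w * w * vr * vr
                         for w, w2, vr in zip(row, row2, v)) / K
            s2 = max(s2, 1e-12)              # guard against zero variance (ours)
            mu.append(mu_k)
            sig2.append(s2)
            v_out.append(2.0 * Phi(mu_k / math.sqrt(s2)) - 1.0)  # Eq. 4.1, right
        v = v_out
        all_mu.append(mu)
        all_sig2.append(sig2)
    return v, all_mu, all_sig2

# hypothetical 2 -> 2 -> 1 network with +-1 binary weights, so <W^2> = 1
mean_W = [[[0.6, -0.2], [0.1, 0.8]], [[0.5, -0.5]]]
sq_W = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0]]]
v_out, mu, sig2 = ebp_forward([1.0, -1.0], mean_W, sq_W)
```

The returned mu and sig2 are exactly the quantities the backward pass (Eqs. 4.3-4.5) would then consume.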
In the latter case, we used both Cross Entropy (CE) and Mean Square Error (MSE) as loss functions. On each dataset we report the results of BP with the loss function which achieved the minimal error. We use a simple parameter scan for both AROW (regularization parameter) and traditional BP (learning rate parameter). Only the results with the optimal parameters (i.e., achieving the best results) are reported in Table 1. The optimal parameters found were never at the edges of the scanned field. Lastly, to demonstrate the destructive effect of naive quantization, we also report the performance of the BP-trained MNNs after all the weights (except the bias) were clipped using a sign function. During training, the datasets were repeatedly presented in three epochs (in all algorithms; additional epochs did not reduce test error). On each epoch the examples were shuffled in random order for BP and EBP (AROW determines its own order). The test results are calculated after each epoch using 8-fold cross-validation, similarly to [7]. Empirically, EBP running time is similar to that of BP with real weights, and twice as slow with binary weights. For additional implementation details, see appendix E.1. The code is available on the authors' website. The minimal values achieved over all three epochs are summarized in Table 1. As can be seen, in all datasets EBP-P performs better than AROW, which performs better than BP. Also, EBP-P usually performs better with binary weights. In appendix E.2 we show that this ranking remains true even if the fan-in is small (in contrast to our assumptions), or if a deeper 3-layer architecture is used.
Table 1: Data specification, and test errors (with 8-fold cross-validation). Best results are boldfaced.

Dataset | #Examples | #Features | Real EBP-D | Real EBP-P | Binary EBP-D | Binary EBP-P | AROW | BP | Clipped BP
Reuters news I6 | 2000 | 11463 | 14.5% | 11.35% | 21.7% | 9.95% | 11.72% | 13.3% | 26.15%
Reuters news I8 | 2000 | 12167 | 15.65% | 15.25% | 23.15% | 16.4% | 15.27% | 18.2% | 26.4%
Spam or ham d0 | 2500 | 26580 | 1.28% | 1.11% | 7.93% | 0.76% | 1.12% | 1.32% | 7.97%
Spam or ham d1 | 2500 | 27523 | 1.0% | 0.96% | 3.85% | 0.96% | 1.4% | 1.36% | 7.33%
20News group comp vs HW | 1943 | 29409 | 5.06% | 4.96% | 7.54% | 4.44% | 5.79% | 7.02% | 13.07%
20News group elec vs med | 1971 | 38699 | 3.36% | 3.15% | 6.0% | 2.08% | 2.74% | 3.96% | 14.23%
Amazon Book reviews | 3880 | 221972 | 2.14% | 2.09% | 2.45% | 2.01% | 2.24% | 2.96% | 3.81%
Amazon DVD reviews | 3880 | 238739 | 2.06% | 2.14% | 5.72% | 2.27% | 2.63% | 2.94% | 5.15%

6 Discussion

Motivated by the recent success of MNNs, we developed the Expectation BackPropagation algorithm (EBP - see section 4) for approximate Bayesian inference of the synaptic weights of a MNN. Given a supervised classification task with labeled training data and a prior over the weights, this deterministic online algorithm can be used to train deterministic MNNs (Eq. 2.2) without the need to tune learning parameters (e.g., the learning rate). Furthermore, each synaptic weight can be restricted to some set, which can be either finite (e.g., binary numbers) or infinite (e.g., real numbers). This opens the possibility of implementing trained MNNs in power-efficient hardware devices requiring limited parameter precision. This algorithm is essentially an analytic approximation to the intractable Bayes calculation of the posterior distribution of the weights after the arrival of a new data point. To simplify the intractable Bayes update rule we use several approximations. First, we approximate the posterior using a product of its marginals - a 'mean field' approximation. Second, we assume the neuronal layers have a large fan-in, so we can approximate them as Gaussian.
After these two approximations, each Bayes update can be tractably calculated in polynomial time in the size of the MNN. However, in order to further improve computational complexity (to O(|W|) in each step, like BP), we make two additional approximations. First, we use the large fan-in to perform a first order expansion. Second, we optionally perform a second 'mean field' approximation to the distribution of the neuronal inputs (this approximation is not required if all neurons in the MNN have a fan-out of one). Finally, after we obtain the approximated posterior using the algorithm, the Bayes estimates of the most probable weights and outputs are found analytically. Previous approaches to obtain these Bayes estimates were too limited for our purposes. The Monte Carlo approach [21] achieves state-of-the-art performance for small MNNs [26], but does not scale well [25]. The Laplace approximation [17] and variational Bayes [10, 2, 9] based methods require real-valued weights, tuning of the learning rate parameter, and stochastic neurons (to "smooth" the likelihood). Previous EP [24, 22] and message passing [6, 1] (a special case of EP [5]) based methods were derived only for SNNs. In contrast, EBP allows parameter-free and scalable training of various types of MNNs (deterministic or stochastic) with discrete (e.g., binary) or continuous weights. In appendix F, we see that for continuous weights EBP is almost identical to standard BP with a specific choice of activation function s(x) = 2Φ(x) − 1, CE loss and learning rate η = 1. The only difference is that the input is normalized by its standard deviation (Eq. 4.1, right), which depends on the weights and inputs (Eq. 4.2). This re-scaling makes the learning algorithm invariant to amplitude changes in the neuronal input. This results from the same invariance of the sign activation function. Note that in the standard BP algorithm the performance is directly affected by the amplitude of the input, so it is a recommended practice to re-scale it in pre-processing [16].
We numerically evaluated the algorithm on binary classification tasks using MNNs with two or three synaptic layers. On all data sets and MNNs, EBP performs better than standard BP with the optimal constant learning rate, and even achieves state-of-the-art results in comparison to [7]. Surprisingly, EBP usually performs best when used to train binary MNNs. As suggested by a reviewer, this could be related to the type of problems examined here. Text classification tasks have large, sparse input spaces (bag of words), in which the presence or absence of features (words) is more important than their real values (frequencies). Therefore, (distributions over) binary weights and a threshold activation function may work well. To obtain such good performance from binary MNNs, one must average the output over the inferred (approximate) posterior of the weights. The EBP-P output of the algorithm computes this average analytically. In hardware, this output could be realized by averaging the outputs of several binary MNNs whose weights are sampled from P(Wij,l | Dn). This can be done efficiently (Appendix G).

Our numerical testing mainly focused on high-dimensional text classification tasks, where shallow architectures seem to work quite well. In other domains, such as vision [14] and speech [8], deep architectures achieve state-of-the-art performance. Such deep MNNs usually require considerable fine-tuning and additional 'tricks' such as unsupervised pre-training [8], weight sharing [14] or momentum (see footnote 6). Integrating such methods into EBP and using it to train deep MNNs is a promising direction for future work. Another important and rather straightforward generalization of the algorithm is to use activation functions other than sign(·). This is particularly important for the last layer, where a linear activation function would be useful for regression tasks, and joint activation functions (see footnote 7) would be useful for multi-class tasks [4].
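The hardware realization mentioned above, averaging several binary networks with weights sampled from the posterior, can be sketched as a Monte Carlo stand-in for the analytic EBP-P average (`sample_weights` and `forward` are hypothetical callables, not the paper's API):

```python
def ensemble_output(sample_weights, forward, x, m=100):
    """Approximate the Bayes output by averaging the outputs of m
    binary MNNs whose weight configurations are drawn independently
    from the (approximate) posterior over weights."""
    return sum(forward(sample_weights(), x) for _ in range(m)) / m
```

As m grows, this average converges to the posterior-expected output that EBP-P computes in closed form.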
Acknowledgments

The authors are grateful to C. Baldassi, A. Braunstein and R. Zecchina for helpful discussions and to A. Hallak, T. Knafo and U. Sümbül for reviewing parts of this manuscript. The research was partially funded by the Technion V.P.R. fund, by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI), and by the Gruss Lipper Charitable Foundation.

Footnotes:
5. This approximation is not required if all neurons in the MNN have a fan-out of one.
6. Momentum departs from the online framework considered here, since it requires two samples in each update.
7. I.e., activation functions for which (f(x))_i ≠ f(x_i), such as softmax or argmax.

References
[1] C. Baldassi, A. Braunstein, N. Brunel, and R. Zecchina. Efficient supervised learning in networks with binary synapses. PNAS, 104(26):11079-84, 2007.
[2] D. Barber and C. M. Bishop. Ensemble learning for multi-layer networks. In Advances in Neural Information Processing Systems, pages 395-401, 1998.
[3] R. Battiti and G. Tecchiolli. Training neural nets with the reactive tabu search. IEEE Transactions on Neural Networks, 6(5):1185-200, 1995.
[4] C. M. Bishop. Neural Networks for Pattern Recognition. 1995.
[5] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, Singapore, 2006.
[6] A. Braunstein and R. Zecchina. Learning by message passing in networks of discrete synapses. Physical Review Letters, 96(3), 2006.
[7] K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weight vectors. Machine Learning, 91(2):155-187, March 2013.
[8] G. E. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):30-42, 2012.
[9] A. Graves. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems, pages 1-9, 2011.
[10] G. E. Hinton and D. Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In COLT '93, 1993.
[11] G. E. Hinton, L. Deng, D. Yu, G. E. Dahl, A. R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, and B. Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97, 2012.
[12] K. Hornik. Approximation capabilities of multilayer feedforward networks. Neural Networks, 4(2):251-257, 1991.
[13] R. Karakiewicz, R. Genov, and G. Cauwenberghs. 1.1 TMACS/mW fine-grained stochastic resonant charge-recycling array processor. IEEE Sensors Journal, 12(4):785-792, 2012.
[14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[15] Y. LeCun and L. Bottou. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[16] Y. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller. Efficient BackProp. In G. Montavon, G. B. Orr, and K.-R. Müller, editors, Neural Networks: Tricks of the Trade. Springer, Heidelberg, 2nd edition, 2012.
[17] D. J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448-472, 1992.
[18] E. Mayoraz and F. Aviolat. Constructive training methods for feedforward neural networks with binary weights. International Journal of Neural Systems, 7(2):149-66, 1996.
[19] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI, pages 362-369, 2001.
[20] P. Moerland and E. Fiesler. Neural network adaptations to hardware implementations. In Handbook of Neural Computation. Oxford University Press, New York, 1997.
[21] R. M. Neal. Bayesian Learning for Neural Networks. PhD thesis, University of Toronto, 1995.
[22] F. Ribeiro and M. Opper. Expectation propagation with factorizing distributions: a Gaussian approximation and performance results for simple models. Neural Computation, 23(4):1047-69, April 2011.
[23] D. Saad and E. Marom. Training feed forward nets with binary weights via a modified CHIR algorithm. Complex Systems, 4:573-586, 1990.
[24] S. A. Solla and O. Winther. Optimal perceptron learning: an online Bayesian approach. In On-Line Learning in Neural Networks. Cambridge University Press, Cambridge, 1998.
[25] N. Srivastava and G. E. Hinton. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958, 2014.
[26] H. Y. Xiong, Y. Barash, and B. J. Frey. Bayesian prediction of tissue-regulated splicing using RNA sequence and cellular context. Bioinformatics, 27(18):2554-62, October 2011.
Advances in Learning Bayesian Networks of Bounded Treewidth

Siqi Nie, Rensselaer Polytechnic Institute, Troy, NY, USA. nies@rpi.edu
Denis D. Mauá, University of São Paulo, São Paulo, Brazil. denis.maua@usp.br
Cassio P. de Campos, Queen's University Belfast, Belfast, UK. c.decampos@qub.ac.uk
Qiang Ji, Rensselaer Polytechnic Institute, Troy, NY, USA. qji@ecse.rpi.edu

Abstract

This work presents novel algorithms for learning Bayesian networks of bounded treewidth. Both exact and approximate methods are developed. The exact method combines mixed-integer linear programming formulations for structure learning and treewidth computation. The approximate method consists of sampling k-trees (maximal graphs of treewidth k), and subsequently selecting, exactly or approximately, the best structure whose moral graph is a subgraph of that k-tree. The approaches are empirically compared to each other and to state-of-the-art methods on a collection of public data sets with up to 100 variables.

1 Introduction

Bayesian networks are graphical models widely used to represent joint probability distributions over complex multivariate domains. A Bayesian network comprises two parts: a directed acyclic graph (the structure) describing the relationships among the variables in the model, and a collection of conditional probability tables from which the joint distribution can be reconstructed. As the number of variables in the model increases, specifying the underlying structure becomes a daunting task, and practitioners often resort to learning Bayesian networks directly from data. Here, learning a Bayesian network refers to inferring its structure from data, a task known to be NP-hard [9]. Learned Bayesian networks are commonly used for drawing inferences such as querying the posterior probability of some variable given some evidence or finding the mode of the posterior joint distribution.
Those inferences are NP-hard to compute even approximately [23], and all known exact and provably good algorithms have worst-case time complexity exponential in the treewidth, a measure of the tree-likeness of the structure. In fact, under widely believed assumptions from complexity theory, time complexity exponential in the treewidth is inevitable for any algorithm that performs exact inference [7, 20]. Thus, learning networks of small treewidth is essential if one wishes to ensure reliable and efficient inference. This is particularly important in the presence of missing data, when learning becomes intertwined with inference [16]. There is a second reason to limit the treewidth: previous empirical results [15, 22] suggest that bounding the treewidth improves model performance on unseen data, hence improving the model's generalization ability.

In this paper we present two novel ideas for score-based Bayesian network learning with a hard constraint on treewidth. The first one is a mixed-integer linear programming (MILP) formulation of the problem (Section 3) that builds on existing MILP formulations for unconstrained learning of Bayesian networks [10, 11] and for computing the treewidth of a graph [17]. Unlike the MILP formulation of Parviainen et al. [21], the MILP problem we generate is of polynomial size in the number of variables, and dispenses with cutting-plane techniques. This makes for a clean and succinct formulation that can be solved with a single call to any MILP optimizer. We provide empirical evidence (in Section 5) suggesting that our approach is not only simpler but often faster. It also outperforms the dynamic programming approach of Korhonen and Parviainen [19].
Since linear programming relaxations are used for solving the MILP problem, any MILP formulation can be used to provide approximate solutions and error estimates in an anytime fashion (i.e., the method can be stopped at any time during the computation with a feasible solution whose quality monotonically improves with time). However, the MILP formulations (both ours and that of Parviainen et al. [21]) cannot cope with very large domains, even if we settle for approximate solutions. In order to deal with large domains, we devise (in Section 4) an approximate method based on a uniform sampling of k-trees (maximal chordal graphs of treewidth k), which is achieved by using a fast, computable bijection between k-trees and Dandelion codes [6]. For each sampled k-tree, we either run an exact algorithm similar to the one in [19] (when computationally appealing) to learn the score-maximizing network whose moral graph is a subgraph of that k-tree, or we resort to a more efficient method that takes partial variable orderings uniformly at random from a (relatively small) space of orderings that are compatible with the k-tree. We show empirically (in Section 5) that our sampling-based methods are very effective in learning close-to-optimal structures and scale up to large domains. We conclude in Section 6 and point out possible future work. We begin with some background knowledge and a literature review on learning Bayesian networks (Section 2).

2 Bayesian Network Structure Learning

Let N be {1, ..., n} and consider a finite set X = {X_i : i ∈ N} of categorical random variables X_i taking values in finite sets X_i. A Bayesian network is a triple (X, G, θ), where G = (N, A) is a directed acyclic graph (DAG) whose nodes are in one-to-one correspondence with the variables in X, and θ = {θ_i(x_i, x_{G_i})} is a set of numerical parameters specifying (conditional) probabilities θ_i(x_i, x_{G_i}) = Pr(x_i | x_{G_i}), for every node i in G, value x_i of X_i and assignment x_{G_i} to the parents G_i of X_i in G.
The structure G of the network encodes a set of stochastic independence assessments among the variables in X, called the graphical Markov conditions: every variable X_i is conditionally independent of its nondescendant nonparents given its parents. As a consequence, a Bayesian network uniquely defines a joint probability distribution over X as the product of its parameters. As is common in the literature, we formulate the problem of Bayesian network learning as an optimization over DAG structures guided by a score function. We only require that (i) the score function can be written as a sum of local score functions s_i(G_i), i ∈ N, each depending only on the corresponding parent set G_i and on the data, and (ii) the local score functions can be efficiently computed and stored [13, 14]. These properties are satisfied by commonly used score functions such as the Bayesian Dirichlet equivalent uniform score [18]. We assume the reader is familiar with graph-theoretic concepts such as polytrees, chordal graphs, chordalizations, moral graphs, moralizations, topological orders, (perfect) elimination orders, fill-in edges and clique-trees. References [1] and [20] are good starting points on the topic.

Most score functions penalize model complexity in order to avoid overfitting. The way scores penalize model complexity generally leads to learning structures of bounded in-degree, but even bounded in-degree graphs can have high treewidth (for instance, directed square grids have treewidth equal to the square root of the number of nodes, yet have maximum in-degree equal to two), which brings difficulty to subsequent probabilistic inferences with the model [5]. The goal of this work is to develop methods that search for

G* = argmax_{G ∈ G_{N,k}} Σ_{i∈N} s_i(G_i),    (1)

where G_{N,k} is the set of all DAGs with node set N and treewidth at most k. Dasgupta proved NP-hardness of learning polytrees of bounded treewidth when the score is the data log-likelihood [12].
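The factorization mentioned above (the joint distribution as the product of the network's parameters) in a minimal, hypothetical two-node example A → B:

```python
# Hypothetical two-node network A -> B with binary variables.
theta_A = {0: 0.3, 1: 0.7}                      # Pr(A)
theta_B = {(0, 0): 0.9, (1, 0): 0.1,            # Pr(B | A)
           (0, 1): 0.2, (1, 1): 0.8}

def joint(a, b):
    """Joint probability as the product of the local parameters."""
    return theta_A[a] * theta_B[(b, a)]

# The joint sums to one over all assignments.
total = sum(joint(a, b) for a in (0, 1) for b in (0, 1))
```

Here the parent set of B is {A} and the parent set of A is empty, so the product has one factor per node, exactly as in the definition above.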
Korhonen and Parviainen [19] adapted Srebro's complexity result for Markov networks [25] to show that learning Bayesian networks of treewidth two or greater is NP-hard. In comparison to the unconstrained problem, few algorithms have been designed for the bounded-treewidth case. Korhonen and Parviainen [19] developed an exact algorithm based on dynamic programming that learns optimal n-node structures of treewidth at most w in time 3^n n^{w+O(1)}, which is above the 2^n n^{O(1)} time required by the best worst-case algorithms for learning optimal Bayesian networks with no constraint on treewidth [24]. We shall refer to their method in the rest of this paper as K&P (after the authors' initials). Elidan and Gould [15] combined several heuristics for treewidth computation and network structure learning in order to design approximate methods. Others have addressed the similar (but not equivalent) problem of learning undirected models of bounded treewidth [2, 8, 25]. Very recently, there seems to have been an increase of interest in the topic. Berg et al. [4] showed that the problem of learning bounded-treewidth Bayesian networks can be reduced to a weighted maximum satisfiability problem, and subsequently solved by weighted MAX-SAT solvers. They report experimental results showing that their approach outperforms K&P. In the same year, Parviainen et al. [21] showed that the problem can be reduced to a MILP. Their reduced MILP problem, however, has exponentially many constraints in the number of variables. Following the work of Cussens [10], the authors avoid creating such large programs by a cutting-plane generation mechanism, which iteratively includes a new constraint while the optimum has not been found. The generation of each new constraint (cutting plane) requires solving another MILP problem. We shall refer to their method from now on as TWILP (after the name of the software package the authors provide).
3 A Mixed Integer Linear Programming Approach

The first contribution of this work is the MILP formulation we design to solve the problem of structure learning with bounded treewidth. MILP formulations have proven very effective for learning Bayesian networks with no constraint on treewidth [3, 10], surpassing other attempts on a range of data sets. The formulation combines the MILP formulation for structure learning in [11] with the MILP formulation presented in [17] for computing the treewidth of an undirected graph. There are, however, notable differences: for instance, we do not enforce a linear elimination ordering of the nodes; instead we allow for partial orders, which capture the equivalence between different orders in terms of minimizing treewidth, and we represent such partial orders by real numbers instead of integers. We avoid sophisticated techniques for solving MILP problems, such as constraint generation [3, 10], which allows for an easy implementation and parallelization (MILP optimizers such as CPLEX can take advantage of that).

For each node i in N, let P_i be the collection of allowed parent sets for that node (these sets can be specified manually by the user or simply defined as the subsets of N \ {i} with cardinality less than a given bound). We denote an element of P_i as P_it, with t = 1, ..., r_i and r_i = |P_i| (hence P_it ⊂ N). We will call a DAG valid if its node set is N and the parent set of each node i in it is an element of P_i. The following MILP problem can be used to find valid DAGs whose treewidth is at most w:

Maximize  Σ_{i,t} p_it · s_i(P_it)    (2)

subject to

  Σ_{j∈N} y_ij ≤ w,                 ∀i ∈ N,                                   (3a)
  (n+1) · y_ij ≤ n + z_j − z_i,     ∀i, j ∈ N,                                (3b)
  y_ij + y_ik − y_jk − y_kj ≤ 1,    ∀i, j, k ∈ N,                             (3c)
  Σ_t p_it = 1,                     ∀i ∈ N,                                   (4a)
  (n+1) · p_it ≤ n + v_j − v_i,     ∀i ∈ N, ∀t ∈ {1,...,r_i}, ∀j ∈ P_it,      (4b)
  p_it ≤ y_ij + y_ji,               ∀i ∈ N, ∀t ∈ {1,...,r_i}, ∀j ∈ P_it,      (4c)
  p_it ≤ y_jk + y_kj,               ∀i ∈ N, ∀t ∈ {1,...,r_i}, ∀j, k ∈ P_it,   (4d)
  z_i ∈ [0, n], v_i ∈ [0, n], y_ij ∈ {0, 1}, p_it ∈ {0, 1},  ∀i, j ∈ N, ∀t ∈ {1,...,r_i}.  (5)

The variables p_it define which parent sets are chosen, while the variables v_i guarantee that those choices respect a linear ordering of the variables, and hence that the corresponding directed graph is acyclic. The variables y_ij specify a chordal moralization of this DAG with arcs respecting an elimination ordering of width at most w, which is given by the variables z_i. The following result shows that any solution to the MILP above can be decoded into a chordal graph of bounded treewidth and a suitable perfect elimination ordering.

Lemma 1. Let z_i, y_ij, i, j ∈ N, be variables satisfying Constraints (3) and (5). Then the undirected graph M = (N, E), where E = {ij ∈ N × N : y_ij = 1 or y_ji = 1}, is chordal and has treewidth at most w. Any elimination ordering that extends the weak ordering induced by the z_i is perfect for M.

The graph M is used in the formulation as a template for the moral graph of a valid DAG:

Lemma 2. Let v_i, p_it, i ∈ N, t = 1, ..., r_i, be variables satisfying Constraints (4) and (5). Then the directed graph G = (N, A), where G_i = {j : p_it = 1 and j ∈ P_it}, is acyclic and valid. Moreover, the moral graph of G is a subgraph of the graph M defined in the previous lemma.

The previous lemmas suffice to show that the solutions of the MILP problem can be decoded into valid DAGs of bounded treewidth:

Theorem 1. Any solution to the MILP can be decoded into a valid DAG of treewidth less than or equal to w. In particular, the decoding of an optimal solution solves (1).

The MILP formulation can be directly fed into any off-the-shelf MILP optimizer. Most MILP optimizers (e.g., CPLEX) can be stopped prematurely while providing an incumbent solution and an error estimate. Moreover, given enough resources (time and memory), these solvers always find optimal solutions.
Hence, the MILP formulation provides an anytime algorithm that can be used to produce both exact and approximate solutions. The bottleneck in terms of efficiency of the MILP construction lies in the specification of Constraints (3c) and (4d), as there are Θ(n^3) such constraints. Thus, as n increases, even the linear relaxations of the MILP problem become hard to solve. We demonstrate empirically in Section 5 that the quality of solutions found by the MILP approach in a reasonable amount of time degrades quickly as the number of variables exceeds a few dozen. In the next section, we present an approximate algorithm that overcomes these limitations and handles large domains.

4 A Sampling-Based Approach

A successful method for learning Bayesian networks of unconstrained treewidth on large domains is order-based local search, which consists of sampling topological orderings of the variables and selecting optimal compatible DAGs [26]. Given a topological ordering, the optimal DAG can be found in linear time (assuming scores are given as input), making order-based search very effective at exploring the solution space. A naive extension of that approach to the bounded-treewidth case would be to (i) sample a topological order, (ii) find the optimal compatible DAG, and (iii) verify the treewidth and discard the DAG if it exceeds the desired bound. There are two serious issues with that approach. First, verifying the treewidth is an NP-hard problem, and even though there are linear-time algorithms (which are exponential in the treewidth), they perform poorly in practice. Second, the vast majority of structures would be discarded, since the most used score functions penalize the number of free parameters, which correlates poorly with treewidth [5]. In this section, we propose a more sophisticated extension of order-based search to learn bounded-treewidth structures. Our method relies on sampling k-trees, which are defined inductively as follows [6].
A complete graph with k + 1 nodes (i.e., a (k+1)-clique) is a k-tree. Let T_k = (V, E) be a k-tree, K be a k-clique in it, and v be a node not in V. Then the graph obtained by connecting v to every node in K is also a k-tree. A k-tree is a maximal graph of treewidth k, in the sense that no edge can be added without increasing the treewidth. Every graph of treewidth at most k is a subgraph of some k-tree. Hence, Bayesian networks of treewidth bounded by k are exactly those whose moral graph is a subgraph of some k-tree [19]. We are interested in k-trees over the nodes N of the Bayesian network, where k = w is the bound we impose on the treewidth.

Caminiti et al. [6] proposed a linear-time method (in both n and k) for coding and decoding k-trees into what are called (generalized) Dandelion codes, and established a bijection between Dandelion codes and k-trees. Hence, sampling Dandelion codes is essentially equivalent to sampling k-trees. The former, however, is computationally much easier and faster to perform, especially if we want to draw samples uniformly at random (uniform sampling provides good coverage of the space and produces low-variance estimates across data sets). Formally, a Dandelion code is a pair (Q, S), where Q ⊆ N with |Q| = k, and S is a list of n − k − 2 pairs of integers drawn from N ∪ {ϵ}, where ϵ is an arbitrary number not in N. Dandelion codes can be sampled uniformly by a trivial linear-time algorithm that uniformly chooses k elements from N to build Q, and then uniformly samples n − k − 2 pairs of integers in N ∪ {ϵ}. Algorithm 1 contains a high-level description of our approach.

Algorithm 1: Learning a structure of bounded treewidth by sampling Dandelion codes.
% Takes a score function s_i, i ∈ N, and an integer k, and outputs a DAG G* of treewidth ≤ k.
1. Initialize G* as an empty DAG.
2. Repeat a certain number of iterations:
   2.a Uniformly sample a Dandelion code (Q, S) and decode it into T_k.
   2.b Search for a DAG G that maximizes the score function and is compatible with T_k.
   2.c If Σ_{i∈N} s_i(G_i) > Σ_{i∈N} s_i(G*_i), update G*.

We assume from now on that a k-tree T_k is available, and consider the problem of searching for a compatible DAG that maximizes the score (Step 2.b). Korhonen and Parviainen [19] presented an algorithm (which we call K&P) that, given an undirected graph M, finds a DAG G maximizing the score function such that the moralization of G is a subgraph of M. The algorithm runs in time and space O(n), assuming the scores are part of the input (hence pre-computed and accessed in constant time). We can use their algorithm to find the optimal structure whose moral graph is a subgraph of T_k. We call this approach S+K&P, as a reminder of (k-tree) sampling followed by K&P.

Theorem 2. The size of the sampling space of S+K&P is less than e^{n log(nk)}. Each of its iterations runs in linear time in n (but exponential time in k).

According to the result above, the sampling space of S+K&P is not much bigger than that of standard order-based local search (which is approximately e^{n log n}), especially if k ≪ n. The practical drawback of this approach is the Θ(k 3^k (k+1)! n) time taken by K&P to process each sampled k-tree, which forbids its use for moderately high treewidth bounds (say, k ≥ 10). Our experiments in the next section further corroborate this claim: S+K&P often performs poorly even for small k, mostly due to the small number of k-trees sampled within the given time limit.

A better approach is to sacrifice the optimality of the search for compatible DAGs in exchange for a gain in efficiency. We next present a method based on sampling topological orderings that achieves this goal. Let C_i be the collection of maximal cliques of T_k that contain a certain node i (these can be obtained efficiently, as T_k is chordal), and consider a topological ordering < of N. Let C_{<i} = {j ∈ C : j < i}.
We can find an optimal DAG G compatible with < and T_k by setting G_i = argmax {s_i(P) : P ⊆ C_{<i}, C ∈ C_i} for each i ∈ N. The graph G is acyclic, since each parent set G_i respects the topological ordering by construction. Its treewidth is at most k, because both i and G_i belong to a clique C of T_k, which implies that the moralization of G is a subgraph of T_k.

Sampling topological orderings is both inefficient and wasteful, as different topological orderings impose the same constraints on the choices of G_i. To see this, consider the k-tree with edges 1-2, 1-3, 2-3, 2-4 and 3-4. Since there is no edge connecting nodes 1 and 4, their relative ordering is irrelevant when choosing either G_1 or G_4. A better approach is to linearly order the nodes within each maximal clique. A k-tree T_k can be represented by a clique-tree structure, which comprises its maximal cliques C_1, ..., C_{n−k} and a tree T over the maximal cliques. Every two adjacent cliques in T differ by exactly one node. Assume T is rooted at a clique R, so that we can unambiguously refer to the (single) parent of a (maximal) clique and to its children. Such a clique-tree structure can be obtained directly from the process of decoding a Dandelion code [6]. The procedure in Algorithm 2 shows how to efficiently obtain a collection of compatible orderings of the nodes of each clique of a k-tree.

Algorithm 2: Sampling a partial order within a k-tree.
% Takes a k-tree represented as a clique-tree structure T rooted at R, and outputs a collection of orderings σ_C for every maximal clique C of T.
1. Sample an order σ_R of the nodes in R, paint R black and the other maximal cliques white.
2. Repeat until all maximal cliques are painted black:
   2.a Take a white clique C whose parent clique P in T is black, and let i be the single node in C \ P.
   2.b Sample a relative order for i with respect to σ_P (i.e., insert i into some arbitrary position of the projection of σ_P onto C), and generate σ_C accordingly; when done, paint C black.
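A compact sketch of Algorithm 2 follows; the clique-tree encoding via the `cliques` and `tree_children` dictionaries is our own, not the paper's:

```python
import random

def sample_clique_orders(cliques, tree_children, root):
    """Sketch of Algorithm 2: sample an order for the root clique,
    then process cliques top-down, inserting each clique's single new
    node at a random position of the parent's order projected onto it.
    `cliques` maps clique id -> set of nodes; `tree_children` maps
    clique id -> list of child clique ids."""
    sigma = {root: random.sample(sorted(cliques[root]), len(cliques[root]))}
    stack = [root]
    while stack:
        p = stack.pop()
        for c in tree_children.get(p, []):
            (new_node,) = cliques[c] - cliques[p]   # exactly one new node
            proj = [v for v in sigma[p] if v in cliques[c]]
            pos = random.randint(0, len(proj))
            sigma[c] = proj[:pos] + [new_node] + proj[pos:]
            stack.append(c)
    return sigma
```

By construction, nodes shared between adjacent cliques keep the same relative order in both cliques' orderings, which is what makes the sampled local orderings mutually compatible.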
Table 1: Number of variables in the data sets.

  nursery  breast  housing  adult  zoo  letter  mushroom  wdbc  audio  hill  community
  9        10      14       15     17   17      22        31    62     100   100

The cliques in Algorithm 2 are processed in topological order of the clique-tree structure, which ensures that the order σ_P of the parent P of a clique C is already defined when C is processed (note that the order in which we process cliques does not restrict the possible orderings among nodes). At the end, we have a node ordering for each clique. Given such a collection of local orderings, we can efficiently learn the optimal parent set of every node i by

G_i = argmax_{P ⊆ C : P ∼ σ_C, C ∈ C_i} s_i(P),    (6)

where P ∼ σ_C denotes that the parent set is constrained to contain only nodes smaller than i in σ_C. In fact, the choices made in (6) can be implemented together with Step 2.b of Algorithm 2, providing a slight gain in efficiency. We call the method obtained by running Algorithm 1 with partial orderings established by Algorithm 2 and parent set selection by (6) S2, in allusion to its double sampling scheme of k-trees and local node orderings.

Theorem 3. S2 samples DAGs from a sample space of size k! · (k+1)^{n−k}, and runs in linear time in n and k.

The generation of partial orderings can also serve to implement the DAG search in S+K&P, by replacing the sampling with a complete enumeration of the orderings. Step 2.b is then performed for each compatible ordering σ_P of the parent in a recursive way; dynamic programming can be used to make the procedure more efficient. We have actually used this approach in our implementation of S+K&P. Finally, the sampling can be enhanced by some systematic search in the neighborhood of the sampled candidates. We have implemented a simple hill-climbing procedure for this purpose, even though the quality of solutions does not improve considerably by doing so.
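The selection rule (6) can be sketched as follows, assuming precomputed local scores `score[i]` mapping candidate parent sets (frozensets) to values, and per-clique orderings `sigma[C]` as produced by Algorithm 2 (all names are ours):

```python
from itertools import combinations

def best_parents(i, cliques_of_i, sigma, score, max_size=3):
    """Pick G_i maximizing s_i(P) over parent sets P whose nodes all
    precede i in the ordering sigma[C] of some maximal clique C
    containing i (a sketch of Eq. (6))."""
    best_P, best_s = frozenset(), score[i][frozenset()]
    for C in cliques_of_i:
        order = sigma[C]
        # predecessors of i within this clique, per its local ordering
        preds = [j for j in C if order.index(j) < order.index(i)]
        for r in range(1, min(max_size, len(preds)) + 1):
            for P in combinations(preds, r):
                P = frozenset(P)
                if P in score[i] and score[i][P] > best_s:
                    best_P, best_s = P, score[i][P]
    return best_P, best_s
```

Because i and its chosen parents always lie in one clique of the k-tree, the moral graph of the resulting DAG is a subgraph of the k-tree, which is what keeps the treewidth bounded.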
5 Experiments

We empirically analyzed the accuracy of the algorithms proposed here against each other and against the available implementations of TWILP (https://bitbucket.org/twilp/twilp/) and K&P (http://www.cs.helsinki.fi/u/jazkorho/aistats-2013/) on a collection of data sets from the UCI repository. The S+K&P and S2 algorithms were implemented (purely) in Matlab. The data sets were selected so as to span a wide range of dimensionality, and were preprocessed to have variables discretized over the median value when needed. Some columns of the original audio and community data sets were discarded: 7 variables of audio had a constant value, 5 variables of community had almost one distinct value per sample (such as personal data), and 22 variables had missing data (Table 1 shows the number of (binary) variables after pre-processing). In all experiments, we maximize the Bayesian Dirichlet equivalent uniform score with an equivalent sample size of one.

5.1 Exact Solutions

We refer to our MILP formulation simply as MILP hereafter. We compared MILP, TWILP and K&P on the task of finding an optimal structure. Table 2 reports the running time on a selection of data sets of reasonably low dimensionality and for small values of the treewidth bound. The experiments were run on a computer with 32 cores, a memory limit of 64GB, a time limit of 3h, and a maximum of three parents per node (the latter restriction facilitates the experiments and does not constrain the treewidth). On cases where MILP or TWILP did not finish, we also report the error estimates from CPLEX (an error of e% means that the achieved solution is certainly not more than e% worse than the optimal).
While we emphasize that one should be careful when directly comparing execution times between methods, as the implementations use different languages (we run CPLEX 12.4; the original K&P uses Cython-compiled Python code; TWILP uses a Python interface to CPLEX to implement the cutting-plane mechanism), we note that MILP goes much further in terms of which data sets and treewidth values it can handle. MILP found the optimal structure in all instances, but was not always able to certify its optimality in due time. TWILP found the optimum for all treewidth bounds only on the nursery and breast data sets. The results also suggest that MILP becomes faster as the bound increases, while TWILP running times remain almost unaltered. This might be explained by the fact that the MILP formulation is complete and a larger bound facilitates encountering good solutions, while TWILP needs to generate constraints until an optimal solution can be certified.

Table 2: Time to learn an optimal Bayesian network subject to treewidth bound w. Dashes denote failure to solve due to excessive memory demand.

  method  w  nursery (n=9)  breast (n=10)  housing (n=14)  adult (n=15)  mushroom (n=22)
  MILP    2  1s             31s            3h [2.4%]       3h [0.39%]    3h [50%]
          3  <1s            19s            25m             3h [0.04%]    3h [19.3%]
          4  <1s            8s             80s             40m           3h [14.9%]
          5  <1s            8s             56s             37s           3h [11.2%]
  TWILP   2  5m             3h [0.5%]      3h [7%]         3h [0.6%]     3h [32%]
          3  5s             3h [3%]        3h [9%]         3h [0.7%]     3h [31%]
          4  <1s            3h [0.3%]      3h [9%]         3h [0.9%]     3h [27%]
          5  <1s            3h [0.5%]      3h [7%]         3h [0.9%]     3h [23%]
  K&P     2  7s             26s            128m            137m          –
          3  72s            5m             –               –             –
          4  12m            103m           –               –             –
          5  131m           –              –               –             –

5.2 Approximate Solutions

We used treewidth bounds of 4 and 10, and a maximum parent set size of 3, except for hill and community, where it was set to 2 to help the integer programming approaches (which suffer the most from large parent sets). To be fair to all methods, we pre-computed the scores and considered them as input to the problem.
Both MILP and TWILP used CPLEX 12.4 with a memory limit of 64GB to solve the optimizations. We allowed CPLEX to run for up to three hours, collecting the incumbent solution after 10 minutes. S+K&P and S2 were given 10 minutes. This evaluation at 10 minutes is to be seen as an early-stage comparison for applications that need a reasonably fast response. To account for the intrinsic variability of the performance of the sampling methods with respect to the sampling seed, S+K&P and S2 were run ten times on each data set with different seeds; we report the minimum, median and maximum values obtained over the runs. Figure 1 shows the normalized scores (in percentage) of each method on each data set. The normalized score of a method that returns a solution with score s on a certain data set is norm-score(s) = (s − sempty)/(smax − sempty), where sempty is the score of an empty DAG (used as baseline), and smax is the maximum score over all methods on that data set. Hence, a normalized score of 0 indicates the method found solutions only as good as the empty graph (a trivial solution), whereas a normalized score of 1 indicates the method performed best on that data set. The exponential dependence on treewidth of S+K&P prevents it from running with treewidth bound greater than 6. We see from the plot on the left that S2 is largely superior to S+K&P, even though the former finds suboptimal networks for each given k-tree. This suggests that finding good k-trees is more important than selecting good networks within a given k-tree. We also see that both integer programming formulations scale poorly with the number of variables, being unable to obtain satisfactory solutions for data sets with more than 50 variables. For the hill data set and treewidth ≤4, MILP was not able to find a feasible solution within 10 minutes, and could only find the trivial solution (empty DAG) after 3 hours; TWILP did not find any solution even after 3 hours.
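The normalized score defined above is straightforward to compute. The sketch below uses made-up BDeu scores purely for illustration; none of the numbers come from the paper's experiments:

```python
def norm_score(s, s_empty, s_max):
    """Normalized score of a method on one data set: 0 means no better
    than the empty DAG baseline, 1 means best among all compared methods."""
    return (s - s_empty) / (s_max - s_empty)

# Hypothetical scores for one data set (illustrative only):
scores = {"S2": -1200.0, "MILP": -1250.0, "TWILP": -1300.0}
s_empty = -1500.0                 # score of the empty DAG baseline
s_max = max(scores.values())      # best score over all methods
# Percentages, as plotted in Figure 1:
normalized = {m: 100.0 * norm_score(s, s_empty, s_max) for m, s in scores.items()}
```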
On the community data set with treewidth ≤4, neither MILP nor TWILP found a solution within 3 hours. For treewidth ≤10 the integer programming approaches performed even worse: TWILP could not provide a solution for the audio, hill and community data sets; MILP could only find the empty graph. Since both S+K&P and S2 were implemented in Matlab, the comparison with either MILP or TWILP within the same time period (10 minutes) might be unfair (one could also try to improve the MILP formulation, although it will eventually suffer from the problems discussed in Section 3). Nevertheless, the results show that S2 is very competitive even under this implementation disadvantage.

[Figure 1: Normalized scores (%) of each method on each data set, for treewidth ≤4 (left) and treewidth ≤10 (right); methods compared: S+K&P, S2, MILP-10m, MILP-3h, TWILP-10m, TWILP-3h. Missing results indicate failure to provide a solution.]

6 Conclusions

We presented exact and approximate procedures to learn Bayesian networks of bounded treewidth. The exact procedure is based on a MILP formulation, and is shown to outperform other methods for exact learning, including the different MILP formulation proposed in [21]. Our MILP approach is also competitive when used to produce approximate solutions. However, due to the cubic number of constraints, the MILP formulation cannot cope with very large domains, and there is probably little we can do to considerably improve this situation. Constraint generation techniques [3] are yet to be explored, even though we do not expect them to produce dramatic performance gains – the competing objectives of maximizing the score and bounding the treewidth usually lead to the generation of a large number of constraints. To tackle large problems, we developed an approximate algorithm that samples k-trees and then searches for compatible structures.
We derived two variants by trading off the computational effort spent in sampling k-trees and in searching for compatible structures. The sampling-based methods are empirically shown to provide fairly accurate solutions and to scale to large domains.

Acknowledgments

We thank the authors of [19, 21] for making their software publicly available and the anonymous reviewers for their useful suggestions. Most of this work was performed while C. P. de Campos was with the Dalle Molle Institute for Artificial Intelligence. This work has been partially supported by the Swiss NSF grant 200021 146606/1, by the São Paulo Research Foundation (FAPESP) grant 2013/23197-4, and by the grant N00014-12-1-0868 from the US Office of Navy Research.

References

[1] S. Arnborg, D. Corneil, and A. Proskurowski. Complexity of finding embeddings in a k-tree. SIAM J. on Matrix Analysis and Applications, 8(2):277–284, 1987.
[2] F. R. Bach and M. I. Jordan. Thin junction trees. In Advances in Neural Inf. Proc. Systems 14, pages 569–576, 2001.
[3] M. Bartlett and J. Cussens. Advances in Bayesian network learning using integer programming. In Proc. 29th Conf. on Uncertainty in AI, pages 182–191, 2013.
[4] J. Berg, M. Järvisalo, and B. Malone. Learning optimal bounded treewidth Bayesian networks via maximum satisfiability. In Proc. 17th Int. Conf. on AI and Stat., pages 86–95, 2014. JMLR W&CP 33.
[5] A. Beygelzimer and I. Rish. Inference complexity as a model-selection criterion for learning Bayesian networks. In Proc. 8th Int. Conf. Princ. Knowledge Representation and Reasoning, pages 558–567, 1998.
[6] S. Caminiti, E. G. Fusco, and R. Petreschi. Bijective linear time coding and decoding for k-trees. Theory of Comp. Systems, 46(2):284–300, 2010.
[7] V. Chandrasekaran, N. Srebro, and P. Harsha. Complexity of inference in graphical models. In Proc. 24th Conf. on Uncertainty in AI, pages 70–78, 2008.
[8] A. Chechetka and C. Guestrin. Efficient principled learning of thin junction trees. In Advances in Neural Inf. Proc. Systems, pages 273–280, 2007.
[9] D. M. Chickering. Learning Bayesian networks is NP-complete. In Learning from Data: AI and Stat. V, pages 121–130. Springer-Verlag, 1996.
[10] J. Cussens. Bayesian network learning with cutting planes. In Proc. 27th Conf. on Uncertainty in AI, pages 153–160, 2011.
[11] J. Cussens, M. Bartlett, E. M. Jones, and N. A. Sheehan. Maximum likelihood pedigree reconstruction using integer linear programming. Genetic Epidemiology, 37(1):69–83, 2013.
[12] S. Dasgupta. Learning polytrees. In Proc. 15th Conf. on Uncertainty in AI, pages 134–141, 1999.
[13] C. P. de Campos and Q. Ji. Efficient structure learning of Bayesian networks using constraints. J. of Mach. Learning Res., 12:663–689, 2011.
[14] C. P. de Campos, Z. Zeng, and Q. Ji. Structure learning of Bayesian networks using constraints. In Proc. 26th Int. Conf. on Mach. Learning, pages 113–120, 2009.
[15] G. Elidan and S. Gould. Learning bounded treewidth Bayesian networks. J. of Mach. Learning Res., 9:2699–2731, 2008.
[16] N. Friedman. The Bayesian structural EM algorithm. In Proc. 14th Conf. on Uncertainty in AI, pages 129–138, 1998.
[17] A. Grigoriev, H. Ensinck, and N. Usotskaya. Integer linear programming formulations for treewidth. Technical report, Maastricht Res. School of Economics of Tech. and Organization, 2011.
[18] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Mach. Learning, 20(3):197–243, 1995.
[19] J. H. Korhonen and P. Parviainen. Exact learning of bounded tree-width Bayesian networks. In Proc. 16th Int. Conf. on AI and Stat., pages 370–378, 2013. JMLR W&CP 31.
[20] J. H. P. Kwisthout, H. L. Bodlaender, and L. C. van der Gaag. The necessity of bounded treewidth for efficient inference in Bayesian networks. In Proc. 19th European Conf. on AI, pages 237–242, 2010.
[21] P. Parviainen, H. S. Farahani, and J. Lagergren. Learning bounded tree-width Bayesian networks using integer linear programming. In Proc. 17th Int. Conf. on AI and Stat., pages 751–759, 2014. JMLR W&CP 33.
[22] E. Perrier, S. Imoto, and S. Miyano. Finding optimal Bayesian network given a super-structure. J. of Mach. Learning Res., 9(2):2251–2286, 2008.
[23] D. Roth. On the hardness of approximate reasoning. Artif. Intell., 82(1–2):273–302, 1996.
[24] T. Silander and P. Myllymaki. A simple approach for finding the globally optimal Bayesian network structure. In Proc. 22nd Conf. on Uncertainty in AI, pages 445–452, 2006.
[25] N. Srebro. Maximum likelihood bounded tree-width Markov networks. Artif. Intell., 143(1):123–138, 2003.
[26] M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In Proc. 21st Conf. on Uncertainty in AI, pages 584–590, 2005.
Learning From Weakly Supervised Data by The Expectation Loss SVM (e-SVM) algorithm

Jun Zhu, Department of Statistics, University of California, Los Angeles, jzh@ucla.edu
Junhua Mao, Department of Statistics, University of California, Los Angeles, mjhustc@ucla.edu
Alan Yuille, Department of Statistics, University of California, Los Angeles, yuille@stat.ucla.edu

Abstract

In many situations we have some measurement of confidence on "positiveness" for a binary label. The "positiveness" is a continuous value in a bounded interval that quantifies the affiliation of each training datum to the positive class. We propose a novel learning algorithm called expectation loss SVM (e-SVM) that is devoted to problems where only the "positiveness", instead of a binary label, of each training sample is available. The e-SVM algorithm can also be readily extended to learn segment classifiers under weak supervision, where the exact positiveness value of each training example is unobserved. In experiments, we show that the e-SVM algorithm can effectively address the segment proposal classification task under both strong supervision (e.g. pixel-level annotations are available) and weak supervision (e.g. only bounding-box annotations are available), and outperforms the alternative approaches. We further validate the method on two major tasks of computer vision: semantic segmentation and object detection. Our method achieves state-of-the-art object detection performance on the PASCAL VOC 2007 dataset.

1 Introduction

Recent work in computer vision relies heavily on manually labeled datasets to achieve satisfactory performance. However, detailed hand-labelling of datasets is expensive and impractical for large datasets such as ImageNet [6]. It is better to have learning algorithms that can work with data that has only been weakly labelled, for example by putting a bounding box around an object instead of segmenting it or parsing it into parts.
In this paper we present a learning algorithm called expectation loss SVM (e-SVM). It requires a method that can generate a set of proposals for the true label (e.g., the exact silhouette of the object). But this set of proposals may be very large, each proposal may be only partially correct (the correctness can be quantified by a continuous value between 0 and 1 called "positiveness"), and several proposals may be required to obtain the correct label. In the training stage, our algorithm can deal with the strongly supervised case where the positiveness of the proposals is observed, and easily extends to the weakly supervised case by treating the positiveness as a latent variable. In the testing stage, it predicts the label for each proposal and provides a confidence score. There are some alternative approaches to this problem, such as Support Vector Classification (SVC) and Support Vector Regression (SVR). For the SVC algorithm, because this is not a standard binary classification problem, one might need to binarize the positiveness using ad-hoc heuristics to determine a threshold, which degrades its performance [18].

[Figure 1: The illustration of our algorithm. In the training process, the e-SVM model can handle two types of annotations: pixel level (strong supervision) and bounding box (weak supervision) annotations. For pixel level annotations, we set the positiveness of each proposal to its IoU overlap ratio with the groundtruth and train classifiers using the basic e-SVM. For bounding box annotations, we treat the positiveness as a latent variable and use the latent e-SVM to train classifiers. In the testing process, the e-SVM gives each segment proposal a class label and a confidence score. (Best viewed in color)]
To address this problem, previous works usually used SVR [4, 18] to train class confidence prediction models in semantic segmentation. However, it is also not a standard regression problem, since the value of positiveness belongs to a bounded interval [0, 1]. We compare our e-SVM to these two related methods on the segment proposal confidence prediction problem. The positiveness of each segment proposal is set to the Intersection over Union (IoU) overlap ratio between the proposal and the pixel level instance groundtruth. We test our algorithm under two scenarios with different annotations: pixel level annotations (positiveness is observed) and bounding box annotations (positiveness is unobserved). Experiments show that our model outperforms SVC and SVR in both scenarios. Figure 1 illustrates the framework of our algorithm. We further validate our approach on two fundamental computer vision tasks: (i) semantic segmentation, and (ii) object detection. Firstly, we consider semantic segmentation. There has recently been impressive progress at this task using rich appearance cues. Segments are extracted from images [1, 3, 4, 12], appearance cues are computed for each segment [5, 21, 25], and classifiers are trained using groundtruth pixel labeling [18]. Methods of this type are almost always among the winners of the PASCAL VOC segmentation challenge [5]. But all these methods rely on datasets which have been hand-labelled at the pixel level. For this application we generate the segment proposals using CPMC segments [4]. The positiveness of each proposal is set as the IoU overlap ratio. We show that appearance cues learnt by e-SVM, using either bounding box annotations or pixel level annotations, are more effective than those learnt with SVC and SVR on the PASCAL VOC 2011 [9] segmentation dataset. Our algorithm is also flexible enough to utilize additional bounding box annotations to further improve the results.
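The IoU positiveness between a segment proposal and a pixel-level ground-truth mask is a simple ratio of mask areas. A minimal sketch on toy boolean masks (the masks here are illustrative, not from any dataset):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-Union between two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union > 0 else 0.0

# Toy 4x4 masks: the proposal overlaps the ground-truth instance partially.
proposal = np.zeros((4, 4), dtype=bool); proposal[:2, :2] = True  # 4 pixels
gt       = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True      # 4 pixels
u = iou(proposal, gt)   # intersection = 1 pixel, union = 7 pixels
```

The value `u` plays the role of the positiveness u_i in the training objective.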
Secondly, we address object detection by exploiting the effectiveness of segmentation cues and coupling them to existing object detection methods. For this application, the data is only weakly labeled, because the groundtruth for object detection is typically specified by bounding boxes (e.g. PASCAL VOC [8, 9] and ImageNet [6]), which means that pixel level groundtruth is not available. We use either CPMC or super-pixels as methods for producing segment proposals. IoU is again used to represent the positiveness of the proposals. We test our approach on the PASCAL dataset using, as our base detector, the Regions with CNN features (RCNN) method [14] (currently the state of the art on PASCAL, outperforming previous works by a large margin). This method first uses the Selective Search method [24] to extract candidate bounding boxes. For each candidate bounding box, it extracts features with a deep network [16] learned on the ImageNet dataset and fine-tuned on PASCAL. We couple our appearance cues to this system by simply concatenating our spatial confidence map features, based on the trained e-SVM classifiers, with the deep learning features, and then training a linear SVM. We show that this simple approach yields an average improvement of 1.5 percent in per-class average precision (AP). We note that our approach is general. It can use any segment proposal detector, any image features, and any classifier. When applied to object detection it could use any base detector, and we could couple the appearance cues with the base detector in many different ways (we choose the simplest). In addition, it can handle other classification problems where only the "positiveness" of the samples, instead of binary labels, is available.

2 Related work on weakly supervised learning and weighted SVMs

We have introduced some of the most relevant recent works on semantic segmentation and object detection.
In this section, we briefly review related work on weakly supervised learning methods for segment classification, and discuss the connection to instance weighted SVMs in the literature. The problem settings of most previous works generally assume that only a set of accompanying words of an image or a set of image level labels is available, which differs from the problem setting in this paper. Multiple Instance Learning (MIL) [7, 2] was adopted to solve those problems [20, 22]. MIL handles cases where at least one positive instance is present in a positive bag and only the labels of a set of bags are available. Vezhnevets et al. [26] proposed a Multi-Image Model (MIM) to solve this problem and showed that the MIL of [22] is a special case of MIM. Later, [26] extended MIM to a generalized MIM and used it as their segmentation model. Recently, Liu et al. [19] presented a weakly-supervised dual clustering approach to handle this task. Our weakly supervised problem setting lies between these settings and the strong supervision case (i.e. full pixel level annotations are available). It is also very important and useful, because bounding box annotations of large-scale image datasets are already available (e.g. ImageNet [6]) while pixel level annotations of large datasets are still hard to obtain. This weakly supervised problem cannot be solved by MIL. We cannot assume that at least one "completely" positive instance (i.e. a CPMC segment proposal) is present in a positive bag (i.e. a groundtruth instance), since most of the proposals will contain both foreground pixels and background pixels. We show how our e-SVM and its latent extension address this problem in the next sections. In the machine learning literature, the weighted SVM (WSVM) methods [23, 27, ?]
also use an instance-dependent weight on the cost of each example, and can improve the robustness of model estimation [23], alleviate the effect of outliers [27], leverage privileged information [17], or deal with unbalanced classification problems. The main difference between our e-SVM and WSVMs is that e-SVM weights labels instead of data points, so that each example contributes to the costs of both the positive and the negative label. Although the loss function of the e-SVM model differs from those of WSVMs, it can be effortlessly solved by any standard SVM solver (e.g., LibLinear [10]), like those used for WSVMs. This is an advantage, because the implementation of our e-SVM does not require a specialized solver.

3 The expectation loss SVM model

In this section, we first describe the basic formulation of our expectation loss SVM model (e-SVM) in Section 3.1, for the case where the positiveness of each segment proposal is observed. Then, in Section 3.2, a latent e-SVM model is introduced to handle the weak supervision situation where the positiveness of each segment proposal is unobserved.

3.1 The basic e-SVM model

We are given a set of training images D. Using some segmentation method (we adopt CPMC [4] in this work), we generate a set of foreground segment proposals {S_1, S_2, ..., S_N} from these images. For each segment S_i, we extract a feature vector x_i ∈ R^d. Suppose pixelwise annotations are available for all the groundtruth instances in D. For each object class, we can calculate the IoU ratio u_i ∈ [0, 1] between each segment S_i and the groundtruth instance labeling, and set the positiveness of S_i to u_i (although the positiveness could be some function of the IoU ratio, for simplicity we just set it to the IoU and use u_i to denote the positiveness in the following paragraphs). Because many foreground segments overlap only partially with the groundtruth instances (i.e. 0 < u_i < 1), training is not a standard binary classification problem.
Of course, we could define a threshold τ_b and treat all segments with u_i ≥ τ_b as positive examples and segments with u_i < τ_b as negative examples. In this way, the problem is transformed into a Support Vector Classification (SVC) problem. But this needs some heuristics to determine τ_b, and its performance is only partially satisfactory [18]. To address this issue, we propose our expectation loss SVM model as an extension of classical SVC models. In this model, we treat the label Y_i ∈ {−1, +1} of each segment as an unobserved random variable. Given x_i, we assume that Y_i follows a Bernoulli distribution. The probability of Y_i = 1 given x_i (i.e. the success probability of the Bernoulli distribution) is denoted as µ_i. We assume that µ_i is a function of the positiveness u_i, i.e. µ_i = g(u_i). In the experiments, we simply set µ_i = u_i. As in the traditional linear SVC problem, we adopt a linear prediction function: F(x_i) = w^T x_i + b. For simplicity, we write [w b] as w and [x_i 1] as x_i, so that F(x_i) = w^T x_i in the remainder of the paper. The loss function of our e-SVM is the expectation over the random variables Y_i:

L(w) = \lambda_w \cdot \frac{1}{2} w^T w + \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{Y_i}\big[\max(0, 1 - Y_i w^T x_i)\big]
     = \lambda_w \cdot \frac{1}{2} w^T w + \frac{1}{N} \sum_{i=1}^{N} \big[ l_i^+ \cdot \Pr(Y_i = +1 \mid x_i) + l_i^- \cdot \Pr(Y_i = -1 \mid x_i) \big]
     = \lambda_w \cdot \frac{1}{2} w^T w + \frac{1}{N} \sum_{i=1}^{N} \big\{ l_i^+ \cdot g(u_i) + l_i^- \cdot [1 - g(u_i)] \big\}     (1)

where l_i^+ = \max(0, 1 - w^T x_i) and l_i^- = \max(0, 1 + w^T x_i). Given the pixelwise groundtruth annotations, g(u_i) is known. From Equation 1, we can see that this is equivalent to "weighting" each sample by a function of its positiveness. A standard linear SVM solver is used to minimize L(w). In the experiments, we show that the performance of our e-SVM is much better than SVC and slightly better than Support Vector Regression (SVR) on the segment classification task.
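Because Equation 1 weights the positive and negative hinge of each example by g(u_i) and 1 − g(u_i), the objective can be minimized by any weighted-SVM solver (the paper uses LibLinear). The sketch below instead uses plain subgradient descent with g(u) = u on synthetic data; the solver choice, data, and hyperparameters are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

def esvm_loss(w, X, u, lam=0.01):
    """Expectation hinge loss of Equation 1 with g(u) = u: each example
    contributes to both the positive and the negative hinge, weighted
    by its positiveness u_i and by 1 - u_i respectively."""
    scores = X @ w
    l_pos = np.maximum(0.0, 1.0 - scores)   # loss if the label were +1
    l_neg = np.maximum(0.0, 1.0 + scores)   # loss if the label were -1
    return lam * 0.5 * (w @ w) + np.mean(u * l_pos + (1.0 - u) * l_neg)

def fit_esvm(X, u, lam=0.01, lr=0.05, iters=1000):
    """Plain subgradient descent on the e-SVM objective (a sketch)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        scores = X @ w
        # Subgradient: -u_i x_i when the positive hinge is active,
        # +(1 - u_i) x_i when the negative hinge is active.
        coef = -(u * (scores < 1.0)) + (1.0 - u) * (scores > -1.0)
        w -= lr * (lam * w + X.T @ coef / n)
    return w

# Toy data: bias folded into the features; positiveness correlated with x0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
u = 1.0 / (1.0 + np.exp(-2.0 * X[:, 0]))
w = fit_esvm(X, u)
```

At w = 0 the objective equals exactly 1 (both hinges are 1 for every example), so any useful fit must drive the loss below that value.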
3.2 The latent e-SVM model

One of the advantages of our e-SVM model is that it easily extends to the situation where only bounding box annotations are available (the type of labeling of most interest in this paper). Under this weakly supervised setting, we cannot obtain the exact value of the positiveness (IoU) u_i for each segment. Instead, u_i is treated as a latent variable, determined by minimizing the following loss function:

L(w, u) = \lambda_w \cdot \frac{1}{2} w^T w + \frac{1}{N} \sum_{i=1}^{N} \big\{ l_i^+ \cdot g(u_i) + l_i^- \cdot [1 - g(u_i)] \big\} + \lambda_R \cdot R(u)     (2)

where u denotes {u_i}_{i=1,...,N} and R(u) is a regularization term on u. The loss function in Equation 1 is a special case of that in Equation 2, obtained by holding u constant and setting λ_R = 0. When u is fixed, L(w, u) is a standard linear SVM loss, which is convex in w. When w is fixed, L(w, u) is also convex provided R(u) is convex in u. The IoU between a segment S_i and the groundtruth bounding boxes, denoted u_i^bb, serves as an initialization for u_i. We iteratively fix u and w in turn, solving the two convex optimization problems until convergence. Pseudo-code for the optimization is shown in Algorithm 1.

Algorithm 1 The optimization for training latent e-SVM
Initialization:
1: u^(cur) ← u^bb
Process:
2: repeat
3:   w^(new) ← arg min_w L(w, u^(cur))
4:   u^(new) ← arg min_u L(w^(new), u)
5:   u^(cur) ← u^(new)
6: until converged

If we do not add any regularization term on u (i.e. set λ_R = 0), u becomes 0 or 1 in the optimization step of line 4 of Algorithm 1, because the loss function is linear in u when w is fixed. The method then resembles a latent SVM and can get stuck in local minima, as shown in the experiments. The regularization term prevents this, under the assumption that the true value of u should be close to u^bb. There are many possible designs for the regularization term R(u).
In practice, we use the following one, based on the cross entropy between two Bernoulli distributions with success probabilities u_i^bb and u_i respectively:

R(u) = -\frac{1}{N} \sum_{i=1}^{N} \big[ u_i^{bb} \cdot \log(u_i) + (1 - u_i^{bb}) \cdot \log(1 - u_i) \big]
     = \frac{1}{N} \sum_{i=1}^{N} D_{KL}\big[\mathrm{Bern}(u_i^{bb}) \,\|\, \mathrm{Bern}(u_i)\big] + C     (3)

where C is a constant with respect to u and D_{KL}(·‖·) denotes the KL divergence between two Bernoulli distributions. This regularization term is convex in u and attains its minimum at u = u^bb. It is a strong regularizer, since its value increases quickly as u moves away from u^bb.

4 Visual Tasks

4.1 Semantic segmentation

We can easily apply our e-SVM model to the semantic segmentation task within the framework proposed by Carreira et al. [5]. First, CPMC segment proposals [4] are generated and second-order pooling features [5] are extracted from each segment. Then we train the segment classifiers using either the e-SVM or the latent e-SVM, according to whether groundtruth pixel-level annotations are available. In the testing stage, the CPMC segments are sorted by the confidence scores output by the trained classifiers, and the top ones are selected to produce the predicted semantic label map.

4.2 Object detection

For the task of object detection, we can only acquire bounding-box annotations instead of pixel-level labeling. Therefore, it is natural to apply our latent e-SVM to this task to provide complementary information for current object detection systems. In state-of-the-art object detection systems [11, 13, 24, 14], window candidates for foreground objects are extracted from images and confidence scores are predicted for them. Window candidates are extracted either by sliding window approaches (used e.g. in the deformable part-based model [11, 13]) or, most recently, by the Selective Search method [24] (used e.g. in the Region Convolutional Neural Networks [14]).
The latter reduces the number of window candidates compared to the traditional sliding window approach.

[Figure 2: The illustration of our spatial confidence map features for window candidates based on e-SVM. The confidence scores of the segments are mapped to pixels to generate a pixel-level confidence map. We divide a window candidate into m × m spatial bins and pool the confidence scores of the pixels in each bin, which leads to an m × m dimensional feature.]

It is not easy to directly incorporate the confidence scores of the segments into these window-based object detection systems. The difficulty lies in two aspects. First, only some of the segments are totally inside or totally outside a window candidate. It is hard to determine the contribution of the confidence score of a segment that only partially overlaps with a window candidate. Second, the window candidates (even the groundtruth bounding boxes) will contain some background regions. Some regions (e.g. those near the boundary of a window candidate) are more likely to be background than regions in the center, and treating them equally would harm the accuracy of the whole detection system. To solve these issues, we propose a new spatial confidence map feature. Given an image and a set of window candidates, we first calculate the confidence scores of all segments in the image using the learned e-SVM models. The confidence score of a segment S is denoted CfdScore(S). For each pixel, the confidence score is set to the maximum confidence score over all segments that contain it: CfdScore(p) = max_{S : p ∈ S} CfdScore(S). In this way, we handle the difficulty of partial overlap between segments and candidate windows.
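The two steps just described (a pixel-level maximum over segment scores, followed by pooling over m × m spatial bins of the window, as in Figure 2) can be sketched in a few lines. The toy masks, the zero default confidence for pixels covered by no segment, and the choice of average pooling are assumptions of this sketch:

```python
import numpy as np

def confidence_map_feature(seg_masks, seg_scores, window, m=5):
    """Spatial confidence map feature for one class (a sketch).
    Each pixel gets the maximum confidence over all segments containing
    it; the candidate window is then split into m x m spatial bins whose
    pixel confidences are average-pooled. Pixels covered by no segment
    get confidence 0 here (an assumption of this sketch)."""
    H, W = seg_masks[0].shape
    pix = np.zeros((H, W))
    for mask, score in zip(seg_masks, seg_scores):
        pix[mask] = np.maximum(pix[mask], score)
    x0, y0, x1, y1 = window                 # window as [x0, y0, x1, y1)
    crop = pix[y0:y1, x0:x1]
    rows = np.array_split(np.arange(crop.shape[0]), m)
    cols = np.array_split(np.arange(crop.shape[1]), m)
    feat = np.array([[crop[np.ix_(r, c)].mean() for c in cols] for r in rows])
    return feat.ravel()                     # m*m-dimensional feature

# Toy example: one 5x5 segment with score 2.0 in a 10x10 image.
seg = np.zeros((10, 10), dtype=bool); seg[:5, :5] = True
feat = confidence_map_feature([seg], [2.0], window=(0, 0, 10, 10), m=2)
```

Stacking one such vector per class yields the (M × K)-dimensional window feature described below.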
For the second difficulty, we divide each candidate window into M = m × m spatial bins and pool the confidence scores of the pixels in each bin. Because the classifiers are trained in a one-vs-all scheme, our spatial confidence map feature is class-specific. This leads to an (M × K)-dimensional feature for each candidate window, where K is the total number of object classes. After that, we encode it with the additive kernels approximation mapping [25] to obtain the final feature representation of candidate windows. The feature generating process is illustrated in Figure 2. In the testing stage, we can concatenate this segment feature with the features of other object detection systems.

5 Experiments

In this section, we first evaluate the performance of the e-SVM method on segment proposal classification, using two new evaluation criteria for this task. We then apply our method to two essential tasks in computer vision: semantic segmentation and object detection. For the semantic segmentation task, we test the proposed e-SVM and latent e-SVM in two different scenarios (i.e., with pixel-level groundtruth annotations and with only bounding-box annotations). For the object detection task, we combine our confidence map feature with a state-of-the-art object detection system, and show that our method obtains a non-trivial improvement in detection performance.

5.1 Performance evaluation of e-SVM

We use the PASCAL VOC 2011 [9] segmentation dataset in this experiment. It is a subset of the whole PASCAL 2011 dataset, with 1112 images in the training set and 1111 images in the validation set, and 20 foreground object classes in total. We use the official training set and validation set for training and testing respectively. Similar to [5], we extract 150 CPMC [4] segment proposals per image and compute second-order pooling features on each segment. We also use the same sequential pasting scheme [5] as the inference algorithm in testing.
5.1.1 Evaluation criteria

In the literature [5], supervised learning frameworks for segment-based prediction either regressed the overlap value or converted it to a binary classification problem via a threshold, and evaluated performance by task-specific criteria (e.g., pixel-wise accuracy for semantic segmentation). In this paper, we adopt direct performance evaluation criteria for the segment-wise target class prediction task, which are consistent with the learning problem itself and not biased towards particular tasks. Since we have not found prior work on this sort of direct performance evaluation, we introduce two new evaluation criteria, briefly described as follows.

Threshold Average Precision Curve (TAPC). Although the ground-truth target value (i.e., the overlap rate of segment and bounding box) is a real value in the range [0, 1], we can transform the original prediction problem into a series of binary problems, each obtained by thresholding the groundtruth overlap rate. We calculate the Precision-Recall curve and AP for each binary classification problem, and compute the mean AP over the different threshold values as a performance measure for the segment-based class confidence prediction problem.
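The TAPC just described can be sketched as follows. The step-interpolated AP and the particular threshold grid are assumptions of this sketch (the paper does not list the thresholds it used):

```python
import numpy as np

def average_precision(y_true, scores):
    """AP as the step-interpolated area under the precision-recall curve."""
    if y_true.sum() == 0:
        return 0.0
    y = y_true[np.argsort(-scores)]
    tp = np.cumsum(y)
    precision = tp / np.arange(1, len(y) + 1)
    recall = tp / y.sum()
    dr = np.diff(np.concatenate([[0.0], recall]))   # recall increments
    return float(np.sum(precision * dr))

def tapc(overlaps, scores, thresholds=(0.1, 0.3, 0.5, 0.7, 0.9)):
    """Threshold Average Precision Curve: binarize the ground-truth
    overlap at each threshold and average the resulting APs."""
    aps = [average_precision((overlaps >= t).astype(float), scores)
           for t in thresholds]
    return float(np.mean(aps))

# A ranking that orders segments exactly by overlap achieves TAPC = 1.
overlaps = np.array([0.95, 0.6, 0.2, 0.05])
perfect = tapc(overlaps, scores=np.array([4.0, 3.0, 2.0, 1.0]))
worse = tapc(overlaps, scores=np.array([1.0, 2.0, 3.0, 4.0]))
```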
[Figure 3: Performance evaluation and comparison to SVC and SVR: (a) using pixel level annotations; (b) using bounding box annotations. The values for panel (a) are:]

Method   TAPC   NDCG
e-SVM    36.69  0.8750
SVR      35.23  0.8652
SVC-0.0  22.48  0.8153
SVC-0.2  33.96  0.8672
SVC-0.4  35.62  0.8656
SVC-0.6  32.57  0.8485
SVC-0.8  26.73  0.8244

Normalized Discounted Cumulative Gain (NDCG) [15]. Since a higher confidence value is expected for a segment with higher overlap rate, the prediction problem can also be treated as a ranking problem. We therefore use the Normalized Discounted Cumulative Gain (NDCG), a common performance measure for ranking problems, as a second evaluation criterion.

5.1.2 Comparison to SVC and SVR

Based on the TAPC and NDCG introduced above, we evaluate the performance of our e-SVM model on the PASCAL VOC 2011 segmentation dataset, and compare the results to the two common alternatives (i.e. SVC and SVR). Note that we test SVC on a variety of binary classification problems, each trained with a different threshold value (0, 0.2, 0.4, 0.6 and 0.8, as shown in Figure 3). In Figure 3 (a) and (b), we show the experimental results for models trained with clean pixel-wise object class labels and with weakly-labelled bounding-box annotations, respectively. In both cases, our method consistently outperforms the SVC model for all threshold values.
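The NDCG criterion can be sketched similarly, with the segment overlap (positiveness) playing the role of graded relevance. The linear-gain / logarithmic-discount form used here is one standard variant and an assumption, since the paper does not specify the exact gain function:

```python
import numpy as np

def ndcg(relevance, scores):
    """NDCG of the ranking induced by `scores`: DCG of that ranking
    divided by the DCG of the ideal (relevance-sorted) ranking.
    Uses the linear gain rel / log2(rank + 1)."""
    discounts = 1.0 / np.log2(np.arange(2, len(relevance) + 2))
    dcg = float(np.sum(relevance[np.argsort(-scores)] * discounts))
    idcg = float(np.sum(np.sort(relevance)[::-1] * discounts))
    return dcg / idcg if idcg > 0 else 0.0

rel = np.array([0.9, 0.5, 0.1])
best = ndcg(rel, np.array([3.0, 2.0, 1.0]))   # ideal ordering
flip = ndcg(rel, np.array([1.0, 2.0, 3.0]))   # reversed ordering
```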
Besides, we can see that the TAPC and NDCG of our method are also higher than those of SVR, which is a popular max-margin regression model for continuously valued target variables.
5.2 Results of semantic segmentation For the semantic segmentation task, we test our e-SVM model on the PASCAL VOC 2011 segmentation dataset, using the training set for training and the validation set for testing. We evaluate the performance under two different annotation settings, i.e., training with pixel-wise semantic class label maps and with object bounding-box annotations. The accuracies under these two settings are 36.8% and 27.7% respectively, which are comparable to the results of the state-of-the-art segment confidence prediction model (i.e., SVR) [5] used in the semantic segmentation task.
5.3 Results of object detection As mentioned in Section 4.2, one of the natural applications of our e-SVM method is the object detection task. Most recently, Girshick et al. [14] presented the Regions with CNN features (RCNN) method, using a convolutional neural network pre-trained on the ImageNet dataset [6] and fine-tuned on the PASCAL VOC datasets. They achieved a significant improvement over the previous state-of-the-art algorithms (e.g., the Deformable Part-based Model (DPM) [11]) and pushed detection performance to a very high level (an average AP of 58.5 with bounding box regression on PASCAL VOC 2007).

          plane  bike  bird  boat  bottle  bus   car   cat   chair  cow
RCNN      64.1   69.2  50.4  41.2  33.2    62.8  70.5  61.8  32.4   58.4
Ours      63.7   70.2  51.9  42.5  33.4    63.2  71.3  62.0  34.7   58.7
Gain      -0.4   1.0   1.5   1.3   0.2     0.4   0.8   0.2   2.3    0.2
RCNN (bb) 68.1   72.8  56.8  43.0  36.8    66.3  74.2  67.6  34.4   63.5
Ours (bb) 70.4   74.2  59.1  44.7  38.0    67.2  74.6  69.0  36.7   64.3
Gain (bb) 2.3    1.4   2.3   1.6   1.2     1.0   0.3   1.3   2.3    0.8

          table  dog   horse motor person  plant sheep sofa  train  tv    Average
RCNN      45.8   55.8  61.0  66.8  53.9    30.9  53.3  49.2  56.9   64.1  54.1
Ours      47.8   57.9  61.2  67.5  54.9    34.5  55.8  51.0  58.4   65.0  55.3
Gain      2.0    2.1   0.3   0.8   1.0     3.7   2.5   1.8   1.6    0.9   1.2
RCNN (bb) 54.5   61.2  69.1  68.6  58.7    33.4  62.9  51.1  62.5   64.8  58.5
Ours (bb) 56.4   62.9  69.3  69.9  59.6    35.6  64.6  53.2  64.3   65.5  60.0
Gain (bb) 1.9    1.8   0.2   1.4   0.9     2.2   1.7   2.1   1.8    0.7   1.5

Table 1: Detection results on PASCAL VOC 2007. "bb" denotes results after applying bounding box regression. "Gain" is the AP improvement of our system over RCNN under the same setting (with or without bounding box regression).

A question arises: can we further improve their performance? The answer is yes. In our method, we first learn the latent e-SVM models based on the object bounding-box annotations and compute the spatial confidence map features as in Section 4.2. We then simply concatenate them with the RCNN features to train object classifiers on candidate windows. We use the PASCAL VOC 2007 dataset in this experiment. As shown in Table 1, our method improves the average AP by 1.2 before applying bounding box regression. For some categories on which the original RCNN does not perform well, such as potted plant, the gain in AP is up to 3.65. After applying bounding box regression to both RCNN and our algorithm, the average gain is 1.5. In this experiment, we set m = 5 and adopt average pooling of the pixel-level confidence scores within each spatial bin. We also modified the bounding box regression method used in [14] by augmenting the fifth-layer features with additive kernel approximations [25], which leads to a slightly improved performance.
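The spatial confidence map feature used above (average pooling over an m x m grid of bins, with m = 5) can be sketched as follows. The function and argument names are hypothetical; we assume a per-class pixel-level confidence map and a candidate window given as (x0, y0, x1, y1).

```python
import numpy as np

def spatial_confidence_feature(conf_map, box, m=5):
    """Average-pool the pixel-level confidence scores inside `box`
    (x0, y0, x1, y1) over an m x m grid of spatial bins, yielding an
    m*m feature vector to concatenate with the window's other features."""
    x0, y0, x1, y1 = box
    patch = conf_map[y0:y1, x0:x1]
    h, w = patch.shape
    ys = np.linspace(0, h, m + 1).astype(int)   # bin boundaries along y
    xs = np.linspace(0, w, m + 1).astype(int)   # bin boundaries along x
    feat = np.empty((m, m))
    for i in range(m):
        for j in range(m):
            cell = patch[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            feat[i, j] = cell.mean() if cell.size else 0.0
    return feat.ravel()
```

With m = 5 this yields 25 pooled values per class, which are then concatenated with the RCNN features of the candidate window.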
In summary, we achieve an average AP of 60.0, which is 1.5 higher than the best previously reported result on this dataset (the original RCNN with bounding box regression). Please note that we only use the annotations in PASCAL VOC 2007 to train the e-SVM classifiers and have not considered context. The results are expected to improve further if the data in ImageNet is used.
6 Conclusion We present a novel learning algorithm called e-SVM that handles the situation in which the labels of the training data are continuous values ranging over a bounded interval. It can be applied to the segment proposal classification task and can easily be extended to learn segment classifiers under weak supervision (e.g., when only bounding box annotations are available). We apply this method to two major computer vision tasks (i.e., semantic segmentation and object detection), and obtain state-of-the-art object detection performance on the PASCAL VOC 2007 dataset. We believe that, with the ever-growing size of datasets, it is increasingly important to learn segment classifiers under weak supervision to reduce the amount of labeling required. In future work, we will consider using the bounding box annotations from large datasets, such as ImageNet, to further improve semantic segmentation performance on PASCAL VOC.
Acknowledgements. We gratefully acknowledge funding support from the National Science Foundation (NSF) with award CCF-1317376, and from the National Institutes of Health (NIH) grant 5R01EY022247-03. We also thank the NVIDIA Corporation for providing GPUs for our experiments.
References
[1] R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk. SLIC superpixels compared to state-of-the-art superpixel methods. TPAMI, 34(11):2274–2282, 2012.
[2] S. Andrews, I. Tsochantaridis, and T. Hofmann. Support vector machines for multiple-instance learning. In Advances in Neural Information Processing Systems 15, pages 561–568. MIT Press, 2003.
[3] P. Arbelaez, B. Hariharan, C. Gu, S.
Gupta, and J. Malik. Semantic segmentation using regions and parts. In CVPR, 2012.
[4] J. Carreira and C. Sminchisescu. CPMC: Automatic object segmentation using constrained parametric min-cuts. TPAMI, 34(7):1312–1328, 2012.
[5] J. Carreira, R. Caseiro, J. Batista, and C. Sminchisescu. Semantic segmentation with second-order pooling. In ECCV, pages 430–443, 2012.
[6] J. Deng, A. Berg, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2010 (VOC2010) Results. http://www.image-net.org/challenges/LSVRC/2012/index.
[7] T. G. Dietterich, R. H. Lathrop, and T. Lozano-Pérez. Solving the multiple instance problem with axis-parallel rectangles. Artif. Intell., 89(1-2):31–71, Jan. 1997.
[8] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html.
[9] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2011 (VOC2011) Results. http://www.pascalnetwork.org/challenges/VOC/voc2011/workshop/index.html.
[10] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871–1874, 2008.
[11] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 32(9):1627–1645, 2010.
[12] P. F. Felzenszwalb and D. P. Huttenlocher. Efficient graph-based image segmentation. IJCV, 59(2):167–181, Sept. 2004.
[13] S. Fidler, R. Mottaghi, A. L. Yuille, and R. Urtasun. Bottom-up segmentation for top-down detection. In CVPR, pages 3294–3301, 2013.
[14] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[15] K. Järvelin and J. Kekäläinen. Cumulated gain-based evaluation of IR techniques. TOIS, 20(4):422–446, 2002.
[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
[17] M. Lapin, M. Hein, and B. Schiele. Learning using privileged information: SVM+ and weighted SVM. Neural Networks, 53:95–108, 2014.
[18] F. Li, J. Carreira, and C. Sminchisescu. Object recognition as ranking holistic figure-ground hypotheses. In CVPR, pages 1712–1719, 2010.
[19] Y. Liu, J. Liu, Z. Li, J. Tang, and H. Lu. Weakly-supervised dual clustering for image semantic segmentation. In CVPR, pages 2075–2082, 2013.
[20] A. Müller and S. Behnke. Multi-instance methods for partially supervised image segmentation. In PSL, pages 110–119, 2012.
[21] X. Ren, L. Bo, and D. Fox. RGB-(D) scene labeling: Features and algorithms. In CVPR, June 2012.
[22] J. Shotton, M. Johnson, and R. Cipolla. Semantic texton forests for image categorization and segmentation. In CVPR, pages 1–8, 2008.
[23] J. Suykens, J. De Brabanter, L. Lukas, and J. Vandewalle. Weighted least squares support vector machines: robustness and sparse approximation. Neurocomputing, 48:85–105, 2002.
[24] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 104(2):154–171, 2013.
[25] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. TPAMI, 34(3):480–492, 2012.
[26] A. Vezhnevets, V. Ferrari, and J. M. Buhmann. Weakly supervised structured output learning for semantic segmentation. In CVPR, pages 845–852, 2012.
[27] X. Yang, Q. Song, and A. Cao. Weighted support vector machine for data classification. In IJCNN, 2005.
On the Computational Efficiency of Training Neural Networks Roi Livni The Hebrew University roi.livni@mail.huji.ac.il Shai Shalev-Shwartz The Hebrew University shais@cs.huji.ac.il Ohad Shamir Weizmann Institute of Science ohad.shamir@weizmann.ac.il Abstract It is well-known that neural networks are computationally hard to train. On the other hand, in practice, modern-day neural networks are trained efficiently using SGD and a variety of tricks that include different activation functions (e.g. ReLU), over-specification (i.e., training networks which are larger than needed), and regularization. In this paper we revisit the computational complexity of training neural networks from a modern perspective. We provide both positive and negative results, some of which yield new provably efficient and practical algorithms for training certain types of neural networks. 1 Introduction One of the most significant recent developments in machine learning has been the resurgence of “deep learning”, usually in the form of artificial neural networks. A combination of algorithmic advancements, as well as increasing computational power and data size, has led to a breakthrough in the effectiveness of neural networks, and they have been used to obtain very impressive practical performance on a variety of domains (a few recent examples include [17, 16, 24, 10, 7]). A neural network can be described by a directed acyclic graph, where each vertex corresponds to a neuron and each edge is associated with a weight. Each neuron calculates a weighted sum of the outputs of the neurons connected to it (possibly adding a bias term), passes the resulting number through an activation function σ : R → R, and outputs the result. We focus on feed-forward neural networks, where the neurons are arranged in layers, in which the output of each layer forms the input of the next layer.
Intuitively, the input goes through several transformations, with higher-level concepts derived from lower-level ones. The depth of the network is the number of layers, and the size of the network is the total number of neurons. From the perspective of statistical learning theory, by specifying a neural network architecture (i.e. the underlying graph and the activation function) we obtain a hypothesis class, namely, the set of all prediction rules obtained by using the same network architecture while changing the weights of the network. Learning the class involves finding a specific set of weights, based on training examples, which yields a predictor that has good performance on future examples. When studying a hypothesis class we are usually concerned with three questions: 1. Sample complexity: how many examples are required to learn the class. 2. Expressiveness: what type of functions can be expressed by predictors in the class. 3. Training time: how much computation time is required to learn the class. For simplicity, let us first consider neural networks with a threshold activation function (i.e. σ(z) = 1 if z > 0 and 0 otherwise), over the boolean input space {0, 1}^d, and with a single output in {0, 1}. The sample complexity of such neural networks is well understood [3]. It is known that the VC dimension grows linearly with the number of edges (up to log factors). It is also easy to see that no matter what the activation function is, as long as we represent each weight of the network using a constant number of bits, the VC dimension is bounded by a constant times the number of edges. This implies that empirical risk minimization - or finding weights with small average loss over the training data - can be an effective learning strategy from a statistical point of view. As to the expressiveness of such networks, it is easy to see that neural networks of depth 2 and sufficient size can express all functions from {0, 1}^d to {0, 1}.
However, it is also possible to show that for this to happen, the size of the network must be exponential in d (e.g. [19, Chapter 20]). Which functions can we express using a network of polynomial size? The theorem below shows that all boolean functions that can be computed in time O(T(d)) can also be expressed by a network of depth O(T(d)) and size O(T(d)^2). Theorem 1. Let T : N → N and for every d, let Fd be the set of functions that can be implemented by a Turing machine using at most T(d) operations. Then there exist constants b, c ∈ R+ such that for every d, there is a network architecture of depth c T(d) + b, size (c T(d) + b)^2, and threshold activation function, such that the resulting hypothesis class contains Fd. The proof of the theorem follows directly from the relation between the time complexity of programs and their circuit complexity (see, e.g., [22]), and the fact that we can simulate the standard boolean gates using a fixed number of neurons. We see that from the statistical perspective, neural networks form an excellent hypothesis class; on one hand, for every runtime T(d), by using depth of O(T(d)) we contain all predictors that can be run in time at most T(d). On the other hand, the sample complexity of the resulting class depends polynomially on T(d). The main caveat of neural networks is the training time. Existing theoretical results are mostly negative, showing that successfully learning with these networks is computationally hard in the worst case. For example, neural networks of depth 2 contain the class of intersections of halfspaces (where the number of halfspaces is the number of neurons in the hidden layer). By reduction from k-coloring, it has been shown that finding the weights that best fit the training set is NP-hard ([9]). [6] has shown that even finding weights that result in close-to-minimal empirical error is computationally infeasible.
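The gate-simulation step in the proof sketch of Theorem 1 is easy to make concrete. The weights and biases below are our own illustrative choices of single threshold neurons for the standard gates; XOR, which is not linearly separable, needs a small two-layer circuit.

```python
def threshold(z):
    """The threshold activation: sigma(z) = 1 if z > 0 else 0."""
    return 1 if z > 0 else 0

# Each boolean gate is one threshold neuron: a weighted sum plus a bias.
def AND(a, b): return threshold(a + b - 1.5)
def OR(a, b):  return threshold(a + b - 0.5)
def NOT(a):    return threshold(0.5 - a)

# XOR cannot be computed by a single neuron; a two-layer circuit suffices:
# x XOR y = (x OR y) AND NOT(x AND y).
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))
```

Composing such neurons gate-by-gate turns any boolean circuit with T gates into a threshold network whose depth and size are proportional to those of the circuit.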
These hardness results focus on proper learning, where the goal is to find a nearly-optimal predictor with a fixed network architecture A. However, if our goal is to find a good predictor, there is no reason to limit ourselves to predictors with one particular architecture. Instead, we can try, for example, to find a network with a different architecture A′, which is almost as good as the best network with architecture A. This is an example of the powerful concept of improper learning, which has often proved useful in circumventing computational hardness results. Unfortunately, there are hardness results showing that even with improper learning, and even if the data is generated exactly from a small, depth-2 neural network, there are no efficient algorithms which can find a predictor that performs well on test data. In particular, [15] and [12] have shown this in the case of learning intersections of halfspaces, using cryptographic and average-case complexity assumptions. On a related note, [4] recently showed positive results on learning from data generated by a neural network of a certain architecture and randomly connected weights. However, the assumptions used are strong and unlikely to hold in practice. Despite this theoretical pessimism, in practice, modern-day neural networks are trained successfully in many learning problems. There are several tricks that enable successful training: • Changing the activation function: The threshold activation function, σ(a) = 1_{a>0}, has zero derivative almost everywhere. Therefore, we cannot apply gradient-based methods with this activation function. To circumvent this problem, we can consider other activation functions. Most widely known is the sigmoidal activation, e.g. σ(a) = 1/(1 + e^{−a}), which forms a smooth approximation of the threshold function. Another recently popular activation function is the rectified linear unit (ReLU) function, σ(a) = max{0, a}.
Note that subtracting a shifted ReLU from a ReLU yields an approximation of the threshold function, so by doubling the number of neurons we can approximate a network with threshold activation by a network with ReLU activation. • Over-specification: It has been empirically observed that it is easier to train networks which are larger than needed. Indeed, we demonstrate this phenomenon empirically in Sec. 5. • Regularization: It has been empirically observed that regularizing the weights of the network speeds up the convergence (e.g. [16]). The goal of this paper is to revisit and re-raise the question of neural networks’ computational efficiency, from a modern perspective. This is a challenging topic, and we do not pretend to give any definite answers. However, we provide several results, both positive and negative. Most of them are new, although a few appeared in the literature in other contexts. Our contributions are as follows: • We make a simple observation that for sufficiently over-specified networks, global optima are ubiquitous and in general computationally easy to find. Although this holds only for extremely large networks which will overfit, it can be seen as an indication that the computational hardness of learning does decrease with the amount of over-specification. This is also demonstrated empirically in Sec. 5. • Motivated by the idea of changing the activation function, we consider the quadratic activation function, σ(a) = a^2. Networks with the quadratic activation compute polynomial functions of the input in R^d, hence we call them polynomial networks. Our main findings for such networks are as follows: – Networks with quadratic activation are as expressive as networks with threshold activation. – Constant-depth networks with quadratic activation can be learned in polynomial time. – Sigmoidal networks of depth 2, and with ℓ1 regularization, can be approximated by polynomial networks of depth O(log log(1/ϵ)).
It follows that sigmoidal networks with ℓ1 regularization can be learned in polynomial time as well. – The aforementioned positive results are interesting theoretically, but lead to impractical algorithms. We provide a practical, provably correct, algorithm for training depth-2 polynomial networks. While such networks can also be learned using a linearization trick, our algorithm is more efficient and returns networks whose size does not depend on the data dimension. Our algorithm follows a forward greedy selection procedure, where each step of the greedy selection procedure builds a new neuron by solving an eigenvalue problem. – We generalize the above algorithm to depth 3, in which each forward greedy step involves an efficient approximate solution to a tensor approximation problem. The algorithm can learn a rich sub-class of depth-3 polynomial networks. – We describe some experimental evidence, showing that our practical algorithm is competitive with state-of-the-art neural network training methods for depth-2 networks. 2 Sufficiently Over-Specified Networks Are Easy to Train We begin by considering the idea of over-specification, and make an observation that for sufficiently over-specified networks, the optimization problem associated with training them is generally quite easy to solve, and that global optima are in a sense ubiquitous. As an interesting contrast, note that for very small networks (such as a single neuron with a non-convex activation function), the associated optimization problem is generally hard, and can exhibit exponentially many local (non-global) minima [5]. We emphasize that our observation only holds for extremely large networks, which will overfit in any reasonable scenario, but it does point to a possible spectrum where computational cost decreases with the amount of over-specification. To present the result, let X ∈ R^{d×m} be a matrix of m training examples in R^d. We can think of the network as composed of two mappings.
The first maps X into a matrix Z ∈ R^{n×m}, where n is the number of neurons whose outputs are connected to the output layer. The second mapping is a linear mapping Z ↦ WZ, where W ∈ R^{o×n}, that maps Z to the o neurons in the output layer. Finally, there is a loss function ℓ : R^{o×m} → R, which we assume to be convex, that assesses the quality of the prediction on the entire data (and will of course depend on the m labels). Let V denote all the weights that affect the mapping from X to Z, and denote by f(V) the function that maps V to Z. The optimization problem associated with learning the network is therefore min over W, V of ℓ(W f(V)). The function ℓ(W f(V)) is generally non-convex, and may have local minima. However, if n ≥ m, then it is reasonable to assume that Rank(f(V)) = m with large probability (under some random choice of V), due to the non-linear nature of the function computed by neural networks.¹ In that case, we can simply fix V and solve min over W of ℓ(W f(V)), which is computationally tractable as ℓ is assumed to be convex. Since f(V) has full rank, the solution of this problem corresponds to a global optimum of ℓ, and hence to a global optimum of the original optimization problem. Thus, for sufficiently large networks, finding global optima is generally easy, and they are in a sense ubiquitous.
¹ For example, consider the function computed by the first layer, X ↦ σ(V X), where σ is a sigmoid function. Since σ is non-linear, the columns of σ(V X) will not be linearly dependent in general.
3 The Hardness of Learning Neural Networks We now review several known hardness results and apply them to our learning setting. For simplicity, throughout most of this section we focus on the PAC model in the binary classification case, over the Boolean cube, in the realizable case, and with a fixed target accuracy.² Fix some ϵ, δ ∈ (0, 1). For every dimension d, let the input space be Xd = {0, 1}^d and let H be a hypothesis class of functions from Xd to {±1}.
We often omit the subscript d when it is clear from context. A learning algorithm A has access to an oracle that samples x according to an unknown distribution D over X and returns (x, f*(x)), where f* is some unknown target hypothesis in H. The objective of the algorithm is to return a classifier f : X → {±1} such that, with probability of at least 1 − δ, P over x ∼ D of [f(x) ≠ f*(x)] ≤ ϵ. We say that A is efficient if it runs in time poly(d) and the function it returns can also be evaluated on a new instance in time poly(d). If there is such an A, we say that H is efficiently learnable. In the context of neural networks, every network architecture defines a hypothesis class, Nt,n,σ, that contains all target functions f that can be implemented using a neural network with t layers, n neurons (excluding input neurons), and an activation function σ. The immediate question is which Nt,n,σ are efficiently learnable. We will first address this question for the threshold activation function, σ0,1(z) = 1 if z > 0 and 0 otherwise. Observing that depth-2 networks with the threshold activation function can implement intersections of halfspaces, we will rely on the following hardness result, due to [15]. Theorem 2 (Theorem 1.2 in [15]). Let X = {±1}^d, let Ha = { x ↦ σ0,1(w⊤x − b − 1/2) : b ∈ N, w ∈ N^d, |b| + ∥w∥1 ≤ poly(d) }, and let Ha_k = { x ↦ h1(x) ∧ h2(x) ∧ . . . ∧ hk(x) : ∀i, hi ∈ Ha }, where k = d^ρ for some constant ρ > 0. Then under a certain cryptographic assumption, Ha_k is not efficiently learnable. Under a different complexity assumption, [12] showed a similar result even for k = ω(1). As mentioned before, neural networks of depth ≥ 2 with the σ0,1 activation function can express intersections of halfspaces: for example, the first layer consists of k neurons computing the k halfspaces, and the second layer computes their conjunction by the mapping x ↦ σ0,1(Σi xi − k + 1/2).
Trivially, if some class H is not efficiently learnable, then any class containing it is also not efficiently learnable. We thus obtain the following corollary: Corollary 1. For every t ≥ 2, n = ω(1), the class Nt,n,σ0,1 is not efficiently learnable (under the complexity assumption given in [12]). What happens when we change the activation function? In particular, two widely used activation functions for neural networks are the sigmoidal activation function, σsig(z) = 1/(1 + exp(−z)), and the rectified linear unit (ReLU) activation function, σrelu(z) = max{z, 0}. As a first observation, note that for |z| ≫ 1 we have σsig(z) ≈ σ0,1(z). Our data domain is the discrete Boolean cube, hence if we allow the weights of the network to be arbitrarily large, then Nt,n,σ0,1 ⊆ Nt,n,σsig. Similarly, the function σrelu(z) − σrelu(z − 1) equals σ0,1(z) for every |z| ≥ 1. As a result, without restricting the weights, we can simulate each threshold-activated neuron by two ReLU-activated neurons, which implies that Nt,n,σ0,1 ⊆ Nt,2n,σrelu. Hence, Corollary 1 applies to both sigmoidal networks and ReLU networks as well, as long as we do not regularize the weights of the network.²
² While we focus on the realizable case (i.e., there exists f* ∈ H that provides perfect predictions), with a fixed accuracy (ϵ) and confidence (δ), since we are dealing with hardness results, the results trivially apply to the agnostic case and to learning with arbitrarily small accuracy and confidence parameters.
What happens when we do regularize the weights? Let Nt,n,σ,L be all target functions that can be implemented using a neural network of depth t, size n, and activation function σ, where we restrict the input weights of each neuron to satisfy ∥w∥1 + |b| ≤ L. One may argue that in many real-world distributions, the difference between the two classes, Nt,n,σ,L and Nt,n,σ0,1, is small.
Roughly speaking, when the distribution density is low around the decision boundary of neurons (similarly to separation with margin assumptions), then sigmoidal neurons will be able to effectively simulate threshold activated neurons. In practice, the sigmoid and ReLU activation functions are advantageous over the threshold activation function, since they can be trained using gradient based methods. Can these empirical successes be turned into formal guarantees? Unfortunately, a closer examination of Thm. 2 demonstrates that if L = Ω(d) then learning N2,n,σsig,L and N2,n,σrelu,L is still hard. Formally, to apply these networks to binary classification, we follow a standard definition of learning with a margin assumption: We assume that the learner receives examples of the form (x, sign(f ∗(x))) where f ∗is a real-valued function that comes from the hypothesis class, and we further assume that |f ∗(x)| ≥1. Even under this margin assumption, we have the following: Corollary 2. For every t ≥2, n = ω(1), L = Ω(d), the classes Nt,n,σsig,L and Nt,n,σrelu,L are not efficiently learnable (under the complexity assumption given in [12]). A proof is provided in the appendix. What happens when L is much smaller? Later on in the paper we will show positive results for L being a constant and the depth being fixed. These results will be obtained using polynomial networks, which we study in the next section. 4 Polynomial Networks In the previous section we have shown several strong negative results for learning neural networks with the threshold, sigmoidal, and ReLU activation functions. One way to circumvent these hardness results is by considering another activation function. Maybe the simplest non-linear function is the squared function, σ2(x) = x2. We call networks that use this activation function polynomial networks, since they compute polynomial functions of their inputs. 
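The identity σrelu(z) − σrelu(z − 1) = σ0,1(z) for |z| ≥ 1, used in the simulation argument above, is easy to check numerically:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def threshold(z):
    return (z > 0).astype(float)

# relu(z) - relu(z - 1) is 0 for z <= 0, equals z on (0, 1), and is 1 for
# z >= 1, so it agrees with the threshold function whenever |z| >= 1.
def ramp(z):
    return relu(z) - relu(z - 1.0)
```

On the interval (0, 1) the two functions disagree, which is why the simulation only holds away from the decision boundary (equivalently, under a margin assumption).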
As in the previous section, we denote by Nt,n,σ2,L the class of functions that can be implemented using a neural network of depth t, size n, squared activation function, and a bound L on the ℓ1 norm of the input weights of each neuron. Whenever we do not specify L we refer to polynomial networks with unbounded weights. Below we study the expressiveness and computational complexity of polynomial networks. We note that algorithms for efficiently learning (real-valued) sparse or low-degree polynomials have been studied in several previous works (e.g. [13, 14, 8, 2, 1]). However, these rely on strong distributional assumptions, such as the data instances having a uniform or log-concave distribution, while we are interested in a distribution-free setting. 4.1 Expressiveness We first show that, similarly to networks with threshold activation, polynomial networks of polynomial size can express all functions that can be implemented efficiently using a Turing machine. Theorem 3 (Polynomial networks can express Turing machines). Let Fd and T be as in Thm. 1. Then there exist constants b, c ∈ R+ such that for every d, the class Nt,n,σ2,L, with t = c T(d) log(T(d)) + b, n = t^2, and L = b, contains Fd. The proof of the theorem relies on the result of [18] and is given in the appendix. Another relevant expressiveness result, which we will use later, shows that polynomial networks can approximate networks with sigmoidal activation functions: Theorem 4. Fix 0 < ϵ < 1, L ≥ 3 and t ∈ N. There are Bt ∈ Õ(log(tL + L log(1/ϵ))) and Bn ∈ Õ(tL + L log(1/ϵ)) such that for every f ∈ Nt,n,σsig,L there is a function g ∈ NtBt,nBn,σ2 such that sup over ∥x∥∞ < 1 of ∥f(x) − g(x)∥∞ ≤ ϵ. The proof relies on an approximation of the sigmoid function based on Chebyshev polynomials, as was done in [21], and is given in the appendix. 4.2 Training Time We now turn to the computational complexity of learning polynomial networks. We first show that it is hard to learn polynomial networks of depth Ω(log(d)).
Indeed, by combining Thm. 4 and Corollary 2 we obtain the following: Corollary 3. The class Nt,n,σ2, where t = Ω(log(d)) and n = Ω(d), is not efficiently learnable. On the flip side, constant-depth polynomial networks can be learned in polynomial time, using a simple linearization trick. Specifically, the class of polynomial networks of constant depth t is contained in the class of multivariate polynomials of total degree at most s = 2^t. This class can be represented as a d^s-dimensional linear space, where each vector is the coefficient vector of some such polynomial. Therefore, the class of polynomial networks of depth t can be learned in time poly(d^(2^t)), by mapping each instance vector x ∈ R^d to all of its monomials, and learning a linear predictor on top of this representation (which can be done efficiently in the realizable case, or when a convex loss function is used). In particular, if t is a constant then so is 2^t, and therefore polynomial networks of constant depth are efficiently learnable. Another way to learn this class is using support vector machines with polynomial kernels. An interesting application of this observation is that depth-2 sigmoidal networks are efficiently learnable with sufficient regularization, as formalized in the result below. This contrasts with Corollary 2, which provides a hardness result without regularization. Theorem 5. The class N2,n,σsig,L can be learned, to accuracy ϵ, in time poly(T) where T = (1/ϵ) · O(d^(4L ln(11L^2+1))). The idea of the proof is as follows. Suppose that we obtain data from some f ∈ N2,n,σsig,L. Based on Thm. 4, there is g ∈ N2Bt,nBn,σ2 that approximates f to some fixed accuracy ϵ0 = 0.5, where Bt and Bn are as defined in Thm. 4 for t = 2. Now we can learn N2Bt,nBn,σ2 by considering the class of all polynomials of total degree 2^(2Bt), and applying the linearization technique discussed above. Since f is assumed to separate the data with margin 1 (i.e.
$y = \mathrm{sign}(f^*(x))$ and $|f^*(x)| \ge 1$), then $g$ separates the data with margin 0.5, which is enough for establishing accuracy $\epsilon$ in sample and time complexity that depend polynomially on $1/\epsilon$.

4.3 Learning 2-layer and 3-layer Polynomial Networks

While interesting theoretically, the above results are not very practical, since the time and sample complexity grow very fast with the depth of the network.3 In this section we describe practical, provably correct, algorithms for the special cases of depth-2 and depth-3 polynomial networks, with some additional constraints. Although such networks can be learned in polynomial time via explicit linearization (as described in Section 4.2), the runtime and resulting network size scale quadratically (for depth-2) or cubically (for depth-3) with the data dimension $d$. In contrast, our algorithms and guarantees have a much milder dependence on $d$. We first consider 2-layer polynomial networks of the following form:

$$\mathcal{P}_{2,k} = \left\{ x \mapsto b + w_0^\top x + \sum_{i=1}^{k} \alpha_i (w_i^\top x)^2 \;:\; \forall i \ge 1,\ |\alpha_i| \le 1,\ \|w_i\|_2 = 1 \right\}.$$

This network corresponds to one hidden layer containing $k$ neurons with the squared activation function, where we restrict the input weights of all neurons in the network to have bounded $\ell_2$ norm, and where we also allow a direct linear dependency between the input layer and the output layer. We'll describe an efficient algorithm for learning this class, which is based on the GECO algorithm for convex optimization with low-rank constraints [20].

3 If one uses SVM with polynomial kernels, the time and sample complexity may be small under margin assumptions in a feature space corresponding to a given kernel. Note, however, that large margin in that space is very different from the assumption we make here, namely, that there is a network with a small number of hidden neurons that works well on the data.

The goal of the algorithm is to find $f$ that minimizes the objective

$$R(f) = \frac{1}{m} \sum_{i=1}^{m} \ell(f(x_i), y_i), \qquad (1)$$

where $\ell : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is a loss function.
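As a point of reference, the explicit-linearization baseline mentioned above (Section 4.2) can be sketched in a few lines for degree 2; the function names and the ridge-regularized least-squares solver are our own illustrative choices, since the text only requires some efficient convex solver on the monomial representation:

```python
import numpy as np

# Hypothetical illustration of the explicit-linearization baseline: map each
# x in R^d to all monomials of degree <= 2 and fit a linear model on top.
# The ridge solver is our choice for the example, not prescribed by the text.

def degree2_features(X):
    """Map each row x to [1, x_1..x_d, x_i*x_j for i <= j] (O(d^2) features)."""
    n, d = X.shape
    quad = np.stack([X[:, i] * X[:, j] for i in range(d) for j in range(i, d)],
                    axis=1)
    return np.hstack([np.ones((n, 1)), X, quad])

def fit_linearized(X, y, lam=1e-6):
    Phi = degree2_features(X)
    # Ridge solution: (Phi^T Phi + lam*I)^{-1} Phi^T y
    theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)
    return lambda Xnew: degree2_features(Xnew) @ theta

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
w = rng.standard_normal(5)
y = 0.3 + X @ w + (X @ w) ** 2     # realizable by a depth-2 squared network
pred = fit_linearized(X, y)
print(np.max(np.abs(pred(X) - y)) < 1e-3)  # target lies in the feature span
```

The quadratic blow-up from $d$ to $O(d^2)$ features is exactly the cost that the GECO-based algorithms below are designed to avoid.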
We'll assume that $\ell$ is $\beta$-smooth and convex. The basic idea of the algorithm is to gradually add hidden neurons to the hidden layer, in a greedy manner, so as to decrease the loss function over the data. To do so, define $V = \{x \mapsto (w^\top x)^2 : \|w\|_2 = 1\}$, the set of functions that can be implemented by hidden neurons. Then every $f \in \mathcal{P}_{2,k}$ is an affine function plus a weighted sum of functions from $V$. The algorithm starts with $f$ being the minimizer of $R$ over all affine functions. Then at each greedy step, we search for $g \in V$ that minimizes a first-order approximation of $R(f + \eta g)$:

$$R(f + \eta g) \approx R(f) + \eta \frac{1}{m} \sum_{i=1}^{m} \ell'(f(x_i), y_i)\, g(x_i), \qquad (2)$$

where $\ell'$ is the derivative of $\ell$ w.r.t. its first argument. Observe that for every $g \in V$ there is some $w$ with $\|w\|_2 = 1$ for which $g(x) = (w^\top x)^2 = w^\top x x^\top w$. Hence, the right-hand side of Eq. (2) can be rewritten as $R(f) + \eta\, w^\top \left(\frac{1}{m} \sum_{i=1}^{m} \ell'(f(x_i), y_i)\, x_i x_i^\top\right) w$. The vector $w$ that minimizes this expression (for positive $\eta$) is the leading eigenvector of the matrix $\frac{1}{m} \sum_{i=1}^{m} \ell'(f(x_i), y_i)\, x_i x_i^\top$. We add this vector as a hidden neuron to the network.4 Finally, we minimize $R$ w.r.t. the weights from the hidden layer to the output layer (namely, w.r.t. the weights $\alpha_i$). The following theorem, which follows directly from Theorem 1 of [20], provides a convergence guarantee for GECO. Observe that the theorem gives a guarantee for learning $\mathcal{P}_{2,k}$ if we are allowed to output an over-specified network.

Theorem 6. Fix some $\epsilon > 0$. Assume that the loss function is convex and $\beta$-smooth. Then if the GECO algorithm is run for $r > \frac{2\beta k^2}{\epsilon}$ iterations, it outputs a network $f \in \mathcal{N}_{2,r,\sigma^2}$ for which $R(f) \le \min_{f^* \in \mathcal{P}_{2,k}} R(f^*) + \epsilon$.

We next consider a hypothesis class consisting of third-degree polynomials, which is a subset of 3-layer polynomial networks (see Lemma 1 in the appendix). The hidden neurons will be functions from the class $V = \cup_{i=1}^{3} V_i$, where $V_i = \left\{ x \mapsto \prod_{j=1}^{i} (w_j^\top x) \;:\; \forall j,\ \|w_j\|_2 = 1 \right\}$. The hypothesis class we consider is $\mathcal{P}_{3,k} = \left\{ x \mapsto \sum_{i=1}^{k} \alpha_i g_i(x) \;:\; \forall i,\ |\alpha_i| \le 1,\ g_i \in V \right\}$.
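A minimal numerical sketch of the greedy step just described, for the depth-2 case under squared loss, might look as follows; the function name is ours, and the unconstrained least-squares refit of the output weights (ignoring the $|\alpha_i| \le 1$ bound) is a simplification of ours, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical sketch of the greedy step above for the depth-2 class, under
# squared loss l(p, y) = 0.5*(p - y)^2, so l'(p, y) = p - y. Each step forms
# G = (1/m) * sum_i l'(f(x_i), y_i) x_i x_i^T, adds the eigenvector of G with
# the largest-magnitude eigenvalue as a new squared neuron, and refits the
# output-layer weights by least squares (the |alpha_i| <= 1 bound is ignored).

def geco_depth2(X, y, steps=10):
    m, d = X.shape
    base = np.hstack([np.ones((m, 1)), X])      # affine part of the predictor
    W = []                                      # hidden-neuron input weights
    coef = np.linalg.lstsq(base, y, rcond=None)[0]
    for _ in range(steps):
        Phi = np.hstack([base] + [((X @ w) ** 2)[:, None] for w in W])
        resid = Phi @ coef - y                  # l'(f(x_i), y_i) for each i
        G = (X * resid[:, None]).T @ X / m
        evals, evecs = np.linalg.eigh(G)
        W.append(evecs[:, np.argmax(np.abs(evals))])   # largest |eigenvalue|
        Phi = np.hstack([base] + [((X @ w) ** 2)[:, None] for w in W])
        coef = np.linalg.lstsq(Phi, y, rcond=None)[0]  # refit output layer
    Phi = np.hstack([base] + [((X @ w) ** 2)[:, None] for w in W])
    return np.mean((Phi @ coef - y) ** 2)

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 8))
w1, w2 = rng.standard_normal(8), rng.standard_normal(8)
y = (X @ w1) ** 2 - 0.5 * (X @ w2) ** 2   # realizable with two squared neurons
print(geco_depth2(X, y, steps=8) < geco_depth2(X, y, steps=0))
```

Because the output layer is refit over a superset of features at every step, the training error can only decrease as neurons are added, mirroring the guarantee of Theorem 6.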
The basic idea of the algorithm is the same as for 2-layer networks. However, while in the 2-layer case we could implement each greedy step efficiently by solving an eigenvalue problem, we now face the following tensor approximation problem at each greedy step:

$$\max_{g \in V_3} \frac{1}{m} \sum_{i=1}^{m} \ell'(f(x_i), y_i)\, g(x_i) \;=\; \max_{\|w\|=1,\|u\|=1,\|v\|=1} \frac{1}{m} \sum_{i=1}^{m} \ell'(f(x_i), y_i)\,(w^\top x_i)(u^\top x_i)(v^\top x_i).$$

While this is in general a hard optimization problem, we can approximate it, and luckily, an approximate greedy step suffices for the success of the greedy procedure. This procedure is given in Figure 1, and is again based on an approximate eigenvector computation. A guarantee for the quality of approximation is given in the appendix, and this leads to the following theorem, whose proof is given in the appendix.

Theorem 7. Fix some $\delta, \epsilon > 0$. Assume that the loss function is convex and $\beta$-smooth. Then if the GECO algorithm is run for $r > \frac{4d\beta k^2}{\epsilon(1-\tau)^2}$ iterations, where each iteration relies on the approximation procedure given in Fig. 1, then with probability $(1-\delta)^r$ it outputs a network $f \in \mathcal{N}_{3,5r,\sigma^2}$ for which $R(f) \le \min_{f^* \in \mathcal{P}_{3,k}} R(f^*) + \epsilon$.

4 It is also possible to find an approximate solution to the eigenvalue problem and still retain the performance guarantees (see [20]). Since an approximate eigenvector can be found in time $O(d)$ using the power method, the runtime of GECO depends linearly on $d$.

Input: $\{x_i\}_{i=1}^{m} \in \mathbb{R}^d$, $\alpha \in \mathbb{R}^m$, $\tau$, $\delta$
Output: a $\frac{1-\tau}{\sqrt{d}}$-approximate solution to $\max_{\|w\|,\|u\|,\|v\|=1} F(w, u, v) = \sum_i \alpha_i (w^\top x_i)(u^\top x_i)(v^\top x_i)$
Pick $w_1, \ldots, w_s$ i.i.d. according to $N(0, I_d)$, where $s = 2d \log \frac{1}{\delta}$.
For $t = 1, \ldots, s$:
  $w_t \leftarrow w_t / \|w_t\|$
  Let $A = \sum_i \alpha_i (w_t^\top x_i)\, x_i x_i^\top$ and set $u_t, v_t$ s.t. $\mathrm{Tr}(u_t^\top A v_t) \ge (1-\tau) \max_{\|u\|,\|v\|=1} \mathrm{Tr}(u^\top A v)$.
Return $w, u, v$, the maximizers of $\max_{i \le s} F(w_i, u_i, v_i)$.
Figure 1: Approximate tensor maximization.

5 Experiments

To demonstrate the practicality of GECO to train neural networks for real-world problems, we considered a pedestrian detection problem as follows.
We collected 200k training examples of image patches of size 88x40 pixels containing either pedestrians (positive examples) or hard negative examples (images that were classified as pedestrians by applying a simple linear classifier in a sliding-window manner). See a few examples of images above. We used half of the examples as a training set and the other half as a test set. [Side figure: the top plot shows test error vs. number of iterations for SGD with ReLU activation, SGD with squared activation, and GECO; the bottom plot shows MSE vs. number of iterations for over-specification factors 1, 2, 4, 8.] We calculated HoG features ([11]) from the images.5 We then trained, using GECO, a depth-2 polynomial network on the resulting features. We used 40 neurons in the hidden layer. For comparison we trained the same network architecture (i.e. 40 hidden neurons with a squared activation function) by SGD. We also trained a similar network (40 hidden neurons again) with the ReLU activation function. For the SGD implementation we tried the following tricks to speed up convergence: heuristics for initialization of the weights, learning rate rules, mini-batches, Nesterov's momentum (as explained in [23]), and dropout. The test errors of SGD as a function of the number of iterations are depicted on the top plot of the figure on the side. We also mark the performance of GECO as a straight line (since it doesn't involve SGD iterations). As can be seen, the error of GECO is slightly better than that of SGD. It should also be noted that we had to perform a very large number of SGD iterations to obtain a good solution, while the runtime of GECO was much faster. This indicates that GECO may be a valid alternative to SGD for training depth-2 networks. It is also apparent that the squared activation function is slightly better than the ReLU function for this task. The second plot of the side figure demonstrates the benefit of over-specification for SGD.
We generated random examples in $\mathbb{R}^{150}$ and passed them through a random depth-2 network that contains 60 hidden neurons with the ReLU activation function. We then tried to fit a new network to this data with over-specification factors of 1, 2, 4, 8 (e.g., an over-specification factor of 4 means that we used 60 · 4 = 240 hidden neurons). As can be clearly seen, SGD converges much faster when we over-specify the network.

Acknowledgements: This research is supported by Intel (ICRI-CI). OS was also supported by an ISF grant (No. 425/13) and a Marie-Curie Career Integration Grant. SSS and RL were also supported by the MOS center of Knowledge for AI and ML (No. 3-9243). RL is a recipient of the Google Europe Fellowship in Learning Theory, and this research is supported in part by this Google Fellowship. We thank Itay Safran for spotting a mistake in a previous version of Sec. 2 and James Martens for helpful discussions.

5 Using the Matlab implementation provided in http://www.mathworks.com/matlabcentral/fileexchange/33863-histograms-of-oriented-gradients.

References

[1] A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang. Learning polynomials with neural networks. In ICML, 2014.
[2] A. Andoni, R. Panigrahy, G. Valiant, and L. Zhang. Learning sparse polynomial functions. In SODA, 2014.
[3] M. Anthony and P. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, 2002.
[4] S. Arora, A. Bhaskara, R. Ge, and T. Ma. Provable bounds for learning some deep representations. arXiv preprint arXiv:1310.6343, 2013.
[5] P. Auer, M. Herbster, and M. Warmuth. Exponentially many local minima for single neurons. In NIPS, 1996.
[6] P. L. Bartlett and S. Ben-David. Hardness results for neural network approximation problems. Theor. Comput. Sci., 284(1):53–66, 2002.
[7] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35:1798–1828, 2013.
[8] E.
Blais, R. O'Donnell, and K. Wimmer. Polynomial regression under arbitrary product distributions. Machine Learning, 80(2–3):273–294, 2010.
[9] A. Blum and R. Rivest. Training a 3-node neural network is NP-complete. Neural Networks, 5(1):117–127, 1992.
[10] G. Dahl, T. Sainath, and G. Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In ICASSP, 2013.
[11] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[12] A. Daniely, N. Linial, and S. Shalev-Shwartz. From average case complexity to improper learning complexity. In FOCS, 2014.
[13] A. Kalai, A. Klivans, Y. Mansour, and R. Servedio. Agnostically learning halfspaces. SIAM J. Comput., 37(6):1777–1805, 2008.
[14] A. Kalai, A. Samorodnitsky, and S.-H. Teng. Learning and smoothed analysis. In FOCS, 2009.
[15] A. Klivans and A. Sherstov. Cryptographic hardness for learning intersections of halfspaces. In FOCS, 2006.
[16] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[17] Q. V. Le, M.-A. Ranzato, R. Monga, M. Devin, G. Corrado, K. Chen, J. Dean, and A. Y. Ng. Building high-level features using large scale unsupervised learning. In ICML, 2012.
[18] N. Pippenger and M. Fischer. Relations among complexity measures. Journal of the ACM (JACM), 26(2):361–381, 1979.
[19] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[20] S. Shalev-Shwartz, A. Gonen, and O. Shamir. Large-scale convex minimization with a low-rank constraint. In ICML, 2011.
[21] S. Shalev-Shwartz, O. Shamir, and K. Sridharan. Learning kernel-based halfspaces with the 0-1 loss. SIAM Journal on Computing, 40(6):1623–1646, 2011.
[22] M. Sipser. Introduction to the Theory of Computation. Thomson Course Technology, 2006.
[23] I. Sutskever, J. Martens, G. Dahl, and G. Hinton.
On the importance of initialization and momentum in deep learning. In ICML, 2013.
[24] M. Zeiler and R. Fergus. Visualizing and understanding convolutional neural networks. arXiv preprint arXiv:1311.2901, 2013.
Scaling-up Importance Sampling for Markov Logic Networks

Deepak Venugopal, Department of Computer Science, University of Texas at Dallas, dxv021000@utdallas.edu
Vibhav Gogate, Department of Computer Science, University of Texas at Dallas, vgogate@hlt.utdallas.edu

Abstract

Markov Logic Networks (MLNs) are weighted first-order logic templates for generating large (ground) Markov networks. Lifted inference algorithms for them bring the power of logical inference to probabilistic inference. These algorithms operate as much as possible at the compact first-order level, grounding or propositionalizing the MLN only as necessary. As a result, lifted inference algorithms can be much more scalable than propositional algorithms that operate directly on the much larger ground network. Unfortunately, existing lifted inference algorithms suffer from two interrelated problems, which severely affect their scalability in practice. First, for most real-world MLNs having complex structure, they are unable to exploit symmetries and end up grounding most atoms (the grounding problem). Second, they suffer from the evidence problem, which arises because evidence breaks symmetries, severely diminishing the power of lifted inference. In this paper, we address both problems by presenting a scalable, lifted importance sampling-based approach that never grounds the full MLN. Specifically, we show how to scale up the two main steps in importance sampling: sampling from the proposal distribution and weight computation. Scalable sampling is achieved by using an informed, easy-to-sample proposal distribution derived from a compressed MLN representation. Fast weight computation is achieved by only visiting a small subset of the sampled groundings of each formula instead of all of its possible groundings. We show that our new algorithm yields asymptotically unbiased estimates. Our experiments on several MLNs clearly demonstrate the promise of our approach.
1 Introduction

Markov Logic Networks (MLNs) [5] are powerful template models that define Markov networks by instantiating first-order formulas with objects from their domains. Designing scalable inference for MLNs is a challenging task because, as the domain-size increases, the Markov network underlying the MLN can become extremely large. Lifted inference algorithms [1, 2, 3, 7, 8, 13, 15, 18] try to tackle this challenge by exploiting symmetries in the relational representation. However, current lifted inference approaches face two interrelated problems. First, most of these techniques have the grounding problem, i.e., unless the MLN has a specific symmetric, liftable structure [3, 4, 9], most algorithms tend to ground most formulas in the MLN, which is infeasible for large domains. Second, lifted inference algorithms have an evidence problem, i.e., even if the MLN is liftable, in the presence of arbitrary evidence, symmetries are broken and once again, lifted inference is no more scalable than propositional inference [16]. Both these problems are severe because practical applications often require arbitrarily structured MLNs which can handle arbitrary evidence. To handle this problem, a promising approach is to approximate/bias the MLN distribution such that inference is less expensive on the biased MLN. This idea has been explored in recent work such as [16], which introduces new symmetries, or [19], which uses unsupervised learning to reduce the number of objects in the domain. However, in both these approaches, it may turn out that for certain cases the bias skews the MLN distribution to a large extent. Here, we propose a general-purpose importance sampling based algorithm that retains the scalability of the aforementioned biased approaches but has theoretical guarantees, i.e., it yields asymptotically unbiased estimates.
Importance sampling, a widely used sampling approach, has two steps: first, we sample from a proposal distribution, and next, for each sample, we compute its importance weight. It turns out that for MLNs, both steps can be computationally expensive. Therefore, we scale up each of these steps. Specifically, to scale up step one, based on the recently proposed MLN approximation approach [19], we design an informed proposal distribution using a "compressed" representation of the ground MLN. We then compile a symbolic counting formula where each symbol is lifted, i.e., it represents multiple assignments to multiple ground atoms. The compilation allows us to sample each lifted symbol efficiently using Gibbs sampling. Importantly, the state space of the sampler depends upon the number of symbols, allowing us to trade off the accuracy of the proposal against efficiency. Step two requires iterating over all ground formulas to compute the number of groundings satisfied by a sample. Though this operation can be made space-efficient (for bounded formula-length), i.e., we can go over each grounding independently, the time-complexity is prohibitively large and is equivalent to the grounding problem. For example, consider a simple relationship, Friends(x, y) ∧ Likes(y, z) ⇒ Likes(x, z). If the domain-size of each variable is 100, then to obtain the importance weight of a single sample, we need to process 1 million ground formulas, which is practically infeasible. Therefore, to make this weight-computation step feasible, we propose the following approach. We use a second sampler to sample ground formulas in the MLN and compute the importance weight based on the sampled groundings. We show that this method yields asymptotically unbiased estimates. Further, by taking advantage of first-order structure, we reduce the variance of the estimates in many cases through Rao-Blackwellization [11].
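The two importance sampling steps just described (sampling from a proposal, then reweighting in a ratio estimator) can be sketched for a toy discrete distribution; everything here is illustrative and is not the paper's MLN-specific construction:

```python
import numpy as np

# Toy sketch of self-normalized importance sampling: estimate E_P[phi(x)] for
# a discrete target P known only up to a constant, using samples from an
# easy-to-sample proposal H. All names and values are made up for the example.

rng = np.random.default_rng(0)
p_unnorm = np.array([1.0, 4.0, 2.0, 3.0])   # target, unnormalized
h = np.array([0.25, 0.25, 0.25, 0.25])      # uniform proposal distribution
phi = np.array([0.0, 1.0, 0.0, 1.0])        # e.g. indicator of a query event

samples = rng.choice(4, size=200_000, p=h)
weights = p_unnorm[samples] / h[samples]    # importance weights, unnormalized
estimate = np.sum(weights * phi[samples]) / np.sum(weights)  # ratio estimator

truth = np.sum(p_unnorm * phi) / np.sum(p_unnorm)            # 7/10 = 0.7
print(abs(estimate - truth) < 0.01)
```

Because the weights appear in both the numerator and the denominator, the normalizing constant of the target cancels; this is exactly the property that lets the paper work with unnormalized MLN probabilities.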
We perform experiments on varied MLN structures (Alchemy benchmarks [10]) with arbitrary evidence to illustrate the generality of our approach. We show that using our approach, we can systematically trade off accuracy against efficiency and can scale up inference to extremely large domain-sizes which cannot be handled by state-of-the-art MLN systems such as Alchemy.

2 Preliminaries

2.1 Markov Logic

In this paper, we assume a strict subset of first-order logic called finite Herbrand logic. Thus, we assume that we have no function constants and finitely many object constants. We also assume that each argument of each predicate is typed and can only be assigned to a fixed subset of constants. By extension, each logical variable in each formula is also typed. The domain of a term $x$ in any formula refers to the set of constants that can be substituted for $x$ and is represented as $\Delta_x$. We further assume that all first-order formulas are disjunctive (clauses), have no free logical variables (namely, each logical variable is quantified), and have only universally quantified logical variables (CNF). Note that all first-order formulas can be easily converted to this form. A ground atom is an atom that contains no logical variables. Markov logic extends FOL by softening the hard constraints expressed by the formulas. A soft formula or a weighted formula is a pair $(f, w)$ where $f$ is a formula in FOL and $w$ is a real number. An MLN, denoted by $\mathcal{M}$, is a set of weighted formulas $(f_i, w_i)$. Given a set of constants that represent objects in the domain, an MLN defines a Markov network or a log-linear model. The Markov network is obtained by grounding the weighted first-order knowledge base and represents the following probability distribution:

$$P_{\mathcal{M}}(\omega) = \frac{1}{Z(\mathcal{M})} \exp\left( \sum_i w_i N(f_i, \omega) \right) \qquad (1)$$

where $\omega$ is a world, $N(f_i, \omega)$ is the number of groundings of $f_i$ that evaluate to True in the world $\omega$, and $Z(\mathcal{M})$ is a normalization constant or the partition function.
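The unnormalized part of Eq. (1) can be computed by brute-force grounding, which is exactly the exhaustive enumeration that lifted methods try to avoid. The following sketch uses a made-up clause, predicates, domain, and weight purely for illustration:

```python
import itertools
import numpy as np

# Illustrative brute-force evaluation of exp(sum_i w_i * N(f_i, omega)) from
# Eq. (1), for the single toy clause Smokes(x) v Friends(x, y) with weight w.
# The clause, predicate names, domain, and weight are invented for the example.

domain = ["A", "B", "C"]
w = 1.5

def unnorm_prob(world_smokes, world_friends):
    """world_smokes: dict atom -> bool; world_friends: dict (x, y) -> bool."""
    n_sat = sum(
        world_smokes[x] or world_friends[(x, y)]   # one clause grounding
        for x, y in itertools.product(domain, domain)
    )
    return np.exp(w * n_sat)                       # exp(w * N(f, omega))

smokes = {"A": True, "B": False, "C": False}
friends = {(x, y): (x == "B") for x, y in itertools.product(domain, domain)}
# Satisfied groundings: x=A gives 3 (Smokes(A)); x=B gives 3 (Friends(B, .));
# x=C gives 0. So N = 6 and the unnormalized probability is exp(1.5 * 6).
print(unnorm_prob(smokes, friends))
```

Even this tiny clause requires $|\Delta_x| \cdot |\Delta_y|$ grounding checks per world, which is the cost that motivates the lifted, sampled weight computation developed in Section 3.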
In this paper, we assume that the input MLN to our algorithm is in normal form [9, 12]. A normal MLN [9] is an MLN that satisfies the following two properties: (1) there are no constants in any formula, and (2) if two distinct atoms with the same predicate symbol have variables $x$ and $y$ in the same position then $\Delta_x = \Delta_y$. An important distinction here is that, unlike previous work on lifted inference that uses normal forms [7, 9], which requires the MLN along with the associated evidence to be normalized, here we only require the MLN to be in normal form. This is important because normalizing the MLN along with evidence typically requires grounding the MLN and blows up its size. In contrast, normalizing without evidence typically does not change the MLN. For instance, in all the benchmarks in Alchemy, the MLNs are already normalized. The two main inference problems in MLNs are computing the partition function and computing the marginal probabilities of query atoms given evidence. In this paper, we focus on the latter.

2.2 Importance Sampling

Importance sampling [6] is a standard sampling-based approach, where we draw samples from a proposal distribution $H$ that is easier to sample from than the true distribution $P$. Each sample is then weighted with its importance weight to correct for the fact that it is drawn from the wrong distribution. To compute the marginal probabilities from the weighted samples, we use the following estimator:

$$P'(\bar{Q}) = \frac{\sum_{t=1}^{T} \delta_{\bar{Q}}(\bar{s}^{(t)})\, w(\bar{s}^{(t)})}{\sum_{t=1}^{T} w(\bar{s}^{(t)})} \qquad (2)$$

where $\bar{s}^{(t)}$ is the $t$-th sample drawn from $H$, $\delta_{\bar{Q}}(\bar{s}^{(t)}) = 1$ iff the query atom $Q$ is assigned $\bar{Q}$ in $\bar{s}^{(t)}$ and 0 otherwise, and $w(\bar{s}^{(t)})$ is the importance weight of the sample, given by $\frac{P(\bar{s}^{(t)})}{H(\bar{s}^{(t)})}$. $P'(\bar{Q})$ computed from Eq. (2) is an asymptotically unbiased estimate of $P_{\mathcal{M}}(\bar{Q})$, namely, as $T \to \infty$, $P'(\bar{Q})$ almost surely converges to $P(\bar{Q})$. Eq.
(2) is called a ratio estimate or a normalized estimate because we only need to know each sample's importance weight up to a normalizing constant. We will leverage this property throughout the paper.

2.3 Compressed MLN Representation

Recently, we [19] proposed an approach to generate a "compressed" approximation of the MLN using unsupervised learning. Specifically, for each unique domain in the MLN, the objects in that domain are clustered into groups based on approximate symmetries. To learn the clusters effectively, we use standard clustering algorithms and a distance function based on the evidence structure presented to the MLN. The distance function is constructed to ensure that objects that are approximately symmetrical to each other (from an inference perspective) are placed in a common cluster. Formally, given an MLN $\mathcal{M}$, let $\mathbf{D}$ denote the set of all domains in $\mathcal{M}$. That is, $D \in \mathbf{D}$ is a set of objects that belong to the same domain. To compress $\mathcal{M}$, we consider each $D \in \mathbf{D}$ independently and learn a new domain $D'$ where $|D'| \le |D|$ and $g : D \to D'$ is a surjective mapping, i.e., $\forall \mu \in D', \exists C \in D$ such that $g(C) = \mu$. In other words, each cluster of objects is replaced by its cluster center in the reduced domain. In this paper, we utilize the above approach to build an informed proposal distribution for importance sampling.

3 Scalable Importance Sampling

In this section, we describe the two main steps in our new importance sampling algorithm: (a) constructing and sampling the proposal distribution, and (b) computing the sample weight. We carefully design each step, ensuring that we never ground the full MLN. As a result, the computational complexity of our method is much smaller than that of existing importance sampling approaches [8].

3.1 Constructing and Sampling the Proposal Distribution

We first compress the domains of the given MLN, say $\mathcal{M}$, based on the method in [19].
Let $\hat{\mathcal{M}}$ be the network obtained by grounding $\mathcal{M}$ with its reduced domains (which correspond to the cluster centers) and let $\mathcal{M}_G$ be the ground Markov network of $\mathcal{M}$ using the original domains. $\hat{\mathcal{M}}$ and $\mathcal{M}_G$ are related as follows.

Figure 1: (a) an example MLN $\mathcal{M}$ with formula $R(x) \vee S(x, y)$, $w$ and domains $\Delta_x = \{A_1, B_1, C_1, D_1\}$, $\Delta_y = \{A_2, B_2, C_2, D_2\}$; (b) the MLN $\hat{\mathcal{M}}$ obtained from $\mathcal{M}$ by grounding each logical variable in $\mathcal{M}$ with the cluster centers $\mu_1, \ldots, \mu_4$: formulas $R(\mu_1) \vee S(\mu_1, \mu_3)$, $w$; $R(\mu_2) \vee S(\mu_2, \mu_3)$, $w$; $R(\mu_1) \vee S(\mu_1, \mu_4)$, $w$; $R(\mu_2) \vee S(\mu_2, \mu_4)$, $w$, with $\zeta(\mu_1) = \{A_1, B_1\}$, $\zeta(\mu_2) = \{C_1, D_1\}$, $\zeta(\mu_3) = \{A_2, B_2\}$, and $\zeta(\mu_4) = \{C_2, D_2\}$.

We can think of $\hat{\mathcal{M}}$ as an MLN in which the logical variables are the cluster centers. If we set the domain of each logical variable corresponding to cluster center $\mu \in D'$ to $\zeta(\mu)$, where $\zeta(\mu) = \{C \in D \mid g(C) = \mu\}$, then the ground Markov network of $\hat{\mathcal{M}}$ is $\mathcal{M}_G$. Figure 1 shows an example MLN $\mathcal{M}$ and its corresponding compressed MLN $\hat{\mathcal{M}}$. Notice that the Markov network obtained by grounding $\mathcal{M}$ is the same as the one obtained by grounding $\hat{\mathcal{M}}$. Next, we describe how to generate samples from $\hat{\mathcal{M}}$. Let $\hat{\mathcal{M}}$ contain $\hat{K}$ predicates, for which we assume some ordering. Let $\mathbf{E}$ and $\mathbf{U}$ represent the counts of true (evidence) and unknown ground atoms respectively. For instance, $E_i \in \mathbf{E}$ represents the number of true ground atoms corresponding to the $i$-th predicate in $\hat{\mathcal{M}}$. To keep the equations more readable, we assume that we only have positive evidence (i.e., an assertion that the ground atom is true). Note that it is straightforward to extend the equations to the general case in which we have both positive and negative evidence. Without loss of generality, let the $j$-th formula in $\hat{\mathcal{M}}$, denoted by $f_j$, contain the atoms $p_1, \ldots, p_k$, where $p_i$ is an instance of the $p_i$-th predicate and, if $i \le m$, it has a positive sign, else it has a negative sign. The task is now to count the total number of satisfied groundings of $f_j$ symbolically, without actually going over the ground formulas.
Unfortunately, this task is in #P. Therefore, we make the following approximation. Let $N(p_1, \ldots, p_k)$ denote the number of satisfied groundings of $f_j$ based on the assignments to all groundings of the predicates indexed by $p_1, \ldots, p_k$. Then, we approximate $N(p_1, \ldots, p_k)$ by $\sum_{i=1}^{k} N(p_i)$, thereby independently counting the number of satisfied groundings for each predicate. Clearly, our approximation overestimates the number of satisfied formulas because it ignores the joint dependencies between the atoms in $f$. To compensate for this, we scale down each count by a scaling factor ($\gamma$) which is the ratio of the actual number of ground formulas in $f$ to the assumed number of ground formulas. Next, we define these counting equations formally. Given the $j$-th formula $f_j$ and a set of indexes $\mathbf{k}$, where $k \in \mathbf{k}$ corresponds to the $k$-th atom in $f_j$, let $\#G_{f_j}(\mathbf{k})$ denote the number of ground formulas of $f_j$ if all the terms in all atoms specified by $\mathbf{k}$ are replaced by constants. For instance, in the example shown in Fig. 1, let $f$ be $R(\mu_1) \vee S(\mu_1, \mu_3)$; then $\#G_f(\emptyset) = 4$, $\#G_f(\{1\}) = 2$ and $\#G_f(\{2\}) = 1$. We now count $f_j$'s satisfied groundings symbolically as follows:

$$S'_j = \gamma \sum_{i=1}^{m} E_{p_i} \#G_{f_j}(\{i\}) \qquad (3)$$

where $\gamma = \frac{\#G_{f_j}(\emptyset)}{m \#G_{f_j}(\emptyset)} = \frac{1}{m}$ and $S'_j$ is rounded to the nearest integer.

$$S_j = \gamma \left( \sum_{i=1}^{m} \hat{S}_{p_i} \#G_{f_j}(\{i\}) + \sum_{i=m+1}^{k} (U_{p_i} - \hat{S}_{p_i}) \#G_{f_j}(\{i\}) \right) \qquad (4)$$

where $\gamma = \frac{\max(\#G_{f_j}(\emptyset) - S'_j,\, 0)}{k \#G_{f_j}(\emptyset)}$, $\hat{S}_{p_i}$ is a lifted symbol representing the total number of true ground atoms (among the unknown atoms) of the $p_i$-th predicate, and $S_j$ is rounded to the nearest integer. The symbolic (un-normalized) proposal probability is given by the following equation:

$$H(\hat{\mathbf{S}}, \mathbf{E}) = \exp\left( \sum_{j=1}^{C} w_j S_j \right) \qquad (5)$$

Algorithm 1: Compute-Marginals
Input: $\hat{\mathcal{M}}$, $\zeta$, evidence $\mathbf{E}$, query $\mathbf{Q}$, sampling threshold $\beta$, thinning parameter $p$, iterations $T$
Output: Marginal probabilities $\mathbf{P}$ for $\mathbf{Q}$
begin
  Construct the symbolic counting formula Eq. (5)
  // Outer Sampler
  for $t = 1$ to $T$ do
    Sample $\hat{\mathbf{S}}^{(t)}$ using Gibbs sampling on Eq.
(5)
    After burn-in, for every $p$-th sample, generate $\bar{s}^{(t)}$ from $\hat{\mathbf{S}}^{(t)}$
    for each formula $f_i$ do
      // Inner Sampler
      for $c = 1$ to $\beta$ do
        // Rao-Blackwellization
        $f'_i$ = partially ground formula created by sampling assignments to shared variables in $f_i$
        Compute the satisfied groundings in $f'_i$
    Compute the sample weight using Eq. (7)
    Update the marginal probability estimates using Eq. (2)

In Eq. (5), $C$ is the number of formulas in $\hat{\mathcal{M}}$ and $w_j$ is the weight of the $j$-th formula. Given the symbolic equation Eq. (5), we sample the set of lifted symbols $\hat{\mathbf{S}}$ using randomized Gibbs sampling. For this, we initialize all symbols to a random value. We then choose a random symbol $\hat{S}_i$ and substitute it in Eq. (5) for each value between 0 and $\hat{U}_i$, yielding a conditional distribution on $\hat{S}_i$ given assignments to $\hat{\mathbf{S}}_{-i}$, where $\hat{\mathbf{S}}_{-i}$ refers to all symbols other than the $i$-th one. We then sample from this conditional distribution, taking into account that there are $\binom{\hat{U}_i}{v}$ different assignments corresponding to the $v$-th value in the distribution, which corresponds to setting exactly $v$ groundings of the $i$-th predicate to True. After the Markov chain has mixed, to reduce the dependency between successive Gibbs samples, we thin the samples and only use every $p$-th sample for estimation. Note that during the process of sampling from the proposal, we only had to compute $\hat{\mathcal{M}}$, namely ground the original MLN with the cluster centers. Therefore, the representation is lifted because we do not ground $\hat{\mathcal{M}}$. This helps us scale up the sampling step to large domain-sizes (since we can control the number of clusters).

3.2 Computing the Importance Weight

In order to compute the marginal probabilities as in Eq. (2), given a sample, we need to compute (up to a normalization constant) the weight of that sample. It is easy to see that a sample from the proposal (assignments to all symbols) has multiple possible assignments in the original MLN. For instance, suppose in our running example in Fig.
1, the symbol corresponding to $R(\mu_1)$ has a value equal to 1; this corresponds to 2 different assignments in $\mathcal{M}$: either $R(A_1)$ is true or $R(B_1)$ is true. Formally, a sample from the proposal has $\prod_{i=1}^{\hat{K}} \binom{\hat{U}_i}{\hat{S}_i}$ different assignments in the true distribution. We assume that all these assignments are equi-probable (have the same weight) in the proposal. Thus, to compute the (un-normalized) probability of a sample w.r.t. $\mathcal{M}$, we first convert the assignments of a specific sample $\hat{\mathbf{S}}^{(t)}$ into one of the equi-probable assignments $\bar{s}$ by randomly choosing one of the assignments. Then, we compute the (un-normalized) probability $P(\bar{s}, \mathbf{E})$. The importance weight (up to a multiplicative constant) for the $t$-th sample is given by the ratio

$$w(\hat{\mathbf{S}}^{(t)}, \mathbf{E}) = \frac{P(\bar{s}^{(t)}, \mathbf{E})}{H(\hat{\mathbf{S}}^{(t)}, \mathbf{E})} \qquad (6)$$

Plugging the weight computed by Eq. (6) into Eq. (2) yields an asymptotically unbiased estimate of the query marginal probabilities [11]. However, in the case of MLNs, computing Eq. (6) turns out to be a hard problem. Specifically, to compute $P(\bar{s}^{(t)}, \mathbf{E})$, given a sample, we need to go over each ground formula in $\mathcal{M}$ and check whether it is satisfied. The combined complexity [17] (domain-size as well as formula-size are assumed to be variable) of this operation for each formula is #P-complete (cf. [5]). However, the data complexity (fixed formula-size, variable domain-size) is polynomial. That is, for $k$ variables in a formula where the domain-size of each variable is $d$, the complexity is clearly $O(d^k)$ to go over every grounding. However, in the case of MLNs, notice that a polynomial data complexity is equivalent to the complexity of the grounding problem, which is precisely what we are trying to avoid and is therefore intractable for all practical purposes. To make this weight-computation step tractable, we use an additional sampler which samples a bounded number of groundings of each formula in $\mathcal{M}$ and approximates the importance weight based on these sampled groundings.
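As a toy illustration of this subsampling idea (not the paper's implementation), one can estimate the satisfied-grounding count of a single formula from uniformly sampled groundings; the truth table, domain size, and sample sizes below are invented for the example:

```python
import numpy as np

# Sketch of the inner-sampler idea: instead of checking all d^k groundings of
# a formula to get the satisfied count N, sample beta groundings uniformly and
# scale up. The resulting estimate is unbiased for N. All values are made up.

rng = np.random.default_rng(0)
d = 40
# Toy formula over variables (x, y, z): truth value of each of the d^3
# groundings, drawn from a random ground truth for the illustration.
sat = rng.random((d, d, d)) < 0.3

exact_N = int(sat.sum())

def sampled_N(beta):
    idx = rng.integers(0, d, size=(beta, 3))      # beta uniform groundings
    hits = sat[idx[:, 0], idx[:, 1], idx[:, 2]].sum()
    return hits / beta * d**3                     # unbiased scale-up to d^3

est = np.mean([sampled_N(2000) for _ in range(50)])
print(abs(est - exact_N) / exact_N < 0.05)        # close in relative error
```

Each call checks only $\beta = 2000$ groundings instead of $d^3 = 64{,}000$, which is the per-sample saving the paper exploits at MLN scale.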
Formally, let $U_i$ be a proposal distribution defined on the groundings of the $i$-th formula. Here, we define this distribution as a product of $|V_i|$ uniform distributions, where $V_i = V_{i1}, \ldots, V_{ik}$ is the set of distinct variables in the $i$-th formula. Formally, $U_i = \prod_{j=1}^{|V_i|} U_{ij}$, where $U_{ij}$ is a uniform distribution over the domain of $V_{ij}$. A sample from $U_i$ contains a grounding for every variable in the $i$-th formula. Using this, we can approximate the importance weight using the following equation:

$$w(\bar{s}^{(t)}, \mathbf{E}, \bar{u}^{(t)}) = \frac{\exp\left( \sum_{i=1}^{M} w_i \frac{N'_i(\bar{s}^{(t)}, \mathbf{E}, \bar{u}^{(t)}_i)}{\beta \prod_{j=1}^{|V_i|} U_{ij}} \right)}{H(\hat{\mathbf{S}}^{(t)}, \mathbf{E})} \qquad (7)$$

where $M$ is the number of formulas in $\mathcal{M}$, $\bar{u}^{(t)}_i$ are $\beta$ groundings of the $i$-th formula drawn from $U_i$, and $N'_i(\bar{s}^{(t)}, \mathbf{E}, \bar{u}^{(t)}_i)$ is the count of satisfied groundings among the $\bar{u}^{(t)}_i$ groundings of the $i$-th formula.

Proposition 1. Using the importance weights shown in Eq. (7) in a normalized estimator (see Eq. (2)) yields an asymptotically unbiased estimate of the query marginals, i.e., as the number of samples $T \to \infty$, the estimated marginal probabilities almost surely converge to the true marginal probabilities.

We skip the proof for lack of space, but the idea is that for each unique sample of the outer sampler, each of the importance weight estimates computed using a subset of formula groundings converges towards the true importance weight (the one obtained if all groundings of the formulas were used). Specifically, the weights computed by the "inner" sampler by considering partial groundings of formulas add up to the true weight as $T \to \infty$, and therefore each importance weight is asymptotically unbiased. Eq. (2) is thus a ratio of asymptotically unbiased quantities, and the above proposition follows. We now show how we can leverage MLN structure to improve the weight estimate in Eq. (7). Specifically, we Rao-Blackwellize the "inner" sampler as follows.
We partition the variables in each formula into two sets, V1 and V2, such that we sample a grounding for the variables in V1 and, for each sample, we tractably compute the exact number of satisfied groundings over all possible groundings of V2. We illustrate this with the following example. Example 1. Consider a formula ¬R(x, y) ∨ S(y, z) where each variable has domain-size equal to d. The data complexity of computing the satisfied groundings of this formula is clearly O(d^3). However, for any specific value of y, say y = A, the number of satisfied groundings can be computed in closed form as n_1 d + n_2 d − n_1 n_2, where n_1 is the number of false groundings of R(x, A) and n_2 is the number of true groundings of S(A, z). Computing this for all possible values of y has a complexity of O(d^2). Generalizing the above example, for any formula f with variables V, we say that V' ∈ V is shared if it occurs more than once in that formula. For instance, in the above example, y is a shared variable. Sarkhel et al. [14] showed that for a formula f in which no terms are shared, given an assignment to its ground atoms, it is always possible to compute the number of satisfied groundings of f in closed form. Using this, we have the following proposition. Proposition 2. Given assignments to all ground atoms of a formula f with no shared terms, the combined complexity of computing the number of satisfied groundings of f is O(d^K), where d is an upper bound on the domain-size of the non-shared variables in f and K is the maximum number of non-shared variables in an atom of f. Algorithm 1 illustrates our complete sampler. It assumes M̂ and ζ are provided as input. First, we construct the symbolic equation, Eq. (5), that computes the weight of the proposal. In the outer sampler, we sample the symbols from Eq. (5) using Gibbs sampling.
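The closed-form count in Example 1 can be checked against brute-force enumeration; a small sketch (hypothetical helper names):

```python
def satisfied_count_closed_form(n1, n2, d):
    """Satisfied groundings of ~R(x, A) v S(A, z) for a fixed y = A.

    n1: number of false groundings of R(x, A); n2: number of true
    groundings of S(A, z); d: domain size of x and z. A pair (x, z) fails
    only when R(x, A) is true AND S(A, z) is false, so the count is
    d*d - (d - n1)*(d - n2) = n1*d + n2*d - n1*n2.
    """
    return n1 * d + n2 * d - n1 * n2

def satisfied_count_brute_force(r_false, s_true, d):
    """Check all d*d groundings directly; r_false[x] is True iff R(x, A)
    is false, and s_true[z] is True iff S(A, z) is true."""
    return sum(1 for x in range(d) for z in range(d)
               if r_false[x] or s_true[z])
```

Repeating the closed-form computation for each of the d values of the shared variable y gives the O(d^2) total mentioned above.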
After the chain has mixed, for each sample from the outer sampler and for every formula in M, we construct an inner sampler that uses Rao-Blackwellization to approximate the sample weight. Specifically, for a formula f, we sample an assignment to each non-shared variable to create a partially ground formula, f', and compute the exact number of satisfied groundings of f'. Finally, we compute the sample weight as in Eq. (7) and update the normalized estimator in Eq. (2).

[Figure 2: Tradeoff between computational efficiency and accuracy on (a) Smokers, (b) Relation, (c) HMM, and (d) LogReg. The y-axis plots the average KL-divergence between the true marginals and the approximated ones for different values of Ns. Larger Ns implies a weaker proposal but faster sampling. For this experiment, we set β (sampling bound) to 0.2; changing β did not affect our results very significantly.]

4 Experiments We run two sets of experiments. First, to illustrate the trade-off between accuracy and complexity, we experiment with MLNs which can be solved exactly. Our test MLNs include Smokers and HMM (with few states) from the Alchemy website [10] and two additional MLNs, Relation (R(x, y) ⇒ S(y, z)) and LogReg (randomly generated formulas with singletons). Next, to illustrate scalability, we use two far larger Alchemy benchmarks, namely Hypertext classification with 1 million ground formulas and Entity Resolution (ER) with 8 million ground formulas. For all MLNs, we randomly set 25% of the groundings to true and 25% to false. For clustering, we used the scheme in [19] with KMeans++ as the clustering method.
For Gibbs sampling, we set the thinning parameter to 5 and use a burn-in of 50 samples. We ran all experiments on a quad-core, 6GB RAM, Ubuntu laptop. Fig. 2 shows our results on the first set of experiments, where the y-axis plots the average KL-divergence between the true marginals for the query atoms and the marginals generated by our algorithm. The values are shown for varying values of Ns = |G_M| / |G_M̂|, i.e. the ratio between the ground-MLN size and the proposal-MLN size. Intuitively, Ns indicates the amount by which M has been compressed to form the proposal. As illustrated in Fig. 2, as Ns increases, the accuracy becomes lower in all cases because the proposal is a weaker approximation of the true distribution. However, at the same time, the complexity decreases, allowing us to trade off accuracy for efficiency. Further, the MLN structure also determines the proposal accuracy. For example, LogReg, which contains singletons, yields an accurate estimate even for high values of Ns, while for Relation a smaller Ns is required for comparable accuracy. This is because singletons have symmetries [4, 7] which are exploited by the clustering scheme when building the proposal.

Figure 3: Scalability experiments. C-Time indicates the time in seconds to generate the proposal. I-SRate is the sampling rate measured in samples/minute.

(a) Hypertext (1M groundings)
(Ns, β)        C-Time (secs)   I-SRate
(2^10, 0.1)    3               1200
(2^10, 0.25)   3               250
(2^10, 0.5)    3               150
(2^5, 0.1)     8               650
(2^5, 0.25)    8               180
(2^5, 0.5)     8               100
(2^3, 0.1)     15              600
(2^3, 0.25)    15              150
(2^3, 0.5)     15              90

(b) ER (8M groundings)
(Ns, β)        C-Time (secs)   I-SRate
(10K, 0.1)     25              125
(10K, 0.25)    65              45
(10K, 0.5)     65              15
(1K, 0.1)      65              125
(1K, 0.25)     65              45
(1K, 0.5)      65              15
(25, 0.1)      150             15
(25, 0.25)     150             8
(25, 0.5)      150             4

Fig. 3 shows the results on the second set of experiments, where we measure the computational time required by our algorithm during all of its operational steps, namely proposal creation, sampling, and weight estimation.
Note that, for both of the MLNs used here, we tried to compare the results with Alchemy, but we were unable to get any results due to the grounding problem. As Fig. 3 shows, we could scale to these large domains because the complexity of sampling the proposal remains feasible even when generating the ground MLN is infeasible. Specifically, we show the time taken to generate the proposal distribution (C-Time) and the number of weighted samples generated per minute during inference (I-SRate). As expected, decreasing Ns or increasing β (the sampling bound) lowers I-SRate because the complexity of sampling increases; at the same time, we expect the quality of the samples to improve. Importantly, these results show that by addressing the evidence and grounding problems, we can process large, arbitrarily structured MLNs and evidence in a reasonable amount of time without running out of memory. 5 Conclusion Inference algorithms in Markov logic encounter two interrelated problems hindering scalability: the grounding and evidence problems. Here, we proposed an approach based on importance sampling that avoids these problems in every step of its operation. Further, we showed that our approach yields asymptotically unbiased estimates. Our evaluation showed that our approach can systematically trade off complexity with accuracy and can therefore scale up to large domains. Future work includes clustering strategies using better similarity measures such as graph-based similarity, applying our technique to MCMC algorithms, etc. Acknowledgments This work was supported in part by the AFRL under contract number FA8750-14-C-0021, by the ARO MURI grant W911NF-08-1-0242, and by the DARPA Probabilistic Programming for Advanced Machine Learning Program under AFRL prime contract number FA8750-14-C-0005.
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of DARPA, AFRL, ARO, or the US government.

References
[1] Babak Ahmadi, Kristian Kersting, Martin Mladenov, and Sriraam Natarajan. Exploiting symmetries for scaling loopy belief propagation and relational training. Machine Learning, 92(1):91–132, 2013.
[2] H. Bui, T. Huynh, and R. de Salvo Braz. Exact lifted inference with distinct soft evidence on every object. In AAAI, 2012.
[3] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007.
[4] Guy Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In NIPS, pages 1386–1394, 2011.
[5] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael, CA, 2009.
[6] J. Geweke. Bayesian inference in econometric models using Monte Carlo integration. Econometrica, 57(6):1317–1339, 1989.
[7] V. Gogate and P. Domingos. Probabilistic theorem proving. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, pages 256–265. AUAI Press, 2011.
[8] V. Gogate, A. Jha, and D. Venugopal. Advances in lifted importance sampling. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, 2012.
[9] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference from the other side: The tractable features. In Proceedings of the 24th Annual Conference on Neural Information Processing Systems (NIPS), pages 973–981, 2010.
[10] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, D. Lowd, J. Wang, and P. Domingos. The Alchemy system for statistical relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2008. http://alchemy.cs.washington.edu.
[11] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer, 2001.
[12] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, pages 1062–1068, 2008.
[13] D. Poole. First-order probabilistic inference. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence, pages 985–991, Acapulco, Mexico, 2003. Morgan Kaufmann.
[14] Somdeb Sarkhel, Deepak Venugopal, Parag Singla, and Vibhav Gogate. Lifted MAP inference for Markov logic networks. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 859–867, 2014.
[15] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2178–2185, 2011.
[16] Guy Van den Broeck and Adnan Darwiche. On the complexity and approximation of binary evidence in lifted inference. In Advances in Neural Information Processing Systems 26, pages 2868–2876, 2013.
[17] Moshe Y. Vardi. The complexity of relational query languages (extended abstract). In Proceedings of the Fourteenth Annual ACM Symposium on Theory of Computing, pages 137–146, 1982.
[18] D. Venugopal and V. Gogate. On lifting the Gibbs sampling algorithm. In Proceedings of the 26th Annual Conference on Neural Information Processing Systems (NIPS), pages 1664–1672, 2012.
[19] Deepak Venugopal and Vibhav Gogate. Evidence-based clustering for scalable inference in Markov logic. In Machine Learning and Knowledge Discovery in Databases – European Conference, ECML PKDD 2014, Nancy, France, September 15–19, 2014, Proceedings, Part III, pages 258–273, 2014.
Unsupervised Transcription of Piano Music Taylor Berg-Kirkpatrick Jacob Andreas Dan Klein Computer Science Division University of California, Berkeley {tberg,jda,klein}@cs.berkeley.edu Abstract We present a new probabilistic model for transcribing piano music from audio to a symbolic form. Our model reflects the process by which discrete musical events give rise to acoustic signals that are then superimposed to produce the observed data. As a result, the inference procedure for our model naturally resolves the source separation problem introduced by the piano's polyphony. In order to adapt to the properties of a new instrument or acoustic environment being transcribed, we learn recording-specific spectral profiles and temporal envelopes in an unsupervised fashion. Our system outperforms the best published approaches on a standard piano transcription task, achieving a 10.6% relative gain in note onset F1 on real piano audio. 1 Introduction Automatic music transcription is the task of transcribing a musical audio signal into a symbolic representation (for example, MIDI or sheet music). We focus on the task of transcribing piano music, which is potentially useful for a variety of applications ranging from information retrieval to musicology. This task is extremely difficult for multiple reasons. First, even individual piano notes are quite rich. A single note is not simply a fixed-duration sine wave at an appropriate frequency, but rather a full spectrum of harmonics that rises and falls in intensity. These profiles vary from piano to piano, and therefore must be learned in a recording-specific way. Second, piano music is generally polyphonic, i.e. multiple notes are played simultaneously. As a result, the harmonics of the individual notes can and do collide. In fact, combinations of notes that exhibit ambiguous harmonic collisions are particularly common in music, because consonances sound pleasing to listeners.
This polyphony creates a source-separation problem at the heart of the transcription task. In our approach, we learn the timbral properties of the piano being transcribed (i.e. the spectral and temporal shapes of each note) in an unsupervised fashion, directly from the input acoustic signal. We present a new probabilistic model that describes the process by which discrete musical events give rise to (separate) acoustic signals for each keyboard note, and the process by which these signals are superimposed to produce the observed data. Inference over the latent variables in the model yields transcriptions that satisfy an informative prior distribution on the discrete musical structure and at the same time resolve the source-separation problem. For the problem of unsupervised piano transcription where the test instrument is not seen during training, the classic starting point is a non-negative factorization of the acoustic signal’s spectrogram. Most previous work improves on this baseline in one of two ways: either by better modeling the discrete musical structure of the piece being transcribed [1, 2] or by better adapting to the timbral properties of the source instrument [3, 4]. Combining these two kinds of approaches has proven challenging. The standard approach to modeling discrete musical structures—using hidden Markov or semi-Markov models—relies on the availability of fast dynamic programs for inference. Here, coupling these discrete models with timbral adaptation and source separation breaks the conditional independence assumptions that the dynamic programs rely on. In order to avoid this inference problem, past approaches typically defer detailed modeling of discrete structure or timbre to a postprocessing step [5, 6, 7]. 
[Figure 1: We transcribe a dataset consisting of R songs produced by a single piano with N notes. For each keyboard note, n, and each song, r, we generate a sequence of musical events, M^(nr), parameterized by µ^(n). Then, conditioned on M^(nr), we generate an activation time series, A^(nr), parameterized by α^(n). Next, conditioned on A^(nr), we generate a component spectrogram for note n in song r, S^(nr), parameterized by σ^(n). The observed total spectrogram for song r is produced by superimposing component spectrograms: X^(r) = Σ_n S^(nr).]

We present the first approach that tackles these discrete and timbral modeling problems jointly. We make two primary contributions: first, a new generative model that reflects, in an articulated way, the causal process underlying piano sound generation, starting with discrete musical structure; second, a tractable approximation to the inference problem over transcriptions and timbral parameters. Our approach achieves state-of-the-art results on the task of polyphonic piano music transcription. On a standard dataset consisting of real piano audio data, annotated with ground-truth onsets, our approach outperforms the best published models for this task on multiple metrics, achieving a 10.6% relative gain in our primary measure of note onset F1.

2 Model Our model is laid out in Figure 1. It has parallel random variables for each note on the piano keyboard. For now, we illustrate these variables for a single concrete note, say C♯ in the 4th octave, and in Section 2.4 describe how the parallel components combine to produce the observed audio signal. Consider a single song, divided into T time steps. The transcription will be I musical events long.
The component model for C♯ consists of three primary random variables: M, a sequence of I symbolic musical events, analogous to the locations and values of symbols along the C♯ staff line in sheet music; A, a time series of T activations, analogous to the loudness of sound emitted by the C♯ piano string over time as it peaks and attenuates during each event in M; and S, a spectrogram of T frames, specifying the spectrum of frequencies over time in the acoustic signal produced by the C♯ string.

[Figure 2: Joint distribution on musical events, M^(nr), and activations, A^(nr), for note n in song r, conditioned on event parameters, µ^(n), and envelope parameters, α^(n). Each event i has a state E_i, a duration D_i, and a velocity V_i; the envelope α^(n) is scaled to velocity V_i, truncated to duration D_i, and perturbed with noise to produce the activation segment. The dependence of E_i, D_i, and V_i on n and r is suppressed for simplicity.]

The parameters that generate each of these random variables are depicted in Figure 1. First, musical events, M, are generated from a distribution parameterized by µ^(C♯), which specifies the probability that the C♯ key is played, how long it is likely to be held (duration), and how hard it is likely to be pressed (velocity). Next, the activation of the C♯ string over time, A, is generated conditioned on M from a distribution parameterized by a vector, α^(C♯) (see Figure 1), which specifies the shape of the rise and fall of the string's activation each time the note is played. Finally, the spectrogram, S, is generated conditioned on A from a distribution parameterized by a vector, σ^(C♯) (see Figure 1), which specifies the frequency distribution of sounds produced by the C♯ string. As depicted in Figure 3, S is produced from the outer product of σ^(C♯) and A.
The joint distribution for the note¹ is:

P(S, A, M | σ^(C♯), α^(C♯), µ^(C♯)) = P(M | µ^(C♯))      [Event Model, Section 2.1]
                                    · P(A | M, α^(C♯))   [Activation Model, Section 2.2]
                                    · P(S | A, σ^(C♯))   [Spectrogram Model, Section 2.3]

In the next three sections we give detailed descriptions of each of the component distributions.

2.1 Event Model Our symbolic representation of musical structure (see Figure 2) is similar to the MIDI format used by musical synthesizers. M consists of a sequence of I random variables representing musical events for the C♯ piano key: M = (M_1, M_2, ..., M_I). Each event, M_i, is a tuple consisting of a state, E_i, which is either PLAY or REST; a duration, D_i, which is a length in time steps; and a velocity, V_i, which specifies how hard the key was pressed (assuming E_i is PLAY). The graphical model for the process that generates M is depicted in Figure 2. The sequence of states, (E_1, E_2, ..., E_I), is generated from a Markov model. The transition probabilities, µ_TRANS, control how frequently the note is played (some notes are more frequent than others). An event's duration, D_i, is generated conditioned on E_i from a distribution parameterized by µ_DUR. The durations of PLAY events have a multinomial parameterization, while the durations of REST events are distributed geometrically. An event's velocity, V_i, is a real number on the unit interval and is generated conditioned on E_i from a distribution parameterized by µ_VEL. If E_i = REST, deterministically V_i = 0. The complete event parameters for keyboard note C♯ are µ^(C♯) = (µ_TRANS, µ_DUR, µ_VEL).

¹ For notational convenience, we suppress the C♯ superscripts on M, A, and S until Section 2.4.

2.2 Activation Model In an actual piano, when a key is pressed, a hammer strikes a string and a sound with sharply rising volume is emitted. The string continues to emit sound as long as the key remains depressed, but the volume decays since no new energy is being transferred.
When the key is released, a damper falls back onto the string, truncating the decay. Examples of this trajectory are depicted in Figure 1 in the graph of activation values. The graph depicts the note being played softly and held, and then being played more loudly but held for a shorter time. In our model, PLAY events represent hammer strikes on a piano string with raised damper, while REST events represent the lowered damper. In our parameterization, the shape of the rise and decay is shared by all PLAY events for a given note, regardless of their duration and velocity. We call this shape an envelope and describe it using a positive vector of parameters; for our running example of C♯, this parameter vector is α^(C♯). The time series of activations for the C♯ string, A, is a positive vector of length T, where T denotes the total length of the song in time steps. Let [A]_t be the activation at time step t. As mentioned in Section 2, A may be thought of as roughly representing the loudness of sound emitted by the piano string as a function of time. The process that generates A is depicted in Figure 2. We generate A conditioned on the musical events, M. Each musical event, M_i = (E_i, D_i, V_i), produces a segment of activations, A_i, of length D_i. For PLAY events, A_i will exhibit an increase in activation; for REST events, the activation will remain low. The segments are appended together to make A. The activation values in each segment are generated in a way loosely inspired by piano mechanics. Given α^(C♯), we generate the values in segment A_i as follows: α^(C♯) is first truncated to duration D_i, then scaled by the velocity of the strike, V_i, and finally used to parameterize an activation noise distribution which generates the segment A_i. Specifically, we add independent Gaussian noise to each dimension after α^(C♯) is truncated and scaled.
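As a concrete illustration, the generative story of Sections 2.1 and 2.2 for a single note can be sketched as below. This is a minimal sketch under simplifying assumptions; all function names and parameter values are illustrative, not the paper's.

```python
import random

def sample_events(num_events, p_play_after, dur_probs, geom_p, rng=random):
    """Sample (state, duration, velocity) events for one note (Section 2.1).

    p_play_after[s]: probability the next state is PLAY given state s
    (mu_TRANS); dur_probs: multinomial over PLAY durations (mu_DUR);
    geom_p: geometric parameter for REST durations. PLAY velocities are
    uniform on [0, 1); REST velocity is deterministically 0.
    """
    events, state = [], "REST"
    for _ in range(num_events):
        state = "PLAY" if rng.random() < p_play_after[state] else "REST"
        if state == "PLAY":
            duration = rng.choices(range(1, len(dur_probs) + 1),
                                   weights=dur_probs)[0]
            velocity = rng.random()
        else:
            duration = 1  # geometric: count trials until first success
            while rng.random() > geom_p:
                duration += 1
            velocity = 0.0
        events.append((state, duration, velocity))
    return events

def sample_activation(events, envelope, noise_std, rng=random):
    """Generate the activation series A from events (Section 2.2): the
    envelope is truncated to each event's duration, scaled by the event's
    velocity, and perturbed with independent Gaussian noise; the per-event
    segments are appended. The untruncated Gaussian can go negative --
    the 'formally deficient' aspect noted in the text.
    """
    activation = []
    for state, duration, velocity in events:
        for j in range(duration):
            # hold the last envelope value if the event outlasts it
            base = envelope[min(j, len(envelope) - 1)] * velocity
            activation.append(base + rng.gauss(0.0, noise_std))
    return activation
```

Running `sample_activation(sample_events(...), ...)` yields one draw of A for a single note; the full model repeats this for every note and song.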
In principle, this choice of noise distribution gives a formally deficient model, since activations are positive, but in practice it performs well and has the benefit of making inference mathematically simple (see Section 3.1). 2.3 Component Spectrogram Model Piano sounds have a harmonic structure; when played, each piano string primarily emits energy at a fundamental frequency determined by the string's length, but also at all integer multiples of that frequency (called partials) with diminishing strength. For example, the audio signal produced by the C♯ string will vary in loudness, but its frequency distribution will remain mostly fixed. We call this frequency distribution a spectral profile. In our parameterization, the spectral profile of C♯ is specified by a positive spectral parameter vector, σ^(C♯). σ^(C♯) is a vector of length F, where [σ^(C♯)]_f represents the weight of frequency bin f. In our model, the audio signal produced by the C♯ string over the course of the song is represented as a spectrogram, S, which is a positive matrix with F rows, one for each frequency bin, f, and T columns, one for each time step, t (see Figures 1 and 3 for examples). We denote the magnitude of frequency f at time step t as [S]_ft. In order to generate the spectrogram (see Figure 3), we first produce a matrix of intermediate values by taking the outer product of the spectral profile, σ^(C♯), and the activations, A. These intermediate values parameterize a spectrogram noise distribution that generates S. Specifically, for each frequency bin f and each time step t, the corresponding value of the spectrogram, [S]_ft, is generated from a noise distribution parameterized by [σ^(C♯)]_f · [A]_t. In practice, the choice of noise distribution is very important.
After examining residuals resulting from fitting mean parameters to actual musical spectrograms, we experimented with various noising assumptions, including multiplicative gamma noise, additive Gaussian noise, log-normal noise, and Poisson noise. Poisson noise performed best. This is consistent with previous findings in the literature, where non-negative matrix factorization using KL divergence (which has a generative interpretation as maximum likelihood inference in a Poisson model [8]) is commonly chosen [7, 2]. Under the Poisson noise assumption, the spectrogram is interpreted as a matrix of (large) integer counts.

[Figure 3: Conditional distribution for song r on the observed total spectrogram, X^(r), and the component spectrograms for each note, (S^(1r), ..., S^(Nr)), given the activations for each note, (A^(1r), ..., A^(Nr)), and spectral parameters for each note, (σ^(1), ..., σ^(N)). X^(r) is the superposition of the component spectrograms: X^(r) = Σ_n S^(nr).]

2.4 Full Model So far we have only looked at the component of the model corresponding to a single note's contribution to a single song. Our full model describes the generation of a collection of many songs from a complete instrument with many notes. This full model is diagrammed in Figures 1 and 3. Let a piano keyboard consist of N notes (on a standard piano, N is 88), where n indexes the particular note. Each note, n, has its own set of musical event parameters, µ^(n), envelope parameters, α^(n), and spectral parameters, σ^(n). Our corpus consists of R songs ("recordings"), where r indexes a particular song. Each pair of note n and song r has its own musical events variable, M^(nr), activations variable, A^(nr), and component spectrogram, S^(nr). The full spectrogram for song r, which is the observed input, is denoted X^(r). Our model generates X^(r) by superimposing the component spectrograms: X^(r) = Σ_n S^(nr).
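The superposition X^(r) = Σ_n S^(nr) with Poisson component noise can be sketched as follows. This is an illustrative sketch (plain-Python Poisson sampling by CDF inversion, suitable only for small means), not the paper's implementation.

```python
import math
import random

def sample_spectrogram(spectral_profiles, activations, rng=random):
    """Superimpose per-note component spectrograms (Sections 2.3-2.4).

    spectral_profiles[n][f]: weight of frequency bin f for note n (sigma);
    activations[n][t]: activation of note n at time step t (A). Each
    component entry is Poisson with mean sigma_f * A_t, and the observed
    spectrogram is the elementwise sum over notes: X = sum_n S^(n).
    """
    num_freqs = len(spectral_profiles[0])
    num_steps = len(activations[0])
    X = [[0] * num_steps for _ in range(num_freqs)]
    for sigma, A in zip(spectral_profiles, activations):
        for f in range(num_freqs):
            for t in range(num_steps):
                mean = sigma[f] * A[t]
                # Poisson sample by inversion of the CDF
                u, k, p = rng.random(), 0, math.exp(-mean)
                cdf = p
                while u > cdf:
                    k += 1
                    p *= mean / k
                    cdf += p
                X[f][t] += k
    return X
```

Because the superposition of independent Poissons is Poisson, X^(r) itself is Poisson with mean Σ_n σ_f^(n) A_t^(nr), the fact exploited in Section 3.3.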
Going forward, we will need notation to group variables across all N notes: let µ = (µ^(1), ..., µ^(N)), α = (α^(1), ..., α^(N)), and σ = (σ^(1), ..., σ^(N)). Also let M^(r) = (M^(1r), ..., M^(Nr)), A^(r) = (A^(1r), ..., A^(Nr)), and S^(r) = (S^(1r), ..., S^(Nr)). 3 Learning and Inference Our goal is to estimate the unobserved musical events for each song, M^(r), as well as the unknown envelope and spectral parameters of the piano that generated the data, α and σ. Our inference will estimate both, though our output is only the musical events, which specify the final transcription. Because MIDI sample banks (piano synthesizers) are readily available, we are able to provide the system with samples from generic pianos (but not from the piano being transcribed). We also give the model information about the distribution of notes in real musical scores by providing it with an external corpus of symbolic music data. Specifically, the following information is available to the model during inference: 1) the total spectrogram for each song, X^(r), which is the input; 2) the event parameters, µ, which we estimate by collecting statistics on note occurrences in the external corpus of symbolic music; and 3) truncated normal priors on the envelopes and spectral profiles, α and σ, which we extract from the MIDI samples. Let M̄ = (M^(1), ..., M^(R)), Ā = (A^(1), ..., A^(R)), and S̄ = (S^(1), ..., S^(R)) be the tuples of event, activation, and spectrogram variables across all notes n and songs r. We would like to compute the posterior distribution on M̄, α, and σ. However, marginalizing over the activations Ā couples the discrete musical structure with the superposition process of the component spectrograms in an intractable way. We instead approximate the joint MAP estimates of M̄, Ā, α, and σ via iterated conditional modes [9], marginalizing only over the component spectrograms, S̄.
Specifically, we perform the following optimization via block-coordinate ascent:

$$\max_{\bar{M}, \bar{A}, \alpha, \sigma} \; \prod_r \left[ \sum_{S^{(r)}} P(X^{(r)}, S^{(r)}, A^{(r)}, M^{(r)} \mid \mu, \alpha, \sigma) \right] \cdot P(\alpha, \sigma)$$

The updates for each group of variables are described in the following sections: M̄ in Section 3.1, α in Section 3.2, Ā in Section 3.3, and σ in Section 3.4.

3.1 Updating Events We update M̄ to maximize the objective while the other variables are held fixed. The joint distribution on M̄ and Ā is a hidden semi-Markov model [10]. Given the optimal velocity for each segment of activations, the computation of the emission potentials for the semi-Markov dynamic program is straightforward, and the update over M̄ can be performed exactly and efficiently. We let the distribution of velocities for PLAY events be uniform. This choice, together with the choice of Gaussian activation noise, yields a simple closed-form solution for the optimal setting of the velocity variable $V_i^{(nr)}$. Let $[\alpha^{(n)}]_j$ denote the j-th value of the envelope vector $\alpha^{(n)}$, and let $[A_i^{(nr)}]_j$ be the j-th entry of the segment of $A^{(nr)}$ generated by event $M_i^{(nr)}$. The velocity that maximizes the activation segment's probability is given by:

$$V_i^{(nr)} = \frac{\sum_{j=1}^{D_i^{(nr)}} [\alpha^{(n)}]_j \, [A_i^{(nr)}]_j}{\sum_{j=1}^{D_i^{(nr)}} [\alpha^{(n)}]_j^2}$$

3.2 Updating Envelope Parameters Given settings of the other variables, we update the envelope parameters, α, to optimize the objective. The truncated normal prior on α admits a closed-form update. Let $I(j, n, r) = \{i : D_i^{(nr)} \geq j\}$ be the set of event indices for note n in song r whose durations are at least j time steps (so that the segment's j-th entry exists). Let $[\alpha_0^{(n)}]_j$ be the location parameter of the prior on $[\alpha^{(n)}]_j$, and let β be the scale parameter (which is shared across all n and j).
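The closed-form velocity update of Section 3.1 is simply the least-squares regression coefficient of an activation segment on the truncated envelope; a minimal sketch (hypothetical function name):

```python
def optimal_velocity(envelope, segment):
    """Closed-form velocity update (Section 3.1): the velocity maximizing
    the Gaussian likelihood of an activation segment is
        V = (sum_j alpha_j * A_j) / (sum_j alpha_j^2),
    where `envelope` is alpha already truncated to the event's duration
    and `segment` is the corresponding slice of activations.
    """
    num = sum(a * x for a, x in zip(envelope, segment))
    den = sum(a * a for a in envelope)
    return num / den
```

If the segment were exactly a scaled copy of the envelope, the formula recovers that scale factor, as expected for a least-squares projection.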
The update for $[\alpha^{(n)}]_j$ is given by:

$$[\alpha^{(n)}]_j = \frac{\sum_{r} \sum_{i \in I(j,n,r)} V_i^{(nr)} \, [A_i^{(nr)}]_j + \frac{1}{2\beta^2} [\alpha_0^{(n)}]_j}{\sum_{r} \sum_{i \in I(j,n,r)} [A_i^{(nr)}]_j^2 + \frac{1}{2\beta^2}}$$

3.3 Updating Activations In order to update the activations, Ā, we optimize the objective with respect to Ā, with the other variables held fixed. The choice of Poisson noise for generating each of the component spectrograms, S^(nr), means that the conditional distribution of the total spectrogram for each song, X^(r) = Σ_n S^(nr), with S^(r) marginalized out, is also Poisson. Specifically, the distribution of [X^(r)]_ft is Poisson with mean Σ_n ([σ^(n)]_f · [A^(nr)]_t). Optimizing the probability of X^(r) under this conditional distribution with respect to A^(r) corresponds to computing the supervised NMF using KL divergence [7]. However, to perform the correct update in our model, we must also incorporate the distribution of A^(r), and so, instead of using the standard multiplicative NMF updates, we use exponentiated gradient ascent [11] on the log objective. Let L denote the log objective, let α̃(n, r, t) denote the velocity-scaled envelope value used to generate the activation value [A^(nr)]_t, and let γ² denote the variance parameter of the Gaussian activation noise. The gradient of the log objective with respect to [A^(nr)]_t is:

$$\frac{\partial L}{\partial [A^{(nr)}]_t} = \sum_f \left[ \frac{[X^{(r)}]_{ft} \cdot [\sigma^{(n)}]_f}{\sum_{n'} [\sigma^{(n')}]_f \cdot [A^{(n'r)}]_t} - [\sigma^{(n)}]_f \right] - \frac{1}{\gamma^2} \left( [A^{(nr)}]_t - \tilde{\alpha}(n, r, t) \right)$$

3.4 Updating Spectral Parameters The update for the spectral parameters, σ, is similar to that of the activations. Like the activations, σ is part of the parameterization of the Poisson distribution on each X^(r). We again use exponentiated gradient ascent. Let $[\sigma_0^{(n)}]_f$ be the location parameter of the prior on $[\sigma^{(n)}]_f$, and let ξ be the scale parameter (which is shared across all n and f).
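The activation gradient of Section 3.3 and an exponentiated-gradient step can be sketched as below. This is an illustrative implementation under the stated model (function names are assumptions); a real implementation would vectorize these loops.

```python
import math

def activation_gradient(X, sigmas, As, n, gamma, alpha_tilde):
    """Gradient of the log objective w.r.t. note n's activations
    (Section 3.3): a Poisson reconstruction term summed over frequency
    bins plus a Gaussian prior term pulling [A]_t toward the
    velocity-scaled envelope value alpha_tilde[t]. X[f][t] is the observed
    spectrogram; sigmas[m][f] and As[m][t] hold every note's current
    spectral profile and activations.
    """
    grad = []
    for t in range(len(X[0])):
        g = 0.0
        for f in range(len(X)):
            recon = sum(sig[f] * A[t] for sig, A in zip(sigmas, As))
            g += X[f][t] * sigmas[n][f] / recon - sigmas[n][f]
        g -= (As[n][t] - alpha_tilde[t]) / gamma ** 2
        grad.append(g)
    return grad

def eg_step(A, grad, eta):
    """One exponentiated-gradient ascent step: multiplying by
    exp(eta * g) keeps every activation strictly positive, unlike an
    additive gradient step."""
    return [a * math.exp(eta * g) for a, g in zip(A, grad)]
```

At a perfect reconstruction with activations matching their envelope targets, the gradient vanishes and the multiplicative step leaves A unchanged, which is the fixed-point behavior one wants.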
The gradient of the log objective with respect to $[\sigma^{(n)}]_f$ is given by:

$$\frac{\partial L}{\partial [\sigma^{(n)}]_f} = \sum_{(r,t)} \left[ \frac{[X^{(r)}]_{ft} \cdot [A^{(nr)}]_t}{\sum_{n'} [\sigma^{(n')}]_f \cdot [A^{(n'r)}]_t} - [A^{(nr)}]_t \right] - \frac{1}{\xi^2} \left( [\sigma^{(n)}]_f - [\sigma_0^{(n)}]_f \right)$$

4 Experiments Because polyphonic transcription is so challenging, much of the existing literature has either worked with synthetic data [12] or assumed access to the test instrument during training [5, 6, 13, 7]. As our ultimate goal is the transcription of arbitrary recordings from real, previously-unseen pianos, we evaluate in an unsupervised setting, on recordings from an acoustic piano not observed in training. Data We evaluate on the MIDI-Aligned Piano Sounds (MAPS) corpus [14]. This corpus includes a collection of piano recordings from a variety of time periods and styles, performed by a human player on an acoustic "Disklavier" piano equipped with electromechanical sensors under the keys. The sensors make it possible to transcribe directly into MIDI while the instrument is in use, providing a ground-truth transcript to accompany the audio for the purpose of evaluation. In keeping with much of the existing music transcription literature, we use the first 30 seconds of each of the 30 ENSTDkAm recordings as a development set, and the first 30 seconds of each of the 30 ENSTDkCl recordings as a test set. We also assume access to a collection of synthesized piano sounds for parameter initialization, which we take from the MIDI portion of the MAPS corpus, and a large collection of symbolic music data from the IMSLP library [15, 16], used to estimate the event parameters of our model. Preprocessing We represent the input audio as a magnitude-spectrum short-time Fourier transform with a 4096-frame window and a hop size of 512 frames, similar to the approach used by Weninger et al. [7]. We temporally downsample the resulting spectrogram by a factor of 2, taking the maximum magnitude over collapsed bins.
The input audio is recorded at 44.1 kHz, and the resulting spectrogram has 23 ms frames.

Initialization and Learning We estimate initializers and priors for the spectral parameters, σ, and envelope parameters, α, by fitting isolated synthesized piano sounds. We collect these isolated sounds from the MIDI portion of MAPS and average the parameter values across several synthesized pianos. We estimate the event parameters µ by counting note occurrences in the IMSLP data. At decode time, to fit the spectral and envelope parameters and predict transcriptions, we run 5 iterations of the block-coordinate ascent procedure described in Section 3.

Evaluation We report two standard measures of performance: an onset evaluation, in which a predicted note is considered correct if it falls within 50 ms of a note in the true transcription, and a frame-level evaluation, in which each transcription is converted to a boolean matrix specifying which notes are active at each time step, discretized to 10 ms frames. Each entry is compared to the corresponding entry in the true matrix. Frame-level evaluation is sensitive to offsets as well as onsets, but does not capture the fact that note onsets have greater musical significance than offsets. As is standard, we report precision (P), recall (R), and F1-measure (F1) for each of these metrics.
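A rough sketch of the onset metric described above: greedy one-to-one matching of predicted to reference onsets within a 50 ms tolerance. Pitch matching and the exact matching procedure of the paper's evaluation code are omitted here; function and variable names are ours.

```python
import numpy as np

def onset_scores(pred_onsets, true_onsets, tol=0.05):
    """Precision/recall/F1 for onset detection: a predicted onset counts
    as correct if it lies within `tol` seconds of a still-unmatched
    reference onset (greedy one-to-one matching; a simplified sketch)."""
    matched = np.zeros(len(true_onsets), dtype=bool)
    tp = 0
    for p in sorted(pred_onsets):
        # find the closest still-unmatched reference onset within tolerance
        best, best_d = -1, tol
        for j, t in enumerate(true_onsets):
            d = abs(p - t)
            if not matched[j] and d <= best_d:
                best, best_d = j, d
        if best >= 0:
            matched[best] = True
            tp += 1
    prec = tp / max(len(pred_onsets), 1)
    rec = tp / max(len(true_onsets), 1)
    f1 = 2 * prec * rec / max(prec + rec, 1e-12)
    return prec, rec, f1
```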
While best performance is achieved when testing on the same instrument used for training, these models can also achieve reasonable performance when applied to new instruments. Thus, we also compare to a discriminative baseline: a simplified reimplementation of a state-of-the-art supervised approach [7] which achieves slightly better performance than the original on this task. This system only produces note onsets and is therefore not evaluated at the frame level. We train the discriminative baseline on synthesized audio with ground-truth MIDI annotations and apply it directly to our test instrument, which the system has never seen before.

                        Onsets                Frames
System               P     R     F1       P     R     F1
Discriminative [7]   76.8  65.1  70.4     -     -     -
Benetos [2]          -     -     68.6     -     -     68.0
Vincent [3]          62.7  76.8  69.0     79.6  63.6  70.7
O'Hanlon [4]         48.6  73.0  58.3     73.4  72.8  73.2
This work            78.1  74.7  76.4     69.1  80.7  74.4

Table 1: Unsupervised transcription results on the MAPS corpus. "Onsets" columns show scores for identification (within ±50 ms) of note start times. "Frames" columns show scores for 10 ms frame-level evaluation. Our system achieves state-of-the-art results on both metrics.2

4.2 Results

Our model achieves the best published numbers on this task: as shown in Table 1, it achieves an onset F1 of 76.4, which corresponds to a 10.6% relative gain over the onset F1 achieved by the system of Vincent et al. [3], the top-performing unsupervised baseline on this metric. Surprisingly, the discriminative baseline [7], which was not developed for the unsupervised task, outperforms all the unsupervised baselines in terms of onset evaluation, achieving an F1 of 70.4. Evaluated on frames, our system achieves an F1 of 74.4, corresponding to a more modest 1.6% relative gain over the system of O'Hanlon and Plumbley [4], which is the best-performing baseline on this metric. The surprisingly competitive discriminative baseline shows that it is possible to achieve high onset accuracy on this task without adapting to the test instrument.
Thus, it is reasonable to ask how much of the gain our model achieves is due to its ability to learn instrument timbre. If we skip the block-coordinate ascent updates (Section 3) for the envelope and spectral parameters, and thus prevent our system from adapting to the test instrument, onset F1 drops from 76.4 to 72.6. This result indicates that learning instrument timbre does indeed help performance.

As a short example of our system's behavior, Figure 4 shows our system's output passed through a commercially-available MIDI-to-sheet-music converter. This example was chosen because its onset F1 of 75.5 and its error types are broadly representative of the system's performance on our data. The resulting score has musically plausible errors.

[Panels: predicted score; reference score]
Figure 4: Result of passing our system's prediction and the reference transcription MIDI through the GarageBand MIDI-to-sheet-music converter. This is a transcription of the first three bars of Schumann's Hobgoblin.

A careful inspection of the system's output suggests that a large fraction of errors are either off by an octave (i.e., the frequency of the predicted note is half or double the correct frequency) or are segmentation errors (in which a single key press is transcribed as several consecutive key presses). While these are tricky errors to correct, they may also be relatively harmless for some applications because they are not detrimental to musical perception: converting the transcriptions back to audio using a synthesizer yields music that is qualitatively quite similar to the original recordings.

5 Conclusion

We have shown that combining unsupervised timbral adaptation with a detailed model of the generative relationship between piano sounds and their transcriptions can yield state-of-the-art performance. We hope that these results will motivate further joint approaches to unsupervised music transcription.
Paths forward include exploring more nuanced timbral parameterizations and developing more sophisticated models of discrete musical structure.

2 For consistency we re-ran all systems in this table with our own evaluation code (except for the system of Benetos and Weyde [2], for which numbers are taken from the paper). For O'Hanlon and Plumbley [4], scores are higher than the authors themselves report; this is due to an extra post-processing step suggested by O'Hanlon in personal correspondence.

References

[1] Masahiro Nakano, Yasunori Ohishi, Hirokazu Kameoka, Ryo Mukai, and Kunio Kashino. Bayesian nonparametric music parser. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2012.
[2] Emmanouil Benetos and Tillman Weyde. Explicit duration hidden Markov models for multiple-instrument polyphonic music transcription. In International Society for Music Information Retrieval, 2013.
[3] Emmanuel Vincent, Nancy Bertin, and Roland Badeau. Adaptive harmonic spectral decomposition for multiple pitch estimation. IEEE Transactions on Audio, Speech, and Language Processing, 2010.
[4] Ken O'Hanlon and Mark D. Plumbley. Polyphonic piano transcription using non-negative matrix factorisation with group sparsity. In IEEE International Conference on Acoustics, Speech, and Signal Processing, 2014.
[5] Graham E. Poliner and Daniel P. W. Ellis. A discriminative model for polyphonic piano transcription. EURASIP Journal on Advances in Signal Processing, 2007.
[6] C. G. van de Boogaart and R. Lienhart. Note onset detection for the transcription of polyphonic piano music. In IEEE International Conference on Multimedia and Expo (ICME), 2009.
[7] Felix Weninger, Christian Kirst, Björn Schuller, and Hans-Joachim Bungartz. A discriminative approach to polyphonic piano note transcription using supervised non-negative matrix factorization. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
[8] Paul H. Peeling, Ali Taylan Cemgil, and Simon J. Godsill.
Generative spectrogram factorization models for polyphonic piano transcription. IEEE Transactions on Audio, Speech, and Language Processing, 2010.
[9] Julian Besag. On the statistical analysis of dirty pictures. Journal of the Royal Statistical Society, 1986.
[10] Stephen Levinson. Continuously variable duration hidden Markov models for automatic speech recognition. Computer Speech & Language, 1986.
[11] Jyrki Kivinen and Manfred K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 1997.
[12] Matti P. Ryynanen and Anssi Klapuri. Polyphonic music transcription using note event modeling. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2005.
[13] Sebastian Böck and Markus Schedl. Polyphonic piano note transcription with recurrent neural networks. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2012.
[14] Valentin Emiya, Roland Badeau, and Bertrand David. Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. IEEE Transactions on Audio, Speech, and Language Processing, 2010.
[15] The International Music Score Library Project, June 2014. URL http://imslp.org.
[16] Vladimir Viro. Peachnote: Music score search and analysis platform. In International Society for Music Information Retrieval, 2011.
Multi-Scale Spectral Decomposition of Massive Graphs

Si Si* (Department of Computer Science, University of Texas at Austin, ssi@cs.utexas.edu)
Donghyuk Shin* (Department of Computer Science, University of Texas at Austin, dshin@cs.utexas.edu)
Inderjit S. Dhillon (Department of Computer Science, University of Texas at Austin, inderjit@cs.utexas.edu)
Beresford N. Parlett (Department of Mathematics, University of California, Berkeley, parlett@math.berkeley.edu)

Abstract

Computing the k dominant eigenvalues and eigenvectors of massive graphs is a key operation in numerous machine learning applications; however, popular solvers suffer from slow convergence, especially when k is reasonably large. In this paper, we propose and analyze a novel multi-scale spectral decomposition method (MSEIGS), which first clusters the graph into smaller clusters whose spectral decompositions can be computed efficiently and independently. We show theoretically as well as empirically that the union of all clusters' subspaces has significant overlap with the dominant subspace of the original graph, provided that the graph is clustered appropriately. Thus, eigenvectors of the clusters serve as good initializations to a block Lanczos algorithm that is used to compute the spectral decomposition of the original graph. We further use hierarchical clustering to speed up the computation and adopt a fast early-termination strategy to compute quality approximations. Our method outperforms widely used solvers in terms of convergence speed and approximation quality. Furthermore, our method is naturally parallelizable and exhibits significant speedups in shared-memory parallel settings. For example, on a graph with more than 82 million nodes and 3.6 billion edges, MSEIGS takes less than 3 hours on a single-core machine, while randomized SVD takes more than 6 hours, to obtain a similar approximation of the top-50 eigenvectors. Using 16 cores, we can reduce this time to less than 40 minutes.
1 Introduction

Spectral decomposition of large-scale graphs is one of the most informative and fundamental matrix approximations. Specifically, we are interested in the case where the top-k eigenvalues and eigenvectors are needed, where k is in the hundreds. This computation is needed in various machine learning applications such as semi-supervised classification, link prediction and recommender systems. The data for these applications is typically given as sparse graphs containing information about dyadic relationships between entities, e.g., friendship between pairs of users. Supporting the current big-data trend, the scale of these graphs is massive and continues to grow rapidly. Moreover, they are also very sparse and often exhibit clustering structure, which should be exploited. However, popular solvers, such as subspace iteration, randomized SVD [7] and the classical Lanczos algorithm [21], are often too slow for very big graphs. A key insight is that the graph often exhibits a clustering structure and the union of all clusters' subspaces turns out to have significant overlap with the dominant subspace of the original matrix, which is shown both theoretically and empirically. Based on this observation, we propose a novel divide-and-conquer approach to compute the spectral decomposition of large and sparse matrices, called MSEIGS, which exploits the clustering structure of the graph and achieves faster convergence than state-of-the-art solvers. In the divide step, MSEIGS employs graph clustering to divide the graph into several clusters that are manageable in size and allow fast computation of the eigendecomposition by standard methods. Then, in the conquer step, eigenvectors of the clusters are combined to initialize the eigendecomposition of the entire matrix via block Lanczos. As shown in our analysis and experiments, MSEIGS converges faster than other methods that do not consider the clustering structure of the graph.

*Equal contribution to the work.
To speed up the computation, we further divide the subproblems into smaller ones and construct a hierarchical clustering structure; our framework can then be applied recursively as the algorithm moves from lower levels to upper levels of the hierarchy tree. Moreover, our proposed algorithm is naturally parallelizable, as the main steps can be carried out independently for each cluster. On the SDWeb dataset with more than 82 million nodes and 3.6 billion edges, MSEIGS takes only about 2.7 hours on a single-core machine, while Matlab's eigs function takes about 4.2 hours and randomized SVD takes more than 6 hours. Using 16 cores, we can cut this time to less than 40 minutes, showing that our algorithm obtains good speedups in shared-memory settings. While our proposed algorithm is capable of computing highly accurate eigenpairs, it can also obtain a much faster approximate eigendecomposition of modest precision by prematurely terminating the algorithm at a certain level of the hierarchy tree. This early-termination strategy is particularly useful because an approximate eigendecomposition is sufficient in many applications. We apply MSEIGS and its early-termination strategy to two real-world machine learning applications: label propagation for semi-supervised classification and inductive matrix completion for recommender systems. We show that both of our methods are much faster than other methods while still attaining good performance. For example, to perform semi-supervised learning using label propagation on the Aloi dataset with 1,000 classes, MSEIGS takes around 800 seconds to obtain an accuracy of 60.03%; MSEIGS with early termination takes less than 200 seconds while achieving an accuracy of 58.98%, which is more than 10 times faster than a conjugate gradient based semi-supervised method [10]. The rest of the paper is organized as follows. In Section 2, we review some closely related work.
We present MSEIGS in Section 3 by describing the single-level case and extending it to the multi-level setting. Experimental results are shown in Section 4, followed by conclusions in Section 5.

2 Related Work

The spectral decomposition of large and sparse graphs is a fundamental tool that lies at the core of numerous algorithms in varied machine learning tasks. Practical examples include spectral clustering [19], link prediction in social networks [24], recommender systems with side-information [18], the densest k-subgraph problem [20] and graph matching [22]. Most of the existing eigensolvers for sparse matrices employ the single-vector version of iterative algorithms, such as the power method and the Lanczos algorithm [21]. The Lanczos algorithm iteratively constructs the basis of the Krylov subspace to obtain the eigendecomposition, which has been extensively investigated and applied in popular eigensolvers, e.g., eigs in Matlab (ARPACK) [14] and PROPACK [12]. However, it is well known that single-vector iterative algorithms can only compute the leading eigenvalue/eigenvector (e.g., the power method) or have difficulty in computing multiplicities/clusters of eigenvalues (e.g., Lanczos). In contrast, the block version of iterative algorithms using multiple starting vectors, such as randomized SVD [7] and block Lanczos [21], can avoid such problems and utilize efficient matrix-matrix operations (e.g., Level 3 BLAS) with better caching behavior. While these are the most commonly used methods to compute the spectral decomposition of a sparse matrix, they do not scale well to large problems, especially when hundreds of eigenvalues/eigenvectors are needed. Furthermore, none of them consider the clustering structure of the sparse graph. One exception is the classical divide-and-conquer algorithm by [3], which partitions the tridiagonal eigenvalue problem into several smaller problems that are solved separately.
Then it combines the solutions of these smaller problems and uses rank-one modification to solve the original problem. However, this method can only be used for tridiagonal matrices, and it is unclear how to extend it to general sparse matrices.

3 Multi-Scale Spectral Decomposition

Suppose we are given a graph G = (V, E, A), which consists of |V| vertices and |E| edges, such that an edge between any two vertices i and j represents their similarity w_ij. The corresponding adjacency matrix A is an n×n sparse matrix with (i, j) entry equal to w_ij if there is an edge between i and j, and 0 otherwise. We consider the case where G is an undirected graph, i.e., A is symmetric. Our goal is to efficiently compute the top-k eigenvalues λ_1, ..., λ_k (|λ_1| ≥ ... ≥ |λ_k|) and their corresponding eigenvectors u_1, u_2, ..., u_k of A, which form the best rank-k approximation of A. That is, $A \approx U_k \Sigma_k U_k^T$, where $\Sigma_k$ is a k×k diagonal matrix with the k largest eigenvalues of A and $U_k = [u_1, u_2, \ldots, u_k]$ is an n×k orthonormal matrix. In this paper, we propose a novel multi-scale spectral decomposition method (MSEIGS), which embodies the clustering structure of A to achieve faster convergence. We begin by first describing the single-level version of MSEIGS.

3.1 Single-level division

Our proposed multi-scale spectral decomposition algorithm, which can be used as an alternative to Matlab's eigs function, is based on the divide-and-conquer principle to utilize the clustering structure of the graph. It consists of two main phases: in the divide step, we divide the problem into several smaller subproblems such that each subproblem can be solved efficiently and independently; in the conquer step, we use the solutions from each subproblem as a good initialization for the original problem and achieve faster convergence compared to existing solvers, which typically start from random initialization.
Divide Step: We first use clustering to partition the sparse matrix A into c² submatrices as

$$A = D + \Delta = \begin{bmatrix} A_{11} & \cdots & A_{1c} \\ \vdots & \ddots & \vdots \\ A_{c1} & \cdots & A_{cc} \end{bmatrix}, \quad D = \begin{bmatrix} A_{11} & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & A_{cc} \end{bmatrix}, \quad \Delta = \begin{bmatrix} 0 & \cdots & A_{1c} \\ \vdots & \ddots & \vdots \\ A_{c1} & \cdots & 0 \end{bmatrix}, \qquad (1)$$

where each diagonal block $A_{ii}$ is an $m_i \times m_i$ matrix, $D$ is a block-diagonal matrix and $\Delta$ is the matrix consisting of all off-diagonal blocks of $A$. We then compute the dominant $r$ ($r \le k$) eigenpairs of each diagonal block $A_{ii}$ independently, such that $A_{ii} \approx U_r^{(i)} \Sigma_r^{(i)} (U_r^{(i)})^T$, where $\Sigma_r^{(i)}$ is an $r \times r$ diagonal matrix with the $r$ dominant eigenvalues of $A_{ii}$ and $U_r^{(i)} = [u_1^{(i)}, u_2^{(i)}, \ldots, u_r^{(i)}]$ is an orthonormal matrix with the corresponding eigenvectors. After obtaining the $r$ dominant eigenpairs of each $A_{ii}$, we can sort all $cr$ eigenvalues from the $c$ diagonal blocks and select the $k$ largest eigenvalues (in terms of magnitude) and the corresponding eigenvectors. More specifically, suppose that we select the top $k_i$ eigenpairs of $A_{ii}$ and construct an $m_i \times k_i$ orthonormal matrix $U_{k_i}^{(i)} = [u_1^{(i)}, u_2^{(i)}, \ldots, u_{k_i}^{(i)}]$; we then concatenate all $U_{k_i}^{(i)}$'s to form an $n \times k$ orthonormal matrix $\Omega$ as

$$\Omega = U_{k_1}^{(1)} \oplus U_{k_2}^{(2)} \oplus \cdots \oplus U_{k_c}^{(c)}, \qquad (2)$$

where $\sum_i k_i = k$ and $\oplus$ denotes the direct sum, which can be viewed as the sum of the subspaces spanned by the $U_{k_i}^{(i)}$. Note that $\Omega$ is exactly the matrix of the $k$ dominant eigenvectors of $D$. After obtaining $\Omega$, we can use it as a starting subspace for the eigendecomposition of $A$ in the conquer step. We next show that if we use graph clustering to generate the partition of $A$ in (1), then the space spanned by $\Omega$ is close to that of $U_k$, which makes the conquer step more efficient. We use principal angles [15] to measure the closeness of two subspaces. Since $\Omega$ and $U_k$ are orthonormal matrices, the $j$-th principal angle between the subspaces spanned by $\Omega$ and $U_k$ is $\theta_j(\Omega, U_k) = \arccos(\sigma_j)$, where $\sigma_j$, $j = 1, 2, \ldots, k$, are the singular values of $\Omega^T U_k$ in descending order.
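A small numerical illustration of the direct-sum construction of Ω and of principal angles. The toy matrices, block sizes and coupling strength below are ours; the two helper functions mirror Eq. (2) and the arccos-of-singular-values definition above.

```python
import numpy as np
from scipy.linalg import block_diag

def cluster_init(blocks, k):
    """Build Omega: pool the eigenpairs of the diagonal blocks, keep the k of
    largest magnitude, and zero-pad each eigenvector to full dimension
    (the direct-sum construction of Eq. (2))."""
    n = sum(B.shape[0] for B in blocks)
    vals, vecs, offset = [], [], 0
    for B in blocks:
        w, V = np.linalg.eigh(B)
        for j in range(len(w)):
            v = np.zeros(n)
            v[offset:offset + B.shape[0]] = V[:, j]
            vals.append(w[j]); vecs.append(v)
        offset += B.shape[0]
    top = np.argsort(-np.abs(vals))[:k]
    return np.column_stack([vecs[i] for i in top])

def principal_angle_cosines(Omega, Uk):
    # cos(theta_j) are the singular values of Omega^T Uk, in descending order
    return np.linalg.svd(Omega.T @ Uk, compute_uv=False)

# Toy example: two symmetric blocks with weak off-diagonal coupling Delta
rng = np.random.default_rng(0)
B1 = rng.standard_normal((20, 20)); B1 = B1 + B1.T
B2 = rng.standard_normal((20, 20)); B2 = B2 + B2.T
A = block_diag(B1, B2)
E = 0.001 * rng.standard_normal((20, 20))
A[:20, 20:] = E
A[20:, :20] = E.T

k = 4
w, V = np.linalg.eigh(A)
Uk = V[:, np.argsort(-np.abs(w))[:k]]      # true dominant subspace of A
Omega = cluster_init([B1, B2], k)          # subspace assembled from clusters
cosines = principal_angle_cosines(Omega, Uk)
```

With weak coupling, the cosines come out close to 1, i.e., the cluster subspace nearly contains the dominant subspace of A.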
In Theorem 3.1, we show that $\Theta(\Omega, U_k) = \mathrm{diag}(\theta_1(\Omega, U_k), \ldots, \theta_k(\Omega, U_k))$ is related to the matrix $\Delta$.

Theorem 3.1. Suppose $\lambda_1(D), \ldots, \lambda_n(D)$ (in descending order of magnitude) are the eigenvalues of $D$. Assume there is an interval $[\alpha, \beta]$ and $\eta \ge 0$ such that $\lambda_{k+1}(D), \ldots, \lambda_n(D)$ lie entirely in $[\alpha, \beta]$ and the $k$ dominant eigenvalues of $A$, $\lambda_1, \ldots, \lambda_k$, lie entirely outside of $(\alpha - \eta, \beta + \eta)$. Then

$$\|\sin(\Theta(\Omega, U_k))\|_2 \le \frac{\|\Delta\|_2}{\eta}, \qquad \|\sin(\Theta(\Omega, U_k))\|_F \le \frac{\sqrt{k}\,\|\Delta\|_F}{\eta}.$$

The proof is given in Appendix 6.2. As we can see, $\Theta(\Omega, U_k)$ is influenced by $\Delta$, so we need to find a partition with small $\|\Delta\|_F$ in order for $\|\sin(\Theta(\Omega, U_k))\|_F$ to be small. Assuming that the graph has clustering structure, we apply graph clustering algorithms to partition $A$ so as to generate a small $\|\Delta\|_F$. In general, the goal of graph clustering is to find clusters such that there are many edges within clusters and only a few between clusters, i.e., to make $\|\Delta\|_F$ small. Various graph clustering software can be used to generate the partitions, e.g., Graclus [5], Metis [11], Nerstrand [13] and GEM [27]. Figure 1(a) shows a comparison of the cosine values of $\Theta(\Omega, U_k)$ with different $\Omega$ for the CondMat dataset, a collaboration network with 21,362 nodes and 182,628 edges. We compute $\Omega$ using random partitioning and graph clustering, where we cluster the graph into 4 clusters using Metis and more than 85% of the edges fall within clusters. In Figure 1(a), more than 80% of the principal angles have cosine values greater than 0.9 with graph clustering, whereas this ratio drops to 5% with random partitioning. This illustrates (1) the effectiveness of graph clustering in reducing $\Theta(\Omega, U_k)$, and (2) that the subspace spanned by the $\Omega$ obtained from graph clustering is close to that of $U_k$.
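Since Theorem 3.1 makes ‖Δ‖_F the quantity to minimize, a small utility to measure it for a candidate partition can be handy (a sketch for dense arrays; names ours):

```python
import numpy as np

def offdiag_frobenius(A, labels):
    """Frobenius norm of Delta: the entries of A that fall outside the
    diagonal blocks induced by the cluster assignment `labels`."""
    labels = np.asarray(labels)
    cross = labels[:, None] != labels[None, :]   # True for cross-cluster entries
    return np.linalg.norm(A * cross)
```

A partition with many edges inside clusters and few across them yields a small value; comparing this number across candidate clusterings gives a direct proxy for the bound in Theorem 3.1.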
Figure 1: (a): cos(Θ(Ω, U_k)) with graph clustering and with random partitioning. (b) and (c): comparison of RSVD, BlkLan, MSEIGS with a single level, and MSEIGS on the CondMat dataset with the same number of iterations (5 steps). (b) shows cos(Θ(Ū_k, U_k)), where Ū_k consists of the computed top-k eigenvectors, and (c) shows the difference between the computed eigenvalues and the exact ones.

Conquer Step: After obtaining Ω from the clusters (diagonal blocks) of A, we use Ω to initialize the spectral decomposition solver for A. In principle, we can use different solvers such as randomized SVD (RSVD) and block Lanczos (BlkLan). In our divide-and-conquer framework, we focus on using block Lanczos due to its superior performance as compared to RSVD. The basic idea of block Lanczos is to use an n×b initial matrix $V_0$ to construct the Krylov subspace of A. After j−1 steps of block Lanczos, the j-th Krylov subspace of A is given as

$$\mathcal{K}_j(A, V_0) = \mathrm{span}\{V_0, AV_0, \ldots, A^{j-1}V_0\}.$$

As the block Lanczos algorithm proceeds, an orthonormal basis $\hat{Q}_j$ for $\mathcal{K}_j(A, V_0)$ is generated, as well as a block tridiagonal matrix $\hat{T}_j$, which is the projection of A onto $\mathcal{K}_j(A, V_0)$. Then the Rayleigh-Ritz procedure is applied to compute the approximate eigenpairs of A. More details about block Lanczos are given in Appendix 6.1. In contrast, RSVD, which is equivalent to subspace iteration with a Gaussian random matrix, constructs a basis for $A^{j-1}V_0$ and then restricts A to this subspace to obtain the decomposition.
As a consequence, block Lanczos can achieve better performance than RSVD with the same number of iterations. In Figure 1(b), we compare block Lanczos with RSVD in terms of cos(Θ(Ū_k, U_k)) for the CondMat dataset, where Ū_k consists of the approximate k dominant eigenvectors. Similarly, Figure 1(c) shows that the eigenvalues computed by block Lanczos are closer to the true eigenvalues. In other words, block Lanczos needs fewer iterations than RSVD to achieve similar accuracy. For the CondMat dataset, block Lanczos takes 7 iterations to bring the mean of cos(Θ(Ū_k, U_k)) to 0.99, while RSVD takes more than 10 iterations to obtain similar performance. It is worth noting that there are a number of improved versions of block Lanczos [1, 6], and we show in the experiments that our method achieves superior performance even with the simple version of block Lanczos. The single-level version of our proposed MSEIGS algorithm is given in Algorithm 1. Some remarks on Algorithm 1 are in order: (1) ‖A_ii‖_F is likely to differ among clusters, and larger clusters tend to have more influence over the spectrum of the entire matrix; thus, we select the rank r for each cluster i based on the ratio ‖A_ii‖_F / Σ_i ‖A_ii‖_F. (2) We use a small number of additional eigenvectors in step 4 (similar to RSVD) to improve the effectiveness of block Lanczos. (3) It is time-consuming to test convergence of the Ritz pairs in block Lanczos (steps 7 and 8 of Algorithm 3 in the Appendix), so we test convergence only after running a few iterations of block Lanczos. (4) Better quality of clustering, i.e., smaller ‖Δ‖_F, implies higher accuracy of MSEIGS. We give performance results of MSEIGS with varying cluster quality in Appendix 6.4. From Figures 1(b) and 1(c), we can observe that single-level MSEIGS performs much better than block Lanczos and RSVD.
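A much-simplified sketch of the conquer step: build a block Krylov subspace from an initial block V0 and extract Ritz pairs via Rayleigh-Ritz. Unlike the paper's Algorithm 3, this version uses full reorthogonalization, no restarts and no convergence tests; names and defaults are ours.

```python
import numpy as np

def block_krylov_eigs(A, V0, k, q=5):
    """Approximate the top-k eigenpairs (by magnitude) of symmetric A:
    build an orthonormal basis for span{V0, A V0, ..., A^{q-1} V0} and
    return the dominant Ritz pairs of A restricted to that subspace."""
    blocks, V = [], V0
    for _ in range(q):
        V, _ = np.linalg.qr(V)       # keep each block well conditioned
        blocks.append(V)
        V = A @ V
    Q, _ = np.linalg.qr(np.hstack(blocks))
    T = Q.T @ A @ Q                  # projection of A onto the Krylov subspace
    w, S = np.linalg.eigh(T)         # Rayleigh-Ritz step
    idx = np.argsort(-np.abs(w))[:k]
    return w[idx], Q @ S[:, idx]     # Ritz values and Ritz vectors
```

Warm-starting simply means choosing V0 = Ω instead of a random block; the subspace then already overlaps the dominant eigenspace, so fewer steps q are needed.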
We can now analyze the approximation quality of Algorithm 1 by first examining the difference between the eigenvalues computed by Algorithm 1 and the exact eigenvalues of A.

Theorem 3.2. Let $\bar\lambda_1 \ge \cdots \ge \bar\lambda_{kq}$ be the approximate eigenvalues obtained after $q$ steps of block Lanczos in Algorithm 1. According to the Kaniel-Paige convergence theory [23], we have

$$\lambda_i \le \bar\lambda_i \le \lambda_i + \frac{(\lambda_1 - \lambda_i)\tan^2(\theta)}{T_{q-1}^2\!\left(\frac{1+\nu_i}{1-\nu_i}\right)}.$$

Using Theorem 3.1, we further have

$$\lambda_i \le \bar\lambda_i \le \lambda_i + \frac{(\lambda_1 - \lambda_i)\,\|\Delta\|_2^2}{T_{q-1}^2\!\left(\frac{1+\nu_i}{1-\nu_i}\right)(\eta^2 - \|\Delta\|_2^2)},$$

where $T_m(x)$ is the $m$-th Chebyshev polynomial of the first kind, $\theta$ is the largest principal angle in $\Theta(\Omega, U_k)$ and $\nu_i = \frac{\lambda_i - \lambda_{k+1}}{\lambda_i - \lambda_1}$.

Next we show the bound of Algorithm 1 in terms of rank-k approximation error.

Theorem 3.3. Given an n×n symmetric matrix A, suppose that by Algorithm 1 we approximate its k dominant eigenpairs and form a rank-k approximation, i.e., $A \approx \bar U_k \bar\Sigma_k \bar V_k^T$ with $\bar U_k = [\bar u_1, \ldots, \bar u_k]$ and $\bar\Sigma_k = \mathrm{diag}(\bar\lambda_1, \ldots, \bar\lambda_k)$. The approximation error can be bounded as

$$\|A - \bar U_k \bar\Sigma_k \bar V_k^T\|_2 \le 2\,\|A - A_k\|_2 \left(1 + \frac{\sin^2(\theta)}{1 - \sin^2(\theta)}\right)^{\frac{1}{2(q+1)}},$$

where $q$ is the number of iterations of block Lanczos and $A_k$ is the best rank-k approximation of A. Using Theorem 3.1, we further have

$$\|A - \bar U_k \bar\Sigma_k \bar V_k^T\|_2 \le 2\,\|A - A_k\|_2 \left(1 + \frac{\|\Delta\|_2^2}{\eta^2 - \|\Delta\|_2^2}\right)^{\frac{1}{2(q+1)}}.$$

The proof is given in Appendix 6.3. The above two theorems show that a good initialization is important for block Lanczos. Using Algorithm 1, we expect a small ‖Δ‖₂ and a small θ (as shown in Figure 1(a)), because the algorithm embodies the clustering structure of A and constructs a good initialization. Therefore, our algorithm can converge faster than block Lanczos with random initialization. The time complexity of Algorithm 1 is O(|E|k + nk²).

Algorithm 1: MSEIGS with a single level
Input: n×n symmetric sparse matrix A, target rank k and number of clusters c.
Output: The approximate dominant k eigenpairs (λ̄_i, ū_i), i = 1, ..., k, of A.
1. Generate c clusters A_11, ..., A_cc by performing graph clustering on A (e.g., Metis or Graclus).
2. Compute the top-r eigenpairs (λ_j^(i), u_j^(i)), j = 1, ..., r, of each A_ii using standard eigensolvers.
3. Select the top-k eigenvalues and their eigenvectors from the c clusters to obtain U_{k_1}^(1), ..., U_{k_c}^(c).
4. Form the block-diagonal matrix Ω = U_{k_1}^(1) ⊕ ... ⊕ U_{k_c}^(c) (Σ_i k_i = k).
5. Apply block Lanczos (Algorithm 3 in Appendix 6.1) with initialization Q_1 = Ω.

3.2 Multi-scale spectral decomposition

In this section, we describe our multi-scale spectral decomposition algorithm (MSEIGS). One challenge for Algorithm 1 is the trade-off in choosing the number of clusters c. If c is large, computing the top-r eigenpairs of each A_ii can be very efficient, but ‖Δ‖ is likely to increase, which in turn results in slower convergence of Algorithm 1. In contrast, larger clusters emerge when c is small, increasing the time to compute the top-r eigendecomposition of each A_ii; however, ‖Δ‖ is likely to decrease in this case, resulting in faster convergence of Algorithm 1. To address this issue, we can further partition each A_ii into c smaller clusters and construct a hierarchy until each cluster is small enough to be solved efficiently. After obtaining this hierarchical clustering, we can apply Algorithm 1 recursively as the algorithm moves from lower levels to upper levels of the hierarchy tree. By constructing a hierarchy, we can pick a small c to obtain an Ω with small Θ(Ω, U_k) (we set c = 4 in the experiments). Our MSEIGS algorithm with multiple levels is described in Algorithm 2. Figures 1(b) and 1(c) show a comparison between MSEIGS and MSEIGS with a single level. For the single-level case, we use the top-r eigenpairs of the c child clusters computed up to machine precision. We can see that MSEIGS performs similarly well compared to the single-level case, showing the effectiveness of our multi-scale approach.
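The single-level flow of Algorithm 1 can be sketched with SciPy, using LOBPCG as a stand-in for block Lanczos: like block Lanczos, it accepts an initial block of vectors, so it can be warm-started with the cluster subspace Ω. The toy data is ours, and for brevity we take k/2 eigenvectors per cluster rather than the global top-k selection of step 3.

```python
import numpy as np
from scipy.linalg import block_diag
from scipy.sparse.linalg import lobpcg

rng = np.random.default_rng(0)

def psd_block(m):
    M = rng.standard_normal((m, m))
    return M @ M.T                      # symmetric positive semidefinite block

def top_eigvecs(B, r):
    w, V = np.linalg.eigh(B)
    return V[:, np.argsort(-w)[:r]]

# Toy graph matrix: two strong diagonal blocks, weak cross-cluster coupling
B1, B2 = psd_block(30), psd_block(30)
A = block_diag(B1, B2)
E = 0.01 * rng.standard_normal((30, 30))
A[:30, 30:] += E
A[30:, :30] += E.T                      # keep A symmetric

# Divide: per-cluster eigenvectors, assembled into the initial block Omega
k = 6
Omega = block_diag(top_eigvecs(B1, k // 2), top_eigvecs(B2, k // 2))

# Conquer: warm-start the block eigensolver with Omega
w, U = lobpcg(A, Omega, largest=True, maxiter=100, tol=1e-8)
```

Because Ω already nearly spans the dominant subspace, the solver needs only a few corrective iterations, which is exactly the effect MSEIGS exploits.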
To build the hierarchy, we can adopt either top-down or bottom-up approaches using existing clustering algorithms. The overhead of clustering is very low, usually less than 10% of the total time. For example, MSEIGS takes 1,825 seconds, of which clustering takes only 80 seconds, on the FriendsterSub dataset (in Table 1) with 10M nodes and 83M edges.

Early Termination of MSEIGS: Computing the exact spectral decomposition of A can be quite time-consuming. Furthermore, highly accurate eigenvalues/eigenvectors are not essential for many applications. Thus, we propose a fast early-termination strategy (MSEIGS-Early) to approximate the eigenpairs of A by terminating MSEIGS at a certain level of the hierarchy tree. Suppose that we terminate MSEIGS at the ℓ-th level with c^ℓ clusters. From the top-r eigenpairs of each cluster, we can select the top-k eigenvalues and the corresponding eigenvectors across all c^ℓ clusters as an approximate eigendecomposition of A. As shown in Sections 4.2 and 4.3, we can significantly reduce the computation time while attaining comparable performance using the early-termination strategy in two applications: label propagation and inductive matrix completion.

Multi-core Parallelization: An important advantage of MSEIGS is that it can be easily parallelized, which is essential for large-scale eigendecomposition. There are two main aspects of parallelism

Algorithm 2: Multi-scale spectral decomposition (MSEIGS)
Input: n×n symmetric sparse matrix A, target rank k, the number of levels ℓ of the hierarchy tree and the number of clusters c at each node.
Output: The approximate dominant k eigenpairs (λ̄_i, ū_i), i = 1, ..., k, of A.
1. Perform hierarchical clustering on A (e.g., top-down or bottom-up).
2. Compute the top-r eigenpairs of each leaf node A_ii^(ℓ), i = 1, ..., c^ℓ, using block Lanczos.
3. for i = ℓ−1, ..., 1 do
4.   for j = 1, ..., c^i do
5.     Form the block-diagonal matrix Ω_j^(i) by (2).
6 Compute the eigendecomposition of A(i) jj by Algorithm 1 with ⌦(i) j as the initial block. 7 end 8 end in MSEIGS: (1) The eigendecomposition of clusters in the same level of the hierarchy tree can be computed independently; (2) Block Lanczos mainly involves matrix-matrix operations (Level 3 BLAS), thus efficient parallel linear algebra libraries (e.g., Intel MKL) can be used. We show in Section 4 that MSEIGS can achieve significant speedup in shared-memory multi-core settings. 4 Experimental Results In this section, we empirically demonstrate the benefits of our proposed MSEIGS method. We compare MSEIGS with other popular eigensolvers including Matlab’s eigs function (EIGS) [14], PROPACK [12], randomized SVD (RSVD) [7] and block Lanczos with random initialization (BlkLan) [21] on three different tasks: approximating the eigendecomposition, label propagation and inductive matrix completion. The experimental settings can be found in Appendix 6.5. 4.1 Approximation results First, we show in Figure 2 the performance of MSEIGS for approximating the top-k eigenvectors for different types of real-world graphs including web graphs, social networks and road networks [17, 28]. Summary of the datasets is given in Table 1, where the largest graph contains more than 3.6 billion edges. We use the average of the cosine of principal angles cos(⇥( ¯Uk, Uk)) as the evaluation metric, where ¯Uk consists of the computed top-k eigenvectors and Uk represents the “true” top-k eigenvectors computed up to machine precision using Matlab’s eigs function. Larger values of the average cos(⇥( ¯Uk, Uk)) imply smaller principal angles between the subspace spanned by Uk and that of ¯Uk, i.e., better approximation. As shown in Figure 2, with the same amount of time, the eigenvectors computed by MSEIGS consistently yield better principal angles than other methods. Table 1: Datasets of increasing sizes. 
dataset        | CondMat  | Amazon     | RoadCA     | LiveJournal  | FriendsterSub | SDWeb
# of nodes     | 21,263   | 334,843    | 1,965,206  | 3,997,962    | 10.00M        | 82.29M
# of nonzeros  | 182,628  | 1,851,744  | 5,533,214  | 69,362,378   | 83.67M        | 3.68B
rank k         | 100      | 100        | 200        | 500          | 100           | 50

Since MSEIGS divides the problem into independent subproblems, it is naturally parallelizable. In Figure 3, we compare MSEIGS with other methods in the shared-memory multi-core setting on the LiveJournal and SDWeb datasets. We vary the number of cores from 1 to 16 and report the time needed to compute an approximation of similar quality. As shown in Figure 3, MSEIGS achieves almost linear speedup and outperforms the other methods; for example, on the LiveJournal dataset, MSEIGS is the fastest method, achieving a speedup of 10 using 16 cores.

Figure 2: Top-k eigenvector approximation results, showing time vs. average cosine of principal angles on (a) CondMat, (b) Amazon, (c) FriendsterSub, (d) RoadCA, (e) LiveJournal and (f) SDWeb. For a given time, MSEIGS consistently yields better results than other methods.

Figure 3: Shared-memory multi-core results on (a) LiveJournal and (b) SDWeb, showing number of cores vs. time to compute a similar approximation. MSEIGS achieves almost linear speedup and outperforms other methods.

4.2 Label propagation for semi-supervised learning and multi-label learning

One application of MSEIGS is to speed up the label propagation algorithm, which is widely used for graph-based semi-supervised learning [29] and multi-label learning [26]. The basic idea of label propagation is to propagate the known labels over an affinity graph (represented as a weighted matrix W) constructed from both labeled and unlabeled examples. Mathematically, at the (t+1)-th iteration, F(t+1) = αSF(t) + (1−α)Y, where S is the normalized affinity matrix of W; Y is the n × l initial label matrix; F is the predicted label matrix; l is the number of labels; n is the total number of samples; and 0 ≤ α < 1. The optimal solution is F* = (1−α)(I − αS)^(−1) Y. There are two standard approaches to approximate F*: one is to iterate over F(t) until convergence (the truncated method); the other is to solve for F* as a system of linear equations using an iterative solver such as conjugate gradient (CG) [10]. However, both methods suffer from slow convergence, especially when the number of labels, i.e., the number of columns of Y, grows dramatically. As an alternative, we can apply MSEIGS to compute the top-k eigendecomposition of S, such that S ≈ Ū_k Σ̄_k Ū_k^T, and approximate F* as F* ≈ F̄ = (1−α) Ū_k (I − α Σ̄_k)^(−1) Ū_k^T Y. Clearly, computing F̄ is robust to large numbers of labels. In Table 2, we compare MSEIGS and MSEIGS-Early with other methods for label propagation on two public datasets, Aloi and Delicious: Delicious is a multi-label dataset containing 16,105 samples and 983 labels, and Aloi is a semi-supervised learning dataset containing 108,000 samples with 1,000 classes. More details of the datasets and parameters are given in Appendix 6.6.
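The rank-k surrogate F̄ above is cheap to form once the top-k eigenpairs of S are available. The following is a minimal sketch on a hypothetical toy graph (five planted communities, one labeled seed each); dense eigh stands in for the eigenpairs MSEIGS would supply, and all sizes and parameters are our own choices for illustration.

```python
import numpy as np

n, c, alpha, k = 60, 5, 0.99, 10
size = n // c

# Affinity with c dense communities and weak background coupling.
W = np.full((n, n), 0.01)
for i in range(c):
    W[i*size:(i+1)*size, i*size:(i+1)*size] = 1.0
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))            # normalized affinity matrix

Y = np.zeros((n, c))                        # one labeled seed per community
for i in range(c):
    Y[i*size, i] = 1.0

# Exact solution F* = (1 - alpha)(I - alpha S)^{-1} Y.
F_exact = (1 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)

# Rank-k surrogate F ~ (1 - alpha) U_k (I - alpha Sigma_k)^{-1} U_k^T Y.
vals, vecs = np.linalg.eigh(S)
top = np.argsort(-np.abs(vals))[:k]
Uk, sk = vecs[:, top], vals[top]
F_lowrank = (1 - alpha) * Uk @ ((Uk.T @ Y) / (1 - alpha * sk)[:, None])
```

Since (I − αΣ_k)^(−1) is diagonal, the surrogate costs only two thin matrix products per label matrix, independent of the number of labels, which is the robustness to many labels noted above.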
As we can see in Table 2, MSEIGS and MSEIGS-Early significantly outperform the other methods: to achieve similar accuracy, MSEIGS takes much less time. More interestingly, MSEIGS-Early is faster than MSEIGS and almost 10 times faster than the other methods with very little degradation in accuracy, showing the efficiency of our early termination strategy.

Table 2: Label propagation results on two real datasets: Aloi for semi-supervised classification and Delicious for multi-label learning. The graph is constructed using [16], which takes 87.9 seconds for Aloi and 16.1 seconds for Delicious. MSEIGS is about 5 times faster and MSEIGS-Early is almost 20 times faster than EIGS while achieving similar accuracy on the Aloi dataset.

Method        | Aloi (k = 1500)        | Delicious (k = 1000)
              | time (s)    acc (%)    | time (s)   top3-acc (%)   top1-acc (%)
Truncated     | 1824.8      59.87      | 3385.1     45.12          48.89
CG            | 2921.6      60.01      | 1094.9     44.93          48.73
EIGS          | 3890.9      60.08      | 458.2      45.11          48.51
RSVD          | 964.1       59.62      | 359.8      44.11          46.91
BlkLan        | 1272.2      59.96      | 395.6      43.52          45.53
MSEIGS        | 767.1       60.03      | 235.6      44.84          49.23
MSEIGS-Early  | 176.2       58.98      | 61.36      44.71          48.22

4.3 Inductive matrix completion for recommender systems

In the context of recommender systems, Inductive Matrix Completion (IMC) [8] is another important application of MSEIGS. IMC incorporates side information about users and items, given in the form of feature vectors, into matrix factorization, and has been shown to be effective for the gene-disease association problem [18]. Given a user-item ratings matrix R ∈ R^{m×n}, where R_ij is the known rating of item j by user i, IMC is formulated as

min_{W ∈ R^{f_c×r}, H ∈ R^{f_d×r}}  Σ_{(i,j)∈Ω} (R_ij − x_i^T W H^T y_j)^2 + (λ/2)(‖W‖_F^2 + ‖H‖_F^2),

where Ω is the set of observed entries; λ is a regularization parameter; and x_i ∈ R^{f_c} and y_j ∈ R^{f_d} are feature vectors for user i and item j, respectively. We evaluate MSEIGS combined with IMC on recommendation tasks where a social network among the users is also available; it has been shown that exploiting such social networks improves the quality of recommendations [9, 25]. One way to obtain useful and robust features from the social network is to take the k principal components, i.e., the top-k eigenvectors, of the corresponding adjacency matrix A. We compare the recommendation performance of IMC using eigenvectors computed by MSEIGS, MSEIGS-Early and EIGS. We also report results for two baseline methods: standard matrix completion (MC) without user/item features, and Katz¹ on the combined network C = [A R; R^T 0] as in [25]. We evaluate recommendation performance on three publicly available datasets shown in Table 6 (see Appendix 6.7 for more details). The Flixster dataset [9] contains user-movie rating information, and the other two datasets [28] are for the user-affiliation recommendation task. We report recall-at-N with N = 20 averaged over 5-fold cross-validation, a widely used evaluation metric for top-N recommendation tasks [2]. As Table 3 shows, IMC outperforms the two baselines, Katz and MC. For IMC, both MSEIGS and MSEIGS-Early achieve results comparable to the other methods but require much less time to compute the top-k eigenvectors (i.e., the user latent features). On the LiveJournal dataset, MSEIGS-Early is almost 8 times faster than EIGS while attaining similar performance.

Table 3: Recall-at-20 (RCL@20) and top-k eigendecomposition time (eig-time, in seconds) on three real-world datasets: Flixster, Amazon and LiveJournal. MSEIGS and MSEIGS-Early require much less time to compute the top-k eigenvectors (latent features) for IMC while achieving similar performance compared to other methods. Note that Katz and MC do not use eigenvectors.

Method        | Flixster (k = 100)    | Amazon (k = 500)      | LiveJournal (k = 500)
              | eig-time   RCL@20     | eig-time   RCL@20     | eig-time    RCL@20
Katz          | -          0.1119     | -          0.3224     | -           0.2838
MC            | -          0.0820     | -          0.4497     | -           0.2699
EIGS          | 120.51     0.1472     | 871.30     0.4999     | 12099.57    0.4259
RSVD          | 85.31      0.1491     | 369.82     0.4875     | 7617.98     0.4294
BlkLan        | 104.95     0.1465     | 882.58     0.4687     | 5099.79     0.4248
MSEIGS        | 36.27      0.1489     | 264.47     0.4911     | 2863.55     0.4253
MSEIGS-Early  | 21.88      0.1481     | 179.04     0.4644     | 1545.52     0.4246

5 Conclusions

In this paper, we proposed a novel divide-and-conquer framework, multi-scale spectral decomposition (MSEIGS), for approximating the top-k eigendecomposition of large-scale graphs. Our method exploits the clustering structure of the graph and converges faster than state-of-the-art methods. Moreover, it can be easily parallelized, which makes it suitable for massive graphs. Empirically, MSEIGS consistently outperforms other popular eigensolvers in terms of convergence speed and approximation quality on real-world graphs with up to billions of edges. We also show that MSEIGS is highly effective for two important applications: label propagation and inductive matrix completion. Dealing with graphs that cannot fit into memory is one of our future research directions; we believe that MSEIGS can also be efficient in streaming and distributed settings with careful implementation.

Acknowledgments

This research was supported by NSF grant CCF-1117055 and NSF grant CCF-1320746.

¹The Katz measure is defined as Σ_{i=1}^{t} β^i C^i. We set β = 0.01 and t = 10.

References

[1] J. Baglama, D. Calvetti, and L. Reichel. IRBL: An implicitly restarted block-Lanczos method for large-scale Hermitian eigenproblems. SIAM J. Sci. Comput., 24(5):1650–1677, 2003.
[2] P. Cremonesi, Y. Koren, and R. Turrin.
Performance of recommender algorithms on top-N recommendation tasks. In RecSys, pages 39–46, 2010.
[3] J. Cuppen. A divide and conquer method for the symmetric tridiagonal eigenproblem. Numer. Math., 36(2):177–195, 1980.
[4] C. Davis and W. M. Kahan. The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal., 7(1):1–46, 1970.
[5] I. S. Dhillon, Y. Guan, and B. Kulis. Weighted graph cuts without eigenvectors: A multilevel approach. IEEE Trans. Pattern Anal. Mach. Intell., 29(11):1944–1957, 2007.
[6] R. Grimes, J. Lewis, and H. Simon. A shifted block Lanczos algorithm for solving sparse symmetric generalized eigenproblems. SIAM J. Matrix Anal. Appl., 15(1):228–272, 1994.
[7] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Rev., 53(2):217–288, 2011.
[8] P. Jain and I. S. Dhillon. Provable inductive matrix completion. CoRR, abs/1306.0626, 2013.
[9] M. Jamali and M. Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In RecSys, pages 135–142, 2010.
[10] M. Karasuyama and H. Mamitsuka. Manifold-based similarity adaptation for label propagation. In NIPS, pages 1547–1555, 2013.
[11] G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput., 20(1):359–392, 1998.
[12] R. M. Larsen. Lanczos bidiagonalization with partial reorthogonalization. Technical Report DAIMI PB-357, Aarhus University, 1998.
[13] D. LaSalle and G. Karypis. Multi-threaded modularity based graph clustering using the multilevel paradigm. Technical Report 14-010, University of Minnesota, 2014.
[14] R. B. Lehoucq, D. C. Sorensen, and C. Yang. ARPACK Users' Guide. Society for Industrial and Applied Mathematics, 1998.
[15] R. Li. Relative perturbation theory: II. Eigenspace and singular subspace variations. SIAM J. Matrix Anal. Appl., 20(2):471–492, 1998.
[16] W. Liu, J.
He, and S.-F. Chang. Large graph construction for scalable semi-supervised learning. In ICML, pages 679–686, 2010.
[17] R. Meusel, S. Vigna, O. Lehmberg, and C. Bizer. Graph structure in the web — revisited: A trick of the heavy tail. In WWW Companion, pages 427–432, 2014.
[18] N. Natarajan and I. S. Dhillon. Inductive matrix completion for predicting gene-disease associations. Bioinformatics, 30(12):i60–i68, 2014.
[19] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, pages 849–856, 2001.
[20] D. Papailiopoulos, I. Mitliagkas, A. Dimakis, and C. Caramanis. Finding dense subgraphs via low-rank bilinear optimization. In ICML, pages 1890–1898, 2014.
[21] B. N. Parlett. The Symmetric Eigenvalue Problem. Prentice-Hall, 1980.
[22] R. Patro and C. Kingsford. Global network alignment using multiscale spectral signatures. Bioinformatics, 28(23):3105–3114, 2012.
[23] Y. Saad. On the rates of convergence of the Lanczos and the block-Lanczos methods. SIAM J. Numer. Anal., 17(5):687–706, 1980.
[24] D. Shin, S. Si, and I. S. Dhillon. Multi-scale link prediction. In CIKM, pages 215–224, 2012.
[25] V. Vasuki, N. Natarajan, Z. Lu, B. Savas, and I. Dhillon. Scalable affiliation recommendation using auxiliary networks. ACM Trans. Intell. Syst. Technol., 3(1):3:1–3:20, 2011.
[26] B. Wang, Z. Tu, and J. Tsotsos. Dynamic label propagation for semi-supervised multi-class multi-label classification. In ICCV, pages 425–432, 2013.
[27] J. J. Whang, X. Sui, and I. S. Dhillon. Scalable and memory-efficient clustering of large-scale social networks. In ICDM, pages 705–714, 2012.
[28] J. Yang and J. Leskovec. Defining and evaluating network communities based on ground-truth. In ICDM, pages 745–754, 2012.
[29] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, pages 321–328, 2004.
Dimensionality Reduction with Subspace Structure Preservation

Devansh Arpit, Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260, devansha@buffalo.edu
Ifeoma Nwogu, Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260, inwogu@buffalo.edu
Venu Govindaraju, Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260, govind@buffalo.edu

Abstract

Modeling data as being sampled from a union of independent subspaces has been widely applied to a number of real world applications. However, dimensionality reduction approaches that theoretically preserve this independence assumption have not been well studied. Our key contribution is to show that 2K projection vectors are sufficient to preserve the independence of any K class data sampled from a union of independent subspaces. It is this non-trivial observation that we use for designing our dimensionality reduction technique. In this paper, we propose a novel dimensionality reduction algorithm that theoretically preserves this structure for a given dataset. We support our theoretical analysis with empirical results on both synthetic and real world data, achieving state-of-the-art results compared to popular dimensionality reduction techniques.

1 Introduction

A number of real world applications model data as being sampled from a union of independent subspaces. These applications include image representation and compression [7], systems theory [13], image segmentation [16], motion segmentation [14], face clustering [8, 6] and texture segmentation [9], to name a few. Dimensionality reduction is generally applied before these methods because most of the underlying algorithms optimize expensive loss functions such as the nuclear norm, ℓ1 regularization, etc. Most of these applications simply apply off-the-shelf dimensionality reduction techniques, or resize images (in the case of image data), as a pre-processing step.
The union of independent subspaces model can be thought of as a generalization of the traditional approach of representing a given set of data points using a single low dimensional subspace (e.g., Principal Component Analysis). For algorithms that model the data at hand with this independence assumption, the subspace structure of the data needs to be preserved after dimensionality reduction. Although a number of existing dimensionality reduction techniques [11, 4, 1, 5] try to preserve the spatial geometry of the data, to the best of our knowledge no prior work has tried to explicitly preserve the independence between subspaces. In this paper, we propose a novel dimensionality reduction technique that preserves independence between multiple subspaces. To achieve this, we first show that for any two disjoint subspaces of arbitrary dimensionality, there exists a two dimensional subspace onto which both subspaces collapse to form two lines. We then extend this non-trivial idea to the multi-class case and show that 2K projection vectors are sufficient for preserving the subspace structure of a K class dataset. Further, we design an efficient algorithm that finds projection vectors with the aforementioned properties while being able to handle corrupted data at the same time.

2 Preliminaries

Let S_1, S_2, ..., S_K be K subspaces in R^n. We say that these K subspaces are independent if no non-zero vector in any S_i is a linear combination of vectors in the other K − 1 subspaces. Let the columns of the matrix B_i ∈ R^{n×d} denote the support of the i-th subspace, of dimension d. Then any vector in this subspace can be represented as x = B_i w for some w ∈ R^d. We now define the notion of margin between two subspaces.
Definition 1 (Subspace Margin). Subspaces S_i and S_j are separated by margin γ_ij if

γ_ij = max_{u ∈ S_i, v ∈ S_j} ⟨u, v⟩ / (‖u‖_2 ‖v‖_2).  (1)

Thus the margin between two subspaces is the maximum dot product between two unit vectors (u, v), one from each subspace. Such a vector pair (u, v) is known as a principal vector pair between the two subspaces, and the angle between these vectors is called a principal angle.

With these definitions of independent subspaces and margin, assume that we are given a dataset sampled from a union of independent linear subspaces; specifically, each class in this dataset lies along one such independent subspace. Our goal is then to reduce the dimensionality of this dataset such that, after projection, each class continues to lie along a linear subspace and each such subspace is independent of all the others. Formally, let X = [X_1, X_2, ..., X_K] be a K class dataset in R^n such that vectors from class i (x ∈ X_i) lie along subspace S_i. Our goal is to find a projection matrix P ∈ R^{n×m} such that each projected class X̄_i := {P^T x : x ∈ X_i} (i ∈ {1, ..., K}) lies along a linear subspace S̄_i in R^m, and each subspace S̄_i is independent of all the others.

3 Proposed Approach

In this section, we propose a novel subspace learning approach, applicable to labeled datasets, that theoretically guarantees independent subspace structure preservation. The number of projection vectors required by our approach is not only independent of the size of the dataset but also fixed, depending only on the number of classes: for any K class labeled dataset with independent subspace structure, only 2K projection vectors are required for structure preservation. The idea of finding a fixed number of projection vectors for the structure preservation of a K class dataset is motivated by theorem 2.
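The margin of Definition 1, along with the principal vector pairs, can be read off from an SVD of B_1^T B_2 when orthonormal bases are available. A small numerical sketch (bases generated at random, with dimensions of our own choosing):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d1, d2 = 50, 4, 6

# Orthonormal bases B1, B2 of two random subspaces S1, S2 in R^n
# (disjoint, and in fact independent, with probability 1).
B1, _ = np.linalg.qr(rng.standard_normal((n, d1)))
B2, _ = np.linalg.qr(rng.standard_normal((n, d2)))

# The singular values of B1^T B2 are the cosines of the principal angles;
# the margin of Definition 1 is the largest one.
U, svals, Vt = np.linalg.svd(B1.T @ B2)
gamma = svals[0]
u1, v1 = B1 @ U[:, 0], B2 @ Vt[0, :]   # first principal vector pair
```

The assertion that no other pair of unit vectors can beat gamma is exactly the max in equation (1), restated as the largest singular value of B_1^T B_2.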
This theorem states a useful property of any pair of disjoint subspaces.

Theorem 2. Let unit vectors v_1 and v_2 be the i-th principal vector pair of two disjoint subspaces S_1 and S_2 in R^n. Let the columns of the matrix P ∈ R^{n×2} be any two orthonormal vectors in the span of v_1 and v_2. Then for all vectors x ∈ S_j, P^T x = α t_j (j ∈ {1, 2}), where α ∈ R depends on x and t_j ∈ R^2 is a fixed vector independent of x. Further,

t_1^T t_2 / (‖t_1‖_2 ‖t_2‖_2) = v_1^T v_2.

Figure 1: A three dimensional example of the application of theorem 2: (a) independent subspaces in 3 dimensions; (b) subspaces after projection. See the text in section 3 for details.

Proof: We use the notation (M)_j to denote the j-th column of an arbitrary matrix M. We claim that t_j = P^T v_j (j ∈ {1, 2}). Without loss of generality, assume that (P)_1 = v_1. To prove theorem 2, it then suffices to show that (P)_2^T x = 0 for all x ∈ S_1; by symmetry, for all x ∈ S_2, P^T x will also lie along a line in the subspace spanned by the columns of P. Let the columns of B_1 ∈ R^{n×d_1} and B_2 ∈ R^{n×d_2} be the supports of S_1 and S_2 respectively, where d_1 and d_2 are the dimensionalities of the two subspaces. Then we can write v_1 = B_1 w_1 and v_2 = B_2 w_2 for some w_1 ∈ R^{d_1} and w_2 ∈ R^{d_2}. Let B_1 w be an arbitrary vector in S_1, where w ∈ R^{d_1}. We need to show that T := (B_1 w)^T (P)_2 = 0 for all w. Notice that

T = (B_1 w)^T (B_2 w_2 − (w_1^T B_1^T B_2 w_2) B_1 w_1) = w^T (B_1^T B_2 w_2 − (w_1^T B_1^T B_2 w_2) w_1)  for all w.  (2)

Let U S V^T be the SVD of B_1^T B_2. Then w_1 and w_2 are the i-th columns of U and V respectively, and v_1^T v_2 is the i-th diagonal element of S, since v_1 and v_2 are the i-th principal vectors of S_1 and S_2.
Thus,

T = w^T (U S V^T w_2 − S_ii (U)_i) = w^T (S_ii (U)_i − S_ii (U)_i) = 0. □  (3)

Geometrically, this theorem says that after projection onto the plane P defined by any one of the principal vector pairs between subspaces S_1 and S_2, the two subspaces collapse to just two lines: points from S_1 lie along one line while points from S_2 lie along the other. Further, the angle that separates these lines is equal to the angle between the i-th principal vector pair of S_1 and S_2 if the span of the i-th principal vector pair is used as P. We apply theorem 2 to a three dimensional example in figure 1. In figure 1(a), the first subspace (the y-z plane) is shown in red, while the second subspace is the black line in the x-y plane. Notice that for this setting, the x-y plane (shown in blue) is in the span of the first (and only) principal vector pair between the two subspaces. After projecting both subspaces onto the x-y plane, we get two lines (figure 1(b)), as stated in the theorem.

Finally, we now show that for any K class dataset with independent subspace structure, 2K projection vectors are sufficient for structure preservation.

Theorem 3. Let X = {x_i}_{i=1}^N be a K class dataset in R^n with independent subspace structure. Let P = [P_1 ... P_K] ∈ R^{n×2K} be a projection matrix for X such that the columns of each matrix P_k ∈ R^{n×2} are orthonormal vectors in the span of any principal vector pair between the subspaces S_k and Σ_{j≠k} S_j. Then the independent subspace structure of the dataset X is preserved after projection onto the 2K vectors in P.

Before stating the proof of this theorem, we first state lemma 4, which we will use later in the proof. This lemma states that if two vectors are separated by a non-zero angle, then after augmenting these vectors with arbitrary vectors, the new vectors remain separated by a non-zero angle as well. This straightforward idea will help us extend the two subspace case of theorem 2 to multiple subspaces.
Lemma 4. Let x_1, y_1 be any two fixed vectors of the same dimensionality such that x_1^T y_1 / (‖x_1‖_2 ‖y_1‖_2) = γ, where γ ∈ [0, 1). Let x_2, y_2 be any two arbitrary vectors of the same dimensionality as each other. Then there exists a constant γ̄ ∈ [0, 1) such that the augmented vectors x′ = [x_1; x_2] and y′ = [y_1; y_2] are also separated, with x′^T y′ / (‖x′‖_2 ‖y′‖_2) ≤ γ̄.

Algorithm 1: Computation of projection matrix P
INPUT: X, K, λ, itermax
for k = 1 to K do
  w_2* ← random vector in R^{N̄_k}
  while iter < itermax or γ not converged do
    w_1* ← argmin_{w_1} ‖X_k w_1 − X̄_k w_2* / ‖X̄_k w_2*‖_2‖_2^2 + λ‖w_1‖_2^2
    w_1* ← w_1* / norm(w_1*)
    w_2* ← argmin_{w_2} ‖X_k w_1* / ‖X_k w_1*‖_2 − X̄_k w_2‖_2^2 + λ‖w_2‖_2^2
    w_2* ← w_2* / norm(w_2*)
    γ ← (X_k w_1*)^T (X̄_k w_2*)
  end while
  P_k ← [X_k w_1*, X̄_k w_2*]
end for
P* ← [P_1 ... P_K]
OUTPUT: P*

Proof of theorem 3: It suffices to show that data vectors from the subspaces S_k and Σ_{j≠k} S_j (for any k ∈ {1, ..., K}) are separated by a margin less than 1 after projection using P. Let x and y be any vectors in S_k and Σ_{j≠k} S_j respectively, and let the columns of the matrix P_k be in the span of the i-th (say) principal vector pair between these subspaces. Using theorem 2, the projected vectors P_k^T x and P_k^T y are separated by an angle equal to the angle between the i-th principal vector pair of S_k and Σ_{j≠k} S_j. Let the cosine of this angle be γ. Then, using lemma 4, the vectors P^T x and P^T y, obtained by appending further coordinates to P_k^T x and P_k^T y, are also separated by some margin γ̄ < 1. As the same argument holds for vectors from all classes, the independent subspace structure of the dataset remains preserved after projection. □

For any two disjoint subspaces, theorem 2 tells us that there is a two dimensional plane in which both projected subspaces form two lines. It can be argued that after adding arbitrary valued finite dimensions to the basis of this plane, the two projected subspaces will also remain disjoint (see the proof of theorem 3).
Theorem 3 simply applies this argument to each subspace and the sum of the remaining subspaces, one at a time; thus for K subspaces we get 2K projection vectors. Finally, our approach projects data to 2K dimensions, which could be a concern if the original feature dimension itself is less than 2K. However, since we are only concerned with data satisfying the independent subspace assumption, note that the feature dimension must be at least K, because each class must lie along at least one dimension that is linearly independent of the others. Moreover, if we relax this to at least two dimensions per class, the feature dimension is already at least 2K.

3.1 Implementation

A naive approach to finding the projection vectors (say, in the binary class case) would be to compute the SVD of the matrix X_1^T X_2, where the columns of X_1 and X_2 contain vectors from class 1 and class 2 respectively. For large datasets this would not only be computationally expensive but also incapable of handling noise. Thus, even though theorem 3 guarantees structure preservation of the dataset X after projection using P as specified, it does not by itself solve the problem of dimensionality reduction: given a labeled dataset sampled from a union of independent subspaces, we have no information about the basis, or even the dimensionality, of the underlying subspaces. Under these circumstances, constructing the projection matrix P as specified in theorem 3 itself becomes a problem. To solve this, we propose an algorithm that estimates the underlying principal vector pair between subspaces S_k and Σ_{j≠k} S_j (for k = 1 to K) from the labeled dataset X. The underlying assumption is that samples from each subspace (class) are not heavily corrupted and that the subspaces are independent.
Notice that we are not interested in one particular principal vector pair between any two subspaces for the computation of the projection matrix. This is because we have assumed independent subspaces, so every principal vector pair is separated by some margin γ < 1. Hence we need an algorithm that computes an arbitrary principal vector pair, given data from two independent subspaces. These vectors can then be used to form one of the K submatrices in P, as specified in theorem 3. For computing the submatrix P_k, we need a principal vector pair between the subspaces S_k and Σ_{j≠k} S_j. In terms of the dataset X, we estimate this vector pair using the data in X_k and X̄_k, where X̄_k := X \ {X_k}. We repeat this process for each class to form the entire matrix P*. Our approach is stated in algorithm 1. For each class k, the idea is to start with a random vector in the span of X̄_k and find the vector in X_k closest to it; then fix this vector and search for the closest vector in X̄_k. Repeating this process until the cosine between the two vectors converges yields a principal vector pair. To estimate the closest vector from the opposite subspace, algorithm 1 uses a quadratic program that minimizes the reconstruction error of the fixed vector (from one subspace) using vectors from the opposite subspace. The regularization in this optimization handles noise in the data.

3.2 Justification

Definition 1 of the margin γ between two subspaces S_1 and S_2 can be equivalently expressed as

1 − γ = min_{w_1, w_2} (1/2) ‖B_1 w_1 − B_2 w_2‖^2  s.t. ‖B_1 w_1‖_2 = 1, ‖B_2 w_2‖_2 = 1,  (4)

where the columns of B_1 ∈ R^{n×d_1} and B_2 ∈ R^{n×d_2} are bases of the subspaces S_1 and S_2 respectively, such that B_1^T B_1 and B_2^T B_2 are both identity matrices.

Proposition 5. Let B_1 ∈ R^{n×d_1} and B_2 ∈ R^{n×d_2} be bases of two disjoint subspaces S_1 and S_2. Then for any principal vector pair (u_i, v_i) between the subspaces S_1 and S_2, the corresponding vector pair (w_1 ∈ R^{d_1}, w_2 ∈ R^{d_2}), s.t.
u_i = B_1 w_1 and v_i = B_2 w_2, is a local minimum of the objective in equation (4).

Proof: The Lagrangian of the above objective is

L(w_1, w_2, η) = (1/2) w_1^T B_1^T B_1 w_1 + (1/2) w_2^T B_2^T B_2 w_2 − w_1^T B_1^T B_2 w_2 + η_1 (‖B_1 w_1‖_2 − 1) + η_2 (‖B_2 w_2‖_2 − 1).  (5)

Setting the gradient with respect to w_1 to zero at a feasible (unit-norm) point, we get

∇_{w_1} L = (1 + η_1) w_1 − B_1^T B_2 w_2 = 0.  (6)

Let U S V^T be the SVD of B_1^T B_2, and let w_1 and w_2 be the i-th columns of U and V respectively. Then equation (6) becomes

∇_{w_1} L = (1 + η_1) w_1 − U S V^T w_2 = (1 + η_1) w_1 − S_ii w_1 = 0.  (7)

Thus the gradient with respect to w_1 vanishes when η_1 = S_ii − 1; similarly, the gradient with respect to w_2 vanishes when η_2 = S_ii − 1. Hence the gradient of the Lagrangian L is zero with respect to both w_1 and w_2 for every corresponding principal vector pair, so the vector pair (w_1, w_2) corresponding to any principal vector pair between the subspaces S_1 and S_2 is a local minimum of objective (4). □

Since the (w_1, w_2) corresponding to any principal vector pair between two disjoint subspaces form a local minimum of the objective in equation (4), one can alternately minimize equation (4) with respect to w_1 and w_2 and reach one of these local minima. Thus, assuming independent subspace structure for all K classes in algorithm 1 and setting λ to zero, it is straightforward to see that the algorithm yields a projection matrix satisfying the criteria specified by theorem 3. Finally, real world data do not in general strictly satisfy the independent subspace assumption, and even slight corruption of the data may violate this independence. To tackle this problem, we add a regularization term (λ > 0) when solving for the principal vector pair in algorithm 1. If we assume that the corruption is not heavy, reconstructing a sample using vectors belonging to another subspace would require a large coefficient over those vectors.
The regularization therefore discourages reconstructing data of one class using slightly corrupted vectors from another class, by assigning such vectors small coefficients.

Figure 2: Qualitative comparison between (a) data projected using the true projection matrix Pa and (b) data projected using the projection matrix Pb from the proposed approach, on high dimensional synthetic two class data. See section 4.1.1 for details.

Figure 3: Four different pairs of classes from the Extended Yale dataset B projected onto two dimensional subspaces using the proposed approach. See section 4.1.1 for details.

3.3 Complexity

Algorithm 1 solves an unconstrained quadratic program inside a while-loop. Assume the while-loop runs for T iterations and that conjugate gradient descent is used to solve the quadratic program in each iteration. For any matrix A ∈ R^{a×b} and vector b ∈ R^a, conjugate gradient applied to a problem of the form

$$\arg\min_{x} \|Ax - b\|^2 \qquad (8)$$

takes time O(ab√κ), where κ is the condition number of A^T A. It is therefore straightforward to see that the time required to compute the projection matrix for a K class problem in our case is O(KTnN√κ), where n is the dimensionality of the feature space, N is the total number of samples, and κ is the condition number of the matrix (Xk^T Xk + λI), with I the identity matrix.
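The alternating procedure of algorithm 1 can be sketched in a few lines of numpy. The following is our own illustration with hypothetical names, not the authors' code: it uses least squares in place of the paper's regularized quadratic program when λ = 0, and checks the converged cosine against the SVD characterization from Proposition 5.

```python
import numpy as np

def closest_unit_vector(D, v, lam=0.0):
    """Unit vector in span(D) minimizing ||D w - v||^2 + lam * ||w||^2."""
    if lam > 0:
        w = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ v)
    else:
        w, *_ = np.linalg.lstsq(D, v, rcond=None)
    u = D @ w
    return u / np.linalg.norm(u)

def principal_pair(Xk, Xbar, lam=0.0, tol=1e-12, max_iter=5000, seed=0):
    """Alternating estimate of a principal vector pair between the
    subspaces spanned by the columns of Xk and Xbar (cf. algorithm 1)."""
    rng = np.random.default_rng(seed)
    v = Xbar @ rng.standard_normal(Xbar.shape[1])  # random start in span(Xbar)
    v /= np.linalg.norm(v)
    prev = -1.0
    for _ in range(max_iter):
        u = closest_unit_vector(Xk, v, lam)        # closest vector in span(Xk)
        v = closest_unit_vector(Xbar, u, lam)      # closest vector in span(Xbar)
        cos = float(u @ v)
        if abs(cos - prev) < tol:                  # cosine has converged
            break
        prev = cos
    return u, v, cos

# Check against the SVD characterization of Proposition 5.
rng = np.random.default_rng(1)
B1, _ = np.linalg.qr(rng.standard_normal((50, 5)))   # basis of S_k
B2, _ = np.linalg.qr(rng.standard_normal((50, 7)))   # basis of the complement
Xk = B1 @ rng.standard_normal((5, 12))               # samples from S_k
Xbar = B2 @ rng.standard_normal((7, 15))             # samples from the complement
_, _, cos = principal_pair(Xk, Xbar)
top = np.linalg.svd(B1.T @ B2, compute_uv=False)[0]  # largest principal cosine
```

Starting from a random vector, the alternation behaves like a power iteration and generically converges to the top principal pair, so `cos` approaches the largest singular value of B1^T B2.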
4 Empirical Analysis

In this section, we present empirical evidence supporting our theoretical analysis of the proposed subspace learning approach. For real world data, we use the following datasets:

1. Extended Yale dataset B [3]: It consists of ∼2,414 frontal face images of 38 individuals (K = 38), with 64 images per person, taken under constrained but varying illumination conditions.

2. AR dataset [10]: This dataset consists of more than 4,000 frontal face images of 126 individuals, with 26 images per person, taken under varying illumination, expression, and facial disguise. For our experiments, similar to [15], we use images from 100 individuals (K = 100): 50 males and 50 females. We further use only the 14 images per class that correspond to illumination and expression changes, 7 from Session 1 and the remaining 7 from Session 2.

3. PIE dataset [12]: The pose, illumination, and expression (PIE) database is a subset of the CMU PIE dataset consisting of 11,554 images of 68 people (K = 68).

We crop all images to 32×32 and concatenate the pixel intensities to form our feature vectors. Further, we normalize all data vectors to have unit ℓ2 norm.

Figure 4: Multi-class separation after projection using the proposed approach for (a) Yale dataset B, (b) AR dataset, and (c) PIE dataset. See section 4.1.2 for details.

4.1 Qualitative Analysis

4.1.1 Two Subspaces, Two Lines

In this section we test both the claim of theorem 2 and the quality of the approximation achieved by algorithm 1, on both synthetic and real data.

1. Synthetic Data: We generate two random subspaces in R^1000 of dimensionality 20 and 30 (notice that these subspaces will be independent with probability 1). We randomly generate 100 data vectors from each subspace and normalize them to have unit length.
We then compute the first principal vector pair between the two subspaces by performing an SVD of B1^T B2, where B1 and B2 are bases of the two subspaces. We orthonormalize the vector pair to form the projection matrix Pa. Next, we use the labeled dataset of 200 points generated above to form the projection matrix Pb by applying algorithm 1. The entire dataset of 200 points is then projected onto Pa and Pb separately and plotted in figure 2, where green and red points denote data from the two subspaces. The results not only substantiate the claim of theorem 2 but also suggest that the proposed algorithm estimates the projection matrix well.

2. Real Data: Here we use the Extended Yale dataset B. Since this experimental setup concerns the projection of two class data, we randomly choose 4 different pairs of classes from the dataset and use the labeled data from each pair to generate a two dimensional projection matrix (for that pair) using algorithm 1. The resulting projected data for the 4 pairs is shown in figure 3. As is evident from the figure, the projected two class data for each pair lie approximately along two different lines.

4.1.2 Multi-class separability

We analyze the separation between the K classes of a given K-class dataset after dimensionality reduction. First, we compute the projection matrix for the dataset using our approach and project the data. Second, we compute the top principal vector of each class separately from the projected data, giving K vectors; let the columns of the matrix Z ∈ R^{2K×K} contain these vectors. To visualize inter-class separability, we simply take the dot products of these vectors with each other, i.e. Z^T Z. Figure 4 shows this visualization for the three face datasets. The diagonal elements represent self-dot products and thus have value 1 (white).
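The Z^T Z visualization just described can be sketched as follows. This is our own illustration with hypothetical names; the top principal vector of each class is taken here as the leading left singular vector of that class's projected data.

```python
import numpy as np

def separability_matrix(projected, labels):
    """Z^T Z, where column k of Z is the top principal direction of the
    projected samples of class k (columns of `projected`). Diagonal
    entries are 1; small off-diagonal magnitudes mean separated classes."""
    Z = []
    for k in np.unique(labels):
        Xk = projected[:, labels == k]             # samples of class k
        U, _, _ = np.linalg.svd(Xk, full_matrices=False)
        Z.append(U[:, 0])                          # top principal vector
    Z = np.stack(Z, axis=1)
    return Z.T @ Z

# Toy check: two classes lying exactly along orthogonal lines in R^4.
rng = np.random.default_rng(0)
X = np.concatenate([np.outer(np.eye(4)[:, 0], rng.standard_normal(10)),
                    np.outer(np.eye(4)[:, 1], rng.standard_normal(10))], axis=1)
y = np.array([0] * 10 + [1] * 10)
M = separability_matrix(X, y)
# diag(M) is all ones; M[0, 1] is ~0 for orthogonal class lines
```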
The off-diagonal elements represent inter-class dot products, and these values are consistently small (dark) for all three datasets, reflecting between-class separability.

4.2 Quantitative Analysis

To evaluate theorem 3, we perform a classification experiment on the three real world datasets mentioned above after projecting the data vectors using different dimensionality reduction techniques. We compare our quantitative results against PCA, Linear Discriminant Analysis (LDA), Regularized LDA, and Random Projections (RP)¹. We use sparse coding [15] for classification.

¹We also used LPP (Locality Preserving Projections) [4], NPE (Neighborhood Preserving Embedding) [5], and Laplacian Eigenmaps [1] for dimensionality reduction on the Extended Yale B dataset. However, because the best performing of these techniques yielded an accuracy of only 73%, compared to close to 98% from our approach, we do not report results from these methods.

For the Extended Yale dataset B, we use all 38 classes for evaluation, with a 50%-50% train-test split (Table 1) and a 70%-30% train-test split (Table 2). Since our method is randomized, we perform 50 runs of computing the projection matrix using algorithm 1 and report the mean accuracy with standard deviation. Similarly for RP, we generate 50 different random matrices and then perform classification. Since all other methods are deterministic, there is no need for multiple runs.

Table 1: Classification Accuracy on Extended Yale dataset B with 50%-50% train-test split. See section 4.2 for details.

Method | Ours         | PCA   | LDA   | Reg-LDA | RP
dim    | 76           | 76    | 37    | 37      | 76
acc    | 98.06 ± 0.18 | 92.54 | 83.68 | 95.77   | 93.78 ± 0.48

Table 2: Classification Accuracy on Extended Yale dataset B with 70%-30% train-test split. See section 4.2 for details.

Method | Ours         | PCA   | LDA   | Reg-LDA | RP
dim    | 76           | 76    | 37    | 37      | 76
acc    | 99.45 ± 0.20 | 93.98 | 93.85 | 97.47   | 94.72 ± 0.66

Table 3: Classification Accuracy on AR dataset. See section 4.2 for details.
Method | Ours         | PCA   | LDA | Reg-LDA | RP
dim    | 200          | 200   | 99  | 99      | 200
acc    | 92.18 ± 0.08 | 85.00 | —   | 88.71   | 84.76 ± 1.36

Table 4: Classification Accuracy on a subset of CMU PIE dataset. See section 4.2 for details.

Method | Ours         | PCA   | LDA   | Reg-LDA | RP
dim    | 136          | 136   | 67    | 67      | 136
acc    | 93.65 ± 0.08 | 87.76 | 86.71 | 92.59   | 90.46 ± 0.93

Table 5: Classification Accuracy on a subset of CMU PIE dataset. See section 4.2 for details.

Method | Ours         | PCA   | LDA   | Reg-LDA | RP
dim    | 20           | 20    | 9     | 9       | 20
acc    | 99.07 ± 0.09 | 97.06 | 95.88 | 97.25   | 95.03 ± 0.41

For the AR dataset, we take the 7 images from Session 1 for training and the 7 images from Session 2 for testing; the results are shown in table 3. The result using LDA is not reported because we found that the summed within-class covariance was degenerate, and hence LDA was not applicable. It can be clearly seen that our approach significantly outperforms the other dimensionality reduction methods. Finally, for the PIE dataset, we perform experiments on two different subsets. First, we take all 68 classes and, for each class, randomly choose 25 images for training and 25 for testing; the performance on this subset is shown in table 4. Second, we take only the first 10 classes of the dataset and, of the 170 images per class, randomly split the data into a 70%-30% train-test set; the performance on this subset is shown in table 5. Evidently, our approach consistently yields the best performance on all three datasets compared to the other dimensionality reduction methods.

5 Conclusion

We presented a theoretical analysis of the preservation of independence between multiple subspaces. We showed that for K independent subspaces, 2K projection vectors are sufficient for independence preservation (theorem 3). This result is motivated by our observation that for any two disjoint subspaces of arbitrary dimensionality, there exists a two dimensional plane such that, after projection, the entire subspaces collapse to just two lines (theorem 2).
Resulting from this analysis, we proposed an efficient iterative algorithm (algorithm 1) that exploits these properties to learn a projection matrix for dimensionality reduction that preserves independence between multiple subspaces. Our empirical results on three real world datasets yield state-of-the-art performance compared to popular dimensionality reduction methods.

References

[1] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput., 15(6):1373–1396, June 2003.
[2] Deng Cai, Xiaofei He, and Jiawei Han. Efficient kernel discriminant analysis via spectral regression. In Data Mining, 2007. ICDM 2007. Seventh IEEE International Conference on, pages 427–432. IEEE, 2007.
[3] A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman. From few to many: Illumination cone models for face recognition under variable lighting and pose. IEEE Trans. Pattern Anal. Mach. Intelligence, 23(6):643–660, 2001.
[4] X. He and P. Niyogi. Locality preserving projections (LPP). In Proc. of NIPS, Advances in Neural Information Processing Systems. Vancouver: MIT Press, 103, 2004.
[5] Xiaofei He, Deng Cai, Shuicheng Yan, and Hong-Jiang Zhang. Neighborhood preserving embedding. In Computer Vision, 2005. ICCV 2005. Tenth IEEE International Conference on, volume 2, pages 1208–1213, Oct 2005.
[6] Jeffrey Ho, Ming-Hsuan Yang, Jongwoo Lim, Kuang-Chih Lee, and David Kriegman. Clustering appearances of objects under varying illumination conditions. In Computer Vision and Pattern Recognition, 2003. Proceedings. 2003 IEEE Computer Society Conference on, volume 1, pages I–11–I–18. IEEE, 2003.
[7] Wei Hong, John Wright, Kun Huang, and Yi Ma. Multiscale hybrid linear models for lossy image representation. Image Processing, IEEE Transactions on, 15(12):3655–3671, 2006.
[8] Guangcan Liu, Zhouchen Lin, and Yong Yu. Robust subspace segmentation by low-rank representation. In ICML, 2010.
[9] Yi Ma, Harm Derksen, Wei Hong, and John Wright. Segmentation of multivariate mixed data via lossy coding and compression. IEEE Transactions on Pattern Analysis and Machine Intelligence, 3, 2007.
[10] Aleix Martínez and Robert Benavente. AR Face Database, 1998.
[11] Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, December 2000.
[12] Terence Sim, Simon Baker, and Maan Bsat. The CMU pose, illumination, and expression (PIE) database. In Automatic Face and Gesture Recognition, 2002. Proceedings. Fifth IEEE International Conference on, pages 46–51. IEEE, 2002.
[13] René Vidal, Stefano Soatto, Yi Ma, and Shankar Sastry. An algebraic geometric approach to the identification of a class of linear hybrid systems. In Decision and Control, 2003. Proceedings. 42nd IEEE Conference on, volume 1, pages 167–172. IEEE, 2003.
[14] René Vidal, Roberto Tron, and Richard Hartley. Multiframe motion segmentation with missing data using PowerFactorization and GPCA. International Journal of Computer Vision, 79(1):85–105, 2008.
[15] J. Wright, A.Y. Yang, A. Ganesh, S.S. Sastry, and Yi Ma. Robust face recognition via sparse representation. IEEE TPAMI, 31(2):210–227, Feb. 2009.
[16] Allen Y. Yang, John Wright, Yi Ma, and S. Shankar Sastry. Unsupervised segmentation of natural images via lossy data compression. Computer Vision and Image Understanding, 110(2):212–225, 2008.
Ranking via Robust Binary Classification

Hyokun Yun
Amazon
Seattle, WA 98109
yunhyoku@amazon.com

Parameswaran Raman, S. V. N. Vishwanathan
Department of Computer Science
University of California
Santa Cruz, CA 95064
{params,vishy}@ucsc.edu

Abstract

We propose RoBiRank, a ranking algorithm motivated by observing a close connection between evaluation metrics for learning to rank and loss functions for robust classification. It shows competitive performance on standard benchmark datasets against a number of other representative algorithms in the literature. We also discuss extensions of RoBiRank to large scale problems where explicit feature vectors and scores are not given. We show that RoBiRank can be efficiently parallelized across a large number of machines; for a task that requires 386,133 × 49,824,519 pairwise interactions between items to be ranked, RoBiRank finds solutions of dramatically higher quality than those found by a state-of-the-art competitor algorithm, given the same amount of wall-clock computation time.

1 Introduction

Learning to rank is the problem of ordering a set of items according to their relevance to a given context [8]. While a number of approaches have been proposed in the literature, in this paper we provide a new perspective by showing a close connection between ranking and a seemingly unrelated topic in machine learning, namely robust binary classification. In robust classification [13], we are asked to learn a classifier in the presence of outliers. Standard models for classification such as Support Vector Machines (SVMs) and logistic regression do not perform well in this setting, since the convexity of their loss functions does not let them give up their performance on any of the data points [16]; for a classification model to be robust to outliers, it has to be capable of sacrificing its performance on some of the data points.
We observe that this requirement is very similar to what standard metrics for ranking try to evaluate. Discounted Cumulative Gain (DCG) [17] and its normalized version NDCG, popular metrics for learning to rank, strongly emphasize the performance of a ranking algorithm at the top of the list; therefore, a good ranking algorithm in terms of these metrics has to be able to give up its performance at the bottom of the list if doing so improves its performance at the top. In fact, we will show that DCG and NDCG can indeed be written as a natural generalization of robust loss functions for binary classification. Based on this observation we formulate RoBiRank, a novel model for ranking which maximizes a lower bound of (N)DCG. Although non-convexity seems unavoidable for the bound to be tight [9], our bound is based on the class of robust loss functions that are found to be empirically easier to optimize [10]. Indeed, our experimental results suggest that RoBiRank reliably converges to a solution that is competitive with other representative algorithms even though its objective function is non-convex.

While standard deterministic optimization algorithms such as L-BFGS [19] can be used to estimate the parameters of RoBiRank, applying the model to large-scale datasets requires a more efficient parameter estimation algorithm. This is of particular interest in the context of latent collaborative retrieval [24]; unlike the standard ranking task, here the number of items to rank is very large, and explicit feature vectors and scores are not given. Therefore, we develop an efficient parallel stochastic optimization algorithm for this problem. It has two very attractive characteristics: First, the time complexity of each stochastic update is independent of the size of the dataset.
Second, when the algorithm is distributed across multiple machines, no interaction between machines is required during most of the execution; therefore, the algorithm enjoys near linear scaling. This is a significant advantage over serial algorithms, since it is very easy to deploy a large number of machines nowadays thanks to the popularity of cloud computing services, e.g. Amazon Web Services. We apply our algorithm to the latent collaborative retrieval task on the Million Song Dataset [3], which consists of 1,129,318 users, 386,133 songs, and 49,824,519 records; for this task, a ranking algorithm has to optimize an objective function involving 386,133 × 49,824,519 pairwise interactions. With the same amount of wall-clock time given to each algorithm, RoBiRank leverages parallel computing to outperform the state-of-the-art with a 100% lift on the evaluation metric.

2 Robust Binary Classification

Suppose we are given training data consisting of n data points (x1, y1), (x2, y2), . . . , (xn, yn), where each xi ∈ R^d is a d-dimensional feature vector and yi ∈ {−1, +1} is its label. A linear model attempts to learn a d-dimensional parameter ω; for a given feature vector x, it predicts label +1 if ⟨x, ω⟩ ≥ 0 and −1 otherwise. Here ⟨·, ·⟩ denotes the Euclidean dot product between two vectors. The quality of ω can be measured by the number of mistakes it makes:

$$L(\omega) := \sum_{i=1}^{n} I(y_i \cdot \langle x_i, \omega \rangle < 0).$$

The indicator function I(· < 0) is called the 0-1 loss function, because it has value 1 if the decision rule makes a mistake and 0 otherwise. Unfortunately, since the 0-1 loss is a discrete function, its minimization is difficult [11]. The most popular solution to this problem in machine learning is to upper bound the 0-1 loss by an easy-to-optimize function [2].
For example, logistic regression uses the logistic loss function σ0(t) := log2(1 + 2^{−t}) to come up with a continuous and convex objective function

$$\bar{L}(\omega) := \sum_{i=1}^{n} \sigma_0(y_i \cdot \langle x_i, \omega \rangle), \qquad (1)$$

which upper bounds L(ω). It is clear that for each i, σ0(yi · ⟨xi, ω⟩) is a convex function of ω; therefore L̄(ω), a sum of convex functions, is also convex and relatively easy to optimize [6]. Support Vector Machines (SVMs), on the other hand, can be recovered by using the hinge loss to upper bound the 0-1 loss. However, convex upper bounds such as L̄(ω) are known to be sensitive to outliers [16]. The basic intuition here is that when yi · ⟨xi, ω⟩ is a very large negative number for some data point i, σ0(yi · ⟨xi, ω⟩) is also very large, and therefore the optimal solution of (1) will try to decrease the loss on such outliers at the expense of its performance on "normal" data points.

In order to construct robust loss functions, consider the following two transformation functions:

$$\rho_1(t) := \log_2(t + 1), \qquad \rho_2(t) := 1 - \frac{1}{\log_2(t + 2)}, \qquad (2)$$

which, in turn, can be used to define the following loss functions:

$$\sigma_1(t) := \rho_1(\sigma_0(t)), \qquad \sigma_2(t) := \rho_2(\sigma_0(t)). \qquad (3)$$

One can see that σ1(t) → ∞ as t → −∞, but at a much slower rate than σ0(t); its derivative σ1′(t) → 0 as t → −∞. Therefore, σ1(·) does not grow as rapidly as σ0(t) on hard-to-classify data points. Such loss functions are called Type-I robust loss functions by Ding [10], who also showed that they enjoy statistical robustness properties. σ2(t) behaves even better: it converges to a constant as t → −∞, and therefore "gives up" on hard-to-classify data points. Such loss functions are called Type-II loss functions, and they also enjoy statistical robustness properties [10]. In terms of computation, of course, σ1(·) and σ2(·) are not convex, and therefore objective functions based on such loss functions are more difficult to optimize.
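To make the behavior of these losses concrete, here is a small numpy sketch (our own illustration, not from the paper) of σ0 and the transformed losses σ1 = ρ1 ∘ σ0 and σ2 = ρ2 ∘ σ0 on a badly misclassified point:

```python
import numpy as np

def sigma0(t):
    """Logistic loss in base 2: sigma0(t) = log2(1 + 2^(-t))."""
    return np.log2(1.0 + np.exp2(-t))

def rho1(t):
    """Type-I transformation, equation (2)."""
    return np.log2(t + 1.0)

def rho2(t):
    """Type-II transformation, equation (2)."""
    return 1.0 - 1.0 / np.log2(t + 2.0)

def sigma1(t):
    return rho1(sigma0(t))   # Type-I robust loss, equation (3)

def sigma2(t):
    return rho2(sigma0(t))   # Type-II robust loss, equation (3)

t = -100.0  # a badly misclassified (outlier) point
# sigma0 grows linearly in |t|, sigma1 only logarithmically,
# and sigma2 stays bounded below 1:
print(sigma0(t), sigma1(t), sigma2(t))
```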
However, it has been observed in Ding [10] that models based on optimizing Type-I functions are often empirically much more successful than those optimizing Type-II functions. Furthermore, the solutions of Type-I optimization are more stable with respect to parameter initialization. Intuitively, this is because Type-II functions asymptote to a constant, reducing the gradient to almost zero in a large fraction of the parameter space; this makes it difficult for a gradient-based algorithm to determine which direction to pursue. See Ding [10] for more details.

3 Ranking Model via Robust Binary Classification

Let X = {x1, x2, . . . , xn} be a set of contexts, and Y = {y1, y2, . . . , ym} be a set of items to be ranked. For example, in movie recommender systems X is the set of users and Y is the set of movies. In some problem settings, only a subset of Y is relevant to a given context x ∈ X; e.g., in document retrieval systems, only a subset of documents is relevant to a query. Therefore, we define Yx ⊂ Y to be the set of items relevant to context x. Observed data can be described by a set W := {Wxy}x∈X,y∈Yx, where Wxy is a real-valued score given to item y in context x. We adopt a standard problem setting from the learning to rank literature. For each context x and item y ∈ Yx, we aim to learn a scoring function f(x, y) : X × Yx → R that induces a ranking on the item set Yx; the higher the score, the more important the associated item is in the given context. To learn such a function, we first extract joint features of x and y, denoted φ(x, y). Then we parametrize f(·, ·) using a parameter ω, which yields the linear model fω(x, y) := ⟨φ(x, y), ω⟩, where, as before, ⟨·, ·⟩ denotes the Euclidean dot product. The parameter ω induces a ranking on the set of items Yx; we define rankω(x, y) to be the rank of item y in context x induced by ω. Observe that rankω(x, y) can also be written as a sum of 0-1 loss functions (see e.g.
Usunier et al. [23]):

$$\text{rank}_\omega(x, y) = \sum_{y' \in \mathcal{Y}_x, y' \neq y} I\left(f_\omega(x, y) - f_\omega(x, y') < 0\right). \qquad (4)$$

3.1 Basic Model

If an item y is very relevant in context x, a good parameter ω should position y at the top of the list; in other words, rankω(x, y) has to be small. This motivates the following objective function [7]:

$$L(\omega) := \sum_{x \in \mathcal{X}} c_x \sum_{y \in \mathcal{Y}_x} v(W_{xy}) \cdot \text{rank}_\omega(x, y), \qquad (5)$$

where cx is a weighting factor for each context x, and v(·) : R+ → R+ quantifies the relevance level of y in context x. Note that {cx} and v(Wxy) can be chosen to reflect the metric the model will be evaluated on (this will be discussed in Section 3.2). Using (4), (5) can be rewritten as a sum of indicator functions. Following the strategy in Section 2, one can form an upper bound of (5) by bounding each 0-1 loss with a logistic loss:

$$\bar{L}(\omega) := \sum_{x \in \mathcal{X}} c_x \sum_{y \in \mathcal{Y}_x} v(W_{xy}) \sum_{y' \in \mathcal{Y}_x, y' \neq y} \sigma_0\left(f_\omega(x, y) - f_\omega(x, y')\right). \qquad (6)$$

Just like (1), (6) is convex in ω and hence easy to minimize.

3.2 DCG

Although (6) enjoys convexity, it may not be a good objective function for ranking. This is because in most applications of learning to rank it is more important to do well at the top of the list than at the bottom, as users typically pay attention only to the top few items. Therefore, it is desirable to give up performance on the lower part of the list in order to gain quality at the top. This intuition is similar to that of robust classification in Section 2; a stronger connection will be shown below.

Discounted Cumulative Gain (DCG) [17] is one of the most popular metrics for ranking. For each context x ∈ X, it is defined as

$$\text{DCG}(\omega) := c_x \sum_{y \in \mathcal{Y}_x} \frac{v(W_{xy})}{\log_2(\text{rank}_\omega(x, y) + 2)}, \qquad (7)$$

where v(t) = 2^t − 1 and cx = 1. Since 1/log2(t + 2) decreases quickly and then asymptotes to a constant as t increases, this metric emphasizes the quality of the ranking at the top of the list.
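As a quick illustration of equation (7) (our own sketch, not the paper's code; ranks are 0-based and the gain is v(t) = 2^t − 1 as in the text), DCG and its normalized version can be computed as:

```python
import numpy as np

def dcg(relevances, scores):
    """DCG of the ranking induced by `scores`, with gain v(t) = 2^t - 1
    and discount 1 / log2(rank + 2), rank being 0-based (cf. equation (7))."""
    order = np.argsort(-np.asarray(scores))        # best-scored item first
    gains = 2.0 ** np.asarray(relevances)[order] - 1.0
    discounts = 1.0 / np.log2(np.arange(len(gains)) + 2.0)
    return float(np.sum(gains * discounts))

def ndcg(relevances, scores):
    """DCG divided by the maximum achievable DCG for these relevances."""
    ideal = dcg(relevances, relevances)            # rank by true relevance
    return dcg(relevances, scores) / ideal

rel = [3, 2, 0, 1]
ndcg(rel, [0.9, 0.5, 0.3, 0.4])   # perfect ordering -> 1.0
ndcg(rel, [0.3, 0.5, 0.9, 0.4])   # mistakes at the top are penalized most
```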
Normalized DCG (NDCG) simply normalizes the metric to lie between 0 and 1, by calculating the maximum achievable DCG value mx and dividing by it [17].

3.3 RoBiRank

Now we formulate RoBiRank, which optimizes a lower bound of metrics for ranking of the form (7). Observe that maxω DCG(ω) can be rewritten as

$$\min_\omega \sum_{x \in \mathcal{X}} c_x \sum_{y \in \mathcal{Y}_x} v(W_{xy}) \cdot \left(1 - \frac{1}{\log_2(\text{rank}_\omega(x, y) + 2)}\right). \qquad (8)$$

Using (4) and the definition of the transformation function ρ2(·) in (2), we can rewrite the objective function in (8) as

$$L_2(\omega) := \sum_{x \in \mathcal{X}} c_x \sum_{y \in \mathcal{Y}_x} v(W_{xy}) \cdot \rho_2\left(\sum_{y' \in \mathcal{Y}_x, y' \neq y} I\left(f_\omega(x, y) - f_\omega(x, y') < 0\right)\right). \qquad (9)$$

Since ρ2(·) is monotonically increasing, we can bound (9) with a continuous function by bounding each indicator with the logistic loss:

$$\bar{L}_2(\omega) := \sum_{x \in \mathcal{X}} c_x \sum_{y \in \mathcal{Y}_x} v(W_{xy}) \cdot \rho_2\left(\sum_{y' \in \mathcal{Y}_x, y' \neq y} \sigma_0\left(f_\omega(x, y) - f_\omega(x, y')\right)\right). \qquad (10)$$

This is reminiscent of the basic model in (6); just as we applied the transformation ρ2(·) to the logistic loss σ0(·) to construct the robust loss σ2(·) in (3), we again apply the same transformation to (6) to construct a loss function that respects the DCG metric used in ranking. In fact, (10) can be seen as a generalization of robust binary classification in which the transformation is applied to a group of logistic losses instead of a single loss. In both robust classification and ranking, the transformation ρ2(·) enables models to give up on part of the problem to achieve better overall performance. As discussed in Section 2, however, transforming the logistic loss with ρ2(·) yields a Type-II loss function, which is very difficult to optimize. Hence, instead of ρ2(·) we use the alternative transformation ρ1(·), which yields a Type-I loss function, to define the objective function of RoBiRank:

$$\bar{L}_1(\omega) := \sum_{x \in \mathcal{X}} c_x \sum_{y \in \mathcal{Y}_x} v(W_{xy}) \cdot \rho_1\left(\sum_{y' \in \mathcal{Y}_x, y' \neq y} \sigma_0\left(f_\omega(x, y) - f_\omega(x, y')\right)\right). \qquad (11)$$

Since ρ1(t) ≥ ρ2(t) for every t > 0, we have L̄1(ω) ≥ L̄2(ω) ≥ L2(ω) for every ω. Note that L̄1(ω) is continuous and twice differentiable.
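For intuition, the RoBiRank objective (11) for a single context can be evaluated directly from model scores. A minimal numpy sketch (our own illustration with hypothetical names, taking cx = 1 and v(t) = 2^t − 1):

```python
import numpy as np

def robirank_objective(scores, relevances):
    """RoBiRank loss (11) for a single context with c_x = 1:
    sum_y v(W_xy) * rho1( sum_{y' != y} sigma0(f(x, y) - f(x, y')) )."""
    scores = np.asarray(scores, dtype=float)
    v = 2.0 ** np.asarray(relevances, dtype=float) - 1.0   # v(t) = 2^t - 1
    sigma0 = lambda t: np.log2(1.0 + np.exp2(-t))
    rho1 = lambda t: np.log2(t + 1.0)
    total = 0.0
    for y in range(len(scores)):
        diffs = scores[y] - np.delete(scores, y)   # f(x, y) - f(x, y'), y' != y
        total += v[y] * rho1(np.sum(sigma0(diffs)))
    return float(total)

rel = [2.0, 1.0, 0.0]
good = robirank_objective([3.0, 2.0, 1.0], rel)  # relevant items ranked high
bad = robirank_objective([1.0, 2.0, 3.0], rel)   # relevant items ranked low
assert good < bad
```

Placing the relevant items at the top gives a strictly smaller loss, reflecting the top-heavy emphasis inherited from DCG.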
Therefore, standard gradient-based optimization techniques can be applied to minimize it. As is standard, a regularizer on ω can be added to avoid overfitting; for simplicity, we use the ℓ2-norm in our experiments.

3.4 Standard Learning to Rank Experiments

We conducted experiments to check the performance of RoBiRank (11) in a standard learning to rank setting, with a small number of labels to rank. We pit RoBiRank against the following algorithms: RankSVM [15], the ranking algorithm of Le and Smola [14] (called LSRank in the sequel), InfNormPush [22], IRPush [1], and 8 standard ranking algorithms implemented in RankLib¹, namely MART, RankNet, RankBoost, AdaRank, CoordAscent, LambdaMART, ListNet and RandomForests. We use three sources of datasets: LETOR 3.0 [8], LETOR 4.0² and YAHOO LTRC [20], which are standard benchmarks for ranking (see Table 2 for summary statistics). Each dataset consists of five folds; we consider the first fold and use the training, validation, and test splits provided. We train with different values of the regularization parameter and select the one with the best NDCG on the validation dataset; the performance of the model with this parameter on the test dataset is reported. We used the implementation of the L-BFGS algorithm provided by the Toolkit for Advanced Optimization (TAO)³ for estimating the parameters of RoBiRank. For the other algorithms, we either implemented them in our framework or used the implementations provided by the authors.

¹http://sourceforge.net/p/lemur/wiki/RankLib
²http://research.microsoft.com/en-us/um/beijing/projects/letor/letor4dataset.aspx

Figure 1: Comparison of RoBiRank with a number of competing algorithms (NDCG@k on TD 2004; left: against RankSVM, LSRank, InfNormPush, and IRPush; right: against the 8 RankLib algorithms).
We use values of NDCG at different levels of truncation as our evaluation metric [17]; see Figure 1. RoBiRank outperforms its competitors on most of the datasets; due to space constraints, we present plots only for the TD 2004 dataset here, and the remaining plots can be found in Appendix B. The performance of RankSVM seems insensitive to the level of truncation for NDCG. On the other hand, RoBiRank, which uses a non-convex loss function to concentrate its performance at the top of the ranked list, performs much better, especially at low truncation levels. It is also interesting to note that the NDCG@k curve of LSRank is similar to that of RoBiRank, but RoBiRank consistently outperforms it at each level. RoBiRank dominates InfNormPush and IRPush at all levels. When compared to the standard algorithms (Figure 1, right), RoBiRank again outperforms, especially at the top of the list. Overall, RoBiRank outperforms IRPush and InfNormPush on all datasets except TD 2003 and OHSUMED, where IRPush seems to fare better at the top of the list. Compared to the 8 standard algorithms, RoBiRank either outperforms or performs comparably to the best algorithm, except on two datasets (TD 2003 and HP 2003) where MART and RandomForests overtake RoBiRank at a few values of NDCG. We present a summary of the NDCG values obtained by each algorithm in Table 2 in the appendix. On the MSLR30K dataset, some of the additional algorithms such as InfNormPush and IRPush did not complete within the available time; this is indicated by dashes in the table.

4 Latent Collaborative Retrieval

For each context x and item y ∈ Y, the standard problem setting of learning to rank requires training data containing a feature vector φ(x, y) and a score Wxy assigned to the pair (x, y). When the number of contexts |X| or the number of items |Y| is large, it may be difficult to define φ(x, y) and measure Wxy for all pairs.
Therefore, in most learning to rank problems we define the set of relevant items Yx ⊂ Y to be much smaller than Y for each context x, and then collect data only for Yx. Nonetheless, this may not be realistic in all situations; in a movie recommender system, for example, every movie is somewhat relevant to each user. On the other hand, implicit user feedback data is much more abundant. For example, many users on Netflix simply watch movie streams without leaving an explicit rating; by the action of watching a movie, however, they implicitly express their preference. Such data consist only of positive feedback, unlike traditional learning to rank datasets which have a score Wxy for each context-item pair (x, y). Again, we may not be able to extract feature vectors for each pair. In such a situation, we can attempt to learn the scoring function f(x, y) without a feature vector φ(x, y) by embedding each context and item in a Euclidean latent space; specifically, we redefine the score function as f(x, y) := ⟨Ux, Vy⟩, where Ux ∈ R^d is the embedding of context x and Vy ∈ R^d is that of item y. We can then learn these embeddings with a ranking model. This approach was introduced in Weston et al. [24] and was called latent collaborative retrieval.

³http://www.mcs.anl.gov/research/projects/tao/index.html

Now we specialize the RoBiRank model to this task. Define Ω to be the set of context-item pairs (x, y) observed in the dataset. Let v(Wxy) = 1 if (x, y) ∈ Ω, and 0 otherwise; this is a natural choice since score information is not available. For simplicity, we set cx = 1 for every x. RoBiRank (11) then specializes to

$$\bar{L}_1(U, V) = \sum_{(x,y) \in \Omega} \rho_1\left(\sum_{y' \neq y} \sigma_0\big(f(U_x, V_y) - f(U_x, V_{y'})\big)\right). \qquad (12)$$

Note that the summation inside the parentheses of (12) is now over all items Y instead of a smaller set Yx; we therefore omit specifying the range of y′ from now on.
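A toy numpy sketch of the latent formulation (12) (our own illustration with hypothetical names; U and V are small random embedding matrices and no regularizer is included):

```python
import numpy as np

def latent_robirank(U, V, observed):
    """Objective (12): for each observed pair (x, y), the robust
    transformation rho1 of the summed logistic losses of ranking item y
    above every other item y', under scores f(x, y) = <U_x, V_y>."""
    sigma0 = lambda t: np.log2(1.0 + np.exp2(-t))
    rho1 = lambda t: np.log2(t + 1.0)
    total = 0.0
    for x, y in observed:
        scores = V @ U[x]                     # f(x, y') for all items y'
        diffs = scores[y] - np.delete(scores, y)
        total += rho1(np.sum(sigma0(diffs)))
    return float(total)

rng = np.random.default_rng(0)
U = rng.standard_normal((5, 3))   # 5 contexts, latent dimension 3
V = rng.standard_normal((8, 3))   # 8 items
omega = [(0, 2), (1, 5), (3, 0)]  # observed context-item pairs
loss = latent_robirank(U, V, omega)
```

Evaluating this exactly costs O(|Ω| · |Y|) per pass, which motivates the stochastic optimization developed next.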
To avoid overfitting, a regularizer is added to (12); for simplicity we use the Frobenius norm of U and V in our experiments.

4.1 Stochastic Optimization

When the size of the data |Ω| or the number of items |Y| is large, however, methods that require exact evaluation of the function value and its gradient become very slow, since the evaluation takes O(|Ω| · |Y|) computation. In this case, stochastic optimization methods are desirable [4]; in this subsection, we will develop a stochastic gradient descent algorithm whose complexity is independent of |Ω| and |Y|. For simplicity, let θ be a concatenation of all parameters {U_x}_{x∈X}, {V_y}_{y∈Y}. The gradient ∇_θ L_1(U, V) of (12) is

\sum_{(x,y) \in \Omega} \nabla_\theta\, \rho_1\Big( \sum_{y' \neq y} \sigma_0\big(f(U_x, V_y) - f(U_x, V_{y'})\big) \Big).

Finding an unbiased estimator of the gradient whose computation is independent of |Ω| is not difficult: if we sample a pair (x, y) uniformly from Ω, then it is easy to see that the estimator

|\Omega| \cdot \nabla_\theta\, \rho_1\Big( \sum_{y' \neq y} \sigma_0\big(f(U_x, V_y) - f(U_x, V_{y'})\big) \Big)   (13)

is unbiased. This still involves a summation over Y, however, so it requires O(|Y|) calculation. Since ρ_1(·) is a nonlinear function, it seems unlikely that an unbiased stochastic gradient which randomizes over Y can be found; nonetheless, unbiasedness of the estimator is necessary to achieve convergence guarantees for the stochastic gradient descent algorithm [18]. We attack this problem by linearizing the objective function through parameter expansion. Note the following property of ρ_1(·) [5]:

\rho_1(t) = \log_2(t + 1) \le -\log_2 \xi + \frac{\xi \cdot (t + 1) - 1}{\log 2}.   (14)

This holds for any ξ > 0, and the bound is tight when ξ = 1/(t + 1). Now introducing an auxiliary parameter ξ_{xy} for each (x, y) ∈ Ω and applying this bound, we obtain an upper bound of (12) as

L(U, V, \xi) := \sum_{(x,y) \in \Omega} -\log_2 \xi_{xy} + \frac{\xi_{xy} \Big( \sum_{y' \neq y} \sigma_0\big(f(U_x, V_y) - f(U_x, V_{y'})\big) + 1 \Big) - 1}{\log 2}.
(15)

Now we propose an iterative algorithm in which each iteration consists of a (U, V)-step and a ξ-step; in the (U, V)-step we minimize (15) in (U, V), and in the ξ-step we minimize in ξ. Pseudo-code can be found in Algorithm 1 in Appendix C.

(U, V)-step. The partial derivative of (15) with respect to U and V can be calculated as:

\nabla_{U,V} L(U, V, \xi) := \frac{1}{\log 2} \sum_{(x,y) \in \Omega} \xi_{xy} \Big( \sum_{y' \neq y} \nabla_{U,V}\, \sigma_0\big(f(U_x, V_y) - f(U_x, V_{y'})\big) \Big).

It is now easy to see that the following stochastic procedure unbiasedly estimates the above gradient:
• Sample (x, y) uniformly from Ω.
• Sample y' uniformly from Y \ {y}.
• Estimate the gradient by

\frac{|\Omega| \cdot (|Y| - 1) \cdot \xi_{xy}}{\log 2} \cdot \nabla_{U,V}\, \sigma_0\big(f(U_x, V_y) - f(U_x, V_{y'})\big).   (16)

Therefore a stochastic gradient descent algorithm based on (16) will converge to a local minimum of the objective function (15) with probability one [21]. Note that the time complexity of calculating (16) is independent of |Ω| and |Y|. Also, (16) is a function only of U_x, V_y and V_{y'}; the gradient is zero with respect to the other variables.

ξ-step. When U and V are fixed, the minimization over each ξ_{xy} variable is independent of the others, and a simple analytic solution exists:

\xi_{xy} = \frac{1}{\sum_{y' \neq y} \sigma_0\big(f(U_x, V_y) - f(U_x, V_{y'})\big) + 1}.

This of course requires O(|Y|) work. In principle, we can avoid the summation over Y by taking a stochastic gradient in terms of ξ_{xy} as we did for U and V.

[Figure 2: Left: Scaling of RoBiRank on the Million Song Dataset (mean Precision@1 vs. number of machines × seconds elapsed, for RoBiRank 1, 4, 16, 32). Center, Right: Comparison of RoBiRank and Weston et al. [24] (mean Precision@1 and Precision@10 vs. seconds elapsed) when the same amount of wall-clock computation time is given.]
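The parameter-expansion bound (14) and the closed-form ξ-step can be sanity-checked numerically: the bound holds for every ξ > 0 and is attained exactly at the minimiser ξ = 1/(t + 1), which is precisely the ξ-step solution applied to the inner sum. A minimal check (note that log 2 in (14) denotes the natural logarithm of 2):

```python
import math

def rho1(t):
    # rho_1(t) = log2(t + 1)
    return math.log2(t + 1.0)

def rhs_14(t, xi):
    # Right-hand side of (14): -log2(xi) + (xi * (t + 1) - 1) / log 2
    return -math.log2(xi) + (xi * (t + 1.0) - 1.0) / math.log(2.0)

for t in [0.0, 0.5, 3.0, 100.0]:
    xi_star = 1.0 / (t + 1.0)                         # minimiser of the bound
    assert abs(rhs_14(t, xi_star) - rho1(t)) < 1e-12  # tight at xi = 1/(t+1)
    for xi in [0.01, 0.1, 1.0, 5.0]:
        assert rhs_14(t, xi) >= rho1(t) - 1e-12       # bound (14) holds
print("bound (14) verified on sample points")
```

The right-hand side is convex in ξ, so setting its derivative to zero recovers ξ = 1/(t + 1), confirming the ξ-step formula.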
However, since the exact solution is simple to compute, and because most of the computation time is spent in the (U, V)-step anyway, we found this exact update rule to be efficient.

Parallelization. The linearization trick in (15) not only enables us to construct an efficient stochastic gradient algorithm, but also makes it possible to efficiently parallelize the algorithm across multiple machines. Due to lack of space, details are relegated to Appendix D.

4.2 Experiments

In this subsection we use the Million Song Dataset (MSD) [3], which consists of 1,129,318 users (|X|), 386,133 songs (|Y|), and 49,824,519 records (|Ω|) of a user x playing a song y in the training dataset. The objective is to predict the songs from the test dataset that a user is going to listen to.4 Since explicit ratings are not given, NDCG is not applicable for this task; we use Precision@1 and Precision@10 [17] as our evaluation metrics. In our first experiment we study the scaling behavior of RoBiRank as a function of the number of machines. RoBiRank p denotes the parallel version of RoBiRank distributed across p machines. In Figure 2 (left) we plot mean Precision@1 as a function of the number of machines × the number of seconds elapsed; this is a proxy for CPU time. If an algorithm scales linearly across multiple processors, then all lines in the figure should overlap with each other. As can be seen, RoBiRank exhibits near-ideal speedup when going from 4 to 32 machines.5 In our next experiment we compare RoBiRank with a state-of-the-art algorithm from Weston et al. [24], which optimizes a similar objective function (17). We compare how fast the quality of the solution improves as a function of wall-clock time. Since the authors of Weston et al. [24] do not make their code available, we implemented their algorithm within our framework using the same data structures and libraries used by our method.
Furthermore, for a fair comparison, we used the same initialization for U and V and performed an identical grid search over the step-size parameter.

4 The original data also provide the number of times a song was played by a user, but we ignored this in our experiment.
5 The graph for RoBiRank 1 is hard to see because it was run for only 100,000 CPU-seconds.

It can be seen from Figure 2 (center, right) that on a single machine the algorithm of Weston et al. [24] is very competitive and outperforms RoBiRank. The reason for this might be the introduction of the additional ξ variables in RoBiRank, which slows down convergence. However, RoBiRank training can be distributed across processors, while it is not clear how to parallelize the algorithm of Weston et al. [24]. Consequently, RoBiRank 32, which uses 32 machines for its computation, can produce a significantly better model within the same wall-clock time window.

5 Related Work

In terms of modeling, viewing ranking problems as a generalization of binary classification problems is not a new idea; for example, RankSVM defines the objective function as a sum of hinge losses, similarly to our basic model (6) in Section 3.1. However, it does not directly optimize a ranking metric such as NDCG; the objective function and the metric are not immediately related to each other. In this respect, our approach is closer to that of Le and Smola [14], which constructs a convex upper bound on the ranking metric, and Chapelle et al. [9], which improves the bound by introducing non-convexity. The objective function of Chapelle et al. [9] is also motivated by the ramp loss, which is used for robust classification; nonetheless, to our knowledge the direct connection between the ranking metrics in form (7) (DCG, NDCG) and the robust loss (3) is our novel contribution. Also, our objective function is designed specifically to bound the ranking metric, while Chapelle et al. [9] propose a general recipe to improve existing convex bounds.
Stochastic optimization of the objective function for latent collaborative retrieval has also been explored in Weston et al. [24]. They attempt to minimize

\sum_{(x,y) \in \Omega} \Phi\Big( 1 + \sum_{y' \neq y} I\big(f(U_x, V_y) - f(U_x, V_{y'}) < 0\big) \Big),   (17)

where \Phi(t) = \sum_{k=1}^{t} \frac{1}{k}. This is similar to our objective function (15); Φ(·) and ρ_2(·) are asymptotically equivalent. However, we argue that our formulation (15) has two major advantages. First, it is a continuous and differentiable function, so gradient-based algorithms such as L-BFGS and stochastic gradient descent have convergence guarantees. The objective function of Weston et al. [24], on the other hand, is not even continuous, since their formulation is based on a function Φ(·) that is defined only for natural numbers. Second, through the linearization trick in (15) we are able to obtain an unbiased stochastic gradient, which is necessary for the convergence guarantee, and to parallelize the algorithm across multiple machines as discussed in Appendix D. It is unclear how these techniques could be adapted to the objective function of Weston et al. [24].

6 Conclusion

In this paper we developed RoBiRank, a novel ranking model based on insights and techniques from robust binary classification. We then proposed a scalable and parallelizable stochastic optimization algorithm that can be applied to the latent collaborative retrieval task, which must cope with large-scale data lacking feature vectors and explicit scores. Experimental results on both learning to rank datasets and a latent collaborative retrieval dataset suggest the advantage of our approach. As a final note, the experiments in Section 4.2 are arguably unfair towards WSABIE. For instance, one could envisage using clever engineering tricks to derive a parallel variant of WSABIE (e.g., by averaging gradients from various machines), which might outperform RoBiRank on the MSD dataset.
While performance on a specific dataset might be better, we would lose global convergence guarantees. Therefore, rather than obsess over the performance of a specific algorithm on a specific dataset, via this paper we hope to draw the attention of the community to the need for developing principled parallel algorithms for this important problem.

Acknowledgments

We thank the anonymous reviewers for their constructive comments, and the Texas Advanced Computing Center for infrastructure and support for experiments. This material is partially based upon work supported by the National Science Foundation under grant No. IIS-1117705.

References

[1] S. Agarwal. The infinite push: A new support vector ranking algorithm that directly optimizes accuracy at the absolute top of the list. In SDM, pages 839–850. SIAM, 2011.
[2] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[3] T. Bertin-Mahieux, D. P. Ellis, B. Whitman, and P. Lamere. The million song dataset. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011), 2011.
[4] L. Bottou and O. Bousquet. The tradeoffs of large-scale learning. Optimization for Machine Learning, page 351, 2011.
[5] G. Bouchard. Efficient bounds for the softmax function, applications to inference in hybrid models. 2007.
[6] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, England, 2004.
[7] D. Buffoni, P. Gallinari, N. Usunier, and C. Calauzènes. Learning scoring functions with order-preserving losses and standardized supervision. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 825–832, 2011.
[8] O. Chapelle and Y. Chang. Yahoo! learning to rank challenge overview. Journal of Machine Learning Research: Proceedings Track, 14:1–24, 2011.
[9] O. Chapelle, C. B. Do, C. H. Teo, Q. V. Le, and A. J. Smola. Tighter bounds for structured estimation. In Advances in Neural Information Processing Systems, pages 281–288, 2008.
[10] N. Ding. Statistical Machine Learning in T-Exponential Family of Distributions. PhD thesis, Purdue University, West Lafayette, Indiana, USA, 2013.
[11] V. Feldman, V. Guruswami, P. Raghavendra, and Y. Wu. Agnostic learning of monomials by halfspaces is hard. SIAM Journal on Computing, 41(6):1558–1590, 2012.
[12] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis. Large-scale matrix factorization with distributed stochastic gradient descent. In Conference on Knowledge Discovery and Data Mining, pages 69–77, 2011.
[13] P. J. Huber. Robust Statistics. John Wiley and Sons, New York, 1981.
[14] Q. V. Le and A. J. Smola. Direct optimization of ranking measures. Technical Report 0704.3359, arXiv, April 2007. http://arxiv.org/abs/0704.3359.
[15] C.-P. Lee and C.-J. Lin. Large-scale linear RankSVM. Neural Computation, 2013. To appear.
[16] P. Long and R. Servedio. Random classification noise defeats all convex potential boosters. Machine Learning Journal, 78(3):287–304, 2010.
[17] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008. URL http://nlp.stanford.edu/IR-book/.
[18] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[19] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research. Springer, 2nd edition, 2006.
[20] T. Qin, T.-Y. Liu, J. Xu, and H. Li. LETOR: A benchmark collection for research on learning to rank for information retrieval. Information Retrieval, 13(4):346–374, 2010.
[21] H. E. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400–407, 1951.
[22] C. Rudin. The p-norm push: A simple convex ranking algorithm that concentrates at the top of the list. The Journal of Machine Learning Research, 10:2233–2271, 2009.
[23] N. Usunier, D. Buffoni, and P. Gallinari. Ranking with ordered weighted pairwise classification. In Proceedings of the International Conference on Machine Learning, 2009.
[24] J. Weston, C. Wang, R. Weiss, and A. Berenzweig. Latent collaborative retrieval. arXiv preprint arXiv:1206.4603, 2012.
Learning the Learning Rate for Prediction with Expert Advice

Wouter M. Koolen, Queensland University of Technology and UC Berkeley, wouter.koolen@qut.edu.au
Tim van Erven, Leiden University, the Netherlands, tim@timvanerven.nl
Peter D. Grünwald, Leiden University and Centrum Wiskunde & Informatica, the Netherlands, pdg@cwi.nl

Abstract

Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate. This learning rate needs to be large enough to fit the data well, but small enough to prevent overfitting. For the exponential weights algorithm, a sequence of prior work has established theoretical guarantees for higher and higher data-dependent tunings of the learning rate, which allow for increasingly aggressive learning. But in practice such theoretical tunings often still perform worse (as measured by their regret) than ad hoc tuning with an even higher learning rate. To close the gap between theory and practice we introduce an approach to learn the learning rate. Up to a factor that is at most (poly)logarithmic in the number of experts and the inverse of the learning rate, our method performs as well as if we would know the empirically best learning rate from a large range that includes both conservative small values and values that are much higher than those for which formal guarantees were previously available. Our method employs a grid of learning rates, yet runs in linear time regardless of the size of the grid.

1 Introduction

Consider a learner who in each round t = 1, 2, . . . specifies a probability distribution w_t on K experts, before being told a vector ℓ_t ∈ [0, 1]^K with their losses, and consequently incurring loss h_t := w_t · ℓ_t. Losses are summed over trials, and after T rounds the learner's cumulative loss H_T = \sum_{t=1}^{T} h_t is compared to the cumulative losses L^k_T = \sum_{t=1}^{T} \ell^k_t of the experts k = 1, . . . , K.
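A minimal sketch of this interaction protocol, with the learner abstracted as a function from past losses to a distribution (the function names are ours, not the paper's):

```python
import numpy as np

def play(strategy, losses):
    """Run a learner on a T x K loss matrix with entries in [0, 1].
    Returns (H_T, L_T): the learner's cumulative loss and the experts'."""
    T, K = losses.shape
    L = np.zeros(K)                 # cumulative expert losses L_t
    H = 0.0                         # learner's cumulative loss H_t
    for t in range(T):
        w = strategy(L, K)          # distribution w_t based only on the past
        assert abs(w.sum() - 1.0) < 1e-9 and (w >= 0).all()
        H += w @ losses[t]          # h_t = w_t . l_t
        L += losses[t]
    return H, L

uniform = lambda L, K: np.ones(K) / K   # the simplest possible learner
rng = np.random.default_rng(0)
losses = rng.random((100, 3))
H, L = play(uniform, losses)
regret = H - L.min()                    # R_T = H_T - L*_T
print(regret)
```

Any concrete learner, such as the exponential weights strategy discussed next, plugs in as `strategy`; with the uniform learner, H_T is simply the sum of per-round average losses, so the regret is the gap between the average and the best expert.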
This is essentially the framework of prediction with expert advice [1, 2], in particular the standard Hedge setting [3]. Ideally, the learner's predictions would not be much worse than those of the best expert, who has cumulative loss L^*_T = \min_k L^k_T, so that the regret R_T = H_T - L^*_T is small. Follow-the-Leader (FTL) is a natural strategy for the learner. In any round t, it predicts with a point mass on the expert k with minimum loss L^k_{t-1}, i.e. the expert that was best on the previous t - 1 rounds. However, in the standard game-theoretic analysis, the experts' losses are assumed to be generated by an adversary, and then the regret for FTL can grow linearly in T [4], which means that it is not learning. To do better, the predictions need to be less outspoken, which can be accomplished by replacing FTL's choice of the expert with minimal cumulative loss by the soft minimum w^k_t ∝ e^{-\eta L^k_{t-1}}, which is known as the exponential weights or Hedge algorithm [3]. Here η > 0 is a regularisation parameter that is called the learning rate. As η → ∞ the soft minimum approaches the exact minimum and exponential weights converges to FTL. In contrast, the lower η, the more the soft minimum resembles a uniform distribution and the more conservative the learner. Let R^η_T denote the regret for exponential weights with learning rate η. To obtain guarantees against adversarial losses, several tunings of η have been proposed in the literature. Most of these may be understood by starting with the bound

R^\eta_T \le \frac{\ln K}{\eta} + \sum_{t=1}^{T} \delta^\eta_t,   (1)

which holds for any sequence of losses. Here δ^η_t ≥ 0 is the approximation error (called the mixability gap by [5]) when the loss of the learner in round t is approximated by the so-called mix loss, which is a certain η-exp-concave lower bound (see Section 2.1). The analysis then proceeds by giving an upper bound b_t(η) ≥ δ^η_t and choosing η to balance the two terms ln(K)/η and \sum_t b_t(η).
In particular, the bound δ^η_t ≤ η/8 results in the most conservative tuning η = \sqrt{8\ln(K)/T}, for which the regret is always bounded by O(\sqrt{T\ln(K)}); the same guarantee can still be achieved even if the horizon T is unknown in advance by using, for instance, the so-called doubling trick [4]. It is possible, though, to learn more aggressively by using a bound on δ^η_t that depends on the data. The first such improvement can be obtained by using δ^η_t ≤ e^η w_t · ℓ_t and choosing η = \ln(1 + \sqrt{2\ln(K)/L^*_T}) ≈ \sqrt{2\ln(K)/L^*_T}, where again the doubling trick can be used if L^*_T is unknown in advance, which leads to a bound of O(\sqrt{L^*_T \ln(K)} + \ln K) [6, 4]. Since L^*_T ≤ T this is never worse than the conservative tuning, and it can be better if the best expert has very small losses (a case sometimes called the "low noise condition"). A further improvement has been proposed by Cesa-Bianchi et al. [7], who bound δ^η_t by a constant times the variance v^η_t of ℓ^k_t when k is distributed according to w_t, such that v^η_t = w_t · (ℓ_t - h_t)^2. Rather than using a constant learning rate, at time t they play the Hedge weights w_t based on a time-varying learning rate η_t that is approximately tuned as \sqrt{\ln(K)/V_{t-1}} with V_t = \sum_{s \le t} v^{\eta_s}_s. This leads to a so-called second-order bound on the regret of the form

R_T = O\big( \sqrt{V_T \ln(K)} + \ln K \big),   (2)

which, as Cesa-Bianchi et al. show, implies

R_T = O\Big( \sqrt{\tfrac{L^*_T (T - L^*_T)}{T} \ln(K)} + \ln K \Big)   (3)

and is therefore always better than the tuning in terms of L^*_T (note though that (2) can be much stronger than (3) on data for which the exponential weights rapidly concentrate on a single expert, see also [8]). The general pattern that emerges is that the better the bound on δ^η_t, the higher η can be chosen and the more aggressive the learning. De Rooij et al. [5] take this approach to its extreme and do not bound δ^η_t at all.
In their AdaHedge algorithm they tune η_t = \ln(K)/\Delta_{t-1}, where \Delta_t = \sum_{s \le t} \delta^{\eta_s}_s, which is very similar to the second-order tuning of Cesa-Bianchi et al. and indeed also satisfies (2) and (3). Thus, this sequence of prior works appears to have reached the limit of what is possible based on improving the bound on δ^η_t. Unfortunately, however, if the data are not adversarial, then even second-order bounds do not guarantee the best possible tuning of η for the data at hand. (See the experiments that study the influence of η in [5].) In practice, selecting η_t to be the best-performing learning rate so far (that is, running FTL at the meta-level) appears to work well [9], but this approach requires a computationally intensive grid search over learning rates [9], and formal guarantees can only be given for independent and identically distributed (IID) data [10]. A new technique based on speculatively trying out different η was therefore introduced in the FlipFlop algorithm [5]. By alternating the learning rate η_t = ∞ with values of η_t that are very similar to those of AdaHedge, FlipFlop is both able to satisfy the second-order bounds (2) and (3), and to guarantee that its regret is never much worse than the regret R^∞_T of Follow-the-Leader:

R_T = O(R^∞_T).   (4)

Thus FlipFlop covers two extremes: on the one hand it is able to compete with η that are small enough to deal with the worst case, and on the other hand it can compete with η = ∞ (FTL).

Main Contribution. We generalise the FlipFlop approach to cover a large range of η in between. As before, let R^η_T denote the regret of exponential weights with fixed learning rate η. We introduce the learning the learning rate (LLR) algorithm, which satisfies (2), (3) and (4), and in addition guarantees a regret satisfying

R_T = O\Big( \ln(K) \big( \ln \tfrac{1}{\eta} \big)^{1+\varepsilon} R^\eta_T \Big)  for all η ∈ [η^{ah}_{t^*}, 1]   (5)

for any ε > 0. Thus, LLR performs almost as well as the learning rate \hat{\eta}_T ∈ [η^{ah}_{t^*}, 1] that is optimal with hindsight.
Here the lower end-point η^{ah}_{t^*} ≥ (1 - o(1))\sqrt{\ln(K)/T} (as follows from (28) below) is a data-dependent value that is sufficiently conservative (i.e. small) to provide second-order guarantees and consequently worst-case optimality. The upper end-point 1 is an artefact of the analysis, which we introduce because, for general losses in [0, 1]^K, we do not have a guarantee in terms of R^η_T for 1 < η < ∞. For the special case of binary losses ℓ_t ∈ {0, 1}^K, however, we can say a bit more: as shown in Appendix B of the supplementary material, in this special case the LLR algorithm guarantees regret bounded by R_T = O(K R^η_T) for all η ∈ [1, ∞]. The additional factor \ln(K)\ln^{1+\varepsilon}(1/\eta) in (5) comes from a prior on an exponentially spaced grid of η. It is logarithmic in the number of experts K, and its dependence on 1/η grows slower than \ln^{1+\varepsilon}(1/\eta) \le \ln^{1+\varepsilon}(1/\eta^{ah}_{t^*}) = O(\ln^{1+\varepsilon}(T)) for any ε > 0. For the optimally tuned \hat{\eta}_T, we have in mind regret that grows like R^{\hat{\eta}_T}_T = O(T^\alpha) for some α ∈ [0, 1/2], so an additional polylog factor seems a small price to pay to adapt to the right exponent α. Although the rates η ≥ η^{ah}_{t^*} appear to be most important, the regret of LLR can also be related to R^η_T for lower η:

R_T = O\Big( \frac{\ln K}{\eta} \Big)  for all η < η^{ah}_{t^*},   (6)

which is not in terms of R^η_T, but still improves on the standard bound (1) because δ^η_t ≥ 0 for all η. The LLR algorithm takes two parameters, which determine the trade-off between the constants in the bounds (2)–(6) above. Normally we would propose to set these parameters to moderate values, but if we do let them approach various limits, LLR becomes essentially the same as FlipFlop, AdaHedge or FTL (see Section 2).
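For concreteness, the AdaHedge-style tuning η_t = ln(K)/Δ_{t−1} discussed above can be sketched in a few lines. This is a simplified illustration, not De Rooij et al.'s exact implementation; in particular we substitute a large finite rate while Δ is still zero instead of handling η = ∞ exactly.

```python
import numpy as np

def mix_loss(w, loss, eta):
    # m_t = -(1/eta) ln sum_k w_k exp(-eta * loss_k), computed stably
    a = np.log(w + 1e-300) - eta * loss
    amax = a.max()
    return -(amax + np.log(np.exp(a - amax).sum())) / eta

def adahedge_regret(losses):
    """Exponential weights with eta_t = ln(K) / Delta_{t-1}, where Delta
    accumulates the mixability gaps delta_t = h_t - m_t."""
    T, K = losses.shape
    L, H, Delta = np.zeros(K), 0.0, 0.0
    for t in range(T):
        eta = np.log(K) / Delta if Delta > 0 else 1e6
        w = np.exp(-eta * (L - L.min()))   # shift by L.min() for stability
        w /= w.sum()
        h = w @ losses[t]
        delta = h - mix_loss(w, losses[t], eta)   # mixability gap, in [0, 1]
        H, Delta, L = H + h, Delta + delta, L + losses[t]
    return H - L.min()   # regret R_T

rng = np.random.default_rng(1)
print(adahedge_regret(rng.random((300, 5))))
```

On losses where all experts tie, every mixability gap is (numerically) zero, the learning rate stays maximal, and the regret vanishes.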
[Figure 1: Regret as a function of the learning rate η (log scale, 10^{-4} to 10^2) for Hedge(η), together with the worst-case bound and worst-case η, and the regret of AdaHedge, FlipFlop, and LLR (with η^{ah}_{t^*} marked). Caption: Example data (details in Appendix A) on which Hedge/exponential weights with an intermediate learning rate (global minimum) performs much better than both the worst-case optimal learning rate (local minimum on the left) and large learning rates (plateau on the right). We also show the performance of the algorithms mentioned in the introduction.]

We emphasise that we do not just have a bound on LLR that is unavailable for earlier methods; there also exist actual losses for which the optimal learning rate with hindsight \hat{\eta}_T is fundamentally in between the robust learning rates chosen by AdaHedge and the aggressive choice η = ∞ of FTL. On such data, Hedge with fixed learning rate \hat{\eta}_T performs significantly better than both these extremes; see Figure 1. In Appendix A we describe the data used to generate Figure 1 and explain why the regret obtained by LLR is significantly smaller than the regret of AdaHedge, FTL and all other tunings described above.

Computational Efficiency. Although LLR employs a grid of η, it does not have to search over this grid. Instead, in each time step it only has to do computations for the single η that is active, and, as a consequence, it runs as fast as using exponential weights with a single fixed η, which is linear in K and T. LLR, as presented here, does store information about all the grid points, which requires O(\ln(K)\ln(T)) storage, but we describe a simple approximation that runs equally fast and only requires a constant amount of storage.

Outline. The paper is organized as follows. In Section 2 we define the LLR algorithm and in Section 3 we make precise how it satisfies (2), (3), (4), (5) and (6). Section 4 provides a discussion. Finally, the appendix contains a description of the data in Figure 1 and most of the proofs.
2 The Learning the Learning Rate Algorithm

In this section we describe the LLR algorithm, which is a particular strategy for choosing a time-varying learning rate in exponential weights. We start by formally describing the setting and then explain how LLR chooses its learning rates.

2.1 The Hedge Setting

At the start of each round t = 1, 2, . . . the learner produces a probability distribution w_t = (w^1_t, . . . , w^K_t) on K ≥ 2 experts. Then the experts incur losses ℓ_t = (ℓ^1_t, . . . , ℓ^K_t) ∈ [0, 1]^K and the learner's loss h_t = w_t · ℓ_t = \sum_k w^k_t ℓ^k_t is the expected loss under w_t. After T rounds, the learner's cumulative loss is H_T = \sum_{t=1}^{T} h_t and the cumulative losses for the experts are L^k_T = \sum_{t=1}^{T} ℓ^k_t. The goal is to minimize the regret R_T = H_T - L^*_T with respect to the cumulative loss L^*_T = \min_k L^k_T of the best expert. We consider strategies for the learner that play the exponential weights distribution

w^k_t = \frac{e^{-\eta_t L^k_{t-1}}}{\sum_{j=1}^{K} e^{-\eta_t L^j_{t-1}}}

for a choice of learning rate η_t that may depend on all losses before time t. To analyse such methods, it is common to approximate the learner's loss h_t by the mix loss

m_t = -\frac{1}{\eta_t} \ln \sum_k w^k_t e^{-\eta_t \ell^k_t},

which appears under a variety of names in e.g. [7, 4, 11, 5]. The resulting approximation error or mixability gap δ_t = h_t - m_t is always non-negative and cannot exceed 1. This, and some other basic properties of the mix loss, are listed in Lemma 1 of De Rooij et al. [5], which we reproduce as Lemma C.1 in the additional material. As will be explained in the next section, LLR does not monitor the regrets of all learning rates directly. Instead, it tracks their cumulative mixability gaps, which provide a convenient lower bound on the regret that is monotonically increasing with the number of rounds T, in contrast to the regret itself.
To show this, let R^η_T denote the regret of the exponential weights strategy with fixed learning rate η_t = η, and similarly let M^η_T = \sum_{t=1}^{T} m^η_t and \Delta^η_T = \sum_{t=1}^{T} δ^η_t denote its cumulative mix loss and mixability gap.

Lemma 2.1. For any fixed learning rate η ∈ (0, ∞], the regret of exponential weights satisfies

R^η_T ≥ \Delta^η_T.   (7)

Proof. Apply property 3 in Lemma C.1 to the regret decomposition R^η_T = M^η_T - L^*_T + \Delta^η_T.

We will use the following notational conventions. Lower-case letters indicate instantaneous quantities like m_t, δ_t and w_t, whereas upper-case letters denote cumulative quantities like M_T, ∆_T and R_T. In the absence of a superscript, the learning rates present in any such quantity are those chosen by LLR. In contrast, the superscript η refers to using the same fixed learning rate η throughout.

2.2 LLR's Choice of Learning Rate

The LLR algorithm is a member of the exponential weights family of algorithms. Its defining property is its adaptive and non-monotonic selection of the learning rate η_t, which is specified in Algorithm 1 and explained next.

Algorithm 1 LLR(π^{ah}, π^∞). The grid η_1, η_2, . . . and weights π_1, π_2, . . . are defined in (8) and (12).
  Initialise b_0 := 0; ∆^{ah}_0 := 0; ∆^i_0 := 0 for all i ≥ 1.
  for t = 1, 2, . . . do
    if all active indices and ah are b_{t-1}-full then
      Increase b_t := φ∆^{ah}_{t-1}/π^{ah} (with φ as defined in (14))
    else
      Keep b_t := b_{t-1}
    end if
    Let i be the least non-b_t-full index.
    if i is active then
      Play η_i. Update ∆^i_t := ∆^i_{t-1} + δ^i_t. Keep ∆^j_t := ∆^j_{t-1} for j ≠ i and ∆^{ah}_t := ∆^{ah}_{t-1}.
    else
      Play η^{ah}_t as defined in (10). Update ∆^{ah}_t := ∆^{ah}_{t-1} + δ^{ah}_t. Keep ∆^j_t := ∆^j_{t-1} for all j ≥ 1.
    end if
  end for

The LLR algorithm works in regimes in which it speculatively tries out different strategies for η_t. Almost all of these strategies consist of choosing a fixed η from the following grid:

η_1 = ∞,  η_i = α^{2-i} for i = 2, 3, . . . ,   (8)

where the exponential base

α = 1 + 1/\log_2 K   (9)

is chosen to ensure that the grid is dense enough so that η_i for i ≥ 2 is representative for all η ∈ [η_{i+1}, η_i] (this is made precise in Lemma 3.3). We also include the special value η_1 = ∞, because it corresponds to FTL, which works well for IID data and data with a small number of leader changes, as discussed by De Rooij et al. [5]. For each index i = 1, 2, . . . in the grid, let A^i_t ⊆ {1, . . . , t} denote the set of rounds up to trial t in which the LLR algorithm plays η_i. Then LLR keeps track of the performance of η_i by storing the sum of mixability gaps δ^i_t ≡ δ^{η_i}_t for which η_i is responsible:

\Delta^i_t = \sum_{s \in A^i_t} δ^i_s.

In addition to the grid in (8), LLR considers one more strategy, which we will call the AdaHedge strategy, because it is very similar to the learning rate chosen by the AdaHedge algorithm [5]. In the AdaHedge strategy, LLR plays η_t equal to

η^{ah}_t = \frac{\ln K}{\Delta^{ah}_{t-1}},   (10)

where \Delta^{ah}_t = \sum_{s \in A^{ah}_t} δ^{ah}_s is the sum of mixability gaps δ^{ah}_t ≡ δ^{η^{ah}_t}_t during the rounds A^{ah}_t ⊆ {1, . . . , t} in which LLR plays the AdaHedge strategy. The only difference to the original AdaHedge is that the latter sums the mixability gaps over all s ∈ {1, . . . , t}, not just those in A^{ah}_t. Note that, in our variation, η^{ah}_t does not change during rounds outside A^{ah}_t. The AdaHedge learning rate η^{ah}_t is non-increasing with t, and (as we will show in Theorem 3.6 below) it is small enough to guarantee the worst-case bound (3), which is optimal for adversarial data. We therefore focus on η > η^{ah}_t and call an index i in the grid active in round t if η_i > η^{ah}_t. Let i_max ≡ i_max(t) be the number of grid indices that are active at time t, such that η_{i_max(t)} ≈ η^{ah}_t. Then LLR cyclically alternates grid learning rates and the AdaHedge learning rate, in a way that approximately maintains

\frac{\Delta^1_t}{\pi_1} ≈ \frac{\Delta^2_t}{\pi_2} ≈ . . . ≈ \frac{\Delta^{i_max}_t}{\pi_{i_max}} ≈ \frac{\Delta^{ah}_t}{\pi^{ah}}  for all t,   (11)

where π^{ah} > 0 and π_1, π_2, . . .
> 0 are fixed weights that control the relative importance of AdaHedge and the grid points (higher weight = more important). The LLR algorithm takes as parameters π^{ah} and π^∞, where π^{ah} only has to be positive, but π^∞ is restricted to (0, 1). We then choose

π_1 = π^∞,  π_i = (1 - π^∞) ρ(i - 1) for i ≥ 2,   (12)

where ρ is a prior probability distribution on {1, 2, . . .}. It follows that \sum_{i=1}^{∞} π_i = 1, so that π_i may be interpreted as a prior probability mass on grid index i. For ρ, we require a distribution with very heavy tails (meaning ρ(i) not much smaller than 1/i), and we fix the convenient choice

ρ(i) = \int_{(i-1)/\ln K}^{i/\ln K} \frac{1}{(x + e)\,\ln^2(x + e)}\, dx = \frac{1}{\ln\big(\frac{i-1}{\ln K} + e\big)} - \frac{1}{\ln\big(\frac{i}{\ln K} + e\big)}.   (13)

We cannot guarantee that the invariant (11) holds exactly, and our algorithm incurs overhead for changing learning rates, so we do not want to change learning rates too often. LLR therefore uses an exponentially increasing budget b and tries grid indices and the AdaHedge strategy in sequence until they exhaust the budget. To make this precise, we say that an index i is b-full in round t if ∆^i_{t-1}/π_i > b, and similarly that AdaHedge is b-full in round t if ∆^{ah}_{t-1}/π^{ah} > b. Let b_t be the budget at time t, which LLR chooses as follows: first it initialises b_0 = 0 and then, for t ≥ 1, it tests whether all active indices and AdaHedge are b_{t-1}-full. If this is the case, LLR approximately increases the budget by a factor φ > 1 by setting b_t = φ∆^{ah}_{t-1}/π^{ah} > φb_{t-1}; otherwise it just keeps the budget the same: b_t = b_{t-1}. In particular, we will fix the budget multiplier

φ = 1 + \sqrt{π^{ah}},   (14)

which minimises the constants in our bounds. Now if, at time t, there exists an active index that is not b_t-full, then LLR plays the first such index. And if all active indices are b_t-full, LLR plays the AdaHedge strategy, which cannot be b_t-full in this case by definition of b_t.
This guarantees that all ratios ∆^i_T/π_i are approximately within a factor φ of each other for all i that are active at time t*, which we define to be the last time t ≤ T that LLR plays AdaHedge:
\[ t^* = \max A^{ah}_T. \tag{15} \]
Whenever LLR plays AdaHedge it is possible, however, that a new index i becomes active, and it then takes a while for this index's cumulative mixability gap ∆^i_T to also grow up to the budget. Since AdaHedge is not played while the new index is catching up, the ratio guarantee always still holds for all indices that were active at time t*.

2.3 Choosing the LLR Parameters

LLR has several existing strategies as sub-cases. For π^{ah} → ∞ it essentially becomes AdaHedge. For π_∞ → 1 it becomes FlipFlop. For π_∞ → 1 and π^{ah} → 0 it becomes FTL. Intermediate values for π^{ah} and π_∞ retain the benefits of these algorithms, but in addition allow LLR to compete with essentially all learning rates, ranging from worst-case safe to extremely aggressive.

2.4 Run time and storage

LLR, as presented here, runs in constant time per round. This is because, in each round, it only needs to compute the weights and update the corresponding cumulative mixability gap for a single learning rate strategy. If the current strategy exceeds its budget (becomes b_t-full), LLR proceeds to the next¹. The memory requirement is dominated by the storage of ∆^1_t, . . . , ∆^{i_max(t)}_t, which, following the discussion below (5), is at most
\[ i_{max}(T) = 2 + \frac{\ln(1/\eta_{i_{max}(T)})}{\ln\alpha} \le 2 + \log_\alpha \frac{1}{\eta^{ah}_T} = O(\ln(K)\ln(T)). \]
However, a minor approximation reduces the memory requirement down to a constant: at any point in time the grid strategies considered by LLR split in three. Say that η_i is played at time t. Then all preceding η_j for j ≤ i are already at (or slightly past) the budget, and all succeeding η_j for i < j ≤ i_max are still at (or slightly past) the previous budget. So we can approximate their cumulative mixability gaps by simply ignoring these slight overshoots.
It then suffices to store only the cumulative mixability gap for the currently advancing η_i, plus the current and previous budget.

¹In the early stages it may happen that the next strategy is already over the budget and needs to be skipped, but this start-up effect quickly disappears once the budget exceeds 1, as the weighted increment δ^i_t/π_i ≤ (η_i/8)\log^{1+\epsilon}(1/\eta_i) is bounded for all 0 ≤ η ≤ 1.

3 Analysis of the LLR algorithm

In this section we analyse the regret of LLR. We first show that for each loss sequence the regret is bounded in terms of the cumulative mixability gaps ∆^i_T and ∆^{ah}_T incurred by the active learning rates (Lemma 3.1). As LLR keeps the cumulative mixability gaps approximately balanced according to (11), we can then further bound the regret in terms of each of the individual learning rates in the grid (Lemma 3.2). The next step is to deal with learning rates between grid points, by showing that their cumulative mixability gap ∆^η_T relates to ∆^i_T for the nearest higher grid point η_i ≥ η (Lemma 3.3). In Lemma 3.4 we put all these steps together. As the cumulative mixability gap ∆^η_T does not exceed the regret R^η_T for fixed learning rates (Lemma 2.1), we can then derive the bounds (2) through (6) from the introduction in Theorems 3.5 and 3.6. We start by showing that the regret of LLR is bounded by the cumulative mixability gaps of the learning rates that it plays. The proof, which appears in Section C.4, is a generalisation of Lemma 12 in [5]. It crucially uses the fact that the lowest learning rate played by LLR is the AdaHedge rate η^{ah}_t, which relates to ∆^{ah}_t.

Lemma 3.1. On any sequence of losses, the regret of the LLR algorithm with parameters π^{ah} > 0 and π_∞ ∈ (0, 1) is bounded by
\[ R_T \le \left( \frac{\varphi}{\varphi - 1} + 2 \right) \Delta^{ah}_T + \sum_{i=1}^{i_{max}} \Delta^i_T, \]
where i_max is the largest i such that η_i is active in round T and φ is defined in (14).

The LLR budgeting scheme keeps the cumulative mixability gaps from Lemma 3.1 approximately balanced according to (11).
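The central quantity accumulated throughout this analysis is the per-round mixability gap δ^η_t = (Hedge loss) − (mix loss). A minimal numerical sketch (helper names are ours, not from the paper; for losses in [0, 1] the gap satisfies 0 ≤ δ^η_t ≤ η/8):

```python
import math

def hedge_weights(cum_losses, eta):
    """Exponential weights w_t(k) proportional to exp(-eta * L_{t-1,k})."""
    m = min(cum_losses)                      # shift for numerical stability
    ws = [math.exp(-eta * (L - m)) for L in cum_losses]
    s = sum(ws)
    return [w / s for w in ws]

def mixability_gap(weights, losses, eta):
    """delta = (Hedge loss) - (mix loss) for one round at learning rate eta."""
    hedge_loss = sum(w * l for w, l in zip(weights, losses))
    m = min(losses)
    mix_loss = m - math.log(sum(w * math.exp(-eta * (l - m))
                                for w, l in zip(weights, losses))) / eta
    return hedge_loss - mix_loss

# accumulate a cumulative gap at one fixed rate eta; an AdaHedge-style rate
# is then recovered as eta_ah = ln(K) / Delta
K, eta, delta_sum = 3, 0.5, 0.0
cum = [0.0] * K
for losses in [[0.3, 0.7, 0.5], [0.9, 0.1, 0.4], [0.2, 0.8, 0.6]]:
    w = hedge_weights(cum, eta)
    delta_sum += mixability_gap(w, losses, eta)
    cum = [L + l for L, l in zip(cum, losses)]
eta_ah = math.log(K) / delta_sum
```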
The next result, proved in Section C.5, makes this precise.

Lemma 3.2. Fix t* as in (15). Then for each index i that was active at time t* and arbitrary j ≠ i:
\[ \Delta^j_T \le \varphi\left( \frac{\pi_j}{\pi_i}\Delta^i_T + \frac{\pi_j}{\pi^{ah}} \right) + \min\{1, \eta_j/8\}, \tag{16a} \]
\[ \Delta^j_T \le \varphi\,\frac{\pi_j}{\pi^{ah}}\,\Delta^{ah}_T + \min\{1, \eta_j/8\}, \tag{16b} \]
\[ \Delta^{ah}_T \le \frac{\pi^{ah}}{\pi_i}\,\Delta^i_T + 1. \tag{16c} \]

LLR employs an exponentially spaced grid of learning rates that are evaluated using, and played proportionally to, their cumulative mixability gaps. In the next step (which is restated and proved as Lemma C.7 in the additional material) we show that the mixability gap of a learning rate between grid points cannot be much smaller than that of its next higher grid neighbour. This establishes in particular that an exponential grid is sufficiently fine.

Lemma 3.3. For γ ≥ 1 and for any sequence of losses with values in [0, 1]:
\[ \delta^{\gamma\eta}_t \le \gamma\, e^{(\gamma-1)(\ln K + \eta)}\, \delta^\eta_t. \]

The preceding results now allow us to bound the regret of LLR in terms of the cumulative mixability gap of any fixed learning rate (which does not exceed its regret, by Lemma 2.1) and in terms of the cumulative mixability gap of AdaHedge (which we will use to establish worst-case optimality).

Lemma 3.4. Suppose the losses take values in [0, 1], let π^{ah} > 0 and π_∞ ∈ (0, 1) be the parameters of the LLR algorithm, and abbreviate B = (φ/(φ−1) + 2)π^{ah} + φ. Then the regret of the LLR algorithm is bounded by
\[ R_T \le B\,\alpha\, e^{(\alpha-1)(\ln K + 1)}\, \frac{\Delta^\eta_T}{\pi_{i(\eta)}} + \left( \frac{\alpha}{8(\alpha-1)} + \frac{\varphi}{\pi^{ah}} + \frac{\varphi}{\varphi-1} + 3 \right) \]
for all η ∈ [η^{ah}_{t*}, 1], where i(η) = 2 + ⌊log_α(1/η)⌋ is the index of the nearest grid point above η, and by
\[ R_T \le B\,\frac{\Delta^\infty_T}{\pi_\infty} + \left( \frac{\alpha}{8(\alpha-1)} + \frac{\varphi}{\pi^{ah}} + \frac{\varphi}{\varphi-1} + 3 \right) \]
for η = ∞. In addition,
\[ R_T \le B\,\frac{\Delta^{ah}_T}{\pi^{ah}} + \frac{\alpha}{8(\alpha-1)} + 1, \]
and for any η < η^{ah}_{t*},
\[ \Delta^{ah}_T \le \frac{\ln K}{\eta} + 1. \]

The proof appears in additional material Section C.6. We are now ready for our main result, which is proved in Section C.7. It shows that LLR competes with the regret of any learning rate above the worst-case safe rate and below 1, modulo a mild factor. In addition, LLR also performs well on all data favoured by Follow-the-Leader.
Theorem 3.5. Suppose the losses take values in [0, 1], let π^{ah} > 0 and π_∞ ∈ (0, 1) be the parameters of the LLR algorithm, and introduce the constants B = 1 + 2√π^{ah} + 3π^{ah} and C_K = (log₂ K + 1)/8 + B/π^{ah} + 1. Then the regret of LLR is simultaneously bounded by
\[ R_T \le \frac{4Be}{1-\pi_\infty}(\log_2 K + 1)\, \underbrace{\ln(7/\eta)\,\ln^2\!\big(2\log_2(5/\eta)\big)}_{=\,O(\ln^{1+\varepsilon}(1/\eta))\ \text{for any } \varepsilon>0}\, R^\eta_T + C_K \quad \text{for all } \eta \in [\eta^{ah}_{t^*}, 1] \]
and by
\[ R_T \le \frac{B}{\pi_\infty}\, R^\infty_T + C_K \quad \text{for } \eta = \infty. \]
In addition,
\[ R_T \le \frac{B}{\pi^{ah}}\, \frac{\ln K}{\eta} + C_K \quad \text{for any } \eta < \eta^{ah}_{t^*}. \]

To interpret the theorem, we recall from the introduction that ln(1/η) is better than O(ln T) for all η ≥ η^{ah}_{t*}. We finally show that LLR is robust to the worst case. We do this by showing something much stronger, namely that LLR guarantees a so-called second-order bound (a concept introduced in [7]). The bound is phrased in terms of the cumulative variance V_T = \sum_{t=1}^T v_t, where v_t = \mathrm{Var}_{k\sim w_t}[\ell^k_t] is the variance of ℓ^k_t for k distributed according to w_t. See Section C.8 for the proof.

Theorem 3.6. Suppose the losses take values in [0, 1], let π^{ah} > 0 and π_∞ ∈ (0, 1) be the parameters of the LLR algorithm, and introduce the constants B = (φ/(φ−1) + 2)π^{ah} + φ and C_K = (log₂ K + 1)/8 + B/π^{ah} + 1. Then the regret of LLR is bounded by
\[ R_T \le \frac{B}{\pi^{ah}} \sqrt{V_T \ln K} + \left( C_K + \frac{2B\ln K}{3\pi^{ah}} \right) \]
and consequently by
\[ R_T \le \frac{B}{\pi^{ah}} \sqrt{\frac{L^*_T (T - L^*_T)}{T} \ln K} + 2\left( C_K + \frac{2B\ln K}{3\pi^{ah}} + \frac{B^2 \ln K}{(\pi^{ah})^2} \right). \]

4 Discussion

We have shown that our new LLR algorithm recovers the same second-order bounds as previous methods, which guard against worst-case data by picking a small learning rate if necessary. What LLR adds is that, at the cost of a (poly)logarithmic overhead factor, it is also able to learn a range of higher learning rates η, which can potentially achieve much smaller regret (see Figure 1). This is accomplished by covering this range with a grid of sufficient granularity. The overhead factor depends on a prior on the grid, for which we have fixed a particular choice with a heavy tail.
However, the algorithm would also work with any other prior, so if it were known a priori that certain values in the grid were of special importance, they could be given larger prior mass. Consequently, a more advanced analysis demonstrating that only a subset of learning rates could potentially be optimal (in the sense of minimizing the regret R^η_T) would directly lead to factors of improvement in the algorithm. Thus we raise the open question: what is the smallest subset E of learning rates such that, for any data, the minimum of the regret over this subset, min_{η∈E} R^η_T, is approximately the same as the minimum min_η R^η_T over all or a large range of learning rates?

References

[1] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[2] V. Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences, 56(2):153–173, 1998.
[3] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55:119–139, 1997.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] S. de Rooij, T. van Erven, P. D. Grünwald, and W. M. Koolen. Follow the leader if you can, Hedge if you must. Journal of Machine Learning Research, 15:1281–1316, 2014.
[6] P. Auer, N. Cesa-Bianchi, and C. Gentile. Adaptive and self-confident on-line learning algorithms. Journal of Computer and System Sciences, 64:48–75, 2002.
[7] N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2/3):321–352, 2007.
[8] T. van Erven, P. Grünwald, W. M. Koolen, and S. de Rooij. Adaptive hedge. In Advances in Neural Information Processing Systems 24 (NIPS), 2011.
[9] M. Devaine, P. Gaillard, Y. Goude, and G. Stoltz.
Forecasting electricity consumption by aggregating specialized experts; a review of the sequential aggregation of specialized experts, with an application to Slovakian and French country-wide one-day-ahead (half-)hourly predictions. Machine Learning, 90(2):231–260, 2013.
[10] P. Grünwald. The safe Bayesian: learning the learning rate via the mixability gap. In Proceedings of the 23rd International Conference on Algorithmic Learning Theory (ALT). Springer, 2012.
[11] V. Vovk. Competitive on-line statistics. International Statistical Review, 69:213–248, 2001.
[12] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, 1991.
Beta-Negative Binomial Process and Exchangeable Random Partitions for Mixed-Membership Modeling Mingyuan Zhou IROM Department, McCombs School of Business The University of Texas at Austin, Austin, TX 78712, USA mingyuan.zhou@mccombs.utexas.edu Abstract The beta-negative binomial process (BNBP), an integer-valued stochastic process, is employed to partition a count vector into a latent random count matrix. As the marginal probability distribution of the BNBP that governs the exchangeable random partitions of grouped data has not yet been developed, current inference for the BNBP has to truncate the number of atoms of the beta process. This paper introduces an exchangeable partition probability function to explicitly describe how the BNBP clusters the data points of each group into a random number of exchangeable partitions, which are shared across all the groups. A fully collapsed Gibbs sampler is developed for the BNBP, leading to a novel nonparametric Bayesian topic model that is distinct from existing ones, with simple implementation, fast convergence, good mixing, and state-of-the-art predictive performance. 1 Introduction For mixture modeling, there is a wide selection of nonparametric Bayesian priors, such as the Dirichlet process [1] and the more general family of normalized random measures with independent increments (NRMIs) [2, 3]. Although a draw from an NRMI usually consists of countably infinite atoms that are impossible to instantiate in practice, one may transform the infinite-dimensional problem into a finite one by marginalizing out the NRMI. For instance, it is well known that the marginalization of the Dirichlet process random probability measure under multinomial sampling leads to the Chinese restaurant process [4, 5]. 
The general structure of the Chinese restaurant process is broadened by [5] to the so-called exchangeable partition probability function (EPPF) model, leading to fully collapsed inference and providing a unified view of the characteristics of various nonparametric Bayesian mixture-modeling priors. Despite significant progress on EPPF models in the past decade, their use in mixture modeling (clustering) is usually limited to a single set of data points. Moving beyond mixture modeling of a single set, there has been significant recent interest in mixed-membership modeling, i.e., mixture modeling of grouped data x_1, . . . , x_J, where each group x_j = {x_{ji}}_{i=1,m_j} consists of m_j data points that are exchangeable within the group. To cluster the m_j data points in each group into a random, potentially unbounded number of partitions, which are exchangeable and shared across all the groups, is a much more challenging statistical problem. While the hierarchical Dirichlet process (HDP) [6] is a popular choice, it is shown in [7] that a wide variety of integer-valued stochastic processes, including the gamma-Poisson process [8, 9], beta-negative binomial process (BNBP) [10, 11], and gamma-negative binomial process (GNBP), can all be applied to mixed-membership modeling. However, for none of these stochastic processes has it been possible to describe the marginal distributions that govern their exchangeable random partitions of grouped data. Without these marginal distributions, the HDP exploits an alternative representation known as the Chinese restaurant franchise [6] to derive collapsed inference, while fully collapsed inference is available for neither the BNBP nor the GNBP. The EPPF provides a unified treatment of mixture modeling, but there is hardly a unified treatment of mixed-membership modeling. As the first step to fill that gap, this paper thoroughly investigates the law of the BNBP that governs its exchangeable random partitions of grouped data.
As directly deriving the BNBP's EPPF for mixed-membership modeling is difficult, we first randomize the group sizes {m_j}_j and derive the joint distribution of {m_j}_j and their random partitions over a shared list of exchangeable clusters; we then derive the marginal distribution of the group-size count vector m = (m_1, . . . , m_J)^T, and use Bayes' rule to further arrive at the BNBP's EPPF, which describes the prior distribution of a latent column-exchangeable random count matrix whose jth row sums to m_j. The general method of arriving at an EPPF for mixed-membership modeling using an integer-valued stochastic process is an important contribution. We make several additional contributions: 1) We derive a prediction rule for the BNBP to simulate exchangeable random partitions of grouped data governed by its EPPF. 2) We construct a BNBP topic model, derive a fully collapsed Gibbs sampler that analytically marginalizes out not only the topics and topic weights, but also the infinite-dimensional beta process, and provide closed-form update equations for the model parameters. 3) The BNBP topic model's sampling algorithm is straightforward to implement, converges fast, mixes well, and produces state-of-the-art predictive performance with a compact representation of the corpus.

1.1 Exchangeable Partition Probability Function

Let Π_m = {A_1, . . . , A_l} denote a random partition of the set [m] = {1, 2, . . . , m}, where there are l subsets and each element i ∈ [m] belongs to one and only one set A_k of Π_m. If P(Π_m = {A_1, . . . , A_l} | m) depends only on the number and sizes of the A_k's, regardless of their order, then it is called an exchangeable partition probability function (EPPF) of Π_m. An EPPF of Π_m is an EPPF of Π := (Π_1, Π_2, . . .) if P(Π_m | n) = P(Π_m | m) does not depend on n, where P(Π_m | n) denotes the marginal partition probability for [m] when it is known that the sample size is n. Such a constraint can also be expressed as an addition rule for the EPPF [5].
In this paper, the addition rule is not required and the proposed EPPF is allowed to depend on the group sizes (or on the sample size if the number of groups is one). Detailed discussions of sample size dependent EPPFs can be found in [12]. We generalize the work of [12] to model the partition of a count vector into a latent column-exchangeable random count matrix. A marginal sampler for σ-stable Poisson-Kingman mixture models (but not mixed-membership models) is proposed in [13], encompassing a large class of random probability measures and their corresponding EPPFs of Π. Note that the BNBP is not within that class and that both the BNBP's EPPF and prediction rule depend on the group sizes.

1.2 Beta Process

The beta process B ∼ BP(c, B_0) is a completely random measure defined on the product space [0, 1] × Ω, with a concentration parameter c > 0 and a finite and continuous base measure B_0 over a complete separable metric space Ω [14, 15]. We define the Lévy measure of the beta process as
\[ \nu(dp\,d\omega) = p^{-1}(1-p)^{c-1}\,dp\,B_0(d\omega). \tag{1} \]
A draw from B ∼ BP(c, B_0) can be represented as a countably infinite sum B = \sum_{k=1}^\infty p_k \delta_{\omega_k}, ω_k ∼ g_0, where γ_0 = B_0(Ω) is the mass parameter and g_0(dω) = B_0(dω)/γ_0 is the base distribution. The beta process is unique in that the beta distribution is not infinitely divisible, and its measure on a Borel set A ⊂ Ω, expressed as B(A) = \sum_{k:\omega_k\in A} p_k, could be larger than one and hence is clearly not a beta random variable. In this paper we will work with Q(A) = −\sum_{k:\omega_k\in A} \ln(1 − p_k), defined as a logbeta random variable, to analyze model properties and derive closed-form Gibbs sampling update equations. We provide these details in the Appendix.
2 Exchangeable Cluster/Partition Probability Functions for the BNBP

The integer-valued beta-negative binomial process (BNBP) is defined as
\[ X_j \mid B \sim \mathrm{NBP}(r_j, B), \qquad B \sim \mathrm{BP}(c, B_0), \tag{2} \]
where r_j is the negative binomial dispersion parameter of the jth group and X_j | B ∼ NBP(r_j, B) is a negative binomial process such that X_j(A) = \sum_{k:\omega_k\in A} n_{jk}, n_{jk} ∼ NB(r_j, p_k), for each Borel set A ⊂ Ω. The negative binomial distribution n ∼ NB(r, p) has probability mass function (PMF) f_N(n) = \frac{\Gamma(n+r)}{n!\,\Gamma(r)} p^n (1-p)^r for n ∈ Z, where Z = {0, 1, . . .}. Our definition of the BNBP follows those of [10, 7, 11], where for inference [10, 7] used finite truncation and [11] used slice sampling. There are two recent papers [16, 17] that both marginalize out the beta process from the negative binomial process, with the predictive structures of the BNBP described as the negative binomial Indian buffet process (IBP) [16] and the "ice cream" buffet process [17], respectively. Both processes are also related to the "multi-scoop" IBP of [10], and they all generalize the binary-valued IBP [18]. Different from these two papers on infinite random count matrices, this paper focuses on generating a latent column-exchangeable random count matrix, each of whose rows sums to a fixed observed integer. This paper generalizes the techniques developed in [17, 12] to define an EPPF for mixed-membership modeling and derive truncation-free fully collapsed inference. The BNBP is by nature an integer-valued stochastic process, as X_j(A) is a random count for each Borel set A ⊂ Ω. As the negative binomial process is also a gamma-Poisson mixture process, we can augment (2) as a beta-gamma-Poisson process:
\[ X_j \mid \Theta_j \sim \mathrm{PP}(\Theta_j), \quad \Theta_j \mid r_j, B \sim \Gamma\mathrm{P}[r_j, B/(1-B)], \quad B \sim \mathrm{BP}(c, B_0), \]
where X_j | Θ_j ∼ PP(Θ_j) is a Poisson process such that X_j(A) ∼ Pois[Θ_j(A)], and Θ_j | B ∼ ΓP[r_j, B/(1−B)] is a gamma process such that Θ_j(A) = \sum_{k:\omega_k\in A} θ_{jk}, θ_{jk} ∼ Gamma[r_j, p_k/(1−p_k)], for each Borel set A ⊂ Ω.
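The gamma-Poisson mixture representation above is easy to verify by simulation: drawing λ ∼ Gamma(r, p/(1−p)) and then n | λ ∼ Pois(λ) reproduces NB(r, p), whose mean is rp/(1−p). A small sketch (the sampler names are ours):

```python
import math, random

def sample_nb(r, p, rng):
    """NB(r, p) via its gamma-Poisson mixture: lam ~ Gamma(r, p/(1-p)), n ~ Pois(lam)."""
    lam = rng.gammavariate(r, p / (1.0 - p))
    # Knuth's Poisson sampler; adequate for the small means used here
    L, k, u = math.exp(-lam), 0, 1.0
    while True:
        u *= rng.random()
        if u <= L:
            return k
        k += 1

rng = random.Random(0)
r, p = 2.0, 0.3
samples = [sample_nb(r, p, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)   # should be close to r*p/(1-p) = 6/7
```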
The mixed-membership-modeling potential of the BNBP becomes clear under this augmented representation. The Poisson process provides a bridge linking count modeling and mixture modeling [7], since X_j ∼ PP(Θ_j) can be equivalently generated by first drawing a total random count m_j := X_j(Ω) ∼ Pois[Θ_j(Ω)] and then assigning this random count to disjoint Borel sets of Ω using a multinomial distribution.

2.1 Exchangeable Cluster Probability Function

In mixed-membership modeling, the size of each group is observed rather than being random; thus, although the BNBP's augmented representation is instructive, it is still unclear how exactly the integer-valued stochastic process leads to a prior distribution on exchangeable random partitions of grouped data. The first step in arriving at such a prior distribution is to build a sample size dependent model that treats the number of data points to be clustered (partitioned) in each group as random. Below we first derive an exchangeable cluster probability function (ECPF) governed by the BNBP to describe the joint distribution of the random group sizes and their random partitions over a random, potentially unbounded number of exchangeable clusters shared across all the groups. Later we show how to derive the EPPF from the ECPF using Bayes' rule.

Lemma 1. Denote by δ_k(z_{ji}) a unit point mass at z_{ji} = k, and let n_{jk} = \sum_{i=1}^{m_j} \delta_k(z_{ji}) and X_j(A) = \sum_{k:\omega_k\in A} n_{jk} be the number of data points in group j assigned to the atoms within the Borel set A ⊂ Ω. The X_j's generated via the group size dependent model
\[ z_{ji} \sim \sum_{k=1}^\infty \frac{\theta_{jk}}{\Theta_j(\Omega)} \delta_k, \quad m_j \sim \mathrm{Pois}(\Theta_j(\Omega)), \quad \Theta_j \sim \Gamma\mathrm{P}[r_j, B/(1-B)], \quad B \sim \mathrm{BP}(c, B_0) \tag{3} \]
are equivalent in distribution to the X_j's generated from a BNBP as in (2).

Proof. With B = \sum_{k=1}^\infty p_k \delta_{\omega_k} and Θ_j = \sum_{k=1}^\infty θ_{jk} \delta_{\omega_k}, the joint distribution of the cluster indices z_j = (z_{j1}, . . . , z_{jm_j}) given Θ_j and m_j can be expressed as
\[ p(z_j \mid \Theta_j, m_j) = \prod_{i=1}^{m_j} \frac{\theta_{j z_{ji}}}{\sum_{k'=1}^\infty \theta_{jk'}} = \frac{1}{\big(\sum_{k=1}^\infty \theta_{jk}\big)^{m_j}} \prod_{k=1}^\infty \theta_{jk}^{n_{jk}}, \]
which is not in a fully factorized form.
As m_j is linked to the total random mass Θ_j(Ω) through a Poisson distribution, the joint likelihood of z_j and m_j given Θ_j is
\[ f(z_j, m_j \mid \Theta_j) = f(z_j \mid \Theta_j, m_j)\,\mathrm{Pois}(m_j; \Theta_j(\Omega)) = \frac{1}{m_j!} \prod_{k=1}^\infty \theta_{jk}^{n_{jk}} e^{-\theta_{jk}}, \tag{4} \]
which is fully factorized and hence amenable to marginalization. Since θ_{jk} ∼ Gamma[r_j, p_k/(1−p_k)], we can marginalize θ_{jk} out analytically as f(z_j, m_j | r_j, B) = E_{Θ_j}[f(z_j, m_j | Θ_j)], leading to
\[ f(z_j, m_j \mid r_j, B) = \frac{1}{m_j!} \prod_{k=1}^\infty \frac{\Gamma(n_{jk}+r_j)}{\Gamma(r_j)}\, p_k^{n_{jk}} (1-p_k)^{r_j}. \tag{5} \]
Multiplying the above equation by a multinomial coefficient transforms the prior distribution for the categorical random variables z_j into the prior distribution for a random count vector:
\[ f(n_{j1}, \ldots, n_{j\infty} \mid r_j, B) = \frac{m_j!}{\prod_{k=1}^\infty n_{jk}!}\, f(z_j, m_j \mid r_j, B) = \prod_{k=1}^\infty \mathrm{NB}(n_{jk}; r_j, p_k). \]
Thus in the prior, for each group, the sample size dependent model in (3) draws a random number of data points n_{jk} ∼ NB(r_j, p_k) independently at each atom. With X_j := \sum_{k=1}^\infty n_{jk} \delta_{\omega_k}, we have X_j | B ∼ NBP(r_j, B) such that X_j(A) = \sum_{k:\omega_k\in A} n_{jk}, n_{jk} ∼ NB(r_j, p_k).

The Lemma below provides a finite-dimensional distribution obtained by marginalizing out the infinite-dimensional beta process from the BNBP. The proof is provided in the Appendix.

Lemma 2 (ECPF). The exchangeable cluster probability function (ECPF) of the BNBP, which describes the joint distribution of the random count vector m := (m_1, . . . , m_J)^T and its exchangeable random partitions z = (z_{11}, . . . , z_{Jm_J}), can be expressed as
\[ f(z, m \mid r, \gamma_0, c) = \frac{\gamma_0^{K_J}\, e^{-\gamma_0[\psi(c+r_\cdot)-\psi(c)]}}{\prod_{j=1}^J m_j!} \prod_{k=1}^{K_J} \left[ \frac{\Gamma(n_{\cdot k})\,\Gamma(c+r_\cdot)}{\Gamma(c+n_{\cdot k}+r_\cdot)} \prod_{j=1}^J \frac{\Gamma(n_{jk}+r_j)}{\Gamma(r_j)} \right], \tag{6} \]
where K_J is the number of observed points of discontinuity for which n_{·k} > 0, r := (r_1, . . . , r_J)^T, r_· := \sum_{j=1}^J r_j, n_{·k} := \sum_{j=1}^J n_{jk}, and m_j ∈ Z is the random size of group j.
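The factor ψ(c + r_·) − ψ(c) in (6), scaled by γ_0, controls the expected number of occupied clusters under the BNBP. Since Python's math module lacks the digamma function ψ, here is a small self-contained sketch (our own implementation, adequate for x > 0):

```python
import math

def digamma(x):
    """Psi(x) via the recurrence psi(x) = psi(x+1) - 1/x and an asymptotic series."""
    assert x > 0
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    return (r + math.log(x) - 1.0 / (2 * x)
            - 1.0 / (12 * x**2) + 1.0 / (120 * x**4) - 1.0 / (252 * x**6))

def expected_num_clusters(gamma0, c, r):
    """E[K_J] = gamma0 * (psi(c + r_dot) - psi(c)) under the BNBP."""
    return gamma0 * (digamma(c + sum(r)) - digamma(c))
```

For example, with c = 2 and ten groups with r_j = 1, choosing γ_0 = 12/[ψ(c + Σ_j r_j) − ψ(c)] (the setting used for Figure 1 below) gives an expected number of clusters of 12.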
2.2 Exchangeable Partition Probability Function and Prediction Rule

Having the ECPF does not directly lead to the EPPF for the BNBP, as an EPPF describes the distribution of the exchangeable random partitions of data groups whose sizes are observed rather than random. To arrive at the EPPF, we first organize z into a random count matrix N_J ∈ Z^{J×K_J}, whose jth row represents the partition of the m_j data points into the K_J shared exchangeable clusters and whose order of these K_J nonzero columns is chosen uniformly at random from one of the K_J! possible permutations; we then obtain a prior distribution on a BNBP random count matrix as
\[ f(N_J \mid r, \gamma_0, c) = \frac{1}{K_J!} \prod_{j=1}^J \frac{m_j!}{\prod_{k=1}^{K_J} n_{jk}!}\, f(z, m \mid r, \gamma_0, c) = \frac{\gamma_0^{K_J}\, e^{-\gamma_0[\psi(c+r_\cdot)-\psi(c)]}}{K_J!} \prod_{k=1}^{K_J} \frac{\Gamma(n_{\cdot k})\Gamma(c+r_\cdot)}{\Gamma(c+n_{\cdot k}+r_\cdot)} \prod_{j=1}^J \frac{\Gamma(n_{jk}+r_j)}{n_{jk}!\,\Gamma(r_j)}. \tag{7} \]
As described in detail in [17], although the matrix prior does not appear to be simple, direct calculation shows that this random count matrix has K_J ∼ Pois{γ_0[ψ(c + r_·) − ψ(c)]} independent and identically distributed (i.i.d.) columns that can be generated via
\[ n_{:k} \sim \mathrm{DirMult}(n_{\cdot k}, r_1, \ldots, r_J), \qquad n_{\cdot k} \sim \mathrm{Digam}(r_\cdot, c), \tag{8} \]
where n_{:k} := (n_{1k}, . . . , n_{Jk})^T is the count vector of the kth column (cluster), the Dirichlet-multinomial (DirMult) distribution [19] has PMF
\[ \mathrm{DirMult}(n_{:k} \mid n_{\cdot k}, r) = \frac{n_{\cdot k}!}{\prod_{j=1}^J n_{jk}!}\, \frac{\Gamma(r_\cdot)}{\Gamma(n_{\cdot k}+r_\cdot)} \prod_{j=1}^J \frac{\Gamma(n_{jk}+r_j)}{\Gamma(r_j)}, \]
and the digamma distribution [20] has PMF
\[ \mathrm{Digam}(n \mid r, c) = \frac{1}{\psi(c+r)-\psi(c)}\, \frac{\Gamma(r+n)\,\Gamma(c+r)}{n\,\Gamma(c+n+r)\,\Gamma(r)}, \qquad n = 1, 2, \ldots. \]
Thus in the prior, the BNBP generates a Poisson-distributed random number of clusters, the size of each cluster follows a digamma distribution, and each cluster is further partitioned into the J groups using a Dirichlet-multinomial distribution [17]. With both the ECPF and the random count matrix prior governed by the BNBP, we are ready to derive both the EPPF and the prediction rule, given in the following two Lemmas, with proofs in the Appendix.

Lemma 3 (EPPF).
Let \sum_{\sum_{k=1}^K n_{:k} = m} denote a summation over all sets of count vectors with \sum_{k=1}^K n_{:k} = m, where m_· = \sum_{j=1}^J m_j and n_{·k} ≥ 1. The group-size dependent exchangeable partition probability function (EPPF) governed by the BNBP can be expressed as
\[ f(z \mid m, r, \gamma_0, c) = \frac{ \dfrac{\gamma_0^{K_J}}{\prod_{j=1}^J m_j!} \prod_{k=1}^{K_J} \left[ \dfrac{\Gamma(n_{\cdot k})\Gamma(c+r_\cdot)}{\Gamma(c+n_{\cdot k}+r_\cdot)} \prod_{j=1}^J \dfrac{\Gamma(n_{jk}+r_j)}{\Gamma(r_j)} \right] }{ \sum_{K'=1}^{m_\cdot} \dfrac{\gamma_0^{K'}}{K'!} \sum_{\sum_{k'=1}^{K'} n_{:k'} = m}\; \prod_{k'=1}^{K'} \dfrac{\Gamma(n_{\cdot k'})\Gamma(c+r_\cdot)}{\Gamma(c+n_{\cdot k'}+r_\cdot)} \prod_{j=1}^J \dfrac{\Gamma(n_{jk'}+r_j)}{n_{jk'}!\,\Gamma(r_j)} }, \tag{9} \]
which is a function of the cluster sizes {n_{jk}}_{k=1,K_J}, regardless of the order of the indices k. Although the EPPF is fairly complicated, one may derive a simple prediction rule, shown below, to simulate exchangeable random partitions of grouped data governed by this EPPF.

Lemma 4 (Prediction Rule). With y^{−ji} representing the variable y excluding the contribution of x_{ji}, the prediction rule of the BNBP group size dependent model in (3) can be expressed as
\[ P(z_{ji} = k \mid z^{-ji}, m, r, \gamma_0, c) \propto \begin{cases} \dfrac{n^{-ji}_{\cdot k}}{c + n^{-ji}_{\cdot k} + r_\cdot}\,(n^{-ji}_{jk} + r_j), & \text{for } k = 1, \ldots, K^{-ji}_J; \\[2mm] \dfrac{\gamma_0}{c + r_\cdot}\, r_j, & \text{if } k = K^{-ji}_J + 1. \end{cases} \tag{10} \]

[Figure 1 appears here; the three matrices of partition counts are omitted.]

Figure 1: Random draws from the EPPF that governs the BNBP's exchangeable random partitions of 10 groups (rows), each of which has 50 data points. The parameters of the EPPF are set as c = 2, γ_0 = 12/[ψ(c + \sum_j r_j) − ψ(c)], and (a) r_j = 1, (b) r_j = 10, or (c) r_j = 100 for all the 10 groups.
The jth row of each matrix, which sums to 50, represents the partition of the mj = 50 data points of the jth group over a random number of exchangeable clusters, and the kth column of each matrix represents the kth nonempty cluster in order of appearance in Gibbs sampling (the empty clusters are deleted). 2.3 Simulating Exchangeable Random Partitions of Grouped Data While the EPPF (9) is not simple, the prediction rule (10) clearly shows that the probability of selecting k is proportional to the product of two terms: one is related to the kth cluster’s overall popularity across groups, and the other is only related to the kth cluster’s popularity at that group and that group’s dispersion parameter; and the probability of creating a new cluster is related to γ0, c, r· and rj. The BNBP’s exchangeable random partitions of the group-size count vector m, whose prior distribution is governed by (9), can be easily simulated via Gibbs sampling using (10). Running Gibbs sampling using (10) for 2500 iterations and displaying the last sample, we show in Figure 1 (a)-(c) three distinct exchangeable random partitions of the same group-size count vector, under three different parameter settings. It is clear that the dispersion parameters {rj}j play a critical role in controlling how overdispersed the counts are: the smaller the {rj}j are, the more overdispersed the counts in each row and those in each column are. This is unsurprising as in the BNBP’s prior, we have njk ∼NB(rj, pk) and n:k ∼DirMult(n·k, r1, . . . , rJ). Figure 1 suggests that it is important to infer rj rather than setting them in a heuristic manner or fixing them. 3 Beta-Negative Binomial Process Topic Model With the BNBP’s EPPF derived, it becomes evident that the integer-valued BNBP also governs a prior distribution for exchangeable random partitions of grouped data. 
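The exchangeable random partitions of Section 2.3 (Figure 1) can be simulated directly from the prediction rule (10). The following is our own minimal Gibbs-sampling sketch, not the authors' released code:

```python
import random
from collections import defaultdict

def bnbp_partition(m, r, gamma0, c, iters=100, seed=1):
    """Gibbs-sample an exchangeable random partition of J groups (sizes m)
    from the BNBP prediction rule (10). Returns {cluster: total count}."""
    rng = random.Random(seed)
    J, r_dot = len(m), sum(r)
    z = [[0] * m[j] for j in range(J)]           # start from one big cluster
    njk = [defaultdict(int) for _ in range(J)]   # n_{jk}
    ndk = defaultdict(int)                       # n_{.k}
    for j in range(J):
        njk[j][0] = m[j]
        ndk[0] += m[j]
    next_k = 1
    for _ in range(iters):
        for j in range(J):
            for i in range(m[j]):
                k0 = z[j][i]
                njk[j][k0] -= 1; ndk[k0] -= 1
                ks = [k for k in ndk if ndk[k] > 0]
                ps = [ndk[k] / (c + ndk[k] + r_dot) * (njk[j][k] + r[j]) for k in ks]
                ks.append(next_k)                # open a new cluster
                ps.append(gamma0 / (c + r_dot) * r[j])
                u, acc = rng.random() * sum(ps), 0.0
                for k, p in zip(ks, ps):         # categorical draw
                    acc += p
                    if u <= acc:
                        z[j][i] = k
                        njk[j][k] += 1; ndk[k] += 1
                        if k == next_k:
                            next_k += 1
                        break
    return {k: n for k, n in ndk.items() if n > 0}

clusters = bnbp_partition(m=[50] * 10, r=[10.0] * 10, gamma0=5.0, c=2.0)
```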
To demonstrate the use of the BNBP, we apply it to topic modeling [21] of a document corpus, a special case of mixture modeling of grouped data, where the words of the jth document x_{j1}, . . . , x_{jm_j} constitute a group x_j (m_j words in document j), and each word x_{ji} is an exchangeable group member indexed by v_{ji} in a vocabulary with V unique terms. We choose the base distribution as a V-dimensional Dirichlet distribution g_0(φ) = Dir(φ; η, . . . , η), and choose a multinomial distribution to link a word to a topic (atom). We express the hierarchical construction of the BNBP topic model as
\[ x_{ji} \sim \mathrm{Mult}(\phi_{z_{ji}}), \quad \phi_k \sim \mathrm{Dir}(\eta, \ldots, \eta), \quad z_{ji} \sim \sum_{k=1}^\infty \frac{\theta_{jk}}{\Theta_j(\Omega)} \delta_k, \quad m_j \sim \mathrm{Pois}(\Theta_j(\Omega)), \]
\[ \Theta_j \sim \Gamma\mathrm{P}\!\left[r_j, \frac{B}{1-B}\right], \quad r_j \sim \mathrm{Gamma}(a_0, 1/b_0), \quad B \sim \mathrm{BP}(c, B_0), \quad \gamma_0 \sim \mathrm{Gamma}(e_0, 1/f_0). \tag{11} \]
Let n_{vjk} := \sum_{i=1}^{m_j} \delta_v(x_{ji})\,\delta_k(z_{ji}). Multiplying (4) and the data likelihood f(x_j | z_j, Φ) = \prod_{v=1}^V \prod_{k=1}^\infty (\phi_{vk})^{n_{vjk}}, where Φ = (φ_1, . . . , φ_∞), we have
\[ f(x_j, z_j, m_j \mid \Phi, \Theta_j) = \frac{\prod_{k=1}^\infty \prod_{v=1}^V n_{vjk}!}{m_j!} \prod_{k=1}^\infty \prod_{v=1}^V \mathrm{Pois}(n_{vjk}; \phi_{vk}\theta_{jk}). \]
Thus the BNBP topic model can also be considered an infinite Poisson factor model [10], where the term-document word count matrix (m_{vj})_{v=1:V, j=1:J} is factorized under the Poisson likelihood as m_{vj} = \sum_{k=1}^\infty n_{vjk}, n_{vjk} ∼ Pois(φ_{vk}θ_{jk}), whose likelihood f({n_{vjk}}_{v,k} | Φ, Θ_j) differs from f(x_j, z_j, m_j | Φ, Θ_j) only by a multinomial coefficient. The full conditional likelihood f(x, z, m | Φ, Θ) = \prod_{j=1}^J f(x_j, z_j, m_j | Φ, Θ_j) can be further expressed as
\[ f(x, z, m \mid \Phi, \Theta) = \left\{ \prod_{k=1}^\infty \prod_{v=1}^V \phi_{vk}^{n_{v\cdot k}} \right\} \cdot \left\{ \frac{\prod_{k=1}^\infty \prod_{j=1}^J \theta_{jk}^{n_{jk}} e^{-\theta_{jk}}}{\prod_{j=1}^J m_j!} \right\}, \]
where the marginalization of Φ from the first right-hand-side term is a product of Dirichlet-multinomial distributions and the second right-hand-side term leads to the ECPF. Thus we have a fully marginalized
Directly applying Bayes’ rule to this fully marginalized likelihood, we construct a nonparametric Bayesian fully collapsed Gibbs sampler for the BNBP topic model as P(zji = k|x, z−ji, γ0, m, c, r)∝      η+n−ji vji·k V η+n−ji ·k · n−ji ·k c+n−ji ·k +r· · (n−ji jk + rj), for k = 1, . . . , K−ji J ; 1 V · γ0 c+r· · rj, if k = K−ji J + 1. (12) In the Appendix we include all the other closed-form Gibbs sampling update equations. 3.1 Comparison with Other Collapsed Gibbs Samplers One may compare the collapsed Gibbs sampler of the BNBP topic model with that of latent Dirichlet allocation (LDA) [22], which, in our notation, can be expressed as P(zji = k|x, z−ji, m, α, K) ∝ η+n−ji vji·k V η+n−ji ·k · (n−ji jk + α), for k = 1, . . . , K, (13) where the number of topics K and the topic proportion Dirichlet smoothing parameter α are both tuning parameters. The BNBP topic model is a nonparametric Bayesian algorithm that removes the need to tune these parameters. One may also compare the BNBP topic model with the HDP-LDA [6, 23], whose direct assignment sampler in our notation can be expressed as P(zji = k|x, z−ji, m, α, ˜r) ∝    η+n−ji vji·k V η+n−ji ·k · (n−ji jk + α˜rk), for k = 1, . . . , K−ji J ; 1 V · (α˜r∗), if k = K−ji J + 1; (14) where α is the concentration parameter for the group-specific Dirichlet processes eΘj ∼DP(α, eG), and ˜rk = eG(ωk) and ˜r∗= eG(Ω\DJ) are the measures of the globally shared Dirichlet process eG ∼ DP(γ0, eG0) over the observed points of discontinuity and absolutely continuous space, respectively. Comparison between (14) and (12) shows that distinct from the HDP-LDA that combines a topic’s global and local popularities in an additive manner as (n−ji jk + α˜rk), the BNBP topic model combines them in a multiplicative manner as n−ji ·k c+n−ji ·k +r· · (n−ji jk + rj). 
This term can also be rewritten as the product of n^{−ji}_{·k} and (n^{−ji}_{jk} + r_j)/(c + n^{−ji}_{·k} + r_·), the latter of which represents how much the jth document contributes to the overall popularity of the kth topic. Therefore, the BNBP and HDP-LDA have distinct mechanisms to automatically shrink the number of topics. Note that while the BNBP sampler in (12) is fully collapsed, the direct assignment sampler of the HDP-LDA in (14) is only partially collapsed, as neither the globally shared Dirichlet process G̃ nor the concentration parameter α is marginalized out. To derive a collapsed sampler for the HDP-LDA that marginalizes out G̃ (but still not α), one has to use the Chinese restaurant franchise [6], which has cumbersome book-keeping as each word is indirectly linked to its topic via a latent table index.

4 Example Results

We consider the JACM¹, PsyReview², and NIPS12³ corpora, restricting the vocabulary to terms that occur in five or more documents. The JACM corpus includes 536 documents, with V = 1539 unique terms and 68,055 total word counts. The PsyReview corpus includes 1281 documents, with V = 2566 and 71,279 total word counts. The NIPS12 corpus includes 1740 documents, with V = 13,649 and 2,301,375 total word counts.
To evaluate the BNBP topic model⁴ and its performance relative to that of the HDP-LDA, which are both nonparametric Bayesian algorithms, we randomly choose 50% of the words in each document as training, and use the remaining ones to calculate per-word heldout perplexity. We set the hyperparameters as a_0 = b_0 = e_0 = f_0 = 0.01. We consider 2500 Gibbs sampling iterations and collect the last 1500 samples. In each iteration, we randomize the ordering of the words. For each collected sample, we draw the topics (φ_k | −) ∼ Dir(η + n_{1·k}, . . . , η + n_{V·k}), and the topic weights (θ_{jk} | −) ∼ Gamma(n_{jk} + r_j, p_k) for the BNBP and the topic proportions (θ_j | −) ∼ Dir(n_{j1} + α r̃_1, . . . , n_{jK_J} + α r̃_{K_J}) for the HDP, with which the per-word perplexity is computed as

exp( −(1/m^test) Σ_v Σ_j m^test_{vj} ln [ Σ_s Σ_k φ^(s)_{vk} θ^(s)_{jk} / Σ_s Σ_v Σ_k φ^(s)_{vk} θ^(s)_{jk} ] ),

where s ∈ {1, . . . , S} is the index of a collected MCMC sample, m^test_{vj} is the number of test words at term v in document j, and m^test = Σ_v Σ_j m^test_{vj}. The final results are averaged over five random training/testing partitions. The evaluation method is similar to those used in [24, 25, 26, 10]. Similar to [26, 10], we set the topic Dirichlet smoothing parameter as η = 0.01, 0.02, 0.05, 0.10, 0.25, or 0.50. To test how the algorithms perform in more extreme settings, we also consider η = 0.001, 0.002, and 0.005. All algorithms are implemented with unoptimized Matlab code. On a 3.4 GHz CPU, the fully collapsed Gibbs sampler of the BNBP topic model takes about 2.5 seconds per iteration on the NIPS12 corpus when the inferred number of topics is around 180. The direct assignment sampler of the HDP-LDA has comparable computational complexity when the inferred number of topics is similar. Note that when the inferred number of topics K_J is large, the sparse computation technique for LDA [27, 28] may also be used to considerably speed up the sampling algorithm of the BNBP topic model.

¹ http://www.cs.princeton.edu/∼blei/downloads/
² http://psiexp.ss.uci.edu/research/programs data/toolbox.htm
³ http://www.cs.nyu.edu/∼roweis/data.html
⁴ Matlab code available at http://mingyuanzhou.github.io/

Figure 2: The inferred number of topics K_J for the first 1500 Gibbs sampling iterations for the (a) HDP-LDA and (b) BNBP topic model on JACM. (c)-(d) and (e)-(f) are analogous plots to (a)-(b) for the PsyReview and NIPS12 corpora, respectively. From bottom to top in each plot, the red, blue, magenta, black, green, yellow, and cyan curves correspond to the results for η = 0.50, 0.25, 0.10, 0.05, 0.02, 0.01, and 0.005, respectively.

We first diagnose the convergence and mixing of the collapsed Gibbs samplers for the HDP-LDA and BNBP topic model via the trace plots of their samples. The three plots in the left column of Figure 2 show that the HDP-LDA travels relatively slowly to the target distributions of the number of topics, often reaching them in more than 300 iterations, whereas the three plots in the right column show that the BNBP topic model travels quickly to the target distributions, usually reaching them in less than 100 iterations. Moreover, Figure 2 shows that the chains of the HDP-LDA take small steps and do not traverse their distributions quickly, whereas the chains of the BNBP topic model mix very well locally and traverse their distributions relatively quickly.
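The perplexity formula above can be transcribed directly; the sketch below is our own (variable names and data layout are assumptions, not code from the paper), with `phi_samples[s][v][k]` and `theta_samples[s][j][k]` holding the collected posterior draws and `m_test[v][j]` the heldout counts:

```python
import math

def heldout_perplexity(phi_samples, theta_samples, m_test):
    """Per-word heldout perplexity averaged over S collected MCMC samples,
    following the formula in the text: for each document j, the predictive
    probability of term v is pooled over samples and normalized over terms."""
    V, J = len(m_test), len(m_test[0])
    total = sum(m_test[v][j] for v in range(V) for j in range(J))
    loglik = 0.0
    for j in range(J):
        # normalizer for document j: sum over samples, terms, and topics
        Z = sum(phi[v][k] * theta[j][k]
                for phi, theta in zip(phi_samples, theta_samples)
                for v in range(V) for k in range(len(theta[j])))
        for v in range(V):
            if m_test[v][j] == 0:
                continue
            p = sum(phi[v][k] * theta[j][k]
                    for phi, theta in zip(phi_samples, theta_samples)
                    for k in range(len(theta[j])))
            loglik += m_test[v][j] * math.log(p / Z)
    return math.exp(-loglik / total)
```

As a sanity check, if every topic is uniform over a vocabulary of V terms, every heldout word has predictive probability 1/V and the perplexity is exactly V.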
A smaller topic Dirichlet smoothing parameter η generally supports a larger number of topics, as shown in the left column of Figure 3, and hence often leads to lower perplexities, as shown in the middle column of Figure 3; however, an η that is as small as 0.001 (not commonly used in practice) may lead to more than a thousand topics and consequently overfit the corpus, which is particularly evident for the HDP-LDA on both the JACM and PsyReview corpora. Similar trends are also likely to be observed on the larger NIPS12 corpus if we allow the values of η to be even smaller than 0.001. As shown in the middle column of Figure 3, for the same η, the BNBP topic model, usually representing the corpus with a smaller number of topics, often has higher perplexities than those of the HDP-LDA, which is unsurprising as the BNBP topic model has a multiplicative control mechanism to more strongly shrink the number of topics, whereas the HDP has a softer additive shrinkage mechanism. Similar performance differences have also been observed in [7], where the HDP and BNBP are inferred under finite approximations with truncated block Gibbs sampling.

Figure 3: Comparison between the HDP-LDA and BNBP topic model with the topic Dirichlet smoothing parameter η ∈ {0.001, 0.002, 0.005, 0.01, 0.02, 0.05, 0.10, 0.25, 0.50}. For the JACM corpus: (a) the posterior mean of the inferred number of topics K_J and (b) per-word heldout perplexity, both as a function of η, and (c) per-word heldout perplexity as a function of the inferred number of topics K_J; the topic Dirichlet smoothing parameter η and the number of topics K_J are displayed in logarithmic scale. (d)-(f) Analogous plots to (a)-(c) for the PsyReview corpus. (g)-(i) Analogous plots to (a)-(c) for the NIPS12 corpus, where the results of η = 0.002 and 0.001 for the HDP-LDA are omitted.

However, this does not necessarily mean that the HDP-LDA has better predictive performance than the BNBP topic model. In fact, as shown in the right column of Figure 3, the BNBP topic model's perplexity tends to be lower than that of the HDP-LDA if their inferred numbers of topics are comparable and η is not overly small, which implies that the BNBP topic model is able to achieve the same predictive power as the HDP-LDA, but with a more compact representation of the corpus under common experimental settings. While it is interesting to understand the ultimate potentials of the HDP-LDA and BNBP topic model for out-of-sample prediction by setting η to be very small, a moderate η that supports a moderate number of topics is usually preferred in practice, for which the BNBP topic model could be a preferred choice over the HDP-LDA, as our experimental results on three different corpora all suggest that the BNBP topic model can achieve lower perplexity using the same number of topics. To further understand why the BNBP topic model and HDP-LDA have distinct characteristics, one may view them from a count-modeling perspective [7] and examine how they differently control the relationship between the variances and means of the latent topic usage count vectors {(n_{1k}, . . . , n_{Jk})}_k.
We also find that the BNBP collapsed Gibbs sampler clearly outperforms the blocked Gibbs sampler of [7] in terms of convergence speed, computational complexity and memory requirement. But a blocked Gibbs sampler based on finite truncation [7] or adaptive truncation [11] does have a clear advantage in that it is easy to parallelize. The heuristics used to parallelize an HDP collapsed sampler [24] may also be modified to parallelize the proposed BNBP collapsed sampler.

5 Conclusions

A group size dependent exchangeable partition probability function (EPPF) for mixed-membership modeling is developed using the integer-valued beta-negative binomial process (BNBP). The exchangeable random partitions of grouped data, governed by the EPPF of the BNBP, are strongly influenced by the group-specific dispersion parameters. We construct a BNBP nonparametric Bayesian topic model that is distinct from existing ones, intuitive to interpret, and straightforward to implement. The fully collapsed Gibbs sampler converges fast, mixes well, and has state-of-the-art predictive performance when a compact representation of the corpus is desired. The method to derive the EPPF for the BNBP via a group size dependent model is unique, and it is of interest to further investigate whether this method can be generalized to derive new EPPFs for mixed-membership modeling that could be introduced by other integer-valued stochastic processes, including the gamma-Poisson and gamma-negative binomial processes.

References
[1] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Ann. Statist., 1973.
[2] E. Regazzini, A. Lijoi, and I. Prünster. Distributional results for means of normalized random measures with independent increments. Annals of Statistics, 2003.
[3] A. Lijoi and I. Prünster. Models beyond the Dirichlet process. In N. L. Hjort, C. Holmes, P. Müller, and S. G. Walker, editors, Bayesian nonparametrics. Cambridge Univ. Press, 2010.
[4] D. Blackwell and J. MacQueen.
Ferguson distributions via Pólya urn schemes. The Annals of Statistics, 1973.
[5] J. Pitman. Combinatorial stochastic processes. Lecture Notes in Mathematics. Springer-Verlag, 2006.
[6] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. JASA, 2006.
[7] M. Zhou and L. Carin. Negative binomial process count and mixture modeling. To appear in IEEE Trans. Pattern Anal. Mach. Intelligence, 2014.
[8] A. Y. Lo. Bayesian nonparametric statistical inference for Poisson point processes. Zeitschrift für, pages 55–66, 1982.
[9] M. K. Titsias. The infinite gamma-Poisson feature model. In NIPS, 2008.
[10] M. Zhou, L. Hannah, D. Dunson, and L. Carin. Beta-negative binomial process and Poisson factor analysis. In AISTATS, 2012.
[11] T. Broderick, L. Mackey, J. Paisley, and M. I. Jordan. Combinatorial clustering and the beta negative binomial process. To appear in IEEE Trans. Pattern Anal. Mach. Intelligence, 2014.
[12] M. Zhou and S. G. Walker. Sample size dependent species models. arXiv:1410.3155, 2014.
[13] M. Lomelí, S. Favaro, and Y. W. Teh. A marginal sampler for σ-stable Poisson-Kingman mixture models. arXiv preprint arXiv:1407.4211, 2014.
[14] N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. Ann. Statist., 1990.
[15] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In AISTATS, 2007.
[16] C. Heaukulani and D. M. Roy. The combinatorial structure of beta negative binomial processes. arXiv:1401.0062, 2013.
[17] M. Zhou, O.-H. Madrid-Padilla, and J. G. Scott. Priors for random count matrices derived from a family of negative binomial processes. arXiv:1404.3331v2, 2014.
[18] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. In NIPS, 2005.
[19] R. E. Madsen, D. Kauchak, and C. Elkan. Modeling word burstiness using the Dirichlet distribution. In ICML, 2005.
[20] M. Sibuya.
Generalized hypergeometric, digamma and trigamma distributions. Annals of the Institute of Statistical Mathematics, pages 373–390, 1979.
[21] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. J. Mach. Learn. Res., 2003.
[22] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS, 2004.
[23] C. Wang, J. Paisley, and D. M. Blei. Online variational inference for the hierarchical Dirichlet process. In AISTATS, 2011.
[24] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed algorithms for topic models. JMLR, 2009.
[25] H. M. Wallach, I. Murray, R. Salakhutdinov, and D. Mimno. Evaluation methods for topic models. In ICML, 2009.
[26] J. Paisley, C. Wang, and D. Blei. The discrete infinite logistic normal distribution for mixed-membership modeling. In AISTATS, 2011.
[27] I. Porteous, D. Newman, A. Ihler, A. Asuncion, P. Smyth, and M. Welling. Fast collapsed Gibbs sampling for latent Dirichlet allocation. In SIGKDD, 2008.
[28] D. Mimno, M. Hoffman, and D. Blei. Sparse stochastic inference for latent Dirichlet allocation. In ICML, 2012.
2014
Quartz: Randomized Dual Coordinate Ascent with Arbitrary Sampling

Zheng Qu, Department of Mathematics, The University of Hong Kong, Hong Kong. zhengqu@maths.hku.hk
Peter Richtárik, School of Mathematics, The University of Edinburgh, EH9 3FD, United Kingdom. peter.richtarik@ed.ac.uk
Tong Zhang, Department of Statistics, Rutgers University, Piscataway, NJ, 08854. tzhang@stat.rutgers.edu

Abstract

We study the problem of minimizing the average of a large number of smooth convex functions penalized with a strongly convex regularizer. We propose and analyze a novel primal-dual method (Quartz) which at every iteration samples and updates a random subset of the dual variables, chosen according to an arbitrary distribution. In contrast to typical analyses, we directly bound the decrease of the primal-dual error (in expectation), without the need to first analyze the dual error. Depending on the choice of the sampling, we obtain efficient serial and mini-batch variants of the method. In the serial case, our bounds match the best known bounds for SDCA (both with uniform and importance sampling). With standard mini-batching, our bounds predict an initial data-independent speedup as well as an additional data-driven speedup which depends on spectral and sparsity properties of the data.

Keywords: empirical risk minimization, dual coordinate ascent, arbitrary sampling, data-driven speedup.

1 Introduction

In this paper we consider a primal-dual pair of structured convex optimization problems which, in several variants of varying degrees of generality, has attracted a lot of attention in the past few years in the machine learning and optimization communities [4, 22, 20, 23, 21, 27]. Let A_1, . . . , A_n be a collection of d-by-m real matrices and φ_1, . . . , φ_n be 1/γ-smooth convex functions from R^m to R, where γ > 0. Further, let g : R^d → R ∪ {+∞} be a 1-strongly convex function and λ > 0 a regularization parameter.
We are interested in solving the following primal problem:

min_{w=(w_1,...,w_d)∈R^d} [ P(w) := (1/n) Σ_{i=1}^n φ_i(A_i^⊤ w) + λ g(w) ]. (1)

In the machine learning context, the matrices {A_i} are interpreted as examples/samples, w is a (linear) predictor, the function φ_i is the loss incurred by the predictor on example A_i, g is a regularizer, λ is a regularization parameter and (1) is the regularized empirical risk minimization problem. In this paper we are especially interested in problems where n is very big (millions, billions), and much larger than d. This is often the case in big data applications. Stochastic Gradient Descent (SGD) [18, 11, 25] was designed for solving this type of large-scale optimization problem. In each iteration SGD computes the gradient of one single randomly chosen function φ_i and approximates the full gradient using this unbiased but noisy estimate. Because of the variance of the stochastic estimation, SGD has the slow convergence rate O(1/ϵ). Recently, many methods achieving a fast (linear) convergence rate O(log(1/ϵ)) have been proposed, including SAG [19], SVRG [6], S2GD [8], SAGA [1], mS2GD [7] and MISO [10], all using different techniques to reduce the variance. Another approach, such as Stochastic Dual Coordinate Ascent (SDCA) [22], solves (1) by considering its dual problem, defined as follows. For each i, let φ*_i : R^m → R be the convex conjugate of φ_i, namely φ*_i(u) = max_{s∈R^m} s^⊤u − φ_i(s), and similarly let g* : R^d → R be the convex conjugate of g. The dual problem of (1) is defined as:

max_{α=(α_1,...,α_n)∈R^N=R^{nm}} [ D(α) := −f(α) − ψ(α) ], (2)

where α = (α_1, . . . , α_n) ∈ R^N = R^{nm} is obtained by stacking dual variables (blocks) α_i ∈ R^m, i = 1, . . . , n, on top of each other, and the functions f and ψ are defined by

f(α) := λ g*( (1/(λn)) Σ_{i=1}^n A_i α_i );  ψ(α) := (1/n) Σ_{i=1}^n φ*_i(−α_i).
(3)

SDCA [22] and its proximal extension Prox-SDCA [20] first solve the dual problem (2), by updating uniformly at random one dual variable at each round, and then recover the primal solution by setting w = ∇g*(ᾱ). Let L_i = λ_max(A_i^⊤ A_i). It is known that if we run SDCA for at least

O( (n + max_i L_i/(λγ)) · log( (n + max_i L_i/(λγ)) · (1/ϵ) ) )

iterations, then SDCA finds a pair (w, α) such that E[P(w) − D(α)] ≤ ϵ. By applying accelerated randomized coordinate descent to the dual problem, APCG [9] needs at most Õ(n + √(n · max_i L_i/(λγ))) iterations to reach ϵ-accuracy. ASDCA [21] and SPDC [26] are also accelerated and randomized primal-dual methods. Moreover, they can update a mini-batch of dual variables in each round. We propose a new algorithm (Algorithm 1), which we call Quartz, for simultaneously solving the primal (1) and dual (2) problems. On the dual side, at each iteration our method selects and updates a random subset (sampling) Ŝ ⊆ {1, . . . , n} of the dual variables/blocks. We assume that these sets are i.i.d. throughout the iterations. However, we do not impose any additional assumptions on the distribution of Ŝ apart from the necessary requirement that each block i needs to be chosen with a positive probability: p_i := P(i ∈ Ŝ) > 0. Quartz is the first SDCA-like method analyzed for an arbitrary sampling. The dual updates are then used to perform an update to the primal variable w and the process is repeated. Our primal updates are different (less aggressive) than those used in SDCA [22] and Prox-SDCA [20], thanks to which the decrease in the primal-dual error can be bounded directly, without first establishing the dual convergence as in [20], [23] and [9]. Our analysis is novel and directly primal-dual in nature. As a result, our proof is more direct, and the logarithmic term in our bound has a simpler form.

Main result.
We prove that starting from an initial pair (w⁰, α⁰), Quartz finds a pair (w, α) for which P(w) − D(α) ≤ ϵ (in expectation) in at most

max_i ( 1/p_i + v_i/(p_i λγn) ) · log( (P(w⁰) − D(α⁰))/ϵ ) (4)

iterations. The parameters v_1, . . . , v_n are assumed to satisfy the following ESO (expected separable overapproximation) inequality:

E_Ŝ[ ‖Σ_{i∈Ŝ} A_i h_i‖² ] ≤ Σ_{i=1}^n p_i v_i ‖h_i‖², (5)

where ‖·‖ denotes the standard Euclidean norm. Moreover, the parameters v_1, . . . , v_n are needed to run the method (they determine the stepsizes), and hence it is critical that they can be cheaply computed before the method starts. We wish to point out that (5) always holds for some parameters {v_i}. Indeed, the left-hand side is a quadratic function of h, and hence the inequality holds for large enough v_i. Having said that, the size of these parameters directly influences the complexity, and hence one would want to obtain bounds as tight as possible. As we will show, for many samplings of interest, small enough parameters v can be obtained in the time required to read the data {A_i}. In particular, if the data matrix A = (A_1, . . . , A_n) is sufficiently sparse, our iteration complexity result (4), specialized to the case of standard mini-batching, can be better than that of accelerated methods such as ASDCA [21] and SPDC [26] even when the condition number max_i L_i/(λγ) is larger than n; see Proposition 4 and Figure 2. As described above, Quartz uses an arbitrary sampling for picking the dual variables to be updated in each iteration. To the best of our knowledge, only two papers exist in the literature where a stochastic method using an arbitrary sampling was analyzed: NSync [16], for unconstrained minimization of a strongly convex function, and ALPHA [15], for composite minimization of a non-strongly convex function. Assumption (5) was first introduced in [16]. However, NSync is not a primal-dual method.
Besides NSync, the closest works to ours in terms of the generality of the sampling are PCDM [17], SPCDM [3] and APPROX [2]. All these are randomized coordinate descent methods, and all were analyzed for arbitrary uniform samplings (i.e., samplings satisfying P(i ∈ Ŝ) = P(i′ ∈ Ŝ) for all i, i′ ∈ {1, . . . , n}). Again, none of these methods were analyzed in a primal-dual framework. In Section 2 we describe the algorithm, show that it admits a natural interpretation in terms of Fenchel duality, and discuss the flexibility of Quartz. We then proceed to Section 3, where we state the main result, specialize it to the samplings discussed in Section 2, and give a detailed comparison of our results with existing results for related primal-dual stochastic methods in the literature. In Section 4 we demonstrate how Quartz compares to other related methods through numerical experiments.

2 The Quartz Algorithm

Throughout the paper we consider the standard Euclidean norm, denoted by ‖·‖. A function φ : R^m → R is (1/γ)-smooth if it is differentiable and has a Lipschitz continuous gradient with Lipschitz constant 1/γ:

‖∇φ(x) − ∇φ(y)‖ ≤ (1/γ) ‖x − y‖, for all x, y ∈ R^m.

A function g : R^d → R ∪ {+∞} is 1-strongly convex if

g(w) ≥ g(w′) + ⟨∇g(w′), w − w′⟩ + (1/2) ‖w − w′‖²

for all w, w′ ∈ dom(g), where dom(g) denotes the domain of g and ∇g(w′) is a subgradient of g at w′. The most important parameter of Quartz is a random sampling Ŝ, which is a random subset of [n] = {1, 2, . . . , n}. The only assumption we make on the sampling Ŝ in this paper is the following:

Assumption 1 (Proper sampling) Ŝ is a proper sampling; that is, p_i := P(i ∈ Ŝ) > 0, i ∈ [n]. (6)

This assumption guarantees that each block (dual variable) has a chance to get updated by the method. Prior to running the algorithm, we compute positive constants v_1, . . . , v_n satisfying (5) to define the stepsize parameter θ used throughout the algorithm:

θ = min_i p_i λγn / (v_i + λγn).
(7)

Note from (5) that θ depends on both the data matrix A and the sampling Ŝ. We shall show how to compute, in less than two passes over the data, the parameter v satisfying (5) for some examples of samplings in Section 2.2.

2.1 Interpretation of Quartz through Fenchel duality

Algorithm 1 Quartz
  Parameters: proper random sampling Ŝ and a positive vector v ∈ R^n
  Initialization: α⁰ ∈ R^N; w⁰ ∈ R^d; p_i = P(i ∈ Ŝ); θ = min_i p_i λγn/(v_i + λγn); ᾱ⁰ = (1/(λn)) Σ_{i=1}^n A_i α⁰_i
  for t ≥ 1 do
    w^t = (1 − θ) w^{t−1} + θ ∇g*(ᾱ^{t−1})
    α^t = α^{t−1}
    Generate a random set S_t ⊆ [n], following the distribution of Ŝ
    for i ∈ S_t do
      α^t_i = (1 − θ p_i^{−1}) α^{t−1}_i − θ p_i^{−1} ∇φ_i(A_i^⊤ w^t)
    end for
    ᾱ^t = ᾱ^{t−1} + (λn)^{−1} Σ_{i∈S_t} A_i (α^t_i − α^{t−1}_i)
  end for
  Output: w^t, α^t

Quartz (Algorithm 1) has a natural interpretation in terms of Fenchel duality. Let (w, α) ∈ R^d × R^N and define ᾱ = (1/(λn)) Σ_{i=1}^n A_i α_i. Using (1) and (2), the duality gap for the pair (w, α) can be decomposed as:

P(w) − D(α) = λ (g(w) + g*(ᾱ)) + (1/n) Σ_{i=1}^n [ φ_i(A_i^⊤ w) + φ*_i(−α_i) ]
            = λ ( g(w) + g*(ᾱ) − ⟨w, ᾱ⟩ )  [=: GAP_g(w, α)]
              + (1/n) Σ_{i=1}^n [ φ_i(A_i^⊤ w) + φ*_i(−α_i) + ⟨A_i^⊤ w, α_i⟩ ]  [each summand =: GAP_{φ_i}(w, α_i)].

By the Fenchel-Young inequality, GAP_g(w, α) ≥ 0 and GAP_{φ_i}(w, α_i) ≥ 0 for all i, which proves weak duality for the problems (1) and (2), i.e., P(w) ≥ D(α). The pair (w, α) is optimal when GAP_g and all GAP_{φ_i} are zero. It is known that this happens precisely when the following optimality conditions hold:

w = ∇g*(ᾱ), (8)
α_i = −∇φ_i(A_i^⊤ w), i ∈ [n]. (9)

We will now interpret the primal and dual steps of Quartz in terms of the above discussion. It is easy to see that Algorithm 1 updates the primal and dual variables as follows:

w^t = (1 − θ) w^{t−1} + θ ∇g*(ᾱ^{t−1}), (10)
α^t_i = (1 − θ p_i^{−1}) α^{t−1}_i + θ p_i^{−1} ( −∇φ_i(A_i^⊤ w^t) ) for i ∈ S_t;  α^t_i = α^{t−1}_i for i ∉ S_t, (11)

where ᾱ^{t−1} = (1/(λn)) Σ_{i=1}^n A_i α^{t−1}_i, θ is the constant defined in (7), and S_t ∼ Ŝ is a random subset of [n].
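To make Algorithm 1 concrete, here is a minimal serial-uniform instantiation for a toy ridge-regression instance; the setup is our own choice, not from the paper: squared loss φ_i(a) = (a − y_i)²/2 (so γ = 1 and ∇φ_i(a) = a − y_i), g(w) = ‖w‖²/2 (so ∇g*(ᾱ) = ᾱ), block size m = 1, p_i = 1/n, and v_i = ‖A_i‖² as in Eq. (12) below.

```python
import random

def quartz_serial(A, y, lam, T=4000, seed=0):
    """Sketch of Quartz (Algorithm 1) with serial uniform sampling for a toy
    quadratic instance. A is a list of n example rows A_i (each of length d)."""
    rng = random.Random(seed)
    n, d = len(A), len(A[0])
    gamma, p = 1.0, 1.0 / n
    v = [sum(a * a for a in Ai) for Ai in A]      # v_i = ||A_i||^2, Eq. (12)
    theta = min(p * lam * gamma * n / (vi + lam * gamma * n) for vi in v)
    alpha = [0.0] * n
    abar = [0.0] * d                              # (1/(lam n)) sum_i A_i alpha_i
    w = [0.0] * d
    for _ in range(T):
        # primal step (10): convex combination with grad g*(abar) = abar
        w = [(1 - theta) * wj + theta * aj for wj, aj in zip(w, abar)]
        # dual step (11) on one uniformly sampled block i
        i = rng.randrange(n)
        grad = sum(aij * wj for aij, wj in zip(A[i], w)) - y[i]   # phi_i'(A_i^T w)
        new_ai = (1 - theta / p) * alpha[i] - (theta / p) * grad
        abar = [aj + (new_ai - alpha[i]) * aij / (lam * n)
                for aj, aij in zip(abar, A[i])]
        alpha[i] = new_ai
    return w
```

On a two-example one-dimensional problem the iterates approach the regularized least-squares solution w* = (AᵀA/n + λI)⁻¹ Aᵀy/n, and the pair (w, ᾱ) approaches the fixed point described by (8)-(9).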
In other words, at iteration t we first set the primal variable w^t to be a convex combination of its current value w^{t−1} and a value reducing GAP_g to zero: see (10). This is followed by adjusting a subset of dual variables corresponding to a randomly chosen set of examples S_t such that for each example i ∈ S_t, the ith dual variable α^t_i is set to be a convex combination of its current value α^{t−1}_i and a value reducing GAP_{φ_i} to zero, see (11).

2.2 Flexibility of Quartz

Clearly, there are many ways in which the distribution of Ŝ can be chosen, leading to numerous variants of Quartz. The convex combination constant θ used throughout the algorithm should be tuned according to (7), where v_1, . . . , v_n are constants satisfying (5). Note that the best possible v is obtained by computing the maximal eigenvalue of the matrix (A^⊤A) ∘ P, where ∘ denotes the Hadamard (component-wise) product of matrices and P ∈ R^{N×N} is an n-by-n block matrix with all elements in block (i, j) equal to P(i ∈ Ŝ, j ∈ Ŝ); see [14]. However, the worst-case complexity of directly computing the maximal eigenvalue of (A^⊤A) ∘ P amounts to O(N²), which requires unreasonable preprocessing time in the context of machine learning, where N is assumed to be very large. We now describe some examples of samplings Ŝ and show how to compute, in less than two passes over the data, the corresponding constants v_1, . . . , v_n. More examples, including distributed sampling, are presented in the supplementary material.

Serial sampling. The most studied sampling in the literature on stochastic optimization is the serial sampling, which corresponds to the selection of a single block i ∈ [n]. That is, |Ŝ| = 1 with probability 1. The name "serial" points to the fact that a method using such a sampling will typically be a serial (as opposed to parallel) method, updating a single block (dual variable) at a time. A serial sampling is uniquely characterized by the vector of probabilities p = (p1, . . .
, pn), where p_i is defined by (6). For a serial sampling Ŝ, it is easy to see that (5) is satisfied for

v_i = L_i := λ_max(A_i^⊤ A_i), i ∈ [n], (12)

where λ_max(·) denotes the maximal eigenvalue.

Standard mini-batching. We now consider Ŝ which selects subsets of [n] of cardinality τ, uniformly at random. In the terminology established in [17], such Ŝ is called τ-nice. This sampling satisfies p_i = p_j for all i, j ∈ [n], and hence it is uniform. This sampling is well suited for parallel computing. Indeed, Quartz could be implemented as follows. If we have τ processors available, then at the beginning of iteration t we can assign each block (dual variable) in S_t to a dedicated processor. The processor assigned to i would then compute Δα^t_i and apply the update. If all processors have fast access to the memory where all the data is stored, as is the case in a shared-memory multicore workstation, then this way of assigning workload to the individual processors does not cause any major problems. For the τ-nice sampling, (5) is satisfied for

v_i = λ_max(M_i),  M_i = Σ_{j=1}^d ( 1 + (ω_j − 1)(τ − 1)/(n − 1) ) A_{ji}^⊤ A_{ji}, i ∈ [n], (13)

where for each j ∈ [d], ω_j is the number of nonzero blocks in the jth row of matrix A, i.e.,

ω_j := |{i ∈ [n] : A_{ji} ≠ 0}|, j ∈ [d]. (14)

Note that (13) follows from an extension of a formula given in [2] from m = 1 to m ≥ 1.

3 Main Result

The complexity of our method is given by the following theorem. The proof can be found in the supplementary material.

Theorem 2 (Main Result) Assume that g is 1-strongly convex and that for each i ∈ [n], φ_i is convex and (1/γ)-smooth. Let Ŝ be a proper sampling (Assumption 1) and v_1, . . . , v_n be positive scalars satisfying (5). Then the sequence of primal and dual variables {w^t, α^t}_{t≥0} of Quartz (Algorithm 1) satisfies:

E[P(w^t) − D(α^t)] ≤ (1 − θ)^t (P(w⁰) − D(α⁰)), (15)

where θ is defined in (7). In particular, if we fix ϵ ≤ P(w⁰) − D(α⁰), then

T ≥ max_i ( 1/p_i + v_i/(p_i λγn) ) · log( (P(w⁰) − D(α⁰))/ϵ )  ⇒  E[P(w^T) − D(α^T)] ≤ ϵ.
(16)

In order to put the above result into context, in the rest of this section we will specialize it to two special samplings: a serial sampling, and the τ-nice sampling.

3.1 Quartz with serial sampling

When Ŝ is a serial sampling, we just need to plug (12) into (16) and derive the bound

T ≥ max_i ( 1/p_i + L_i/(p_i λγn) ) · log( (P(w⁰) − D(α⁰))/ϵ )  ⇒  E[P(w^T) − D(α^T)] ≤ ϵ. (17)

If, in addition, Ŝ is uniform, then p_i = 1/n for all i ∈ [n], and we refer to this special case of Quartz as Quartz-U. By replacing p_i = 1/n in (17) we obtain directly the complexity of Quartz-U:

T ≥ ( n + max_i L_i/(λγ) ) · log( (P(w⁰) − D(α⁰))/ϵ )  ⇒  E[P(w^T) − D(α^T)] ≤ ϵ. (18)

Otherwise, we can seek to optimize the right-hand side of the inequality in (17) with respect to the sampling probabilities p to obtain the best bound. A simple calculation reveals that the optimal probability is given by:

P(Ŝ = {i}) = p*_i := (L_i + λγn) / Σ_{i=1}^n (L_i + λγn). (19)

We shall call Quartz-IP the algorithm obtained by using the above serial sampling probability. The following complexity result of Quartz-IP can be derived easily by plugging (19) into (17):

T ≥ ( n + (Σ_{i=1}^n L_i)/(nλγ) ) · log( (P(w⁰) − D(α⁰))/ϵ )  ⇒  E[P(w^T) − D(α^T)] ≤ ϵ. (20)

Note that in contrast with the complexity result of Quartz-U (18), we now have dependence on the average of the eigenvalues L_i.

Quartz-U vs Prox-SDCA. Quartz-U should be compared to Proximal Stochastic Dual Coordinate Ascent (Prox-SDCA) [22, 20]. Indeed, the dual update of Prox-SDCA takes exactly the same form as that of Quartz-U¹; see (11). The main difference is how the primal variable w^t is updated: while Quartz performs the update (10), Prox-SDCA (see also [24, 5]) performs the more aggressive update w^t = ∇g*(ᾱ^{t−1}), and the complexity result of Prox-SDCA is as follows:

T ≥ ( n + max_i L_i/(λγ) ) · log( ( n + max_i L_i/(λγ) ) · (D(α*) − D(α⁰))/ϵ )  ⇒  E[P(w^T) − D(α^T)] ≤ ϵ, (21)

where α* is the dual optimal solution.
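The gain from the importance sampling (19) is easy to quantify numerically; the following sketch (our own helper functions, not from the paper) evaluates the optimal probabilities of (19) and the leading factor of the bound (17) under a given sampling:

```python
def importance_probs(L, lam, gamma):
    """Optimal serial-sampling probabilities p*_i of Eq. (19)."""
    n = len(L)
    weights = [Li + lam * gamma * n for Li in L]
    total = sum(weights)
    return [wi / total for wi in weights]

def serial_complexity(L, lam, gamma, p):
    """Leading factor max_i (1/p_i + L_i/(p_i * lam * gamma * n)) of the
    serial-sampling bound (17)."""
    n = len(L)
    return max(1.0 / pi + Li / (pi * lam * gamma * n) for Li, pi in zip(L, p))
```

With p = p* the factor collapses to n + (Σ_i L_i)/(nλγ) as in (20), replacing the maximum of the L_i in (18) by their average; on heterogeneous data (e.g. L = (1, 10)) the importance-sampled bound is strictly smaller than the uniform one.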
Notice that the dominant terms in (18) and (21) exactly match, although our logarithmic term is better and simpler. This is due to a direct bound on the decrease of the primal-dual error of Quartz, without the need to first analyze the dual error, in contrast to the typical approach for most dual coordinate ascent methods [22, 23, 20, 21, 9].

Quartz-IP vs Iprox-SDCA. The importance sampling (19) was previously used in the algorithm Iprox-SDCA [27], which extends Prox-SDCA to non-uniform serial samplings. The complexity of Quartz-IP (20) should then be compared with the following complexity result of Iprox-SDCA [27]:

T ≥ ( n + (Σ_{i=1}^n L_i)/(nλγ) ) · log( ( n + (Σ_{i=1}^n L_i)/(nλγ) ) · (D(α*) − D(α⁰))/ϵ )  ⇒  E[P(w^T) − D(α^T)] ≤ ϵ. (22)

Again, the dominant terms in (20) and (22) exactly match, but our logarithmic term is smaller.

3.2 Quartz with τ-nice sampling (standard mini-batching)

We now specialize Theorem 2 to the case of the τ-nice sampling. We define ω̃ such that:

max_i λ_max( Σ_{j=1}^d ( 1 + (ω_j − 1)(τ − 1)/(n − 1) ) A_{ji}^⊤ A_{ji} ) = ( 1 + (ω̃ − 1)(τ − 1)/(n − 1) ) · max_i L_i.

It is clear that 1 ≤ ω̃ ≤ max_j ω_j ≤ n, so ω̃ can be considered a measure of the density of the data. By plugging (13) into (16) we obtain directly the following corollary.

Corollary 3 Assume Ŝ is the τ-nice sampling and v is chosen as in (13). If we fix ϵ ≤ P(w⁰) − D(α⁰), then

T ≥ ( n/τ + ( 1 + (ω̃ − 1)(τ − 1)/(n − 1) ) · max_i L_i/(λγτ) ) · log( (P(w⁰) − D(α⁰))/ϵ )  ⇒  E[P(w^T) − D(α^T)] ≤ ϵ. (23)

Let us now have a detailed look at the above result, especially in terms of how it compares with the serial uniform case (18). For fully sparse data, we get perfect linear speedup: the bound in (23) is a 1/τ fraction of the bound in (18). For fully dense data, the condition number (κ := max_i L_i/(λγ)) is unaffected by mini-batching. For general data, the behaviour of Quartz with τ-nice sampling interpolates between these two extreme cases.
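For the scalar-block case m = 1, the parameters v_i in (13) reduce to weighted squared column norms and the ESO (5) can be verified by exact enumeration over all τ-subsets. A small check we wrote for illustration (data layout and names are our own):

```python
import itertools

def tau_nice_eso(A, tau, h):
    """For block size m = 1, compute v_i as in Eq. (13)-(14) and check the
    ESO inequality (5) for the tau-nice sampling by exact enumeration.
    A is stored as d rows of n scalars; returns (lhs, rhs), with lhs <= rhs."""
    d, n = len(A), len(A[0])
    # omega_j: number of nonzero entries in row j, Eq. (14)
    omega = [sum(1 for x in row if x != 0) for row in A]
    # v_i from Eq. (13); for m = 1, lambda_max(M_i) is a weighted squared norm
    v = [sum((1 + (omega[j] - 1) * (tau - 1) / (n - 1)) * A[j][i] ** 2
             for j in range(d)) for i in range(n)]
    p = tau / n  # p_i for the tau-nice sampling
    subsets = list(itertools.combinations(range(n), tau))
    # left-hand side of (5): E || sum_{i in S} A_i h_i ||^2, exact expectation
    lhs = sum(sum(sum(A[j][i] * h[i] for i in S) ** 2 for j in range(d))
              for S in subsets) / len(subsets)
    rhs = sum(p * v[i] * h[i] ** 2 for i in range(n))
    return lhs, rhs
```

For τ = 1 this recovers the serial case with v_i = ‖A_i‖², where (5) holds with equality.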
It is important to note that regardless of the condition number κ, as long as τ ≤ 1 + (n − 1)/(ω̃ − 1), the bound in (23) is at most a 2/τ fraction of the bound in (18). Hence, for sparser problems, Quartz can achieve linear speedup for larger mini-batch sizes.

¹In [20] the authors proposed five options of dual updating rule. Our dual updating formula (11) should be compared with Option V in Prox-SDCA. For the same reason as given in the beginning of [20, Appendix A], Quartz implemented with any of the other four options achieves the same complexity result as Theorem 2.

3.3 Quartz vs existing primal-dual mini-batch methods

We now compare the above result with existing mini-batch stochastic dual coordinate ascent methods. The mini-batch variants of SDCA, to which Quartz with τ-nice sampling can be naturally compared, have been proposed and analyzed previously in [23], [21] and [26]. In [23], the authors proposed a so-called safe mini-batching, which is precisely equivalent to finding the stepsize parameter v satisfying (5) (in the special case of τ-nice sampling); however, they only analyzed the case where the functions {φ_i} are non-smooth. In [21], the authors studied accelerated mini-batch SDCA (ASDCA), specialized to the case when the regularizer g is the squared L2 norm. They showed that the complexity of ASDCA interpolates between that of SDCA and accelerated gradient descent (AGD) [13] as the mini-batch size τ varies. In [26], the authors proposed a mini-batch extension of their stochastic primal-dual coordinate algorithm (SPDC). Both ASDCA and SPDC reach the same complexity as AGD when the mini-batch size equals n, and thus should be considered accelerated algorithms². The complexity bounds for all these algorithms are summarized in Table 1. In Table 2 we compare the complexities of SDCA, ASDCA, SPDC and Quartz in several regimes.
| Algorithm | Iteration complexity | g |
| SDCA [22] | n + 1/(λγ) | (1/2)∥·∥² |
| ASDCA [21] | 4 × max{ n/τ, √(n/(λγτ)), 1/(λγτ), n^{1/3}/(λγτ)^{2/3} } | (1/2)∥·∥² |
| SPDC [26] | n/τ + √(n/(λγτ)) | general |
| Quartz with τ-nice sampling | n/τ + (1 + (ω̃−1)(τ−1)/(n−1)) · 1/(λγτ) | general |

Table 1: Comparison of the iteration complexity of several primal-dual algorithms performing stochastic coordinate ascent steps in the dual using a mini-batch of examples of size τ (with the exception of SDCA, which is a serial method using τ = 1).

| Algorithm | γλn = Θ(1/√n) | γλn = Θ(1) | γλn = Θ(τ) | γλn = Θ(√n) |
| SDCA [22] | n^{3/2} | n | n | n |
| ASDCA [21] | n^{3/2}/τ + n^{5/4}/√τ + n^{4/3}/τ^{2/3} | n/√τ | n/τ | n/τ + n^{3/4}/√τ |
| SPDC [26] | n^{5/4}/√τ | n/√τ | n/τ | n/τ + n^{3/4}/√τ |
| Quartz (τ-nice) | n^{3/2}/τ + ω̃√n | n/τ + ω̃ | n/τ | n/τ + ω̃/√n |

Table 2: Comparison of leading factors in the complexity bounds of several methods in several regimes.

Looking at Table 2, we see that in the γλn = Θ(τ) regime (i.e., if the condition number is κ = Θ(n/τ)), Quartz matches the linear speedup (when compared to SDCA) of ASDCA and SPDC. When the condition number is roughly equal to the sample size (κ = Θ(n)), then Quartz does better than both ASDCA and SPDC as long as n/τ + ω̃ ≤ n/√τ; in particular, this is the case when the data is sparse: ω̃ ≤ n/√τ. If the data is even more sparse (and in many big data applications one has ω̃ = O(1)), so that ω̃ ≤ n/τ, then Quartz significantly outperforms both ASDCA and SPDC. Note that Quartz can be better than both ASDCA and SPDC even in the domain of accelerated methods, that is, when the condition number is larger than the number of examples: κ = 1/(γλ) ≥ n. Indeed, we have the following result:

Proposition 4. Assume that nλγ ≤ 1 and that max_i L_i = 1. If the data is sufficiently sparse so that

λγτn ≥ ( 1 + nλγ + (ω̃−1)(τ−1)/(n−1) )²,  (24)

then the iteration complexity (in Õ order) of Quartz is better than that of ASDCA and SPDC.
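As an illustration of Proposition 4, the sketch below (Python; the regime values are hypothetical, and the bounds are the leading factors from Table 1 with max_i L_i = 1) exhibits a sparse, ill-conditioned instance (κ ≥ n) for which condition (24) holds and the Quartz factor is smaller than those of ASDCA and SPDC:

```python
import math

def quartz_bound(n, lg, tau, omega_tilde):
    # Leading factor from Table 1, with lg = lambda * gamma and max_i L_i = 1.
    return n / tau + (1 + (omega_tilde - 1) * (tau - 1) / (n - 1)) / (lg * tau)

def spdc_bound(n, lg, tau):
    return n / tau + math.sqrt(n / (lg * tau))

def asdca_bound(n, lg, tau):
    return 4 * max(n / tau, math.sqrt(n / (lg * tau)), 1 / (lg * tau),
                   n ** (1 / 3) / (lg * tau) ** (2 / 3))

def condition_24(n, lg, tau, omega_tilde):
    return lg * tau * n >= (1 + n * lg + (omega_tilde - 1) * (tau - 1) / (n - 1)) ** 2

# Hypothetical sparse, ill-conditioned regime: kappa = 1/lg = 10^7 >= n = 10^6.
n, lg, tau, omega_tilde = 10**6, 1e-7, 1000, 2.0

assert n * lg <= 1 and condition_24(n, lg, tau, omega_tilde)
assert quartz_bound(n, lg, tau, omega_tilde) < spdc_bound(n, lg, tau)
assert quartz_bound(n, lg, tau, omega_tilde) < asdca_bound(n, lg, tau)
```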
The result can be interpreted as follows: if n ≤ κ ≤ τn/(1 + n/κ)², that is, τ ≥ λγτn ≥ (1 + nλγ)², then there are sparse-enough problems for which Quartz is better than both ASDCA and SPDC.

²APCG [9] also reaches an accelerated convergence rate but was not proposed in the mini-batch setting.

4 Experimental Results

In this section we demonstrate how Quartz specialized to different samplings compares with other methods. All of our experiments are performed with m = 1, for smoothed hinge-loss functions {φ_i} with γ = 1 and squared L2-regularizer g, see [20]. The experiments were performed on the three datasets reported in Table 3, and on three randomly generated large datasets [12] with n = 100,000 examples and d = 100,000 features, with different sparsity levels.

In Figure 1 we compare Quartz specialized to serial sampling, for both uniform and optimal sampling, with Prox-SDCA and Iprox-SDCA (previously discussed in Section 3.1) on three datasets. Due to the conservative primal update in Quartz, Quartz-U appears to be slower than Prox-SDCA in practice. Nevertheless, in all the experiments, Quartz-IP shows almost identical convergence behaviour to that of Iprox-SDCA.

In Figure 2 we compare Quartz specialized to τ-nice sampling with mini-batch SPDC for different values of τ, in the domain of accelerated methods (κ = 10n). The datasets are randomly generated following [13, Section 6]. When τ = 1, it is clear that SPDC outperforms Quartz, as the condition number is larger than n. However, as τ increases, the amount of data processed by SPDC grows by a factor of √τ, as predicted by its theory, whereas the amount of data processed by Quartz remains almost the same, thanks to the large sparsity of the data. Hence, Quartz is much better in the large-τ regime.

| Dataset | # Training size n | # features d | Sparsity (# nnz/(nd)) |
| cov1 | 522,911 | 54 | 22.22% |
| w8a | 49,749 | 300 | 3.91% |
| ijcnn1 | 49,990 | 22 | 59.09% |

Table 3: Datasets used in our experiments.
[Figure 1: Comparison of Quartz-U (uniform sampling), Quartz-IP (optimal importance sampling), Prox-SDCA (uniform sampling) and Iprox-SDCA (optimal importance sampling). Primal-dual gap vs. number of epochs on: (a) cov1, n = 522911, λ = 1e-06; (b) w8a, n = 49749, λ = 1e-05; (c) ijcnn1, n = 49990, λ = 1e-05.]

[Figure 2: Comparison of Quartz with SPDC for different mini-batch sizes τ in the regime κ = 10n. Primal-dual gap vs. number of epochs on: (a) Rand1, n = 10⁵, λ = 1e-06, τ ∈ {1, 100, 1000}; (b) Rand2, n = 10⁵, λ = 1e-06, τ ∈ {1, 10, 100}; (c) Rand3, n = 10⁵, λ = 1e-06, τ ∈ {1, 10, 40}. The three random datasets Rand1, Rand2 and Rand3 have respective sparsity 0.01%, 0.1% and 1%.]

References

[1] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, pages 1646–1654, 2014.
[2] O. Fercoq and P. Richtárik. Accelerated, parallel and proximal coordinate descent. SIAM Journal on Optimization (after minor revision), arXiv:1312.5799, 2013.
[3] O. Fercoq and P. Richtárik. Smooth minimization of nonsmooth functions by parallel coordinate descent. arXiv:1309.5885, 2013.
[4] C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S.S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In Proc.
of the 25th International Conference on Machine Learning, ICML ’08, pages 408–415, 2008.
[5] M. Jaggi, V. Smith, M. Takáč, J. Terhorst, S. Krishnan, T. Hofmann, and M.I. Jordan. Communication-efficient distributed dual coordinate ascent. In Advances in Neural Information Processing Systems 27, pages 3068–3076. Curran Associates, Inc., 2014.
[6] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 315–323, 2013.
[7] J. Konečný, J. Lu, P. Richtárik, and M. Takáč. mS2GD: Mini-batch semi-stochastic gradient descent in the proximal setting. arXiv:1410.4744, 2014.
[8] J. Konečný and P. Richtárik. S2GD: Semi-stochastic gradient descent methods. arXiv:1312.1666, 2013.
[9] Q. Lin, Z. Lu, and L. Xiao. An accelerated proximal coordinate gradient method and its application to regularized empirical risk minimization. Technical Report MSR-TR-2014-94, July 2014.
[10] J. Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. SIAM J. Optim., 25(2):829–855, 2015.
[11] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574–1609, 2008.
[12] Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim., 22(2):341–362, 2012.
[13] Y. Nesterov. Gradient methods for minimizing composite functions. Math. Program., 140(1, Ser. B):125–161, 2013.
[14] Z. Qu and P. Richtárik. Coordinate descent methods with arbitrary sampling II: Expected separable overapproximation. arXiv:1412.8063, 2014.
[15] Z. Qu and P. Richtárik. Coordinate descent methods with arbitrary sampling I: Algorithms and complexity. arXiv:1412.8060, 2014.
[16] P. Richtárik and M. Takáč.
On optimal probabilities in stochastic coordinate descent methods. Optimization Letters, published online 2015.
[17] P. Richtárik and M. Takáč. Parallel coordinate descent methods for big data optimization. Math. Program., published online 2015.
[18] H. Robbins and S. Monro. A stochastic approximation method. Ann. Math. Statistics, 22:400–407, 1951.
[19] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. arXiv:1309.2388, 2013.
[20] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent. arXiv:1211.2717, 2012.
[21] S. Shalev-Shwartz and T. Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems 26, pages 378–385, 2013.
[22] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. J. Mach. Learn. Res., 14(1):567–599, February 2013.
[23] M. Takáč, A.S. Bijral, P. Richtárik, and N. Srebro. Mini-batch primal and dual methods for SVMs. In Proc. of the 30th International Conference on Machine Learning (ICML-13), pages 1022–1030, 2013.
[24] T. Yang. Trading computation for communication: Distributed stochastic dual coordinate ascent. In Advances in Neural Information Processing Systems 26, pages 629–637, 2013.
[25] T. Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In Proc. of the 21st International Conference on Machine Learning (ICML-04), pages 919–926, 2004.
[26] Y. Zhang and L. Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. In Proc. of the 32nd International Conference on Machine Learning (ICML-15), pages 353–361, 2015.
[27] P. Zhao and T. Zhang. Stochastic optimization with importance sampling. In Proc. of the 32nd International Conference on Machine Learning (ICML-15), 2015.
Parallel Recursive Best-First AND/OR Search for Exact MAP Inference in Graphical Models

Akihiro Kishimoto, IBM Research, Ireland, akihirok@ie.ibm.com
Radu Marinescu, IBM Research, Ireland, radu.marinescu@ie.ibm.com
Adi Botea, IBM Research, Ireland, adibotea@ie.ibm.com

Abstract

The paper presents and evaluates the power of parallel search for exact MAP inference in graphical models. We introduce a new parallel shared-memory recursive best-first AND/OR search algorithm, called SPRBFAOO, that explores the search space in a best-first manner while operating with restricted memory. Our experiments show that SPRBFAOO is often superior to the current state-of-the-art sequential AND/OR search approaches, leading to considerable speed-ups (up to 7-fold with 12 threads), especially on hard problem instances.

1 Introduction

Graphical models provide a powerful framework for reasoning with probabilistic information. These models use graphs to capture conditional independencies between variables, allowing a concise knowledge representation and efficient graph-based query processing algorithms. Combinatorial maximization, or maximum a posteriori (MAP), tasks arise in many applications and can often be solved efficiently by search schemes, especially in the context of AND/OR search spaces that are sensitive to the underlying problem structure [1]. Recursive best-first AND/OR search (RBFAOO) is a recent yet very powerful scheme for exact MAP inference that was shown to outperform current state-of-the-art depth-first and best-first methods by several orders of magnitude on a variety of benchmarks [2]. RBFAOO explores the context minimal AND/OR search graph associated with a graphical model in a best-first manner (even with non-monotonic heuristics) while running within restricted memory.
RBFAOO extends Recursive Best-First Search (RBFS) [3] to graphical models and thus uses a threshold controlling technique to drive the search in a depth-first-like manner while using the available memory for caching. Up to now, search-based MAP solvers were developed primarily as sequential search algorithms. However, now that multi-core computing systems are ubiquitous, parallel processing is a natural way to extract substantial speed-ups from the hardware and boost the performance of a problem solver. Parallel search has been successfully employed in a variety of AI areas, including planning [4], satisfiability [5], and game playing [6, 7]. However, little research has been devoted to solving graphical models in parallel. The only parallel search scheme for MAP inference in graphical models that we are aware of is the distributed AND/OR Branch and Bound algorithm (daoopt) [8]. It assumes, however, a large and distributed computational grid environment with hundreds of independent and loosely connected computing systems, without access to a shared memory space for caching and reusing partial results.

Contribution. In this paper, we take a radically different approach and explore the potential of parallel search for MAP tasks in a shared-memory environment which, to our knowledge, has not been attempted before. We introduce SPRBFAOO, a new parallelization of RBFAOO in shared-memory environments. SPRBFAOO maintains a single cache table shared among the threads. In this way, each thread can effectively reuse the search effort performed by others. Since all threads start from the root of the search graph using the same search strategy, an effective load balancing is obtained without using sophisticated schemes, as done in previous work [8].

[Figure 1: A simple graphical model and its associated AND/OR search graph: (a) primal graph; (b) pseudo tree; (c) context minimal AND/OR search graph.]
An extensive empirical evaluation shows that our new parallel recursive best-first AND/OR search scheme improves considerably over current state-of-the-art sequential AND/OR search approaches, in many cases leading to considerable speed-ups (up to 7-fold using 12 threads), especially on hard problem instances.

2 Background

Graphical models (e.g., Bayesian Networks [9] or Markov Random Fields [10]) capture the factorization structure of a distribution over a set of variables. A graphical model is a tuple M = ⟨X, D, F⟩, where X = {X_i : i ∈ V} is a set of variables indexed by a set V and D = {D_i : i ∈ V} is the set of their finite domains of values. F = {ψ_α : α ∈ F} is a set of discrete positive real-valued local functions defined on subsets of variables, where F ⊆ 2^V is a set of variable subsets. We use α ⊆ V and X_α ⊆ X to indicate the scope of function ψ_α, i.e., X_α = var(ψ_α) = {X_i : i ∈ α}. The function scopes yield a primal graph whose vertices are the variables and whose edges connect any two variables that appear in the scope of the same function. The graphical model M defines a factorized probability distribution on X:

P(X) = (1/Z) ∏_{α∈F} ψ_α(X_α),

where the partition function Z normalizes the distribution. An important inference task which appears in many real-world applications is maximum a posteriori (MAP, sometimes called most probable explanation or MPE) inference. MAP/MPE finds a complete assignment to the variables that has the highest probability (i.e., a mode of the joint distribution), namely:

x* = argmax_x ∏_{α∈F} ψ_α(x_α).

The task is NP-hard to solve in general [9]. In this paper we focus on solving MAP as a minimization problem, taking the negative logarithm of the local functions to avoid numerical issues, namely:

x* = argmin_x Σ_{α∈F} −log ψ_α(x_α).

Significant improvements for MAP inference have been achieved by using AND/OR search spaces, which often capture problem structure far better than standard OR search methods [11].
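As a sanity check of this reformulation, the following Python sketch (a toy model with made-up factor values, unrelated to the benchmarks used later) verifies by brute force that maximizing the product of factors and minimizing the sum of their negative logarithms select the same assignment:

```python
import math
from itertools import product

# Toy graphical model over three binary variables X0, X1, X2 with two
# pairwise factors (hypothetical values chosen for illustration).
psi_01 = {(0, 0): 1.2, (0, 1): 0.5, (1, 0): 0.8, (1, 1): 2.0}  # scope {X0, X1}
psi_12 = {(0, 0): 0.9, (0, 1): 1.5, (1, 0): 0.3, (1, 1): 1.1}  # scope {X1, X2}

def prob_score(x):  # unnormalized probability: product of factors
    return psi_01[(x[0], x[1])] * psi_12[(x[1], x[2])]

def cost(x):        # additive cost: sum of negative log factors
    return -math.log(psi_01[(x[0], x[1])]) - math.log(psi_12[(x[1], x[2])])

assignments = list(product((0, 1), repeat=3))
x_map = max(assignments, key=prob_score)  # argmax of the product
x_min = min(assignments, key=cost)        # argmin of the neg-log sum

assert x_map == x_min  # the two formulations agree, since log is monotone
```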
A pseudo tree of the primal graph captures the problem decomposition and is used to define the search space. A pseudo tree of an undirected graph G = (V, E) is a directed rooted tree T = (V, E′) such that every arc of G not included in E′ is a back-arc in T, namely it connects a node in T to one of its ancestors in T. The arcs in E′ may not all be included in E. Given a graphical model M = ⟨X, D, F⟩ with primal graph G and a pseudo tree T of G, the AND/OR search tree S_T has alternating levels of OR nodes corresponding to the variables and AND nodes corresponding to the values of the OR parent’s variable, with edges weighted according to F. We denote the weight on the edge from OR node n to AND node m by w(n, m). Identical subproblems, identified by their context (the partial instantiation that separates the sub-problem from the rest of the problem graph), can be merged, yielding an AND/OR search graph [11]. Merging all context-mergeable nodes yields the context minimal AND/OR search graph, denoted by C_T. The size of C_T is exponential in the induced width of G along a depth-first traversal of T [11].

A solution tree T′ of C_T is a subtree such that: (1) it contains the root node of C_T; (2) if an internal AND node n is in T′, then all of its children are in T′; (3) if an internal OR node n is in T′, then exactly one of its children is in T′; (4) every tip node of T′ (i.e., a node with no children) is a terminal node. The cost of a solution tree is the sum of the weights associated with its edges. Each node n in C_T is associated with a value v(n) capturing the optimal solution cost of the conditioned sub-problem rooted at n. It was shown that v(n) can be computed recursively from the values of n’s children: at OR nodes by minimization, at AND nodes by summation (see also [11]).

Example 1. Figure 1(a) shows the primal graph of a simple graphical model with 5 variables and 7 binary functions.
Figure 1(c) displays the context minimal AND/OR search graph based on the pseudo tree from Figure 1(b) (the contexts are shown next to the pseudo tree nodes). A solution tree corresponding to the assignment (A = 0, B = 1, C = 1, D = 0, E = 0) is shown in red.

Current state-of-the-art sequential search methods for exact MAP inference perform either depth-first or best-first search. Prominent methods studied and evaluated extensively are AND/OR Branch and Bound (AOBB) [1] and Best-First AND/OR Search (AOBF) [12]. More recently, Recursive Best-First AND/OR Search (RBFAOO) [2] has emerged as the best performing algorithm for exact MAP inference. RBFAOO belongs to the class of RBFS algorithms and employs a local threshold controlling mechanism to explore the AND/OR search graph in a depth-first-like manner [3, 13]. RBFAOO maintains at each node n a lower bound q(n) (called the q-value) on v(n). During search, RBFAOO improves and caches q(n) in a fixed-size table; q(n) is calculated by propagating back the q-values of n’s children. RBFAOO stops when q(r) = v(r) at the root r, or when it proves that there is no solution, namely q(r) = v(r) = ∞.

3 Our Parallel Algorithm

Algorithm 1 SPRBFAOO
for all i from 1 to nr CPU cores do
  root.th ← ∞ − ϵ; root.thub ← ∞
  launch tRBFS(root) on a separate thread
wait for threads to finish their work
return optimal cost (e.g., as root’s q-value in the cache)

We now describe SPRBFAOO, a parallelization of RBFAOO in shared-memory environments. SPRBFAOO’s threads start from the root and run in parallel, as shown in Algorithm 1. Threads share one cache table, allowing them to reuse each other’s results.
An entry in the cache table, corresponding to a node n, is a tuple with six fields: a q-value q(n), a lower bound on the optimal cost of node n; n.solved, a flag indicating whether n is solved optimally; a virtual q-value vq(n), defined later in this section; the best known solution cost bs(n) for node n; the number of threads currently working on n; and a lock. When accessing a cache entry, a thread temporarily locks it against other threads. The method Ctxt(n) identifies the context of n, which is used to access the corresponding cache entry. Besides the cache, which is shared among threads, each thread uses two threshold values, n.th and n.thub, for each node n. These are thread-local rather than shared.

Algorithm 2 shows the procedure invoked on each thread. When a thread examines a node n, it first increments in the cache the number of threads working on node n (line 1). Then it increases vq(n) by an increment ζ and stores the new value in the cache (line 2). The virtual q-value vq(n) is initially set to q(n). As more threads work on solving n, vq(n) grows due to the repeated increases by ζ. In effect, vq(n) reflects both the estimated cost of node n (through its q(n) component) and the number of threads working on n. By computing vq(n) this way, our goal is to dynamically control the degree to which threads overlap when exploring the search space. When a given area of the search space is more promising than others, multiple threads are encouraged to work together within that area. On the other hand, when several areas are roughly equally promising, threads should diverge and work on different areas. Indeed, in Algorithm 2, the tests on lines 13 and 23 prevent a thread from working on a node n if n.th < vq(n). (Other conditions in these tests are discussed later.) A large vq(n), which increases the likelihood that n.th < vq(n), may reflect a less promising node (i.e., a large q-value), or many threads working on n, or both.
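The bookkeeping around vq(n) can be sketched as follows (Python; a sequential simulation of lines 1–2 and 29–31 of Algorithm 2, with locks omitted and with our own enter/leave naming):

```python
ZETA = 0.01  # increment added to vq each time a thread starts working on a node

class CacheEntry:
    def __init__(self, q):
        self.q = q          # lower bound on the node's optimal cost
        self.vq = q         # virtual q-value: q inflated by active threads
        self.nthreads = 0   # number of threads currently working on the node
        self.solved = False

def enter(e):               # mirrors Algorithm 2, lines 1-2
    e.nthreads += 1
    e.vq += ZETA

def leave(e):               # mirrors Algorithm 2, lines 29-31 (simplified)
    if e.solved or e.nthreads == 1:
        e.vq = e.q          # last thread out restores vq to the true bound
    e.nthreads -= 1

e = CacheEntry(q=5.0)
enter(e); enter(e); enter(e)           # three threads pile onto the node
assert abs(e.vq - (5.0 + 3 * ZETA)) < 1e-12

# A thread whose threshold th falls below vq avoids the node, steering it
# towards less crowded but similarly promising areas.
th = 5.02
assert th < e.vq

leave(e); leave(e)
assert e.nthreads == 1 and e.vq > e.q  # still one worker: vq stays inflated
leave(e)
assert e.nthreads == 0 and e.vq == e.q
```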
Thus, our strategy is an automated and dynamic way of tuning the number of threads working on solving a node n as a function of how promising that node is. We call this the thread coordination mechanism.

Algorithm 2 Method tRBFS. Handling locks skipped for clarity.
Require: node n
1: IncrementNrThreadsInCache(Ctxt(n))
2: IncreaseVQInCache(Ctxt(n), ζ)
3: if n has no children then
4:   (q, solved) ← Evaluate(n)
5:   SaveInCache(Ctxt(n), q, solved, q, q)
6:   DecrementNrThreadsInCache(Ctxt(n))
7:   return
8: GenerateChildren(n)
9: if n is an OR node then
10:   loop
11:     (cbest, vq, vq2, q, bs) ← BestChild(n)
12:     n.thub ← min(n.thub, bs)
13:     if n.th < vq ∨ q ≥ n.thub ∨ n.solved then
14:       break
15:     cbest.th ← min(n.th, vq2 + δ) − w(n, cbest)
16:     cbest.thub ← n.thub − w(n, cbest)
17:     tRBFS(cbest)
19: if n is an AND node then
20:   loop
21:     (q, vq, bs) ← Sum(n)
22:     n.thub ← min(n.thub, bs)
23:     if n.th < vq ∨ q ≥ n.thub ∨ n.solved then
24:       break
25:     (cbest, qcbest, vqcbest) ← UnsolvedChild(n)
26:     cbest.th ← n.th − (vq − vqcbest)
27:     cbest.thub ← n.thub − (q − qcbest)
28:     tRBFS(cbest)
29: if n.solved ∨ NrThreadsCache(Ctxt(n)) = 1 then
30:   vq ← q
31: DecrementNrThreadsInCache(Ctxt(n))
32: SaveInCache(Ctxt(n), q, n.solved, vq, bs)

Lines 4–7 address the case of nodes with no children, which are either terminal nodes or dead-ends. In both cases, method Evaluate sets the solved flag to true. The q-value q is set to 0 for terminal nodes and to ∞ otherwise. Method SaveInCache takes as arguments the context of the node and four values to be stored, in order, in these fields of the corresponding cache entry: q, solved, vq and bs. Lines 10–17 and 20–28 show respectively the cases when the current node n is an OR node or an AND node. Both follow a similar high-level sequence of steps:

• Update vq, q, and bs for n from the children’s values (lines 11, 21). Also update n.thub (lines 12, 22), an upper bound on the best solution cost known for n so far.
Methods BestChild and Sum are shown in Algorithm 3. In these, child node information is either retrieved from the cache, if available, or initialized with an admissible heuristic function h.

• Perform the backtracking test (lines 13–14 and 23–24). The thread backtracks to n’s parent if at least one of the following conditions holds: n.th < vq(n), discussed earlier; q(n) ≥ n.thub, i.e., a solution containing n cannot possibly beat the best known solution (we call this the suboptimality test); or the node is solved. The solved flag is true iff the node cost has been proven to be optimal, or the node has been proven not to have any solution.

• Otherwise, select a successor cbest to continue with (lines 11, 25). At an OR node n, cbest is the child with the smallest vq among all children not solved yet (see method BestChild). At an AND node, any unsolved child can be chosen. Then, update the thresholds of cbest (lines 15–16 and 26–27), and recursively process cbest (lines 17, 28).

The threshold n.th is updated in a similar way to RBFAOO, including the overestimation parameter δ (see [2]). However, there are two key differences. First, we use vq instead of q, to obtain the thread coordination mechanism presented earlier. Second, we use two thresholds, th and thub, instead of just th, with thub being used to implement the suboptimality test q(n) ≥ n.thub. When a thread backtracks to n’s parent, if either n’s solved flag is set or no other thread currently examines n, the thread sets vq(n) to q(n) (lines 29–30 in Algorithm 2). In this way, SPRBFAOO reduces the frequency of scenarios in which n is considered to be less promising. Finally, the thread decrements in the cache the number of threads working on n (line 31), and saves in the cache the recalculated vq(n), q(n), bs(n), and the solved flag (line 32).

Theorem 3.1. With an admissible heuristic in use, SPRBFAOO returns optimal solutions.

Proof sketch.
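The three-way backtracking test can be written as a small predicate (an illustrative Python fragment mirroring the tests on lines 13–14 and 23–24 of Algorithm 2; the function name is ours):

```python
def should_backtrack(th, thub, q, vq, solved):
    """Backtrack from a node if any of the three conditions holds:
    - th < vq: the node is too expensive or too crowded (thread coordination),
    - q >= thub: no solution through this node can beat the best known one
      (suboptimality test),
    - solved: the node's optimal cost (or unsolvability) is already proven."""
    return th < vq or q >= thub or solved

# A promising, unsolved node within budget: keep expanding it.
assert not should_backtrack(th=10.0, thub=8.0, q=4.0, vq=6.0, solved=False)
# Too many threads inflated vq past the threshold: go work elsewhere.
assert should_backtrack(th=10.0, thub=8.0, q=4.0, vq=10.5, solved=False)
# Lower bound already matches or exceeds the incumbent: prune.
assert should_backtrack(th=10.0, thub=8.0, q=8.0, vq=6.0, solved=False)
# Solved node: nothing left to do here.
assert should_backtrack(th=10.0, thub=8.0, q=4.0, vq=6.0, solved=True)
```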
SPRBFAOO’s bs(r) at the root r is computed from a solution tree, and therefore bs(r) ≥ v(r). Additionally, SPRBFAOO determines solution optimality using not vq(n) but q(n) saved in the cache table. By an induction argument similar to Theorem 3.1 in [2], q(n) ≤ v(n) holds for any q(n) saved in the cache table with admissible h, which implies q(r) ≤ v(r). When SPRBFAOO returns a solution, bs(r) = q(r), and therefore bs(r) = q(r) = v(r). We conjecture that SPRBFAOO is also complete, and leave a more in-depth analysis as future work.

Algorithm 3 Methods BestChild (left) and Sum (right)

Method BestChild
Require: node n
1: n.solved ← ⊥ (⊥ stands for false)
2: initialize vq, vq2, q, bs to ∞
3: for all ci child of n do
4:   if Ctxt(ci) in cache then
5:     (qci, sci, vqci, bsci) ← FromCache(Ctxt(ci))
6:   else
7:     (qci, sci, vqci, bsci) ← (h(ci), ⊥, h(ci), ∞)
8:   qci ← w(n, ci) + qci
9:   vqci ← w(n, ci) + vqci
10:  bs ← min(bs, w(n, ci) + bsci)
11:  if (qci < q) ∨ (qci = q ∧ ¬n.solved) then
12:    n.solved ← sci; q ← qci
13:  if vqci < vq ∧ ¬sci then
14:    vq2 ← vq; vq ← vqci; cbest ← ci
15:  else if vqci < vq2 ∧ ¬sci then
16:    vq2 ← vqci
17: return (cbest, vq, vq2, q, bs)

Method Sum
Require: node n
1: n.solved ← ⊤ (⊤ stands for true)
2: initialize vq, q, bs to 0
3: for all ci child of n do
4:   if Ctxt(ci) in cache then
5:     (qci, sci, vqci, bsci) ← FromCache(Ctxt(ci))
6:   else
7:     (qci, sci, vqci, bsci) ← (h(ci), ⊥, h(ci), ∞)
8:   q ← q + qci
9:   vq ← vq + vqci
10:  bs ← bs + bsci
11:  n.solved ← n.solved ∧ sci
12: return (q, vq, bs)

4 Experiments

We evaluate our parallel SPRBFAOO empirically and compare it against sequential RBFAOO and AOBB. We also considered a parallel shared-memory AOBB, denoted SPAOBB, which uses a master thread to explore centrally the AND/OR search graph up to a certain depth and solves the remaining conditioned sub-problems in parallel using a set of worker threads. The cache table is shared among the workers so that some workers may reuse partial search results recorded by others.
In our implementation, the search space explored by the master corresponds to the first m variables in the pseudo tree. The performance of SPAOBB was very poor across all benchmarks, due to noticeably large search overhead as well as poor load balancing, and therefore its results are omitted hereafter. All competing algorithms (SPRBFAOO, RBFAOO and AOBB) use the pre-compiled mini-bucket heuristic [1] for guiding the search. The heuristic is controlled by a parameter called the i-bound, which allows a trade-off between accuracy and time/space requirements: higher values of i yield a more accurate heuristic but take more time and space to compute. The search algorithms were also restricted to a static variable ordering obtained as a depth-first traversal of a min-fill pseudo tree [1].

Our benchmark problems¹ include three sets of instances: genetic linkage analysis networks (denoted pedigree) [14], grid networks, and protein side-chain interaction networks (denoted protein) [15]. In total, we evaluated 21 pedigrees, 32 grids and 240 protein networks. The algorithms were implemented in C++ (64-bit) and the experiments were run on a 2.6GHz 12-core processor with 80GB of RAM. Following [2], RBFAOO ran with a 10–20GB cache table (134,217,728 entries) and overestimation parameter δ = 1. However, SPRBFAOO allocated only 95,869,805 entries with the same amount of memory, due to the extra information it stores, such as virtual q-values. We set ζ = 0.01 throughout the experiments (except those where we vary ζ). The time limit was set to 2 hours. We also record in Table 1 typical ranges of problem-specific parameters, such as the number of variables (n), maximum domain size (k), induced width (w*), and depth of the pseudo tree (h).

Table 1: Ranges (min–max) of the benchmark problem parameters.
| benchmark | n | k | w* | h |
| grid | 144–676 | 2 | 15–36 | 48–136 |
| pedigree | 334–1289 | 3–7 | 15–33 | 51–140 |
| protein | 26–177 | 81 | 6–16 | 15–43 |

Table 2: Number of unsolved problem instances (1 vs 12 cores).
| method | grid i=6 | grid i=14 | pedigree i=6 | pedigree i=14 | protein i=2 | protein i=4 |
| RBFAOO | 9 | 5 | 8 | 6 | 41 | 16 |
| SPRBFAOO | 7 | 5 | 7 | 3 | 32 | 9 |

The primary performance measures reported are the run time and the number of node expansions during search. When the run time of a solver is discussed, the total CPU time, reported in seconds, is one metric of overall performance. The total CPU time consists of the heuristic compilation time and the search time. SPRBFAOO does not reduce the heuristic compilation time, which is calculated sequentially. Note that parallelizing the heuristic compilation is an important extension left as future work.

¹http://graphmod.ics.uci.edu

Table 3: Total CPU time (sec) and nodes on grid and pedigree instances. Time limit 2 hours. For each instance, (mbe) lists the mini-bucket heuristic pre-compilation times for i ∈ {6, 8, 10, 12, 14}, and each algorithm entry is a time/nodes pair; "none" means the algorithm solved the instance for no reported i-bound within the time limit.

75-22-5 (484,2,30,107): (mbe) 0.06, 0.07, 0.1, 0.2, 0.7; AOBB: 5221/761867041, 2100/314622599, 884/144092486; RBFAOO: 629/133143216, 2018/331885596, 2036/334441548, 638/113597702, 85/18728991; SPRBFAOO: 116/153612683, 483/410230906, 466/385071090, 152/129817500, 17/25076772
75-24-5 (576,2,32,116): (mbe) 0.08, 0.1, 0.1, 0.3, 0.8; AOBB: none; RBFAOO: 4182/665237411, 2792/465384385, 229/47015068; SPRBFAOO: 2794/2273916962, 2959/2309390159, 1012/804068930, 579/511894256, 43/59504303
90-30-5 (900,2,42,151): (mbe) 0.2, 0.2, 0.3, 0.5, 1.4; AOBB: none; RBFAOO: 3783/565053698; SPRBFAOO: 869/665947009
pedigree7 (1068,4,28,140): (mbe) 0.1, 0.2, 0.3, 0.6, 2.1; AOBB: none; RBFAOO: 1873/226436502, 1642/201063828, 1239/135387634; SPRBFAOO: 4560/3062954989, 353/249562472, 314/222896697, 267/151794050
pedigree9 (1119,7,25,123): (mbe) 0.1, 0.2, 0.2, 0.5, 1.6; AOBB: none; RBFAOO: none; SPRBFAOO: 3021/2807834881
pedigree19 (793,5,21,107): (mbe) 0.1, 0.2, 0.4, 1.3, 10; AOBB: none; RBFAOO: none; SPRBFAOO: 3792/2721253097, 2083/1914585138

[Figure 2: Total CPU time (sec) for RBFAOO vs. SPRBFAOO with smaller (top) and larger (bottom) i-bounds. Time limit 2 hours. i ∈ {6, 14} for grid and pedigree, i ∈ {2, 4} for protein. Log-log scatter plots of RBFAOO vs. SPRBFAOO total CPU time, one panel per benchmark.]

Parallel versus sequential search. Table 3 shows detailed results (as total CPU time in seconds and nodes expanded) for solving grid and pedigree instances using parallel and sequential search. The columns are indexed by the i-bound. For each problem instance, we also record the mini-bucket heuristic pre-compilation time, denoted by (mbe), corresponding to each i-bound. SPRBFAOO ran with 12 threads. We can see that SPRBFAOO improves considerably over RBFAOO across all reported i-bounds. The benefit of parallel search is more clearly observed at smaller i-bounds, which correspond to relatively weak heuristics. In this case, the heuristic is less likely to guide the search towards the more promising regions of the search space, and therefore diversifying the search via multiple parallel threads is key to achieving significant speed-ups. For example, on grid 75-22-5, SPRBFAOO(6) is almost 6 times faster than RBFAOO(6). Similarly, SPRBFAOO(8) solves the pedigree7 instance while RBFAOO(8) runs out of time. This is important since on very hard problem instances it may only be possible to compute rather weak heuristics given limited resources. Notice

[Figure 3: Total search time (sec) and average speed-up as a function of parameter ζ, for the grid, pedigree and protein benchmarks.]
Time limit 2 hours. i = 14 for grid and pedigree, i = 4 for protein. also that the pre-processing time (mbe) increases with the i-bound. Table 2 shows the number of unsolved problems in each domain. Note that SPRBFAOO solved all instances solved by RBFAOO. Figure 2 plots the total CPU time obtained by RBFAOO and SPRBFAOO using smaller (resp. larger) i-bounds corresponding to relatively weak (resp. strong) heuristics. We selected i ∈{6, 14} for grid and pedigree, and i ∈{2, 4} for protein. Specifically, i = 6 (grids, pedigrees) and i = 2 (proteins) were the smallest i-bounds for which SPRBFAOO could solve at least two thirds of instances within the 2 hour time limit, while i = 14 (grids, pedigrees) and i = 4 (proteins) were the largest possible i-bounds for which we could compile the heuristics without running out of memory on all instances. The data points shown in green correspond to problem instances that were solved only by SPRBFAOO. As before, we notice the benefit of parallel search when using relatively weak heuristics. The largest speed-up of 9.59 is obtained on the pdbilk protein instance with i = 2. As the i-bound increases and the heuristics become more accurate, the difference between RBFAOO(i) and SPRBFAOO(i) decreases because both algorithms are guided more effectively towards the subspace containing the optimal solution. In addition, the overhead associated with larger i-bounds, which is calculated sequentially, offsets considerably the speed-up obtained by SPRBFAOO(i) over RBFAOO(i) (see for example the plot for protein instances with i = 4). We also observed that SPRBFAOO’s speed-up over RBFAOO increases sublinearly as more threads are used (we experimented with 3, 6, and 12 threads, respectively). In addition to search overhead, synchronization overhead is another cause for achieving only sublinear speed-ups. The synchronization overhead can be estimated by checking the node expansion rate per thread. 
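A back-of-the-envelope version of this estimate, comparing the per-thread expansion rate of the parallel solver against the sequential solver's rate, can be sketched as follows. The numbers are taken from the i = 6 column of Table 3 for grid 75-22-5; the aggregate percentages reported next are averaged over whole domains, so this single-instance figure is illustrative only.

```python
# Estimate synchronization overhead from node expansion rates: the per-thread
# rate of the parallel solver as a fraction of the sequential solver's rate.
def per_thread_rate_ratio(par_nodes, par_time, n_threads, seq_nodes, seq_time):
    parallel_rate_per_thread = par_nodes / (par_time * n_threads)
    sequential_rate = seq_nodes / seq_time
    return parallel_rate_per_thread / sequential_rate

# grid 75-22-5, i = 6 (Table 3): SPRBFAOO with 12 threads vs. RBFAOO.
ratio = per_thread_rate_ratio(153612683, 116, 12, 133143216, 629)
print(f"{ratio:.0%}")  # each thread expands nodes at roughly half RBFAOO's rate
```

A ratio well below 1 indicates that threads spend a substantial fraction of their time on synchronization (e.g. lock contention) rather than node expansions.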
For example, in the case of SPRBFAOO with 12 threads, the per-thread node expansion rate slows down to 47%, 50%, and 61% of RBFAOO's rate on grid (i = 6), pedigree (i = 6), and protein (i = 2), respectively, which implies that lock-related overhead is substantial. Since the corresponding figures with 6 threads are 73%, 79%, and 96%, the slowdown becomes more severe as more threads are used. We hypothesize that, due to the way the virtual q-value works, SPRBFAOO's threads tend to follow the same path from the root until search directions are diversified, and frequently access the cache table entries of the internal nodes located on that path, where lock contention is non-negligible. Finally, SPRBFAOO's load balance is quite stable in all domains, especially once all threads have been invoked and are performing search. For example, its load balance ranges over 1.005-1.064, 1.013-1.049, and 1.004-1.117 for grid (i = 6), pedigree (i = 6), and protein (i = 2), measured on the instances where SPRBFAOO expands at least 1 million nodes with 12 threads. Impact of parameter ζ In Figure 3 we analyze the performance of SPRBFAOO with 12 threads as a function of the parameter ζ, which controls how strongly different threads are encouraged or discouraged to start exploring a specific subproblem (see also Section 3). For this purpose, and to better understand SPRBFAOO's scaling behavior, we ignore the heuristic compilation time: we report the total search time (in seconds) over the instances that all parallel versions solve, and the search-time-based average speed-ups over the instances that RBFAOO needs at least 1 second to solve. We obtained these numbers for ζ ∈ {0.001, 0.01, 0.1}. All ζ values lead to improved speed-ups. This is important because, unlike the approach of [8], which involves a sophisticated scheme, ours is considerably simpler yet efficient, and requires tuning only a single parameter (ζ).
Of the three ζ values, SPRBFAOO with ζ = 0.1 spends the largest total search time yet yields the best speed-up, which indicates a trade-off in selecting ζ. Since the instances used to calculate the speed-up values must be solved by RBFAOO, they include relatively easy instances.

Table 4: Total CPU time (sec) and node expansions for hard pedigree instances. SPRBFAOO ran with 12 threads, i = 20 (type4b) and i = 16 (largeFam). Time limit 100 hours.

instance          (n, k, w*, h)    (mbe)   RBFAOO                  SPRBFAOO
                                   time    time     nodes          time    nodes
type4b-100-19     (7308,5,29,354)  400     132711   22243047591    42846   50509174040
type4b-120-17     (7766,5,24,319)  191     210      4297063        195     6046663
type4b-130-21     (8883,5,29,416)  281     290760   51481315386    149321  177393525747
type4b-140-19     (9274,5,30,366)  488     248376   39920187143    74643   85152364623
largeFam3-10-52   (1905,3,36,80)   13      154994   19363865449    50700   44073583335

On the other hand, several difficult instances solved by SPRBFAOO with 12 threads are included when calculating the total search time; with ζ = 0.1, increased search overhead means SPRBFAOO needs more search time to solve these difficult instances. There is also one protein instance that is unsolved with ζ = 0.1 but solved with ζ = 0.01 and ζ = 0.001. This phenomenon can be explained as follows. With large ζ, SPRBFAOO searches in more diversified directions, which can reduce lock contention and thus improve speed-up values. However, due to the larger diversification, when SPRBFAOO with ζ = 0.1 tackles difficult instances, it may focus on less promising portions of the search space, resulting in increased total search time. Summary of the experiments In terms of search-time-based speed-ups, our parallel shared-memory method SPRBFAOO improved considerably over its sequential counterpart RBFAOO, by up to 7 times using 12 threads. At relatively large i-bounds, the corresponding sequential computational overhead typically outweighed the gains obtained by parallel search.
Still, parallel search had the advantage of solving additional instances that serial search could not. Finally, in Table 4 we report the results obtained on 5 very hard pedigree instances from [2] (mbe records the heuristic compilation time). We see again that SPRBFAOO improved over RBFAOO on all instances, achieving a total-time-based speed-up of 3 on two of them (type4b-100-19 and largeFam3-10-52). 5 Related Work The distributed AOBB algorithm daoopt [8], which builds on the notion of parallel tree search [16], explores the search tree centrally up to a certain depth and solves the remaining conditioned subproblems in parallel on a large grid of distributed processing units without a shared cache. In parallel evidence propagation, the notion of pointer jumping has been used for exact probabilistic inference; for example, Pennock [17] performs a theoretical analysis. Xia and Prasanna [18] split a junction tree into chains on which evidence propagation is performed in parallel in a distributed-memory environment, with the results merged later on. Proof-number search (PNS) in AND/OR spaces [19] and its parallel variants [20] have been shown to be effective in two-player games. As PNS is suboptimal, it cannot be applied as is to exact MAP inference. Kaneko [21] presents shared-memory parallel depth-first proof-number search with virtual proof and disproof numbers (vpdn), which combine proof and disproof numbers [19] with the number of threads examining a node. Thus, our vq(n) is closely related to vpdn. However, vpdn has an over-counting problem, which we avoid through the way we dynamically update vq(n). Saito et al. [22] use threads that probabilistically deviate from the best-first strategy. Hoki et al. [23] add small random values to the proof and disproof numbers of each thread without sharing any cache table. 6 Conclusion We presented SPRBFAOO, a new shared-memory parallel recursive best-first AND/OR search scheme for graphical models.
Using the virtual q-values shared in a single cache table, SPRBFAOO enables threads to work on promising regions of the search space with effective reuse of the search effort performed by others. A homogeneous search mechanism across the threads achieves effective load balancing without resorting to the sophisticated schemes used in related work [8]. We prove the correctness of the algorithm. In experiments, SPRBFAOO improves substantially over current state-of-the-art sequential AND/OR search approaches, in many cases yielding considerable speed-ups (up to 7-fold using 12 threads), especially on hard problem instances. Ongoing and future research directions include proving the completeness conjecture, extending SPRBFAOO to distributed-memory environments, and parallelizing the mini-bucket heuristic for shared and distributed memory. References [1] R. Marinescu and R. Dechter. AND/OR branch-and-bound search for combinatorial optimization in graphical models. Artificial Intelligence, 173(16-17):1457–1491, 2009. [2] A. Kishimoto and R. Marinescu. Recursive best-first AND/OR search for optimization in graphical models. In International Conference on Uncertainty in Artificial Intelligence (UAI), pages 400–409, 2014. [3] R. Korf. Linear-space best-first search. Artificial Intelligence, 62(1):41–78, 1993. [4] A. Kishimoto, A. Fukunaga, and A. Botea. Evaluation of a simple, scalable, parallel best-first search strategy. Artificial Intelligence, 195:222–248, 2013. [5] W. Chrabakh and R. Wolski. GrADSAT: A parallel SAT solver for the Grid. Technical report, University of California at Santa Barbara, 2003. [6] M. Campbell, A. J. Hoane Jr., and F.-h. Hsu. Deep Blue. Artificial Intelligence, 134(1–2):57–83, 2002. [7] M. Enzenberger, M. Müller, B. Arneson, and R. Segal. FUEGO – an open-source framework for board games and Go engine based on Monte-Carlo tree search. IEEE Transactions on Computational Intelligence and AI in Games, 2(4):259–270, 2010. [8] L.
Otten and R. Dechter. A case study in complexity estimation: Towards parallel branch-and-bound over graphical models. In Uncertainty in Artificial Intelligence (UAI), pages 665–674, 2012. [9] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988. [10] S. L. Lauritzen. Graphical Models. Clarendon Press, 1996. [11] R. Dechter and R. Mateescu. AND/OR search spaces for graphical models. Artificial Intelligence, 171(2-3):73–106, 2007. [12] R. Marinescu and R. Dechter. Memory intensive AND/OR search for combinatorial optimization in graphical models. Artificial Intelligence, 173(16-17):1492–1524, 2009. [13] A. Nagai. Df-pn Algorithm for Searching AND/OR Trees and Its Applications. PhD thesis, The University of Tokyo, 2002. [14] M. Fishelson and D. Geiger. Exact genetic linkage computations for general pedigrees. Bioinformatics, 18(1):189–198, 2002. [15] C. Yanover, O. Schueler-Furman, and Y. Weiss. Minimizing and learning energy functions for side-chain prediction. Journal of Computational Biology, 15(7):899–911, 2008. [16] A. Grama and V. Kumar. State of the art in parallel search techniques for discrete optimization problems. IEEE Transactions on Knowledge and Data Engineering, 11(1):28–35, 1999. [17] D. Pennock. Logarithmic time parallel Bayesian inference. In Uncertainty in Artificial Intelligence (UAI), pages 431–438, 1998. [18] Y. Xia and V. K. Prasanna. Junction tree decomposition for parallel exact inference. In IEEE International Symposium on Parallel and Distributed Processing (IPDPS), 2008. [19] L. V. Allis, M. van der Meulen, and H. J. van den Herik. Proof-number search. Artificial Intelligence, 66(1):91–124, 1994. [20] A. Kishimoto, M. Winands, M. Müller, and J.-T. Saito. Game-tree search using proof numbers: The first twenty years. ICGA Journal, 35(3):131–156, 2012. [21] T. Kaneko. Parallel depth first proof number search. In AAAI Conference on Artificial Intelligence, pages 95–100, 2010. [22] J.-T. Saito, M. H. M.
Winands, and H. J. van den Herik. Randomized parallel proof-number search. In Advances in Computer Games (ACG'09), volume 6048 of Lecture Notes in Computer Science, pages 75–87. Springer, 2010. [23] K. Hoki, T. Kaneko, A. Kishimoto, and T. Ito. Parallel dovetailing and its application to depth-first proof-number search. ICGA Journal, 36(1):22–36, 2013.
Streaming, Distributed Variational Inference for Bayesian Nonparametrics Trevor Campbell1 Julian Straub2 John W. Fisher III2 Jonathan P. How1 1LIDS, 2CSAIL, MIT {tdjc@ , jstraub@csail. , fisher@csail. , jhow@}mit.edu Abstract This paper presents a methodology for creating streaming, distributed inference algorithms for Bayesian nonparametric (BNP) models. In the proposed framework, processing nodes receive a sequence of data minibatches, compute a variational posterior for each, and make asynchronous streaming updates to a central model. In contrast to previous algorithms, the proposed framework is truly streaming, distributed, asynchronous, learning-rate-free, and truncation-free. The key challenge in developing the framework, arising from the fact that BNP models do not impose an inherent ordering on their components, is finding the correspondence between minibatch and central BNP posterior components before performing each update. To address this, the paper develops a combinatorial optimization problem over component correspondences, and provides an efficient solution technique. The paper concludes with an application of the methodology to the DP mixture model, with experimental results demonstrating its practical scalability and performance. 1 Introduction Bayesian nonparametric (BNP) stochastic processes are streaming priors – their unique feature is that they specify, in a probabilistic sense, that the complexity of a latent model should grow as the amount of observed data increases. This property captures common sense in many data analysis problems – for example, one would expect to encounter far more topics in a document corpus after reading 10^6 documents than after reading 10 – and becomes crucial in settings with unbounded, persistent streams of data.
While their fixed, parametric cousins can be used to infer model complexity for datasets with known magnitude a priori [1, 2], such priors are silent with respect to notions of model complexity growth in streaming data settings. Bayesian nonparametrics are also naturally suited to parallelization of data processing, due to the exchangeability, and thus conditional independence, they often exhibit via de Finetti's theorem. For example, labels from the Chinese Restaurant process [3] are rendered i.i.d. by conditioning on the underlying Dirichlet process (DP) random measure, and feature assignments from the Indian Buffet process [4] are rendered i.i.d. by conditioning on the underlying beta process (BP) random measure. Given these properties, one might expect there to be a wealth of inference algorithms for BNPs that address the challenges associated with parallelization and streaming. However, previous work has only addressed these two settings in concert for parametric models [5, 6], and only recently has each been addressed individually for BNPs. In the streaming setting, [7] and [8] developed streaming inference for DP mixture models using sequential variational approximation. Stochastic variational inference [9] and related methods [10–13] are often considered streaming algorithms, but their performance depends on the choice of a learning rate and on the dataset having known, fixed size a priori [5]. Outside of variational approaches, which are the focus of the present paper, there exist exact parallelized MCMC methods for BNPs [14, 15]; the tradeoff in using such methods is that they provide samples from the posterior rather than the distribution itself, and results regarding assessing convergence remain limited. Sequential particle filters for inference have also been developed [16], but these suffer from particle degeneracy and exponential forgetting.

Figure 1: The four main steps of the algorithm that is run asynchronously on each processing node: (a) retrieve the data/prior; (b) perform inference; (c) perform component ID; (d) update the model.

The main challenge posed by the streaming, distributed setting for BNPs is the combinatorial problem of component identification. Most BNP models contain some notion of a countably infinite set of latent "components" (e.g. clusters in a DP mixture model), and do not impose an inherent ordering on the components. Thus, in order to combine information about the components from multiple processors, the correspondence between components must first be found. Brute force search is intractable even for moderately sized models – there are (K1 + K2 choose K1) possible correspondences for two sets of components of sizes K1 and K2. Furthermore, there does not yet exist a method to evaluate the quality of a component correspondence for BNP models. This issue has been studied before in the MCMC literature, where it is known as the "label switching problem", but past solution techniques are generally model-specific and restricted to use on very simple mixture models [17, 18]. This paper presents a methodology for creating streaming, distributed inference algorithms for Bayesian nonparametric models.
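The intractability of brute-force matching is easy to make concrete; a minimal sketch of the correspondence count quoted above:

```python
from math import comb

# Number of possible correspondences between two component sets of sizes
# K1 and K2, as in the text: (K1 + K2 choose K1).
def num_correspondences(K1, K2):
    return comb(K1 + K2, K1)

print(num_correspondences(20, 20))  # 137846528820 -- already ~10^11 for K1 = K2 = 20
```

Even two modest 20-component posteriors admit over a hundred billion candidate correspondences, which is why the paper formulates component identification as an optimization rather than a search.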
In the proposed framework (shown for a single node in Figure 1), processing nodes receive a sequence of data minibatches, compute a variational posterior for each, and make asynchronous streaming updates to a central model using a mapping obtained from a component identification optimization. The key contributions of this work are as follows. First, we develop a minibatch posterior decomposition that motivates a learning-rate-free streaming, distributed framework suitable for Bayesian nonparametrics. Then, we derive the component identification optimization problem by maximizing the probability of a component matching. We show that the BNP prior regularizes model complexity in the optimization; an interesting side effect of this is that regardless of whether the minibatch variational inference scheme is truncated, the proposed algorithm is truncation-free. Finally, we provide an efficiently computable regularization bound for the Dirichlet process prior based on Jensen's inequality (footnote 1). The paper concludes with applications of the methodology to the DP mixture model, with experimental results demonstrating the scalability and performance of the method in practice. 2 Streaming, distributed Bayesian nonparametric inference The proposed framework, motivated by a posterior decomposition that will be discussed in Section 2.1, involves a collection of processing nodes with asynchronous access to a central variational posterior approximation (shown for a single node in Figure 1). Data is provided to each processing node as a sequence of minibatches. When a processing node receives a minibatch of data, it obtains the central posterior (Figure 1a), and using it as a prior, computes a minibatch variational posterior approximation (Figure 1b). When minibatch inference is complete, the node then performs component identification between the minibatch posterior and the current central posterior, accounting for possible modifications made by other processing nodes (Figure 1c).
Finally, it merges the minibatch posterior into the central variational posterior (Figure 1d). In the following sections, we use the DP mixture [3] as a guiding example for the technical development of the inference framework. However, it is emphasized that the material in this paper generalizes to many other BNP models, such as the hierarchical DP (HDP) topic model [19], the BP latent feature model [20], and the Pitman-Yor (PY) mixture [21] (see the supplement for further details).

[Footnote 1] Regularization bounds for other popular BNP priors may be found in the supplement.

2.1 Posterior decomposition Consider a DP mixture model [3], with cluster parameters θ, assignments z, and observed data y. For each asynchronous update made by each processing node, the dataset is split into three subsets y = yo ∪ yi ∪ ym for analysis. When the processing node receives a minibatch of data ym, it queries the central processing node for the original posterior p(θ, zo|yo), which will be used as the prior for minibatch inference. Once inference is complete, it again queries the central processing node for the intermediate posterior p(θ, zo, zi|yo, yi), which accounts for asynchronous updates from other processing nodes since minibatch inference began. Each subset yr, r ∈ {o, i, m}, has Nr observations {yrj}, j = 1, . . . , Nr, and each variable zrj ∈ N assigns yrj to cluster parameter θzrj. Given the independence of θ and z in the prior, and the conditional independence of the data given the latent parameters, Bayes' rule yields the following decomposition of the posterior of θ and z given y:

p(θ, z|y) ∝ [p(zi, zm|zo) / (p(zi|zo) p(zm|zo))] · p(θ, zo|yo)^{-1} · p(θ, zm, zo|ym, yo) · p(θ, zi, zo|yi, yo),   (1)

where the left-hand side is the updated central posterior, and the factors on the right involve the original posterior (inverted), the minibatch posterior, and the intermediate posterior. This decomposition suggests a simple streaming, distributed, asynchronous update rule for a processing node: first, obtain the current central posterior density p(θ, zo|yo), and using it as a prior, compute the minibatch posterior p(θ, zm, zo|yo, ym); then update the central posterior density by using (1) with the current central posterior density p(θ, zi, zo|yi, yo). However, two issues prevent the direct application of the decomposition rule (1): Unknown component correspondence: Since it is generally intractable to find the minibatch posteriors p(θ, zm, zo|yo, ym) exactly, approximate methods are required. Further, as (1) requires the multiplication of densities, sampling-based methods are difficult to use, suggesting a variational approach. Typical mean-field variational techniques introduce an artificial ordering of the parameters in the posterior, thereby breaking the symmetry that is crucial to combining posteriors correctly using density multiplication [6]. The use of (1) with mean-field variational approximations thus requires first solving a component identification problem. Unknown model size: While previous posterior merging procedures required a 1-to-1 matching between the components of the minibatch posterior and the central posterior [5, 6], Bayesian nonparametric posteriors break this assumption. Indeed, the datasets yo, yi, and ym from the same nonparametric mixture model can be generated by the same, disjoint, or overlapping sets of cluster parameters. In other words, the global number of unique posterior components cannot be determined until the component identification problem is solved and the minibatch posterior is merged.
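For exponential-family approximations, the density multiplications and division in (1) reduce to adding and subtracting natural parameters per matched component. A minimal sketch with 1-D Gaussian natural parameters; the numeric values here are illustrative, not from the paper:

```python
import numpy as np

# Merging exponential-family posteriors by natural-parameter arithmetic:
# updated = intermediate + minibatch - original, per matched component.
def merge_natural_params(eta_i, eta_m, eta_o):
    return eta_i + eta_m - eta_o

# Natural parameters (mu/sigma^2, -1/(2 sigma^2)) of 1-D Gaussians (illustrative).
eta_o = np.array([0.0, -0.5])   # original central posterior: N(0, 1)
eta_i = np.array([1.0, -1.0])   # intermediate posterior
eta_m = np.array([2.0, -1.5])   # minibatch posterior (computed with eta_o as prior)
eta = merge_natural_params(eta_i, eta_m, eta_o)
print(eta)  # updated natural parameters: [3., -2.]
```

Subtracting eta_o removes the doubly counted prior information, so the update stays consistent no matter how many nodes merge asynchronously; the hard part, addressed next, is deciding which components to line up before adding.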
2.2 Variational component identification Suppose we have the following mean-field exponential family prior and approximate variational posterior densities in the minibatch decomposition (1):

p(θk) = h(θk) exp(η0^T T(θk) − A(η0)),  ∀k ∈ N
p(θ, zo|yo) ≈ qo(θ, zo) = ζo(zo) ∏_{k=1}^{Ko} h(θk) exp(ηok^T T(θk) − A(ηok))
p(θ, zm, zo|ym, yo) ≈ qm(θ, zm, zo) = ζm(zm) ζo(zo) ∏_{k=1}^{Km} h(θk) exp(ηmk^T T(θk) − A(ηmk))   (2)
p(θ, zi, zo|yi, yo) ≈ qi(θ, zi, zo) = ζi(zi) ζo(zo) ∏_{k=1}^{Ki} h(θk) exp(ηik^T T(θk) − A(ηik)),

where ζr(·), r ∈ {o, i, m}, are products of categorical distributions for the cluster labels zr, and the goal is to use the posterior decomposition (1) to find the updated posterior approximation

p(θ, z|y) ≈ q(θ, z) = ζ(z) ∏_{k=1}^{K} h(θk) exp(ηk^T T(θk) − A(ηk)).   (3)

As mentioned in the previous section, the artificial ordering of components causes the naïve application of (1) with variational approximations to fail, as disparate components from the approximate posteriors may be merged erroneously. This is demonstrated in Figure 3a, which shows results from a synthetic experiment (described in Section 4) ignoring component identification. As the number of parallel threads increases, more matching mistakes are made, leading to decreasing model quality. To address this, first note that there is no issue with the first Ko components of qm and qi; these can be merged directly since they each correspond to the Ko components of qo. Thus, the component identification problem reduces to finding the correspondence between the last K′m = Km − Ko components of the minibatch posterior and the last K′i = Ki − Ko components of the intermediate posterior. For notational simplicity (and without loss of generality), fix the component ordering of the intermediate posterior qi, and define σ : [Km] → [Ki + K′m] to be the 1-to-1 mapping from minibatch posterior component k to updated central posterior component σ(k), where [K] := {1, . . . , K}.
The fact that the first Ko components have no ordering ambiguity can be expressed as σ(k) = k ∀k ∈ [Ko]. Note that the maximum number of components after merging is Ki + K′m, since each of the last K′m components in the minibatch posterior may correspond to a new component in the intermediate posterior. After substituting the three variational approximations (2) into (1), the goal of the component identification optimization is to find the 1-to-1 mapping σ⋆ that yields the largest updated posterior normalizing constant, i.e. matches components with similar densities:

σ⋆ ← argmax_σ  Σ_z ∫_θ [p(zi, zm|zo) / (p(zi|zo) p(zm|zo))] qo(θ, zo)^{-1} qσm(θ, zm, zo) qi(θ, zi, zo)
s.t.  qσm(θ, zm) = ζσm(zm) ∏_{k=1}^{Km} h(θσ(k)) exp(ηmk^T T(θσ(k)) − A(ηmk)),   (4)
      σ(k) = k ∀k ∈ [Ko],  σ 1-to-1,

where ζσm(zm) is the distribution such that Pζσm(zmj = σ(k)) = Pζm(zmj = k). Taking the logarithm of the objective and exploiting the mean-field decoupling allows the separation of the objective into a sum of two terms: one expressing the quality of the matching between components (the integral over θ), and one that regularizes the final model size (the sum over z). While the first term is available in closed form, the second is in general not. Therefore, using the concavity of the logarithm function, Jensen's inequality yields a lower bound that can be used in place of the intractable original objective, resulting in the final component identification optimization:

σ⋆ ← argmax_σ  Σ_{k=1}^{Ki+K′m} A(η̃σk) + Eσζ[log p(zi, zm, zo)]
s.t.  η̃σk = η̃ik + η̃σmk − η̃ok,   (5)
      σ(k) = k ∀k ∈ [Ko],  σ 1-to-1.

A more detailed derivation of the optimization may be found in the supplement. Eσζ denotes expectation under the distribution ζo(zo) ζi(zi) ζσm(zm), and

η̃rk = ηrk if k ≤ Kr, and η0 if k > Kr, ∀r ∈ {o, i, m};   η̃σmk = ηmσ−1(k) if k ∈ σ([Km]), and η0 if k ∉ σ([Km]),   (6)

where σ([Km]) denotes the range of the mapping σ.
The definitions in (6) ensure that the prior η0 is used whenever a posterior r ∈ {i, m, o} does not contain a particular component k. The intuition for the optimization (5) is that it combines finding component correspondences with high similarity (via the log-partition function) with a regularization term (footnote 2) on the final updated posterior model size. Despite its motivation from the Dirichlet process mixture, the component identification optimization (5) is not specific to this model. Indeed, the derivation did not rely on any properties specific to the Dirichlet process mixture; the optimization applies to any Bayesian nonparametric model with a set of "components" θ and a set of combinatorial "indicators" z. For example, the optimization applies to the hierarchical Dirichlet process topic model [10] with topic word distributions θ and local-to-global topic correspondences z, and to the beta process latent feature model [4] with features θ and binary assignment vectors z. The form of the objective in the component identification optimization (5) reflects this generality. In order to apply the proposed streaming, distributed method to a particular model, one simply needs a black-box variational inference algorithm that computes posteriors of the form (2), and a way to compute or bound the expectation in the objective of (5).

[Footnote 2] This is equivalent to the KL-divergence regularization −KL( ζo(zo) ζi(zi) ζσm(zm) ‖ p(zi, zm, zo) ).

2.3 Updating the central posterior To update the central posterior, the node first locks it and solves for σ⋆ via (5). Locking prevents other nodes from solving (5) or modifying the central posterior, but does not prevent other nodes from reading the central posterior, obtaining minibatches, or performing inference; the synthetic experiment in Section 4 shows that this does not incur a significant time penalty in practice.
Then the processing node transmits σ⋆ and its minibatch variational posterior to the central processing node, where the product decomposition (1) is used to find the updated central variational posterior q in (3), with parameters

K = max(Ki, max_{k∈[Km]} σ⋆(k)),   ζ(z) = ζi(zi) ζo(zo) ζσ⋆m(zm),   ηk = η̃ik + η̃σ⋆mk − η̃ok.   (7)

Finally, the node unlocks the central posterior, and the next processing node to receive a new minibatch will use the above K, ζ(z), and ηk from the central node as its Ko, ζo(zo), and ηok. 3 Application to the Dirichlet process mixture model The expectation in the objective of (5) is typically intractable to compute in closed form; therefore, a suitable lower bound may be used in its place. This section presents such a bound for the Dirichlet process, and discusses the application of the proposed inference framework to the Dirichlet process mixture model using the developed bound. Crucially, the lower bound decomposes such that the optimization (5) becomes a maximum-weight bipartite matching problem. Such problems are solvable in polynomial time [22] by the Hungarian algorithm, leading to a tractable component identification step in the proposed streaming, distributed framework. 3.1 Regularization lower bound For the Dirichlet process with concentration parameter α > 0, p(zi, zm, zo) is the Exchangeable Partition Probability Function (EPPF) [23]

p(zi, zm, zo) ∝ α^{|K|−1} ∏_{k∈K} (nk − 1)!,   (8)

where nk is the amount of data assigned to cluster k, and K is the set of labels of nonempty clusters.
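The prior term (8) is straightforward to evaluate in log space from the cluster sizes; a minimal sketch of the unnormalized log-EPPF (the cluster sizes and α below are illustrative):

```python
from math import lgamma, log

# Log of the unnormalized DP Exchangeable Partition Probability Function (8):
# p(z) ∝ alpha^(|K|-1) * prod_k (n_k - 1)!  over the nonempty clusters,
# so log p(z) = (|K|-1) log(alpha) + sum_k lnGamma(n_k), up to normalization.
def log_eppf(cluster_sizes, alpha):
    K = len(cluster_sizes)
    return (K - 1) * log(alpha) + sum(lgamma(n) for n in cluster_sizes)

# Three nonempty clusters of sizes 5, 3, 2 with concentration alpha = 1:
print(round(log_eppf([5, 3, 2], 1.0), 4))  # 3.8712
```

Working in log space with lnGamma avoids the overflow that the raw factorials would cause for the cluster sizes arising in practice.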
Given that the variational distribution ζr(zr), r ∈ {i, m, o}, is a product of independent categorical distributions ζr(zr) = ∏_{j=1}^{Nr} ∏_{k=1}^{Kr} π_{rjk}^{1[zrj=k]}, Jensen’s inequality may be used to bound the regularization in (5) from below (see the supplement for further details) by E^σ_ζ[log p(zi, zm, zo)] ≥ ∑_{k=1}^{Ki+K′m} [(1 − e^{s̃^σ_k}) log α + log Γ(max(2, t̃^σ_k))] + C, with s̃^σ_k = s̃ik + s̃^σ_mk + s̃ok, t̃^σ_k = t̃ik + t̃^σ_mk + t̃ok, (9) where C is a constant with respect to the component mapping σ, and s̃rk = ∑_{j=1}^{Nr} log(1 − πrjk) if k ≤ Kr and 0 if k > Kr, t̃rk = ∑_{j=1}^{Nr} πrjk if k ≤ Kr and 0 if k > Kr, for r ∈ {o, i, m}; s̃^σ_mk = ∑_{j=1}^{Nm} log(1 − π_{mjσ−1(k)}) if k ∈ σ([Km]) and 0 otherwise, t̃^σ_mk = ∑_{j=1}^{Nm} π_{mjσ−1(k)} if k ∈ σ([Km]) and 0 otherwise. (10) Note that the bound (9) allows incremental updates: after finding the optimal mapping σ⋆, the central update (7) can be augmented by updating the values of sk and tk on the central node to sk ← s̃ik + s̃^{σ⋆}_mk + s̃ok, tk ← t̃ik + t̃^{σ⋆}_mk + t̃ok. (11) Figure 2: The Dirichlet process regularization and lower bound, with (2a) fully uncertain labelling and varying number of clusters, and (2b) the number of clusters fixed with varying labelling uncertainty. As with K, ηk, and ζ from (7), after performing the regularization statistics update (11), a processing node that receives a new minibatch will use the above sk and tk as its sok and tok, respectively. Figure 2 demonstrates the behavior of the lower bound in a synthetic experiment with N = 100 datapoints for various DP concentration parameter values α ∈ [10^{−3}, 10^{3}]. The true regularization log Eζ[p(z)] was computed by sample approximation with 10^4 samples. In Figure 2a, the number of clusters K was varied, with symmetric categorical label weights set to 1/K.
This figure demonstrates two important phenomena. First, the bound increases as K → 0; in other words, it gives preference to fewer, larger clusters, which is the typical BNP “rich get richer” property. Second, the behavior of the bound as K → N depends on the concentration parameter α: as α increases, more clusters are preferred. In Figure 2b, the number of clusters K was fixed to 10, and the categorical label weights were sampled from a symmetric Dirichlet distribution with parameter γ ∈ [10^{−3}, 10^{3}]. This figure demonstrates that the bound does not degrade significantly with high labelling uncertainty, and is nearly exact for low labelling uncertainty. Overall, Figure 2a demonstrates that the proposed lower bound exhibits similar behaviors to the true regularization, supporting its use in the optimization (5). 3.2 Solving the component identification optimization Given that both the regularization (9) and the component matching score in the objective (5) decompose as a sum of terms for each k ∈ [Ki + K′m], the objective can be rewritten using a matrix of matching scores R ∈ R^{(Ki+K′m)×(Ki+K′m)} and selector variables X ∈ {0, 1}^{(Ki+K′m)×(Ki+K′m)}. Setting Xkj = 1 indicates that component k in the minibatch posterior is matched to component j in the intermediate posterior (i.e. σ(k) = j), providing a score Rkj defined using (6) and (10) as Rkj = A(η̃ij + η̃mk − η̃oj) + (1 − e^{s̃ij + s̃mk + s̃oj}) log α + log Γ(max(2, t̃ij + t̃mk + t̃oj)). (12) The optimization (5) can be rewritten in terms of X and R as X⋆ ← argmax_X tr(X^T R) s.t. X1 = 1, X^T 1 = 1, Xkk = 1 ∀k ∈ [Ko], X ∈ {0, 1}^{(Ki+K′m)×(Ki+K′m)}, 1 = [1, . . . , 1]^T. (13) The first two constraints express the 1-to-1 property of σ(·). The constraint Xkk = 1 ∀k ∈ [Ko] fixes the upper Ko × Ko block of X to I (because the first Ko components are matched directly), and the off-diagonal blocks to 0.
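Each entry of R combines a log-partition term with the regularization statistics. A sketch of this bookkeeping (the log-partition function `log_partition` is model-supplied, and the helper names are illustrative assumptions):

```python
import numpy as np
from math import exp, lgamma, log

def reg_stats(pi):
    """Statistics from (10) for one posterior: pi is an (N, K) array of
    categorical label probabilities; returns the per-component (s_k, t_k)."""
    return np.log1p(-pi).sum(axis=0), pi.sum(axis=0)

def match_score(eta_i, eta_m, eta_o, s, t, alpha, log_partition):
    """One entry R_kj of (12); s and t are the summed statistics
    s_ij + s_mk + s_oj and t_ij + t_mk + t_oj for the candidate match."""
    return (log_partition(eta_i + eta_m - eta_o)
            + (1.0 - exp(s)) * log(alpha)
            + lgamma(max(2.0, t)))
```

Because the statistics are plain column sums, they can be accumulated per minibatch and merged with the same additive update as the natural parameters, which is exactly what (11) exploits.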
Denoting by X′, R′ the lower right (K′i + K′m) × (K′i + K′m) blocks of X, R, the remaining optimization problem is a linear assignment problem on X′ with cost matrix −R′, which can be solved using the Hungarian algorithm (for the experiments in this work, we used the implementation at github.com/hrldcpr/hungarian). Note that if Km = Ko or Ki = Ko, no matching problem needs to be solved: the first Ko components of the minibatch posterior are matched directly, and the last K′m are set as new components. In practical implementations of the framework, new clusters are typically discovered at a diminishing rate as more data are observed, so the number of matching problems that are solved likewise tapers off. The final optimal component mapping σ⋆ is found from the nonzero elements of X⋆: σ⋆(k) ← argmax_j X⋆_kj, ∀k ∈ [Km]. (14) Figure 3: Synthetic results over 30 trials. Panels: (a) SDA-DP without component ID; (b) SDA-DP with component ID; (c) test log-likelihood traces; (d) CPU time for component ID; (e) cluster/component ID counts; (f) final cluster/component ID counts. (3a-3b) Computation time and test log likelihood for SDA-DP with varying numbers of parallel threads, with component identification disabled (3a) and enabled (3b).
(3c) Test log likelihood traces for SDA-DP (40 threads) and the comparison algorithms. (3d) Histogram of computation time (in microseconds) to solve the component identification optimization. (3e) Number of clusters and number of component identification problems solved as a function of the number of minibatch updates (40 threads). (3f) Final number of clusters and matchings solved with varying numbers of parallel threads. 4 Experiments In this section, the proposed inference framework is evaluated on the DP Gaussian mixture with a normal-inverse-Wishart (NIW) prior. We compare the streaming, distributed procedure coupled with standard variational inference [24] (SDA-DP) to five state-of-the-art inference algorithms: memoized online variational inference (moVB) [13], stochastic online variational inference (SVI) [9] with learning rate (t + 10)^{−1/2}, sequential variational approximation (SVA) [7] with cluster creation threshold 10^{−1} and prune/merge threshold 10^{−3}, subcluster splits MCMC (SC) [14], and batch variational inference (Batch) [24]. Priors were set by hand and all methods were initialized randomly. Methods that use multiple passes through the data (e.g. moVB, SVI) were allowed to do so. moVB was allowed to make birth/death moves, while SVI/Batch had fixed truncations. All experiments were performed on a computer with 24 CPU cores and 12 GiB of RAM. Synthetic: This dataset consisted of 100,000 2-dimensional vectors generated from a Gaussian mixture model with 100 clusters and a NIW(µ0, κ0, Ψ0, ν0) prior with µ0 = 0, κ0 = 10^{−3}, Ψ0 = I, and ν0 = 4. The algorithms were given the true NIW prior, DP concentration α = 5, and minibatches of size 50. SDA-DP minibatch inference was truncated to K = 50 components, and all other algorithms were truncated to K = 200 components. Figure 3 shows the results from the experiment over 30 trials, which illustrate a number of important properties of SDA-DP.
First and foremost, ignoring the component identification problem leads to decreasing model quality with an increasing number of parallel threads, since more matching mistakes are made (Figure 3a). Second, if component identification is properly accounted for using the proposed optimization, increasing the number of parallel threads reduces execution time, but does not affect the final model quality (Figure 3b). Third, SDA-DP (with 40 threads) converges to the same final test log likelihood as the comparison algorithms in significantly reduced time (Figure 3c). Fourth, each component identification optimization typically takes ∼10^{−5} seconds, and thus matching accounts for less than a millisecond of total computation and does not affect the overall computation time significantly (Figure 3d). Fifth, the majority of the component matching problems are solved within the first 80 minibatch updates (out of a total of 2,000) – afterwards, the true clusters have all been discovered and the processing nodes contribute to those clusters rather than creating new ones, as per the discussion at the end of Section 3.2 (Figure 3e). Finally, increased parallelization can be advantageous in discovering the correct number of clusters; with only one thread, mistakes made early on are built upon and persist, whereas with more threads there are more component identification problems solved, and thus more chances to discover the correct clusters (Figure 3f).
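The component identification step is a small linear assignment problem with the block structure described for (13). A sketch using SciPy's Hungarian-algorithm solver in place of the implementation cited in the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_component_id(R, K_o):
    """Solve (13): the upper K_o x K_o block of X is fixed to the identity,
    and the lower-right block of -R is a linear assignment problem.
    Returns sigma with sigma[k] = central index matched to component k."""
    K = R.shape[0]
    sigma = np.arange(K)                       # identity on the first K_o components
    if K > K_o:
        rows, cols = linear_sum_assignment(-R[K_o:, K_o:])  # maximize tr(X^T R)
        sigma[K_o + rows] = K_o + cols
    return sigma
```

`linear_sum_assignment` minimizes total cost, so negating the score block recovers the maximization in (13).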
Figure 4: (4a-4b) Highest-probability instances and counts for 10 trajectory clusters generated by SDA-DP. (4c) Highest-probability instances for 20 clusters discovered by SDA-DP on MNIST. (4d) Numerical results on Airplane, MNIST, and SUN:

Algorithm   Airplane Time (s)   Airplane TestLL   MNIST Time (s)   MNIST TestLL   SUN Time (s)   SUN TestLL
SDA-DP      0.66                -0.55             3.0              -145.3         9.4            -150.3
SVI         1.50                -0.59             117.4            -147.1         568.9          -149.9
SVA         3.00                -4.71             57.0             -145.0         10.4           -152.8
moVB        0.69                -0.72             645.9            -149.2         1258.1         -149.7
SC          393.6               -1.06             1639.1           -146.8         1618.4         -150.6
Batch       1.07                -0.72             829.6            -149.5         1881.5         -149.7

Airplane Trajectories: This dataset consisted of ∼3,000,000 automatic dependent surveillance broadcast (ADS-B) messages collected from planes across the United States during the period 2013-03-22 01:30:00 UTC to 2013-03-28 12:00:00 UTC. The messages were connected based on plane call sign and time stamp, and erroneous trajectories were filtered based on reasonable spatial/temporal bounds, yielding 15,022 trajectories with 1,000 held out for testing. The latitude/longitude points in each trajectory were fit via linear regression, and the 3-dimensional parameter vectors were clustered. Data was split into minibatches of size 100, and SDA-DP used 16 parallel threads. MNIST Digits [25]: This dataset consisted of 70,000 28 × 28 images of hand-written digits, with 10,000 held out for testing. The images were reduced to 20 dimensions with PCA prior to clustering. Data was split into minibatches of size 500, and SDA-DP used 48 parallel threads. SUN Images [26]: This dataset consisted of 108,755 images from 397 scene categories, with 8,755 held out for testing. The images were reduced to 20 dimensions with PCA prior to clustering. Data was split into minibatches of size 500, and SDA-DP used 48 parallel threads.
Figure 4 shows the results from the experiments on the three real datasets. From a qualitative standpoint, SDA-DP discovers sensible clusters in the data, as demonstrated in Figures 4a–4c. However, an important quantitative result is highlighted by Table 4d: the larger a dataset is, the more apparent the benefits of parallelism provided by SDA-DP become. SDA-DP consistently provides a model quality that is competitive with the other algorithms, but requires orders of magnitude less computation time, corroborating similar findings on the synthetic dataset. 5 Conclusions This paper presented a streaming, distributed, asynchronous inference algorithm for Bayesian nonparametric models, with a focus on the combinatorial problem of matching minibatch posterior components to central posterior components during asynchronous updates. The main contributions are a component identification optimization based on a minibatch posterior decomposition, a tractable bound on the objective for the Dirichlet process mixture, and experiments demonstrating the performance of the methodology on large-scale datasets. While the present work focused on the DP mixture as a guiding example, it is not limited to this model – exploring the application of the proposed methodology to other BNP models is a potential area for future research. Acknowledgments This work was supported by the Office of Naval Research under ONR MURI grant N000141110688. References
[1] Agostino Nobile. Bayesian Analysis of Finite Mixture Distributions. PhD thesis, Carnegie Mellon University, 1994.
[2] Jeffrey W. Miller and Matthew T. Harrison. A simple example of Dirichlet process mixture inconsistency for the number of components. In Advances in Neural Information Processing Systems 26, 2013.
[3] Yee Whye Teh. Dirichlet processes. In Encyclopedia of Machine Learning. Springer, New York, 2010.
[4] Thomas L. Griffiths and Zoubin Ghahramani. Infinite latent feature models and the Indian buffet process. In Advances in Neural Information Processing Systems 22, 2005.
[5] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. Streaming variational Bayes. In Advances in Neural Information Processing Systems 26, 2013.
[6] Trevor Campbell and Jonathan P. How. Approximate decentralized Bayesian inference. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[7] Dahua Lin. Online learning of nonparametric mixture models via sequential variational approximation. In Advances in Neural Information Processing Systems 26, 2013.
[8] Xiaole Zhang, David J. Nott, Christopher Yau, and Ajay Jasra. A sequential algorithm for fast fitting of Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 23(4):1143–1162, 2014.
[9] Matt Hoffman, David Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14:1303–1347, 2013.
[10] Chong Wang, John Paisley, and David M. Blei. Online variational inference for the hierarchical Dirichlet process. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics, 2011.
[11] Michael Bryant and Erik Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems 23, 2009.
[12] Chong Wang and David Blei. Truncation-free stochastic variational inference for Bayesian nonparametric models. In Advances in Neural Information Processing Systems 25, 2012.
[13] Michael Hughes and Erik Sudderth. Memoized online variational inference for Dirichlet process mixture models. In Advances in Neural Information Processing Systems 26, 2013.
[14] Jason Chang and John Fisher III. Parallel sampling of DP mixture models using sub-cluster splits. In Advances in Neural Information Processing Systems 26, 2013.
[15] Willie Neiswanger, Chong Wang, and Eric P. Xing. Asymptotically exact, embarrassingly parallel MCMC. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[16] Carlos M. Carvalho, Hedibert F. Lopes, Nicholas G. Polson, and Matt A. Taddy. Particle learning for general mixtures. Bayesian Analysis, 5(4):709–740, 2010.
[17] Matthew Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B, 62(4):795–809, 2000.
[18] Ajay Jasra, Chris Holmes, and David Stephens. Markov chain Monte Carlo methods and the label switching problem in Bayesian mixture modeling. Statistical Science, 20(1):50–67, 2005.
[19] Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[20] Finale Doshi-Velez and Zoubin Ghahramani. Accelerated sampling for the Indian buffet process. In Proceedings of the International Conference on Machine Learning, 2009.
[21] Avinava Dubey, Sinead Williamson, and Eric Xing. Parallel Markov chain Monte Carlo for Pitman-Yor mixture models. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, 2014.
[22] Jack Edmonds and Richard Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the Association for Computing Machinery, 19:248–264, 1972.
[23] Jim Pitman. Exchangeable and partially exchangeable random partitions. Probability Theory and Related Fields, 102(2):145–158, 1995.
[24] David M. Blei and Michael I. Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–144, 2006.
[25] Yann LeCun, Corinna Cortes, and Christopher J.C. Burges. MNIST database of handwritten digits. Online: yann.lecun.com/exdb/mnist.
[26] Jianxiong Xiao, James Hays, Krista A. Ehinger, Aude Oliva, and Antonio Torralba. SUN 397 image database. Online: vision.cs.princeton.edu/projects/2010/SUN.
Tree-Guided MCMC Inference for Normalized Random Measure Mixture Models Juho Lee and Seungjin Choi Department of Computer Science and Engineering Pohang University of Science and Technology 77 Cheongam-ro, Nam-gu, Pohang 37673, Korea {stonecold,seungjin}@postech.ac.kr Abstract Normalized random measures (NRMs) provide a broad class of discrete random measures that are often used as priors for Bayesian nonparametric models. The Dirichlet process is a well-known example of an NRM. Most posterior inference methods for NRM mixture models rely on MCMC methods since they are easy to implement and their convergence is well studied. However, MCMC often suffers from slow convergence when the acceptance rate is low. Tree-based inference is an alternative deterministic posterior inference method, where Bayesian hierarchical clustering (BHC) or incremental Bayesian hierarchical clustering (IBHC) have been developed for DP or NRM mixture (NRMM) models, respectively. Although IBHC is a promising method for posterior inference for NRMM models due to its efficiency and applicability to online inference, its convergence is not guaranteed since it uses heuristics that simply select the best solution after multiple trials are made. In this paper, we present a hybrid inference algorithm for NRMM models, which combines the merits of both MCMC and IBHC. Trees built by IBHC outline partitions of data, which guide the Metropolis-Hastings procedure to employ appropriate proposals. Inheriting the nature of MCMC, our tree-guided MCMC (tgMCMC) is guaranteed to converge, and enjoys fast convergence thanks to the effective proposals guided by trees. Experiments on both synthetic and real-world datasets demonstrate the benefit of our method. 1 Introduction Normalized random measures (NRMs) form a broad class of discrete random measures, including the Dirichlet process (DP) [1], the normalized inverse Gaussian process [2], and the normalized generalized Gamma process [3, 4].
The NRM mixture (NRMM) model [5] is a representative example, where an NRM is used as a prior for mixture models. Recently, NRMs were extended to dependent NRMs (DNRMs) [6, 7] to model data where exchangeability fails. The posterior analysis for NRMM models has been developed [8, 9], yielding simple MCMC methods [10]. As in DP mixture (DPM) models [11], there are two paradigms in the MCMC algorithms for NRMM models: (1) marginal samplers and (2) slice samplers. The marginal samplers simulate the posterior distributions of partitions and cluster parameters given data (or just partitions given data, provided that conjugate priors are assumed) by marginalizing out the random measures. The marginal samplers include the Gibbs sampler [10] and the split-merge sampler [12], although the latter was not formally extended to NRMM models. The slice sampler [13] maintains random measures and explicitly samples the weights and atoms of the random measures. The term “slice” comes from the auxiliary slice variables used to control the number of atoms to be used. The slice sampler is known to mix faster than the marginal Gibbs sampler when applied to complicated DNRM mixture models where the evaluation of the marginal distribution is costly [7]. The main drawback of MCMC methods for NRMM models is their poor scalability, due to the nature of MCMC methods. Moreover, since the marginal Gibbs sampler and slice sampler iteratively sample the cluster assignment variable for a single data point at a time, they easily get stuck in local optima. The split-merge sampler may resolve the local optima problem to some extent, but is still problematic for large-scale datasets since the samples proposed by split or merge procedures are rarely accepted. Recently, a deterministic alternative to MCMC algorithms for NRM (or DNRM) mixture models was proposed [14], extending Bayesian hierarchical clustering (BHC) [15], which was developed as a tree-based inference method for DP mixture models.
The algorithm, referred to as incremental BHC (IBHC) [14], builds binary trees that reflect the hierarchical cluster structure of datasets by evaluating the approximate marginal likelihood of NRMM models, and is well suited for incremental inference on large-scale or streaming datasets. The key idea of IBHC is to consider only exponentially many posterior samples (which are represented as binary trees), instead of drawing an indefinite number of samples as in MCMC methods. However, IBHC depends on a heuristic that chooses the best trees after multiple trials, and thus is not guaranteed to converge to the true posterior distribution. In this paper, we propose a novel MCMC algorithm that elegantly combines IBHC and MCMC methods for NRMM models. Our algorithm, called tree-guided MCMC, utilizes the trees built by IBHC to propose good-quality posterior samples efficiently. The trees contain useful information such as dissimilarities between clusters, so errors in cluster assignments may be detected and corrected with less effort. Moreover, designed as an MCMC method, our algorithm is guaranteed to converge to the true posterior, which was not possible for IBHC. We demonstrate the efficiency and accuracy of our algorithm by comparing it to existing MCMC algorithms. 2 Background Throughout this paper we use the following notation. Denote by [n] = {1, . . . , n} a set of indices and by X = {xi | i ∈ [n]} a dataset. A partition Π[n] of [n] is a set of disjoint nonempty subsets of [n] whose union is [n]. A cluster c is an entry of Π[n], i.e., c ∈ Π[n]. The data points in cluster c are denoted by Xc = {xi | i ∈ c} for c ∈ Π[n]. For the sake of simplicity, we often use i to represent the singleton {i} for i ∈ [n]. In this section, we briefly review NRMM models and existing posterior inference methods such as MCMC and IBHC.
2.1 Normalized random measure mixture models Let µ be a homogeneous completely random measure (CRM) on a measure space (Θ, F) with Lévy intensity ρ and base measure H, written as µ ∼ CRM(ρH). We also assume that ∫_0^∞ ρ(dw) = ∞ and ∫_0^∞ (1 − e^{−w}) ρ(dw) < ∞, (1) so that µ has infinitely many atoms and the total mass µ(Θ) is finite: µ = ∑_{j=1}^∞ wj δ_{θ⋆_j}, µ(Θ) = ∑_{j=1}^∞ wj < ∞. An NRM is then formed by normalizing µ by its total mass µ(Θ). For each index i ∈ [n], we draw the corresponding atom from the NRM, θi | µ ∼ µ/µ(Θ). Since µ is discrete, the set {θi | i ∈ [n]} naturally forms a partition of [n] with respect to the assigned atoms. We write the partition as a set of sets Π[n] whose elements are non-empty and non-overlapping subsets of [n], and the union of the elements is [n]. We index the elements (clusters) of Π[n] with the symbol c, and denote the unique atom assigned to c as θc. Summarizing the set {θi | i ∈ [n]} as (Π[n], {θc | c ∈ Π[n]}), the posterior random measure is written as follows: Theorem 1. ([9]) Let (Π[n], {θc | c ∈ Π[n]}) be samples drawn from µ/µ(Θ) where µ ∼ CRM(ρH). With an auxiliary variable u ∼ Gamma(n, µ(Θ)), the posterior random measure is written as µ|u + ∑_{c∈Π[n]} wc δ_{θc}, (2) where ρu(dw) := e^{−uw} ρ(dw), µ|u ∼ CRM(ρu H), P(dwc) ∝ wc^{|c|} ρu(dwc). (3) Moreover, the marginal distribution is written as P(Π[n], {dθc | c ∈ Π[n]}, du) = (u^{n−1} e^{−ψρ(u)} du / Γ(n)) ∏_{c∈Π[n]} κρ(|c|, u) H(dθc), (4) where ψρ(u) := ∫_0^∞ (1 − e^{−uw}) ρ(dw), κρ(|c|, u) := ∫_0^∞ w^{|c|} ρu(dw). (5) Using (4), the predictive distribution for a novel atom θ is written as P(dθ | {θi}, u) ∝ κρ(1, u) H(dθ) + ∑_{c∈Π[n]} [κρ(|c| + 1, u) / κρ(|c|, u)] δ_{θc}(dθ). (6) The most general CRM that may be used is the generalized Gamma [3], with Lévy intensity ρ(dw) = (ασ/Γ(1 − σ)) w^{−σ−1} e^{−w} dw. In NRMM models, the observed dataset X is assumed to be generated from a likelihood P(dx|θ) with parameters {θi} drawn from the NRM. We focus on the conjugate case where H is conjugate to P(dx|θ), so that the integral P(dXc) := ∫_Θ H(dθ) ∏_{i∈c} P(dxi|θ) is tractable.
2.2 MCMC Inference for NRMM models The goal of posterior inference for NRMM models is to compute the posterior P(Π[n], {dθc}, du | X) with the marginal likelihood P(dX). Marginal Gibbs sampler: the marginal Gibbs sampler is based on the predictive distribution (6). At each iteration, the cluster assignment for each data point is sampled, where xi may join an existing cluster c with probability proportional to [κρ(|c| + 1, u)/κρ(|c|, u)] P(dxi|Xc), or create a novel cluster with probability proportional to κρ(1, u) P(dxi). Slice sampler: instead of marginalizing out µ, the slice sampler explicitly samples the atoms and weights {wj, θ⋆_j} of µ. Since maintaining infinitely many atoms is infeasible, slice variables {si} are introduced for each data point, and atoms with masses larger than a threshold (usually set as min_{i∈[n]} si) are kept, while the remaining atoms are added on the fly as the threshold changes. At each iteration, xi is assigned to the jth atom with probability proportional to 1[si < wj] P(dxi|θ⋆_j). Split-merge sampler: both the marginal Gibbs and slice samplers alter a single cluster assignment at a time, so they are prone to local optima. The split-merge sampler, originally developed for DPM, is a marginal sampler that is based on (6). At each iteration, instead of changing individual cluster assignments, the split-merge sampler splits or merges clusters to propose a new partition. The split or merged partition is proposed by a procedure called restricted Gibbs sampling, which is Gibbs sampling restricted to the clusters to split or merge. The proposed partitions are accepted or rejected according to Metropolis-Hastings schemes. Split-merge samplers are reported to mix better than the marginal Gibbs sampler. 2.3 IBHC Inference for NRMM models Bayesian hierarchical clustering (BHC, [15]) is a probabilistic model-based agglomerative clustering method, where the marginal likelihood of DPM is evaluated to measure the dissimilarity between nodes.
Like traditional agglomerative clustering algorithms, BHC repeatedly merges the pair of nodes with the smallest dissimilarity, and builds binary trees embedding the hierarchical cluster structure of datasets. BHC defines the generative probability of binary trees, which is maximized during the construction of the tree, and this generative probability provides a lower bound on the marginal likelihood of DPM. For this reason, BHC is considered to be a posterior inference algorithm for DPM. Incremental BHC (IBHC, [14]) is an extension of BHC to (dependent) NRMM models. Just as BHC is a deterministic posterior inference algorithm for DPM, IBHC serves as a deterministic posterior inference algorithm for NRMM models. Unlike the original BHC that greedily builds trees, IBHC sequentially inserts data points into trees, yielding a scalable algorithm that is well suited for online inference. We first explain the generative model of trees, and then explain the sequential algorithm of IBHC. Figure 1: (Left) in IBHC, a new data point is inserted into one of the trees, or creates a novel tree. (Middle) three possible cases in SeqInsert. (Right) after the insertion, the potential functions for the nodes in the blue bold path should be updated. If an updated d(·, ·) > 1, the tree is split at that level. IBHC aims to maximize the joint probability of the data X and the auxiliary variable u: P(dX, du) = (u^{n−1} e^{−ψρ(u)} du / Γ(n)) ∑_{Π[n]} ∏_{c∈Π[n]} κρ(|c|, u) P(dXc). (7) Let tc be a binary tree whose leaf nodes consist of the indices in c. Let l(c) and r(c) denote the left and right children of the set c in the tree, so that the corresponding subtrees are denoted by t_{l(c)} and t_{r(c)}. The generative probability of trees is described with the potential function [14], which is an unnormalized reformulation of the original definition [15].
The potential function of the data Xc given the tree tc is recursively defined as follows: φ(Xc|hc) := κρ(|c|, u) P(dXc), φ(Xc|tc) = φ(Xc|hc) + φ(X_{l(c)}|t_{l(c)}) φ(X_{r(c)}|t_{r(c)}). (8) Here, hc is the hypothesis that Xc was generated from a single cluster. The first term φ(Xc|hc) is proportional to the probability that hc is true, and comes from the term inside the product of (7). The second term is proportional to the probability that Xc was generated from two or more clusters embedded in the subtrees t_{l(c)} and t_{r(c)}. The posterior probability of hc is then computed as P(hc|Xc, tc) = 1 / (1 + d(l(c), r(c))), where d(l(c), r(c)) := φ(X_{l(c)}|t_{l(c)}) φ(X_{r(c)}|t_{r(c)}) / φ(Xc|hc). (9) d(·, ·) is defined to be the dissimilarity between l(c) and r(c). In the greedy construction, the pair of nodes with the smallest d(·, ·) is merged at each iteration. When the minimum dissimilarity exceeds one (P(hc|Xc, tc) < 0.5), hc is concluded to be false and the construction stops. This is an important mechanism of BHC (and IBHC) that naturally selects the proper number of clusters. From the perspective of posterior inference, this stopping corresponds to selecting the MAP partition that maximizes P(Π[n]|X, u). If the tree is built and the potential function is computed for the entire dataset X, a lower bound on the joint likelihood (7) is obtained [15, 14]: (u^{n−1} e^{−ψρ(u)} du / Γ(n)) φ(X|t[n]) ≤ P(dX, du). (10) Now we explain the sequential tree construction of IBHC. IBHC constructs a tree in an incremental manner by inserting a new data point into an appropriate position of the existing tree, without computing dissimilarities between every pair of nodes. The procedure, which comprises three steps, is elucidated in Fig. 1. Step 1 (left): Given {x1, . . . , xi−1}, suppose that trees have been built by IBHC, yielding a partition Π[i−1]. When a new data point xi arrives, this step assigns xi to the tree t_ĉ with the smallest distance, i.e., ĉ = arg min_{c∈Π[i−1]} d(i, c), or creates a new tree ti if d(i, ĉ) > 1.
Step 2 (middle): Suppose that the tree chosen in Step 1 is tc. Then Step 2 determines an appropriate position for xi when it is inserted into the tree tc, and this is done by the procedure SeqInsert(c, i). SeqInsert(c, i) chooses the position of i among three cases (Fig. 1). Case 1 elucidates an option where xi is placed on top of the tree tc. Cases 2 and 3 show options where xi is added as a sibling of the subtree t_{l(c)} or t_{r(c)}, respectively. Among these three cases, the one with the highest potential function φ(X_{c∪i}|t_{c∪i}) is selected, which can easily be done by comparing d(l(c), r(c)), d(l(c), i) and d(r(c), i) [14]. If d(l(c), r(c)) is the smallest, then Case 1 is selected and the insertion terminates. Otherwise, if d(l(c), i) is the smallest, xi is inserted into t_{l(c)} and SeqInsert(l(c), i) is recursively executed. The same procedure is applied to the case where d(r(c), i) is the smallest.
The Self-Normalized Estimator for Counterfactual Learning Adith Swaminathan Department of Computer Science Cornell University adith@cs.cornell.edu Thorsten Joachims Department of Computer Science Cornell University tj@cs.cornell.edu Abstract This paper identifies a severe problem of the counterfactual risk estimator typically used in batch learning from logged bandit feedback (BLBF), and proposes the use of an alternative estimator that avoids this problem. In the BLBF setting, the learner does not receive full-information feedback as in supervised learning, but observes feedback only for the actions taken by a historical policy. This makes BLBF algorithms particularly attractive for training online systems (e.g., ad placement, web search, recommendation) using their historical logs. The Counterfactual Risk Minimization (CRM) principle [1] offers a general recipe for designing BLBF algorithms. It requires a counterfactual risk estimator, and virtually all existing works on BLBF have focused on a particular unbiased estimator. We show that this conventional estimator suffers from a propensity overfitting problem when used for learning over complex hypothesis spaces. We propose to replace the risk estimator with a self-normalized estimator, showing that it neatly avoids this problem. This naturally gives rise to a new learning algorithm – Normalized Policy Optimizer for Exponential Models (Norm-POEM) – for structured output prediction using linear rules. We evaluate the empirical effectiveness of Norm-POEM on several multi-label classification problems, finding that it consistently outperforms the conventional estimator. 1 Introduction Most interactive systems (e.g. search engines, recommender systems, ad platforms) record large quantities of log data which contain valuable information about the system’s performance and user experience.
For example, the logs of an ad-placement system record which ad was presented in a given context and whether the user clicked on it. While these logs contain information that should inform the design of future systems, the log entries do not provide supervised training data in the conventional sense. This prevents us from directly employing supervised learning algorithms to improve these systems. In particular, each entry only provides bandit feedback since the loss/reward is only observed for the particular action chosen by the system (e.g. the presented ad) but not for all the other actions the system could have taken. Moreover, the log entries are biased since actions that are systematically favored by the system will be over-represented in the logs. Learning from historical log data can be formalized as batch learning from logged bandit feedback (BLBF) [2, 1]. Unlike the well-studied problem of online learning from bandit feedback [3], this setting does not require the learner to have interactive control over the system. Learning in such a setting is closely related to the problem of off-policy evaluation in reinforcement learning [4] – we would like to know how well a new system (policy) would perform if it had been used in the past. This motivates the use of counterfactual estimators [5]. Following an approach analogous to Empirical Risk Minimization (ERM), it was shown that such estimators can be used to design learning algorithms for batch learning from logged bandit feedback [6, 5, 1]. However, the conventional counterfactual risk estimator used in prior works on BLBF exhibits severe anomalies that can lead to degeneracies when used in ERM. In particular, the estimator exhibits a new form of Propensity Overfitting that causes severely biased risk estimates for the ERM minimizer. By introducing multiplicative control variates, we propose to replace this risk estimator with a Self-Normalized Risk Estimator that provably avoids these degeneracies.
An extensive empirical evaluation confirms that the desirable theoretical properties of the Self-Normalized Risk Estimator translate into improved generalization performance and robustness.

2 Related work

Batch learning from logged bandit feedback is an instance of causal inference. Classic inference techniques like propensity score matching [7] are, hence, immediately relevant. BLBF is closely related to the problem of learning under covariate shift (also called domain adaptation or sample bias correction) [8] as well as off-policy evaluation in reinforcement learning [4]. Lower bounds for domain adaptation [8] and impossibility results for off-policy evaluation [9], hence, also apply to propensity score matching [7], costing [10] and other importance sampling approaches to BLBF. Several counterfactual estimators have been developed for off-policy evaluation [11, 6, 5]. All these estimators are instances of importance sampling for Monte Carlo approximation and can be traced back to What-If simulations [12]. Learning (upper) bounds have been developed recently [13, 1, 14] that show that these estimators can work for BLBF. We additionally show that importance sampling can overfit in hitherto unforeseen ways with the capacity of the hypothesis space during learning. We call this new kind of overfitting Propensity Overfitting. Classic variance reduction techniques for importance sampling are also useful for counterfactual evaluation and learning. For instance, importance weights can be “clipped” [15] to trade off bias against variance in the estimators [5]. Additive control variates give rise to regression estimators [16] and doubly robust estimators [6]. Our proposal uses multiplicative control variates. These are widely used in financial applications (see [17] and references therein) and policy iteration for reinforcement learning (e.g. [18]).
In particular, we study the self-normalized estimator [12] which is superior to the vanilla estimator when fluctuations in the weights dominate the variance [19]. We additionally show that the self-normalized estimator neatly addresses propensity overfitting.

3 Batch learning from logged bandit feedback

Following [1], we focus on the stochastic, cardinal, contextual bandit setting and recap the essence of the CRM principle. The inputs of a structured prediction problem $x \in \mathcal{X}$ are drawn i.i.d. from a fixed but unknown distribution $\Pr(X)$. The outputs are denoted by $y \in \mathcal{Y}$. The hypothesis space $\mathcal{H}$ contains stochastic hypotheses $h(\mathcal{Y} \mid x)$ that define a probability distribution over $\mathcal{Y}$. A hypothesis $h \in \mathcal{H}$ makes predictions by sampling from the conditional distribution $y \sim h(\mathcal{Y} \mid x)$. This definition of $\mathcal{H}$ also captures deterministic hypotheses. For notational convenience, we denote the probability distribution $h(\mathcal{Y} \mid x)$ by $h(x)$, and the probability assigned by $h(x)$ to $y$ as $h(y|x)$. We use $(x, y) \sim h$ to refer to samples of $x \sim \Pr(X)$, $y \sim h(x)$, and when clear from the context, we will drop $(x, y)$. Bandit feedback means we only observe the feedback $\delta(x, y)$ for the specific $y$ that was predicted, but not for any of the other possible predictions $\mathcal{Y} \setminus \{y\}$. The feedback is just a number, called the loss $\delta : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}$. Smaller numbers are desirable. In general, the loss is the (noisy) realization of a stochastic random variable. The following exposition can be readily extended to the general case by setting $\delta(x, y) = \mathbb{E}[\delta \mid x, y]$. The expected loss – called risk – of a hypothesis $h$ is

$$R(h) = \mathbb{E}_{x \sim \Pr(X)} \mathbb{E}_{y \sim h(x)} [\delta(x, y)] = \mathbb{E}_h [\delta(x, y)]. \quad (1)$$

The aim of learning is to find a hypothesis $h \in \mathcal{H}$ that has minimum risk.

Counterfactual estimators. We wish to use the logs of a historical system to perform learning. To ensure that learning will not be impossible [9], we assume the historical algorithm whose predictions we record in our logged data is a stationary policy $h_0(x)$ with full support over $\mathcal{Y}$.
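The risk in Equation (1) is just an expectation, so it can be approximated by plain Monte Carlo whenever one can sample contexts and predictions. The following is a toy sketch; the two-context setting, the loss, and all function names are our own illustrative assumptions, not the paper's setup.

```python
import random

def risk(h_sample, sample_x, delta, n=100_000, seed=0):
    """Monte Carlo estimate of R(h) = E_{x ~ Pr(X)} E_{y ~ h(x)} [delta(x, y)]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_x(rng)
        y = h_sample(x, rng)   # y ~ h(x)
        total += delta(x, y)
    return total / n

# Toy setting: two contexts, two labels, h guesses uniformly, and the
# loss is -1 when the prediction matches the context.
sample_x = lambda rng: rng.randrange(2)
h_uniform = lambda x, rng: rng.randrange(2)
delta = lambda x, y: -1.0 if x == y else 0.0
```

Here the true risk of the uniform hypothesis is $-0.5$, since a uniform guess matches the context half the time.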
For a new hypothesis $h \neq h_0$, we cannot use the empirical risk estimator used in supervised learning [20] to directly approximate $R(h)$, because the data contains samples drawn from $h_0$ while the risk from Equation (1) requires samples from $h$. Importance sampling fixes this distribution mismatch,

$$R(h) = \mathbb{E}_h[\delta(x, y)] = \mathbb{E}_{h_0}\!\left[\delta(x, y)\, \frac{h(y|x)}{h_0(y|x)}\right].$$

So, with data collected from the historical system $\mathcal{D} = \{(x_1, y_1, \delta_1, p_1), \ldots, (x_n, y_n, \delta_n, p_n)\}$, where $(x_i, y_i) \sim h_0$, $\delta_i \equiv \delta(x_i, y_i)$ and $p_i \equiv h_0(y_i \mid x_i)$, we can derive an unbiased estimate of $R(h)$ via Monte Carlo approximation,

$$\hat{R}(h) = \frac{1}{n} \sum_{i=1}^{n} \delta_i \frac{h(y_i \mid x_i)}{p_i}. \quad (2)$$

This classic inverse propensity estimator [7] has unbounded variance: $p_i \simeq 0$ in $\mathcal{D}$ can cause $\hat{R}(h)$ to be arbitrarily far away from the true risk $R(h)$. To remedy this problem, several thresholding schemes have been proposed and studied in the literature [15, 8, 5, 11]. The straightforward option is to cap the propensity weights [15, 1], i.e. pick $M > 1$ and set

$$\hat{R}^M(h) = \frac{1}{n} \sum_{i=1}^{n} \delta_i \min\!\left\{M, \frac{h(y_i \mid x_i)}{p_i}\right\}.$$

Smaller values of $M$ reduce the variance of $\hat{R}^M(h)$ but induce a larger bias.

Counterfactual Risk Minimization. Importance sampling also introduces variance in $\hat{R}^M(h)$ through the variability of $\frac{h(y_i|x_i)}{p_i}$. This variance can be drastically different for different $h \in \mathcal{H}$. The CRM principle is derived from a generalization error bound that reasons about this variance using an empirical Bernstein argument [1, 13]. Let $\delta(\cdot, \cdot) \in [-1, 0]$ and consider the random variable $u_h = \delta(x, y) \min\{M, \frac{h(y|x)}{h_0(y|x)}\}$. Note that $\mathcal{D}$ contains $n$ i.i.d. observations of $u_h$.

Theorem 1. Denote the empirical variance of $u_h$ by $\widehat{Var}(u_h)$. With probability at least $1 - \gamma$ in the random vector $(x_i, y_i) \sim h_0$, for a stochastic hypothesis space $\mathcal{H}$ with capacity $C(\mathcal{H})$ and $n \geq 16$,

$$\forall h \in \mathcal{H}: \quad R(h) \leq \hat{R}^M(h) + \sqrt{\frac{18\, \widehat{Var}(u_h) \log\!\big(\frac{10 C(\mathcal{H})}{\gamma}\big)}{n}} + M\, \frac{15 \log\!\big(\frac{10 C(\mathcal{H})}{\gamma}\big)}{n - 1}.$$

Proof. Refer to Theorem 1 of [1] and the proof of Theorem 6 of [13].
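In code, the unbiased estimator of Equation (2) and its clipped variant are one-liners over the logged tuples $(x_i, y_i, \delta_i, p_i)$. This is a minimal sketch; `h_prob(x, y)` stands in for $h(y|x)$, and the two-event log and deterministic hypothesis are our own illustrative assumptions.

```python
def ips_risk(data, h_prob):
    """Unbiased IPS estimate, Eq. (2): mean of delta_i * h(y_i|x_i) / p_i."""
    return sum(d * h_prob(x, y) / p for (x, y, d, p) in data) / len(data)

def clipped_ips_risk(data, h_prob, M):
    """Clipped estimate: importance weights are capped at M > 1."""
    return sum(d * min(M, h_prob(x, y) / p) for (x, y, d, p) in data) / len(data)

# Two logged events where the deterministic hypothesis agrees with the log,
# giving importance weights 1/0.5 = 2 and 1/0.25 = 4.
data = [(0, 0, -1.0, 0.5), (1, 1, -1.0, 0.25)]
h_prob = lambda x, y: 1.0 if x == y else 0.0
```

With $M = 3$ the second weight is capped at 3 instead of 4, illustrating the bias-variance trade-off of the clipped estimator.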
Following Structural Risk Minimization [20], this bound motivates the CRM principle for designing algorithms for BLBF. A learning algorithm should jointly optimize the estimate $\hat{R}^M(h)$ as well as its empirical standard deviation, where the latter serves as a data-dependent regularizer:

$$\hat{h}^{CRM} = \underset{h \in \mathcal{H}}{\operatorname{argmin}} \left\{ \hat{R}^M(h) + \lambda \sqrt{\frac{\widehat{Var}(u_h)}{n}} \right\}. \quad (3)$$

$M > 1$ and $\lambda \geq 0$ are regularization hyper-parameters.

4 The Propensity Overfitting problem

The CRM objective in Equation (3) penalizes those $h \in \mathcal{H}$ that are “far” from the logging policy $h_0$ (as measured by their empirical variance $\widehat{Var}(u_h)$). This can be intuitively understood as a safeguard against overfitting. However, overfitting in BLBF is more nuanced than in conventional supervised learning. In particular, the unbiased risk estimator of Equation (2) has two anomalies. Even if $\delta(\cdot, \cdot) \in [\bigtriangledown, \bigtriangleup]$, the value of $\hat{R}(h)$ estimated on a finite sample need not lie in that range. Furthermore, if $\delta(\cdot, \cdot)$ is translated by a constant $\delta(\cdot, \cdot) + C$, $R(h)$ becomes $R(h) + C$ by linearity of expectation – but the unbiased estimator on a finite sample need not equal $\hat{R}(h) + C$. In short, this risk estimator is not equivariant [19]. The various thresholding schemes for importance sampling only exacerbate this effect. These anomalies leave us vulnerable to a peculiar kind of overfitting, as we see in the following example.

Example 1. For the input space of integers $\mathcal{X} = \{1..k\}$ and the output space $\mathcal{Y} = \{1..k\}$, define $\delta(x, y) = -2$ if $y = x$, and $-1$ otherwise. The hypothesis space $\mathcal{H}$ is the set of all deterministic functions $f : \mathcal{X} \to \mathcal{Y}$, with $h_f(y|x) = 1$ if $f(x) = y$, and $0$ otherwise. Data is drawn uniformly, $x \sim U(\mathcal{X})$ and $h_0(\mathcal{Y}|x) = U(\mathcal{Y})$ for all $x$. The hypothesis with minimum true risk is $h_{f^*}$ with $f^*(x) = x$, which has risk $R(h^*) = -2$. When drawing a training sample $\mathcal{D} = ((x_1, y_1, \delta_1, p_1), \ldots, (x_n, y_n, \delta_n, p_n))$, let us first consider the special case where all $x_i$ in the sample are distinct. This is quite likely if $n$ is small relative to $k$.
In this case $\mathcal{H}$ contains a hypothesis $h_{overfit}$, which assigns $f(x_i) = y_i$ for all $i$. This hypothesis has the following empirical risk as estimated by Equation (2):

$$\hat{R}(h_{overfit}) = \frac{1}{n} \sum_{i=1}^{n} \delta_i \frac{h_{overfit}(y_i \mid x_i)}{p_i} = \frac{1}{n} \sum_{i=1}^{n} \delta_i \frac{1}{1/k} \leq \frac{1}{n} \sum_{i=1}^{n} (-1) \frac{1}{1/k} = -k.$$

Clearly this risk estimate shows severe overfitting, since it can be arbitrarily lower than the true risk $R(h^*) = -2$ of the best hypothesis $h^*$ with appropriately chosen $k$ (or, more generally, the choice of $h_0$). This is in stark contrast to overfitting in full-information supervised learning, where at least the overfitted risk is bounded by the lower range of the loss function. Note that the empirical risk $\hat{R}(h^*)$ of $h^*$ concentrates around $-2$. ERM will, hence, almost always select $h_{overfit}$ over $h^*$. Even if we are not in the special case of having a sample with all distinct $x_i$, this type of overfitting still exists. In particular, if there are only $l$ distinct $x_i$ in $\mathcal{D}$, then there still exists a $h_{overfit}$ with $\hat{R}(h_{overfit}) \leq -\frac{k\,l}{n}$. Finally, note that this type of overfitting behavior is not an artifact of this example. Section 7 shows that this is ubiquitous in all the datasets we explored.

Maybe this problem could be avoided by transforming the loss? For example, let’s translate the loss by adding 2 to $\delta$ so that now all loss values become non-negative. This results in the new loss function $\delta'(x, y)$ taking values 0 and 1. In conventional supervised learning an additive translation of the loss does not change the empirical risk minimizer. Suppose we draw a sample $\mathcal{D}$ in which not all possible values $y$ for $x_i$ are observed for all $x_i$ in the sample (again, such a sample is likely for sufficiently large $k$). Now there are many hypotheses $h_{overfit'}$ that predict one of the unobserved $y$ for each $x_i$, basically avoiding the training data:

$$\hat{R}(h_{overfit'}) = \frac{1}{n} \sum_{i=1}^{n} \delta_i \frac{h_{overfit'}(y_i \mid x_i)}{p_i} = \frac{1}{n} \sum_{i=1}^{n} \delta_i \frac{0}{1/k} = 0.$$
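Example 1 is easy to verify numerically. The sketch below is a toy instantiation with our own parameter choices: it logs $n$ distinct contexts under the uniform policy and scores the memorizing hypothesis with the estimator of Equation (2).

```python
import random

def simulate(k=50, n=10, seed=1):
    """Log n events with distinct contexts under h0 = Uniform({1..k}),
    then score the memorizing hypothesis with the IPS estimator (Eq. 2)."""
    rng = random.Random(seed)
    xs = rng.sample(range(k), n)              # distinct contexts
    data = []
    for x in xs:
        y = rng.randrange(k)                  # y ~ h0(x), uniform over k labels
        delta = -2.0 if y == x else -1.0      # the loss from Example 1
        data.append((x, y, delta, 1.0 / k))   # propensity p_i = 1/k
    memo = {x: y for (x, y, _, _) in data}    # h_overfit memorizes the log
    h_prob = lambda x, y: 1.0 if memo[x] == y else 0.0
    return sum(d * h_prob(x, y) / p for (x, y, d, p) in data) / len(data)
```

Every importance weight equals $k$, so the estimate is at most $-k$, even though no hypothesis has true risk below $-2$.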
Again we are faced with overfitting, since many overfit hypotheses are indistinguishable from the true risk minimizer $h^*$ with true risk $R(h^*) = 0$ and empirical risk $\hat{R}(h^*) = 0$. These examples indicate that this overfitting occurs regardless of how the loss is transformed. Intuitively, this type of overfitting occurs since the risk estimate according to Equation (2) can be minimized not only by putting large probability mass $h(y \mid x)$ on the examples with low loss $\delta(x, y)$, but by maximizing (for negative losses) or minimizing (for positive losses) the sum of the weights

$$\hat{S}(h) = \frac{1}{n} \sum_{i=1}^{n} \frac{h(y_i \mid x_i)}{p_i}. \quad (4)$$

For this reason, we call this type of overfitting Propensity Overfitting. This is in stark contrast to overfitting in supervised learning, which we call Loss Overfitting. Intuitively, Loss Overfitting occurs because the capacity of $\mathcal{H}$ fits spurious patterns of low $\delta(x, y)$ in the data. In Propensity Overfitting, the capacity in $\mathcal{H}$ allows overfitting of the propensity weights $p_i$ – for positive $\delta$, hypotheses that avoid $\mathcal{D}$ are selected; for negative $\delta$, hypotheses that overrepresent $\mathcal{D}$ are selected. The variance regularization of CRM combats both Loss Overfitting and Propensity Overfitting by optimizing a more informed generalization error bound. However, the empirical variance estimate is also affected by Propensity Overfitting – especially for positive losses. Can we avoid Propensity Overfitting more directly?

5 Control variates and the Self-Normalized estimator

To avoid Propensity Overfitting, we must first detect when and where it is occurring. For this, we draw on diagnostic tools used in importance sampling. Note that for any $h \in \mathcal{H}$, the sum of propensity weights $\hat{S}(h)$ from Equation (4) always has expected value 1 under the conditions required for the unbiased estimator of Equation (2):

$$\mathbb{E}\big[\hat{S}(h)\big] = \frac{1}{n} \sum_{i=1}^{n} \int\!\!\int \frac{h(y_i \mid x_i)}{h_0(y_i \mid x_i)}\, h_0(y_i \mid x_i) \Pr(x_i)\, dy_i\, dx_i = \frac{1}{n} \sum_{i=1}^{n} \int \Pr(x_i)\, dx_i = 1. \quad (5)$$
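The diagnostic of Equations (4)–(5) is directly computable from the log. In the toy check below (our own setup, with a uniform two-label logging policy), the logging policy itself gets exactly $\hat{S} = 1$ while a memorizing hypothesis doubles every weight.

```python
def s_hat(data, h_prob):
    """Mean importance weight, Eq. (4); its expectation is 1 by Eq. (5)."""
    return sum(h_prob(x, y) / p for (x, y, _, p) in data) / len(data)

# Uniform logging over two labels: p_i = 0.5 for every logged event.
data = [(i, i % 2, -1.0, 0.5) for i in range(100)]
h0_prob = lambda x, y: 0.5                        # h = h0: every weight is 1
h_over = lambda x, y: 1.0 if y == x % 2 else 0.0  # memorizes the log
```

A value of $\hat{S}$ far from 1, like the memorizer's 2, flags a hypothesis whose risk estimate should not be trusted.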
This means that we can identify hypotheses that suffer from Propensity Overfitting based on how far $\hat{S}(h)$ deviates from its expected value of 1. Since $\frac{h(y|x)}{h_0(y|x)}$ is likely correlated with $\delta(x, y)\,\frac{h(y|x)}{h_0(y|x)}$, a large deviation in $\hat{S}(h)$ suggests a large deviation in $\hat{R}(h)$ and consequently a bad risk estimate. How can we use the knowledge that $\forall h \in \mathcal{H}: \mathbb{E}[\hat{S}(h)] = 1$ to avoid degenerate risk estimates in a principled way? While one could use concentration inequalities to explicitly detect and eliminate overfit hypotheses based on $\hat{S}(h)$, we use control variates to derive an improved risk estimator that directly incorporates this knowledge.

Control variates. Control variates – random variables whose expectation is known – are a classic tool used to reduce the variance of Monte Carlo approximations [21]. Let $V(X)$ be a control variate with known expectation $\mathbb{E}_X[V(X)] = v \neq 0$, and let $\mathbb{E}_X[W(X)]$ be an expectation that we would like to estimate based on independent samples of $X$. Employing $V(X)$ as a multiplicative control variate, we can write $\mathbb{E}_X[W(X)] = \frac{\mathbb{E}[W(X)]}{\mathbb{E}[V(X)]}\, v$. This motivates the ratio estimator

$$\hat{W}^{SN} = \frac{\sum_{i=1}^{n} W(X_i)}{\sum_{i=1}^{n} V(X_i)}\, v, \quad (6)$$

which is called the Self-Normalized estimator in the importance sampling literature [12, 22, 23]. This estimator has substantially lower variance if $W(X)$ and $V(X)$ are correlated.

Self-Normalized risk estimator. Let us use $\hat{S}(h)$ as a control variate for $\hat{R}(h)$, yielding

$$\hat{R}^{SN}(h) = \frac{\sum_{i=1}^{n} \delta_i \frac{h(y_i|x_i)}{p_i}}{\sum_{i=1}^{n} \frac{h(y_i|x_i)}{p_i}}. \quad (7)$$

Hesterberg reports that this estimator tends to be more accurate than the unbiased estimator of Equation (2) when fluctuations in the sampling weights dominate the fluctuations in $\delta(x, y)$ [19]. Observe that the estimate is just a convex combination of the $\delta_i$ observed in the sample. If $\delta(\cdot, \cdot)$ is now translated by a constant $\delta(\cdot, \cdot) + C$, both the true risk $R(h)$ and the finite sample estimate $\hat{R}^{SN}(h)$ get shifted by $C$. Hence $\hat{R}^{SN}(h)$ is equivariant, unlike $\hat{R}(h)$ [19].
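The self-normalized estimator of Equation (7) in code, with a small check of the two properties just noted: as a convex combination of the observed losses it stays within the range of $\delta$, and it shifts by exactly $C$ under a translated loss. The two-event log and hypothesis are illustrative assumptions.

```python
def sn_risk(data, h_prob):
    """Self-normalized risk, Eq. (7): weight-normalized average of the deltas."""
    num = sum(d * h_prob(x, y) / p for (x, y, d, p) in data)
    den = sum(h_prob(x, y) / p for (x, y, _, p) in data)
    return num / den

data = [(0, 0, -2.0, 0.5), (1, 1, -1.0, 0.25)]
h_prob = lambda x, y: 1.0 if x == y else 0.0
shifted = [(x, y, d + 2.0, p) for (x, y, d, p) in data]  # translate delta by +2
```

No matter how extreme the importance weights get, the estimate can never escape the interval spanned by the logged losses, which is exactly the anomaly of Equation (2) that it repairs.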
Moreover, $\hat{R}^{SN}(h)$ is always bounded within the range of $\delta$. So, the overfitted risk due to ERM will now be bounded by the lower range of the loss, analogous to full-information supervised learning. Finally, while the self-normalized risk estimator is not unbiased ($\mathbb{E}\big[\frac{\hat{R}(h)}{\hat{S}(h)}\big] \neq \frac{R(h)}{\mathbb{E}[\hat{S}(h)]}$ in general), it is strongly consistent and approaches the desired expectation when $n$ is large.

Theorem 2. Let $\mathcal{D}$ be drawn $(x_i, y_i) \overset{i.i.d.}{\sim} h_0$, from a $h_0$ that has full support over $\mathcal{Y}$. Then,

$$\forall h \in \mathcal{H}: \quad \Pr\big(\lim_{n \to \infty} \hat{R}^{SN}(h) = R(h)\big) = 1.$$

Proof. The terms in the numerator of $\hat{R}^{SN}(h)$ in (7) are i.i.d. observations with mean $R(h)$. The strong law of large numbers gives $\Pr(\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \delta_i \frac{h(y_i|x_i)}{p_i} = R(h)) = 1$. Similarly, the denominator consists of i.i.d. observations with mean 1, so the strong law of large numbers implies $\Pr(\lim_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} \frac{h(y_i|x_i)}{p_i} = 1) = 1$. Hence, $\Pr(\lim_{n \to \infty} \hat{R}^{SN}(h) = R(h)) = 1$.

In summary, the self-normalized risk estimator $\hat{R}^{SN}(h)$ in Equation (7) resolves all the problems of the unbiased estimator $\hat{R}(h)$ from Equation (2) identified in Section 4.

6 Learning method: Norm-POEM

We now derive a learning algorithm, called Norm-POEM, for structured output prediction. The algorithm is analogous to POEM [1] in its choice of hypothesis space and its application of the CRM principle, but it replaces the conventional estimator (2) with the self-normalized estimator (7).

Hypothesis space. Following [1, 24], Norm-POEM learns stochastic linear rules $h_w \in \mathcal{H}_{lin}$ parametrized by $w$ that operate on a $d$-dimensional joint feature map $\phi(x, y)$: $h_w(y \mid x) = \exp(w \cdot \phi(x, y))/Z(x)$, where $Z(x) = \sum_{y' \in \mathcal{Y}} \exp(w \cdot \phi(x, y'))$ is the partition function.

Variance estimator. In order to instantiate the CRM objective from Equation (3), we need an empirical variance estimate $\widehat{Var}(\hat{R}^{SN}(h))$ for the self-normalized risk estimator. Following [23, Section 4.3], we use an approximate variance estimate for the ratio estimator of Equation (6).
Using the Normal approximation argument [21, Equation 9.9],

$$\widehat{Var}(\hat{R}^{SN}(h)) = \frac{\sum_{i=1}^{n} \big(\delta_i - \hat{R}^{SN}(h)\big)^2 \left(\frac{h(y_i|x_i)}{p_i}\right)^2}{\left(\sum_{i=1}^{n} \frac{h(y_i|x_i)}{p_i}\right)^2}. \quad (8)$$

Using the delta method to approximate the variance [22] yields the same formula. To invoke asymptotic normality of the estimator (and indeed, for reliable importance sampling estimates) we require the true variance of the self-normalized estimator $Var(\hat{R}^{SN}(h))$ to exist. We can guarantee this by thresholding the importance weights, analogous to $\hat{R}^M(h)$. The benefits of the self-normalized estimator come at a computational cost. The risk estimator of POEM had a simpler variance estimate which could be approximated by Taylor expansion and optimized using stochastic gradient descent. The variance of Equation (8) does not admit stochastic optimization. Surprisingly, in our experiments in Section 7 we find that the improved robustness of Norm-POEM permits fast convergence during training even without stochastic optimization.

Training objective of Norm-POEM. The objective is now derived by substituting the self-normalized risk estimator of Equation (7) and its sample variance estimate from Equation (8) into the CRM objective (3) for the hypothesis space $\mathcal{H}_{lin}$. By design, $h_w$ lies in the exponential family of distributions. So, the gradient of the resulting objective can be tractably computed whenever the partition functions $Z(x_i)$ are tractable. Doing so yields a non-convex objective in the parameters $w$ which we optimize using L-BFGS. The choice of L-BFGS for non-convex and non-smooth optimization is well supported [25, 26]. Analogous to POEM, the hyper-parameters $M$ (clipping to prevent unbounded variance) and $\lambda$ (strength of variance regularization) can be calibrated via counterfactual evaluation on a held-out validation set. In summary, the per-iteration cost of optimizing the Norm-POEM objective has the same complexity as the per-iteration cost of POEM with L-BFGS. It requires the same set of hyper-parameters.
And it can be done tractably whenever the corresponding supervised CRF can be learnt efficiently. Software implementing Norm-POEM is available at http://www.cs.cornell.edu/~adith/POEM.

7 Experiments

We will now empirically verify if the self-normalized estimator as used in Norm-POEM can indeed guard against propensity overfitting and attain robust generalization performance. We follow the Supervised → Bandit methodology [2, 1] to test the limits of counterfactual learning in a well-controlled environment. As in prior work [1], the experiment setup uses supervised datasets for multi-label classification from the LibSVM repository. In these datasets, the inputs are $x \in \mathbb{R}^p$. The predictions $y \in \{0, 1\}^q$ are bitvectors indicating the labels assigned to $x$. The datasets have a range of features $p$, labels $q$ and instances $n$:

Name    p (# features)   q (# labels)   n_train   n_test
Scene   294              6              1211      1196
Yeast   103              14             1500      917
TMC     30438            22             21519     7077
LYRL    47236            4              23149     781265

POEM uses the CRM principle instantiated with the unbiased estimator while Norm-POEM uses the self-normalized estimator. Both use a hypothesis space isomorphic to a Conditional Random Field (CRF) [24]. We therefore report the performance of a full-information CRF (essentially, logistic regression for each of the $q$ labels independently) as a “skyline” for what we can possibly hope to reach by partial-information batch learning from logged bandit feedback. The joint feature map is $\phi(x, y) = x \otimes y$ for all approaches. To simulate a bandit feedback dataset $\mathcal{D}$, we use a CRF with default hyper-parameters trained on 5% of the supervised dataset as $h_0$, and replay the training data 4 times and collect sampled labels from $h_0$. This is inspired by the observation that supervised labels are typically hard to collect relative to bandit feedback. The BLBF algorithms only have access to the Hamming loss $\Delta(y^*, y)$ between the supervised label $y^*$ and the sampled label $y$ for input $x$.
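The Supervised → Bandit conversion described above can be sketched as follows. The per-bit uniform logging policy and all names are our own toy assumptions, standing in for the CRF $h_0$ used in the experiments.

```python
import random

def hamming(y_star, y):
    """Hamming loss between two equal-length label bit-vectors."""
    return sum(a != b for a, b in zip(y_star, y))

def uniform_h0(q):
    """Toy logging policy: each of the q label bits is a fair coin flip."""
    def sample(x, rng):
        y = tuple(rng.randrange(2) for _ in range(q))
        return y, 0.5 ** q          # propensity h0(y|x) is the same for all y
    return sample

def to_bandit(supervised, h0_sample, replay=4, seed=0):
    """Replay the supervised set, logging (x, y, loss, propensity) tuples."""
    rng = random.Random(seed)
    log = []
    for _ in range(replay):
        for x, y_star in supervised:
            y, p = h0_sample(x, rng)     # y ~ h0(x)
            log.append((x, y, hamming(y_star, y), p))
    return log
```

The BLBF learner only ever sees these tuples, never the hidden supervised label $y^*$ itself.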
Generalization performance $R$ is measured by the expected Hamming loss on the held-out supervised test set. Lower is better. Hyper-parameters $\lambda$, $M$ were calibrated as recommended and validated on a 25% hold-out of $\mathcal{D}$ – in summary, our experimental setup is identical to POEM [1]. We report performance of BLBF approaches without $l_2$-regularization here; we observed Norm-POEM dominated POEM even after $l_2$-regularization. Since the choice of optimization method could be a confounder, we use L-BFGS for all methods and experiments.

What is the generalization performance of Norm-POEM? The key question is whether the appealing theoretical properties of the self-normalized estimator actually lead to better generalization performance. In Table 1, we report the test set loss for Norm-POEM and POEM averaged over 10 runs. On each run, $h_0$ has varying performance (trained on random 5% subsets) but Norm-POEM consistently beats POEM.

Table 1: Test set Hamming loss averaged over 10 runs. Norm-POEM significantly outperforms POEM on all four datasets (one-tailed paired difference t-test at significance level of 0.05).

R           Scene   Yeast   TMC     LYRL
h0          1.511   5.577   3.442   1.459
POEM        1.200   4.520   2.152   0.914
Norm-POEM   1.045   3.876   2.072   0.799
CRF         0.657   2.830   1.187   0.222

The plot in Figure 1 shows how generalization performance improves with more training data for a single run of the experiment on the Yeast dataset. We achieve this by varying the number of times we replay the training set to collect samples from $h_0$ (ReplayCount). Norm-POEM consistently outperforms POEM for all training sample sizes.

Figure 1: Test set Hamming loss as $n \to \infty$ on the Yeast dataset. All approaches will converge to CRF performance in the limit, but the rate of convergence is slow since $h_0$ is thin-tailed.

Does Norm-POEM avoid Propensity Overfitting?
While the previous results indicate that Norm-POEM achieves better performance, it remains to be verified that this improved performance is indeed due to improved control over Propensity Overfitting. Table 2 (left) shows the average $\hat{S}(\hat{h})$ for the hypothesis $\hat{h}$ selected by each approach. Indeed, $\hat{S}(\hat{h})$ is close to its known expectation of 1 for Norm-POEM, while it is severely biased for POEM. Furthermore, the value of $\hat{S}(\hat{h})$ depends heavily on how the losses $\delta$ are translated for POEM, as predicted by theory. As anticipated by our earlier observation that the self-normalized estimator is equivariant, Norm-POEM is unaffected by translations of $\delta$. Table 2 (right) shows that the same is true for the prediction error on the test set. Norm-POEM is consistently good while POEM fails catastrophically (for instance, on the TMC dataset, POEM is worse than random guessing).

Table 2: Mean of the unclipped weights $\hat{S}(\hat{h})$ (left) and test set Hamming loss $R$ (right), averaged over 10 runs. $\delta > 0$ and $\delta < 0$ indicate whether the loss was translated to be positive or negative.

                      --------- S(h) ---------    ---------- R(h) ----------
                      Scene  Yeast  TMC    LYRL   Scene  Yeast  TMC     LYRL
POEM (δ > 0)          0.274  0.028  0.000  0.175  2.059  5.441  17.305  2.399
POEM (δ < 0)          1.782  5.352  2.802  1.230  1.200  4.520  2.152   0.914
Norm-POEM (δ > 0)     0.981  0.840  0.941  0.945  1.058  3.881  2.079   0.799
Norm-POEM (δ < 0)     0.981  0.821  0.938  0.945  1.045  3.876  2.072   0.799

Is CRM variance regularization still necessary? It may be possible that the improved self-normalized estimator no longer requires variance regularization. The loss of the unregularized estimator is reported (Norm-IPS) in Table 3. We see that variance regularization still helps.

Table 3: Test set Hamming loss for Norm-POEM and the variance agnostic Norm-IPS averaged over the same 10 runs as Table 1. On Scene, TMC and LYRL, Norm-POEM is significantly better than Norm-IPS (one-tailed paired difference t-test at significance level of 0.05).
R           Scene   Yeast   TMC     LYRL
Norm-IPS    1.072   3.905   3.609   0.806
Norm-POEM   1.045   3.876   2.072   0.799

How computationally efficient is Norm-POEM? Surprisingly, the runtime of Norm-POEM is faster than that of POEM. Even though normalization increases the per-iteration computation cost, optimization tends to converge in fewer iterations than for POEM. We find that POEM picks a hypothesis with large $\|w\|$, attempting to assign a probability of 1 to all training points with negative losses. However, Norm-POEM converges to a much shorter $\|w\|$. The loss of an instance relative to others in a sample $\mathcal{D}$ governs how Norm-POEM tries to fit to it. This is another nice consequence of the fact that the overfitted risk of $\hat{R}^{SN}(h)$ is bounded and small. Overall, the runtime of Norm-POEM is on the same order of magnitude as that of a full-information CRF, and is competitive with the runtimes reported for POEM with stochastic optimization and early stopping [1], while providing substantially better generalization performance.

Table 4: Time in seconds, averaged across validation runs. CRF is implemented by scikit-learn [27].

Time (s)    Scene   Yeast   TMC      LYRL
POEM        78.69   98.65   716.51   617.30
Norm-POEM   7.28    10.15   227.88   142.50
CRF         4.94    3.43    89.24    72.34

We observe the same trends for Norm-POEM when different properties of $h_0$ are varied (e.g. stochasticity and quality), as reported for POEM [1].

8 Conclusions

We identify the problem of propensity overfitting when using the conventional unbiased risk estimator for ERM in batch learning from bandit feedback. To remedy this problem, we propose the use of a multiplicative control variate that leads to the self-normalized risk estimator. This provably avoids the anomalies of the conventional estimator. Deriving a new learning algorithm called Norm-POEM based on the CRM principle using the new estimator, we show that the improved estimator leads to significantly improved generalization performance.
Acknowledgement This research was funded in part through NSF Awards IIS-1247637, IIS-1217686, IIS-1513692, the JTCII Cornell-Technion Research Fund, and a gift from Bloomberg.

References

[1] Adith Swaminathan and Thorsten Joachims. Counterfactual risk minimization: Learning from logged bandit feedback. In ICML, 2015.
[2] Alina Beygelzimer and John Langford. The offset tree for learning with partial labels. In KDD, pages 129–138, 2009.
[3] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, New York, NY, USA, 2006.
[4] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, 1998.
[5] Léon Bottou, Jonas Peters, Joaquin Q. Candela, Denis X. Charles, Max Chickering, Elon Portugaly, Dipankar Ray, Patrice Y. Simard, and Ed Snelson. Counterfactual reasoning and learning systems: the example of computational advertising. Journal of Machine Learning Research, 14(1):3207–3260, 2013.
[6] Miroslav Dudík, John Langford, and Lihong Li. Doubly robust policy evaluation and learning. In ICML, pages 1097–1104, 2011.
[7] P. Rosenbaum and D. Rubin. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41–55, 1983.
[8] C. Cortes, Y. Mansour, and M. Mohri. Learning bounds for importance weighting. In NIPS, pages 442–450, 2010.
[9] John Langford, Alexander Strehl, and Jennifer Wortman. Exploration scavenging. In ICML, pages 528–535, 2008.
[10] Bianca Zadrozny, John Langford, and Naoki Abe. Cost-sensitive learning by cost-proportionate example weighting. In ICDM, pages 435–, 2003.
[11] Alexander L. Strehl, John Langford, Lihong Li, and Sham Kakade. Learning from logged implicit exploration data. In NIPS, pages 2217–2225, 2010.
[12] H. F. Trotter and J. W. Tukey. Conditional Monte Carlo for normal samples. In Symposium on Monte Carlo Methods, pages 64–79, 1956.
[13] Andreas Maurer and Massimiliano Pontil.
Empirical Bernstein bounds and sample-variance penalization. In COLT, 2009.
[14] Philip S. Thomas, Georgios Theocharous, and Mohammad Ghavamzadeh. High-confidence off-policy evaluation. In AAAI, pages 3000–3006, 2015.
[15] Edward L. Ionides. Truncated importance sampling. Journal of Computational and Graphical Statistics, 17(2):295–311, 2008.
[16] Lihong Li, R. Munos, and C. Szepesvári. Toward minimax off-policy value estimation. In AISTATS, 2015.
[17] Phelim Boyle, Mark Broadie, and Paul Glasserman. Monte Carlo methods for security pricing. Journal of Economic Dynamics and Control, 21(8–9):1267–1321, 1997.
[18] John Schulman, Sergey Levine, Pieter Abbeel, Michael I. Jordan, and Philipp Moritz. Trust region policy optimization. In ICML, pages 1889–1897, 2015.
[19] Tim Hesterberg. Weighted average importance sampling and defensive mixture distributions. Technometrics, 37:185–194, 1995.
[20] V. Vapnik. Statistical Learning Theory. Wiley, Chichester, GB, 1998.
[21] Art B. Owen. Monte Carlo theory, methods and examples. 2013.
[22] Augustine Kong. A note on importance sampling using standardized weights. Technical Report 348, Department of Statistics, University of Chicago, 1992.
[23] R. Rubinstein and D. Kroese. Simulation and the Monte Carlo Method. Wiley, 2 edition, 2008.
[24] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289, 2001.
[25] Adrian S. Lewis and Michael L. Overton. Nonsmooth optimization via quasi-Newton methods. Mathematical Programming, 141(1–2):135–163, 2013.
[26] Jin Yu, S. V. N. Vishwanathan, S. Günter, and N. Schraudolph. A quasi-Newton approach to nonsmooth convex optimization problems in machine learning. JMLR, 11:1145–1200, 2010.
[27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M.
Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
2015
102
5,595
Information-theoretic lower bounds for convex optimization with erroneous oracles Yaron Singer Harvard University Cambridge, MA 02138 yaron@seas.harvard.edu Jan Vondrák IBM Almaden Research Center San Jose, CA 95120 jvondrak@us.ibm.com

Abstract

We consider the problem of optimizing convex and concave functions with access to an erroneous zeroth-order oracle. In particular, for a given function $x \mapsto f(x)$ we consider optimization when one is given access to absolute error oracles that return values in $[f(x) - \epsilon, f(x) + \epsilon]$ or relative error oracles that return values in $[(1 - \epsilon)f(x), (1 + \epsilon)f(x)]$, for some $\epsilon > 0$. We show stark information-theoretic impossibility results for minimizing convex functions and maximizing concave functions over polytopes in this model.

1 Introduction

Consider the problem of minimizing a convex function over some convex domain. It is well known that this problem is solvable in the sense that there are algorithms which make polynomially-many calls to an oracle that evaluates the function at every given point, and return a point which is arbitrarily close to the true minimum of the function. But suppose that instead of the true value of the function, the oracle has some small error. Would it still be possible to optimize the function efficiently? To formalize the notion of error, we can consider two types of erroneous oracles:

• For a given function $f : [0, 1]^n \to [0, 1]$ we say that $\tilde{f} : [0, 1]^n \to [0, 1]$ is an absolute $\epsilon$-erroneous oracle if $\forall x \in [0, 1]^n$ we have that $\tilde{f}(x) = f(x) + \xi_x$, where $\xi_x \in [-\epsilon, \epsilon]$.

• For a given function $f : [0, 1]^n \to \mathbb{R}$ we say that $\tilde{f} : [0, 1]^n \to \mathbb{R}$ is a relative $\epsilon$-erroneous oracle if $\forall x \in [0, 1]^n$ we have that $\tilde{f}(x) = \xi_x f(x)$, where $\xi_x \in [1 - \epsilon, 1 + \epsilon]$.

Note that we intentionally do not make distributional assumptions about the errors. This is in contrast to noise, where the errors are assumed to be random and independently generated from some distribution.
In such cases, under reasonable conditions on the distribution, one can obtain arbitrarily good approximations of the true function value by averaging polynomially many points in some ϵ-ball around the point of interest. Stated in terms of noise, in this paper we consider oracles that have some small adversarial noise, and wish to understand whether desirable optimization guarantees are obtainable. To avoid ambiguity, we refrain from using the term noise altogether, and refer to such inaccuracies in evaluation as error. While distributional i.i.d. assumptions are often reasonable models, evaluating our dependency on these assumptions seems necessary. From a practical perspective, there are cases in which noise can be correlated, or where the data we use to estimate the function is corrupted in some arbitrary way. Furthermore, since we often optimize over functions that we learn from data, the process of fitting a model to a function may itself introduce some bias that does not necessarily vanish. But more generally, it seems that we should morally know the consequences that modest inaccuracies may have.
Figure 1: An illustration of an erroneous oracle to a convex function that fools a gradient descent algorithm.
Benign cases. In the special case of a linear function f(x) = c^⊺x, for some c ∈ R^n, a relative ϵ-error has little effect on the optimization. By querying f(e_i) for every i ∈ [n] we can extract c̃_i ∈ [(1−ϵ)c_i, (1+ϵ)c_i] and then optimize over f′(x) = c̃^⊺x. This results in a (1±ϵ)-multiplicative approximation. Alternatively, if the erroneous oracle f̃ happens to be a convex function, optimizing f̃(x) directly retains desirable optimization guarantees, up to either additive or multiplicative errors. We are therefore interested in scenarios where the error does not necessarily have nice properties.
Gradient descent fails with error. For a simple example, consider the function illustrated in Figure 1.
The figure illustrates a convex function (depicted in blue) and an erroneous version of it (dotted red), s.t. at every point the oracle is at most some additive ϵ > 0 away from the true function value (the ϵ margins of the function are depicted in grey). If a gradient descent algorithm is given access to the erroneous version (dotted red) instead of the true function (blue), the algorithm will be trapped in a local minimum that can be arbitrarily far from the true minimum. But the fact that a naive gradient descent algorithm fails does not necessarily mean that there isn't an algorithm that can overcome small errors. This motivates the main question in this paper.
Is convex optimization robust to error?
Main Results. Our results are largely spoilers. We present stark information-theoretic lower bounds for both relative and absolute ϵ-erroneous oracles, for any constant and even sub-constant ϵ > 0. In particular, we show that:
• For minimizing a convex function (or maximizing a concave function) f : [0, 1]^n → [0, 1] over [0, 1]^n: for any fixed δ > 0, no algorithm can achieve an additive approximation within 1/2 − δ of the optimum using a subexponential number of calls to an absolute n^{−1/2+δ}-erroneous oracle.
• For minimizing a convex function f : [0, 1]^n → [0, 1] over a polytope P ⊂ [0, 1]^n: for any fixed ϵ > 0, no algorithm can achieve a finite multiplicative factor using a subexponential number of calls to a relative ϵ-erroneous oracle.
• For maximizing a concave function f : [0, 1]^n → [0, 1] over a polytope P ⊂ [0, 1]^n: for any fixed ϵ > 0, no algorithm can achieve a multiplicative factor better than Θ(n^{−1/2+ϵ}) using a subexponential number of calls to a relative ϵ-erroneous oracle.
• For maximizing a concave function f : [0, 1]^n → [0, 1] over [0, 1]^n: for any fixed ϵ > 0, no algorithm can obtain a multiplicative factor better than 1/2 + ϵ using a subexponential number of calls to a relative ϵ-erroneous oracle.
(And there is a trivial 1/2-approximation without asking any queries.) Somewhat surprisingly, many of the impossibility results listed above are shown for a class of extremely simple convex and concave functions, namely, affine functions: f(x) = c^⊺x + b. This is in sharp contrast to the case of linear functions (without the constant term b) with relative erroneous oracles as discussed above. In addition, we note that our results extend to strongly convex functions.
1.1 Related work
The oracle models we study here fall in the category of zeroth-order or derivative-free. Derivative-free methods have a rich history in convex optimization and were among the earliest to numerically solve unconstrained optimization problems. Recently these approaches have enjoyed increasing interest, as they are useful in scenarios where black-box access is given to the function, or cases in which gradient information is difficult to compute or does not exist [9, 8, 11, 15, 14, 6]. There has been a rich line of work on noisy oracles, where the oracles return some erroneous version of the function value which is random. In a stochastic framework, these settings correspond to repeatedly choosing points in some convex domain, obtaining noisy realizations of some underlying convex function's value. Most frequently, the assumption is that one is given a first-order noisy oracle with some assumptions about the distribution that generates the error [13, 12]. In the learning theory community, optimization with stochastic noisy oracles is often motivated by multi-armed bandit settings [4, 1], and regret minimization with zeroth-order feedback [2]. All these models consider the case in which the error is drawn from a distribution.
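The Figure 1 phenomenon can be made quantitative in one dimension (our own minimal illustration, not a construction from the paper): a two-point finite-difference estimate built from an absolute ϵ-erroneous oracle carries error up to ϵ/h, so whenever the true derivative is smaller than ϵ/h, an adversarial perturbation can flip its sign and send a descent method away from the minimum:

```python
def fd_gradient(oracle, x, h):
    # Two-point finite-difference estimate; an absolute eps-erroneous oracle
    # can shift it by up to eps / h.
    return (oracle(x + h) - oracle(x - h)) / (2 * h)

eps, h = 0.01, 0.05
f = lambda y: y * y                 # true convex objective, minimum at 0
x = 0.05                            # true derivative f'(x) = 0.1 > 0

# Adversarial perturbation within [-eps, eps]: +eps left of x, -eps right of x.
f_tilde = lambda y: f(y) + (eps if y < x else -eps)

est = fd_gradient(f_tilde, x, h)
print(2 * x, est)                   # true derivative 0.1, estimate -0.1
```

Here a gradient step based on `est` moves away from the minimum, mirroring the spurious local minima in the figure.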
The model of adversarial noise in zeroth-order oracles was mentioned in [10], which considers a related model of erroneous oracles and informally argues that exponentially many queries are required to approximately minimize a convex function in this model (under an ℓ2-ball constraint). In recent work, Belloni et al. [3] study convex optimization with erroneous oracles. Interestingly, Belloni et al. show positive results. In their work they develop a novel algorithm that is based on sampling from an approximately log-concave distribution using the Hit-and-Run method, and show that their method has polynomial query complexity. In contrast to the negative results we show in this work, the work of Belloni et al. assumes the (absolute) erroneous oracle returns f(x) + ξ_x with ξ_x ∈ [−ϵ/n, ϵ/n]. That is, the error is not a constant term, but rather is inversely proportional to the dimension. Our lower bounds for additive approximation hold when the oracle error is not necessarily a constant but ξ_x ∈ [−1/n^{1/2−δ}, 1/n^{1/2−δ}] for a constant 0 < δ < 1/2.
2 Preliminaries
Optimization and convexity. For a minimization problem, given a nonnegative objective function f and a polytope P, we say that an algorithm provides a (multiplicative) α-approximation (α > 1) if it finds a point x̄ ∈ P s.t. f(x̄) ≤ α · min_{x∈P} f(x). For a maximization problem, an algorithm provides an α-approximation (α < 1) if it finds a point x̄ s.t. f(x̄) ≥ α · max_{x∈P} f(x). For absolute erroneous oracles, given an objective function f and a polytope P, we aim to find a point x̄ ∈ P which is within an additive error of δ from the optimum, with δ as small as possible. That is, for a δ > 0 we aim to find a point x̄ s.t. |f(x̄) − min_x f(x)| < δ in the case of minimization. A function f : P → R is convex on P if f(tx + (1 − t)y) ≤ tf(x) + (1 − t)f(y) (or concave if f(tx + (1 − t)y) ≥ tf(x) + (1 − t)f(y)) for every x, y ∈ P and t ∈ [0, 1].
Chernoff bounds. Throughout the paper we appeal to Chernoff bounds.
We note that while typically stated for independent random variables X_1, …, X_m, Chernoff bounds also hold for negatively associated random variables.
Definition 2.1 ([5], Definition 1). Random variables X_1, …, X_n are negatively associated if for every I ⊆ [n] and every non-decreasing f : R^I → R, g : R^{Ī} → R,
E[f(X_i, i ∈ I) g(X_j, j ∈ Ī)] ≤ E[f(X_i, i ∈ I)] E[g(X_j, j ∈ Ī)].
Claim 2.2 ([5], Theorem 14). Let X_1, …, X_n be negatively associated random variables that take values in [0, 1], and let µ = E[Σ_{i=1}^n X_i]. Then, for any δ ∈ [0, 1] we have:
Pr[Σ_{i=1}^n X_i > (1 + δ)µ] ≤ e^{−δ²µ/3},
Pr[Σ_{i=1}^n X_i < (1 − δ)µ] ≤ e^{−δ²µ/2}.
We apply this to random variables that are formed by selecting a random subset of a fixed size. In particular, we use the following.
Claim 2.3. Let x_1, …, x_n ≥ 0 be fixed. For 1 ≤ k ≤ n, let R be a uniformly random subset of k elements out of [n]. Let X_i = x_i if i ∈ R and X_i = 0 otherwise. Then X_1, …, X_n are negatively associated.
Proof. For x_1 = x_2 = … = x_n = 1, the statement holds by Corollary 11 of [5] (which refers to this distribution as the Fermi-Dirac model). The generalization to arbitrary x_i ≥ 0 follows from Proposition 4 of [5] with I_j = {j} and h_j(t) = x_j t.
3 Optimization over the unit cube
We start with optimization over [0, 1]^n, arguably the simplest possible polytope. We show that already in this setting, the presence of adversarial noise prevents us from achieving much more than trivial results.
3.1 Convex minimization
First let us consider convex minimization over [0, 1]^n. In this setting, we show that errors as small as n^{−(1−δ)/2} prevent us from optimizing within a constant additive error.
Theorem 3.1. Let δ > 0 be a constant.
There are instances of a convex function f : [0, 1]^n → [0, 1] accessible through an absolute n^{−(1−δ)/2}-erroneous oracle, such that a (possibly randomized) algorithm that makes e^{O(n^δ)} queries cannot find a solution within additive 1/2 − o(1) of the optimum with probability more than e^{−Ω(n^δ)}.
We remark that the proof of this theorem is inspired by the proof of hardness of (1/2 + ϵ)-approximation for unconstrained submodular maximization [7]; in particular it can be viewed as a simple application of the "symmetry gap" argument (see [16] for a more general exposition).
Proof. Let ϵ = n^{−(1−δ)/2}; we can assume that ϵ < 1/2, otherwise n is constant and the statement is trivial. We will construct an ϵ-erroneous oracle (both in the relative and absolute sense) for a convex function f : [0, 1]^n → [0, 1]. Consider a partition of [n] into two subsets A, B of size |A| = |B| = n/2 (which will eventually be chosen randomly). We define the following function:
• f(x) = 1/2 + (1/n)(Σ_{i∈A} x_i − Σ_{j∈B} x_j). This is a convex (in fact linear) function.
Next, we define the following modification of f, which could be the function returned by an ϵ-erroneous oracle.
• If |Σ_{i∈A} x_i − Σ_{j∈B} x_j| > ϵn/2, then f̃(x) = f(x) = 1/2 + (1/n)(Σ_{i∈A} x_i − Σ_{j∈B} x_j).
• If |Σ_{i∈A} x_i − Σ_{j∈B} x_j| ≤ ϵn/2, then f̃(x) = 1/2.
Note that f(x) and f̃(x) differ only in the region where |Σ_{i∈A} x_i − Σ_{j∈B} x_j| ≤ ϵn/2. In particular, the value of f(x) in this region is within [(1−ϵ)/2, (1+ϵ)/2], while f̃(x) = 1/2, so an ϵ-erroneous oracle for f(x) (both in the relative and absolute sense) could very well return f̃(x) instead. Now assume that (A, B) is a random partition, unknown to the algorithm. We argue that with high probability, a fixed query x issued by the algorithm will have the property that |Σ_{i∈A} x_i − Σ_{j∈B} x_j| ≤ ϵn/2. More precisely, since (A, B) is chosen at random subject to |A| = |B| = n/2, we have that Σ_{i∈A} x_i is a sum of negatively associated random variables in [0, 1] (by Claim 2.3).
The expectation of this quantity is µ = E[Σ_{i∈A} x_i] = (1/2)Σ_{i=1}^n x_i ≤ n/2. By Claim 2.2, we have
Pr[Σ_{i∈A} x_i > µ + ϵn/4] = Pr[Σ_{i∈A} x_i > (1 + ϵn/(4µ))µ] < e^{−(ϵn/(4µ))²µ/3} ≤ e^{−ϵ²n/24}.
Since (1/2)Σ_{i∈A} x_i + (1/2)Σ_{i∈B} x_i = (1/2)Σ_{i=1}^n x_i = µ, we get
Pr[Σ_{i∈A} x_i − Σ_{i∈B} x_i > ϵn/2] = Pr[Σ_{i∈A} x_i − µ > ϵn/4] < e^{−ϵ²n/24}.
By symmetry,
Pr[|Σ_{i∈A} x_i − Σ_{j∈B} x_j| > ϵn/2] < 2e^{−ϵ²n/24}.
We emphasize that this holds for a fixed query x. Assume first that the algorithm is deterministic. Then, as long as its queries satisfy the property above, the answers will be f̃(x) = 1/2, and the algorithm will follow the same path of computation, no matter what the choice of (A, B) is. (Effectively we will not learn anything about A and B.) Considering the sequence of queries on this computation path, if the number of queries is t, then with probability at least 1 − 2t·e^{−ϵ²n/24} the queries will indeed fall in the region where f̃(x) = 1/2 and the algorithm will follow this path. If t ≤ e^{ϵ²n/48}, this happens with probability at least 1 − 2e^{−ϵ²n/48}. In this case, all the points queried by the algorithm, as well as the returned solution x_out (by the same argument), satisfy f̃(x_out) = 1/2, and hence f(x_out) ≥ (1−ϵ)/2. In contrast, the actual optimum is f(1_B) = 0. Recall that ϵ = n^{−(1−δ)/2}; hence f(x_out) ≥ (1/2)(1 − n^{−(1−δ)/2}), and the bounds on the number of queries and probability of success are as in the statement of the theorem. Finally, consider a randomized algorithm. Denote by (R_1, R_2, …) the random variables used by the algorithm in its decisions. We can condition on a fixed choice of (R_1 = r_1, R_2 = r_2, …) which makes the algorithm deterministic. By our proof, the algorithm conditioned on this choice cannot succeed with probability more than e^{−Ω(n^δ)}. Since this is true for each particular choice of (r_1, r_2, …), by averaging it is also true for a random choice of (R_1, R_2, …). Hence, we obtain the same result for randomized algorithms as well.
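The construction in this proof can be instantiated directly. The sketch below (our own code; `make_instance` is a hypothetical helper) builds f and f̃ for a hidden random balanced partition (A, B) and checks the two facts the argument rests on: a balanced query lands in the flat region where the oracle returns exactly 1/2 and reveals nothing about (A, B), while the hidden optimum 1_B has true value 0:

```python
import random

def make_instance(n, eps, seed=0):
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    A, B = set(idx[:n // 2]), set(idx[n // 2:])
    def f(x):  # true objective: linear, hence convex
        return 0.5 + (sum(x[i] for i in A) - sum(x[j] for j in B)) / n
    def f_tilde(x):  # the eps-erroneous oracle from the proof
        gap = sum(x[i] for i in A) - sum(x[j] for j in B)
        return f(x) if abs(gap) > eps * n / 2 else 0.5
    return A, B, f, f_tilde

n, eps = 100, 0.2
A, B, f, f_tilde = make_instance(n, eps)

x_query = [0.5] * n                       # a balanced query
print(f_tilde(x_query))                   # 0.5: the oracle is flat here

x_opt = [1.0 if j in B else 0.0 for j in range(n)]
print(f(x_opt), f_tilde(x_opt))           # 0.0 0.0: the hidden optimum
```

Note that in the flat region |f − 1/2| ≤ ϵ/2, so f̃ is indeed within ϵ of f at every point, both additively and multiplicatively.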
3.2 Concave maximization
Here we consider the problem of maximizing a concave function f : [0, 1]^n → [0, 1]. One can obtain a result for concave maximization analogous to Theorem 3.1, which we do not state; in terms of additive errors, there is really no difference between convex minimization and concave maximization. However, in the case of concave maximization we can also formulate the following hardness result for multiplicative approximation.
Theorem 3.2. If a concave function f : [0, 1]^n → [0, 1] is accessible through a relative δ-erroneous oracle, then for any ϵ ∈ [0, δ], an algorithm that makes less than e^{ϵ²n/48} queries cannot find a solution of value greater than ((1+ϵ)/2)·OPT with probability more than 2e^{−ϵ²n/48}.
Proof. This result follows from the same construction as Theorem 3.1. Recall that f(x) is a linear function, hence also concave. As we mentioned in the proof of Theorem 3.1, f̃(x) could be the values returned by a relative ϵ-erroneous oracle. Now we consider an arbitrary ϵ > 0; note that for δ ≥ ϵ it still holds that f̃(x) is a relative δ-erroneous oracle. By the same proof, an algorithm querying less than e^{ϵ²n/48} points cannot find a solution of value better than (1+ϵ)/2 with probability more than 2e^{−ϵ²n/48}. In contrast, the optimum of the maximization problem is sup_{x∈[0,1]^n} f(x) = 1. Therefore, the algorithm cannot achieve a multiplicative approximation better than (1+ϵ)/2.
We note that this hardness result is optimal due to the following easy observation.
Theorem 3.3. For any concave function f : [0, 1]^n → R_+, let OPT = sup_{x∈[0,1]^n} f(x). Then f(1/2, 1/2, …, 1/2) ≥ OPT/2.
Proof. By compactness, the optimum is attained at a point: let OPT = f(x*). Let also x′ = (1, 1, …, 1) − x*. We have x′ ∈ [0, 1]^n and hence f(x′) ≥ 0. By concavity, we obtain
f(1/2, 1/2, …, 1/2) = f((x* + x′)/2) ≥ (f(x*) + f(x′))/2 ≥ f(x*)/2 = OPT/2.
In other words, a multiplicative 1/2-approximation for this problem is trivial to obtain, even without asking any queries about f.
We just return the point (1/2, 1/2, …, 1/2). Thus we can conclude that for concave maximization, a relative ϵ-erroneous oracle is not useful at all.
4 Optimization over polytopes
In this section we consider optimization of convex and concave functions over a polytope P = {x ≥ 0 : Ax = b}. We will show inapproximability results for the relative error model. Note that for the absolute error case, the lower bound on convex minimization from the previous section holds, and can be applied to show a lower bound for concave maximization with absolute errors.
Theorem 4.1. Let ϵ, δ ∈ (0, 1/2) be some constants. There are convex functions for which no algorithm can obtain a finite approximation ratio to min_{x∈P} f(x) with fewer than Ω(e^{n^δ}) queries to a relative ϵ-erroneous oracle of the function.
Proof. We will prove our theorem for the case in which P = {x ≥ 0 : Σ_i x_i ≤ n^{1/2+δ}}. Let H be a subset of indices chosen uniformly at random from all subsets of size exactly n^{1/2+δ}. We construct two functions:
f(x) = n^{1+δ} − n^{1/2} Σ_{i∈H} x_i,
g(x) = n^{1+δ} − n^δ Σ_i x_i.
Observe that both these functions are convex and non-negative. Also, observe that the minimizer of f is x* = 1_H with f(x*) = 0, while the minimizer of g is any vector x′ with Σ_i x′_i = n^{1/2+δ}, and g(x′) = n^{1+δ} − n^{1/2+2δ} = Θ(n^{1+δ}). Therefore, the ratio between the optima of these two functions is unbounded. We now construct the erroneous oracle in the following manner: f̃(x) = g(x) if (1 − ϵ)f(x) ≤ g(x) ≤ (1 + ϵ)f(x), and f̃(x) = f(x) otherwise. By definition, f̃ is an ϵ-erroneous oracle for f. The claim will follow from the fact that given access to f̃ one cannot distinguish between f and g using a subexponential number of queries. This implies the inapproximability result, since an approximation algorithm which guarantees a finite approximation ratio using a subexponential number of queries could be used to distinguish between the two functions: if the algorithm returns an answer strictly greater than 0 then we know the underlying function is g, and otherwise it is f.
Given a query x ∈ [0, 1]^n to the oracle, we consider two cases.
• In case the query x is such that Σ_i x_i ≤ n^{1/2}, we have that
n^{1+δ} − n ≤ f(x) ≤ n^{1+δ} and n^{1+δ} − n^{δ+1/2} ≤ g(x) ≤ n^{1+δ}.
Since for any ϵ, δ > 0 there is a large enough n s.t. n^δ > (1 + ϵ)/ϵ, this implies that for any query with Σ_i x_i ≤ n^{1/2} we have g(x) ∈ [(1 − ϵ)f(x), (1 + ϵ)f(x)], and thus the oracle returns g(x).
• In case the query is such that Σ_i x_i > n^{1/2}, we can interpret the value of Σ_{i∈H} x_i, which determines the value of f, as a sum of negatively associated random variables X_1, …, X_n, where X_i realizes with probability n^{−1/2+δ} and takes value x_i if realized (see Claim 2.3). We can then apply the Chernoff bound (Claim 2.2), using the fact that E[Σ_{i∈H} x_i] = (Σ_i x_i)/n^{1/2−δ}, and get that for any constant 0 < β < 1, with probability 1 − e^{−Ω(n^δ)}:
(1 − β)·(Σ_i x_i)/n^{1/2−δ} ≤ Σ_{i∈H} x_i ≤ (1 + β)·(Σ_i x_i)/n^{1/2−δ}.
By using β ≤ ϵ/(1 + ϵ), this implies that with probability at least 1 − e^{−Ω(n^δ)} we get
(1 − ϵ)f(x) ≤ g(x) ≤ (1 + ϵ)f(x).
Since the likelihood of distinguishing between f and g on a single query is exponentially small in n^δ, the same arguments used throughout the paper imply that it takes an exponential number of queries to distinguish between f and g. To conclude, it takes Ω(e^{n^δ}) queries to distinguish between f and g. As discussed above, since the ratio between the optima of these two functions is unbounded, this concludes the proof.
Theorem 4.2. For all constants ϵ, δ ∈ (0, 1/2), there is a concave function f : [0, 1]^n → R_+ for which no algorithm can obtain an approximation strictly better than O(n^{−1/2+δ}) to max_{x∈P} f(x) with fewer than Ω(e^{n^δ}) queries to a relative ϵ-erroneous oracle of the function.
Proof. We follow a similar methodology as in the proof of Theorem 4.1. We again select a set H of size n^{1/2+δ} uniformly at random and construct two functions:
f(x) = n^{1/2} Σ_{i∈H} x_i + n^{1/2+δ}/ϵ and g(x) = n^δ Σ_i x_i + n^{1/2+δ}/ϵ.
As in the proof of Theorem 4.1, the noisy oracle returns f̃(x) = g(x) when (1 − ϵ)f(x) ≤ g(x) ≤ (1 + ϵ)f(x), and otherwise f̃(x) = f(x). Note that both functions are concave and non-negative, and by its definition the oracle is ϵ-erroneous for the function f. For b = n^{1/2+δ} it is easy to see that the optimal value is n^{1+δ} when the objective is f, while the optimal value is O(n^{1/2+δ}) when the objective is g, which implies that one cannot obtain an approximation better than Ω(n^{−1/2+δ}) with a subexponential number of queries. In case the query to the oracle is a point x s.t. Σ_i x_i ≤ n^{1/2}, then by Chernoff bound arguments similar to the ones we used above, with probability at least 1 − e^{−Ω(n^δ)} we get (1 − ϵ)f(x) ≤ g(x) ≤ (1 + ϵ)f(x). Thus, for any query in which Σ_i x_i ≤ n^{1/2}, the likelihood of the oracle returning f is exponentially small in n^δ. In case the query is a point x s.t. Σ_i x_i > n^{1/2}, standard concentration bound arguments as before imply that with probability at least 1 − e^{−Ω(n^δ)} we get (1 − ϵ)f(x) ≤ g(x) ≤ (1 + ϵ)f(x). Since the likelihood of distinguishing between f and g on a single query is exponentially small in n^δ, we can conclude that it takes an exponential number of queries to distinguish between f and g.
5 Optimization over assignments
In this section, we consider the concave maximization problem over a more specific polytope,
P_{n,k} = { x ∈ R^{n×k}_+ : Σ_{j=1}^k x_{ij} = 1 for all i ∈ [n] }.
This can be viewed as the matroid polytope for a partition matroid on n blocks of k elements, or alternatively the convex hull of assignments of n items to k agents. In this case, there is a trivial 1/k-approximation, similar to the 1/2-approximation in the case of the unit cube.
Theorem 5.1. For any k ≥ 2 and a concave function f : P_{n,k} → R_+, let OPT = sup_{x∈P_{n,k}} f(x). Then f(1/k, 1/k, …, 1/k) ≥ OPT/k.
Proof. By compactness, the optimum is attained at a point: let OPT = f(x*). Let x^{(ℓ)}_{ij} = x*_{i,(j+ℓ mod k)}; i.e., x^{(ℓ)} is a cyclic shift of the coordinates of x* by ℓ in each block.
We have x^{(ℓ)} ∈ P_{n,k} and (1/k)Σ_{ℓ=0}^{k−1} x^{(ℓ)}_{ij} = (1/k)Σ_{j=1}^k x*_{ij} = 1/k. By concavity and nonnegativity of f, we obtain
f(1/k, 1/k, …, 1/k) = f((1/k)Σ_{ℓ=0}^{k−1} x^{(ℓ)}) ≥ (1/k)Σ_{ℓ=0}^{k−1} f(x^{(ℓ)}) ≥ (1/k)f(x^{(0)}) = OPT/k.
We show that this approximation is best possible if we have access only to a δ-erroneous oracle.
Theorem 5.2. If k ≥ 2 and a concave function f : P_{n,k} → [0, 1] is accessible through a relative δ-erroneous oracle, then for any ϵ ∈ [0, δ], an algorithm that makes less than e^{ϵ²n/6k} queries cannot find a solution of value greater than ((1+ϵ)/k)·OPT with probability more than 2e^{−ϵ²n/6k}.
Note that this result is nontrivial only for n ≫ k. In other words, the hardness factor of k is never worse than the square root of the dimension of the problem. Therefore, this result can be viewed as interpolating between the hardness of (1+ϵ)/2-approximation over the unit cube (Theorem 3.2) and the hardness of n^{δ−1/2}-approximation over a general polytope (Theorem 4.2).
Proof. Given π : [n] → [k], we construct a function f^π : P_{n,k} → [0, 1] (where π describes the intended optimal solution):
• f^π(x) = (1/n)Σ_{i=1}^n x_{i,π(i)}.
Next we define a modified function f̃^π as follows:
• If |f^π(x) − 1/k| > ϵ/k, then f̃^π(x) = f^π(x).
• If |f^π(x) − 1/k| ≤ ϵ/k, then f̃^π(x) = 1/k.
By definition, f^π(x) and f̃^π(x) differ only if |f^π(x) − 1/k| ≤ ϵ/k, and then f^π(x) ∈ [(1−ϵ)/k, (1+ϵ)/k] while f̃^π(x) = 1/k. Therefore, f̃^π(x) is a valid relative ϵ-erroneous oracle for f^π. Similarly to the proofs above, we argue that if π is chosen uniformly at random, then with high probability f̃^π(x) = 1/k for any fixed query x ∈ P_{n,k}. This holds again by a Chernoff bound: for fixed x_{ij} such that Σ_{j=1}^k x_{ij} = 1, we have f^π(x) = (1/n)Σ_{i=1}^n x_{i,π(i)} = Z/n, where Z is a sum of independent random variables taking values in [0, 1], with expectation (1/k)Σ_{i,j} x_{ij} = n/k. By the Chernoff bound,
Pr[|Z − n/k| > ϵ·n/k] < 2e^{−ϵ²n/3k}.
This gives Pr[|f^π(x) − 1/k| > ϵ/k] < 2e^{−ϵ²n/3k}.
By the same arguments as before, if the algorithm asks less than e^{ϵ²n/6k} queries, then it will not detect a point such that |f^π(x) − 1/k| > ϵ/k with probability more than 2e^{−ϵ²n/6k}. Then the query answers will all be f̃^π(x) = 1/k and the value of the returned solution will be at most (1+ϵ)/k. Meanwhile, the optimum solution is x*_{i,π(i)} = 1 for all i, which gives f(x*) = 1.
Acknowledgements. YS was supported by NSF grant CCF-1301976, CAREER CCF-1452961 and a Google Faculty Research Award.
References
[1] Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In COLT 2010 - The 23rd Conference on Learning Theory, Haifa, Israel, June 27-29, 2010, pages 28–40, 2010.
[2] Alekh Agarwal, Dean P. Foster, Daniel Hsu, Sham M. Kakade, and Alexander Rakhlin. Stochastic convex optimization with bandit feedback. SIAM Journal on Optimization, 23(1):213–240, 2013.
[3] Alexandre Belloni, Tengyuan Liang, Hariharan Narayanan, and Alexander Rakhlin. Escaping the local minima via simulated annealing: Optimization of approximately convex functions. COLT 2015.
[4] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[5] Devdatt Dubhashi, Volker Priebe, and Desh Ranjan. Negative dependence through the FKG inequality. Research report MPI-I-96-1-020, Max-Planck-Institut für Informatik, Saarbrücken, 1996.
[6] John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE Transactions on Information Theory, 61(5):2788–2806, 2015.
[7] Uriel Feige, Vahab S. Mirrokni, and Jan Vondrák. Maximizing non-monotone submodular functions. SIAM J. Comput., 40(4):1133–1153, 2011.
[8] Abraham Flaxman, Adam Tauman Kalai, and H. Brendan McMahan.
Online convex optimization in the bandit setting: gradient descent without a gradient. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, Vancouver, British Columbia, Canada, January 23-25, 2005, pages 385–394, 2005.
[9] Kevin G. Jamieson, Robert D. Nowak, and Benjamin Recht. Query complexity of derivative-free optimization. In Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems 2012, Lake Tahoe, Nevada, United States, pages 2681–2689, 2012.
[10] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. J. Wiley & Sons, New York, 1983.
[11] Yurii Nesterov. Random gradient-free minimization of convex functions. CORE Discussion Papers 2011001, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE), 2011.
[12] Aaditya Ramdas, Barnabás Póczos, Aarti Singh, and Larry A. Wasserman. An analysis of active learning with uniform feature noise. In Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, AISTATS 2014, Reykjavik, Iceland, April 22-25, 2014, pages 805–813, 2014.
[13] Aaditya Ramdas and Aarti Singh. Optimal rates for stochastic convex optimization under Tsybakov noise condition. In Proceedings of the 30th International Conference on Machine Learning, ICML 2013, Atlanta, GA, USA, 16-21 June 2013, pages 365–373, 2013.
[14] Ohad Shamir. On the complexity of bandit and derivative-free stochastic convex optimization. In COLT 2013 - The 26th Annual Conference on Learning Theory, June 12-14, 2013, Princeton University, NJ, USA, pages 3–24, 2013.
[15] Sebastian U. Stich, Christian L. Müller, and Bernd Gärtner. Optimization of convex functions with random pursuit. CoRR, abs/1111.0194, 2011.
[16] Jan Vondrák. Symmetry and approximability of submodular maximization problems. SIAM J.
Comput., 42(1):265–304, 2013.
A Nonconvex Optimization Framework for Low Rank Matrix Estimation* Tuo Zhao Johns Hopkins University Zhaoran Wang Han Liu Princeton University
Abstract We study the estimation of low rank matrices via nonconvex optimization. Compared with convex relaxation, nonconvex optimization exhibits superior empirical performance for large scale instances of low rank matrix estimation. However, the understanding of its theoretical guarantees is limited. In this paper, we define the notion of projected oracle divergence, based on which we establish sufficient conditions for the success of nonconvex optimization. We illustrate the consequences of this general framework for matrix sensing. In particular, we prove that a broad class of nonconvex optimization algorithms, including alternating minimization and gradient-type methods, geometrically converge to the global optimum and exactly recover the true low rank matrices under standard conditions.
1 Introduction
Let M* ∈ R^{m×n} be a rank k matrix with k much smaller than m and n. Our goal is to estimate M* based on partial observations of its entries. For example, matrix sensing is based on linear measurements ⟨A_i, M*⟩, where i ∈ {1, …, d} with d much smaller than mn and A_i is the sensing matrix. In the past decade, significant progress has been made on the recovery of low rank matrices [4, 5, 23, 18, 15, 16, 12, 22, 7, 25, 19, 6, 14, 11, 13, 8, 9, 10, 27]. Among all these existing works, most are based upon convex relaxation with a nuclear norm constraint or regularization. Nevertheless, solving these convex optimization problems can be computationally prohibitive in high dimensional regimes with large m and n [27]. A computationally more efficient alternative is nonconvex optimization. In particular, we reparameterize the m × n matrix variable M in the optimization problem as UV^⊺ with U ∈ R^{m×k} and V ∈ R^{n×k}, and optimize over U and V.
Such a reparametrization automatically enforces the low rank structure and leads to low computational cost per iteration. For this reason, the nonconvex approach is widely used in large scale applications such as recommendation systems [17]. Despite the superior empirical performance of the nonconvex approach, the understanding of its theoretical guarantees is relatively limited in comparison with the convex relaxation approach. Only recently has there been progress on coordinate descent-type nonconvex optimization methods, known as alternating minimization [14, 8, 9, 10]. These works show that, provided a desired initialization, the alternating minimization algorithm converges at a geometric rate to U* ∈ R^{m×k} and V* ∈ R^{n×k}, which satisfy M* = U*V*^⊺. Meanwhile, [15, 16] establish the convergence of gradient-type methods, and [27] further establish the convergence of a broad class of nonconvex algorithms including both gradient-type and coordinate descent-type methods. However, [15, 16, 27] only establish asymptotic convergence over an infinite number of iterations, rather than an explicit rate of convergence. Besides these works, [18, 12, 13] consider projected gradient-type methods, which optimize over the matrix variable M ∈ R^{m×n} rather than U ∈ R^{m×k} and V ∈ R^{n×k}. These methods involve calculating the top k singular vectors of an m × n matrix at each iteration. For k much smaller than m and n, they incur much higher computational cost per iteration than the aforementioned methods that optimize over U and V. All these works, except [27], focus on specific algorithms, while [27] do not establish an explicit optimization rate of convergence.
(* Research supported by NSF IIS1116730, NSF IIS1332109, NSF IIS1408910, NSF IIS1546482-BIGDATA, NSF DMS1454377-CAREER, NIH R01GM083084, NIH R01HG06841, NIH R01MH102339, and FDA HHSF223201000072C.)
In this paper, we propose a general framework that unifies a broad class of nonconvex algorithms for low rank matrix estimation. At the core of this framework is a quantity named the projected oracle divergence, which sharply captures the evolution of generic optimization algorithms in the presence of nonconvexity. Based on the projected oracle divergence, we establish sufficient conditions under which the iteration sequences geometrically converge to the global optima. For matrix sensing, a direct consequence of this general framework is that a broad family of nonconvex algorithms, including gradient descent, coordinate gradient descent and coordinate descent, converge at a geometric rate to the true low rank matrices U* and V*. In particular, our general framework covers alternating minimization as a special case and recovers the results of [14, 8, 9, 10] under standard conditions. Meanwhile, our framework covers gradient-type methods, which are also widely used in practice [28, 24]. To the best of our knowledge, our framework is the first to establish exact recovery guarantees and geometric rates of convergence for a broad family of nonconvex matrix sensing algorithms. To achieve maximum generality, our unified analytic framework significantly differs from previous works. In detail, [14, 8, 9, 10] view alternating minimization as a perturbed version of the power method. However, their point of view relies on the closed form solution of each iteration of alternating minimization, which makes it hard to generalize to other algorithms, e.g., gradient-type methods. Meanwhile, [27] take a geometric point of view. In detail, they show that the global optimum of the optimization problem is the unique stationary point within its neighborhood, and thus a broad class of algorithms succeed. However, such geometric analysis of the objective function does not characterize the convergence rate of specific algorithms towards the stationary point.
Unlike existing analytic frameworks, we analyze nonconvex optimization algorithms as perturbed versions of their convex counterparts. For example, under our framework we view alternating minimization as a perturbed version of coordinate descent on convex objective functions. We use the key quantity, projected oracle divergence, to characterize such a perturbation effect, which results from the local nonconvexity at intermediate solutions. This framework allows us to establish explicit rates of convergence in a manner analogous to existing convex optimization analysis.

Notation: For a vector v = (v_1, …, v_d)⊤ ∈ R^d, the vector ℓ_q norm is ‖v‖_q^q = Σ_j |v_j|^q. For a matrix A ∈ R^{m×n}, we use A_{*j} = (A_{1j}, …, A_{mj})⊤ to denote the j-th column of A, and A_{i*} = (A_{i1}, …, A_{in})⊤ to denote the i-th row of A. Let σ_max(A) and σ_min(A) be the largest and smallest nonzero singular values of A. We define the following matrix norms: ‖A‖_F² = Σ_j ‖A_{*j}‖_2² and ‖A‖_2 = σ_max(A). Moreover, we define ‖A‖_* to be the sum of all singular values of A. Given another matrix B ∈ R^{m×n}, we define the inner product ⟨A, B⟩ = Σ_{i,j} A_{ij}B_{ij}. We define e_i as an indicator vector, whose i-th entry is one and all other entries are zero. For a bivariate function f(u, v), we define ∇_u f(u, v) to be the gradient with respect to u. Moreover, we use the common notations Ω(·), O(·), and o(·) to characterize the asymptotics of real sequences.

2 Problem Formulation and Algorithms
Let M* ∈ R^{m×n} be the unknown low rank matrix of interest. We have d sensing matrices A_i ∈ R^{m×n} with i ∈ {1, …, d}. Our goal is to estimate M* based on b_i = ⟨A_i, M*⟩ in the high dimensional regime with d much smaller than mn. Under such a regime, a common assumption is rank(M*) = k ≪ min{d, m, n}. Existing approaches generally recover M* by solving the following convex optimization problem:
min_{M ∈ R^{m×n}} ‖M‖_*  subject to  b = A(M), (2.1)
where b = [b_1, …, b_d]⊤ ∈ R^d, and A(M) : R^{m×n} →
R^d is the operator defined as
A(M) = [⟨A_1, M⟩, …, ⟨A_d, M⟩]⊤ ∈ R^d. (2.2)
Existing convex optimization algorithms for solving (2.1) are computationally inefficient, in the sense that they incur high per-iteration computational cost and only attain sublinear rates of convergence to the global optimum [14]. Instead, in large scale settings we usually consider the following nonconvex optimization problem:
min_{U ∈ R^{m×k}, V ∈ R^{n×k}} F(U, V), where F(U, V) = (1/2)·‖b − A(UV⊤)‖_2². (2.3)
The reparametrization M = UV⊤, though making the optimization problem in (2.3) nonconvex, significantly improves computational efficiency. Existing literature [17, 28, 21, 24] has established convincing empirical evidence that (2.3) can be effectively solved by a broad variety of gradient-based nonconvex optimization algorithms, including gradient descent, alternating exact minimization (i.e., alternating least squares or coordinate descent), as well as alternating gradient descent (i.e., coordinate gradient descent), which are shown in Algorithm 1. It is worth noting that the QR decomposition and rank k singular value decomposition in Algorithm 1 can be accomplished efficiently. In particular, the QR decomposition can be accomplished in O(k² max{m, n}) operations, while the rank k singular value decomposition can be accomplished in O(kmn) operations. In fact, the QR decomposition is not necessary for particular update schemes; e.g., [14] prove that the alternating exact minimization update schemes with and without the QR decomposition are equivalent.

Algorithm 1: A family of nonconvex optimization algorithms for matrix sensing. Here (U, D, V) ← KSVD(M) is the rank k singular value decomposition of M, where D is a diagonal matrix containing the top k singular values of M in decreasing order, and U and V contain the corresponding top k left and right singular vectors of M.
Here (V, R_V) ← QR(V) is the QR decomposition, where V is the corresponding orthonormal matrix and R_V is the corresponding upper triangular matrix.

Input: {b_i}_{i=1}^d, {A_i}_{i=1}^d
Parameter: step size η, total number of iterations T
Initialize: (U(0), D(0), V(0)) ← KSVD(Σ_{i=1}^d b_i A_i), V(0) ← V(0)D(0), U(0) ← U(0)D(0)
For t = 0, …, T − 1:
  Updating V:
    Alternating exact minimization: V(t+0.5) ← argmin_V F(U(t), V); (V(t+1), R_V(t+0.5)) ← QR(V(t+0.5))
    Alternating gradient descent: V(t+0.5) ← V(t) − η∇_V F(U(t), V(t)); (V(t+1), R_V(t+0.5)) ← QR(V(t+0.5)), U(t) ← U(t)R_V(t+0.5)⊤
    Gradient descent: V(t+0.5) ← V(t) − η∇_V F(U(t), V(t)); (V(t+1), R_V(t+0.5)) ← QR(V(t+0.5)), U(t+1) ← U(t)R_V(t+0.5)⊤
  Updating U:
    Alternating exact minimization: U(t+0.5) ← argmin_U F(U, V(t+1)); (U(t+1), R_U(t+0.5)) ← QR(U(t+0.5))
    Alternating gradient descent: U(t+0.5) ← U(t) − η∇_U F(U(t), V(t+1)); (U(t+1), R_U(t+0.5)) ← QR(U(t+0.5)), V(t+1) ← V(t+1)R_U(t+0.5)⊤
    Gradient descent: U(t+0.5) ← U(t) − η∇_U F(U(t), V(t)); (U(t+1), R_U(t+0.5)) ← QR(U(t+0.5)), V(t+1) ← V(t)R_U(t+0.5)⊤
End for
Output: M(T) ← U(T−0.5)V(T)⊤ (for gradient descent we use U(T)V(T)⊤)

3 Theoretical Analysis
We analyze the convergence properties of the general family of nonconvex optimization algorithms illustrated in §2. Before we present the main results, we first introduce a unified analytic framework based on a key quantity named projected oracle divergence. Such a unified framework equips our theory with maximum generality. Without loss of generality, we assume m ≥ n throughout the rest of this paper.

3.1 Projected Oracle Divergence
We first provide an intuitive explanation for the success of nonconvex optimization algorithms, which forms the basis of our later proof of the main results. Recall that (2.3) is a special instance of the following optimization problem:
min_{U ∈ R^{m×k}, V ∈ R^{n×k}} f(U, V).
(3.1)
A key observation is that, given fixed U, f(U, ·) is strongly convex and smooth in V under suitable conditions, and correspondingly, given fixed V, f(·, V) is strongly convex and smooth in U. For the convenience of discussion, we summarize this observation in the following technical condition, which will be verified later for matrix sensing under suitable conditions.

Condition 3.1 (Strong Biconvexity and Bismoothness). There exist universal constants µ+ > 0 and µ− > 0 such that
(µ−/2)‖U′ − U‖_F² ≤ f(U′, V) − f(U, V) − ⟨U′ − U, ∇_U f(U, V)⟩ ≤ (µ+/2)‖U′ − U‖_F² for all U, U′,
(µ−/2)‖V′ − V‖_F² ≤ f(U, V′) − f(U, V) − ⟨V′ − V, ∇_V f(U, V)⟩ ≤ (µ+/2)‖V′ − V‖_F² for all V, V′.

For the simplicity of discussion, for now we assume U* and V* are the unique global minimizers of the generic optimization problem in (3.1). Assuming U* is given, we can obtain V* by
V* = argmin_{V ∈ R^{n×k}} f(U*, V). (3.2)
Condition 3.1 implies the objective function in (3.2) is strongly convex and smooth. Hence, we can choose any gradient-based algorithm to obtain V*. For example, we can directly solve for V* in
∇_V f(U*, V) = 0, (3.3)
or iteratively solve for V* using gradient descent, i.e.,
V(t) = V(t−1) − η∇_V f(U*, V(t−1)), (3.4)
where η is the step size. For the simplicity of discussion, we put aside the renormalization issue for now. In the example of gradient descent, by invoking classical convex optimization results [20], it is easy to prove that ‖V(t) − V*‖_F ≤ β‖V(t−1) − V*‖_F for all t = 0, 1, 2, …, where β ∈ (0, 1) is a contraction coefficient that depends on µ+ and µ− in Condition 3.1. However, the first-order oracle ∇_V f(U*, V(t−1)) is not accessible in practice, since we do not know U*. Instead, we only have access to ∇_V f(U, V(t−1)), where U is arbitrary.
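For the matrix sensing objective F(U, V) = (1/2)‖b − A(UV⊤)‖_2² in (2.3), these first-order oracles have the closed forms ∇_V F(U, V) = Σ_i (⟨A_i, UV⊤⟩ − b_i)·A_i⊤U and, symmetrically, ∇_U F(U, V) = Σ_i (⟨A_i, UV⊤⟩ − b_i)·A_i V. A minimal NumPy sketch (not the paper's code) that evaluates the objective and the V-oracle, checked against a finite difference; since F is quadratic in V for fixed U, the central difference is exact up to roundoff:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k, d = 8, 6, 2, 40
A = rng.standard_normal((d, m, n))            # sensing matrices A_i
M_star = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
b = np.einsum('dij,ij->d', A, M_star)         # b_i = <A_i, M*>

def F(U, V):
    r = np.einsum('dij,ij->d', A, U @ V.T) - b
    return 0.5 * r @ r

def grad_V(U, V):
    # sum_i (<A_i, U V^T> - b_i) * A_i^T U, an n x k matrix
    r = np.einsum('dij,ij->d', A, U @ V.T) - b
    return np.einsum('d,dij,ik->jk', r, A, U)

# finite-difference check of one coordinate of the accessible oracle grad_V(U, .)
U = rng.standard_normal((m, k))
V = rng.standard_normal((n, k))
eps = 1e-6
E = np.zeros_like(V); E[0, 0] = eps
fd = (F(U, V + E) - F(U, V - E)) / (2 * eps)
assert abs(grad_V(U, V)[0, 0] - fd) < 1e-4
```

Replacing `U` by an estimate `U_t` versus the (unknown) true factor is precisely the gap that the projected oracle divergence measures.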
To characterize the divergence between the ideal first-order oracle ∇_V f(U*, V′) and the accessible first-order oracle ∇_V f(U, V′), we define a key quantity named projected oracle divergence, which takes the form
D(V, V′, U) = ⟨∇_V f(U*, V′) − ∇_V f(U, V′), (V − V*)/‖V − V*‖_F⟩, (3.5)
where V′ is the point at which the gradient is evaluated; in the above example, V′ = V(t−1). As we will illustrate later, the projection of the difference of first-order oracles onto a specific one dimensional space, i.e., the direction of V − V*, is critical to our analysis. In the above example of gradient descent, we will prove that for V(t) = V(t−1) − η∇_V f(U, V(t−1)), we have
‖V(t) − V*‖_F ≤ β‖V(t−1) − V*‖_F + (2/µ+)·D(V(t), V(t−1), U), (3.6)
where β ∈ (0, 1) is the contraction coefficient of the ideal gradient update. In other words, the projection of the divergence of first-order oracles onto the direction of V(t) − V* captures the perturbation effect of employing the accessible first-order oracle ∇_V f(U, V(t−1)) instead of the ideal ∇_V f(U*, V(t−1)). For V(t) = argmin_V f(U, V), we will prove that
‖V(t) − V*‖_F ≤ (1/µ−)·D(V(t), V(t), U). (3.7)
According to the update schemes shown in Algorithm 1, for alternating exact minimization we set U = U(t) in (3.7), while for gradient descent or alternating gradient descent we set U = U(t−1) or U = U(t) in (3.6), respectively. Similar results hold correspondingly for ‖U(t) − U*‖_F. To establish the geometric rate of convergence towards the global minima U* and V*, it remains to establish upper bounds for the projected oracle divergence. In the example of gradient descent we will prove that for some α ∈ (0, 1 − β),
(2/µ+)·D(V(t), V(t−1), U(t−1)) ≤ α‖U(t−1) − U*‖_F,
which together with (3.6) (where we take U = U(t−1)) implies
‖V(t) − V*‖_F ≤ β‖V(t−1) − V*‖_F + α‖U(t−1) − U*‖_F. (3.8)
Correspondingly, a similar result holds for ‖U(t) − U*‖_F, i.e.,
‖U(t) − U*‖_F ≤ β‖U(t−1) − U*‖_F + α‖V(t−1) − V*‖_F.
(3.9)
Combining (3.8) and (3.9), we then establish the contraction
max{‖V(t) − V*‖_F, ‖U(t) − U*‖_F} ≤ (α + β) · max{‖V(t−1) − V*‖_F, ‖U(t−1) − U*‖_F},
which further implies geometric convergence, since α + β < 1. Respectively, we can establish similar results for alternating exact minimization and alternating gradient descent. Based upon such a unified analytic framework, we now simultaneously establish the main results.

Remark 3.2. Our proposed projected oracle divergence is inspired by previous work [3, 2, 1], which analyzes the Wirtinger Flow algorithm for phase retrieval, the expectation maximization (EM) algorithm for latent variable models, and the gradient descent algorithm for sparse coding. Though their analyses exploit similar nonconvex structures, they work on completely different problems, and the delivered technical results are also fundamentally different.

3.2 Matrix Sensing
Before we present our main results, we first introduce an assumption known as the restricted isometry property (RIP). Recall that k is the rank of the target low rank matrix M*.

Assumption 3.3. The linear operator A(·) : R^{m×n} → R^d defined in (2.2) satisfies 2k-RIP with parameter δ_{2k} ∈ (0, 1), i.e., for all Δ ∈ R^{m×n} such that rank(Δ) ≤ 2k, it holds that
(1 − δ_{2k})‖Δ‖_F² ≤ ‖A(Δ)‖_2² ≤ (1 + δ_{2k})‖Δ‖_F².

Several random matrix ensembles satisfy k-RIP for a sufficiently large d with high probability. For example, suppose that each entry of A_i is independently drawn from a sub-Gaussian distribution; then A(·) satisfies 2k-RIP with parameter δ_{2k} with high probability for d = Ω(δ_{2k}⁻² kn log n). The following theorem establishes the geometric rate of convergence of the nonconvex optimization algorithms summarized in Algorithm 1.

Theorem 3.4. Assume there exists a sufficiently small constant C_1 such that A(·) satisfies 2k-RIP with δ_{2k} ≤ C_1/k, and that the largest and smallest nonzero singular values of M* are constants that do not scale with (d, m, n, k).
For any pre-specified precision ε, there exist a step size η and universal constants C_2 and C_3 such that for all T ≥ C_2 log(C_3/ε), we have ‖M(T) − M*‖_F ≤ ε.

The proof of Theorem 3.4 is provided in Appendices 4.1, A.1, and A.2. Theorem 3.4 implies that all three nonconvex optimization algorithms converge geometrically to the global optimum. Moreover, assuming that each entry of A_i is independently drawn from a sub-Gaussian distribution with mean zero and variance proxy one, our result further suggests that, to achieve exact low rank matrix recovery, our algorithm requires the number of measurements d to satisfy
d = Ω(k³n log n), (3.10)
since we assume that δ_{2k} ≤ C_1/k. This sample complexity matches the state-of-the-art result for nonconvex optimization methods, established by [14]. In comparison with their result, which only covers the alternating exact minimization algorithm, our result holds for a broader variety of nonconvex optimization algorithms. Note that the sample complexity in (3.10) depends on a polynomial of σ_max(M*)/σ_min(M*), which is treated as a constant in our paper. If we allow σ_max(M*)/σ_min(M*) to increase with the dimension, we can plug the nonconvex optimization algorithms into the multi-stage framework proposed by [14]. Following similar lines to the proof of Theorem 3.4, we can derive a new sample complexity that is independent of σ_max(M*)/σ_min(M*). See [14] for more details.

4 Proof of Main Results
Due to space limitations, we only sketch the proof of Theorem 3.4 for alternating exact minimization. The proof of Theorem 3.4 for alternating gradient descent and gradient descent, and related lemmas, are provided in the appendix. For notational simplicity, let σ_1 = σ_max(M*) and σ_k = σ_min(M*). Before we proceed with the main proof, we first introduce the following lemma, which verifies Condition 3.1.

Lemma 4.1. Suppose that A(·) satisfies 2k-RIP with parameter δ_{2k}.
Given an arbitrary orthonormal matrix U ∈ R^{m×k}, for any V, V′ ∈ R^{n×k}, we have
((1 + δ_{2k})/2)‖V′ − V‖_F² ≥ F(U, V′) − F(U, V) − ⟨∇_V F(U, V), V′ − V⟩ ≥ ((1 − δ_{2k})/2)‖V′ − V‖_F².

The proof of Lemma 4.1 is provided in Appendix B.1. Lemma 4.1 implies that F(U, ·) is strongly convex and smooth in V given a fixed orthonormal matrix U, as specified in Condition 3.1. Equipped with Lemma 4.1, we now lay out the proof for each update scheme in Algorithm 1.

4.1 Proof of Theorem 3.4 (Alternating Exact Minimization)
Proof. Throughout the proof for alternating exact minimization, we define a constant ξ ∈ (1, ∞) for notational simplicity. We assume that at the t-th iteration there exists a matrix factorization M* = U*(t)V*(t)⊤, where U*(t) is orthonormal. We choose the projected oracle divergence as
D(V(t+0.5), V(t+0.5), U(t)) = ⟨∇_V F(U*(t), V(t+0.5)) − ∇_V F(U(t), V(t+0.5)), (V(t+0.5) − V*(t))/‖V(t+0.5) − V*(t)‖_F⟩.

Remark 4.2. Note that the matrix factorization is not necessarily unique, because given a factorization M* = UV⊤, we can always obtain a new factorization M* = ŨṼ⊤, where Ũ = UO and Ṽ = VO for an arbitrary unitary matrix O ∈ R^{k×k}. However, this is not an issue for our convergence analysis. As will be shown later, we can prove that there always exists a factorization of M* satisfying the desired computational properties at each iteration (see Lemma 4.5, Corollaries 4.7 and 4.8).

The following lemma establishes an upper bound for the projected oracle divergence.

Lemma 4.3. Suppose that δ_{2k} and U(t) satisfy
δ_{2k} ≤ √2(1 − δ_{2k})²σ_k / (4ξk(1 + δ_{2k})σ_1) and ‖U(t) − U*(t)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ(1 + δ_{2k})σ_1). (4.1)
Then we have
D(V(t+0.5), V(t+0.5), U(t)) ≤ ((1 − δ_{2k})σ_k/(2ξ))·‖U(t) − U*(t)‖_F.

The proof of Lemma 4.3 is provided in Appendix B.2. Lemma 4.3 shows that the projected oracle divergence for updating V diminishes with the estimation error of U(t). The following lemma quantifies the progress of an exact minimization step using the projected oracle divergence.
Lemma 4.4. We have ‖V(t+0.5) − V*(t)‖_F ≤ (1/(1 − δ_{2k}))·D(V(t+0.5), V(t+0.5), U(t)).

The proof of Lemma 4.4 is provided in Appendix B.3. Lemma 4.4 shows that the estimation error of V(t+0.5) diminishes with the projected oracle divergence. The following lemma characterizes the effect of the renormalization step using the QR decomposition.

Lemma 4.5. Suppose that V(t+0.5) satisfies
‖V(t+0.5) − V*(t)‖_F ≤ σ_k/4. (4.2)
Then there exists a factorization M* = U*(t+1)V*(t+1)⊤ such that V*(t+1) ∈ R^{n×k} is an orthonormal matrix and satisfies
‖V(t+1) − V*(t+1)‖_F ≤ (2/σ_k)·‖V(t+0.5) − V*(t)‖_F.

The proof of Lemma 4.5 is provided in Appendix B.4. The next lemma quantifies the accuracy of the initialization U(0).

Lemma 4.6. Suppose that δ_{2k} satisfies
δ_{2k} ≤ (1 − δ_{2k})²σ_k⁴ / (192ξ²k(1 + δ_{2k})²σ_1⁴). (4.3)
Then there exists a factorization M* = U*(0)V*(0)⊤ such that U*(0) ∈ R^{m×k} is an orthonormal matrix and satisfies ‖U(0) − U*(0)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ(1 + δ_{2k})σ_1).

The proof of Lemma 4.6 is provided in Appendix B.5. Lemma 4.6 implies that the initial solution U(0) attains a sufficiently small estimation error. Combining the above lemmas, we obtain the next corollary for a complete iteration of updating V.

Corollary 4.7. Suppose that δ_{2k} and U(t) satisfy
δ_{2k} ≤ (1 − δ_{2k})²σ_k⁴ / (192ξ²k(1 + δ_{2k})²σ_1⁴) and ‖U(t) − U*(t)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ(1 + δ_{2k})σ_1). (4.4)
We then have ‖V(t+1) − V*(t+1)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ(1 + δ_{2k})σ_1). Moreover, we also have
‖V(t+1) − V*(t+1)‖_F ≤ (1/ξ)·‖U(t) − U*(t)‖_F and ‖V(t+0.5) − V*(t)‖_F ≤ (σ_k/(2ξ))·‖U(t) − U*(t)‖_F.

The proof of Corollary 4.7 is provided in Appendix B.6. Since the alternating exact minimization algorithm updates U and V in a symmetric manner, we can establish similar results for a complete iteration of updating U in the next corollary.

Corollary 4.8. Suppose that δ_{2k} and V(t+1) satisfy
δ_{2k} ≤ (1 − δ_{2k})²σ_k⁴ / (192ξ²k(1 + δ_{2k})²σ_1⁴) and ‖V(t+1) − V*(t+1)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ(1 + δ_{2k})σ_1).
(4.5)
Then there exists a factorization M* = U*(t+1)V*(t+1)⊤ such that U*(t+1) is an orthonormal matrix and satisfies ‖U(t+1) − U*(t+1)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ(1 + δ_{2k})σ_1). Moreover, we also have
‖U(t+1) − U*(t+1)‖_F ≤ (1/ξ)·‖V(t+1) − V*(t+1)‖_F and ‖U(t+0.5) − U*(t+1)‖_F ≤ (σ_k/(2ξ))·‖V(t+1) − V*(t+1)‖_F.

The proof of Corollary 4.8 directly follows Appendix B.6 and is therefore omitted. We then proceed with the proof of Theorem 3.4 for alternating exact minimization. Lemma 4.6 ensures that (4.4) of Corollary 4.7 holds for U(0). Then Corollary 4.7 ensures that (4.5) of Corollary 4.8 holds for V(1). By induction, Corollaries 4.7 and 4.8 can be applied recursively for all T iterations. Thus we obtain
‖V(T) − V*(T)‖_F ≤ (1/ξ)·‖U(T−1) − U*(T−1)‖_F ≤ (1/ξ²)·‖V(T−1) − V*(T−1)‖_F ≤ ··· ≤ (1/ξ^{2T−1})·‖U(0) − U*(0)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ^{2T}(1 + δ_{2k})σ_1), (4.6)
where the last inequality comes from Lemma 4.6. Therefore, for a pre-specified accuracy ε, we need at most
T = ⌈(1/2)·log((1 − δ_{2k})σ_k/(2ε(1 + δ_{2k})σ_1))·log⁻¹ ξ⌉
iterations such that
‖V(T) − V*(T)‖_F ≤ (1 − δ_{2k})σ_k / (4ξ^{2T}(1 + δ_{2k})σ_1) ≤ ε/2. (4.7)
Moreover, Corollary 4.8 implies
‖U(T−0.5) − U*(T)‖_F ≤ (σ_k/(2ξ))·‖V(T) − V*(T)‖_F ≤ (1 − δ_{2k})σ_k² / (8ξ^{2T+1}(1 + δ_{2k})σ_1),
where the last inequality comes from (4.6). Therefore, we need at most
T = ⌈(1/2)·log((1 − δ_{2k})σ_k²/(4ξε(1 + δ_{2k})))·log⁻¹ ξ⌉
iterations such that
‖U(T−0.5) − U*(T)‖_F ≤ (1 − δ_{2k})σ_k² / (8ξ^{2T+1}(1 + δ_{2k})σ_1) ≤ ε/(2σ_1). (4.8)
Combining (4.7) and (4.8), we obtain
‖M(T) − M*‖_F = ‖U(T−0.5)V(T)⊤ − U*(T)V*(T)⊤‖_F ≤ ‖V(T)‖_2·‖U(T−0.5) − U*(T)‖_F + ‖U*(T)‖_2·‖V(T) − V*(T)‖_F ≤ ε, (4.9)
where the last inequality uses ‖V(T)‖_2 = 1 (since V(T) is orthonormal) and ‖U*(T)‖_2 = ‖M*‖_2 = σ_1 (since U*(T)V*(T)⊤ = M* and V*(T) is orthonormal). This completes the proof.

5 Extension to Matrix Completion
Under the same setting as matrix sensing, we observe a subset of the entries of M*, namely W ⊆ {1, …, m} × {1, …, n}.
We assume that W is drawn uniformly at random, i.e., M*_{ij} is observed independently with probability ρ̄ ∈ (0, 1]. To exactly recover M*, a common assumption is the incoherence of M*, which will be specified later. A popular approach for recovering M* is to solve the following convex optimization problem:
min_{M ∈ R^{m×n}} ‖M‖_*  subject to  P_W(M*) = P_W(M), (5.1)
where P_W(M) : R^{m×n} → R^{m×n} is the operator defined as [P_W(M)]_{ij} = M_{ij} if (i, j) ∈ W, and 0 otherwise. Similar to matrix sensing, existing algorithms for solving (5.1) are computationally inefficient. Hence, in practice we usually consider the following nonconvex optimization problem:
min_{U ∈ R^{m×k}, V ∈ R^{n×k}} F_W(U, V), where F_W(U, V) = (1/2)·‖P_W(M*) − P_W(UV⊤)‖_F². (5.2)
Similar to matrix sensing, (5.2) can also be efficiently solved by gradient-based algorithms. Due to space limitations, we present these matrix completion algorithms in Algorithm 2 of Appendix D. For the convenience of the later convergence analysis, we partition the observation set W into 2T + 1 subsets W_0, …, W_{2T} using Algorithm 4 in Appendix D. In practice, however, we do not need this partition scheme, i.e., we simply set W_0 = ··· = W_{2T} = W. Before we present the main results, we introduce an assumption known as the incoherence property.

Assumption 5.1. The target rank k matrix M* is incoherent with parameter µ, i.e., given the rank k singular value decomposition M* = U*Σ*V*⊤, we have
max_i ‖U*_{i*}‖_2 ≤ µ√(k/m) and max_j ‖V*_{j*}‖_2 ≤ µ√(k/n).

The incoherence assumption guarantees that M* is far from a sparse matrix, which makes it feasible to complete M* when its entries are missing uniformly at random. The following theorem establishes the iteration complexity and the estimation error in the Frobenius norm.

Theorem 5.2. Suppose that there exists a universal constant C_4 such that ρ̄ satisfies
ρ̄ ≥ C_4µ²k³ log n log(1/ε)/m, (5.3)
where ε is the pre-specified precision.
Then there exist a step size η and universal constants C_5 and C_6 such that for any T ≥ C_5 log(C_6/ε), we have ‖M(T) − M*‖_F ≤ ε with high probability.

Due to space limitations, we defer the proof of Theorem 5.2 to the longer version of this paper. Theorem 5.2 implies that all three nonconvex optimization algorithms converge to the global optimum at a geometric rate. Furthermore, our results indicate that the completion of the true low rank matrix M* up to ε-accuracy requires the entry observation probability ρ̄ to satisfy
ρ̄ = Ω(µ²k³ log n log(1/ε)/m). (5.4)
This matches the result established by [8], which is the state of the art for alternating minimization. Moreover, our analysis covers three nonconvex optimization algorithms.

6 Experiments
We present numerical experiments for matrix sensing to support our theoretical analysis. We choose m = 30, n = 40, and k = 5, and vary d from 300 to 900. Each entry of the A_i's is independently sampled from N(0, 1). We then generate M = ŨṼ⊤, where Ũ ∈ R^{m×k} and Ṽ ∈ R^{n×k} are two matrices with all entries independently sampled from N(0, 1/k). We then generate d measurements by b_i = ⟨A_i, M⟩ for i = 1, …, d. Figure 1 illustrates the empirical performance of the alternating exact minimization and alternating gradient descent algorithms for a single realization. The step size for the alternating gradient descent algorithm is determined by a backtracking line search procedure. We see that both algorithms attain a linear rate of convergence for d = 600 and d = 900. Both algorithms fail for d = 300, because d = 300 is below the minimum sample complexity required for exact matrix recovery.

Figure 1: Two illustrative examples for matrix sensing.
The vertical axis corresponds to the estimation error ‖M(t) − M‖_F, and the horizontal axis corresponds to the number of iterations. Both the alternating exact minimization and alternating gradient descent algorithms attain a linear rate of convergence for d = 600 and d = 900, but both algorithms fail for d = 300, which is below the minimum sample complexity required for exact matrix recovery.

References
[1] Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Moitra. Simple, efficient, and neural algorithms for sparse coding. arXiv preprint arXiv:1503.00778, 2015.
[2] Sivaraman Balakrishnan, Martin J Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. arXiv preprint arXiv:1408.2156, 2014.
[3] Emmanuel J Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Transactions on Information Theory, 61(4):1985–2007, 2015.
[4] Emmanuel J Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[5] Emmanuel J Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2010.
[6] Yudong Chen. Incoherence-optimal matrix completion. arXiv preprint arXiv:1310.0154, 2013.
[7] David Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548–1566, 2011.
[8] Moritz Hardt. Understanding alternating minimization for matrix completion. In Symposium on Foundations of Computer Science, pages 651–660, 2014.
[9] Moritz Hardt, Raghu Meka, Prasad Raghavendra, and Benjamin Weitz. Computational limits for matrix completion. arXiv preprint arXiv:1402.2331, 2014.
[10] Moritz Hardt and Mary Wootters. Fast matrix completion without the condition number. arXiv preprint arXiv:1407.4070, 2014.
[11] Trevor Hastie, Rahul Mazumder, Jason Lee, and Reza Zadeh.
Matrix completion and low-rank SVD via fast alternating least squares. arXiv preprint arXiv:1410.2596, 2014.
[12] Prateek Jain, Raghu Meka, and Inderjit S Dhillon. Guaranteed rank minimization via singular value projection. In Advances in Neural Information Processing Systems, pages 937–945, 2010.
[13] Prateek Jain and Praneeth Netrapalli. Fast exact matrix completion with finite samples. arXiv preprint arXiv:1411.1087, 2014.
[14] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Symposium on Theory of Computing, pages 665–674, 2013.
[15] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010.
[16] Raghunandan H Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from noisy entries. Journal of Machine Learning Research, 11:2057–2078, 2010.
[17] Yehuda Koren. The BellKor solution to the Netflix grand prize. Netflix Prize Documentation, 81, 2009.
[18] Kiryung Lee and Yoram Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. IEEE Transactions on Information Theory, 56(9):4402–4416, 2010.
[19] Sahand Negahban and Martin J Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. The Annals of Statistics, 39(2):1069–1097, 2011.
[20] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume 87. Springer, 2004.
[21] Arkadiusz Paterek. Improving regularized singular value decomposition for collaborative filtering. In Proceedings of KDD Cup and Workshop, volume 2007, pages 5–8, 2007.
[22] Benjamin Recht. A simpler approach to matrix completion. Journal of Machine Learning Research, 12:3413–3430, 2011.
[23] Benjamin Recht, Maryam Fazel, and Pablo A Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[24] Benjamin Recht and Christopher Ré. Parallel stochastic gradient algorithms for large-scale matrix completion. Mathematical Programming Computation, 5(2):201–226, 2013.
[25] Angelika Rohde and Alexandre B Tsybakov. Estimation of high-dimensional low-rank matrices. The Annals of Statistics, 39(2):887–930, 2011.
[26] Gilbert W Stewart and Ji-guang Sun. Matrix Perturbation Theory, volume 175. Academic Press, New York, 1990.
[27] Ruoyu Sun and Zhi-Quan Luo. Guaranteed matrix completion via non-convex factorization. arXiv preprint arXiv:1411.8003, 2014.
[28] Gábor Takács, István Pilászy, Bottyán Németh, and Domonkos Tikk. Major components of the gravity recommendation system. ACM SIGKDD Explorations Newsletter, 9(2):80–83, 2007.
Recursive Training of 2D-3D Convolutional Networks for Neuronal Boundary Detection
Kisuk Lee, Aleksandar Zlateski (Massachusetts Institute of Technology), {kisuklee,zlateski}@mit.edu
Ashwin Vishwanathan, H. Sebastian Seung (Princeton University), {ashwinv,sseung}@princeton.edu

Abstract
Efforts to automate the reconstruction of neural circuits from 3D electron microscopic (EM) brain images are critical for the field of connectomics. An important computation for reconstruction is the detection of neuronal boundaries. Images acquired by serial section EM, a leading 3D EM technique, are highly anisotropic, with inferior quality along the third dimension. For such images, the 2D max-pooling convolutional network has set the standard for performance at boundary detection. Here we achieve a substantial gain in accuracy through three innovations. First, following the trend towards deeper networks for object recognition, we use a much deeper network than previously employed for boundary detection. Second, we incorporate 3D as well as 2D filters, to enable computations that use 3D context. Finally, we adopt a recursively trained architecture in which a first network generates a preliminary boundary map that is provided as input, along with the original image, to a second network that generates a final boundary map. Backpropagation training is accelerated by ZNN, a new implementation of 3D convolutional networks that uses multicore CPU parallelism for speed. Our hybrid 2D-3D architecture could be more generally applicable to other types of anisotropic 3D images, including video, and our recursive framework to any image labeling problem.

1 Introduction
Neural circuits can be reconstructed by analyzing 3D brain images from electron microscopy (EM). Image analysis has been accelerated by semiautomated systems that use computer vision to reduce the amount of human labor required [1, 2, 3].
However, analysis of large image datasets is still laborious [4], so it is critical to increase automation by improving the accuracy of computer vision algorithms. A variety of machine learning approaches have been explored for the 3D reconstruction of neurons, a problem that can be formulated as image segmentation or boundary detection [5, 6]. This paper focuses on neuronal boundary detection in images from serial section EM, the most widespread kind of 3D EM [7]. The technique starts by cutting and collecting ultrathin (30 to 100 nm) sections of brain tissue. A 2D image is acquired from each section, and then the 2D images are aligned. The spatial resolution of the resulting 3D image stack along the z direction (perpendicular to the cutting plane) is set by the thickness of the sections. This is generally much worse than the resolution that EM yields in the xy plane. In addition, alignment errors may corrupt the image along the z direction. Due to these issues with the z direction of the image stack [6, 8], most existing analysis pipelines begin with 2D processing and only later transition to 3D. The stages are: (1) neuronal boundary detection within each 2D image, (2) segmentation of neuron cross sections within each 2D image, and (3) 3D reconstruction of individual neurons by linking across multiple 2D images [1, 9]. 1 Boundary detection in serial section EM images is done by a variety of algorithms. Many algorithms were compared in the ISBI’12 2D EM segmentation challenge, a publicly available dataset and benchmark [10]. The winning submission was an ensemble of max-pooling convolutional networks (ConvNets) created by IDSIA [11]. One of the ConvNet architectures shown in Figure 1 (N4) is the largest architecture from [11], and serves as a performance baseline for the research reported here. We improve upon N4 by adding several new elements (Fig. 
1):

Increased depth. Our VD2D architecture is deeper than N4 (Figure 1) and borrows other now-standard practices from the literature, such as rectified linear units (ReLUs), small filter sizes, and multiple convolution layers between pooling layers. VD2D already outperforms N4, without any use of 3D context. VD2D is motivated by the principle "the deeper, the better," which has become popular for ConvNets applied to object recognition [12, 13].

3D as well as 2D. When human experts detect boundaries in EM images, they use 3D context to disambiguate certain locations. VD2D3D is also able to use 3D context, because it contains 3D filters in its later layers. ConvNets with 3D filters were previously applied to block face EM images [2, 3, 14]. Block face EM is another class of 3D EM techniques, and produces nearly isotropic images, unlike serial section EM. VD2D3D also contains 2D filters in its earlier layers. This novel hybrid use of 2D and 3D filters is suited to the highly anisotropic nature of serial section EM images.

Recursive training of ConvNets. VD2D and VD2D3D are concatenated to create an extremely deep network. The output of VD2D is a preliminary boundary map, which is provided as input to VD2D3D in addition to the original image (Fig. 1). Based on these two inputs, VD2D3D is trained to compute the final boundary map. Such "recursive" training has previously been applied to neural networks for boundary detection [8, 15, 16], but not to ConvNets.

ZNN for 3D deep learning. Very deep ConvNets with 3D filters are computationally expensive, so an efficient software implementation is critical. We trained our networks with ZNN (https://github.com/seung-lab/znn-release, [17]), which uses multicore CPU parallelism for speed. ZNN is one of the few deep learning implementations that is well optimized for 3D.
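The hybrid 2D-then-3D idea can be sketched in a few lines (this is a toy NumPy/SciPy illustration, not ZNN or the paper's architecture): early filters have z-extent 1 and therefore operate purely within each section, while later filters have z-extent greater than 1 and mix context across adjacent sections.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
stack = rng.standard_normal((5, 64, 64))   # (z, y, x): few sections, fine in-plane

k2d = rng.standard_normal((1, 3, 3))       # "2D" filter: 1 x 3 x 3, per-section only
k3d = rng.standard_normal((2, 3, 3))       # "3D" filter: mixes adjacent sections

h = convolve(stack, k2d, mode='constant')  # within-section feature extraction
out = convolve(h, k3d, mode='constant')    # cross-section (3D) context
assert out.shape == stack.shape

# a 1 x 3 x 3 filter never mixes sections: each output section depends only
# on the corresponding input section
single = convolve(stack[2:3], k2d, mode='constant')
assert np.allclose(single[0], h[2])
```

The check at the end makes the anisotropy argument concrete: the 2D stage is immune to section-to-section misalignment, and only the later 3D stage integrates across z.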
Figure 1: An overview of our proposed framework (top) and model architectures (bottom). The number of trainable parameters in each model is 220K (N4), 230K (VD2D), 310K (VD2D3D).

While we have applied the above elements to serial section EM images, they are likely to be generally useful for other types of images. The hybrid use of 2D and 3D filters may be useful for video, which can also be viewed as an anisotropic 3D image. Previous 3D ConvNets applied to video processing [18, 19] have used 3D filters exclusively. Recursively trained ConvNets are potentially useful for any image labeling problem. The approach is very similar to recurrent ConvNets [20], which iterate the same ConvNet; the recursive approach instead uses different ConvNets for the successive iterations. The recursive approach has been justified in several ways. In MRF/CRF image labeling, it is viewed as the sequential refinement of the posterior probability of a pixel being assigned a label, given both an input image and recursive input from the previous step [21].
Another viewpoint on recursive training is that statistical dependencies in label (category) space can be directly modeled from the recursive input [15]. From the neurobiological viewpoint, using a preliminary boundary map for an image to guide the computation of a better boundary map for the image can be interpreted as employing a top-down or attentional mechanism. We expect ZNN to have applications far beyond the one considered in this paper. ZNN can train very large networks, because CPUs can access more memory than GPUs. Task parallelism, rather than the SIMD parallelism of GPUs, allows for efficient training of ConvNets with arbitrary topology. A self-tuning capability automatically optimizes each layer by choosing between direct and FFT-based convolution; FFT convolution may be more efficient for wider layers or larger filter sizes [22, 23]. Finally, ZNN may incur less software development cost, owing to the relative ease of the general-purpose CPU programming model.

Finally, we applied our ConvNets to images from a new serial section EM dataset from the mouse piriform cortex. This dataset is important to us, because we are interested in conducting neuroscience research concerning this brain region. Even to those with no interest in piriform cortex, the dataset could be useful for research on image segmentation algorithms. Therefore we make the annotated dataset publicly available (http://seunglab.org/data/).

2 Dataset and evaluation

Images of mouse piriform cortex The datasets described here were acquired from the piriform cortex of an adult mouse prepared with aldehyde fixation and reduced osmium staining [24]. The tissue was sectioned using the automatic tape-collecting ultramicrotome (ATUM) [25] and sections were imaged on a Zeiss field emission scanning electron microscope [26].
The 2D images were assembled into 3D stacks using custom MATLAB routines and TrakEM2, and each stack was manually annotated using VAST (https://software.rc.fas.harvard.edu/lichtman/vast/, [25]) (Figure 2). Then each stack was checked and corrected by another annotator. The properties of the four image stacks are detailed in Table 1. It should be noted that image quality varies across the stacks, due to aging of the field emission source in the microscope. In all experiments we used stack1 for testing, stack2 and stack3 for training, and stack4 as additional training data for recursive training.

Figure 2: Example dataset (stack1, Table 1) and results of each architecture on stack1.

Table 1: Piriform cortex datasets

Name     Resolution (nm³)   Dimension (voxel³)   # samples   Usage
stack1   7 × 7 × 40         255 × 255 × 168      10.9M       Test
stack2   7 × 7 × 40         512 × 512 × 170      44.6M       Training
stack3   7 × 7 × 40         512 × 512 × 169      44.3M       Training
stack4   10 × 10 × 40       256 × 256 × 121      7.9M        Training (extra)

Pixel error We use softmax activation in the output layer of our networks to produce per-pixel real-valued outputs between 0 and 1, each of which is interpreted as the probability of an output pixel being boundary or non-boundary. This real-valued "boundary map" can be thresholded to generate a binary boundary map, from which the pixel-wise classification error is computed. We report the best classification error obtained by optimizing the binarization threshold with line search.

Rand score We evaluate 2D segmentation performance with the Rand scoring system [27, 28]. Let $n_{ij}$ denote the number of pixels simultaneously in the $i$th segment of the proposal segmentation and the $j$th segment of the ground truth segmentation. The Rand merge score and the Rand split score

$$V^{\mathrm{Rand}}_{\mathrm{merge}} = \frac{\sum_{ij} n_{ij}^2}{\sum_i \big(\sum_j n_{ij}\big)^2}, \qquad V^{\mathrm{Rand}}_{\mathrm{split}} = \frac{\sum_{ij} n_{ij}^2}{\sum_j \big(\sum_i n_{ij}\big)^2}$$

are closer to one when there are fewer merge and split errors, respectively. The Rand F-score is the harmonic mean of $V^{\mathrm{Rand}}_{\mathrm{merge}}$ and $V^{\mathrm{Rand}}_{\mathrm{split}}$.
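For concreteness, the Rand scores can be computed directly from the contingency counts n_ij. The following is a small NumPy sketch of our own (not the paper's evaluation code), taking two flat label arrays:

```python
import numpy as np

def rand_scores(proposal, truth):
    # Contingency table: n[i, j] = pixels in proposal segment i and truth segment j.
    p = np.unique(proposal, return_inverse=True)[1]
    t = np.unique(truth, return_inverse=True)[1]
    n = np.zeros((p.max() + 1, t.max() + 1))
    np.add.at(n, (p, t), 1)
    sq = (n ** 2).sum()
    merge = sq / (n.sum(axis=1) ** 2).sum()  # closer to 1 <=> fewer merge errors
    split = sq / (n.sum(axis=0) ** 2).sum()  # closer to 1 <=> fewer split errors
    return merge, split, 2 * merge * split / (merge + split)  # Rand F-score
```

As a sanity check, merging two ground-truth segments into one proposal segment lowers only the merge score, while the split score stays at one.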
To compute the Rand scores, we need to first obtain a 2D neuronal segmentation based on the real-valued boundary map. To this end, we apply two segmentation algorithms with different levels of sophistication: (1) simple thresholding followed by computing 2D connected components, and (2) a modified graph-based watershed algorithm [29]. We report the best Rand F-score obtained by optimizing parameters for each algorithm with line search, as well as the precision-recall curve for the Rand scores.

3 Training with ZNN

ZNN [17] was built for 3D ConvNets. 2D convolution is regarded as a special case of 3D convolution, in which one of the three filter dimensions has size 1. For the details on how ZNN implements task parallelism on multicore CPUs, we refer interested readers to [17]. Here we describe only aspects of ZNN that are helpful for understanding how it was used to implement the ConvNets of this paper.

Dense output with maximum filtering In object recognition, a ConvNet is commonly applied to produce a single output value for an entire input image. However, there are many applications in which "dense output" is required, i.e., the ConvNet should produce an output image with the same resolution as the original input image. Such applications include boundary detection [11], image labeling [30], and object localization [31]. ZNN was built from the ground up for dense output and also for dense feature maps.¹ ZNN employs max-filtering, which slides a window across the image and applies the maximum operation to the window (Figure 3). Max-filtering is the dense variant of max-pooling. Consequently all feature maps remain intact as dense 3D volumes during both forward and backward passes, making them straightforward to visualize and manipulate. On the other hand, all filtering operations are sparse, in the sense that the sliding window samples sparsely from a regularly spaced set of voxels in the image (Figure 3).
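The max-filtering and sparse filtering just described combine to reproduce, at every offset, the outputs of a sliding-window max-pooling ConvNet. A 1D toy check (our own sketch with a hypothetical two-layer net, not ZNN code):

```python
import numpy as np

def corr(x, w, dilation=1):
    # Valid-mode cross-correlation with a dilated (sparse) kernel.
    span = (len(w) - 1) * dilation + 1
    return np.array([sum(w[t] * x[i + t * dilation] for t in range(len(w)))
                     for i in range(len(x) - span + 1)])

def max_filter(x, size):
    # Dense (stride-1) variant of max-pooling.
    return np.array([x[i:i + size].max() for i in range(len(x) - size + 1)])

rng = np.random.default_rng(1)
x = rng.standard_normal(32)
w1, w2 = rng.standard_normal(3), rng.standard_normal(2)

# Sliding-window net: conv(k=3) -> maxpool(2, stride 2) -> conv(k=2),
# producing one output per 6-pixel input window.
def window_net(win):
    a = corr(win, w1)
    m = a.reshape(-1, 2).max(axis=1)  # max-pooling with stride 2
    return corr(m, w2)[0]

sliding = np.array([window_net(x[i:i + 6]) for i in range(len(x) - 5)])

# Dense net: same filters, max-pooling replaced by max-filtering, and the
# following convolution made sparse (dilation = max-pooling window size).
dense = corr(max_filter(corr(x, w1), 2), w2, dilation=2)

assert np.allclose(sliding, dense)
```

The dense version reuses every intermediate value across overlapping windows, which is the source of the computational savings illustrated in Figure 3.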
ZNN can control the spacing/sparsity of any filtering operation, either convolution or max-filtering. ZNN can efficiently compute the dense output of a sliding window max-pooling ConvNet by making filter sparsity depend on the number of prior max-filterings. More specifically, each max-filtering increases the sparsity of all subsequent filterings by a factor equal to the size of the max-pooling window. This approach, which we employ for the paper, is also called "skip-kernels" [31] or "filter rarefaction" [30], and is equivalent in its results to "max-fragmentation-pooling" [32, 33]. Note however that ZNN is more general, as the sparsity of filters need not depend on max-filtering, but can be controlled independently.

¹ Feature maps with the same resolution as the original input image. See Figure 5 for an example. Note that the feature maps shown in Figure 5 keep the original resolution even after a couple of max-pooling layers.

Figure 3: Sliding window max-pooling ConvNet (left) applied on three color-coded adjacent input windows producing three outputs. Equivalent outputs produced by a max-filtering ConvNet with sparse filters (right) applied on a larger window. Computation is minimized by reusing the intermediate values for computing multiple outputs (as color coded).

Output patch training Training in ZNN is based on loss computed over a dense output patch of arbitrary size. The patch can be arbitrarily large, limited only by memory. This includes the case of a patch that spans the entire image [30, 33]. Although large patch sizes reduce the computational cost per output pixel, neighboring pixels in the patch may provide redundant information. In practice, we choose an intermediate output patch size.

4 Network architecture

N4 As a baseline for performance comparisons, we adopted the largest 2D ConvNet architecture (named N4) from Cireşan et al. [11] (Figure 1).
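The "field of view" (receptive field) of architectures like these can be computed with a short backward pass over (kernel, stride) layers. The VD2D-style layer list below is our reading of Figure 1 (convolutions stride 1, poolings stride 2) and should be treated as an assumption, not a specification from the paper:

```python
def field_of_view(layers):
    # Work backward from one output pixel; each (kernel, stride) layer
    # maps a span of `size` outputs onto (size - 1) * stride + kernel inputs.
    size = 1
    for kernel, stride in reversed(layers):
        size = (size - 1) * stride + kernel
    return size

# Toy check: conv(3) -> pool(2, stride 2) -> conv(3) sees 8 input pixels.
assert field_of_view([(3, 1), (2, 2), (3, 1)]) == 8

# VD2D in-plane profile as read off Figure 1 (our assumption):
vd2d = [(3, 1), (3, 1), (2, 1), (2, 2),  # Conv1a-c, Pool1
        (3, 1), (3, 1), (2, 2),          # Conv2a-b, Pool2
        (3, 1), (3, 1), (2, 2),          # Conv3a-b, Pool3
        (3, 1), (3, 1), (2, 2),          # Conv4a-b, Pool4
        (3, 1)]                          # Conv5
assert field_of_view(vd2d) % 2 == 1      # the single 2-wide filter makes it odd
```

Under these assumptions the field of view comes out odd, consistent with the centerability argument for Conv1c made below; swapping the 2-wide filter for a 3-wide one flips the parity.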
VD2D The architecture of VD2D ("Very Deep 2D") is shown in Figure 1. Multiple convolution layers sit between each pair of max-pooling layers. All convolution filters are 3×3×1, except that Conv1c uses a 2×2×1 filter to make the "field of view" or "receptive field" for a single output pixel have an odd-numbered size and therefore be centerable around the output pixel. Due to the use of smaller filters, the number of trainable parameters in VD2D (230K) is roughly the same as in the shallower N4 (220K).

VD2D3D The architecture of VD2D3D ("Very Deep 2D-3D") is initially identical to VD2D (Figure 1), except that later convolution layers switch to 3×3×2 filters. This causes the number of trainable parameters to increase, so we compensate by trimming the size of Conv4c to just 100 feature maps. The 3D filters in the later layers should enable the network to use 3D context to detect neuronal boundaries. The use of 2D filters in the initial layers makes the network faster to run and train.

Recursive training It is possible to apply VD2D3D by itself to boundary detection, giving the raw image as the only input. However, we use a recursive approach in which VD2D3D receives an extra input, the output of VD2D. As we will see below, this produces a significant improvement in performance. It should be noted that instead of providing the recursive input directly to VD2D3D, we added new layers² dedicated to processing it. This separate, parallel processing stream for recursive input joins the main stream at Conv1c, allowing for more complex, highly nonlinear interaction between the low-level features and the contextual information in the recursive input.

² These layers are identical to Conv1a, Conv1b, and Conv1c.

5 Training procedures

Networks were trained using backpropagation with the cross-entropy loss function. We first trained VD2D, and then trained VD2D3D. The 2D layers of VD2D3D were initialized using trained weights from VD2D.
This initialization meant that our recursive approach bore some similarity to recurrent ConvNets, in which the first and second stage networks are constrained to be identical [20]. However, we did not enforce exact weight sharing, but fine-tuned the weights of VD2D3D.

Output patch As mentioned earlier, training with ZNN is done by dense output patch-based gradient updates with per-pixel loss. During training, an output patch of specified size is randomly drawn from the training stacks at the beginning of each forward pass.

Class rebalancing In dense output patch-based training, imbalance between the number of training samples in different classes (e.g. boundary/non-boundary) can be handled either by sampling a balanced number of pixels from an output patch, or by differentially weighting the per-pixel loss [30]. In our experiments, we adopted the latter approach (loss weighting) to deal with the high imbalance between boundary and non-boundary pixels.

Data augmentation We used the same data augmentation method as [11], randomly rotating and flipping 2D image patches.

Hyperparameters We used a fixed learning rate of 0.01 with momentum 0.9. When updating weights we divided the gradient by the total number of pixels in an output patch, similar to typical minibatch averaging. We first trained N4 with an output patch of size 200×200×1 for 90K gradient updates. Next, we trained VD2D with 150×150×1 output patches, reflecting the increased size of the model compared to N4. After 60K updates, we evaluated the trained VD2D on the training stacks to obtain preliminary boundary maps, and started training VD2D3D with 100×100×1 output patches, again reflecting the increased model complexity. We trained VD2D3D for 90K updates. In this recursive training stage we additionally used stack4 to prevent VD2D3D from becoming overly dependent on the good-quality boundary maps for the training stacks.
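The loss-weighting option for class rebalancing described above can be sketched as follows. This is a hypothetical NumPy helper of our own, not the paper's code; it equalizes the total weight of boundary and non-boundary pixels in the per-pixel cross-entropy:

```python
import numpy as np

def rebalanced_bce(pred, target, eps=1e-7):
    # Weight each pixel so that boundary and non-boundary classes each
    # contribute half of the total loss, regardless of their pixel counts.
    pred = np.clip(pred, eps, 1 - eps)
    n_pos = max(target.sum(), 1)
    n_neg = max((1 - target).sum(), 1)
    w = np.where(target == 1, 0.5 / n_pos, 0.5 / n_neg)
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return (w * ce).sum()

target = np.array([1.0, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # 1 boundary pixel in 10
```

With this weighting, a constant prediction of 0.5 incurs the same loss regardless of how imbalanced the patch is, so the scarce boundary pixels are not drowned out by the background.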
It should be noted that stack4 has slightly lower xy-resolution than the other stacks (Table 1), which we think is helpful in terms of learning multi-scale representations. Our proposed recursive framework differs from the training of recurrent ConvNets [20] in that the recursive input is not dynamically produced by the first ConvNet during training, but is evaluated beforehand and kept fixed throughout the recursive training stage. However, it is also possible to further train the first ConvNet even after evaluating its preliminary output as recursive input to the second ConvNet. We further trained VD2D for another 30K updates while VD2D3D was being trained. We report the final performance of VD2D after a total of 90K updates. We also replaced the initial VD2D boundary map with the final one when evaluating VD2D3D results. With ZNN, it took two days to train both N4 and VD2D for 90K updates, and three days to train VD2D3D for 90K updates.

6 Results

In this section, we show both quantitative and qualitative results obtained by the three architectures shown in Figure 1, namely N4, VD2D, and VD2D3D. The pixel-wise classification error of each model on the test set was 10.63% (N4), 9.77% (VD2D), and 8.76% (VD2D3D).

Quantitative comparison Figure 4 compares the result of each architecture on the test set (stack1), both quantitatively and qualitatively. The leftmost bar graph shows the best 2D Rand F-score of each model obtained by 2D segmentation with (1) the simpler connected component clustering and (2) the more sophisticated watershed-based segmentation. The middle and rightmost graphs show the precision-recall curve of each model for the Rand scores obtained with the connected component and watershed-based segmentation, respectively. We observe that VD2D performs significantly better than N4, and also that VD2D3D outperforms VD2D by a significant margin in terms of both best Rand F-score and overall precision-recall curve.
Qualitative comparison Figure 2 shows the visualization of boundary detection results of each model on the test set, along with the original EM images and ground truth boundary map. We observe that false detection of boundaries in intracellular regions was significantly reduced in VD2D3D, which demonstrates the effectiveness of the proposed 2D-3D ConvNet combined with the recursive approach. The middle and bottom rows in Figure 4 show some example locations in the test set where both 2D models (N4 and VD2D) failed to correctly detect the boundary, or erroneously detected false boundaries, whereas VD2D3D predicted correctly at those ambiguous locations. Visual analysis of the boundary detection results of each model again demonstrates the superior performance of the proposed recursively trained 2D-3D ConvNet over the 2D models.

Figure 4: Quantitative (top) and qualitative (middle and bottom) evaluation of results.

7 Discussion

Biologically-inspired recursive framework Our proposed recursive framework is greatly inspired by the work of Chen et al. [34]. In this work, they examined the close interplay between neurons in the primary and higher visual cortical areas (V1 and V4, respectively) of monkeys performing contour detection tasks. In this task, monkeys were trained to detect a global contour pattern that consists of multiple collinearly aligned bars in a cluttered background. The main discovery of their work is as follows: initially, V4 neurons responded to the global contour pattern.
After a short time delay (∼40 ms), the activity of V1 neurons responding to each bar composing the global contour pattern was greatly enhanced, whereas that of those responding to the background was largely suppressed, despite the fact that these "foreground" and "background" V1 neurons have similar response properties. They referred to this as a "push-pull response mode" of V1 neurons between foreground and background, which is attributable to the top-down influence from the higher-level V4 neurons. This process is also referred to as a "countercurrent disambiguating process" [34]. This experimental result readily suggests a mechanistic interpretation of the recursive training of deep ConvNets for neuronal boundary detection. We can roughly think of V1 responses as lower-level feature maps in a deep ConvNet, and V4 responses as higher-level feature maps or output activations. Once the overall 'contour' of neuronal boundaries is detected by the feedforward processing of VD2D, this preliminary boundary map can then be recursively fed to VD2D3D. This process can be thought of as corresponding to the initial detection of global contour patterns by V4 and its top-down influence on V1. During recursive training, VD2D3D will learn how to integrate the pixel-level contextual information in the recursive input with the low-level features, presumably in such a way that feature activations at boundary locations are enhanced, whereas activations unrelated to the neuronal boundary (intracellular space, mitochondria, etc.) are suppressed.

Figure 5: Visualization of the effect of recursive training. Left: an example feature map from the lower layer Conv2a in VD2D, and its corresponding feature map in VD2D3D. Right: an example feature map from the higher layer Conv3b in VD2D, and its corresponding feature map in VD2D3D. Note that recursive training greatly enhances the signal-to-noise ratio of boundary representations.
Here the recursive input can also be viewed as a modulatory 'gate' through which only the signals relevant to the given task of neuronal boundary detection can pass. This is convincingly demonstrated by visualizing and comparing the feature maps of VD2D and VD2D3D. In Figure 5, the noisy representations of oriented boundary segments in VD2D (first and third volumes) are greatly enhanced in VD2D3D (second and fourth volumes), with signals near the boundary being preserved or amplified, and noise in the background being largely suppressed. This is exactly what we expected from the proposed interpretation of our recursive framework.

Potential of ZNN We have shown that ZNN can serve as a viable alternative to the mainstream GPU-based deep learning frameworks, especially when processing 3D volume data with 3D ConvNets. ZNN's unique features, including large output patch-based training and the dense computation of feature maps, can be further utilized for additional computations to better perform the given task. In theory, we can perform any kind of computation on the dense output prediction between the forward and backward passes. For instance, objective functions that consider topological constraints (e.g. MALIS [35]) or sampling of topologically relevant locations (e.g. LED weighting [15]) can be applied to the dense output patch, in addition to loss computation, before each backward pass. Dense feature maps also enable the straightforward implementation of multi-level feature integration for fine-grained segmentation. Long et al. [30] resorted to upsampling of the higher-level features with lower resolution in order to integrate them with the lower-level features with higher resolution. Since ZNN maintains every feature map at the original resolution of the input, it is straightforward to combine feature maps from any level, removing the need for upsampling.

Acknowledgments

We thank Juan C.
Tapia, Gloria Choi and Dan Stettler for initial help with tissue handling, and Jeff Lichtman and Richard Schalek for help in setting up tape collection. Kisuk Lee was supported by a Samsung Scholarship. The recursive approach proposed in this paper was partially motivated by Matthew J. Greene's preliminary experiments. We are grateful for funding from the Mathers Foundation, Keating Fund for Innovation, Simons Center for the Social Brain, DARPA (HR001114-2-0004), and ARO (W911NF-12-1-0594).

References

[1] S. Takemura et al. A visual motion detection circuit suggested by Drosophila connectomics. Nature, 500:175-181, 2013.
[2] M. Helmstaedter, K. L. Briggman, S. C. Turaga, V. Jain, H. S. Seung, and W. Denk. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature, 500:168-174, 2013.
[3] J. S. Kim et al. Space-time wiring specificity supports direction selectivity in the retina. Nature, 509:331-336, 2014.
[4] M. Helmstaedter. Cellular-resolution connectomics: challenges of dense neural circuit reconstruction. Nature Methods, 10(6):501-507, 2013.
[5] V. Jain, H. S. Seung, and S. C. Turaga. Machines that learn to segment images: a crucial technology for connectomics. Current Opinion in Neurobiology, 20:653-666, 2010.
[6] T. Tasdizen et al. Image segmentation for connectomics using machine learning. In Computational Intelligence in Biomedical Imaging, pp. 237-278, ed. K. Suzuki, Springer New York, 2014.
[7] K. L. Briggman and D. D. Bock. Volume electron microscopy for neuronal circuit reconstruction. Current Opinion in Neurobiology, 22(1):154-161, 2012.
[8] E. Jurrus et al. Detection of neuron membranes in electron microscopy images using a serial neural network architecture. Medical Image Analysis, 14(6):770-783, 2010.
[9] T. Liu, C. Jones, M. Seyedhosseini, and T. Tasdizen. A modular hierarchical approach to 3D electron microscopy image segmentation. Journal of Neuroscience Methods, 26:88-102, 2014.
[10] Segmentation of neuronal structures in EM stacks challenge - ISBI 2012. http://brainiac2.mit.edu/isbi_challenge/.
[11] D. C. Cireşan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In NIPS, 2012.
[12] A. Krizhevsky, I. Sutskever, and G. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, 2012.
[13] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[14] S. C. Turaga et al. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22:511-538, 2010.
[15] G. B. Huang and V. Jain. Deep and wide multiscale recursive networks for robust image labeling. In ICLR, 2014.
[16] M. Seyedhosseini and T. Tasdizen. Multi-class multi-scale series contextual model for image segmentation. Image Processing, IEEE Transactions on, 22(11):4486-4496, 2013.
[17] A. Zlateski, K. Lee, and H. S. Seung. ZNN - A fast and scalable algorithm for training 3D convolutional networks on multi-core and many-core shared memory machines. arXiv:1510.06706, 2015.
[18] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. arXiv:1412.0767, 2014.
[19] L. Yao, A. Torabi, K. Cho, N. Ballas, C. Pal, H. Larochelle, and A. Courville. Describing videos by exploiting temporal structure. arXiv:1502.08029, 2015.
[20] P. O. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene labeling. In ICML, 2014.
[21] Z. Tu. Auto-context and its application to high-level vision tasks. In CVPR, 2008.
[22] M. Mathieu, M. Henaff, and Y. LeCun. Fast training of convolutional networks through FFTs. In ICLR, 2014.
[23] N. Vasilache, J. Johnson, M. Mathieu, S. Chintala, S. Piantino, and Y. LeCun. Fast convolutional nets with fbfft: a GPU performance evaluation. In ICLR, 2015.
[24] J. C. Tapia et al.
High-contrast en bloc staining of neuronal tissue for field emission scanning electron microscopy. Nature Protocols, 7(2):193-206, 2012.
[25] N. Kasthuri et al. Saturated reconstruction of a volume of neocortex. Cell, 162:648-661, 2015.
[26] K. J. Hayworth et al. Imaging ATUM ultrathin section libraries with WaferMapper: a multi-scale approach to EM reconstruction of neural circuits. Frontiers in Neural Circuits, 8, 2014.
[27] W. M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):847-850, 1971.
[28] R. Unnikrishnan, C. Pantofaru, and M. Hebert. Toward objective evaluation of image segmentation algorithms. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 29(6):929-944, 2007.
[29] A. Zlateski and H. S. Seung. Image segmentation by size-dependent single linkage clustering of a watershed basin graph. arXiv:1505.00249, 2015.
[30] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[31] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[32] A. Giusti, D. C. Cireşan, J. Masci, L. M. Gambardella, and J. Schmidhuber. Fast image scanning with deep max-pooling convolutional neural networks. In ICIP, 2013.
[33] J. Masci, A. Giusti, D. C. Cireşan, G. Fricout, and J. Schmidhuber. A fast learning algorithm for image segmentation with max-pooling convolutional networks. In ICIP, 2013.
[34] M. Chen, Y. Yan, X. Gong, C. D. Gilbert, H. Liang, and W. Li. Incremental integration of global contours through interplay between visual cortical areas. Neuron, 82(3):682-694, 2014.
[35] S. C. Turaga et al. Maximin affinity learning of image segmentation. In NIPS, 2009.
2015
Multi-class SVMs: From Tighter Data-Dependent Generalization Bounds to Novel Algorithms

Yunwen Lei, Department of Mathematics, City University of Hong Kong, yunwelei@cityu.edu.hk
Ürün Dogan, Microsoft Research, Cambridge CB1 2FB, UK, udogan@microsoft.com
Alexander Binder, ISTD Pillar, Singapore University of Technology and Design; Machine Learning Group, TU Berlin, alexander binder@sutd.edu.sg
Marius Kloft, Department of Computer Science, Humboldt University of Berlin, kloft@hu-berlin.de

Abstract

This paper studies the generalization performance of multi-class classification algorithms, for which we obtain—for the first time—a data-dependent generalization error bound with a logarithmic dependence on the class size, substantially improving the state-of-the-art linear dependence in the existing data-dependent generalization analysis. The theoretical analysis motivates us to introduce a new multi-class classification machine based on ℓp-norm regularization, where the parameter p controls the complexity of the corresponding bounds. We derive an efficient optimization algorithm based on Fenchel duality theory. Benchmarks on several real-world datasets show that the proposed algorithm can achieve significant accuracy gains over the state of the art.

1 Introduction

Typical multi-class application domains such as natural language processing [1], information retrieval [2], image annotation [3] and web advertising [4] involve tens or hundreds of thousands of classes, and yet these datasets are still growing [5]. To handle such learning tasks, it is essential to build algorithms that scale favorably with respect to the number of classes. Over the past years, much progress in this respect has been achieved on the algorithmic side [4–7], including efficient stochastic gradient optimization strategies [8].
Although theoretical properties such as consistency [9–11] and finite-sample behavior [1, 12–15] have also been studied, there is still a discrepancy between algorithms and theory in the sense that the corresponding theoretical bounds often do not scale well with respect to the number of classes. This discrepancy is most pronounced in research on data-dependent generalization bounds, that is, bounds that can measure the generalization performance of prediction models purely from the training samples, and which are thus very appealing for model selection [16]. A crucial advantage of these bounds is that they can better capture the properties of the distribution that has generated the data, which can lead to tighter estimates [17] than conservative data-independent bounds. To our best knowledge, for multi-class classification, the first data-dependent error bounds were given by [14]. These bounds exhibit a quadratic dependence on the class size and were used by [12] and [18] to derive bounds for kernel-based multi-class classification and multiple kernel learning (MKL) problems, respectively. More recently, [13] improved the quadratic dependence to a linear dependence by introducing a novel surrogate for the multi-class margin that is independent of the true realization of the class label.

However, a heavy dependence on the class size, such as linear or quadratic, implies a poor generalization guarantee for large-scale multi-class classification problems with a massive number of classes. In this paper, we show data-dependent generalization bounds for multi-class classification problems that—for the first time—exhibit a sublinear dependence on the number of classes. Choosing appropriate regularization, this dependence can be as mild as logarithmic. We achieve these improved bounds via the use of Gaussian complexities, while previous bounds are based on a well-known structural result on Rademacher complexities for classes induced by the maximum operator.
The proposed proof technique based on Gaussian complexities exploits potential coupling among different components of the multi-class classifier, a fact that is ignored by previous analyses. The result shows that the generalization ability is strongly impacted by the employed regularization, which motivates us to propose a new learning machine performing block-norm regularization over the multi-class components. As a natural choice we investigate here the application of the proven ℓp norm [19]. This results in a novel ℓp-norm multi-class support vector machine (MC-SVM), which contains the classical model by Crammer & Singer [20] as a special case for p = 2. The bounds indicate that the parameter p crucially controls the complexity of the resulting prediction models. We develop an efficient optimization algorithm for the proposed method based on its Fenchel dual representation. We empirically evaluate its effectiveness on several standard benchmarks for multi-class classification taken from various domains, where the proposed approach significantly outperforms the state-of-the-art method of [20].

The remainder of this paper is structured as follows. Section 2 introduces the problem setting and presents the main theoretical results, which motivate the new multi-class classification model we propose in Section 3, together with an efficient optimization algorithm based on Fenchel duality theory. In Section 4 we evaluate the approach for the application of visual image recognition and on several standard benchmark datasets taken from various application domains. Section 5 concludes.

2 Theory

2.1 Problem Setting and Notations

This paper considers multi-class classification problems with c ≥ 2 classes. Let X denote the input space and Y = {1, 2, . . . , c} denote the output space. Assume that we are given a sequence of examples S = {(x_1, y_1), . . . , (x_n, y_n)} ∈ (X × Y)^n, independently drawn according to a probability measure P defined on the sample space Z = X × Y.
Based on the training examples S, we wish to learn a prediction rule h from a space H of hypotheses mapping from Z to R, and use the mapping x → arg max_{y∈Y} h(x, y) to predict (ties are broken by favoring the class with the lower index, for which our loss function defined below always counts an error). For any hypothesis h ∈ H, the margin ρ_h(x, y) of the function h at a labeled example (x, y) is

ρ_h(x, y) := h(x, y) − max_{y′≠y} h(x, y′).

The prediction rule h makes an error at (x, y) if ρ_h(x, y) ≤ 0, and thus the expected risk incurred from using h for prediction is R(h) := E[1_{ρ_h(x,y)≤0}]. Any function h : X × Y → R can be equivalently represented by the vector-valued function (h_1, . . . , h_c) with h_j(x) = h(x, j) for all j = 1, . . . , c. We denote by H̃ := {ρ_h : h ∈ H} the class of margin functions associated with H. Let k : X × X → R be a Mercer kernel with φ the associated feature map, i.e., k(x, x̃) = ⟨φ(x), φ(x̃)⟩ for all x, x̃ ∈ X. We denote by ∥·∥_* the dual norm of ∥·∥, i.e., ∥w∥_* := sup_{∥w̄∥≤1} ⟨w, w̄⟩. For a convex function f, we denote by f^* its Fenchel conjugate, i.e., f^*(v) := sup_w [⟨w, v⟩ − f(w)]. For any w = (w_1, . . . , w_c) we define the ℓ_{2,p}-norm by ∥w∥_{2,p} := (∑_{j=1}^c ∥w_j∥_2^p)^{1/p}. For any p ≥ 1, we denote by p^* the dual exponent of p satisfying 1/p + 1/p^* = 1, and we set p̄ := p(2 − p)^{−1}. We require the following definitions.

Definition 1 (Strong Convexity). A function f : X → R is said to be β-strongly convex w.r.t. a norm ∥·∥ iff for all x, y ∈ X and all α ∈ (0, 1) we have

f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) − (β/2) α(1 − α)∥x − y∥².

Definition 2 (Regular Loss). We call ℓ an L-regular loss if it satisfies the following properties:
(i) ℓ(t) bounds the 0-1 loss from above: ℓ(t) ≥ 1_{t≤0};
(ii) ℓ is L-Lipschitz in the sense that |ℓ(t_1) − ℓ(t_2)| ≤ L|t_1 − t_2|;
(iii) ℓ(t) is decreasing and has a zero point c_ℓ, i.e., ℓ(c_ℓ) = 0.

Some examples of L-regular loss functions include the hinge loss ℓ_h(t) = (1 − t)_+ and the margin loss

ℓ_ρ(t) = 1_{t≤0} + (1 − t ρ^{−1}) 1_{0<t≤ρ},  ρ > 0.
(1)

2.2 Main results

Our discussion of data-dependent generalization error bounds is based on the established methodology of Rademacher and Gaussian complexities [21].

Definition 3 (Rademacher and Gaussian Complexity). Let H be a family of real-valued functions defined on Z and let S = (z_1, . . . , z_n) be a fixed sample of size n with elements in Z. Then, the empirical Rademacher and Gaussian complexities of H with respect to the sample S are defined by

R_S(H) = E_σ [sup_{h∈H} (1/n) ∑_{i=1}^n σ_i h(z_i)],  G_S(H) = E_g [sup_{h∈H} (1/n) ∑_{i=1}^n g_i h(z_i)],

where σ_1, . . . , σ_n are independent random variables taking the values +1 and −1 with equal probability, and g_1, . . . , g_n are independent N(0, 1) random variables. Note that we have the following comparison inequality relating Rademacher and Gaussian complexities (cf. Section 4.2 in [22]):

R_S(H) ≤ √(π/2) G_S(H) ≤ 3 √(π/2) √(log n) R_S(H). (2)

Existing work on data-dependent generalization bounds for multi-class classifiers [12–14, 18] builds on the following structural result on Rademacher complexities (e.g., [12], Lemma 8.1):

R_S({max{h_1, . . . , h_c} : h_j ∈ H_j, j = 1, . . . , c}) ≤ ∑_{j=1}^c R_S(H_j), (3)

where H_1, . . . , H_c are c hypothesis sets. This result is crucial for the standard generalization analysis of multi-class classification since the margin ρ_h involves the maximum operator, which is removed by (3), but at the expense of a linear dependence on the class size. In the following we show that this linear dependence is suboptimal because (3) does not take into account the coupling among the different classes. For example, a common regularizer used in multi-class learning algorithms is r(h) = ∑_{j=1}^c ∥h_j∥_2^2 [20], for which the components h_1, . . . , h_c are correlated via an ∥·∥_{2,2} regularizer, and the bound (3), which ignores this correlation, is not effective in this case [12–14, 18].
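Definition 3 and the first half of the comparison inequality (2) can be illustrated with a small Monte Carlo computation: the estimator below simply replaces the expectations E_σ and E_g with sample averages. The finite function class and its values are made up purely for illustration.

```python
import math, random

def empirical_complexity(H_values, noise, reps=20000, seed=0):
    """Monte Carlo estimate of E[sup_h (1/n) sum_i xi_i h(z_i)], where
    H_values holds one value vector (h(z_1), ..., h(z_n)) per function h
    and `noise` draws one random multiplier xi_i per call."""
    rng = random.Random(seed)
    n = len(H_values[0])
    total = 0.0
    for _ in range(reps):
        xi = [noise(rng) for _ in range(n)]
        total += max(sum(x * h[i] for i, x in enumerate(xi)) for h in H_values) / n
    return total / reps

rademacher = lambda rng: rng.choice((-1.0, 1.0))   # sigma_i in {-1, +1}
gaussian = lambda rng: rng.gauss(0.0, 1.0)         # g_i ~ N(0, 1)

# A toy finite class evaluated on a sample of size n = 4 (illustrative values).
H = [(0.9, -0.2, 0.4, 0.1), (-0.5, 0.7, 0.3, -0.8), (0.2, 0.2, -0.6, 0.5)]

RS = empirical_complexity(H, rademacher)
GS = empirical_complexity(H, gaussian)
# First half of inequality (2): RS <= sqrt(pi/2) * GS, up to Monte Carlo error.
print(RS, GS, RS <= math.sqrt(math.pi / 2) * GS + 0.02)
```

With 20000 replicates the Monte Carlo standard error is a few thousandths, so the inequality check is reliable despite the sampling noise.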
As a remedy, we introduce a new structural complexity result on function classes induced from general classes via the maximum operator, which preserves the correlations among the different components. Instead of the Rademacher complexity, Lemma 4 concerns the structural relationship of Gaussian complexities, since it is based on a comparison result among different Gaussian processes.

Lemma 4 (Structural result on Gaussian complexity). Let H be a class of functions defined on X × Y with Y = {1, . . . , c}. Let g_1, . . . , g_{nc} be independent N(0, 1) random variables. Then, for any sample S = {x_1, . . . , x_n} of size n, we have

G_S({max{h_1, . . . , h_c} : h = (h_1, . . . , h_c) ∈ H}) ≤ (1/n) E_g sup_{h=(h_1,...,h_c)∈H} ∑_{i=1}^n ∑_{j=1}^c g_{(j−1)n+i} h_j(x_i), (4)

where E_g denotes the expectation w.r.t. the Gaussian variables g_1, . . . , g_{nc}. The proof of Lemma 4 is given in Supplementary Material A. Equipped with Lemma 4, we are now able to present a general data-dependent margin-based generalization bound. The proofs of the following results (Theorem 5, Theorem 7 and Corollary 8) are given in Supplementary Material B.

Theorem 5 (Data-dependent generalization bound for multi-class classification). Let H ⊂ R^{X×Y} be a hypothesis class with Y = {1, . . . , c}. Let ℓ be an L-regular loss function and denote B_ℓ := sup_{(x,y),h} ℓ(ρ_h(x, y)). Suppose that the examples S = {(x_1, y_1), . . . , (x_n, y_n)} are independently drawn from a probability measure defined on X × Y. Then, for any δ > 0, with probability at least 1 − δ, the following multi-class classification generalization bound holds for any h ∈ H:

R(h) ≤ (1/n) ∑_{i=1}^n ℓ(ρ_h(x_i, y_i)) + (2L√(2π)/n) E_g sup_{h=(h_1,...,h_c)∈H} ∑_{i=1}^n ∑_{j=1}^c g_{(j−1)n+i} h_j(x_i) + 3B_ℓ √(log(2/δ)/(2n)),

where g_1, . . . , g_{nc} are independent N(0, 1) random variables.

Remark 6. Under the same conditions as Theorem 5, [12] derive the following data-dependent generalization bound (cf.
Corollary 8.1 in [12]):

R(h) ≤ (1/n) ∑_{i=1}^n ℓ(ρ_h(x_i, y_i)) + (4Lc/n) R_S({x → h(x, y) : y ∈ Y, h ∈ H}) + 3B_ℓ √(log(2/δ)/(2n)).

This linear dependence on c is due to the use of (3). For comparison, Theorem 5 implies that the dependence on c is governed by the term ∑_{i=1}^n ∑_{j=1}^c g_{(j−1)n+i} h_j(x_i), whose advantage is that the components h_1, . . . , h_c are jointly coupled. As we will see, this allows us to derive an improved result with a favorable dependence on c when a constraint is imposed on (h_1, . . . , h_c). The following Theorem 7 applies the general result of Theorem 5 to kernel-based methods. The hypothesis space is defined by imposing a constraint via a general strongly convex function.

Theorem 7 (Data-dependent generalization bound for kernel-based multi-class learning algorithms and MC-SVMs). Suppose that the hypothesis space is defined by

H := H_{f,Λ} = {h_w = (⟨w_1, φ(x)⟩, . . . , ⟨w_c, φ(x)⟩) : f(w) ≤ Λ},

where f is a β-strongly convex function w.r.t. a norm ∥·∥ defined on H satisfying f^*(0) = 0. Let ℓ be an L-regular loss function and denote B_ℓ := sup_{(x,y),h} ℓ(ρ_h(x, y)). Let g_1, . . . , g_{nc} be independent N(0, 1) random variables. Then, for any δ > 0, with probability at least 1 − δ we have

R(h_w) ≤ (1/n) ∑_{i=1}^n ℓ(ρ_{h_w}(x_i, y_i)) + (4L/n) √( (πΛ/β) E_g ∥(∑_{i=1}^n g_{(j−1)n+i} φ(x_i))_{j=1,...,c}∥_*² ) + 3B_ℓ √(log(2/δ)/(2n)).

We now consider the following specific hypothesis spaces defined via an ∥·∥_{2,p} constraint:

H_{p,Λ} := {h_w = (⟨w_1, φ(x)⟩, . . . , ⟨w_c, φ(x)⟩) : ∥w∥_{2,p} ≤ Λ},  1 ≤ p ≤ 2. (5)

Corollary 8 (ℓp-norm MC-SVM generalization bound). Let ℓ be an L-regular loss function and denote B_ℓ := sup_{(x,y),h} ℓ(ρ_h(x, y)). Then, with probability at least 1 − δ, for any h_w ∈ H_{p,Λ} the generalization error R(h_w) can be upper bounded by

(1/n) ∑_{i=1}^n ℓ(ρ_{h_w}(x_i, y_i)) + 3B_ℓ √(log(2/δ)/(2n)) + (2LΛ/n) √(∑_{i=1}^n k(x_i, x_i)) × { √e (4 log c)^{1 + 1/(2 log c)}  if p^* ≥ 2 log c;  2 (p^*)^{1 + 1/p^*} c^{1/p^*}  otherwise }.

Remark 9. The bounds in Corollary 8 enjoy a mild dependence on the number of classes.
The dependence is polynomial with exponent 1/p^* for 2 < p^* < 2 log c and becomes logarithmic if p^* ≥ 2 log c. Even in the theoretically unfavorable case p = 2 [20], the bounds still exhibit only a square-root dependence on the number of classes, which is substantially milder than the quadratic dependence established in [12, 14, 18] and the linear dependence established in [13]. Our generalization bound is data-dependent and shows clearly how the margin affects the generalization performance (when ℓ is the margin loss ℓ_ρ): a large margin ρ increases the empirical error while decreasing the model's complexity, and vice versa.

2.3 Comparison of the Achieved Bounds to the State of the Art

Related work on data-independent bounds. The large body of theoretical work on multi-class learning considers data-independent bounds. Based on the ℓ∞-norm covering number bound of linear operators, [15] obtain a generalization bound exhibiting a linear dependence on the class size, which [9] improve to a square-root dependence of the form O(n^{−1/2} (log^{3/2} n) √c / ρ). Under conditions analogous to Corollary 8, [23] derive a class-size-independent generalization guarantee. However, their bound is based on a delicate definition of the margin, which is why it is commonly not used in the mainstream multi-class literature. [1] derive the following generalization bound:

E[(1/p) log(1 + ∑_{ỹ≠y} e^{p(ρ − ⟨ŵ_y − ŵ_ỹ, φ(x)⟩)})] ≤ inf_{w∈H} E[(1/p) log(1 + ∑_{ỹ≠y} e^{p(ρ − ⟨w_y − w_ỹ, φ(x)⟩)}) + (λn/(2(n + 1))) ∥w∥_{2,2}²] + (2 sup_{x∈X} k(x, x))/(λn), (6)

where ρ is a margin condition, p > 0 a scaling factor, and λ a regularization parameter. Eq. (6) is class-size independent, yet Corollary 8 is superior in the following respects: first, for SVMs (i.e., the margin loss ℓ_ρ), our bound consists of an empirical error term (1/n) ∑_{i=1}^n ℓ_ρ(ρ_{h_w}(x_i, y_i)) and a complexity term divided by the margin value (note that L = 1/ρ in Corollary 8).
When the margin is large (which is often desirable) [14], the last term in the bound of Corollary 8 becomes small, while, on the contrary, the bound (6) is an increasing function of ρ, which is undesirable. Secondly, Theorem 7 applies to general loss functions and hypothesis spaces, expressed through a strongly convex function, while the bound (6) only applies to a specific regularization algorithm. Lastly, all of the above-mentioned results are conservative data-independent estimates.

Related work on data-dependent bounds. The techniques used in the above-mentioned papers do not straightforwardly translate to data-dependent bounds, which are the type of bounds in the focus of the present work. Their investigation was initiated, to the best of our knowledge, by [14]: using the structural complexity bound (3) for function classes induced via the maximum operator, [14] derive a margin bound admitting a quadratic dependence on the number of classes. [12] use these results to study the generalization performance of MC-SVMs, where the components h_1, . . . , h_c are coupled with an ∥·∥_{2,p}, p ≥ 1 constraint. Due to the use of the suboptimal Eq. (3), [12] obtain a margin bound growing quadratically w.r.t. the number of classes. [18] develop a new multi-class classification algorithm based on a natural notion called the multi-class margin of a kernel. [18] also present a novel multi-class Rademacher complexity margin bound based on Eq. (3), and this bound also depends quadratically on the class size. More recently, [13] give a refined Rademacher complexity bound with a linear dependence on the class size. The key reason for this improvement is the introduction of ρ_{θ,h} := min_{y′∈Y} [h(x, y) − h(x, y′) + θ 1_{y′=y}], which bounds the margin ρ_h from below; since the maximum operation in ρ_{θ,h} is applied to the whole set Y rather than the subset Y \ {y_i} used for ρ_h, one need not consider the random realization of y_i. We also use this trick in our proof of Theorem 5.
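The boundary between the polynomial and logarithmic regimes in Remark 9 can be checked with a one-line computation: setting p^* = 2 log c makes the class-size factor c^{1/p^*} collapse to the constant √e, independent of c, which is exactly why choosing p^* ≥ 2 log c yields a logarithmic dependence overall. A quick numeric check (class counts chosen arbitrarily):

```python
import math

for c in (10, 1000, 100000):              # number of classes (illustrative values)
    p_star = 2 * math.log(c)              # boundary choice of the dual exponent
    # c^(1/(2 log c)) = exp(log c / (2 log c)) = exp(1/2) = sqrt(e), for every c
    assert abs(c ** (1 / p_star) - math.sqrt(math.e)) < 1e-9
print("c^(1/(2 log c)) equals sqrt(e) for all tested c")
```

Since the factor no longer grows with c, only the explicit (4 log c)^{1+1/(2 log c)} term of Corollary 8 contributes a class-size dependence in this regime.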
However, [13] do not improve this linear dependence to a logarithmic one, as we achieve in Corollary 8, due to their use of the suboptimal structural result (3).

3 Algorithms

Motivated by the generalization analysis of Section 2, we now present a new multi-class learning algorithm based on performing empirical risk minimization in the hypothesis space (5). This corresponds to the following ℓp-norm MC-SVM (1 ≤ p ≤ 2):

Problem 10 (Primal problem: ℓp-norm MC-SVM).

min_w (1/2) [∑_{j=1}^c ∥w_j∥_2^p]^{2/p} + C ∑_{i=1}^n ℓ(t_i), s.t. t_i = ⟨w_{y_i}, φ(x_i)⟩ − max_{y≠y_i} ⟨w_y, φ(x_i)⟩. (P)

For p = 2 we recover the seminal multi-class algorithm of Crammer & Singer [20] (CS), which is thus a special case of the proposed formulation. An advantage of the proposed approach over [20] is that, as shown in Corollary 8, the dependence of the generalization performance on the class size becomes milder as p decreases to 1.

3.1 Dual problems

Since the optimization problem (P) is convex, we can derive the associated dual problem for the construction of efficient optimization algorithms. The derivation of the following dual problem is deferred to Supplementary Material C. For a matrix α ∈ R^{n×c}, we denote by α_i its i-th row. Denote by e_j the j-th unit vector in R^c and by 1 the vector in R^c with all components equal to one.

Problem 11 (Completely dualized problem for a general loss). The Lagrangian dual of Problem 10 is:

sup_{α∈R^{n×c}} −(1/2) [∑_{j=1}^c ∥∑_{i=1}^n α_{ij} φ(x_i)∥_2^{p/(p−1)}]^{2(p−1)/p} − C ∑_{i=1}^n ℓ^*(−α_{iy_i}/C) s.t. α_{ij} ≤ 0 ∧ α_i · 1 = 0, ∀j ≠ y_i, i = 1, . . . , n. (D)

Theorem 12 (Representer theorem). For any dual variable α ∈ R^{n×c}, the associated primal variable w = (w_1, . . . , w_c) minimizing the Lagrangian saddle problem can be represented by

w_j = [∑_{j̃=1}^c ∥∑_{i=1}^n α_{ij̃} φ(x_i)∥_2^{p^*}]^{2/p^* − 1} ∥∑_{i=1}^n α_{ij} φ(x_i)∥_2^{p^*−2} (∑_{i=1}^n α_{ij} φ(x_i)).

For the hinge loss ℓ_h(t) = (1 − t)_+, the Fenchel-Legendre conjugate is ℓ_h^*(t) = t if −1 ≤ t ≤ 0 and ∞ otherwise.
Hence ℓ_h^*(−α_{iy_i}/C) = −α_{iy_i}/C if −1 ≤ −α_{iy_i}/C ≤ 0 and ∞ otherwise. This yields the following dual problem for the hinge loss:

Problem 13 (Completely dualized problem for the hinge loss (ℓp-norm MC-SVM)).

sup_{α∈R^{n×c}} −(1/2) [∑_{j=1}^c ∥∑_{i=1}^n α_{ij} φ(x_i)∥_2^{p/(p−1)}]^{2(p−1)/p} + ∑_{i=1}^n α_{iy_i} s.t. α_i ≤ e_{y_i} · C ∧ α_i · 1 = 0, ∀i = 1, . . . , n. (7)

3.2 Optimization Algorithms

The dual problems (D) and (7) are not quadratic programs for p ≠ 2 and are thus generally not easy to solve. To circumvent this difficulty, we rewrite Problem 10 as the following equivalent problem:

min_{w,β} ∑_{j=1}^c ∥w_j∥_2² / (2β_j) + C ∑_{i=1}^n ℓ(t_i) s.t. t_i ≤ ⟨w_{y_i}, φ(x_i)⟩ − ⟨w_y, φ(x_i)⟩, y ≠ y_i, i = 1, . . . , n; ∥β∥_{p̄} ≤ 1, p̄ = p(2 − p)^{−1}, β_j ≥ 0. (8)

The class weights β_1, . . . , β_c in Eq. (8) play a role similar to the kernel weights in ℓp-norm MKL algorithms [19]. The equivalence between problem (P) and Eq. (8) follows directly from Lemma 26 in [24], which shows that the optimal β = (β_1, . . . , β_c) in Eq. (8) can be represented explicitly in closed form. Motivated by the recent work on ℓp-norm MKL, we propose to solve problem (8) by alternately optimizing w and β. As we will show, for temporarily fixed β, the optimization of w reduces to a standard multi-class classification problem. Furthermore, the update of β for fixed w can be computed via an analytic formula.

Problem 14 (Partially dualized problem for a general loss). For fixed β, the partial dual problem of the sub-optimization problem (8) w.r.t. w is

sup_{α∈R^{n×c}} −(1/2) ∑_{j=1}^c β_j ∥∑_{i=1}^n α_{ij} φ(x_i)∥_2² − C ∑_{i=1}^n ℓ^*(−α_{iy_i}/C) s.t. α_{ij} ≤ 0 ∧ α_i · 1 = 0, ∀j ≠ y_i, i = 1, . . . , n. (9)

The primal variable w minimizing the associated Lagrangian saddle problem is

w_j = β_j ∑_{i=1}^n α_{ij} φ(x_i). (10)

We defer the proof to Supplementary Material C. Analogous to Problem 13, we obtain the following partial dual problem for the hinge loss.

Problem 15 (Partially dualized problem for the hinge loss (ℓp-norm MC-SVM)).
sup_{α∈R^{n×c}} f(α) := −(1/2) ∑_{j=1}^c β_j ∥∑_{i=1}^n α_{ij} φ(x_i)∥_2² + ∑_{i=1}^n α_{iy_i} s.t. α_i ≤ e_{y_i} · C ∧ α_i · 1 = 0, ∀i = 1, . . . , n. (11)

Problems 14 and 15 are quadratic, so for linear kernels we can solve them very efficiently with the dual coordinate ascent algorithm [25]. To this end, we need to compute the gradient and solve the restricted problem of optimizing a single row α_i, keeping all other dual variables fixed [25]. The gradient of f can be expressed exactly in terms of w:

∂f/∂α_{ij} = −β_j ∑_{ĩ=1}^n α_{ĩj} k(x_i, x_ĩ) + 1_{y_i=j} = 1_{y_i=j} − ⟨w_j, φ(x_i)⟩. (12)

If δα_i denotes the additive change to be applied to the current α_i, then from (12) we have

f(α_1, . . . , α_{i−1}, α_i + δα_i, α_{i+1}, . . . , α_n) = ∑_{j=1}^c (∂f/∂α_{ij}) δα_{ij} − (1/2) ∑_{j=1}^c β_j k(x_i, x_i) [δα_{ij}]² + const.

Therefore, the sub-problem of optimizing δα_i is given by

max_{δα_i} −(1/2) ∑_{j=1}^c β_j k(x_i, x_i) [δα_{ij}]² + ∑_{j=1}^c (∂f/∂α_{ij}) δα_{ij} s.t. δα_i ≤ e_{y_i} · C − α_i ∧ δα_i · 1 = 0. (13)

We now consider the sub-problem of updating the class weights β for temporarily fixed w, for which we have the following analytic solution. The proof is deferred to Supplementary Material C.1.

Proposition 16 (Solving the sub-problem with respect to the class weights). For fixed w_j, the minimal β_j optimizing problem (8) is attained at

β_j = ∥w_j∥_2^{2−p} (∑_{j̃=1}^c ∥w_j̃∥_2^p)^{(p−2)/p}. (14)

The update of β_j via Eq. (14) requires ∥w_j∥_2², which can easily be computed through the representation established in Eq. (10). The resulting training algorithm for the proposed ℓp-norm MC-SVM is given in Algorithm 1. The algorithm alternates between solving an MC-SVM problem for fixed class weights (line 3) and updating the class weights in closed form (line 5). Recall that Problem 11 establishes a completely dualized problem, which can be used as a sound stopping criterion for Algorithm 1.

Algorithm 1: Training algorithm for ℓp-norm MC-SVM.
input: examples {(x_i, y_i)}_{i=1}^n and the kernel k.
initialize β_j = (1/c)^{1/p̄} and w_j = 0 for all j = 1, . . . , c
while optimality conditions are not satisfied do
    optimize the multi-class classification problem (9)
    compute ∥w_j∥_2² for all j = 1, . . . , c, according to Eq. (10)
    update β_j for all j = 1, . . . , c, according to Eq. (14)
end

4 Empirical Analysis

We implemented the proposed ℓp-norm MC-SVM algorithm (Algorithm 1) in C++ and solved the involved MC-SVM problem using dual coordinate ascent [25]. We experiment on six benchmark datasets: the Sector dataset studied in [26], the News 20 dataset collected by [27], the Rcv1 dataset collected by [28], the Birds 15 and Birds 50 datasets derived from [29], and the Caltech 256 dataset collected by Griffin et al. We used fc6 features of the BVLC reference CaffeNet [30]. Table 1 gives information on these datasets. We compare with the classical CS of [20], which constitutes a strong baseline on these datasets [25]. We employ 5-fold cross-validation on the training set to tune the regularization parameter C by grid search over the set {2^{−12}, 2^{−11}, . . . , 2^{12}} and p over 10 equidistant points from 1.1 to 2. We repeat the experiments 10 times and report in Table 2 the average accuracies and standard deviations attained on the test set.

Dataset       Classes   Training examples   Test examples   Attributes
Sector        105       6,412               3,207           55,197
News 20       20        15,935              3,993           62,060
Rcv1          53        15,564              518,571         47,236
Birds 15      200       3,000               8,788           4,096
Birds 50      200       9,958               1,830           4,096
Caltech 256   256       12,800              16,980          4,096
Table 1: Description of the datasets used in the experiments.

Method / Dataset   Sector        News 20       Rcv1          Birds 15     Birds 50     Caltech 256
ℓp-norm MC-SVM     94.20±0.34    86.19±0.12    85.74±0.71    13.73±1.4    27.86±0.2    56.00±1.2
Crammer & Singer   93.89±0.27    85.12±0.29    85.21±0.32    12.53±1.6    26.28±0.3    54.96±1.1
Table 2: Accuracies achieved by CS and the proposed ℓp-norm MC-SVM on the benchmark datasets.

We observe that the proposed ℓp-norm MC-SVM consistently outperforms CS [20] on all considered datasets.
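The significance test reported for this comparison can be reproduced from Table 2 alone. The sketch below runs an exact Wilcoxon signed rank test in pure Python by enumerating all 2^6 sign assignments (feasible since there are only six datasets); it uses the convention that the two-sided p-value is the null probability of a min-rank-sum statistic at least as extreme as the observed one, which for these all-positive differences matches the value stated in the paper.

```python
from itertools import product

def wilcoxon_exact_two_sided(diffs):
    """Exact two-sided Wilcoxon signed rank p-value for a small sample
    with no zero differences and no ties among |diffs|."""
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0] * n
    for r, i in enumerate(order, start=1):
        ranks[i] = r                       # rank 1 = smallest |difference|
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w_obs = min(w_minus, n * (n + 1) // 2 - w_minus)
    # Under the null, every rank carries an independent random sign.
    count = 0
    for signs in product((False, True), repeat=n):
        s = sum(r for r, neg in zip(range(1, n + 1), signs) if neg)
        if min(s, n * (n + 1) // 2 - s) <= w_obs:
            count += 1
    return count / 2 ** n

lp_mcsvm = [94.20, 86.19, 85.74, 13.73, 27.86, 56.00]  # Table 2, ℓp-norm MC-SVM
crammer  = [93.89, 85.12, 85.21, 12.53, 26.28, 54.96]  # Table 2, Crammer & Singer
p_value = wilcoxon_exact_two_sided([a - b for a, b in zip(lp_mcsvm, crammer)])
print(round(p_value, 3))  # 0.031, reported as 0.03 in the text
```

All six differences are positive, so the negative rank sum is 0 and the exact two-sided p-value is 2/2^6 = 0.03125.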
Specifically, our method attains a 0.31% accuracy gain on Sector, 1.07% on News 20, 0.53% on Rcv1, 1.20% on Birds 15, 1.58% on Birds 50, and 1.04% on Caltech 256. We perform a Wilcoxon signed rank test between the accuracies of CS and our method on the benchmark datasets; the p-value is 0.03, i.e., our method is significantly better than CS at the significance level of 0.05. These promising results indicate that the proposed ℓp-norm MC-SVM could further lift the state of the art in multi-class classification, including in real-world applications beyond the ones studied in this paper.

5 Conclusion

Motivated by the ever-growing size of multi-class datasets in real-world applications such as image annotation and web advertising, which involve tens or hundreds of thousands of classes, we studied the influence of the class size on the generalization behavior of multi-class classifiers. We focused on data-dependent generalization bounds, which are able to capture the properties of the distribution that has generated the data. Of independent interest, for hypothesis classes that are given as a maximum over base classes, we developed a new structural result on Gaussian complexities that preserves the coupling among the different components, whereas the existing structural results ignore this coupling and may yield suboptimal generalization bounds. We applied the new structural result to study learning rates for multi-class classifiers and derived, for the first time, a data-dependent bound with a logarithmic dependence on the class size, which substantially improves on the linear dependence in the state-of-the-art data-dependent generalization bounds. Motivated by the theoretical analysis, we proposed a novel ℓp-norm MC-SVM, where the parameter p controls the complexity of the corresponding bounds. This class of algorithms contains the classical CS [20] as a special case for p = 2.
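As a closing sanity check on the alternating scheme of Algorithm 1, the analytic update (14) can be verified in a few lines: by construction it always returns class weights lying exactly on the boundary of the ∥β∥_{p̄} ≤ 1 constraint of problem (8). A minimal sketch with made-up weight norms:

```python
import math

def update_class_weights(w_norms, p):
    """Eq. (14): beta_j = ||w_j||^(2-p) * (sum_k ||w_k||^p)^((p-2)/p)."""
    s = sum(nw ** p for nw in w_norms)
    return [nw ** (2 - p) * s ** ((p - 2) / p) for nw in w_norms]

p = 1.5
pbar = p / (2 - p)                     # dual scaling exponent from Eq. (8)
w_norms = [0.7, 1.3, 0.4, 2.1]         # ||w_j||_2 for c = 4 classes (illustrative)
beta = update_class_weights(w_norms, p)

# The update lands exactly on the boundary of the constraint ||beta||_pbar <= 1:
# beta_j^pbar = ||w_j||^p / sum_k ||w_k||^p, which sums to 1.
pbar_norm = sum(b ** pbar for b in beta) ** (1 / pbar)
assert abs(pbar_norm - 1.0) < 1e-9
print([round(b, 4) for b in beta])
```

This also explains why the initialization β_j = (1/c)^{1/p̄} in Algorithm 1 is natural: it is the uniform point on that same constraint boundary.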
We developed an effective optimization algorithm based on the Fenchel dual representation. On several standard benchmarks taken from various domains, the proposed approach surpassed the state-of-the-art method of CS [20] by up to 1.5%. A future direction will be to derive a data-dependent bound that is completely independent of the class size (overcoming even the mild logarithmic dependence obtained here). To this end, we will study structural results more powerful than Lemma 4 for controlling the complexity of function classes induced via the maximum operator. As a good starting point, we will consider ℓ∞-norm covering numbers.

Acknowledgments

We thank Mehryar Mohri for helpful discussions. This work was partly funded by the German Research Foundation (DFG) award KL 2698/2-1.

References

[1] T. Zhang, "Class-size independent generalization analysis of some discriminative multi-category classification," in Advances in Neural Information Processing Systems, pp. 1625–1632, 2004.
[2] T. Hofmann, L. Cai, and M. Ciaramita, "Learning with taxonomies: Classifying documents and words," in NIPS Workshop on Syntax, Semantics, and Statistics, 2003.
[3] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248–255, 2009.
[4] A. Beygelzimer, J. Langford, Y. Lifshits, G. Sorkin, and A. Strehl, "Conditional probability tree estimation analysis and algorithms," in Proceedings of UAI, pp. 51–58, AUAI Press, 2009.
[5] S. Bengio, J. Weston, and D. Grangier, "Label embedding trees for large multi-class tasks," in Advances in Neural Information Processing Systems, pp. 163–171, 2010.
[6] P. Jain and A. Kapoor, "Active learning for large multi-class problems," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 762–769, 2009.
[7] O. Dekel and O.
Shamir, "Multiclass-multilabel classification with more classes than examples," in International Conference on Artificial Intelligence and Statistics, pp. 137–144, 2010.
[8] M. R. Gupta, S. Bengio, and J. Weston, "Training highly multiclass classifiers," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1461–1492, 2014.
[9] T. Zhang, "Statistical analysis of some multi-category large margin classification methods," The Journal of Machine Learning Research, vol. 5, pp. 1225–1251, 2004.
[10] A. Tewari and P. L. Bartlett, "On the consistency of multiclass classification methods," The Journal of Machine Learning Research, vol. 8, pp. 1007–1025, 2007.
[11] T. Glasmachers, "Universal consistency of multi-class support vector classification," in Advances in Neural Information Processing Systems, pp. 739–747, 2010.
[12] M. Mohri, A. Rostamizadeh, and A. Talwalkar, Foundations of Machine Learning. MIT Press, 2012.
[13] V. Kuznetsov, M. Mohri, and U. Syed, "Multi-class deep boosting," in Advances in Neural Information Processing Systems, pp. 2501–2509, 2014.
[14] V. Koltchinskii and D. Panchenko, "Empirical margin distributions and bounding the generalization error of combined classifiers," Annals of Statistics, pp. 1–50, 2002.
[15] Y. Guermeur, "Combining discriminant models with new multi-class SVMs," Pattern Analysis & Applications, vol. 5, no. 2, pp. 168–179, 2002.
[16] L. Oneto, D. Anguita, A. Ghio, and S. Ridella, "The impact of unlabeled patterns in Rademacher complexity theory for kernel classifiers," in Advances in Neural Information Processing Systems, pp. 585–593, 2011.
[17] V. Koltchinskii and D. Panchenko, "Rademacher processes and bounding the risk of function learning," in High Dimensional Probability II, pp. 443–457, Springer, 2000.
[18] C. Cortes, M. Mohri, and A. Rostamizadeh, "Multi-class classification with maximum margin multiple kernel," in ICML, pp. 46–54, 2013.
[19] M. Kloft, U. Brefeld, S. Sonnenburg, and A.
Zien, "Lp-norm multiple kernel learning," The Journal of Machine Learning Research, vol. 12, pp. 953–997, 2011.
[20] K. Crammer and Y. Singer, "On the algorithmic implementation of multiclass kernel-based vector machines," The Journal of Machine Learning Research, vol. 2, pp. 265–292, 2002.
[21] P. L. Bartlett and S. Mendelson, "Rademacher and Gaussian complexities: Risk bounds and structural results," The Journal of Machine Learning Research, vol. 3, pp. 463–482, 2002.
[22] M. Ledoux and M. Talagrand, Probability in Banach Spaces: Isoperimetry and Processes, vol. 23. Berlin: Springer, 1991.
[23] S. I. Hill and A. Doucet, "A framework for kernel-based multi-category classification," Journal of Artificial Intelligence Research, vol. 30, pp. 525–564, 2007.
[24] C. A. Micchelli and M. Pontil, "Learning the kernel function via regularization," Journal of Machine Learning Research, pp. 1099–1125, 2005.
[25] S. S. Keerthi, S. Sundararajan, K.-W. Chang, C.-J. Hsieh, and C.-J. Lin, "A sequential dual method for large scale multi-class linear SVMs," in Proceedings of the 14th ACM SIGKDD, pp. 408–416, ACM, 2008.
[26] J. D. Rennie and R. Rifkin, "Improving multiclass text classification with the support vector machine," Tech. Rep. AIM-2001-026, MIT, 2001.
[27] K. Lang, "Newsweeder: Learning to filter netnews," in Proceedings of the 12th International Conference on Machine Learning, pp. 331–339, 1995.
[28] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li, "RCV1: A new benchmark collection for text categorization research," The Journal of Machine Learning Research, vol. 5, pp. 361–397, 2004.
[29] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona, "Caltech-UCSD Birds 200," Tech. Rep. CNS-TR-2010-001, California Institute of Technology, 2010.
[30] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," arXiv preprint arXiv:1408.5093, 2014.
Scalable Inference for Gaussian Process Models with Black-Box Likelihoods

Amir Dezfouli, The University of New South Wales, akdezfuli@gmail.com
Edwin V. Bonilla, The University of New South Wales, e.bonilla@unsw.edu.au

Abstract

We propose a sparse method for scalable automated variational inference (AVI) in a large class of models with Gaussian process (GP) priors, multiple latent functions, multiple outputs and non-linear likelihoods. Our approach maintains the statistical efficiency property of the original AVI method, requiring only expectations over univariate Gaussian distributions to approximate the posterior with a mixture of Gaussians. Experiments on small datasets for various problems including regression, classification, log Gaussian Cox processes, and warped GPs show that our method can perform as well as the full method under high sparsity levels. In larger experiments on the MNIST and SARCOS datasets we show that our method can provide superior performance to previously published scalable approaches that have been handcrafted for specific likelihood models.

1 Introduction

Developing automated yet practical approaches to Bayesian inference is a problem that has attracted considerable attention within the probabilistic machine learning community (see e.g. [1, 2, 3, 4]). In the case of models with Gaussian process (GP) priors, the main challenge is that of dealing with a large number of highly coupled latent variables. Although promising directions within the sampling community, such as Elliptical Slice Sampling (ESS, [5]), have been proposed, they have been shown to be particularly slow compared to variational methods. In particular, [6] showed that their automated variational inference (AVI) method can provide posterior distributions that are practically indistinguishable from those obtained by ESS, while running orders of magnitude faster.
One of the fundamental properties of the method proposed in [6] is its statistical efficiency: in order to approximate a posterior distribution via maximization of the evidence lower bound (ELBO), it only requires expectations over univariate Gaussian distributions, regardless of the likelihood model. Remarkably, this property holds for a large class of models involving multiple latent functions and multiple outputs. However, the method is still impractical for large datasets, as it inherits the cubic computational cost of GP models in the number of observations (N). While there have been several approaches to large-scale inference in GP models [7, 8, 9, 10, 11], these have focused on regression and classification problems. The main obstacle to applying these approaches to inference with general likelihood models is that it is unclear how they can be extended to frameworks such as that of [6] while maintaining the desirable property of statistical efficiency. In this paper we build upon the inducing-point approach underpinning most sparse approximations to GPs [12, 13] in order to scale up the automated inference method of [6]. In particular, for models with multiple latent functions, multiple outputs and non-linear likelihoods (such as multi-class classification and Gaussian process regression networks [14]), we propose a sparse approximation whose computational complexity is O(M³) in time, where M ≪ N is the number of inducing points. This approximation maintains the statistical efficiency property of the original AVI method. As the resulting ELBO decomposes over the training data points, our method can scale to a very large number of observations and is amenable to stochastic optimization and parallel computation. Moreover, it can, in principle, approximate arbitrary posterior distributions, as it uses a mixture of Gaussians (MoG) as the family of approximate posteriors.
We refer to our method as SAVIGP, which stands for scalable automated variational inference for Gaussian process models. Our experiments on small datasets for problems including regression, classification, log Gaussian Cox processes, and warped GPs [15] show that SAVIGP can perform as well as the full method under high levels of sparsity. In a larger experiment on the MNIST dataset, our approach outperforms the distributed variational inference method of [9], which used a class-conditional density modeling approach; our method, unlike [9], uses a single discriminative multi-class framework. Finally, we use SAVIGP to do inference for the Gaussian process regression network model [14] on the SARCOS dataset, an inverse robot dynamics problem [16], and show that we can outperform previously published scalable approaches that used likelihood-specific inference algorithms.

2 Related work

There has been a long-standing interest in the GP community in overcoming the cubic scaling of inference in standard GP models [17, 18, 12, 13, 8]. However, none of these approaches dealt with the harder tasks of developing scalable inference methods for multi-output problems and general likelihood models. The former (the multiple-output problem) has been addressed, notably, by [19] and [20] using the convolution process formalism. Nevertheless, such approaches were specific to regression problems. The latter (general likelihood models) has been tackled from a sampling perspective [5] and within an optimization framework using variational inference [21]. In particular, the work of [21] proposes an efficient full Gaussian posterior approximation for GP models with iid observations. Our work pushes this breakthrough further by allowing multiple latent functions, multiple outputs, and, more importantly, scalability to large datasets. A related area of research is that of modeling complex data with deep belief networks based on Gaussian process mappings [22].
Unlike our approach, these models target the unsupervised problem of discovering structure in high-dimensional data, do not deal with black-box likelihoods, and focus on small-data applications. Finally, very recent developments in speeding up probabilistic kernel machines [9, 23, 24] show that the types of problems we are addressing here are highly relevant to the machine learning community. In particular, [23] has proposed efficient inference methods for large-scale GP classification and [9] has developed a distributed variational approach for GP models, with a focus on regression and classification problems. Our work, unlike these approaches, allows practitioners and researchers to investigate new models with GP priors and complex likelihoods for which currently there is no machinery that can scale to very large datasets.

3 Gaussian process priors and multiple-output nonlinear likelihoods

We are given a dataset D = {x_n, y_n}_{n=1}^{N}, where x_n is a D-dimensional input vector and y_n is a P-dimensional output. Our goal is to learn the mapping from inputs to outputs, which can be established via Q underlying latent functions {f_j}_{j=1}^{Q}. A sensible modeling approach to the above problem is to assume that the Q latent functions {f_j} are uncorrelated a priori and that they are drawn from Q zero-mean Gaussian processes [25]:

    p(f) = \prod_{j=1}^{Q} p(f_{\cdot j}) = \prod_{j=1}^{Q} \mathcal{N}(f_{\cdot j}; \mathbf{0}, K_j),    (1)

where f is the set of all latent function values; f_{\cdot j} = {f_j(x_n)}_{n=1}^{N} denotes the values of latent function j; and K_j is the covariance matrix induced by the covariance function κ_j(·, ·), evaluated at every pair of inputs. Along with the prior in Equation (1), we can also assume that our multi-dimensional observations {y_n} are iid given the corresponding latent function values:

    p(y | f) = \prod_{n=1}^{N} p(y_n | f_{n\cdot}),    (2)

where y is the set of all output observations; y_n is the nth output observation; and f_{n\cdot} = {f_j(x_n)}_{j=1}^{Q} is the set of latent function values which y_n depends upon.
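The generative assumptions of Equations (1) and (2) can be sketched numerically. The snippet below is a minimal illustration (not the paper's code): it assumes an RBF covariance function and arbitrary sizes, and draws Q independent latent functions from zero-mean GP priors at N inputs.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance kappa_j evaluated at all pairs of rows."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N, D, Q = 50, 2, 3                      # data points, input dim, latent functions
X = rng.standard_normal((N, D))

# Prior of Eq. (1): Q independent zero-mean GPs, one covariance K_j each.
Ks = [rbf_kernel(X, X) + 1e-8 * np.eye(N) for _ in range(Q)]  # jitter for stability
f = np.column_stack([rng.multivariate_normal(np.zeros(N), K) for K in Ks])

# f[n, :] is the vector f_{n.} that observation y_n depends on in Eq. (2).
```

Any likelihood that factorizes across rows of `f`, as in Equation (2), fits this framework.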
In short, we are interested in models for which the following criteria are satisfied: (i) factorization of the prior over the latent functions; and (ii) factorization of the conditional likelihood over the observations given the latent functions. Interestingly, a large class of problems can be well modeled with the above assumptions: binary classification [7, 26], warped GPs [15], log Gaussian Cox processes [27], multi-class classification [26], and multi-output regression [14] all belong to this family of models.

3.1 Automated variational inference

One of the key inference challenges in the above models is that of computing the posterior distribution over the latent functions, p(f | y). Ideally, we would like an efficient method that does not need to know the details of the likelihood in order to carry out posterior inference. This is exactly the main result in [6], which approximates the posterior with a mixture of Gaussians within a variational inference framework. This entails the optimization of an evidence lower bound, which decomposes as a KL-divergence term and an expected log likelihood (ELL) term. As the KL-divergence term is relatively straightforward to deal with, we focus on their main result regarding the ELL term ([6], Th. 1): "The expected log likelihood and its gradients can be approximated using samples from univariate Gaussian distributions". More generally, we say that the ELL term and its gradients can be estimated using expectations over univariate Gaussian distributions. We refer to this result as that of statistical efficiency. One of the main limitations of this method is its poor scalability to large datasets, as it has a cubic time complexity in the number of data points, i.e. O(N^3). In the next section we describe our inference method, which scales up to large datasets while maintaining the statistical efficiency property of the original model.
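To make the statistical-efficiency property concrete, the following sketch (our own illustration; `black_box_loglik` is a placeholder for an arbitrary likelihood implementation) estimates a one-dimensional ELL term E_{N(f; m, v)}[log p(y | f)] purely by sampling a univariate Gaussian, and checks it against the closed form that happens to be available for a Gaussian likelihood.

```python
import numpy as np

def expected_loglik(black_box_loglik, y, mean, var, num_samples=200_000, seed=0):
    """Monte Carlo estimate of E_{N(f; mean, var)}[log p(y | f)] using only
    univariate Gaussian samples; the likelihood is treated as a black box."""
    rng = np.random.default_rng(seed)
    f = mean + np.sqrt(var) * rng.standard_normal(num_samples)
    return np.mean(black_box_loglik(y, f))

# Sanity check with a Gaussian likelihood, where the expectation is analytic:
# E[log N(y; f, s2)] = log N(y; mean, s2) - var / (2 * s2).
s2 = 0.5
gauss_loglik = lambda y, f: (-0.5 * np.log(2 * np.pi * s2)
                             - 0.5 * (y - f) ** 2 / s2)
mean, var, y = 0.3, 0.2, 1.0
est = expected_loglik(gauss_loglik, y, mean, var)
exact = -0.5 * np.log(2 * np.pi * s2) - 0.5 * ((y - mean) ** 2 + var) / s2
```

The same estimator works unchanged for any likelihood that can evaluate log p(y | f) pointwise, which is the essence of the black-box property.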
4 Scalable inference

In order to make inference scalable, we redefine our prior to be sparse by conditioning the latent processes on a set of inducing variables {u_{\cdot j}}_{j=1}^{Q}, which lie in the same spaces as {f_{\cdot j}} and are drawn from the same zero-mean GP priors. As before, we assume factorization of the prior across the Q latent functions. Hence the resulting sparse prior is given by:

    p(u) = \prod_{j=1}^{Q} \mathcal{N}(u_{\cdot j}; \mathbf{0}, \kappa(Z_j, Z_j)), \qquad p(f | u) = \prod_{j=1}^{Q} \mathcal{N}(f_{\cdot j}; \tilde{\mu}_j, \tilde{K}_j),    (3)

    \tilde{\mu}_j = \kappa(X, Z_j) \, \kappa(Z_j, Z_j)^{-1} u_{\cdot j},    (4)

    \tilde{K}_j = \kappa_j(X, X) - A_j \kappa(Z_j, X), \quad \text{with} \quad A_j = \kappa(X, Z_j) \, \kappa(Z_j, Z_j)^{-1},    (5)

where u_{\cdot j} are the inducing variables for latent process j; u is the set of all the inducing variables; Z_j are all the inducing inputs (i.e. locations) for latent process j; X is the matrix of all input locations {x_i}; and κ(U, V) is the covariance matrix induced by evaluating the covariance function κ_j(·, ·) at all pairs of vectors in the matrices U and V. We note that while each of the inducing variables in u_{\cdot j} lies in the same space as the elements in f_{\cdot j}, each of the M inducing inputs in Z_j lies in the same space as each input data point x_n. Given the latent function values f_{n\cdot}, the conditional likelihood factorizes across data points and is given by Equation (2).

4.1 Approximate posterior

We will approximate the posterior using variational inference. Motivated by the fact that the true joint posterior is given by p(f, u | y) = p(f | u, y) p(u | y), our approximate posterior has the form:

    q(f, u | y) = p(f | u) \, q(u),    (6)

where p(f | u) is the conditional prior given in Equation (3) and q(u) is our approximate (variational) posterior. This decomposition has proved effective in problems with a single latent process and a single output (see e.g. [13]).
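Equations (4) and (5) are ordinary conditional-Gaussian formulas and can be checked directly. Below is a minimal numpy sketch for one latent process (our own illustration under an assumed RBF kernel; production code would use Cholesky solves rather than an explicit inverse, but the O(M^3) cost structure is the same).

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """Squared-exponential covariance between all pairs of rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
N, M, D = 200, 20, 3
X = rng.standard_normal((N, D))          # all training inputs
Z = rng.standard_normal((M, D))          # inducing inputs Z_j for one process
u = rng.standard_normal(M)               # one draw of the inducing variables u_.j

Kzz = rbf(Z, Z) + 1e-8 * np.eye(M)       # M x M: its factorization is the O(M^3) cost
A = rbf(X, Z) @ np.linalg.inv(Kzz)       # A_j = kappa(X, Z_j) kappa(Z_j, Z_j)^{-1}
mu_tilde = A @ u                         # conditional mean, Eq. (4)
K_tilde = rbf(X, X) - A @ rbf(Z, X)      # conditional covariance, Eq. (5)
```

`K_tilde` is a Schur complement, so it stays positive semi-definite (up to numerical error), which is what makes Equation (3) a valid Gaussian conditional.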
Our variational distribution is a mixture of Gaussians (MoG):

    q(u | \lambda) = \sum_{k=1}^{K} \pi_k \, q_k(u | m_k, S_k) = \sum_{k=1}^{K} \pi_k \prod_{j=1}^{Q} \mathcal{N}(u_{\cdot j}; m_{kj}, S_{kj}),    (7)

where λ = {π_k, m_{kj}, S_{kj}} are the variational parameters: the mixture proportions {π_k}, and the posterior means {m_{kj}} and posterior covariances {S_{kj}} of the inducing variables corresponding to mixture component k and latent function j. We also note that each of the mixture components q_k(u | m_k, S_k) is a Gaussian with mean m_k and block-diagonal covariance S_k.

5 Posterior approximation via optimization of the evidence lower bound

Following variational inference principles, the log marginal likelihood log p(y) (or evidence) is lower bounded by the variational objective:

    \log p(y) \geq \mathcal{L}_{\text{elbo}} = \underbrace{\int q(u|\lambda) \, p(f|u) \log p(y|f) \, df \, du}_{\mathcal{L}_{\text{ell}}} \; \underbrace{- \, \mathrm{KL}(q(u|\lambda) \,\|\, p(u))}_{\mathcal{L}_{\text{kl}}},    (8)

where the evidence lower bound (L_elbo) decomposes as the sum of an expected log likelihood term (L_ell) and a KL-divergence term (L_kl). Our goal is to estimate our posterior distribution q(u | λ) via maximization of L_elbo. We consider the L_ell term first, as it is the most difficult to deal with: we do not know the details of the implementation of the conditional likelihood p(y | f).

5.1 Expected log likelihood term

Here we need to compute the expectation of the log conditional likelihood log p(y | f) over the joint approximate posterior given in Equation (6). Our goal is to obtain expressions for the L_ell term and its gradients wrt the variational parameters while maintaining the statistical efficiency property of needing only expectations over univariate Gaussians. For this we first introduce an intermediate distribution q(f | λ), obtained by integrating u out of the joint approximate posterior:

    \mathcal{L}_{\text{ell}}(\lambda) = \int_f \int_u q(u|\lambda) \, p(f|u) \log p(y|f) \, df \, du = \int_f \log p(y|f) \underbrace{\int_u p(f|u) \, q(u|\lambda) \, du}_{q(f|\lambda)} \, df.
(9)

Given our approximate posterior in Equation (7), q(f | λ) can be obtained analytically:

    q(f | \lambda) = \sum_{k=1}^{K} \pi_k \, q_k(f | \lambda_k) = \sum_{k=1}^{K} \pi_k \prod_{j=1}^{Q} \mathcal{N}(f_{\cdot j}; b_{kj}, \Sigma_{kj}), \quad \text{with}    (10)

    b_{kj} = A_j m_{kj}, \qquad \Sigma_{kj} = \tilde{K}_j + A_j S_{kj} A_j^T,    (11)

where \tilde{K}_j and A_j are given in Equation (5). Now we can rewrite Equation (9) as:

    \mathcal{L}_{\text{ell}}(\lambda) = \sum_{k=1}^{K} \pi_k \, \mathbb{E}_{q_k(f|\lambda_k)}[\log p(y|f)] = \sum_{n=1}^{N} \sum_{k=1}^{K} \pi_k \, \mathbb{E}_{q_{k(n)}(f_{n\cdot})}[\log p(y_{n\cdot} | f_{n\cdot})],    (12)

where E_{q(x)}[g(x)] denotes the expectation of the function g(x) over the distribution q(x). Here we have used the mixture decomposition of q(f | λ) in Equation (10) and the factorization of the likelihood over the data points in Equation (2). Now we are ready to state formally our main result.

Theorem 1 For the sparse GP model with prior defined in Equations (3) to (5), and likelihood defined in Equation (2), the expected log likelihood over the variational distribution in Equation (7) and its gradients can be estimated using expectations over univariate Gaussian distributions.

Given the result in Equation (12), the proof is trivial for the computation of L_ell, as we only need to realize that q_k(f | λ_k) = N(f; b_k, Σ_k) given in Equation (10) has a block-diagonal covariance structure; consequently, q_{k(n)}(f_{n·}) is a Q-dimensional Gaussian with diagonal covariance. For the gradients of L_ell wrt the variational parameters, we use the following identity:

    \nabla_{\lambda_k} \mathbb{E}_{q_{k(n)}(f_{n\cdot})}[\log p(y_n | f_{n\cdot})] = \mathbb{E}_{q_{k(n)}(f_{n\cdot})}\!\big[ \nabla_{\lambda_k} \log q_{k(n)}(f_{n\cdot}) \, \log p(y_n | f_{n\cdot}) \big],    (13)

for λ_k ∈ {m_k, S_k}, and the result for {π_k} is straightforward. ■

Explicit computation of L_ell. We now provide explicit expressions for the computation of L_ell. We know that q_{k(n)}(f_{n·}) is a Q-dimensional Gaussian:

    q_{k(n)}(f_{n\cdot}) = \mathcal{N}(f_{n\cdot}; b_{k(n)}, \Sigma_{k(n)}),    (14)

where Σ_{k(n)} is a diagonal matrix. The jth element of the mean and the (j, j)th entry of the covariance are given by:

    [b_{k(n)}]_j = [A_j]_{n,:} \, m_{kj}, \qquad [\Sigma_{k(n)}]_{j,j} = [\tilde{K}_j]_{n,n} + [A_j]_{n,:} \, S_{kj} \, [A_j^T]_{:,n},    (15)

where [A]_{n,:} and [A]_{:,n} denote the nth row and nth column of matrix A, respectively.
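Equation (15) says the per-point means and variances can be read off without ever forming the full N × N covariance of Equation (11). A small sketch for one component k and one latent function j (our own illustration with random placeholder matrices), checked against the explicitly constructed covariance:

```python
import numpy as np

def per_point_moments(A, Ktilde_diag, m, S):
    """Per-data-point marginal moments of N(A m, Ktilde + A S A^T), Eq. (15):
    b[n] = [A]_{n,:} m and sig2[n] = [Ktilde]_{n,n} + [A]_{n,:} S [A^T]_{:,n},
    computed in O(N M^2) without materializing the N x N covariance."""
    b = A @ m
    sig2 = Ktilde_diag + np.einsum('nm,mp,np->n', A, S, A)  # diag(A S A^T)
    return b, sig2

rng = np.random.default_rng(2)
N, M = 100, 10
A = rng.standard_normal((N, M))
Ktilde = 0.1 * np.eye(N)
L = rng.standard_normal((M, M))
S = L @ L.T                               # a PSD stand-in for S_kj
m = rng.standard_normal(M)

b, sig2 = per_point_moments(A, np.diag(Ktilde), m, S)
full_cov = Ktilde + A @ S @ A.T           # the quantity we avoid forming
```

Only these N pairs (b, sig2), not the full covariance, are needed to evaluate the expected log likelihood of Equation (12).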
Hence we can compute L_ell as follows: we draw samples

    \{ f_{n\cdot}^{(k,i)} \}_{i=1}^{S} \sim \mathcal{N}(f_{n\cdot}; b_{k(n)}, \Sigma_{k(n)}), \qquad k = 1, \ldots, K,    (16)

and form the estimator

    \hat{\mathcal{L}}_{\text{ell}} = \frac{1}{S} \sum_{n=1}^{N} \sum_{k=1}^{K} \pi_k \sum_{i=1}^{S} \log p(y_{n\cdot} | f_{n\cdot}^{(k,i)}).    (17)

The gradients of L_ell wrt the variational parameters are given in the supplementary material.

5.2 KL-divergence term

We now turn our attention to the KL-divergence term, which can be decomposed as follows:

    -\mathrm{KL}(q(u|\lambda) \,\|\, p(u)) = \underbrace{\mathbb{E}_q[-\log q(u|\lambda)]}_{\mathcal{L}_{\text{ent}}} + \underbrace{\mathbb{E}_q[\log p(u)]}_{\mathcal{L}_{\text{cross}}},    (18)

where the entropy term (L_ent) can be lower bounded using Jensen's inequality:

    \mathcal{L}_{\text{ent}} \geq -\sum_{k=1}^{K} \pi_k \log \sum_{\ell=1}^{K} \pi_\ell \, \mathcal{N}(m_k; m_\ell, S_k + S_\ell) \;\overset{\text{def}}{=}\; \hat{\mathcal{L}}_{\text{ent}}.    (19)

The negative cross-entropy term (L_cross) can be computed exactly:

    \mathcal{L}_{\text{cross}} = -\frac{1}{2} \sum_{k=1}^{K} \pi_k \sum_{j=1}^{Q} \big[ M \log 2\pi + \log |\kappa(Z_j, Z_j)| + m_{kj}^T \kappa(Z_j, Z_j)^{-1} m_{kj} + \mathrm{tr}\!\left( \kappa(Z_j, Z_j)^{-1} S_{kj} \right) \big].    (20)

The gradients of the above terms wrt the variational parameters are given in the supplementary material.

5.3 Hyperparameter learning and scalability to large datasets

For simplicity of notation we have omitted the parameters of the covariance functions and the likelihood parameters from the ELBO. However, in our experiments we optimize these along with the variational parameters in a variational-EM alternating optimization framework. The gradients of the ELBO wrt these parameters are given in the supplementary material. The original framework of [6] is completely unfeasible for large datasets, as its complexity is dominated by the inversion of the Gram matrix on all the training data, which is an O(N^3) operation, where N is the number of training points. Our sparse framework makes automated variational inference practical for large datasets, as its complexity is dominated by inversions of the kernel matrices on the inducing points, which are O(M^3) operations, where M is the number of inducing points per latent process.
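The Monte Carlo estimator of Equations (16) and (17) can be sketched in a few lines. This is our own illustration, not the paper's implementation: `loglik` stands in for an arbitrary black-box log p(y_n | f_n·), and the key point is that, because each Σ_k(n) is diagonal, sampling reduces to univariate Gaussians.

```python
import numpy as np

def estimate_Lell(loglik, Y, means, variances, pis, S=1000, seed=0):
    """Monte Carlo estimate of L_ell, Eqs. (16)-(17).

    loglik(y_n, f_n)  : black-box log p(y_n | f_{n.}) for a Q-vector f_n
    Y                 : length-N sequence of observations y_n
    means, variances  : (K, N, Q) marginal moments b_k(n), diag(Sigma_k(n))
    pis               : (K,) mixture proportions pi_k
    """
    rng = np.random.default_rng(seed)
    K, N, Q = means.shape
    total = 0.0
    for k in range(K):
        # Diagonal Sigma_k(n): each sample needs only univariate Gaussians.
        eps = rng.standard_normal((S, N, Q))
        F = means[k] + np.sqrt(variances[k]) * eps       # f_{n.}^{(k,i)}
        ll = np.array([[loglik(Y[n], F[i, n]) for n in range(N)]
                       for i in range(S)])
        total += pis[k] * ll.sum(axis=1).mean()          # (1/S) sum_i sum_n
    return total
```

Because the double sum over n and k in Equation (17) decomposes over data points, the same estimator applies unchanged to a minibatch of points, which is what makes stochastic optimization of the ELBO possible.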
Furthermore, as L_ell and its gradients decompose over the training points, and the L_kl term decomposes over the latent processes, our method is amenable to stochastic optimization and/or parallel computation, which makes it scalable to very large numbers of input observations, output dimensions and latent processes. In our experiments in Section 6 we show that our sparse framework can achieve similar performance to the full method [6] on small datasets under high levels of sparsity. Moreover, we carried out experiments on larger datasets for which it is practically impossible to apply the full (i.e. non-sparse) method.

Figure 1: The SSE and NLPD for warped GPs on the Abalone dataset, where lower values on both measures are better. Three approximate posteriors are used: FG (full Gaussian), MoG1 (diagonal Gaussian), and MoG2 (mixture of two diagonal Gaussians), along with various sparsity factors (SF = M/N). The smaller the SF the sparser the model, with SF = 1 corresponding to no sparsity.

6 Experiments

Our experiments first consider the same six benchmarks with various likelihood models analyzed by [6]. The number of training points (N) on these benchmarks ranges from 300 to 1233 and their input dimensionality (D) ranges from 1 to 256. The goal of this first set of experiments is to show that SAVIGP can attain as good performance as the full method under high sparsity levels. We also carried out experiments at a larger scale using the MNIST dataset and the SARCOS dataset [16]. The application of the original automated variational inference framework on these datasets is unfeasible. We refer the reader to the supplementary material for the details of our experimental set-up.
We used two performance measures in each experiment: the standardized squared error (SSE) and the negative log predictive density (NLPD) for continuous-output problems, and the error rate and the negative log probability (NLP) for discrete-output problems. We use three versions of SAVIGP: FG, MoG1, and MoG2, corresponding to a full Gaussian, a diagonal Gaussian, and a mixture of diagonal Gaussians with 2 components, respectively. We refer to the ratio of the number of inducing points to the number of training points (M/N) as the sparsity factor.

6.1 Small-scale experiments

In this section we describe the results on three (out of six) benchmarks used by [6] and analyze the performance of SAVIGP. The other three benchmarks are described in the supplementary material.

Warped Gaussian processes (WGP), Abalone dataset [28], p(y_n | f_n) = \nabla_{y_n} t(y_n) \, \mathcal{N}(t(y_n) | f_n, \sigma^2). For this task we used the same neural-net transformation as in [15], and the results for the Abalone dataset are shown in Figure 1. We see that the performance of SAVIGP is practically indistinguishable across all sparsity factors for SSE and NLPD. Here we note that [6] showed that automated variational inference performed competitively when compared to hand-crafted methods for warped GPs [15].

Log Gaussian Cox process (LGCP), coal-mining disasters dataset [29], p(y_n | f_n) = \lambda_n^{y_n} \exp(-\lambda_n) / y_n!. Here we used the LGCP for modeling the number of coal-mining disasters between 1851 and 1962. We note that [6] reported that automated variational inference (the focus of this paper) produced practically indistinguishable distributions (but ran an order of magnitude faster) when compared to sampling methods such as elliptical slice sampling [5]. The results for our sparse models are shown in Figure 2, where we see that both models (FG and MoG1) remain mostly unaffected when using high levels of sparsity. We also confirm the findings in [6] that the MoG1 model underestimates the variance of the predictions.
Binary classification, Wisconsin breast cancer dataset [28], p(y_n = 1) = 1 / (1 + \exp(-f_n)). Classification error rates and the negative log probability (NLP) on the Wisconsin breast cancer dataset are shown in Figure 3. We see that the error rates are comparable across all models and sparsity factors. Interestingly, sparser models achieved lower NLP values, suggesting overconfident predictions by the less sparse models, especially for the mixtures of diagonal Gaussians.

Figure 2: Left: the coal-mining disasters data. Right: the posteriors for a Log Gaussian Cox process on these data when using a full Gaussian (FG) and a diagonal Gaussian (MoG1), for various sparsity factors (SF = M/N). The smaller the SF the sparser the model, with SF = 1 corresponding to no sparsity. The solid line is the posterior mean and the shaded area shows the 90% confidence interval.

Figure 3: Error rates and NLP for binary classification on the Wisconsin breast cancer dataset. Three approximate posteriors are used: FG (full Gaussian), MoG1 (diagonal Gaussian), and MoG2 (mixture of two diagonal Gaussians), along with various sparsity factors (SF = M/N). The smaller the SF the sparser the model, with SF = 1 corresponding to the original model without sparsity. Error bars on the left plot indicate the 95% confidence interval around the mean.

6.2 Large-scale experiments

In this section we show the results of the experiments carried out on larger datasets with non-linear, non-Gaussian likelihoods.

Multi-class classification on the MNIST dataset. We first considered a multi-class classification task on the MNIST dataset using the softmax likelihood.
This dataset has been extensively used by the machine learning community and contains 50,000 examples for training, 10,000 for validation and 10,000 for testing, with 784-dimensional input vectors. Unlike most previous approaches, we did not tune additional parameters using the validation set. Instead, we used our variational framework to learn all the model parameters using all the training and validation data. This setting most likely provides a lower bound on the attainable test accuracy, but our goal here is simply to show that we can achieve competitive performance with highly sparse models even though our inference algorithm does not know the details of the conditional likelihood. Figure 4 (left and middle) shows error rates and NLPs, where we see that, although the performance decreases with sparsity, the method is able to attain an accuracy of 97.49% while using only around 2000 inducing points (SF = 0.04). To the best of our knowledge, we are the first to train a multi-class Gaussian process classifier using a single discriminative probabilistic framework on all classes on MNIST. For example, [17] used a 1-vs-rest approach and [23] focused on the binary classification task of distinguishing the odd digits from the even digits. Finally, [9] trained one model for each digit and used it as a density model, achieving an error rate of 5.95%. Our experiments show that by having a single discriminative probabilistic framework, even without exploiting the details of the conditional likelihood, we can bring this error rate down to 2.51%. As a reference, previous literature reports about 12% error rate for linear classifiers and less than 1% error rate for state-of-the-art large/deep convolutional nets.
Figure 4: Left and middle: classification error rates and negative log probabilities (NLP) for the multi-class problem on MNIST. Here we used the FG (full Gaussian) approximation with various sparsity factors (SF = M/N). The smaller the SF the sparser the model. Right: the SMSE for a Gaussian process regression network model on the SARCOS dataset when learning the 4th and 7th torques (output 1 and output 2) with a FG (full Gaussian) approximation and a 0.04 sparsity factor.

Our results show that our method, while solving the harder problem of full posterior estimation, can reduce the gap between GPs and deep nets.

Gaussian process regression networks on the SARCOS dataset. Here we apply our SAVIGP inference method to the Gaussian process regression networks (GPRNs) model of [14], using the SARCOS dataset as a test bed. GPRNs are a very flexible regression approach where P outputs are a linear combination of Q latent Gaussian processes, with the weights of the linear combination also drawn from Gaussian processes. This yields a non-linear multiple-output likelihood model where the correlations between the outputs can be spatially adaptive, i.e. input dependent. The SARCOS dataset concerns an inverse dynamics problem of a 7-degrees-of-freedom anthropomorphic robot arm [16]. The data consist of 44,484 training examples mapping from a 21-dimensional input space (7 joint positions, 7 joint velocities, 7 joint accelerations) to the corresponding 7 joint torques. Similarly to the work in [10], we consider joint learning for the 4th and 7th torques, which we refer to as output 1 and output 2 respectively, and make predictions on 4,449 test points per output. Figure 4 (right) shows the standardized mean square error (SMSE) with the full Gaussian approximation (FG) using SF = 0.04, i.e.
less than 2000 inducing points. The results are considerably better than those reported by [10] (0.2631 and 0.0127 for the two outputs, respectively), although their setting was much sparser than ours on the first output. This also corroborates previous findings that, on this problem, having more data does help [16]. To the best of our knowledge, we are the first to perform inference in GPRNs on problems at this scale.

7 Conclusion

We have presented a scalable approximate inference method for models with Gaussian process (GP) priors, multiple outputs, and nonlinear likelihoods. One of the key properties of this method is its statistical efficiency, in that it requires only expectations over univariate Gaussian distributions to approximate the posterior with a mixture of Gaussians. Extensive experimental evaluation shows that our approach can attain excellent performance under high sparsity levels and that it can outperform previous inference methods that were handcrafted for specific likelihood models. Overall, this work makes a substantial contribution towards the goal of developing generic yet scalable Bayesian inference methods for models based on Gaussian processes.

Acknowledgments

This work has been partially supported by UNSW's Faculty of Engineering Research Grant Program project # PS37866 and an AWS in Education Research Grant award. AD was also supported by a grant from the Australian Research Council # DP150104878.

References

[1] Pedro Domingos, Stanley Kok, Hoifung Poon, Matthew Richardson, and Parag Singla. Unifying logical and statistical AI. In AAAI, 2006.
[2] Noah D. Goodman, Vikash K. Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum. Church: A language for generative models. In UAI, 2008.
[3] Matthew D. Hoffman and Andrew Gelman. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. JMLR, 15(1):1593–1623, 2014.
[4] Rajesh Ranganath, Sean Gerrish, and David M. Blei. Black box variational inference.
In AISTATS, 2014.
[5] Iain Murray, Ryan Prescott Adams, and David J.C. MacKay. Elliptical slice sampling. In AISTATS, 2010.
[6] Trung V. Nguyen and Edwin V. Bonilla. Automated variational inference for Gaussian process models. In NIPS, 2014.
[7] Hannes Nickisch and Carl Edward Rasmussen. Approximations for binary Gaussian process classification. JMLR, 9(10), 2008.
[8] James Hensman, Nicolo Fusi, and Neil D. Lawrence. Gaussian processes for big data. In UAI, 2013.
[9] Yarin Gal, Mark van der Wilk, and Carl Rasmussen. Distributed variational inference in sparse Gaussian process regression and latent variable models. In NIPS, 2014.
[10] Trung V. Nguyen and Edwin V. Bonilla. Collaborative multi-output Gaussian processes. In UAI, 2014.
[11] Trung V. Nguyen and Edwin V. Bonilla. Fast allocation of Gaussian process experts. In ICML, 2014.
[12] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[13] Michalis Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, 2009.
[14] Andrew G. Wilson, David A. Knowles, and Zoubin Ghahramani. Gaussian process regression networks. In ICML, 2012.
[15] Edward Snelson, Carl Edward Rasmussen, and Zoubin Ghahramani. Warped Gaussian processes. In NIPS, 2003.
[16] Sethu Vijayakumar and Stefan Schaal. Locally weighted projection regression: An O(n) algorithm for incremental real time learning in high dimensional space. In ICML, 2000.
[17] Neil D. Lawrence, Matthias Seeger, and Ralf Herbrich. Fast sparse Gaussian process methods: The informative vector machine. In NIPS, 2002.
[18] Ed Snelson and Zoubin Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, 2006.
[19] Mauricio A. Álvarez and Neil D. Lawrence. Computationally efficient convolved multiple output Gaussian processes. JMLR, 12(5):1459–1500, 2011.
[20] Mauricio A. Álvarez, David Luengo, Michalis K. Titsias, and Neil D. Lawrence.
Efficient multioutput Gaussian processes through variational inducing kernels. In AISTATS, 2010.
[21] Manfred Opper and Cédric Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009.
[22] Andreas Damianou and Neil Lawrence. Deep Gaussian processes. In AISTATS, 2013.
[23] James Hensman, Alexander Matthews, and Zoubin Ghahramani. Scalable variational Gaussian process classification. In AISTATS, 2015.
[24] Zichao Yang, Andrew Gordon Wilson, Alexander J. Smola, and Le Song. À la carte – learning fast kernels. In AISTATS, 2015.
[25] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[26] Christopher K.I. Williams and David Barber. Bayesian classification with Gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342–1351, 1998.
[27] K. Bache and M. Lichman. UCI machine learning repository, 2013.
[28] R.G. Jarrett. A note on the intervals between coal-mining disasters. Biometrika, 66(1):191–193, 1979.