The Multi-fidelity Multi-armed Bandit

Kirthevasan Kandasamy ♮, Gautam Dasarathy ♦, Jeff Schneider ♮, Barnabás Póczos ♮
♮Carnegie Mellon University, ♦Rice University
{kandasamy, schneide, bapoczos}@cs.cmu.edu, gautamd@rice.edu

Abstract

We study a variant of the classical stochastic K-armed bandit where observing the outcome of each arm is expensive, but cheap approximations to this outcome are available. For example, in online advertising the performance of an ad can be approximated by displaying it for shorter time periods or to narrower audiences. We formalise this task as a multi-fidelity bandit, where, at each time step, the forecaster may choose to play an arm at any one of M fidelities. The highest fidelity (the desired outcome) expends cost λ^(M). The mth fidelity (an approximation) expends λ^(m) < λ^(M) and returns a biased estimate of the highest fidelity. We develop MF-UCB, a novel upper confidence bound procedure for this setting, and prove that it naturally adapts to the sequence of available approximations and costs, thus attaining better regret than naive strategies which ignore the approximations. For instance, in the above online advertising example, MF-UCB would use the lower fidelities to quickly eliminate suboptimal ads and reserve the larger, expensive experiments for a small set of promising candidates. We complement this result with a lower bound and show that MF-UCB is nearly optimal under certain conditions.

1 Introduction

Since the seminal work of Robbins [11], the multi-armed bandit has become an attractive framework for studying exploration-exploitation trade-offs inherent to tasks arising in online advertising, finance and other fields. In the most basic form of the K-armed bandit [9, 12], we have a set K = {1, ..., K} of K arms (e.g. K ads in online advertising). At each time step t = 1, 2, ..., an arm is played and a corresponding reward is realised. The goal is to design a strategy of plays that minimises the regret after n plays.
The regret is the comparison, in expectation, of the realised reward against an oracle that always plays the best arm. The well known Upper Confidence Bound (UCB) algorithm [3] achieves regret O(K log(n)) after n plays (ignoring mean rewards) and is minimax optimal [9]. In this paper, we propose a new take on this important problem. In many practical scenarios of interest, one can associate a cost to playing each arm. Furthermore, in many of these scenarios, one might have access to cheaper approximations to the outcome of the arms. For instance, in online advertising the goal is to maximise the cumulative number of clicks over a given time period. Conventionally, an arm pull may be thought of as the display of an ad for a specific time, say one hour. However, we may approximate its hourly performance by displaying the ad for shorter periods. This estimate is biased (and possibly noisy), as displaying an ad for longer intervals changes user behaviour. It can nonetheless be useful in gauging the long run click through rate. We can also obtain biased estimates of an ad by displaying it only to certain geographic regions or age groups. Similarly, one might consider algorithm selection for machine learning problems [4], where the goal is to be competitive with the best among a set of learning algorithms for a task. Here, one might obtain cheaper approximate estimates of the performance of an algorithm via cheaper versions using less data or computation. In this paper, we will refer to such approximations as fidelities. Consider a 2-fidelity problem where the cost at the low fidelity is λ^(1) and the cost at the high fidelity is λ^(2). We will present a cost weighted notion of regret for this setting for a strategy that expends a capital of Λ units. A classical K-armed bandit strategy such as UCB, which only uses the highest fidelity, can obtain at best O(λ^(2) K log(Λ/λ^(2))) regret [9].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
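As a point of reference for what follows, the classical single-fidelity UCB strategy described above can be sketched as follows. This is a minimal illustration with Bernoulli rewards and the standard sqrt(ρ log t / s) confidence width for sub-Gaussian noise; the function name and parameter values are ours, not the paper's:

```python
import math
import random

def ucb(means, horizon, rho=2.0, seed=0):
    """Play a K-armed Bernoulli bandit with a UCB rule; return play counts."""
    rng = random.Random(seed)
    K = len(means)
    counts = [0] * K   # number of plays of each arm
    sums = [0.0] * K   # sum of observed rewards per arm
    for t in range(1, horizon + 1):
        # Upper confidence bound: empirical mean + confidence width.
        # Unplayed arms get an infinite bound so every arm is tried once.
        def bound(k):
            if counts[k] == 0:
                return float("inf")
            return sums[k] / counts[k] + math.sqrt(rho * math.log(t) / counts[k])
        arm = max(range(K), key=bound)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb([0.2, 0.5, 0.8], horizon=2000)
# The best arm (index 2) should receive the bulk of the plays; the
# suboptimal arms are played only O(log n / gap^2) times each.
```

Each suboptimal arm is played only logarithmically often, which is the O(K log(n)) regret behaviour cited above.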
In contrast, this paper will present multi-fidelity strategies that achieve O((λ^(1) K + λ^(2) |K_g|) log(Λ/λ^(2))) regret. Here K_g is a (typically) small subset of arms with high expected reward that can be identified using plays at the (cheaper) low fidelity. When |K_g| < K and λ^(1) < λ^(2), such a strategy will outperform the more standard UCB algorithms. Intuitively, this is achieved by using the lower fidelities to eliminate several “bad” arms and reserving expensive higher fidelity plays for a small subset of the most promising arms. We formalise the above intuitions in the sequel. Our main contributions are,

1. A novel formalism for studying bandit tasks when one has access to multiple fidelities for each arm, with each successive fidelity providing a better approximation to the most expensive one.
2. A new algorithm that we call Multi-Fidelity Upper Confidence Bound (MF-UCB) that adapts the classical Upper Confidence Bound (UCB) strategies to our multi-fidelity setting. Empirically, we demonstrate that our algorithm outperforms naive UCB in simulations.
3. A theoretical characterisation of the performance of MF-UCB which shows that the algorithm (a) uses the lower fidelities to explore all arms and eliminate arms with low expected reward, and (b) reserves the higher fidelity plays for arms with rewards close to the optimal value. We derive a lower bound on the regret and demonstrate that MF-UCB is near-optimal on this problem.

Related Work

The K-armed bandit has been studied extensively in the past [1, 9, 11]. There has been a flurry of work on upper confidence bound (UCB) methods [2, 3], which adopt the optimism in the face of uncertainty principle for bandits. For readers unfamiliar with UCB methods, we recommend Chapter 2 of Bubeck and Cesa-Bianchi [5]. Our work in this paper builds on UCB ideas, but the multi-fidelity framework poses significant new algorithmic and theoretical challenges.
There has been some interest in multi-fidelity methods for optimisation in many applied domains of research [7, 10]. However, these works do not formalise or analyse notions of regret in the multi-fidelity setting. Multi-fidelity methods are used in the robotics community for reinforcement learning tasks by modeling each fidelity as a Markov decision process [6]. Zhang and Chaudhuri [16] study active learning with a cheap weak labeler and an expensive strong labeler. The objective of these papers, however, is not to handle the exploration-exploitation trade-off inherent to the bandit setting. A line of work on budgeted multi-armed bandits [13, 15] studies a variant of the K-armed bandit where each arm has a random reward and cost and the goal is to play the arm with the highest reward/cost ratio as much as possible. This is different from our setting, where each arm has multiple fidelities which serve as approximations. Recently, in Kandasamy et al. [8] we extended ideas in this work to analyse multi-fidelity bandits with Gaussian process payoffs.

2 The Stochastic K-armed Multi-fidelity Bandit

In the classical K-armed bandit, each arm k ∈ K = {1, ..., K} is associated with a real valued distribution θ_k with mean µ_k. Let K⋆ = argmax_{k∈K} µ_k be the set of optimal arms, k⋆ ∈ K⋆ be an optimal arm and µ⋆ = µ_{k⋆} denote the optimal mean value. A bandit strategy would play an arm I_t ∈ K at each time step t and observe a sample from θ_{I_t}. Its goal is to maximise the sum of expected rewards after n time steps, Σ_{t=1}^n µ_{I_t}, or equivalently minimise the cumulative pseudo-regret Σ_{t=1}^n (µ⋆ − µ_{I_t}) for all values of n. In other words, the objective is to be competitive, in expectation, against an oracle that plays an optimal arm all the time. In this work we differ from the usual bandit setting in the following aspect. For each arm k, we have access to M − 1 successive approximations θ_k^(1), θ_k^(2), ..., θ_k^(M−1) to the desired distribution θ_k^(M) = θ_k.
We will refer to these approximations as fidelities. Clearly, these approximations are meaningful only if they give us some information about θ_k^(M). In what follows, we will assume that the mth fidelity mean of an arm is within ζ^(m), a known quantity, of its highest fidelity mean, where the ζ^(m), decreasing with m, characterise the successive approximations. That is, |µ_k^(M) − µ_k^(m)| ≤ ζ^(m) for all k ∈ K and m = 1, ..., M, where ζ^(1) > ζ^(2) > ··· > ζ^(M) = 0 and the ζ^(m)'s are known. It is possible for the lower fidelities to be misleading under this assumption: there could exist an arm k with µ_k^(M) < µ⋆ = µ_{k⋆}^(M) but with µ_k^(m) > µ⋆ and/or µ_k^(m) > µ_{k⋆}^(m) for some m < M. In other words, we wish to explicitly account for the biases introduced by the lower fidelities, and not treat them as just a higher variance observation of an expensive experiment. This problem of course becomes interesting only when lower fidelities are more attractive than higher fidelities in terms of some notion of cost. Towards this end, we will assign a cost λ^(m) (such as advertising time, money etc.) to playing an arm at fidelity m, where λ^(1) < λ^(2) < ··· < λ^(M).

Notation: T_{k,t}^(m) denotes the number of plays at arm k, at fidelity m, until t time steps. T_{k,t}^(>m) is the number of plays at fidelities greater than m. Q_t^(m) = Σ_{k∈K} T_{k,t}^(m) is the number of fidelity m plays at all arms until time t. X̄_{k,s}^(m) denotes the mean of s samples drawn from θ_k^(m). Denote ∆_k^(m) = µ⋆ − µ_k^(m) − ζ^(m). When s refers to the number of plays of an arm, we will take 1/s = ∞ if s = 0. Ā denotes the complement of a set A ⊂ K. While discussing the intuitions in our proofs and theorems we will use ≍, ≲, ≳ to denote equality and inequalities ignoring constants.

Regret in the multi-fidelity setting: A strategy for a multi-fidelity bandit problem, at time t, produces an arm-fidelity pair (I_t, m_t), where I_t ∈ K and m_t ∈ {1, ..., M}, and observes a sample X_t drawn (independently of everything else) from the distribution θ_{I_t}^(m_t). The choice of (I_t, m_t) could depend on previous arm-observation-fidelity tuples {(I_i, X_i, m_i)}_{i=1}^{t−1}. The multi-fidelity setting calls for a new notion of regret. For any strategy A that expends Λ units of the resource, we will define the pseudo-regret R(Λ, A) as follows. Let q_t denote the instantaneous pseudo-reward at time t and r_t = µ⋆ − q_t denote the instantaneous pseudo-regret. We will discuss choices for q_t shortly. Any notion of regret in the multi-fidelity setting needs to account for this instantaneous regret along with the cost of the fidelity at which we played at time t, i.e. λ^(m_t). Moreover, we should receive no reward (maximum regret) for any unused capital. These observations lead to the following definition,

R(Λ, A) = Λµ⋆ − Σ_{t=1}^N λ^(m_t) q_t = (Λ − Σ_{t=1}^N λ^(m_t)) µ⋆ + Σ_{t=1}^N λ^(m_t) r_t.    (1)

We denote the first term on the right hand side by r̃(Λ, A) and the second by R̃(Λ, A). Above, N is the (random) number of plays within capital Λ by A, i.e. the largest n such that Σ_{t=1}^n λ^(m_t) ≤ Λ. To motivate our choice of q_t we consider an online advertising example where λ^(m) is the advertising time at fidelity m and µ_k^(m) is the expected number of clicks per unit time. While we observe from θ_{I_t}^(m_t) at time t, we wish to reward the strategy according to its highest fidelity distribution θ_{I_t}^(M). Therefore, regardless of which fidelity we play, we set q_t = µ_{I_t}^(M). Here, we are competing against an oracle which plays an optimal arm at any fidelity all the time. Note that we might have chosen q_t to be µ_{I_t}^(m_t). However, this does not reflect the motivating applications for the multi-fidelity setting that we consider. For instance, a clickbait ad might receive a high number of clicks in the short run, but its long term performance might be poor. Furthermore, for such a choice, we may as well ignore the rich structure inherent to the multi-fidelity setting and simply play the arm argmax_{m,k} µ_k^(m) at each time.
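As a concrete check of the regret definition in (1), the following sketch computes the pseudo-regret of a play sequence and verifies the decomposition into the unused-capital term r̃(Λ, A) and the cost-weighted term R̃(Λ, A). Function and variable names, and the toy numbers, are ours, purely illustrative:

```python
def pseudo_regret(plays, mu_top, lambdas, mu_star, capital):
    """Pseudo-regret R(Lambda, A) of eq. (1).

    plays   -- list of (arm, fidelity) pairs (I_t, m_t), fidelities 1-indexed
    mu_top  -- mu_k^(M): highest-fidelity mean of each arm
    lambdas -- lambdas[m] is the cost lambda^(m) of a fidelity-m play
    """
    spent = sum(lambdas[m] for _, m in plays)
    # q_t = mu_{I_t}^(M): the strategy is rewarded at the highest fidelity.
    reward = sum(lambdas[m] * mu_top[k] for k, m in plays)
    residual = (capital - spent) * mu_star                                # r~(Lambda, A)
    weighted = sum(lambdas[m] * (mu_star - mu_top[k]) for k, m in plays)  # R~(Lambda, A)
    # The two decompositions of eq. (1) must agree.
    assert abs((residual + weighted) - (capital * mu_star - reward)) < 1e-9
    return capital * mu_star - reward

# Two plays: arm 0 at fidelity 1, then arm 1 (the optimal arm) at fidelity 2.
r = pseudo_regret(plays=[(0, 1), (1, 2)], mu_top=[0.5, 0.9],
                  lambdas={1: 1.0, 2: 10.0}, mu_star=0.9, capital=11.0)
# r = 11*0.9 - (1*0.5 + 10*0.9) = 0.4
```

Here the capital is fully spent, so all of the regret comes from the cheap play on the suboptimal arm.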
There are of course other choices for q_t that result in very different notions of regret; we discuss this briefly at the end of Section 7.

The distributions θ_k^(m) need to be well behaved for the problem to be tractable. We will assume that they satisfy concentration inequalities of the following form. For all ϵ > 0 and all m, k,

P(X̄_{k,s}^(m) − µ_k^(m) > ϵ) < ν e^{−s ψ(ϵ)},    P(X̄_{k,s}^(m) − µ_k^(m) < −ϵ) < ν e^{−s ψ(ϵ)}.    (2)

Here ν > 0 and ψ is an increasing function with ψ(0) = 0 that grows at least linearly, ψ(x) ∈ Ω(x). For example, if the distributions are sub-Gaussian, then ψ(x) ∈ Θ(x²). The performance of a multi-fidelity strategy which switches from low to high fidelities can be worsened by artificially inserted fidelities. Consider a scenario where λ^(m+1) is only slightly larger than λ^(m) and ζ^(m+1) is only slightly smaller than ζ^(m). This situation is unfavourable, since there isn't much that can be inferred from the (m+1)th fidelity that cannot already be inferred from the mth by expending the same cost. We impose the following regularity condition to avoid such situations.

Assumption 1. The ζ^(m)'s decay fast enough such that Σ_{i=1}^m 1/ψ(ζ^(i)) ≤ 1/ψ(ζ^(m+1)) for all m < M.

Assumption 1 is not necessary to analyse our algorithm; however, the performance of MF-UCB when compared to UCB is most appealing when the above holds. In cases where M is small enough to be treated as a constant, the assumption is not necessary. For sub-Gaussian distributions, the condition is satisfied for an exponentially decaying sequence (ζ^(1), ζ^(2), ...) such as (1/√2, 1/2, 1/(2√2), ...). Our goal is to design a strategy A₀ that has low expected pseudo-regret E[R(Λ, A₀)] for all values of (sufficiently large) Λ, i.e. the equivalent of an anytime strategy, as opposed to a fixed time horizon strategy, in the usual bandit setting. The expectation is over the observed rewards, which also dictate the number of plays N.
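Assumption 1 is easy to check numerically. A small sketch for the sub-Gaussian example above, taking ψ(x) = x² with constants ignored (an illustrative simplification of ours) and treating ζ^(M) = 0 as an infinite right-hand side:

```python
import math

def satisfies_assumption_1(zetas, psi=lambda x: x * x):
    """Check sum_{i<=m} 1/psi(zeta^(i)) <= 1/psi(zeta^(m+1)) for all m < M."""
    inv = [1.0 / psi(z) if psi(z) > 0 else math.inf for z in zetas]
    running = 0.0
    for m in range(len(zetas) - 1):
        running += inv[m]
        if running > inv[m + 1]:
            return False
    return True

# Exponentially decaying zetas (1/sqrt(2), 1/2, 1/(2 sqrt(2))) with zeta^(M) = 0:
# 1/psi = (2, 4, 8, inf); 2 <= 4 and 2 + 4 <= 8, so the assumption holds.
ok = satisfies_assumption_1([2 ** -0.5, 0.5, 2 ** -1.5, 0.0])
```

A slowly decaying sequence such as (0.5, 0.45, 0.4, 0) fails the check, matching the intuition that closely spaced fidelities are unhelpful.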
From now on, for simplicity, we will write R(Λ) when A is clear from context and refer to it just as regret.

3 The Multi-Fidelity Upper Confidence Bound (MF-UCB) Algorithm

As the name suggests, the MF-UCB algorithm maintains an upper confidence bound corresponding to µ_k^(m) for each m ∈ {1, ..., M} and k ∈ K based on its previous plays. Following UCB strategies [2, 3], we define the following set of upper confidence bounds,

B_{k,t}^(m)(s) = X̄_{k,s}^(m) + ψ⁻¹(ρ log t / s) + ζ^(m), for all m ∈ {1, ..., M}, k ∈ K,
B_{k,t} = min_{m=1,...,M} B_{k,t}^(m)(T_{k,t−1}^(m)).    (3)

Here ρ is a parameter of our algorithm and ψ is from (2). Each B_{k,t}^(m)(T_{k,t−1}^(m)) provides a high probability upper bound on µ_k^(M), with their minimum B_{k,t} giving the tightest bound (see Appendix A). Similar to UCB, at time t we play the arm I_t with the highest upper bound, I_t = argmax_{k∈K} B_{k,t}. Since our setup has multiple fidelities associated with each arm, the algorithm needs to determine at each time t which fidelity m_t to play the chosen arm I_t at. For this, consider an arbitrary fidelity m < M. The ζ^(m) conditions on µ_k^(m) imply a constraint on the value of µ_k^(M). If, at fidelity m, the uncertainty interval ψ⁻¹(ρ log(t)/T_{I_t,t−1}^(m)) is large, then we have not constrained µ_{I_t}^(M) sufficiently well yet; there is more information to be gleaned about µ_{I_t}^(M) from playing the arm I_t at fidelity m. On the other hand, playing at fidelity m indefinitely will not help us much, since the ζ^(m) elongation of the confidence band caps off how much we can learn about µ_{I_t}^(M) from fidelity m; i.e. even if we knew µ_{I_t}^(m), we will have only constrained µ_{I_t}^(M) to within a ±ζ^(m) interval. Our algorithm captures this natural intuition. Having selected I_t, we begin checking at the first fidelity. If ψ⁻¹(ρ log(t)/T_{I_t,t−1}^(1)) is smaller than a threshold γ^(1), we proceed to check the second fidelity, continuing in a similar fashion. If at any point ψ⁻¹(ρ log(t)/T_{I_t,t−1}^(m)) ≥ γ^(m), we play I_t at fidelity m_t = m.
If we go all the way to fidelity M, we play at m_t = M. The resulting procedure is summarised below in Algorithm 1.

Algorithm 1 MF-UCB
• for t = 1, 2, ...
  1. Choose I_t ∈ argmax_{k∈K} B_{k,t}. (See equation (3).)
  2. m_t = min_m { m | ψ⁻¹(ρ log t / T_{I_t,t−1}^(m)) ≥ γ^(m) ∨ m = M }. (See equation (4).)
  3. Play X ∼ θ_{I_t}^(m_t).

Choice of γ^(m): In our algorithm, we choose

γ^(m) = ψ⁻¹( (λ^(m)/λ^(m+1)) ψ(ζ^(m)) ).    (4)

To motivate this choice, note that if ∆_k^(m) = µ⋆ − µ_k^(m) − ζ^(m) > 0, then we can conclude that arm k is not optimal. Step 2 of the algorithm attempts to eliminate arms for which ∆_k^(m) ≳ γ^(m) from plays above the mth fidelity. If γ^(m) is too large, then we would not eliminate a sufficient number of arms, whereas if it were too small we could end up playing a suboptimal arm k (for which µ_k^(m) > µ⋆) too many times at fidelity m. As will be revealed by our analysis, the given choice represents an optimal tradeoff under the given assumptions.

[Figure 1: Illustration of the partition K^(m) for an M = 4 fidelity problem. The sets J_{ζ^(m)+2γ^(m)}^(m) are indicated next to their boundaries. K^(1), K^(2), K^(3), K^(4) are shown in yellow, green, red and purple respectively. The optimal arms K⋆ are shown as a black circle.]

4 Analysis

We will be primarily concerned with the term R̃(Λ, A) = R̃(Λ) from (1). r̃(Λ, A) is a residual term; it is an artefact of the fact that after the (N+1)th play, the spent capital would have exceeded Λ. For any algorithm that operates oblivious to a fixed capital, it can be bounded by λ^(M) µ⋆, which is negligible compared to R̃(Λ). Accordingly, we have the following expression for R̃(Λ):

R̃(Λ) = Σ_{k∈K} ∆_k^(M) ( Σ_{m=1}^M λ^(m) T_{k,N}^(m) ).    (5)

Central to our analysis will be the following partitioning of K. First denote the set of arms whose fidelity m mean is within η of µ⋆ by J_η^(m) = {k ∈ K : µ⋆ − µ_k^(m) ≤ η}.
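The fidelity-selection rule in step 2 of Algorithm 1, together with the choice of γ^(m) in (4), can be sketched as follows. We take the sub-Gaussian case ψ(x) = x², so ψ⁻¹(y) = √y, with constants ignored; the helper names and numbers are ours, for illustration only:

```python
import math

def gamma(m, lambdas, zetas):
    """gamma^(m) = psi^{-1}((lambda^(m)/lambda^(m+1)) psi(zeta^(m))), psi(x) = x^2."""
    return math.sqrt(lambdas[m] / lambdas[m + 1] * zetas[m] ** 2)

def choose_fidelity(t, counts, lambdas, zetas, rho=2.0):
    """Step 2 of MF-UCB: lowest fidelity whose uncertainty still exceeds gamma^(m).

    counts[m] is T^(m)_{I_t,t-1}, the past plays of the chosen arm at fidelity m
    (dicts keyed by fidelities 1..M)."""
    M = len(lambdas)
    for m in range(1, M):
        s = counts.get(m, 0)
        uncertainty = math.inf if s == 0 else math.sqrt(rho * math.log(t) / s)
        if uncertainty >= gamma(m, lambdas, zetas):
            return m   # still uncertain at fidelity m: keep playing cheaply
    return M           # all lower fidelities resolved: pay for the top fidelity

lambdas = {1: 1.0, 2: 10.0}
zetas = {1: 1 / math.sqrt(2), 2: 0.0}
# Few fidelity-1 plays -> large uncertainty -> stay at fidelity 1.
m_early = choose_fidelity(t=100, counts={1: 10}, lambdas=lambdas, zetas=zetas)
# Many fidelity-1 plays -> uncertainty below gamma^(1) -> move to fidelity 2.
m_late = choose_fidelity(t=100, counts={1: 500}, lambdas=lambdas, zetas=zetas)
```

With these numbers γ^(1) = √(0.1 · 0.5) ≈ 0.224, so roughly 184 fidelity-1 plays suffice before the algorithm escalates to the expensive fidelity.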
Define K^(1) ≜ J̄_{ζ^(1)+2γ^(1)}^(1) = {k ∈ K : ∆_k^(1) > 2γ^(1)} to be the arms whose first fidelity mean µ_k^(1) is at least ζ^(1) + 2γ^(1) below the optimum µ⋆. Then we recursively define,

K^(m) ≜ J̄_{ζ^(m)+2γ^(m)}^(m) ∩ ( ∩_{ℓ=1}^{m−1} J_{ζ^(ℓ)+2γ^(ℓ)}^(ℓ) ), ∀ m ≤ M−1,
K^(M) ≜ K̄⋆ ∩ ( ∩_{ℓ=1}^{M−1} J_{ζ^(ℓ)+2γ^(ℓ)}^(ℓ) ).

Observe that for all k ∈ K^(m), ∆_k^(m) > 2γ^(m) and ∆_k^(ℓ) ≤ 2γ^(ℓ) for all ℓ < m. For what follows, for any k ∈ K, ⟦k⟧ will denote the partition k belongs to, i.e. ⟦k⟧ = m s.t. k ∈ K^(m). We will see that K^(m) are the arms that will be played at the mth fidelity but can be excluded from fidelities higher than m using information at fidelity m. See Fig. 1 for an illustration of these partitions.

4.1 Regret Bound for MF-UCB

Recall that N = Σ_{m=1}^M Q_N^(m) is the total (random) number of plays by a multi-fidelity strategy within capital Λ. Let n_Λ = ⌊Λ/λ^(M)⌋ be the (non-random) number of plays by any strategy that operates only on the highest fidelity. Since λ^(m) < λ^(M) for all m < M, N could be large for an arbitrary multi-fidelity method. However, our analysis reveals that for MF-UCB, N ≲ n_Λ with high probability. The following theorem bounds R for MF-UCB. The proof is given in Appendix A. For clarity, we ignore the constants, but they are fleshed out in the proofs.

Theorem 2 (Regret Bound for MF-UCB). Let ρ > 4. There exists Λ₀ depending on the λ^(m)'s such that for all Λ > Λ₀, MF-UCB satisfies,

E[R(Λ)] / log(n_Λ) ≲ Σ_{k∉K⋆} ∆_k^(M) · λ^(⟦k⟧)/ψ(∆_k^(⟦k⟧)) ≍ Σ_{m=1}^M Σ_{k∈K^(m)} ∆_k^(M) λ^(m)/ψ(∆_k^(m)).

Let us compare the above bound to UCB, whose regret is E[R(Λ)] / log(n_Λ) ≍ Σ_{k∉K⋆} ∆_k^(M) λ^(M)/ψ(∆_k^(M)). We will first argue that MF-UCB does not do significantly worse than UCB in the worst case. Modulo the ∆_k^(M) log(n_Λ) terms, the regret of MF-UCB due to arm k is R_{k,MF-UCB} ≍ λ^(⟦k⟧)/ψ(∆_k^(⟦k⟧)). Consider any k ∈ K^(m), m < M, for which ∆_k^(m) > 2γ^(m).
Since ∆_k^(M) ≤ ∆_k^(⟦k⟧) + 2ζ^(⟦k⟧) ≲ ψ⁻¹( (λ^(⟦k⟧+1)/λ^(⟦k⟧)) ψ(∆_k^(⟦k⟧)) ), a (loose) lower bound on the corresponding quantity for UCB is R_{k,UCB} ≍ λ^(M)/ψ(∆_k^(M)) ≳ (λ^(M)/λ^(⟦k⟧+1)) R_{k,MF-UCB}. Therefore, for any k ∈ K^(m), m < M, MF-UCB is at most a constant times worse than UCB. However, whenever ∆_k^(⟦k⟧) is comparable to or larger than ∆_k^(M), MF-UCB outperforms UCB by a factor of λ^(⟦k⟧)/λ^(M) on arm k. As can be inferred from the theorem, most of the cost invested by MF-UCB on arm k is at the ⟦k⟧th fidelity. For example, in Fig. 1, MF-UCB would not play the yellow arms K^(1) beyond the first fidelity (more than a constant number of times). Similarly, all green and red arms are played mostly at the second and third fidelities respectively. Only the arms in K^(4) are played at the fourth (most expensive) fidelity, whereas UCB plays all arms at the fourth fidelity. Since lower fidelities are cheaper, MF-UCB achieves better regret than UCB. It is essential to note here that ∆_k^(M) is small for arms in K^(M). These arms are close to the optimum and require more effort to distinguish than arms that are far away. MF-UCB, like UCB, invests log(n_Λ) λ^(M)/ψ(∆_k^(M)) capital in those arms. That is, the multi-fidelity setting does not help us significantly with the “hard-to-distinguish” arms. That said, in cases where K is very large and the set K^(M) is small, the bound for MF-UCB can be appreciably better than that for UCB.

4.2 Lower Bound

Since N ≥ n_Λ = ⌊Λ/λ^(M)⌋, any multi-fidelity strategy which plays a suboptimal arm a polynomial number of times at any fidelity after n time steps will have worse regret than MF-UCB (and UCB). Therefore, in our lower bound we will only consider strategies which satisfy the following condition.

Assumption 3. Consider the strategy after n plays at any fidelity. For any arm with ∆_k^(M) > 0, we have E[Σ_{m=1}^M T_{k,n}^(m)] ∈ o(n^a) for all a > 0.
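To make the partition K^(1), ..., K^(M), K⋆ of Section 4 concrete, here is a small sketch that buckets an arm given its fidelity means. The helper name and toy numbers are ours; ψ(x) = x² is again assumed only for illustration (it only enters through the precomputed γ^(m) values):

```python
def bucket(mu, mu_star, zetas, gammas):
    """Return the partition index of an arm: m if k is in K^(m), 'star' if optimal.

    mu     -- [mu^(1), ..., mu^(M)] for this arm
    zetas  -- [zeta^(1), ..., zeta^(M)] (zeta^(M) = 0)
    gammas -- [gamma^(1), ..., gamma^(M-1)]
    """
    M = len(mu)
    if mu[M - 1] == mu_star:
        return "star"
    for m in range(1, M):                           # fidelities 1 .. M-1
        delta = mu_star - mu[m - 1] - zetas[m - 1]  # Delta_k^(m)
        if delta > 2 * gammas[m - 1]:
            return m    # far at fidelity m, close (Delta <= 2 gamma) at all lower ones
    return M            # suboptimal, but close at every lower fidelity

mu_star, zetas, gammas = 0.9, [0.2, 0.0], [0.1]
buckets = [bucket(mu, mu_star, zetas, gammas)
           for mu in ([0.3, 0.4], [0.8, 0.6], [0.85, 0.9])]
# arm 0: Delta^(1) = 0.9 - 0.3 - 0.2 = 0.4 > 2*0.1  -> K^(1)
# arm 1: Delta^(1) = -0.1 <= 0.2, suboptimal at top -> K^(2)
# arm 2: mu^(M) = mu_star                           -> optimal arm
```

Returning the first fidelity m with ∆_k^(m) > 2γ^(m) is exactly the recursive definition above, since membership in K^(m) requires ∆_k^(ℓ) ≤ 2γ^(ℓ) at every lower fidelity ℓ < m.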
For our lower bound we will consider a set of Bernoulli distributions θ_k^(m) for each fidelity m and each arm k, with mean µ_k^(m). It is known that for Bernoulli distributions ψ(ϵ) ∈ Θ(ϵ²) [14]. To state our lower bound we will further partition the set K^(m) into two sets K^(m)_✓, K^(m)_✗ as follows,

K^(m)_✓ = {k ∈ K^(m) : ∆_k^(ℓ) ≤ 0 ∀ ℓ < m},    K^(m)_✗ = {k ∈ K^(m) : ∃ ℓ < m s.t. ∆_k^(ℓ) > 0}.

For any k ∈ K^(m), our lower bound, given below, differs depending on which set k belongs to.

Theorem 4 (Lower bound for R(Λ)). Consider any set of Bernoulli reward distributions with µ⋆ ∈ (1/2, 1) and ζ^(1) < 1/2. Then, for any strategy satisfying Assumption 3, the following holds.

lim inf_{Λ→∞} E[R(Λ)] / log(n_Λ) ≥ c · Σ_{m=1}^M [ Σ_{k∈K^(m)_✓} ∆_k^(M) λ^(m)/(∆_k^(m))² + Σ_{k∈K^(m)_✗} ∆_k^(M) min_{ℓ∈L_m(k)} λ^(ℓ)/(∆_k^(ℓ))² ].    (6)

Here c is a problem dependent constant, and L_m(k) = {ℓ < m : ∆_k^(ℓ) > 0} ∪ {m} is the union of the mth fidelity and all fidelities smaller than m for which ∆_k^(ℓ) > 0. Comparing this with Theorem 2, we find that MF-UCB meets the lower bound on all arms k ∈ K^(m)_✓, ∀ m. However, it may be loose on any k ∈ K^(m)_✗. The gap can be explained as follows. For k ∈ K^(m)_✗, there exists some ℓ < m such that 0 < ∆_k^(ℓ) < 2γ^(ℓ). As explained previously, the switching criterion of MF-UCB ensures that we do not invest too much effort trying to determine whether ∆_k^(ℓ) < 0, since ∆_k^(ℓ) could be very small; that is, we proceed to the next fidelity only if we cannot conclude ∆_k^(ℓ) ≲ γ^(ℓ). However, since λ^(m) > λ^(ℓ), it might be the case that λ^(ℓ)/(∆_k^(ℓ))² < λ^(m)/(∆_k^(m))² even though ∆_k^(m) > 2γ^(m). Consider, for example, a two fidelity problem where ∆ = ∆_k^(1) = ∆_k^(2) < 2√(λ^(1)/λ^(2)) ζ^(1). Here it makes sense to distinguish the arm as suboptimal at the first fidelity with λ^(1) log(n_Λ)/∆² capital instead of λ^(2) log(n_Λ)/∆² at the second fidelity. However, MF-UCB distinguishes this arm at the higher fidelity, as ∆ < 2γ^(1), and therefore does not meet the lower bound on this arm.
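The quantity min_{ℓ∈L_m(k)} λ^(ℓ)/(∆_k^(ℓ))² appearing in (6) is simple to evaluate; the sketch below works through a version of the two-fidelity gap example above (ψ(ϵ) ≍ ϵ² for Bernoulli rewards; the function name and the numbers are ours, illustrative only):

```python
def lower_bound_cost(deltas, lambdas, m):
    """min over L_m(k) = {l < m : Delta^(l) > 0} U {m} of lambda^(l) / Delta^(l)^2.

    deltas[l] and lambdas[l] are Delta_k^(l) and lambda^(l), keyed by fidelity."""
    L = [l for l in range(1, m) if deltas[l] > 0] + [m]
    return min(lambdas[l] / deltas[l] ** 2 for l in L)

# Two fidelities, Delta^(1) = Delta^(2) = 0.1, costs 1 and 100:
# the cheap fidelity already separates the arm, so the lower bound charges
# lambda^(1)/Delta^2 = 100, while a fidelity-2 test would cost 100/0.01 = 10000.
cost = lower_bound_cost(deltas={1: 0.1, 2: 0.1}, lambdas={1: 1.0, 2: 100.0}, m=2)
```

This makes the gap explicit: the lower bound charges only the cheap-fidelity price for this arm, while MF-UCB pays the fidelity-2 price, a factor λ^(2)/λ^(1) more.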
While it might seem tempting to switch based on estimates of ∆_k^(1), ∆_k^(2), this idea is not desirable, as estimating ∆_k^(2) for an arm requires log(n_Λ)/ψ(∆_k^(2)) samples at the second fidelity; this is exactly what we are trying to avoid for the majority of the arms via the multi-fidelity setting. We leave it as an open problem to resolve this gap.

Table 1: Bounds on the expected number of plays for each k ∈ K^(m) (columns) at each fidelity (rows) after n time steps (i.e. n plays at any fidelity) in MF-UCB.

                | k ∈ K^(1)         | k ∈ K^(2)         | k ∈ K^(m)         | k ∈ K^(M)         | k ∈ K⋆
E[T^(1)_{k,n}]  | log(n)/ψ(∆_k^(1)) | log(n)/ψ(γ^(1))   | log(n)/ψ(γ^(1))   | log(n)/ψ(γ^(1))   | log(n)/ψ(γ^(1))
E[T^(2)_{k,n}]  | O(1)              | log(n)/ψ(∆_k^(2)) | log(n)/ψ(γ^(2))   | log(n)/ψ(γ^(2))   | log(n)/ψ(γ^(2))
E[T^(m)_{k,n}]  | O(1)              | O(1)              | log(n)/ψ(∆_k^(m)) | log(n)/ψ(γ^(m))   | log(n)/ψ(γ^(m))
E[T^(M)_{k,n}]  | O(1)              | O(1)              | O(1)              | log(n)/ψ(∆_k^(M)) | Ω(n)

5 Proof Sketches

5.1 Theorem 2

First we analyse MF-UCB after n plays (at any fidelity) and control the number of plays of an arm at the various fidelities depending on which K^(m) it belongs to. To that end we prove the following.

Lemma 5 (Bounding E[T^(m)_{k,n}] – informal). After n time steps of MF-UCB, for any k ∈ K,

T^(ℓ)_{k,n} ≲ log(n)/ψ(γ^(ℓ)) ∀ ℓ < ⟦k⟧,    E[T^(⟦k⟧)_{k,n}] ≲ log(n)/ψ(∆_k^(⟦k⟧)/2),    E[T^(>⟦k⟧)_{k,n}] ≤ O(1).

The bounds above are illustrated in Table 1. Let R̃_k(Λ) = Σ_{m=1}^M λ^(m) ∆_k^(M) T^(m)_{k,N} be the regret incurred due to arm k, and let R̃_{kn} = E[R̃_k(Λ) | N = n]. Using Lemma 5 we have,

R̃_{kn} / (∆_k^(M) log(n)) ≲ Σ_{ℓ=1}^{⟦k⟧−1} λ^(ℓ)/ψ(γ^(ℓ)) + λ^(⟦k⟧)/ψ(∆_k^(⟦k⟧)/2) + o(1).    (7)

The next step will be to control the number of plays N within capital Λ, which will bound E[log(N)]. While Λ/λ^(1) is an easy bound, we will see that for MF-UCB, N will be on the order of n_Λ = Λ/λ^(M). For this we will use the following high probability bounds on T^(m)_{k,n}.

Lemma 6 (Bounding P(T^(m)_{k,n} > ·) – informal). After n time steps of MF-UCB, for any k ∈ K,

P( T^(⟦k⟧)_{k,n} ≳ x · log(n)/ψ(∆_k^(⟦k⟧)/2) ) ≲ 1/n^{xρ−1},    P( T^(>⟦k⟧)_{k,n} > x ) ≲ 1/x^{ρ−2}.
We bound the number of plays at fidelities less than M via Lemma 6 and obtain n/2 > Σ_{m=1}^{M−1} Q^(m)_n with probability greater than, say, δ for all n ≥ n₀. By setting δ = 1/log(Λ/λ^(1)), we get E[log(N)] ≲ log(n_Λ). The actual argument is somewhat delicate, since δ depends on Λ. This gives an expression for the regret due to arm k of the form (7), with n replaced by n_Λ. Then we argue that the regret incurred by an arm k at fidelities less than ⟦k⟧ (the first term on the RHS of (7)) is dominated by λ^(⟦k⟧)/ψ(∆_k^(⟦k⟧)) (the second term). This is possible due to the design of the sets K^(m) and Assumption 1. While Lemmas 5 and 6 require only ρ > 2, we need ρ > 4 to ensure that Σ_{m=1}^{M−1} Q^(m)_n remains sublinear when we plug in the probabilities from Lemma 6. ρ > 2 is attainable with a more careful design of the sets K^(m). The Λ > Λ₀ condition is needed because initially MF-UCB plays at the lower fidelities, and for small Λ, N could be much larger than n_Λ.

5.2 Theorem 4

First we show that for an arm k with ∆_k^(p) > 0 and ∆_k^(ℓ) ≤ 0 for all ℓ < p, any strategy should satisfy

R_k(Λ) ≳ log(n_Λ) ∆_k^(M) ( min_{ℓ≥p, ∆_k^(ℓ)>0} λ^(ℓ)/(∆_k^(ℓ))² ).

[Figure 2 appears here; plot axis data omitted. Panel titles: K = 500, M = 3, costs = [1; 10; 100]; K = 500, M = 4, costs = [1; 5; 20; 50]; K = 200, M = 2, costs = [1; 10]; K = 1000, M = 5, costs = [1; 3; 10; 30; 100]; last two panels show number of plays per fidelity (m = 1, 2, 3) for MF-UCB and UCB.]

Figure 2: Simulation results on the synthetic problems. The first four panels compare UCB against MF-UCB on four synthetic problems.
The panel titles state K, M and the costs λ^(1), ..., λ^(M). The first two problems used Gaussian rewards and the last two used Bernoulli rewards. The last two panels show the number of plays by UCB and MF-UCB on a K = 500, M = 3 problem with Gaussian observations (corresponding to the first panel).

Here R_k is the regret incurred due to arm k. The proof uses a change of measure argument. The modified construction has Bernoulli distributions with means µ̃_k^(ℓ), ℓ = 1, ..., M, where µ̃_k^(ℓ) = µ_k^(ℓ) for all ℓ < p. Then we push µ̃_k^(ℓ) slightly above µ⋆ − ζ^(ℓ) from ℓ = p all the way to M, so that µ̃_k^(M) > µ⋆. To control the probabilities after changing to µ̃_k^(ℓ), we use the conditions in Assumption 3. Then, for k ∈ K^(m), we argue that λ^(ℓ)/(∆_k^(ℓ))² ≳ λ^(m)/(∆_k^(m))², using once again the design of the sets K^(m). This yields the separate results for k ∈ K^(m)_✓ and k ∈ K^(m)_✗.

6 Some Simulations on Synthetic Problems

We compare UCB against MF-UCB on a series of synthetic problems. The results are given in Figure 2. Due to space constraints, the details of these experiments are given in Appendix C. Note that MF-UCB outperforms UCB on all these problems. Critically, note that the gradient of its regret curve is also smaller than that of UCB, corroborating our theoretical insights. We have also illustrated the number of plays by MF-UCB and UCB at each fidelity for one of these problems. The arms are arranged in increasing order of their µ_k^(M) values. As predicted by our analysis, most of the very suboptimal arms are only played at the lower fidelities. As lower fidelities are cheaper, MF-UCB is able to use more higher fidelity plays at arms close to the optimum than UCB.

7 Conclusion

We introduced a novel framework for studying exploration-exploitation trade-offs when cheaper approximations to a desired experiment are available. We propose an algorithm for this setting, MF-UCB, based on upper confidence bound techniques.
It uses the cheap lower fidelity plays to eliminate several bad arms and reserves the expensive high fidelity queries for a small set of arms with high expected reward, hence achieving better regret than strategies which ignore multi-fidelity information. We complement this result with a lower bound which demonstrates that MF-UCB is near optimal.

Other settings for bandit problems with multi-fidelity evaluations might warrant different definitions of the regret. For example, consider a gold mining robot where each high fidelity play is a real world experiment with the robot and incurs cost λ^(2), while a vastly cheaper computer simulation, which incurs cost λ^(1), approximates the robot's real world behaviour. In applications like this, λ^(1) ≪ λ^(2). However, unlike in our setting, lower fidelity plays may not yield any reward (as simulations do not yield actual gold). Similarly, in clinical trials the regret due to a bad treatment at the high fidelity would be, say, a dead patient, whereas a bad treatment at a lower fidelity may not warrant as large a penalty. These settings are quite challenging, and we wish to work on them going forward.

References

[1] Rajeev Agrawal. Sample Mean Based Index Policies with O(log n) Regret for the Multi-Armed Bandit Problem. Advances in Applied Probability, 1995.
[2] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation Tradeoff Using Variance Estimates in Multi-armed Bandits. Theoretical Computer Science, 2009.
[3] Peter Auer. Using Confidence Bounds for Exploitation-exploration Trade-offs. Journal of Machine Learning Research, 2003.
[4] Yoram Baram, Ran El-Yaniv, and Kobi Luz. Online Choice of Active Learning Algorithms. Journal of Machine Learning Research, 5:255–291, 2004.
[5] Sébastien Bubeck and Nicolò Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning, 2012.
[6] Mark Cutler, Thomas J. Walsh, and Jonathan P. How.
Reinforcement Learning with Multi-Fidelity Simulators. In IEEE International Conference on Robotics and Automation (ICRA), 2014.
[7] D. Huang, T.T. Allen, W.I. Notz, and R.A. Miller. Sequential Kriging Optimization Using Multiple-fidelity Evaluations. Structural and Multidisciplinary Optimization, 2006.
[8] Kirthevasan Kandasamy, Gautam Dasarathy, Junier Oliva, Jeff Schneider, and Barnabás Póczos. Gaussian Process Bandit Optimisation with Multi-fidelity Evaluations. In Advances in Neural Information Processing Systems, 2016.
[9] T. L. Lai and Herbert Robbins. Asymptotically Efficient Adaptive Allocation Rules. Advances in Applied Mathematics, 1985.
[10] Dev Rajnarayan, Alex Haas, and Ilan Kroo. A Multifidelity Gradient-free Optimization Method and Application to Aerodynamic Design. In AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, 2008.
[11] Herbert Robbins. Some Aspects of the Sequential Design of Experiments. Bulletin of the American Mathematical Society, 1952.
[12] W. R. Thompson. On the Likelihood that one Unknown Probability Exceeds Another in View of the Evidence of Two Samples. Biometrika, 1933.
[13] Long Tran-Thanh, Lampros C. Stavrogiannis, Victor Naroditskiy, Valentin Robu, Nicholas R. Jennings, and Peter Key. Efficient Regret Bounds for Online Bid Optimisation in Budget-Limited Sponsored Search Auctions. In UAI, 2014.
[14] Larry Wasserman. All of Statistics: A Concise Course in Statistical Inference. Springer, 2010.
[15] Yingce Xia, Haifang Li, Tao Qin, Nenghai Yu, and Tie-Yan Liu. Thompson Sampling for Budgeted Multi-Armed Bandits. In IJCAI, 2015.
[16] Chicheng Zhang and Kamalika Chaudhuri. Active Learning from Weak and Strong Labelers. In Advances in Neural Information Processing Systems, 2015.
Joint Line Segmentation and Transcription for End-to-End Handwritten Paragraph Recognition
Théodore Bluche, A2iA SAS, 39 rue de la Bienfaisance, 75008 Paris, tb@a2ia.com
Abstract
Offline handwriting recognition systems require cropped text line images for both training and recognition. On the one hand, the annotation of position and transcript at line level is costly to obtain. On the other hand, automatic line segmentation algorithms are prone to errors, compromising the subsequent recognition. In this paper, we propose a modification of the popular and efficient Multi-Dimensional Long Short-Term Memory Recurrent Neural Networks (MDLSTM-RNNs) to enable end-to-end processing of handwritten paragraphs. More particularly, we replace the collapse layer transforming the two-dimensional representation into a sequence of predictions by a recurrent version which can select one line at a time. In the proposed model, a neural network performs a kind of implicit line segmentation by computing attention weights on the image representation. The experiments on paragraphs of the Rimes and IAM databases yield results that are competitive with those of networks trained at line level, and constitute a significant step towards end-to-end transcription of full documents.
1 Introduction
Offline handwriting recognition consists of recognizing a sequence of characters in an image of handwritten text. Unlike printed texts, images of handwriting are difficult to segment into characters. Early methods tried to compute segmentation hypotheses for characters, for example by performing a heuristic over-segmentation followed by a scoring of groups of segments (e.g. in [4]). In the nineties, this kind of approach was progressively replaced by segmentation-free methods, where a whole word image is fed to a system providing a sequence of scores. A lexicon constrains a decoding step, allowing the character sequence to be retrieved.
Some examples are the sliding window approach [25], in which features are extracted from vertical frames of the line image, or space-displacement neural networks [4]. In the last decade, word segmentations were abandoned in favor of complete text line recognition with statistical language models [10]. Nowadays, the state-of-the-art handwriting recognition systems are Multi-Dimensional Long Short-Term Memory Recurrent Neural Networks (MDLSTM-RNNs [18]), which consider the whole image, alternating MDLSTM layers and convolutional layers. The transformation of the 2D structure into a sequence is computed by a simple collapse layer summing the activations along the vertical axis. Connectionist Temporal Classification (CTC [17]) makes it possible to train the network to both align and recognize sequences of characters. These models have become very popular and won the recent evaluations of handwriting recognition [9, 34, 37]. However, current models still need segmented text lines, and full document processing pipelines should include automatic line segmentation algorithms. Although the segmentation of documents into lines is assumed in most descriptions of handwriting recognition systems, several papers or surveys state that it is a crucial step for handwritten text recognition systems [8, 28]. The need for line segmentation to train the recognition system has also motivated several efforts to map a paragraph-level or page-level transcript to line positions in the image (e.g. recently [7, 16]). Handwriting recognition systems evolved from character to word segmentation, and to complete line processing nowadays. The performance has always improved by making fewer segmentation hypotheses. In this paper, we pursue this trend.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
We propose a model for multiline recognition based on the popular MDLSTM-RNNs, augmented with an attention mechanism inspired by the recent models for machine translation [3], image caption generation [38], or speech recognition [11, 12]. In the proposed model, the “collapse” layer is modified with an attention network, providing weights to modulate the importance given to different positions in the input. By iteratively applying this layer to a paragraph image, the network can transcribe each text line in turn, enabling a purely segmentation-free recognition of full paragraphs. We carried out experiments on two public datasets of handwritten paragraphs: Rimes and IAM. We report results that are competitive with the state-of-the-art systems, which use the ground-truth line segmentation. The remainder of this paper is organized as follows. Section 2 presents methods related to the one presented here, in terms of the tackled problem and modeling choices. In Section 3, we introduce the baseline model: MDLSTM-RNNs. In Section 4, we present the proposed modification and give the details of the system. Experimental results are reported in Section 5, followed by a short discussion in Section 6, in which we explain how the system could be improved, and present the challenge of generalizing it to complete documents.
2 Related Work
Our work is clearly related to MDLSTM-RNNs [18], which we improve by replacing the simple collapse layer with a more elaborate mechanism, itself made of MDLSTM layers. The model we propose iteratively performs an implicit line segmentation at the level of intermediate representations. Classical text line segmentation algorithms are mostly based on image processing techniques and heuristics. However, some methods were devised using statistical models and machine learning techniques such as hidden Markov models [8], conditional random fields [21], or neural networks [24, 31, 32].
In our model, the line segmentation is performed implicitly and integrated in the neural network. The intermediate features are shared by the transcription and the segmentation models, and they are jointly trained to minimize the transcription error. Recently, many “attention-based” models were proposed to iteratively select, in an encoded signal, the relevant parts to make the next prediction. This paradigm, already suggested by Fukushima in 1987 [15], was successfully applied to various problems such as machine translation [3], image caption generation [38], speech recognition [11, 12], or cropped words in scene text [27]. Attention mechanisms were also part of systems that can generate or recognize small pieces of handwriting (e.g. a few digits with DRAW [20] or RAM [2], or short online handwritten sequences [19]). Our system is designed to handle long sequences and multiple lines. In the field of computer vision, and particularly object detection and recognition, many neural architectures were proposed to both locate and recognize objects, such as OverFeat [35] or spatial transformer networks (STN [22]). In a sense, our model is quite related to the DenseCap model for image captioning [23], itself similar to STNs. However, we do not aim at explicitly predicting line positions, and STNs do not cope well with a large number of small objects. We recently proposed an attention-based model to transcribe full paragraphs of handwritten text, which predicts each character in turn [6]. Outputting one token at a time turns out to be prohibitive in terms of memory and time consumption for full paragraphs, which typically contain hundreds of characters. In the proposed system, the encoded image is not summarized as a single vector at each timestep, but as a sequence of vectors representing full text lines.
This represents a huge speedup, and a comeback to the original MDLSTM-RNN architecture, in which the collapse layer is augmented with an MDLSTM attention network similar to the one presented in [6].
3 Handwriting Recognition with MDLSTM and CTC
MDLSTM-RNNs [18] were first introduced in the context of handwriting recognition.
Figure 1: MDLSTM-RNN architecture for handwriting recognition. LSTM layers in four scanning directions are followed by convolutions. The feature maps of the top layer are summed in the vertical dimension, and character predictions are obtained after a softmax normalization.
The Multi-Dimensional Long Short-Term Memory layers scan the input in the four possible directions. The LSTM cell inner state and output are computed from the states and outputs of previous positions in the considered horizontal and vertical directions. Each MDLSTM layer is followed by a convolutional layer. At the top of this network, there is one feature map for each character. These maps are collapsed into a sequence of prediction vectors, normalized with a softmax activation. The whole architecture is depicted in Figure 1. The Connectionist Temporal Classification (CTC [17]) algorithm, which considers all possible labellings of the sequence, may be applied to train the network to recognize text lines. The 2D to 1D conversion happens in the collapse layer, which computes a simple aggregation of the feature maps into vector sequences, i.e. maps of height 1. This is achieved by a simple sum across the vertical dimension:

    z_i = Σ_{j=1}^{H} a_{ij}    (1)

where z_i is the i-th output vector and a_{ij} is the input feature vector at coordinates (i, j). All the information in the vertical dimension is reduced to a single vector, regardless of its position in the feature maps, preventing the recognition of multiple lines within this framework.
4 An Iterative Weighted Collapse for End-to-End Handwriting Recognition
In this paper, we replace the sum of Eqn.
1 by a weighted sum, in order to focus on a specific part of the input. The weighted collapse is defined as follows:

    z_i^{(t)} = Σ_{j=1}^{H} ω_{ij}^{(t)} a_{ij}    (2)

where the ω_{ij}^{(t)} are scalar weights between 0 and 1, computed at every time t for each position (i, j). The weights are provided by a recurrent neural network, illustrated in Figure 2, enabling the recognition of a text line at each timestep.
Figure 2: Proposed modification of the collapse layer. While the standard collapse (left, top) computes a simple sum, the weighted collapse (right, bottom) includes a neural network to predict the weights of a weighted sum.
This collapse, weighted with a neural network, may be interpreted as the “attention” module of an attention-based neural network similar to those of [3, 38]. This mechanism is differentiable and can be trained with backpropagation. The complete architecture may be described as follows. An encoder extracts feature maps from the input image I:

    a = (a_{ij})_{(i,j) ∈ [1,W]×[1,H]} = Encoder(I)    (3)

where (i, j) are coordinates in the feature maps. In this work, the Encoder module is an MDLSTM network with the same architecture as the model presented in Section 3. A weighted collapse provides a view of the encoded image at each timestep in the form of a weighted sum of feature vector sequences. The attention network computes a score for the feature vectors at every position:

    α_{ij}^{(t)} = Attention(a, ω^{(t−1)})    (4)

We refer to ω^{(t)} = {ω_{ij}^{(t)}}_{1≤i≤W, 1≤j≤H} as the attention map at time t, whose computation depends not only on the encoded image, but also on the previous attention features. A softmax normalization is applied to each column:

    ω_{ij}^{(t)} = exp(α_{ij}^{(t)}) / Σ_{j'} exp(α_{ij'}^{(t)})    (5)

In this work, the Attention module is an MDLSTM network. This module is applied several times to the features from the encoder. The output of the attention module at iteration t, computed with Eqn. 2, is a sequence of feature vectors z, intended to represent a text line.
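To make one step of the weighted collapse concrete, here is a minimal pure-Python sketch of Eqns. 2 and 5. The feature grid `a` and the attention scores `alpha` are hypothetical inputs supplied directly; in the paper the scores are produced by the MDLSTM attention network, not given by hand.

```python
import math

def weighted_collapse(a, alpha):
    """One step of the weighted collapse (Eqns. 2 and 5).

    a[i][j]     : feature vector at position (i, j); shape W x H x D.
    alpha[i][j] : scalar attention score for position (i, j).
    Returns z with z[i] = sum_j softmax_j(alpha[i])[j] * a[i][j].
    """
    W, H, D = len(a), len(a[0]), len(a[0][0])
    z = []
    for i in range(W):
        # Softmax over the column (vertical dimension j), Eqn. 5.
        exps = [math.exp(alpha[i][j]) for j in range(H)]
        s = sum(exps)
        w = [e / s for e in exps]
        # Weighted sum over the vertical dimension, Eqn. 2.
        z.append([sum(w[j] * a[i][j][d] for j in range(H))
                  for d in range(D)])
    return z

# With uniform scores the weights are all 1/H, so the weighted collapse
# reduces to the standard collapse of Eqn. 1 divided by H.
a = [[[2.0], [4.0]]]                        # W=1 column, H=2 rows, D=1
print(weighted_collapse(a, [[0.0, 0.0]]))   # [[3.0]]
```

With non-uniform scores the output is pulled toward the highly weighted rows, which is exactly how the network can attend to one text line at a time.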
Therefore, we may see this module as a soft line segmentation neural network. The advantages over the neural networks trained for line segmentation [13, 24, 32, 31] are that (i) it works on the same features as those used for the transcription (multi-task encoder) and (ii) it is trained to maximize the transcription accuracy (i.e. more closely related to the goal of handwriting recognition systems, and easily interpretable). A decoder predicts a character sequence from the feature vectors:

    y = Decoder(z)    (6)

where z is the concatenation of z^{(1)}, z^{(2)}, ..., z^{(T)}. Alternatively, the decoder may be applied to each sub-sequence z^{(i)} to get y^{(i)}, and y is the concatenation of y^{(1)}, y^{(2)}, ..., y^{(T)}. In the standard MDLSTM architecture of Section 3, the decoder is a simple softmax. However, a Bidirectional LSTM (BLSTM) decoder could be applied to the collapsed representations. This is particularly interesting in the proposed model, as the BLSTM would potentially process the whole paragraph, allowing a modeling of dependencies across text lines. This model can be trained with CTC. If the line breaks are known in the transcript, the CTC loss could be applied to the segments corresponding to each line prediction. Otherwise, one can directly apply CTC to the whole paragraph. In this work, we opted for the latter strategy, with a BLSTM decoder applied to the concatenation of all collapsing steps.
5 Experiments
5.1 Experimental Setup
We carried out the experiments on two public databases. The IAM database [29] is made of handwritten English texts copied from the LOB corpus. There are 747 documents (6,482 lines) in the training set, 116 documents (976 lines) in the validation set and 336 documents (2,915 lines) in the test set. The Rimes database [1] contains handwritten letters in French. The data consist of a training set of 1,500 paragraphs (11,333 lines), and a test set of 100 paragraphs (778 lines). We held out the last 100 paragraphs of the training set as a validation set.
The networks have the following architecture. The encoder first computes a 2x2 tiling of the input and alternates MDLSTM layers of 4, 20 and 100 units with 2x4 convolutions of 12 and 32 filters with no overlap. The last layer is a linear layer with 80 outputs for IAM and 102 for Rimes. The attention network is an MDLSTM network with 2x16 units in each direction, followed by a linear layer with one output and a softmax on columns (Eqn. 5). The decoder is a BLSTM network with 256 units. Dropout is applied after each LSTM layer [33]. The networks are trained with RMSProp [36] with a base learning rate of 0.001 and mini-batches of 8 examples, to minimize the CTC loss over entire paragraphs. The measure of performance is the Character (or Word) Error Rate (CER%), corresponding to the edit distance between the recognition and the ground truth, normalized by the number of ground-truth characters.
5.2 Impact of the Decoder
In our model, the weighted collapse method is followed by a BLSTM decoder. In this experiment, we compare the baseline system (standard collapse followed by a softmax) with the proposed model. In order to dissociate the impact of the weighted collapse from that of the BLSTM decoder, we also trained an intermediate architecture with a BLSTM layer after the standard collapse, but still limited to text lines.
Table 1: Character Error Rates (%) of CTC-trained RNNs on 150 dpi images. The Standard models are trained on segmented lines. The Attention models are trained on paragraphs.

    Collapse    Decoder            IAM    Rimes
    Standard    Softmax            8.4    4.9
    Standard    BLSTM + Softmax    7.5    4.8
    Attention   BLSTM + Softmax    6.8    2.5

The character error rates (CER%) on the validation sets are reported in Table 1 for 150 dpi images. We observe that the proposed model outperforms the baseline by a large margin (a relative 20% improvement on IAM, 50% on Rimes), and that the gain may be attributed to both the BLSTM decoder and the attention mechanism.
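The CER reported in these experiments is the Levenshtein (edit) distance between the recognized and ground-truth texts, normalized by the ground-truth length. A minimal implementation for illustration (not the authors' evaluation code):

```python
def cer(recognized, ground_truth):
    """Character Error Rate: edit distance between the recognized and
    ground-truth strings, as a percentage of ground-truth characters."""
    m, n = len(recognized), len(ground_truth)
    # prev[j] holds the edit distance between recognized[:i-1] and
    # ground_truth[:j]; classic Wagner-Fischer dynamic programming.
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if recognized[i - 1] == ground_truth[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return 100.0 * prev[n] / n

# One spurious space in a 11-character ground truth -> ~9.1% CER.
print(round(cer("hand writing", "handwriting"), 1))  # 9.1
```

The WER used later in the paper is the same computation applied to word tokens instead of characters.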
5.3 Impact of Line Segmentation
Our model performs an implicit line segmentation to transcribe paragraphs. The baseline considered in the previous section is somehow cheating, because it was evaluated on the ground-truth line segmentation. In this experiment, we add to the comparison the baseline models evaluated in a real scenario, where they are applied to the result of an automatic line segmentation algorithm.
Table 2: Character Error Rates (%) of CTC-trained RNNs on ground-truth lines and automatic segmentation of paragraphs with different resolutions. The last column contains the error rate of the attention-based model presented in this work, without an explicit line segmentation.

    Database  Resolution  GroundTruth  Projection  Shredding  Energy  This work
    IAM       150 dpi     8.4          15.5        9.3        10.2    6.8
    IAM       300 dpi     6.6          13.8        7.5        7.9     4.9
    Rimes     150 dpi     4.8          6.3         5.9        8.2     2.8
    Rimes     300 dpi     3.6          5.0         4.5        6.6     2.5

In Table 2, we report the CERs obtained with the ground-truth line positions, with three different segmentation algorithms, and with our end-to-end system, on the validation sets of both databases with different input resolutions. We see that applying the baseline networks on automatic segmentations increases the error rates, by an absolute 1% in the best case. We also observe that the models are better with higher resolutions. Our models yield better performance than methods based on an explicit and automatic line segmentation, and comparable or better results than with ground-truth segmentation, even with a resolution divided by two. Two factors may explain why our model yields better results than the line recognition from ground-truth segmentation. First, the ground-truth line positions are bounding boxes that may include some parts of adjacent lines and irrelevant data, whereas the attention model will focus on smaller areas.
But the main reason is probably that the proposed model includes a BLSTM operating on the whole paragraph, which may capture linguistic dependencies across text lines. In Figure 3, we display a visualisation of the implicit line segmentation computed by the network. Each color corresponds to one step of the iterative weighted collapse. On the images, the color represents the weights given by the attention network (the transparency encodes their intensity). The texts below are the predicted transcriptions, and chunks are colored according to the corresponding timestep of the attention mechanism.
Figure 3: Transcription of full paragraphs of text and implicit line segmentation learnt by the network on IAM (left) and Rimes (right). Best viewed in color.
5.4 Comparison to Published Results
In this section, we also compute the word error rates (WER%) and evaluate our models on the test sets to compare the proposed approach to existing systems. For IAM, we applied a 3-gram language model with a lexicon of 50,000 words, trained on the LOB, Brown and Wellington corpora.1 This language model has a perplexity of 298 and an out-of-vocabulary rate of 4.3% on the validation set (329 and 3.7% on the test set). The results are presented in Table 3 for different input resolutions. When comparing the error rates, it is important to note that all systems in the literature used an explicit (ground-truth) line segmentation and a language model. [14, 26, 30] used a hybrid character/word language model to tackle the issue of out-of-vocabulary words. Moreover, all systems except [30, 33] carefully pre-processed the line image (e.g. corrected the slant or skew, normalized the height, ...), whereas we just normalized the pixel values to zero mean and unit variance. Finally, [5] is a combination of four systems.
Table 3: Final results on the Rimes and IAM databases

                                       Rimes           IAM
                                       WER%   CER%     WER%   CER%
    150 dpi, no language model         13.6   3.2      29.5   10.1
    150 dpi, with language model       -      -        16.6   6.5
    300 dpi, no language model         12.6   2.9      24.6   7.9
    300 dpi, with language model       -      -        16.4   5.5
    Bluche, 2015 [5]                   11.2   3.5      10.9   4.4
    Doetsch et al., 2014 [14]          12.9   4.3      12.2   4.7
    Kozielski et al., 2013 [26]        13.7   4.6      13.3   5.1
    Pham et al., 2014 [33]             12.3   3.3      13.6   5.1
    Messina & Kermorvant, 2014 [30]    13.3   -        19.1   -

1 The parts of the LOB corpus used in the validation and evaluation sets were removed.
On Rimes, the system applied to 150 dpi images already outperforms the state of the art in CER%, while being competitive in terms of WER%. The system for 300 dpi images is comparable to the best single system [33] in WER%, with a significantly better CER%. On IAM, the language model turned out to be quite important, probably because there is more variability in the language.2 On 150 dpi images, the results are not too far from the state-of-the-art results. The WER% does not improve much on 300 dpi images, but we get a lower CER%. When analysing the errors, we noticed that there is a lot of punctuation in IAM, which was often missed by the attention mechanism. It may happen because punctuation marks are significantly smaller than characters. With the attention-based collapse and the weighted sum, they will be more easily missed than with the standard collapse, which gives the same weight to all vertical positions.
6 Discussion
Table 4: Comparison of decoding times of different methods: using ground-truth line information, with explicit segmentation, with the attention-based method of [6], and with the system presented in this paper.

    Method                               Processing time (s)
    GroundTruth (crop+reco)              0.21 ± 0.07
    Shredding (segment+crop+reco)        0.78 ± 0.26
    Scan, Attend and Read [6] (reco)     21.2 ± 5.6
    This Work (reco)                     0.62 ± 0.14

The proposed model can transcribe complete paragraphs without segmentation and is orders of magnitude faster than the model of [6] (cf. Table 4).
However, the mechanism cannot handle arbitrary reading orders. Rather, it implements a sort of implicit line segmentation. In the current implementation, the iterative collapse runs for a fixed number of timesteps. Yet, the model can handle a variable number of text lines, and, interestingly, the focus is put on the interline regions in the additional steps. A more elegant solution would include the prediction of a binary variable indicating when to stop reading. Our method was applied to paragraph images, so a document layout analysis is required to detect those paragraphs before applying the model. Naturally, the next step should be the transcription of complex documents without an explicit or assumed paragraph extraction. The limitation to paragraphs is inherent to this system. Indeed, the weighted collapse always outputs sequences corresponding to the whole width of the encoded image, which, in paragraphs, may correspond to text lines. In order to switch to full documents, several issues arise. On the one hand, the size of the lines is determined by the size of the text block. Thus a method should be devised to select only a smaller part of the feature maps, representing only the considered text line. This is not possible in the presented framework. A potential solution could come from spatial transformer networks [22], performing a differentiable crop. On the other hand, training will in practice become more difficult, not only because of the complexity of the task, but also because the reading order of text blocks in complex documents cannot be exactly inferred in many cases (even defining arbitrary rules may be tricky).
7 Conclusion
We have presented a model to transcribe full paragraphs of handwritten texts without an explicit line segmentation. Contrary to classical methods relying on a two-step process (segment, then recognize), our system directly considers the paragraph image without elaborate pre-processing, and outputs the complete transcription.
We proposed a simple modification of the collapse layer in the standard MDLSTM architecture to iteratively focus on single text lines. This implicit line segmentation is learnt with backpropagation along with the rest of the network to minimize the CTC error at the paragraph level. We reported error rates comparable to the state of the art on two public databases. After switching from explicit to implicit character, then word segmentation for handwriting recognition, we showed that line segmentation can also be learnt inside the transcription model. The next step towards end-to-end handwriting recognition is now at the full page level.
2 A simple language model yields a perplexity of 18 on Rimes [5].
References
[1] E. Augustin, M. Carré, E. Grosicki, J.-M. Brodin, E. Geoffrois, and F. Preteux. RIMES evaluation campaign for handwritten mail processing. In Proceedings of the Workshop on Frontiers in Handwriting Recognition, number 1, 2006.
[2] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014.
[3] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[4] Yoshua Bengio, Yann LeCun, Craig Nohl, and Chris Burges. LeRec: A NN/HMM hybrid for on-line handwriting recognition. Neural Computation, 7(6):1289–1303, 1995.
[5] Théodore Bluche. Deep Neural Networks for Large Vocabulary Handwritten Text Recognition. PhD thesis, Université Paris Sud - Paris XI, May 2015.
[6] Théodore Bluche, Jérôme Louradour, and Ronaldo Messina. Scan, Attend and Read: End-to-End Handwritten Paragraph Recognition with MDLSTM Attention. arXiv preprint arXiv:1604.03286, 2016.
[7] Théodore Bluche, Bastien Moysset, and Christopher Kermorvant. Automatic line segmentation and ground-truth alignment of handwritten documents. In International Conference on Frontiers in Handwriting Recognition (ICFHR), 2014.
[8] Vicente Bosch, Alejandro Hector Toselli, and Enrique Vidal. Statistical text line analysis in handwritten documents. In Frontiers in Handwriting Recognition (ICFHR), 2012 International Conference on, pages 201–206. IEEE, 2012.
[9] Sylvie Brunessaux, Patrick Giroux, Bruno Grilhères, Mathieu Manta, Maylis Bodin, Khalid Choukri, Olivier Galibert, and Juliette Kahn. The Maurdor Project: Improving Automatic Processing of Digital Documents. In Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, pages 349–354. IEEE, 2014.
[10] Horst Bunke, Samy Bengio, and Alessandro Vinciarelli. Offline recognition of unconstrained handwritten texts using HMMs and statistical language models. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(6):709–720, 2004.
[11] William Chan, Navdeep Jaitly, Quoc V Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.
[12] Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-based models for speech recognition. In Advances in Neural Information Processing Systems, pages 577–585, 2015.
[13] Manolis Delakis and Christophe Garcia. Text detection with convolutional neural networks. In VISAPP (2), pages 290–294, 2008.
[14] Patrick Doetsch, Michal Kozielski, and Hermann Ney. Fast and robust training of recurrent neural networks for offline handwriting recognition. 2014.
[15] Kunihiko Fukushima. Neural network model for selective attention in visual pattern recognition and associative recall. Applied Optics, 26(23):4985–4992, 1987.
[16] Basilis Gatos, Georgios Louloudis, Tim Causer, Kris Grint, Veronica Romero, Joan-Andreu Sánchez, Alejandro Hector Toselli, and Enrique Vidal. Ground-truth production in the transcriptorium project. In Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, pages 237–241. IEEE, 2014.
[17] A Graves, S Fernández, F Gomez, and J Schmidhuber.
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In International Conference on Machine Learning, pages 369–376, 2006.
[18] A. Graves and J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. In Advances in Neural Information Processing Systems, pages 545–552, 2008.
[19] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[20] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623, 2015.
[21] David Hebert, Thierry Paquet, and Stephane Nicolas. Continuous CRF with multi-scale quantization feature functions: application to structure extraction in old newspaper. In Document Analysis and Recognition (ICDAR), 2011 International Conference on, pages 493–497. IEEE, 2011.
[22] Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pages 2008–2016, 2015.
[23] Justin Johnson, Andrej Karpathy, and Li Fei-Fei. DenseCap: Fully convolutional localization networks for dense captioning. arXiv preprint arXiv:1511.07571, 2015.
[24] Keechul Jung. Neural network-based text location in color images. Pattern Recognition Letters, 22(14):1503–1515, 2001.
[25] Alfred Kaltenmeier, Torsten Caesar, Joachim M Gloger, and Eberhard Mandler. Sophisticated topology of hidden Markov models for cursive script recognition. In Document Analysis and Recognition, 1993, Proceedings of the Second International Conference on, pages 139–142. IEEE, 1993.
[26] Michal Kozielski, Patrick Doetsch, Hermann Ney, et al. Improvements in RWTH's System for Off-Line Handwriting Recognition. In Document Analysis and Recognition (ICDAR), 2013 12th International Conference on, pages 935–939. IEEE, 2013.
[27] Chen-Yu Lee and Simon Osindero.
Recursive recurrent nets with attention modeling for OCR in the wild. arXiv preprint arXiv:1603.03101, 2016.
[28] Laurence Likforman-Sulem, Abderrazak Zahour, and Bruno Taconet. Text line segmentation of historical documents: a survey. International Journal of Document Analysis and Recognition (IJDAR), 9(2-4):123–138, 2007.
[29] U-V Marti and Horst Bunke. The IAM-database: an English sentence database for offline handwriting recognition. International Journal on Document Analysis and Recognition, 5(1):39–46, 2002.
[30] R. Messina and C. Kermorvant. Surgenerative Finite State Transducer n-gram for Out-Of-Vocabulary Word Recognition. In 11th IAPR Workshop on Document Analysis Systems (DAS2014), pages 212–216, 2014.
[31] Bastien Moysset, Pierre Adam, Christian Wolf, and Jérôme Louradour. Space displacement localization neural networks to locate origin points of handwritten text lines in historical documents. In International Workshop on Historical Document Imaging and Processing (HIP), 2015.
[32] Bastien Moysset, Christopher Kermorvant, Christian Wolf, and Jérôme Louradour. Paragraph text segmentation into lines with recurrent neural networks. In International Conference on Document Analysis and Recognition (ICDAR), 2015.
[33] Vu Pham, Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. Dropout improves recurrent neural networks for handwriting recognition. In 14th International Conference on Frontiers in Handwriting Recognition (ICFHR2014), pages 285–290, 2014.
[34] Joan Andreu Sánchez, Verónica Romero, Alejandro Toselli, and Enrique Vidal. ICFHR 2014 HTRtS: Handwritten Text Recognition on tranScriptorium Datasets. In International Conference on Frontiers in Handwriting Recognition (ICFHR), 2014.
[35] Pierre Sermanet, David Eigen, Xiang Zhang, Michaël Mathieu, Rob Fergus, and Yann LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. arXiv preprint arXiv:1312.6229, 2013.
[36] Tijmen Tieleman and Geoffrey Hinton.
Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4, 2012.
[37] A. Tong, M. Przybocki, V. Maergner, and H. El Abed. NIST 2013 Open Handwriting Recognition and Translation (OpenHaRT13) Evaluation. In 11th IAPR Workshop on Document Analysis Systems (DAS2014), 2014.
[38] Kelvin Xu, Jimmy Ba, Ryan Kiros, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015.
k*-Nearest Neighbors: From Global to Local
Oren Anava, The Voleon Group, oren@voleon.com
Kfir Y. Levy, ETH Zurich, yehuda.levy@inf.ethz.ch
Abstract
The weighted k-nearest neighbors algorithm is one of the most fundamental non-parametric methods in pattern recognition and machine learning. The question of setting the optimal number of neighbors as well as the optimal weights has received much attention throughout the years; nevertheless, this problem seems to have remained unsettled. In this paper we offer a simple approach to locally weighted regression/classification, where we make the bias-variance tradeoff explicit. Our formulation enables us to phrase a notion of optimal weights, and to find these weights as well as the optimal number of neighbors efficiently and adaptively, for each data point whose value we wish to estimate. The applicability of our approach is demonstrated on several datasets, showing superior performance over standard locally weighted methods.
1 Introduction
The k-nearest neighbors (k-NN) algorithm [1, 2] and Nadaraya-Watson estimation [3, 4] are the cornerstones of non-parametric learning. Owing to their simplicity and flexibility, these procedures have become the methods of choice in many scenarios [5], especially in settings where the underlying model is complex. Modern applications of the k-NN algorithm include recommendation systems [6], text categorization [7], heart disease classification [8], and financial market prediction [9], amongst others. A successful application of the weighted k-NN algorithm requires a careful choice of three ingredients: the number of nearest neighbors k, the weight vector α, and the distance metric. The latter requires domain knowledge and is thus henceforth assumed to be set and known in advance to the learner. Surprisingly, even under this assumption, the problem of choosing the optimal k and α is not fully understood and has been studied extensively since the 1950's under many different regimes.
Most of the theoretical work focuses on the asymptotic regime in which the number of samples n goes to infinity [10, 11, 12], and ignores the practical regime in which n is finite. More importantly, the vast majority of k-NN studies aim at finding an optimal value of k per dataset, which seems to overlook the specific structure of the dataset and the properties of the data points whose labels we wish to estimate. While kernel-based methods such as Nadaraya-Watson enable an adaptive choice of the weight vector α, there still remains the question of how to choose the kernel's bandwidth σ, which can be thought of as the parallel of the number of neighbors k in k-NN. Moreover, there is no principled approach towards choosing the kernel function in practice.

In this paper we offer a coherent and principled approach to adaptively choosing the number of neighbors k and the corresponding weight vector α ∈ R^k per decision point. Given a new decision point, we aim to find the best locally weighted predictor, in the sense of minimizing the distance between our prediction and the ground truth. In addition to yielding predictions, our approach enables us to provide a per-decision-point guarantee for the confidence of our predictions. Fig. 1 illustrates the importance of choosing k adaptively. In contrast to previous works on non-parametric regression/classification, we do not assume that the data {(x_i, y_i)}_{i=1}^n arrives from some (unknown) underlying distribution, but rather make the weaker assumption that the labels {y_i}_{i=1}^n are independent given the data points {x_i}_{i=1}^n, allowing the latter to be chosen arbitrarily. Alongside providing a theoretical basis for our approach, we conduct an empirical study that demonstrates its superiority with respect to the state-of-the-art.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Three different scenarios: (a) first scenario, (b) second scenario, (c) third scenario. In all three scenarios, the same data points x_1, . . . , x_n ∈ R^2 are given (represented by black dots). The red dot in each scenario represents the new data point whose value we need to estimate. Intuitively, in the first scenario it would be beneficial to consider only the nearest neighbor for the estimation task, whereas in the other two scenarios we might profit by considering more neighbors.

This paper is organized as follows. In Section 2 we introduce our setting and assumptions, and derive the locally optimal prediction problem. In Section 3 we analyze the solution of the above prediction problem, and introduce a greedy algorithm designed to efficiently find the exact solution. Section 4 presents our experimental study, and Section 5 concludes.

1.1 Related Work Asymptotic universal consistency is the most widely known theoretical guarantee for k-NN. This powerful guarantee implies that as the number of samples n goes to infinity, with k → ∞ and k/n → 0, the risk of the k-NN rule converges to the risk of the Bayes classifier for any underlying data distribution. Similar guarantees hold for weighted k-NN rules, under the additional assumptions that Σ_{i=1}^k α_i = 1 and max_{i≤n} α_i → 0 [12, 10]. In the regime of practical interest where the number of samples n is finite, using k = ⌊√n⌋ neighbors is a widely mentioned rule of thumb [10]. Nevertheless, this rule often yields poor results, and in the regime of finite samples it is usually advised to choose k using cross-validation. Similar consistency results apply to kernel-based local methods [13, 14]. A novel study of k-NN by Samworth [11] derives a closed-form expression for the optimal weight vector, and extracts the optimal number of neighbors. However, this result is only optimal under several restrictive assumptions, and only holds in the asymptotic regime where n → ∞.
Furthermore, the above optimal number of neighbors/weights do not adapt, but are rather fixed over all decision points given the dataset. In the context of kernel-based methods, it is possible to extract an expression for the optimal kernel bandwidth σ [14, 15]. Nevertheless, this bandwidth is fixed over all decision points, and is only optimal under several restrictive assumptions. There exist several heuristics for adaptively choosing the number of neighbors and weights separately for each decision point. In [16, 17] it is suggested to use local cross-validation in order to adapt the value of k to different decision points. Alternatively, Ghosh [18] takes a Bayesian approach towards choosing k adaptively. Focusing on the multiclass classification setup, it is suggested in [19] to consider different values of k for each class, choosing k proportionally to the class populations. Similarly, there exist several approaches towards adaptively choosing the kernel bandwidth σ for kernel-based methods [20, 21, 22, 23]. Learning the distance metric for k-NN was extensively studied throughout the last decade. There are several approaches towards metric learning, which roughly divide into linear/non-linear learning methods. It was found that metric learning may significantly affect the performance of k-NN in numerous applications, including computer vision, text analysis, program analysis and more. A comprehensive survey by Kulis [24] provides a review of the metric learning literature. Throughout this work we assume that the distance metric is fixed, and thus the focus is on finding the best (in a sense) values of k and α for each new data point. Two comprehensive monographs, [10] and [25], provide an extensive survey of the existing literature regarding k-NN rules, including theoretical guarantees, useful practices, limitations and more.
2 Problem Definition In this section we present our setting and assumptions, and formulate the locally weighted optimal estimation problem. Recall that we seek the best local prediction, in the sense of minimizing the distance between this prediction and the ground truth. The problem at hand is thus defined as follows: We are given n data points x_1, . . . , x_n ∈ R^d, and n corresponding labels¹ y_1, . . . , y_n ∈ R. Assume that for any i ∈ {1, . . . , n} = [n] it holds that y_i = f(x_i) + ε_i, where f(·) and ε_i are such that:

(1) f(·) is a Lipschitz continuous function: for any x, y ∈ R^d it holds that |f(x) − f(y)| ≤ L · d(x, y), where the distance function d(·, ·) is set and known in advance. This assumption is rather standard when considering nearest-neighbors-based algorithms, and is required in our analysis to bound the so-called bias term (to be defined later). In the binary classification setup we assume that f : R^d → [0, 1], and that given x its label y ∈ {0, 1} is distributed Bernoulli(f(x)).

(2) The ε_i's are noise terms: for any i ∈ [n] it holds that E[ε_i | x_i] = 0 and |ε_i| ≤ b for some given b > 0. In addition, it is assumed that given the data points {x_i}_{i=1}^n, the noise terms {ε_i}_{i=1}^n are independent. This assumption is later used in our analysis to apply Hoeffding's inequality and bound the so-called variance term (to be defined later). Alternatively, we could assume that E[ε_i² | x_i] ≤ b (instead of |ε_i| ≤ b) and apply Bernstein inequalities; the results and analysis remain qualitatively similar.

Given a new data point x_0, our task is to estimate f(x_0), where we restrict the estimator f̂(x_0) to be of the form f̂(x_0) = Σ_{i=1}^n α_i y_i. That is, the estimator is a weighted average of the given noisy labels. Formally, we aim at minimizing the absolute distance between our prediction and the ground truth f(x_0), which translates into

  min_{α∈Δ_n} | Σ_{i=1}^n α_i y_i − f(x_0) |    (P1),

where we minimize over the simplex Δ_n = {α ∈ R^n | Σ_{i=1}^n α_i = 1 and α_i ≥ 0, ∀i}.
Decomposing the objective of (P1) into a sum of bias and variance terms, we arrive at the following relaxed objective:

  | Σ_{i=1}^n α_i y_i − f(x_0) |
   = | Σ_{i=1}^n α_i (y_i − f(x_i) + f(x_i)) − f(x_0) |
   = | Σ_{i=1}^n α_i ε_i + Σ_{i=1}^n α_i (f(x_i) − f(x_0)) |
   ≤ | Σ_{i=1}^n α_i ε_i | + | Σ_{i=1}^n α_i (f(x_i) − f(x_0)) |
   ≤ | Σ_{i=1}^n α_i ε_i | + L Σ_{i=1}^n α_i d(x_i, x_0).

By Hoeffding's inequality (see supplementary material) it follows that |Σ_{i=1}^n α_i ε_i| ≤ C ||α||_2 for C = b √(2 log(2/δ)), with probability at least 1 − δ. We thus arrive at a new optimization problem (P2), such that solving it yields a guarantee for (P1) with high probability:

  min_{α∈Δ_n} C ||α||_2 + L Σ_{i=1}^n α_i d(x_i, x_0)    (P2).

¹Note that our analysis holds for both setups of classification/regression. For brevity we use classification-task terminology, relating to the y_i's as labels. Our analysis extends directly to the regression setup.

The first term in (P2) corresponds to the noise in the labels and is therefore denoted the variance term, whereas the second term corresponds to the distance between f(x_0) and {f(x_i)}_{i=1}^n and is thus denoted the bias term.

3 Algorithm and Analysis In this section we discuss the properties of the optimal solution of (P2), and present a greedy algorithm designed to efficiently find the exact solution of the latter objective (see Section 3.1). Given a decision point x_0, Theorem 3.1 demonstrates that the optimal weight α_i of the data point x_i is proportional to −d(x_i, x_0) (closer points are given more weight). Interestingly, this weight decay is quite slow compared to popular weight kernels, which utilize sharper decay schemes, e.g., exponential/inversely-proportional. Theorem 3.1 also implies a cutoff effect, meaning that there exists k* ∈ [n] such that only the k* nearest neighbors of x_0 contribute to the prediction of its label. Note that both α and k* may adapt from one x_0 to another.
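The relaxed objective (P2) is cheap to evaluate directly. The following sketch (our own illustration, not code from the paper; the function name and the arguments are ours) computes the high-probability bound C||α||_2 + L Σ_i α_i d(x_i, x_0) for a candidate weight vector:

```python
import numpy as np

def p2_objective(alpha, dists, L, b, delta):
    """Value of (P2): a bound on |sum_i alpha_i y_i - f(x0)| holding
    with probability at least 1 - delta.

    alpha : weights on the simplex Delta_n
    dists : distances d(x_i, x0), i = 1..n
    L     : Lipschitz constant of f
    b     : almost-sure bound on the noise |eps_i|
    delta : failure probability
    """
    C = b * np.sqrt(2 * np.log(2 / delta))   # Hoeffding constant
    variance = C * np.linalg.norm(alpha)     # C * ||alpha||_2
    bias = L * np.dot(alpha, dists)          # L * sum_i alpha_i d(x_i, x0)
    return variance + bias
```

For uniform weights over n points at distance zero, the value reduces to C/√n, the familiar Hoeffding bound revisited in Section 3.2.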
Also notice that the optimal weights depend on a single parameter L/C, namely the Lipschitz-to-noise ratio. As L/C grows, k* tends to be smaller, which is quite intuitive. Without loss of generality, assume that the points are ordered in ascending order according to their distance from x_0, i.e., d(x_1, x_0) ≤ d(x_2, x_0) ≤ . . . ≤ d(x_n, x_0). Also, let β ∈ R^n be such that β_i = L d(x_i, x_0)/C. Then, the following is our main theorem:

Theorem 3.1. There exists λ > 0 such that the optimal solution of (P2) is of the form

  α*_i = (λ − β_i) · 1{β_i < λ} / Σ_{i=1}^n (λ − β_i) · 1{β_i < λ}.    (1)

Furthermore, the value of (P2) at the optimum is Cλ.

The following is a direct corollary of the above theorem:

Corollary 3.2. There exists 1 ≤ k* ≤ n such that for the optimal solution of (P2) the following applies: α*_i > 0 for all i ≤ k*, and α*_i = 0 for all i > k*.

Proof of Theorem 3.1. Notice that (P2) may be written as follows:

  min_{α∈Δ_n} C ( ||α||_2 + α^T β )    (P2).

We henceforth ignore the parameter C. In order to find the solution of (P2), let us first consider its Lagrangian:

  L(α, λ, θ) = ||α||_2 + α^T β + λ ( 1 − Σ_{i=1}^n α_i ) − Σ_{i=1}^n θ_i α_i,

where λ ∈ R is the multiplier of the equality constraint Σ_i α_i = 1, and θ_1, . . . , θ_n ≥ 0 are the multipliers of the inequality constraints α_i ≥ 0, ∀i ∈ [n]. Since (P2) is convex, any solution satisfying the KKT conditions is a global minimum. Differentiating the Lagrangian with respect to α, we get that for any i ∈ [n]:

  α_i / ||α||_2 = λ − β_i + θ_i.

Denote by α* the optimal solution of (P2). By the KKT conditions, for any α*_i > 0 it follows that θ_i = 0. Otherwise, for any i such that α*_i = 0, it follows that θ_i ≥ 0, which implies λ ≤ β_i. Thus, for any nonzero weight α*_i > 0 the following holds:

  α*_i / ||α*||_2 = λ − β_i.    (2)

Squaring and summing Equation (2) over all the nonzero entries of α*, we arrive at the following equation for λ:

  1 = Σ_{α*_i>0} (α*_i)² / ||α*||_2² = Σ_{α*_i>0} (λ − β_i)².    (3)

Algorithm 1 k*-NN
Input: vector of ordered distances β ∈ R^n, noisy labels y_1, . . . , y_n ∈ R
Set: λ_0 = β_1 + 1, k = 0
while λ_k > β_{k+1} and k ≤ n − 1 do
  Update: k ← k + 1
  Calculate: λ_k = (1/k) ( Σ_{i=1}^k β_i + √( k + (Σ_{i=1}^k β_i)² − k Σ_{i=1}^k β_i² ) )
end while
Return: estimation f̂(x_0) = Σ_i α_i y_i, where α ∈ Δ_n is the weight vector with α_i = (λ_k − β_i) · 1{β_i < λ_k} / Σ_{i=1}^n (λ_k − β_i) · 1{β_i < λ_k}

Next, we show that the value of the objective at the optimum is Cλ. Indeed, note that by Equation (2) and the equality constraint Σ_i α*_i = 1, any α*_i > 0 satisfies

  α*_i = (λ − β_i)/A,  where A = Σ_{α*_i>0} (λ − β_i).    (4)

Plugging the above into the objective of (P2) yields

  C ( ||α*||_2 + α*^T β ) = (C/A) √( Σ_{α*_i>0} (λ − β_i)² ) + (C/A) Σ_{α*_i>0} (λ − β_i)(β_i − λ + λ)
   = C/A − (C/A) Σ_{α*_i>0} (λ − β_i)² + (Cλ/A) Σ_{α*_i>0} (λ − β_i)
   = Cλ,

where in the last equality we used Equation (3) and substituted A = Σ_{α*_i>0} (λ − β_i).

3.1 Solving (P2) Efficiently Note that (P2) is a convex optimization problem, and it can therefore be (approximately) solved efficiently, e.g., via any first-order algorithm. Concretely, given an accuracy ε > 0, any off-the-shelf convex optimization method would require a running time which is poly(n, 1/ε) in order to find an ε-optimal solution to (P2)². Note that the calculation of the (unsorted) β requires an additional computational cost of O(nd). Here we present an efficient method that computes the exact solution of (P2). In addition to the O(nd) cost for calculating β, our algorithm requires an O(n log n) cost for sorting the entries of β, as well as an additional running time of O(k*), where k* is the number of non-zero elements at the optimum. Thus, the running time of our method is independent of any accuracy ε, and may be significantly better than that of any off-the-shelf optimization method. Note that in some cases [26], using advanced data structures may decrease the cost of finding the nearest neighbors (i.e., the sorted β), yielding a running time substantially smaller than O(nd + n log n). Our method is depicted in Algorithm 1.
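Algorithm 1 can be sketched in a few lines of NumPy. This is our illustrative rendering, assuming β is already sorted in ascending order; the incremental updates of Σβ_i and Σβ_i² are what yield the O(k*) running time discussed in Section 3.1:

```python
import numpy as np

def k_star_nn_weights(beta):
    """Greedy solver for (P2) (Algorithm 1), given beta_i = L*d(x_i, x0)/C
    sorted in ascending order. Returns the optimal weight vector alpha."""
    n = len(beta)
    lam = beta[0] + 1.0           # lambda_0 = beta_1 + 1
    k, S, Q = 0, 0.0, 0.0         # S, Q: running sums of beta_i and beta_i**2
    # Loop while lambda_k > beta_{k+1}; beta[k] is beta_{k+1} in 0-based indexing.
    while k < n and lam > beta[k]:
        S += beta[k]
        Q += beta[k] ** 2
        k += 1
        # Equation (5); the sqrt argument is nonnegative for k <= k* (Lemma 3.4).
        lam = (S + np.sqrt(k + S ** 2 - k * Q)) / k
    w = np.maximum(lam - beta, 0.0)   # Equation (1), unnormalized
    return w / w.sum()

def k_star_nn_predict(X, y, x0, L_over_C):
    """Prediction for x0: sort points by distance, solve for alpha, average labels."""
    d = np.linalg.norm(X - x0, axis=1)
    order = np.argsort(d)
    alpha = k_star_nn_weights(L_over_C * d[order])
    return np.dot(alpha, y[order])
```

Only the k* nearest points receive nonzero weight, and the single free parameter is the ratio L/C, selected by cross-validation in Section 4.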
Quite intuitively, the core idea is to greedily add neighbors according to their distance from x_0 until a stopping condition is fulfilled (indicating that we have found the optimal solution). Letting C_sortNN be the computational cost of calculating the sorted vector β, the following theorem presents our guarantees.

Theorem 3.3. Algorithm 1 finds the exact solution of (P2) within k* iterations, with an O(k* + C_sortNN) running time.

²Note that (P2) is not strongly convex, hence the polynomial dependence on 1/ε rather than log(1/ε) for first-order methods. Other methods such as the ellipsoid method depend logarithmically on 1/ε, but suffer a worse dependence on n compared to first-order methods.

Proof of Theorem 3.3. Denote by α* the optimal solution of (P2), and by k* the corresponding number of nonzero weights. By Corollary 3.2, these k* nonzero weights correspond to the k* smallest values of β. Thus, we are left to show that (1) the optimal λ is of the form calculated by the algorithm; and (2) the algorithm halts after exactly k* iterations and outputs the optimal solution. Let us first find the optimal λ. Since the non-zero elements of the optimal solution correspond to the k* smallest values of β, Equation (3) is equivalent to the following quadratic equation in λ:

  k* λ² − 2λ Σ_{i=1}^{k*} β_i + ( Σ_{i=1}^{k*} β_i² − 1 ) = 0.

Solving for λ and discarding the solution that does not satisfy α_i ≥ 0, ∀i ∈ [n], we get

  λ = (1/k*) ( Σ_{i=1}^{k*} β_i + √( k* + (Σ_{i=1}^{k*} β_i)² − k* Σ_{i=1}^{k*} β_i² ) ).    (5)

The above implies that given k*, the optimal solution (satisfying KKT) can be directly derived by calculating λ according to Equation (5) and computing the α_i's according to Equation (1). Since Algorithm 1 calculates λ and α in the forms appearing in Equations (5) and (1) respectively, it is sufficient to show that it halts after exactly k* iterations in order to prove its optimality.
The latter is a direct consequence of the following conditions: (1) Upon reaching iteration k*, Algorithm 1 necessarily halts. (2) For any k ≤ k* it holds that λ_k ∈ R. (3) For any k < k*, Algorithm 1 does not halt. Note that the first condition together with the second imply that λ_k is well defined until the algorithm halts (in the sense that the ">" operation in the while condition is meaningful). The first condition together with the third imply that the algorithm halts after exactly k* iterations, which concludes the proof. We are now left to show that the above three conditions hold:

Condition (1): Note that upon reaching k*, Algorithm 1 necessarily calculates the optimal λ = λ_{k*}. Moreover, the entries of α* whose indices are greater than k* are necessarily zero, and in particular α*_{k*+1} = 0. By Equation (1), this implies that λ_{k*} ≤ β_{k*+1}, and therefore the algorithm halts upon reaching k*.

In order to establish conditions (2) and (3) we require the following lemma:

Lemma 3.4. Let λ_k be as calculated by Algorithm 1 at iteration k. Then, for any k ≤ k* the following holds:

  λ_k = min_{α∈Δ_n^{(k)}} ( ||α||_2 + α^T β ),  where Δ_n^{(k)} = {α ∈ Δ_n : α_i = 0, ∀i > k}.

We are now ready to prove the remaining conditions.

Condition (2): Lemma 3.4 states that λ_k is the solution of a convex program over a nonempty set; therefore λ_k ∈ R.

Condition (3): By definition, Δ_n^{(k)} ⊂ Δ_n^{(k+1)} for any k < n. Therefore, Lemma 3.4 implies that λ_k ≥ λ_{k+1} for any k < k* (minimizing the same objective with stricter constraints yields a higher optimal value). Now assume by contradiction that Algorithm 1 halts at some k₀ < k*; then the stopping condition of the algorithm implies that λ_{k₀} ≤ β_{k₀+1}. Combining the latter with λ_k ≥ λ_{k+1}, ∀k ≤ k*, and using β_k ≤ β_{k+1}, ∀k ≤ n, we conclude that

  λ_{k*} ≤ λ_{k₀+1} ≤ λ_{k₀} ≤ β_{k₀+1} ≤ β_{k*}.

The above implies that α_{k*} = 0 (see Equation (1)), which contradicts Corollary 3.2 and the definition of k*.
Running time: Note that the main running-time burden of Algorithm 1 is the calculation of λ_k for every k ≤ k*. A naive calculation of λ_k requires O(k) running time. However, note that λ_k depends only on Σ_{i=1}^k β_i and Σ_{i=1}^k β_i². Updating these sums incrementally requires only O(1) running time per iteration, yielding a total running time of O(k*). The remaining O(C_sortNN) running time is required in order to calculate the (sorted) β.

3.2 Special Cases The aim of this section is to discuss two special cases in which the bound of our algorithm coincides with familiar bounds in the literature, thus justifying the relaxed objective of (P2). We present here only a high-level description of both cases, and defer the formal details to the full version of the paper. The solution of (P2) is a high-probability upper bound on the true prediction error |Σ_{i=1}^n α_i y_i − f(x_0)|. Two interesting cases to consider in this context are β_i = 0 for all i ∈ [n], and β_1 = . . . = β_n = β > 0. In the first case, our algorithm includes all labels in the computation of λ, thus yielding a confidence bound of 2Cλ = 2b √((2/n) log(2/δ)) for the prediction error (with probability 1 − δ). Not surprisingly, this bound coincides with the standard Hoeffding bound for the task of estimating the mean value of a given distribution based on noisy observations drawn from this distribution. Since the latter is known to be tight (in general), so is the confidence bound obtained by our algorithm. In the second case as well, our algorithm uses all data points to arrive at the confidence bound 2Cλ = 2Ld + 2b √((2/n) log(2/δ)), where we denote d(x_1, x_0) = . . . = d(x_n, x_0) = d. The second term is again tight by concentration arguments, whereas the first term cannot be improved due to the Lipschitz property of f(·), thus yielding an overall tight confidence bound for our prediction in this case.
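Both special cases can be checked numerically against the closed-form λ of Equation (5). The snippet below (our own sanity check, not from the paper) verifies that β = 0 yields λ = 1/√n, and that a constant β yields λ = β + 1/√n, so the bound 2Cλ matches the two expressions above:

```python
import numpy as np

def lam_closed_form(beta_k):
    """Equation (5): the optimal lambda given the k smallest beta values."""
    k = len(beta_k)
    S, Q = beta_k.sum(), np.sum(beta_k ** 2)
    return (S + np.sqrt(k + S ** 2 - k * Q)) / k

n = 100
# First case: beta_i = 0 for all i gives lambda = 1/sqrt(n),
# i.e. the bound 2*C*lambda = 2b*sqrt((2/n)*log(2/delta)) (Hoeffding).
assert np.isclose(lam_closed_form(np.zeros(n)), 1 / np.sqrt(n))

# Second case: beta_i = L*d/C constant gives lambda = beta + 1/sqrt(n),
# i.e. the bound 2*C*lambda = 2*L*d + 2b*sqrt((2/n)*log(2/delta)).
c = 0.3
assert np.isclose(lam_closed_form(np.full(n, c)), c + 1 / np.sqrt(n))
```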
4 Experimental Results The following experiments demonstrate the effectiveness of the proposed algorithm on several datasets. We start by presenting the baselines used for the comparison.

4.1 Baselines The standard k-NN: Given k, the standard k-NN finds the k nearest data points to x_0 (assume without loss of generality that these are x_1, . . . , x_k), and then estimates f̂(x_0) = (1/k) Σ_{i=1}^k y_i.

The Nadaraya-Watson estimator: This estimator assigns the data points weights that are proportional to some given similarity kernel K : R^d × R^d → R_+. That is,

  f̂(x_0) = Σ_{i=1}^n K(x_i, x_0) y_i / Σ_{i=1}^n K(x_i, x_0).

Popular choices of kernel functions include the Gaussian kernel K(x_i, x_j) = (1/σ) exp(−||x_i − x_j||²/(2σ²)); the Epanechnikov kernel K(x_i, x_j) = (3/4)(1 − ||x_i − x_j||²/σ²) · 1{||x_i − x_j|| ≤ σ}; and the triangular kernel K(x_i, x_j) = (1 − ||x_i − x_j||/σ) · 1{||x_i − x_j|| ≤ σ}. Due to lack of space, we present here only the best-performing kernel function among the three listed above (on the tested datasets), which is the Gaussian kernel.

4.2 Datasets In our experiments we use 8 real-world datasets, all available in the UCI repository (https://archive.ics.uci.edu/ml/). In each of the datasets, the feature vector consists of real values only, whereas the labels take different forms: in the first 6 datasets (QSAR, Diabetes, PopFailures, Sonar, Ionosphere, and Fertility), the labels are binary, y_i ∈ {0, 1}. In the last two datasets (Slump and Yacht), the labels are real-valued. Note that our algorithm (as well as the other two baselines) applies to all datasets without requiring any adjustment. The number of samples n and the dimension of each sample d are given in Table 1 for each dataset.
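The two baselines are straightforward to implement. A minimal sketch (function names and the Euclidean metric are our choices) is:

```python
import numpy as np

def knn_predict(X, y, x0, k):
    """Standard k-NN: unweighted average of the labels of the k nearest points."""
    d = np.linalg.norm(X - x0, axis=1)
    nearest = np.argsort(d)[:k]
    return y[nearest].mean()

def nadaraya_watson_predict(X, y, x0, sigma):
    """Nadaraya-Watson estimate with a Gaussian kernel.

    The 1/sigma prefactor of the kernel cancels in the ratio, so it is omitted."""
    d2 = np.sum((X - x0) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return np.dot(w, y) / w.sum()
```

Both estimators, like k*-NN, expose a single tuning parameter (k or σ), chosen here by cross-validation as described in Section 4.3.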
Table 1: Experimental results.

Dataset (n, d)         | Standard k-NN           | Nadaraya-Watson          | Our algorithm (k*-NN)
                       | Error (STD)      k      | Error (STD)      sigma   | Error (STD)       Range of k
QSAR (1055, 41)        | 0.2467 (0.3445)  2      | 0.2303 (0.3500)  0.1     | 0.2105* (0.3935)  1-4
Diabetes (1151, 19)    | 0.3809 (0.2939)  4      | 0.3675 (0.3983)  0.1     | 0.3666 (0.3897)   1-9
PopFailures (360, 18)  | 0.1333 (0.2924)  2      | 0.1155 (0.2900)  0.01    | 0.1218 (0.2302)   2-24
Sonar (208, 60)        | 0.1731 (0.3801)  1      | 0.1711 (0.3747)  0.1     | 0.1636 (0.3661)   1-2
Ionosphere (351, 34)   | 0.1257 (0.3055)  2      | 0.1191 (0.2937)  0.5     | 0.1113* (0.3008)  1-4
Fertility (100, 9)     | 0.1900 (0.3881)  1      | 0.1884 (0.3787)  0.1     | 0.1760 (0.3094)   1-5
Slump (103, 9)         | 3.4944 (3.3042)  4      | 2.9154 (2.8930)  0.05    | 2.8057 (2.7886)   1-4
Yacht (308, 6)         | 6.4643 (10.2463) 2      | 5.2577 (8.7051)  0.05    | 5.0418* (8.6502)  1-3

The values of k, σ and L/C are determined via 5-fold cross-validation on the validation set. These values are then used on the test set to generate the (absolute) error rates presented in the table. In each row, the best result is the lowest error; an asterisk indicates a significance level of 0.05 over the second-best result.

4.3 Experimental Setup We randomly divide each dataset into two halves (one used for validation and the other for test). On the first half (the validation set), we run the two baselines and our algorithm with different values of k, σ and L/C (respectively), using 5-fold cross-validation. Specifically, we consider values of k in {1, 2, . . . , 10} and values of σ and L/C in {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10}. The best values of k, σ and L/C are then used on the second half of the dataset (the test set) to obtain the results presented in Table 1. For our algorithm, the range of k that corresponds to the selection of L/C is also given. Notice that we present here the average absolute error of our prediction, in accordance with our theoretical guarantees.
4.4 Results and Discussion As evidenced by Table 1, our algorithm outperforms the baselines on 7 (out of 8) datasets, and on 3 of these the improvement is statistically significant. It can also be seen that whereas the standard k-NN is restricted to one value of k per dataset, our algorithm fully utilizes the ability to choose k adaptively per data point. This validates our theoretical findings, and highlights the advantage of adaptive selection of k.

5 Conclusions and Future Directions We have introduced a principled approach to locally weighted optimal estimation. By explicitly phrasing the bias-variance tradeoff, we defined the notion of optimal weights and optimal number of neighbors per decision point, and consequently devised an efficient method to extract them. Note that our approach could be extended to handle multiclass classification, as well as scenarios in which the predictions of different data points correlate (and we have an estimate of their correlations). Due to lack of space we leave these extensions to the full version of the paper. A shortcoming of current non-parametric methods, including our k*-NN algorithm, is their limited geometrical perspective. Concretely, all of these methods consider only the distances between the decision point and the dataset points, i.e., {d(x_0, x_i)}_{i=1}^n, and ignore the geometrical relations between the dataset points themselves, i.e., {d(x_i, x_j)}_{i,j=1}^n. We believe that our approach opens an avenue for taking advantage of this additional geometrical information, which may have a great effect on the quality of our predictions.

References
[1] Thomas M Cover and Peter E Hart. Nearest neighbor pattern classification. IEEE Transactions on Information Theory, 13(1):21–27, 1967.
[2] Evelyn Fix and Joseph L Hodges Jr. Discriminatory analysis - nonparametric discrimination: consistency properties. Technical report, DTIC Document, 1951.
[3] Elizbar A Nadaraya. On estimating regression. Theory of Probability & Its Applications, 9(1):141–142, 1964.
[4] Geoffrey S Watson. Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, pages 359–372, 1964.
[5] Xindong Wu, Vipin Kumar, J Ross Quinlan, Joydeep Ghosh, Qiang Yang, Hiroshi Motoda, Geoffrey J McLachlan, Angus Ng, Bing Liu, S Yu Philip, et al. Top 10 algorithms in data mining. Knowledge and Information Systems, 14(1):1–37, 2008.
[6] DA Adeniyi, Z Wei, and Y Yongquan. Automated web usage data mining and recommendation system using k-nearest neighbor (KNN) classification method. Applied Computing and Informatics, 12(1):90–108, 2016.
[7] Bruno Trstenjak, Sasa Mikac, and Dzenana Donko. KNN with TF-IDF based framework for text categorization. Procedia Engineering, 69:1356–1364, 2014.
[8] BL Deekshatulu, Priti Chandra, et al. Classification of heart disease using k-nearest neighbor and genetic algorithm. Procedia Technology, 10:85–94, 2013.
[9] Sadegh Bafandeh Imandoust and Mohammad Bolandraftar. Application of k-nearest neighbor (KNN) approach for predicting economic events: theoretical background. International Journal of Engineering Research and Applications, 3(5):605–610, 2013.
[10] Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31. Springer Science & Business Media, 2013.
[11] Richard J Samworth et al. Optimal weighted nearest neighbour classifiers. The Annals of Statistics, 40(5):2733–2763, 2012.
[12] Charles J Stone. Consistent nonparametric regression. The Annals of Statistics, pages 595–620, 1977.
[13] Luc P Devroye, TJ Wagner, et al. Distribution-free consistency results in nonparametric discrimination and regression function estimation. The Annals of Statistics, 8(2):231–239, 1980.
[14] László Györfi, Michael Kohler, Adam Krzyzak, and Harro Walk. A Distribution-Free Theory of Nonparametric Regression. Springer Science & Business Media, 2006.
[15] Jianqing Fan and Irene Gijbels. Local Polynomial Modelling and Its Applications: Monographs on Statistics and Applied Probability 66, volume 66. CRC Press, 1996.
[16] Dietrich Wettschereck and Thomas G Dietterich. Locally adaptive nearest neighbor algorithms. Advances in Neural Information Processing Systems, pages 184–184, 1994.
[17] Shiliang Sun and Rongqing Huang. An adaptive k-nearest neighbor algorithm. In 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery.
[18] Anil K Ghosh. On nearest neighbor classification using adaptive choice of k. Journal of Computational and Graphical Statistics, 16(2):482–502, 2007.
[19] Li Baoli, Lu Qin, and Yu Shiwen. An adaptive k-nearest neighbor text categorization strategy. ACM Transactions on Asian Language Information Processing (TALIP), 3(4):215–226, 2004.
[20] Ian S Abramson. On bandwidth variation in kernel estimates - a square root law. The Annals of Statistics, pages 1217–1223, 1982.
[21] Bernard W Silverman. Density Estimation for Statistics and Data Analysis, volume 26. CRC Press, 1986.
[22] Serdar Demir and Öniz Toktamış. On the adaptive Nadaraya-Watson kernel regression estimators. Hacettepe Journal of Mathematics and Statistics, 39(3), 2010.
[23] Khulood Hamed Aljuhani et al. Modification of the adaptive Nadaraya-Watson kernel regression estimator. Scientific Research and Essays, 9(22):966–971, 2014.
[24] Brian Kulis. Metric learning: a survey. Foundations and Trends in Machine Learning, 5(4):287–364, 2012.
[25] Gérard Biau and Luc Devroye. Lectures on the Nearest Neighbor Method, volume 1. Springer, 2015.
[26] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pages 604–613. ACM, 1998.
2016
107
6,003
Protein contact prediction from amino acid co-evolution using convolutional networks for graph-valued images Vladimir Golkov1, Marcin J. Skwark2, Antonij Golkov3, Alexey Dosovitskiy4, Thomas Brox4, Jens Meiler2, and Daniel Cremers1 1 Technical University of Munich, Germany 2 Vanderbilt University, Nashville, TN, USA 3 University of Augsburg, Germany 4 University of Freiburg, Germany golkov@cs.tum.edu, marcin@skwark.pl, antonij.golkov@student.uni-augsburg.de, {dosovits,brox}@cs.uni-freiburg.de, jens.meiler@vanderbilt.edu, cremers@tum.de Abstract Proteins are responsible for most of the functions in life, and thus are the central focus of many areas of biomedicine. Protein structure is strongly related to protein function, but is difficult to elucidate experimentally, therefore computational structure prediction is a crucial task on the way to solve many biological questions. A contact map is a compact representation of the three-dimensional structure of a protein via the pairwise contacts between the amino acids constituting the protein. We use a convolutional network to calculate protein contact maps from detailed evolutionary coupling statistics between positions in the protein sequence. The input to the network has an image-like structure amenable to convolutions, but every “pixel” instead of color channels contains a bipartite undirected edge-weighted graph. We propose several methods for treating such “graph-valued images” in a convolutional network. The proposed method outperforms state-of-the-art methods by a considerable margin. 1 Introduction Proteins perform most of the functions in the cells of living organisms, acting as enzymes to perform complex chemical reactions, recognizing foreign particles, conducting signals, and building cell scaffolds – to name just a few. 
Their function is dictated by their three-dimensional structure, which can be quite involved, despite the fact that proteins are linear polymers composed of only 20 different types of amino acids. The sequence of amino acids dictates the three-dimensional structure and related proteins share both structure and function. Predicting protein structure from amino acid sequence remains a problem that is still largely unsolved. 1.1 Protein structure and contact maps The primary structure of a protein refers to the linear sequence of the amino acid residues that constitute the protein, as encoded by the corresponding gene. During or after its biosynthesis, a protein spatially folds into an energetically favourable conformation. Locally it folds into so-called secondary structure (α-helices and β-strands). The global three-dimensional structure into which the entire protein folds is referred to as the tertiary structure. Fig. 1a depicts the tertiary structure of a protein consisting of several α-helices. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Protein structure is mediated and stabilized by series of weak interactions (physical contacts) between pairs of its amino acids. Let L be the length of the sequence of a protein (i.e. the number of its amino acids). The tertiary structure can be partially summarized as a so-called contact map – a sparse L × L matrix C encoding the presence or absence of physical contact between all pairs of L amino acid residues of a protein. The entry Ci,j is equal to 1 if residues i and j are in contact and 0 if they are not. Intermediate values may encode different levels of contact likeliness. We use these intermediate values without rounding where possible because they hold additional information. The “contact likeliness” is a knowledge-based function derived from Protein Data Bank, dependent on the distance between Cβ atoms of involved amino acids and their type. 
It has been parametrized based on the amino acids’ heavy atoms making biophysically feasible contact in experimentally determined structures. (a) Tertiary structure (b) Contact map (c) Variants of contact (d) Co-evolution statistics Figure 1: Oxymyoglobin (a) and the contact between its amino acid residues 6 and 133. Helix–helix contacts correspond to “checkerboard” patterns in the contact map (b). Various variants of the contact 6/133 encountered in nature (native pose in upper left, remaining poses are theoretical models) (c) are reflected in the co-evolution statistics (d). 2 Methods The proposed method is based on inferring direct co-evolutionary couplings between pairs of amino acids of a protein, and predicting the contact map from them using a convolutional neural network. 2.1 Multiple sequence alignments As of today, the UniProt Archive (UniParc [1]) consists of approximately 130 million different protein sequences. This is only a small fraction of all the protein sequences existing on Earth, whose number is estimated to be on the order of 10^10 to 10^12 [2]. Despite this abundance, there exist only about 10^5 sequence families, which in turn adopt one of about 10^4 folds [2]. This is due to the fact that homologous proteins (proteins originating from common ancestors) are similar in terms of their structure and function. Homologs are under evolutionary pressure to maintain the structure and function of the ancestral protein, while at the same time adapting to changes in the environment. Evolutionarily related proteins can be identified by means of homology search using dynamic programming, hidden Markov models, and other statistical models, which group homologous proteins into so-called multiple sequence alignments. A multiple sequence alignment consists of sequences of related proteins, aligned such that corresponding amino acids share the same position (column). The 20 amino acid types are represented by the letters A,C,D,E,F,G,H,I,K,L,M,N,P,Q,R,S,T,V,W,Y.
In addition, a “gap” (represented as “–”) is used as a 21st character to account for insertions and deletions. For the purpose of this work, all the input alignments have been generated with jackhmmer, part of the HMMER package (version 3.1b2, http://hmmer.org), run against the UniParc database released in summer 2015. The alignment has been constructed with an E-value inclusion threshold of 1, allowing for the inclusion of distant homologs, at the risk of contaminating the alignment with potentially evolutionarily unrelated sequences. The resultant multiple sequence alignments have not been modified in any way, except for the removal of inserts (positions that were not present in the protein sequence of interest). Notably, contrary to many evolutionary approaches, we did not remove columns that (a) contained many gaps, (b) were too diverse or (c) were too conserved. In so doing, we emulated a fully automated prediction regime. 2.2 Potts model for co-evolution of amino acid residues Protein structure is stabilized by a series of contacts: weak, favourable interactions between amino acids adjacent in space (but not necessarily in sequence). If an amino acid becomes mutated in the course of evolution, breaking a favourable contact, there is an evolutionary pressure for a compensating mutation to occur in the interacting partner(s) to restore the protein to an unfrustrated state. These pressures lead to amino acid pairs varying in tandem in the multiple sequence alignments. The observed covariances can subsequently be used to predict which of the positions in the protein sequence are close together in space. The directly observed covariances are by themselves a poor predictor of inter-residue contact. This is due to the transitivity of correlations in multiple sequence alignments. When an amino acid A that is in contact with amino acids B and C mutates to A’, it exerts a pressure for B and C to adapt to this mutation, leading to a spurious, indirect correlation between B and C.
Oftentimes these spurious correlations are more prominent than the actual, direct ones. This problem can be modelled in terms of one- and two-body interactions, analogous to the Ising model of statistical mechanics (or its generalization – the Potts model). Solving an inverse Ising/Potts problem (inferring direct causes from a set of observations), while not feasible analytically, can be accomplished by approximate, numerical algorithms. Such approaches have recently been successfully applied to the problem of protein contact prediction [3, 4]. One of the most widely adopted approaches to this problem is pseudolikelihood maximization for inferring an inverse Potts model (plmDCA [3, 5]). It results in an L × L × 21 × 21 array of inferred evolutionary couplings between pairs of the L positions in the protein, described in terms of 21 × 21 coupling matrices. These coupling matrices depict the strength of the evolutionary pressure for particular amino acid type pairs (e.g. histidine–threonine) to be present at a given position pair – the higher the value, the more pressure there is. These values are not directly interpretable, as they depend on the environment the amino acids are in, their propensity to mutate and many other factors. So far, the best approach to obtain scores corresponding to contact propensities has been to compute the Frobenius norm of the individual coupling matrices, rendering a contact matrix, which is then subjected to average product correction [6]. Average product correction scales the value of the contact propensity based on the mean values for the involved positions and the mean value for the entire contact matrix. As there is insufficient data to conclusively infer all the parameters, and coupling inference is inherently ill-posed, regularization is required [3, 5]. Here we used l2 regularization with λ = 0.01.
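The Frobenius-norm reduction with average product correction described above can be sketched in a few lines of NumPy. The coupling array here is random stand-in data; the APC term subtracts the product of the row means divided by the overall mean, following the description of [6]:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 50
# Symmetrised random stand-in for the L x L x 21 x 21 coupling array J
J = rng.normal(size=(L, L, 21, 21))
J = (J + J.transpose(1, 0, 3, 2)) / 2           # enforce J[i,j,k,l] == J[j,i,l,k]

# 1. Frobenius norm of each 21 x 21 coupling block -> L x L score matrix S
S = np.linalg.norm(J, axis=(2, 3))

# 2. Average product correction: S_apc[i,j] = S[i,j] - mean_i * mean_j / mean_all
row_mean = S.mean(axis=1)                        # equals the column means (S is symmetric)
S_apc = S - np.outer(row_mean, row_mean) / S.mean()

assert np.allclose(S_apc, S_apc.T)               # corrected scores stay symmetric
assert abs(S_apc.mean()) < 1e-8                  # APC removes the mean background exactly
```

The corrected matrix S_apc is what would then be thresholded or ranked to propose contacts in the baseline pipeline.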
These approaches to reduce each 21 × 21 coupling matrix to only one value discard valuable information encoded in the matrices, consequently leading to a reduction in the expected predictive capability. In this work we use the entire L × L × 21 × 21 coupling data J in its unmodified form. The value Ji,j,k,l quantifies the co-evolution of residue type k at location i with residue type l at location j. The L × L × 21 × 21 array J serves as the main input to the convolutional network to predict the L × L contact map C. The following symmetries hold: Ci,j = Cj,i and Ji,j,k,l = Jj,i,l,k for all i, j, k, l. 2.3 Convolutional neural network for contact prediction The goal of this work is to predict the contact Ci,j between residues i and j from the co-evolution statistics Ji,j,k,l obtained from pseudolikelihood maximization [3]. Not only the local statistics (Ji,j,k,l)k,l for fixed (i, j) but also the neighborhood around (i, j) is informative for contact determination. In particular, contacts between different secondary structure elements are reflected both in the spatial contact pattern, such as the “checkerboard” pattern typical for helix–helix contacts, cf. Fig. 1b (the “i” and “j” dimensions), as well as in the residue types (the “k” and “l” dimensions) at (i, j) and in its neighborhood. Thus, a convolutional neural network [7] with convolutions over (i, j), i.e. learning the transformation to be applied to all w × w × 21 × 21 windows of (Ji,j,k,l), is a highly appropriate method for the prediction of Ci,j. The features in each “pixel” (i, j) are the entries of the 21 × 21 co-evolution statistics (Ji,j,k,l)k,l∈{1,...,21} between amino acid residues i and j. Fig. 1d shows the co-evolution statistics of residues 6 and 133, i.e. (J6,133,k,l)k,l∈{1,...,21}, of oxymyoglobin. These 21 · 21 entries can be vectorized to constitute the feature vector of length 441 at the respective “pixel”.
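The shape of the network input and the consequence of the symmetry above can be sketched as follows (random stand-in data; in the paper, J comes from pseudolikelihood maximization):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 30
J = rng.normal(size=(L, L, 21, 21))
J = (J + J.transpose(1, 0, 3, 2)) / 2        # enforce J[i,j,k,l] == J[j,i,l,k]

# Network input: one 441-dimensional feature vector per "pixel" (i, j)
features = J.reshape(L, L, 441)
assert features.shape == (L, L, 441)

# The symmetry of J means pixel (j, i) carries the transposed block of pixel (i, j)
i, j = 4, 17
assert np.allclose(features[j, i], J[i, j].T.ravel())
```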
The neural network input J and its output C should have the same size along the convolution dimensions “i” and “j”. In order to achieve this, the input boundaries are padded accordingly (i.e. by the receptive window size) along these dimensions. In order to help the network distinguish the padding values (e.g. zeros) from valid co-evolution values, the indicator function of the valid region (1 in the valid L × L region and 0 in the padded region) is introduced as an additional feature channel. Since our method is based on pseudolikelihood maximization [3] and convolutional networks, we refer to it as plmConv for short. 2.4 Convolutional neural network for bipartite-graph-valued images The fixed order of the 441 features can be considered acceptable since any input–output mapping can in principle be learned, assuming we have sufficient training data (and an appropriate network architecture). However, if the amount of training data is limited, then a better-structured, more compact representation might be of great advantage, as opposed to requiring the network to see most of the possible configurations of co-evolution. Such more compact representations can be obtained by relaxing the knowledge of the identities of the amino acid residues, as described in the following. The features at “pixel” (i, j) correspond to the weights of a (complete) bipartite undirected edge-weighted graph K21,21 with 21 + 21 vertices, with the first disjoint set of 21 vertices representing the 21 amino acid types at position i, the second set representing the 21 amino acid types at position j, and the edge weights representing the co-evolution of the respective variants. Thus, B = (Ji,j,k,l)k,l∈{1,...,21} is the biadjacency matrix of this graph, i.e. A = [[0, B], [Bᵀ, 0]] is its adjacency matrix. The edge weights (i.e. entries of B) are different at each “pixel” (i, j). There are different possibilities of passing these features (the entries of B) to a convolutional network.
We propose and evaluate the following possibilities to construct the feature vector at pixel (i, j):
1. Vectorize B, maintaining the order of the amino acid types;
2. Sort the vectorized matrix B;
3. Sort the rows of B by their row-wise norm, then vectorize;
4. Construct a histogram of the entries of B.
While the first method maintains the order of amino acid types, all others produce feature vectors that are invariant to permutations of the amino acid types. 2.5 Generalization to arbitrary graphs In other applications to graph-valued images with general (not necessarily bipartite) graphs, similar transformations as above can be applied to the adjacency matrix A. An additional useful property is the special role of the diagonal of A. Node weights can be included as additional features, and accordingly reordered. There has been work on neural networks which can process functions defined on graphs [8, 9, 10, 11]. In contrast to these approaches, in our case the input is defined on a regular grid, but the value of the input at each location is a graph. 2.6 Data sets The Critical Assessment of Techniques for Protein Structure Prediction (CASP) is a biennial community-wide experiment in the blind prediction of previously unknown protein structures. The prediction targets vary in difficulty, with some having a structure of homologous proteins already deposited in the Protein Data Bank (PDB), considered easy targets, some having no detectable homologs in PDB (hard targets), and some having entirely new folds (free modelling targets). The protein targets vary also in terms of available sequence homologs, which can range from only a few sequences to hundreds of thousands. We posit that the method we propose is robust and general. To illustrate its performance, we have intentionally trained it on a limited set of proteins originating from the CASP9 and CASP10 experiments and tested it on CASP11 proteins.
In so doing, we emulated the conditions of a real-life structure prediction experiment. The proteins from these experiments form a suitable data set for this analysis, as they (a) are varied in terms of structure and “difficulty”, (b) have previously unknown structures, which have been subsequently made public, (c) are timestamped and (d) have been subject to contact prediction attempts by other groups whose results are publicly available. Therefore, training on the CASP9 and CASP10 data sets allowed us to avoid cross-contamination. We are reasonably confident that the performance of the method originates from the method’s strengths and is not a result of overfitting. The training has been conducted on a subset of 231 proteins from CASP9 and CASP10, while the test set consisted of 89 proteins from CASP11 (all non-cancelled targets). Several proteins have been excluded from the training set for technical reasons: lack of any detectable homologs, too many homologs detected, or lack of a structure known at the time of publishing of the CASP sets. The problems with the number of sequences can be alleviated by attempting different homology detection strategies, which we did not do, as we wanted to keep the analysis homogeneous. 2.7 Neural network architecture Deep learning has strong advantages over handcrafted processing pipelines and is setting new performance records and bringing new insights in the biomedical community [12, 13]. However, parts of the community are adopting deep learning with a certain hesitation, even in areas where it is essential for scientific progress. One of the main objections is the belief that the craft of network architecture design and the network internals cannot be scientifically comprehended and lack theoretical underpinnings. This is a false belief. There are scientific results to the contrary, concerning the loss function [14] and network internals [15].
In the present work, we design the network architecture based on our knowledge of which features might be meaningful for the network to extract, and how. The first layer learns 128 filters of size 1 × 1. Thus, the 441 input features are compressed to 128 learned features. This compression enforces the grouping of similar amino acids by their properties. Examples of important properties are hydrophobicity, polarity, and size. Among the most relevant parts of the input information “cysteine (C) at position i has a strongly positive evolutionary coupling with histidine (H) at position j” (cf. Fig. 1d) are that the co-evolving amino acids have certain hydrophilicity properties; that both are polar; and that the one at position i is rather small and the one at position j is rather large; etc. One layer is sufficient to perform such a transformation. Note that we do not handcraft these features; the network learns feature extractors that are optimal in terms of the training data. Besides, compressing the inputs in this optimal way also reduces the number of weights of the subsequent layer, thus regularizing the model in a natural way, and reducing the run time and memory requirements. The second layer learns 64 filters of size 7 × 7. This allows the network to see the context (and end) of the contact between two secondary structure elements (e.g. a contact between two β-strands). In other words, this choice of the window size and number of filters is motivated by the fact that information such as “(i, j) is a contact between a β-strand at i and a β-strand at j, the arrangement is antiparallel, the contact ends two residues after i (and before j)” can be captured from a 7 × 7 window of the data, and well encoded in about 64 filters. The third and final layer learns one filter (returning the predicted contact map) with the window size 9 × 9.
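As a quick sanity check of this stack: for stride-1, dilation-1 convolutions, the receptive field grows as 1 + Σ(k − 1) over the kernel sizes k:

```python
def receptive_field(kernel_sizes):
    """Receptive field of stacked stride-1, dilation-1 convolutions."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

# The 1x1, 7x7, 9x9 stack described above:
assert receptive_field([1, 7, 9]) == 15
```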
Thus, the overall receptive window of the convolutional network is 15 × 15, which provides the required amount of context of the co-evolution data to predict the contacts. In particular, the relative position (including the angle) between two contacting α-helices can be well captured at this window size. At the same time, this deep architecture is different from having, say, a network with a single 15 × 15 convolutional layer, because a non-deep network would require seeing many possible 15 × 15 configurations in a non-abstract manner, and would tend to generalize badly and overfit. In contrast, abstraction to higher-level features is provided by the preceding layers in our architecture. We used mean squared error loss, dropout of 0.2 after the input layer and 0.5 after each hidden layer, a one-pixel stride, and no pooling. The network is trained in Lasagne (https://github.com/Lasagne) using the Adam algorithm [16] with learning rate 0.0001 for 100 epochs. 3 Results To assess the performance of protein contact prediction methods, we have used the contact likeliness criterion for Cβ distances (cf. Introduction), but the qualitative results are not dependent on the criterion chosen. We have evaluated predictions in terms of the top 10 pairs that are predicted most likely to be in contact. It is estimated that in a protein one can observe L to 3L contacts, where L is the length of the amino acid chain. Thus we have also evaluated greater numbers of predicted contacts. We have assessed the predictions with respect to the sequence separation. It is widely accepted that it is more difficult to predict long-range contacts than contacts separated by few amino acids in sequence space. At the same time, it is the long-range contacts that are most useful for restraining protein structure prediction simulations [17]. Maintaining the order of amino acid types (feature vector construction method #1) yielded the best results in our case, which is what we focus on exclusively in the following.
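The four feature-vector constructions of Section 2.4 can be sketched for a single 21 × 21 coupling block (random data for illustration; the histogram bin count and value range are arbitrary choices here):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(21, 21))         # coupling block at one "pixel" (i, j)

f1 = B.ravel()                                          # 1. plain vectorization (order kept)
f2 = np.sort(B.ravel())                                 # 2. sorted entries
f3 = B[np.argsort(np.linalg.norm(B, axis=1))].ravel()   # 3. rows sorted by norm, then vectorized
f4, _ = np.histogram(B, bins=64, range=(-4, 4))         # 4. histogram of the entries

# E.g. methods 2 and 4 are invariant to a permutation of the amino acid types:
P = rng.permutation(21)
B_perm = B[P][:, P]
assert np.allclose(np.sort(B_perm.ravel()), f2)
assert np.array_equal(np.histogram(B_perm, bins=64, range=(-4, 4))[0], f4)
```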
(a) PPV for our approach vs. plmDCA20 and MetaPSICOV (b) PPV for the discussed methods as a function of contact definition Figure 2: Method performance. Panel (a): prediction accuracy of plmConv (Y-axis) vs plmDCA and MetaPSICOV (X-axis, in red and yellow, respectively); lines: least-squares fit, circles: individual comparisons. Panel (b): prediction accuracy, depending on the contact definition. X-axis: Cβ distance threshold for an amino acid pair to be in contact. plmConv yields more accurate predictions than plmDCA. We compared the predictive performance of the proposed plmConv method to plmDCA in terms of positive predictive value (PPV) at different prediction counts and different sequence separations (see Table 1 and Fig. 2a). Regardless of the chosen threshold, plmConv yields considerably higher accuracy. This effect is particularly important in the context of long-range contacts, which tend to be underpredicted by plmDCA and related methods, but are readily recovered by plmConv. The notable improvement in predictive power is important, given that both plmDCA and plmConv use exactly the same data and the same inference algorithm, but differ in the processing of the inferred co-evolution matrices. We posit that this may have long-standing implications for evolutionary coupling analysis, some of which we discuss below. plmConv is more accurate than MetaPSICOV, while remaining more flexible. We compared our method to MetaPSICOV [18, 19], the method that performed best in the CASP11 experiment. We observed that plmConv results in overall higher prediction accuracy than MetaPSICOV (see Table 1 and Fig. 2a). This holds for all the criteria, except for the top-ranked short contacts. MetaPSICOV performs slightly better at the top-ranked short-range contacts, but these are easier to predict, and less useful for protein folding [17].
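Positive predictive value at a given contact budget can be computed as in the following sketch (synthetic data; the separation threshold of 6 matches the "non-local" criterion used in the evaluation):

```python
import numpy as np

def ppv_at_top_k(scores, contacts, k, min_sep=6):
    """PPV of the k highest-scoring residue pairs at sequence separation >= min_sep."""
    L = scores.shape[0]
    iu = np.triu_indices(L, k=min_sep)           # pairs (i, j) with j - i >= min_sep
    order = np.argsort(scores[iu])[::-1][:k]     # top-k pairs by predicted score
    return contacts[iu][order].mean()            # fraction that are true contacts

rng = np.random.default_rng(3)
L = 60
true_C = np.triu(rng.random((L, L)) < 0.1, k=6).astype(float)
true_C = true_C + true_C.T                       # symmetric synthetic "ground truth"

# A scorer that ranks all true contacts first achieves PPV 1 at L/5 contacts
perfect = ppv_at_top_k(true_C + 1e-3 * rng.random((L, L)), true_C, k=L // 5)
assert perfect == 1.0
```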
It is worth noting that MetaPSICOV achieves its high prediction accuracy by combining multiple sources of co-evolution data (including methods functionally identical to plmDCA) with predicted biophysical properties of a protein (e.g. secondary structure) and a feed-forward neural network. In plmConv we are able to achieve higher performance by using (a) an arbitrary alignment and (b) a single co-evolution result, which potentially allows for tuning the hyperparameters of (a) and (b) to answer relevant biological questions.

Separation  Method      Top 10  L/10   L/5    L/2    L
All         MetaPSICOV  0.797   0.761  0.717  0.615  0.516
            plmDCA      0.598   0.570  0.525  0.435  0.356
            plmConv     0.807   0.768  0.729  0.663  0.573
Short       MetaPSICOV  0.754   0.683  0.583  0.415  0.294
            plmDCA      0.497   0.415  0.318  0.229  0.178
            plmConv     0.724   0.654  0.581  0.438  0.320
Medium      MetaPSICOV  0.710   0.645  0.559  0.419  0.302
            plmDCA      0.506   0.438  0.355  0.253  0.180
            plmConv     0.744   0.673  0.583  0.428  0.304
Long        MetaPSICOV  0.594   0.562  0.522  0.436  0.339
            plmDCA      0.536   0.516  0.455  0.372  0.285
            plmConv     0.686   0.651  0.616  0.531  0.430

Table 1: Positive predictive value for all non-local (separation 6+ positions), short-range, mid-range and long-range (6–11, 12–23 and 24+ positions) contacts. We demonstrate results for the Top 10 contacts per protein, as well as the customary thresholds of L/10, L/5, L/2 and L contacts per protein, where L is the length of the amino acid chain.

Figure 3: Positive predictive value for the described methods at L contacts, as a function of the information content of the alignment. Scatter plot: observed raw values. Line plot: rolling average with window size 15.

plmConv pushes the boundaries of inference with few sequences. One of the major drawbacks of statistical inference for evolutionary analysis is its dependence on the availability of large numbers of homologous sequences in multiple sequence alignments. Our method to a large extent alleviates this problem. As illustrated in Fig.
3, plmConv outperforms plmDCA across the entire range. MetaPSICOV appears to be slightly better at the low-count end of the spectrum, which we believe is due to the way MetaPSICOV augments the prediction process with additional data – a technique known to improve the prediction, which we have expressly not used in this work. plmConv predicts long-range contacts more accurately. As mentioned above, it is the long-range contacts which are of most utility for protein structure prediction experiments. Table 1 demonstrates that plmConv is highly suitable for predicting long-range contacts, yielding better performance across all the contact count thresholds. T0784: a success story. One of the targets in CASP11 (target ID: T0784) was a DUF4425 family protein (BACOVA_05332) from Bacteroides ovatus (PDB ID: 4qey). The number of identifiable sequence homologs for this protein was relatively low, which resulted in an uninterpretable contact map obtained by plmDCA. The same co-evolution statistics used as input to plmConv yielded a contact map which not only was devoid of the noise present in plmDCA’s contact map, but also uncovered numerous long-range contacts that were not identifiable previously. The contact map produced by plmConv for this target is also of much higher utility than the one returned by MetaPSICOV. Note in Fig. 4c how the MetaPSICOV prediction lacks nearly all the long-range contacts, which are present in the plmConv prediction. (a) Structure (b) Contact maps predicted by our method vs. plmDCA (c) Contact maps predicted by our method vs. MetaPSICOV Figure 4: An example of one of the CASP11 proteins (T0784), where plmConv is able to recover the contact map, while other methods cannot. True contacts (ground truth) are marked in gray. Predictions of the respective methods are marked in color, with true positives in green and false positives in red.
Predictions along the diagonal with a separation of 5 amino acids or less have not been considered in computing the positive predictive value and have been marked in lighter colors in the plots. 4 Discussion and Conclusions In this work we proposed an entirely new way to handle the outputs of co-evolutionary analyses of multiple sequence alignments of homologous proteins. We demonstrated that this method is considerably superior to the current ways of handling co-evolution data, extracts more information from them, and consequently greatly aids protein contact prediction based on these data. Contact prediction with our method is more accurate and 2 to 3 times faster than with MetaPSICOV. Relevance to the field. Until now, the utility of co-evolution-based contact prediction was limited because most of the proteins that had a sufficiently high amount of sequence homologs also had their structures determined and available for comparative modelling. As plmConv is able to predict high-accuracy contact maps from as few as 100 sequences, it opens a whole new avenue of possibilities for the field. While there are only a few protein families that have thousands of known homologs but no known structure, there are hundreds which are potentially within the scope of this method. We postulate that this method should allow for the computational elucidation of more structures, be it by means of pure computational simulation, or simulation guided by predicted contacts and sparse experimental restraints. plmConv allows for varying prediction parameters. One of the strengths of the proposed method is that it is agnostic to the input data, in particular to the way the input alignments are constructed and to the inference parameters (regularization strength). Therefore, one could envision using alignments of close homologs to elucidate the co-evolution of a variable region in the protein (e.g.
variable regions of antibodies, extracellular loops of G protein–coupled receptors, etc.), or distant homologs to yield structural insights into the overall fold of the protein. In the same way, one could vary the regularization strength of the inference, with stronger regularization allowing for a more precise elucidation of the few couplings (and consequently contacts) that are most significant for protein stability or structure from the evolutionary point of view. Conversely, it is possible to relax the regularization strength and let the data speak for itself, which could potentially result in a better picture of the overall contact map and give a holistic insight into the evolutionary constraints on the structure of the protein in question. The method we propose is directly applicable to a vast array of biological problems, being both accurate and flexible. It can use arbitrary input data and prediction parameters, which allows the end user to tailor it to answer pertinent biological questions. Most importantly, though, even if trained on a heavily constrained data set, it is able to produce results that exceed the predictive capabilities of the state-of-the-art methods in protein contact prediction at a fraction of the computational effort, making it perfectly suitable for large-scale analyses. We expect that the performance of the method will further improve when trained on a larger, more representative set of proteins. Acknowledgments Grant support: Deutsche Telekom Foundation, ERC Consolidator Grant “3DReloaded”, ERC Starting Grant “VideoLearn”. References [1] Rasko Leinonen, Federico Garcia Diez, David Binns, Wolfgang Fleischmann, Rodrigo Lopez, and Rolf Apweiler. UniProt archive. Bioinformatics, 20(17):3236–3237, 2004. [2] In-Geol Choi and Sung-Hou Kim. Evolution of protein structural classes and protein sequence families. Proceedings of the National Academy of Sciences of the United States of America, 103(38):14056–61, 2006.
[3] Magnus Ekeberg, Cecilia Lövkvist, Yueheng Lan, Martin Weigt, and Erik Aurell. Improved contact prediction in proteins: Using pseudolikelihoods to infer Potts models. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 87(1):1–19, 2013. [4] Faruck Morcos, Andrea Pagnani, Bryan Lunt, Arianna Bertolino, Debora S Marks, Chris Sander, Riccardo Zecchina, José N Onuchic, Terence Hwa, and Martin Weigt. Direct-coupling analysis of residue coevolution captures native contacts across many protein families. Proceedings of the National Academy of Sciences of the United States of America, 108(49):E1293–301, 2011. [5] Christoph Feinauer, Marcin J. Skwark, Andrea Pagnani, and Erik Aurell. Improving contact prediction along three dimensions. PLOS Computational Biology, 10(10):e1003847, 2014. [6] S. D. Dunn, L. M. Wahl, and G. B. Gloor. Mutual information without the influence of phylogeny or entropy dramatically improves residue contact prediction. Bioinformatics, 24(3):333–340, 2008. [7] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten ZIP code recognition. Neural Computation, 1(4):541–551, 1989. [8] Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009. [9] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. In International Conference on Learning Representations, 2014. [10] Mikael Henaff, Joan Bruna, and Yann LeCun. Deep convolutional networks on graph-structured data. arXiv:1506.05163, 2015. [11] David K Duvenaud, Dougal Maclaurin, Jorge Iparraguirre, Rafael Bombarell, Timothy Hirzel, Alan Aspuru-Guzik, and Ryan P Adams. Convolutional networks on graphs for learning molecular fingerprints. Advances in Neural Information Processing Systems 28, pages 2215–2223, 2015.
[12] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: convolutional networks for medical image segmentation. In Medical Image Computing and Computer-Assisted Intervention, pages 234–241, 2015. [13] Vladimir Golkov, Alexey Dosovitskiy, Jonathan Sperl, Marion Menzel, Michael Czisch, Philipp Samann, Thomas Brox, and Daniel Cremers. q-Space deep learning: twelve-fold shorter and model-free diffusion MRI scans. IEEE Transactions on Medical Imaging, 35(5):1344–1351, 2016. [14] Anna Choromanska, Mikael Henaff, Michael Mathieu, Gérard Ben Arous, and Yann LeCun. The loss surfaces of multilayer networks. Journal of Machine Learning Research: Workshop and Conference Proceedings, 38:192–204, 2015. [15] Alexey Dosovitskiy and Thomas Brox. Inverting visual representations with convolutional networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. [16] Diederik P. Kingma and Jimmy Lei Ba. Adam: a method for stochastic optimization. In International Conference on Learning Representations, 2015. [17] M Michael Gromiha and Samuel Selvaraj. Importance of long-range interactions in protein folding. Biophysical Chemistry, 77(1):49–68, 1999. [18] David T. Jones, Tanya Singh, Tomasz Kosciolek, and Stuart Tetchner. MetaPSICOV: Combining coevolution methods for accurate prediction of contacts and long range hydrogen bonding in proteins. Bioinformatics, 31(7):999–1006, 2015. [19] Tomasz Kosciolek and David T. Jones. Accurate contact predictions using covariation techniques and machine learning. Proteins: Structure, Function and Bioinformatics, 84(Suppl 1):145–151, 2016.
Learnable Visual Markers Oleg Grinchuk1, Vadim Lebedev1,2, and Victor Lempitsky1 1Skolkovo Institute of Science and Technology, Moscow, Russia 2Yandex, Moscow, Russia Abstract We propose a new approach to designing visual markers (analogous to QR-codes, markers for augmented reality, and robotic fiducial tags) based on the advances in deep generative networks. In our approach, the markers are obtained as color images synthesized by a deep network from input bit strings, whereas another deep network is trained to recover the bit strings back from the photos of these markers. The two networks are trained simultaneously in a joint backpropagation process that takes characteristic photometric and geometric distortions associated with marker fabrication and marker scanning into account. Additionally, a stylization loss based on statistics of activations in a pretrained classification network can be inserted into the learning in order to shift the marker appearance towards some texture prototype. In the experiments, we demonstrate that the markers obtained using our approach are capable of retaining bit strings that are long enough to be practical. The ability to automatically adapt markers according to the usage scenario and the desired capacity, as well as the ability to combine information encoding with artistic stylization, are the unique properties of our approach. As a byproduct, our approach provides an insight into the structure of patterns that are most suitable for recognition by ConvNets and into their ability to distinguish composite patterns. 1 Introduction Visual markers (also known as visual fiducials or visual codes) are used to facilitate human–environment and robot–environment interaction, and to aid computer vision in resource-constrained and/or accuracy-critical scenarios.
Examples of such markers include simple 1D (linear) bar codes [31] and their 2D (matrix) counterparts such as QR-codes [9] or Aztec codes [18], which are used to embed chunks of information into objects and scenes. In robotics, AprilTags [23] and similar methods [3, 4, 26] are a popular way to make locations, objects, and agents easily identifiable for robots. Within the realm of augmented reality (AR), ARCodes [6] and similar marker systems [13, 21] are used to enable real-time camera pose estimation with high accuracy, low latency, and on low-end devices. Overall, such markers can embed information into the environment in a more compact and language-independent way as compared to traditional human text signatures, and they can also be recognized and used by autonomous and human-operated devices in a robust way. Existing visual markers are designed “manually” based on considerations of the ease of processing by computer vision algorithms, the information capacity, and, less frequently, aesthetics. Once a marker family is designed, a computer vision-based approach (a marker recognizer) has to be engineered and tuned in order to achieve reliable marker localization and interpretation [1, 17, 25]. The two processes of the visual marker design on one hand and the marker recognizer design on the other hand are thus separated into two subsequent steps, and we argue that such separation makes the corresponding design choices inherently suboptimal.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

In particular, the third aspect (aesthetics) is usually overlooked, which leads to visually-intrusive markers that in many circumstances might not fit the style of a certain environment and make this environment “computer-friendly” at the cost of “human-friendliness”. In this work, we propose a new general approach to constructing visual markers that leverages recent advances in deep generative learning.
To this end, we suggest embedding the two tasks of the visual marker design and the marker recognizer design into a single end-to-end learning framework. Within our approach, the learning process produces markers and marker recognizers that are adapted to each other “by design”. While our idea is more general, we investigate the case where the markers are synthesized by a deep neural network (the synthesizer network), and where they are recognized by another deep network (the recognizer network). In this case, we demonstrate how these two networks can both be learned by a joint stochastic optimization process. The benefits of the new approach are thus several-fold:

1. As we demonstrate, the learning process can take into account the adversarial effects that complicate recognition of the markers, such as perspective distortion, confusion with background, low resolution, motion blur, etc. All such effects can be modeled at training time as piecewise-differentiable transforms. In this way they can be embedded into the learning process, which will adapt the synthesizer and the recognizer to be robust with respect to such effects.

2. It is easy to control the trade-offs between the complexity of the recognizer network, the information capacity of the codes, and the robustness of the recognition towards different adversarial effects. In particular, one can set the recognizer to have a certain architecture, fix the variability and the strength of the adversarial effects that need to be handled, and then the synthesizer will adapt so that the most “legible” codes for such circumstances can be computed.

3. Last but not least, the aesthetics of the neural codes can be brought into the optimization. Towards this end we show that we can augment the learning objective with a special stylization loss inspired by [7, 8, 29]. Including such a loss facilitates the emergence of stylized neural markers that look like instances of a designer-provided stochastic texture.
While such a modification of the learning process can reduce the information capacity of the markers, it can greatly increase the “human-friendliness” of the resulting markers. Below, we introduce our approach and then briefly discuss its relation to prior art. We then demonstrate several examples of learned marker families.

2 Learnable visual markers

We now detail our approach (Figure 1). Our goal is to build a synthesizer network S(b; θS) with learnable parameters θS that can encode a bit sequence b = {b1, b2, . . . , bn} containing n bits into an image M of size m-by-m (a marker). For notational simplicity in further derivations, we assume that bi ∈ {−1, +1}. To recognize the markers produced by the synthesizer, a recognizer network R(I; θR) with learnable parameters θR is created. The recognizer takes an image I containing a marker and infers the real-valued sequence r = {r1, r2, . . . , rn}. The recognizer is paired to the synthesizer to ensure that sign(ri) = bi, i.e. that the signs of the numbers inferred by the recognizer correspond to the bits encoded by the synthesizer. In particular, we can measure the success of the recognition using a simple loss function based on an element-wise sigmoid:

$$L(b, r) = -\frac{1}{n}\sum_{i=1}^{n} \sigma(b_i r_i) = -\frac{1}{n}\sum_{i=1}^{n} \frac{1}{1 + \exp(-b_i r_i)} \qquad (1)$$

where the loss ranges between −1 (perfect recognition) and 0. In real life, the recognizer network does not get to work with the direct outputs of the synthesizer. Instead, the markers produced by the synthesizer network are somehow embedded into an environment (e.g. via printing or using electronic displays), and later their images are captured by some camera controlled by a human or by a robot.
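The decoding loss of Eq. (1) is simple enough to compute directly; below is a minimal sketch in plain Python (the function name and the toy bit/response vectors are ours, not from the paper):

```python
import math

def decoding_loss(b, r):
    """Eq. (1): L(b, r) = -(1/n) * sum_i sigmoid(b_i * r_i).
    Ranges from -1 (confident, fully correct recognition) to 0."""
    assert len(b) == len(r)
    n = len(b)
    return -sum(1.0 / (1.0 + math.exp(-bi * ri)) for bi, ri in zip(b, r)) / n

b = [+1, -1, +1, +1]
print(decoding_loss(b, [10.0, -10.0, 10.0, 10.0]))   # correct signs, high confidence: close to -1
print(decoding_loss(b, [-10.0, 10.0, -10.0, -10.0])) # wrong signs: close to 0
```

Note that only the signs of the recognizer outputs ri carry the decoded bits; their magnitudes act as confidences, which is what makes the loss differentiable.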
Figure 1: The outline of our approach and the joint learning process. Our core architecture consists of the synthesizer network that converts input bit sequences into visual markers, the rendering network that simulates photometric and geometric distortions associated with marker printing and capturing, and the recognizer network that is designed to recover the input bit sequence from the distorted markers. The whole architecture is trained end-to-end by backpropagation, after which the synthesizer network can be used to generate markers, and the recognizer network to recover the information from the markers placed in the environment. Additionally, we can enforce the visual similarity of markers to a given texture sample using the mismatch in deep Gram matrix statistics in a pretrained network [7] as the second loss term during learning (right part).

During learning, we model the transformation between a marker produced by the synthesizer and the image of that marker using a special feed-forward network (the renderer network) T(M; φ), where the parameters of the renderer network φ are sampled during learning and correspond to background variability, lighting variability, perspective slant, blur kernel, color shift/white balance of the camera, etc. In some scenarios, the non-learnable parameters φ can be called nuisance parameters, although in others we might be interested in recovering some of them (e.g. the perspective transform parameters). During learning, φ is sampled from some distribution Φ which should model the variability of the above-mentioned effects in the conditions under which the markers are meant to be used.
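The sampling of φ ∼ Φ can be illustrated with a toy photometric-only renderer. The transform shapes below follow the color transform (c1·x^c2 + c3) and contrast reduction (kx + (1 − k)·0.5) described later in the experiments, but the specific parameter ranges and function names are illustrative assumptions, not the authors' exact settings:

```python
import random

def sample_phi(delta=0.1, rng=random):
    """Draw photometric nuisance parameters phi ~ Phi (ranges are illustrative)."""
    return {
        "c1": 1.0 + rng.uniform(-delta, delta),  # multiplicative gain
        "c2": 1.0 + rng.uniform(-delta, delta),  # gamma exponent
        "c3": rng.uniform(-delta, delta),        # additive shift
        "k": rng.uniform(0.7, 1.0),              # contrast-retention factor
    }

def photometric(pixels, phi):
    """Apply c1 * x**c2 + c3, then contrast reduction k*x + (1-k)*0.5, clipped to [0, 1]."""
    out = []
    for x in pixels:
        y = phi["c1"] * (x ** phi["c2"]) + phi["c3"]
        y = phi["k"] * y + (1.0 - phi["k"]) * 0.5
        out.append(min(1.0, max(0.0, y)))
    return out

marker = [0.0, 0.25, 0.5, 0.75, 1.0]   # a toy 1-D "marker"
rendered = photometric(marker, sample_phi())
```

In the actual system each such transform is a differentiable layer so that gradients can flow back to the synthesizer; this sketch only shows the forward sampling of nuisance parameters.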
When our only objective is robust marker recognition, the learning process can be framed as the minimization of the following functional:

$$f(\theta_S, \theta_R) = \mathbb{E}_{b \sim U(n),\; \phi \sim \Phi}\; L\Big(b,\; R\big(T(S(b; \theta_S); \phi);\; \theta_R\big)\Big) \qquad (2)$$

Here, the bit sequences b are sampled uniformly from U(n) = {−1, +1}^n, passed through the synthesizer, the renderer, and the recognizer, with the (minus) loss (1) being used to measure the success of the recognition. The parameters of the synthesizer and the recognizer are thus optimized to maximize the success rate. The minimization of (2) can then be accomplished using a stochastic gradient descent algorithm, e.g. ADAM [14]. Each iteration of the algorithm samples a mini-batch of different bit sequences as well as different rendering layer parameter sets and updates the parameters of the synthesizer and the recognizer networks in order to minimize the loss (1) for these samples.

Practical implementation. As mentioned above, the components of the architecture, namely the synthesizer, the renderer, and the recognizer, can be implemented as feed-forward networks. The recognizer network can be implemented as a feedforward convolutional network [16] with n output units. The synthesizer can use multiplicative and up-convolutional [5, 34] layers, as well as elementwise non-linearities. Implementing the renderer T(M; φ) (Figure 2) requires non-standard layers. We have implemented the renderer as a chain of layers, each introducing some “nuisance” transformation. We have implemented a special layer that superimposes an input over a bigger background patch drawn from a random pool of images. We use the spatial transformer layer [11] to implement the geometric distortion in a differentiable manner. Color shifts and intensity changes can be implemented using differentiable elementwise transformations (linear, multiplicative, gamma). Blurring associated with lens effect or motion can be simply implemented using a convolutional layer.
The nuisance transformation layers can be chained, resulting in a renderer layer that can model complex geometric and photometric transformations (Figure 2).

Figure 2: Visualizations of the rendering network T(M; φ) (stages: spatial transform, color transform, blur, superimpose). For the input marker M on the left, the output of the network is obtained through several stages (which are all piecewise-differentiable w.r.t. inputs); on the right, the outputs T(M; φ) for several random nuisance parameters φ are shown. The use of piecewise-differentiable transforms within T allows backpropagation through T.

Controlling the visual appearance. Interestingly, we observed that under variable conditions, the optimization of (2) results in markers that have a consistent and interesting visual texture (Figure 3). Despite such style consistency, it might be desirable to control the appearance of the resulting markers more explicitly, e.g. using some artistic prototypes. Recently, [7] have achieved remarkable results in texture generation by measuring the statistics of textures using Gram matrices of convolutional maps inside deep convolutional networks trained to classify natural images. Texture synthesis can then be achieved by minimizing the deviation between such statistics of generated images and of style prototypes. Based on their approach, [12, 29] have suggested including such a deviation as a loss in the training process for deep feedforward generative neural networks. In particular, the feed-forward networks in [29] are trained to convert noise vectors into textures. We follow this line of work and augment our learning objective (2) with the texture loss of [7]. Thus, we consider a feed-forward network C(M; γ) that computes the result of the t-th convolutional layer of a network trained for large-scale natural image classification, such as the VGGNet [28]. For an image M, the output C(M; γ) thus contains k 2D channels (maps).
The network C uses the parameters γ that are pre-trained on a large-scale dataset and that are not part of our learning process. The style of an image M is then defined using the following k-by-k Gram matrix G(M; γ), with each element defined as:

$$G_{ij}(M; \gamma) = \big\langle C_i(M; \gamma),\; C_j(M; \gamma) \big\rangle \qquad (3)$$

where C_i and C_j are the i-th and the j-th maps and the inner product is taken over all spatial locations. Given a prototype texture M^0, the learning objective can be augmented with the term:

$$f_{\mathrm{style}}(\theta_S) = \mathbb{E}_{b \sim U(n)} \left\| G(S(b; \theta_S); \gamma) - G(M^0; \gamma) \right\|^2 \qquad (4)$$

The incorporation of the term (4) forces the markers S(b; θS) produced by the synthesizer to have a visual appearance similar to instances of the texture defined by the prototype M^0 [7].

3 Related Work

We now discuss the classes of deep learning methods that, to the best of our understanding, are most related to our approach. Our work is partially motivated by the recent approaches that analyze and visualize pretrained deep networks by synthesizing color images evoking certain responses in these networks. Towards this end, [27] generate examples that maximize probabilities of certain classes according to the network, [33] generate visual illusions that maximize such probabilities while retaining similarity to a predefined image of a potentially different class, and [22] also investigate ways of generating highly-abstract and structured color images that maximize probabilities of a certain class. Finally, [20] synthesize color images that evoke a predefined vector of responses at a certain level of the network for the purpose of network inversion. Our approach is related to these approaches, since our markers can be regarded as stimuli invoking certain responses in the recognizer network. Unlike these approaches, our recognizer network is not kept fixed but is updated together with the synthesizer network that generates the marker images.
Figure 3: Visualization of the markers learned by our approach under different circumstances shown in the captions (see text for details). The captions also show the bit length, the capacity of the resulting encoding (in bits), as well as the accuracy achieved during training. In each case we show six markers: (1) the marker corresponding to a bit sequence consisting of −1, (2) the marker corresponding to a bit sequence consisting of +1, (3) and (4) markers for two random bit sequences that differ by a single bit, (5) and (6) two markers corresponding to two more random bit sequences. Under many conditions a characteristic grid pattern emerges. [Panel captions: 64 bits, default params, C=59.9, p=99.3%; 96 bits, low affine, C=90.2, p=99.3%; 64 bits, low affine σ=0.05, C=61.2, p=99.5%; 8 bits, high blur, C=7.91, p=99.9%; 32 bits, grayscale, C=27.9, p=98.3%; 64 bits, nonlinear encoder, C=58.4, p=98.9%; 64 bits, thin network, C=40.1, p=93.2%; 64 bits, 16 pixel marker, C=56.8, p=98.5%.]

Another obvious connection is to autoencoders [2], which are models trained to (1) encode inputs into a compact intermediate representation through the encoder network and (2) recover the original input by passing the compact representation through the decoder network. Our system can be regarded as a special kind of autoencoder with a certain format of the intermediate representation (a color image). Our decoder is trained to be robust to a certain class of transformations of the intermediate representations that are modeled by the rendering network. In this respect, our approach is related to variational autoencoders [15] that are trained with stochastic intermediate representations and to denoising autoencoders [30] that are trained to be robust to noise. Finally, our approach for creating textured markers can be related to steganography [24], which aims at hiding a signal in a carrier image.
Unlike steganography, we do not aim to conceal information, but just to minimize its “intrusiveness”, while keeping the information machine-readable in the presence of distortions associated with printing and scanning.

4 Experiments

Below, we present a qualitative and quantitative evaluation of our approach. For longer bit sequences, the approach might not be able to train a perfect pair of a synthesizer and a recognizer, and therefore, similarly to other visual marker systems, it makes sense to use error-correcting encoding of the signal. Since the recognizer network returns the odds for each bit in the recovered signal, our approach is suitable for any probabilistic error-correction coding [19].

Synthesizer architectures. For the experiments without texture loss, we use the simplest synthesizer network, which consists of a single linear layer (with a 3m² × n matrix and a bias vector) that is followed by an element-wise sigmoid. For the experiments with texture loss, we started with the synthesizer used in [29], but found that it can be greatly simplified for our task. Our final architecture takes a binary code as input and transforms it with a single fully connected layer and a series of 3 × 3 convolutions with 2× upsamplings in between.

Recognizer architectures. Unless reported otherwise, the recognizer network was implemented as a ConvNet with three convolutional layers (96 5 × 5 filters followed by max-pooling and ReLU), and two fully-connected layers with 192 and n output units respectively (where n is the length of the code). We find this architecture sufficient to successfully deal with marker encoding. In some experiments we have also considered a much smaller network with 24 maps in the convolutional layers and 48 units in the penultimate layer (the “thin network”). In general, convergence during training greatly benefits from adding Batch Normalization [10] after every convolutional layer.
Figure 4: Examples of textured 64-bit marker families. The texture prototype is shown in the first column, while the five remaining columns show markers for the following sequences: all −1, all +1, 32 consecutive −1 followed by 32 +1, and, finally, two random bit sequences that differ by a single bit.

During our experiments with texture loss, we used a VGGNet-like architecture with 3 blocks, each consisting of two 3 × 3 convolutions and max-pooling, followed by two dense layers.

Rendering settings. We perform a spatial transform as an affine transformation, where the 6 affine parameters are sampled from [1, 0, 0, 0, 1, 0] + N(0, σ) (assuming the origin at the center of the marker). The example for σ = 0.1 is shown in Fig. 2. We leave more complex spatial transforms (e.g. thin plate splines [11]) that can make markers more robust to bending for future work. Some resilience to bending can still be observed in our qualitative results. Given an image x, we implement the color transformation layer as c1·x^c2 + c3, where the parameters are sampled from the uniform distribution U[−δ, δ]. As we find that printed markers tend to reduce the color contrast, we add a contrast reduction layer that transforms each value to kx + (1 − k)·0.5 for a random k.

Quantitative measurements. To quantify the performance of our markers under different circumstances, we report the accuracy p to which our system converges during the learning under different settings (to evaluate the accuracy, we threshold recognizer predictions at zero). Whenever we vary the signal length n, we also report the capacity of the code, which is defined as C = n(1 − H(p)), where H(p) = −p log p − (1 − p) log(1 − p) is the coding entropy. Unless specified otherwise, we use the rendering network settings visualized in Figure 2, which gives an impression of the variability and the difficulty of the recovery problem, as the recognizer network is applied to the outputs of this rendering network.

Experiments without texture loss.
The bulk of the experiments without the texture loss has been performed with m = 32, i.e. 32 × 32 patches (we used bilinear interpolation when printing or visualizing). The learned marker families with the base architectures as well as with their variations are shown in Figure 3. It is curious to see the emergence of lattice structures (even though our synthesizer network in this case was a simple single-layer multiplicative network). Apparently, such lattices are most efficient in terms of storing information for later recovery with a ConvNet. It can also be seen how the system can adapt the markers to varying bit lengths or to varying robustness demands (e.g. to increasing blur or geometric distortions). We have further plotted how the quantitative performance depends on the bit length and on the marker size in Figure 6.

Figure 5: Screenshots of the marker recognition process (the black box is a part of the user interface and corresponds to perfect alignment). The captions are in (number of correctly recovered bits / total sequence length) format. The rightmost two columns correspond to stylized markers. These marker families were trained with spatial variances σ = 0.1, 0.05, 0.1, 0.05, 0.05 respectively. Larger σ leads to code recovery robustness with respect to affine transformations. [Panel captions: 64/64, 63/64, 124/128, 32/32, 64/64; 59/64, 62/64, 122/128, 31/32, 56/64; 64/64, 64/64, 126/128, 32/32, 64/64; 56/64, 59/64, 115/128, 31/32, 60/64.]

Experiments with texture loss. An interesting effect we have encountered while training the synthesizer with texture loss and a small output marker size is that it often ended up producing very similar patterns. We tried to tweak the architecture to handle this problem but eventually found that it goes away for larger markers.

Performance of real markers. We also show some qualitative results that include printing (on a laser printer using various backgrounds) and capturing (with a webcam) of the markers.
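As an aside, the capacity measure C = n(1 − H(p)) quoted in the quantitative results can be reproduced in a few lines (a sketch; we assume the base-2 logarithm, which the text does not state explicitly):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def capacity(n, p):
    """Effective capacity (in bits) of an n-bit code decoded with per-bit accuracy p."""
    return n * (1.0 - binary_entropy(p))

print(capacity(64, 0.993))  # roughly 60 bits, in the ballpark of the C = 59.9 reported at p = 99.3%
```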
Characteristic results in Figure 5 demonstrate that our system can successfully recover encoded signals with a small number of mistakes. The number of mistakes can be further reduced by applying the system with jitter and averaging the odds (not implemented here). Here, we aid the system by roughly aligning the marker with a pre-defined square (shown as part of the user interface). As can be seen, the degradation of the results with increasing alignment error is graceful (due to the use of affine transforms inside the rendering network at train time). In a more advanced system, such alignment can be bypassed altogether, using a pipeline that detects marker instances in a video stream and localizes their corners. Here, one can either use existing quad detection algorithms as in [23] or make the localization process a deep feed-forward network and include it into the joint learning in our system. In the latter case, the synthesizer would adapt to produce markers that are distinguishable from backgrounds and have easily identifiable corners. In such qualitative experiments (Figure 5), we observe error rates that are roughly comparable with our quantitative experiments.

Figure 6: Left – dependence of the recognition accuracy on the size of the bit string for two variants with the default networks, and one with the reduced number of maps in each convolutional layer. Reducing the capacity of the network hurts the performance a lot, while reducing spatial variation in the rendering network (to σ = 0.05) increases the capacity very considerably. Right – dependence of the recognition accuracy on the marker size (with otherwise default settings). The capacity of the coding quickly saturates as markers grow bigger.

Recognizer networks for QR-codes.
We have also experimented with replacing the synthesizer network with a standard QR-encoder. While we tried different settings (such as the error-correction level and the input bit sequence representation), the highest recognition rate we could achieve with our architecture of the recognizer network was only 85%. Apparently, the recognizer network cannot reverse the combination of error-correction encoding and rendering transformations well. We also tried to replace both the synthesizer and the recognizer with a QR-encoder and a QR-decoder. Here we found that standard QR-decoders cannot decode QR-markers processed by our renderer network at the typical level of blur in our experiments (though special-purpose blind deblurring algorithms such as [32] are likely to succeed).

5 Discussion

In this work, we have proposed a new approach to marker design, where markers and their recognizer are learned jointly. Additionally, an aesthetics-related term can be added into the objective. To the best of our knowledge, we are the first to approach visual marker design using optimization. One curious side aspect of our work is the fact that the learned markers can provide an insight into the architecture of ConvNets (or whatever architecture is used in the recognizer network). In more detail, they represent patterns that are most suitable for recognition with ConvNets. Unlike other approaches that e.g. visualize patterns for networks trained to classify natural images, our method decouples geometric and topological factors on one hand from the natural image statistics on the other, as we obtain these markers in a “content-free” manner.¹ As discussed above, one further extension to the system might be including a marker localizer into the learning as another deep feedforward network. We note that in some scenarios (e.g.
generating augmented reality tags for real-time camera localization), one can train the recognizer to estimate the parameters of the geometric transformation in addition to, or even instead of, recovering the input bit string. This would allow the creation of visual markers particularly suitable for accurate pose estimation.

¹ The only exception are the background images used by the rendering layer. In our experience, their statistics have negligible influence on the emerging patterns.

References

[1] L. F. Belussi and N. S. Hirata. Fast component-based QR code detection in arbitrarily acquired images. Journal of Mathematical Imaging and Vision, 45(3):277–292, 2013.
[2] Y. Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, 2009.
[3] F. Bergamasco, A. Albarelli, and A. Torsello. Pi-Tag: a fast image-space marker design based on projective invariants. Machine Vision and Applications, 24(6):1295–1310, 2013.
[4] D. Claus and A. W. Fitzgibbon. Reliable fiducial detection in natural scenes. Computer Vision – ECCV 2004, pp. 469–480. Springer, 2004.
[5] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
[6] M. Fiala. ARTag, a fiducial marker system using digital techniques. Conf. on Computer Vision and Pattern Recognition (CVPR), v. 2, pp. 590–596, 2005.
[7] L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. Advances in Neural Information Processing Systems (NIPS), pp. 262–270, 2015.
[8] L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. Conf. on Computer Vision and Pattern Recognition (CVPR), 2016.
[9] M. Hara, M. Watabe, T. Nojiri, T. Nagaya, and Y. Uchiyama. Optically readable two-dimensional code and method and apparatus using the same, 1998. US Patent 5,726,435.
[10] S. Ioffe and C. Szegedy.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proc. International Conference on Machine Learning (ICML), pp. 448–456, 2015.
[11] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial transformer networks. Advances in Neural Information Processing Systems (NIPS), pp. 2008–2016, 2015.
[12] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. European Conference on Computer Vision (ECCV), pp. 694–711, 2016.
[13] M. Kaltenbrunner and R. Bencina. reacTIVision: a computer-vision framework for table-based tangible interaction. Proc. of the 1st International Conf. on Tangible and Embedded Interaction, pp. 69–74, 2007.
[14] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. International Conference on Learning Representations (ICLR), 2015.
[15] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. International Conference on Learning Representations (ICLR), 2014.
[16] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, 1989.
[17] C.-C. Lo and C. A. Chang. Neural networks for bar code positioning in automated material handling. Industrial Automation and Control: Emerging Technologies, pp. 485–491. IEEE, 1995.
[18] A. Longacre Jr and R. Hussey. Two dimensional data encoding structure and symbology for use with optical readers, 1997. US Patent 5,591,956.
[19] D. J. MacKay. Information Theory, Inference and Learning Algorithms. Cambridge University Press, 2003.
[20] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
[21] J. Mooser, S. You, and U. Neumann. TriCodes: a barcode-like fiducial design for augmented reality media. IEEE Multimedia and Expo, pp. 1301–1304, 2006.
[22] A. Nguyen, J. Yosinski, and J. Clune.
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. Conf. on Computer Vision and Pattern Recognition (CVPR), 2015.
[23] E. Olson. AprilTag: A robust and flexible visual fiducial system. IEEE International Conference on Robotics and Automation (ICRA), pp. 3400–3407. IEEE, 2011.
[24] F. A. Petitcolas, R. J. Anderson, and M. G. Kuhn. Information hiding – a survey. Proceedings of the IEEE, 87(7):1062–1078, 1999.
[25] A. Richardson and E. Olson. Learning convolutional filters for interest point detection. Conf. on Robotics and Automation (ICRA), pp. 631–637, 2013.
[26] D. Scharstein and A. J. Briggs. Real-time recognition of self-similar landmarks. Image and Vision Computing, 19(11):763–772, 2001.
[27] K. Simonyan, A. Vedaldi, and A. Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
[28] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[29] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. Int. Conf. on Machine Learning (ICML), 2016.
[30] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. Int. Conf. on Machine Learning (ICML), 2008.
[31] N. J. Woodland and S. Bernard. Classifying apparatus and method, 1952. US Patent 2,612,994.
[32] S. Yahyanejad and J. Ström. Removing motion blur from barcode images. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition – Workshops, pp. 41–46. IEEE, 2010.
[33] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. Computer Vision – ECCV 2014, pp. 818–833. Springer, 2014.
[34] M. D. Zeiler, G. W. Taylor, and R. Fergus. Adaptive deconvolutional networks for mid and high level feature learning. Int. Conf.
on Computer Vision (ICCV), pp. 2018–2025, 2011.
Generating Videos with Scene Dynamics

Carl Vondrick, MIT, vondrick@mit.edu
Hamed Pirsiavash, UMBC, hpirsiav@umbc.edu
Antonio Torralba, MIT, torralba@mit.edu

Abstract

We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.

1 Introduction

Understanding object motions and scene dynamics is a core problem in computer vision. For both video recognition tasks (e.g., action classification) and video generation tasks (e.g., future prediction), a model of how scenes transform is needed. However, creating a model of dynamics is challenging because there is a vast number of ways that objects and scenes can change. In this work, we are interested in the fundamental problem of learning how scenes transform with time. We believe investigating this question may yield insight into the design of predictive models for computer vision. However, since annotating this knowledge is both expensive and ambiguous, we instead seek to learn it directly from large amounts of in-the-wild, unlabeled video. Unlabeled video has the advantage that it can be economically acquired at massive scales yet contains rich temporal signals “for free” because frames are temporally coherent.
With the goal of capturing some of the temporal knowledge contained in large amounts of unlabeled video, we present an approach that learns to generate tiny videos which have fairly realistic dynamics and motions. To do this, we capitalize on recent advances in generative adversarial networks [9, 31, 4], which we extend to video. We introduce a two-stream generative model that explicitly models the foreground separately from the background, which allows us to enforce that the background is stationary, helping the network to learn which objects move and which do not.

Our experiments suggest that our model has started to learn about dynamics. In our generation experiments, we show that our model can generate scenes with plausible motions (see http://mit.edu/vondrick/tinyvideo for the animated videos). We conducted a psychophysical study where we asked over a hundred people to compare generated videos, and people preferred videos from our full model more often. Furthermore, by making the model conditional on an input image, our model can sometimes predict a plausible (but “incorrect”) future. In our recognition experiments, we show how our model has learned, without supervision, useful features for human action classification. Moreover, visualizations of the learned representation suggest future generation may be a promising supervisory signal for learning to recognize moving objects.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

The primary contribution of this paper is showing how to leverage large amounts of unlabeled video in order to acquire priors about scene dynamics. The secondary contribution is the development of a generative model for video. The remainder of this paper describes these contributions in detail. In section 2, we describe our generative model for video. In section 3, we present several experiments to analyze the generative model.
We believe that generative video models can impact many applications, such as simulation, forecasting, and representation learning.

1.1 Related Work

This paper builds upon early work in generative video models [29]. However, previous work has focused mostly on small patches and evaluated them for video clustering. Here, we develop a generative video model for natural scenes using state-of-the-art adversarial learning methods [9, 31]. Conceptually, our work is related to studies of the fundamental role of time in computer vision [30, 12, 2, 7, 24]. However, here we are interested in generating short videos with realistic temporal semantics, rather than detecting or retrieving videos. Our technical approach builds on recent work in generative adversarial networks for image modeling [9, 31, 4, 47, 28], which we extend to video. To our knowledge, there has been relatively little work extensively studying generative adversarial networks for video. Most notably, [22] also uses adversarial networks, for video frame prediction. Our framework can generate videos for longer time scales and learn representations of video using unlabeled data. Our work is also related to efforts to predict the future in video [33, 22, 43, 50, 42, 17, 8, 54] as well as concurrent work in future generation [6, 15, 20, 49, 55]. These works can often be viewed as generative models conditioned on past frames. Our work complements these efforts in two ways. Firstly, we explore how to generate videos from scratch (not conditioned on the past). Secondly, while prior work has applied generative models in video settings mostly to a single frame, we jointly generate a sequence of 32 frames using spatio-temporal convolutional networks, which may help prevent drift due to accumulating errors. We leverage approaches for recognizing actions in video with deep networks, but apply them for video generation instead.
We use spatio-temporal 3D convolutions to model videos [40], but we use fractionally strided convolutions [51] instead because we are interested in generation. We also use two streams to model video [34], but apply them for video generation instead of action recognition. However, our approach does not explicitly use optical flow; instead, we expect the network to learn motion features on its own. Finally, this paper is related to a growing body of work that capitalizes on large amounts of unlabeled video for visual recognition tasks [18, 46, 37, 13, 24, 25, 3, 32, 26, 27, 19, 41, 42, 1]. We instead leverage large amounts of unlabeled video for generation.

2 Generative Models for Video

In this section, we present a generative model for videos. We propose to use generative adversarial networks [9], which have been shown to have good performance on image generation [31, 4].

2.1 Review: Generative Adversarial Networks

The main idea behind generative adversarial networks [9] is to train two networks: a generator network G tries to produce a video, and a discriminator network D tries to distinguish between “real” videos and “fake” generated videos. One can train these networks against each other in a min-max game, where the generator seeks to maximally fool the discriminator while the discriminator simultaneously seeks to detect which examples are fake:

\min_{w_G} \max_{w_D} \; \mathbb{E}_{x \sim p_x(x)} \left[ \log D(x; w_D) \right] + \mathbb{E}_{z \sim p_z(z)} \left[ \log \left( 1 - D(G(z; w_G); w_D) \right) \right] \quad (1)

where z is a latent “code” that is often sampled from a simple distribution (such as a normal distribution) and x ∼ p_x(x) samples from the data distribution. In practice, since we do not know the true data distribution p_x(x), we estimate the expectation by drawing from our dataset. Since we will optimize Equation 1 with gradient-based methods (SGD), the two networks G and D can take on any form appropriate for the task as long as they are differentiable with respect to the parameters w_G and w_D. We design a G and D for video.
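To make the min-max value in Equation 1 concrete, the following minimal sketch estimates it by Monte Carlo on a toy one-dimensional problem. The logistic D and linear G here are placeholder stand-ins of our own, not the paper's video networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real networks (assumptions, not the paper's models):
# D maps a sample to the probability it is real; G maps a code z to a sample.
def D(x, wD):
    return 1.0 / (1.0 + np.exp(-wD * x))   # logistic "discriminator"

def G(z, wG):
    return wG * z                           # linear "generator"

def gan_value(x_real, z, wG, wD):
    """Monte Carlo estimate of Eq. (1): E[log D(x)] + E[log(1 - D(G(z)))]."""
    return (np.mean(np.log(D(x_real, wD))) +
            np.mean(np.log(1.0 - D(G(z, wG), wD))))

x_real = rng.normal(2.0, 1.0, size=1000)   # samples from the "data" distribution
z = rng.normal(size=1000)                  # latent codes z ~ p_z
v = gan_value(x_real, z, wG=0.5, wD=1.0)
```

In training, the discriminator takes gradient steps to increase this value while the generator takes steps to decrease its second term; since both terms are logs of probabilities, the value is always negative.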
Figure 1: Video Generator Network: We illustrate our network architecture for the generator. The input is 100-dimensional (Gaussian noise). There are two independent streams: a moving foreground pathway of fractionally-strided spatio-temporal (3D) convolutions, and a static background pathway of fractionally-strided spatial (2D) convolutions, both of which up-sample. These two pathways are combined into the generated space-time cuboid via m ⊙ f + (1 − m) ⊙ b, using a sigmoid mask m from the motion pathway and tanh outputs for foreground and background; the background is replicated over time. Below each volume is its size and the number of channels in parentheses.

2.2 Generator Network

The input to the generator network is a low-dimensional latent code z ∈ R^d. In most cases, this code can be sampled from a distribution (e.g., Gaussian). Given a code z, we wish to produce a video. We design the architecture of the generator network with a few principles in mind. Firstly, we want the network to be invariant to translations in both space and time. Secondly, we want a low-dimensional z to be able to produce a high-dimensional output (video). Thirdly, we want to assume a stationary camera and take advantage of the property that usually only objects move. We are interested in modeling object motion, and not the motion of cameras. Moreover, since modeling that the background is stationary is important in video recognition tasks [44], it may be helpful in video generation as well. We explore two different network architectures:

One Stream Architecture: We combine spatio-temporal convolutions [14, 40] with fractionally strided convolutions [51, 31] to generate video. Three-dimensional convolutions provide spatial and temporal invariance, while fractionally strided convolutions can upsample efficiently in a deep network, allowing z to be low-dimensional.
We use an architecture inspired by [31], except extended in time. We use a five-layer network of 4 × 4 × 4 convolutions with a stride of 2, except for the first layer, which uses 2 × 4 × 4 convolutions (time × width × height). We found that these kernel sizes provide an appropriate balance between training speed and quality of generations.

Two Stream Architecture: The one-stream architecture does not model that the world is stationary and usually only objects move. We experimented with making this behavior explicit in the model, using an architecture that enforces a static background and a moving foreground. In this two-stream architecture, the generator is governed by the combination:

G_2(z) = m(z) \odot f(z) + (1 - m(z)) \odot b(z) \quad (2)

Our intention is that 0 ≤ m(z) ≤ 1 can be viewed as a spatio-temporal mask that selects either the foreground model f(z) or the background model b(z) for each pixel location and time step. To enforce a background model in the generations, b(z) produces a spatial image that is replicated over time, while f(z) produces a spatio-temporal cuboid masked by m(z). Summing the masked foreground with the masked background yields the final generation. Note that ⊙ is element-wise multiplication, and we replicate singleton dimensions to match the corresponding tensor. During learning, we also add to the objective a small sparsity prior on the mask, \lambda \lVert m(z) \rVert_1 with \lambda = 0.1, which we found helps encourage the network to use the background stream.

We use fractionally strided convolutional networks for m(z), f(z), and b(z). For f(z), we use the same network as the one-stream architecture, and for b(z) we use a generator architecture similar to [31]. We only use their architecture; we do not initialize with their learned weights. To create the mask m(z), we use a network that shares weights with f(z) except for the last layer, which has only one output channel. We use a sigmoid activation function for the mask.
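The mask-based composition in Equation 2 can be sketched with array broadcasting. The shapes follow the paper's 32-frame 64 × 64 videos; the random tensors below are stand-ins for real network outputs:

```python
import numpy as np

T, H, W, C = 32, 64, 64, 3  # video: time x height x width x channels

rng = np.random.default_rng(0)
m = rng.uniform(0, 1, size=(T, H, W, 1))     # spatio-temporal mask in [0, 1]
f = np.tanh(rng.normal(size=(T, H, W, C)))   # foreground cuboid (tanh output)
b = np.tanh(rng.normal(size=(1, H, W, C)))   # single background image

# Eq. (2): the background's singleton time dimension is broadcast
# (replicated) over all T frames; the mask selects foreground vs. background.
video = m * f + (1 - m) * b

# The sparsity prior on the mask (lambda = 0.1) that encourages
# the network to rely on the background stream:
sparsity_penalty = 0.1 * np.abs(m).sum()
```

Because both streams output values in (−1, 1) and the mask gives a convex combination per pixel, the generated video stays in the normalized range.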
We visualize the two-stream architecture in Figure 1. In our experiments, the generator produces 64 × 64 videos of 32 frames, which is a little over a second.

2.3 Discriminator Network

The discriminator needs to solve two problems: firstly, it must be able to distinguish realistic scenes from synthetically generated scenes, and secondly, it must be able to recognize realistic motion between frames. We chose to design the discriminator to solve both of these tasks with the same model. We use a five-layer spatio-temporal convolutional network with 4 × 4 × 4 kernels so that the hidden layers can learn both visual models and motion models. We design the architecture to be the reverse of the foreground stream in the generator, replacing fractionally strided convolutions with strided convolutions (to down-sample instead of up-sample), and replacing the last layer with a binary classification output (real or not).

2.4 Learning and Implementation

We train the generator and discriminator with stochastic gradient descent. We alternate between maximizing the loss w.r.t. w_D and minimizing the loss w.r.t. w_G for a fixed number of iterations. All networks are trained from scratch. Our implementation is based on a modified version of [31] in Torch7. We used a more numerically stable implementation of the cross-entropy loss to prevent overflow. We use the Adam [16] optimizer with a fixed learning rate of 0.0002 and momentum term of 0.5. The latent code has 100 dimensions, which we sample from a normal distribution. We use a batch size of 64. We initialize all weights with zero-mean Gaussian noise with standard deviation 0.01. We normalize all videos to be in the range [−1, 1]. We use batch normalization [11] followed by ReLU activations after every layer in the generator, except the output layers, which use tanh. Following [31], we also use batch normalization in the discriminator, except for the first layer, where we instead use leaky ReLU [48].
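The training details above can be collected into a small configuration sketch. Only the numeric values come from the text; the dictionary keys and helper names are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hyperparameters reported in the paper (names are our own shorthand).
config = dict(
    optimizer="adam", lr=2e-4, beta1=0.5,   # Adam, momentum term 0.5
    batch_size=64, latent_dim=100,
    weight_init_std=0.01,                   # zero-mean Gaussian init
)

def init_weights(shape, std=config["weight_init_std"]):
    """All weights start as zero-mean Gaussian noise with std 0.01."""
    return rng.normal(0.0, std, size=shape)

def normalize_video(frames):
    """Map uint8 pixels in [0, 255] into the [-1, 1] range used for training."""
    return frames.astype(np.float32) / 127.5 - 1.0

# One 3D conv kernel's weights: 4x4x4 spatio-temporal kernel, 3 -> 64 channels.
w = init_weights((4, 4, 4, 3, 64))
x = normalize_video(np.array([0, 128, 255], dtype=np.uint8))
```

This mirrors the paper's setup only at the configuration level; the actual networks were implemented in Torch7.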
Training typically took several days on a GPU.

3 Experiments

We experiment with the generative adversarial network for video (VGAN) on both generation and recognition tasks. We also show several qualitative examples online.

3.1 Unlabeled Video Dataset

We use a large amount of unlabeled video to train our model. We downloaded over two million videos from Flickr [39] by querying for popular Flickr tags as well as for common English words. From this pool, we created two datasets:

Unfiltered Unlabeled Videos: We use these videos directly, without any filtering, for representation learning. The dataset is over 5,000 hours.

Filtered Unlabeled Videos: To evaluate generations, we use the Places2 pre-trained model [53] to automatically filter the videos by scene category. Since image/video generation is a challenging problem, we assembled this dataset to better diagnose the strengths and weaknesses of approaches. We experimented with four scene categories: golf courses, hospital rooms (babies), beaches, and train stations.

Stabilization: As we are interested in the movement of objects and not camera shake, we stabilize the camera motion for both datasets. We extract SIFT keypoints [21], use RANSAC to estimate a homography (rotation, translation, scale) between adjacent frames, and warp frames to minimize background motion. When the warp moves content out of the frame, we fill in the missing values using the previous frames. If the homography has too large a re-projection error, we ignore that segment of the video for training, which happened only 3% of the time. The only other pre-processing we do is normalizing the videos to be in the range [−1, 1]. We extract frames at the native frame rate (25 fps) and use 32-frame videos of spatial resolution 64 × 64.
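The stabilization logic can be illustrated with a simplified, translation-only stand-in for the SIFT + RANSAC homography fit (in practice one would use a full keypoint matcher and homography estimator, e.g. OpenCV's); the keypoint matches below are simulated, and the threshold values are our own:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_translation(pts_prev, pts_next, inlier_thresh=3.0):
    """Translation-only stand-in for the per-frame homography fit.
    The median shift is robust to mismatches, loosely like RANSAC.
    Returns (shift, mean reprojection error over inliers)."""
    shift = np.median(pts_next - pts_prev, axis=0)
    err = np.linalg.norm(pts_next - (pts_prev + shift), axis=1)
    inliers = err < inlier_thresh
    return shift, float(err[inliers].mean()) if inliers.any() else np.inf

# Simulated keypoint matches: camera shake of (+2, -1) px plus noise,
# with a few gross outliers standing in for mismatched keypoints.
pts = rng.uniform(0, 64, size=(50, 2))
matches = pts + np.array([2.0, -1.0]) + rng.normal(0, 0.2, size=pts.shape)
matches[:3] += 30.0  # outliers

shift, reproj_err = estimate_translation(pts, matches)

# As in the pipeline above: drop the segment if the fit is too poor.
MAX_REPROJ_ERR = 4.0  # hypothetical threshold for illustration
keep_segment = reproj_err < MAX_REPROJ_ERR
```

The recovered shift would then be used to warp the frame so the background stays still; segments whose fit exceeds the error threshold are skipped for training.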
Figure 2: Video Generations: We show some generations from the two-stream model for beach, golf course, train station, and hospital/baby scenes (frames 1, 16, and 32). The red arrows highlight motions. Please see http://mit.edu/vondrick/tinyvideo for animated movies.

3.2 Video Generation

We evaluate both the one-stream and the two-stream generators. We trained a generator for each scene category in our filtered dataset. We perform both a qualitative evaluation and a quantitative psychophysical evaluation to measure the perceptual quality of the generated videos.

Qualitative Results: We show several examples of the videos generated from our model in Figure 2. We observe that a) the generated scenes tend to be fairly sharp and b) the motion patterns are generally correct for their respective scene. For example, the beach model tends to produce beaches with crashing waves, the golf model produces people walking on grass, and the train station generations usually show train tracks and a train with windows rapidly moving along it. While the model usually learns to put motion on the right objects, one common failure mode is that the objects lack resolution. For example, the people in the beaches and golf courses are often blobs. Nevertheless, we believe it is promising that our model can generate short motions. We visualize the behavior of the two-stream architecture in Figure 3.

Baseline: Since to our knowledge there are no existing large-scale generative models of video ([33] requires an input frame), we develop a simple but reasonable baseline for this task. We train an autoencoder over our data. The encoder is similar to the discriminator network (except producing a 100-dimensional code), while the decoder follows the two-stream generator network.
Hence, the baseline autoencoder network has a similar number of parameters as our full approach. We then feed examples through the encoder and fit a Gaussian Mixture Model (GMM) with 256 components over the 100-dimensional hidden space. To generate a novel video, we sample from this GMM and feed the sample through the decoder.

Figure 3: Streams: We visualize the background, foreground, and masks for beaches (left) and golf (right). The network generally learns to disentangle the foreground from the background.

Table 1: Video Generation Preferences: We show two videos to workers on Amazon Mechanical Turk and ask them to choose which video is more realistic. The table shows the percentage of times that workers prefer generations from one model over another. In all cases, workers tend to prefer video generative adversarial networks over an autoencoder. In most cases, workers show a slight preference for the two-stream model.

Percentage of trials, "Which video is more realistic?"

                                               Golf  Beach  Train  Baby  Mean
Random Preference                                50     50     50    50    50
Prefer VGAN Two Stream over Autoencoder          88     83     87    71    82
Prefer VGAN One Stream over Autoencoder          85     88     85    73    82
Prefer VGAN Two Stream over VGAN One Stream      55     58     47    52    53
Prefer VGAN Two Stream over Real                 21     23     23     6    18
Prefer VGAN One Stream over Real                 17     21     19     8    16
Prefer Autoencoder over Real                      4      2      4     2     3

Evaluation Metric: We quantitatively evaluate our generations using a psychophysical two-alternative forced choice with workers on Amazon Mechanical Turk. We show a worker two random videos and ask them "Which video is more realistic?" We collected over 13,000 opinions across 150 unique workers. We paid workers one cent per comparison and required workers to historically have a 95% approval rating on MTurk. We experimented with removing bad workers who frequently said real videos were not realistic, but the relative rankings did not change.
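The GMM sampling step of the autoencoder baseline can be sketched as follows. The mixture parameters here are random placeholders rather than parameters fit to real encoder outputs, and the helper name is our own:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a fitted 256-component GMM over the 100-d code space.
# (In the baseline the GMM is fit to encoder outputs; here the parameters
# are random placeholders just to illustrate the sampling step.)
K, D = 256, 100
weights = rng.dirichlet(np.ones(K))          # mixture weights, sum to 1
means = rng.normal(size=(K, D))
stds = rng.uniform(0.5, 1.5, size=(K, D))    # diagonal covariances

def sample_codes(n):
    """Draw n latent codes from the mixture: pick a component per sample,
    then draw from its diagonal Gaussian. A decoder would then map each
    code to a 64x64, 32-frame video."""
    comps = rng.choice(K, size=n, p=weights)
    return means[comps] + stds[comps] * rng.normal(size=(n, D))

codes = sample_codes(64)   # one batch of codes for the decoder
```

A generated baseline video is then the decoder applied to each sampled code.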
We designed this experiment following advice from [38], which advocates evaluating generative models for the task at hand. In our case, we are interested in perceptual quality of motion. We consider a model X better than model Y if workers prefer generations from X more than generations from Y. Quantitative Results: Table 1 shows the percentage of times that workers preferred generations from one model over another. Workers consistently prefer videos from the generative adversarial network more than an autoencoder. Additionally, workers show a slight preference for the two-stream architecture, especially in scenes where the background is large (e.g., golf course, beach). Although the one-stream architecture is capable of generating stationary backgrounds, it may be difficult to find this solution, motivating a more explicit architecture. The one-stream architecture generally produces high-frequency temporal flickering in the background. To evaluate whether static frames are better than our generations, we also ask workers to choose between our videos and a static frame, and workers only chose the static frame 38% of the time, suggesting our model produces more realistic motion than static frames on average. Finally, while workers generally can distinguish real videos from generated videos, the workers show the most confusion with our two-stream model compared to baselines, suggesting the two-stream generations may be more realistic on average. 3.3 Video Representation Learning We also experimented with using our model as a way to learn unsupervised representations for video. We train our two-stream model with over 5, 000 hours of unfiltered, unlabeled videos from Flickr. We then fine-tune the discriminator on the task of interest (e.g., action recognition) using a relatively small set of labeled video. To do this, we replace the last layer (which is a binary classifier) with a K-way softmax classifier. 
We also add dropout [36] to the penultimate layer to reduce overfitting.

Action Classification: We evaluated performance on classifying actions on UCF101 [35]. We report accuracy in Figure 4a. Initializing the network with the weights learned from the generative adversarial network outperforms a randomly initialized network, suggesting that it has learned a useful internal representation for video. Interestingly, while a randomly initialized network under-performs hand-crafted STIP features [35], the network initialized with our model significantly outperforms them. We also experimented with training a logistic regression on only the last layer, which performed worse. Finally, our model slightly outperforms another recent unsupervised video representation learning approach [24]. However, our approach uses an order of magnitude fewer parameters, fewer layers (5 vs. 8), and low-resolution video.

Figure 4a: Accuracy with Unsupervised Methods

Method                       Accuracy
Chance                          0.9%
STIP Features [35]             43.9%
Temporal Coherence [10]        45.4%
Shuffle and Learn [24]         50.2%
VGAN + Random Init             36.7%
VGAN + Logistic Reg            49.3%
VGAN + Fine Tune               52.1%
ImageNet Supervision [45]      91.4%

Figure 4: Video Representation Learning: We evaluate the representation learned by the discriminator for action classification on UCF101 [35]. (a) By fine-tuning the discriminator on a relatively small labeled dataset, we can obtain better performance than random initialization, and better than hand-crafted space-time interest point (STIP) features. Moreover, our model slightly outperforms another unsupervised video representation [24] despite using an order of magnitude fewer learned parameters and only 64 × 64 videos. Note that unsupervised video representations are still far from models that leverage external supervision. (b) Performance vs # Data: our unsupervised representation with less labeled data outperforms random initialization with all the labeled data; with just 1/8th of the labeled data, we can match the performance of a randomly initialized network that used all of the labeled data. (c) Relative Gain vs # Data: the fine-tuned model has a larger relative gain over random initialization in cases with less labeled data. Note that (a) is over all train/test splits of UCF101, while (b, c) are over the first split in order to make experiments less expensive.

Performance vs Data: We also experimented with varying the amount of labeled training data available to our fine-tuned network. Figure 4b reports performance versus the amount of labeled training data available. As expected, performance increases with more labeled data. The fine-tuned model shows an advantage in low-data regimes: even with one eighth of the labeled data, the fine-tuned model still beats a randomly initialized network. Moreover, Figure 4c plots the relative accuracy gain of the fine-tuned model over random initialization (fine-tuned performance divided by randomly initialized performance). This shows that fine-tuning with our model has a larger relative gain over random initialization when less labeled data is available, showing its utility in low-data regimes.

3.4 Future Generation

We investigate whether our approach can be used to generate the future of a static image. Specifically, given a static image x_0, can we extrapolate a video of possible consequent frames?

Encoder: We utilize the same model as our two-stream model; however, we must make one change in order to input the static image instead of the latent code.
We can do this by attaching a five-layer convolutional network to the front of the generator which encodes the image into the latent space, similar to a conditional generative adversarial network [23]. The rest of the generator and discriminator networks remain the same. However, we add an additional loss term that minimizes the distance between the input and the first frame of the generated video, so that the generator creates videos consistent with the input image. We train from scratch with the objective:

\min_{w_G} \max_{w_D} \; \mathbb{E}_{x \sim p_x(x)} \left[ \log D(x; w_D) \right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)} \left[ \log \left( 1 - D(G(x_0; w_G); w_D) \right) \right] + \mathbb{E}_{x_0 \sim p_{x_0}(x_0)} \left[ \lambda \lVert x_0 - G_0(x_0; w_G) \rVert_2^2 \right] \quad (3)

where x_0 is the first frame of the input, G_0(·) is the first frame of the generated video, and λ ∈ R is a hyperparameter. The discriminator will try to classify realistic frames and realistic motions as before, while the generator will try to produce a realistic video such that the first frame is reconstructed well.

Results: We qualitatively show a few examples of our approach in Figure 5 using held-out testing videos. Although the extrapolations are rarely correct, they often have fairly plausible motions. The most common failure is that the generated video has a scene similar but not identical to the input image, such as by changing colors or dropping/hallucinating objects. The former could be solved by a color histogram normalization in post-processing (which we did not do for simplicity), while we suspect the latter will require building more powerful generative models. The generated videos are usually not the correct video, but we observe that the motions are often plausible. We are not aware of an existing approach that can directly generate multi-frame videos from a single static image. [33, 22] can generate video, but they require multiple input frames and empirically become blurry after extrapolating many frames. [43, 50] can predict optic flow from a single image, but they do not generate several frames of motion and may be susceptible to warping artifacts. We believe this experiment shows an important application of generative video models.

Figure 5: Future Generation: We show one application of generative video models where we predict videos given a single static image. The red arrows highlight regions of motion. Since this is an ambiguous task, our model usually does not generate the correct video, but the generation is often plausible. Please see http://mit.edu/vondrick/tinyvideo for animated movies.

Figure 6: Visualizing Representation: We visualize some hidden units in the encoder of the future generator, following the technique from [52]. We highlight regions of images that a particular convolutional hidden unit maximally activates on; for example, (a) a hidden unit that fires on "person" and (b) a hidden unit that fires on "train tracks". While not all units are semantic, some units activate on objects that are sources of motion, such as people and train tracks.

Visualizing Representation: Since generating the future requires understanding how objects move, the network may need to learn to recognize some objects internally, even though it is not supervised to do so. Figure 6 visualizes some activations of hidden units in the third convolutional layer. While not all units are semantic, some of the units tend to be selective for objects that are sources of motion, such as people or train tracks. These visualizations suggest that scaling up future generation might be a promising supervisory signal for object recognition, complementary to [27, 5, 46].

Conclusion: Understanding scene dynamics will be crucial for the next generation of computer vision systems. In this work, we explored how to learn some dynamics from large amounts of unlabeled video by capitalizing on adversarial learning methods.
Since annotating dynamics is expensive, we believe learning from unlabeled data is a promising direction. While we are still a long way from fully harnessing the potential of unlabeled video, our experiments support that abundant unlabeled video can be lucrative for both learning to generate videos and learning visual representations.

Acknowledgements: We thank Yusuf Aytar for dataset discussions. We thank MIT TIG, especially Garrett Wollman, for troubleshooting issues on storing the 26 TB of video. We are grateful to the Torch7 community for answering many questions. NVidia donated GPUs used for this research. This work was supported by NSF grant #1524817 to AT, the START program at UMBC to HP, and a Google PhD fellowship to CV.

References

[1] Yusuf Aytar, Carl Vondrick, and Antonio Torralba. Learning sound representations from unlabeled video. NIPS, 2016. [2] Tali Basha, Yael Moses, and Shai Avidan. Photo sequencing. In ECCV, 2012. [3] Chao-Yeh Chen and Kristen Grauman. Watching unlabeled video helps learn new human actions from very few labeled snapshots. In CVPR, 2013. [4] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015. [5] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsupervised visual representation learning by context prediction. In ICCV, 2015. [6] Chelsea Finn, Ian Goodfellow, and Sergey Levine. Unsupervised learning for physical interaction through video prediction. arXiv, 2016. [7] József Fiser and Richard N Aslin. Statistical learning of higher-order temporal structure from visual shape sequences. JEP, 2002. [8] Katerina Fragkiadaki, Sergey Levine, Panna Felsen, and Jitendra Malik. Recurrent network models for human dynamics. In ICCV, 2015. [9] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
[10] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In CVPR, 2006. [11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv, 2015. [12] Phillip Isola, Joseph J Lim, and Edward H Adelson. Discovering states and transformations in image collections. In CVPR, 2015. [13] Dinesh Jayaraman and Kristen Grauman. Learning image representations tied to ego-motion. In ICCV, 2015. [14] Shuiwang Ji, Wei Xu, Ming Yang, and Kai Yu. 3d convolutional neural networks for human action recognition. PAMI, 2013. [15] Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, and Koray Kavukcuoglu. Video pixel networks. arXiv, 2016. [16] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv, 2014. [17] Kris Kitani, Brian Ziebart, James Bagnell, and Martial Hebert. Activity forecasting. ECCV, 2012. [18] Quoc V Le. Building high-level features using large scale unsupervised learning. In ICASSP, 2013. [19] Yin Li, Manohar Paluri, James M Rehg, and Piotr Dollár. Unsupervised learning of edges. arXiv, 2015. [20] William Lotter, Gabriel Kreiman, and David Cox. Deep predictive coding networks for video prediction and unsupervised learning. arXiv, 2016. [21] David G Lowe. Object recognition from local scale-invariant features. In ICCV, 1999. [22] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. arXiv, 2015. [23] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets. arXiv, 2014. [24] Ishan Misra, C. Lawrence Zitnick, and Martial Hebert. Shuffle and Learn: Unsupervised Learning using Temporal Order Verification. In ECCV, 2016. [25] Hossein Mobahi, Ronan Collobert, and Jason Weston. Deep learning from temporal coherence in video. In ICML, 2009. [26] Phuc Xuan Nguyen, Gregory Rogez, Charless Fowlkes, and Deva Ramanan.
The open world of micro-videos. arXiv, 2016. [27] Andrew Owens, Jiajun Wu, Josh H McDermott, William T Freeman, and Antonio Torralba. Ambient sound provides supervision for visual learning. arXiv, 2016. [28] Deepak Pathak, Philipp Krähenbühl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. arXiv, 2016. [29] Nikola Petrovic, Aleksandar Ivanovic, and Nebojsa Jojic. Recursive estimation of generative models of video. In CVPR, 2006. [30] Lyndsey Pickup, Zheng Pan, Donglai Wei, YiChang Shih, Changshui Zhang, Andrew Zisserman, Bernhard Schölkopf, and William Freeman. Seeing the arrow of time. In CVPR, 2014. [31] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv, 2015. [32] Vignesh Ramanathan, Kevin Tang, Greg Mori, and Li Fei-Fei. Learning temporal embeddings for complex video analysis. In CVPR, 2015. [33] Marc'Aurelio Ranzato, Arthur Szlam, Joan Bruna, Michael Mathieu, Ronan Collobert, and Sumit Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv, 2014. [34] Karen Simonyan and Andrew Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014. [35] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah. Ucf101: A dataset of 101 human actions classes from videos in the wild. arXiv, 2012. [36] Nitish Srivastava, Geoffrey E Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. JMLR, 2014. [37] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using lstms. arXiv, 2015. [38] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. arXiv, 2015. [39] Bart Thomee, David A Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li.
Yfcc100m: The new data in multimedia research. ACM, 2016. [40] Du Tran, Lubomir Bourdev, Rob Fergus, Lorenzo Torresani, and Manohar Paluri. Learning spatiotemporal features with 3d convolutional networks. arXiv, 2014. [41] Carl Vondrick, Donald Patterson, and Deva Ramanan. Efficiently scaling up crowdsourced video annotation. IJCV, 2013. [42] Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba. Anticipating visual representations from unlabeled video. CVPR, 2015. [43] Jacob Walker, Arpan Gupta, and Martial Hebert. Patch to the future: Unsupervised visual prediction. In CVPR, 2014. [44] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In ICCV, 2013. [45] Limin Wang, Yuanjun Xiong, Zhe Wang, and Yu Qiao. Towards good practices for very deep two-stream convnets. arXiv, 2015. [46] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015. [47] Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. arXiv, 2016. [48] Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv, 2015. [49] Tianfan Xue, Jiajun Wu, Katherine L Bouman, and William T Freeman. Visual dynamics: Probabilistic future frame synthesis via cross convolutional networks. arXiv, 2016. [50] Jenny Yuen and Antonio Torralba. A data-driven approach for event prediction. In ECCV. 2010. [51] Matthew D Zeiler, Dilip Krishnan, Graham W Taylor, and Rob Fergus. Deconvolutional networks. In CVPR, 2010. [52] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Object detectors emerge in deep scene cnns. arXiv, 2014. [53] Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Torralba, and Aude Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014. [54] Yipin Zhou and Tamara L Berg. Temporal perception and prediction in ego-centric video. In ICCV, 2015. 
[55] Yipin Zhou and Tamara L Berg. Learning temporal transformations from time-lapse videos. In ECCV, 2016. 9
2016
11
6,006
Finite-Sample Analysis of Fixed-k Nearest Neighbor Density Functional Estimators

Shashank Singh
Statistics & Machine Learning Departments, Carnegie Mellon University
sss1@andrew.cmu.edu

Barnabás Póczos
Machine Learning Department, Carnegie Mellon University
bapoczos@cs.cmu.edu

Abstract

We provide finite-sample analysis of a general framework for using k-nearest neighbor statistics to estimate functionals of a nonparametric continuous probability density, including entropies and divergences. Rather than plugging a consistent density estimate (which requires k → ∞ as the sample size n → ∞) into the functional of interest, the estimators we consider fix k and perform a bias correction. This is more efficient computationally and, as we show in certain cases, statistically, leading to faster convergence rates. Our framework unifies several previous estimators, for most of which ours are the first finite-sample guarantees.

1 Introduction

Estimating entropies and divergences of probability distributions in a consistent manner is important in a number of problems in machine learning. Entropy estimators have applications in goodness-of-fit testing [13], parameter estimation in semi-parametric models [51], studying fractal random walks [3], and texture classification [14, 15]. Divergence estimators have been used to generalize machine learning algorithms for regression, classification, and clustering from inputs in $\mathbb{R}^D$ to sets and distributions [40, 33]. Divergences also include mutual informations as a special case; mutual information estimators have applications in feature selection [35], clustering [2], causality detection [16], optimal experimental design [26, 38], fMRI data analysis [7], prediction of protein structures [1], and boosting and facial expression recognition [41]. Both entropy estimators and mutual information estimators have been used for independent component and subspace analysis [23, 47, 37, 17], as well as for image registration [14, 15].
Further applications can be found in [25].

This paper considers the more general problem of estimating functionals of the form

$$F(P) := \mathbb{E}_{X \sim P}\left[f(p(X))\right], \qquad (1)$$

using n IID samples from P, where P is an unknown probability measure with smooth density function p and f is a known smooth function. We are interested in analyzing a class of nonparametric estimators based on k-nearest neighbor (k-NN) distance statistics. Rather than plugging a consistent estimator of p into (1), which requires k → ∞ as n → ∞, these estimators derive a bias correction for the plug-in estimator with fixed k; hence, we refer to this type of estimator as a fixed-k estimator. Compared to plug-in estimators, fixed-k estimators are faster to compute. As we show, fixed-k estimators can also exhibit superior rates of convergence.

As shown in Table 1, several authors have derived the bias corrections necessary for fixed-k estimators of entropies and divergences, including, most famously, the Shannon entropy estimator of [20].¹ The estimators in Table 1 are known to be weakly consistent,² but, except for Shannon entropy,

¹MATLAB code for these estimators is in the ITE toolbox https://bitbucket.org/szzoli/ite/ [48].
²Several of these proofs contain errors regarding the use of integral convergence theorems when their conditions do not hold, as described in [39].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Table 1: Functionals with known bias-corrected k-NN estimators, their bias corrections, and references. All expectations are over X ∼ P.

  Shannon entropy:  $\mathbb{E}[\log p(X)]$               Additive constant: $\psi(n) - \psi(k) + \log(k/n)$    [20, 13]
  Rényi-α entropy:  $\mathbb{E}[p^{\alpha-1}(X)]$         Multiplicative constant: $\Gamma(k)/\Gamma(k+1-\alpha)$    [25, 24]
  KL divergence:    $\mathbb{E}[\log(p(X)/q(X))]$         None*    [50]
  α-divergence:     $\mathbb{E}[(p(X)/q(X))^{\alpha-1}]$  Multiplicative constant: $\Gamma^2(k)/(\Gamma(k-\alpha+1)\Gamma(k+\alpha-1))$    [39]
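The correction constants in Table 1 depend only on k (and, for Shannon entropy, n), so they are directly computable. Below is a minimal sketch (ours, not from the paper; the function names are hypothetical) using the integer-argument identity ψ(m) = −γ + Σ_{j=1}^{m−1} 1/j and log-gamma arithmetic for numerical stability:

```python
from math import lgamma, exp, log

EULER_GAMMA = 0.5772156649015329

def digamma_int(m):
    # psi(m) for a positive integer m: psi(m) = -gamma + sum_{j=1}^{m-1} 1/j
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

def shannon_correction(n, k):
    # Additive constant psi(n) - psi(k) + log(k/n), as printed in Table 1
    # (its sign flips depending on whether one targets E[log p] or -E[log p]).
    return digamma_int(n) - digamma_int(k) + log(k / n)

def renyi_correction(k, alpha):
    # Multiplicative constant Gamma(k) / Gamma(k + 1 - alpha), via lgamma
    return exp(lgamma(k) - lgamma(k + 1 - alpha))

def alpha_div_correction(k, alpha):
    # Multiplicative constant Gamma(k)^2 / (Gamma(k - alpha + 1) Gamma(k + alpha - 1))
    return exp(2 * lgamma(k) - lgamma(k - alpha + 1) - lgamma(k + alpha - 1))
```

For example, renyi_correction(5, 2.0) = Γ(5)/Γ(4) = 4, and both multiplicative constants reduce to 1 as α → 1, consistent with no correction being needed in that degenerate case.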
Here $\Gamma(t) = \int_0^\infty x^{t-1} e^{-x}\,dx$ is the gamma function, and $\psi(x) = \frac{d}{dx}\log\Gamma(x)$ is the digamma function. $\alpha \in \mathbb{R}\setminus\{1\}$ is a free parameter. *For KL divergence, the bias corrections for p and q cancel.

no finite-sample bounds are known. The main goal of this paper is to provide finite-sample analysis of these estimators, via a unified analysis of the estimator after bias correction. Specifically, we show conditions under which, for β-Hölder continuous (β ∈ (0, 2]) densities on D-dimensional space, the bias of fixed-k estimators decays as $O(n^{-\beta/D})$ and the variance decays as $O(n^{-1})$, giving a mean squared error of $O(n^{-2\beta/D} + n^{-1})$. Hence, the estimators converge at the parametric $O(n^{-1})$ rate when β ≥ D/2, and at the slower rate $O(n^{-2\beta/D})$ otherwise. A modification of the estimators would be necessary to leverage additional smoothness for β > 2, but we do not pursue this here. Along the way, we prove a finite-sample version of the useful fact [25] that (normalized) k-NN distances have an Erlang asymptotic distribution, which may be of independent interest.

We present our results for distributions P supported on the unit cube in $\mathbb{R}^D$ because this significantly simplifies the statements of our results, but, as we discuss in the supplement, our results generalize fairly naturally, for example, to distributions supported on smooth compact manifolds. In this context, it is worth noting that our results scale with the intrinsic dimension of the manifold. As we discuss later, we believe deriving finite-sample rates for distributions with unbounded support may require a truncated modification of the estimators we study (as in [49]), but we do not pursue this here.

2 Problem statement and notation

Let $\mathcal{X} := [0,1]^D$ denote the unit cube in $\mathbb{R}^D$, and let µ denote the Lebesgue measure. Suppose P is an unknown µ-absolutely continuous Borel probability measure supported on $\mathcal{X}$, and let $p : \mathcal{X} \to [0, \infty)$ denote the density of P. Consider a (known) differentiable function $f : (0, \infty) \to \mathbb{R}$.
Given n samples X₁, …, Xₙ drawn IID from P, we are interested in estimating the functional

$$F(P) := \mathbb{E}_{X \sim P}\left[f(p(X))\right].$$

Somewhat more generally (as in divergence estimation), we may have a function $f : (0,\infty)^2 \to \mathbb{R}$ of two variables and a second unknown probability measure Q, with density q and n IID samples Y₁, …, Yₙ. Then, we are interested in estimating

$$F(P, Q) := \mathbb{E}_{X \sim P}\left[f(p(X), q(X))\right].$$

Fix $r \in [1, \infty]$ and a positive integer k. We will work with distances induced by the r-norm

$$\|x\|_r := \left(\sum_{i=1}^D x_i^r\right)^{1/r},$$

and define

$$c_{D,r} := \frac{(2\Gamma(1+1/r))^D}{\Gamma(1+D/r)} = \mu(B(0,1)),$$

where $B(x, \varepsilon) := \{y \in \mathbb{R}^D : \|x - y\|_r < \varepsilon\}$ denotes the open radius-ε ball centered at x. Our estimators use k-nearest neighbor (k-NN) distances:

Definition 1 (k-NN distance). Given n IID samples X₁, …, Xₙ from P, for $x \in \mathbb{R}^D$, we define the k-NN distance $\varepsilon_k(x)$ by $\varepsilon_k(x) = \|x - X_i\|_r$, where $X_i$ is the k-th nearest element (in $\|\cdot\|_r$) of the set {X₁, …, Xₙ} to x. For divergence estimation, given n samples Y₁, …, Yₙ from Q, we similarly define $\delta_k(x)$ by $\delta_k(x) = \|x - Y_i\|_r$, where $Y_i$ is the k-th nearest element of {Y₁, …, Yₙ} to x.

µ-absolute continuity of P precludes the existence of atoms (i.e., $\forall x \in \mathbb{R}^D$, $P(\{x\}) = \mu(\{x\}) = 0$). Hence, each $\varepsilon_k(x) > 0$ a.s. We will require this to study quantities such as $\log \varepsilon_k(x)$ and $1/\varepsilon_k(x)$.

3 Estimator

3.1 k-NN density estimation and plug-in functional estimators

The k-NN density estimator

$$\hat p_k(x) = \frac{k/n}{\mu(B(x, \varepsilon_k(x)))} = \frac{k/n}{c_{D,r}\,\varepsilon_k^D(x)}$$

is a well-studied nonparametric density estimator [28], motivated by noting that, for small ε > 0,

$$p(x) \approx \frac{P(B(x,\varepsilon))}{\mu(B(x,\varepsilon))}, \qquad\text{and that}\qquad P(B(x, \varepsilon_k(x))) \approx k/n.$$

One can show that, for $x \in \mathbb{R}^D$ at which p is continuous, if k → ∞ and k/n → 0 as n → ∞, then $\hat p_k(x) \to p(x)$ in probability ([28], Theorem 3.1). Thus, a natural approach for estimating F(P) is the plug-in estimator

$$\hat F_{PI} := \frac{1}{n}\sum_{i=1}^n f\left(\hat p_k(X_i)\right).$$
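The quantities in Definition 1 and the estimator $\hat p_k$ are straightforward to compute by brute force. A minimal numpy sketch (ours, not the authors' code; Euclidean norm r = 2 only, with an O(n²) distance computation), using the ball-volume formula above:

```python
import numpy as np
from math import gamma

def knn_distances(X, k):
    # eps_k(X_i): distance from X_i to its k-th nearest *other* sample (r = 2 norm)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
    return np.sort(d, axis=1)[:, k - 1]

def knn_density(X, k):
    # \hat p_k(x) = (k/n) / (c_{D,2} eps_k(x)^D), the classical k-NN density estimate
    n, D = X.shape
    c = (2 * gamma(1.5)) ** D / gamma(1 + D / 2)  # c_{D,2} = pi^{D/2} / Gamma(1 + D/2)
    eps = knn_distances(X, k)
    return (k / n) / (c * eps ** D)
```

With k held fixed, $\hat p_k$ remains noisy however large n is, which is exactly the complication Section 3.2 addresses.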
(2)

Since $\hat p_k \to p$ in probability pointwise as k, n → ∞ and f is smooth, one can show that $\hat F_{PI}$ is consistent, and in fact derive finite-sample convergence rates (depending on how k → ∞). For example, [44] show a convergence rate of $O(n^{-\min\{2\beta/(\beta+D),\,1\}})$ for β-Hölder continuous densities (after sample splitting and boundary correction) by setting $k \asymp n^{\beta/(\beta+D)}$. Unfortunately, while necessary to ensure $\mathbb{V}[\hat p_k(x)] \to 0$, the requirement k → ∞ is computationally burdensome. Furthermore, increasing k can increase the bias of $\hat p_k$ due to over-smoothing (see (5) below), suggesting that this may be sub-optimal for estimating F(P). Indeed, similar work based on kernel density estimation [42] suggests that, for plug-in functional estimators, under-smoothing may be preferable, since the empirical mean results in additional smoothing.

3.2 Fixed-k functional estimators

An alternative approach is to fix k as n → ∞. Since $\hat F_{PI}$ is itself an empirical mean, unlike $\mathbb{V}[\hat p_k(x)]$, $\mathbb{V}[\hat F_{PI}] \to 0$ as n → ∞. The more critical complication of fixing k is bias. Since f is typically non-linear, the non-vanishing variance of $\hat p_k$ translates into asymptotic bias. A solution adopted by several papers is to derive a bias correction function B (depending only on known factors) such that

$$\mathbb{E}_{X_1,\dots,X_n}\left[B\left(f\left(\frac{k/n}{\mu(B(x,\varepsilon_k(x)))}\right)\right)\right] = \mathbb{E}_{X_1,\dots,X_n}\left[f\left(\frac{P(B(x,\varepsilon_k(x)))}{\mu(B(x,\varepsilon_k(x)))}\right)\right]. \qquad (3)$$

For continuous p, the quantity

$$p_{\varepsilon_k(x)}(x) := \frac{P(B(x,\varepsilon_k(x)))}{\mu(B(x,\varepsilon_k(x)))} \qquad (4)$$

is a consistent estimate of p(x) with k fixed, but it is not computable, since P is unknown. The bias correction B gives us an asymptotically unbiased estimator

$$\hat F_B(P) := \frac{1}{n}\sum_{i=1}^n B\left(f\left(\hat p_k(X_i)\right)\right) = \frac{1}{n}\sum_{i=1}^n B\left(f\left(\frac{k/n}{\mu(B(X_i,\varepsilon_k(X_i)))}\right)\right)$$

that uses k/n in place of $P(B(x,\varepsilon_k(x)))$. This estimate extends naturally to divergences:

$$\hat F_B(P,Q) := \frac{1}{n}\sum_{i=1}^n B\left(f\left(\hat p_k(X_i), \hat q_k(X_i)\right)\right).$$

As an example, if f = log (as in Shannon entropy), then it can be shown that, for any continuous p,

$$\mathbb{E}\left[\log P(B(x,\varepsilon_k(x)))\right] = \psi(k) - \psi(n).$$
Hence, for $B_{n,k} := \psi(k) - \psi(n) + \log(n) - \log(k)$,

$$\mathbb{E}_{X_1,\dots,X_n}\left[f\left(\frac{k/n}{\mu(B(x,\varepsilon_k(x)))}\right)\right] + B_{n,k} = \mathbb{E}_{X_1,\dots,X_n}\left[f\left(\frac{P(B(x,\varepsilon_k(x)))}{\mu(B(x,\varepsilon_k(x)))}\right)\right],$$

giving the estimator of [20]. Other examples of functionals for which the bias correction is known are given in Table 1. In general, deriving an appropriate bias correction can be quite a difficult problem specific to the functional of interest, and it is not our goal presently to study this problem; rather, we are interested in bounding the error of $\hat F_B(P)$, assuming the bias correction is known. Hence, our results apply to all of the estimators in Table 1, as well as any estimators of this form that may be derived in the future.

4 Related work

4.1 Estimating information theoretic functionals

Recently, there has been much work on analyzing estimators for entropy, mutual information, divergences, and other functionals of densities. Besides bias-corrected fixed-k estimators, most of this work has taken one of three approaches. One series of papers [27, 42, 43] studied a boundary-corrected plug-in approach based on under-smoothed kernel density estimation. This approach has strong finite-sample guarantees, but requires prior knowledge of the support of the density, and can have a slow rate of convergence. A second approach [18, 22] uses a von Mises expansion to partially correct the bias of optimally smoothed density estimates. This is statistically more efficient, but can require computationally demanding numerical integration over the support of the density. A final line of work [30, 31, 44, 46] studied plug-in estimators based on consistent, boundary-corrected k-NN density estimates (i.e., with k → ∞ as n → ∞). [32] study a divergence estimator based on convex risk minimization, but this relies on the context of an RKHS, making the results difficult to compare.
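Putting the pieces of Section 3.2 together for f = log yields the Kozachenko–Leonenko entropy estimator. A self-contained sketch (ours, not the authors' code; Euclidean distances, brute-force O(n²), integer-argument digamma) estimating H(P) = −E[log p(X)] as ψ(n) − ψ(k) + log c_{D,2} + (D/n) Σᵢ log ε_k(Xᵢ):

```python
import numpy as np
from math import gamma, log

EULER_GAMMA = 0.5772156649015329

def digamma_int(m):
    # psi(m) for a positive integer m: -gamma + sum_{j=1}^{m-1} 1/j
    return -EULER_GAMMA + sum(1.0 / j for j in range(1, m))

def kl_entropy(X, k):
    # Kozachenko-Leonenko fixed-k estimate of H(P) = -E[log p(X)], in nats:
    #   H_hat = psi(n) - psi(k) + log(c_D) + (D/n) * sum_i log eps_k(X_i)
    X = np.asarray(X, dtype=float)
    if X.ndim == 1:
        X = X[:, None]               # treat a flat vector as n samples in 1-D
    n, D = X.shape
    c = (2 * gamma(1.5)) ** D / gamma(1 + D / 2)  # Euclidean unit-ball volume
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    eps = np.sort(d, axis=1)[:, k - 1]
    return digamma_int(n) - digamma_int(k) + log(c) + (D / n) * np.log(eps).sum()
```

On a standard 1-D Gaussian sample the estimate should land near the true entropy 0.5·log(2πe) ≈ 1.419 nats even for small fixed k, illustrating the fixed-k construction the analysis below targets.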
Rates of Convergence: For densities over $\mathbb{R}^D$ satisfying a Hölder smoothness condition parametrized by β ∈ (0, ∞), the minimax mean squared error rate for estimating functionals of the form $\int f(p(x))\,dx$ has been known since [6] to be $O(n^{-\min\{8\beta/(4\beta+D),\,1\}})$. [22] recently derived identical minimax rates for divergence estimation. Most of the above estimators have been shown to converge at the rate $O(n^{-\min\{2\beta/(\beta+D),\,1\}})$. Only the von Mises approach [22] is known to achieve the minimax rate for general β and D, but due to its computational demand ($O(2^D n^3)$),³ the authors suggest using other, statistically less efficient estimators for moderate sample sizes. Here, we show that, for β ∈ (0, 2], bias-corrected fixed-k estimators converge at the relatively fast rate $O(n^{-\min\{2\beta/D,\,1\}})$. For β > 2, modifications are needed for the estimator to leverage the additional smoothness of the density. Notably, this rate is adaptive; that is, it does not require selecting a smoothing parameter depending on the unknown β; our results (Theorem 5) imply the above rate is achieved for any fixed choice of k. On the other hand, since no empirical error metric is available for cross-validation, parameter selection is an obstacle for competing estimators.

4.2 Prior analysis of fixed-k estimators

As of writing this paper, the only finite-sample results for $\hat F_B(P)$ were those of [5] for the Kozachenko–Leonenko (KL)⁴ Shannon entropy estimator [20]. Theorem 7.1 of [5] shows that, if the density p has compact support, then the variance of the KL estimator decays as $O(n^{-1})$. They also claim (Theorem 7.2) to bound the bias of the KL estimator by $O(n^{-\beta})$, under the assumptions that p is β-Hölder continuous (β ∈ (0, 1]), bounded away from 0, and supported on the interval [0, 1]. However, in their proof, [5] neglect to bound the additional bias incurred near the boundaries of [0, 1], where the density cannot simultaneously be bounded away from 0 and continuous.
In fact, because the KL estimator does not attempt to correct for boundary bias, it is not clear that the bias should decay as $O(n^{-\beta})$ under these conditions; we require additional conditions at the boundary of $\mathcal{X}$.

³Fixed-k estimators can be computed in $O(Dn^2)$ time, or in $O(2^D n \log n)$ time using k-d trees for small D.
⁴Not to be confused with Kullback–Leibler (KL) divergence, for which we also analyze an estimator.

[49] studied a closely related entropy estimator for which they prove $\sqrt{n}$-consistency. Their estimator is identical to the KL estimator, except that it truncates k-NN distances at $\sqrt{n}$, replacing $\varepsilon_k(x)$ with $\min\{\varepsilon_k(x), \sqrt{n}\}$. This sort of truncation may be necessary for certain fixed-k estimators to satisfy finite-sample bounds for densities of unbounded support, though consistency can be shown regardless. Finally, two very recent papers [12, 4] have analyzed the KL estimator: [12] generalize the results of [5] to D > 1, and [4] weaken the regularity and boundary assumptions required by our bias bound, while deriving the same rate of convergence. Moreover, they show that, if k increases with n at the rate $k \asymp \log^5 n$, the KL estimator is asymptotically efficient (i.e., asymptotically normal, with optimal asymptotic variance). As explained in Section 8, together with our results this elucidates the role of k in the KL estimator: fixing k optimizes the convergence rate of the estimator, but increasing k slowly can further improve error by constant factors.

5 Discussion of assumptions

The lack of finite-sample results for fixed-k estimators is due to several technical challenges. Here, we discuss some of these challenges, motivating the assumptions we make to overcome them. First, these estimators are sensitive to regions of low probability (i.e., p(x) small), for two reasons:

1. Many functions f of interest (e.g., f = log or $f(z) = z^\alpha$, α < 0) have singularities at 0.
2. The k-NN estimate $\hat p_k(x)$ of p(x) is highly biased when p(x) is small.
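The need for a bias correction is easy to see numerically: under local uniformity, $n\,c_{D,r}\,\varepsilon_k^D(x)\,p(x)$ is approximately Gamma(k, 1)-distributed, so the raw estimate $\hat p_k(x)$ has mean roughly $(k/(k-1))\,p(x)$ no matter how large n is, and the smoothing bias (5) compounds this in low-density regions. A Monte Carlo sketch of the non-vanishing multiplicative bias (ours, 1-D, for illustration only):

```python
import numpy as np

def pk_hat_at(x0, X, k):
    # 1-D k-NN density estimate at a fixed point x0 (c_{1,2} = 2, balls are intervals)
    eps = np.sort(np.abs(X - x0))[k - 1]
    return (k / len(X)) / (2 * eps)

# Average \hat p_k(0) over many standard-normal samples of size n:
rng = np.random.default_rng(0)
n, k, reps = 1000, 5, 300
true_p = 1 / np.sqrt(2 * np.pi)  # p(0) for N(0, 1)
est = [pk_hat_at(0.0, rng.standard_normal(n), k) for _ in range(reps)]
ratio = np.mean(est) / true_p    # stays near k/(k-1) = 1.25, not 1, for any n
```

The ratio does not approach 1 as n grows; only the bias-corrected functional estimates do.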
For example, for p β-Hölder continuous (β ∈ (0, 2]), one has ([29], Theorem 2)

$$\mathrm{Bias}(\hat p_k(x)) \asymp \left(\frac{k}{n\,p(x)}\right)^{\beta/D}. \qquad (5)$$

For these reasons, it is common in the analysis of k-NN estimators to assume the following [5, 39]:

(A1) p is bounded away from zero on its support. That is, $p_* := \inf_{x \in \mathcal{X}} p(x) > 0$.

Second, unlike many functional estimators (see, e.g., [34, 45, 42]), the fixed-k estimators we consider do not attempt to correct for boundary bias (i.e., bias incurred due to discontinuity of p on the boundary $\partial\mathcal{X}$ of $\mathcal{X}$).⁵ The boundary bias of the density estimate $\hat p_k(x)$ does vanish at x in the interior $\mathcal{X}^\circ$ of $\mathcal{X}$ as n → ∞, but additional assumptions are needed to obtain finite-sample rates. Either of the following assumptions would suffice:

(A2) p is continuous not only on $\mathcal{X}^\circ$ but also on $\partial\mathcal{X}$ (i.e., p(x) → 0 as $\mathrm{dist}(x, \partial\mathcal{X}) \to 0$).
(A3) p is supported on all of $\mathbb{R}^D$. That is, the support of p has no boundary. This is the approach of [49], but we reiterate that, to handle an unbounded domain, they require truncating $\varepsilon_k(x)$.

Unfortunately, both assumptions (A2) and (A3) are inconsistent with (A1). Our approach is to assume (A2) and replace assumption (A1) with a much milder assumption that p is locally lower bounded on its support in the following sense:

(A4) There exist ρ > 0 and a function $p_* : \mathcal{X} \to (0, \infty)$ such that, for all $x \in \mathcal{X}$ and $r \in (0, \rho]$, $p_*(x) \le \frac{P(B(x,r))}{\mu(B(x,r))}$.

We show in Lemma 2 that assumption (A4) is in fact very mild; in a metric measure space of positive dimension D, as long as p is continuous on $\mathcal{X}$, such a $p_*$ exists for any desired ρ > 0. For simplicity, we will use $\rho = \sqrt{D} = \mathrm{diam}(\mathcal{X})$. As hinted by (5) and the fact that F(P) is an expectation, our bounds will contain terms of the form

$$\mathbb{E}_{X \sim P}\left[\frac{1}{(p_*(X))^{\beta/D}}\right] = \int_{\mathcal{X}} \frac{p(x)}{(p_*(x))^{\beta/D}}\,d\mu(x)$$

(with an additional $f'(p_*(x))$ factor if f has a singularity at zero). Hence, our key assumption is that these quantities are finite. This depends primarily on how quickly p approaches zero near $\partial\mathcal{X}$.
For many functionals, Lemma 6 gives a simple sufficient condition.

⁵This complication was omitted in the bias bound (Theorem 7.2) of [5] for entropy estimation.

6 Preliminary lemmas

Here, we present some lemmas, both as a means of summarizing our proof techniques and also because they may be of independent interest for proving finite-sample bounds for other k-NN methods. Due to space constraints, all proofs are given in the appendix. Our first lemma states that, if p is continuous, then it is locally lower bounded as described in the previous section.

Lemma 2 (Existence of Local Bounds). If p is continuous on $\mathcal{X}$ and strictly positive on the interior $\mathcal{X}^\circ$ of $\mathcal{X}$, then, for $\rho := \sqrt{D} = \mathrm{diam}(\mathcal{X})$, there exists a continuous function $p_* : \mathcal{X}^\circ \to (0, \infty)$ and a constant $p^* \in (0, \infty)$ such that

$$0 < p_*(x) \le \frac{P(B(x,r))}{\mu(B(x,r))} \le p^* < \infty, \qquad \forall x \in \mathcal{X},\ r \in (0, \rho].$$

We now use these local lower and upper bounds to prove that k-NN distances concentrate around a term of order $(k/(n\,p(x)))^{1/D}$. Related lemmas, also based on multiplicative Chernoff bounds, are used by [21, 9] and [8, 19] to prove finite-sample bounds on k-NN methods for cluster tree pruning and classification, respectively. For cluster tree pruning, the relevant inequalities bound the error of the k-NN density estimate, and, for classification, they lower bound the probability of nearby samples of the same class. Unlike in cluster tree pruning, we are not using a consistent density estimate, and, unlike in classification, our estimator is a function of the k-NN distances themselves (rather than their ordering). Thus, our statement is somewhat different, bounding the k-NN distances themselves:

Lemma 3 (Concentration of k-NN Distances). Suppose p is continuous on $\mathcal{X}$ and strictly positive on $\mathcal{X}^\circ$. Let $p_*$ and $p^*$ be as in Lemma 2. Then, for any $x \in \mathcal{X}^\circ$:

1. If $r > \left(\frac{k}{p_*(x)\,n}\right)^{1/D}$, then $\mathbb{P}\left[\varepsilon_k(x) > r\right] \le e^{-p_*(x) r^D n}\left(\frac{e\,p_*(x)\,r^D n}{k}\right)^k$.
2. If $r \in \left(0, \left(\frac{k}{p^* n}\right)^{1/D}\right)$, then $\mathbb{P}\left[\varepsilon_k(x) < r\right] \le e^{-p_*(x) r^D n}\left(\frac{e\,p^* r^D n}{k}\right)^{k\,p_*(x)/p^*}$.
It is worth noting an asymmetry in the above bounds: counter-intuitively, the lower bound depends on $p^*$. This asymmetry is related to the large bias of k-NN density estimators when p is small (as in (5)). The next lemma uses Lemma 3 to bound expectations of monotone functions of the ratio $\hat p_k / p_*$. As suggested by the form of the integrals (6) and (7), this is essentially a finite-sample statement of the fact that (appropriately normalized) k-NN distances have Erlang asymptotic distributions; this asymptotic statement is key to the consistency proofs of [25] and [39] for α-entropy and divergence estimators.

Lemma 4. Let p be continuous on $\mathcal{X}$ and strictly positive on $\mathcal{X}^\circ$. Define $p_*$ and $p^*$ as in Lemma 2. Suppose $f : (0, \infty) \to \mathbb{R}$ is continuously differentiable and $f' > 0$. Then, we have the upper bound⁶

$$\sup_{x \in \mathcal{X}^\circ} \mathbb{E}\left[f_+\!\left(\frac{p_*(x)}{\hat p_k(x)}\right)\right] \le f_+(1) + e\sqrt{k} \int_k^\infty \frac{e^{-y} y^k}{\Gamma(k+1)}\, f_+\!\left(\frac{y}{k}\right) dy, \qquad (6)$$

and, for all $x \in \mathcal{X}^\circ$, with $\kappa(x) := k\,p_*(x)/p^*$, the lower bound

$$\mathbb{E}\left[f_-\!\left(\frac{p_*(x)}{\hat p_k(x)}\right)\right] \le f_-(1) + e\sqrt{\frac{k}{\kappa(x)}} \int_0^{\kappa(x)} \frac{e^{-y} y^{\kappa(x)}}{\Gamma(\kappa(x)+1)}\, f_-\!\left(\frac{y}{k}\right) dy. \qquad (7)$$

Note that plugging the function $z \mapsto f\left(\left(\frac{kz}{c_{D,r}\,n\,p_*(x)}\right)^{1/D}\right)$ into Lemma 4 gives bounds on $\mathbb{E}[f(\varepsilon_k(x))]$. As one might guess from Lemma 3 and the assumption that f is smooth, this bound is roughly of the order $(k/(n\,p(x)))^{1/D}$. For example, for any α > 0, a simple calculation from (6) gives

$$\mathbb{E}\left[\varepsilon_k^\alpha(x)\right] \le \left(1 + \frac{\alpha}{D}\right)\left(\frac{k}{c_{D,r}\,n\,p_*(x)}\right)^{\alpha/D}. \qquad (8)$$

(8) is used for our bias bound, and more direct applications of Lemma 4 are used in the variance bound.

⁶$f_+(x) = \max\{0, f(x)\}$ and $f_-(x) = -\min\{0, f(x)\}$ denote the positive and negative parts of f. Recall that $\mathbb{E}[f(X)] = \mathbb{E}[f_+(X)] - \mathbb{E}[f_-(X)]$.

7 Main results

Here, we present our main results on the bias and variance of $\hat F_B(P)$. Again, due to space constraints, all proofs are given in the appendix. We begin by bounding the bias:

Theorem 5 (Bias Bound). Suppose that, for some β ∈ (0, 2], p is β-Hölder continuous with constant L > 0 on $\mathcal{X}$, and p is strictly positive on $\mathcal{X}^\circ$. Let $p_*$ and $p^*$ be as in Lemma 2.
Let $f : (0,\infty) \to \mathbb{R}$ be differentiable, and define $M_{f,p} : \mathcal{X} \to [0, \infty)$ by

$$M_{f,p}(x) := \sup_{z \in [p_*(x),\,p^*]} \left|\frac{d}{dz} f(z)\right|.$$

Assume

$$C_f := \mathbb{E}_{X \sim p}\left[\frac{M_{f,p}(X)}{(p_*(X))^{\beta/D}}\right] < \infty.$$

Then,

$$\left|\mathbb{E}\,\hat F_B(P) - F(P)\right| \le C_f L \left(\frac{k}{n}\right)^{\beta/D}.$$

The statement for divergences is similar, assuming that q is also β-Hölder continuous with constant L and strictly positive on $\mathcal{X}^\circ$. Specifically, we get the same bound if we replace $M_{f,p}$ with

$$M_{f,p}(x) := \sup_{(w,z) \in [p_*(x),\,p^*] \times [q_*(x),\,q^*]} \left|\frac{\partial}{\partial w} f(w,z)\right|,$$

define $M_{f,q}$ similarly (i.e., with $\frac{\partial}{\partial z}$), and assume that

$$C_f := \mathbb{E}_{X \sim p}\left[\frac{M_{f,p}(X)}{(p_*(X))^{\beta/D}}\right] + \mathbb{E}_{X \sim p}\left[\frac{M_{f,q}(X)}{(q_*(X))^{\beta/D}}\right] < \infty.$$

As an example of the applicability of Theorem 5, consider estimating the Shannon entropy. Then f(z) = log(z), and so we need $C_f = \int_{\mathcal{X}} (p_*(x))^{-\beta/D}\,d\mu(x) < \infty$. The assumption $C_f < \infty$ is not immediately transparent. For the functionals in Table 1, $C_f$ has the form $\int_{\mathcal{X}} (p(x))^{-c}\,dx$ for some c > 0, and hence $C_f < \infty$ intuitively means that p(x) cannot approach zero too quickly as $\mathrm{dist}(x, \partial\mathcal{X}) \to 0$. The following lemma gives a formal sufficient condition:

Lemma 6 (Boundary Condition). Let c > 0. Suppose there exist $b_\partial \in (0, 1/c)$ and $c_\partial, \rho_\partial > 0$ such that, for all $x \in \mathcal{X}$ with $\varepsilon(x) := \mathrm{dist}(x, \partial\mathcal{X}) < \rho_\partial$, we have $p(x) \ge c_\partial\,\varepsilon^{b_\partial}(x)$. Then $\int_{\mathcal{X}} (p_*(x))^{-c}\,d\mu(x) < \infty$.

In the supplement, we give examples showing that this condition is fairly general, satisfied by densities proportional to $x^{b_\partial}$ near $\partial\mathcal{X}$ (i.e., those with at least $b_\partial$ nonzero one-sided derivatives on the boundary).

We now bound the variance. The main obstacle here is that the fixed-k estimator is an empirical mean of dependent terms (functions of k-NN distances). We generalize the approach used by [5] to bound the variance of the KL estimator of Shannon entropy. The key insight is the geometric fact that, in $(\mathbb{R}^D, \|\cdot\|_p)$, there exists a constant $N_{k,D}$ (independent of n) such that any sample $X_i$ can be amongst the k nearest neighbors of at most $N_{k,D}$ other samples.
Hence, at most $N_{k,D} + 1$ of the terms in (2) can change when a single $X_i$ is added, suggesting a variance bound via the Efron–Stein inequality [10], which bounds the variance of a function of random variables in terms of its expected change when its arguments are resampled. [11] originally used this approach to prove a general Law of Large Numbers (LLN) for nearest-neighbor statistics. Unfortunately, this LLN relies on bounded-kurtosis assumptions that are difficult to justify for the log or negative-power statistics we study.

Theorem 7 (Variance Bound). Suppose $B \circ f$ is continuously differentiable and strictly monotone. Assume $C_{f,p} := \mathbb{E}_{X \sim P}\left[B^2(f(p_*(X)))\right] < \infty$ and $C_f := \int_0^\infty e^{-y} y^k f(y)\,dy < \infty$. Then, for

$$C_V := 2\,(1 + N_{k,D})(3 + 4k)\,(C_{f,p} + C_f),$$

we have $\mathbb{V}\left[\hat F_B(P)\right] \le \frac{C_V}{n}$.

As an example, if f = log (as in Shannon entropy), then, since B is an additive constant, we simply require $\int_{\mathcal{X}} p(x)\log^2(p_*(x))\,d\mu(x) < \infty$. In general, $N_{k,D}$ is of the order $k \cdot 2^{cD}$ for some c > 0. Our bound is likely quite loose in k; in practice, $\mathbb{V}[\hat F_B(P)]$ typically decreases somewhat with k.

8 Conclusions and discussion

In this paper, we gave finite-sample bias and variance error bounds for a class of fixed-k estimators of functionals of probability density functions, including the entropy and divergence estimators in Table 1. The bias and variance bounds in turn imply a bound on the mean squared error (MSE) of the bias-corrected estimator, via the usual decomposition into squared bias and variance:

Corollary 8 (MSE Bound). Under the conditions of Theorems 5 and 7,

$$\mathbb{E}\left[\left(\hat H_k(X) - H(X)\right)^2\right] \le C_f^2 L^2 \left(\frac{k}{n}\right)^{2\beta/D} + \frac{C_V}{n}. \qquad (9)$$

Choosing k: Contrary to the name, fixing k is not required for "fixed-k" estimators. [36] empirically studied the effect of changing k with n and found that fixing k = 1 gave the best results for estimating F(P). However, there has been no theoretical justification for fixing k.
Assuming tightness of our bias bound in k, we provide this justification in a worst-case sense: since our bias bound is nondecreasing in k and our variance bound is no larger than the minimax MSE rate for these estimation problems, reducing variance (i.e., increasing k) does not improve the (worst-case) convergence rate. On the other hand, [4] recently showed that slowly increasing k can improve the asymptotic variance of the estimator, with the rate $k \asymp \log^5 n$ leading to asymptotic efficiency. In view of these results, we suggest that increasing k can improve error by constant factors, but cannot improve the convergence rate. Finally, we note that [36] found that increasing k quickly (e.g., k = n/2) was best for certain hypothesis tests based on these estimators. Intuitively, this is because, in testing problems, bias is less problematic than variance (e.g., an asymptotically biased estimator can still lead to a consistent test).

Acknowledgments

This material is based upon work supported by a National Science Foundation Graduate Research Fellowship to the first author under Grant No. DGE-1252522.

References

[1] C. Adami. Information theory in molecular biology. Physics of Life Reviews, 1:3–22, 2004.
[2] M. Aghagolzadeh, H. Soltanian-Zadeh, B. Araabi, and A. Aghagolzadeh. A hierarchical clustering based on mutual information maximization. In Proc. of IEEE International Conf. on Image Processing, 2007.
[3] P. A. Alemany and D. H. Zanette. Fractal random walks from a variational formalism for Tsallis entropies. Phys. Rev. E, 49(2):R956–R958, Feb 1994.
[4] Thomas B. Berrett, Richard J. Samworth, and Ming Yuan. Efficient multivariate entropy estimation via k-nearest neighbour distances. arXiv preprint arXiv:1606.00304, 2016.
[5] Gérard Biau and Luc Devroye. Entropy estimation. In Lectures on the Nearest Neighbor Method, pages 75–91. Springer, 2015.
[6] L. Birgé and P. Massart. Estimation of integral functionals of a density.
Annals of Statistics, 23:11–29, 1995.
[7] B. Chai, D. B. Walther, D. M. Beck, and L. Fei-Fei. Exploring functional connectivity of the human brain using multivariate information analysis. In NIPS, 2009.
[8] Kamalika Chaudhuri and Sanjoy Dasgupta. Rates of convergence for nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 3437–3445, 2014.
[9] Kamalika Chaudhuri, Sanjoy Dasgupta, Samory Kpotufe, and Ulrike von Luxburg. Consistent procedures for cluster tree estimation and pruning. IEEE Trans. on Information Theory, 60(12):7900–7912, 2014.
[10] Bradley Efron and Charles Stein. The jackknife estimate of variance. Ann. of Stat., pages 586–596, 1981.
[11] D. Evans. A law of large numbers for nearest neighbor statistics. In Proceedings of the Royal Society, volume 464, pages 3175–3192, 2008.
[12] Weihao Gao, Sewoong Oh, and Pramod Viswanath. Demystifying fixed k-nearest neighbor information estimators. arXiv preprint arXiv:1604.03006, 2016.
[13] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. J. Nonparametric Stat., 17:277–297, 2005.
[14] A. O. Hero, B. Ma, O. Michel, and J. Gorman. Alpha-divergence for classification, indexing and retrieval, 2002. Communications and Signal Processing Laboratory Technical Report CSPL-328.
[15] A. O. Hero, B. Ma, O. J. J. Michel, and J. Gorman. Applications of entropic spanning graphs. IEEE Signal Processing Magazine, 19(5):85–95, 2002.
[16] K. Hlaváčková-Schindler, M. Paluš, M. Vejmelka, and J. Bhattacharya. Causality detection based on information-theoretic approaches in time series analysis. Physics Reports, 441:1–46, 2007.
[17] M. M. Van Hulle. Constrained subspace ICA based on mutual information optimization directly. Neural Computation, 20:964–973, 2008.
[18] Kirthevasan Kandasamy, Akshay Krishnamurthy, Barnabas Poczos, Larry Wasserman, et al.
Nonparametric von Mises estimators for entropies, divergences and mutual informations. In NIPS, pages 397–405, 2015.
[19] Aryeh Kontorovich and Roi Weiss. A Bayes consistent 1-NN classifier. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, pages 480–488, 2015.
[20] L. F. Kozachenko and N. N. Leonenko. A statistical estimate for the entropy of a random vector. Problems of Information Transmission, 23:9–16, 1987.
[21] Samory Kpotufe and Ulrike V. Luxburg. Pruning nearest neighbor cluster trees. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 225–232, 2011.
[22] A. Krishnamurthy, K. Kandasamy, B. Poczos, and L. Wasserman. Nonparametric estimation of Renyi divergence and friends. In International Conference on Machine Learning (ICML), 2014.
[23] E. G. Learned-Miller and J. W. Fisher. ICA using spacings estimates of entropy. J. Machine Learning Research, 4:1271–1295, 2003.
[24] N. Leonenko and L. Pronzato. Correction of 'A class of Rényi information estimators for multidimensional densities', Ann. Statist. 36 (2008) 2153–2182. 2010.
[25] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. Annals of Statistics, 36(5):2153–2182, 2008.
[26] J. Lewi, R. Butera, and L. Paninski. Real-time adaptive information-theoretic optimization of neurophysiology experiments. In Advances in Neural Information Processing Systems, volume 19, 2007.
[27] H. Liu, J. Lafferty, and L. Wasserman. Exponential concentration inequality for mutual information estimation. In Neural Information Processing Systems (NIPS), 2012.
[28] D. O. Loftsgaarden and C. P. Quesenberry. A nonparametric estimate of a multivariate density function. Ann. Math. Statist., 36:1049–1051, 1965.
[29] Y. P. Mack and M. Rosenblatt. Multivariate k-nearest neighbor density estimates. J. Multivar. Analysis, 1979.
[30] Kevin Moon and Alfred Hero.
Multivariate f-divergence estimation with confidence. In Advances in Neural Information Processing Systems, pages 2420–2428, 2014. [31] Kevin R Moon and Alfred O Hero. Ensemble estimation of multivariate f-divergence. In Information Theory (ISIT), 2014 IEEE International Symposium on, pages 356–360. IEEE, 2014. [32] X. Nguyen, M.J. Wainwright, and M.I. Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, To appear., 2010. [33] J. Oliva, B. Poczos, and J. Schneider. Distribution to distribution regression. In International Conference on Machine Learning (ICML), 2013. [34] D. Pál, B. Póczos, and Cs. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Proceedings of the Neural Information Processing Systems, 2010. [35] H. Peng and C. Dind. Feature selection based on mutual information: Criteria of max-dependency, max-relevance, and min-redundancy. IEEE Trans On Pattern Analysis and Machine Intelligence, 27, 2005. [36] F. Pérez-Cruz. Estimation of information theoretic measures for continuous random variables. In Advances in Neural Information Processing Systems 21, 2008. [37] B. Póczos and A. L˝orincz. Independent subspace analysis using geodesic spanning trees. In ICML, 2005. [38] B. Póczos and A. L˝orincz. Identification of recurrent neural networks by Bayesian interrogation techniques. J. Machine Learning Research, 10:515–554, 2009. [39] B. Poczos and J. Schneider. On the estimation of alpha-divergences. In International Conference on AI and Statistics (AISTATS), volume 15 of JMLR Workshop and Conference Proceedings, pages 609–617, 2011. [40] B. Poczos, L. Xiong, D. Sutherland, and J. Schneider. Nonparametric kernel estimators for image classification. In 25th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [41] C. Shan, S. Gong, and P. W. Mcowan. 
Conditional mutual information based boosting for facial expression recognition. In British Machine Vision Conference (BMVC), 2005. [42] S. Singh and B. Poczos. Exponential concentration of a density functional estimator. In Neural Information Processing Systems (NIPS), 2014. [43] S. Singh and B. Poczos. Generalized exponential concentration inequality for Rényi divergence estimation. In International Conference on Machine Learning (ICML), 2014. [44] Kumar Sricharan, Raviv Raich, and Alfred O Hero. k-nearest neighbor estimation of entropies with confidence. In IEEE International Symposium on Information Theory, pages 1205–1209. IEEE, 2011. [45] Kumar Sricharan, Raviv Raich, and Alfred O Hero III. Estimation of nonlinear functionals of densities with confidence. Information Theory, IEEE Transactions on, 58(7):4135–4159, 2012. [46] Kumar Sricharan, Dennis Wei, and Alfred O Hero. Ensemble estimators for multivariate entropy estimation. IEEE Transactions on Information Theory, 59(7):4374–4388, 2013. [47] Z. Szabó, B. Póczos, and A. L˝orincz. Undercomplete blind subspace deconvolution. J. Machine Learning Research, 8:1063–1095, 2007. [48] Zoltán Szabó. Information theoretical estimators toolbox. Journal of Machine Learning Research, 15: 283–287, 2014. (https://bitbucket.org/szzoli/ite/). [49] A. B. Tsybakov and E. C. van der Meulen. Root-n consistent estimators of entropy for densities with unbounded support. Scandinavian J. Statistics, 23:75–83, 1996. [50] Q. Wang, S.R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearestneighbor distances. IEEE Transactions on Information Theory, 55(5), 2009. [51] E. Wolsztynski, E. Thierry, and L. Pronzato. Minimum-entropy estimation in semi-parametric models. Signal Process., 85(5):937–949, 2005. ISSN 0165-1684. 9
2016
110
6,007
Maximizing Influence in an Ising Network: A Mean-Field Optimal Solution Christopher W. Lynn Department of Physics and Astronomy University of Pennsylvania chlynn@sas.upenn.edu Daniel D. Lee Department of Electrical and Systems Engineering University of Pennsylvania ddlee@seas.upenn.edu Abstract Influence maximization in social networks has typically been studied in the context of contagion models and irreversible processes. In this paper, we consider an alternate model that treats individual opinions as spins in an Ising system at dynamic equilibrium. We formalize the Ising influence maximization problem, which has a natural physical interpretation as maximizing the magnetization given a budget of external magnetic field. Under the mean-field (MF) approximation, we present a gradient ascent algorithm that uses the susceptibility to efficiently calculate local maxima of the magnetization, and we develop a number of sufficient conditions for when the MF magnetization is concave and our algorithm converges to a global optimum. We apply our algorithm on random and real-world networks, demonstrating, remarkably, that the MF optimal external fields (i.e., the external fields which maximize the MF magnetization) shift from focusing on high-degree individuals at high temperatures to focusing on low-degree individuals at low temperatures. We also establish a number of novel results about the structure of steady-states in the ferromagnetic MF Ising model on general graph topologies, which are of independent interest. 1 Introduction With the proliferation of online social networks, the problem of optimally influencing the opinions of individuals in a population has garnered tremendous attention [1–3]. The prevailing paradigm treats marketing as a viral process, whereby the advertiser is given a budget of seed infections and chooses the subset of individuals to infect such that the spread of the ensuing contagion is maximized. 
The development of algorithmic methods for influence maximization under the viral paradigm has been the subject of vigorous study, resulting in a number of efficient techniques for identifying meaningful marketing strategies in real-world settings [4–6]. While the viral paradigm accurately describes out-of-equilibrium phenomena, such as the introduction of new ideas or products to a system, these models fail to capture reverberant opinion dynamics wherein repeated interactions between individuals in the network give rise to complex macroscopic opinion patterns, as, for example, is the case in the formation of political opinions [7–10]. In this context, rather than maximizing the spread of a viral advertisement, the marketer is interested in optimally shifting the equilibrium opinions of individuals in the network. To describe complex macroscopic opinion patterns resulting from repeated microscopic interactions, we naturally employ the language of statistical mechanics, treating individual opinions as spins in an Ising system at dynamic equilibrium and modeling marketing as the addition of an external magnetic field. The resulting problem, which we call Ising influence maximization (IIM), has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external field. While a number of models have been proposed for describing reverberant opinion dynamics [11], our use of the Ising model follows a vibrant interdisciplinary literature [12, 13], and is closely related to models in game theory [14, 15] and sociophysics [16, 17]. Furthermore, complex Ising models have found widespread use in machine learning, and our model is formally equivalent to a pair-wise Markov random field or a Boltzmann machine [18–20].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Our main contributions are as follows: 1.
We formalize the influence maximization problem in the context of the Ising model, which we call the Ising influence maximization (IIM) problem. We also propose the mean-field Ising influence maximization (MF-IIM) problem as an approximation to IIM (Section 2). 2. We find sufficient conditions under which the MF-IIM objective is smooth and concave, and we present a gradient ascent algorithm that guarantees an ϵ-approximation to MF-IIM (Section 4). 3. We present numerical simulations that probe the structure and performance of MF optimal marketing strategies. We find that at high temperatures, it is optimal to focus influence on high-degree individuals, while at low temperatures, it is optimal to spread influence among low-degree individuals (Sections 5 and 6). 4. Throughout the paper we present a number of novel results concerning the structure of steady-states in the ferromagnetic MF Ising model on general (weighted, directed) strongly-connected graphs, which are of independent interest. We name two highlights: • The well-known pitchfork bifurcation structure for the ferromagnetic MF Ising model on a lattice extends exactly to general strongly-connected graphs, and the critical temperature is equal to the spectral radius of the adjacency matrix (Theorem 3). • There can exist at most one stable steady-state with non-negative (non-positive) components, and it is smooth and concave (convex) in the external field (Theorem 4).

2 The Ising influence maximization problem

We consider a weighted, directed social network consisting of a set of individuals N = {1, . . . , n}, each of which is assigned an opinion σ_i ∈ {±1} that captures its current state. By analogy with the Ising model, we refer to σ = (σ_i) as a spin configuration of the system.
Individuals in the network interact via a non-negative weighted coupling matrix J ∈ R^{n×n}_{≥0}, where J_ij ≥ 0 represents the amount of influence that individual j holds over the opinion of individual i, and the non-negativity of J represents the assumption that opinions of neighboring individuals tend to align, known in physics as a ferromagnetic interaction. Each individual also interacts with forces external to the network via an external field h ∈ R^n. For example, if the spins represent the political opinions of individuals in a social network, then J_ij represents the influence that j holds over i's opinion and h_i represents the political bias of node i due to external forces such as campaign advertisements and news articles. The opinions of individuals in the network evolve according to asynchronous Glauber dynamics. At each time t, an individual i is selected uniformly at random and her opinion is updated in response to the external field h and the opinions of others in the network σ(t) by sampling from

P(σ_i(t+1) = 1 | σ(t)) = e^{β(Σ_j J_ij σ_j(t) + h_i)} / Σ_{σ′_i = ±1} e^{β σ′_i (Σ_j J_ij σ_j(t) + h_i)},   (1)

where β is the inverse temperature, which we refer to as the interaction strength, and unless otherwise specified, sums are assumed over N. Together, the quadruple (N, J, h, β) defines our system. We refer to the total expected opinion, M = Σ_i ⟨σ_i⟩, as the magnetization, where ⟨·⟩ denotes an average over the dynamics in Eq. (1), and we often consider the magnetization as a function of the external field, denoted M(h). Another important concept is the susceptibility matrix, χ_ij = ∂⟨σ_i⟩/∂h_j, which quantifies the response of individual i to a change in the external field on node j. We study the problem of maximizing the magnetization of an Ising system with respect to the external field.
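As a concrete illustration of the dynamics above, the following sketch (Python with NumPy; the two-node network and parameter values are toy choices, not from the paper) estimates the magnetization M by running asynchronous Glauber updates. Note that Eq. (1) reduces to a logistic function of the local field Σ_j J_ij σ_j(t) + h_i.

```python
import numpy as np

def glauber_magnetization(J, h, beta, steps=60_000, burn_in=20_000, seed=0):
    """Estimate M = sum_i <sigma_i> by asynchronous Glauber dynamics:
    at each step a node i is chosen uniformly at random and its spin is
    resampled from Eq. (1), a logistic function of the local field."""
    rng = np.random.default_rng(seed)
    n = len(h)
    sigma = rng.choice([-1, 1], size=n)
    total, kept = 0.0, 0
    for t in range(steps):
        i = rng.integers(n)
        field = J[i] @ sigma + h[i]
        # P(sigma_i = +1) = e^{beta*field} / (e^{beta*field} + e^{-beta*field})
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        sigma[i] = 1 if rng.random() < p_up else -1
        if t >= burn_in:  # average only after the burn-in period
            total += sigma.sum()
            kept += 1
    return total / kept

# Toy example: two ferromagnetically coupled spins in a positive field
J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.array([0.5, 0.5])
M = glauber_magnetization(J, h, beta=1.0)
```

For this symmetric two-spin toy system the stationary distribution is the Boltzmann distribution, whose exact magnetization is roughly 1.4, so the Monte Carlo estimate should land nearby.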
We assume that an external field h can be added to the system, subject to the constraints h ≥ 0 and Σ_i h_i ≤ H, where H > 0 is the external field budget, and we denote the set of feasible external fields by F_H = {h ∈ R^n : h ≥ 0, Σ_i h_i = H}. In general, we also assume that the system experiences an initial external field b ∈ R^n, which cannot be controlled.

Definition 1. (Ising influence maximization (IIM)) Given a system (N, J, b, β) and a budget H, find a feasible external field h ∈ F_H that maximizes the magnetization; that is, find an optimal external field h* such that

h* = arg max_{h ∈ F_H} M(b + h).   (2)

Notation. Unless otherwise specified, bold symbols represent column vectors with the appropriate number of components, while non-bold symbols with subscripts represent individual components. We often abuse notation and write relations such as m ≥ 0 to mean m_i ≥ 0 for all components i.

2.1 The mean-field approximation

In general, calculating expectations over the dynamics in Eq. (1) requires Monte Carlo simulations or other numerical approximation techniques. To make analytic progress, we employ the variational mean-field approximation, which has roots in statistical physics and has long been used to tackle inference problems in Boltzmann machines and Markov random fields [21–24]. The mean-field approximation replaces the intractable task of calculating exact averages over Eq. (1) with the problem of solving the following set of self-consistency equations:

m_i = tanh(β(Σ_j J_ij m_j + h_i)),   (3)

for all i ∈ N, where m_i approximates ⟨σ_i⟩. We refer to the right-hand side of Eq. (3) as the mean-field map, f(m) = tanh[β(Jm + h)], where tanh(·) is applied component-wise. In this way, a fixed point of the mean-field map is a solution to Eq. (3), which we call a steady-state. In general, there may be many solutions to Eq. (3), and we denote by M_h the set of steady-states for a system (N, J, h, β).
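Numerically, a steady-state can be found by simply iterating the mean-field map until it stops moving; a minimal sketch (toy two-node system, NumPy assumed; for β below the critical value 1/ρ(J) discussed later, the map is a contraction and the iteration converges from any start):

```python
import numpy as np

def mean_field_steady_state(J, h, beta, tol=1e-12, max_iter=100_000):
    """Iterate the mean-field map f(m) = tanh(beta * (J m + h)) to a
    fixed point, i.e. a solution of the self-consistency equations (3)."""
    m = np.zeros(len(h))
    for _ in range(max_iter):
        m_new = np.tanh(beta * (J @ m + h))
        if np.max(np.abs(m_new - m)) < tol:  # converged to a fixed point
            return m_new
        m = m_new
    return m

J = np.array([[0.0, 1.0], [1.0, 0.0]])
h = np.array([0.5, 0.5])
m = mean_field_steady_state(J, h, beta=0.8)  # beta < 1/rho(J) = 1 here
```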
We say that a steady-state m is stable if ρ(f′(m)) < 1, where ρ(·) denotes the spectral radius and

f′(m)_ij = ∂f_i/∂m_j |_m = β(1 − m_i²) J_ij  ⇒  f′(m) = βD(m)J,   (4)

where D(m)_ij = (1 − m_i²)δ_ij. Furthermore, under the mean-field approximation, given a stable steady-state m, the susceptibility has a particularly nice form:

χ^MF_ij = β(1 − m_i²)(Σ_k J_ik χ_kj + δ_ij)  ⇒  χ^MF = β(I − βD(m)J)^{−1} D(m),   (5)

where I is the n × n identity matrix. For the purpose of uniquely defining our objective, we optimistically choose to maximize the maximum magnetization among the set of steady-states, defined by

M^MF(h) = max_{m ∈ M_h} Σ_i m_i(h).   (6)

We note that the pessimistic framework of maximizing the minimum magnetization yields an equally valid objective. We also note that simply choosing a steady-state to optimize does not yield a well-defined objective since, as h increases, steady-states can pop in and out of existence.

Definition 2. (Mean-field Ising influence maximization (MF-IIM)) Given a system (N, J, b, β) and a budget H, find an optimal external field h* such that

h* = arg max_{h ∈ F_H} M^MF(b + h).   (7)

3 The structure of steady-states in the MF Ising model

Before proceeding further, we must prove an important result concerning the existence and structure of solutions to Eq. (3), for if there exists a system that does not admit a steady-state, then our objective is ill-defined. Furthermore, if there exists a unique steady-state m, then M^MF = Σ_i m_i, and there is no ambiguity in our choice of objective. Theorem 3 establishes that every system admits a steady-state and that the well-known pitchfork bifurcation structure for steady-states of the ferromagnetic MF Ising model on a lattice extends exactly to general (weighted, directed) strongly-connected graphs. In particular, for any strongly-connected graph described by J, there is a critical interaction strength βc below which there exists a unique and stable steady-state.
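The closed form for χ^MF in Eq. (5) can be checked against a finite difference of the steady-state, since χ_ij = ∂m_i/∂h_j. A small sketch (toy values, NumPy assumed) computes χ^MF at a fixed point and compares its first column with (m(h + εe_1) − m(h))/ε:

```python
import numpy as np

def mf_steady_state(J, h, beta, iters=5_000):
    """Fixed-point iteration of the mean-field map, Eq. (3)."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))
    return m

def mf_susceptibility(J, m, beta):
    """Eq. (5): chi^MF = beta (I - beta D(m) J)^{-1} D(m),
    with D(m) = diag(1 - m_i^2)."""
    D = np.diag(1.0 - m**2)
    return beta * np.linalg.inv(np.eye(len(m)) - beta * D @ J) @ D

J = np.array([[0.0, 0.6], [0.6, 0.0]])
h = np.array([0.4, 0.1])
beta = 0.8
m = mf_steady_state(J, h, beta)
chi = mf_susceptibility(J, m, beta)

# Finite-difference check of column 0: perturb h_0 by eps
eps = 1e-6
h_pert = h.copy()
h_pert[0] += eps
fd_col0 = (mf_steady_state(J, h_pert, beta) - m) / eps
```

For a ferromagnetic J the Neumann series of (I − βD(m)J)^{−1} has non-negative terms, so all entries of χ^MF come out non-negative, matching the intuition that raising any node's field cannot lower any expected opinion.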
For h = 0, as β crosses βc from below, two new stable steady-states appear, one with all-positive components and one with all-negative components. Interestingly, the critical interaction strength is equal to the inverse of the spectral radius of J, denoted βc = 1/ρ(J).

Theorem 3. Any system (N, J, h, β) exhibits a steady-state. Furthermore, if its network is strongly-connected, then, for β < βc, there exists a unique and stable steady-state. For h = 0, as β crosses βc from below, the unique steady-state gives rise to two stable steady-states, one with all-positive components and one with all-negative components.

Proof sketch. The existence of a steady-state follows directly by applying Brouwer's fixed-point theorem to f. For β < βc, it can be shown that f is a contraction mapping, and hence admits a unique and stable steady-state by Banach's fixed-point theorem. For h = 0 and β < βc, m = 0 is the unique steady-state and f′(m) = βJ. Because J is strongly-connected, the Perron-Frobenius theorem guarantees a simple eigenvalue equal to ρ(J) and a corresponding all-positive eigenvector. Thus, when β crosses 1/ρ(J) from below, the Perron-Frobenius eigenvalue of f′(m) crosses 1 from below, giving rise to a supercritical pitchfork bifurcation with two new stable steady-states corresponding to the Perron-Frobenius eigenvector.

Remark. Some of our results assume J is strongly-connected in order to use the Perron-Frobenius theorem. We note that this assumption is not restrictive, since any graph can be efficiently decomposed into strongly-connected components on which our results apply independently.

Theorem 3 shows that the objective M^MF(b + h) is well-defined. Furthermore, for β < βc, Theorem 3 guarantees a unique and stable steady-state m for all b + h. In this case, MF-IIM reduces to maximizing M^MF = Σ_i m_i, and because m is stable, M^MF(b + h) is smooth for all h by the implicit function theorem.
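The threshold βc = 1/ρ(J) of Theorem 3 is easy to probe numerically: at h = 0, iterating the mean-field map below βc collapses to m = 0, while above βc an all-positive branch appears. A sketch on a directed 3-cycle (toy example, NumPy assumed):

```python
import numpy as np

def critical_beta(J):
    """Theorem 3: beta_c = 1 / rho(J), the inverse spectral radius of J."""
    return 1.0 / np.max(np.abs(np.linalg.eigvals(J)))

def mf_fixed_point(J, beta, m0, iters=20_000):
    """Iterate the mean-field map with h = 0 from a given start m0."""
    m = np.array(m0, dtype=float)
    for _ in range(iters):
        m = np.tanh(beta * (J @ m))
    return m

# Directed 3-cycle (strongly connected): rho(J) = 1, so beta_c = 1
J = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]], dtype=float)
bc = critical_beta(J)
below = mf_fixed_point(J, 0.5 * bc, [0.9, 0.9, 0.9])  # collapses to m = 0
above = mf_fixed_point(J, 2.0 * bc, [0.9, 0.9, 0.9])  # all-positive branch
```

Starting the same iteration from an all-negative configuration above βc would land on the mirror-image all-negative steady-state, consistent with the pitchfork picture.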
Thus, for β < βc, we can use standard gradient ascent techniques to efficiently calculate locally-optimal solutions to MF-IIM. In general, M^MF is not necessarily smooth in h since the topological structure of steady-states may change as h varies. However, in the next section we show that if there exists a stable and entry-wise non-negative steady-state, and if J is strongly-connected, then M^MF(b + h) is both smooth and concave in h, regardless of the interaction strength.

4 Sufficient conditions for when MF-IIM is concave

We consider conditions for which MF-IIM is smooth and concave, and hence exactly solvable by efficient techniques. The case under consideration is when J is strongly-connected and there exists a stable non-negative steady-state.

Theorem 4. Let (N, J, b, β) describe a system with a strongly-connected graph for which there exists a stable non-negative steady-state m(b). Then, for any H, M^MF(b + h) = Σ_i m_i(b + h), M^MF(b + h) is smooth in h, and M^MF(b + h) is concave in h for all h ∈ F_H.

Proof sketch. Our argument follows in three steps. We first show that m(b) is the unique stable non-negative steady-state and that it attains the maximum total opinion among steady-states. This guarantees that M^MF(b) = Σ_i m_i(b). Furthermore, m(b) gives rise to a unique and smooth branch of stable non-negative steady-states for additional h, and hence M^MF(b + h) = Σ_i m_i(b + h) for all h > 0. Finally, one can directly show that M^MF(b + h) is concave in h.

Remark. By arguments similar to those in Theorem 4, it can be shown that any stable non-positive steady-state is unique, attains the minimum total opinion among steady-states, and is smooth and convex for decreasing h.

The above result paints a significantly simplified picture of the MF-IIM problem when J is strongly-connected and there exists a stable non-negative steady-state m(b).
Given a budget H, for any feasible marketing strategy h ∈ F_H, m(b + h) is the unique stable non-negative steady-state, attains the maximum total opinion among steady-states, and is smooth in h. Thus, the objective M^MF(b + h) = Σ_i m_i(b + h) is smooth, allowing us to write down a gradient ascent algorithm that approximates a local maximum. Furthermore, since M^MF(b + h) is concave in h, any local maximum of M^MF on F_H is a global maximum, and we can apply efficient gradient ascent techniques to solve MF-IIM.

Algorithm 1: An ϵ-approximation to MF-IIM
Input: system (N, J, b, β) for which there exists a stable non-negative steady-state, budget H, accuracy parameter ϵ > 0
Output: external field h that approximates an MF optimal external field h*
  t = 0; h(0) ∈ F_H; α ∈ (0, 1/L)
  repeat
    ∂M^MF(b + h(t))/∂h_j = Σ_i χ^MF_ij(b + h(t))
    h(t + 1) = P_{F_H}(h(t) + α ∇_h M^MF(b + h(t)))
    t = t + 1
  until M^MF(b + h*) − M^MF(b + h(t)) ≤ ϵ
  h = h(t)

Our algorithm, summarized in Algorithm 1, is initialized at a feasible external field. At each iteration, we calculate the susceptibility of the system, namely ∂M^MF/∂h_j = Σ_i χ^MF_ij, and project this gradient onto F_H (the projection operator P_{F_H} is well-defined since F_H is convex). Stepping along the direction of the projected gradient with step size α ∈ (0, 1/L), where L is a Lipschitz constant of M^MF, Algorithm 1 converges to an ϵ-approximation to MF-IIM in O(1/ϵ) iterations [25].

4.1 Sufficient conditions for the existence of a stable non-negative steady-state

In the previous section we found that MF-IIM is efficiently solvable if there exists a stable non-negative steady-state. While this assumption may seem restrictive, we show, to the contrary, that the appearance of a stable non-negative steady-state is a fairly general phenomenon. We first show, for J strongly-connected, that the existence of a stable non-negative steady-state is robust to increases in h and that the existence of a stable positive steady-state is robust to increases in β.
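A compact numerical sketch of this projected gradient ascent (not the authors' code; NumPy, gradient from Eq. (5), and a standard sort-based Euclidean projection onto the scaled simplex F_H, which the paper leaves unspecified): on a small hub-and-spoke toy graph at low β, the ascent should concentrate the budget on the high-degree hub, as the analysis of Section 5 predicts.

```python
import numpy as np

def project_simplex(v, H):
    """Euclidean projection onto F_H = {h >= 0, sum(h) = H} (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - H
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - css / idx > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def mf_state_and_grad(J, b, h, beta, iters=300):
    """Steady-state by fixed-point iteration, then the Algorithm 1
    gradient grad_j M^MF = sum_i chi^MF_ij from Eq. (5)."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + b + h))
    D = np.diag(1.0 - m**2)
    chi = beta * np.linalg.inv(np.eye(len(m)) - beta * D @ J) @ D
    return m, chi.sum(axis=0)

def mf_iim(J, b, beta, H, step=0.1, iters=250):
    h = np.full(len(b), H / len(b))           # feasible initial field
    for _ in range(iters):
        _, g = mf_state_and_grad(J, b, h, beta)
        h = project_simplex(h + step * g, H)  # projected gradient ascent
    return h

# Hub-and-spoke: node 0 linked to nodes 1 and 2; beta below beta_c,
# so the optimum should put the budget on the high-degree hub.
J = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)
h_opt = mf_iim(J, b=np.zeros(3), beta=0.3, H=1.0)
```

Since b = 0 ≥ 0, Theorems 4 and 6 guarantee a concave objective here, so the local maximum found by the ascent is global.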
Theorem 5. Let (N, J, h, β) describe a system with a strongly-connected graph for which there exists a stable non-negative steady-state m. If m ≥ 0, then as h increases, m gives rise to a unique and smooth branch of stable non-negative steady-states. If m > 0, then as β increases, m gives rise to a unique and smooth branch of stable positive steady-states.

Proof sketch. By the implicit function theorem, any stable steady-state can be locally defined as a function of both h and β. Using the susceptibility, one can directly show that any stable non-negative steady-state remains stable and non-negative as h increases and that any stable positive steady-state remains stable and positive as β increases.

The intuition behind Theorem 5 is that increasing the external field will never destroy a steady-state in which all of the opinions are already non-negative. Furthermore, as the interaction strength increases, each individual reacts more strongly to the positive influence of her neighbors, creating a positive feedback loop that results in an even more positive magnetization. We conclude by showing for J strongly-connected that if h ≥ 0, then there exists a stable non-negative steady-state.

Theorem 6. Let (N, J, h, β) describe any system with a strongly-connected network. If h ≥ 0, then there exists a stable non-negative steady-state.

Proof sketch. For h > 0 and β < βc, it can be shown that the unique steady-state is positive, and hence Theorem 5 guarantees the result for all β′ > β. For h = 0, Theorem 3 provides the result.

All together, the results of this section provide a number of sufficient conditions under which MF-IIM is exactly and efficiently solvable by Algorithm 1.

5 A shift in the structure of solutions to MF-IIM

The structure of solutions to MF-IIM is of fundamental theoretical and practical interest.
We demonstrate, remarkably, that solutions to MF-IIM shift from focusing on nodes of high degree at low interaction strengths to focusing on nodes of low degree at high interaction strengths. Consider an Ising system described by (N, J, h, β) in the limit β ≪ βc. To first order in β, the self-consistency equations (3) take the form:

m = β(Jm + h)  ⇒  m = β(I − βJ)^{−1} h.   (8)

Since β < βc, we have ρ(βJ) < 1, allowing us to expand (I − βJ)^{−1} in a geometric series:

m = βh + β²Jh + O(β³)  ⇒  M^MF(h) = β Σ_i h_i + β² Σ_i d^out_i h_i + O(β³),   (9)

where d^out_i = Σ_j J_ji is the out-degree of node i. Thus, for low interaction strengths, the MF magnetization is maximized by focusing the external field on the nodes of highest out-degree in the network, independent of b and H. To study the structure of solutions to MF-IIM at high interaction strengths, we make the simplifying assumptions that J is strongly-connected and b ≥ 0 so that Theorem 6 guarantees a stable non-negative steady-state m. For large β and an additional external field h ∈ F_H, m takes the form

m_i ≈ tanh(β(Σ_j J_ij + b_i + h_i)) ≈ 1 − 2e^{−2β(d^in_i + b_i + h_i)},   (10)

where d^in_i = Σ_j J_ij is the in-degree of node i. Thus, in the high-β limit, we have:

M^MF(b + h) ≈ Σ_i (1 − 2e^{−2β(d^in_i + b_i + h_i)}) ≈ n − 2e^{−2β(d^in_{i*} + b_{i*} + h_{i*})},   (11)

where i* = arg min_i (d^in_i + b_i + h_i). Thus, for high interaction strengths, the solutions to MF-IIM for an external field budget H are given by:

h* = arg max_{h ∈ F_H} (n − 2e^{−2β(d^in_{i*} + b_{i*} + h_{i*})}) ≡ arg max_{h ∈ F_H} min_i (d^in_i + b_i + h_i).   (12)

Eq. (12) reveals that the high-β solutions to MF-IIM focus on the nodes for which d^in_i + b_i + h_i is smallest. Thus, if b is uniform, the MF magnetization is maximized by focusing the external field on the nodes of smallest in-degree in the network. We emphasize the strength and novelty of the above results.
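The low-β prediction of Eq. (9), that the budget should go to nodes of high out-degree, can be sanity-checked directly on a small directed graph (toy example, NumPy assumed):

```python
import numpy as np

def mf_magnetization(J, h, beta, iters=2_000):
    """Total mean-field opinion sum_i m_i at the fixed point of Eq. (3)."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))
    return m.sum()

# J_ij is the influence of j over i, so node j's out-degree is the
# column sum d_out_j = sum_i J_ij.  Node 0 influences nodes 1, 2, 3;
# node 3 influences no one.
J = np.zeros((4, 4))
J[1:, 0] = 1.0   # node 0 -> nodes 1, 2, 3 (out-degree 3)
J[0, 1] = 1.0    # node 1 -> node 0
beta = 0.2       # well below beta_c = 1/rho(J) = 1 here
H = 1.0

on_hub = mf_magnetization(J, H * np.eye(4)[0], beta)   # budget on node 0
on_leaf = mf_magnetization(J, H * np.eye(4)[3], beta)  # budget on node 3
```

Spending the budget on node 3 only moves node 3 itself (total opinion tanh(βH)), while spending it on node 0 also drags all three downstream nodes upward, so `on_hub` exceeds `on_leaf`, in line with the β² Σ_i d^out_i h_i term of Eq. (9).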
In the context of reverberant opinion dynamics, the optimal control strategy has a highly non-trivial dependence on the strength of interactions in the system, a feature not captured by viral models. Thus, when controlling a social system, accurately determining the strength of interactions is of critical importance.

6 Numerical simulations

We present numerical experiments to probe the structure and performance of MF optimal external fields. We verify that the solutions to MF-IIM undergo a shift from focusing on high-degree nodes at low interaction strengths to focusing on low-degree nodes at high interaction strengths. We also find that for sufficiently high and low interaction strengths, the MF optimal external field achieves the maximum exact magnetization, while admitting performance losses near βc. However, even at βc, we demonstrate that solutions to MF-IIM significantly outperform common node-selection heuristics based on node degree and centrality.

We first consider an undirected hub-and-spoke network, shown in Figure 1, where J_ij ∈ {0, 1} and we set b = 0 for simplicity. Since b ≥ 0, Algorithm 1 is guaranteed to achieve a globally optimal MF magnetization. Furthermore, because the network is small, we can calculate exact solutions to IIM by brute-force search. The left plot in Figure 1 compares the average degree of the MF and exact optimal external fields over a range of temperatures for an external field budget H = 1, verifying that the solution to MF-IIM shifts from focusing on high-degree nodes at low interaction strengths to low-degree nodes at high interaction strengths. Furthermore, we find that the shift in the MF optimal external field occurs near the critical interaction strength βc = 0.5. The performance of the MF optimal strategy (measured as the ratio of the magnetization achieved by the MF solution to that achieved by the exact solution) is shown in the right plot in Figure 1. For low and high interaction strengths, the MF optimal external field achieves the maximum magnetization, while near βc, it incurs significant performance losses, a phenomenon well-studied in the literature [21].

Figure 1: Left: A comparison of the structure of the MF and exact optimal external fields, denoted h*_MF and h*, in a hub-and-spoke network. Right: The relative performance of h*_MF compared to h*, i.e., M(h*_MF)/M(h*), where M denotes the exact magnetization.

Figure 2: Left: A stochastic block network consisting of a highly-connected community (Block 1) and a sparsely-connected community (Block 2). Center: The solution to MF-IIM shifts from focusing on Block 1 to Block 2 as β increases. Right: Even at βc, the MF solution outperforms common node-selection heuristics.

We now consider a stochastic block network consisting of 100 nodes split into two blocks of 50 nodes each, shown in Figure 2. An undirected edge of weight 1 is placed between each pair of nodes in Block 1 with probability 0.2, between each pair in Block 2 with probability 0.05, and between nodes in different blocks with probability 0.05, resulting in a highly-connected community (Block 1) surrounded by a sparsely-connected community (Block 2). For b = 0 and H = 20, the center plot in Figure 2 demonstrates that the solution to MF-IIM shifts from focusing on Block 1 at low β to focusing on Block 2 at high β and that the shift occurs near βc. The stochastic block network is sufficiently large that exact calculation of the optimal external fields is infeasible. Thus, we resort to comparing the MF solutions with three node-selection heuristics: one that distributes the budget in amounts proportional to nodes' degrees, one that distributes the budget proportional to nodes' centralities (the inverse of a node's average shortest path length to all other nodes), and one that distributes the budget randomly.
The magnetizations are approximated via Monte Carlo simulations of the Glauber dynamics, and we consider the system at β = βc to represent the worst-case scenario for the MF optimal external fields. The right plot in Figure 2 shows that, even at βc, the solutions to MF-IIM outperform common node-selection heuristics. We consider a real-world collaboration network (Figure 3) composed of 904 individuals, where each edge is unweighted and represents the co-authorship of a paper on the arXiv [26]. We note that co-authorship networks are known to capture many of the key structural features of social networks [27]. For b = 0 and H = 40, the center plot in Figure 3 illustrates the sharp shift in the solution to MF-IIM at βc = 0.05 from high- to low-degree nodes. Furthermore, the right plot in Figure 3 compares the performance of the MF optimal external field with the node-selection heuristics described above, where we again consider the system at βc as a worst-case scenario, demonstrating that Algorithm 1 is scalable and performs well on real-world networks.

Figure 3: Left: A collaboration network of 904 physicists where each edge represents the co-authorship of a paper on the arXiv. Center: The solution to MF-IIM shifts from high- to low-degree nodes as β increases. Right: The MF solution outperforms common node-selection heuristics, even at βc.

7 Conclusions

We study influence maximization, one of the fundamental problems in network science, in the context of the Ising model, wherein repeated interactions between individuals give rise to complex macroscopic patterns. The resulting problem, which we call Ising influence maximization, has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field.
Under the mean-field approximation, we develop a number of sufficient conditions for when the problem is concave, and we provide a gradient ascent algorithm that uses the susceptibility to efficiently calculate locally-optimal external fields. Furthermore, we demonstrate that the MF optimal external fields shift from focusing on high-degree individuals at low interaction strengths to focusing on low-degree individuals at high interaction strengths, a phenomenon not observed in viral models. We apply our algorithm on random and real-world networks, numerically demonstrating shifts in the solution structure and showing that our algorithm outperforms common node-selection heuristics. It would be interesting to study the exact Ising model on an undirected network, in which case the spin statistics are governed by the Boltzmann distribution. Using this elegant steady-state description, one might be able to derive analytic results for the exact IIM problem. Our work establishes a fruitful connection between influence maximization and statistical physics, paving the way for exciting cross-disciplinary research. For example, one could apply advanced mean-field techniques, such as those in [21], to generate efficient algorithms of increasing accuracy. Furthermore, because our model is equivalent to a Boltzmann machine, one could propose a framework for data-based influence maximization based on well-known Boltzmann machine learning techniques.

Acknowledgements. We thank Michael Kearns and Eric Horsley for enlightening discussions, and we acknowledge support from the U.S. National Science Foundation, the Air Force Office of Scientific Research, and the Department of Transportation.

References
[1] P. Domingos and M. Richardson. Mining the network value of customers. KDD, pages 57–66, 2001.
[2] M. Richardson and P. Domingos. Mining knowledge-sharing sites for viral marketing. KDD'02. ACM, pages 61–70, 2002.
[3] D. Kempe, J. M. Kleinberg, and É. Tardos.
Maximizing the spread of influence through a social network. KDD’03. ACM, pages 137–146, 2003. [4] E. Mossel and S. Roch. On the submodularity of influence in social networks. In STOC’07, pages 128–134. ACM, 2007. [5] S. Goyal, H. Heidari, and M. Kearns. Competitive contagion in networks. GEB, 2014. [6] M. Gomez Rodriguez and B. Schölkopf. Influence maximization in continuous time diffusion networks. In ICML, 2012. [7] S. Galam and S. Moscovici. Towards a theory of collective phenomena: consensus and attitude changes in groups. European Journal of Social Psychology, 21(1):49–74, 1991. [8] D. J. Isenberg. Group polarization: A critical review and meta-analysis. Journal of personality and social psychology, 50(6):1141, 1986. [9] M. Mäs, A. Flache, and D. Helbing. Individualization as driving force of clustering phenomena in humans. PLoS Comput Biol, 6(10), 2010. [10] M. Moussaïd, J. E. Kämmer, P. P. Analytis, and H. Neth. Social influence and the collective dynamics of opinion formation. PLoS One, 8(11), 2013. [11] A. De, I. Valera, N. Ganguly, S. Bhattacharya, et al. Learning opinion dynamics in social networks. arXiv preprint arXiv:1506.05474, 2015. [12] A. Montanari and A. Saberi. The spread of innovations in social networks. PNAS, 107(47), 2010. [13] C. Castellano, S. Fortunato, and V. Loreto. Statistical physics of social dynamics. Rev. Mod. Phys., 81:591–646, 2009. [14] L. Blume. The statistical mechanics of strategic interaction. GEB, 5:387–424, 1993. [15] R. McKelvey and T. Palfrey. Quantal response equilibria for normal form games. GEB, 7:6–38, 1995. [16] S. Galam. Sociophysics: a review of galam models. Int. J. Mod. Phys. C, 19(3):409–440, 2008. [17] K. Sznajd-Weron and J. Sznajd. Opinion evolution in closed community. International Journal of Modern Physics C, 11(06), 2000. [18] R. Kindermann and J. Snell. Markov random fields and their applications. AMS, Providence, RI, 1980. [19] T. Tanaka. Mean-field theory of boltzmann machine learning. 
PRE, pages 2302–2310, 1998. [20] H. Nishimori and K. M. Wong. Statistical mechanics of image restoration and error-correcting codes. PRE, 60(1):132, 1999. [21] J. Yedidia. An idiosyncratic journey beyond mean field theory. Advanced mean field methods: Theory and practice, pages 21–36, 2001. [22] M. I. Jordan, Z. Ghahraman, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999. [23] M. Opper and D. Saad. Advanced mean field methods: Theory and practice. MIT press, 2001. [24] L. K. Saul, T. Jaakkola, and M. I. Jordan. Mean field theory for sigmoid belief networks. Journal of artificial intelligence research, 4(1):61–76, 1996. [25] M. Teboulle. First order algorithms for convex minimization. IPAM, 2010. Tutorials. [26] J. Leskovec and A. Krevl. SNAP Datasets: Stanford large network dataset collection, June 2014. [27] M. Newman. The structure of scientific collaboration networks. PNAS, 98, 2001. 9
Adaptive Concentration Inequalities for Sequential Decision Problems Shengjia Zhao Tsinghua University zhaosj12@stanford.edu Enze Zhou Tsinghua University zhouez_thu_12@126.com Ashish Sabharwal Allen Institute for AI AshishS@allenai.org Stefano Ermon Stanford University ermon@cs.stanford.edu Abstract A key challenge in sequential decision problems is to determine how many samples are needed for an agent to make reliable decisions with good probabilistic guarantees. We introduce Hoeffding-like concentration inequalities that hold for a random, adaptively chosen number of samples. Our inequalities are tight under natural assumptions and can greatly simplify the analysis of common sequential decision problems. In particular, we apply them to sequential hypothesis testing, best arm identification, and sorting. The resulting algorithms rival or exceed the state of the art both theoretically and empirically. 1 Introduction Many problems in artificial intelligence (AI) and machine learning (ML) involve designing agents that interact with stochastic environments. The environment is typically modeled with a collection of random variables. A common assumption is that the agent acquires information by observing samples from these random variables. A key problem is to determine the number of samples that are required for the agent to make sound inferences and decisions based on the data it has collected. Many abstract problems fit into this general framework, including sequential hypothesis testing, e.g., testing for positiveness of the mean [18, 6], analysis of streaming data [19], best arm identification for multi-arm bandits (MAB) [1, 5, 13], etc. These problems involve the design of a sequential algorithm that needs to decide, at each step, either to acquire a new sample, or to terminate and output a conclusion, e.g., decide whether the mean of a random variable is positive or not. 
The challenge is that obtaining too many samples will result in inefficient algorithms, while taking too few might lead to the wrong decision. Concentration inequalities such as Hoeffding’s inequality [11], Chernoff bound, and Azuma’s inequality [7, 5] are among the main analytic tools. These inequalities are used to bound the probability of a large discrepancy between sample and population means, for a fixed number of samples n. An agent can control its risk by making decisions based on conclusions that hold with high confidence, due to the unlikely occurrence of large deviations. However, these inequalities only hold for a fixed, constant number of samples that is decided a-priori. On the other hand, we often want to design agents that make decisions adaptively based on the data they collect. That is, we would like the number of samples itself to be a random variable. Traditional concentration inequalities, however, often do not hold when the number of samples is stochastic. Existing analysis requires ad-hoc strategies to bypass this issue, such as union bounding the risk over time [18, 17, 13]. These approaches can lead to suboptimal algorithms. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. We introduce Hoeffding-like concentration inequalities that hold for a random, adaptively chosen number of samples. Interestingly, we can achieve our goal with a small double logarithmic overhead with respect to the number of samples required for standard Hoeffding inequalities. We also show that our bounds cannot be improved under some natural restrictions. Even though related inequalities have been proposed before [15, 2, 3], we show that ours are significantly tighter, and come with a complete analysis of the fundamental limits involved. Our inequalities are directly applicable to a number of sequential decision problems. 
In particular, we use them to design and analyze new algorithms for sequential hypothesis testing, best arm identification, and sorting. Our algorithms rival or outperform state-of-the-art techniques both theoretically and empirically. 2 Adaptive Inequalities and Their Properties We begin with some definitions and notation: Definition 1. [20] Let X be a zero mean random variable. For any d > 0, we say X is d-subgaussian if E[e^{rX}] ≤ e^{d²r²/2} for all r ∈ R. Note that a random variable can be subgaussian only if it has zero mean [20]. However, with some abuse of notation, we say that any random variable X is subgaussian if X − E[X] is subgaussian. Many important types of distributions are subgaussian. For example, by Hoeffding's Lemma [11], a distribution bounded in an interval of width 2d is d-subgaussian, and a Gaussian random variable N(0, σ²) is σ-subgaussian. Henceforth, we shall assume that the distributions are 1/2-subgaussian; any d-subgaussian random variable can be scaled by 1/(2d) to be 1/2-subgaussian. Definition 2 (Problem setup). Let X be a zero mean 1/2-subgaussian random variable, and let {X_1, X_2, . . .} be i.i.d. random samples of X. Let S_n = Σ_{i=1}^n X_i be a random walk, and let J be a stopping time with respect to {X_1, X_2, . . .}. We let J take a special value ∞, where Pr[J = ∞] = 1 − lim_{n→∞} Pr[J ≤ n]. We also let f : N → R+ be a function that will serve as a boundary for the random walk. We note that because it is possible for J to be infinity, what we really mean by Pr[E_J], where E_J is some event, is Pr[{J < ∞} ∩ E_J]; to simplify notation we write Pr[E_J] without confusion. 2.1 Standard vs. Adaptive Concentration Inequalities There is a very large class of well-known inequalities that bound the probability of large deviations by confidence that increases exponentially w.r.t. bound tightness.
An example is the Hoeffding inequality [12], which states, using the definitions mentioned above, that Pr[S_n ≥ √(bn)] ≤ e^{−2b}. (1) Other examples include Azuma's inequality, the Chernoff bound [7], and Bernstein inequalities [21]. However, these inequalities apply if n is a constant chosen in advance, or independent of the underlying process, but are generally untrue when n is a stopping time J that, being a random variable, depends on the process. In fact, we shall later show in Theorem 3 that we can construct a stopping time J such that Pr[S_J ≥ √(bJ)] = 1 (2) for any b > 0, even when we put strong restrictions on J. Comparing Eqs. (1) and (2), one clearly sees how Chernoff and Hoeffding bounds are applicable only to algorithms whose decision to continue to sample or terminate is fixed a priori. This is a severe limitation for stochastic algorithms that have uncertain stopping conditions that may depend on the underlying process. We call a bound that holds for all possible stopping rules J an adaptive bound. 2.2 Equivalence Principle We start with the observation that finding a probabilistic bound on the position of the random walk S_J that holds for any stopping time J is equivalent to finding a deterministic boundary f(n) that the walk is unlikely to ever cross. Formally, Proposition 1. For any δ > 0, Pr[S_J ≥ f(J)] ≤ δ (3) for any stopping time J if and only if Pr[{∃n, S_n ≥ f(n)}] ≤ δ. (4) Intuitively, for any f(n) we can choose an adversarial stopping rule that terminates the process as soon as the random walk crosses the boundary f(n). We can therefore achieve (3) for all stopping times J only if we guarantee that the random walk is unlikely to ever cross f(n), as in Eq. (4). 2.3 Related Inequalities The problem of studying the supremum of a random walk has a long history. The seminal work of Kolmogorov and Khinchin [4] characterized the limiting behavior of a zero mean random walk with unit variance: lim sup_{n→∞} S_n / √(2n log log n) = 1 a.s.
This law is called the Law of Iterated Logarithms (LIL), and sheds light on the limiting behavior of a random walk. In our framework, this implies lim_{m→∞} Pr[∃n > m : S_n ≥ √(2an log log n)] = 1 if a < 1, and = 0 if a > 1. This theorem provides a very strong result on the asymptotic behavior of the walk. However, in most ML and statistical applications, we are also interested in the finite-time behavior, which we study. The problem of analyzing the finite-time properties of a random walk has been considered before in the ML literature. It is well known, and can be easily proven using Hoeffding's inequality union bounded over all possible times, that a trivial bound f(n) = √(n log(2n²/δ)/2) (5) holds in the sense that Pr[∃n, S_n ≥ f(n)] ≤ δ. This is true because, by the union bound and the Hoeffding inequality [12], Pr[∃n, S_n ≥ f(n)] ≤ Σ_{n=1}^∞ Pr[S_n ≥ f(n)] ≤ Σ_{n=1}^∞ e^{−log(2n²/δ)} = δ Σ_{n=1}^∞ 1/(2n²) ≤ δ. Recently, inspired by the Law of Iterated Logarithms, Jamieson et al. [15], Jamieson and Nowak [13] and Balsubramani [2] proposed a boundary f(n) that scales asymptotically as Θ(√(n log log n)) such that the "crossing event" {∃n, S_n ≥ f(n)} is guaranteed to occur with a low probability. They refer to this as the finite time LIL inequality. These bounds, however, have significant room for improvement. Furthermore, [2] holds asymptotically, i.e., only w.r.t. the event {∃n > N, S_n ≥ f(n)} for a sufficiently large (but finite) N, rather than across all time steps. In the following sections, we develop general bounds that improve upon these methods. 3 New Adaptive Hoeffding-like Bounds Our first main result is an alternative to finite time LIL that is both tighter and simpler: Theorem 1 (Adaptive Hoeffding Inequality). Let X_i be zero mean 1/2-subgaussian random variables, and let {S_n = Σ_{i=1}^n X_i, n ≥ 1} be a random walk. Let f : N → R+. Then: 1. If lim_{n→∞} f(n)/√((1/2) n log log n) < 1, there exists a distribution for X such that Pr[{∃n, S_n ≥ f(n)}] = 1. 2.
If f(n) = √(a n log(log_c n + 1) + bn) with c > 1, a > c/2, b > 0, and ζ is the Riemann zeta function, then Pr[{∃n, S_n ≥ f(n)}] ≤ ζ(2a/c) e^{−2b/c}. (6) We also remark that in practice the values of a and c do not significantly affect the quality of the bound. We recommend fixing a = 0.6 and c = 1.1 and will use this configuration in all subsequent experiments. The parameter b is the main factor controlling the confidence we have in the bound (6), i.e., the risk. The value of b is chosen so that the bound holds with probability at least 1 − δ, where δ is a user-specified parameter. Based on Proposition 1, and fixing a and c as above, we get a readily applicable corollary: Corollary 1. Let J be any random variable taking values in N. If f(n) = √(0.6 n log(log_{1.1} n + 1) + bn), then Pr[S_J ≥ f(J)] ≤ 12 e^{−1.8b}. The bound we achieve is very similar in form to the Hoeffding inequality (1), with an extra O(log log n) slack to achieve robustness to stochastic, adaptively chosen stopping times. We shall refer to this inequality as the Adaptive Hoeffding (AH) inequality. Informally, part 1 of Theorem 1 implies that if we choose a boundary f(n) that is convergent w.r.t. √(n log log n) and would like to bound the probability of the threshold-crossing event, √((1/2) n log log n) is the asymptotically smallest f(n) we can have; anything asymptotically smaller will be crossed with probability 1. Furthermore, part 2 implies that as long as a > 1/2, we can choose a sufficiently large b so that threshold crossing has an arbitrarily small probability. Combined, we thus have that for any κ > 0, the minimum f (call it f*) needed to ensure an arbitrarily small threshold-crossing probability can be bounded asymptotically as follows: √(1/2) √(n log log n) ≤ f*(n) ≤ (√(1/2) + κ) √(n log log n). (7) Figure 1: Illustration of Theorem 1, part 2. Each blue line represents a sampled walk. Although the probability of reaching higher than the Hoeffding bound (red) at a given time is small, the threshold is crossed almost surely.
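To make Corollary 1 concrete, the boundary can be evaluated numerically. The sketch below is our own, not code from the paper; it compares the adaptive boundary (with b chosen so that 12 e^{−1.8b} = δ) against the fixed-n Hoeffding threshold at the same risk δ:

```python
import math

def adaptive_hoeffding_boundary(n, delta=0.05, a=0.6, c=1.1):
    """Corollary 1 boundary f(n) = sqrt(a*n*log(log_c(n) + 1) + b*n),
    with b solving 12*exp(-1.8*b) = delta."""
    b = math.log(12.0 / delta) / 1.8
    return math.sqrt(a * n * math.log(math.log(n, c) + 1.0) + b * n)

def fixed_n_hoeffding_boundary(n, delta=0.05):
    """Fixed-n Hoeffding threshold sqrt(b*n) with exp(-2*b) = delta;
    valid only when n is chosen in advance."""
    return math.sqrt(0.5 * math.log(1.0 / delta) * n)

if __name__ == "__main__":
    for n in (10**3, 10**4, 10**5):
        print(n, round(adaptive_hoeffding_boundary(n), 1),
              round(fixed_n_hoeffding_boundary(n), 1))
```

In the terms of Figure 1, fixed_n_hoeffding_boundary corresponds to the red curve and adaptive_hoeffding_boundary to the green one; for δ = 0.05 the adaptive boundary is only about twice the fixed-n threshold over this range of n.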
The new bound (green) remains unlikely to be crossed. This fact is illustrated in Figure 1, where we plot the bound f(n) from Corollary 1 with 12 e^{−1.8b} = δ = 0.05 (AH, green). The corresponding Hoeffding bound (red) that would have held (with the same confidence, had n been a constant) is plotted as well. We also show draws from an unbiased random walk (blue). Out of the 1000 draws we sampled, approximately 25% of them cross the Hoeffding bound (red) before time 10^5, while none of them cross the adaptive bound (green), demonstrating the necessity of the extra √(log log n) factor even in practice. We also compare our bound with the trivial bound (5), the LIL bound in Lemma 1 of [15] and Theorem 2 of [2]. The graph in Figure 2 shows the relative performance of the three bounds across different values of n and risk δ. The LIL bound of [15] is plotted with parameter ϵ = 0.01 as recommended. We also experimented with other values of ϵ, obtaining qualitatively similar results. It can be seen that our bound is significantly tighter (by roughly a factor of 1.5) across all values of n and δ that we evaluated. 3.1 More General, Non-Smooth Boundaries If we relax the requirement that f(n) must be smooth, or, formally, remove the condition that lim_{n→∞} f(n)/√(n log log n) must exist or go to ∞, then we might be able to obtain tighter bounds. Figure 2: Comparison of the Adaptive Hoeffding (AH), LIL [15], LIL2 [2] and Trivial bounds. A threshold function f(n) is computed and plotted according to the four bounds, so that crossing occurs with bounded probability δ (risk). The two plots correspond to different risk levels (0.01 and 0.1). For example, many algorithms such as median elimination [9] or the exponential gap algorithm [17, 6] make (sampling) decisions "in batch", and therefore can only stop at certain pre-defined times. The intuition is that if more samples are collected between decisions, the failure probability can be easier to control.
This is equivalent to restricting the stopping time J to take values in a set N ⊂ N. Equivalently, we can think of using a boundary function f_N(n) defined as follows: f_N(n) = f(n) if n ∈ N, and +∞ otherwise. (8) Very often the set N is taken to be the following set: Definition 3 (Exponentially Sparse Stopping Time). We denote by N_c, c > 1, the set N_c = {⌈c^n⌉ : n ∈ N}. Methods based on exponentially sparse stopping times often achieve asymptotically optimal performance on a range of sequential decision making problems [9, 18, 17]. Here we construct an alternative to Theorem 1 based on exponentially sparse stopping times. We obtain a bound that is asymptotically equivalent, but has better constants and is often more effective in practice. Theorem 2 (Exponentially Sparse Adaptive Hoeffding Inequality). Let {S_n, n ≥ 1} be a random walk with 1/2-subgaussian increments. If f(n) = √(a n log(log_c n + 1) + bn) with c > 1, a > 1/2, b > 0, then Pr[{∃n ∈ N_c, S_n ≥ f(n)}] ≤ ζ(2a) e^{−2b}. We call this inequality the exponentially sparse adaptive Hoeffding (ESAH) inequality. Compared to (6), the main improvement is the absence of the constant c on the RHS. In all subsequent experiments we fix a = 0.55 and c = 1.05. Finally, we provide limits for any boundary, including those obtained by a batch-sampling strategy. Theorem 3. Let {S_n, n ≥ 1} be a zero mean random walk with 1/2-subgaussian increments. Let f : N → R+. Then: 1. If there exists a constant C ≥ 0 such that lim inf_{n→∞} f(n)/√n < C, then Pr[{∃n, S_n ≥ f(n)}] = 1. 2. If lim_{n→∞} f(n)/√n = +∞, then for any δ > 0 there exists an infinite set N ⊂ N such that Pr[{∃n ∈ N, S_n ≥ f(n)}] < δ. Informally, part 1 states that if a threshold f(n) drops an infinite number of times below an asymptotic bound of Θ(√n), then the threshold will be crossed with probability 1. This rules out Hoeffding-like bounds. If f(n) grows asymptotically faster than √n, then one can "sparsify" f(n) so that it will be crossed with an arbitrarily small probability.
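The grid of Definition 3 and the boundary f_N of Eq. (8) can be written down in a few lines. This is our own illustrative sketch, with b = 3.0 as an arbitrary placeholder for the risk-dependent constant and a finite horizon for enumeration:

```python
import math

def sparse_times(c=1.05, horizon=10**4):
    """The exponentially sparse stopping-time set N_c = {ceil(c^n) : n in N},
    restricted to times up to `horizon` (Definition 3)."""
    times, n = set(), 1
    while True:
        t = math.ceil(c ** n)
        if t > horizon:
            return times
        times.add(t)
        n += 1

def sparse_boundary(n, allowed, a=0.55, b=3.0, c=1.05):
    """f_N(n) from Eq. (8): the usual boundary on the grid, +infinity off it."""
    if n not in allowed:
        return math.inf
    return math.sqrt(a * n * math.log(math.log(n, c) + 1.0) + b * n)

if __name__ == "__main__":
    grid = sparse_times()
    print(len(grid))     # far fewer than 10**4 allowed stopping times
    print(sparse_boundary(2, grid))
```

Because only O(log(horizon)) checkpoints are allowed, the union bound behind Theorem 2 runs over far fewer events, which is where the improved constants come from.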
In particular, a boundary with the form in Equation (8) can be constructed to bound the threshold-crossing probability below any δ (part 2 of the theorem). 4 Applications to ML and Statistics We now apply our adaptive bound results to design new algorithms for various classic problems in ML and statistics. Our bounds can be used to analyze algorithms for many natural sequential problems, leading to a unified framework for such analysis. The resulting algorithms are asymptotically optimal or near optimal, and outperform competing algorithms in practice. We provide two applications in the following subsections and leave another to the appendix. 4.1 Sequential Testing for Positiveness of Mean Our first example is sequential testing for the positiveness of the mean of a bounded random variable. In this problem, there is a 1/2-subgaussian random variable X with (unknown) mean µ ≠ 0. At each step, an agent can either request a sample from X, or terminate and declare whether or not E[X] > 0. The goal is to bound the agent's error probability by some user-specified value δ. This problem is well studied [10, 18, 6]. In particular, Karp and Kleinberg [18] show in Lemma 3.2 (the "second simulation lemma") that this problem can be solved with an O(log(1/δ) log log(1/µ)/µ²) algorithm with confidence 1 − δ. They also prove a lower bound of Ω(log log(1/µ)/µ²). Recently, Chen and Li [6] referred to this problem as the SIGN-ξ problem and provided similar results. We propose an algorithm that achieves the optimal asymptotic complexity and performs very well in practice, outperforming competing algorithms by a wide margin (because of better asymptotic constants). The algorithm is captured by the following definition. Definition 4 (Boundary Sequential Test). Let f : N → R+ be a function. We draw i.i.d. samples X_1, X_2, . . . from the target distribution X. Let S_n = Σ_{i=1}^n X_i be the corresponding partial sum. 1. If S_n ≥ f(n), terminate and declare E[X] > 0; 2.
if S_n ≤ −f(n), terminate and declare E[X] < 0; 3. otherwise, increment n and obtain a new sample. We call such a test a symmetric boundary test. In the following theorem we analyze its performance. Theorem 4. Let δ > 0 and let X be any 1/2-subgaussian distribution with non-zero mean. Let f(n) = √(a n log(log_c n + 1) + bn), where c > 1, a > c/2, and b = (c/2) log ζ(2a/c) + (c/2) log(1/δ). Then, with probability at least 1 − δ, a symmetric boundary test terminates with the correct sign for E[X], and with probability 1 − δ, for any ϵ > 0 it terminates in at most (2c + ϵ) log(1/δ) log log(1/µ)/µ² samples asymptotically w.r.t. 1/µ and 1/δ. Figure 3: Empirical performance of boundary tests. The plot on the left is the algorithm in Definition 4 and Theorem 4 with δ = 0.05; the plot on the right uses half the correct threshold. Despite a speed-up of 4 times, the empirical accuracy drops below the requirement. 4.1.1 Experiments To evaluate the empirical performance of our algorithm (AH-RW), we run an experiment where X is a Bernoulli distribution over {−1/2, 1/2}, for various values of the mean parameter µ. The confidence level δ is set to 0.05, and the results are averaged across 100 independent runs. For this experiment and other experiments in this section, we set the parameters a = 0.6 and c = 1.1. We plot in Figure 3 the empirical accuracy, the average number of samples used (runtime), and the number of samples after which 90% of the runs terminate. Figure 4: Comparison of various algorithms for deciding the positiveness of the mean of a Bernoulli random variable. AH-RW and ESAH-RW use orders of magnitude fewer samples than alternatives. The empirical accuracy of AH-RW is very high, as predicted by Theorem 4. Our bound is empirically very tight. If we decrease the bound by a factor of 2, that is, if we use f(n)/2 instead of f(n), we get the curve in the right-hand-side plot of Figure 3.
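Definition 4 is straightforward to implement. The sketch below is our own and, for simplicity, reuses the Corollary 1 constants for b rather than Theorem 4's ζ-based expression; the Bernoulli instance, its mean, and the seed are toy choices:

```python
import math
import random

def ah_boundary(n, delta=0.05, a=0.6, c=1.1):
    # b solves 12*exp(-1.8*b) = delta (the Corollary 1 constants).
    b = math.log(12.0 / delta) / 1.8
    return math.sqrt(a * n * math.log(math.log(n, c) + 1.0) + b * n)

def decide_sign_of_mean(sample, delta=0.05, max_steps=10**6):
    """Symmetric boundary test (Definition 4): +1 declares E[X] > 0,
    -1 declares E[X] < 0; also returns the number of samples used."""
    s = 0.0
    for n in range(1, max_steps + 1):
        s += sample()
        f = ah_boundary(n, delta)
        if s >= f:
            return +1, n
        if s <= -f:
            return -1, n
    return 0, max_steps      # no decision within the sample budget

if __name__ == "__main__":
    rng = random.Random(0)
    # X in {-1/2, +1/2} with mean mu = 0.3, i.e. P[X = +1/2] = 0.8.
    sign, n_used = decide_sign_of_mean(lambda: 0.5 if rng.random() < 0.8 else -0.5)
    print(sign, n_used)
```

Running the same test with f(n)/2 in place of f(n) terminates faster but forfeits the δ-level guarantee, which is exactly the effect shown in the right plot of Figure 3.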
Despite a speed up of approximately 4 times, the empirical accuracy gets below the 0.95 requirement, especially when µ is small. We also compare our method, AH-RW, to the Exponential Gap algorithm from [6] and the algorithm from the “second simulation lemma” of [18]. Both of these algorithms rely on a batch sampling idea and have very similar performance. The results show that our algorithm is at least an order of magnitude faster (note the log-scale). We also evaluate a variant of our algorithm (ESAH-RW) where the boundary function f(n) is taken to be fNc as in Theorem 2 and Equation (8). This algorithm achieves very similar performance as Theorem 4, justifying the practical applicability of batch sampling. 4.2 Best Arm Identification The MAB (Multi-Arm Bandit) problem [1, 5] studies the optimal behavior of an agent when faced with a set of choices with unknown rewards. There are several flavors of the problem. In this paper, we focus on the fixed confidence best arm identification problem [13]. In this setting, the agent is presented with a set of arms A, where the arms are indistinguishable except for their expected reward. The agent is to make sequential decisions at each time step to either pull an arm α ∈A, or to terminate and declare one arm to have the largest expected reward. The goal is to identify the best arm with a probability of error smaller than some pre-specified δ > 0. To facilitate the discussion, we first define the notation we will use. We denote by K = |A| as the total number of arms. We denote by µα the true mean of an arm, α∗= arg max µα, We also define ˆµα(nα) as the empirical mean after nα pulls of an arm. This problem has been extensively studied, including recently [8, 14, 17, 15, 6]. 
A survey is presented by Jamieson and Nowak [13], who classify existing algorithms into three classes: action-elimination based [8, 14, 17, 6], which achieve good asymptotics but often perform unsatisfactorily in practice; UCB based, such as lil'UCB by [15]; and LUCB based approaches, such as [16, 13], which achieve sub-optimal asymptotics of O(K log K) but perform very well in practice. We provide a new algorithm, Algorithm 1, that outperforms all previous algorithms, including LUCB. Theorem 5. For any δ > 0, with probability 1 − δ, Algorithm 1 outputs the optimal arm. Algorithm 1 Adaptive Hoeffding Race (set of arms A, K = |A|, parameter δ): fix parameters a = 0.6, c = 1.1, b = (c/2)(log ζ(2a/c) + log(2/δ)); initialize n_α = 0 for all arms α ∈ A, and initialize Â = A to be the set of remaining arms. While Â has more than one arm: let α̂* be the arm with the highest empirical mean, and compute for all α ∈ Â the quantity f_α(n_α) = √((a log(log_c n_α + 1) + b + (c/2) log|Â|)/n_α) if α = α̂*, and f_α(n_α) = √((a log(log_c n_α + 1) + b)/n_α) otherwise; draw a sample from the arm in Â with the largest value of f_α(n_α), and set n_α = n_α + 1; remove from Â any arm α with µ̂_α + f_α(n_α) < µ̂_{α̂*} − f_{α̂*}(n_{α̂*}). Return the only element in Â. 4.2.1 Experiments Figure 5: Comparison of various methods for best arm identification. Our methods AHR and ES-AHR are significantly faster than the state of the art. Batch sampling ES-AHR is the most effective one. We implemented Algorithm 1 and a variant where the boundary f is set to f_{N_c} as in Theorem 2. We call this alternative version ES-AHR, standing for exponentially sparse adaptive Hoeffding race. For comparison we implemented the lil'UCB and lil'UCB+LS described in [14], and lil'LUCB described in [13]. Based on the results of [13], these algorithms are the fastest known to date. We also implemented DISTRIBUTIONBASED-ELIMINATION from [6], which theoretically is the state-of-the-art in terms of asymptotic complexity.
Despite this fact, the empirical performance is orders of magnitude worse compared to other algorithms for the instance sizes we experimented with. We experimented with most of the distribution families considered in [13] and found qualitatively similar results. We only report results using the most challenging distribution we found that was presented in that survey, where µi = 1 −(i/K)0.6. The distributions are Gaussian with 1/4 variance, and δ = 0.05. The sample count is measured in units of H1 = P α̸=α∗∆−2 α hardness [13]. 5 Conclusions We studied the threshold crossing behavior of random walks, and provided new concentration inequalities that, unlike classic Hoeffding-style bounds, hold for any stopping rule. We showed that these inequalities can be applied to various problems, such as testing for positiveness of mean, best arm identification, obtaining algorithms that perform well both in theory and in practice. Acknowledgments This research was supported by NSF (#1649208) and Future of Life Institute (#2016-158687). References [1] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. 2002. 8 [2] A. Balsubramani. Sharp Finite-Time Iterated-Logarithm Martingale Concentration. ArXiv e-prints, May 2014. URL https://arxiv.org/abs/1405.2639. [3] A. Balsubramani and A. Ramdas. Sequential Nonparametric Testing with the Law of the Iterated Logarithm. ArXiv e-prints, June 2015. URL https://arxiv.org/abs/1506.03486. [4] Leo Breiman. Probability. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 1992. ISBN 0-89871-296-3. [5] Nicolo Cesa-Bianchi and Gábor Lugosi. Prediction, learning, and games. Cambridge university press, 2006. [6] Lijie Chen and Jian Li. On the optimal sample complexity for best arm identification. CoRR, abs/1511.03774, 2015. URL http://arxiv.org/abs/1511.03774. [7] Fan Chung and Linyuan Lu. Concentration inequalities and martingale inequalities: a survey. 
Internet Math., 3(1):79–127, 2006. URL http://projecteuclid.org/euclid.im/1175266369. [8] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. PAC bounds for multi-armed bandit and Markov decision processes. In Jyrki Kivinen and Robert H. Sloan, editors, Computational Learning Theory, volume 2375 of Lecture Notes in Computer Science, pages 255–270. Springer Berlin Heidelberg, 2002. [9] Eyal Even-Dar, Shie Mannor, and Yishay Mansour. Action elimination and stopping conditions for the multi-armed bandit and reinforcement learning problem. Journal of Machine Learning Research (JMLR), 2006. [10] R. H. Farrell. Asymptotic behavior of expected sample size in certain one sided tests. Ann. Math. Statist., 35(1):36–72, 03 1964. [11] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 1963. [12] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American statistical association, 58(301):13–30, 1963. [13] Kevin Jamieson and Robert Nowak. Best-arm identification algorithms for multi-armed bandits in the fixed confidence setting, 2014. [14] Kevin Jamieson, Matthew Malloy, R. Nowak, and S. Bubeck. On finding the largest mean among many. ArXiv e-prints, June 2013. [15] Kevin Jamieson, Matthew Malloy, Robert Nowak, and Sébastien Bubeck. lil’UCB : An optimal exploration algorithm for multi-armed bandits. Journal of Machine Learning Research (JMLR), 2014. [16] Shivaram Kalyanakrishnan, Ambuj Tewari, Peter Auer, and Peter Stone. PAC subset selection in stochastic multi-armed bandits. In ICML-2012, pages 655–662, New York, NY, USA, June-July 2012. [17] Zohar Karnin, Tomer Koren, and Oren Somekh. Almost optimal exploration in multi-armed bandits. In ICML-2013, volume 28, pages 1238–1246. JMLR Workshop and Conference Proceedings, May 2013. [18] Richard M. Karp and Robert Kleinberg. Noisy binary search and its applications. 
In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’07, pages 881–890, Philadelphia, PA, USA, 2007. [19] Volodymyr Mnih, Csaba Szepesvári, and Jean-Yves Audibert. Empirical bernstein stopping. In ICML-2008, pages 672–679, New York, NY, USA, 2008. [20] Omar Rivasplata. Subgaussian random variables: An expository note, 2012. [21] Pranab K. Sen and Julio M. Singer. Large Sample Methods in Statistics: An Introduction with Applications. Chapman and Hall, 1993. 9
Refined Lower Bounds for Adversarial Bandits Sébastien Gerchinovitz Institut de Mathématiques de Toulouse Université Toulouse 3 Paul Sabatier Toulouse, 31062, France sebastien.gerchinovitz@math.univ-toulouse.fr Tor Lattimore Department of Computing Science University of Alberta Edmonton, Canada tor.lattimore@gmail.com Abstract We provide new lower bounds on the regret that must be suffered by adversarial bandit algorithms. The new results show that recent upper bounds that either (a) hold with high probability, or (b) depend on the total loss of the best arm, or (c) depend on the quadratic variation of the losses, are close to tight. Besides this we prove two impossibility results. First, the existence of a single arm that is optimal in every round cannot improve the regret in the worst case. Second, the regret cannot scale with the effective range of the losses. In contrast, both results are possible in the full-information setting. 1 Introduction We consider the standard K-armed adversarial bandit problem, which is a game played over T rounds between a learner and an adversary. In every round t ∈ {1, . . . , T} the learner chooses a probability distribution p_t = (p_{i,t})_{1≤i≤K} over {1, . . . , K}. The adversary then chooses a loss vector ℓ_t = (ℓ_{i,t})_{1≤i≤K} ∈ [0, 1]^K, which may depend on p_t. Finally, the learner samples an action from p_t, denoted by I_t ∈ {1, . . . , K}, and observes her own loss ℓ_{I_t,t}. The learner would like to minimise her regret, which is the difference between the cumulative loss suffered and the loss suffered by the optimal action in hindsight: R_T(ℓ_{1:T}) = Σ_{t=1}^T ℓ_{I_t,t} − min_{1≤i≤K} Σ_{t=1}^T ℓ_{i,t}, where ℓ_{1:T} ∈ [0, 1]^{TK} is the sequence of losses chosen by the adversary. A famous strategy is called Exp3, which satisfies E[R_T(ℓ_{1:T})] = O(√(KT log K)), where the expectation is taken over the randomness in the algorithm and the choices of the adversary [Auer et al., 2002].
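As a concrete reference point, Exp3 itself fits in a few lines. This is our own sketch rather than the authors' implementation; the learning-rate tuning and the toy instance in the driver are our choices:

```python
import math
import random

def exp3(loss, K, T, seed=0):
    """Minimal Exp3 sketch for losses in [0,1]: exponential weights over
    importance-weighted loss estimates; only the played arm's loss is seen."""
    rng = random.Random(seed)
    eta = math.sqrt(2.0 * math.log(K) / (T * K))   # a standard fixed-horizon tuning
    est = [0.0] * K          # cumulative importance-weighted loss estimates
    pulls = [0] * K
    for t in range(T):
        m = min(est)                                # subtract min for stability
        w = [math.exp(-eta * (e - m)) for e in est]
        z = sum(w)
        p = [x / z for x in w]
        i = rng.choices(range(K), weights=p)[0]
        pulls[i] += 1
        est[i] += loss(i, t) / p[i]                 # unbiased estimate of the loss
    return pulls

if __name__ == "__main__":
    rng = random.Random(42)
    means = [0.3, 0.5, 0.5]   # Bernoulli loss means; arm 0 is best (toy instance)
    pulls = exp3(lambda i, t: 1.0 if rng.random() < means[i] else 0.0, K=3, T=20000)
    print(pulls)              # arm 0 should receive the bulk of the pulls
```

For tunings of this style, the standard analysis yields the O(√(KT log K)) expected-regret rate quoted above.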
There is also a lower bound showing that for every learner there is an adversary for which the expected regret is $\mathbb{E}[R_T(\ell_{1:T})] = \Omega(\sqrt{KT})$ [Auer et al., 1995]. If the losses are chosen ahead of time, then the adversary is called oblivious, and in this case there exists a learner for which $\mathbb{E}[R_T(\ell_{1:T})] = O(\sqrt{KT})$ [Audibert and Bubeck, 2009]. One might think that this is the end of the story, but it is not so. While the worst-case expected regret is one quantity of interest, there are many situations where a refined regret guarantee is more informative. Recent research on adversarial bandits has primarily focussed on these issues, especially the questions of obtaining regret guarantees that hold with high probability, as well as stronger guarantees when the losses are "nice" in some sense. While there are now a wide range of strategies with upper bounds that depend on various quantities, the literature is missing lower bounds for many cases, some of which we now provide. We focus on three classes of lower bound, which are described in detail below. The first addresses the optimal regret achievable with high probability, where we show there is little room for improvement over existing strategies. Our other results concern lower bounds that depend on some kind of regularity in the losses ("nice" data). Specifically, we prove lower bounds that replace T in the regret bound with the loss of the best action (called first-order bounds) and also with the quadratic variation of the losses (called second-order bounds).

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

High-probability bounds. Existing strategies Exp3.P [Auer et al., 2002] and Exp3-IX [Neu, 2015a] are tuned with a confidence parameter δ ∈ (0, 1) and satisfy, for all $\ell_{1:T} \in [0,1]^{KT}$,

$$\mathbb{P}\left( R_T(\ell_{1:T}) \ge c\sqrt{KT \log(K/\delta)} \right) \le \delta \qquad (1)$$

for some universal constant c > 0.
An alternative tuning of Exp3-IX or Exp3.P [Bubeck and Cesa-Bianchi, 2012] leads to a single algorithm for which, for all $\ell_{1:T} \in [0,1]^{KT}$ and all δ ∈ (0, 1),

$$\mathbb{P}\left( R_T(\ell_{1:T}) \ge c\sqrt{KT}\left( \sqrt{\log K} + \frac{\log(1/\delta)}{\sqrt{\log K}} \right) \right) \le \delta. \qquad (2)$$

The difference is that in (1) the algorithm depends on δ while in (2) it does not. The cost of not knowing δ is that the log(1/δ) moves outside the square root. In Section 2 we prove two lower bounds showing that there is little room for improvement in either (1) or (2).

First-order bounds. An improvement over the worst-case regret bound of $O(\sqrt{TK})$ is the so-called improvement for small losses. Specifically, there exist strategies (e.g., FPL-TRIX by Neu [2015b], with earlier results by Stoltz [2005], Allenberg et al. [2006], Rakhlin and Sridharan [2013]) such that for all $\ell_{1:T} \in [0,1]^{KT}$,

$$\mathbb{E}[R_T(\ell_{1:T})] \le O\left( \sqrt{L_T^* K \log K} + K \log(KT) \right), \quad \text{with } L_T^* = \min_{1 \le i \le K} \sum_{t=1}^T \ell_{i,t}, \qquad (3)$$

where the expectation is with respect to the internal randomisation of the algorithm (the losses are fixed). This result improves on the $O(\sqrt{KT})$ bounds since $L_T^* \le T$ is always guaranteed and sometimes $L_T^*$ is much smaller than T. In order to evaluate the optimality of this bound, we first rewrite it in terms of the small-loss balls $B_{\alpha,T}$, defined for all α ∈ [0, 1] and T ≥ 1 by

$$B_{\alpha,T} \triangleq \left\{ \ell_{1:T} \in [0,1]^{KT} : \frac{L_T^*}{T} \le \alpha \right\}. \qquad (4)$$

Corollary 1. The first-order regret bound (3) of Neu [2015b] is equivalent to:

$$\forall \alpha \in [0,1], \quad \sup_{\ell_{1:T} \in B_{\alpha,T}} \mathbb{E}[R_T(\ell_{1:T})] \le O\left( \sqrt{\alpha T K \log K} + K \log(KT) \right).$$

The proof is straightforward. Our main contribution in Section 3 is a lower bound of order $\sqrt{\alpha TK}$ for all $\alpha = \Omega(\log(T)/T)$. This minimax lower bound shows that we cannot hope for a better bound than (3) (up to log factors) if we only know the value of $L_T^*$.

Second-order bounds. Another type of improved regret bound was derived by Hazan and Kale [2011b] and involves a second-order quantity called the quadratic variation:

$$Q_T = \sum_{t=1}^T \lVert \ell_t - \mu_T \rVert_2^2 \le \frac{TK}{4}, \qquad (5)$$

where $\mu_T = \frac{1}{T}\sum_{t=1}^T \ell_t$ is the mean of all loss vectors.
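The quadratic variation in (5) is straightforward to compute. The sketch below (hypothetical helper name) also confirms the stated bound $Q_T \le TK/4$, which holds for any losses in [0, 1] because each coordinate's empirical variance is at most 1/4:

```python
import random

def quadratic_variation(losses):
    """Q_T = sum_t ||l_t - mu_T||_2^2 for a list of K-dim loss vectors in [0,1]^K."""
    T = len(losses)
    K = len(losses[0])
    mu = [sum(l[i] for l in losses) / T for i in range(K)]
    return sum(sum((l[i] - mu[i]) ** 2 for i in range(K)) for l in losses)

rng = random.Random(0)
T, K = 200, 5
losses = [[rng.random() for _ in range(K)] for _ in range(T)]
qt = quadratic_variation(losses)
assert 0.0 <= qt <= T * K / 4  # bound from equation (5)
```

For uniform losses on [0, 1] the per-coordinate variance is 1/12, so $Q_T$ lands well inside the $TK/4$ bound.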
(In other words, $Q_T/T$ is the sum of the empirical variances of the K arms.) Hazan and Kale [2011b] addressed the general online linear optimisation setting. In the particular case of adversarial K-armed bandits with an oblivious adversary (as is the case here), they showed that there exists an efficient algorithm such that for some absolute constant c > 0 and for all T ≥ 2,

$$\forall \ell_{1:T} \in [0,1]^{KT}, \quad \mathbb{E}[R_T(\ell_{1:T})] \le c\left( K^2 \sqrt{Q_T \log T} + K^{1.5} \log^2 T + K^{2.5} \log T \right). \qquad (6)$$

As before we can rewrite the regret bound (6) in terms of the small-variation balls $V_{\alpha,T}$, defined for all α ∈ [0, 1/4] and T ≥ 1 by

$$V_{\alpha,T} \triangleq \left\{ \ell_{1:T} \in [0,1]^{KT} : \frac{Q_T}{TK} \le \alpha \right\}. \qquad (7)$$

Corollary 2. The second-order regret bound (6) of Hazan and Kale [2011b] is equivalent to:

$$\forall \alpha \in [0,1/4], \quad \sup_{\ell_{1:T} \in V_{\alpha,T}} \mathbb{E}[R_T(\ell_{1:T})] \le c\left( K^2\sqrt{\alpha T K \log T} + K^{3/2} \log^2 T + K^{5/2} \log T \right).$$

The proof is straightforward because the losses are deterministic and fixed in advance by an oblivious adversary. In Section 4 we provide a lower bound of order $\sqrt{\alpha TK}$ that holds whenever $\alpha = \Omega(\log(T)/T)$. This minimax lower bound shows that we cannot hope for a bound better than (7) by more than a factor of $K^2\sqrt{\log T}$ if we only know the value of $Q_T$. Closing the gap is left as an open question.

Two impossibility results in the bandit setting. We also show in Section 4 that, in contrast to the full-information setting, regret bounds involving the cumulative variance of the algorithm as in [Cesa-Bianchi et al., 2007] cannot be obtained in the bandit setting. More precisely, we prove that two consequences that hold true in the full-information case, namely (i) a regret bound proportional to the effective range of the losses, and (ii) a bounded regret if one arm performs best at all rounds, must fail in the worst case for every bandit algorithm.

Additional notation and key tools. Before the theorems we develop some additional notation and describe the generic ideas in the proofs. For 1 ≤ i ≤ K let $N_i(t)$ be the number of times action i has been chosen after round t.
All our lower bounds are derived by analysing the regret incurred by strategies when facing randomised adversaries that choose the losses for all actions from the same joint distribution in every round (sometimes independently for each action and sometimes not). Ber(α) denotes the Bernoulli distribution with parameter α ∈ [0, 1]. If P and Q are measures on the same probability space, then KL(P, Q) is the KL divergence between them. For a < b we define $\mathrm{clip}_{[a,b]}(x) = \min\{b, \max\{a, x\}\}$, and for x, y ∈ ℝ we let $x \vee y = \max\{x, y\}$. Our main tools throughout the analysis are the following information-theoretic lemmas. The first bounds the KL divergence between the laws of the observed losses/actions for two distributions on the losses.

Lemma 1. Fix a randomised bandit algorithm and two probability distributions $Q_1$ and $Q_2$ on $[0,1]^K$. Assume the loss vectors $\ell_1, \ldots, \ell_T \in [0,1]^K$ are drawn i.i.d. from either $Q_1$ or $Q_2$, and denote by $\mathbb{Q}_j$ the joint probability distribution on all sources of randomness when $Q_j$ is used (formally, $\mathbb{Q}_j = P_{\mathrm{int}} \otimes Q_j^{\otimes T}$, where $P_{\mathrm{int}}$ is the probability distribution used by the algorithm for its internal randomisation). Let t ≥ 1. Denote by $h_t = (I_s, \ell_{I_s,s})_{1 \le s \le t-1}$ the history available at the beginning of round t, by $\mathbb{Q}_j^{(h_t, I_t)}$ the law of $(h_t, I_t)$ under $\mathbb{Q}_j$, and by $Q_{j,i}$ the i-th marginal distribution of $Q_j$. Then,

$$\mathrm{KL}\left( \mathbb{Q}_1^{(h_t,I_t)}, \mathbb{Q}_2^{(h_t,I_t)} \right) = \sum_{i=1}^K \mathbb{E}_{\mathbb{Q}_1}\left[ N_i(t-1) \right] \mathrm{KL}\left( Q_{1,i}, Q_{2,i} \right).$$

Results of roughly this form are well known and the proof follows immediately from the chain rule for the relative entropy and the independence of the loss vectors across time (see [Auer et al., 2002] or the supplementary material). One difference is that the losses need not be independent across the arms, which we heavily exploit in our proofs by using correlated losses. The second key lemma is an alternative to Pinsker's inequality that proves useful when the Kullback-Leibler divergence is larger than 2.
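This alternative to Pinsker's inequality states that $P(A) + Q(A^c) \ge \frac{1}{2}\exp(-\mathrm{KL}(P,Q))$ for every event A. A quick numerical sanity check with Bernoulli distributions and the event A = {X = 1}:

```python
import math

def kl_bernoulli(p, q):
    """KL(Ber(p), Ber(q)) for p, q strictly inside (0, 1)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# For P = Ber(p), Q = Ber(q) and A = {X = 1}: P(A) = p, Q(A^c) = 1 - q.
for p in [0.1, 0.3, 0.5, 0.7, 0.9]:
    for q in [0.1, 0.3, 0.5, 0.7, 0.9]:
        lhs = p + (1 - q)
        rhs = 0.5 * math.exp(-kl_bernoulli(p, q))
        assert lhs >= rhs - 1e-12
```

Unlike Pinsker's inequality, the right-hand side stays strictly positive however large the KL divergence becomes, which is exactly what the high-probability lower bounds below need.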
It has previously been used for bandit lower bounds (in the stochastic setting) by Bubeck et al. [2013].

Lemma 2 (Lemma 2.6 in Tsybakov 2008). Let P and Q be two probability distributions on the same measurable space. Then, for every measurable subset A (whose complement we denote by $A^c$),

$$P(A) + Q(A^c) \ge \frac{1}{2} \exp\left( -\mathrm{KL}(P, Q) \right).$$

2 Zero-Order High Probability Lower Bounds

We prove two new high-probability lower bounds on the regret of any bandit algorithm. The first shows that no strategy can enjoy smaller regret than $\Omega(\sqrt{KT \log(1/\delta)})$ with probability at least 1 − δ. Upper bounds of this form have been shown for various algorithms including Exp3.P [Auer et al., 2002] and Exp3-IX [Neu, 2015a]. Although this result is not very surprising, we are not aware of any existing work on this problem, and the proof is less straightforward than one might expect. An added benefit of our result is that the loss sequences producing large regret have two special properties. First, the optimal arm is the same in every round, and second, the range of the losses in each round is $O(\sqrt{K \log(1/\delta)/T})$. These properties will be useful in subsequent analysis. In the second lower bound we show that any algorithm for which $\mathbb{E}[R_T(\ell_{1:T})] = O(\sqrt{KT})$ must necessarily suffer a high-probability regret of at least $\Omega(\sqrt{KT}\log(1/\delta))$ for some sequence $\ell_{1:T}$. The important difference relative to the previous result is that strategies with log(1/δ) appearing inside the square root depend on a specific value of δ, which must be known in advance.

Theorem 1. Suppose K ≥ 2, δ ∈ (0, 1/4) and T ≥ 32(K−1) log(2/δ). Then there exists a sequence of losses $\ell_{1:T} \in [0,1]^{KT}$ such that

$$\mathbb{P}\left( R_T(\ell_{1:T}) \ge \frac{1}{27}\sqrt{(K-1)\,T \log\frac{1}{4\delta}} \right) \ge \frac{\delta}{2},$$

where the probability is taken with respect to the randomness in the algorithm. Furthermore $\ell_{1:T}$ can be chosen in such a way that there exists an i such that for all t it holds that $\ell_{i,t} = \min_j \ell_{j,t}$ and $\max_{j,k}\{\ell_{j,t} - \ell_{k,t}\} \le \sqrt{(K-1)\log(1/(4\delta))/T}\,\big/\,(4\sqrt{\log 2})$.

Theorem 2. Suppose K ≥ 2, T ≥ 1, and there exists a strategy and a constant C > 0 such that for any $\ell_{1:T} \in [0,1]^{KT}$ it holds that $\mathbb{E}[R_T(\ell_{1:T})] \le C\sqrt{(K-1)T}$. Let δ ∈ (0, 1/4) satisfy $\sqrt{(K-1)/T}\,\log(1/(4\delta)) \le C$ and T ≥ 32 log(2/δ). Then there exists $\ell_{1:T} \in [0,1]^{KT}$ for which

$$\mathbb{P}\left( R_T(\ell_{1:T}) \ge \frac{\sqrt{(K-1)T}\,\log\frac{1}{4\delta}}{203\,C} \right) \ge \frac{\delta}{2},$$

where the probability is taken with respect to the randomness in the algorithm.

Corollary 3. If p ∈ (0, 1) and C > 0, then there does not exist a strategy such that for all T, K, $\ell_{1:T} \in [0,1]^{KT}$ and δ ∈ (0, 1) the regret is bounded by $\mathbb{P}\left( R_T(\ell_{1:T}) \ge C\sqrt{(K-1)T}\,\log^p(1/\delta) \right) \le \delta$.

The corollary follows easily by integrating the assumed high-probability bound and applying Theorem 2 for sufficiently large T and small δ. The proof may be found in the supplementary material.

Proof of Theorems 1 and 2. Both proofs rely on a carefully selected choice of correlated stochastic losses described below. Let $Z_1, Z_2, \ldots, Z_T$ be a sequence of i.i.d. Gaussian random variables with mean 1/2 and variance $\sigma^2 = 1/(32\log 2)$. Let Δ ∈ [0, 1/30] be a constant that will be chosen differently in each proof, and define K random loss sequences $\ell^1_{1:T}, \ldots, \ell^K_{1:T}$ where

$$\ell^j_{i,t} = \begin{cases} \mathrm{clip}_{[0,1]}(Z_t - \Delta) & \text{if } i = 1 \\ \mathrm{clip}_{[0,1]}(Z_t - 2\Delta) & \text{if } i = j \ne 1 \\ \mathrm{clip}_{[0,1]}(Z_t) & \text{otherwise.} \end{cases}$$

For 1 ≤ j ≤ K let $\mathbb{Q}_j$ be the measure on $\ell_{1:T} \in [0,1]^{KT}$ and $I_1, \ldots, I_T$ when $\ell_{i,t} = \ell^j_{i,t}$ for all 1 ≤ i ≤ K and 1 ≤ t ≤ T. Informally, $\mathbb{Q}_j$ is the measure on the sequence of loss vectors and actions when the learner interacts with the losses sampled from the j-th environment defined above.

Lemma 3. Let δ ∈ (0, 1) and suppose Δ ≤ 1/30 and T ≥ 32 log(2/δ). Then $\mathbb{Q}_i\left( R_T(\ell^i_{1:T}) \ge \Delta T/4 \right) \ge \mathbb{Q}_i\left( N_i(T) \le T/2 \right) - \delta/2$ and $\mathbb{E}_{\mathbb{Q}_i}[R_T(\ell^i_{1:T})] \ge 7\Delta\,\mathbb{E}_{\mathbb{Q}_i}[T - N_i(T)]/8$.

The proof follows by substituting the definition of the losses and applying Azuma's inequality to show that clipping does not occur too often. See the supplementary material for details.

Proof of Theorem 1.
First we choose the value of Δ that determines the gaps in the losses: $\Delta = \sqrt{\sigma^2(K-1)\log(1/(4\delta))/(2T)} \le 1/30$. By the pigeonhole principle there exists an i > 1 for which $\mathbb{E}_{\mathbb{Q}_1}[N_i(T)] \le T/(K-1)$. Therefore by Lemmas 2 and 1, and the fact that the KL divergence between clipped Gaussian distributions is always smaller than without clipping,

$$\mathbb{Q}_1(N_1(T) \le T/2) + \mathbb{Q}_i(N_1(T) > T/2) \ge \frac{1}{2}\exp\left(-\mathrm{KL}\left(\mathbb{Q}_1^{(h_T,I_T)}, \mathbb{Q}_i^{(h_T,I_T)}\right)\right) \ge \frac{1}{2}\exp\left(-\frac{\mathbb{E}_{\mathbb{Q}_1}[N_i(T)]\,(2\Delta)^2}{2\sigma^2}\right) \ge \frac{1}{2}\exp\left(-\frac{2T\Delta^2}{\sigma^2(K-1)}\right) = 2\delta.$$

But by Lemma 3,

$$\max_{k\in\{1,i\}} \mathbb{Q}_k\left( R_T(\ell^k_{1:T}) \ge T\Delta/4 \right) \ge \max\left\{ \mathbb{Q}_1(N_1(T) \le T/2),\ \mathbb{Q}_i(N_i(T) \le T/2) \right\} - \delta/2 \ge \frac{1}{2}\left( \mathbb{Q}_1(N_1(T) \le T/2) + \mathbb{Q}_i(N_1(T) > T/2) \right) - \delta/2 \ge \delta/2.$$

Therefore there exists an i ∈ {1, …, K} such that

$$\mathbb{Q}_i\left( R_T(\ell^i_{1:T}) \ge \sqrt{\frac{\sigma^2 T(K-1)}{32}\log\frac{1}{4\delta}} \right) = \mathbb{Q}_i\left( R_T(\ell^i_{1:T}) \ge T\Delta/4 \right) \ge \delta/2.$$

The result is completed by substituting the value of $\sigma^2 = 1/(32\log 2)$ and by noting that $\max_{j,k}\{\ell_{j,t} - \ell_{k,t}\} \le 2\Delta \le \sqrt{(K-1)\log(1/(4\delta))/T}\,\big/\,(4\sqrt{\log 2})$, $\mathbb{Q}_i$-almost surely.

Proof of Theorem 2. By the assumption on δ we have $\Delta = \frac{7\sigma^2}{16C}\sqrt{\frac{K-1}{T}}\log\frac{1}{4\delta} \le 1/30$. Suppose for all i > 1 that

$$\mathbb{E}_{\mathbb{Q}_1}[N_i(T)] > \frac{\sigma^2}{2\Delta^2}\log\frac{1}{4\delta}. \qquad (8)$$

Then by the assumption in the theorem statement and the second part of Lemma 3 we have

$$C\sqrt{(K-1)T} \ge \mathbb{E}_{\mathbb{Q}_1}[R_T(\ell^1_{1:T})] \ge \frac{7\Delta}{8}\,\mathbb{E}_{\mathbb{Q}_1}\left[\sum_{i=2}^K N_i(T)\right] > \frac{7\sigma^2(K-1)}{16\Delta}\log\frac{1}{4\delta} = C\sqrt{(K-1)T},$$

which is a contradiction. Therefore there exists an i > 1 for which Eq. (8) does not hold. Then by the same argument as in the previous proof it follows that

$$\max_{k\in\{1,i\}} \mathbb{Q}_k\left( R_T(\ell^k_{1:T}) \ge \frac{7\sigma^2}{4\cdot 16\,C}\sqrt{(K-1)T}\,\log\frac{1}{4\delta} \right) = \max_{k\in\{1,i\}} \mathbb{Q}_k\left( R_T(\ell^k_{1:T}) \ge T\Delta/4 \right) \ge \delta/2.$$

The result is completed by substituting the value of $\sigma^2 = 1/(32\log 2)$.

Remark 1. It is possible to derive similar high-probability regret bounds with non-correlated losses. However, the correlation makes the results cleaner (we do not need an additional concentration argument to locate the optimal arm) and it is key to deriving Corollaries 4 and 5 in Section 4.
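The correlated environments used in the proofs of Theorems 1 and 2 are easy to simulate. A minimal sketch (0-indexed arms, illustrative parameter values) that also checks the pointwise ordering of the arms: because clipping is monotone, arm j is optimal in every single round, never just on average:

```python
import math
import random

def sample_environment(j, T, K, delta_gap, seed=0):
    """Losses from the j-th correlated environment: a shared Gaussian Z_t,
    shifted by delta_gap for arm 0 and by 2*delta_gap for arm j, then clipped."""
    rng = random.Random(seed)
    sigma = math.sqrt(1 / (32 * math.log(2)))
    clip = lambda x: min(1.0, max(0.0, x))
    losses = []
    for _ in range(T):
        z = rng.gauss(0.5, sigma)  # the shared noise correlates all arms
        row = [clip(z - delta_gap) if i == 0
               else clip(z - 2 * delta_gap) if i == j
               else clip(z)
               for i in range(K)]
        losses.append(row)
    return losses

T, K, j = 500, 4, 2
losses = sample_environment(j, T, K, delta_gap=1 / 30)
assert all(0.0 <= v <= 1.0 for row in losses for v in row)
# Arm j has the smallest loss in every round (clipping preserves order).
assert all(min(row) == row[j] for row in losses)
```

This is exactly why the proofs need no extra concentration step to locate the optimal arm, as Remark 1 points out.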
3 First-Order Lower Bound

First-order upper bounds provide an improvement over minimax bounds when the loss of the optimal action is small. Recall from Corollary 1 that first-order bounds can be rewritten in terms of the small-loss balls $B_{\alpha,T}$ defined in (4). Theorem 3 below provides a new lower bound of order $\sqrt{L_T^* K}$, which matches the best existing upper bounds up to logarithmic factors. As is standard for minimax results, this does not imply a lower bound on every loss sequence $\ell_{1:T}$. Instead it shows that we cannot hope for a better bound if we only know the value of $L_T^*$.

Theorem 3. Let K ≥ 2, T ≥ K ∨ 118, and α ∈ [(c log(32T) ∨ (K/2))/T, 1/2], where c = 64/9. Then for any randomised bandit algorithm, $\sup_{\ell_{1:T}\in B_{\alpha,T}} \mathbb{E}[R_T(\ell_{1:T})] \ge \sqrt{\alpha TK}/27$, where the expectation is taken with respect to the internal randomisation of the algorithm.

Our proof is inspired by that of Auer et al. [2002, Theorem 5.1]. The key difference is that we take Bernoulli distributions with parameter close to α instead of 1/2. This way the best cumulative loss $L_T^*$ is ensured to be concentrated around αT, and the regret lower bound $\sqrt{\alpha TK} \approx \sqrt{\alpha(1-\alpha)TK}$ can be seen to involve the variance $\alpha(1-\alpha)T$ of the binomial distribution with parameters α and T. First we state the stochastic construction of the losses and prove a general lemma that allows us to prove Theorem 3 and will also be useful in Section 4 to derive a lower bound in terms of the quadratic variation. Let ε ∈ [0, 1−α] be fixed and define K probability distributions $(Q_j)_{j=1}^K$ on $[0,1]^{KT}$ such that under $Q_j$ the following hold:

• All random losses $\ell_{i,t}$ for 1 ≤ i ≤ K and 1 ≤ t ≤ T are independent.
• $\ell_{i,t}$ is sampled from a Bernoulli distribution with parameter α + ε if i ≠ j, or with parameter α if i = j.

Lemma 4. Let α ∈ (0, 1), K ≥ 2, and T ≥ K/(4(1−α)). Consider the probability distributions $Q_j$ on $[0,1]^{KT}$ defined above with $\varepsilon = \frac{1}{2}\sqrt{\alpha(1-\alpha)K/T}$, and set $\bar{Q} = \frac{1}{K}\sum_{j=1}^K Q_j$. Then for any randomised bandit algorithm, $\mathbb{E}[R_T(\ell_{1:T})] \ge \sqrt{\alpha(1-\alpha)TK}/8$, where the expectation is with respect to both the internal randomisation of the algorithm and the random loss sequence $\ell_{1:T}$, which is drawn from $\bar{Q}$.

The assumption T ≥ K/(4(1−α)) above ensures that ε ≤ 1−α, so that the $Q_j$ are well defined.

Proof of Lemma 4. We lower bound the regret by the pseudo-regret for each distribution $Q_j$:

$$\mathbb{E}_{Q_j}\left[ \sum_{t=1}^T \ell_{I_t,t} - \min_{1\le i\le K}\sum_{t=1}^T \ell_{i,t} \right] \ge \mathbb{E}_{Q_j}\left[\sum_{t=1}^T \ell_{I_t,t}\right] - \min_{1\le i\le K}\mathbb{E}_{Q_j}\left[\sum_{t=1}^T \ell_{i,t}\right] = \sum_{t=1}^T \mathbb{E}_{Q_j}\left[\alpha + \varepsilon - \varepsilon\,\mathbf{1}_{\{I_t=j\}}\right] - T\alpha = T\varepsilon\left(1 - \frac{1}{T}\sum_{t=1}^T Q_j(I_t = j)\right), \qquad (9)$$

where the first equality follows because $\mathbb{E}_{Q_j}[\ell_{I_t,t}] = \mathbb{E}_{Q_j}[\mathbb{E}_{Q_j}[\ell_{I_t,t}\mid \ell_{1:t-1}, I_t]] = \mathbb{E}_{Q_j}[\alpha + \varepsilon - \varepsilon\,\mathbf{1}_{\{I_t=j\}}]$, since under $Q_j$ the conditional distribution of $\ell_t$ given $(\ell_{1:t-1}, I_t)$ is simply $\otimes_{i=1}^K \mathrm{Ber}(\alpha + \varepsilon - \varepsilon\,\mathbf{1}_{\{i=j\}})$. To bound (9) from below, note that by Pinsker's inequality we have for all t ∈ {1, …, T} and j ∈ {1, …, K}, $Q_j(I_t = j) \le Q_0(I_t = j) + \sqrt{\mathrm{KL}(Q_0^{I_t}, Q_j^{I_t})/2}$, where $Q_0 = \mathrm{Ber}(\alpha+\varepsilon)^{\otimes KT}$ is the joint probability distribution that makes all the $\ell_{i,t}$ i.i.d. Ber(α + ε), and $Q_0^{I_t}$ and $Q_j^{I_t}$ denote the laws of $I_t$ under $Q_0$ and $Q_j$ respectively. Plugging the last inequality into (9), averaging over j = 1, …, K and using the concavity of the square root yields

$$\mathbb{E}_{\bar{Q}}\left[ \sum_{t=1}^T \ell_{I_t,t} - \min_{1\le i\le K}\sum_{t=1}^T \ell_{i,t} \right] \ge T\varepsilon\left( 1 - \frac{1}{K} - \sqrt{\frac{1}{2T}\sum_{t=1}^T \frac{1}{K}\sum_{j=1}^K \mathrm{KL}\left(Q_0^{I_t}, Q_j^{I_t}\right)} \right), \qquad (10)$$

where we recall that $\bar{Q} = \frac{1}{K}\sum_{j=1}^K Q_j$. The rest of the proof is devoted to upper-bounding $\mathrm{KL}(Q_0^{I_t}, Q_j^{I_t})$. Denote by $h_t = (I_s, \ell_{I_s,s})_{1\le s\le t-1}$ the history available at the beginning of round t. From Lemma 1,

$$\mathrm{KL}\left(Q_0^{I_t}, Q_j^{I_t}\right) \le \mathrm{KL}\left(Q_0^{(h_t,I_t)}, Q_j^{(h_t,I_t)}\right) = \mathbb{E}_{Q_0}\left[N_j(t-1)\right]\,\mathrm{KL}\left(\mathrm{Ber}(\alpha+\varepsilon), \mathrm{Ber}(\alpha)\right) \le \mathbb{E}_{Q_0}\left[N_j(t-1)\right]\frac{\varepsilon^2}{\alpha(1-\alpha)}, \qquad (11)$$

where the last inequality follows by upper bounding the KL divergence by the χ² divergence (see the supplementary material). Averaging (11) over j ∈ {1, …, K} and t ∈ {1, …, T}, and noting that $\sum_{t=1}^T (t-1) \le T^2/2$, we get

$$\frac{1}{T}\sum_{t=1}^T \frac{1}{K}\sum_{j=1}^K \mathrm{KL}\left(Q_0^{I_t}, Q_j^{I_t}\right) \le \frac{1}{T}\sum_{t=1}^T \frac{(t-1)\varepsilon^2}{K\alpha(1-\alpha)} \le \frac{T\varepsilon^2}{2K\alpha(1-\alpha)}.$$

Plugging the above inequality into (10) and using the definition of $\varepsilon = \frac{1}{2}\sqrt{\alpha(1-\alpha)K/T}$ yields

$$\mathbb{E}_{\bar{Q}}\left[ \sum_{t=1}^T \ell_{I_t,t} - \min_{1\le i\le K}\sum_{t=1}^T \ell_{i,t} \right] \ge T\varepsilon\left(1 - \frac{1}{K} - \frac{1}{4}\right) \ge \frac{1}{8}\sqrt{\alpha(1-\alpha)TK}.$$

Proof of Theorem 3. We show that there exists a loss sequence $\ell_{1:T} \in [0,1]^{KT}$ such that $L_T^* \le \alpha T$ and $\mathbb{E}[R_T(\ell_{1:T})] \ge \frac{1}{27}\sqrt{\alpha TK}$. Lemma 4 above provides this kind of lower bound, but without the guarantee on $L_T^*$. For this purpose we will use Lemma 4 with a smaller value of α (namely, α/2) and combine it with Bernstein's inequality to prove that $L_T^* \le T\alpha$ with high probability.

Part 1: Applying Lemma 4 with α/2 (note that T ≥ K ≥ K/(4(1−α/2)) by the assumption on T) and noting that $\max_j \mathbb{E}_{Q_j}[R_T(\ell_{1:T})] \ge \mathbb{E}_{\bar{Q}}[R_T(\ell_{1:T})]$, we get that for some j ∈ {1, …, K} the probability distribution $Q_j$ defined with $\varepsilon = \frac{1}{2}\sqrt{(\alpha/2)(1-\alpha/2)K/T}$ satisfies

$$\mathbb{E}_{Q_j}[R_T(\ell_{1:T})] \ge \frac{1}{8}\sqrt{\frac{\alpha}{2}\left(1-\frac{\alpha}{2}\right)TK} \ge \frac{1}{32}\sqrt{6\alpha TK} \qquad (12)$$

since α ≤ 1/2 by assumption.

Part 2: Next we prove that

$$Q_j\left(L_T^* > T\alpha\right) \le \frac{1}{32T}. \qquad (13)$$

To this end, first note that $L_T^* \le \sum_{t=1}^T \ell_{j,t}$. Second, note that under $Q_j$ the $\ell_{j,t}$, t ≥ 1, are i.i.d. Ber(α/2). We can thus use Bernstein's inequality: applying Theorem 2.10 (and a remark on p. 38) of Boucheron et al. [2013] with $X_t = \ell_{j,t} - \alpha/2 \le 1 = b$, with $v = T(\alpha/2)(1-\alpha/2)$, and with c = b/3 = 1/3, we get that for all δ ∈ (0, 1), with $Q_j$-probability at least 1 − δ,

$$L_T^* \le \sum_{t=1}^T \ell_{j,t} \le \frac{T\alpha}{2} + \sqrt{2T\frac{\alpha}{2}\left(1-\frac{\alpha}{2}\right)\log\frac{1}{\delta}} + \frac{1}{3}\log\frac{1}{\delta} \le \frac{T\alpha}{2} + \left(1+\frac{1}{3}\right)\sqrt{T\alpha\log\frac{1}{\delta}} \le \frac{T\alpha}{2} + \frac{T\alpha}{2} = T\alpha, \qquad (14)$$

where the second-to-last inequality is true whenever $T\alpha \ge \log(1/\delta)$ and the last is true whenever $T\alpha \ge (8/3)^2\log(1/\delta) = c\log(1/\delta)$. By the assumption on α, these two conditions are satisfied for δ = 1/(32T), which concludes the proof of (13).
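As an aside, the bound $\mathrm{KL}(\mathrm{Ber}(\alpha+\varepsilon), \mathrm{Ber}(\alpha)) \le \varepsilon^2/(\alpha(1-\alpha))$ used in (11) can be verified numerically. The right-hand side is exactly the χ² divergence between the two Bernoulli distributions, since $\varepsilon^2/\alpha + \varepsilon^2/(1-\alpha) = \varepsilon^2/(\alpha(1-\alpha))$; a minimal sketch:

```python
import math

def kl_ber(p, q):
    """KL(Ber(p), Ber(q)) for p, q strictly inside (0, 1)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Check KL(Ber(a + e), Ber(a)) <= e^2 / (a * (1 - a)) on a small grid,
# i.e. KL bounded by the chi-squared divergence, as used in (11).
for a in [0.05, 0.1, 0.25, 0.4]:
    for e in [0.01, 0.05, 0.1]:
        assert kl_ber(a + e, a) <= e ** 2 / (a * (1 - a)) + 1e-12
```

The slack in this inequality is what makes the $\sqrt{\alpha(1-\alpha)}$ variance factor appear in Lemma 4.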
Conclusion: We show by contradiction that there exists a loss sequence $\ell_{1:T} \in [0,1]^{KT}$ such that $L_T^* \le \alpha T$ and

$$\mathbb{E}[R_T(\ell_{1:T})] \ge \frac{1}{64}\sqrt{6\alpha TK}, \qquad (15)$$

where the expectation is with respect to the internal randomisation of the algorithm. Imagine for a second that (15) were false for every loss sequence $\ell_{1:T} \in [0,1]^{KT}$ satisfying $L_T^* \le \alpha T$. Then we would have $\mathbf{1}_{\{L_T^* \le \alpha T\}}\mathbb{E}_{Q_j}[R_T(\ell_{1:T}) \mid \ell_{1:T}] \le \frac{1}{64}\sqrt{6\alpha TK}$ almost surely (since the internal source of randomness of the bandit algorithm is independent of $\ell_{1:T}$). Therefore, by the tower rule for the first expectation on the right-hand side below, we would get

$$\mathbb{E}_{Q_j}[R_T(\ell_{1:T})] = \mathbb{E}_{Q_j}\left[ R_T(\ell_{1:T})\mathbf{1}_{\{L_T^* \le \alpha T\}} \right] + \mathbb{E}_{Q_j}\left[ R_T(\ell_{1:T})\mathbf{1}_{\{L_T^* > \alpha T\}} \right] \le \frac{1}{64}\sqrt{6\alpha TK} + T\cdot Q_j(L_T^* > T\alpha) \le \frac{1}{64}\sqrt{6\alpha TK} + \frac{1}{32} < \frac{1}{32}\sqrt{6\alpha TK}, \qquad (16)$$

where (16) follows from (13) and by noting that $1/32 < \frac{1}{64}\sqrt{6\alpha TK}$ since α ≥ K/(2T) > 4/(6T) ≥ 4/(6TK). Comparing (16) and (12) we get a contradiction, which proves that there exists a loss sequence $\ell_{1:T} \in [0,1]^{KT}$ satisfying both $L_T^* \le \alpha T$ and (15). We conclude the proof by noting that $\sqrt{6}/64 \ge 1/27$. Finally, the condition T ≥ K ∨ 118 is sufficient to make the interval $\left[ (c\log(32T) \vee (K/2))/T,\ \tfrac{1}{2} \right]$ non-empty.

4 Second-Order Lower Bounds

We start by giving a lower bound on the regret in terms of the quadratic variation that is close to existing upper bounds except in the dependence on the number of arms. Afterwards we prove that bandit strategies cannot adapt to losses that lie in a small range, or to the existence of an action that is always optimal.

Lower bound in terms of quadratic variation. We prove a lower bound of $\Omega(\sqrt{\alpha TK})$ over any small-variation ball $V_{\alpha,T}$ (as defined by (7)) for all $\alpha = \Omega(\log(T)/T)$. This minimax lower bound matches the upper bound of Corollary 2 up to a multiplicative factor of $K^2\sqrt{\log T}$. Closing this gap is left as an open question, but we conjecture that the upper bound is loose (see also the COLT open problem by Hazan and Kale [2011a]).

Theorem 4. Let K ≥ 2, T ≥ (32K) ∨ 601, and α ∈ [(2c₁ log(T) ∨ 8K)/T, 1/4], where $c_1 = (4/9)^2(3\sqrt{5}+1)^2 \le 12$. Then for any randomised bandit algorithm, $\sup_{\ell_{1:T}\in V_{\alpha,T}} \mathbb{E}[R_T(\ell_{1:T})] \ge \sqrt{\alpha TK}/25$, where the expectation is taken with respect to the internal randomisation of the algorithm.

The proof is very similar to that of Theorem 3; it also follows from Lemma 4 and Bernstein's inequality. It is postponed to the supplementary material.

Impossibility results. In the full-information setting (where the entire loss vector is observed after each round), Cesa-Bianchi et al. [2007, Theorem 6] designed a carefully tuned exponential weighting algorithm for which the regret depends on the variation of the algorithm and the range of the losses:

$$\forall \ell_{1:T} \in \mathbb{R}^{KT}, \quad \mathbb{E}[R_T(\ell_{1:T})] \le 4\sqrt{V_T \log K} + 4E_T \log K + 6E_T, \qquad (17)$$

where the expectation is taken with respect to the internal randomisation of the algorithm, $E_T = \max_{1\le t\le T}\max_{1\le i,j\le K} |\ell_{i,t} - \ell_{j,t}|$ denotes the effective range of the losses, and $V_T = \sum_{t=1}^T \mathrm{Var}_{I_t\sim p_t}(\ell_{I_t,t})$ denotes the cumulative variance of the algorithm (in each round t the expert's action $I_t$ is drawn at random from the weight vector $p_t$). The bound in (17) is not closed-form because $V_T$ depends on the algorithm, but it has several interesting consequences:

1. If for all t the losses $\ell_{i,t}$ lie in an unknown interval $[a_t, a_t + \rho]$ with a small width ρ > 0, then $\mathrm{Var}_{I_t\sim p_t}(\ell_{I_t,t}) \le \rho^2/4$, so that $V_T \le T\rho^2/4$. Hence

$$\mathbb{E}[R_T(\ell_{1:T})] \le 2\rho\sqrt{T\log K} + 4\rho\log K + 6\rho.$$

Therefore, though the algorithm of Cesa-Bianchi et al. [2007, Section 4.2] does not use prior knowledge of $a_t$ or ρ, it is able to incur a regret that scales linearly with the effective range ρ.

2. If all the losses $\ell_{i,t}$ are nonnegative, then by Corollary 3 of [Cesa-Bianchi et al., 2007] the second-order bound (17) implies the first-order bound

$$\mathbb{E}[R_T(\ell_{1:T})] \le 4\sqrt{L_T^*\left( M_T - \frac{L_T^*}{T} \right)\log K} + 39\,M_T \max\{1, \log K\}, \qquad (18)$$

where $M_T = \max_{1\le t\le T}\max_{1\le i\le K} \ell_{i,t}$.

3. If there exists an arm i* that is optimal at every round t (i.e., $\ell_{i^*,t} = \min_i \ell_{i,t}$ for all t ≥ 1), then any translation-invariant algorithm with regret guarantees as in (18) above suffers a bounded regret. This is the case for the fully automatic algorithm of Cesa-Bianchi et al. [2007, Theorem 6] mentioned above. Then by the translation invariance of the algorithm all losses $\ell_{i,t}$ appearing in the regret bound can be replaced with the translated losses $\ell_{i,t} - \ell_{i^*,t} \ge 0$, so that a bound of the same form as (18) implies a regret bound of O(log K).

4. Assume that the loss vectors $\ell_t$ are i.i.d. with a unique optimal arm in expectation (i.e., there exists i* such that $\mathbb{E}[\ell_{i^*,1}] < \mathbb{E}[\ell_{i,1}]$ for all i ≠ i*). Then using the Hoeffding-Azuma inequality we can show that the algorithm of Cesa-Bianchi et al. [2007, Section 4.2] has, with high probability, a bounded cumulative variance $V_T$, and therefore (by (17)) incurs a bounded regret, in the same spirit as in de Rooij et al. [2014], Gaillard et al. [2014].

We already know that point 2 has a counterpart in the bandit setting. If one is prepared to ignore logarithmic terms, then point 4 also has an analogue in the bandit setting due to the existence of logarithmic regret guarantees for stochastic bandits [Lai and Robbins, 1985]. The following corollaries show that in the bandit setting it is not possible to design algorithms to exploit the range of the losses or the existence of an arm that is always optimal. We use Theorem 1 as a general tool, but the bounds can be improved to $\sqrt{TK}/30$ by analysing the expected regret directly (similar to Lemma 4).

Corollary 4. Let K ≥ 2, T ≥ 32(K−1) log(14) and $\rho \ge 0.22\sqrt{(K-1)/T}$. Then for any randomised bandit algorithm, $\sup_{\ell_1,\ldots,\ell_T\in C_\rho} \mathbb{E}[R_T(\ell_{1:T})] \ge \sqrt{T(K-1)}/504$, where the expectation is with respect to the randomness in the algorithm, and $C_\rho \triangleq \left\{ x \in [0,1]^K : \max_{i,j} |x_i - x_j| \le \rho \right\}$.

Corollary 5. Let K ≥ 2 and T ≥ 32(K−1) log(14).
Then, for any randomised bandit algorithm, there is a loss sequence $\ell_{1:T} \in [0,1]^{KT}$ such that there exists an arm i* that is optimal at every round t (i.e., $\ell_{i^*,t} = \min_i \ell_{i,t}$ for all t ≥ 1), but $\mathbb{E}[R_T(\ell_{1:T})] \ge \sqrt{T(K-1)}/504$, where the expectation is with respect to the randomness in the algorithm.

Proof of Corollaries 4 and 5. Both results follow from Theorem 1 by choosing δ = 0.15. Therefore there exists an $\ell_{1:T}$ such that $\mathbb{P}\left\{ R_T(\ell_{1:T}) \ge \sqrt{(K-1)T\log(1/(4\cdot 0.15))}/27 \right\} \ge 0.15/2$, which implies (since $R_T(\ell_{1:T}) \ge 0$ here) that $\mathbb{E}[R_T(\ell_{1:T})] \ge \sqrt{(K-1)T}/504$. Finally note that $\ell_{1:T} \in C_\rho$ since $\rho \ge \sqrt{(K-1)\log(1/(4\delta))/T}\,\big/\,(4\sqrt{\log 2})$, and there exists an i such that $\ell_{i,t} \le \ell_{j,t}$ for all j and t.

Acknowledgments

The authors would like to thank Aurélien Garivier and Émilie Kaufmann for insightful discussions. This work was partially supported by the CIMI (Centre International de Mathématiques et d'Informatique) Excellence program. The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grants ANR-13-BS01-0005 (project SPADRO) and ANR-13-CORD-0020 (project ALICIA).

References

C. Allenberg, P. Auer, L. Györfi, and G. Ottucsák. Hannan consistency in on-line learning in case of unbounded losses under partial monitoring. In Proceedings of ALT 2006, pages 229–243. Springer, 2006.
J. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In Proceedings of the Conference on Learning Theory (COLT), pages 217–226, 2009.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of the 36th Annual Symposium on Foundations of Computer Science, pages 322–331. IEEE, 1995.
P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multi-armed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.
S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
S. Bubeck, V. Perchet, and P. Rigollet. Bounded regret in stochastic multi-armed bandits. In Proceedings of the 26th Conference on Learning Theory, pages 122–134, 2013.
N. Cesa-Bianchi, Y. Mansour, and G. Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2/3):321–352, 2007.
S. de Rooij, T. van Erven, P. D. Grünwald, and W. M. Koolen. Follow the leader if you can, hedge if you must. Journal of Machine Learning Research, 15:1281–1316, 2014.
P. Gaillard, G. Stoltz, and T. van Erven. A second-order bound with excess losses. In Proceedings of the 27th Conference on Learning Theory (COLT), 2014.
E. Hazan and S. Kale. A simple multi-armed bandit algorithm with optimal variation-bounded regret. In Proceedings of the 24th Conference on Learning Theory, pages 817–820, 2011a.
E. Hazan and S. Kale. Better algorithms for benign bandits. Journal of Machine Learning Research, 12:1287–1311, 2011b.
T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6:4–22, 1985.
G. Neu. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. In Advances in Neural Information Processing Systems 28 (NIPS 2015), 2015a.
G. Neu. First-order regret bounds for combinatorial semi-bandits. In Proceedings of the 28th Conference on Learning Theory, pages 1360–1375, 2015b.
A. Rakhlin and K. Sridharan. Online learning with predictable sequences. In Proceedings of the 26th Conference on Learning Theory, pages 993–1019, 2013.
G. Stoltz. Incomplete information and internal regret in prediction of individual sequences. PhD thesis, Paris-Sud XI University, 2005.
A. Tsybakov. Introduction to Nonparametric Estimation. Springer Science & Business Media, 2008.
Structure-Blind Signal Recovery

Dmitry Ostrovsky*, Zaid Harchaoui†, Anatoli Juditsky*, Arkadi Nemirovski‡
firstname.lastname@imag.fr

Abstract

We consider the problem of recovering a signal observed in Gaussian noise. If the set of signals is convex and compact, and can be specified beforehand, one can use classical linear estimators that achieve a risk within a constant factor of the minimax risk. However, when the set is unspecified, designing an estimator that is blind to the hidden structure of the signal remains a challenging problem. We propose a new family of estimators to recover signals observed in Gaussian noise. Instead of specifying the set where the signal lives, we assume the existence of a well-performing linear estimator. The proposed estimators enjoy exact oracle inequalities and can be efficiently computed through convex optimization. We present several numerical illustrations that show the potential of the approach.

1 Introduction

We consider the problem of recovering a complex-valued signal $(x_t)_{t\in\mathbb{Z}}$ from the noisy observations

$$y_\tau = x_\tau + \sigma\zeta_\tau, \quad -n \le \tau \le n. \qquad (1)$$

Here $n \in \mathbb{Z}_+$, and $\zeta_\tau \sim \mathcal{CN}(0,1)$ are i.i.d. standard complex-valued Gaussian random variables, meaning that $\zeta_0 = \xi^1_0 + \imath\,\xi^2_0$ with i.i.d. $\xi^1_0, \xi^2_0 \sim \mathcal{N}(0,1)$. Our goal is to recover $x_t$, 0 ≤ t ≤ n, given the sequence of observations $y_{t-n}, \ldots, y_t$ up to instant t, a task usually referred to as (pointwise) filtering in machine learning, statistics, and signal processing [5]. The traditional approach to this problem considers linear estimators, or linear filters, which write as

$$\widehat{x}_t = \sum_{\tau=0}^n \phi_\tau\, y_{t-\tau}, \quad 0 \le t \le n.$$

Linear estimators have been thoroughly studied in various forms; they are both theoretically attractive [7, 3, 2, 16, 17, 11, 13] and easy to use in practice. If the set X of signals is well-specified, one can usually compute a (nearly) minimax on X linear estimator in closed form.
In particular, if X is a class of smooth signals, such as a Hölder or a Sobolev ball, then the corresponding estimator is given by the kernel estimator with a properly set bandwidth parameter [16] and is minimax among all possible estimators. Moreover, as shown by [6, 2], if X is only assumed convex, compact, and centrally symmetric, the risk of the best linear estimator of $x_t$ is within a small constant factor of the minimax risk over X. Besides, if the set X can be specified in a computationally tractable way, which is clearly still a weaker assumption than classical smoothness assumptions, the best linear estimator can be efficiently computed by solving a convex optimization problem on X. In other words, given a computationally tractable set X on the input, one can compute a nearly-minimax linear estimator and the corresponding (nearly-minimax) risk over X. The strength of this approach, however, comes at a price: the set X still must be specified beforehand. Therefore, when one faces a recovery problem without any prior knowledge of X, this approach cannot be implemented.

*LJK, University of Grenoble Alpes, 700 Avenue Centrale, 38401 Domaine Universitaire de Saint-Martin-d'Hères, France. †University of Washington, Seattle, WA 98195, USA. ‡Georgia Institute of Technology, Atlanta, GA 30332, USA.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

We adopt here a novel approach to filtering, which we refer to as structure-blind recovery. While we do not require X to be specified beforehand, we assume that there exists a linear oracle, that is, a well-performing linear estimator of $x_t$. Previous works [8, 10, 4], following a similar philosophy, proved that one can efficiently adapt to the linear oracle filter of length m = O(n) if the corresponding filter φ is time-invariant, i.e. it recovers the target signal uniformly well in the O(n)-sized neighbourhood of t, and if its ℓ2-norm is small, namely bounded by $\rho/\sqrt{m}$ for a moderate ρ ≥ 1.
The adaptive estimator is computed by minimizing the $\ell_\infty$-norm of the filter discrepancy, in the Fourier domain, under a constraint on the $\ell_1$-norm of the filter in the Fourier domain. Put in contrast to the oracle linear filter, the price for adaptation is proved to be $O(\rho^3\sqrt{\ln n})$, with a lower bound of $O(\rho\sqrt{\ln n})$ [8, 4].

We make the following contributions:
• we propose a new family of recovery methods, obtained by solving a least-squares problem constrained or penalized by the $\ell_1$-norm of the filter in the Fourier domain;
• we prove exact oracle inequalities for the $\ell_2$-risk of these methods;
• we show that the price for adaptation improves upon previous works [8, 4] to $O(\rho^2\sqrt{\ln n})$ for the pointwise risk and to $O(\rho\sqrt{\ln n})$ for the $\ell_2$-risk;
• we present numerical experiments that show the potential of the approach on synthetic and real-world images and signals.

Before presenting the theoretical results, let us introduce the notation we use throughout the paper.

Filters  Let $C(\mathbb{Z})$ be the linear space of all two-sided complex-valued sequences $x = \{x_t \in \mathbb{C}\}_{t\in\mathbb{Z}}$. For $k, k' \in \mathbb{Z}$ we consider the finite-dimensional subspaces
$$C(\mathbb{Z}_k^{k'}) = \{x \in C(\mathbb{Z}) : x_t = 0,\ t \notin [k, k']\}.$$
It is convenient to identify $m$-dimensional complex vectors, $m = k' - k + 1$, with elements of $C(\mathbb{Z}_k^{k'})$ by means of the notation $x_k^{k'} := [x_k; \ldots; x_{k'}] \in \mathbb{C}^{k'-k+1}$. We associate to linear mappings $C(\mathbb{Z}_k^{k'}) \to C(\mathbb{Z}_j^{j'})$ the $(j'-j+1)\times(k'-k+1)$ matrices with complex entries. The convolution $u * v$ of two sequences $u, v \in C(\mathbb{Z})$ is the sequence with elements
$$[u * v]_t = \sum_{\tau\in\mathbb{Z}} u_\tau v_{t-\tau}, \quad t \in \mathbb{Z}.$$
Given observations (1) and $\varphi \in C(\mathbb{Z}_0^m)$, consider the (left) linear estimate of $x$ associated with the filter $\varphi$:
$$\hat{x}_t = [\varphi * y]_t$$
($\hat{x}_t$ is merely a kernel estimate of $x_t$ by a kernel $\varphi$ supported on $[0, \ldots, m]$).

Discrete Fourier transform  We define the unitary Discrete Fourier transform (DFT) operator $F_n : \mathbb{C}^{n+1} \to \mathbb{C}^{n+1}$ by $z \mapsto F_n z$,
$$[F_n z]_k = (n+1)^{-1/2} \sum_{t=0}^{n} z_t\, e^{\frac{2\pi\imath kt}{n+1}}, \quad 0 \le k \le n.$$
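A quick numerical check of this DFT convention (a sketch; note that numpy's built-in FFT uses the opposite sign in the exponent, so the matrix is built explicitly here):

```python
import numpy as np

def dft_matrix(n):
    """Unitary DFT matrix F_n of size (n+1) x (n+1) with the paper's convention
    [F_n z]_k = (n+1)^{-1/2} * sum_t z_t * exp(+2*pi*i*k*t/(n+1))."""
    t = np.arange(n + 1)
    return np.exp(2j * np.pi * np.outer(t, t) / (n + 1)) / np.sqrt(n + 1)

n = 7
F = dft_matrix(n)
z = np.random.randn(n + 1) + 1j * np.random.randn(n + 1)

# Unitarity: F^H F = I, so the inverse DFT is the Hermitian adjoint F^H.
print(np.allclose(F.conj().T @ F, np.eye(n + 1)))            # True
# Parseval identity: ||z||_2 = ||F z||_2.
print(np.isclose(np.linalg.norm(z), np.linalg.norm(F @ z)))  # True
```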
The inverse Discrete Fourier transform (iDFT) operator $F_n^{-1}$ is given by $F_n^{-1} := F_n^H$ (here $A^H$ stands for the Hermitian adjoint of $A$). By the Fourier inversion theorem, $F_n^{-1}(F_n z) = z$. We denote by $\|\cdot\|_p$ the usual $\ell_p$-norms on $C(\mathbb{Z})$: $\|x\|_p = \big(\sum_{t\in\mathbb{Z}} |x_t|^p\big)^{1/p}$, $p \in [1, \infty]$. Usually, the argument will be finite-dimensional, an element of $C(\mathbb{Z}_k^{k'})$; we reserve the special notation $\|x\|_{n,p} := \|x_0^n\|_p$. Furthermore, the DFT allows to equip $C(\mathbb{Z}_0^n)$ with the norms associated with $\ell_p$-norms in the spectral domain:
$$\|x\|_{n,p}^* := \|x_0^n\|_p^* := \|F_n x_0^n\|_p, \quad p \in [1,\infty];$$
note that unitarity of the DFT implies the Parseval identity: $\|x\|_{n,2} = \|x\|_{n,2}^*$. Finally, $c$, $C$, and $C'$ stand for generic absolute constants.

2 Oracle inequality for constrained recovery

Given observations (1) and $\varrho > 0$, we first consider the constrained recovery $\hat{x}^{\mathrm{con}}$ given by
$$[\hat{x}^{\mathrm{con}}]_t = [\hat\varphi * y]_t, \quad t = 0, \ldots, n,$$
where $\hat\varphi$ is an optimal solution of the constrained optimization problem
$$\min_{\varphi \in C(\mathbb{Z}_0^n)} \Big\{ \|y - \varphi * y\|_{n,2} \ : \ \|\varphi\|_{n,1}^* \le \varrho/\sqrt{n+1} \Big\}. \qquad (2)$$
The constrained recovery estimator minimizes a least-squares fit criterion under a constraint on $\|\varphi\|_{n,1}^* = \|F_n \varphi_0^n\|_1$, that is, an $\ell_1$ constraint on the discrete Fourier transform of the filter. While the least-squares objective naturally follows from the Gaussian noise assumption, the constraint can be motivated as follows.

Small-error linear filters  A linear filter $\varphi^o$ with a small $\ell_1$ norm in the spectral domain and small recovery error exists, essentially, whenever there exists a linear filter with small recovery error [8, 4]. Indeed, let us say that $x \in C(\mathbb{Z}_0^n)$ is simple [4] with parameters $m \in \mathbb{Z}_+$ and $\rho \ge 1$ if there exists $\phi^o \in C(\mathbb{Z}_0^m)$ such that for all $-m \le \tau \le 2m$,
$$\Big( \mathbb{E}\big[ |x_\tau - [\phi^o * y]_\tau|^2 \big] \Big)^{1/2} \le \frac{\sigma\rho}{\sqrt{m+1}}. \qquad (3)$$
In other words, $x$ is $(m,\rho)$-simple if there exists a hypothetical filter $\phi^o$ of length at most $m+1$ which recovers $x_\tau$ with squared risk uniformly bounded by $\frac{\sigma^2\rho^2}{m+1}$ in the interval $-m \le \tau \le 2m$. Note that (3) clearly implies that $\|\phi^o\|_2 \le \rho/\sqrt{m+1}$, and that
$$|[x - \phi^o * x]_\tau| \le \sigma\rho/\sqrt{m+1} \quad \forall \tau,\ -m \le \tau \le 2m.$$
Now, let $n = 2m$, and let $\varphi^o = \phi^o * \phi^o \in \mathbb{C}^{n+1}$. As proved in [15, Appendix C], we have
$$\|\varphi^o\|_{n,1}^* \le 2\rho^2/\sqrt{n+1}, \qquad (4)$$
and, for a moderate absolute constant $c$,
$$\|x - \varphi^o * y\|_{n,2} \le c\sigma\rho^2\sqrt{1 + \ln[1/\alpha]} \qquad (5)$$
with probability $1-\alpha$. To summarize, if $x$ is $(m,\rho)$-simple, i.e., when there exists a filter $\phi^o$ of length $\le m+1$ which recovers $x$ with small risk on the interval $[-m, 2m]$, then the filter $\varphi^o = \phi^o * \phi^o$ of length at most $n+1$, with $n = 2m$, has small norm $\|\varphi^o\|_{n,1}^*$ and recovers the signal $x$ with (essentially the same) small risk on the interval $[0, n]$.

Hidden structure  The constrained recovery estimator is completely blind to a possible hidden structure of the signal, yet can seamlessly adapt to it when such a structure exists, in a way that we can rigorously establish. Using the right-shift operator on $C(\mathbb{Z})$, $[\Delta x]_t = x_{t-1}$, we formalize the hidden structure as an unknown shift-invariant linear subspace of $C(\mathbb{Z})$, $\Delta S = S$, of a small dimension $s$. We do not assume that $x$ belongs to that subspace. Instead, we make the more general assumption that $x$ is close to this subspace, that is, it may be decomposed into a sum of a component that lies in the subspace and a component whose norm we can control.

Assumption A  We suppose that $x$ admits the decomposition
$$x = x^S + \varepsilon, \quad x^S \in S,$$
where $S$ is an (unknown) shift-invariant, $\Delta S = S$, subspace of $C(\mathbb{Z})$ of dimension $s$, $1 \le s \le n+1$, and $\varepsilon$ is "small", namely,
$$\|\Delta^\tau \varepsilon\|_{n,2} \le \sigma\kappa, \quad 0 \le \tau \le n.$$

Shift-invariant subspaces of $C(\mathbb{Z})$ are exactly the sets of solutions of homogeneous linear difference equations with polynomial operators. This is summarized by the following lemma (we believe it is a known fact; for completeness we provide a proof in [15, Appendix C]).

Lemma 2.1. The solution set of a homogeneous difference equation with a polynomial operator $p(\Delta)$,
$$[p(\Delta)x]_t = \Big[ \sum_{\tau=0}^{s} p_\tau x_{t-\tau} \Big] = 0, \quad t \in \mathbb{Z}, \qquad (6)$$
with $\deg(p(\cdot)) = s$, $p(0) = 1$, is a shift-invariant subspace of $C(\mathbb{Z})$ of dimension $s$.
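A minimal numerical illustration of Lemma 2.1 (helper names are my own; the fact being checked is the standard one that degree-$(s-1)$ discrete-time polynomials solve (6) for $p(\Delta) = (1-\Delta)^s$, whose coefficients are $p_k = (-1)^k\binom{s}{k}$):

```python
import numpy as np
from math import comb

def apply_difference_operator(p, x):
    """Apply p(Delta) to a sequence: [p(Delta)x]_t = sum_tau p[tau] * x[t - tau].
    Returns the values at all t where the needed samples of x are available."""
    s = len(p) - 1
    return np.array([sum(p[tau] * x[t - tau] for tau in range(s + 1))
                     for t in range(s, len(x))])

s = 3
# p(Delta) = (1 - Delta)^s has coefficients p_k = (-1)^k * binom(s, k).
p = np.array([(-1) ** k * comb(s, k) for k in range(s + 1)], dtype=float)

t = np.arange(20, dtype=float)
x = 2 + 3 * t - t ** 2          # a degree-(s-1) discrete-time polynomial ...
print(np.allclose(apply_difference_operator(p, x), 0.0))       # True: solves (6)
print(np.allclose(apply_difference_operator(p, t ** 3), 0.0))  # False: degree s
```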
Conversely, any shift-invariant subspace $S \subset C(\mathbb{Z})$, $\Delta S \subseteq S$, $\dim(S) = s < \infty$, is the set of solutions of some homogeneous difference equation (6) with $\deg(p(\cdot)) = s$, $p(0) = 1$. Moreover, such $p(\cdot)$ is unique.

On the other hand, for any polynomial $p(\cdot)$, solutions of (6) are exponential polynomials with frequencies determined by the roots of $p(\cdot)$. For instance, discrete-time polynomials $x_t = \sum_{k=0}^{s-1} c_k t^k$, $t \in \mathbb{Z}$, of degree $s-1$ (that is, exponential polynomials with all zero frequencies) form a linear space of dimension $s$ of solutions of the equation (6) with the polynomial $p(\Delta) = (1-\Delta)^s$ with a unique root of multiplicity $s$, having coefficients $p_k = (-1)^k\binom{s}{k}$. Naturally, signals which are close, in the $\ell_2$ distance, to discrete-time polynomials are Sobolev-smooth functions sampled over the regular grid [10]. A sum of harmonic oscillations $x_t = \sum_{k=1}^{s} c_k e^{\imath\omega_k t}$, with $\omega_k \in [0, 2\pi)$ all different, is another example; here, $p(\Delta) = \prod_{k=1}^{s} (1 - e^{\imath\omega_k}\Delta)$.

We can now state an oracle inequality for the constrained recovery estimator; see [15, Appendix B].

Theorem 2.1. Let $\varrho \ge 1$, and let $\varphi^o \in C(\mathbb{Z}_0^n)$ be such that $\|\varphi^o\|_{n,1}^* \le \varrho/\sqrt{n+1}$. Suppose that Assumption A holds for some $s \in \mathbb{Z}_+$ and $\kappa < \infty$. Then for any $\alpha$, $0 < \alpha \le 1$, it holds with probability at least $1-\alpha$:
$$\|x - \hat{x}^{\mathrm{con}}\|_{n,2} \le \|x - \varphi^o * y\|_{n,2} + C\sigma\sqrt{s + \varrho\left(\kappa\sqrt{\ln[1/\alpha]} + \ln[n/\alpha]\right)}. \qquad (7)$$

When considering simple signals, Theorem 2.1 gives the following.

Corollary 2.1. Assume that the signal $x$ is $(m,\rho)$-simple, $\rho \ge 1$ and $m \in \mathbb{Z}_+$. Let $n = 2m$, $\varrho \ge 2\rho^2$, and let Assumption A hold for some $s \in \mathbb{Z}_+$ and $\kappa < \infty$. Then for any $\alpha$, $0 < \alpha \le 1$, it holds with probability at least $1-\alpha$:
$$\|x - \hat{x}^{\mathrm{con}}\|_{n,2} \le C\sigma\rho^2\sqrt{\ln[1/\alpha]} + C'\sigma\sqrt{s + \varrho\left(\kappa\sqrt{\ln[1/\alpha]} + \ln[n/\alpha]\right)}.$$

Adaptation and price  The price for adaptation in Theorem 2.1 and Corollary 2.1 is determined by three parameters: the bound on the filter norm $\varrho$, the deterministic error $\kappa$, and the subspace dimension $s$.
Assuming that the signal to recover is simple, and that $\varrho = 2\rho^2$, let us compare the magnitude of the oracle error to the term of the risk which reflects the "price of adaptation". Typically (in fact, in all cases known to us of recovery of signals from a shift-invariant subspace), the parameter $\rho$ is at least $\sqrt{s}$. Therefore, the bound (5) implies the "typical bound" $O(\sigma\sqrt{\gamma}\,\rho^2)$, which is at least of order $\sigma s\sqrt{\gamma}$, for the term $\|x - \varphi^o * y\|_{n,2}$ (we denote $\gamma = \ln(1/\alpha)$). As a result, for instance, in the "parametric situation", when the signal belongs or is very close to the subspace, that is when $\kappa = O(\ln(n))$, the price of adaptation $O\big(\sigma[s + \rho^2(\gamma + \sqrt{\gamma}\ln n)]^{1/2}\big)$ is much smaller than the bound on the oracle error. In the "nonparametric situation", when $\kappa = O(\rho^2)$, the price of adaptation has the same order of magnitude as the oracle error.

Finally, note that under the premise of Corollary 2.1 we can also bound the pointwise error. We state the result for $\varrho = 2\rho^2$ for simplicity; the proof can be found in [15, Appendix B].

Theorem 2.2. Assume that the signal $x$ is $(m,\rho)$-simple, $\rho \ge 1$ and $m \in \mathbb{Z}_+$. Let $n = 2m$, $\varrho = 2\rho^2$, and let Assumption A hold for some $s \in \mathbb{Z}_+$ and $\kappa < \infty$. Then for any $\alpha$, $0 < \alpha \le 1$, the constrained recovery $\hat{x}^{\mathrm{con}}$ satisfies
$$|x_n - [\hat{x}^{\mathrm{con}}]_n| \le \frac{C\sigma\rho}{\sqrt{m+1}}\left( \rho^2\sqrt{\ln[n/\alpha]} + \rho\sqrt{\kappa\sqrt{\ln[1/\alpha]}} + \sqrt{s} \right).$$

3 Oracle inequality for penalized recovery

To use the constrained recovery estimator with a provable guarantee, see e.g. Theorem 2.1, one must know the norm bound $\varrho$ of a small-error linear filter, or at least have an upper bound on it. However, if this parameter is unknown, but instead the noise variance is known (or can be estimated from data), we can build a more practical estimator that still enjoys an oracle inequality. The penalized recovery estimator $[\hat{x}^{\mathrm{pen}}]_t = [\hat\varphi * y]_t$ is an optimal solution to a regularized least-squares minimization problem, where the regularization penalizes the $\ell_1$-norm of the filter in the Fourier domain:
$$\hat\varphi \in \operatorname*{Argmin}_{\varphi \in C(\mathbb{Z}_0^n)} \Big\{ \|y - \varphi * y\|_{n,2}^2 + \lambda\sqrt{n+1}\,\|\varphi\|_{n,1}^* \Big\}.$$
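This penalized problem has a composite least-squares-plus-$\ell_1$ structure, so a proximal-gradient (ISTA-style) loop is one simple way to approximate a solution: since $F_n$ is unitary, the proximal step is exact soft-thresholding of the Fourier coefficients of the filter. The sketch below is my own minimal illustration (dense-matrix convolution, fixed step size and iteration count), not the authors' implementation:

```python
import numpy as np

def soft_threshold(z, thr):
    """Complex soft-thresholding: shrink each modulus by thr, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > thr, (1.0 - thr / np.maximum(mag, 1e-12)) * z, 0.0 + 0.0j)

def penalized_recovery_ista(y, lam, n_iter=500):
    """Proximal-gradient sketch for a penalized recovery of the form
        min_phi ||y - phi * y||_{n,2}^2 + lam * sqrt(n+1) * ||F_n phi||_1.
    y: observations y_{-n}, ..., y_n stored so that y[t + n] holds y_t.
    Returns the filter phi and the recovery (phi * y)_t for t = 0..n."""
    n = (len(y) - 1) // 2
    # Convolution as a dense matrix: (A phi)_t = sum_tau phi_tau * y_{t-tau}.
    A = np.array([[y[t - tau + n] for tau in range(n + 1)] for t in range(n + 1)])
    b = y[n:]                                      # y_0, ..., y_n
    idx = np.arange(n + 1)
    F = np.exp(2j * np.pi * np.outer(idx, idx) / (n + 1)) / np.sqrt(n + 1)
    eta = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # step 1/L for the smooth part
    phi = np.zeros(n + 1, dtype=complex)
    for _ in range(n_iter):
        grad = 2.0 * A.conj().T @ (A @ phi - b)    # gradient of ||A phi - b||^2
        v = phi - eta * grad
        # F is unitary, so the prox of the spectral l1 penalty is
        # soft-thresholding in the Fourier domain.
        phi = F.conj().T @ soft_threshold(F @ v, eta * lam * np.sqrt(n + 1))
    return phi, A @ phi
```

In practice the matrix-vector products here are convolutions and could be evaluated with the FFT in near-linear time, which is what makes first-order schemes attractive at larger $n$.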
(8)

Similarly to Theorem 2.1, we establish an oracle inequality for the penalized recovery estimator.

Theorem 3.1. Let Assumption A hold for some $s \in \mathbb{Z}_+$ and $\kappa < \infty$, and let $\varphi^o \in C(\mathbb{Z}_0^n)$ satisfy $\|\varphi^o\|_{n,1}^* \le \varrho/\sqrt{n+1}$ for some $\varrho \ge 1$.
1°. Suppose that the regularization parameter of the penalized recovery $\hat{x}^{\mathrm{pen}}$ satisfies $\lambda \ge \bar\lambda$, $\bar\lambda := 60\sigma^2\ln[63n/\alpha]$. Then, for $0 < \alpha \le 1$, it holds with probability at least $1-\alpha$:
$$\|x - \hat{x}^{\mathrm{pen}}\|_{n,2} \le \|x - \varphi^o * y\|_{n,2} + C\sqrt{\varrho\lambda} + C'\sigma\sqrt{s + (\hat\varrho + 1)\kappa\sqrt{\ln[1/\alpha]}},$$
where $\hat\varrho := \sqrt{n+1}\,\|\hat\varphi\|_{n,1}^*$.
2°. Moreover, if $\kappa \le \bar\kappa$, $\bar\kappa := \frac{10\ln[42n/\alpha]}{\sqrt{\ln[16/\alpha]}}$, and $\lambda \ge 2\bar\lambda$, one has
$$\|x - \hat{x}^{\mathrm{pen}}\|_{n,2} \le \|x - \varphi^o * y\|_{n,2} + C\sqrt{\varrho\lambda} + C'\sigma\sqrt{s}.$$
The proof closely follows that of Theorem 2.1 and can also be found in [15, Appendix B].

4 Discussion

There is some redundancy between "simplicity" of a signal, as defined by (3), and Assumption A. Usually a simple signal or image $x$ is also close to a low-dimensional subspace of $C(\mathbb{Z})$ (see, e.g., [10, Section 4]), so that Assumption A holds "automatically". Likewise, $x$ is "almost" simple when it is close to a low-dimensional time-invariant subspace. Indeed, if $x \in C(\mathbb{Z})$ belongs to $S$, i.e. Assumption A holds with $\kappa = 0$, one can easily verify that for $n \ge s$ there exists a filter $\phi^o \in C(\mathbb{Z}_{-n}^n)$ such that
$$\|\phi^o\|_2 \le \sqrt{s/(n+1)}, \quad \text{and} \quad x_\tau = [\phi^o * x]_\tau, \ \tau \in \mathbb{Z}. \qquad (9)$$
See [15, Appendix C] for the proof. This implies that $x$ can be recovered efficiently from observations (1):
$$\Big( \mathbb{E}\big[ |x_\tau - [\phi^o * y]_\tau|^2 \big] \Big)^{1/2} \le \sigma\sqrt{\frac{s}{n+1}}.$$
In other words, if instead of the filtering problem we were interested in the interpolation problem of recovering $x_t$ given $2n+1$ observations $y_{t-n}, \ldots, y_{t+n}$ on the left and on the right of $t$, Assumption A would imply a kind of simplicity of $x$. On the other hand, it is clear that Assumption A is not sufficient to imply the simplicity of $x$ "with respect to the filtering", in the sense of the definition we use in this paper, where we are allowed to use only observations on the left of $t$ to compute the estimate of $x_t$.
Indeed, one can see, for instance, that already signals from the parametric family $X_\alpha = \{x \in C(\mathbb{Z}) : x_\tau = c\alpha^\tau,\ c \in \mathbb{C}\}$, with a given $|\alpha| > 1$, which form a one-dimensional space of solutions of the equation $x_\tau = \alpha x_{\tau-1}$, cannot be estimated with small risk at $t$ using only observations on the left of $t$ (unless $c = 0$), and thus are not simple in the sense of (3). Of course, in the above example, the "difficulty" of the family $X_\alpha$ is due to instability of solutions of the difference equation, which explode as $\tau \to +\infty$. Note that signals $x \in X_\alpha$ with $|\alpha| \le 1$ (linear functions, oscillations, or damped oscillations) are simple. More generally, suppose that $x$ satisfies a difference equation of degree $s$:
$$0 = [p(\Delta)x]_\tau = \sum_{i=0}^{s} p_i x_{\tau-i}, \qquad (10)$$
where $p(z) = \sum_{i=0}^{s} p_i z^i$ is the corresponding characteristic polynomial and $\Delta$ is the right shift operator. When $p(z)$ is unstable, i.e. has roots inside the unit circle, then (depending on the "initial conditions") the set of solutions to the equation (10) contains difficult-to-filter signals. Observe that stability of solutions is related to the direction of the time axis; when the characteristic polynomial $p(z)$ has roots outside the unit circle, the corresponding solutions may be "left unstable", i.e. increase exponentially as $\tau \to -\infty$. In this case "right filtering", estimating $x_\tau$ using observations on the right of $\tau$, will be difficult. A special situation where interpolation and filtering are always simple arises when the characteristic polynomial of the difference equation has all its roots on the unit circle. In this case, solutions to (10) are "generalized harmonic oscillations" (harmonic oscillations modulated by polynomials), and such signals are known to be simple. Theorem 4.1 summarizes the properties of the solutions of (10) in this particular case; see [15, Appendix C] for the proof.

Theorem 4.1. Let $s$ be a positive integer, and let $p = [p_0; \ldots; p_s] \in \mathbb{C}^{s+1}$ be such that the polynomial $p(z) = \sum_{i=0}^{s} p_i z^i$ has all its roots on the unit circle.
Then for every integer $m$ satisfying $m \ge m(s) := Cs^2\ln(s+1)$, one can point out $q \in \mathbb{C}^{m+1}$ such that any solution to (10) satisfies $x_\tau = [q * x]_\tau$, $\forall \tau \in \mathbb{Z}$, and $\|q\|_2 \le \rho(s,m)/\sqrt{m}$, where
$$\rho(s,m) = C'\min\left\{ s^{3/2}\sqrt{\ln s},\ s\sqrt{\ln[ms]} \right\}. \qquad (11)$$

5 Numerical experiments

We present preliminary results of the proposed adaptive signal recovery methods on simulated data in several application scenarios. We compare the performance of the penalized $\ell_2$-recovery of Sec. 3 to that of the Lasso recovery of [1] in signal and image denoising problems. Implementation details for the penalized $\ell_2$-recovery are given in Sec. 6. A discussion of the discretization approach underlying the competing Lasso method can be found in [1, Sec. 3.6].

We follow the same methodology in both the signal and image denoising experiments. For each level of the signal-to-noise ratio $\mathrm{SNR} \in \{1, 2, 4, 8, 16\}$, we perform $N$ Monte-Carlo trials. In each trial, we generate a random signal $x$ on a regular grid with $n$ points, corrupted by i.i.d. Gaussian noise of variance $\sigma^2$. The signal is normalized, $\|x\|_2 = 1$, so that $\mathrm{SNR}^{-1} = \sigma\sqrt{n}$. We set the regularization penalty in each method as follows. For the penalized $\ell_2$-recovery (8), we use $\lambda = 2\sigma^2\log[63n/\alpha]$ with $\alpha = 0.1$. For Lasso [1], we use the common setting $\lambda = \sigma\sqrt{2\log n}$. We report experimental results by plotting the $\ell_2$-error $\|\hat{x} - x\|_2$, averaged over $N$ Monte-Carlo trials, versus the inverse of the signal-to-noise ratio $\mathrm{SNR}^{-1}$.

Signal denoising  We consider denoising of a one-dimensional signal in two different scenarios, fixing $N = 100$ and $n = 100$. In the RandomSpikes scenario, the signal is a sum of 4 harmonic oscillations, each characterized by a spike of a random amplitude at a random position in the continuous frequency domain $[0, 2\pi]$. In the CoherentSpikes scenario, the same number of spikes is
sampled by pairs. Spikes in each pair have the same amplitude and are separated by only 0.1 of the DFT bin $2\pi/n$, which could make recovery harder due to high signal coherency. However, in practice we found RandomSpikes to be slightly harder than CoherentSpikes for both methods; see Fig. 1.

Figure 1: Signal and image denoising in different scenarios, left to right: RandomSpikes, CoherentSpikes, RandomSpikes-2D, and CoherentSpikes-2D. The curves compare the Lasso [1] and the penalized $\ell_2$-recovery; the steep parts of the curves at high noise levels correspond to observations being thresholded to zero.

As Fig. 1 shows, the proposed penalized $\ell_2$-recovery outperforms the Lasso method for all noise levels. The performance gain is particularly significant for high signal-to-noise ratios.

Image denoising  We now consider recovery of an unknown regression function $f$ on the regular grid on $[0,1]^2$ given the noisy observations
$$y_\tau = x_\tau + \sigma\zeta_\tau, \quad \tau \in \{0, 1, \ldots, m-1\}^2, \qquad (12)$$
where $x_\tau = f(\tau/m)$. We fix $N = 40$, and the grid dimension $m = 40$; the number of samples is then $n = m^2$. For the penalized $\ell_2$-recovery, we implement the blockwise denoising strategy (see the Appendix for the implementation details) with just one block for the entire image. We present additional numerical illustrations in the supplementary material.

We study three different scenarios for generating the ground-truth signal in this experiment. The first two scenarios, RandomSpikes-2D and CoherentSpikes-2D, are two-dimensional counterparts of those studied in the signal denoising experiment: the ground-truth signal is a sum of 4 harmonic oscillations in $\mathbb{R}^2$ with random frequencies and amplitudes. The separation in the CoherentSpikes-2D scenario is $0.2\pi/m$ in each dimension of the torus $[0, 2\pi]^2$. The results for these scenarios are shown in Fig. 1.
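One trial of such a scenario could be generated along these lines (function names and the noise convention $\zeta = \xi^1 + \imath\xi^2$, following Eq. (1), are illustrative assumptions, not the authors' code):

```python
import numpy as np

def random_spikes_signal(n, n_spikes=4, rng=None):
    """One trial of a RandomSpikes-style scenario: a sum of harmonic
    oscillations with random amplitudes and random continuous frequencies
    in [0, 2*pi), normalized so that ||x||_2 = 1."""
    rng = np.random.default_rng(rng)
    t = np.arange(n)
    omega = rng.uniform(0.0, 2 * np.pi, size=n_spikes)  # spike positions
    c = rng.standard_normal(n_spikes) + 1j * rng.standard_normal(n_spikes)
    x = (c[None, :] * np.exp(1j * np.outer(t, omega))).sum(axis=1)
    return x / np.linalg.norm(x)

def noisy_observations(x, snr, rng=None):
    """Corrupt x (with ||x||_2 = 1) by i.i.d. complex Gaussian noise
    zeta = xi1 + i*xi2, with sigma chosen so that SNR^{-1} = sigma*sqrt(n)."""
    rng = np.random.default_rng(rng)
    n = len(x)
    sigma = 1.0 / (snr * np.sqrt(n))
    zeta = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    return x + sigma * zeta

x = random_spikes_signal(100, rng=0)
y = noisy_observations(x, snr=4, rng=1)
print(np.isclose(np.linalg.norm(x), 1.0))  # True
```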
Again, the proposed penalized $\ell_2$-recovery outperforms the Lasso method for all noise levels, especially for high signal-to-noise ratios.

In the scenario DimensionReduction-2D we investigate the problem of estimating a function with a hidden low-dimensional structure. We consider the single-index model of the regression function:
$$f(t) = g(\theta^T t), \quad g(\cdot) \in S_\beta^1(1). \qquad (13)$$
Here, $S_\beta^1(1) = \{g : \mathbb{R} \to \mathbb{R},\ \|g^{(\beta)}(\cdot)\|_2 \le 1\}$ is the Sobolev ball of smooth periodic functions on $[0,1]$, and the unknown structure is formalized as the direction $\theta$. In our experiments we sample the direction $\theta$ uniformly at random and consider different values of the smoothness index $\beta$. If it is known a priori that the regression function possesses the structure (13), and only the index is unknown, one can use estimators attaining "one-dimensional" rates of recovery; see e.g. [12] and references therein. In contrast, our recovery algorithms are not aware of the underlying structure but might still adapt to it. As shown in Fig. 2, the $\ell_2$-recovery performs well in this scenario despite the fact that the available theoretical bounds are pessimistic. For example, the signal (13) with a smooth $g$ can be approximated by a small number of harmonic oscillations in $\mathbb{R}^2$. As follows from the proof of [9, Proposition 10] combined with Theorem 4.1, for a sum of $k$ harmonic oscillations in $\mathbb{R}^d$ one can point out a reproducing linear filter with $\varrho(k) = O(k^{2d})$ (neglecting logarithmic factors), i.e. the theoretical guarantee is quite conservative for small values of $\beta$.

6 Details of algorithm implementation

Here we give a brief account of some techniques and implementation tricks exploited in our codes.

Solving the optimization problems  Note that the optimization problems (2) and (8) underlying the proposed recovery algorithms are well-structured Second-Order Conic Programs (SOCP) and
can be solved using Interior-point methods (IPM). However, the computational complexity of IPM applied to SOCP with dense matrices grows rapidly with the problem dimension, so that large problems of this type arising in signal and image processing are well beyond the reach of these techniques. On the other hand, these problems possess nice geometry associated with the complex $\ell_1$-norm. Moreover, their first-order information, the value of the objective and its gradient at a given $\varphi$, can be computed using the Fast Fourier Transform in time which is almost linear in the problem size. Therefore, we used first-order optimization algorithms, such as Mirror-Prox and Nesterov's accelerated gradient algorithms (see [14] and references therein), in our recovery implementation. A complete description of the application of these optimization algorithms to our problem is beyond the scope of the paper; we shall present it elsewhere.

Figure 2: Image denoising in the DimensionReduction scenario, comparing the Lasso [1] and the penalized $\ell_2$-recovery; smoothness decreases from left to right ($\beta$ = 2, 1, 0.5).

Interpolating recovery  In Sec. 2-3 we considered only recoveries which estimate the value $x_t$ of the signal via the observations at $n+1$ points $t-n, \ldots, t$ "on the left" (the filtering problem). To recover the whole signal, one may consider a more flexible alternative, interpolating recovery, which estimates $x_t$ using observations on the left and on the right of $t$. In particular, if the objective is to recover a signal on the interval $\{-n, \ldots, n\}$, one can apply interpolating recoveries which use the same observations $y_{-n}, \ldots, y_n$ to estimate $x_\tau$ at any $\tau \in \{-n, \ldots, n\}$, by altering the relative position of the filter and the current point.

Blockwise recovery  Ideally, when using pointwise recovery, a specific filter is constructed for each time instant $t$.
This may require a tremendous amount of computation, for instance, when recovering a high-resolution image. Alternatively, one may split the signal into blocks and process the points of each block using the same filter (cf. e.g. Theorem 2.1). For instance, a one-dimensional signal can be divided into blocks of length, say, $2m+1$, and to recover $x \in C(\mathbb{Z}_{-m}^m)$ in each block one may fit one filter of length $m+1$ recovering the right "half-block" $x_0^m$ and another filter recovering the left "half-block" $x_{-m}^{-1}$.

7 Conclusion

We introduced a new family of estimators for structure-blind signal recovery that can be computed using convex optimization. The proposed estimators enjoy oracle inequalities for the $\ell_2$-risk and for the pointwise risk. Extensive theoretical discussions and numerical experiments will be presented in the follow-up journal paper.

Acknowledgments

We would like to thank Arnak Dalalyan and Gabriel Peyré for fruitful discussions. DO, AJ, ZH were supported by the LabEx PERSYVAL-Lab (ANR-11-LABX-0025) and the project Titan (CNRS-Mastodons). ZH was also supported by the project Macaron (ANR-14-CE23-0003-01), the MSR-Inria joint centre, and the program "Learning in Machines and Brains" (CIFAR). Research of AN was supported by NSF grants CMMI-1262063, CCF-1523768.

References

[1] B. N. Bhaskar, G. Tang, and B. Recht. Atomic norm denoising with applications to line spectral estimation. IEEE Trans. Signal Processing, 61(23):5987–5999, 2013.
[2] D. L. Donoho. Statistical estimation and optimal recovery. Ann. Statist., 22(1):238–270, 1994.
[3] D. L. Donoho and M. G. Low. Renormalization exponents and optimal pointwise rates of convergence. Ann. Statist., 20(2):944–970, 1992.
[4] Z. Harchaoui, A. Juditsky, A. Nemirovski, and D. Ostrovsky. Adaptive recovery of signals by convex optimization. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 929–955, 2015.
[5] S. Haykin. Adaptive filter theory.
Prentice Hall, 1991.
[6] I. Ibragimov and R. Khasminskii. Nonparametric estimation of the value of a linear functional in Gaussian white noise. Theor. Probab. & Appl., 29(1):1–32, 1984.
[7] I. Ibragimov and R. Khasminskii. Estimation of linear functionals in Gaussian noise. Theor. Probab. & Appl., 32(1):30–39, 1988.
[8] A. Juditsky and A. Nemirovski. Nonparametric denoising of signals with unknown local structure, I: Oracle inequalities. Appl. & Comput. Harmon. Anal., 27(2):157–179, 2009.
[9] A. Juditsky and A. Nemirovski. Nonparametric estimation by convex programming. Ann. Statist., 37(5A):2278–2300, 2009.
[10] A. Juditsky and A. Nemirovski. Nonparametric denoising signals of unknown local structure, II: Nonparametric function recovery. Appl. & Comput. Harmon. Anal., 29(3):354–367, 2010.
[11] T. Kailath, A. Sayed, and B. Hassibi. Linear Estimation. Prentice Hall, 2000.
[12] O. Lepski and N. Serdyukova. Adaptive estimation under single-index constraint in a regression model. Ann. Statist., 42(1):1–28, 2014.
[13] S. Mallat. A wavelet tour of signal processing. Academic Press, 1999.
[14] Y. Nesterov and A. Nemirovski. On first-order algorithms for ℓ1/nuclear norm minimization. Acta Num., 22:509–575, 2013.
[15] D. Ostrovsky, Z. Harchaoui, A. Juditsky, and A. Nemirovski. Structure-Blind Signal Recovery. arXiv:1607.05712v2, Oct. 2016.
[16] A. Tsybakov. Introduction to Nonparametric Estimation. Springer, 2008.
[17] L. Wasserman. All of Nonparametric Statistics. Springer, 2006.
Reward Augmented Maximum Likelihood for Neural Structured Prediction Mohammad Norouzi Samy Bengio Zhifeng Chen Navdeep Jaitly Mike Schuster Yonghui Wu Dale Schuurmans {mnorouzi, bengio, zhifengc, ndjaitly}@google.com {schuster, yonghui, schuurmans}@google.com Google Brain Abstract A key problem in structured output prediction is direct optimization of the task reward function that matters for test evaluation. This paper presents a simple and computationally efficient approach to incorporate task reward into a maximum likelihood framework. By establishing a link between the log-likelihood and expected reward objectives, we show that an optimal regularized expected reward is achieved when the conditional distribution of the outputs given the inputs is proportional to their exponentiated scaled rewards. Accordingly, we present a framework to smooth the predictive probability of the outputs using their corresponding rewards. We optimize the conditional log-probability of augmented outputs that are sampled proportionally to their exponentiated scaled rewards. Experiments on neural sequence to sequence models for speech recognition and machine translation show notable improvements over a maximum likelihood baseline by using reward augmented maximum likelihood (RML), where the rewards are defined as the negative edit distance between the outputs and the ground truth labels. 1 Introduction Structured output prediction is ubiquitous in machine learning. Recent advances in natural language processing, machine translation, and speech recognition hinge on the development of better discriminative models for structured outputs and sequences. The foundations of learning structured output models were established by the seminal work on conditional random fields (CRFs) [17] and structured large margin methods [32], which demonstrate how generalization performance can be significantly improved when one considers the joint effects of the predictions across multiple output components. 
These models have evolved into their deep neural counterparts [29, 1] through the use of recurrent neural networks (RNN) with LSTM [13] cells and attention mechanisms [2]. A key problem in structured output prediction has always been to enable direct optimization of the task reward (loss) used for test evaluation. For example, in machine translation one seeks better BLEU scores, and in speech recognition better word error rates. Not surprisingly, almost all task reward metrics are not differentiable, hence hard to optimize. Neural sequence models (e.g. [29, 2]) optimize conditional log-likelihood, i.e. the conditional log-probability of the ground truth outputs given corresponding inputs. These models do not explicitly consider the task reward during training, hoping that conditional log-likelihood serves as a good surrogate for the task reward. Such methods make no distinction between alternative incorrect outputs: log-probability is only measured on the ground truth input-output pairs, and all alternative outputs are equally penalized through normalization, whether near or far from the ground truth target. We believe one can improve upon maximum likelihood (ML) sequence models if the difference in the rewards of alternative outputs is taken into account. Standard ML training, despite its limitations, has enabled the training of deep RNN models, leading to revolutionary advances in machine translation [29, 2, 21] and speech recognition [5–7]. A key property of ML training for locally normalized RNN models is that the objective function factorizes into individual loss terms, which could be efficiently optimized using stochastic gradient descent (SGD). This training procedure does not require any form of inference or sampling from the model during training, leading to computational efficiency and ease of implementation.
By contrast, almost all alternative formulations for training structure prediction models require some form of inference or sampling from the model at training time which slows down training, especially for deep RNNs (e.g. see large margin, search-based [8, 39], and expected risk optimization methods). Our work is inspired by the use of reinforcement learning (RL) algorithms, such as policy gradient [37], to optimize expected task reward [25]. Even though expected task reward seems like a natural objective, direct policy optimization faces significant challenges: unlike ML, a stochastic gradient given a mini-batch of training examples is extremely noisy and has a high variance; gradients need to be estimated via sampling from the model, which is a non-stationary distribution; the reward is often sparse in a high-dimensional output space, which makes it difficult to find any high value predictions, preventing learning from getting off the ground; and, finally, maximizing reward does not explicitly consider the supervised labels, which seems inefficient. In fact, all previous attempts at direct policy optimization for structured output prediction have started by bootstrapping from a previously trained ML solution [25, 27], using several heuristics and tricks to make learning stable. This paper presents a new approach to task reward optimization that combines the computational efficiency and simplicity of ML with the conceptual advantages of expected reward maximization. Our algorithm called reward augmented maximum likelihood (RML) simply adds a sampling step on top of the typical likelihood objective. Instead of optimizing conditional log-likelihood on training input-output pairs, given each training input, we first sample an output proportionally to its exponentiated scaled reward. Then, we optimize log-likelihood on such auxiliary output samples given corresponding inputs. 
When the reward for an output is defined as its similarity to a ground truth output, then the output sampling distribution is peaked at the ground truth output, and its concentration is controlled by a temperature hyper-parameter. Our theoretical analysis shows that the RML and regularized expected reward objectives optimize a KL divergence between the exponentiated reward and model distributions, but in opposite directions. Further, we show that at non-zero temperatures, the gap between the two criteria can be expressed by a difference of variances measured on interpolating distributions. This observation reveals how entropy regularized expected reward can be estimated by sampling from exponentiated scaled rewards, rather than sampling from the model distribution. Remarkably, we find that the RML approach achieves significantly improved results over state of the art maximum likelihood RNNs. We show consistent improvement on both speech recognition (TIMIT dataset) and machine translation (WMT'14 dataset), where output sequences are sampled according to their edit distance to the ground truth outputs. Surprisingly, we find that the best performance is achieved with output sampling distributions that shift a lot of the weight away from the ground truth outputs. In fact, in our experiments, the training algorithm rarely sees the original unperturbed outputs. Our results give further evidence that models trained with imperfect outputs and their reward values can improve upon models that are only exposed to a single ground truth output per input [12, 20].

2 Reward augmented maximum likelihood

Given a dataset of input-output pairs, $D \equiv \{(x^{(i)}, y^{*(i)})\}_{i=1}^{N}$, structured output models learn a parametric score function $p_\theta(y \mid x)$, which scores different output hypotheses, $y \in Y$. We assume that the set of possible outputs $Y$ is finite, e.g. English sentences up to a maximum length.
In a probabilistic model, the score function is normalized, while in a large-margin model the score may not be normalized. In either case, once the score function is learned, given an input $x$, the model predicts an output achieving maximal score,
$$\hat{y}(x) = \operatorname{argmax}_{y}\, p_\theta(y \mid x)\,. \quad (1)$$
If this optimization is intractable, approximate inference (e.g. beam search) is used. We use a reward function $r(y, y^*)$ to evaluate different proposed outputs against ground-truth outputs. Given a test dataset $D'$, one computes $\sum_{(x, y^*) \in D'} r(\hat{y}(x), y^*)$ as a measure of empirical reward. Since models with larger empirical reward are preferred, ideally one hopes to maximize empirical reward during training. However, since empirical reward is not amenable to numerical optimization, one often considers optimizing alternative differentiable objectives. The maximum likelihood (ML) framework tries to minimize the negative log-likelihood of the parameters given the data,
$$\mathcal{L}_{\mathrm{ML}}(\theta; D) = \sum_{(x, y^*) \in D} -\log p_\theta(y^* \mid x)\,. \quad (2)$$
Minimizing this objective increases the conditional probability of the target outputs, $\log p_\theta(y^* \mid x)$, while decreasing the conditional probability of alternative incorrect outputs. According to this objective, all negative outputs are equally wrong, and none is preferred over the others. By contrast, reinforcement learning (RL) advocates optimizing expected reward (with a maximum entropy regularizer [38]), which is formulated as minimization of the following objective,
$$\mathcal{L}_{\mathrm{RL}}(\theta; \tau, D) = \sum_{(x, y^*) \in D} \Big\{ -\tau \mathbb{H}\big(p_\theta(y \mid x)\big) - \sum_{y \in \mathcal{Y}} p_\theta(y \mid x)\, r(y, y^*) \Big\}\,, \quad (3)$$
where $r(y, y^*)$ denotes the reward function, e.g. negative edit distance or BLEU score, $\tau$ controls the degree of regularization, and $\mathbb{H}(p)$ is the entropy of a distribution $p$, i.e. $\mathbb{H}(p(y)) = -\sum_{y \in \mathcal{Y}} p(y) \log p(y)$. It is well known that optimizing $\mathcal{L}_{\mathrm{RL}}(\theta; \tau)$ using SGD is challenging because of the large variance of the gradients.
Below we describe how ML and RL objectives are related, and propose a hybrid between the two that combines their benefits for supervised learning. Let us define a distribution in the output space, termed the exponentiated payoff distribution, that is central in linking ML and RL objectives:
$$q(y \mid y^*; \tau) = \frac{1}{Z(y^*, \tau)} \exp\{r(y, y^*)/\tau\}\,, \quad (4)$$
where $Z(y^*, \tau) = \sum_{y \in \mathcal{Y}} \exp\{r(y, y^*)/\tau\}$. One can verify that the global minimum of $\mathcal{L}_{\mathrm{RL}}(\theta; \tau)$, i.e. the optimal regularized expected reward, is achieved when the model distribution matches the exponentiated payoff distribution, i.e. $p_\theta(y \mid x) = q(y \mid y^*; \tau)$. To see this, we re-express the objective function in (3) in terms of a KL divergence between $p_\theta(y \mid x)$ and $q(y \mid y^*; \tau)$,
$$\sum_{(x, y^*) \in D} D_{\mathrm{KL}}\big(p_\theta(y \mid x) \,\|\, q(y \mid y^*; \tau)\big) = \frac{1}{\tau}\mathcal{L}_{\mathrm{RL}}(\theta; \tau) + \mathrm{const}\,, \quad (5)$$
where the constant on the RHS is $\sum_{(x, y^*) \in D} \log Z(y^*, \tau)$. Thus, the minimum of $D_{\mathrm{KL}}(p_\theta \| q)$ and $\mathcal{L}_{\mathrm{RL}}$ is achieved when $p_\theta = q$. At $\tau = 0$, when there is no entropy regularization, the optimal $p_\theta$ is a delta distribution, $p_\theta(y \mid x) = \delta(y \mid y^*)$, where $\delta(y \mid y^*) = 1$ at $y = y^*$ and $0$ at $y \ne y^*$. Note that $\delta(y \mid y^*)$ is equivalent to the exponentiated payoff distribution in the limit as $\tau \to 0$. Returning to the log-likelihood objective, one can verify that (2) is equivalent to a KL divergence in the opposite direction between a delta distribution $\delta(y \mid y^*)$ and the model distribution $p_\theta(y \mid x)$,
$$\sum_{(x, y^*) \in D} D_{\mathrm{KL}}\big(\delta(y \mid y^*) \,\|\, p_\theta(y \mid x)\big) = \mathcal{L}_{\mathrm{ML}}(\theta)\,. \quad (6)$$
There is no constant on the RHS, as the entropy of a delta distribution is zero, i.e. $\mathbb{H}(\delta(y \mid y^*)) = 0$. We propose a method called reward-augmented maximum likelihood (RML), which generalizes ML by allowing a non-zero temperature parameter in the exponentiated payoff distribution, while still optimizing the KL divergence in the ML direction.
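The identity (5) is easy to check numerically on a toy output space: the regularized objective, rescaled by τ and shifted by log Z, equals the KL divergence from the model to the exponentiated payoff distribution, so it is minimized at p = q. A small sketch for a single training example, in our own notation:

```python
import math

def exp_payoff(r, tau):
    # q(y | y*; tau) over a finite output space (Eq. 4)
    z = sum(math.exp(ri / tau) for ri in r)
    return [math.exp(ri / tau) / z for ri in r]

def l_rl(p, r, tau):
    # per-example regularized objective from Eq. (3):
    # -tau * H(p) - sum_y p(y) * r(y)
    entropy = -sum(pi * math.log(pi) for pi in p)
    return -tau * entropy - sum(pi * ri for pi, ri in zip(p, r))

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

For any model distribution p, `kl(p, q)` equals `l_rl(p, r, tau) / tau + log Z`, matching (5), and `l_rl` is smallest at p = q.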
The RML objective function takes the form,
$$\mathcal{L}_{\mathrm{RML}}(\theta; \tau, D) = \sum_{(x, y^*) \in D} \Big\{ -\sum_{y \in \mathcal{Y}} q(y \mid y^*; \tau) \log p_\theta(y \mid x) \Big\}\,, \quad (7)$$
which can be re-expressed in terms of a KL divergence as follows,
$$\sum_{(x, y^*) \in D} D_{\mathrm{KL}}\big(q(y \mid y^*; \tau) \,\|\, p_\theta(y \mid x)\big) = \mathcal{L}_{\mathrm{RML}}(\theta; \tau) + \mathrm{const}\,, \quad (8)$$
where the constant is $-\sum_{(x, y^*) \in D} \mathbb{H}(q(y \mid y^*; \tau))$. Note that the temperature parameter, $\tau \ge 0$, serves as a hyper-parameter that controls the smoothness of the optimal distribution around correct targets by taking into account the reward function in the output space. The objective functions $\mathcal{L}_{\mathrm{RL}}(\theta; \tau)$ and $\mathcal{L}_{\mathrm{RML}}(\theta; \tau)$ have the same global optimum of $p_\theta$, but they optimize a KL divergence in opposite directions. We characterize the difference between these two objectives below, showing that they are equivalent up to their first order Taylor approximations. For optimization convenience, we focus on minimizing $\mathcal{L}_{\mathrm{RML}}(\theta; \tau)$ to achieve a good solution for $\mathcal{L}_{\mathrm{RL}}(\theta; \tau)$.

2.1 Optimization

Optimizing the reward augmented maximum likelihood (RML) objective, $\mathcal{L}_{\mathrm{RML}}(\theta; \tau)$, is straightforward if one can draw unbiased samples from $q(y \mid y^*; \tau)$. We can express the gradient of $\mathcal{L}_{\mathrm{RML}}$ in terms of an expectation over samples from $q(y \mid y^*; \tau)$,
$$\nabla_\theta \mathcal{L}_{\mathrm{RML}}(\theta; \tau) = \mathbb{E}_{q(y \mid y^*; \tau)}\big[ -\nabla_\theta \log p_\theta(y \mid x) \big]\,. \quad (9)$$
Thus, to estimate $\nabla_\theta \mathcal{L}_{\mathrm{RML}}(\theta; \tau)$ given a mini-batch of examples for SGD, one draws $y$ samples given the mini-batch $y^*$'s and then optimizes log-likelihood on such samples by following the mean gradient. At a temperature $\tau = 0$, this reduces to always sampling $y^*$, hence ML training with no sampling. By contrast, the gradient of $\mathcal{L}_{\mathrm{RL}}(\theta; \tau)$, based on likelihood ratio methods, takes the form,
$$\nabla_\theta \mathcal{L}_{\mathrm{RL}}(\theta; \tau) = \mathbb{E}_{p_\theta(y \mid x)}\big[ -\nabla_\theta \log p_\theta(y \mid x) \cdot r(y, y^*) \big]\,. \quad (10)$$
There are several critical differences between (9) and (10) that make SGD optimization of $\mathcal{L}_{\mathrm{RML}}(\theta; \tau)$ more desirable. First, in (9), one has to sample from a stationary distribution, the so-called exponentiated payoff distribution, whereas in (10) one has to sample from the model distribution as it is evolving.
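For a model that is a plain softmax over a small output set (an assumption we make purely for illustration; the paper uses deep RNNs), the per-example RML gradient (9) with respect to the logits has the closed form p − q, which makes the structure of the estimator concrete:

```python
import math

def softmax(s):
    m = max(s)
    exps = [math.exp(si - m) for si in s]
    z = sum(exps)
    return [e / z for e in exps]

def rml_grad_logits(s, q):
    """Gradient of sum_y q(y) * (-log softmax(s)[y]) w.r.t. the logits s.
    For a categorical softmax model this is exactly p - q."""
    p = softmax(s)
    return [pi - qi for pi, qi in zip(p, q)]
```

The Monte Carlo version of (9) replaces q with the empirical distribution of samples drawn from it; the expected update is unchanged.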
Not only does sampling from the model potentially slow down training, but one also needs to employ several tricks to get a better estimate of the gradient of $\mathcal{L}_{\mathrm{RL}}$ [25]. A body of literature in reinforcement learning focuses on reducing the variance of (10) by using sophisticated techniques such as actor-critic methods [30, 9]. Further, the reward is often sparse in a high-dimensional output space, which makes finding any reasonable prediction challenging when (10) is used to refine a randomly initialized model. Thus, smart model initialization is needed. By contrast, we initialize the models randomly and refine them using (9).

2.2 Sampling from the exponentiated payoff distribution

To compute the gradient of the model using the RML approach, one needs to sample auxiliary outputs from the exponentiated payoff distribution, $q(y \mid y^*; \tau)$. This sampling is the price that we have to pay to learn with rewards. One should contrast this with loss-augmented inference in structured large margin methods, and sampling from the model in RL. We believe sampling outputs proportional to exponentiated rewards is more efficient and effective in many cases. Experiments in this paper use reward values defined by either negative Hamming distance or negative edit distance. We sample from $q(y \mid y^*; \tau)$ by stratified sampling, where we first select a particular distance, and then sample an output with that distance value. Here we focus on edit distance sampling, as Hamming distance sampling is a simpler special case. Given a sentence $y^*$ of length $m$, we count the number of sentences within an edit distance $e$, where $e \in \{0, \ldots, 2m\}$. Then, we reweight the counts by $\exp\{-e/\tau\}$ and normalize. Let $c(e, m)$ denote the number of sentences at an edit distance $e$ from a sentence of length $m$. First, note that a deletion can be thought of as a substitution with a nil token.
This works out nicely because given a vocabulary of size $v$, for each insertion we have $v$ options, and for each substitution we have $v - 1$ options, but including the nil token, there are $v$ options for substitutions too. When $e = 1$, there are $m$ possible substitutions and $m + 1$ insertions. Hence, in total there are $(2m + 1)v$ sentences at an edit distance of 1. Note that exact computation of $c(e, m)$ is difficult if we consider all edge cases, for example when there are repetitive words in $y^*$, but ignoring such edge cases we can come up with approximate counts that are reliable for sampling. When $e > 1$, we estimate $c(e, m)$ by
$$c(e, m) = \sum_{s=0}^{m} \binom{m}{s} \binom{m + e - 2s}{e - s} v^e\,, \quad (11)$$
where $s$ enumerates over the number of substitutions. Once $s$ tokens are substituted, those $s$ positions lose their significance, and the insertions before and after such tokens can be merged. Hence, given $s$ substitutions, there are really $m - s$ reference positions for $e - s$ possible insertions. Finally, one can sample according to BLEU score or other sequence metrics by importance sampling, where the proposal distribution could be the edit distance sampling above.

3 RML analysis

In the RML framework, we find the model parameters by minimizing the objective (7) instead of optimizing the RL objective, i.e. regularized expected reward in (3). The difference lies in minimizing $D_{\mathrm{KL}}(q(y \mid y^*; \tau) \| p_\theta(y \mid x))$ instead of $D_{\mathrm{KL}}(p_\theta(y \mid x) \| q(y \mid y^*; \tau))$. For convenience, let's refer to $q(y \mid y^*; \tau)$ as $q$, and $p_\theta(y \mid x)$ as $p$. Here, we characterize the difference between the two divergences, $D_{\mathrm{KL}}(q \| p) - D_{\mathrm{KL}}(p \| q)$, and use this analysis to motivate the RML approach. We will initially consider the KL divergence in its more general form as a Bregman divergence, which will make some of the key properties clearer. A Bregman divergence is defined by a strictly convex, differentiable, closed potential function $F : \mathcal{F} \to \mathbb{R}$ [3].
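As an aside before the analysis, the sampling machinery of Section 2.2 can be transcribed directly: the approximate count (11) and the stratified distribution over edit distances it induces. Helper names are ours, and, as in the text, the count ignores edge cases such as repeated words:

```python
import math
from math import comb

def count_at_edit_distance(e, m, v):
    """Approximate number of sentences at edit distance e from a
    length-m sentence over a vocabulary of size v (Eq. 11)."""
    if e == 0:
        return 1
    return sum(comb(m, s) * comb(m + e - 2 * s, e - s)
               for s in range(min(m, e) + 1)) * v ** e

def edit_distance_weights(m, v, tau):
    """Stratified-sampling distribution over e in {0, ..., 2m}:
    counts reweighted by exp(-e / tau), then normalized."""
    w = [count_at_edit_distance(e, m, v) * math.exp(-e / tau)
         for e in range(2 * m + 1)]
    z = sum(w)
    return [wi / z for wi in w]
```

The e = 1 case reproduces the (2m + 1)v count derived above, which serves as a quick sanity check.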
Given $F$ and two points $p, q \in \mathcal{F}$, the corresponding Bregman divergence $D_F : \mathcal{F} \times \mathcal{F} \to \mathbb{R}^+$ is defined by
$$D_F(p \| q) = F(p) - F(q) - (p - q)^\top \nabla F(q)\,, \quad (12)$$
the difference between the strictly convex potential at $p$ and its first order Taylor approximation expanded about $q$. Clearly this definition is not symmetric between $p$ and $q$. By the strict convexity of $F$ it follows that $D_F(p \| q) \ge 0$, with $D_F(p \| q) = 0$ if and only if $p = q$. To characterize the difference between opposite Bregman divergences, we provide a simple result that relates the two directions under suitable conditions. Let $H_F$ denote the Hessian of $F$.

Proposition 1. For any twice differentiable strictly convex closed potential $F$, and $p, q \in \mathrm{int}(\mathcal{F})$:
$$D_F(q \| p) = D_F(p \| q) + \tfrac{1}{4}(p - q)^\top \big( H_F(a) - H_F(b) \big)(p - q) \quad (13)$$
for some $a = (1 - \alpha)p + \alpha q$, $(0 \le \alpha \le \tfrac{1}{2})$, and $b = (1 - \beta)q + \beta p$, $(0 \le \beta \le \tfrac{1}{2})$. (see supp. material)

For probability vectors $p, q \in \Delta^{|\mathcal{Y}|}$ and a potential $F(p) = -\tau \mathbb{H}(p)$, $D_F(p \| q) = \tau D_{\mathrm{KL}}(p \| q)$. Let $f^* : \mathbb{R}^{|\mathcal{Y}|} \to \Delta^{|\mathcal{Y}|}$ denote a normalized exponential operator that takes a real-valued logit vector and turns it into a probability vector. Let $r$ and $s$ denote real-valued logit vectors such that $q = f^*(r/\tau)$ and $p = f^*(s/\tau)$. Below, we characterize the gap between $D_{\mathrm{KL}}(p(y) \| q(y))$ and $D_{\mathrm{KL}}(q(y) \| p(y))$ in terms of the difference between $s(y)$ and $r(y)$.

Proposition 2. The KL divergence between $p$ and $q$ in the two directions can be expressed as,
$$D_{\mathrm{KL}}(p \| q) = D_{\mathrm{KL}}(q \| p) + \frac{1}{4\tau^2}\, \mathrm{Var}_{y \sim f^*(a/\tau)}\big[s(y) - r(y)\big] - \frac{1}{4\tau^2}\, \mathrm{Var}_{y \sim f^*(b/\tau)}\big[s(y) - r(y)\big] < D_{\mathrm{KL}}(q \| p) + \frac{1}{\tau^2} \|s - r\|_2^2\,,$$
for some $a = (1-\alpha)s + \alpha r$, $(0 \le \alpha \le \tfrac{1}{2})$, and $b = (1-\beta)r + \beta s$, $(0 \le \beta \le \tfrac{1}{2})$. (see supp. material)

Given Proposition 2, one can relate the two objectives, $\mathcal{L}_{\mathrm{RL}}(\theta; \tau)$ (5) and $\mathcal{L}_{\mathrm{RML}}(\theta; \tau)$ (8), by
$$\mathcal{L}_{\mathrm{RL}} = \tau \mathcal{L}_{\mathrm{RML}} + \frac{1}{4\tau} \sum_{(x, y^*) \in D} \Big\{ \mathrm{Var}_{y \sim f^*(a/\tau)}\big[s(y) - r(y)\big] - \mathrm{Var}_{y \sim f^*(b/\tau)}\big[s(y) - r(y)\big] \Big\} + \mathrm{const}\,, \quad (14)$$
where $s(y)$ denotes the $\tau$-scaled logits predicted by the model such that $p_\theta(y \mid x) = f^*(s(y)/\tau)$, and $r(y) = r(y, y^*)$.
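For the potential F(p) = −τH(p) used above, the Bregman divergence (12) reduces to τ·D_KL(p‖q), which is straightforward to verify numerically. A sketch in our own notation, restricted to the probability simplex:

```python
import math

def neg_entropy(p, tau):
    # potential F(p) = -tau * H(p) = tau * sum_i p_i log p_i
    return tau * sum(pi * math.log(pi) for pi in p)

def grad_neg_entropy(q, tau):
    return [tau * (math.log(qi) + 1.0) for qi in q]

def bregman(p, q, tau):
    # D_F(p || q) = F(p) - F(q) - (p - q)^T grad F(q)   (Eq. 12)
    g = grad_neg_entropy(q, tau)
    return (neg_entropy(p, tau) - neg_entropy(q, tau)
            - sum((pi - qi) * gi for pi, qi, gi in zip(p, q, g)))

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The "+1" terms in the gradient cancel on the simplex because the entries of p and q each sum to one, leaving exactly the τ-scaled KL divergence.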
The gap between the regularized expected reward (5) and the $\tau$-scaled RML criterion (8) is simply a difference of two variances, whose magnitude decreases with increasing regularization. Proposition 2 also shows an opportunity for learning algorithms: if $\tau$ is chosen so that $q = f^*(r/\tau)$, and $f^*(a/\tau)$ and $f^*(b/\tau)$ have lower variance than $p$ (which can always be achieved for sufficiently small $\tau$ provided $p$ is not deterministic), then the expected regularized reward under $p$, and its gradient for training, can be exactly estimated, in principle, by including the extra variance terms and sampling from more focused distributions than $p$. Although we have not yet incorporated approximations to the additional variance terms into RML, this is an interesting research direction.

4 Related Work

The literature on structured output prediction is vast, falling into three broad categories: (a) supervised learning approaches that ignore task reward and use supervision; (b) reinforcement learning approaches that use only task reward and ignore supervision; and (c) hybrid approaches that attempt to exploit both supervision and task reward. This paper clearly falls in category (c). Work in category (a) includes classical conditional random fields [17] and conditional log-likelihood training of RNNs [29, 2]. It also includes approaches that attempt to perturb the training inputs and supervised training structures to improve the robustness (and hopefully the generalization) of the conditional models (e.g. see [4, 16]). These approaches offer improvements to standard maximum likelihood estimation, but they are fundamentally limited by not incorporating a task reward. By contrast, work in category (b) includes reinforcement learning approaches that only consider task reward and do not use any other supervision. Beyond the traditional reinforcement learning approaches, such as policy gradient [37, 31], actor-critic [30], and Q-learning [34], this category includes SEARN [8].
There is some relationship between the work presented here and work on relative entropy policy search [23], and policy optimization via expectation maximization [35] and KL-divergence [14, 33]; however, none of these bridge the gap between the two directions of the KL-divergence, nor do they consider any supervision data as we do here. There is also a substantial body of related work in category (c), which considers how to exploit supervision information while training with a task reward metric. A canonical example is large margin structured prediction [32, 11], which explicitly uses supervision and considers an upper-bound surrogate for task loss. This approach requires loss-augmented inference that cannot be efficiently achieved for general task losses. We are not aware of successful large-margin methods for neural sequence prediction, but a related approach by [39] for neural machine translation builds on SEARN [8]. Some form of inference during training is still needed, and the characteristics of the objective are not well studied. We also mentioned the work on maximizing task reward by bootstrapping from a maximum likelihood policy [25, 27], but such an approach only makes limited use of supervision. Some work in robotics has considered exploiting supervision as a means to provide indirect sampling guidance to improve policy search methods that maximize task reward [18, 19, 26], but these approaches do not make use of maximum likelihood training. An interesting work is [15], which explicitly incorporates supervision in the policy evaluation phase of a policy iteration procedure that otherwise seeks to maximize task reward. However, this approach only considers a greedy policy form that does not lend itself to being represented as a deep RNN, and has not been applied to structured output prediction.
Most relevant are ideas for improving approximate maximum likelihood training for intractable models by passing the gradient calculation through an approximate inference procedure [10, 28]. These works, however, are specialized to particular approximate inference procedures, and, by directly targeting expected reward, are subject to the variance problems that motivated this work. One advantage of the RML framework is its computational efficiency at training time. By contrast, RL and scheduled sampling [4] require sampling from the model, which can slow down the gradient computation by 2×. Structural SVM requires loss-augmented inference, which is often more expensive than sampling from the model. Our framework only requires sampling from a fixed exponentiated payoff distribution, which can be thought of as a form of input pre-processing. This pre-processing can be parallelized with model training by having a thread that handles data loading and augmentation. Recently, we were informed of the unpublished work of Volkovs et al. [36], which also proposes an objective like RML, albeit with a different derivation. No theoretical relation was established to entropy regularized RL, nor was the method applied to neural nets for sequences, but large gains were reported over several baselines applying the technique to ranking problems with CRFs.

5 Experiments

We compare our approach, reward augmented maximum likelihood (RML), with standard maximum likelihood (ML) training on sequence prediction tasks using state-of-the-art attention-based recurrent neural networks [29, 2]. Our experiments demonstrate that the RML approach considerably outperforms the ML baseline on both speech recognition and machine translation tasks.

5.1 Speech recognition

For experiments on speech recognition, we use the TIMIT dataset, a standard benchmark for clean phone recognition.
This dataset consists of recordings from different speakers reading ten phonetically rich sentences covering major dialects of American English. We use the standard train / dev / test splits suggested by the Kaldi toolkit [24]. As the sequence prediction model, we use an attention-based encoder-decoder recurrent model of [5] with three 256-dimensional LSTM layers for encoding and one 256-dimensional LSTM layer for decoding. We do not modify the neural network architecture or its gradient computation in any way; we only change the output targets fed into the network for gradient computation and SGD update. The input to the network is a standard sequence of 123-dimensional log-mel filter response statistics. Given each input, we generate new outputs around ground truth targets by sampling according to the exponentiated payoff distribution. We use negative edit distance as the measure of reward. Our output augmentation process allows insertions, deletions, and substitutions.

[Figure 1: Fraction of different number of edits applied to a sequence of length 20 for different τ. At τ = 0.9, augmentations with 5 to 9 edits are sampled with a probability > 0.1. (view in color)]

Table 1: Phone error rates (PER) for different methods on TIMIT dev and test sets. Average PER of 4 independent training runs is reported.

Method          | Dev set            | Test set
ML baseline     | 20.87 (−0.2, +0.3) | 22.18 (−0.4, +0.2)
RML, τ = 0.60   | 19.92 (−0.6, +0.3) | 21.65 (−0.5, +0.4)
RML, τ = 0.65   | 19.64 (−0.2, +0.5) | 21.28 (−0.6, +0.4)
RML, τ = 0.70   | 18.97 (−0.1, +0.1) | 21.28 (−0.5, +0.4)
RML, τ = 0.75   | 18.44 (−0.4, +0.4) | 20.15 (−0.4, +0.4)
RML, τ = 0.80   | 18.27 (−0.2, +0.1) | 19.97 (−0.1, +0.2)
RML, τ = 0.85   | 18.10 (−0.4, +0.3) | 19.97 (−0.3, +0.2)
RML, τ = 0.90   | 18.00 (−0.4, +0.3) | 19.89 (−0.4, +0.7)
RML, τ = 0.95   | 18.46 (−0.1, +0.1) | 20.12 (−0.2, +0.1)
RML, τ = 1.00   | 18.78 (−0.6, +0.8) | 20.41 (−0.2, +0.5)
An important hyper-parameter in our framework is the temperature parameter, τ, controlling the degree of output augmentation. We investigate the impact of this hyper-parameter and report results for τ selected from a candidate set of τ ∈ {0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0}. At a temperature of τ = 0, outputs are not augmented at all, but as τ increases, more augmentation is generated. Figure 1 depicts the fraction of different numbers of edits applied to a sequence of length 20 for different values of τ. These edits typically include a very small number of deletions, and roughly equal numbers of insertions and substitutions. For insertions and substitutions we uniformly sample elements from a vocabulary of 61 phones. According to Figure 1, at τ = 0.6, more than 60% of the outputs remain intact, while at τ = 0.9, almost all target outputs are being augmented, with 5 to 9 edits being sampled with a probability larger than 0.1. We note that the augmentation becomes more severe as the outputs get longer. The phone error rates (PER) on both dev and test sets for different values of τ and the ML baseline are reported in Table 1. Each model is trained and tested 4 times, using different random seeds. In Table 1, we report average PER across the runs, and in parentheses the difference of average error to minimum and maximum error. We observe that a temperature of τ = 0.9 provides the best results, outperforming the ML baseline by 2.9% PER on the dev set and 2.3% PER on the test set. The results consistently improve when the temperature increases from 0.6 to 0.9, and they get worse beyond τ = 0.9. It is surprising to us that not only does the model train with such a large amount of augmentation at τ = 0.9, but it also significantly improves upon the baseline. Finally, we note that previous work [6, 7] suggests several refinements to improve sequence-to-sequence models on TIMIT by adding noise to the weights and using a more focused forward-moving attention mechanism.
While these refinements are interesting and could be combined with the RML framework, in this work we do not implement such refinements, and focus specifically on a fair comparison between the ML baseline and the RML method.

Table 2: Tokenized BLEU score on WMT'14 English to French evaluated on the newstest-2014 set. The RML approach with different τ considerably improves upon the maximum likelihood baseline.

Method          | Average BLEU | Best BLEU
ML baseline     | 36.50        | 36.87
RML, τ = 0.75   | 36.62        | 36.91
RML, τ = 0.80   | 36.80        | 37.11
RML, τ = 0.85   | 36.91        | 37.23
RML, τ = 0.90   | 36.69        | 37.07
RML, τ = 0.95   | 36.57        | 36.94

5.2 Machine translation

We evaluate the effectiveness of the proposed approach on the WMT'14 English to French machine translation benchmark. Translation quality is assessed using tokenized BLEU score, to be consistent with previous work on neural machine translation [29, 2, 22]. Models are trained on the full 36M sentence pairs from the WMT'14 training set, and evaluated on the 3003 sentence pairs of the newstest-2014 test set. To keep the sampling process efficient and simple on such a large corpus, we augment the output sentences only based on Hamming distance (i.e. edit distance without insertions or deletions). For each sentence we sample a single output at each step. One can consider insertions and deletions or sampling according to exponentiated sentence BLEU scores, but we leave that to future work. As the conditional sequence prediction model, we use an attention-based encoder-decoder recurrent neural network similar to [2], but we use multi-layer encoder and decoder networks consisting of three layers of 1024 LSTM cells. As suggested by [2], for computing the softmax attention vectors, we use a feedforward neural network with 1024 hidden units, which operates on the last encoder and the first decoder layers. In all of the experiments, we keep the network architecture and the hyper-parameters fixed.
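The Hamming-distance sampler used here is simpler than the edit-distance one: the number of outputs at Hamming distance e from a length-m sentence over a vocabulary of size v is C(m, e)(v − 1)^e, so one can draw a distance and then substitute tokens at random positions. A sketch under our own naming, not the paper's implementation:

```python
import math
import random
from math import comb

def sample_hamming_augmented(y_star, vocab, tau, rng=random):
    """Sample one output around y_star with probability proportional to
    exp(-e / tau), where e is the Hamming distance: first draw e, then
    substitute e randomly chosen positions with different tokens."""
    m, v = len(y_star), len(vocab)
    # counts of sequences at Hamming distance e, reweighted by exp(-e/tau)
    weights = [comb(m, e) * (v - 1) ** e * math.exp(-e / tau)
               for e in range(m + 1)]
    e = rng.choices(range(m + 1), weights=weights, k=1)[0]
    y = list(y_star)
    for i in rng.sample(range(m), e):
        y[i] = rng.choice([t for t in vocab if t != y[i]])
    return y
```

At very small τ the weight on e = 0 dominates and the sampler returns the ground truth itself.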
All of the models achieve their peak performance after about 4 epochs of training, once we anneal the learning rates. To reduce the noise in the BLEU score evaluation, we report both peak BLEU score and BLEU score averaged over about 70 evaluations of the model while doing the fifth epoch of training. We perform beam search decoding with a beam size of 8. Table 2 summarizes our experimental results on WMT'14. We note that our ML translation baseline is quite strong, if not the best among neural machine translation models [29, 2, 22], achieving very competitive performance for a single model. Even given such a strong baseline, the RML approach consistently improves the results. Our best model with a temperature τ = 0.85 improves average BLEU by 0.4, and best BLEU by 0.35 points, which is a considerable improvement. Again we observe that as we increase the amount of augmentation from τ = 0.75 to τ = 0.85, the results consistently get better, and then they start to get worse with more augmentation.

Details. We train the models using asynchronous SGD with 12 replicas without momentum. We use mini-batches of size 128. We initially use a learning rate of 0.5, which we then exponentially decay to 0.05 after 800K steps. We keep evaluating the models between 1.1 and 1.3 million steps and report average and peak BLEU scores in Table 2. We use a vocabulary of 200K words for the source language and 80K for the target language. We only consider training sentences that are up to 80 tokens long. We replace rare words with several UNK tokens based on their first and last characters. At inference time, we replace UNK tokens in the output sentences by copying source words according to the largest attention activations, as suggested by [22].

6 Conclusion

We present a learning algorithm for structured output prediction that generalizes maximum likelihood training by enabling direct optimization of a task reward metric. Our method is computationally efficient and simple to implement.
It only requires augmentation of the output targets used within a log-likelihood objective. We show how using augmented outputs sampled according to edit distance improves a maximum likelihood baseline by a considerable margin, on both machine translation and speech recognition tasks. We believe this framework is applicable to a wide range of probabilistic models with arbitrary reward functions. In the future, we intend to explore the applicability of this framework to other probabilistic models on tasks with more complicated evaluation metrics.

References

[1] D. Andor, C. Alberti, D. Weiss, A. Severyn, A. Presta, K. Ganchev, S. Petrov, and M. Collins. Globally normalized transition-based neural networks. arXiv:1603.06042, 2016.
[2] D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
[3] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh. Clustering with Bregman divergences. JMLR, 2005.
[4] S. Bengio, O. Vinyals, N. Jaitly, and N. M. Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. NIPS, 2015.
[5] W. Chan, N. Jaitly, Q. V. Le, and O. Vinyals. Listen, attend and spell. ICASSP, 2016.
[6] J. Chorowski, D. Bahdanau, K. Cho, and Y. Bengio. End-to-end continuous speech recognition using attention-based recurrent NN: first results. arXiv:1412.1602, 2014.
[7] J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, and Y. Bengio. Attention-based models for speech recognition. NIPS, 2015.
[8] H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction. Mach. Learn. J., 2009.
[9] T. Degris, P. M. Pilarski, and R. S. Sutton. Model-free reinforcement learning with continuous action in practice. ACC, 2012.
[10] J. Domke. Generic methods for optimization-based modeling. AISTATS, 2012.
[11] K. Gimpel and N. A. Smith. Softmax-margin CRFs: Training log-linear models with cost functions. NAACL, 2010.
[12] G. Hinton, O. Vinyals, and J. Dean.
Distilling the knowledge in a neural network. arXiv:1503.02531, 2015.
[13] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 1997.
[14] H. J. Kappen, V. Gómez, and M. Opper. Optimal control as a graphical model inference problem. Mach. Learn. J., 2012.
[15] B. Kim, A. M. Farahmand, J. Pineau, and D. Precup. Learning from limited demonstrations. NIPS, 2013.
[16] A. Kumar, O. Irsoy, J. Su, J. Bradbury, R. English, B. Pierce, P. Ondruska, I. Gulrajani, and R. Socher. Ask me anything: Dynamic memory networks for natural language processing. ICML, 2016.
[17] J. D. Lafferty, A. McCallum, and F. C. N. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. ICML, 2001.
[18] S. Levine and V. Koltun. Guided policy search. ICML, 2013.
[19] S. Levine and V. Koltun. Variational policy search via trajectory optimization. NIPS, 2013.
[20] D. Lopez-Paz, B. Schölkopf, L. Bottou, and V. Vapnik. Unifying distillation and privileged information. ICLR, 2016.
[21] M.-T. Luong, H. Pham, and C. D. Manning. Effective approaches to attention-based neural machine translation. EMNLP, 2015.
[22] M.-T. Luong, I. Sutskever, Q. V. Le, O. Vinyals, and W. Zaremba. Addressing the rare word problem in neural machine translation. ACL, 2015.
[23] J. Peters, K. Mülling, and Y. Altün. Relative entropy policy search. AAAI, 2010.
[24] D. Povey, A. Ghoshal, G. Boulianne, et al. The Kaldi speech recognition toolkit. ASRU, 2011.
[25] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. ICLR, 2016.
[26] S. Shen, Y. Cheng, Z. He, W. He, H. Wu, M. Sun, and Y. Liu. Minimum risk training for neural machine translation. ACL, 2016.
[27] D. Silver et al. Mastering the game of Go with deep neural networks and tree search. Nature, 2016.
[28] V. Stoyanov, A. Ropson, and J. Eisner.
Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. AISTATS, 2011.
[29] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. NIPS, 2014.
[30] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[31] R. S. Sutton, D. A. McAllester, S. P. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. NIPS, 2000.
[32] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. NIPS, 2004.
[33] E. Todorov. Linearly-solvable Markov decision problems. NIPS, 2006.
[34] H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. arXiv:1509.06461, 2015.
[35] N. Vlassis, M. Toussaint, G. Kontes, and S. Piperidis. Learning model-free robot control by a Monte Carlo EM algorithm. Autonomous Robots, 2009.
[36] M. Volkovs, H. Larochelle, and R. Zemel. Loss-sensitive training of probabilistic conditional random fields. arXiv:1107.1805v1, 2011.
[37] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. J., 1992.
[38] R. J. Williams and J. Peng. Function optimization using connectionist reinforcement learning algorithms. Connection Science, 1991.
[39] S. Wiseman and A. M. Rush. Sequence-to-sequence learning as beam-search optimization. arXiv:1606.02960, 2016.
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning

Mehdi Sajjadi, Mehran Javanmardi, Tolga Tasdizen
Department of Electrical and Computer Engineering, University of Utah
{mehdi, mehran, tolga}@sci.utah.edu

Abstract

Effective convolutional neural networks are trained on large sets of labeled data. However, creating large labeled datasets is a very costly and time-consuming task. Semi-supervised learning uses unlabeled data to train a model with higher accuracy when there is a limited set of labeled data available. In this paper, we consider the problem of semi-supervised learning with convolutional neural networks. Techniques such as randomized data augmentation, dropout and random max-pooling provide better generalization and stability for classifiers that are trained using gradient descent. Multiple passes of an individual sample through the network might lead to different predictions due to the non-deterministic behavior of these techniques. We propose an unsupervised loss function that takes advantage of the stochastic nature of these methods and minimizes the difference between the predictions of multiple passes of a training sample through the network. We evaluate the proposed method on several benchmark datasets.

1 Introduction

Convolutional neural networks (ConvNets) [1, 2] achieve state-of-the-art accuracy on a variety of computer vision tasks, including classification, object localization, detection, recognition and scene labeling [3, 4]. The advantage of ConvNets partially originates from their complexity (large number of parameters), but this can result in overfitting without a large amount of training data. However, creating a large labeled dataset is very costly. A notable example is the 'ImageNet' [5] dataset with 1000 categories and more than 1 million training images. The state-of-the-art accuracy on this dataset is improved every year using ConvNet-based methods (e.g., [6, 7]).
This dataset is the result of significant manual effort. However, with around 1000 images per category, it barely contains enough training samples to prevent the ConvNet from overfitting [7]. On the other hand, unlabeled data is cheap to collect. For example, there are numerous online resources for images and video sequences of different types. Therefore, there has been increased interest in exploiting the readily available unlabeled data to improve the performance of ConvNets. Randomization plays an important role in the majority of learning systems. Stochastic gradient descent, dropout [8], randomized data transformation and augmentation [9] and many other training techniques that are essential for fast convergence and effective generalization introduce non-deterministic behavior to the learning system. Due to these uncertainties, passing a single data sample through a learning system multiple times might lead to different predictions. Based on this observation, we introduce an unsupervised loss function, optimized by gradient descent, that takes advantage of this randomization effect and minimizes the difference in predictions of multiple passes of a data sample through the network during the training phase, which leads to better generalization at test time. The proposed unsupervised loss function specifically regularizes the network based on the variations caused by randomized data augmentation, dropout and randomized max-pooling schemes. This loss function can be combined with any supervised loss function. In this paper, we apply the proposed unsupervised loss function to ConvNets as a state-of-the-art supervised classifier. We show through numerous experiments that this combination leads to a competitive semi-supervised learning method.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

2 Related Work
There are many approaches to semi-supervised learning in general.
Self-training and co-training [10, 11] are two well-known classic examples. Another set of approaches is based on generative models, for example, methods based on Gaussian Mixture Models (GMM) and Hidden Markov Models (HMM) [12]. These generative models generally try to use unlabeled data in modeling the joint probability distribution of the training data and labels. Transductive SVM (TSVM) [13] and S3VM [14] are semi-supervised approaches that try to find a decision boundary with a maximum margin on both labeled and unlabeled data. A large group of semi-supervised methods is based on graphs and the similarities between samples [15, 16]. For example, if a labeled sample is similar to an unlabeled sample, its label is assigned to that unlabeled sample. In these methods, the similarities are encoded in the edges of a graph. Label propagation [17] is an example of these methods, in which the goal is to minimize the difference between the model predictions of two samples connected by a heavily weighted edge. In other words, similar samples tend to get similar predictions. In this paper, our focus is on semi-supervised deep learning. There has always been interest in exploiting unlabeled data to improve the performance of ConvNets. One approach is to use unlabeled data to pre-train the filters of a ConvNet [18, 19]. The goal is to reduce the number of training epochs required to converge and improve the accuracy compared to a model trained from random initialization. Predictive sparse decomposition (PSD) [20] is one example of these methods, used for learning the weights in the filter bank layer. The works presented in [21] and [22] are two recent examples of learning features by pre-training ConvNets using unlabeled data. In these approaches, an auxiliary target is defined for a pair of unlabeled images [21] or a pair of patches from a single unlabeled image [22]. Then a pair of ConvNets is trained to learn descriptive features from unlabeled images.
These features can be fine-tuned for a specific task with a limited set of labeled data. However, many recent ConvNet models with state-of-the-art accuracy start from randomly initialized weights using techniques such as Xavier’s method [23, 6]. Therefore, approaches that make better use of unlabeled data during training, instead of only for pre-training, are more desirable. Another example of semi-supervised learning with ConvNets is region embedding [24], which is used for text categorization. The work in [25] is also a deep semi-supervised learning method based on embedding techniques. Unlabeled video frames are also being used to train ConvNets [26, 27]. The target of the ConvNet is calculated based on the correlations between video frames. Another notable example is semi-supervised learning with ladder networks [28], in which the sums of supervised and unsupervised loss functions are simultaneously minimized by backpropagation. In this method, a feedforward model is assumed to be an encoder. The proposed network consists of a noisy encoder path and a clean one. A decoder is added to each layer of the noisy path and is supposed to reconstruct the clean activation of that layer. The unsupervised loss function is the difference between the output of each layer in the clean path and its corresponding reconstruction from the noisy path. Another approach, by [29], is to take a random unlabeled sample and generate multiple instances by randomly transforming that sample multiple times. The resulting set of images forms a surrogate class. Multiple surrogate classes are produced and a ConvNet is trained on them. One disadvantage of this method is that it does not scale well with the number of unlabeled examples, because a separate class is needed for every training sample during unsupervised training. In [30], the authors propose a mutual-exclusivity loss function that forces the set of predictions for a multiclass dataset to be mutually exclusive.
In other words, it forces the classifier’s prediction to be close to one for only one class and zero for the others. It is shown that this loss function makes use of unlabeled data and pushes the decision boundary to a less dense area of the decision space. Another set of works related to our approach try to restrict the variations of the prediction function. Tangent distance and tangent propagation, proposed by [31], enforce local classification invariance with respect to transformations of the input images. Here, we propose a simpler method that additionally minimizes the internal variations of the network caused by dropout and randomized pooling and leads to state-of-the-art results on MNIST (with 100 labeled samples), CIFAR10 and CIFAR100. Another example is Slow Feature Analysis (SFA) (e.g., [32] and [33]), which encourages the representations of temporally close data to exhibit small differences.

3 Method
Given any training sample, a model’s prediction should be the same under any random transformation of the data and perturbations to the model. The transformations can be any linear and non-linear data augmentation being used to extend the training data. The disturbances include dropout techniques and randomized pooling schemes. In each pass, each sample can be randomly transformed or the hidden nodes can be randomly activated. As a result, the network’s prediction can be different for multiple passes of the same training sample. However, we know that each sample is assigned to only one class. Therefore, the network’s prediction is expected to be the same despite transformations and disturbances. We introduce an unsupervised loss function that minimizes the mean squared differences between different passes of an individual training sample through the network. Note that we do not need to know the label of a training sample in order to enforce this loss.
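The observation that stochastic training techniques make repeated passes disagree can be illustrated with a toy model (a minimal NumPy sketch of a hypothetical two-layer network with inverted dropout; the weights and sizes are arbitrary and not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative weights for a tiny two-layer network.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 3))

def forward(x, drop_p=0.5):
    """One stochastic pass: ReLU hidden layer with inverted dropout,
    followed by a softmax output."""
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) > drop_p   # a fresh dropout mask on every pass
    h = h * mask / (1.0 - drop_p)         # inverted-dropout scaling
    z = h @ W2
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.normal(size=8)
p1, p2 = forward(x), forward(x)           # same input, two passes
print(np.abs(p1 - p2).max())              # the two predictions disagree
```

Because a different dropout mask is drawn on each call, the same input generally yields different prediction vectors, which is exactly the variation the proposed loss penalizes.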
Therefore, the proposed loss function is completely unsupervised and can be used along with supervised training as a semi-supervised learning method. Even if we do not have a separate unlabeled set, we can apply the proposed loss function to samples of the labeled set to enforce stability. Here, we formally define the proposed unsupervised loss function. We start with a dataset of N training samples and C classes. Let f^j(x_i) be the classifier’s prediction vector on the i-th training sample during the j-th pass through the network. We assume that each training sample is passed n times through the network. We define T^j(x_i) to be a random linear or non-linear transformation applied to the training sample x_i before the j-th pass through the network. The proposed loss function is:

l^{TS}_U = \sum_{i=1}^{N} \sum_{j=1}^{n-1} \sum_{k=j+1}^{n} \| f^j(T^j(x_i)) - f^k(T^k(x_i)) \|_2^2    (1)

where ‘TS’ stands for transformation/stability. We pass a training sample through the network n times. In each pass, the transformation T^j(x_i) produces a different input to the network from the original training sample. In addition, each time the randomness inside the network, which can be caused by dropout or randomized pooling schemes, leads to a different prediction output. We minimize the sum of squared differences between each possible pair of predictions. We can minimize this objective function using gradient descent. Although Eq. 1 depends quadratically on the number of augmented versions of the data (n), the loss and gradient are calculated only from the prediction vectors, so the computational cost is negligible even for large n. Note that recent neural-network-based methods are optimized on batches of training samples instead of a single sample (batch vs. online training). We can design batches to contain replications of training samples so that this transformation/stability loss function is easy to optimize.
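The inner sum of Eq. 1 for a single sample can be sketched as follows (a minimal NumPy illustration; the function name is ours, not from the paper):

```python
import numpy as np

def transform_stability_loss(preds):
    """preds: (n, C) array of prediction vectors from n stochastic passes
    of ONE training sample. Returns the squared l2 distance summed over
    all pairs j < k, i.e. the per-sample term of Eq. 1."""
    n = preds.shape[0]
    loss = 0.0
    for j in range(n - 1):
        for k in range(j + 1, n):
            loss += np.sum((preds[j] - preds[k]) ** 2)
    return loss

# Perfectly agreeing passes incur zero loss; disagreement is penalised.
agree = np.array([[0.1, 0.9], [0.1, 0.9]])
disagree = np.array([[1.0, 0.0], [0.0, 1.0]])
print(transform_stability_loss(agree))     # 0.0
print(transform_stability_loss(disagree))  # 2.0
```

Summing this term over all samples in a mini-batch (with each sample replicated n times, as described above) gives the full loss of Eq. 1.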
If we use data augmentation, we put different transformed versions of an unlabeled sample in the mini-batch instead of replications. This unsupervised loss function can be used with any backpropagation-based algorithm. Even though every mini-batch contains replications of a training sample, they are used to calculate a single backpropagation signal, which avoids gradient bias and does not adversely affect convergence. It is also possible to combine this loss with any supervised loss function. We reserve part of the mini-batch for labeled data, which are not replicated. As mentioned in Section 2, the mutual-exclusivity loss function of [30] forces the classifier’s prediction vector to have only one non-zero element. This loss function naturally complements the transformation/stability loss function. In supervised learning, each element of the prediction vector is pushed towards zero or one depending on the corresponding element in the label vector. The proposed loss minimizes the l2-norm of the difference between predictions of multiple transformed versions of a sample, but it does not impose any restrictions on the individual elements of a single prediction vector. As a result, each prediction vector might be a trivial solution instead of a valid prediction due to the lack of labels. The mutual-exclusivity loss function forces each prediction vector to be valid and prevents trivial solutions. This loss function is defined as follows:

l^{ME}_U = \sum_{i=1}^{N} \sum_{j=1}^{n} \left( - \sum_{k=1}^{C} f^j_k(x_i) \prod_{l=1, l \neq k}^{C} (1 - f^j_l(x_i)) \right)    (2)

where ‘ME’ stands for mutual-exclusivity and f^j_k(x_i) is the k-th element of the prediction vector f^j(x_i). In the experiments, we show that the combination of both loss functions leads to further improvements in the accuracy of the models.
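The inner term of Eq. 2 for a single prediction vector can be sketched as follows (a minimal NumPy illustration; the function name is ours, not from the paper):

```python
import numpy as np

def mutual_exclusivity_loss(pred):
    """pred: length-C prediction vector for one pass (entries in [0, 1]).
    Returns the Eq. 2 term: -sum_k pred[k] * prod_{l != k} (1 - pred[l])."""
    C = len(pred)
    total = 0.0
    for k in range(C):
        prod = 1.0
        for l in range(C):
            if l != k:
                prod *= 1.0 - pred[l]
        total += pred[k] * prod
    return -total

# A confident one-hot-like prediction minimises the loss ...
print(mutual_exclusivity_loss(np.array([1.0, 0.0, 0.0])))   # -1.0
# ... while a maximally ambiguous prediction is penalised.
print(mutual_exclusivity_loss(np.array([0.5, 0.5, 0.5])))   # -0.375
```

The loss is lowest when exactly one element is close to one and the rest are close to zero, which is why it prevents the trivial constant solutions that the transformation/stability loss alone would permit.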
We define the combination of both loss functions as the transformation/stability plus mutual-exclusivity loss function:

l_U = \lambda_1 l^{ME}_U + \lambda_2 l^{TS}_U    (3)

4 Experiments
We show the effect of the proposed unsupervised loss functions using ConvNets on MNIST [2], CIFAR10 and CIFAR100 [34], SVHN [35], NORB [36] and the ILSVRC 2012 challenge [5]. We use two frameworks to implement and evaluate the proposed loss function. The first one is cuda-convnet [37], which is the original implementation of the well-known AlexNet model. The second framework is sparse convolutional networks [38] with fractional max-pooling [39], which is a more recent implementation of ConvNets achieving state-of-the-art accuracy on the CIFAR10 and CIFAR100 datasets. We show through different experiments that by using the proposed loss function, we can improve the accuracy of models trained on a few labeled samples in both implementations. In Eq. 1, we set n to 4 for experiments conducted using cuda-convnet and to 5 for experiments performed using sparse convolutional networks. Sparse convolutional networks allow arbitrary batch sizes, so we tried different options for n and found n = 5 to be optimal. However, cuda-convnet requires mini-batches of size 128, which rules out n = 5, so we use n = 4 instead; in practice the difference is insignificant. We used MNIST to find the optimal n. We tried values of n up to 10 and did not observe improvements for n larger than 5. It must be noted that replicating a training sample four or five times does not necessarily increase the computational cost by the same factor: based on the experiments, with higher n fewer training epochs are required for the models to converge. We perform multiple experiments for each dataset. We use the available training data of each dataset to create two sets: labeled and unlabeled. We do not use the labels of the unlabeled set during training.
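Putting the two pieces together, the per-sample combined objective of Eq. 3 can be sketched as follows (a self-contained NumPy illustration with both terms inlined; the function name is ours, with λ1 = 0.1 and λ2 = 1 as defaults, matching the values used in most experiments below):

```python
import numpy as np

def combined_unsupervised_loss(preds, lam1=0.1, lam2=1.0):
    """preds: (n, C) predictions from n stochastic passes of one sample.
    Returns lam1 * l_ME + lam2 * l_TS (Eq. 3) for that sample."""
    n, C = preds.shape
    # Transformation/stability: squared differences over all pairs of passes.
    diffs = preds[:, None, :] - preds[None, :, :]
    l_ts = 0.5 * np.sum(diffs ** 2)            # halved so each pair counts once
    # Mutual-exclusivity, summed over the n passes.
    l_me = 0.0
    for p in preds:
        for k in range(C):
            prod = np.prod(np.delete(1.0 - p, k))   # prod over l != k of (1 - p_l)
            l_me -= p[k] * prod
    return lam1 * l_me + lam2 * l_ts

# Two identical confident passes: zero stability loss, favourable ME term.
print(combined_unsupervised_loss(np.array([[1.0, 0.0], [1.0, 0.0]])))
```

In training, this quantity would be accumulated over the unlabeled part of the mini-batch and added to the supervised loss on the labeled part.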
It must be noted that for the experiments with data augmentation, we apply data augmentation to both the labeled and unlabeled sets. We compare models that are trained only on the labeled set with models that are trained on both the labeled set and the unlabeled set using the unsupervised loss function. We show that by using the unsupervised loss function, we can improve the accuracy of classifiers on benchmark datasets. For experiments performed using sparse convolutional networks, we describe the network parameters using the format adopted from the original paper [39]: (10kC2 − FMP√2)^5 − C2 − C1. In this example network, 10k is the number of maps in the k-th convolutional layer, with k = 1, 2, ..., 5. C2 specifies that convolutions use a kernel size of 2. FMP√2 indicates that convolutional layers are followed by a fractional max-pooling (FMP) layer [39] that reduces the size of the feature maps by a factor of √2. As mentioned earlier, the mutual-exclusivity loss function of [30] complements the transformation/stability loss function. We implement that loss function in both cuda-convnet and sparse convolutional networks as well. We choose λ1 and λ2 in Eq. 3 experimentally. However, the performance of the models is not overly sensitive to these parameters, and in most of the experiments they are fixed to λ1 = 0.1 and λ2 = 1.

4.1 MNIST
MNIST is the most frequently used dataset in the area of digit classification. It contains 60000 training and 10000 test samples of size 28 × 28 pixels. We perform experiments on MNIST using a sparse convolutional network with the following architecture: (32kC2 − FMP√2)^6 − C2 − C1. We use dropout to regularize the network. The ratio of dropout gradually increases from the first layer to the last layer. We do not use any data augmentation for this task. In other words, T^j(x_i) of Eq. 1 is the identity function for this dataset.
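As an illustration of this compact architecture notation, the pattern can be expanded mechanically into a per-layer list (a hypothetical helper written for this sketch; it is not code from cuda-convnet or the sparse convolutional network framework):

```python
# Expands '(MkC{s} - POOL)^R - C{s} - C1' style notation into a layer list:
# the k-th repeated convolutional layer has M*k maps, each followed by a
# pooling layer, with two trailing convolutional layers.
def expand_architecture(maps_per_k, kernel, repeats, pool="FMP sqrt(2)"):
    """For '(10kC2 - FMP sqrt(2))^5 - C2 - C1': maps_per_k=10, kernel=2, repeats=5."""
    layers = []
    for k in range(1, repeats + 1):
        layers.append(f"conv {maps_per_k * k} maps, {kernel}x{kernel} kernel")
        layers.append(pool)
    layers.append(f"conv {kernel}x{kernel}")   # trailing C2
    layers.append("conv 1x1")                  # trailing C1
    return layers

for layer in expand_architecture(10, 2, 5):
    print(layer)
```

So the first convolutional layer of the example network has 10 maps, the second 20, and so on up to 50 in the fifth.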
In this case, we take advantage of the random effects of dropout and fractional max-pooling using the unsupervised loss function. We randomly select 10 samples from each class (a total of 100 labeled samples). We use all available training data as the unlabeled set. First, we train a model based on this labeled set only. Then, we train models by adding the unsupervised loss functions. In separate experiments, we add the transformation/stability loss function, the mutual-exclusivity loss function and the combination of both. Each experiment is repeated five times with a different random subset of training samples. We repeat the same set of experiments using 100% of the MNIST training samples. The results are given in Table 1. We can see that the proposed loss significantly improves the accuracy on test data. We also compare the results with ladder networks [28]. The combination of both loss functions reduces the error rate to 0.55% ± 0.16, which is, to the best of our knowledge, the state-of-the-art for MNIST with 100 labeled samples. The state-of-the-art error rate on MNIST using all training data without data augmentation is 0.24% [40]. It can be seen that we achieve a close accuracy by using only 100 labeled samples.

Table 1: Error rates (%) on test set for MNIST (mean % ± std).

labeled data   labeled only   transform/stability loss   mut-excl loss [30]   both losses   ladder net. [28]   ladder net baseline [28]
100            5.44 ± 1.48    0.76 ± 0.61                3.92 ± 1.12          0.55 ± 0.16   0.89 ± 0.50        6.43 ± 0.84
all            0.32 ± 0.02    0.29 ± 0.02                0.30 ± 0.03          0.27 ± 0.02   0.36               —

4.2 SVHN and NORB
SVHN is another digit classification task similar to MNIST. This dataset contains about 70000 images for training and more than 500000 easier images [35] for validation. We do not use the validation set. The test set contains 26032 RGB images of size 32 × 32. Generally, SVHN is a more difficult task than MNIST because of the large variations in the images.
We do not perform any pre-processing for this dataset. We simply convert the color images to grayscale by removing the hue and saturation information. NORB is a collection of stereo images in six classes. The training set contains 10 folds of 29160 images. It is common practice to use only the first two folds for training. The test set contains two folds, totaling 58320 images. The original images are 108 × 108, but we scale them down to 48 × 48, similar to [9]. We perform experiments on these two datasets using both the cuda-convnet and sparse convolutional network implementations of the unsupervised loss function. In the first set of experiments, we use cuda-convnet to train models with different ratios of labeled and unlabeled data. We randomly choose 1%, 5%, 10%, 20% and 100% of the training samples as labeled data. All of the training samples are used as the unlabeled set. For each labeled set, we train four models using cuda-convnet. The first model uses the labeled set only. The second model is trained on the unlabeled set using the mutual-exclusivity loss function in addition to the labeled set. The third model is trained on the unlabeled set using the transformation/stability loss function in addition to the labeled set. The last model is also trained on both sets but combines the two unsupervised loss functions. Each experiment is repeated five times. For each repetition, we use a different subset of training samples as labeled data. The cuda-convnet model consists of two convolutional layers with 64 maps and a kernel size of 5, and two locally connected layers with 32 maps and a kernel size of 3. Each convolutional layer is followed by a max-pooling layer. A fully connected layer with 256 nodes is added before the last layer. We use data augmentation for these experiments. T^j(x_i) of Eq. 1 crops every training sample to 28 × 28 for SVHN and 44 × 44 for NORB at random locations. T^j(x_i) also randomly rotates training samples by up to ±20°.
These transformations are applied to both the labeled and unlabeled sets. The results are shown in Figure 1 for SVHN and Figure 2 for NORB. Each point in the graph is the mean error rate of five repetitions, and the error bars show the standard deviation of these five repetitions. As expected, we can see that in all experiments the classification accuracy improves as we add more labeled data. However, we observe that for each set of labeled data we can improve the results by using the proposed unsupervised loss functions. We can also see that when the number of labeled samples is small, the improvement is more significant. For example, when we use only 1% of the labeled data, the error rate improves by a factor of about 2.5 when the unsupervised loss functions are used. As we add more labeled samples, the difference in accuracy between the semi-supervised and supervised approaches becomes smaller. Note that the combination of the transformation/stability loss function and the mutual-exclusivity loss function improves the accuracy even further. As mentioned earlier, these two unsupervised loss functions complement each other. Therefore, in most of the experiments we use the combination of the two unsupervised loss functions. We perform another set of experiments on these two datasets using sparse convolutional networks as a state-of-the-art classifier. We create five sets of labeled data. For each set, we randomly pick a different 1% subset of the training samples as the labeled set and all training data as the unlabeled set. We train two models: the first trained only on labeled data, and the second using the labeled set and a combination of both unsupervised losses. Similarly, we train models using all available training data as both the labeled set and the unlabeled set. We do not use data augmentation for any of these experiments.
In other words, T^j(x_i) of Eq. 1 is the identity function. As a result, dropout and random max-pooling are the only sources of variation in this case.

[Figure 1: SVHN dataset: semi-supervised learning vs. training with labeled data only.]
[Figure 2: NORB dataset: semi-supervised learning vs. training with labeled data only.]

We use the following model: (32kC2 − FMP ∛2)^12 − C2 − C1. Similar to MNIST, we use dropout to regularize the network. Again, the ratio of dropout gradually increases from the first layer to the last layer. The results (average of five error rates) are shown in Table 2. Here, we can see that by using the unsupervised loss functions we can significantly improve the accuracy of the classifier by minimizing the variation in the predictions of the network. In addition, for the NORB dataset we can observe that by using only 1% of the labeled data and applying the unsupervised loss functions, we can achieve accuracy that is close to the case when we use 100% of the labeled data.

Table 2: Error on test data for SVHN and NORB with 1% and 100% of data (mean % ± std).

                      SVHN 1%        SVHN 100%     NORB 1%        NORB 100%
labeled data only:    12.25 ± 0.80   2.28 ± 0.05   10.01 ± 0.81   1.63 ± 0.12
semi-supervised:       6.03 ± 0.62   2.22 ± 0.04    2.15 ± 0.37   1.63 ± 0.07

4.3 CIFAR10
CIFAR10 is a collection of 60000 tiny 32 × 32 images of 10 categories (50000 for training and 10000 for testing). We use sparse convolutional networks to perform experiments on this dataset. For this dataset, we create 10 labeled sets. Each set contains 4000 samples that are randomly picked from the training set.
All 50000 training samples are used as the unlabeled set. We train two sets of models on these data. The first set is trained on labeled data only, and the other set is trained on the unlabeled set using a combination of both unsupervised loss functions in addition to the labeled set. For this dataset, we do not perform separate experiments for the two unsupervised loss functions because of time constraints. However, based on the results from MNIST, SVHN and NORB, we deduce that the combination of both unsupervised losses provides improved accuracy. We use data augmentation for these experiments. Similar to [39], we perform affine transformations, including a randomized mix of translations, rotations, flipping, stretching and shearing operations, by T^j(x_i) of Eq. 1. Similar to [39], we train the network without transformations for the last 10 epochs. We use the following parameters for the models: (32kC2 − FMP ∛2)^12 − C2 − C1. We use dropout, and its ratio gradually increases from the first layer to the last layer. The results are given in Table 3. We also compare the results to ladder networks [28]. The model in [28] does not use data augmentation. We can see that the combination of unsupervised loss functions on unlabeled data improves the accuracy of the models. In another set of experiments, we use all available training data as both the labeled and unlabeled sets. We train a network with the following parameters: (96kC2 − FMP ∛2)^12 − C2 − C1. We use affine transformations for this task too. Here again, we use the transformation/stability plus mutual-exclusivity loss function. We repeat this experiment five times and achieve a mean error rate of 3.18% ± 0.1.

Table 3: Error rates on test data for CIFAR10 with 4000 labeled samples (mean % ± std).

                      transformation/stability + mutual-exclusivity   ladder networks [28]
labeled data only:    13.60 ± 0.24                                    23.33 ± 0.61
semi-supervised:      11.29 ± 0.24                                    20.40 ± 0.47

The state-of-the-art error rate for this dataset is 3.47%, achieved by the fractional max-pooling method [39] but obtained with a larger model (160n vs. 96n). We perform a single-run experiment with the 160n model and achieve an error rate of 3.00%. Similar to [39], we perform 100 passes during test time. Here, we surpass the state-of-the-art accuracy by adding unsupervised loss functions.

4.4 CIFAR100
CIFAR100 is also a collection of 60000 tiny images of size 32 × 32. This dataset is similar to CIFAR10; however, it contains images of 100 categories compared to 10, so there are fewer training samples per category. Similar to CIFAR10, we perform experiments on this dataset using sparse convolutional networks. We use all available training data as both the labeled and unlabeled sets. The state-of-the-art error rate for this dataset is 23.82%, obtained by fractional max-pooling [39] on sparse convolutional networks. The following model was used to achieve this error rate: (96kC2 − FMP ∛2)^12 − C2 − C1. Dropout was also used, with a ratio increasing from the first layer to the last layer. We use the same model parameters and add the transformation/stability plus mutual-exclusivity loss function. Similar to [39], we do not use data augmentation for this task (T^j(x_i) of Eq. 1 is the identity function). Therefore, the proposed loss function minimizes the randomness effect due to dropout and max-pooling. We achieve a mean error rate of 21.43% ± 0.16, which is the state-of-the-art for this task. We perform 12 passes during test time, similar to [39].

4.5 ImageNet
We perform experiments on the ILSVRC 2012 challenge. The training data consists of 1281167 natural images of different sizes from 1000 categories. We create five labeled datasets from the available training samples.
Each dataset consists of 10% of the training data. We form each dataset by randomly picking a subset of the training samples. All available training data is used as the unlabeled set. We use cuda-convnet to train the AlexNet model [7] for this dataset. Similar to [7], all images are re-sized to 256 × 256. We also use data augmentation for this task, following the steps of [7], i.e., T^j(x_i) of Eq. 1 performs random translations, flipping and color noise. We train two models on each labeled dataset. One model is trained using labeled data only. The other model is trained on both the labeled and unlabeled sets using the transformation/stability plus mutual-exclusivity loss function. At each iteration, we generate four different transformed versions of each unlabeled sample, so each unlabeled sample is forward-passed through the network four times. Since we use all training data as the unlabeled set, the computational cost of each iteration is roughly quadrupled. But in practice we found that when we use 10% of the training data as the labeled set, the network converges in 20 epochs instead of the standard 90 epochs of the AlexNet model, so the overall cost of our method for ImageNet is less than or equal to that of AlexNet. The results on the validation set are shown in Table 4. We also compare the results to the model trained with the mutual-exclusivity loss function only, as reported in [30]. We can see that even for a large dataset with many categories, the proposed unsupervised loss function improves the classification accuracy. The error rate of a single AlexNet model on the validation set of ILSVRC 2012 using all training data is 18.2% [7].

Table 4: Error rates (%) on validation set for ILSVRC 2012 (Top-5).
                rep 1   rep 2   rep 3   rep 4   rep 5   mean ± std     mutual xcl [30]   [21] ∼1.5% of data
labeled only:   45.73   46.15   46.06   45.57   46.08   45.91 ± 0.25   45.63             85.9
semi-sup:       39.50   39.99   39.94   39.70   40.08   39.84 ± 0.23   42.90             84.2

5 Discussion
We can see that the proposed loss function can improve the accuracy of a ConvNet regardless of the architecture and implementation. We improve the accuracy of two relatively different implementations of ConvNets, i.e., cuda-convnet and sparse convolutional networks. For SVHN and NORB, we do not use dropout or randomized pooling for the experiments performed using cuda-convnet. Therefore, the only source of variation between different passes of a sample through the network is the random transformations (translation and rotation). For the experiments performed using sparse convolutional networks on these two datasets, we do not use data transformation. Instead, we use dropout and randomized pooling. Based on the results, we can see that in both cases we can significantly improve the accuracy when we have a small number of labeled samples. For CIFAR100, we achieve a state-of-the-art error rate of 21.43% by taking advantage of the variations caused by dropout and randomized pooling. In the ImageNet and CIFAR10 experiments, we use both data transformation and dropout. For CIFAR10, we also have randomized pooling and achieve the state-of-the-art error rate of 3.00%. In the MNIST experiments with 100 labeled samples and the NORB experiments with 1% of labeled data, we achieve accuracy reasonably close to the case when we use all available training data, by applying the mutual-exclusivity loss and minimizing the difference in predictions of multiple passes caused by dropout and randomized pooling.

6 Conclusion
In this paper, we proposed an unsupervised loss function that minimizes the variations in different passes of a sample through the network caused by non-deterministic transformations and randomized dropout and max-pooling schemes.
We evaluated the proposed method using two ConvNet implementations on multiple benchmark datasets. We showed that it is possible to achieve significant improvements in accuracy by using the transformation/stability loss function along with mutual-exclusivity of [30] when we have a small number of labeled data available. Acknowledgments This work was supported by NSF IIS-1149299. References [1] B. B. Le Cun, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, “Handwritten digit recognition with a back-propagation network,” in Advances in neural information processing systems, Citeseer, 1990. [2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998. [3] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “Overfeat: Integrated recognition, localization and detection using convolutional networks,” arXiv preprint arXiv:1312.6229, 2013. [4] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Computer Vision and Pattern Recognition, pp. 3431–3440, 2015. [5] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al., “Imagenet large scale visual recognition challenge,” International Journal of Computer Vision, vol. 115, no. 3, pp. 211–252, 2015. [6] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Computer Vision and Pattern Recognition, pp. 1–9, 2015. [7] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural information processing systems, pp. 1097–1105, 2012. [8] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. 
Salakhutdinov, “Improving neural networks by preventing co-adaptation of feature detectors,” arXiv preprint arXiv:1207.0580, 2012. [9] D. Ciresan, U. Meier, and J. Schmidhuber, “Multi-column deep neural networks for image classification,” in Computer Vision and Pattern Recognition, pp. 3642–3649, IEEE, 2012. [10] A. Blum and T. Mitchell, “Combining labeled and unlabeled data with co-training,” in Proceedings of the eleventh annual conference on Computational learning theory, pp. 92–100, ACM, 1998. [11] V. R. de Sa, “Learning classification with unlabeled data,” in Advances in neural information processing systems, pp. 112–119, 1994. [12] D. J. Miller and H. S. Uyar, “A mixture of experts classifier with learning based on both labelled and unlabelled data,” in Advances in neural information processing systems, pp. 571–577, 1997. [13] T. Joachims, “Transductive inference for text classification using support vector machines,” in ICML, vol. 99, pp. 200–209, 1999. [14] K. Bennett, A. Demiriz, et al., “Semi-supervised support vector machines,” Advances in Neural Information processing systems, pp. 368–374, 1999. [15] A. Blum and S. Chawla, “Learning from labeled and unlabeled data using graph mincuts,” 2001. [16] X. Zhu, Z. Ghahramani, J. Lafferty, et al., “Semi-supervised learning using gaussian fields and harmonic functions,” in International Conference on Machine Learning, vol. 3, pp. 912–919, 2003. [17] X. Zhu and Z. Ghahramani, “Learning from labeled and unlabeled data with label propagation,” tech. rep., Citeseer, 2002. [18] Y. LeCun, K. Kavukcuoglu, C. Farabet, et al., “Convolutional networks and applications in vision.,” in ISCAS, pp. 253–256, 2010. [19] K. Jarrett, K. Kavukcuoglu, M. Ranzato, and Y. LeCun, “What is the best multi-stage architecture for object recognition?,” in International Conference on Computer Vision, pp. 2146–2153, IEEE, 2009. [20] K. Kavukcuoglu, M. Ranzato, and Y. 
LeCun, “Fast inference in sparse coding algorithms with applications to object recognition,” arXiv preprint arXiv:1010.3467, 2010. [21] P. Agrawal, J. Carreira, and J. Malik, “Learning to see by moving,” in International Conference on Computer Vision, pp. 37–45, 2015. [22] C. Doersch, A. Gupta, and A. A. Efros, “Unsupervised visual representation learning by context prediction,” in International Conference on Computer Vision, pp. 1422–1430, 2015. [23] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in International conference on artificial intelligence and statistics, pp. 249–256, 2010. [24] R. Johnson and T. Zhang, “Semi-supervised convolutional neural networks for text categorization via region embedding,” in Advances in Neural Information Processing Systems, pp. 919–927, 2015. [25] J. Weston, F. Ratle, H. Mobahi, and R. Collobert, “Deep learning via semi-supervised embedding,” in Neural Networks: Tricks of the Trade, pp. 639–655, Springer, 2012. [26] X. Wang and A. Gupta, “Unsupervised learning of visual representations using videos,” in International Conference on Computer Vision, pp. 2794–2802, 2015. [27] D. Jayaraman and K. Grauman, “Learning image representations tied to ego-motion,” in International Conference on Computer Vision, pp. 1413–1421, 2015. [28] A. Rasmus, M. Berglund, M. Honkala, H. Valpola, and T. Raiko, “Semi-supervised learning with ladder networks,” in Advances in Neural Information Processing Systems, pp. 3532–3540, 2015. [29] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox, “Discriminative unsupervised feature learning with convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 766–774, 2014. [30] M. Sajjadi, M. Javanmardi, and T. Tasdizen, “Mutual exclusivity loss for semi-supervised deep learning,” in International Conference on Image Processing, IEEE, 2016. [31] P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. 
Victorri, “Transformation invariance in pattern recognition—tangent distance and tangent propagation,” in Neural networks: tricks of the trade, pp. 239–274, Springer, 1998. [32] D. Jayaraman and K. Grauman, “Slow and steady feature analysis: higher order temporal coherence in video,” Computer Vision and Pattern Recognition, 2016. [33] L. Sun, K. Jia, T.-H. Chan, Y. Fang, G. Wang, and S. Yan, “Dl-sfa: deeply-learned slow feature analysis for action recognition,” in Computer Vision and Pattern Recognition, pp. 2625–2632, 2014. [34] A. Krizhevsky and G. Hinton, “Learning multiple layers of features from tiny images,” 2009. [35] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng, “Reading digits in natural images with unsupervised feature learning,” in NIPS workshop on deep learning and unsupervised feature learning, vol. 2011, p. 4, Granada, Spain, 2011. [36] Y. LeCun, F. J. Huang, and L. Bottou, “Learning methods for generic object recognition with invariance to pose and lighting,” in Computer Vision and Pattern Recognition, vol. 2, pp. II–97, IEEE, 2004. [37] A. Krizhevsky, “Cuda-convnet.” code.google.com/p/cuda-convnet, 2014. [38] B. Graham, “Spatially-sparse convolutional neural networks,” arXiv preprint arXiv:1409.6070, 2014. [39] B. Graham, “Fractional max-pooling,” arXiv preprint arXiv:1412.6071, 2014. [40] J.-R. Chang and Y.-S. Chen, “Batch-normalized maxout network in network,” arXiv preprint arXiv:1511.02583, 2015.
An Online Sequence-to-Sequence Model Using Partial Conditioning Navdeep Jaitly Google Brain ndjaitly@google.com David Sussillo Google Brain sussillo@google.com Quoc V. Le Google Brain qvl@google.com Oriol Vinyals Google DeepMind vinyals@google.com Ilya Sutskever Open AI∗ ilyasu@openai.com Samy Bengio Google Brain bengio@google.com Abstract Sequence-to-sequence models have achieved impressive results on various tasks. However, they are unsuitable for tasks that require incremental predictions to be made as more data arrives or tasks that have long input sequences and output sequences. This is because they generate an output sequence conditioned on an entire input sequence. In this paper, we present a Neural Transducer that can make incremental predictions as more input arrives, without redoing the entire computation. Unlike sequence-to-sequence models, the Neural Transducer computes the next-step distribution conditioned on the partially observed input sequence and the partially generated sequence. At each time step, the transducer can decide to emit zero to many output symbols. The data can be processed using an encoder and presented as input to the transducer. The discrete decision to emit a symbol at every time step makes it difficult to learn with conventional backpropagation. It is however possible to train the transducer by using a dynamic programming algorithm to generate target discrete decisions. Our experiments show that the Neural Transducer works well in settings where it is required to produce output predictions as data come in. We also find that the Neural Transducer performs well for long sequences even when attention mechanisms are not used. 1 Introduction The recently introduced sequence-to-sequence model has shown success in many tasks that map sequences to sequences, e.g., translation, speech recognition, image captioning and dialogue modeling [17, 4, 1, 6, 3, 20, 18, 15, 19]. 
However, this method is unsuitable for tasks where it is important to produce outputs as the input sequence arrives. Speech recognition is an example of such an online task – users prefer seeing an ongoing transcription of speech over receiving it at the “end” of an utterance. Similarly, instant translation systems would be much more effective if audio was translated online, rather than after entire utterances. This limitation of the sequence-to-sequence model is due to the fact that output predictions are conditioned on the entire input sequence. In this paper, we present a Neural Transducer, a more general class of sequence-to-sequence learning models. The Neural Transducer can produce chunks of outputs (possibly of zero length) as blocks of inputs arrive – thus satisfying the condition of being “online” (see Figure 1(b) for an overview). The model generates outputs for each block by using a transducer RNN that implements a sequence-to-sequence model. The inputs to the transducer RNN come from two sources: the encoder RNN and its own recurrent state. In other words, the transducer RNN generates local extensions to the output sequence, conditioned on the features computed for the block by an encoder RNN and the recurrent state of the transducer RNN at the last step of the previous block. During training, alignments of output symbols to the input sequence are unavailable.

∗Work done at Google Brain
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

[Figure 1: High-level comparison of our method with sequence-to-sequence models. (a) Sequence-to-sequence model [17]. (b) The Neural Transducer (this paper), which emits output symbols as data come in (per block) and transfers the hidden state across blocks.]
One way of overcoming this limitation is to treat the alignment as a latent variable and to marginalize over all possible values of this alignment variable. Another approach is to generate alignments from a different algorithm, and train our model to maximize the probability of these alignments. Connectionist Temporal Classification (CTC) [7] follows the former strategy, using a dynamic programming algorithm that allows for easy marginalization over the unary potentials produced by a recurrent neural network (RNN). However, this is not possible in our model, since the neural network makes next-step predictions that are conditioned not just on the input data, but on the alignment and the targets produced until the current step. In this paper, we show how a dynamic programming algorithm can be used to compute "approximate" best alignments from this model. We show that training our model on these alignments leads to strong results. On the TIMIT phoneme recognition task, a Neural Transducer (with a 3-layer unidirectional LSTM encoder and a 3-layer unidirectional LSTM transducer) can achieve a phoneme error rate (PER) of 20.8%, which is close to state-of-the-art for unidirectional models. We also show that if good alignments are made available (e.g., from a GMM-HMM system), the model can achieve 19.8% PER.

2 Related Work

In the past few years, many proposals have been made to add more power or flexibility to neural networks, especially via the concept of augmented memory [10, 16, 21] or augmented arithmetic units [13, 14]. Our work is not concerned with memory or arithmetic components, but it allows more flexibility in the model so that it can dynamically produce outputs as data come in. Our work is related to traditional structured prediction methods, commonplace in speech recognition. The work bears similarity to HMM-DNN [11] and CTC [7] systems. An important aspect of these approaches is that the model makes predictions at every input time step.
A weakness of these models is that they typically assume conditional independence between the predictions at each output step. Sequence-to-sequence models represent a breakthrough where no such assumptions are made – the output sequence is generated by next-step prediction, conditioning on the entire input sequence and the partial output sequence generated so far [5, 6, 3]. Figure 1(a) shows the high-level picture of this architecture. However, as can be seen from the figure, these models have a limitation in that they have to wait until the end of the speech utterance to start decoding. This property makes them unattractive for real-time speech recognition and online translation. Bahdanau et al. [2] attempt to rectify this for speech recognition by using a moving windowed attention, but they do not provide a mechanism to address the situation that arises when no output can be produced from the windowed segment of data. Figure 1(b) shows the difference between our method and sequence-to-sequence models. A strongly related model is the sequence transducer [8, 9]. This model augments the CTC model by combining the transcription model with a prediction model. The prediction model is akin to a language model and operates only on the output tokens, as a next-step prediction model. This gives the model more expressiveness compared to CTC, which makes independent predictions at every time step. However, unlike the model presented in this paper, the two models in the sequence transducer operate independently – the model does not provide a mechanism by which the prediction network features at one time step would change the transcription network features in the future, and vice versa. Our model, in effect, generalizes both this model and the sequence-to-sequence model. Our formulation requires inferring alignments during training. However, our results indicate that this can be done relatively fast, and with little loss of accuracy, even on a small dataset where no effort was made at regularization. Further, if alignments are given, as is easily done offline for various tasks, the model is able to train relatively fast, without this inference step.

[Figure 2: An overview of the Neural Transducer architecture for speech. The input acoustic sequence is processed by the encoder to produce hidden state vectors h_i at each time step i, i = 1 · · · L. The transducer receives a block of inputs at each step and produces up to M output tokens using the sequence-to-sequence model over this input. The transducer maintains its state across the blocks through the use of recurrent connections to the previous output time steps. The figure shows the transducer producing tokens for block b. The subsequence emitted in this block is y_m y_{m+1} y_{m+2}.]

3 Methods

In this section we describe the model in more detail. Please refer to Figure 2 for an overview.

3.1 Model

Let x_{1···L} be the input data that is L time steps long, where x_i represents the features at input time step i. Let W be the block size, i.e., the periodicity with which the transducer emits output tokens, and N = ⌈L/W⌉ be the number of blocks. Let ỹ_{1···S} be the target sequence, corresponding to the input sequence. Further, let the transducer produce a sequence of k outputs, ỹ_{i···(i+k)}, where 0 ≤ k < M, for any input block. Each such sequence is padded with the <e> symbol, which is added to the vocabulary. It signifies that the transducer may proceed and consume data from the next block. When no symbols are produced for a block, this symbol is akin to the blank symbol of CTC. The sequence ỹ_{1···S} can be transduced from the input through various alignments. Let Y be the set of all alignments of the output sequence ỹ_{1···S} to the input blocks.
Let y_{1···(S+B)} ∈ Y be any such alignment. Note that the length of y is B more than the length of ỹ, since there are B end-of-block symbols, <e>, in y. However, the number of sequences y matching ỹ is much larger, corresponding to all possible alignments of ỹ to the blocks. The block that element y_i is aligned to can be inferred simply by counting the number of <e> symbols that came before index i. Let e_b, b ∈ 1 · · · N, be the index of the last token in y emitted in the b-th block. Note that e_0 = 0 and e_N = (S + B). Thus y_{e_b} = <e> for each block b. In this section, we show how to compute p(y_{1···(S+B)} | x_{1···L}). Later, in Section 3.5, we show how to compute, and maximize, p(ỹ_{1···S} | x_{1···L}). We first compute the probability of seeing output sequence y_{1···e_b} by the end of block b as follows:

p(y_{1···e_b} | x_{1···bW}) = p(y_{1···e_1} | x_{1···W}) \prod_{b'=2}^{b} p(y_{(e_{b'-1}+1)···e_{b'}} | x_{1···b'W}, y_{1···e_{b'-1}})    (1)

Each of the terms in this equation is itself computed by the chain rule decomposition, i.e., for any block b,

p(y_{(e_{b-1}+1)···e_b} | x_{1···bW}, y_{1···e_{b-1}}) = \prod_{m=e_{b-1}+1}^{e_b} p(y_m | x_{1···bW}, y_{1···(m-1)})    (2)

The next-step probability terms, p(y_m | x_{1···bW}, y_{1···(m-1)}), in Equation 2 are computed by the transducer using the encoding of the input x_{1···bW} computed by the encoder, and the label prefix y_{1···(m-1)} that was input into the transducer at previous emission steps. We describe this in more detail in the next subsection.

3.2 Next Step Prediction

We again refer the reader to Figure 2 for this discussion. The example shows a transducer with two hidden layers, with units s_m and h′_m at output step m. In the figure, the next-step prediction is shown for block b. For this block, the index of the first output symbol is m = e_{b-1} + 1, and the index of the last output symbol is m + 2 (i.e., e_b = m + 2).
The transducer computes the next-step prediction, using parameters θ of the neural network, through the following sequence of steps:

s_m = f_RNN(s_{m-1}, [c_{m-1}; y_{m-1}]; θ)    (3)
c_m = f_context(s_m, h_{((b-1)W+1)···bW}; θ)    (4)
h′_m = f_RNN(h′_{m-1}, [c_m; s_m]; θ)    (5)
p(y_m | x_{1···bW}, y_{1···(m-1)}) = f_softmax(y_m; h′_m, θ)    (6)

where f_RNN(a_{m-1}, b_m; θ) is the recurrent neural network function (such as an LSTM or a sigmoid or tanh RNN) that computes the state vector a_m for a layer at a step using the recurrent state vector a_{m-1} at the last time step, and input b_m at the current time step;² f_softmax(·; a_m, θ) is the softmax distribution computed by a softmax layer, with input vector a_m; and f_context(s_m, h_{((b-1)W+1)···bW}; θ) is the context function, which computes the input to the transducer at output step m from the state s_m at the current time step, and the features h_{((b-1)W+1)···bW} of the encoder for the current input block, b. We experimented with different ways of computing the context vector – with and without an attention mechanism. These are described subsequently in Section 3.3. Note that since the encoder is an RNN, h_{(b-1)W···bW} is actually a function of the entire input, x_{1···bW}, so far. Correspondingly, s_m is a function of the labels emitted so far, and the entire input seen so far.³ Similarly, h′_m is a function of the labels emitted so far and the entire input seen so far.

3.3 Computing f_context

We first describe how the context vector is computed by an attention model similar to earlier work [5, 1, 3]. We call this model the MLP-attention model.

²Note that for an LSTM, we would have to additionally factor in cell states from the previous steps – we have ignored this in the notation for purposes of clarity. The exact details are easily worked out.
³For the first output step of a block it includes only the input seen until the end of the last block.
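To make Equations (3)–(6) concrete, here is a minimal numpy sketch of one next-step prediction, substituting plain tanh recurrences for f_RNN and a simple dot-product attention for f_context; all weight names and dimensions are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def next_step(s_prev, h2_prev, c_prev, y_prev, block_h, params):
    """One next-step prediction (Eqs. 3-6), sketched with tanh RNN cells and
    dot-product attention. All weights in `params` are hypothetical.
    block_h holds the encoder states h_{(b-1)W+1 .. bW} for the current block."""
    # Eq. 3: first transducer layer, from previous state, context and label.
    s = np.tanh(params["W_s"] @ np.concatenate([s_prev, c_prev, y_prev]))
    # Eq. 4 (dot-attention stand-in for f_context): attention-weighted encoder states.
    alpha = softmax(block_h @ s)               # one weight per frame in the block
    c = alpha @ block_h
    # Eq. 5: second transducer layer over its own state, context and first layer.
    h2 = np.tanh(params["W_h"] @ np.concatenate([h2_prev, c, s]))
    # Eq. 6: distribution over output symbols (including <e>).
    p_y = softmax(params["W_o"] @ h2)
    return s, h2, c, p_y

# Toy dimensions: state size 4, block of 3 frames, vocabulary of 5 symbols.
rng = np.random.default_rng(0)
d, W_blk, V = 4, 3, 5
params = {"W_s": rng.normal(size=(d, 3 * d)),
          "W_h": rng.normal(size=(d, 3 * d)),
          "W_o": rng.normal(size=(V, d))}
s, h2, c, p_y = next_step(np.zeros(d), np.zeros(d), np.zeros(d), np.zeros(d),
                          rng.normal(size=(W_blk, d)), params)
```

Running the step repeatedly, feeding `s`, `h2` and `c` back in, unrolls the transducer within a block.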
In this model the context vector c_m is computed in two steps – first a normalized attention vector α_m is computed from the state s_m of the transducer, and next the hidden states h_{(b-1)W+1···bW} of the encoder for the current block are linearly combined using α_m and used as the context vector. To compute α_m, a multi-layer perceptron computes a scalar value, e^m_j, for each pair of transducer state s_m and encoder state h_{(b-1)W+j}. The attention vector is computed from the scalar values e^m_j, j = 1 · · · W. Formally:

e^m_j = f_attention(s_m, h_{(b-1)W+j}; θ)    (7)
α_m = softmax([e^m_1; e^m_2; · · · ; e^m_W])    (8)
c_m = \sum_{j=1}^{W} α^m_j h_{(b-1)W+j}    (9)

We also experimented with using a simpler model for f_attention that computed e^m_j = s_m^T h_{(b-1)W+j}. We refer to this model as the DOT-attention model. Both of these attention models have two shortcomings. Firstly, there is no explicit mechanism that requires the attention model to move its focus forward, from one output time step to the next. Secondly, the energies computed as inputs to the softmax function, for different input frames j, are independent of each other at each time step, and thus cannot modulate (e.g., enhance or suppress) each other, other than through the softmax function. Chorowski et al. [6] ameliorate the second problem by using a convolutional operator that affects the attention at one time step using the attention at the last time step. We attempt to address these two shortcomings using a new attention mechanism. In this model, instead of feeding [e^m_1; e^m_2; · · · ; e^m_W] into a softmax, we feed them into a recurrent neural network with one hidden layer that outputs the softmax attention vector at each time step. Thus the model should be able to modulate the attention vector both within a time step and across time steps. This attention model is thus more general than the convolutional operator of Chorowski et al. (2015), but it can only be applied to the case where the context window size is constant.
We refer to this model as LSTM-attention.

3.4 Addressing End of Blocks

Since the model only produces a small sequence of output tokens in each block, we have to address the mechanism for shifting the transducer from one block to the next. We experimented with three distinct ways of doing this. In the first approach, we introduced no explicit mechanism for end-of-blocks, hoping that the transducer neural network would implicitly learn a model from the training data. In the second approach we added end-of-block symbols, <e>, to the label sequence to demarcate the end of blocks, and we added this symbol to the target dictionary. Thus the softmax function in Equation 6 implicitly learns to either emit a token, or to move the transducer forward to the next block. In the third approach, we model moving the transducer forward using a separate logistic function of the attention vector. The target of the logistic function is 0 or 1, depending on whether the current step is the last step in the block or not.

3.5 Training

In this section we show how the Neural Transducer model can be trained. The probability of the output sequence ỹ_{1···S}, given x_{1···L}, is as follows:⁴

p(ỹ_{1···S} | x_{1···L}) = \sum_{y ∈ Y} p(y_{1···(S+B)} | x_{1···L})    (10)

In theory, we can train the model by maximizing the log of Equation 10. The gradient for the log likelihood can easily be expressed as follows:

d/dθ log p(ỹ_{1···S} | x_{1···L}) = \sum_{y ∈ Y} p(y_{1···(S+B)} | x_{1···L}, ỹ_{1···S}) d/dθ log p(y_{1···(S+B)} | x_{1···L})    (11)

Each of the latter terms in the sum on the right hand side can be computed by backpropagation, using y as the target of the model. However, the marginalization is intractable because of the sum over a combinatorial number of alignments. Alternatively, the gradient can be approximated by sampling from the posterior distribution (i.e., p(y_{1···(S+B)} | x_{1···L}, ỹ_{1···S})).

⁴Note that this equation implicitly incorporates the prior for alignments within the equation.
However, we found this had very large noise in the learning, and the gradients were often too biased, leading to models that rarely achieved decent accuracy. Instead, we attempted to maximize the probability in Equation 10 by computing the sum over only one term – corresponding to the alignment y with the highest posterior probability. Unfortunately, even doing this exactly is computationally infeasible, because the number of possible alignments is combinatorially large and the problem of finding the best alignment cannot be decomposed into easier subproblems. So we use an algorithm that finds the approximate best alignment with a dynamic-programming-like algorithm that we describe in the next paragraph. At each block, b, for each output position j, this algorithm keeps track of the approximate best hypothesis h(j, b) that represents the best partial alignment of the target sequence ỹ_{1···j} to the partial input x_{1···bW}. Each hypothesis keeps track of the best alignment y_{1···(j+b)} that it represents, and the recurrent states of the decoder at the last time step corresponding to this alignment. At block b + 1, all hypotheses h(j, b), j ≤ min(b(M − 1), S), are extended by at most M tokens using their recurrent states, to compute h(j, b + 1), h(j + 1, b + 1), · · · , h(j + M, b + 1).⁵ For each position j′, j′ ≤ min((b + 1)(M − 1), S), the highest log probability hypothesis h(j′, b + 1) is kept.⁶ The alignment from the best hypothesis h(S, B) at the last block is used for training. In theory, we need to compute the alignment for each sequence when it is trained, using the model parameters at that time. In practice, we batch the alignment inference steps, using parallel tasks, and cache these alignments. Thus alignments are computed less frequently than the model updates, typically every 100–300 sequences. This procedure has the flavor of experience replay from Deep Reinforcement Learning work [12].
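The dynamic program above can be sketched as follows; `log_prob` is a stand-in for the transducer's scorer (the real algorithm reuses cached recurrent states rather than rescoring each candidate from scratch), and the toy scorer is purely illustrative:

```python
def best_alignment(target, num_blocks, max_per_block, log_prob):
    """Sketch of the approximate best-alignment search. A hypothesis h(j, b)
    is the best partial alignment that has emitted the first j target tokens
    by the end of block b; each block extends it by k tokens (0 <= k <
    max_per_block) followed by an '<e>' end-of-block symbol."""
    S = len(target)
    hyps = {0: []}                              # j -> best alignment so far
    for _ in range(num_blocks):
        extended = {}
        for j, align in hyps.items():
            for k in range(max_per_block):
                j2 = j + k
                if j2 > S:
                    break
                cand = align + list(target[j:j2]) + ["<e>"]
                if j2 not in extended or log_prob(cand) > log_prob(extended[j2]):
                    extended[j2] = cand         # keep only the best per position
        hyps = extended
    return hyps.get(S)                          # full alignment, if reachable

def eager_scorer(align):
    """Toy stand-in scorer that prefers emitting tokens in earlier blocks."""
    score, blocks_seen = 0, 0
    for tok in align:
        if tok == "<e>":
            blocks_seen += 1
        else:
            score -= blocks_seen
    return score

# With 2 blocks and up to 2 emitted tokens per block, the eager scorer
# aligns both target tokens to the first block.
print(best_alignment("ab", 2, 3, eager_scorer))   # ['a', 'b', '<e>', '<e>']
```

The constraint j ≤ min(b(M − 1), S) from the text emerges naturally here: after b blocks at most b(max_per_block − 1) tokens can have been emitted.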
3.6 Inference

For inference, given the input acoustics x_{1···L} and the model parameters θ, we find the sequence of labels that maximizes the probability of the labels, conditioned on the data, i.e.,

ỹ_{1···S} = arg max_{y_{1···S′}, e_{1···N}} \sum_{b=1}^{N} log p(y_{e_{b-1}+1···e_b} | x_{1···bW}, y_{1···e_{b-1}})    (12)

Exact inference in this scheme is computationally expensive because the expression for log probability does not permit decomposition into smaller terms that can be independently computed. Instead, each candidate, y_{1···S′}, would have to be tested independently, and the best sequence over an exponentially large number of sequences would have to be discovered. Hence, we use a beam search heuristic to find the “best” set of candidates. To do this, at each output step m, we keep a heap of the n best alternative prefixes, and extend each one by one symbol, trying out all the possible alternative extensions and keeping only the best n extensions. Included in the beam search is the act of moving the attention to the next input block. The beam search ends either when the sequence is longer than a pre-specified threshold, or when the end-of-token symbol is produced at the last block.

4 Experiments and Results

4.1 Addition Toy Task

We experimented with the Neural Transducer on the toy task of adding two three-digit decimal numbers. The second number is presented in the reverse order, and so is the target output. Thus the model can produce the first output as soon as the first digit of the second number is observed. The model is able to achieve 0% error on this task with a very small number of units (both encoder and transducer are 1-layer unidirectional LSTM RNNs with 100 units). As can be seen below, the model learns to output the digits as soon as the required information is available.

⁵Note the minutiae: each of these extensions ends with the <e> symbol.
⁶We also experimented with sampling from the extensions in proportion to the probabilities, but this did not always improve results.
Occasionally the model waits an extra step to output its target symbol. We show results (blue) for four different examples (red). A block window size of W=1 was used, with M=8.

[Four example input/output alignments (inputs in red, emitted outputs in blue), in which the model emits <e> symbols until the required digits have arrived and then produces the answer digits.]

4.2 TIMIT

We used TIMIT, a standard benchmark for speech recognition, for our larger experiments. Log Mel filterbanks were computed every 10 ms as inputs to the system. The targets were the 60 phones defined for the TIMIT dataset (h# were relabelled as pau). We used stochastic gradient descent with momentum, with a batch size of one utterance per training step. An initial learning rate of 0.05 and a momentum of 0.9 were used. The learning rate was reduced by a factor of 0.5 every time the average log prob over the validation set decreased.⁷ The decrease was applied a maximum of 4 times. The models were trained for 50 epochs, and the parameters from the epochs with the best dev set log prob were used for decoding. We trained a Neural Transducer with a three-layer LSTM transducer coupled to a three-layer unidirectional LSTM encoder, and achieved a PER of 20.8% on the TIMIT test set. This model used the LSTM attention mechanism. Alignments were generated from a model that was updated after every 300 steps of momentum updates. Interestingly, the alignments generated by the model are very similar to the alignments produced by a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM) system that we trained using the Kaldi toolkit – even though the model was trained entirely discriminatively. The small differences in alignment correspond to an occasional phoneme emitted slightly later by our model, compared to the GMM-HMM system. We also trained models using alignments generated from the GMM-HMM model trained on Kaldi.
The frame-level alignments from Kaldi were converted into block-level alignments by assigning each phone in the sequence to the block it was last observed in. The same architecture model described above achieved a PER of 19.8% with these alignments. For further exploratory experiments, we used the GMM-HMM alignments as given, to avoid computing the best alignments. Table 1 shows a comparison of our method against a basic implementation of a sequence-to-sequence model that produces outputs for each block independent of the other blocks, and concatenates the produced sequences. Here, the sequence-to-sequence model produces the output conditioned on the state of the encoder at the end of the block. Both models used an encoder with two layers of 250 LSTM cells, without attention. The standard sequence-to-sequence model performs significantly worse than our model – the recurrent connections of the transducer across blocks are clearly helpful in improving the accuracy of the model.

Table 1: Impact of maintaining the recurrent state of the transducer across blocks on the PER (median of 3 runs). This table shows that maintaining the state of the transducer across blocks leads to much better results.

W    BLOCK-RECURRENCE    PER
15   No                  34.3
15   Yes                 20.6

Figure 3 shows the impact of block size on the accuracy of the different transducer variants that we used. See Section 3.3 for a description of the {DOT,MLP,LSTM}-attention models. All models used a two-LSTM-layer encoder and a two-LSTM-layer transducer. The model is sensitive to the choice of the block size when no attention is used. However, it can be seen that with an appropriate choice of window size (W=8), the Neural Transducer without attention can match the accuracy of the attention-based Neural Transducers. Further exploration of this configuration should lead to improved results. When attention is used in the transducer, the precise value of the block size becomes less important.
The LSTM-based attention model seems to be more consistent compared to the other attention mechanisms we explored. Since this model performed best with W=25, we used this configuration for subsequent experiments.

⁷Note that TIMIT provides a validation set, called the dev set. We use these terms interchangeably.

[Figure 3: Impact of the number of frames (W) in a block and attention mechanism (no-attention, DOT-attention, MLP-attention, LSTM-attention) on PER. Each number is the median value from three experiments.]

Table 2 explores the impact of the number of layers in the transducer and the encoder on the PER. A three-layer encoder coupled to a three-layer transducer performs best on average. Four-layer transducers produced results with higher spread in accuracy – possibly because of the more difficult optimization involved. Thus, the best average PER we achieved (over 3 runs) was 19.8% on the TIMIT test set. These results could probably be improved with other regularization techniques, as reported by [6], but we did not pursue those avenues in this paper.

Table 2: Impact of depth of encoder and transducer on PER.

# of layers in encoder \ transducer    1    2      3      4
2                                           19.2   18.9   18.8
3                                           18.5   18.2   19.4

For a comparison with previously published sequence-to-sequence models on this task, we used a three-layer bidirectional LSTM encoder with 250 LSTM cells in each direction and achieved a PER of 18.7%. By contrast, the best reported results using previous sequence-to-sequence models are 17.6% [6]. However, this requires controlling overfitting carefully.

5 Discussion

One of the important side-effects of our model using partial conditioning with a blocked transducer is that it naturally alleviates the problem of “losing attention” suffered by sequence-to-sequence models. Because of this, sequence-to-sequence models perform worse on longer utterances [6, 3].
This problem is automatically tackled in our model because each new block automatically shifts the attention monotonically forward. Within a block, the model learns to move attention forward from one step to the next, and the attention mechanism rarely suffers, because both the size of a block and the number of output steps for a block are relatively small. As a result, an error in attention in one block has minimal impact on the predictions at subsequent blocks. Finally, we note that increasing the block size, W, so that it is as large as the input utterance makes the model similar to vanilla end-to-end models [5, 3].

6 Conclusion

We have introduced a new model that uses partial conditioning on inputs to generate output sequences. This allows the model to produce output as input arrives. This is useful for speech recognition systems and will also be crucial for future generations of online speech translation systems. Further, it can be useful for performing transduction over long sequences – something that is possibly difficult for sequence-to-sequence models. We applied the model to a toy task of addition, and to a phone recognition task, and showed that it can produce results comparable to the state of the art from sequence-to-sequence models.

References

[1] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural Machine Translation by Jointly Learning to Align and Translate. In International Conference on Learning Representations, 2015.
[2] Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. End-to-end attention-based large vocabulary speech recognition. arXiv preprint arXiv:1508.04395, 2015.
[3] William Chan, Navdeep Jaitly, Quoc V. Le, and Oriol Vinyals. Listen, attend and spell. arXiv preprint arXiv:1508.01211, 2015.
[4] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio.
Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Conference on Empirical Methods in Natural Language Processing, 2014.
[5] Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. End-to-end Continuous Speech Recognition using Attention-based Recurrent NN: First Results. In NIPS Workshop on Deep Learning and Representation Learning, 2014.
[6] Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. Attention-Based Models for Speech Recognition. In Neural Information Processing Systems, 2015.
[7] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.
[8] Alex Graves. Sequence Transduction with Recurrent Neural Networks. In International Conference on Machine Learning: Representation Learning Workshop, 2012.
[9] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech Recognition with Deep Recurrent Neural Networks. In IEEE International Conference on Acoustics, Speech and Signal Processing, 2013.
[10] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[11] Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
[12] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin A. Riedmiller. Playing Atari with deep reinforcement learning. CoRR, abs/1312.5602, 2013.
[13] Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. Neural programmer: Inducing latent programs with gradient descent.
arXiv preprint arXiv:1511.04834, 2015.
[14] Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.
[15] Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714, 2015.
[16] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439, 2015.
[17] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to Sequence Learning with Neural Networks. In Neural Information Processing Systems, 2014.
[18] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. In NIPS, 2015.
[19] Oriol Vinyals and Quoc V. Le. A neural conversational model. In ICML Deep Learning Workshop, 2015.
[20] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and Tell: A Neural Image Caption Generator. In IEEE Conference on Computer Vision and Pattern Recognition, 2015.
[21] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.
Interaction Networks for Learning about Objects, Relations and Physics

Anonymous Author(s)

Abstract

Reasoning about objects, relations, and physics is central to human intelligence, and a key goal of artificial intelligence. Here we introduce the interaction network, a model which can reason about how objects in complex systems interact, supporting dynamical predictions, as well as inferences about the abstract properties of the system. Our model takes graphs as input, performs object- and relation-centric reasoning in a way that is analogous to a simulation, and is implemented using deep neural networks. We evaluate its ability to reason about several challenging physical domains: n-body problems, rigid-body collision, and non-rigid dynamics. Our results show it can be trained to accurately simulate the physical trajectories of dozens of objects over thousands of time steps, estimate abstract quantities such as energy, and generalize automatically to systems with different numbers and configurations of objects and relations. Our interaction network implementation is the first general-purpose, learnable physics engine, and a powerful general framework for reasoning about objects and relations in a wide variety of complex real-world domains.

1 Introduction

Representing and reasoning about objects, relations and physics is a “core” domain of human common sense knowledge [25], and among the most basic and important aspects of intelligence [27, 15]. Many everyday problems, such as predicting what will happen next in physical environments or inferring underlying properties of complex scenes, are challenging because their elements can be composed in combinatorially many possible arrangements. People can nevertheless solve such problems by decomposing the scenario into distinct objects and relations, and reasoning about the consequences of their interactions and dynamics.
Here we introduce the interaction network – a model that can perform an analogous form of reasoning about objects and relations in complex systems.

Interaction networks combine three powerful approaches: structured models, simulation, and deep learning. Structured models [7] can exploit rich, explicit knowledge of relations among objects, independent of the objects themselves, which supports general-purpose reasoning across diverse contexts. Simulation is an effective method for approximating dynamical systems, predicting how the elements in a complex system are influenced by interactions with one another, and by the dynamics of the system. Deep learning [23, 16] couples generic architectures with efficient optimization algorithms to provide highly scalable learning and inference in challenging real-world settings.

Interaction networks explicitly separate how they reason about relations from how they reason about objects, assigning each task to distinct models which are: fundamentally object- and relation-centric; and independent of the observation modality and task specification (see Model section 2 below and Fig. 1a). This lets interaction networks automatically generalize their learning across variable numbers of arbitrarily ordered objects and relations, and also recompose their knowledge of entities

Submitted to 30th Conference on Neural Information Processing Systems (NIPS 2016). Do not distribute.

Figure 1: Schematic of an interaction network. a. For physical reasoning, the model takes objects and relations as input, reasons about their interactions, and applies the effects and physical dynamics to predict new states. b.
For more complex systems, the model takes as input a graph that represents a system of objects, oj, and relations, ⟨i, j, rk⟩k, instantiates the pairwise interaction terms, bk, and computes their effects, ek, via a relational model, fR(·). The ek are then aggregated and combined with the oj and external effects, xj, to generate input (as cj) for an object model, fO(·), which predicts how the interactions and dynamics influence the objects, p.

and interactions in novel and combinatorially many ways. They take relations as explicit input, allowing them to selectively process different potential interactions for different input data, rather than being forced to consider every possible interaction or those imposed by a fixed architecture.

We evaluate interaction networks by testing their ability to make predictions and inferences about various physical systems, including n-body problems, rigid-body collision, and non-rigid dynamics. Our interaction networks learn to capture the complex interactions that can be used to predict future states and abstract physical properties, such as energy. We show that they can roll out thousands of realistic future state predictions, even when trained only on single-step predictions. We also explore how they generalize to novel systems with different numbers and configurations of elements. Though they are not restricted to physical reasoning, the interaction networks used here represent the first general-purpose learnable physics engine, and even have the potential to learn novel physical systems for which no physics engines currently exist.

Related work  Our model draws inspiration from previous work that reasons about graphs and relations using neural networks.
The “graph neural network” [22] is a framework that shares learning across nodes and edges, the “recursive autoencoder” [24] adapts its processing architecture to exploit an input parse tree, the “neural programmer-interpreter” [21] is a composable neural network that mimics the execution trace of a program, and the “spatial transformer” [11] learns to dynamically modify network connectivity to capture certain types of interactions. Others have explored deep learning of logical and arithmetic relations [26], and relations suitable for visual question-answering [1].

The behavior of our model is similar in spirit to a physical simulation engine [2], which generates sequences of states by repeatedly applying rules that approximate the effects of physical interactions and dynamics on objects over time. The interaction rules are relation-centric, operating on two or more objects that are interacting, and the dynamics rules are object-centric, operating on individual objects and the aggregated effects of the interactions they participate in.

Previous AI work on physical reasoning explored commonsense knowledge, qualitative representations, and simulation techniques for approximating physical prediction and inference [28, 9, 6]. The “NeuroAnimator” [8] was perhaps the first quantitative approach to learning physical dynamics, by training neural networks to predict and control the state of articulated bodies. Ladický et al. [14] recently used regression forests to learn fluid dynamics. Recent advances in convolutional neural networks (CNNs) have led to efforts that learn to predict coarse-grained physical dynamics from images [19, 17, 18]. Notably, Fragkiadaki et al. [5] used CNNs to predict and control a moving ball from an image centered at its coordinates. Mottaghi et al. [20] trained CNNs to predict the 3D trajectory of an object after an external impulse is applied. Wu et al.
[29] used CNNs to parse objects from images, which were then input to a physics engine that supported prediction and inference.

2 Model

Definition  To describe our model, we use physical reasoning as an example (Fig. 1a), and build from a simple model to the full interaction network (abbreviated IN). To predict the dynamics of a single object, one might use an object-centric function, fO, which inputs the object’s state, ot, at time t, and outputs a future state, ot+1. If two or more objects are governed by the same dynamics, fO could be applied to each, independently, to predict their respective future states. But if the objects interact with one another, then fO is insufficient because it does not capture their relationship. Assuming two objects and one directed relationship, e.g., a fixed object attached by a spring to a freely moving mass, the first (the sender, o1) influences the second (the receiver, o2) via their interaction. The effect of this interaction, et+1, can be predicted by a relation-centric function, fR. The fR takes as input o1, o2, as well as attributes of their relationship, r, e.g., the spring constant. The fO is modified so it can input both et+1 and the receiver’s current state, o2,t, enabling the interaction to influence its future state, o2,t+1:

  et+1 = fR(o1,t, o2,t, r)
  o2,t+1 = fO(o2,t, et+1)

The above formulation can be expanded to larger and more complex systems by representing them as a graph, G = ⟨O, R⟩, where the nodes, O, correspond to the objects, and the edges, R, to the relations (see Fig. 1b). We assume an attributed, directed multigraph because the relations have attributes, and there can be multiple distinct relations between two objects (e.g., rigid and magnetic interactions).
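As a concrete, non-learned illustration of this two-object formulation, the following sketch substitutes a hand-written spring force for fR and an Euler-style update for fO; the functions, state encoding, and constants are our stand-ins, not the paper's trained models:

```python
# A non-learned illustration of the two-object case above: f_R predicts the
# interaction effect and f_O applies it to the receiver's state. The spring
# force and the Euler-style update are hand-written stand-ins (our choices),
# not the learned MLPs from the paper.

def f_R(o_sender, o_receiver, r):
    # effect of the interaction on the receiver; r plays the spring constant
    return r * (o_sender["pos"] - o_receiver["pos"])

def f_O(o_receiver, effect, dt=0.1):
    # fold the effect into the receiver's dynamics for one step
    vel = o_receiver["vel"] + dt * effect
    return {"pos": o_receiver["pos"] + dt * vel, "vel": vel}

o1 = {"pos": 0.0, "vel": 0.0}   # fixed sender (anchor point)
o2 = {"pos": 1.0, "vel": 0.0}   # free receiver (mass on the spring)
e = f_R(o1, o2, r=2.0)          # e_{t+1} = fR(o1,t, o2,t, r)
o2_next = f_O(o2, e)            # o2,t+1  = fO(o2,t, e_{t+1})
print(o2_next)                  # receiver is pulled back toward the anchor
```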
For a system with NO objects and NR relations, the inputs to the IN are:

  O = {oj}j=1...NO ,  R = {⟨i, j, rk⟩k}k=1...NR where i ≠ j and 1 ≤ i, j ≤ NO ,  X = {xj}j=1...NO

The O represents the states of each object. The triplet, ⟨i, j, rk⟩k, represents the k-th relation in the system, from sender, oi, to receiver, oj, with relation attribute, rk. The X represents external effects, such as active control inputs or gravitational acceleration, which we define as not being part of the system, and which are applied to each object separately.

The basic IN is defined as:

  IN(G) = φO(a(G, X, φR(m(G))))                                              (1)

  m(G) = B = {bk}k=1...NR          fR(bk) = ek       φR(B) = E = {ek}k=1...NR
  a(G, X, E) = C = {cj}j=1...NO    fO(cj) = pj       φO(C) = P = {pj}j=1...NO  (2)

The marshalling function, m, rearranges the objects and relations into interaction terms, bk = ⟨oi, oj, rk⟩ ∈ B, one per relation, which correspond to each interaction’s receiver, sender, and relation attributes. The relational model, φR, predicts the effect of each interaction, ek ∈ E, by applying fR to each bk. The aggregation function, a, collects all effects, ek ∈ E, that apply to each receiver object, merges them, and combines them with O and X to form a set of object model inputs, cj ∈ C, one per object. The object model, φO, predicts how the interactions and dynamics influence the objects by applying fO to each cj, and returning the results, pj ∈ P. This basic IN can predict the evolution of states in a dynamical system – for physical simulation, P may equal the future states of the objects, Ot+1.

The IN can also be augmented with an additional component to make abstract inferences about the system. The pj ∈ P, rather than serving as output, can be combined by another aggregation function, g, and input to an abstraction model, φA, which returns a single output, q, for the whole system. We explore this variant in our final experiments that use the IN to predict potential energy.
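The pipeline of Eqs. (1)-(2) can be sketched functionally; in this hedged illustration the relations are (sender, receiver, attribute) triplets and fR, fO are toy closed-form stand-ins of ours, not the paper's MLPs:

```python
# A functional sketch of Eqs. (1)-(2) with toy stand-ins for the learned
# models: relations are (sender, receiver, attribute) triplets, f_R is a
# spring-like pull on the receiver, and f_O adds up state, external effect,
# and aggregated interaction effect. All concrete functions are ours.

def f_R(o_sender, o_receiver, r):
    return r * (o_sender - o_receiver)         # per-relation effect e_k

def f_O(c):
    o, x, e_bar = c                            # object model input c_j
    return o + x + e_bar                       # per-object prediction p_j

def IN(O, R, X):
    B = [(O[i], O[j], r) for (i, j, r) in R]                     # m: marshalling
    E = [f_R(oi, oj, r) for (oi, oj, r) in B]                    # phi_R
    C = [(o, x, sum(e for (_, recv, _), e in zip(R, E) if recv == j))
         for j, (o, x) in enumerate(zip(O, X))]                  # a: aggregation
    return [f_O(c) for c in C]                                   # phi_O

# Objects 0 and 2 both pull object 1 with equal strength; the two summed
# effects cancel, so all states are unchanged:
print(IN([0.0, 1.0, 2.0], [(0, 1, 1.0), (2, 1, 1.0)], [0.0, 0.0, 0.0]))
# → [0.0, 1.0, 2.0]
```

Note that the aggregation uses summation, which (as discussed next in the text) keeps the model indifferent to the ordering of relations.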
An IN applies the same fR and fO to every bk and cj, respectively, which makes their relational and object reasoning able to handle variable numbers of arbitrarily ordered objects and relations. But one additional constraint must be satisfied to maintain this: the a function must be commutative and associative over the objects and relations. Using summation within a to merge the elements of E into C satisfies this, but division would not.

Here we focus on binary relations, which means there is one interaction term per relation, but another option is to have the interactions correspond to n-th order relations by combining n senders in each bk. The interactions could even have variable order, where each bk includes all sender objects that interact with a receiver, but this would require an fR that can handle variable-length inputs. These possibilities are beyond the scope of this work, but are interesting future directions.

Implementation  The general definition of the IN in the previous section is agnostic to the choice of functions and algorithms, but we now outline a learnable implementation capable of reasoning about complex systems with nonlinear relations and dynamics. We use standard deep neural network building blocks, multilayer perceptrons (MLP), matrix operations, etc., which can be trained efficiently from data using gradient-based optimization, such as stochastic gradient descent.

We define O as a DS × NO matrix, whose columns correspond to the objects’ DS-length state vectors. The relations are a triplet, R = ⟨Rr, Rs, Ra⟩, where Rr and Rs are NO × NR binary matrices which index the receiver and sender objects, respectively, and Ra is a DR × NR matrix whose DR-length columns represent the NR relations’ attributes. The j-th column of Rr is a one-hot vector which indicates the receiver object’s index; Rs indicates the sender similarly. For the graph in Fig.
1b, Rr = [0 0; 1 1; 0 0] and Rs = [1 0; 0 0; 0 1]. The X is a DX × NO matrix, whose columns are DX-length vectors that represent the external effect applied to each of the NO objects.

The marshalling function, m, computes the matrix products, ORr and ORs, and concatenates them with Ra: m(G) = [ORr; ORs; Ra] = B. The resulting B is a (2DS + DR) × NR matrix, whose columns represent the interaction terms, bk, for the NR relations (we denote vertical and horizontal matrix concatenation with a semicolon and comma, respectively). The way m constructs interaction terms can be modified, as described in our Experiments section (3).

The B is input to φR, which applies fR, an MLP, to each column. The output of fR is a DE-length vector, ek, a distributed representation of the effects. The φR concatenates the NR effects to form the DE × NR effect matrix, E.

The G, X, and E are input to a, which computes the DE × NO matrix product, Ē = E Rr^T, whose j-th column is equivalent to the elementwise sum across all ek whose corresponding relation has receiver object, j. The Ē is concatenated with O and X: a(G, X, E) = [O; X; Ē] = C. The resulting C is a (DS + DX + DE) × NO matrix, whose NO columns represent the object states, external effects, and per-object aggregate interaction effects.

The C is input to φO, which applies fO, another MLP, to each of the NO columns. The output of fO is a DP-length vector, pj, and φO concatenates them to form the output matrix, P.

To infer abstract properties of a system, an additional φA is appended and takes P as input. The g aggregation function performs an elementwise sum across the columns of P to return a DP-length vector, P̄. The P̄ is input to φA, another MLP, which returns a DA-length vector, q, that represents an abstract, global property of the system.
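One matrix-form step can be sketched with NumPy; the shapes follow the text, while the specific dimensions, the random linear maps standing in for the trained MLPs, and the tanh nonlinearity are illustrative choices of ours:

```python
# A NumPy sketch of one matrix-form IN step, with random linear maps standing
# in for the trained MLPs fR and fO. The shapes follow the text; the specific
# dimensions (and the tanh nonlinearity) are illustrative choices of ours.
import numpy as np

rng = np.random.default_rng(0)
N_O, N_R = 3, 2                          # objects and relations, as in Fig. 1b
D_S, D_R, D_X, D_E, D_P = 4, 1, 2, 5, 2  # state/relation/external/effect/output dims

O = rng.standard_normal((D_S, N_O))      # object states
Rr = np.array([[0, 0], [1, 1], [0, 0]])  # receiver index matrix (one-hot columns)
Rs = np.array([[1, 0], [0, 0], [0, 1]])  # sender index matrix (one-hot columns)
Ra = rng.standard_normal((D_R, N_R))     # relation attributes
X = rng.standard_normal((D_X, N_O))      # external effects

W_R = rng.standard_normal((D_E, 2 * D_S + D_R))    # stand-in for fR
W_O = rng.standard_normal((D_P, D_S + D_X + D_E))  # stand-in for fO

B = np.vstack([O @ Rr, O @ Rs, Ra])      # m(G): (2*D_S + D_R) x N_R
E = np.tanh(W_R @ B)                     # phi_R: effects, D_E x N_R
E_bar = E @ Rr.T                         # sum effects per receiver: D_E x N_O
C = np.vstack([O, X, E_bar])             # a(G, X, E): (D_S + D_X + D_E) x N_O
P = W_O @ C                              # phi_O: predictions, D_P x N_O
print(P.shape)                           # → (2, 3): one D_P-length prediction per object
```

Because both relations in this example have object 2 as receiver, the other columns of Ē are zero, illustrating how E Rr^T routes summed effects only to the objects that actually receive interactions.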
Training an IN requires optimizing an objective function over the learnable parameters of φR and φO. Note that m and a involve matrix operations that do not contain learnable parameters.

Because φR and φO are shared across all relations and objects, respectively, training them is statistically efficient. This is similar to CNNs, which are very efficient due to their weight-sharing scheme. A CNN treats a local neighborhood of pixels as related, interacting entities: each pixel is effectively a receiver object and its neighboring pixels are senders. The convolution operator is analogous to φR, where fR is the local linear/nonlinear kernel applied to each neighborhood. Skip connections, recently popularized by residual networks, are loosely analogous to how the IN inputs O to both φR and φO, though in CNNs relation- and object-centric reasoning are not delineated. But because CNNs exploit local interactions in a fixed way which is well-suited to the specific topology of images, capturing longer-range dependencies requires either broad, insensitive convolution kernels, or deep stacks of layers, in order to implement sufficiently large receptive fields. The IN avoids this restriction by being able to process arbitrary neighborhoods that are explicitly specified by the R input.

3 Experiments

Physical reasoning tasks  Our experiments explored two types of physical reasoning tasks: predicting future states of a system, and estimating their abstract properties, specifically potential energy. We evaluated the IN’s ability to learn to make these judgments in three complex physical domains: n-body systems; balls bouncing in a box; and strings composed of springs that collide with rigid objects. We simulated the 2D trajectories of the elements of these systems with a physics engine, and recorded their sequences of states. See the Supplementary Material for full details.
In the n-body domain, such as solar systems, all n bodies exert distance- and mass-dependent gravitational forces on each other, so there were n(n−1) relations input to our model. Across simulations, the objects’ masses varied, while all other fixed attributes were held constant. The training scenes always included 6 bodies, and for testing we used 3, 6, and 12 bodies. In half of the systems, bodies were initialized with velocities that would cause stable orbits, if not for the interactions with other objects; the other half had random velocities.

In the bouncing balls domain, moving balls could collide with each other and with static walls. The walls were represented as objects whose shape attribute represented a rectangle, and whose inverse-mass was 0. The relations input to the model were between the n objects (which included the walls), for n(n−1) relations. Collisions are more difficult to simulate than gravitational forces, and the data distribution was much more challenging: each ball participated in a collision on less than 1% of the steps, following straight-line motion at all other times. The model thus had to learn that despite there being a rigid relation between two objects, they only had meaningful collision interactions when they were in contact. We also varied more of the object attributes – shape, scale and mass (as before) – as well as the coefficient of restitution, which was a relation attribute. Training scenes contained 6 balls inside a box with 4 variably sized walls, and test scenes contained either 3, 6, or 9 balls.

The string domain used two types of relations (indicated in rk), relation structures that were more sparse and specific than all-to-all, as well as variable external effects. Each scene contained a string, comprised of masses connected by springs, and a static, rigid circle positioned below the string.
The n masses had spring relations with their immediate neighbors (2(n−1)), and all masses had rigid relations with the rigid object (2n). Gravitational acceleration, with a magnitude that was varied across simulation runs, was applied so that the string always fell, usually colliding with the static object. The gravitational acceleration was an external input (not to be confused with the gravitational attraction relations in the n-body experiments). Each training scene contained a string with 15 point masses, and test scenes contained either 5, 15, or 30 mass strings. In training, one of the point masses at the end of the string, chosen at random, was always held static, as if pinned to the wall, while the other masses were free to move. In the test conditions, we also included strings that had both ends pinned, and no ends pinned, to evaluate generalization.

Our model takes as input the state of each system, G, decomposed into the objects, O (e.g., n-body objects, balls, walls, point masses that represented string elements), and their physical relations, R (e.g., gravitational attraction, collisions, springs), as well as the external effects, X (e.g., gravitational acceleration). Each object state, oj, could be further divided into a dynamic state component (e.g., position and velocity) and a static attribute component (e.g., mass, size, shape). The relation attributes, Ra, represented quantities such as the coefficient of restitution and spring constant. The input represented the system at the current time. The prediction experiment’s target outputs were the velocities of the objects on the subsequent time step, and the energy estimation experiment’s targets were the potential energies of the system on the current time step. We also generated multi-step rollouts for the prediction experiments (Fig. 2), to assess the model’s effectiveness at creating visually realistic simulations.
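The relation structures described for these domains can be enumerated mechanically; a small sketch (helper names are ours, and the string's rigid relations with the circle are omitted for brevity):

```python
# A hedged sketch (helper names are ours) of enumerating (sender, receiver)
# index pairs for two of the domains: all-to-all n-body, and a string whose
# neighbouring masses interact in both directions. The string's additional
# rigid relations with the static circle are omitted for brevity.

def n_body_relations(n):
    # every ordered pair of distinct bodies: n(n-1) directed relations
    return [(i, j) for i in range(n) for j in range(n) if i != j]

def string_relations(n):
    # springs between immediate neighbours, in both directions: 2(n-1)
    forward = [(i, i + 1) for i in range(n - 1)]
    return forward + [(j, i) for (i, j) in forward]

print(len(n_body_relations(6)))   # → 30, i.e. n(n-1) for n = 6
print(string_relations(3))        # → [(0, 1), (1, 2), (1, 0), (2, 1)]
```

Each pair would then be paired with its relation attribute rk (e.g., a spring constant) and turned into the one-hot columns of Rr and Rs from the Implementation section.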
The output velocity, vt, on time step t became the input velocity on t+1, and the position at t+1 was updated by the predicted velocity at t.

Data  Each of the training, validation, and test data sets were generated by simulating 2000 scenes over 1000 time steps, and randomly sampling 1 million, 200k, and 200k one-step input/target pairs, respectively. The model was trained for 2000 epochs, randomly shuffling the data indices between each. We used mini-batches of 100, and balanced their data distributions so the targets had similar per-element statistics. The performance reported in the Results was measured on held-out test data.

We explored adding a small amount of Gaussian noise to 20% of the data’s input positions and velocities during the initial phase of training, which was reduced to 0% from epochs 50 to 250. The noise std. dev. was 0.05× the std. dev. of each element’s values across the dataset. It allowed the model to experience physically impossible states which could not have been generated by the physics engine, and learn to project them back to nearby, possible states. Our error measure did not reflect clear differences with or without noise, but rollouts from models trained with noise were slightly more visually realistic, and static objects were less subject to drift over many steps.

Model architecture  The fR and fO MLPs contained multiple hidden layers of linear transforms plus biases, followed by rectified linear units (ReLUs), and an output layer that was a linear transform plus bias. The best model architecture was selected by a grid search over layer sizes and depths.

Figure 2: Prediction rollouts. Each column contains three panels of three video frames (with motion blur), each spanning 1000 rollout steps. Columns 1-2 are ground truth and model predictions for n-body systems, 3-4 are bouncing balls, and 5-6 are strings. Each model column was generated by a single model, trained on the underlying states of a system of the size in the top panel. The middle and bottom panels show its generalization to systems of different sizes and structure. For n-body, the training was on 6 bodies, and generalization was to 3 and 12 bodies. For balls, the training was on 6 balls, and generalization was to 3 and 9 balls. For strings, the training was on 15 masses with 1 end pinned, and generalization was to 30 masses with 0 and 2 ends pinned.

All inputs (except Rr and Rs) were normalized by centering at the median and rescaling the 5th and 95th percentiles to -1 and 1. All training objectives and test measures used mean squared error (MSE) between the model’s prediction and the ground truth target.

All prediction experiments used the same architecture, with parameters selected by a hyperparameter search. The fR MLP had four 150-length hidden layers, and output length DE = 50. The fO MLP had one 100-length hidden layer, and output length DP = 2, which targeted the x, y-velocity. The m and a were customized so that the model was invariant to the absolute positions of objects in the scene. The m concatenated three terms for each bk: the difference vector between the dynamic states of the receiver and sender, the concatenated receiver and sender attribute vectors, and the relation attribute vector. The a only outputs the velocities, not the positions, for input to φO.

The energy estimation experiments used the IN from the prediction experiments with an additional φA MLP which had one 25-length hidden layer. Its P inputs’ columns were length DP = 10, and its output length was DA = 1.

We optimized the parameters using Adam [13], with a waterfall schedule that began with a learning rate of 0.001 and down-scaled the learning rate by 0.8 each time the validation error, estimated over a window of 40 epochs, stopped decreasing.
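The waterfall schedule can be sketched as follows (a hedged illustration of the rule described above; the class name and the plateau bookkeeping are our assumptions, not the authors' code):

```python
# A sketch of the "waterfall" schedule described above (the class name and
# plateau bookkeeping are our assumptions, not the authors' code): start at
# 0.001 and multiply by 0.8 whenever the validation error fails to improve
# for a full window of epochs.

class WaterfallSchedule:
    def __init__(self, lr=1e-3, factor=0.8, window=40):
        self.lr, self.factor, self.window = lr, factor, window
        self.best = float("inf")      # best validation error seen so far
        self.since_best = 0           # epochs since the last improvement

    def step(self, val_error):
        if val_error < self.best:
            self.best, self.since_best = val_error, 0
        else:
            self.since_best += 1
            if self.since_best >= self.window:   # plateaued for a full window
                self.lr *= self.factor
                self.since_best = 0
        return self.lr

sched = WaterfallSchedule(window=3)              # tiny window for the demo
lrs = [sched.step(e) for e in [1.0, 0.9, 0.9, 0.9, 0.9]]
print(lrs)                                       # final entry has dropped by 0.8x
```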
Two forms of L2 regularization were explored: one applied to the effects, E, and another to the model parameters. Regularizing E improved generalization to different numbers of objects and reduced drift over many rollout steps. It likely incentivizes sparser communication between the φR and φO, prompting them to operate more independently. Regularizing the parameters generally improved performance and reduced overfitting. Both penalty factors were selected by a grid search.

Few competing models are available in the literature to compare our model against, but we considered several alternatives: a constant velocity baseline which output the input velocity; an MLP baseline, with two 300-length hidden layers, which took as input a flattened vector of all of the input data; and a variant of the IN with the φR component removed (the interaction effects, E, was set to a 0-matrix).

4 Results

Prediction experiments  Our results show that the IN can predict the next-step dynamics of our task domains very accurately after training, with orders of magnitude lower test error than the alternative models (Fig. 3a, d and g, and Table 1). Because the dynamics of each domain depended crucially on interactions among objects, the IN was able to learn to exploit these relationships for its predictions. The dynamics-only IN had no mechanism for processing interactions, and performed similarly to the constant velocity model. The baseline MLP’s connectivity makes it possible, in principle, for it to learn the interactions, but that would require learning how to use the relation indices to selectively process the interactions. It would also not benefit from sharing its learning across relations and objects, instead being forced to approximate the interactive dynamics in parallel for each object. The IN also generalized well to systems with fewer and greater numbers of objects (Figs.
3b-c, e-f and h-k, and Table SM1 in Supp. Mat.). For each domain, we selected the best IN model from the system size on which it was trained, and evaluated its MSE on a different system size. When tested on smaller n-body and spring systems than those on which it was trained, its performance actually exceeded a model trained on the smaller system. This may be due to the model's ability to exploit its greater experience with how objects and relations behave, available in the more complex system.

We also found that the IN trained on single-step predictions can be used to simulate trajectories over thousands of steps very effectively, often tracking the ground truth closely, especially in the n-body and string domains. When rendered into images and videos, the model-generated trajectories are usually visually indistinguishable from those of the ground truth physics engine (Fig. 2; see Supp. Mat. for videos of all images). This is not to say that given the same initial conditions, they cohere perfectly: the dynamics are highly nonlinear and imperceptible prediction errors by the model can rapidly lead to large differences in the systems' states. But the incoherent rollouts do not violate people's expectations, and might be roughly on par with people's understanding of these domains.

Estimating abstract properties We trained an abstract-estimation variant of our model to predict potential energies in the n-body and string domains (the ball domain's potential energies were always 0), and found it was much more accurate (n-body MSE 1.4, string MSE 1.1) than the MLP baseline (n-body MSE 19, string MSE 425). The IN presumably learns the gravitational and spring potential energy functions, applies them to the relations in their respective domains, and combines the results.

Figure 3: Prediction experiment accuracy and generalization. Each colored bar represents the MSE between a model's predicted velocity and the ground truth physics engine's (the y-axes are log-scaled). Subplots (a-c) show n-body performance, (d-f) show balls, and (g-k) show string. The leftmost subplot (a, d, g) for each domain compares the constant velocity model (black), baseline MLP (grey), dynamics-only IN (red), and full IN (blue). The other panels show the IN's generalization performance to different numbers and configurations of objects, as indicated by the subplot titles. For the string systems, the numbers correspond to (the number of masses, how many ends were pinned).

Table 1: Prediction experiment MSEs

Domain    Constant velocity    Baseline    Dynamics-only IN    IN
n-body    82                   79          76                  0.25
Balls     0.074                0.072       0.074               0.0020
String    0.018                0.016       0.017               0.0011

5 Discussion

We introduced interaction networks as a flexible and efficient model for explicit reasoning about objects and relations in complex systems. Our results provide surprisingly strong evidence of their ability to learn accurate physical simulations and generalize their training to novel systems with different numbers and configurations of objects and relations. They could also learn to infer abstract properties of physical systems, such as potential energy. The alternative models we tested performed much more poorly, with orders of magnitude greater error.
Simulation over rich mental models is thought to be a crucial mechanism of how humans reason about physics and other complex domains [4, 12, 10], and Battaglia et al. [3] recently posited a simulation-based "intuitive physics engine" model to explain human physical scene understanding. Our interaction network implementation is the first learnable physics engine that can scale up to real-world problems, and is a promising template for new AI approaches to reasoning about other physical and mechanical systems, scene understanding, social perception, hierarchical planning, and analogical reasoning.

In the future, it will be important to develop techniques that allow interaction networks to handle very large systems with many interactions, such as by culling interaction computations that will have negligible effects. The interaction network may also serve as a powerful model for model-predictive control, inputting active control signals as external effects; because it is differentiable, it naturally supports gradient-based planning. It will also be important to prepend a perceptual front-end that can infer objects and relations from raw observations, which can then be provided as input to an interaction network that can reason about the underlying structure of a scene. By adapting the interaction network into a recurrent neural network, even more accurate long-term predictions might be possible, though preliminary tests found little benefit beyond its already-strong performance. By modifying the interaction network to be a probabilistic generative model, it may also support probabilistic inference over unknown object properties and relations.
By combining three powerful tools from the modern machine learning toolkit (relational reasoning over structured knowledge, simulation, and deep learning), interaction networks offer flexible, accurate, and efficient learning and inference in challenging domains. Decomposing complex systems into objects and relations, and reasoning about them explicitly, provides for combinatorial generalization to novel contexts, one of the most important future challenges for AI, and a crucial step toward closing the gap between how humans and machines think.

References

[1] J Andreas, M Rohrbach, T Darrell, and D Klein. Learning to compose neural networks for question answering. NAACL, 2016.
[2] D Baraff. Physically based modeling: Rigid body simulation. SIGGRAPH Course Notes, ACM SIGGRAPH, 2(1):2–1, 2001.
[3] PW Battaglia, JB Hamrick, and JB Tenenbaum. Simulation as an engine of physical scene understanding. Proceedings of the National Academy of Sciences, 110(45):18327–18332, 2013.
[4] K.J.W. Craik. The nature of explanation. Cambridge University Press, 1943.
[5] K Fragkiadaki, P Agrawal, S Levine, and J Malik. Learning visual predictive models of physics for playing billiards. ICLR, 2016.
[6] F. Gardin and B. Meltzer. Analogical representations of naive physics. Artificial Intelligence, 38(2):139–159, 1989.
[7] Z. Ghahramani. Probabilistic machine learning and artificial intelligence. Nature, 521(7553):452–459, 2015.
[8] R Grzeszczuk, D Terzopoulos, and G Hinton. Neuroanimator: Fast neural network emulation and control of physics-based models. In Proceedings of the 25th annual conference on Computer graphics and interactive techniques, pages 9–20. ACM, 1998.
[9] P.J Hayes. The naive physics manifesto. Université de Genève, Institut pour les études sémantiques et cognitives, 1978.
[10] M. Hegarty. Mechanical reasoning by mental simulation. TICS, 8(6):280–285, 2004.
[11] M Jaderberg, K Simonyan, and A Zisserman. Spatial transformer networks. In NIPS, pages 2008–2016, 2015.
[12] P.N. Johnson-Laird. Mental models: towards a cognitive science of language, inference, and consciousness, volume 6. Cambridge University Press, 1983.
[13] D. Kingma and J. Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[14] L Ladický, S Jeong, B Solenthaler, M Pollefeys, and M Gross. Data-driven fluid simulations using regression forests. ACM Transactions on Graphics (TOG), 34(6):199, 2015.
[15] B Lake, T Ullman, J Tenenbaum, and S Gershman. Building machines that learn and think like people. arXiv:1604.00289, 2016.
[16] Y LeCun, Y Bengio, and G Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
[17] A Lerer, S Gross, and R Fergus. Learning physical intuition of block towers by example. arXiv:1603.01312, 2016.
[18] W Li, S Azimi, A Leonardis, and M Fritz. To fall or not to fall: A visual approach to physical stability prediction. arXiv:1604.00066, 2016.
[19] R Mottaghi, H Bagherinezhad, M Rastegari, and A Farhadi. Newtonian image understanding: Unfolding the dynamics of objects in static images. arXiv:1511.04048, 2015.
[20] R Mottaghi, M Rastegari, A Gupta, and A Farhadi. "What happens if..." learning to predict the effect of forces in images. arXiv:1603.05600, 2016.
[21] SE Reed and N de Freitas. Neural programmer-interpreters. ICLR, 2016.
[22] F. Scarselli, M. Gori, A.C. Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Trans. Neural Networks, 20(1):61–80, 2009.
[23] J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
[24] R Socher, E Huang, J Pennin, C Manning, and A Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS, pages 801–809, 2011.
[25] E Spelke, K Breinlinger, J Macomber, and K Jacobson.
Origins of knowledge. Psychol. Rev., 99(4):605–632, 1992.
[26] I Sutskever and GE Hinton. Using matrices to model symbolic relationships. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, NIPS 21, pages 1593–1600. 2009.
[27] J.B. Tenenbaum, C. Kemp, T.L. Griffiths, and N.D. Goodman. How to grow a mind: Statistics, structure, and abstraction. Science, 331(6022):1279, 2011.
[28] P Winston and B Horn. The psychology of computer vision, volume 73. McGraw-Hill New York, 1975.
[29] J Wu, I Yildirim, JJ Lim, B Freeman, and J Tenenbaum. Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. In NIPS, pages 127–135, 2015.
2016
Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods

Yuanzhi Li, Department of Computer Science, Princeton University, Princeton, NJ, 08450, yuanzhil@cs.princeton.edu
Andrej Risteski, Department of Computer Science, Princeton University, Princeton, NJ, 08450, risteski@cs.princeton.edu

Abstract

The well-known maximum-entropy principle due to Jaynes, which states that, given mean parameters, the maximum-entropy distribution matching them is in an exponential family, has been very popular in machine learning due to its "Occam's razor" interpretation. Unfortunately, calculating the potentials in the maximum-entropy distribution is intractable [BGS14]. We provide computationally efficient versions of this principle when the mean parameters are pairwise moments: we design distributions that approximately match given pairwise moments, while having entropy which is comparable to that of the maximum-entropy distribution matching those moments. We additionally provide surprising applications of the approximate maximum entropy principle to designing provable variational methods for partition function calculations for Ising models without any assumptions on the potentials of the model. More precisely, we show that we can get approximation guarantees for the log-partition function comparable to those in the low-temperature limit, which is the setting of optimization of quadratic forms over the hypercube ([AN06]).

1 Introduction

Maximum entropy principle The maximum entropy principle [Jay57] states that given mean parameters, i.e. E_µ[φ_t(x)] for a family of functionals φ_t(x), t ∈ [1, T], where µ is a distribution over the hypercube {−1, 1}^n, the entropy-maximizing distribution µ is an exponential family distribution, i.e. µ(x) ∝ exp(Σ_{t=1}^T J_t φ_t(x)) for some potentials J_t, t ∈ [1, T].
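As a small concrete illustration of the principle (a toy check of ours, not from the paper): on {−1, 1}³ with pairwise functionals φ_{i,j}(x) = x_i x_j, the Gibbs distribution µ(x) ∝ exp(Σ_{i<j} J_{i,j} x_i x_j) maximizes entropy among all distributions with the same pairwise moments, which can be verified numerically by perturbing it inside the moment-matching affine subspace:

```python
import numpy as np
from itertools import product

xs = np.array(list(product([-1, 1], repeat=3)))          # the 8 states of {-1,1}^3
J = np.array([0.5, -0.3, 0.2])                           # potentials for pairs (0,1), (0,2), (1,2)
pairs = [(0, 1), (0, 2), (1, 2)]
phi = np.stack([xs[:, i] * xs[:, j] for i, j in pairs])  # functionals, shape (3, 8)
p = np.exp(J @ phi)
p /= p.sum()                                             # Gibbs distribution

def entropy(q):
    return -(q * np.log(q)).sum()

# Feasible directions: keep normalization and all three pairwise moments fixed.
A = np.vstack([np.ones(8), phi])
null = np.linalg.svd(A)[2][A.shape[0]:]                  # basis of A's null space
for d in null:
    q = p + 1e-3 * d                                     # same moments, still a distribution
    assert np.allclose(A @ q, A @ p)
    assert entropy(q) < entropy(p)                       # entropy strictly decreases
```

Because entropy is strictly concave and the moment constraints are affine, the exponential-family point is the unique maximizer, so every feasible perturbation lowers the entropy.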
This principle has been one of the reasons for the popularity of graphical models in machine learning: the "maximum entropy" assumption is interpreted as making "minimal assumptions" on the distribution beyond what is known about it. However, this principle is problematic from a computational point of view. Due to results of [BGS14, SV14], the potentials J_t of the Ising model are in many cases impossible to estimate well in polynomial time, unless NP = RP; so merely getting the description of the maximum-entropy distribution is already hard. Moreover, in order to extract useful information about this distribution, we would usually also like to at least be able to sample efficiently from it, which is typically NP-hard or even #P-hard.

¹There is a more general way to state this principle over an arbitrary domain, not just the hypercube, but for clarity in this paper we will focus on the hypercube only.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

In this paper we address this problem in certain cases. We provide a "bi-criteria" approximation for the special case where the functionals φ_t(x) are φ_{i,j}(x) = x_i x_j, i.e. pairwise moments: we produce an efficiently sampleable distribution over the hypercube which matches these moments up to multiplicative constant factors, and has entropy at most a constant factor smaller than the entropy of the maximum-entropy distribution.² Furthermore, the distribution which achieves this is very natural: the sign of a multivariate normal variable. This provides a theoretical explanation for the phenomenon observed by the computational neuroscience community [BB07] that this distribution (there named the dichotomized Gaussian) has near-maximum entropy.

Variational methods The above results also allow us to get results for a seemingly unrelated problem: approximating the partition function Z = Σ_{x∈{−1,1}^n} exp(Σ_{t=1}^T J_t φ_t(x)) of a member of an exponential family.
The reason this task is important is that it is tied to calculating marginals. One of the ways this task is solved is variational methods: namely, expressing log Z as an optimization problem. While there is a plethora of work on variational methods of many flavors (mean field, Bethe/Kikuchi relaxations, TRBP, etc.; for a survey, see [WJ08]), they typically come either with no guarantees, or with guarantees in very constrained cases (e.g. loopless graphs; graphs with large girth, etc. [WJW03, WJW05]). While this is a rich area of research, the following extremely basic research question has not been answered:

What is the best approximation guarantee on the partition function in the worst case (with no additional assumptions on the potentials)?

In the low-temperature limit, i.e. when |J_t| → ∞, we have log Z → max_{x∈{−1,1}^n} Σ_{t=1}^T J_t φ_t(x), i.e. the question reduces to pure optimization. In this regime, the question has very satisfying answers for many families φ_t(x). One classical example is when the functionals are φ_{i,j}(x) = x_i x_j. In the graphical model community, these are known as Ising models, and in the optimization community this is the problem of optimizing quadratic forms, which has been studied by [CW04, AN06, AMMN06]. In the optimization version, the previous papers showed that in the worst case one can get an O(log n) multiplicative approximation, and that unless P = NP, one cannot get better than constant factor approximations. In the finite-temperature version, it is known that it is NP-hard to achieve a (1 + ε) factor approximation to the partition function (i.e. construct an FPRAS) [SS12], but nothing is known about coarser approximations. We prove in this paper, informally, that one can get comparable multiplicative guarantees on the log-partition function in the finite-temperature case as well, using the tools and insights we develop on the maximum entropy principles.
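The low-temperature reduction mentioned above is easy to see numerically (an illustrative check of ours, not from the paper): scaling the potentials by β, the normalized log-partition function (1/β) log Z(βJ) approaches max_x Σ_{i,j} J_{i,j} x_i x_j, with a gap of at most (n log 2)/β:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 4
J = rng.normal(size=(n, n))
J = (J + J.T) / 2                                # symmetric random potentials
xs = np.array(list(product([-1, 1], repeat=n)))  # all 2^n configurations
q = np.einsum('ki,ij,kj->k', xs, J, xs)          # quadratic form per configuration
opt = q.max()                                    # the optimization value

gaps = []
for beta in (1.0, 10.0, 100.0):
    # stable log-sum-exp evaluation of log Z(beta * J)
    logZ = beta * opt + np.log(np.exp(beta * (q - opt)).sum())
    gaps.append(logZ / beta - opt)
# gaps shrink toward 0 as beta grows, bounded above by n*log(2)/beta
```

The bound follows from log Z(βJ) ≤ β·opt + log 2^n, and the gap stays positive because Z sums at least one term attaining the maximum.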
Our methods are extremely generic, and likely to apply to many other exponential families where algorithms based on linear/semidefinite programming relaxations are known to give good guarantees in the optimization regime.

2 Statements of results and prior work

Approximate maximum entropy The main theorem in this section is the following one.

Theorem 2.1. For any covariance matrix Σ of a centered distribution µ : {−1, 1}^n → R, i.e. E_µ[x_i x_j] = Σ_{i,j} and E_µ[x_i] = 0, there is an efficiently sampleable distribution µ̃, which can be sampled as sign(g), where g ∼ N(0, Σ + βI), which satisfies

    (G/(1 + β)) Σ_{i,j} ≤ E_µ̃[X_i X_j] ≤ (1/(1 + β)) Σ_{i,j}

and has entropy H(µ̃) ≥ (n/25) (3^{1/4}√β − 1)² / (√3 β), for any β ≥ 1/√3.

There are two prior works on computational issues relating to maximum entropy principles, both proving hardness results. [BGS14] considers the "hard-core" model, where the functionals φ_t are such that the distribution µ(x) puts zero mass on configurations x which are not independent sets with respect to some graph G.

²In fact, we produce a distribution with entropy Ω(n), which implies the latter claim since the maximum entropy of any distribution over {−1, 1}^n is at most n.

They show that unless NP = RP, there is no FPRAS for calculating the potentials J_t given the mean parameters E_µ[φ_t(x)]. [SV14] prove an equivalence between calculating the mean parameters and calculating partition functions. More precisely, they show that given an oracle that can calculate the mean parameters up to a (1 + ε) multiplicative factor in time O(poly(1/ε)), one can calculate the partition function of the same exponential family up to a (1 + O(poly(ε))) multiplicative factor, in time O(poly(1/ε)). Note that the ε in this work potentially needs to be polynomially small in n (i.e. an oracle that can calculate the mean parameters to a fixed multiplicative constant cannot be used).
Both results prove hardness for fine-grained approximations to the maximum entropy principle, and ask for outputting approximations to the mean parameters. Our result circumvents these hardness results by providing a distribution which is not in the maximum-entropy exponential family, and which is allowed to only approximately match the moments as well. To the best of our knowledge, such an approximation, while very natural, has not been considered in the literature.

Provable variational methods The main theorems in this section concern the approximation factor that can be achieved by degree-2 pseudo-moment relaxations of the standard variational principle due to Gibbs ([Ell12]). As outlined before, we will be concerned with a particularly popular exponential family: Ising models. We will prove the following three results:

Theorem 2.2 (Ferromagnetic Ising, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of 50, the value of log Z, where Z is the partition function of the exponential distribution µ(x) ∝ exp(Σ_{i,j} J_{i,j} x_i x_j) for J_{i,j} > 0.

Theorem 2.3 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of O(log n), the value of log Z, where Z is the partition function of the exponential distribution µ(x) ∝ exp(Σ_{i,j} J_{i,j} x_i x_j).

Theorem 2.4 (Ising model, informal). There is a convex programming relaxation based on degree-2 pseudo-moments that calculates, up to a multiplicative approximation factor of O(log χ(G)), the value of log Z, where Z is the partition function of the exponential distribution µ(x) ∝ exp(Σ_{i,j∈E(G)} J_{i,j} x_i x_j), where G = (V(G), E(G)) is a graph with chromatic number χ(G).³
While a lot of work has been done on variational methods in general (see the survey by [WJ08] for a detailed overview), to the best of our knowledge nothing is known about the worst-case guarantee that we are interested in here. Moreover, other than a recent paper by [Ris16], no other work has provided provable bounds for variational methods that proceed via a convex relaxation and a rounding thereof.⁴ [Ris16] provides guarantees in the case of Ising models that are also based on pseudo-moment relaxations of the variational principle, albeit only in the special case when the graph is "dense" in a suitably defined sense.⁵ The results there are very specific to the density assumption and cannot be adapted to our worst-case setting.

Finally, we mention that in the special case of ferromagnetic Ising models, an algorithm based on MCMC was provided by [JS93], which can give an approximation factor of (1 + ε) to the partition function and runs in time O(n¹¹ poly(1/ε)). In spite of this, the focus of this part of our paper is to provide understanding of variational methods in certain cases, as they continue to be popular in practice for their faster running time compared to MCMC-based methods, but are theoretically much more poorly studied.

³Theorem 2.4 is strictly more general than Theorem 2.3; however, the proof of Theorem 2.3 uses less heavy machinery and is illuminating enough that we feel it merits being presented as a separate theorem.

⁴In some sense, it is possible to give provable bounds for Bethe-entropy based relaxations via analyzing belief propagation directly, which has been done in cases where there is correlation decay and the graph is locally tree-like. [WJ08] has a detailed overview of such results.

⁵More precisely, they prove that in the case when |J_{i,j}| ≤ (Δ/n²) Σ_{i,j} |J_{i,j}| for all i, j, one can get an additive ε(Σ_{i,j} J_{i,j}) approximation to log Z in time n^{O(Δ/ε²)}.
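The sampler behind Theorem 2.1 is simple enough to check empirically. The snippet below (a toy check with our own numbers, not from the paper) draws sign(N(0, Σ + βI)) for a 2×2 covariance and verifies that the empirical pairwise moment lands between (G/(1+β))Σ₁₂ and (1/(1+β))Σ₁₂:

```python
import numpy as np

def sample_signed_gaussian(Sigma, beta, n_samples, rng):
    """The distribution of Theorem 2.1: the sign of N(0, Sigma + beta*I)."""
    n = Sigma.shape[0]
    g = rng.multivariate_normal(np.zeros(n), Sigma + beta * np.eye(n),
                                size=n_samples)
    return np.sign(g)

rng = np.random.default_rng(1)
rho, beta = 0.8, 1.0
Sigma = np.array([[1.0, rho], [rho, 1.0]])
X = sample_signed_gaussian(Sigma, beta, 1_000_000, rng)
emp = (X[:, 0] * X[:, 1]).mean()           # empirical pairwise moment
G = 2.0 / np.pi                            # min_t (2/pi) arcsin(t)/t, attained as t -> 0
lower, upper = G / (1 + beta) * rho, rho / (1 + beta)
# The exact moment of a sign of a bivariate Gaussian with correlation
# rho/(1+beta) is (2/pi) * arcsin(rho/(1+beta)), which sits inside the bounds.
```

The identity used in the comment, E[sign(g_i) sign(g_j)] = (2/π) arcsin of the correlation, is the same one underlying the Goemans-Williamson analysis that the paper invokes.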
3 Approximate maximum entropy principles

Let us recall the problem we want to solve:

Approximate maximum entropy principles We are given a positive-semidefinite matrix Σ ∈ R^{n×n} with Σ_{i,i} = 1 for all i ∈ [n], which is the covariance matrix of a centered distribution over {−1, 1}^n, i.e. E_µ[x_i x_j] = Σ_{i,j} and E_µ[x_i] = 0 for a distribution µ : {−1, 1}^n → R. We wish to produce a distribution µ̃ : {−1, 1}^n → R with pairwise covariances that match the given ones up to constant factors, and entropy within a constant factor of the maximum-entropy distribution with covariance Σ.⁶

Before stating the result formally, it will be useful to define the following constant:

Definition 3.1. Define the constant G = min_{t∈[−1,1]} (2/π) arcsin(t)/t ≈ 0.64.

We will prove the following main theorem:

Theorem 3.1 (Main, approximate entropy principle). For any positive-semidefinite matrix Σ with Σ_{i,i} = 1 for all i, there is an efficiently sampleable distribution µ̃ : {−1, 1}^n → R, which can be sampled as sign(g), where g ∼ N(0, Σ + βI), which satisfies

    (G/(1 + β)) Σ_{i,j} ≤ E_µ̃[x_i x_j] ≤ (1/(1 + β)) Σ_{i,j}

and has entropy H(µ̃) ≥ (n/25) (3^{1/4}√β − 1)² / (√3 β), where β ≥ 1/√3.

Note that µ̃ is in fact very close to the distribution which is classically used to round semidefinite relaxations for solving the MAX-CUT problem [GW95]. We will prove Theorem 3.1 in two parts: by first lower bounding the entropy of µ̃, and then by bounding its moments.

Theorem 3.2. The entropy of the distribution µ̃ satisfies H(µ̃) ≥ (n/25) (3^{1/4}√β − 1)² / (√3 β) when β ≥ 1/√3.

Proof. Write Σ̃ = Σ + βI. A sample g from N(0, Σ̃) can be produced by sampling g₁ ∼ N(0, Σ), g₂ ∼ N(0, βI) and setting g = g₁ + g₂. The sum of two multivariate normals is again a multivariate normal. Furthermore, the mean of g is 0, and since g₁, g₂ are independent, the covariance of g is Σ + βI = Σ̃. Let us denote by Y = sign(g₁ + g₂) the random variable distributed according to µ̃. We wish to lower bound the entropy of Y.
Toward that goal, denote S := {i ∈ [n] : (g₁)ᵢ² ≤ cD}, for constants c, D to be chosen, and set γ = (c−1)/c. Then we have

    H(Y) ≥ H(Y|S) = Σ_{S⊆[n]} Pr[S = S] H(Y|S = S) ≥ Σ_{S⊆[n], |S|≥γn} Pr[S = S] H(Y|S = S),

where the first inequality follows since conditioning doesn't decrease entropy, and the latter by the non-negativity of entropy. Continuing the calculation,

    Σ_{S⊆[n], |S|≥γn} Pr[S = S] H(Y|S = S) ≥ Σ_{S⊆[n], |S|≥γn} Pr[S = S] min_{S⊆[n], |S|≥γn} H(Y|S = S) = Pr[|S| ≥ γn] min_{S⊆[n], |S|≥γn} H(Y|S = S).

We will lower bound Pr[|S| ≥ γn] first. Notice that E[Σ_{i=1}^n (g₁)ᵢ²] = n, therefore by Markov's inequality,

    Pr[Σ_{i=1}^n (g₁)ᵢ² ≥ Dn] ≤ 1/D.

On the other hand, if Σ_{i=1}^n (g₁)ᵢ² ≤ Dn, then |{i : (g₁)ᵢ² ≥ cD}| ≤ n/c, which means that |{i : (g₁)ᵢ² ≤ cD}| ≥ n − n/c = (c−1)n/c = γn. Putting things together, this means Pr[|S| ≥ γn] ≥ 1 − 1/D.

It remains to lower bound min_{S⊆[n], |S|≥γn} H(Y|S = S). For every S ⊆ [n] with |S| ≥ γn, denote by Y_S the coordinates of Y restricted to S; we get

    H(Y|S = S) ≥ H(Y_S|S = S) ≥ H_∞(Y_S|S = S) = −log(max_{y_S} Pr[Y_S = y_S | S = S])

(where H_∞ is the min-entropy), so we only need to bound max_{y_S} Pr[Y_S = y_S | S = S].

⁶Note that for a distribution over {−1, 1}^n, the maximal entropy a distribution can have is n, which is achieved by the uniform distribution.

We will now, for any y_S, upper bound Pr[Y_S = y_S | S = S]. Recall that the event S = S implies that (g₁)ᵢ² ≤ cD for all i ∈ S. Since g₂ is independent of g₁, we know that for every fixed g ∈ R^n:

    Pr[Y_S = y_S | S = S, g₁ = g] = Π_{i∈S} Pr[sign([g]ᵢ + [g₂]ᵢ) = yᵢ].

For a fixed i ∈ S, consider the term Pr[sign([g]ᵢ + [g₂]ᵢ) = yᵢ]. Without loss of generality, assume [g]ᵢ > 0 (the proof is completely symmetric in the other case). Then, since [g]ᵢ is positive and g₂ has mean 0, we have Pr[[g]ᵢ + [g₂]ᵢ < 0] ≤ 1/2. Moreover,

    Pr[[g]ᵢ + [g₂]ᵢ > 0] = Pr[[g₂]ᵢ > 0] Pr[[g]ᵢ + [g₂]ᵢ > 0 | [g₂]ᵢ > 0] + Pr[[g₂]ᵢ < 0] Pr[[g]ᵢ + [g₂]ᵢ > 0 | [g₂]ᵢ < 0].

The first term is upper bounded by 1/2, since Pr[[g₂]ᵢ > 0] ≤ 1/2.
The second term we will bound using standard Gaussian tail bounds:

    Pr[[g]ᵢ + [g₂]ᵢ > 0 | [g₂]ᵢ < 0] ≤ Pr[|[g₂]ᵢ| ≤ |[g]ᵢ| | [g₂]ᵢ < 0] = Pr[|[g₂]ᵢ| ≤ |[g]ᵢ|] ≤ Pr[([g₂]ᵢ)² ≤ cD] = 1 − Pr[([g₂]ᵢ)² > cD] ≤ 1 − (2/√(2π)) exp(−cD/(2β)) (√(β/cD) − (β/cD)^{3/2}),

which implies

    Pr[[g₂]ᵢ < 0] Pr[[g]ᵢ + [g₂]ᵢ > 0 | [g₂]ᵢ < 0] ≤ (1/2) (1 − (2/√(2π)) exp(−cD/(2β)) (√(β/cD) − (β/cD)^{3/2})).

Putting things together, we have

    Pr[sign((g₁)ᵢ + (g₂)ᵢ) = yᵢ] ≤ 1 − (1/√(2π)) exp(−cD/(2β)) (√(β/cD) − (β/cD)^{3/2}).

Together with the fact that |S| ≥ γn, we get

    Pr[Y_S = y_S | S = S, g₁ = g] ≤ (1 − (1/√(2π)) exp(−cD/(2β)) (√(β/cD) − (β/cD)^{3/2}))^{γn},

which implies that

    H(Y) ≥ −(1 − 1/D) ((c−1)/c) n log(1 − (1/√(2π)) exp(−cD/(2β)) (√(β/cD) − (β/cD)^{3/2})).

By setting c = D = 3^{1/4}√β and a straightforward (albeit unpleasant) calculation, we can check that H(Y) ≥ (n/25) (3^{1/4}√β − 1)² / (√3 β), as we need.

We next show that the moments of the distribution are preserved up to a constant G/(1 + β).

Lemma 3.1. The distribution µ̃ satisfies (G/(1 + β)) Σ_{i,j} ≤ E_µ̃[X_i X_j] ≤ (1/(1 + β)) Σ_{i,j}.

Proof. Consider the Gram decomposition Σ̃_{i,j} = ⟨vᵢ, vⱼ⟩. Then sign(N(0, Σ̃)) is equal in distribution to (sign(⟨v₁, s⟩), . . . , sign(⟨vₙ, s⟩)), where s ∼ N(0, I). As in the analysis of Goemans-Williamson [GW95], if v̄ᵢ = vᵢ/‖vᵢ‖, we have

    G ⟨v̄ᵢ, v̄ⱼ⟩ ≤ E_µ̃[X_i X_j] = (2/π) arcsin(⟨v̄ᵢ, v̄ⱼ⟩) ≤ ⟨v̄ᵢ, v̄ⱼ⟩.

However, since ⟨v̄ᵢ, v̄ⱼ⟩ = ⟨vᵢ, vⱼ⟩/(‖vᵢ‖‖vⱼ‖) = Σ̃_{i,j}/(‖vᵢ‖‖vⱼ‖) = Σ_{i,j}/(‖vᵢ‖‖vⱼ‖), and ‖vᵢ‖ = √(Σ̃_{i,i}) = √(1 + β) for all i ∈ [n], we get that

    (G/(1 + β)) Σ_{i,j} ≤ E_µ̃[X_i X_j] ≤ (1/(1 + β)) Σ_{i,j},

as we want. Theorem 3.2 and Lemma 3.1 together imply Theorem 3.1.

4 Provable bounds for variational methods

We will in this section consider applications of the approximate maximum entropy principles we developed to calculating partition functions of Ising models. Before we dive into the results, we give brief preliminaries on variational methods and pseudo-moment convex relaxations.
Preliminaries on variational methods and pseudo-moment convex relaxations Recall that variational methods are based on the following simple lemma, which characterizes log Z as the solution of an optimization problem. It essentially dates back to Gibbs [Ell12], who used it in the context of statistical mechanics, though it has been rediscovered by machine learning researchers [WJ08]:

Lemma 4.1 (Variational characterization of log Z). Let us denote by M the polytope of distributions over {−1, 1}^n. Then,

    log Z = max_{µ∈M} { Σ_t J_t E_µ[φ_t(x)] + H(µ) }.   (1)

While the above lemma reduces calculating log Z to an optimization problem, optimizing over the polytope M is impossible in polynomial time. We will proceed in a way which is natural for optimization problems: by instead optimizing over a relaxation M′ of that polytope. The relaxation will be associated with the degree-2 Lasserre hierarchy. Intuitively, M′ has as variables tentative pairwise moments of a distribution over {−1, 1}^n, and it imposes all constraints on the moments that hold for distributions over {−1, 1}^n. To define M′ more precisely we will need the following notion (for a more in-depth review of moment-based convex hierarchies, the reader can consult [BKS14]):

Definition 4.1. A degree-2 pseudo-moment⁷ Ẽ_ν[·] is a linear operator mapping polynomials of degree 2 to R, such that Ẽ_ν[x_i²] = 1, and Ẽ_ν[p(x)²] ≥ 0 for any polynomial p(x) of degree 1.

We will be optimizing over the polytope M′ of all degree-2 pseudo-moments, i.e. we will consider solving

    max_{Ẽ_ν[·]∈M′} { Σ_t J_t Ẽ_ν[φ_t(x)] + H̃(Ẽ_ν[·]) },

where H̃ is a proxy for the entropy which we will have to define (since entropy is a global property that depends on all moments, and Ẽ_ν only contains information about second-order moments). To see that this optimization problem is convex, we show that it can easily be written as a semidefinite program. Namely, note that the pseudo-moment operators are linear, so it suffices to define them over monomials only.
Hence, the variables will simply be Ẽ_ν(x_S) for all monomials x_S of degree at most 2. The constraints Ẽ_ν[x_i²] = 1 are then clearly linear, as is the "energy part" of the objective function. So we only need to worry about the constraint Ẽ_ν[p(x)²] ≥ 0 and the entropy functional. We claim the constraint Ẽ_ν[p(x)²] ≥ 0 can be written as a PSD constraint: namely, if we define the matrix Q, indexed by all the monomials of degree at most 1, which satisfies Q(x_S, x_T) = Ẽ_ν[x_S x_T], it is easy to see that Ẽ_ν[p(x)²] ≥ 0 is equivalent to Q ⪰ 0.

⁷The reason Ẽ_ν[·] is called a pseudo-moment is that it behaves like the moments of a distribution ν : {−1, 1}^n → [0, 1], albeit only over polynomials of degree at most 2.

Hence, the final concern is how to write an expression for the entropy in terms of the low-order moments, since entropy is a global property that depends on all moments. There are many candidates for this in machine learning, like the Bethe/Kikuchi entropy, the tree-reweighted Bethe entropy, the log-determinant, etc. However, in the worst case none of them come with any guarantees. We will in fact show that the entropy functional is not an issue: we will relax the entropy trivially to n. Given all of this, the final relaxation we will consider is:

    max_{Ẽ_ν[·]∈M′} { Σ_t J_t Ẽ_ν[φ_t(x)] + n }.   (2)

From the prior setup it is clear that the solution to (2) is an upper bound on log Z. To prove a claim like Theorem 2.3 or Theorem 2.4, we will then provide a rounding of the solution. In this instance, this means producing a distribution µ̃ for which the value of Σ_t J_t E_µ̃[φ_t(x)] + H(µ̃) is comparable to the value of the solution. Note this is slightly different from the usual requirement in optimization, where one cares only about producing a single x ∈ {−1, 1}^n with value comparable to the solution. Our distribution µ̃ will have entropy Ω(n), and will preserve the "energy" portion of the objective, Σ_t J_t E_µ[φ_t(x)], up to a factor comparable to what is achievable in the optimization setting.
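The variational characterization in Lemma 4.1, which the relaxation (2) loosens, can be verified by brute force on a tiny Ising model (an illustrative check of ours, not from the paper): the Gibbs distribution attains the maximum exactly, and every other distribution scores lower:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n = 3
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
xs = np.array(list(product([-1, 1], repeat=n)))
energy = np.einsum('ki,ij,kj->k', xs, J, xs)      # sum_{i,j} J_ij x_i x_j per state

def gibbs_objective(p):
    """sum_t J_t E_p[phi_t] + H(p) for a distribution p over {-1,1}^n."""
    p = p / p.sum()
    H = -(p * np.log(np.where(p > 0, p, 1.0))).sum()
    return (p * energy).sum() + H

logZ = np.log(np.exp(energy).sum())
gibbs = np.exp(energy - logZ)                     # mu(x) ∝ exp(sum J_ij x_i x_j)
assert abs(gibbs_objective(gibbs) - logZ) < 1e-9  # the maximum is exactly log Z
for _ in range(200):                              # any other distribution is below
    assert gibbs_objective(rng.random(len(xs))) <= logZ + 1e-9
```

The inequality for arbitrary p is exactly the non-negativity of KL(p ‖ Gibbs), which is the standard proof of the Gibbs variational principle.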
Warmup: exponential family analogue of MAX-CUT As a warmup, to illustrate the basic ideas behind the above rounding strategy, before considering Ising models we consider the exponential family analogue of MAX-CUT, defined by the functionals φ_{i,j}(x) = (x_i − x_j)². Concretely, we wish to approximate the partition function of the distribution

    µ(x) ∝ exp(Σ_{i,j} J_{i,j} (x_i − x_j)²).

We will prove the following simple observation:

Observation 4.1. The relaxation (2) provides a factor-2 approximation of log Z.

Proof. We proceed as outlined in the previous section, by providing a rounding of (2). We point out again that, unlike the standard case in optimization, where typically one needs to produce an assignment of the variables, because of the entropy term here it is crucial that the rounding produces a distribution. The distribution µ̃ we produce here is especially simple: we round each x_i independently and uniformly at random. Then, clearly, H(µ̃) = n. On the other hand, Pr_µ̃[(x_i − x_j)² = 4] = 1/2, since x_i and x_j are rounded independently; hence E_µ̃[(x_i − x_j)²] = 2 ≥ (1/2) Ẽ_ν[(x_i − x_j)²]. Altogether, this implies

    Σ_{i,j} J_{i,j} E_µ̃[(x_i − x_j)²] + H(µ̃) ≥ (1/2) (Σ_{i,j} J_{i,j} Ẽ_ν[(x_i − x_j)²] + n),

as we needed.

4.1 Ising models

We proceed with the main results of this section on Ising models, the case where φ_{i,j}(x) = x_i x_j. We will treat the ferromagnetic and the general case separately, as outlined in Section 2. Concretely, we are given potentials J_{i,j}, and we wish to calculate the partition function of the Ising model µ(x) ∝ exp(Σ_{i,j} J_{i,j} x_i x_j).

Ferromagnetic case Recall that in the ferromagnetic case of the Ising model, the potentials satisfy J_{i,j} > 0. We will provide a convex relaxation which has a constant-factor approximation in this case. First, recall the famous first Griffiths inequality, due to Griffiths [Gri67], which states that in the ferromagnetic case, E_µ[x_i x_j] ≥ 0 for all i, j.
Using this inequality, we will look at the following natural strengthening of the relaxation (2):
$$\max_{\tilde{\mathbb{E}}_\nu[\cdot] \in \mathcal{M}';\ \tilde{\mathbb{E}}_\nu[x_i x_j] \ge 0\ \forall i,j} \left( \sum_t J_t \tilde{\mathbb{E}}_\nu[\phi_t(x)] + n \right) \qquad (3)$$
We will prove the following theorem, as a straightforward implication of our claims from Section 3:

Theorem 4.1. The relaxation (3) provides a factor-50 approximation of $\log Z$.

Proof. Notice that, due to Griffiths' inequality, (3) is in fact a relaxation of the Gibbs variational principle and hence an upper bound on $\log Z$. As before, we will provide a rounding of (3). We will use the distribution $\tilde{\mu}$ designed in Section 3: the sign of a Gaussian with covariance matrix $\Sigma + \beta I$, for a $\beta$ which we will specify. By Lemma 3.2, we then have $H(\tilde{\mu}) \ge \frac{n}{25} \cdot \frac{(3^{1/4}\sqrt{\beta} - 1)^2}{\sqrt{3}\,\beta}$ whenever $\beta \ge \frac{1}{3^{1/2}}$. By Lemma 3.1, on the other hand, we can prove that
$$\mathbb{E}_{\tilde{\mu}}[x_i x_j] \ge \frac{G}{1+\beta}\, \tilde{\mathbb{E}}_\nu[x_i x_j].$$
By setting $\beta = 21.8202$, we get $\frac{1}{25} \cdot \frac{(3^{1/4}\sqrt{\beta} - 1)^2}{\sqrt{3}\,\beta} \ge 0.02$ and $\frac{G}{1+\beta} \ge 0.02$, which implies that
$$\sum_{i,j} J_{i,j} \mathbb{E}_{\tilde{\mu}}[x_i x_j] + H(\tilde{\mu}) \ge 0.02 \left( \sum_{i,j} J_{i,j} \tilde{\mathbb{E}}_\nu[x_i x_j] + n \right)$$
which is what we need.

Note that the above proof does not work in the general Ising model case: when $\tilde{\mathbb{E}}_\nu[x_i x_j]$ can be either positive or negative, even if we preserved each $\tilde{\mathbb{E}}_\nu[x_i x_j]$ up to a constant factor, this may not preserve the sum $\sum_{i,j} J_{i,j} \tilde{\mathbb{E}}_\nu[x_i x_j]$ due to cancellations in that expression.

General Ising models case

Finally, we will tackle the general Ising model case. As noted above, the straightforward application of the results proven in Section 3 does not work, so we have to consider a different rounding, again inspired by roundings used in optimization. The intuition is the same as in the ferromagnetic case: we wish to design a rounding which preserves the "energy" portion of the objective while having high entropy. In the ferromagnetic case, this was achieved by modifying the Goemans-Williamson rounding so that it produces a high-entropy distribution. We will do a similar thing here, by modifying the roundings due to [CW04] and [AMMN06].
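The rounding used in the proof of Theorem 4.1, taking the sign of a Gaussian with covariance $\Sigma + \beta I$, can be sketched as follows. This is an illustrative NumPy snippet, not the paper's code; `Sigma` and `beta` stand in for the quantities appearing in Lemmas 3.1 and 3.2:

```python
import numpy as np

def sign_of_gaussian_rounding(Sigma, beta, n_samples=20000, seed=0):
    """Round a pseudo-moment matrix Sigma to a genuine distribution over
    {-1,1}^n by taking the sign of a Gaussian with covariance Sigma + beta*I,
    a high-entropy variant of Goemans-Williamson rounding."""
    rng = np.random.default_rng(seed)
    n = Sigma.shape[0]
    g = rng.multivariate_normal(np.zeros(n), Sigma + beta * np.eye(n),
                                size=n_samples)
    return np.sign(g)  # rows are samples from the rounded distribution

# Example: a positive pairwise pseudo-moment survives the rounding
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
x = sign_of_gaussian_rounding(Sigma, beta=1.0)
corr = float((x[:, 0] * x[:, 1]).mean())  # Monte-Carlo estimate of E[x_1 x_2]
```

Adding $\beta I$ inflates the independent noise in each coordinate, which is what pushes the entropy of the sign vector up to $\Omega(n)$ at the price of shrinking each pairwise correlation by roughly $1/(1+\beta)$.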
The convex relaxation we will consider will just be the basic one, (2), and we will prove the following two theorems:

Theorem 4.2. The relaxation (2) provides a factor $O(\log n)$ approximation to $\log Z$ when $\phi_{i,j}(x) = x_i x_j$.

Theorem 4.3. The relaxation (2) provides a factor $O(\log \chi(G))$ approximation to $\log Z$ when $\phi_{i,j}(x) = x_i x_j$ for $(i,j) \in E(G)$ of some graph $G = (V(G), E(G))$, where $\chi(G)$ is the chromatic number of $G$.

Since the chromatic number of a graph is bounded by $n$, the second theorem is in fact strictly stronger than the first; however, the proof of the first theorem uses less heavy machinery and is illuminating enough to be presented on its own. Due to space constraints, the proofs of these theorems are deferred to the appendix.

5 Conclusion

In summary, we presented computationally efficient approximate versions of the classical max-entropy principle of [Jay57]: efficiently sampleable distributions which preserve given pairwise moments up to a multiplicative constant factor, while having entropy within a constant factor of the maximum-entropy distribution matching those moments. Additionally, we applied our insights to designing provable variational methods for Ising models which provide guarantees for approximating the log-partition function comparable to those in the optimization setting. Our methods are based on convex relaxations of the standard variational principle due to Gibbs; they are extremely generic, and we hope they will find applications for other exponential families.

References

[AMMN06] Noga Alon, Konstantin Makarychev, Yury Makarychev, and Assaf Naor. Quadratic forms on graphs. Inventiones Mathematicae, 163(3):499–522, 2006.

[AN06] Noga Alon and Assaf Naor. Approximating the cut-norm via Grothendieck's inequality. SIAM Journal on Computing, 35(4):787–803, 2006.

[BB07] Matthias Bethge and Philipp Berens. Near-maximum entropy models for binary neural representations of natural images. 2007.

[BGS14] Guy Bresler, David Gamarnik, and Devavrat Shah.
Hardness of parameter estimation in graphical models. In Advances in Neural Information Processing Systems, pages 1062–1070, 2014.

[BKS14] Boaz Barak, Jonathan A. Kelner, and David Steurer. Rounding sum-of-squares relaxations. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 31–40. ACM, 2014.

[CW04] Moses Charikar and Anthony Wirth. Maximizing quadratic programs: extending Grothendieck's inequality. In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science, pages 54–60. IEEE, 2004.

[Ell12] Richard S. Ellis. Entropy, Large Deviations, and Statistical Mechanics, volume 271. Springer Science & Business Media, 2012.

[EN78] Richard S. Ellis and Charles M. Newman. The statistics of Curie-Weiss models. Journal of Statistical Physics, 19(2):149–161, 1978.

[Gri67] Robert B. Griffiths. Correlations in Ising ferromagnets. I. Journal of Mathematical Physics, 8(3):478–483, 1967.

[GW95] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, 1995.

[Jay57] Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.

[JS93] Mark Jerrum and Alistair Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22(5):1087–1116, 1993.

[Ris16] Andrej Risteski. How to compute partition functions using convex programming hierarchies: provable bounds for variational methods. In Proceedings of the Conference on Learning Theory (COLT), 2016.

[SS12] Allan Sly and Nike Sun. The computational hardness of counting in two-spin models on d-regular graphs. In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 361–369. IEEE, 2012.

[SV14] Mohit Singh and Nisheeth K. Vishnoi. Entropy, optimization and counting. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 50–59. ACM, 2014.
[WJ08] Martin J. Wainwright and Michael I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.

[WJW03] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation by pseudo-moment matching. 2003.

[WJW05] Martin J. Wainwright, Tommi S. Jaakkola, and Alan S. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 51(7):2313–2335, 2005.
Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian

Victor Picheny, MIAT, Université de Toulouse, INRA, Castanet-Tolosan, France. victor.picheny@toulouse.inra.fr
Robert B. Gramacy, Virginia Tech, Blacksburg, VA, USA. rbg@vt.edu
Stefan Wild, Argonne National Laboratory, Argonne, IL, USA. wild@mcs.anl.gov
Sébastien Le Digabel, École Polytechnique de Montréal, Montréal, QC, Canada. sebastien.le-digabel@polymtl.ca

Abstract

An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce an alternative slack variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof. We show our new slack "ALBO" compares favorably to the original. Its superiority over conventional alternatives is reinforced on several mixed constraint examples.

1 Introduction

Bayesian optimization (BO), as applied to so-called blackbox objectives, is a modernization of 1970–80s statistical response surface methodology for sequential design [3, 14]. In BO, nonparametric Gaussian processes (GPs) provide flexible response surface fits. Sequential design decisions, so-called acquisitions, judiciously balance exploration and exploitation in search of global optima. For reviews, see [5, 4]; until recently this literature has focused on unconstrained optimization.
Many interesting problems contain constraints, typically specified as equalities or inequalities:
$$\min_x \left\{ f(x) : g(x) \le 0,\ h(x) = 0,\ x \in \mathcal{B} \right\}, \qquad (1)$$
where $\mathcal{B} \subset \mathbb{R}^d$ is usually a bounded hyperrectangle, $f : \mathbb{R}^d \to \mathbb{R}$ is a scalar-valued objective function, and $g : \mathbb{R}^d \to \mathbb{R}^m$ and $h : \mathbb{R}^d \to \mathbb{R}^p$ are vector-valued constraint functions taken componentwise (i.e., $g_j(x) \le 0$ for $j = 1, \dots, m$, and $h_k(x) = 0$ for $k = 1, \dots, p$). The typical setup treats $f$, $g$, and $h$ as a "joint" blackbox, meaning that providing $x$ to a single computer code reveals $f(x)$, $g(x)$, and $h(x)$ simultaneously, often at great computational expense. A common special case treats $f(x)$ as known (e.g., linear); however, the problem is still hard when $g(x) \le 0$ defines a nonconvex valid region. Not many algorithms target global solutions to this general, constrained blackbox optimization problem, and statistical methods are acutely few. We know of no methods from the BO literature natively accommodating equality constraints, let alone mixed (equality and inequality) ones. Schonlau et al. [21] describe how their expected improvement (EI) heuristic can be extended to multiple inequality constraints by multiplying by an estimated probability of constraint satisfaction. Here, we call this expected feasible improvement (EFI). EFI has recently been revisited by several authors [23, 7, 6]. However, the technique has pathological behavior in otherwise idealized setups [9], which is related to a so-called "decoupled" pathology [7]. Some recent information-theoretic alternatives have shown promise in the inequality constrained setting [10, 17]. We remark that any problem with equality constraints can be "transformed" to inequality constraints only, by applying $h(x) \le 0$ and $h(x) \ge 0$ simultaneously. However, the effect of such a reformulation is rather uncertain.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
It puts double-weight on equalities and violates certain regularity (i.e., constraint qualification [15]) conditions. Numerical issues have been reported in empirical work [1, 20]. In this paper we show how a recent BO method for inequality constraints [9] is naturally enhanced to handle equality constraints, and therefore mixed ones too. The method involves converting inequality constrained problems into a sequence of simpler subproblems via the augmented Lagrangian (AL, [2]). AL-based solvers can, under certain regularity conditions, be shown to converge to locally optimal solutions that satisfy the constraints, so long as the sub-solver converges to local solutions. By deploying modern BO on the subproblems, as opposed to the usual local solvers, the resulting meta-optimizer is able to find better, less local solutions with fewer evaluations of the expensive blackbox, compared to several classical and statistical alternatives. Here we dub that method ALBO. To extend ALBO to equality constraints, we suggest the opposite transformation to the one described above: we convert inequality constraints into equalities by introducing slack variables. In the context of earlier work with the AL, via conventional solvers, this is rather textbook [15, Ch. 17]. Handling the inequalities in this way leads naturally to solutions for mixed constraints and, more importantly, dramatically improves the original inequality-only version. In the original (non-slack) ALBO setup, the density and distribution of an important composite random predictive quantity is not known in closed form. Except in a few particular cases [18], calculating EI and related quantities under the AL required Monte Carlo integration, which means that acquisition function evaluations are computationally expensive, noisy, or both. A reformulated slack-AL version emits a composite that has a known distribution, a so-called weighted non-central Chi-square (WNCS) distribution. 
We show that, in that setting, EI calculations involve a simple 1-d integral via ordinary quadrature. Adding slack variables increases the input dimension of the optimization subproblems, but only artificially so. The effects of expansion can be mitigated through optimal default settings, which we provide. The remainder of the paper is organized as follows. Section 2 outlines the components germane to the ALBO approach: AL, Bayesian surrogate modeling, and acquisition via EI. Section 3 contains the bulk of our methodological contribution: a slack variable AL, a closed form EI, optimal default slack settings, and open-source software. Implementation details are provided by our online supplementary material. Section 4 provides empirical comparisons, and Section 5 concludes.

2 A review of relevant concepts: EI and AL

EI: The canonical acquisition function in BO is expected improvement (EI) [12]. Consider a surrogate $f^n(x)$, trained on $n$ pairs $(x_i, y_i = f(x_i))$, emitting Gaussian predictive equations with mean $\mu^n(x)$ and standard deviation $\sigma^n(x)$. Define $f^n_{\min} = \min_{i=1,\dots,n} y_i$, the smallest $y$-value seen so far, and let $I(x) = \max\{0, f^n_{\min} - Y(x)\}$ be the improvement at $x$. $I(x)$ is largest when $Y(x) \sim f^n(x)$ has substantial distribution below $f^n_{\min}$. The expectation of $I(x)$ over $Y(x)$ has a convenient closed form, revealing a balance between exploitation ($\mu^n(x)$ below $f^n_{\min}$) and exploration (large $\sigma^n(x)$):
$$\mathbb{E}\{I(x)\} = \left(f^n_{\min} - \mu^n(x)\right) \Phi\!\left( \frac{f^n_{\min} - \mu^n(x)}{\sigma^n(x)} \right) + \sigma^n(x)\, \phi\!\left( \frac{f^n_{\min} - \mu^n(x)}{\sigma^n(x)} \right), \qquad (2)$$
where $\Phi$ ($\phi$) is the standard normal cdf (pdf). Accurate, approximately Gaussian predictive equations are provided by many statistical models (e.g., GPs). In non-Gaussian contexts, Monte Carlo schemes (sampling $Y(x)$'s and averaging $I(x)$'s) offer a computationally intensive alternative.

AL: Although several authors have suggested extensions to EI for constraints, the BO literature has primarily focused on unconstrained problems.
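The closed form (2) is straightforward to implement; below is a self-contained sketch using only the Python standard library (our own helper, not code from the packages discussed later):

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI of Eq. (2) for a Gaussian surrogate with predictive
    mean mu and standard deviation sigma, given the incumbent f_min."""
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal cdf
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal pdf
    return (f_min - mu) * Phi + sigma * phi
```

Note how the two terms trade off: at $\mu^n(x) = f^n_{\min}$ the value reduces to $\sigma^n(x)/\sqrt{2\pi}$ (pure exploration), while EI shrinks as the predictive mean rises above the incumbent.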
The range of constrained BO options was recently extended by borrowing an apparatus from the mathematical optimization literature, the augmented Lagrangian, allowing unconstrained methods to be adapted to constrained problems. The AL, as a device for solving problems with inequality constraints (no $h(x)$ in Eq. (1)), may be defined as
$$L_A(x; \lambda, \rho) = f(x) + \lambda^\top g(x) + \frac{1}{2\rho} \sum_{j=1}^m \max\{0, g_j(x)\}^2, \qquad (3)$$
where $\rho > 0$ is a penalty parameter on constraint violation and $\lambda \in \mathbb{R}^m_+$ serves as a Lagrange multiplier. AL methods are iterative, involving a particular sequence of $(x; \lambda, \rho)$. Given the current values $\rho^{k-1}$ and $\lambda^{k-1}$, one approximately solves the subproblem
$$\min_x \left\{ L_A(x; \lambda^{k-1}, \rho^{k-1}) : x \in \mathcal{B} \right\}, \qquad (4)$$
via a conventional (bound-constrained) solver. The parameters $(\lambda, \rho)$ are updated depending on the nature of the solution found, and the process repeats. The particulars in our setup are provided in Alg. 1; for more details see [15, Ch. 17]. Local convergence is guaranteed under relatively mild conditions involving the choice of subroutine solving (4). Loosely, all that is required is that the solver "makes progress" on the subproblem. In contexts where termination depends more upon computational budget than on a measure of convergence, as in many BO problems, that added flexibility is welcome.

Require: $\lambda^0 \ge 0$, $\rho^0 > 0$
1: for $k = 1, 2, \dots$ do
2:   Let $x^k$ (approximately) solve (4)
3:   Set $\lambda^k_j = \max\{0,\ \lambda^{k-1}_j + \frac{1}{\rho^{k-1}} g_j(x^k)\}$, $j = 1, \dots, m$
4:   If $g(x^k) \le 0$, set $\rho^k = \rho^{k-1}$; else, set $\rho^k = \frac{1}{2}\rho^{k-1}$
5: end for
Algorithm 1: Basic augmented Lagrangian method

However, the AL does not typically enjoy global scope. The local minima found by the method are sensitive to initialization, i.e., to the starting choices for $(\lambda^0, \rho^0)$ or $x^0$; local searches in iteration $k$ are usually started from $x^{k-1}$. However, this dependence is broken when statistical surrogates drive the search for solutions to the subproblems. Independently fit GP surrogates, $f^n(x)$ for the objective and $g^n(x) = (g^n_1(x), \dots$
, $g^n_m(x))$ for the constraints, yield predictive distributions for $Y^n_f(x)$ and $Y^n_g(x) = (Y^n_{g_1}(x), \dots, Y^n_{g_m}(x))$. Dropping the $n$ superscripts, the AL composite random variable
$$Y(x) = Y_f(x) + \lambda^\top Y_g(x) + \frac{1}{2\rho} \sum_{j=1}^m \max\{0, Y_{g_j}(x)\}^2$$
can serve as a surrogate for (3); however, it is difficult to deduce its distribution from the components of $Y_f$ and $Y_g$, even when those are independently Gaussian. While its mean is available in closed form, EI requires Monte Carlo.

3 A novel formulation involving slack variables

An equivalent formulation of (1) involves introducing slack variables $s_j$, for $j = 1, \dots, m$ (i.e., one for each inequality constraint $g_j(x)$), and converting the mixed constraint problem (1) to one with only equality constraints (plus bound constraints for $s_j$): $g_j(x) + s_j = 0$, $s_j \in \mathbb{R}_+$, for $j = 1, \dots, m$. Observe that introducing the slack "inputs" increases the dimension of the problem from $d$ to $d + m$. Reducing a mixed constraint problem to one involving only equality and bound constraints is valuable insofar as one has good solvers for those problems. Suppose, for the moment, that the original problem (1) has no equality constraints (i.e., $p = 0$). In this case, a slack variable-based AL method is readily available as an alternative to the description in Section 2. Although we frame it as an "alternative", some would describe this as the standard version [see, e.g., 15, Ch. 17]. The AL is
$$L_A(x, s; \lambda, \rho) = f(x) + \lambda^\top (g(x) + s) + \frac{1}{2\rho} \sum_{j=1}^m (g_j(x) + s_j)^2. \qquad (5)$$
This formulation is more convenient than (3) because the "max" is missing, but the extra slack variables mean solving a higher, $(d + m)$-dimensional, subproblem compared to (4). That AL can be expanded to handle equality (and thereby mixed) constraints as follows:
$$L_A(x, s; \lambda_g, \lambda_h, \rho) = f(x) + \lambda_g^\top (g(x) + s) + \lambda_h^\top h(x) + \frac{1}{2\rho} \left[ \sum_{j=1}^m (g_j(x) + s_j)^2 + \sum_{k=1}^p h_k(x)^2 \right].$$
(6) Defining $c(x) := (g(x)^\top, h(x)^\top)^\top$, $\lambda := (\lambda_g^\top, \lambda_h^\top)^\top$, and enlarging the dimension of $s$ with the understanding that $s_{m+1} = \cdots = s_{m+p} = 0$, leads to a streamlined AL for mixed constraints
$$L_A(x, s; \lambda, \rho) = f(x) + \lambda^\top (c(x) + s) + \frac{1}{2\rho} \sum_{j=1}^{m+p} (c_j(x) + s_j)^2, \qquad (7)$$
with $\lambda \in \mathbb{R}^{m+p}$. A non-slack AL formulation (3) can analogously be written as
$$L_A(x; \lambda_g, \lambda_h, \rho) = f(x) + \lambda_g^\top g(x) + \lambda_h^\top h(x) + \frac{1}{2\rho} \left[ \sum_{j=1}^m \max\{0, g_j(x)\}^2 + \sum_{k=1}^p h_k(x)^2 \right],$$
with $\lambda_g \in \mathbb{R}^m_+$ and $\lambda_h \in \mathbb{R}^p$. Eq. (7), by contrast, is easier to work with because it is a smooth quadratic in the objective ($f$) and constraints ($c$). In what follows, we show that (7) facilitates calculation of important quantities like EI, in the GP-based BO framework, via a library routine. So slack variables not only facilitate mixed constraints in a unified framework; they also lead to a more efficient handling of the original inequality (only) constrained problem.

3.1 Distribution of the slack-AL composite

If $Y_f$ and $Y_{c_1}, \dots, Y_{c_{m+p}}$ represent random predictive variables from the $m + p + 1$ surrogates fitted to $n$ realized objective and constraint evaluations, then the analogous slack-AL random variable is
$$Y(x, s) = Y_f(x) + \sum_{j=1}^{m+p} \lambda_j \left( Y_{c_j}(x) + s_j \right) + \frac{1}{2\rho} \sum_{j=1}^{m+p} \left( Y_{c_j}(x) + s_j \right)^2. \qquad (8)$$
As for the original AL, the mean of this random variable has a simple closed form in terms of the means and variances of the surrogates. In the Gaussian case, we show that we can obtain a closed form for the full distribution of the slack-AL variate (8). Toward that aim, first rewrite $Y$ as:
$$Y(x, s) = Y_f(x) + \sum_{j=1}^{m+p} \lambda_j s_j + \frac{1}{2\rho} \sum_{j=1}^{m+p} s_j^2 + \frac{1}{2\rho} \sum_{j=1}^{m+p} \left[ 2\lambda_j \rho\, Y_{c_j}(x) + 2 s_j Y_{c_j}(x) + Y_{c_j}(x)^2 \right]$$
$$= Y_f(x) + \sum_{j=1}^{m+p} \lambda_j s_j + \frac{1}{2\rho} \sum_{j=1}^{m+p} s_j^2 + \frac{1}{2\rho} \sum_{j=1}^{m+p} \left[ \left( \alpha_j + Y_{c_j}(x) \right)^2 - \alpha_j^2 \right],$$
with $\alpha_j = \lambda_j \rho + s_j$. Now decompose $Y(x, s)$ into a sum of three quantities:
$$Y(x, s) = Y_f(x) + r(s) + \frac{1}{2\rho} W(x, s), \qquad (9)$$
with
$$r(s) = \sum_{j=1}^{m+p} \lambda_j s_j + \frac{1}{2\rho} \sum_{j=1}^{m+p} s_j^2 - \frac{1}{2\rho} \sum_{j=1}^{m+p} \alpha_j^2 \quad \text{and} \quad W(x, s) = \sum_{j=1}^{m+p} \left( \alpha_j + Y_{c_j}(x) \right)^2.$$
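As a concrete check of the streamlined form (7), a minimal evaluation helper (illustrative, with our own naming; `c` returns the stacked $m+p$ constraint values, and the final $p$ entries of `s` are understood to be zero):

```python
import numpy as np

def slack_al(x, s, lam, rho, f, c):
    """Streamlined slack-AL of Eq. (7): c(x) stacks the m inequality and
    p equality constraints; s holds the m slacks followed by p zeros."""
    cx = np.asarray(c(x), dtype=float)
    s = np.asarray(s, dtype=float)
    lam = np.asarray(lam, dtype=float)
    return f(x) + lam @ (cx + s) + np.sum((cx + s) ** 2) / (2.0 * rho)
```

For example, with $f(x) = 0$, a single constraint value $c(x) = 1$, zero slack, $\lambda = 2$ and $\rho = 0.5$, the value is $2 \cdot 1 + 1^2/(2 \cdot 0.5) = 3$.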
Using $Y_{c_j} \sim \mathcal{N}\left( \mu_{c_j}(x), \sigma^2_{c_j}(x) \right)$, i.e., leveraging Gaussianity, $W$ can be written as
$$W(x, s) = \sum_{j=1}^{m+p} \sigma^2_{c_j}(x)\, X_j(x, s), \quad \text{with} \quad X_j(x, s) \sim \chi^2\!\left( \mathrm{dof} = 1,\ \delta = \left( \frac{\mu_{c_j}(x) + \alpha_j}{\sigma_{c_j}(x)} \right)^2 \right). \qquad (10)$$
The line above is the expression of a weighted sum of non-central chi-square (WSNC) variates. Each of the $m + p$ variates involves a unit degrees-of-freedom (dof) parameter, and a non-centrality parameter $\delta$. A number of efficient methods exist for evaluating the density, distribution, and quantile functions of WSNC random variables. Details and code are provided in our supplementary materials. Some constrained optimization problems involve a known objective $f(x)$. In that case, referring back to (9), we are done: $Y(x, s)$ is WSNC (as in (10)) shifted by the known quantity $f(x) + r(s)$. When $Y_f(x)$ is conditionally Gaussian, $\tilde{W}(x, s) = Y_f(x) + \frac{1}{2\rho} W(x, s)$ is the weighted sum of a Gaussian and WSNC variates, a problem that is again well-studied; see the supplementary material.

3.2 Slack-AL expected improvement

Evaluating EI at candidate $(x, s)$ locations under the AL composite involves working with
$$\mathrm{EI}(x, s) = \mathbb{E}\left[ \left( y^n_{\min} - Y(x, s) \right) \mathbb{I}_{\{Y(x, s) \le y^n_{\min}\}} \right],$$
given the current minimum $y^n_{\min}$ of the AL over all $n$ runs. When $f(x)$ is known, let $w^n_{\min}(x, s) = 2\rho \left( y^n_{\min} - f(x) - r(s) \right)$ absorb all of the non-random quantities involved in the EI calculation. Then, with $D_W(\cdot\,; x, s)$ denoting the distribution of $W(x, s)$,
$$\mathrm{EI}(x, s) = \frac{1}{2\rho} \mathbb{E}\left[ \left( w^n_{\min}(x, s) - W(x, s) \right) \mathbb{I}_{\{W(x, s) \le w^n_{\min}(x, s)\}} \right] = \frac{1}{2\rho} \int_{-\infty}^{w^n_{\min}(x, s)} D_W(t; x, s)\, dt = \frac{1}{2\rho} \int_{0}^{w^n_{\min}(x, s)} D_W(t; x, s)\, dt \qquad (11)$$
if $w^n_{\min}(x, s) \ge 0$, and zero otherwise. That is, the EI boils down to integrating the distribution function of $W(x, s)$ between 0 (since $W$ is positive) and $w^n_{\min}(x, s)$. This is a one-dimensional definite integral that is easy to approximate via quadrature; details are in the supplementary material.
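The paper evaluates $D_W$ with library WSNC routines; as a rough stand-in, the distribution in Eq. (10) and the integral in Eq. (11) can be approximated with Monte Carlo plus the trapezoid rule. The sketch below uses our own function names and is not the implementation in the packages discussed later:

```python
import numpy as np

def wsnc_samples(mu_c, sigma_c, alpha, n_samples=100000, seed=0):
    """Monte-Carlo draws of W(x,s) = sum_j (alpha_j + Y_cj)^2 with
    Y_cj ~ N(mu_cj, sigma_cj^2): a weighted sum of non-central chi-square
    variates as in Eq. (10).  A library WSNC routine would replace this."""
    rng = np.random.default_rng(seed)
    Y = rng.normal(mu_c, sigma_c, size=(n_samples, len(mu_c)))
    return np.sum((np.asarray(alpha) + Y) ** 2, axis=1)

def ei_quadrature(w_min, samples, rho, n_grid=500):
    """EI of Eq. (11): (1/2rho) * integral of D_W(t) over [0, w_min],
    here with an empirical CDF and the trapezoid rule."""
    if w_min <= 0.0:
        return 0.0
    t = np.linspace(0.0, w_min, n_grid)
    srt = np.sort(samples)
    cdf = np.searchsorted(srt, t, side="right") / len(srt)
    return float(np.sum((cdf[1:] + cdf[:-1]) * np.diff(t)) / 2.0) / (2.0 * rho)
```

Because $W \ge 0$, the empirical-CDF integral agrees with the direct Monte-Carlo estimate of $\mathbb{E}[(w^n_{\min} - W)^+]/(2\rho)$ up to discretization error, which is the identity underlying Eq. (11).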
Since $W(x, s)$ is quadratic in the $Y_c(x)$ values, it is often the case, especially for smaller $\rho$-values in later AL iterations, that $D_W(t; x, s)$ is zero over most of $[0, w^n_{\min}(x, s)]$, simplifying numerical integration. However, this has deleterious impacts on search over $(x, s)$, as we discuss in our supplement. When $f(x)$ is unknown and $Y_f(x)$ is conditionally normal, let $\tilde{w}^n_{\min}(s) = 2\rho \left( y^n_{\min} - r(s) \right)$. Then,
$$\mathrm{EI}(x, s) = \frac{1}{2\rho} \mathbb{E}\left[ \left( \tilde{w}^n_{\min}(s) - \tilde{W}(x, s) \right) \mathbb{I}_{\{\tilde{W}(x, s) \le \tilde{w}^n_{\min}(s)\}} \right] = \frac{1}{2\rho} \int_{-\infty}^{\tilde{w}^n_{\min}(s)} D_{\tilde{W}}(t; x, s)\, dt.$$
Here the lower bound of the definite integral cannot be zero, since $Y_f(x)$ may be negative and thus $\tilde{W}(x, s)$ may have non-zero distribution for negative $t$-values. This can challenge the numerical quadrature, although many library functions allow indefinite bounds. We obtain better performance by supplying a conservative finite lower bound, for example three standard deviations of $Y_f(x)$, in units of the penalty ($2\rho$), below zero: a lower bound of $-6\rho\sigma_f(x)$. Implementation details are in our supplement.

3.3 AL updates, optimal slack settings, and other implementation notes

The new slack-AL method is completed by describing when the subproblem (7) is deemed to be "solved" (step 2 in Alg. 1) and how $\lambda$ and $\rho$ are updated (steps 3–4). We terminate the BO search sub-solver after a single iteration, as this matches the spirit of EI-based search, whose choice of next location can be shown to be optimal, in a certain sense, if it is the final point being selected. It also meshes well with an updating scheme analogous to that in steps 3–4: updating only when no actual improvement (in terms of constraint violation) is realized by that choice. That is,

step 2: Let $(x^k, s^k)$ approximately solve $\min_{x,s} \left\{ L_A(x, s; \lambda^{k-1}, \rho^{k-1}) : (x, s_{1:m}) \in \tilde{\mathcal{B}} \right\}$
step 3: $\lambda^k_j = \lambda^{k-1}_j + \frac{1}{\rho^{k-1}} \left( c_j(x^k) + s^k_j \right)$, for $j = 1, \dots, m + p$
step 4: If $c_{1:m}(x^k) \le 0$ and $|c_{m+1:m+p}(x^k)| \le \epsilon$, set $\rho^k = \rho^{k-1}$; else $\rho^k = \frac{1}{2}\rho^{k-1}$

Above, step 3 is the same as in Alg.
1, except without the "max" and with slacks augmenting the constraint values. The "if" statement in step 4 checks for validity at $x^k$, deploying a threshold $\epsilon > 0$ on equality constraints; further discussion of the threshold $\epsilon$ is deferred to Section 4, where we discuss progress metrics under mixed constraints. If validity holds at $(x^k, s^k)$, the current AL iteration is deemed to have "made progress" and the penalty remains unchanged; otherwise it is doubled. An alternate formulation may check $|c_{1:m}(x^k) + s^k_{1:m}| \le \epsilon$. We find that the version in step 4, above, is cleaner because it limits sensitivity to the choice of threshold $\epsilon$. In our supplementary material we recommend initial $(\lambda^0, \rho^0)$ values which are analogous to the original, non-slack AL settings.

Optimal choice of slacks: The biggest difference between the original AL (3) and the slack-AL (7) is that the latter requires searching over both $x$ and $s$, whereas the former involves only $x$-values. In what follows we show that there are automatic choices for the $s$-values as a function of the corresponding $x$'s, keeping the search space $d$-dimensional rather than $(d + m)$-dimensional. For an observed $c_j(x)$ value, the associated slack variables minimizing the AL (7) can be obtained analytically. Using the form of (9), observe that $\min_{s \in \mathbb{R}^m_+} y(x, s)$ is equivalent to $\min_{s \in \mathbb{R}^m_+} \sum_{j=1}^m 2\lambda_j \rho s_j + s_j^2 + 2 s_j c_j(x)$. For fixed $x$, this is strictly convex in $s$. Therefore, its unconstrained minimum can only be its stationary point, which satisfies $0 = 2\lambda_j \rho + 2 s^*_j(x) + 2 c_j(x)$, for $j = 1, \dots, m$. Accounting for the nonnegativity constraint, we obtain the following optimal slack as a function of $x$:
$$s^*_j(x) = \max\left\{ 0,\ -\lambda_j \rho - c_j(x) \right\}, \quad j = 1, \dots, m. \qquad (12)$$
Above we write $s^*$ as a function of $x$ to convey that $x$ remains a "free" quantity in $y(x, s^*(x))$. Recall that slacks on equality constraints are zero: $s_k(x) = 0$, $k = m+1, \dots, m+p$, for all $x$. In the blackbox $c(x)$ setting, $y(x, s^*(x))$ is only directly accessible at the data locations $x_i$.
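Eq. (12) and the steps 3–4 updates above are one-liners in practice; a schematic sketch (our own naming, not the packages' code), with slacks on the $p$ equality constraints pinned at zero:

```python
import numpy as np

def optimal_slacks(c_x, lam, rho, m):
    """Optimal slacks of Eq. (12): s*_j = max(0, -lam_j*rho - c_j(x)) for the
    m inequality constraints; slacks on the p equality constraints stay zero."""
    s = np.maximum(0.0, -np.asarray(lam, float) * rho - np.asarray(c_x, float))
    s[m:] = 0.0
    return s

def update_multipliers(lam, rho, c_x, s, m, eps=1e-2):
    """Steps 3-4 (schematic): lambda absorbs c(x)+s without a max, and the
    penalty parameter rho is halved unless x was valid (inequalities <= 0,
    equalities within the threshold eps)."""
    c_x = np.asarray(c_x, float)
    lam_new = np.asarray(lam, float) + (c_x + s) / rho
    valid = np.all(c_x[:m] <= 0.0) and np.all(np.abs(c_x[m:]) <= eps)
    return lam_new, (rho if valid else rho / 2.0)
```

For example, with $m = 2$ inequalities and one equality, $c(x) = (-2, 0.5, 0.3)$, $\lambda = (1,1,1)$ and $\rho = 1$: only the satisfied first constraint earns a positive slack, and since the second inequality is violated the penalty parameter is halved.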
At other $x$-values, however, the surrogates provide a useful approximation. When $Y_c(x)$ is (approximately) Gaussian, it is straightforward to show that the optimal settings of the slack variables, solving $\min_{s \in \mathbb{R}^m_+} \mathbb{E}[Y(x, s)]$, are $s^*_j(x) = \max\{0, -\lambda_j \rho - \mu_{c_j}(x)\}$, i.e., the same as (12) with the prediction $\mu_{c_j}(x)$ substituted for the unknown $c_j(x)$ value. Again, slacks on the equality constraints are set to zero. Other criteria can be used to choose slack variables. Instead of minimizing the mean of the composite, one could maximize the EI. In our supplementary material we explain why this is of dubious practical value, being more computationally intensive while providing near-identical results in practice.

Implementation notes: Code supporting all methods in this manuscript is provided in two open-source R packages: laGP [8] and DiceOptim [19], both on CRAN [22]. Implementation details vary somewhat across those packages, due primarily to particulars of their surrogate modeling capability and how they search the EI surface. For example, laGP can accommodate a smaller initial design size because it learns fewer parameters (i.e., has fewer degrees of freedom). DiceOptim uses a multi-start search procedure for EI, whereas laGP deploys a random candidate grid, which may optionally be "finished" with an L-BFGS-B search. Nevertheless, their qualitative behavior exhibits strong similarity. Both packages also implement the original AL scheme (i.e., without slack variables), updated as in (6) for mixed constraints. Further details are provided in our supplementary material.

4 Empirical comparison

Here we describe three test problems, each mixing challenging elements from traditional unconstrained blackbox optimization benchmarks, but in a constrained optimization format. We run our optimizers on these problems 100 times under random initializations. In the case of our GP surrogate comparators, this initialization involves choosing random space-filling designs.
Our primary means of comparison is an averaged (over the 100 runs) measure of progress, defined by the best valid value of the objective for increasing budgets (number of evaluations of the blackbox) $n$. In the presence of equality constraints it is necessary to relax this definition somewhat, as the valid set may be of measure zero. In such cases we choose a tolerance $\epsilon \ge 0$ and declare a solution to be "valid" when the inequality constraints are all valid and $|h_k(x)| < \epsilon$ for all $k = 1, \dots, p$. In our figures we choose $\epsilon = 10^{-2}$; however, the results are similar under stronger thresholds, with higher variability over initializations. As finding a valid solution is, in itself, sometimes a difficult task, we additionally report the proportion of runs that find valid and optimal solutions as a function of the budget, $n$, for problems with equality (and mixed) constraints.

4.1 An inequality constrained problem

We first revisit the "toy" problem from [9], having a 2d input space limited to the unit cube, a (known) linear objective, with sinusoidal and quadratic inequality constraints (henceforth the LSQ problem; see the supplementary material for details). Figure 1 shows progress over repeated solves with a maximum budget of 40 blackbox evaluations. The left-hand plot in Figure 1 tracks the average best valid value of the objective found over the iterations, using the progress metric described above. Random initial designs of size $n = 5$ were used, as indicated by the vertical-dashed gray line. The solid gray lines are extracted from a similar plot from [9], containing both AL-based comparators and several from the derivative-free optimization and BO literatures; the details are omitted here. Our new ALBO comparators are shown in thicker colored lines; the solid black line is the original AL(BO)-EI comparator, under a revised (compared to [9]) initialization and updating scheme.
The two red lines are variations on the slack-AL algorithm under EI: with (dashed) and without (solid) L-BFGS-B optimization of the EI acquisition at each iteration. Finally, the blue line is PESC [10], using the Python library available at https://github.com/HIPS/Spearmint/tree/PESC. The take-home message from the plot is that all four new methods outperform those considered by the original ALBO paper [9]. Focusing on the new comparators only, observe that their progress is nearly statistically equivalent during the first 20 iterations. However, in the later iterations stark distinctions emerge, with Slack-AL+optim and PESC, both leveraging L-BFGS-B subroutines, outperforming. This discrepancy is more easily visualized in the right panel with a so-called log "utility-gap" plot [10], tracking the log difference between the theoretical best valid value and those found by search.

Figure 1: Results on the LSQ problem with initial designs of size n = 10. The left panel shows the best valid value of the objective over the first 40 evaluations, whereas the right shows the log utility-gap for the second 20 evaluations. The solid gray lines show comparators from [9].

4.2 Mixed inequality and equality constrained problems

Next consider a problem in four input dimensions with a (known) linear objective and two constraints. The first, inequality, constraint is the so-called "Ackley" function in $d = 4$ input dimensions. The second is an equality constraint following the so-called "Hartman 4-dimensional function". Our supplementary material provides a full mathematical specification.
Figure 2 shows two views into progress on this problem. Since it involves mixed constraints, comparators from the BO literature are scarce. Our EFI implementation deploys the $(-h, h)$ heuristic mentioned in the introduction. As representatives from the nonlinear optimization literature we include nlopt [11] and three adapted NOMAD [13] comparators, which are detailed in our supplementary material. In the left-hand plot we can see that our new ALBO comparators are the clear winners, with an L-BFGS-B-optimized EI search under the slack-variable AL implementation performing exceptionally well. The nlopt and NOMAD comparators are particularly poor. We allowed those to run up to 7000 and 1000 iterations, respectively, and in the plot we scaled the x-axis (i.e., $n$) to put them on the same scale as the others.

Figure 2: Results on the Linear-Ackley-Hartman mixed constraint problem. The left panel shows a progress comparison based on laGP code with initial designs of size n = 10. The x-scale has been divided by 140 for the nlopt comparator. A value of four indicates that no valid solution has been found. The right panel shows the proportion of valid (thin lines) and optimal (thick lines) solutions for the EFI and "Slack AL + optim" comparators.

The right-hand plot provides a view into the distribution of two key aspects of performance over the MC repetitions. Observe that "Slack AL + optim" finds valid values quickly, and optimal values not much later. Our adapted EFI is particularly slow at converging to optimal (valid) solutions.
Our final problem involves two input dimensions, an unknown objective function (i.e., one that must be modeled with a GP), one inequality constraint, and two equality constraints. The objective is a centered and re-scaled version of the “Goldstein–Price” function. The inequality constraint is the sinusoidal constraint from the LSQ problem [Section 4.1]. The first equality constraint is a centered “Branin” function; the second equality constraint is taken from [16] (henceforth the GBSP problem). Our supplement contains a full mathematical specification. Figure 3 shows our results on this problem.

Figure 3: Results on the GBSP problem. See the Figure 2 caption.

Observe (left panel) that the original ALBO comparator makes rapid progress at first, but dramatically slows in later iterations. The other ALBO comparators, including EFI, converge much more reliably, with the “Slack AL + optim” comparator leading in both stages (early progress and ultimate convergence). Again, nlopt and NOMAD are poor; note, however, that their relative ordering is reversed. As before, we scaled the x-axis to view these on a similar scale to the others. The right panel shows the proportion of valid and optimal solutions for “Slack AL + optim” and EFI. Notice that the AL method finds an optimal solution almost as quickly as it finds a valid one, with both substantially faster than EFI.

5 Conclusion

The augmented Lagrangian (AL) is an established apparatus from the mathematical optimization literature, enabling objective-only or bound-constrained optimizers to be deployed in settings with constraints.
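For readers less familiar with the apparatus, the slack-variable reformulation alluded to above can be written compactly. The notation below is ours, a standard textbook-style formulation (in the spirit of [2, 15]), not necessarily the exact parameterization used by the ALBO methods:

```latex
% Mixed-constraint problem: minimize f subject to inequalities and equalities.
\min_{x}\; f(x) \quad \text{s.t.} \quad g(x) \le 0, \;\; h(x) = 0.
% Introducing slack variables s >= 0 converts the inequalities to equalities,
% g(x) + s = 0, so that a single all-equality augmented Lagrangian applies:
L_A(x, s;\, \lambda_g, \lambda_h, \rho)
  = f(x) + \lambda_g^\top \big(g(x) + s\big) + \lambda_h^\top h(x)
  + \frac{1}{2\rho}\Big(\lVert g(x) + s\rVert^2 + \lVert h(x)\rVert^2\Big),
  \qquad s \ge 0.
```

Each outer iteration minimizes L_A over (x, s) for fixed multipliers and penalty, then updates (λ_g, λ_h, ρ); the inner minimization over x is where BO, and the slack-AL acquisition strategies compared above, enter.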
Recent work involving Bayesian optimization (BO) within the AL framework (ALBO) has shown great promise, especially toward obtaining global solutions under constraints. However, those methods were deficient in at least two respects. One is that only inequality constraints could be supported. Another was that evaluating the acquisition function, combining predictive mean and variance information via expected improvement (EI), required Monte Carlo approximation. In this paper we showed that both drawbacks can be addressed via a slack-variable reformulation of the AL. Our method supports inequality, equality, and mixed constraints, and to our knowledge this updated ALBO procedure is unique in the BO literature in its applicability to the most general mixed-constraints problem (1). We showed that the slack ALBO method outperforms modern alternatives on several challenging constrained optimization problems.

Acknowledgments

We are grateful to Mickael Binois for comments on early drafts. RBG is grateful for partial support from National Science Foundation grant DMS-1521702. The work of SMW is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research under Contract No. DE-AC02-06CH11357. The work of SLD is supported by the Natural Sciences and Engineering Research Council of Canada grant 418250.

References

[1] C. Audet, J. Dennis, Jr., D. W. Moore, A. Booker, and P. D. Frank. Surrogate-model-based method for constrained optimization. In AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, 2000.
[2] D. Bertsekas. Constrained Optimization and Lagrange Multiplier Methods. Academic Press, New York, NY, 1982.
[3] G. E. P. Box and N. R. Draper. Empirical Model Building and Response Surfaces. Wiley, Oxford, 1987.
[4] P. Boyle. Gaussian Processes for Regression and Optimization. PhD thesis, Victoria University of Wellington, 2007.
[5] E. Brochu, V. M. Cora, and N. de Freitas.
A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. Technical report, University of British Columbia, 2010. arXiv:1012.2599v1.
[6] J. R. Gardner, M. J. Kusner, Z. Xu, K. Q. Weinberger, and J. P. Cunningham. Bayesian optimization with inequality constraints. In Proceedings of the 31st International Conference on Machine Learning, volume 32. JMLR, W&CP, 2014.
[7] M. A. Gelbart, J. Snoek, and R. P. Adams. Bayesian optimization with unknown constraints. In Uncertainty in Artificial Intelligence (UAI), 2014.
[8] R. B. Gramacy. laGP: Large-scale spatial modeling via local approximate Gaussian processes in R. Journal of Statistical Software, 72(1):1–46, 2016.
[9] R. B. Gramacy, G. A. Gray, S. Le Digabel, H. K. H. Lee, P. Ranjan, G. Wells, and S. M. Wild. Modeling an augmented Lagrangian for blackbox constrained optimization. Technometrics, 58:1–11, 2016.
[10] J. M. Hernández-Lobato, M. A. Gelbart, M. W. Hoffman, R. P. Adams, and Z. Ghahramani. Predictive entropy search for Bayesian optimization with unknown constraints. In Proceedings of the 32nd International Conference on Machine Learning, volume 37. JMLR, W&CP, 2015.
[11] S. G. Johnson. The NLopt nonlinear-optimization package, 2014. Via the R package nloptr.
[12] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13:455–492, 1998.
[13] S. Le Digabel. Algorithm 909: NOMAD: Nonlinear optimization with the MADS algorithm. ACM Transactions on Mathematical Software, 37(4):44:1–44:15, 2011. doi: 10.1145/1916461.1916468.
[14] J. Mockus. Bayesian Approach to Global Optimization: Theory and Applications. Springer, 1989.
[15] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, second edition, 2006.
[16] J. Parr, A. Keane, A. Forrester, and C. Holden. Infill sampling criteria for surrogate-based optimization with constraint handling.
Engineering Optimization, 44:1147–1166, 2012.
[17] V. Picheny. A stepwise uncertainty reduction approach to constrained global optimization. In Proceedings of the 17th International Conference on Artificial Intelligence and Statistics, volume 33, pages 787–795. JMLR W&CP, 2014.
[18] V. Picheny, D. Ginsbourger, and T. Krityakierne. Comment: Some enhancements over the augmented Lagrangian approach. Technometrics, 58(1):17–21, 2016.
[19] V. Picheny, D. Ginsbourger, and O. Roustant, with contributions by M. Binois, C. Chevalier, S. Marmin, and T. Wagner. DiceOptim: Kriging-Based Optimization for Computer Experiments, 2016. R package version 2.0.
[20] M. J. Sasena. Flexibility and Efficiency Enhancement for Constrained Global Design Optimization with Kriging Approximations. PhD thesis, University of Michigan, 2002.
[21] M. Schonlau, W. J. Welch, and D. R. Jones. Global versus local search in constrained optimization of computer models. Lecture Notes–Monograph Series, pages 11–25, 1998.
[22] R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2004. URL http://www.R-project.org. ISBN 3-900051-00-3.
[23] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Neural Information Processing Systems (NIPS), 2012.
Combinatorial Energy Learning for Image Segmentation

Jeremy Maitin-Shepard (UC Berkeley, Google) jbms@google.com
Viren Jain (Google) viren@google.com
Michal Januszewski (Google) mjanusz@google.com
Peter Li (Google) phli@google.com
Pieter Abbeel (UC Berkeley) pabbeel@cs.berkeley.edu

Abstract

We introduce a new machine learning approach for image segmentation that uses a neural network to model the conditional energy of a segmentation given an image. Our approach, combinatorial energy learning for image segmentation (CELIS), places a particular emphasis on modeling the inherent combinatorial nature of dense image segmentation problems. We propose efficient algorithms for learning deep neural networks to model the energy function, and for local optimization of this energy in the space of supervoxel agglomerations. We extensively evaluate our method on a publicly available 3-D microscopy dataset with 25 billion voxels of ground truth data. On an 11 billion voxel test set, we find that our method improves volumetric reconstruction accuracy by more than 20% as compared to two state-of-the-art baseline methods: graph-based segmentation of the output of a 3-D convolutional neural network trained to predict boundaries, as well as a random forest classifier trained to agglomerate supervoxels that were generated by a 3-D convolutional neural network.

1 Introduction

Mapping neuroanatomy, in the pursuit of linking hypothesized computational models consistent with observed functions to the actual physical structures, is a long-standing fundamental problem in neuroscience. One primary interest is in mapping the network structure of neural circuits by identifying the morphology of each neuron and the locations of synaptic connections between neurons, a field called connectomics. Currently, the most promising approach for obtaining such maps of neural circuit structure is volume electron microscopy of a stained and fixed block of tissue.
[4, 16, 17, 10] This technique was first used successfully decades ago in mapping the structure of the complete nervous system of the 302-neuron Caenorhabditis elegans; due to the need to manually cut, image, align, and trace all neuronal processes in roughly 8000 serial sections of 50 nm thickness, even this small circuit required over 10 years of labor, much of it spent on image analysis. [31] At the time, scaling this approach to larger circuits was not practical. Recent advances in volume electron microscopy [11, 20, 15] make feasible the imaging of large circuits, potentially containing hundreds of thousands of neurons, at sufficient resolution to discern even the smallest neuronal processes. [4, 16, 17, 10] The high image quality and near-isotropic resolution achievable with these methods enable the resultant data to be treated as a true 3-D volume, which significantly aids reconstruction of processes that do not run parallel to the sectioning axis, and is potentially more amenable to automated image processing.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Illustration of computation of global energy for a single candidate segmentation S. The local energy E_s(x; S; I) ∈ [0, 1], computed by a deep neural network, is summed over all shape descriptor types s and voxel positions x. (Panels: image I; initial oversegmentation from boundary classification and agglomeration; candidate segmentation S; shape descriptors; convolutional neural network and fully-connected layer producing local energies E_{s1}, E_{s2}, E_{s3}; global energy E(S; I).)

Image analysis remains a key challenge, however. The primary bottleneck is in segmenting the full volume, which is filled almost entirely by heavily intertwined neuronal processes, into the volumes occupied by each individual neuron.
While the cell boundaries shown by the stain provide a strong visual cue in most cases, neurons can extend for tens of centimeters in path length while in some places becoming as narrow as 40 nm; a single mistake anywhere along the path can render connectivity information for the neuron largely inaccurate. Existing automated and semi-automated segmentation methods do not sufficiently reduce the amount of human labor required: a recent reconstruction of 950 neurons in the mouse retina required over 20,000 hours of human labor, even with an efficient method of tracing just a skeleton of each neuron [18]; a recent reconstruction of 379 neurons in the Drosophila medulla column (part of the visual pathway) required 12,940 hours of manual proof-reading/correction of an automated segmentation [26].

Related work: Algorithmic approaches to image segmentation are often formulated as variations on the following pipeline: a boundary detection step establishes local hypotheses of object boundaries, a region formation step integrates boundary evidence into local regions (i.e., superpixels or supervoxels), and a region agglomeration step merges adjacent regions based on image and object features. [1, 19, 30, 2] Although extensive integration of machine learning into such pipelines has begun to yield promising segmentation results [3, 14, 22], we argue that such pipelines, as previously formulated, fundamentally neglect two potentially important aspects of achieving accurate segmentation: (i) the combinatorial nature of reasoning about dense image segmentation structure,1 and (ii) the fundamental importance of shape as a criterion for segmentation quality.

Contributions: We propose a method that attempts to overcome these deficiencies. In particular, we propose an energy-based model that scores segmentation quality using a deep neural network that flexibly integrates shape and image information: Combinatorial Energy Learning for Image Segmentation (CELIS).
In pursuit of such a model this paper makes several specific contributions:

• a novel connectivity region data structure for efficiently computing the energy of configurations of 3-D objects;
• a binary shape descriptor for efficient representation of 3-D shape configurations;
• a neural network architecture that splices the intermediate unit output from a trained convolutional network as input to a deep fully-connected neural network architecture that scores a segmentation and 3-D image;
• a training procedure that uses pairwise object relations within a segmentation to learn the energy-based model;
• an experimental evaluation of the proposed and baseline automated reconstruction methods on a massive and (to our knowledge) unprecedented scale that reflects the true size of connectomic datasets required for biological analysis (many billions of voxels).

1While prior work [30, 14, 2] has recognized the importance of combinatorial reasoning, the previously proposed global optimization methods allow local decisions to interact only in a very limited way.

2 Conditional energy modeling of segmentations given images

We define a global, translation-invariant energy model for predicting the cost of a complete segmentation S given a corresponding image I. This cost can be seen as analogous to the negative log-likelihood of the segmentation given the image, but we do not actually treat it probabilistically. Our goal is to define a model such that the true segmentation corresponding to a given image can be found by minimizing the cost; the energy can reflect both a prior over object configurations alone, as well as compatibility between object configurations and the image. As shown in Fig.
1, we define the global energy E(S; I) as the sum over local energy models (defined by a deep neural network) E_s(x; S; I) at several different scales s, computed in sliding-window fashion centered at every position x within the volume:

E(S; I) := \sum_{s} \sum_{x} E_s(x; S; I), \qquad E_s(x; S; I) := \hat{E}_s(r_s(x; S); \phi(x; I)).

The local energy E_s(x; S; I) depends on the local image context centered at position x by way of a vector representation \phi(x; I) computed by a deep convolutional neural network, and on the local shape/object configuration at scale s by way of a novel local binary shape descriptor r_s(x; S), defined in Section 3. To find (locally) minimal-cost segmentations under this model, we use local search over the space of agglomerations starting from some initial supervoxel segmentation. Using a simple greedy policy, at each step we consider all possible agglomeration actions, i.e., merges between any two adjacent segments, and pick the action that results in the lowest energy. Naïvely, computing the energy for just a single segmentation requires computing shape descriptors and then evaluating the energy model at every voxel position within the volume; a small volume may have tens or hundreds of millions of voxels. At each stage of the agglomeration, there may be thousands, or tens of thousands, of potential next agglomeration steps, each of which results in a unique segmentation. In order to choose the best next step, we must know the energy of all of these potential next segmentations. The computational cost to perform these computations directly would be tremendous, but in the supplement, we prove a collection of theorems that allow for an efficient implementation that computes these energy terms incrementally.

3 Representing 3-D Shape Configurations with Local Binary Descriptors

We propose a binary shape descriptor based on subsampled pairwise connectivity information: given a specification s of k pairs of position offsets {a1, b1}, . . .
, {a_k, b_k} relative to the center of some fixed-size bounding box of size B_s, the corresponding k-bit binary shape descriptor r(U) for a particular segmentation U of that bounding box is defined, for i ∈ [1, k], by

r_i(U) := 1 if a_i is connected to b_i in U, and 0 otherwise.

As shown in Fig. 2a, each bit of the descriptor specifies whether a particular pair of positions are part of the same segment, which can be determined in constant time by the use of a suitable data structure. In the limit case, if we use the list of all \binom{n}{2} pairs of positions within an n-voxel bounding box, no information is lost and the Hamming distance between two descriptors is precisely equal to the Rand index. [23] In general we can sample a subset of only k pairs out of the \binom{n}{2} possible; if we sample uniformly at random, we retain the property that the expected Hamming distance between two descriptors is equal to the Rand index. We found that picking k = 512 bits provides a reasonable trade-off between fidelity and representation size. While the pairs may be randomly sampled initially, to obtain consistent results when learning models based on these descriptors we must naturally use the same fixed list of positions for defining the descriptor at both training and test time.2 Note that this descriptor serves in general as a type of sketch of a full segmentation of a given bounding box. By restricting one of the two positions of each pair to be the center position of the bounding box, we instead obtain a sketch of just the single segment containing the center position. We refer to the descriptor in this case as center-based, and to the general case as pairwise, as shown in Fig. 2b. We will use these shape descriptors to represent only local sub-regions of a segmentation. To represent shape information throughout a large volume, we compute shape descriptors densely at all positions in a sliding window fashion, as shown in Fig. 2c.
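As a concrete illustration of the descriptor just defined, the following is a minimal sketch in numpy. The helper names (`sample_pairs`, `shape_descriptor`) are hypothetical, and connectivity is approximated by comparing labels in a precomputed label array; the paper's implementation instead determines connectivity via an efficient connectivity-region data structure:

```python
import numpy as np

def sample_pairs(box_shape, k, rng, center_based=False):
    """Sample k pairs of voxel positions inside a bounding box.
    (Hypothetical helper; one such fixed list is used per descriptor type.)"""
    def rand_pos():
        return tuple(int(rng.integers(0, d)) for d in box_shape)
    center = tuple(d // 2 for d in box_shape)
    return [((center if center_based else rand_pos()), rand_pos())
            for _ in range(k)]

def shape_descriptor(labels, pairs):
    """k-bit descriptor: bit i is 1 iff the two positions of pair i lie in the
    same segment of `labels` (label 0 treated as background, never connected)."""
    bits = np.zeros(len(pairs), dtype=np.uint8)
    for i, (a, b) in enumerate(pairs):
        la, lb = labels[a], labels[b]
        bits[i] = 1 if (la == lb and la != 0) else 0
    return bits
```

With `center_based=True`, the first position of every pair is pinned to the box center, giving the center-based variant; the Hamming distance between two such descriptors then approximates (in expectation, under uniform pair sampling) the Rand index between the underlying segmentations, as noted above.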
2The BRIEF descriptor [5] is similarly defined as a binary descriptor based on a subset of the pairs of points within a patch, but each bit is based on the intensity difference, rather than connectivity, between each pair.

Figure 2: Illustration of shape descriptors. (a) Sequence showing computation of a shape descriptor. (b) Shape descriptors are computed at multiple scales; pairwise descriptors (shown left and center) consider arbitrary pairwise connectivity, while center-based shape descriptors (shown right) restrict one position of each pair to be the center point. (c) Shape descriptors are computed densely at every position within the volume. The connected components of the bounding box U for which the descriptor is computed are shown in distinct colors. The pairwise connectivity relationships that define the descriptor are indicated by dashed lines; connected pairs are shown in white, while disconnected pairs are shown in black. Connectivity is determined based on the connected components of the underlying segmentation, not the geometry of the line itself. While this illustration is 2-D, in our experiments shape descriptors are computed fully in 3-D.

Connectivity Regions. As defined, a single shape descriptor represents the segmentation within its fixed-size bounding box; by shifting the position of the bounding box we can obtain descriptors corresponding to different local regions of some larger segmentation. The size of the bounding box determines the scale of the local representation. This raises the question of how connectivity should be defined within these local regions. Two voxels may be connected only by a long path well outside the descriptor bounding box.
As we would like the shape descriptors to be consistent with the local topology, such pairs should be considered disconnected. Shape descriptors are, therefore, defined with respect to connectivity within some larger connectivity region, which necessarily contains one or more descriptor bounding boxes but may in general be significantly smaller than the full segmentation; conceptually, the shape descriptor bounding box slides around to all possible positions contained within the connectivity region. (This sliding necessarily results in some minor inconsistency in context between different positions, but reduces computational and memory costs.) To obtain shape descriptors at all positions, we simply tile the space with overlapping rectangular connectivity regions of appropriate uniform size and stride, as shown in the supplement. The connectivity region size determines the degree of locality of the connectivity information captured by the shape descriptor (independent of the descriptor bounding box size). It also affects computational costs, as described in the supplement.

4 Energy model learning

We define the local energy model \hat{E}_s(r; v) for each shape descriptor type/scale s by a learned neural network model that computes a real-valued score in [0, 1] from a shape descriptor r and image feature vector v. To simplify the presentation, we define the following notation for the forward discrete derivative of f with respect to S:

\Delta_S^e f(S) := f(S + e) - f(S).

Based on this notation, the discrete derivative of the energy function is \Delta_S^e E(S; I) = E(S + e; I) - E(S; I), where S + e denotes the result of merging the two supervoxels corresponding to e in the existing segmentation S. To agglomerate, our greedy policy simply chooses at step t the action e that minimizes \Delta_{S_t}^e E(S_t; I), where S_t denotes the current segmentation at step t.
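The greedy policy just described can be sketched as a generic local search. Everything below is a hypothetical simplification: `delta_energy(a, b)` stands in for \Delta_S^e E(S; I) on the current segmentation, and the incremental energy bookkeeping proved correct in the supplement is omitted:

```python
def greedy_agglomerate(supervoxels, adjacency, delta_energy):
    """Greedy local search over merges (a sketch, not the paper's implementation).

    supervoxels: iterable of segment ids.
    adjacency:   dict id -> set of adjacent ids (symmetric).
    delta_energy(a, b): change in global energy from merging segments a and b.
    Stops at the first local minimum, i.e. when no merge lowers the energy.
    """
    segments = {s: {s} for s in supervoxels}          # id -> member supervoxels
    adjacency = {s: set(n) for s, n in adjacency.items()}
    while True:
        candidates = [(delta_energy(a, b), a, b)
                      for a in adjacency for b in adjacency[a] if a < b]
        if not candidates:
            break
        d, a, b = min(candidates)
        if d >= 0:                                    # no merge lowers the energy
            break
        # Merge b into a and rewire the adjacency structure.
        segments[a] |= segments.pop(b)
        neighbors_b = adjacency.pop(b)
        adjacency[a].discard(b)
        for n in neighbors_b:
            if n in (a, b):
                continue
            adjacency[n].discard(b)
            adjacency[n].add(a)
            adjacency[a].add(n)
        # A real implementation would now update the affected delta_energy
        # values incrementally rather than recomputing them from scratch.
    return segments
```

Stopping at the first local minimum is a choice made for this sketch; the agglomeration threshold is in general a tunable hyperparameter.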
As in prior work [22], we treat this as a classification problem, with the goal of matching the sign of \Delta_{S_t}^e E(S_t; I) to \Delta_{S_t}^e error(S_t, S^*), the corresponding change in segmentation error with respect to a ground truth segmentation S^*, measured using Variation of Information [21].

4.1 Local training procedure

Because the \Delta_{S_t}^e E(S_t; I) term is simply the sum of the changes in energy from each position and descriptor type s, as a heuristic we optimize the parameters of the energy model \hat{E}_s(r; v) independently for each shape descriptor type/scale s. We seek to minimize the expectation

E_i [ \ell(\Delta_{S_i}^{e_i} error(S_i, S^*), \hat{E}_s(r_s(x_i; S_i + e_i); \phi(x_i; I))) + \ell(-\Delta_{S_i}^{e_i} error(S_i, S^*), \hat{E}_s(r_s(x_i; S_i); \phi(x_i; I))) ],

where i indexes over training examples that correspond to a particular sampled position x_i and a merge action e_i applied to a segmentation S_i. Here \ell(y, a) denotes a binary classification loss function, weighted by |y|, where a ∈ [0, 1] is the predicted probability that the true label y is positive. Note that if \Delta_{S_i}^{e_i} error(S_i, S^*) < 0, then action e_i improved the score and therefore we want a low predicted score for the post-merge descriptor r_s(x_i; S_i + e_i) and a high predicted score for the pre-merge descriptor r_s(x_i; S_i); if \Delta_{S_i}^{e_i} error(S_i, S^*) > 0 the opposite applies. We tested the standard log loss \ell(y, a) := -|y| \cdot [\mathbf{1}_{y>0} \log(a) + \mathbf{1}_{y<0} \log(1 - a)], as well as the signed linear loss \ell(y, a) := -y \cdot a, which more closely matches how the E_s(x; S_i; I) terms contribute to the overall \Delta_S^e E(S; I) scores. Stochastic gradient descent (SGD) is used to perform the optimization. We obtain training examples by agglomerating using the expert policy that greedily optimizes error(S_t, S^*). At each segmentation state S_t during an agglomeration step (including the initial state), for each possible agglomeration action e, and each position x within the volume, we compute the shape descriptor pair r_s(x; S_t) and r_s(x; S_t + e), reflecting the pre-merge and post-merge states, respectively.
If r_s(x; S_t) ≠ r_s(x; S_t + e), we emit a training example corresponding to this descriptor pair. We thereby obtain a conceptual stream of examples ⟨e, \Delta_{S_t}^e error(S_t, S^*), \phi(x; I), r_s(x; S_t), r_s(x; S_t + e)⟩. This stream may contain billions of examples (many of them highly correlated), far more than required to learn the parameters of E_s. To reduce resource requirements, we use priority sampling [12], based on |\Delta_S^e error(S, S^*)|, to obtain a fixed number of weighted samples without replacement for each descriptor type s. We equalize the total weight of true merge examples (\Delta_S^e error(S, S^*) < 0) and false merge examples (\Delta_S^e error(S, S^*) > 0) in order to avoid learning degenerate models.3

3For example, if most of the weight is on false merge examples, as would often occur without balancing, the model can simply learn to assign a score that increases with the number of 1 bits in the shape descriptor.

5 Experiments

Figure 3: Segmentation accuracy on the 11-gigavoxel FIB-25 test set. Left: Pareto frontiers of information-theoretic split/merge error (merge error H(t|p) versus split error H(p|t)), as used previously to evaluate segmentation accuracy. [22] Right: Comparison of Variation of Information (lower is better) and Rand F1 score (higher is better):

Method              VI     Rand F1
CELIS (this paper)  1.672  0.691
3d-CNN+GALA         2.069  0.597
3d-CNN+Watershed    2.143  0.629
7colseg1            2.981  0.099
Oracle              0.428  0.901

For CELIS, 3d-CNN+GALA, and 3d-CNN+Watershed, the hyperparameters were optimized for each metric on the training set.

We tested our approach on a large, publicly available electron microscopy dataset, called Janelia FIB-25, of a portion of the Drosophila melanogaster optic lobe. The dataset was collected at 8 × 8 × 8 nm
resolution using Focused Ion Beam Scanning Electron Microscopy (FIB-SEM); a labor-intensive semi-automated approach was used to segment all of the larger neuronal processes within a ≈20,000 cubic micron volume (comprising about 25 billion voxels). [27] To our knowledge, this challenging dataset is the largest publicly available electron microscopy dataset of neuropil with a corresponding “ground truth” segmentation. For our experiments, we split the dataset into separate training and testing portions along the z axis: the training portion comprises z-sections 2005–5005, and the testing portion comprises z-sections 5005–8000 (about 11 billion voxels).

5.1 Boundary classification and oversegmentation

To obtain image features and an oversegmentation to use as input for agglomeration, we trained convolutional neural networks to predict, based on a 35 × 35 × 9 voxel image context region, whether the center voxel is part of the same neurite as the adjacent voxel in each of the x, y, and z directions, as in prior work. [29] We optimized the parameters of the network using stochastic gradient descent with log loss. We trained several different networks, varying as hyperparameters the amount of dilation of boundaries in the training data (in order to increase extracellular space) from 0 to 8 voxels and whether components smaller than 10,000 voxels were excluded. See the supplementary information for a description of the network architecture. Using these connection affinities, we applied a watershed algorithm [33, 34] to obtain an (approximate) oversegmentation. We used parameters T_l = 0.95, T_h = 0.95, T_e = 0.5, and T_s = 1000 voxels.

5.2 Energy model architecture

We used five types of 512-dimensional shape descriptors: three pairwise descriptor types with 9^3, 17^3, and 33^3 voxel bounding boxes, and two center-based descriptor types with 17^3 and 33^3 voxel bounding boxes, respectively.
The connectivity positions within the bounding boxes for each descriptor type were sampled uniformly at random. We used the 512-dimensional fully-connected penultimate layer output of the low-level classification convolutional neural network as the image feature vector \phi(x; I). For each shape descriptor type s, we used the following architecture for the local energy model \hat{E}_s(r; v): we concatenated the shape descriptor vector and the image feature vector to obtain a 1024-dimensional input vector. We used two 2048-dimensional fully-connected rectified linear hidden layers, followed by a logistic output unit, and applied dropout (with p = 0.5) after the last hidden layer. While this effectively computes a score from a raw image patch and a shape descriptor, by segregating the expensive convolutional image processing that does not depend on the shape descriptor, this architecture allows us to benefit from pre-training and precomputation of the intermediate image feature vector \phi(x; I) for each position x. Training for both the energy models and the boundary classifier was performed using asynchronous SGD with a distributed architecture. [9]

5.3 Evaluation

We compared our method to the state-of-the-art agglomeration method GALA [22], which trains a random forest classifier to predict merge decisions using image features derived from boundary probabilities.4 To obtain such probabilities from our low-level convolutional neural network classifier, which predicts edge affinities between adjacent voxels rather than per-voxel predictions, we compute for each voxel the minimum connection probability to any voxel in its 6-connectivity neighborhood, and treat this as the probability/score of it being cell interior. For comparison, we also evaluated a watershed procedure applied to the CNN affinity graph output, under varying parameter choices, to measure the accuracy of the deep CNN boundary classification without the use of an agglomeration procedure.
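The per-voxel conversion just described (minimum affinity over the 6-connected neighborhood) can be sketched in numpy. The affinity array layout assumed below, with a leading axis indexing the +1 edge direction, is our convention for the sketch, not necessarily the layout used in the paper:

```python
import numpy as np

def voxel_interior_probability(affinities):
    """Per-voxel interior probability: the minimum affinity between the voxel
    and any of its 6-connected neighbors.

    affinities: float array of shape (3, Z, Y, X), where affinities[d, z, y, x]
    is the connection probability between voxel (z, y, x) and its +1 neighbor
    along axis d. Entries for edges leaving the volume are ignored.
    """
    _, Z, Y, X = affinities.shape
    prob = np.full((Z, Y, X), np.inf)
    for d in range(3):
        aff = affinities[d]
        lead = [slice(None)] * 3
        lead[d] = slice(0, -1)        # voxels that have a +1 neighbor along d
        trail = [slice(None)] * 3
        trail[d] = slice(1, None)     # voxels that have a -1 neighbor along d
        edge = aff[tuple(lead)]       # one affinity per in-volume edge
        # each edge bounds the interior score of both of its endpoints
        prob[tuple(lead)] = np.minimum(prob[tuple(lead)], edge)
        prob[tuple(trail)] = np.minimum(prob[tuple(trail)], edge)
    return prob
```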
Finally, we evaluated the accuracy of the publicly released automated segmentation of FIB-25 (referred to as 7colseg1) [13] that was the basis of the proofreading process used to obtain the ground truth; it was produced by applying watershed segmentation and a variant of GALA agglomeration to the predictions made by an Ilastik [25]-trained voxel classifier. We tested both GALA and CELIS using the same initial oversegmentations for the training and test regions. To compare the accuracy of the reconstructions, we computed two measures of segmentation consistency relative to the ground truth: Variation of Information [21] and Rand F1 score, defined as the F1 classification score over connectivity between all voxel pairs within the volumes; these are the primary metrics used in prior work. [28, 8, 22] The former has the advantage of weighing segments linearly in their size rather than quadratically. Because any agglomeration method is ultimately limited by the quality of the initial oversegmentation, we also computed the accuracy of an oracle agglomeration policy that greedily optimizes the error metric directly. (Computing the true globally-optimal agglomeration under either metric is intractable.) This serves as an (approximate) upper bound that is useful for separating the error due to agglomeration from the error due to the initial oversegmentation.

6 Results

Figure 3 shows the Pareto optimal trade-offs between test set split and merge error of each method, obtained by varying the choice of hyperparameters and agglomeration thresholds, as well as the Variation of Information and Rand F1 scores obtained from the training-set-optimal hyperparameters. CELIS consistently outperforms all other methods by a significant margin under both metrics. The large gap between the Oracle results and the best automated reconstruction indicates, however, that there is still large room for improvement in agglomeration.
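For reference, the Variation of Information metric used in the evaluation can be computed directly from the joint label histogram of two segmentations; the sketch below is our own, not the paper's evaluation code:

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """Variation of Information between two labelings of the same voxels:
    VI = H(A|B) + H(B|A) = 2 H(A,B) - H(A) - H(B).
    Lower is better; 0 means the segmentations agree up to relabeling."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # joint contingency table: unique (label_a, label_b) columns with counts
    _, joint_counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    _, counts_a = np.unique(a, return_counts=True)
    _, counts_b = np.unique(b, return_counts=True)
    def entropy(counts):
        p = counts / n
        return -np.sum(p * np.log(p))
    return 2 * entropy(joint_counts) - entropy(counts_a) - entropy(counts_b)
```

As noted above, VI weighs segments linearly in their size, in contrast to the quadratic weighting implicit in pairwise (Rand-style) metrics.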
While the evaluations are done on a single dataset, it is a single very large dataset; to verify that the improvement due to CELIS is broad and general (rather than localized to a very specific part of the image volume), we also evaluated accuracy independently on 18 non-overlapping 500^3-voxel subvolumes evenly spaced within the test region. On all subvolumes CELIS outperformed the best existing method under both metrics, with a median reduction in Variation of Information error of 19% and in Rand F1 error of 22%. This suggests that CELIS is improving accuracy in many parts of the volume that span significant variations in shape and image characteristics.

4GALA also supports multi-channel image features, potentially representing predicted probabilities of additional classes, such as mitochondria, but we did not make use of this functionality as we did not have training data for additional classes.

7 Discussion

We have introduced CELIS, a framework for modeling image segmentations using a learned energy function that specifically exploits the combinatorial nature of dense segmentation. We have described how this approach can be used to model the conditional energy of a segmentation given an image, and how the resulting model can be used to guide supervoxel agglomeration decisions. In our experiments on a challenging 3-D microscopy reconstruction problem, CELIS improved volumetric reconstruction accuracy by 20% over the best existing method, and offered a strictly better trade-off between split and merge errors, by a wide margin, compared to existing methods.
The experimental results are unique in the scale of the evaluations: the 11-gigavoxel test region is 2–4 orders of magnitude larger than used for evaluation in prior work, and we believe this large scale of evaluation to be critically important; we have found evaluations on smaller volumes, containing only short neurite fragments, to be unreliable at predicting accuracy on larger volumes (where propagation of merge errors is a major challenge). While more computationally expensive than many prior methods, CELIS is nonetheless practical: we have successfully run CELIS on volumes approaching ≈1 teravoxel in a matter of hours, albeit using many thousands of CPU cores. In addition to advancing the state of the art in learning-based image segmentation, this work also has significant implications for the application area we have studied, connectomic reconstruction. The FIB-25 dataset reflects state-of-the-art techniques in sample preparation and imaging for large-scale neuron reconstruction, and in particular is highly representative of much larger datasets actively being collected (e.g. of a full adult fly brain). We expect, therefore, that the significant improvements in automated reconstruction accuracy made by CELIS on this dataset will directly translate to a corresponding decrease in human proofreading effort required to reconstruct a given volume of tissue, and a corresponding increase in the total size of neural circuit that may reasonably be reconstructed. Future work in several specific areas seems particularly fruitful:
• End-to-end training of the CELIS energy modeling pipeline, including the CNN model for computing the image feature representation and the aggregation of local energies at each position and scale. Because the existing pipeline is fully differentiable, it is directly amenable to end-to-end training.
• Integration of the CELIS energy model with discriminative training of a neural network-based agglomeration policy.
Such a policy could depend on the distribution of local energy changes, rather than just the sum, as well as other per-object and per-action features proposed in prior work [22, 3].
• Use of a CELIS energy model for fixing undersegmentation errors. While the energy minimization procedure proposed in this paper is based on a greedy local search limited to performing merges, the CELIS energy model is capable of evaluating arbitrary changes to the segmentation. Evaluation of candidate splits (based on a hierarchical initial segmentation or other heuristic criteria) would allow for the use of a potentially more robust simulated annealing energy minimization procedure capable of both splits and merges.
Several recent works [24, 32, 7, 6] have integrated deep neural networks into pairwise-potential conditional random field models. Similar to CELIS, these approaches combine deep learning with structured prediction, but differ from CELIS in several key ways:
• Through a restriction to models that can be factored into pairwise potentials, these approaches are able to use mean field and pseudomarginal approximations to perform efficient approximate inference. The CELIS energy model, in contrast, sacrifices factorization for the richer combinatorial modeling provided by the proposed 3d shape descriptors.
• More generally, these prior CRF methods are focused on refining predictions (e.g. improving boundary localization/detail for semantic segmentation) made by a feed-forward neural network that are correct at a high level. In contrast, CELIS is designed to correct fundamental inaccuracy of the feed-forward convolutional neural network in critical cases of ambiguity, which is reflected in the much greater complexity of the structured model.

Acknowledgments

This material is based upon work supported by the National Science Foundation under Grant No. 1118055.

References

[1] B. Andres, U. Köthe, M. Helmstaedter, W. Denk, and F. Hamprecht.
Segmentation of SBFSEM volume data of neural tissue by hierarchical classification. Pattern Recognition, pages 142–152, 2008.
[2] Bjoern Andres, Thorben Kroeger, Kevin L Briggman, Winfried Denk, Natalya Korogod, Graham Knott, Ullrich Koethe, and Fred A Hamprecht. Globally optimal closed-surface segmentation for connectomics. In Computer Vision–ECCV 2012, pages 778–791. Springer, 2012.
[3] John A Bogovic, Gary B Huang, and Viren Jain. Learned versus hand-designed feature representations for 3d agglomeration. arXiv:1312.6159, 2013.
[4] Kevin L Briggman and Winfried Denk. Towards neural circuit reconstruction with volume electron microscopy techniques. Current Opinion in Neurobiology, 16(5):562–570, 2006.
[5] Michael Calonder, Vincent Lepetit, Christoph Strecha, and Pascal Fua. BRIEF: Binary robust independent elementary features. In European Conference on Computer Vision, pages 778–792. Springer, 2010.
[6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. CoRR, abs/1412.7062, 2014.
[7] Liang-Chieh Chen, Alexander G Schwing, Alan L Yuille, and Raquel Urtasun. Learning deep structured models. In Proc. ICML, 2015.
[8] Dan Claudiu Ciresan, Alessandro Giusti, Luca Maria Gambardella, and Jürgen Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In NIPS, pages 2852–2860, 2012.
[9] Jeffrey Dean, Greg Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Mark Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, Quoc V. Le, and Andrew Y. Ng. Large scale distributed deep networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1223–1231. Curran Associates, Inc., 2012.
[10] Winfried Denk, Kevin L Briggman, and Moritz Helmstaedter. Structural neurobiology: missing link to a mechanistic understanding of neural computation. Nature Reviews Neuroscience, 13(5):351–358, 2012.
[11] Winfried Denk and Heinz Horstmann. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biology, 2(11):e329, 2004.
[12] Nick Duffield, Carsten Lund, and Mikkel Thorup. Priority sampling for estimation of arbitrary subset sums. Journal of the ACM (JACM), 54(6):32, 2007.
[13] Janelia FlyEM. https://www.janelia.org/project-team/flyem/data-and-software-release. Accessed: 2016-05-19.
[14] Jan Funke, Bjoern Andres, Fred A Hamprecht, Albert Cardona, and Matthew Cook. Efficient automatic 3d-reconstruction of branching neurons from EM data. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 1004–1011. IEEE, 2012.
[15] KJ Hayworth, N Kasthuri, R Schalek, and JW Lichtman. Automating the collection of ultrathin serial sections for large volume TEM reconstructions. Microscopy and Microanalysis, 12(Supplement S02):86–87, 2006.
[16] Moritz Helmstaedter, Kevin L Briggman, and Winfried Denk. 3d structural imaging of the brain with photons and electrons. Current Opinion in Neurobiology, 18(6):633–641, 2008.
[17] Moritz Helmstaedter, Kevin L Briggman, and Winfried Denk. High-accuracy neurite reconstruction for high-throughput neuroanatomy. Nature Neuroscience, 14(8):1081–1088, 2011.
[18] Moritz Helmstaedter, Kevin L Briggman, Srinivas C Turaga, Viren Jain, H Sebastian Seung, and Winfried Denk. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature, 500(7461):168–174, 2013.
[19] Viren Jain, Srinivas C Turaga, Kevin L Briggman, Moritz N Helmstaedter, Winfried Denk, and H Sebastian Seung. Learning to agglomerate superpixel hierarchies. Advances in Neural Information Processing Systems, 2(5), 2011.
[20] Graham Knott, Herschel Marchman, David Wall, and Ben Lich. Serial section scanning electron microscopy of adult brain tissue using focused ion beam milling. The Journal of Neuroscience, 28(12):2959–2964, 2008.
[21] Marina Meilă. Comparing clusterings—an information based distance. Journal of Multivariate Analysis, 98(5):873–895, 2007.
[22] Juan Nunez-Iglesias, Ryan Kennedy, Toufiq Parag, Jianbo Shi, and Dmitri B Chklovskii. Machine learning of hierarchical clustering to segment 2d and 3d images. PLoS ONE, 8(8):e71715, 2013.
[23] William M. Rand. Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66(336):846–850, 1971.
[24] Alexander G Schwing and Raquel Urtasun. Fully connected deep structured networks. arXiv preprint arXiv:1503.02351, 2015.
[25] Christoph Sommer, Christoph Straehle, Ullrich Köthe, and Fred A Hamprecht. ilastik: Interactive learning and segmentation toolkit. In Biomedical Imaging: From Nano to Macro, 2011 IEEE International Symposium on, pages 230–233. IEEE, 2011.
[26] Shin-ya Takemura, Arjun Bharioke, Zhiyuan Lu, Aljoscha Nern, Shiv Vitaladevuni, Patricia K Rivlin, William T Katz, Donald J Olbris, Stephen M Plaza, Philip Winston, et al. A visual motion detection circuit suggested by Drosophila connectomics. Nature, 500(7461):175–181, 2013.
[27] Shin-ya Takemura, C Shan Xu, Zhiyuan Lu, Patricia K Rivlin, Toufiq Parag, Donald J Olbris, Stephen Plaza, Ting Zhao, William T Katz, Lowell Umayam, et al. Synaptic circuits and their variations within different columns in the visual system of Drosophila. Proceedings of the National Academy of Sciences, 112(44):13711–13716, 2015.
[28] Srinivas Turaga, Kevin Briggman, Moritz Helmstaedter, Winfried Denk, and Sebastian Seung. Maximin affinity learning of image segmentation. In Advances in Neural Information Processing Systems 22, pages 1865–1873. MIT Press, Cambridge, MA, 2009.
[29] Srinivas C. Turaga, Joseph F. Murray, Viren Jain, Fabian Roth, Moritz Helmstaedter, Kevin Briggman, Winfried Denk, and H. Sebastian Seung. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Computation, 22(2):511–538, 2010.
[30] Amelio Vazquez-Reina, Michael Gelbart, Daniel Huang, Jeff Lichtman, Eric Miller, and Hanspeter Pfister. Segmentation fusion for connectomics. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 177–184. IEEE, 2011.
[31] J. G. White, E. Southgate, J. N. Thomson, and S. Brenner. The Structure of the Nervous System of the Nematode Caenorhabditis elegans. Philosophical Transactions of the Royal Society of London. B, Biological Sciences, 314(1165):1–340, 1986.
[32] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip HS Torr. Conditional random fields as recurrent neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 1529–1537, 2015.
[33] Aleksandar Zlateski. A design and implementation of an efficient, parallel watershed algorithm for affinity graphs. PhD thesis, Massachusetts Institute of Technology, 2011.
[34] Aleksandar Zlateski and H. Sebastian Seung. Image segmentation by size-dependent single linkage clustering of a watershed basin graph. CoRR, 2015.
Bayesian Optimization for Probabilistic Programs
Tom Rainforth† Tuan Anh Le† Jan-Willem van de Meent‡ Michael A. Osborne† Frank Wood†
† Department of Engineering Science, University of Oxford
‡ College of Computer and Information Science, Northeastern University
{twgr,tuananh,mosb,fwood}@robots.ox.ac.uk, j.vandemeent@northeastern.edu

Abstract

We present the first general purpose framework for marginal maximum a posteriori estimation of probabilistic program variables. By using a series of code transformations, the evidence of any probabilistic program, and therefore of any graphical model, can be optimized with respect to an arbitrary subset of its sampled variables. To carry out this optimization, we develop the first Bayesian optimization package to directly exploit the source code of its target, leading to innovations in problem-independent hyperpriors, unbounded optimization, and implicit constraint satisfaction, delivering significant performance improvements over prominent existing packages. We present applications of our method to a number of tasks including engineering design and parameter optimization.

1 Introduction

Probabilistic programming systems (PPS) allow probabilistic models to be represented in the form of a generative model and statements for conditioning on data [4, 9, 10, 16, 17, 29]. Their core philosophy is to decouple model specification and inference, the former corresponding to the user-specified program code and the latter to an inference engine capable of operating on arbitrary programs. Removing the need for users to write inference algorithms significantly reduces the burden of developing new models and makes effective statistical methods accessible to non-experts. Although significant progress has been made on the problem of general purpose inference of program variables, less attention has been given to their optimization.
Optimization is an essential tool for effective machine learning, necessary when the user requires a single estimate. It also often forms a tractable alternative when full inference is infeasible [18]. Moreover, coincident optimization and inference is often required, corresponding to a marginal maximum a posteriori (MMAP) setting where one wishes to maximize some variables, while marginalizing out others. Examples of MMAP problems include hyperparameter optimization, expectation maximization, and policy search [27]. In this paper we develop the first system that extends probabilistic programming (PP) to this more general MMAP framework, wherein the user specifies a model in the same manner as existing systems, but then selects some subset of the sampled variables in the program to be optimized, with the rest marginalized out using existing inference algorithms. The optimization query we introduce can be implemented and utilized in any PPS that supports an inference method returning a marginal likelihood estimate. This framework increases the scope of models that can be expressed in PPS and gives additional flexibility in the outputs a user can request from the program. MMAP estimation is difficult as it corresponds to the optimization of an intractable integral, such that the optimization target is expensive to evaluate and gives noisy results. Current PPS inference engines are typically unsuited to such settings. We therefore introduce BOPP¹ (Bayesian optimization for probabilistic programs) which couples existing inference algorithms from PPS, like Anglican [29], with a new Gaussian process (GP) [22] based Bayesian optimization (BO) [11, 15, 20, 23] package.

¹Code available at http://www.github.com/probprog/bopp/

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Simulation-based optimization of radiator powers subject to varying solar intensity.
Shown are output heat maps from Energy2D [30] simulations at one intensity, corresponding from left to right to setting all the radiators to the same power, the best result from a set of randomly chosen powers, and the best setup found after 100 iterations of BOPP. The far right plot shows convergence of the evidence of the respective model, giving the median and 25/75% quartiles.

(defopt house-heating [alphas] [powers]
  (let [solar-intensity (sample weather-prior)
        powers (sample (dirichlet alphas))
        temperatures (simulate solar-intensity powers)]
    (observe abc-likelihood temperatures)))

Figure 2: BOPP query for optimizing the power allocation to radiators in a house. Here weather-prior is a distribution over the solar intensity and a uniform Dirichlet prior with concentration alphas is placed over the powers. Calling simulate performs an Energy2D simulation of house temperatures. The utility of the resulting output is conditioned upon using abc-likelihood. Calling doopt on this query invokes the BOPP algorithm to perform MMAP estimation, where the second input powers indicates the variable to be optimized.

To demonstrate the functionality provided by BOPP, we consider an example application of engineering design. Engineering design relies extensively on simulations which typically have two things in common: the desire of the user to find a single best design and an uncertainty in the environment in which the designed component will live. Even when these simulations are deterministic, this is an approximation to a truly stochastic world. By expressing the utility of a particular design-environment combination using an approximate Bayesian computation (ABC) likelihood [5], one can pose this as an MMAP problem, optimizing the design while marginalizing out the environmental uncertainty.
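This design-under-environmental-uncertainty pattern can be reduced to a toy numeric sketch: score each candidate design by a Gaussian ABC kernel on the simulator output, averaged over environments drawn from the prior, then pick the best. All names and numbers below are hypothetical stand-ins for the Energy2D setup, not the paper's code:

```python
import math
import random

def simulate(design, environment):
    # Hypothetical deterministic simulator: the ideal output is zero mismatch
    # between the chosen design and the realized environment.
    return abs(design - environment)

def abc_likelihood(output, epsilon=0.5):
    # Gaussian ABC kernel: utility of a simulated output near the target (0).
    return math.exp(-output ** 2 / (2 * epsilon ** 2))

def expected_utility(design, n=5000, seed=0):
    # Monte Carlo marginalization over the environment prior Normal(1.0, 0.3)
    # (a hypothetical "weather" distribution).
    rng = random.Random(seed)
    return sum(abc_likelihood(simulate(design, rng.gauss(1.0, 0.3)))
               for _ in range(n)) / n

# Crude grid search over designs, standing in for BOPP's Bayesian optimization.
best = max([0.0, 0.5, 1.0, 1.5], key=expected_utility)
```

The design matching the environment's typical behaviour wins, exactly the MMAP trade-off BOPP automates with BO in place of the grid.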
Figure 1 illustrates how BOPP can be applied to engineering design, taking the example of optimizing the distribution of power between radiators in a house so as to homogenize the temperature, while marginalizing out possible weather conditions and subject to a total energy budget. The probabilistic program shown in Figure 2 allows us to define a prior over the uncertain weather, while conditioning on the output of a deterministic simulator (here Energy2D [30], a finite element package for heat transfer) using an ABC likelihood. BOPP now allows the required coincident inference and optimization to be carried out automatically, directly returning increasingly optimal configurations. BO is an attractive choice for the required optimization in MMAP as it is typically efficient in the number of target evaluations, operates on non-differentiable targets, and incorporates noise in the target function evaluations. However, applying BO to probabilistic programs presents challenges, such as the need to give robust performance on a wide range of problems with varying scaling and potentially unbounded support. Furthermore, the target program may contain unknown constraints, implicitly defined by the generative model, and variables whose type is unknown (i.e. they may be continuous or discrete). On the other hand, the availability of the target source code in a PPS presents opportunities to overcome these issues and go beyond what can be done with existing BO packages. BOPP exploits the source code in a number of ways, such as optimizing the acquisition function using the original generative model to ensure the solution satisfies the implicit constraints, performing adaptive domain scaling to ensure that GP kernel hyperparameters can be set according to problem-independent hyperpriors, and defining an adaptive non-stationary mean function to support unbounded BO.
Together, these innovations mean that BOPP can be run in a manner that is fully black-box from the user's perspective, requiring only the identification of the target variables relative to current syntax for operating on arbitrary programs. We further show that BOPP is competitive with existing BO engines for direct optimization on common benchmark problems that do not require marginalization.

2 Background

2.1 Probabilistic Programming

Probabilistic programming systems allow users to define probabilistic models using a domain-specific programming language. A probabilistic program implicitly defines a distribution on random variables, whilst the system back-end implements general-purpose inference methods. PPS such as Infer.Net [17] and Stan [4] can be thought of as defining graphical models or factor graphs. Our focus will instead be on systems such as Church [9], Venture [16], WebPPL [10], and Anglican [29], which employ a general-purpose programming language for model specification. In these systems, the set of random variables is dynamically typed, such that it is possible to write programs in which this set differs from execution to execution. This allows an unspecified number of random variables and incorporation of arbitrary black box deterministic functions, such as was exploited by the simulate function in Figure 2. The price for this expressivity is that inference methods must be formulated in such a manner that they are applicable to models where the density function is intractable and can only be evaluated during forwards simulation of the program. One such general purpose system, Anglican, will be used as a reference in this paper. In Anglican, models are defined using the inference macro defquery. These models, which we refer to as queries [9], specify a joint distribution p(Y, X) over data Y and variables X.
Inference on the model is performed using the macro doquery, which produces a sequence of approximate samples from the conditional distribution p(X|Y) and, for importance sampling based inference algorithms (e.g. sequential Monte Carlo), a marginal likelihood estimate p(Y). Random variables in an Anglican program are specified using sample statements, which can be thought of as terms in the prior. Conditioning is specified using observe statements, which can be thought of as likelihood terms. Outputs of the program, taking the form of posterior samples, are indicated by the return values. There is a finite set of sample and observe statements in a program source code, but the number of times each statement is called can vary between executions. We refer the reader to http://www.robots.ox.ac.uk/~fwood/anglican/ for more details.

2.2 Bayesian Optimization

Consider an arbitrary black-box target function f : ϑ → ℝ that can be evaluated for an arbitrary point θ ∈ ϑ to produce, potentially noisy, outputs ŵ ∈ ℝ. BO [15, 20] aims to find the global maximum

θ* = argmax_{θ∈ϑ} f(θ).   (1)

The key idea of BO is to place a prior on f that expresses belief about the space of functions within which f might live. When the function is evaluated, the resultant information is incorporated by conditioning upon the observed data to give a posterior over functions. This allows estimation of the expected value and uncertainty in f(θ) for all θ ∈ ϑ. From this, an acquisition function ζ : ϑ → ℝ is defined, which assigns an expected utility to evaluating f at a particular θ, based on the trade-off between exploration and exploitation in finding the maximum. When direct evaluation of f is expensive, the acquisition function constitutes a cheaper-to-evaluate substitute, which is optimized to ascertain the next point at which the target function should be evaluated in a sequential fashion.
By interleaving optimization of the acquisition function, evaluating f at the suggested point, and updating the surrogate, BO forms a global optimization algorithm that is typically very efficient in the required number of function evaluations, whilst naturally dealing with noise in the outputs. Although alternatives such as random forests [3, 14] or neural networks [26] exist, the most common prior used for f is a GP [22]. For further information on BO we refer the reader to the recent review by Shahriari et al. [24].

3 Problem Formulation

Given a program defining the joint density p(Y, X, θ) with fixed Y, our aim is to optimize with respect to a subset of the variables θ whilst marginalizing out latent variables X:

θ* = argmax_{θ∈ϑ} p(θ|Y) = argmax_{θ∈ϑ} p(Y, θ) = argmax_{θ∈ϑ} ∫ p(Y, X, θ) dX.   (2)

To provide syntax to differentiate between θ and X, we introduce a new query macro defopt. The syntax of defopt is identical to defquery except that it has an additional input identifying the variables to be optimized. To allow for the interleaving of inference and optimization required in MMAP estimation, we further introduce doopt, which, analogous to doquery, returns a lazy sequence {θ̂*_m, Ω̂*_m, û*_m}_{m=1,...} where Ω̂*_m ⊆ X are the program outputs associated with θ = θ̂*_m and each û*_m ∈ ℝ⁺ is an estimate of the corresponding log marginal log p(Y, θ̂*_m) (see Section 4.2). The sequence is defined such that, at any time, θ̂*_m corresponds to the point expected to be most optimal of those evaluated so far and allows both inference and optimization to be carried out online. Although no restrictions are placed on X, it is necessary to place some restrictions on how programs use the optimization variables θ = φ_{1:K} specified by the optimization argument list of defopt. First, each optimization variable φ_k must be bound to a value directly by a sample statement with fixed measure-type distribution argument.
This avoids change-of-variable complications arising from nonlinear deterministic mappings. Second, in order for the optimization to be well defined, the program must be written such that any possible execution trace binds each optimization variable φ_k exactly once. Finally, although any φ_k may be lexically multiply bound, it must have the same base measure in all possible execution traces, because, for instance, if the base measure of a φ_k were to change from Lebesgue to counting, the notion of optimality would no longer admit a conventional interpretation. Note that although the transformation implementations shown in Figure 3 do not contain runtime exception generators that disallow continued execution of programs that violate these constraints, those actually implemented in the BOPP system do.

4 Bayesian Program Optimization

In addition to the syntax introduced in the previous section, there are five main components to BOPP:
- A program transformation, q→q-marg, allowing estimation of the evidence p(Y, θ) at a fixed θ.
- A high-performance, GP based, BO implementation for actively sampling θ.
- A program transformation, q→q-prior, used for automatic and adaptive domain scaling, such that a problem-independent hyperprior can be placed over the GP hyperparameters.
- An adaptive non-stationary mean function to support unbounded optimization.
- A program transformation, q→q-acq, and an annealing maximum likelihood estimation method to optimize the acquisition function subject to the implicit constraints imposed by the generative model.
Together these allow BOPP to perform online MMAP estimation for arbitrary programs in a manner that is black-box from the user's perspective, requiring only the definition of the target program in the same way as existing PPS and identifying which variables to optimize.
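Equation (2) in miniature: for a toy discrete model (a hypothetical joint table, not taken from the paper), the MMAP estimate simply maximizes the latent-marginalized evidence:

```python
# Hypothetical discrete joint p(Y = y0, X = x, theta = t), keyed by (x, t),
# with the data Y fixed at y0.
joint = {
    (0, 0): 0.10, (0, 1): 0.25,
    (1, 0): 0.30, (1, 1): 0.05,
}

# p(Y, theta) = sum over the latent X, the discrete analogue of the
# integral in Equation (2).
evidence = {t: sum(joint[(x, t)] for x in (0, 1)) for t in (0, 1)}

# theta* = argmax_theta p(Y, theta)
theta_star = max(evidence, key=evidence.get)
```

When X is continuous and high-dimensional, this sum becomes the intractable integral that BOPP estimates by inference on q-marg and optimizes over θ by BO.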
The BO component of BOPP is both probabilistic programming and language independent, and is provided as a stand-alone package.² It requires as input only a target function, a sampler to establish rough input scaling, and a problem specific optimizer for the acquisition function that imposes the problem constraints. Figure 3 provides a high level overview of the algorithm invoked when doopt is called on a query q that defines a distribution p(Y, a, θ, b). We wish to optimize θ whilst marginalizing out a and b, as indicated by the second input to q. In summary, BOPP performs iterative optimization in 5 steps:
- Step 1 (blue arrows) generates unweighted samples from the transformed prior program q-prior (top center), constructed by removing all conditioning. This initializes the domain scaling for θ.
- Step 2 (red arrows) evaluates the marginal p(Y, θ) at a small number of the generated θ̂ by performing inference on the marginal program q-marg (middle centre), which returns samples from the distribution p(a, b|Y, θ) along with an estimate of p(Y, θ). The evaluated points (middle right) provide an initial domain scaling of the outputs and starting points for the BO surrogate.
- Step 3 (black arrow) fits a mixture of GPs posterior [22] to the scaled data (bottom centre) using a problem independent hyperprior. The solid blue line and shaded area show the posterior mean and ±2 standard deviations respectively. The new estimate of the optimum θ̂* is the value for which the mean estimate is largest, with û* equal to the corresponding mean value.
²Code available at http://www.github.com/probprog/deodorant/

(defopt q [y] [θ]
  (let [a (sample (p-a))
        θ (sample (p-θ a))
        b (sample (p-b a θ))]
    (observe (lik a θ b) y)
    [a b]))

(defquery q-marg [y θ̂]
  (let [a (sample (p-a))
        θ (observe<- (p-θ a) θ̂)
        b (sample (p-b a θ))]
    (observe (lik a θ b) y)
    [a b]))

(defquery q-prior [y]
  (let [a (sample (p-a))
        θ (sample (p-θ a))]
    θ))

(defquery q-acq [y ζ]
  (let [a (sample (p-a))
        θ (sample (p-θ a))]
    (observe (factor) (ζ θ))
    θ))

Figure 3: Overview of the BOPP algorithm, description given in main text. The original query q is shown together with its three transformations: q-prior samples from the prior p(θ) with all conditioning removed; q-marg accepts the value θ̂ as an input, making the query amenable to optimization; and q-acq is used in the optimization of the acquisition function, where observing from factor assigns a probability exp ζ(θ) to the execution, i.e. (factor) returns a distribution object for which the log probability density function is the identity function. p-a, p-θ, p-b and lik all represent distribution object constructors. factor is a special distribution constructor that assigns probability p(y) = y, in this case y = ζ(θ). (The accompanying plots of the expected improvement surface and the log p(Y, θ) surrogate are omitted here.)
- Step 4 (purple arrows) constructs an acquisition function ζ : ϑ → ℝ⁺ (bottom left) using the GP posterior. This is optimized, giving the next point to evaluate θ̂_next, by performing annealed importance sampling on a transformed program q-acq (middle left) in which all observe statements are removed and replaced with a single observe assigning probability ζ(θ) to the execution.
- Step 5 (green arrow) evaluates θ̂_next using q-marg and continues to step 3.

4.1 Program Transformation to Generate the Target

Consider the defopt query q in Figure 3, the body of which defines the joint distribution p(Y, a, θ, b). Calculating (2) (defining X = {a, b}) using a standard optimization scheme presents two issues: θ is a random variable within the program rather than something we control, and its probability distribution is only defined conditioned on a. We deal with both these issues simultaneously using a program transformation similar to the disintegration transformation in Hakaru [31]. Our marginal transformation returns a new query object, q-marg, as shown in Figure 3, that defines the same joint distribution on program variables and inputs, but now accepts the value for θ as an input. This is done by replacing all sample statements associated with θ with equivalent observe<- statements, taking θ as the observed value, where observe<- is identical to observe except that it returns the observed value. As both sample and observe operate on the same variable type, a distribution object, this transformation can always be made, while the identical returns of sample and observe<- trivially ensure validity of the transformed program.

4.2 Bayesian Optimization of the Marginal

The target function for our BO scheme is log p(Y, θ), noting argmax f(θ) = argmax log f(θ) for any f : ϑ → ℝ⁺. The log is taken because GPs have unbounded support, while p(Y, θ) is always positive, and because we expect variations over many orders of magnitude.
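The kind of noisy evidence estimate that inference on q-marg returns can be illustrated with plain Monte Carlo integration over the latent variable. The model and names below are hypothetical, a toy stand-in for Anglican's SMC engines:

```python
import math
import random

def log_marginal_estimate(theta, n=20000, seed=0):
    """Toy model: a ~ Normal(0, 1), Y | a, theta ~ Normal(a + theta, 1),
    with Y = 1.0 observed. Estimates log p(Y | theta) by averaging the
    likelihood over prior draws of the latent a."""
    rng = random.Random(seed)
    y = 1.0
    acc = 0.0
    for _ in range(n):
        a = rng.gauss(0.0, 1.0)
        acc += math.exp(-0.5 * (y - (a + theta)) ** 2) / math.sqrt(2.0 * math.pi)
    return math.log(acc / n)

# Analytically Y ~ Normal(theta, sqrt(2)), so the estimate is noisy around
# -0.5*log(4*pi) - (1 - theta)**2 / 4, and is maximized near theta = 1.
```

The estimates are both expensive and noisy, which is precisely why a GP surrogate over log p(Y, θ) is fitted rather than optimizing the raw evaluations directly.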
PPS with importance sampling based inference engines, e.g. sequential Monte Carlo [29] or the particle cascade [21], can return noisy estimates of this target given the transformed program q-marg.

Our BO scheme uses a GP prior and a Gaussian likelihood. Though the rationale for the latter is predominantly computational, giving an analytic posterior, there are also theoretical results suggesting that this choice is appropriate [2]. We use as a default covariance function a combination of a Matérn-3/2 and a Matérn-5/2 kernel. By using automatic domain scaling as described in the next section, problem independent priors are placed over the GP hyperparameters such as the length scales and observation noise. Inference over hyperparameters is performed using Hamiltonian Monte Carlo (HMC) [6], giving an unweighted mixture of GPs. Each term in this mixture has an analytic distribution fully specified by its mean function µ^i_m : ϑ → R and covariance function k^i_m : ϑ × ϑ → R, where m indexes the BO iteration and i the hyperparameter sample. This posterior is first used to estimate which of the previously evaluated θ̂_j is the most optimal, by taking the point with the highest expected value, û*_m = max_{j∈1,...,m} Σ_{i=1}^N µ^i_m(θ̂_j). This completes the definition of the output sequence returned by the doopt macro. Note that as the posterior updates globally with each new observation, the relative estimated optimality of previously evaluated points changes at each iteration. Secondly, it is used to define the acquisition function ζ, for which we take the expected improvement [25]. Defining σ^i_m(θ) = √(k^i_m(θ, θ)) and γ^i_m(θ) = (µ^i_m(θ) − û*_m)/σ^i_m(θ),

ζ(θ) = Σ_{i=1}^N [ (µ^i_m(θ) − û*_m) Φ(γ^i_m(θ)) + σ^i_m(θ) φ(γ^i_m(θ)) ]   (3)

where φ and Φ represent the pdf and cdf of a unit normal distribution respectively. We note that more powerful, but more involved, acquisition functions, e.g. [12], could be used instead.
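As a concrete illustration of Eq. (3), the sketch below (the helper names are hypothetical; the paper does not show BOPP's GP code) evaluates the mixture expected improvement given one posterior mean and covariance function per hyperparameter sample:

```python
import math

def norm_pdf(x):
    """pdf of a unit normal."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    """cdf of a unit normal, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_improvement(theta, gp_mixture, u_best):
    """Eq. (3): expected improvement summed over an unweighted mixture of GPs.
    gp_mixture is a list of (mean function, covariance function) pairs, one
    per HMC hyperparameter sample; u_best is the incumbent value u-hat*_m."""
    total = 0.0
    for mu_i, k_i in gp_mixture:
        mu = mu_i(theta)
        sigma = math.sqrt(k_i(theta, theta))
        gamma = (mu - u_best) / sigma
        total += (mu - u_best) * norm_cdf(gamma) + sigma * norm_pdf(gamma)
    return total
```

For a single mixture component with posterior mean equal to the incumbent and unit variance, this reduces to φ(0) = 1/√(2π), the familiar single-GP expected improvement.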
4.3 Automatic and Adaptive Domain Scaling

Domain scaling, by mapping to a common space, is crucial for BOPP to operate in the required black-box fashion as it allows a general purpose and problem independent hyperprior to be placed on the GP hyperparameters. BOPP therefore employs an affine scaling to a [−1, 1] hypercube for both the inputs and outputs of the GP. To initialize scaling for the input variables, we sample directly from the generative model defined by the program. This is achieved using a second transformed program, q-prior, which removes all conditioning, i.e. observe statements, and returns θ. This transformation also introduces code to terminate execution of the query once all θ are sampled, in order to avoid unnecessary computation. As observe statements return nil, this transformation trivially preserves the generative model of the program, but the probability of the execution changes. Simulating from the generative model does not require inference or calling potentially expensive likelihood functions and is therefore computationally inexpensive. By running inference on q-marg given a small number of these samples as arguments, a rough initial characterization of output scaling can also be achieved. If points are observed that fall outside the hypercube under the initial scaling, the domain scaling is appropriately updated³ so that the target for the GP remains the [−1, 1] hypercube.

4.4 Unbounded Bayesian Optimization via Non-Stationary Mean Function Adaptation

Unlike standard BO implementations, BOPP is not provided with external constraints and we therefore develop a scheme for operating on targets with potentially unbounded support. Our method exploits the knowledge that the target function is a probability density, implying that the area that must be searched in practice to find the optimum is finite, by defining a non-stationary prior mean function.
This takes the form of a bump function that is constant within a region of interest, but decays rapidly outside. Specifically, we define this bump function in the transformed space as

µ_prior(r; r_e, r_∞) = { 0, if r ≤ r_e;  log((r − r_e)/(r_∞ − r_e)) + (r − r_e)/(r_∞ − r_e), otherwise }   (4)

where r is the radius from the origin, r_e is the maximum radius of any point generated in the initial scaling or subsequent evaluations, and r_∞ is a parameter set to 1.5 r_e by default. Consequently, the acquisition function also decays and new points are never suggested arbitrarily far away. Adaptation

³An important exception is that the output mapping to the bottom of the hypercube remains fixed such that low-likelihood new points are not incorporated. This ensures stability when considering unbounded problems.
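Returning briefly to the adaptive scaling of Section 4.3, a minimal Python sketch of such an affine scaler follows (the class and its methods are hypothetical, not BOPP's actual implementation; the fixed lower output mapping of footnote 3 is omitted for brevity):

```python
class AffineScaler:
    """Maintain an affine map from observed points to the [-1, 1] hypercube,
    widening the map whenever a new point falls outside the current bounds."""

    def __init__(self, points):
        dims = len(points[0])
        self.lo = [min(p[d] for p in points) for d in range(dims)]
        self.hi = [max(p[d] for p in points) for d in range(dims)]

    def update(self, point):
        # Adapt the bounds so the scaled target remains the [-1, 1] hypercube.
        self.lo = [min(l, x) for l, x in zip(self.lo, point)]
        self.hi = [max(h, x) for h, x in zip(self.hi, point)]

    def scale(self, point):
        # Affine map: lo -> -1, hi -> +1, linearly in between.
        return [2.0 * (x - l) / (h - l) - 1.0
                for x, l, h in zip(point, self.lo, self.hi)]
```

Initializing the bounds from samples drawn via q-prior and calling update on each new evaluation mirrors the behaviour described above.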
Figure 4: Convergence on an unconstrained bimodal problem with p(θ) = Normal(0, 0.5) and p(Y|θ) = Normal(5 − |θ|, 0.5) giving significant prior misspecification. The top plots show a regressed GP, with the solid line corresponding to the mean and the shading showing ± 2 standard deviations. The bottom plots show the corresponding acquisition functions.

[Figure 5 plots error against iteration on the Branin, Hartmann 6D, SVM on-grid and LDA on-grid benchmarks, comparing BOPP (mean and median) with SMAC, Spearmint and TPE.]

Figure 5: Comparison of BOPP used as an optimizer to prominent BO packages on common benchmark problems. The dashed lines show the final mean error of SMAC (red), Spearmint (green) and TPE (black) as quoted by [7].
The dark blue line shows the mean error for BOPP averaged over 100 runs, whilst the median and 25/75% percentiles are shown in cyan. Results for Spearmint on Branin and SMAC on SVM on-grid are omitted because both BOPP and the respective algorithms averaged zero error to the provided number of significant figures in [7].

of the scaling will automatically update this mean function appropriately, learning a region of interest that matches that of the true problem, without complicating the optimization by over-extending this region. We note that our method shares similarity with the recent work of Shahriari et al. [23], but overcomes the sensitivity of their method to a user-specified bounding box representing soft constraints, by initializing automatically and adapting as more data is observed.

4.5 Optimizing the Acquisition Function

Optimizing the acquisition function for BOPP presents the issue that the query contains implicit constraints that are unknown to the surrogate function. The problem of unknown constraints has been previously covered in the literature [8, 13] by assuming that constraints take the form of a black-box function which is modeled with a second surrogate function and must be evaluated in a guess-and-check strategy to establish whether a point is valid. Along with the potentially significant expense such a method incurs, this approach is inappropriate for equality constraints or when the target variables are potentially discrete. For example, the Dirichlet distribution in Figure 2 introduces an equality constraint on powers, namely that its components must sum to 1. We therefore take an alternative approach based on directly using the program to optimize the acquisition function. To do so we consider a transformed program q-acq that is identical to q-prior (see Section 4.3), but adds an additional observe statement that assigns a weight ζ(θ) to the execution.
By setting ζ(θ) to the acquisition function, the maximum likelihood corresponds to the optimum of the acquisition function subject to the implicit program constraints. We obtain a maximum likelihood estimate for q-acq using a variant of annealed importance sampling [19] in which lightweight Metropolis Hastings (LMH) [28] with local random-walk moves is used as the base transition kernel.

Figure 6: Convergence for transition dynamics parameters of the Pickover attractor in terms of the cumulative best log p(Y, θ) (left) and distance to the "true" θ used in generating the data (right). Solid line shows the median over 100 runs, whilst the shaded region shows the 25/75% quantiles.

5 Experiments

We first demonstrate the ability of BOPP to carry out unbounded optimization using a 1D problem with a significant prior-posterior mismatch, as shown in Figure 4. It shows BOPP adapting to the target and effectively establishing a maximum in the presence of multiple modes. After 20 evaluations the acquisitions begin to explore the right mode; after 50, both modes have been fully uncovered. Next we compare BOPP to the prominent BO packages SMAC [14], Spearmint [25] and TPE [3] on a number of classical benchmarks, as shown in Figure 5. These results demonstrate that BOPP provides substantial advantages over these systems when used simply as an optimizer on both continuous and discrete optimization problems. In particular, it offers a large advantage over SMAC and TPE on the continuous problems (Branin and Hartmann), due to using a more powerful surrogate, and over Spearmint on the others due to not needing to make approximations to deal with discrete problems. Finally, we demonstrate the performance of BOPP on an MMAP problem. Comparison here is more difficult due to the dearth of existing alternatives for PPS. In particular, simply running inference on the original query does not return estimates for p(Y, θ).
We consider the possible alternative of using our conditional code transformation to design a particle marginal Metropolis Hastings (PMMH, [1]) sampler which operates in a similar fashion to BOPP except that new θ are chosen using an MH step instead of actively sampling with BO. For these MH steps we consider both LMH [28] with proposals from the prior and the random-walk MH (RMH) variant introduced in Section 4.5. Results for estimating the dynamics parameters of a chaotic Pickover attractor, while using an extended Kalman smoother to estimate the latent states, are shown in Figure 6. Model details are given in the supplementary material along with additional experiments.

6 Discussion and Future Work

We have introduced a new method for carrying out MMAP estimation of probabilistic program variables using Bayesian optimization, representing the first unified framework for optimization and inference of probabilistic programs. By using a series of code transformations, our method allows an arbitrary program to be optimized with respect to a defined subset of its variables, whilst marginalizing out the rest. To carry out the required optimization, we introduce a new GP-based BO package that exploits the availability of the target source code to provide a number of novel features, such as automatic domain scaling and constraint satisfaction. The concepts we introduce lead directly to a number of extensions of interest, including but not restricted to smart initialization of inference algorithms, adaptive proposals, and nested optimization. Further work might consider maximum marginal likelihood estimation and risk minimization. Though only requiring minor algorithmic changes, these cases require distinct theoretical considerations.

Acknowledgements

Tom Rainforth is supported by a BP industrial grant. Tuan Anh Le is supported by a Google studentship, project code DF6700. Frank Wood is supported under DARPA PPAML through the U.S.
AFRL under Cooperative Agreement FA8750-14-2-0006, Sub Award number 61160290-111668.

References

[1] C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. J. Royal Stat. Soc.: Series B (Stat. Methodol.), 72(3):269–342, 2010.
[2] J. Bérard, P. Del Moral, A. Doucet, et al. A lognormal central limit theorem for particle approximations of normalizing constants. Electronic Journal of Probability, 19(94):1–28, 2014.
[3] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In NIPS, pages 2546–2554, 2011.
[4] B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. A. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: a probabilistic programming language. Journal of Statistical Software, 2015.
[5] K. Csilléry, M. G. Blum, O. E. Gaggiotti, and O. François. Approximate Bayesian Computation (ABC) in practice. Trends in Ecology & Evolution, 25(7):410–418, 2010.
[6] S. Duane, A. D. Kennedy, B. J. Pendleton, and D. Roweth. Hybrid Monte Carlo. Physics Letters B, 1987.
[7] K. Eggensperger, M. Feurer, F. Hutter, J. Bergstra, J. Snoek, H. Hoos, and K. Leyton-Brown. Towards an empirical foundation for assessing Bayesian optimization of hyperparameters. In NIPS Workshop on Bayesian Optimization in Theory and Practice, pages 1–5, 2013.
[8] J. R. Gardner, M. J. Kusner, Z. E. Xu, K. Q. Weinberger, and J. Cunningham. Bayesian optimization with inequality constraints. In ICML, pages 937–945, 2014.
[9] N. Goodman, V. Mansinghka, D. M. Roy, K. Bonawitz, and J. B. Tenenbaum. Church: a language for generative models. In UAI, pages 220–229, 2008.
[10] N. D. Goodman and A. Stuhlmüller. The Design and Implementation of Probabilistic Programming Languages. 2014.
[11] M. U. Gutmann and J. Corander. Bayesian optimization for likelihood-free inference of simulator-based statistical models. JMLR, 17:1–47, 2016.
[12] J. M. Hernández-Lobato, M. W. Hoffman, and Z. Ghahramani. Predictive entropy search for efficient global optimization of black-box functions. In NIPS, pages 918–926, 2014.
[13] J. M. Hernández-Lobato, M. A. Gelbart, R. P. Adams, M. W. Hoffman, and Z. Ghahramani. A general framework for constrained Bayesian optimization using information-based search. JMLR, 17:1–53, 2016.
[14] F. Hutter, H. H. Hoos, and K. Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In Learn. Intell. Optim., pages 507–523. Springer, 2011.
[15] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. J. Global Optim., 13(4):455–492, 1998.
[16] V. Mansinghka, D. Selsam, and Y. Perov. Venture: a higher-order probabilistic programming platform with programmable inference. arXiv preprint arXiv:1404.0099, 2014.
[17] T. Minka, J. Winn, J. Guiver, and D. Knowles. Infer.NET 2.4, Microsoft Research Cambridge, 2010.
[18] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[19] R. M. Neal. Annealed importance sampling. Statistics and Computing, 11(2):125–139, 2001.
[20] M. A. Osborne, R. Garnett, and S. J. Roberts. Gaussian processes for global optimization. In 3rd International Conference on Learning and Intelligent Optimization (LION3), pages 1–15, 2009.
[21] B. Paige, F. Wood, A. Doucet, and Y. W. Teh. Asynchronous anytime sequential Monte Carlo. In NIPS, pages 3410–3418, 2014.
[22] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[23] B. Shahriari, A. Bouchard-Côté, and N. de Freitas. Unbounded Bayesian optimization via regularization. AISTATS, 2016.
[24] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. de Freitas. Taking the human out of the loop: A review of Bayesian optimization. Proceedings of the IEEE, 104(1):148–175, 2016.
[25] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In NIPS, pages 2951–2959, 2012.
[26] J. Snoek, O. Rippel, K. Swersky, R. Kiros, N. Satish, N. Sundaram, M. Patwary, M. Ali, R. P. Adams, et al. Scalable Bayesian optimization using deep neural networks. In ICML, 2015.
[27] J.-W. van de Meent, B. Paige, D. Tolpin, and F. Wood. Black-box policy search with probabilistic programs. In AISTATS, pages 1195–1204, 2016.
[28] D. Wingate, A. Stuhlmueller, and N. D. Goodman. Lightweight implementations of probabilistic programming languages via transformational compilation. In AISTATS, pages 770–778, 2011.
[29] F. Wood, J. W. van de Meent, and V. Mansinghka. A new approach to probabilistic programming inference. In AISTATS, pages 2–46, 2014.
[30] C. Xie. Interactive heat transfer simulations for everyone. The Physics Teacher, 50(4), 2012.
[31] R. Zinkov and C.-C. Shan. Composing inference algorithms as program transformations. arXiv preprint arXiv:1603.01882, 2016.
Coin Betting and Parameter-Free Online Learning

Francesco Orabona, Stony Brook University, Stony Brook, NY, francesco@orabona.com
Dávid Pál, Yahoo Research, New York, NY, dpal@yahoo-inc.com

Abstract

In recent years, a number of parameter-free algorithms have been developed for online linear optimization over Hilbert spaces and for learning with expert advice. These algorithms achieve optimal regret bounds that depend on the unknown competitors, without having to tune the learning rates with oracle choices. We present a new intuitive framework to design parameter-free algorithms for both online linear optimization over Hilbert spaces and for learning with expert advice, based on reductions to betting on outcomes of adversarial coins. We instantiate it using a betting algorithm based on the Krichevsky-Trofimov estimator. The resulting algorithms are simple, with no parameters to be tuned, and they improve or match previous results in terms of regret guarantee and per-round complexity.

1 Introduction

We consider the Online Linear Optimization (OLO) [4, 25] setting. In each round t, an algorithm chooses a point w_t from a convex decision set K and then receives a reward vector g_t. The algorithm's goal is to keep its regret small, defined as the difference between its cumulative reward and the cumulative reward of a fixed strategy u ∈ K, that is

Regret_T(u) = Σ_{t=1}^T ⟨g_t, u⟩ − Σ_{t=1}^T ⟨g_t, w_t⟩.

We focus on two particular decision sets, the N-dimensional probability simplex ∆_N = {x ∈ R^N : x ≥ 0, ∥x∥₁ = 1} and a Hilbert space H. OLO over ∆_N is referred to as the problem of Learning with Expert Advice (LEA). We assume bounds on the norms of the reward vectors: for OLO over H, we assume that ∥g_t∥ ≤ 1, and for LEA we assume that g_t ∈ [0, 1]^N. OLO is a basic building block of many machine learning problems.
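As a concrete reading of the definition above, the regret can be computed directly from the sequences of plays and rewards; here is a small illustrative helper (not from the paper), taking H to be R^d:

```python
def dot(a, b):
    """Standard inner product on R^d."""
    return sum(x * y for x, y in zip(a, b))

def regret(plays, rewards, competitor):
    """Regret_T(u) = sum_t <g_t, u> - sum_t <g_t, w_t>, where plays is the
    sequence (w_t) chosen by the algorithm and rewards is the sequence (g_t)."""
    return sum(dot(g, competitor) - dot(g, w) for w, g in zip(plays, rewards))
```

A negative regret simply means the algorithm's plays collected more reward than the fixed comparator u.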
For example, Online Convex Optimization (OCO), the problem analogous to OLO where ⟨g_t, u⟩ is generalized to an arbitrary convex function ℓ_t(u), is solved through a reduction to OLO [25]. LEA [17, 27, 5] provides a way of combining classifiers and it is at the heart of boosting [12]. Batch and stochastic convex optimization can also be solved through a reduction to OLO [25]. To achieve optimal regret, most of the existing online algorithms require the user to set the learning rate (step size) η to an unknown/oracle value. For example, to obtain the optimal bound for Online Gradient Descent (OGD), the learning rate has to be set with the knowledge of the norm of the competitor u, ∥u∥; see the second entry in Table 1. Likewise, the optimal learning rate for Hedge depends on the KL divergence between the prior weighting π and the unknown competitor u, D(u∥π); see the seventh entry in Table 1. Recently, new parameter-free algorithms have been proposed, both for LEA [6, 8, 18, 19, 15, 11] and for OLO/OCO over Hilbert spaces [26, 23, 21, 22, 24]. These algorithms adapt to the number of experts and to the norm of the optimal predictor, respectively, without the need to tune parameters. However, their design and underlying intuition are still a challenge. Foster et al. [11] proposed a unified framework, but it is not constructive. Furthermore, all existing algorithms for LEA either have a sub-optimal regret bound (e.g. an extra O(log log T) factor) or a sub-optimal running time (e.g. requiring the solution of a numerical problem in every round, or with extra factors); see Table 1.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

| Algorithm | Worst-case regret guarantee | Per-round time complexity | Adaptive | Unified analysis |
| OGD, η = 1/√T [25] | O((1 + ∥u∥²)√T), ∀u ∈ H | O(1) | | |
| OGD, η = U/√T [25] | U√T for any u ∈ H s.t. ∥u∥ ≤ U | O(1) | | |
| [23] | O(∥u∥ ln(1 + ∥u∥T) √T), ∀u ∈ H | O(1) | ✓ | |
| [22, 24] | O(∥u∥ √(T ln(1 + ∥u∥T))), ∀u ∈ H | O(1) | ✓ | |
| This paper, Sec. 7.1 | O(∥u∥ √(T ln(1 + ∥u∥T))), ∀u ∈ H | O(1) | ✓ | ✓ |
| Hedge, η = √(ln(N)/T), π_i = 1/N [12] | O(√(T ln N)), ∀u ∈ ∆_N | O(N) | | |
| Hedge, η = U/√T [12] | O(U√T) for any u ∈ ∆_N s.t. √D(u∥π) ≤ U | O(N) | | |
| [6] | O(√(T(1 + D(u∥π))) + ln² N), ∀u ∈ ∆_N | O(NK)¹ | ✓ | |
| [8] | O(√(T(1 + D(u∥π)))), ∀u ∈ ∆_N | O(NK)¹ | ✓ | |
| [8, 19, 15]² | O(√(T(ln ln T + D(u∥π)))), ∀u ∈ ∆_N | O(N) | ✓ | |
| [11] | O(√(T(1 + D(u∥π)))), ∀u ∈ ∆_N | O(N ln max_{u∈∆_N} D(u∥π))³ | ✓ | ✓ |
| This paper, Sec. 7.2 | O(√(T(1 + D(u∥π)))), ∀u ∈ ∆_N | O(N) | ✓ | ✓ |

Table 1: Algorithms for OLO over Hilbert space and LEA.

Contributions. We show that a more fundamental notion subsumes both OLO and LEA parameter-free algorithms. We prove that the ability to maximize the wealth in bets on the outcomes of coin flips implies OLO and LEA parameter-free algorithms. We develop a novel potential-based framework for betting algorithms. It gives intuition to previous constructions and, instantiated with the Krichevsky-Trofimov estimator, provides new and elegant algorithms for OLO and LEA. The new algorithms also have optimal worst-case guarantees on regret and time complexity; see Table 1.

2 Preliminaries

We begin by providing some definitions. The Kullback-Leibler (KL) divergence between two discrete distributions p and q is D(p∥q) = Σ_i p_i ln(p_i/q_i). If p, q are real numbers in [0, 1], we denote by D(p∥q) = p ln(p/q) + (1 − p) ln((1 − p)/(1 − q)) the KL divergence between two Bernoulli distributions with parameters p and q. We denote by H a Hilbert space, by ⟨·, ·⟩ its inner product, and by ∥·∥ the induced norm. We denote by ∥·∥₁ the 1-norm in R^N. A function F : I → R⁺ is called logarithmically convex iff f(x) = ln(F(x)) is convex. Let f : V → R ∪ {±∞}; the Fenchel conjugate of f is f* : V* → R ∪ {±∞}, defined on the dual vector space V* by f*(θ) = sup_{x∈V} ⟨θ, x⟩ − f(x). A function f : V → R ∪ {+∞} is said to be proper if there exists x ∈ V such that f(x) is finite. If f is a proper lower semi-continuous convex function then f* is also proper lower semi-continuous convex and f** = f.

Coin Betting.
We consider a gambler making repeated bets on the outcomes of adversarial coin flips. The gambler starts with an initial endowment $\epsilon > 0$. In each round $t$, he bets on the outcome of a coin flip $g_t \in \{-1, 1\}$, where $+1$ denotes heads and $-1$ denotes tails. We do not make any assumption on how $g_t$ is generated; that is, it can be chosen by an adversary.

The gambler can bet any amount on either heads or tails. However, he is not allowed to borrow any additional money. If he loses, he loses the betted amount; if he wins, he gets the betted amount back and, in addition to that, he gets the same amount as a reward. We encode the gambler's bet in round $t$ by a single number $w_t$. The sign of $w_t$ encodes whether he is betting on heads or tails. The absolute value encodes the betted amount. We define $\mathrm{Wealth}_t$ as the gambler's wealth at the end of round $t$ and $\mathrm{Reward}_t$ as the gambler's net reward (the difference of wealth and initial endowment), that is
$$\mathrm{Wealth}_t = \epsilon + \sum_{i=1}^{t} w_i g_i \quad\text{and}\quad \mathrm{Reward}_t = \mathrm{Wealth}_t - \epsilon. \qquad (1)$$
In the following, we will also refer to a bet with $\beta_t$, where $\beta_t$ is such that
$$w_t = \beta_t\, \mathrm{Wealth}_{t-1}. \qquad (2)$$
The absolute value of $\beta_t$ is the fraction of the current wealth to bet, and the sign of $\beta_t$ encodes whether he is betting on heads or tails. The constraint that the gambler cannot borrow money implies that $\beta_t \in [-1, 1]$. We also generalize the problem slightly by allowing the outcome of the coin flip $g_t$ to be any real number in the interval $[-1, 1]$; wealth and reward in (1) remain exactly the same.

¹These algorithms require solving a numerical problem at each step. The number $K$ is the number of steps needed to reach the required precision. Neither the precision nor $K$ is calculated in these papers.
²The proof in [15] can be modified to prove a KL bound; see http://blog.wouterkoolen.info.
³A variant of the algorithm in [11] can be implemented with the stated time complexity [10].
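As a quick illustration, the game dynamics above — the wealth recursion in (1)-(2) and the no-borrowing constraint $\beta_t \in [-1, 1]$ — can be simulated in a few lines (a sketch; the function name is ours):

```python
def play_betting_game(bets, coins, endowment=1.0):
    """Simulate the coin-betting game.

    bets[t] is the fraction beta_t in [-1, 1] of the current wealth staked
    (sign = heads/tails); coins[t] is the outcome g_t in [-1, 1].
    Returns Wealth_T = endowment * prod_t (1 + beta_t * g_t), which follows
    from w_t = beta_t * Wealth_{t-1} and Wealth_t = Wealth_{t-1} + w_t * g_t.
    """
    wealth = endowment
    for beta, g in zip(bets, coins):
        assert -1.0 <= beta <= 1.0, "cannot borrow money"
        wealth *= 1.0 + beta * g  # never negative, since |beta * g| <= 1
    return wealth
```

For example, betting the entire wealth on heads doubles it on every heads but wipes it out on the first tails, which is why any fixed full-wealth strategy is hopeless against an adversary.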
3 Warm-Up: From Betting to One-Dimensional Online Linear Optimization

In this section, we sketch how to reduce one-dimensional OLO to betting on a coin. The reasoning for generic Hilbert spaces (Section 5) and for LEA (Section 6) will be similar. We will show that the betting view provides a natural way for the analysis and design of online learning algorithms, where the only design choice is the potential function of the betting algorithm (Section 4). A specific example of a coin betting potential and the resulting algorithms are in Section 7.

As a warm-up, let us consider an algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$. Let $\{w_t\}_{t=1}^\infty$ be its sequence of predictions on a sequence of rewards $\{g_t\}_{t=1}^\infty$, $g_t \in [-1, 1]$. The total reward of the algorithm after $t$ rounds is $\mathrm{Reward}_t = \sum_{i=1}^{t} g_i w_i$. Also, even though in OLO there is no concept of "wealth", define the wealth of the OLO algorithm as $\mathrm{Wealth}_t = \epsilon + \mathrm{Reward}_t$, as in (1). We now restrict our attention to algorithms whose predictions $w_t$ are of the form of a bet, that is $w_t = \beta_t\, \mathrm{Wealth}_{t-1}$, where $\beta_t \in [-1, 1]$. We will see that the restriction on $\beta_t$ does not prevent us from obtaining parameter-free algorithms with optimal bounds.

Given the above, it is immediate to see that any coin betting algorithm that, on a sequence of coin flips $\{g_t\}_{t=1}^\infty$, $g_t \in [-1, 1]$, bets the amounts $w_t$ can be used as an OLO algorithm in the one-dimensional Hilbert space $\mathbb{R}$. But what would be the regret of such OLO algorithms? Assume that the betting algorithm at hand guarantees, starting from an endowment $\epsilon$, that its wealth is at least $F(\sum_{t=1}^T g_t)$ for a given potential function $F$. Then
$$\mathrm{Reward}_T = \sum_{t=1}^{T} g_t w_t = \mathrm{Wealth}_T - \epsilon \ge F\Big(\sum_{t=1}^{T} g_t\Big) - \epsilon. \qquad (3)$$
Intuitively, if the reward is big we can expect the regret to be small. Indeed, the following lemma converts the lower bound on the reward into an upper bound on the regret.

Lemma 1 (Reward-Regret relationship [22]). Let $V, V^*$ be a pair of dual vector spaces.
Let $F : V \to \mathbb{R}\cup\{+\infty\}$ be a proper convex lower semi-continuous function and let $F^* : V^* \to \mathbb{R}\cup\{+\infty\}$ be its Fenchel conjugate. Let $w_1, w_2, \ldots, w_T \in V$ and $g_1, g_2, \ldots, g_T \in V^*$. Let $\epsilon \in \mathbb{R}$. Then
$$\underbrace{\sum_{t=1}^{T} \langle g_t, w_t\rangle}_{\mathrm{Reward}_T} \ge F\Big(\sum_{t=1}^{T} g_t\Big) - \epsilon \quad\text{if and only if}\quad \forall u \in V^*, \ \underbrace{\sum_{t=1}^{T} \langle g_t, u - w_t\rangle}_{\mathrm{Regret}_T(u)} \le F^*(u) + \epsilon.$$
Applying the lemma, we get a regret upper bound: $\mathrm{Regret}_T(u) \le F^*(u) + \epsilon$ for all $u \in \mathcal{H}$.

To summarize, if we have a betting algorithm that guarantees a minimum wealth of $F(\sum_{t=1}^T g_t)$, it can be used to design and analyze a one-dimensional OLO algorithm. The faster the growth of the wealth, the smaller the regret will be. Moreover, the lemma also shows that trying to design an algorithm that is adaptive to $u$ is equivalent to designing an algorithm that is adaptive to $\sum_{t=1}^T g_t$. Most importantly, methods that guarantee optimal wealth in the betting scenario are already known; see, e.g., [4, Chapter 9]. We can just re-use them to get optimal online algorithms!

4 Designing a Betting Algorithm: Coin Betting Potentials

For sequential betting on i.i.d. coin flips, an optimal strategy has been proposed by Kelly [14]. The strategy assumes that the coin flips $\{g_t\}_{t=1}^\infty$, $g_t \in \{+1, -1\}$, are generated i.i.d. with a known probability of heads. If $p \in [0, 1]$ is the probability of heads, the Kelly bet is $\beta_t = 2p - 1$ at each round. He showed that, in the long run, this strategy provides more wealth than betting any other fixed fraction of the current wealth [14].

For adversarial coins, Kelly betting does not make sense. With perfect knowledge of the future, the gambler could always bet everything on the right outcome; hence, after $T$ rounds from an initial endowment $\epsilon$, the maximum wealth he could attain is $\epsilon 2^T$. Instead, assume he bets the same fraction $\beta$ of his wealth at each round, and let $\mathrm{Wealth}_t(\beta)$ be the wealth of such a strategy after $t$ rounds.
As observed in [21], the optimal fixed fraction to bet is $\beta^* = (\sum_{t=1}^T g_t)/T$ and it gives the wealth
$$\mathrm{Wealth}_T(\beta^*) = \epsilon \exp\Big(T \cdot D\Big(\tfrac{1}{2} + \tfrac{\sum_{t=1}^T g_t}{2T} \,\Big\|\, \tfrac{1}{2}\Big)\Big) \ge \epsilon \exp\Big(\tfrac{(\sum_{t=1}^T g_t)^2}{2T}\Big), \qquad (4)$$
where the inequality follows from Pinsker's inequality [9, Lemma 11.6.1].

However, even without knowledge of the future, it is possible to get very close to the wealth in (4). This problem was studied by Krichevsky and Trofimov [16], who proposed that after seeing the coin flips $g_1, g_2, \ldots, g_{t-1}$ the empirical estimate $k_t = \frac{1/2 + \sum_{i=1}^{t-1} \mathbf{1}[g_i = +1]}{t}$ should be used instead of $p$. Their estimate is commonly called the KT estimator.¹ The KT estimator results in the betting fraction
$$\beta_t = 2k_t - 1 = \frac{\sum_{i=1}^{t-1} g_i}{t}, \qquad (5)$$
which we call adaptive Kelly betting based on the KT estimator. It looks like an online and slightly biased version of the oracle choice of $\beta^*$. This strategy guarantees²
$$\mathrm{Wealth}_T \ge \frac{\mathrm{Wealth}_T(\beta^*)}{2\sqrt{T}} = \frac{\epsilon}{2\sqrt{T}} \exp\Big(T \cdot D\Big(\tfrac{1}{2} + \tfrac{\sum_{t=1}^T g_t}{2T} \,\Big\|\, \tfrac{1}{2}\Big)\Big).$$
This guarantee is optimal up to constant factors [4] and mirrors the guarantee of the Kelly bet.

Here, we propose a new set of definitions that allows us to generalize the strategy of adaptive Kelly betting based on the KT estimator. For these strategies it will be possible to prove that, for any $g_1, g_2, \ldots, g_t \in [-1, 1]$,
$$\mathrm{Wealth}_t \ge F_t\Big(\sum_{i=1}^{t} g_i\Big), \qquad (6)$$
where $F_t(x)$ is a certain function. We call such functions potentials. The betting strategy will be determined uniquely by the potential (see (c) in Definition 2), and we restrict our attention to potentials for which (6) holds. These constraints are specified in the definition below.

Definition 2 (Coin Betting Potential). Let $\epsilon > 0$. Let $\{F_t\}_{t=0}^\infty$ be a sequence of functions $F_t : (-a_t, a_t) \to \mathbb{R}_+$ where $a_t > t$. The sequence $\{F_t\}_{t=0}^\infty$ is called a sequence of coin betting potentials for initial endowment $\epsilon$ if it satisfies the following three conditions:

(a) $F_0(0) = \epsilon$.
(b) For every $t \ge 0$, $F_t(x)$ is even, logarithmically convex, strictly increasing on $[0, a_t)$, and $\lim_{x\to a_t} F_t(x) = +\infty$.
(c) For every $t \ge 1$, every $x \in [-(t-1), (t-1)]$ and every $g \in [-1, 1]$,
$$(1 + g\beta_t)\, F_{t-1}(x) \ge F_t(x + g), \quad\text{where}\quad \beta_t = \frac{F_t(x+1) - F_t(x-1)}{F_t(x+1) + F_t(x-1)}. \qquad (7)$$

The sequence $\{F_t\}_{t=0}^\infty$ is called a sequence of excellent coin betting potentials for initial endowment $\epsilon$ if it satisfies conditions (a)-(c) and condition (d) below.

(d) For every $t \ge 0$, $F_t$ is twice-differentiable and satisfies $x \cdot F_t''(x) \ge F_t'(x)$ for every $x \in [0, a_t)$.

Let us give some intuition on this definition. First, we show by induction on $t$ that (b) and (c) of the definition, together with (2), give a betting strategy that satisfies (6). The base case $t = 0$ is trivial. At time $t \ge 1$, bet $w_t = \beta_t\, \mathrm{Wealth}_{t-1}$, where $\beta_t$ is defined in (7); then
$$\mathrm{Wealth}_t = \mathrm{Wealth}_{t-1} + w_t g_t = (1 + g_t\beta_t)\, \mathrm{Wealth}_{t-1} \ge (1 + g_t\beta_t)\, F_{t-1}\Big(\sum_{i=1}^{t-1} g_i\Big) \ge F_t\Big(\sum_{i=1}^{t-1} g_i + g_t\Big) = F_t\Big(\sum_{i=1}^{t} g_i\Big).$$
The formula for the potential-based strategy (7) might seem strange. However, it is derived—see Theorem 8 in Appendix B—by minimizing the worst-case value w.r.t. $g_t$ of the right-hand side of the inequality used in the induction proof above: $F_{t-1}(x) \ge \frac{F_t(x + g_t)}{1 + g_t\beta_t}$. The last point, (d), is a technical condition that allows us to seamlessly reduce OLO over a Hilbert space to the one-dimensional problem, characterizing the worst-case direction for the reward vectors.

¹Compared to the maximum likelihood estimate $\frac{\sum_{i=1}^{t-1} \mathbf{1}[g_i = +1]}{t-1}$, the KT estimator shrinks slightly towards $1/2$.
²See Appendix A for a proof. For lack of space, all the appendices are in the supplementary material.

Regarding the design of coin betting potentials, we expect any potential that approximates the best possible wealth in (4) to be a good candidate. In fact, $F_t(x) = \epsilon \exp(x^2/(2t))/\sqrt{t}$, essentially the potential used in the parameter-free algorithms in [22, 24] for OLO and in [6, 18, 19] for LEA, approximates (4) and is an excellent coin betting potential—see Theorem 9 in Appendix B.
Hence, our framework provides intuition to previous constructions, and in Section 7 we show new examples of coin betting potentials. In the next two sections, we present the reductions that effortlessly solve both the generic OLO case and LEA with a betting potential.

5 From Coin Betting to OLO over Hilbert Space

In this section, generalizing the one-dimensional construction in Section 3, we show how to use a sequence of excellent coin betting potentials $\{F_t\}_{t=0}^\infty$ to construct an algorithm for OLO over a Hilbert space and how to prove a regret bound for it.

We define reward and wealth analogously to the one-dimensional case: $\mathrm{Reward}_t = \sum_{i=1}^{t} \langle g_i, w_i\rangle$ and $\mathrm{Wealth}_t = \epsilon + \mathrm{Reward}_t$. Given a sequence of coin betting potentials $\{F_t\}_{t=0}^\infty$, using (7) we define the fraction
$$\beta_t = \frac{F_t\big(\|\sum_{i=1}^{t-1} g_i\| + 1\big) - F_t\big(\|\sum_{i=1}^{t-1} g_i\| - 1\big)}{F_t\big(\|\sum_{i=1}^{t-1} g_i\| + 1\big) + F_t\big(\|\sum_{i=1}^{t-1} g_i\| - 1\big)}. \qquad (8)$$
The prediction of the OLO algorithm is defined similarly to the one-dimensional case, but now we also need a direction in the Hilbert space:
$$w_t = \beta_t\, \mathrm{Wealth}_{t-1} \frac{\sum_{i=1}^{t-1} g_i}{\big\|\sum_{i=1}^{t-1} g_i\big\|} = \beta_t \frac{\sum_{i=1}^{t-1} g_i}{\big\|\sum_{i=1}^{t-1} g_i\big\|} \Big(\epsilon + \sum_{i=1}^{t-1} \langle g_i, w_i\rangle\Big). \qquad (9)$$
If $\sum_{i=1}^{t-1} g_i$ is the zero vector, we define $w_t$ to be the zero vector as well. For this prediction strategy we can prove the following regret guarantee, proved in Appendix C. The proof reduces the general Hilbert case to the one-dimensional case, thanks to (d) in Definition 2, and then follows the reasoning of Section 3.

Theorem 3 (Regret Bound for OLO in Hilbert Spaces). Let $\{F_t\}_{t=0}^\infty$ be a sequence of excellent coin betting potentials. Let $\{g_t\}_{t=1}^\infty$ be any sequence of reward vectors in a Hilbert space $\mathcal{H}$ such that $\|g_t\| \le 1$ for all $t$. Then the algorithm that makes the prediction $w_t$ defined by (9) and (8) satisfies
$$\forall T \ge 0, \ \forall u \in \mathcal{H}, \quad \mathrm{Regret}_T(u) \le F_T^*(\|u\|) + \epsilon.$$

6 From Coin Betting to Learning with Expert Advice

In this section, we show how to use the algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$ from Section 3—which is itself based on a coin betting strategy—to construct an algorithm for LEA.
Let $N \ge 2$ be the number of experts and $\Delta_N$ the $N$-dimensional probability simplex. Let $\pi = (\pi_1, \pi_2, \ldots, \pi_N) \in \Delta_N$ be any prior distribution. Let $\mathcal{A}$ be an algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$, based on a sequence of coin betting potentials $\{F_t\}_{t=0}^\infty$ with initial endowment³ 1. We instantiate $N$ copies of $\mathcal{A}$.

Consider any round $t$. Let $w_{t,i} \in \mathbb{R}$ be the prediction of the $i$-th copy of $\mathcal{A}$. The LEA algorithm computes $\hat{p}_t = (\hat{p}_{t,1}, \hat{p}_{t,2}, \ldots, \hat{p}_{t,N}) \in \mathbb{R}_{0,+}^N$ as
$$\hat{p}_{t,i} = \pi_i \cdot [w_{t,i}]_+, \qquad (10)$$
where $[x]_+ = \max\{0, x\}$ is the positive part of $x$. Then, the LEA algorithm predicts $p_t = (p_{t,1}, p_{t,2}, \ldots, p_{t,N}) \in \Delta_N$ as
$$p_t = \frac{\hat{p}_t}{\|\hat{p}_t\|_1}. \qquad (11)$$
If $\|\hat{p}_t\|_1 = 0$, the algorithm predicts the prior $\pi$. Then, the algorithm receives the reward vector $g_t = (g_{t,1}, g_{t,2}, \ldots, g_{t,N}) \in [0, 1]^N$. Finally, it feeds a reward to each copy of $\mathcal{A}$. The reward for the $i$-th copy of $\mathcal{A}$ is $\tilde{g}_{t,i} \in [-1, 1]$, defined as
$$\tilde{g}_{t,i} = \begin{cases} g_{t,i} - \langle g_t, p_t\rangle & \text{if } w_{t,i} > 0, \\ [g_{t,i} - \langle g_t, p_t\rangle]_+ & \text{if } w_{t,i} \le 0. \end{cases} \qquad (12)$$
The construction above defines a LEA algorithm given by the predictions $p_t$, based on the algorithm $\mathcal{A}$. We can prove the following regret bound for it.

Theorem 4 (Regret Bound for Experts). Let $\mathcal{A}$ be an algorithm for OLO over the one-dimensional Hilbert space $\mathbb{R}$, based on the coin betting potentials $\{F_t\}_{t=0}^\infty$ for an initial endowment of 1. Let $f_t^{-1}$ be the inverse of $f_t(x) = \ln(F_t(x))$ restricted to $[0, \infty)$. Then the regret of the LEA algorithm with prior $\pi \in \Delta_N$ that predicts at each round with $p_t$ in (11) satisfies
$$\forall T \ge 0, \ \forall u \in \Delta_N, \quad \mathrm{Regret}_T(u) \le f_T^{-1}(D(u\|\pi)).$$
The proof, in Appendix D, is based on the fact that (10)-(12) guarantee that $\sum_{i=1}^{N} \pi_i \tilde{g}_{t,i} w_{t,i} \le 0$, and on a variation of the change-of-measure lemma used in the PAC-Bayes literature, e.g. [20].

³Any initial endowment $\epsilon > 0$ can be rescaled to 1: instead of $F_t(x)$ we would use $F_t(x)/\epsilon$. The $w_t$ would become $w_t/\epsilon$, but $p_t$ is invariant to the scaling of $w_t$. Hence, the LEA algorithm is the same regardless of $\epsilon$.
7 Applications of the Krichevsky-Trofimov Estimator to OLO and LEA

In the previous sections, we have shown that a coin betting potential with a guaranteed rapid growth of the wealth gives good regret guarantees for OLO and LEA. Here, we show that the KT estimator has an associated excellent coin betting potential, which we call the KT potential. The optimal wealth guarantee of the KT potential then translates into optimal parameter-free regret bounds.

The sequence of excellent coin betting potentials for an initial endowment $\epsilon$, corresponding to the adaptive Kelly betting strategy $\beta_t$ defined by (5) based on the KT estimator, is
$$F_t(x) = \epsilon \cdot \frac{2^t \cdot \Gamma\big(\frac{t+1}{2} + \frac{x}{2}\big) \cdot \Gamma\big(\frac{t+1}{2} - \frac{x}{2}\big)}{\pi \cdot t!}, \quad t \ge 0, \ x \in (-t-1, t+1), \qquad (13)$$
where $\Gamma(x) = \int_0^\infty t^{x-1} e^{-t}\,dt$ is Euler's gamma function—see Theorem 13 in Appendix E. This potential was used to prove regret bounds for online prediction with the logarithmic loss [16][4, Chapter 9.7]. Theorem 13 also shows that the KT betting strategy $\beta_t$ as defined by (5) satisfies (7). This potential has the nice property that it satisfies the inequality in (c) of Definition 2 with equality when $g_t \in \{-1, 1\}$, i.e. $F_t(x + g_t) = (1 + g_t\beta_t)\, F_{t-1}(x)$.

We also generalize the KT potentials to $\delta$-shifted KT potentials, where $\delta \ge 0$, defined as
$$F_t(x) = \frac{2^t \cdot \Gamma(\delta + 1) \cdot \Gamma\big(\frac{t+\delta+1}{2} + \frac{x}{2}\big) \cdot \Gamma\big(\frac{t+\delta+1}{2} - \frac{x}{2}\big)}{\Gamma\big(\frac{\delta+1}{2}\big)^2 \cdot \Gamma(t + \delta + 1)}.$$
The reason for the name is that, up to a multiplicative constant, $F_t$ is equal to the KT potential shifted in time by $\delta$. Theorem 13 also proves that the $\delta$-shifted KT potentials are excellent coin betting potentials with initial endowment 1, and the corresponding betting fraction is $\beta_t = \frac{\sum_{j=1}^{t-1} g_j}{\delta + t}$.

7.1 OLO in Hilbert Space

We apply the KT potential to the construction of an OLO algorithm over a Hilbert space $\mathcal{H}$. We use (9), and we just need to calculate $\beta_t$. According to Theorem 13 in Appendix E, the formula for $\beta_t$ simplifies to $\beta_t = \frac{\|\sum_{i=1}^{t-1} g_i\|}{t}$, so that
$$w_t = \frac{1}{t}\Big(\epsilon + \sum_{i=1}^{t-1} \langle g_i, w_i\rangle\Big) \sum_{i=1}^{t-1} g_i.$$
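Since the KT potential (13) satisfies condition (c) with equality on ±1 flips, the wealth of the adaptive Kelly strategy (5) coincides exactly with $F_t(\sum_i g_i)$ on such sequences. This is easy to check numerically (a sketch; Python's `math.gamma` plays the role of Γ, and the helper names are ours):

```python
import math

def kt_potential(t, x, eps=1.0):
    """KT coin-betting potential F_t(x) from (13), valid for |x| < t + 1."""
    return (eps * 2.0**t * math.gamma((t + 1) / 2 + x / 2)
            * math.gamma((t + 1) / 2 - x / 2) / (math.pi * math.factorial(t)))

def kt_wealth(coins, eps=1.0):
    """Wealth of adaptive Kelly betting with the KT fraction
    beta_t = (sum_{i<t} g_i) / t from (5)."""
    wealth, s = eps, 0.0
    for t, g in enumerate(coins, start=1):
        wealth *= 1.0 + (s / t) * g  # Wealth_t = (1 + beta_t g_t) Wealth_{t-1}
        s += g
    return wealth
```

For instance, on the flips +1, −1, +1, +1 both quantities equal 0.625; on general $g_t \in [-1, 1]$ the potential is only a lower bound on the wealth, as in (6).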
The resulting algorithm is stated as Algorithm 1. We derive a regret bound for it as a very simple corollary of Theorem 3 applied to the KT potential (13). The only technical part of the proof, in Appendix F, is an upper bound on $F_t^*$, since it cannot be expressed as an elementary function.

Corollary 5 (Regret Bound for Algorithm 1). Let $\epsilon > 0$. Let $\{g_t\}_{t=1}^\infty$ be any sequence of reward vectors in a Hilbert space $\mathcal{H}$ such that $\|g_t\| \le 1$. Then Algorithm 1 satisfies
$$\forall T \ge 0, \ \forall u \in \mathcal{H}, \quad \mathrm{Regret}_T(u) \le \|u\| \sqrt{T \ln\Big(1 + \frac{24T^2\|u\|^2}{\epsilon^2}\Big)} + \epsilon\Big(1 - \frac{1}{e\sqrt{\pi T}}\Big).$$

Algorithm 1: Algorithm for OLO over Hilbert space $\mathcal{H}$ based on the KT potential
Require: Initial endowment $\epsilon > 0$
1: for $t = 1, 2, \ldots$ do
2:   Predict with $w_t \leftarrow \frac{1}{t}\big(\epsilon + \sum_{i=1}^{t-1} \langle g_i, w_i\rangle\big) \sum_{i=1}^{t-1} g_i$
3:   Receive reward vector $g_t \in \mathcal{H}$ such that $\|g_t\| \le 1$
4: end for

Algorithm 2: Algorithm for Learning with Expert Advice based on the $\delta$-shifted KT potential
Require: Number of experts $N$, prior distribution $\pi \in \Delta_N$, number of rounds $T$
1: for $t = 1, 2, \ldots, T$ do
2:   For each $i \in [N]$, set $w_{t,i} \leftarrow \frac{\sum_{j=1}^{t-1} \tilde{g}_{j,i}}{t + T/2}\big(1 + \sum_{j=1}^{t-1} \tilde{g}_{j,i} w_{j,i}\big)$
3:   For each $i \in [N]$, set $\hat{p}_{t,i} \leftarrow \pi_i [w_{t,i}]_+$
4:   Predict with $p_t \leftarrow \hat{p}_t / \|\hat{p}_t\|_1$ if $\|\hat{p}_t\|_1 > 0$, else $p_t \leftarrow \pi$
5:   Receive reward vector $g_t \in [0, 1]^N$
6:   For each $i \in [N]$, set $\tilde{g}_{t,i} \leftarrow g_{t,i} - \langle g_t, p_t\rangle$ if $w_{t,i} > 0$, else $\tilde{g}_{t,i} \leftarrow [g_{t,i} - \langle g_t, p_t\rangle]_+$
7: end for

It is worth noting the elegance and extreme simplicity of Algorithm 1, in contrast with the algorithms in [26, 22-24]. Also, the regret bound is optimal [26, 23]. The parameter $\epsilon$ can be safely set to any constant, e.g. 1. Its role is equivalent to the initial guess used in doubling tricks [25].

7.2 Learning with Expert Advice

We now construct an algorithm for LEA based on the $\delta$-shifted KT potential. We set $\delta$ to $T/2$, requiring the algorithm to know the number of rounds $T$ in advance; we will fix this later with the standard doubling trick. To use the construction in Section 6, we need an OLO algorithm for the one-dimensional Hilbert space $\mathbb{R}$.
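Before continuing, Algorithm 1 is short enough to transcribe directly (a sketch in plain Python over $\mathbb{R}^d$, with lists standing in for Hilbert-space vectors; the function name is ours):

```python
def kt_olo(reward_vectors, eps=1.0):
    """Algorithm 1: parameter-free OLO via the KT potential.

    reward_vectors is a list of d-dimensional lists, each of Euclidean norm <= 1.
    Returns the predictions w_1, ..., w_T and the total reward sum_t <g_t, w_t>.
    """
    d = len(reward_vectors[0])
    g_sum = [0.0] * d  # sum_{i<t} g_i
    reward = 0.0       # sum_{i<t} <g_i, w_i>
    predictions = []
    for t, g in enumerate(reward_vectors, start=1):
        scale = (eps + reward) / t
        w = [scale * s for s in g_sum]                  # line 2 of Algorithm 1
        predictions.append(w)
        reward += sum(wi * gi for wi, gi in zip(w, g))  # line 3: receive g_t
        g_sum = [s + gi for s, gi in zip(g_sum, g)]
    return predictions, reward
```

With $\epsilon = 1$ and the constant one-dimensional sequence $g_t = 1$, the wealth compounds: after four rounds the total reward is 3.375, matching the KT wealth from Section 4.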
Using the $\delta$-shifted KT potentials, the algorithm predicts for any sequence $\{\tilde{g}_t\}_{t=1}^\infty$ of rewards
$$w_t = \beta_t\, \mathrm{Wealth}_{t-1} = \beta_t\Big(1 + \sum_{j=1}^{t-1} \tilde{g}_j w_j\Big) = \frac{\sum_{i=1}^{t-1} \tilde{g}_i}{T/2 + t}\Big(1 + \sum_{j=1}^{t-1} \tilde{g}_j w_j\Big).$$
Then, following the construction in Section 6, we arrive at the final algorithm, Algorithm 2. We can derive a regret bound for Algorithm 2 by applying Theorem 4 to the $\delta$-shifted KT potential.

Corollary 6 (Regret Bound for Algorithm 2). Let $N \ge 2$ and $T \ge 0$ be integers. Let $\pi \in \Delta_N$ be a prior. Then Algorithm 2 with input $N, \pi, T$, for any reward vectors $g_1, g_2, \ldots, g_T \in [0, 1]^N$, satisfies
$$\forall u \in \Delta_N, \quad \mathrm{Regret}_T(u) \le \sqrt{3T(3 + D(u\|\pi))}.$$
Hence, Algorithm 2 has both the best known guarantee on worst-case regret and per-round time complexity; see Table 1. It also has the advantage of being very simple. The proof of the corollary is in Appendix F. The only technical part of the proof is an upper bound on $f_t^{-1}(x)$, which we conveniently obtain by lower bounding $F_t(x)$.

The reason for using the shifted potential comes from the analysis of $f_t^{-1}(x)$. The unshifted algorithm would have an $O\big(\sqrt{T(\log T + D(u\|\pi))}\big)$ regret bound; the shifting improves it to $O\big(\sqrt{T(1 + D(u\|\pi))}\big)$. By changing $T/2$ in Algorithm 2 to another constant fraction of $T$, it is possible to trade off between the two constants 3 in the square root of the regret upper bound. The requirement of knowing the number of rounds $T$ in advance can be lifted by the standard doubling trick [25, Section 2.3.1], obtaining an anytime guarantee with a bigger leading constant:
$$\forall T \ge 0, \ \forall u \in \Delta_N, \quad \mathrm{Regret}_T(u) \le \frac{\sqrt{2}}{\sqrt{2} - 1} \sqrt{3T(3 + D(u\|\pi))}.$$
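Algorithm 2 can likewise be transcribed directly (a sketch; the function name and the plain-list bookkeeping of $\sum_j \tilde{g}_{j,i}$ and $1 + \sum_j \tilde{g}_{j,i} w_{j,i}$ are ours):

```python
def kt_lea(reward_matrix, prior=None):
    """Algorithm 2: parameter-free LEA via the (T/2)-shifted KT potential.

    reward_matrix is a T x N list of reward rows g_t in [0, 1]^N.
    Returns the algorithm's total reward and each expert's total reward.
    """
    T, N = len(reward_matrix), len(reward_matrix[0])
    pi = prior if prior is not None else [1.0 / N] * N
    g_sum = [0.0] * N    # sum_{j<t} gtilde_{j,i}
    wealth = [1.0] * N   # 1 + sum_{j<t} gtilde_{j,i} * w_{j,i}
    expert_total = [0.0] * N
    alg_total = 0.0
    for t, g in enumerate(reward_matrix, start=1):
        w = [g_sum[i] / (t + T / 2) * wealth[i] for i in range(N)]  # line 2
        phat = [pi[i] * max(w[i], 0.0) for i in range(N)]           # line 3
        z = sum(phat)
        p = [x / z for x in phat] if z > 0 else list(pi)            # line 4
        r = sum(gi * pti for gi, pti in zip(g, p))                  # <g_t, p_t>
        alg_total += r
        for i in range(N):                                          # line 6
            gt = g[i] - r if w[i] > 0 else max(g[i] - r, 0.0)
            g_sum[i] += gt
            wealth[i] += gt * w[i]
            expert_total[i] += g[i]
    return alg_total, expert_total
```

On a toy instance where one of two experts always earns reward 1, the algorithm locks onto it after a single round, so its regret to the best expert stays far below the $\sqrt{3T(3 + D(u\|\pi))}$ guarantee of Corollary 6.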
[Figure 1 plots omitted.]

Figure 1: Total loss versus the learning-rate parameter of OGD (in log scale) on the YearPredictionMSD, cpusmall, and cadata regression datasets with the absolute loss, compared with the parameter-free algorithms DFEG [23], Adaptive Normal [22], PiSTOL [24], and the KT-based Algorithm 1.

[Figure 2 plots omitted.]

Figure 2: Regret to the best expert after $T = 32768$ rounds, versus the learning-rate parameter of Hedge (in log scale), on replicated Hadamard matrices with $N = 126$ experts and $k \in \{2, 8, 32\}$ "good" experts that are $\epsilon = 0.025$ better than the others. The competitor algorithms are NormalHedge [6], AdaNormalHedge [19], Squint [15], and the KT-based Algorithm 2. $\pi_i = 1/N$ for all algorithms.

8 Discussion of the Results

We have presented a new interpretation of parameter-free algorithms as coin betting algorithms. This interpretation, far from being just a mathematical gimmick, reveals the common hidden structure of previous parameter-free algorithms for both OLO and LEA, and also allows the design of new algorithms.
For example, we show that the characteristic of parameter-freeness is just a consequence of having an algorithm that guarantees the maximum reward possible. The reductions in Sections 5 and 6 are also novel, and they are in a certain sense optimal: the obtained Algorithms 1 and 2 achieve the optimal worst-case upper bounds on the regret; see [26, 23] and [4] respectively.

We have also run an empirical evaluation to show that the difference between classic online learning algorithms and parameter-free ones is real and not just theoretical. In Figure 1, we have used three regression datasets⁴ and solved the OCO problem through OLO. In all three cases, we have used the absolute loss and normalized the input vectors to have L2 norm equal to 1. From the empirical results, it is clear that the optimal learning rate is completely data-dependent, yet parameter-free algorithms have performance very close to the unknown optimal tuning of the learning rate. Moreover, the KT-based Algorithm 1 seems to dominate all the other similar algorithms.

For LEA, we have used the synthetic setting in [6]. The dataset is composed of Hadamard matrices of size 64, where the row with constant values is removed, the rows are duplicated to 126 by inverting their signs, 0.025 is subtracted from k rows, and the matrix is replicated in order to generate T = 32768 samples. For more details, see [6]. Here, the KT-based algorithm is the one in Algorithm 2 with the term T/2 removed, so that the final regret bound has an additional ln T term. Again, we see that the parameter-free algorithms have performance close to, or even better than, Hedge with an oracle tuning of the learning rate, with no clear winner among the parameter-free algorithms.

Notice that since the adaptive Kelly strategy based on the KT estimator is very close to optimal, the only possible improvement is to have a data-dependent bound, for example like the ones in [24, 15, 19].
In future work, we will extend our definitions and reductions to the data-dependent case.

⁴Datasets available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/.

Acknowledgments. The authors thank Jacob Abernethy, Nicolò Cesa-Bianchi, Satyen Kale, Chansoo Lee, Giuseppe Molteni, and Manfred Warmuth for useful discussions on this work.

References

[1] E. Artin. The Gamma Function. Holt, Rinehart and Winston, Inc., 1964.
[2] N. Batir. Inequalities for the gamma function. Archiv der Mathematik, 91(6):554–563, 2008.
[3] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, 1st edition, 2011.
[4] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[5] N. Cesa-Bianchi, Y. Freund, D. Haussler, D. P. Helmbold, R. E. Schapire, and M. K. Warmuth. How to use expert advice. J. ACM, 44(3):427–485, 1997.
[6] K. Chaudhuri, Y. Freund, and D. Hsu. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems 22, pages 297–305, 2009.
[7] C.-P. Chen. Inequalities for the polygamma functions with application. General Mathematics, 13(3):65–72, 2005.
[8] A. Chernov and V. Vovk. Prediction with advice of unknown number of experts. In Proc. of the 26th Conf. on Uncertainty in Artificial Intelligence. AUAI Press, 2010.
[9] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, 2nd edition, 2006.
[10] D. J. Foster. Personal communication, 2016.
[11] D. J. Foster, A. Rakhlin, and K. Sridharan. Adaptive online learning. In Advances in Neural Information Processing Systems 28, pages 3375–3383. Curran Associates, Inc., 2015.
[12] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Computer and System Sciences, 55(1):119–139, 1997.
[13] A. Hoorfar and M. Hassani. Inequalities on the Lambert W function and hyperpower function. J.
Inequal. Pure and Appl. Math, 9(2), 2008.
[14] J. L. Kelly. A new interpretation of information rate. Information Theory, IRE Trans. on, 2(3):185–189, September 1956.
[15] W. M. Koolen and T. van Erven. Second-order quantile methods for experts and combinatorial games. In Proc. of the 28th Conf. on Learning Theory, pages 1155–1175, 2015.
[16] R. E. Krichevsky and V. K. Trofimov. The performance of universal encoding. IEEE Trans. on Information Theory, 27(2):199–206, 1981.
[17] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108(2):212–261, 1994.
[18] H. Luo and R. E. Schapire. A drifting-games analysis for online learning and applications to boosting. In Advances in Neural Information Processing Systems 27, pages 1368–1376, 2014.
[19] H. Luo and R. E. Schapire. Achieving all with no parameters: AdaNormalHedge. In Proc. of the 28th Conf. on Learning Theory, pages 1286–1304, 2015.
[20] D. McAllester. A PAC-Bayesian tutorial with a dropout bound, 2013. arXiv:1307.2118.
[21] H. B. McMahan and J. Abernethy. Minimax optimal algorithms for unconstrained linear optimization. In Advances in Neural Information Processing Systems 26, pages 2724–2732, 2013.
[22] H. B. McMahan and F. Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. In Proc. of the 27th Conf. on Learning Theory, pages 1020–1039, 2014.
[23] F. Orabona. Dimension-free exponentiated gradient. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 1806–1814, 2013.
[24] F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 1116–1124, 2014.
[25] S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2011.
[26] M. Streeter and B. McMahan.
No-regret algorithms for unconstrained online convex optimization. In Advances in Neural Information Processing Systems 25 (NIPS 2012), pages 2402–2410, 2012.
[27] V. Vovk. A game of prediction with expert advice. J. Computer and System Sciences, 56:153–173, 1998.
[28] E. T. Whittaker and G. N. Watson. A Course of Modern Analysis. Cambridge University Press, fourth edition, 1962. Reprinted.
[29] F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens. The context tree weighting method: Basic properties. IEEE Trans. on Information Theory, 41:653–664, 1995.
Learning Deep Embeddings with Histogram Loss

Evgeniya Ustinova and Victor Lempitsky
Skolkovo Institute of Science and Technology (Skoltech), Moscow, Russia

Abstract

We suggest a loss for learning deep embeddings. The new loss does not introduce parameters that need to be tuned and results in very good embeddings across a range of datasets and problems. The loss is computed by estimating two distributions of similarities for positive (matching) and negative (non-matching) sample pairs, and then computing the probability of a positive pair to have a lower similarity score than a negative pair based on the estimated similarity distributions. We show that such operations can be performed in a simple and piecewise-differentiable manner using 1D histograms with soft assignment operations. This makes the proposed loss suitable for learning deep embeddings using stochastic optimization. In the experiments, the new loss performs favourably compared to recently proposed alternatives.

1 Introduction

Deep feed-forward embeddings play a crucial role across a wide range of tasks and applications in image retrieval [1, 8, 15], biometric verification [3, 5, 13, 17, 22, 25, 28], visual product search [21], finding sparse and dense image correspondences [20, 29], etc. Under this approach, complex input patterns (e.g. images) are mapped into a high-dimensional space through a chain of feed-forward transformations, while the parameters of the transformations are learned from a large amount of supervised data. The objective of the learning process is to achieve proximity of semantically-related patterns (e.g. faces of the same person) and to avoid proximity of semantically-unrelated ones (e.g. faces of different people) in the target space. In this work, we focus on simple similarity measures such as the Euclidean distance or scalar products, as they allow fast evaluation, the use of approximate search methods, and ultimately lead to faster and more scalable systems.
Despite the ubiquity of deep feed-forward embeddings, learning them still poses a challenge and is relatively poorly understood. While it is not hard to write down a loss based on tuples of training points expressing the above-mentioned objective, optimizing such a loss rarely works "out of the box" for complex data. This is evidenced by the broad variety of losses, which can be based on pairs, triplets or quadruplets of points, as well as by the large number of optimization tricks employed in recent works to reach state-of-the-art results, such as pretraining for the classification task while restricting fine-tuning to the top layers only [13, 25], combining the embedding loss with the classification loss [22], or using complex data sampling such as mining "semi-hard" training triplets [17]. Most of the proposed losses and optimization tricks come with a certain number of tunable parameters, and the quality of the final embedding is often sensitive to them.

Here, we propose a new loss function for learning deep embeddings. In designing this function we strive to avoid highly-sensitive parameters such as margins or thresholds of any kind. While processing a batch of data points, the proposed loss is computed in two stages. Firstly, two one-dimensional distributions of similarities in the embedding space are estimated, one corresponding to similarities between matching (positive) pairs, the other corresponding to similarities between non-matching (negative) pairs. The distributions are estimated in a simple non-parametric way

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

[Figure 1 diagram omitted: input batch → deep net → embedded batch → aggregation → similarity histograms.]

Figure 1: The histogram loss computation for a batch of examples (color-coded; same color indicates matching samples).
After the batch (left) is embedded into a high-dimensional space by a deep network (middle), we compute the histograms of similarities of positive (top-right) and negative pairs (bottom-right). We then evaluate the integral of the product between the negative distribution and the cumulative density function for the positive distribution (shown with a dashed line), which corresponds to a probability that a randomly sampled positive pair has smaller similarity than a randomly sampled negative pair. Such histogram loss can be minimized by backpropagation. The only associated parameter of such loss is the number of histogram bins, to which the results have very low sensitivity. (as histograms with linearly-interpolated values-to-bins assignments). In the second stage, the overlap between the two distributions is computed by estimating the probability that the two points sampled from the two distribution are in a wrong order, i.e. that a random negative pair has a higher similarity than a random positive pair. The two stages are implemented in a piecewise-differentiable manner, thus allowing to minimize the loss (i.e. the overlap between distributions) using standard backpropagation. The number of bins in the histograms is the only tunable parameter associated with our loss, and it can be set according to the batch size independently of the data itself. In the experiments, we fix this parameter (and the batch size) and demonstrate the versatility of the loss by applying it to four different image datasets of varying complexity and nature. Comparing the new loss to state-of-the-art reveals its favourable performance. Overall, we hope that the proposed loss will be used as an “out-of-the-box” solution for learning deep embeddings that requires little tuning and leads to close to the state-of-the-art results. 2 Related work Recent works on learning embeddings use deep architectures (typically ConvNets [8, 10]) and stochastic optimization. 
Below we review the loss functions that have been used in recent works.

Classification losses. It has been observed in [8] and confirmed later in multiple works (e.g. [15]) that deep networks trained for classification can be used for deep embedding. In particular, it is sufficient to consider an intermediate representation arising in one of the last layers of the deep network. The normalization is added post-hoc. Many of the works mentioned below pre-train their embeddings as a part of classification networks.

Pairwise losses. Methods that use pairwise losses sample pairs of training points and score them independently. The pioneering work on deep embeddings [3] penalizes the deviation from the unit cosine similarity for positive pairs and the deviation from −1 or −0.9 for negative pairs. Perhaps the most popular of the pairwise losses is the contrastive loss [5, 20], which minimizes the distances in the positive pairs and tries to maximize the distances in the negative pairs as long as these distances are smaller than some margin M. Several works pointed to the fact that attempting to collapse all positive pairs may lead to excessive overfitting and therefore suggested losses that mitigate this effect, e.g. a double-margin contrastive loss [12], which drops to zero for positive pairs as long as their distances fall below the second (smaller) margin. Finally, several works use non-hinge-based pairwise losses, such as log-sum-exp and cross-entropy on the similarity values, that softly encourage the similarity to be high for positive pairs and low for negative pairs (e.g. [25, 28]). The main problem with pairwise losses is that the margin parameters can be hard to tune, especially since the distributions of distances or similarities can change dramatically as the learning progresses.
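For concreteness, the margin-based pairwise (contrastive) loss discussed above can be sketched as follows. This is a generic textbook form, not code from the paper; the function and argument names are illustrative.

```python
import numpy as np

def contrastive_loss(distances, is_positive, margin=1.0):
    """Pairwise contrastive loss: positive pairs are pulled together
    (squared distance), negative pairs are pushed apart until their
    distance exceeds the margin M."""
    d = np.asarray(distances, dtype=float)
    pos = np.asarray(is_positive, dtype=bool)
    per_pair = np.where(pos, d ** 2, np.maximum(0.0, margin - d) ** 2)
    return float(per_pair.mean())
```

The margin M is exactly the kind of sensitive hyper-parameter the Histogram loss is designed to avoid: a negative pair farther than M contributes nothing, while its placement relative to the evolving distance distribution must be tuned per dataset.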
While most works "skip" the burn-in period by initializing the embedding to a network pre-trained for classification [25], [22] further demonstrated the benefit of admixing the classification loss during the fine-tuning stage (which brings in another parameter).

Triplet losses. While pairwise losses care about the absolute values of distances of positive and negative pairs, the quality of embeddings ultimately depends on the relative ordering between positive and negative distances (or similarities). Indeed, the embedding meets the needs of most practical applications as long as the similarities of positive pairs are greater than the similarities of negative pairs [19, 27]. The most popular class of losses for metric learning therefore considers triplets of points x0, x+, x−, where x0, x+ form a positive pair and x0, x− form a negative pair, and measures the difference in their distances or similarities. A triplet-based loss can then e.g. be aggregated over all triplets using a hinge function of these differences. Triplet-based losses are popular for large-scale embedding learning [4] and in particular for deep embeddings [13, 14, 17, 21, 29]. Setting the margin in the triplet hinge-loss still represents a challenge, as does sampling "correct" triplets, since the majority of them quickly become associated with zero loss. On the other hand, focusing sampling on the hardest triplets can prevent efficient learning [17]. Triplet-based losses generally make learning less constrained than pairwise losses. This is because for a low-loss embedding, the characteristic distance separating positive and negative pairs can vary across the embedding space (depending on the location of x0), which is not possible for pairwise losses. In some situations, such added flexibility can increase overfitting.

Quadruplet losses.
Quadruplet-based losses are similar to triplet-based losses, as they are computed by looking at the differences in distances/similarities of positive pairs and negative pairs. In the case of quadruplet-based losses, the compared positive and negative pairs do not share a common point (as they do for triplet-based losses). Quadruplet-based losses do not allow the flexibility of triplet-based losses discussed above (as they include comparisons of positive and negative pairs located in different parts of the embedding space). At the same time, they are not as rigid as pairwise losses, as they only penalize the relative ordering of negative pairs and positive pairs. Nevertheless, despite these appealing properties, quadruplet-based losses remain rarely used and confined to "shallow" embeddings [9, 31]. We are unaware of deep embedding approaches using quadruplet losses. A potential problem with quadruplet-based losses in the large-scale setting is that the number of all quadruplets is even larger than the number of triplets. Among all groups of losses, our approach is most related to quadruplet-based ones, and can be seen as a way to organize the learning of deep embeddings with a quadruplet-based loss in an efficient and (almost) parameter-free manner.

3 Histogram loss

We now describe our loss function and then relate it to the quadruplet-based loss. Our loss (Figure 1) is defined for a batch of examples X = {x1, x2, . . . , xN} and a deep feed-forward network f(·; θ), where θ represents the learnable parameters of the network. We assume that the last layer of the network performs length-normalization, so that the embedded vectors {yi = f(xi; θ)} are L2-normalized. We further assume that we know which elements should match each other and which should not. Let mij be +1 if xi and xj form a positive pair (correspond to a match) and mij be −1 if xi and xj are known to form a negative pair (these labels can be derived from class labels or be specified otherwise).
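The setup above (L2-normalized embeddings and pair labels mij derived from class labels) might be sketched as follows; the function names are illustrative, not from the paper.

```python
import numpy as np

def l2_normalize(y, eps=1e-12):
    """Length-normalize each embedding row, as the last network layer does."""
    return y / np.maximum(np.linalg.norm(y, axis=1, keepdims=True), eps)

def pair_labels(class_ids):
    """m[i, j] = +1 if samples i and j share a class label, -1 otherwise."""
    c = np.asarray(class_ids)
    return np.where(c[:, None] == c[None, :], 1, -1)
```

With L2-normalized rows, the scalar product of two embeddings equals their cosine similarity and lies in [−1, +1], which is what bounds the histogram support in the next paragraph.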
Given {m_ij} and {y_i} we can estimate the two probability distributions p^+ and p^− corresponding to the similarities in positive and negative pairs respectively. In particular, S^+ = {s_ij = ⟨x_i, x_j⟩ | m_ij = +1} and S^− = {s_ij = ⟨x_i, x_j⟩ | m_ij = −1} can be regarded as sample sets from these two distributions. Although samples in these sets are not independent, we keep all of them to ensure a large sample size. Given the sample sets S^+ and S^−, we can use any statistical approach to estimate p^+ and p^−. The fact that these distributions are one-dimensional and bounded to [−1; +1] simplifies the task. Perhaps the most obvious choice in this case is fitting simple histograms with uniformly spaced bins, and we use this approach in our experiments. We therefore consider R-dimensional histograms H^+ and H^−, with the nodes t_1 = −1, t_2, . . . , t_R = +1 uniformly filling [−1; +1] with the step ∆ = 2/(R−1). We estimate the value h^+_r of the histogram H^+ at each node as:

h^+_r = \frac{1}{|S^+|} \sum_{(i,j):\, m_{ij}=+1} \delta_{i,j,r}   (1)

where (i, j) spans all positive pairs of points in the batch. The weights δ_{i,j,r} are chosen so that each pair sample is assigned to the two adjacent nodes:

\delta_{i,j,r} = \begin{cases} (s_{ij} - t_{r-1})/\Delta, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ (t_{r+1} - s_{ij})/\Delta, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}. \end{cases}   (2)

We thus use linear interpolation for each entry in the pair set when assigning it to the two nodes. The estimation of H^− proceeds analogously. Note that the described approach is equivalent to using a "triangular" kernel for density estimation; other kernel functions can be used as well [2]. Once we have the estimates for the distributions p^+ and p^−, we use them to estimate the probability that the similarity in a random negative pair is greater than the similarity in a random positive pair (the probability of reverse). Generally, this probability can be estimated as:

p_{\text{reverse}} = \int_{-1}^{1} p^-(x) \left[ \int_{-1}^{x} p^+(y)\, dy \right] dx = \int_{-1}^{1} p^-(x)\, \Phi^+(x)\, dx = \mathbb{E}_{x \sim p^-}\left[ \Phi^+(x) \right],   (3)

where Φ^+(x) is the CDF (cumulative distribution function) of p^+(x).
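The soft histogram assignment of (1)–(2) can be sketched in NumPy as below. The node placement and linear (triangular-kernel) interpolation follow the text; everything else (function name, loop structure) is illustrative.

```python
import numpy as np

def soft_histogram(sims, R):
    """Estimate an R-node histogram on [-1, +1] with linear soft assignment:
    each similarity contributes to its two adjacent nodes, with weights
    proportional to its distance from the opposite node (eq. 2)."""
    nodes = np.linspace(-1.0, 1.0, R)          # t_1 = -1, ..., t_R = +1
    delta = 2.0 / (R - 1)                      # node spacing
    h = np.zeros(R)
    for s in np.asarray(sims, dtype=float):
        r = min(int((s + 1.0) / delta), R - 2) # index of left node t_r <= s
        w_right = (s - nodes[r]) / delta       # weight assigned to node r+1
        h[r] += 1.0 - w_right
        h[r + 1] += w_right
    return h / len(sims)                       # normalize by |S|
```

Each sample contributes total weight 1, so the histogram sums to one and behaves like a discrete probability mass function over the nodes.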
The integral (3) can then be approximated and computed as:

L(X, \theta) = \sum_{r=1}^{R} h^-_r \left( \sum_{q=1}^{r} h^+_q \right) = \sum_{r=1}^{R} h^-_r\, \phi^+_r,   (4)

where L is our loss function (the histogram loss) computed for the batch X and the embedding parameters θ, which approximates the reverse probability; \phi^+_r = \sum_{q=1}^{r} h^+_q is the cumulative sum of the histogram H^+. Importantly, the loss (4) is differentiable w.r.t. the pairwise similarities s ∈ S^+ and s ∈ S^−. Indeed, it is straightforward to obtain \partial L / \partial h^-_r = \sum_{q=1}^{r} h^+_q and \partial L / \partial h^+_r = \sum_{q=r}^{R} h^-_q from (4). Furthermore, from (1) and (2) it follows that:

\frac{\partial h^+_r}{\partial s_{ij}} = \begin{cases} +\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_{r-1}; t_r], \\ -\frac{1}{\Delta |S^+|}, & \text{if } s_{ij} \in [t_r; t_{r+1}], \\ 0, & \text{otherwise}, \end{cases}   (5)

for any s_ij such that m_ij = +1 (and analogously for \partial h^-_r / \partial s_{ij}). Finally, \partial s_{ij} / \partial x_i = x_j and \partial s_{ij} / \partial x_j = x_i. One can thus backpropagate the loss to the scalar product similarities, then further to the individual embedded points, and then further into the deep embedding network.

Relation to quadruplet loss. Our loss first estimates the probability distributions of similarities for positive and negative pairs in a semi-parametric way (using histograms), and then computes the probability of reverse using these distributions via equation (4). An alternative and purely non-parametric way would be to consider all possible pairs of positive and negative pairs contained in the batch and to estimate this probability from such a set of pairs of pairs. This would correspond to evaluating a quadruplet-based loss similarly to [9, 31]. The number of pairs of pairs in a batch, however, tends to grow as the fourth power of the batch size, rendering exhaustive sampling impractical. This is in contrast to our loss, for which the separation into two stages brings the complexity down to quadratic in the batch size. Another efficient loss based on quadruplets is introduced in [24]. The training is done pairwise, but the threshold separating positive and negative pairs is also learned.
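Given the two histograms, the loss (4) is just a dot product between the negative histogram and the running sum of the positive one. A minimal sketch (the function name is illustrative):

```python
import numpy as np

def histogram_loss(h_pos, h_neg):
    """L = sum_r h_neg[r] * cumsum(h_pos)[r], as in (4): the estimated
    probability that a random negative pair is at least as similar as a
    random positive pair."""
    phi_pos = np.cumsum(h_pos)   # discrete CDF of the positive similarities
    return float(np.dot(h_neg, phi_pos))
```

The margin variant (6) below would simply shift the upper index of the cumulative sum by µ bins.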
We note that quadruplet-based losses as in [9, 31] often encourage the positive pairs to be more similar than the negative pairs by some non-zero margin. It is also easy to incorporate such a non-zero margin into our method by defining the loss to be:

L_\mu(X, \theta) = \sum_{r=1}^{R} h^-_r \left( \sum_{q=1}^{r+\mu} h^+_q \right),   (6)

where the new loss effectively enforces the margin µ∆. We however do not use such a modification in our experiments (preliminary experiments did not show any benefit of introducing the margin).

Figure 2: (left) Recall@K for the CUB-200-2011 dataset for the Histogram loss (4). Different curves correspond to varying the histogram step ∆ (0.04, 0.02, 0.01, 0.005), which is the only parameter inherent to our loss. The curves are very similar for CUB-200-2011. (right) Recall@K for the CUHK03 labeled dataset for different batch sizes (64, 128, 256). Results for batch size 256 are uniformly better than those for smaller values.

4 Experiments

In this section we present the results of embedding learning. We compare our loss to state-of-the-art pairwise and triplet losses, which have been reported in recent works to give state-of-the-art performance on these datasets.

Baselines. In particular, we have evaluated the Binomial Deviance loss [28]. While we are aware only of its use in person re-identification approaches, in our experiments it performed very well for product image search and bird recognition, significantly outperforming the baseline pairwise (contrastive) loss reported in [21] once its parameters are tuned. The binomial deviance loss is defined as:

J_{\text{dev}} = \sum_{i,j \in I} w_{i,j} \ln\left( e^{-\alpha (s_{i,j} - \beta) m_{i,j}} + 1 \right),   (7)

where I is the set of training image indices, and s_{i,j} is the similarity measure between the ith and jth images (i.e. s_{i,j} = cosine(x_i, x_j)).
Furthermore, m_{i,j} and w_{i,j} are the learning supervision and scaling factors respectively:

m_{i,j} = \begin{cases} 1, & \text{if } (i,j) \text{ is a positive pair}, \\ -C, & \text{if } (i,j) \text{ is a negative pair}, \end{cases} \qquad w_{i,j} = \begin{cases} \frac{1}{n_1}, & \text{if } (i,j) \text{ is a positive pair}, \\ \frac{1}{n_2}, & \text{if } (i,j) \text{ is a negative pair}, \end{cases}   (8)

where n_1 and n_2 are the numbers of positive and negative pairs in the training set (or mini-batch) correspondingly, and α and β are hyper-parameters. The parameter C is the negative cost for balancing the weights of positive and negative pairs that was introduced in [28]. Our experimental results suggest that the quality of the embedding is sensitive to this parameter. Therefore, in the experiments we report results for two versions of the loss: with C = 10, which is close to optimal for the re-identification datasets, and with C = 25, which is close to optimal for the product and bird datasets. We have also computed the results for the Lifted Structured Similarity Softmax (LSSS) loss [21] on the CUB-200-2011 [26] and Online Products [21] datasets and additionally applied it to the re-identification datasets. The Lifted Structured Similarity Softmax loss is triplet-based and uses a sophisticated triplet sampling strategy that was shown in [21] to outperform the standard triplet-based loss. Additionally, we performed experiments for the triplet loss [18] that uses "semi-hard negative" triplet sampling. Such sampling considers only triplets violating the margin, but still having the positive distance smaller than the negative distance.

Figure 3: Recall@K for (left) the CUB-200-2011 and (right) the Online Products datasets for different methods.
Results for the Histogram loss (4), Binomial Deviance (7), LSSS [21] and Triplet [18] losses are presented; the Binomial Deviance loss with C = 25 performs best on these datasets, with the Histogram loss close behind. We also include results for the contrastive and triplet losses from [21].

Datasets and evaluation metrics. We have evaluated the above-mentioned loss functions on four datasets: CUB-200-2011 [26], CUHK03 [11], Market-1501 [30] and Online Products [21]. All these datasets have been used for evaluating methods of solving embedding learning tasks. The CUB-200-2011 dataset includes 11,788 images of 200 classes corresponding to different bird species. As in [21] we use the first 100 classes for training (5,864 images) and the remaining classes for testing (5,924 images). The Online Products dataset includes 120,053 images of 22,634 classes. Classes correspond to online products from eBay.com. There are approximately 5.3 images for each product. We used the standard split from [21]: 11,318 classes (59,551 images) are used for training and 11,316 classes (60,502 images) are used for testing. The images from the CUB-200-2011 and the Online Products datasets are resized to 256 by 256, keeping the original aspect ratio (padding is done when needed). The CUHK03 dataset is commonly used for the person re-identification task. It includes 13,164 images of 1,360 pedestrians captured from 3 pairs of cameras. Each identity is observed by two cameras and has 4.8 images in each camera on average. Following most of the previous works, we use the "CUHK03-labeled" version of the dataset with manually-annotated bounding boxes. According to the CUHK03 evaluation protocol, the 1,360 identities are split into 1,160 identities for training, 100 for validation and 100 for testing. We use the first split from the CUHK03 standard split set which is provided with the dataset.
The Market-1501 dataset includes 32,643 images of 1,501 pedestrians, each pedestrian captured by several cameras (from two to six). The dataset is divided randomly into a test set of 750 identities and a train set of 751 identities. Following [21, 28, 30], we report the Recall@K metric for all the datasets (footnote 1: Recall@K is the probability of getting the right match among the first K gallery candidates sorted by similarity). For CUB-200-2011 and Online Products, every test image is used as the query in turn and the remaining images are used as the gallery. In contrast, for CUHK03 single-shot results are reported. This means that one image for each identity from the test set is chosen randomly in each of its two camera views. Recall@K values for 100 random query-gallery sets are averaged to compute the final result for a given split. For the Market-1501 dataset, we use the multi-shot protocol (as is done in most other works), as there are many images of the same person in the gallery set.

Figure 4: Recall@K for (left) the CUHK03 and (right) the Market-1501 datasets. The Histogram loss (4) outperforms the Binomial Deviance, LSSS and Triplet losses.

Architectures used. For training on the CUB-200-2011 and the Online Products datasets we used the same architecture as in [21], which coincides with the GoogLeNet architecture [23] up to the 'pool5' and the inner product layers, while the last layer is used to compute the embedding vectors. The GoogLeNet part is pretrained on ImageNet ILSVRC [16] and the last layer is trained from scratch. As in [21], all GoogLeNet layers are fine-tuned with a learning rate that is ten times less than the learning rate of the last layer.
We set the embedding size to 512 for all the experiments with this architecture. We reproduced the results for the LSSS loss [21] for these two datasets. For the architectures that use the Binomial Deviance loss, Histogram loss and Triplet loss, the iteration number and the parameter values (for the former) are chosen using the validation set. For training on CUHK03 and Market-1501 we used the Deep Metric Learning (DML) architecture introduced in [28]. It has three CNN streams for the three parts of the pedestrian image (head and upper torso, torso, lower torso and legs). Each of the streams consists of 2 convolution layers followed by the ReLU non-linearity and max-pooling. The first convolution layers of the three streams have shared weights. Descriptors are produced by the last 500-dimensional inner product layer that takes the concatenated outputs of the three streams as input.

Table 1: Final results for CUHK03-labeled and Market-1501. For CUHK03-labeled, results for 5 random splits were averaged. A batch of size 256 was used for both experiments.

Dataset       r = 1   r = 5   r = 10   r = 15   r = 20
CUHK03        65.77   92.85   97.62    98.94    99.43
Market-1501   59.47   80.73   86.94    89.28    91.09

Implementation details. For all the experiments with loss functions (4) and (7) we used a quadratic number of pairs in each batch (all the pairs that can be sampled from the batch). For the triplet loss, "semi-hard" triplets chosen from all possible triplets in the batch are used. For comparison with other methods the batch size was set to 128. We sample batches randomly in such a way that there are several images for each sampled class in the batch. We iterate over all the classes and all the images corresponding to the classes, sampling images in turn. The sequences of the classes and of the corresponding images are shuffled for every new epoch.
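The class-balanced sampling described above might be sketched as follows. This is a simplified interpretation of the text, not the authors' code; the per-class cap and all names are illustrative.

```python
import random

def epoch_batches(images_by_class, batch_size=128, max_per_class=10, seed=0):
    """One epoch of class-balanced batches: shuffle the class order and each
    class's images (capped per class), then cut the resulting sequence of
    (class, image) pairs into fixed-size batches."""
    rng = random.Random(seed)
    classes = list(images_by_class)
    rng.shuffle(classes)
    pool = []
    for c in classes:
        imgs = list(images_by_class[c])
        rng.shuffle(imgs)
        pool.extend((c, im) for im in imgs[:max_per_class])
    return [pool[i:i + batch_size]
            for i in range((0), len(pool) - batch_size + 1, batch_size)]
```

Keeping several images per class in each batch is what makes a quadratic number of positive pairs available to losses (4) and (7).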
CUB-200-2011 and Market-1501 include more than ten images per class on average, so we limit the number of images of the same class in the batch to ten for the experiments on these datasets. We used ADAM [7] for stochastic optimization in all of the experiments. For all losses the learning rate is set to 1e-4 for all the experiments except those on the CUB-200-2011 dataset, for which we found a learning rate of 1e-5 more effective. For the re-identification datasets the learning rate was decreased by a factor of 10 after 100K iterations; for the other experiments the learning rate was fixed. The number of iterations for each method was chosen using the validation set.

Results. The Recall@K values for the experiments on CUB-200-2011, Online Products, CUHK03 and Market-1501 are shown in Figure 3 and Figure 4. The Binomial Deviance loss (7) gives the best results for CUB-200-2011 and Online Products with the C parameter set to 25. We previously checked several values of C on the CUB-200-2011 dataset and found the value C = 25 to be the optimal one. We also observed that with smaller values of C the results are significantly worse than those presented in Figure 3-left (for C equal to 2 the best Recall@1 is 43.50%). For CUHK03 the situation is reversed: the Histogram loss gives a boost of 2.64% over the Binomial Deviance loss with C = 10 (which we found to be optimal for this dataset).

Figure 5: Histograms of positive and negative distance distributions on the CUHK03 test set for: (a) the initial state (randomly initialized net), (b) the network trained with the Histogram loss, (c) the same for the Binomial Deviance loss, (d) the same for the LSSS loss. Red is for negative pairs, green is for positive pairs. The negative cosine distance measure is used for the Histogram and Binomial Deviance losses; the Euclidean distance is used for the LSSS loss. Initially the two distributions are highly overlapped. For the Histogram loss the distribution overlap is smaller than for the LSSS loss.
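The Recall@K values reported in this section (footnote 1 above) can be computed as in the following sketch; all names are illustrative, and embeddings are assumed L2-normalized so that the scalar product is the similarity.

```python
import numpy as np

def recall_at_k(query_emb, gallery_emb, query_ids, gallery_ids, k):
    """Fraction of queries whose k most similar gallery items (by scalar
    product) contain an item with the same identity."""
    sims = np.asarray(query_emb) @ np.asarray(gallery_emb).T
    order = np.argsort(-sims, axis=1)[:, :k]   # top-k gallery indices per query
    g = np.asarray(gallery_ids)
    hits = [np.any(g[row] == q) for row, q in zip(order, query_ids)]
    return float(np.mean(hits))
```

In the single-shot CUHK03 protocol this function would be applied to 100 random query-gallery sets and the results averaged; in the multi-shot Market-1501 protocol the gallery simply contains all test images of each identity.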
The results are shown in Figure 4-left. Embedding distributions of the positive and negative pairs from the CUHK03 test set for the different methods are shown in Figures 5b, 5c and 5d. For the Market-1501 dataset our method also outperforms the Binomial Deviance loss for both values of C. In contrast to the experiments with CUHK03, the Binomial Deviance loss appeared to perform better with C set to 25 than to 10 for Market-1501. We have also investigated how the size of the histogram bin affects the model performance for the Histogram loss. As shown in Figure 2-left, the results for CUB-200-2011 remain stable for bin sizes equal to 0.005, 0.01, 0.02 and 0.04 (these values correspond to 400, 200, 100 and 50 bins in the histograms). In our method, distributions of similarities of training data are estimated by distributions of similarities within mini-batches. Therefore we also show results for the Histogram loss for various batch size values (Figure 2-right). Larger batches are preferable: for CUHK03, Recall@K for a batch size of 256 is uniformly better than Recall@K for 128 and 64. We observed similar behaviour for Market-1501. Additionally, we present our final results (batch size set to 256) for CUHK03 and Market-1501 in Table 1. For CUHK03, Recall@K values for 5 random splits were averaged. To the best of our knowledge, these results corresponded to the state-of-the-art on CUHK03 and Market-1501 at the moment of submission. To summarize the results of the comparison: the new (Histogram) loss gives the best results on the two person re-identification problems. For CUB-200-2011 and Online Products it comes very close to the best loss (Binomial Deviance with C = 25). Interestingly, the Histogram loss uniformly outperformed the triplet-based LSSS loss [21] in our experiments, including the two datasets from [21].
Importantly, the new loss does not require tuning of any parameters associated with it (though we have found learning with our loss to be sensitive to the learning rate).

5 Conclusion

In this work we have suggested a new loss function for learning deep embeddings, called the Histogram loss. Like most previous losses, it is based on the idea of making the distributions of the similarities of the positive and negative pairs less overlapping. Unlike other losses used for deep embeddings, the new loss comes with virtually no parameters that need to be tuned. It also incorporates information across a large number of quadruplets formed from training samples in the mini-batch and implicitly takes all such quadruplets into account. We have demonstrated the competitive results of the new loss on a number of datasets. In particular, the Histogram loss outperformed other losses for the person re-identification problem on the CUHK03 and Market-1501 datasets. The code for Caffe [6] is available at: https://github.com/madkn/HistogramLoss.

Acknowledgement: This research is supported by the Russian Ministry of Science and Education grant RFMEFI57914X0071.

References

[1] R. Arandjelović, P. Gronat, A. Torii, T. Pajdla, and J. Sivic. NetVLAD: CNN architecture for weakly supervised place recognition. IEEE International Conference on Computer Vision, 2015. [2] A. Bowman and A. Azzalini. Applied smoothing techniques for data analysis. Number 18 in Oxford Statistical Science Series. Clarendon Press, Oxford, 1997. [3] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. International Journal of Pattern Recognition and Artificial Intelligence, 7(04):669–688, 1993. [4] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. The Journal of Machine Learning Research, 11:1109–1135, 2010. [5] S. Chopra, R. Hadsell, and Y. LeCun.
Learning a similarity metric discriminatively, with application to face verification. 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 20-26 June 2005, San Diego, CA, USA, pp. 539–546, 2005. [6] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014. [7] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. CoRR, abs/1412.6980, 2014. [8] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems (NIPS), pp. 1097–1105, 2012. [9] M. Law, N. Thome, and M. Cord. Quadruplet-wise image similarity learning. Proceedings of the IEEE International Conference on Computer Vision, pp. 249–256, 2013. [10] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989. [11] W. Li, R. Zhao, T. Xiao, and X. Wang. Deepreid: Deep filter pairing neural network for person reidentification. 2014 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2014, Columbus, OH, USA, June 23-28, 2014, pp. 152–159, 2014. [12] J. Lin, O. Morère, V. Chandrasekhar, A. Veillard, and H. Goh. Deephash: Getting regularization, depth and fine-tuning right. CoRR, abs/1501.04711, 2015. [13] O. M. Parkhi, A. Vedaldi, and A. Zisserman. Deep face recognition. Proceedings of the British Machine Vision Conference 2015, BMVC 2015, Swansea, UK, September 7-10, 2015, pp. 41.1–41.12, 2015. [14] Q. Qian, R. Jin, S. Zhu, and Y. Lin. Fine-grained visual categorization via multi-stage metric learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3716–3724, 2015. [15] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. 
CNN features off-the-shelf: An astounding baseline for recognition. IEEE Conference on Computer Vision and Pattern Recognition, CVPR Workshops 2014, Columbus, OH, USA, June 23-28, 2014, pp. 512–519, 2014. [16] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. [17] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823, 2015. [18] F. Schroff, D. Kalenichenko, and J. Philbin. Facenet: A unified embedding for face recognition and clustering. IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, pp. 815–823, 2015. [19] M. Schultz and T. Joachims. Learning a distance metric from relative comparisons. Advances in neural information processing systems (NIPS), p. 41, 2004. [20] E. Simo-Serra, E. Trulls, L. Ferraz, I. Kokkinos, P. Fua, and F. Moreno-Noguer. Discriminative learning of deep convolutional feature point descriptors. Proceedings of the IEEE International Conference on Computer Vision, pp. 118–126, 2015. [21] H. O. Song, Y. Xiang, S. Jegelka, and S. Savarese. Deep metric learning via lifted structured feature embedding. Computer Vision and Pattern Recognition (CVPR), 2016. [22] Y. Sun, Y. Chen, X. Wang, and X. Tang. Deep learning face representation by joint identification-verification. Advances in Neural Information Processing Systems, pp. 1988–1996, 2014. [23] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9, 2015. [24] O. Tadmor, T. Rosenwein, S. Shalev-Shwartz, Y. Wexler, and A. 
Shashua. Learning a metric embedding for face recognition using the multibatch method. NIPS, 2016. [25] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pp. 1701–1708. IEEE, 2014. [26] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. (CNS-TR-2011-001), 2011. [27] K. Q. Weinberger and L. K. Saul. Distance metric learning for large margin nearest neighbor classification. The Journal of Machine Learning Research, 10:207–244, 2009. [28] D. Yi, Z. Lei, and S. Z. Li. Deep metric learning for practical person re-identification. arXiv preprint arXiv:1407.4979, 2014. [29] J. Žbontar and Y. LeCun. Stereo matching by training a convolutional neural network to compare image patches. arXiv preprint arXiv:1510.05970, 2015. [30] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, and Q. Tian. Scalable person re-identification: A benchmark. Computer Vision, IEEE International Conference on, 2015. [31] W.-S. Zheng, S. Gong, and T. Xiang. Reidentification by relative distance comparison. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 35(3):653–668, 2013.
2016
124
6,022
An Efficient Streaming Algorithm for the Submodular Cover Problem Ashkan Norouzi-Fard* ashkan.norouzifard@epfl.ch Abbas Bazzi* abbas.bazzi@epfl.ch Marwa El Halabi† marwa.elhalabi@epfl.ch Ilija Bogunovic† ilija.bogunovic@epfl.ch Ya-Ping Hsieh† ya-ping.hsieh@epfl.ch Volkan Cevher† volkan.cevher@epfl.ch Abstract We initiate the study of the classical Submodular Cover (SC) problem in the data streaming model, which we refer to as the Streaming Submodular Cover (SSC) problem. We show that any single-pass streaming algorithm using memory sublinear in the size of the stream will fail to provide any non-trivial approximation guarantee for SSC. Hence, we consider a relaxed version of SSC, where we only seek to find a partial cover. We design the first Efficient bicriteria Submodular Cover Streaming (ESC-Streaming) algorithm for this problem, and provide theoretical guarantees for its performance supported by numerical evidence. Our algorithm finds solutions that are competitive with the near-optimal offline greedy algorithm despite requiring only a single pass over the data stream. In our numerical experiments, we evaluate the performance of ESC-Streaming on active set selection and large-scale graph cover problems. 1 Introduction We consider the Streaming Submodular Cover (SSC) problem, where we seek to find the smallest subset that achieves a certain utility, as measured by a monotone submodular function. The data is assumed to arrive in an arbitrary order, and the goal is to minimize the number of passes over the whole dataset while using a memory that is as small as possible. The motivation behind studying SSC is that many real-world applications can be modeled as cover problems, where we need to select a small subset of data points such that they maximize a particular utility criterion. Often, the quality criterion can be captured by a utility function that satisfies submodularity [27, 16, 15], an intuitive notion of diminishing returns.
Despite the fact that the standard Submodular Cover (SC) problem is extensively studied and very well understood, all the algorithms proposed in the literature rely heavily on having access to the whole ground set during their execution. However, in many real-world applications, this assumption does not hold. For instance, when the dataset is being generated on the fly or is too large to fit in memory, having access to the whole ground set may not be feasible. Similarly, depending on the application, we may have some restrictions on how we can access the data. Namely, it could be that random access to the data is simply not possible, or we might be restricted to only accessing a small fraction of it. In all such scenarios, the optimization needs to be done on the fly. The SC problem was first considered by Wolsey [28], who showed that a simple greedy algorithm yields a logarithmic-factor approximation. This algorithm performs well in practice and usually returns *Theory of Computation Laboratory 2 (THL2), EPFL. These authors contributed equally to this work. †Laboratory for Information and Inference Systems (LIONS), EPFL. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. solutions that are near-optimal. Moreover, improving on its theoretical approximation guarantee is not possible under some natural complexity-theoretic assumptions [12, 10]. However, such an offline greedy approach is impractical for SSC, since it requires an infeasible number of passes over the stream. 1.1 Our Contribution In this work, we rigorously show that achieving any non-trivial approximation of SSC in a single pass over the data stream, while using a reasonable amount of memory, is not possible. More generally, we establish an unconditional lower bound on the trade-off between the memory and the approximation ratio of any p-pass streaming algorithm solving the SSC problem.
Hence, we consider instead a relaxed version of SSC, where we only seek to achieve a fraction (1 − ε) of the specified utility. We develop the first Efficient bicriteria Submodular Cover Streaming (ESC-Streaming) algorithm. ESC-Streaming is simple, easy to implement, and memory- as well as time-efficient. It returns solutions that are competitive with the near-optimal offline greedy algorithm. It requires only a single pass over the data in arbitrary order, and provides, for any ε > 0, a 2/ε-approximation to the optimal solution size, while achieving a (1 − ε) fraction of the specified utility. In our experiments, we test the performance of ESC-Streaming on active set selection in materials science and on graph cover problems. In the latter, we consider a graph dataset that consists of more than 787 million nodes and 47.6 billion edges. 1.2 Related work Submodular optimization has attracted a lot of interest in machine learning, data mining, and theoretical computer science. Faced with streaming and massive data, the traditional (offline) greedy approaches fail. One popular approach to deal with the challenge of the data deluge is to adopt a streaming or distributed perspective. Several submodular optimization problems have been studied so far under these two settings [25, 11, 9, 2, 20, 8, 18, 7, 14, 1]. In the streaming setting, the goal is to find nearly optimal solutions with a minimal number of passes over the data stream, memory requirement, and computational cost (measured in terms of oracle queries). A streaming problem related to SSC was investigated by Badanidiyuru et al. [2], who studied the streaming Submodular Maximization (SM) problem subject to a cardinality constraint. In their setting, given a budget k, the goal is to pick at most k elements that achieve the largest possible utility. In contrast, for the SC problem, given a utility Q, the goal is to find the minimum number of elements that can achieve it.
In the offline setting of cardinality-constrained SM, the greedy algorithm returns a solution whose value is within a factor (1 − 1/e) of the optimal value [21], which is known to be the best guarantee that one can obtain efficiently [22]. In the streaming setting, Badanidiyuru et al. [2] designed an elegant single-pass (1/2 − ε)-approximation algorithm that requires only O((k log k)/ε) memory. More general constraints for SM have also been studied in the streaming setting, e.g., in [8]. Moreover, the Streaming Set Cover problem, which is a special case of the SSC problem, is extensively studied [25, 11, 9, 7, 14, 1]. In this special case, the elements in the data stream are m subsets of a universe X of size n, and the goal is to find the minimum number of sets k* that can cover all the elements in the universe X. The study of the Streaming Set Cover problem is mainly focused on the semi-streaming model, where the memory is restricted to Õ(n).³ This regime was first investigated by Saha and Getoor [25], who designed an O(log n)-pass, O(log n)-approximation algorithm that uses Õ(n) space. Emek and Rosén [11] showed that if one restricts the streaming algorithm to perform only one pass over the data stream, then the best possible approximation guarantee is O(√n). This lower bound holds even for randomized algorithms. They also designed a deterministic greedy algorithm that matches this approximation guarantee. By relaxing the single-pass constraint, Chakrabarti and Wirth [7] designed a p-pass semi-streaming (p + 1)·n^{1/(p+1)}-approximation algorithm, and proved that this is essentially tight up to a factor of (p + 1)³. Partial streaming submodular optimization. The Streaming Set Cover problem has also been studied from a bicriteria perspective, where one settles for solutions that only cover a (1 − ε) fraction of the universe.
Building on the work of [11], the authors in [7] designed a semi-streaming p-pass algorithm that achieves a (1 − ε, δ(n, ε))-approximation, where δ(n, ε) = min{8p·ε^{1/p}, (8p + 1)·n^{1/(p+1)}}. (Footnote 3: The Õ notation is used to hide poly-log factors, i.e., Õ(n) := O(n · poly{log n, log m}).) They also provided a lower bound that matches their approximation ratio up to a factor of Θ(p³). Distributed submodular optimization. Mirzasoleiman et al. [20] consider the SC problem in the distributed setting, where they design an efficient algorithm whose solution is close to that of the offline greedy algorithm. Moreover, they study the trade-off between the communication cost and the number of rounds needed to obtain such a solution. To the best of our knowledge, no other works have studied the general SSC problem. We propose the first efficient algorithm, ESC-Streaming, that approximately solves this problem with tight guarantees. 2 Problem Statement Preliminaries. We assume that we are given a utility function f : 2^V → R+ that measures the quality of a given subset S ⊆ V, where V = {e_1, ..., e_m} is the ground set. The marginal gain associated with any given element e ∈ V with respect to some set S ⊆ V is defined as Δf(e|S) := Δ(e|S) = f(S ∪ {e}) − f(S). In this work, we focus on normalized, monotone, submodular utility functions f, where f is said to be: 1. submodular if for all S, T such that S ⊆ T, and for all e ∈ V \ T, Δ(e|S) ≥ Δ(e|T); 2. monotone if for all S, T such that S ⊆ T ⊆ V, we have f(S) ≤ f(T); 3. normalized if f(∅) = 0. In the standard Submodular Cover (SC) problem, the goal is to find the smallest subset S ⊆ V that satisfies a certain utility Q, i.e., min_{S ⊆ V} |S| s.t. f(S) ≥ Q. (SC) Hardness results. The SC problem is known to be NP-hard.
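To make these definitions concrete, the following toy sketch (ours, not code from the paper) implements a coverage utility, which is normalized, monotone, and submodular, together with the greedy rule of [28] that repeatedly adds the element of largest marginal gain until the utility Q is reached. The function names and the toy instance are our own assumptions.

```python
# Toy illustration (ours): a set-coverage utility f(S) = |union of the
# sets indexed by S| is normalized, monotone, and submodular; the greedy
# rule for SC adds the element of largest marginal gain until f(S) >= Q.

def coverage_utility(ground_sets, S):
    """f(S): number of universe items covered by the sets indexed by S."""
    covered = set()
    for e in S:
        covered |= ground_sets[e]
    return len(covered)

def greedy_submodular_cover(ground_sets, Q):
    """Offline greedy for SC: repeatedly pick argmax_e Delta(e|S) until
    f(S) >= Q (stopping early if Q is unreachable)."""
    S = []
    while coverage_utility(ground_sets, S) < Q:
        gains = {e: coverage_utility(ground_sets, S + [e]) - coverage_utility(ground_sets, S)
                 for e in ground_sets if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] == 0:
            break
        S.append(best)
    return S

# Toy instance: universe {0,...,5}, four candidate sets, target Q = 6.
V = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {0, 5}}
S = greedy_submodular_cover(V, Q=6)
print(S, coverage_utility(V, S))  # → [0, 2] 6
```

On this instance, greedy first picks set 0 (gain 3), then set 2 (gain 3), reaching the full utility with two elements.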
A simple greedy strategy [28] that in each round selects the element with the highest marginal gain until Q is reached returns a solution of size at most H(max_e f({e}))·k*, where k* is the size of the optimal solution set S*. (Footnote 4: Here, H(x) is the x-th harmonic number, which is bounded by H(x) ≤ 1 + ln x.) Moreover, Feige [12] proved that this is the best possible approximation guarantee unless NP ⊆ DTIME(n^{O(log log n)}). This was recently improved to an NP-hardness result by Dinur and Steurer [10]. Streaming Submodular Cover (SSC). In the streaming setting, the main challenge is to solve the SC problem while maintaining a small memory and without performing a large number of passes over the data stream. We use m to denote the size of the data stream. Our first result states that any single-pass streaming algorithm with an approximation ratio better than m/2 must use at least Ω(m) memory. Hence, for large datasets, if we restrict ourselves to a single-pass streaming algorithm with sublinear memory o(m), we cannot obtain any non-trivial approximation of the SSC problem (cf. Theorem 2 in Section 4). To obtain non-trivial and feasible guarantees, we need to relax the coverage constraint in SC. Thus, we instead solve the Streaming Bicriteria Submodular Cover (SBSC) problem, defined as follows: Definition 1. Given ε ∈ (0, 1) and δ ≥ 1, an algorithm is said to be a (1 − ε, δ)-bicriteria approximation algorithm for the SBSC problem if for any Submodular Cover instance with utility Q and optimal set size k*, the algorithm returns a solution S such that f(S) ≥ (1 − ε)Q and |S| ≤ δk*. (1) 3 An efficient streaming submodular cover algorithm ESC-Streaming algorithm. The first phase of our algorithm is described in Algorithm 1. The algorithm receives as input a parameter M representing the size of the allowed memory; the discussion of the role of this parameter is postponed to Section 4. The algorithm keeps t + 1 = log(M/2) + 1 representative sets.
Each representative set S_j (j = 0, ..., t) has size at most 2^j and a corresponding threshold value Q/2^j. Once a new element e arrives in the stream, it is added to every representative set that is not yet fully populated and for which the element's marginal gain is above the corresponding threshold, i.e., Δ(e|S_j) ≥ Q/2^j. This phase of the algorithm requires only one pass over the data stream. The running time of the first phase of the algorithm is O(log M) per element of the stream, since the per-element computational cost is O(log M) oracle calls. In the second phase (i.e., Algorithm 2), given a feasible ε̃, the algorithm finds the smallest set S_i among the stored sets such that f(S_i) ≥ (1 − ε̃)Q. For any query, the running time of the second phase is O(log log M). Note that after one pass over the stream, there is no limitation on the number of queries that we can answer, i.e., we do not need another pass over the stream. Moreover, this phase does not require any oracle calls, and its total memory usage is at most M.
Algorithm 1 ESC-Streaming Algorithm - Picking representative sets (t = log(M/2))
1: S_0 = S_1 = ... = S_t = ∅
2: for i = 1, ..., m do
3:   Let e be the next element in the stream
4:   for j = 0, ..., t do
5:     if Δ(e|S_j) ≥ Q/2^j and |S_j| < 2^j then
6:       S_j ← S_j ∪ {e}
7:     end if
8:   end for
9: end for
Algorithm 2 ESC-Streaming Algorithm - Responding to queries (given a value ε̃, perform the following steps)
1: Run a binary search on S_0, ..., S_t
2: Return the smallest set S_i such that f(S_i) ≥ (1 − ε̃)Q
3: If no such set exists, return "Assumption Violated"
In the following section, we analyze ESC-Streaming and prove that it is a (1 − ε̃, 2/ε̃)-bicriteria approximation algorithm for SSC. Formally, we prove the following: Theorem 1. For any given instance of the SSC problem, and any values M, ε̃ such that k*/ε̃ ≤ M, where k* is the size of the optimal solution to SSC, the ESC-Streaming algorithm returns a (1 − ε̃, 2/ε̃)-approximate solution. 4 Theoretical Bounds Lower Bound.
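The two phases can be sketched in Python as follows. This is our own rendering, not the authors' code: `f` is a generic utility oracle, sets are Python lists, and Algorithm 2's binary search is replaced by a linear scan for clarity.

```python
import math

def esc_streaming_pass(stream, f, Q, M):
    """Phase 1 (cf. Algorithm 1): a single pass keeping t + 1 = log2(M/2) + 1
    representative sets; S_j holds at most 2^j elements, and e joins S_j
    only if its marginal gain is at least the threshold Q / 2^j."""
    t = int(math.log2(M / 2))
    reps = [[] for _ in range(t + 1)]
    for e in stream:
        for j, Sj in enumerate(reps):
            if len(Sj) < 2 ** j and f(Sj + [e]) - f(Sj) >= Q / 2 ** j:
                Sj.append(e)
    return reps

def esc_streaming_query(reps, f, Q, eps):
    """Phase 2 (cf. Algorithm 2): smallest stored set achieving (1 - eps)*Q.
    A linear scan for clarity; the paper binary-searches over the t+1 sets."""
    for Sj in reps:  # ordered by size bound 2^0, 2^1, ...
        if f(Sj) >= (1 - eps) * Q:
            return Sj
    return None  # "Assumption Violated"

# Toy run: coverage utility f(S) = |union of the chosen sets|, M = 8.
sets = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {0, 5}}
f = lambda S: len(set().union(*[sets[e] for e in S]))
reps = esc_streaming_pass(stream=[0, 1, 2, 3], f=f, Q=6, M=8)
sol = esc_streaming_query(reps, f, Q=6, eps=0.5)
print(reps, sol)  # → [[], [0, 2], [0, 2]] [0, 2]
```

In this toy run S_0 stays empty (no single element has gain ≥ Q = 6), while S_1 and S_2 both collect elements 0 and 2, and the query with ε̃ = 0.5 returns a set achieving the full utility.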
We start by establishing a lower bound on the trade-off between the memory requirement and the approximation ratio of any p-pass streaming algorithm solving the SSC problem. Theorem 2. For any number of passes p and any stream size m, a p-pass streaming algorithm that, with probability at least 2/3, approximates the submodular cover problem to a factor smaller than m^{1/p}/(p + 1) must use a memory of size at least Ω(m^{1/p}/(p(p + 1)²)). The proof of this theorem can be found in the supplementary material. Note that for p = 1, Theorem 2 states that any one-pass streaming algorithm with an approximation ratio better than m/2 requires at least Ω(m) memory. Hence, for large datasets, Theorem 2 rules out any approximation of the streaming submodular cover problem if we restrict ourselves to a one-pass streaming algorithm with sublinear memory o(m). This result motivates the study of the Streaming Bicriteria Submodular Cover (SBSC) problem as in Definition 1. Main result and discussion. Refining the analysis of the greedy algorithm for SC [28], where we stop once we have achieved a utility of (1 − ε)Q, yields that the number of elements that we pick is at most k* ln(1/ε). This yields a tight (1 − ε, ln(1/ε))-bicriteria approximation algorithm for the Bicriteria Submodular Cover (BSC) problem. One can turn this bicriteria algorithm into a (1 − ε, ln(1/ε))-bicriteria algorithm for SBSC, at the cost of performing k* ln(1/ε) passes over the data stream, which may be infeasible for some applications. Moreover, this requires m·k*·ln(1/ε) oracle calls, which may be infeasible for large datasets. To circumvent these issues, it is natural to parametrize our algorithm by a user-defined memory budget M that the streaming algorithm is allowed to use.
Assuming, for some 0 < ε ≤ e^{−1}, that the (1 − ε, ln(1/ε))-bicriteria solution given by the offline greedy algorithm for the BSC variant of the problem fits in a memory of M/2, our algorithm (ESC-Streaming) is guaranteed to return a (1 − 1/ln(1/ε), 2 ln(1/ε))-bicriteria solution for the SBSC problem, while using at most M memory. Hence, in only one pass over the data stream, ESC-Streaming returns solutions guaranteed to cover, for small values of ε, almost the same fraction of the utility as the greedy solution, losing only a factor of two in the worst-case solution size. Moreover, the number of oracle calls needed by ESC-Streaming is only m log M, which for M = 2k* ln(1/ε) is bounded by m log M = m log(2k* ln(1/ε)) (oracle calls by the ESC-Streaming algorithm) ≪ m·k*·ln(1/ε) (oracle calls by greedy), which is more than a factor k*/log(k*) smaller than for the greedy algorithm. This enables the ESC-Streaming algorithm to run much faster than the offline greedy algorithm. Another feature of ESC-Streaming is that it performs a single pass over the data stream, and after this unique pass, we are able to query a (1 − 1/ln(1/ε′), 2 ln(1/ε′))-bicriteria solution for any ε ≤ ε′ ≤ e^{−1}, without any additional oracle calls. Whenever the above inequality does not hold, ESC-Streaming returns "Assumption Violated". More precisely, we state the following theorem, whose proof can be found in the supplementary material. Theorem 3. For any given instance of the SSC problem, and any values M, ε such that 2k* ln(1/ε) ≤ M, where k* is the optimal solution size, the ESC-Streaming algorithm returns a (1 − 1/ln(1/ε), 2 ln(1/ε))-approximate solution. Remarks. Note that in Algorithm 1, we can replace the constant 2 by another constant 1 < α ≤ 2. The representative sets are sized accordingly as α^j, with t = log_α(M/α). Varying α provides a trade-off between memory and the solution-size guarantee.
More precisely, for any 1 < α ≤ 2, ESC-Streaming achieves a (1 − 1/ln(1/ε), α ln(1/ε))-approximation guarantee for instances of SSC where α·k*·ln(1/ε) ≤ M. However, the improvement in the size approximation guarantee comes at the cost of increased memory usage, (M − 1)/(α − 1), and an increased number of oracle calls, m(log_α(M/α) + 1). Notice that in the statement of Theorem 3, the approximation guarantee of ESC-Streaming is given with respect to a memory only large enough to fit the offline greedy algorithm's solution. However, if we allow our memory M to be as large as k*/ε, then Theorem 1 follows immediately for ε̃ = 1/ln(1/ε). 5 Example Applications Many real-world problems, such as data summarization [27], image segmentation [16], and influence maximization in social networks [15], can be formulated as a submodular cover problem and can benefit from the streaming setting. In this section, we discuss two such concrete applications. 5.1 Active set selection To scale kernel methods (such as kernel ridge regression, Gaussian processes, etc.) to large datasets, we often rely on active set selection methods [23]. For example, a significant problem with Gaussian process prediction is that it scales as O(n³). Storing the kernel matrix K and solving the associated linear system is prohibitive when n is large. One way to overcome this is to select a small subset of the data while maintaining a certain diversity. A popular approach for active set selection is the Informative Vector Machine (IVM) [26], where the goal is to select a set S that maximizes the utility function f(S) = (1/2)·log det(I + σ^{−2}·K_{S,S}). (2) Here, K_{S,S} is the submatrix of K corresponding to the rows/columns indexed by S, and σ > 0 is a regularization parameter. This utility function is monotone submodular, as shown in [17].
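As a quick numerical illustration of the utility in (2) and its diminishing-returns property, here is a small NumPy sketch of ours; the toy points and bandwidth are assumptions, not the paper's data.

```python
import numpy as np

def ivm_utility(K, S, sigma=1.0):
    """f(S) = 1/2 * log det(I + sigma^-2 * K_{S,S}) for the active set S."""
    if len(S) == 0:
        return 0.0  # f is normalized: f(empty set) = 0
    Kss = K[np.ix_(S, S)]
    _, logdet = np.linalg.slogdet(np.eye(len(S)) + Kss / sigma**2)
    return 0.5 * logdet

# Gaussian kernel on a few 2-D toy points (bandwidth h = 1 here, whereas
# the experiments in Section 6.1 use h = 724 on molecule features).
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq_dists / (2 * 1.0 ** 2))

# Diminishing returns: the marginal gain of adding point 2 can only
# shrink as the active set grows, since f is submodular.
gain_at_empty = ivm_utility(K, [2]) - ivm_utility(K, [])
gain_at_0 = ivm_utility(K, [0, 2]) - ivm_utility(K, [0])
assert gain_at_0 <= gain_at_empty + 1e-12
```

The log-det form rewards picking points that are dissimilar under the kernel, which is exactly the diversity criterion motivating active set selection.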
5.2 Graph set cover In many applications, e.g., influence maximization in social networks [15] and community detection in graphs [13], we are interested in selecting a small subset of vertices from a massive graph that "cover", in some sense, a large fraction of the graph. In particular, in Section 6, we consider two fundamental set cover problems: the Dominating Set and Vertex Cover problems. Given a graph G(V, E) with vertex set V and edge set E, let ρ(S) denote the neighbours of the vertices of S in the graph, and δ(S) the edges of the graph connected to a vertex in S. Dominating set is the problem of selecting the smallest set that covers the vertex set V; the corresponding utility is f(S) = |ρ(S) ∪ S|. Vertex cover is the problem of selecting the smallest set that covers the edge set E; the corresponding utility is f(S) = |δ(S)|. Both utilities are monotone submodular functions. 6 Experimental Results We address the following questions in our experiments: 1. How does ESC-Streaming perform in comparison to the offline greedy algorithm, in terms of solution size and speed? 2. How does α influence the trade-off between solution size and speed? 3. How does ESC-Streaming scale to massive datasets? We evaluate the performance of ESC-Streaming on real-world datasets with two applications: active set selection and graph set cover problems, described in Section 5. For active set selection, we choose a dataset whose size permits comparison with the offline greedy algorithm. For graph cover, we run ESC-Streaming on a large graph of 787 million nodes and 47.6 billion edges. We measure the computational cost in terms of the number of oracle calls, which is independent of the concrete implementation and platform. 6.1 Active Set Selection for Quantum Mechanics In quantum chemistry, computing certain properties, such as the atomization energy of molecules, can be computationally challenging [24].
In this setting, it is of interest to choose a small and diverse training set from which one can predict the atomization energy (e.g., by using kernel ridge regression) of other molecules. Here, we apply ESC-Streaming to the log-det function defined in Section 5.1, where we use the Gaussian kernel K_{ij} = exp(−‖x_i − x_j‖₂²/(2h²)), and we set the hyperparameters as in [24]: σ = 1, h = 724. The dataset consists of 7k small organic molecules, each represented by a 276-dimensional vector. We set M = 2^15 and vary Q from f(V)/2 to 3f(V)/4, and α from 1.1 to 2. We compare against offline greedy and its accelerated version with lazy updates (lazy greedy) [19]. For all algorithms, we provide a vector of different values of ε̃ as input, and terminate once the utility (1 − ε̃)Q corresponding to the smallest ε̃ is achieved. Below we report the performance for the smallest and largest tested values, ε̃ = 0.01 and ε̃ = 0.5, respectively. In Figure 6.1, we show the performance of ESC-Streaming with respect to offline greedy and lazy greedy, in terms of the size of the solutions picked and the number of oracle calls made. The computational costs of all algorithms are normalized to those of offline greedy. It can be seen that standard ESC-Streaming, with α = 2, always chooses a set at most twice as large as offline greedy (the largest ratio is 2.1089), using at most 3.15% and 25.5% of the number of oracle calls made by offline greedy and lazy greedy, respectively. As expected, varying the parameter α leads to smaller solutions at the cost of more oracle calls: α = 1.1 leads to solutions of roughly the same size as those found by offline greedy. Note also that choosing larger values of α leads to jumps in the solution set sizes (cf. Figure 6.1).
In particular, varying the required utility Q, even by a small amount, may not be achievable at the current solution size (α^j) and may require moving to a set larger by at least a factor of α (α^{j+1}). Finally, we remark that even for this small dataset, offline greedy, for the largest tested Q, required 1.2 × 10^7 oracle calls and took almost 2 days to run on the same machine.
Figure 6.1: Active set selection of molecules: (Left) Percentage of oracle calls made relative to offline greedy; (Middle) Size of selected sets for ε = 0.01; (Right) Size of selected sets for ε = 0.5.
6.2 Cover problems on Massive graphs To assess the scalability of ESC-Streaming, we apply it to the "uk-2014" graph, a large snapshot of the .uk domain taken at the end of 2014 [5, 4, 3]. It consists of 787,801,471 nodes and 47,614,527,250 edges. This graph is sparse, with average degree 60.440, and hence requires large cover solutions. Storing this dataset (i.e., the adjacency list of the graph) on the hard drive requires more than 190 GB of memory. We solve both the Dominating Set and Vertex Cover problems, whose utility functions are defined in Section 5. For the Dominating Set problem, we set M = 520 MB, α = 2 and Q = 0.7|V|. We run the first phase of ESC-Streaming (cf. Algorithm 1), then query for different values of ε̃ between 0 and 1, using Algorithm 2.
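The two utility functions from Section 5.2 used in these experiments are straightforward to implement; here is a toy sketch of ours (the adjacency-dict representation and function names are assumptions, not the authors' code).

```python
# Toy sketch (ours) of the two cover utilities from Section 5.2 for a
# graph stored as an adjacency dict; both are monotone submodular.

def dominating_utility(adj, S):
    """Dominating Set utility f(S) = |rho(S) union S|: vertices in S
    together with their neighbours rho(S)."""
    covered = set(S)
    for v in S:
        covered |= adj[v]
    return len(covered)

def vertex_cover_utility(adj, S):
    """Vertex Cover utility f(S) = |delta(S)|: edges incident to a
    vertex in S, counted without double-counting."""
    edges = set()
    for v in S:
        for u in adj[v]:
            edges.add(frozenset((u, v)))
    return len(edges)

# Path graph 0-1-2-3: vertex 1 dominates {0, 1, 2}; {1, 3} covers all 3 edges.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(dominating_utility(adj, [1]))       # → 3
print(vertex_cover_utility(adj, [1, 3]))  # → 3
```

Both functions only touch the adjacency lists of the selected vertices, which is what makes them usable as oracles over a streamed graph.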
Similarly, for the Vertex Cover problem, we set M = 320 MB, α = 2 and Q = 0.8|E|. Figure 6.2 shows the performance of ESC-Streaming on both the dominating set and vertex cover problems, in terms of the utility achieved, i.e., the number of vertices/edges covered, for all feasible ε̃ values, with respect to the size of the subset of vertices picked. As a baseline, we compare against a random selection procedure that picks a random permutation of the vertices and then selects any vertex with a non-zero marginal gain, until it reaches the same partial cover achieved by ESC-Streaming. Note that offline greedy, even with lazy evaluations, is not applicable here since it does not terminate in a reasonable time, so we omit it from the comparison. Similarly, we do not compare against the Emek–Rosén algorithm [11], due to its large memory requirement of n log m, which in this case is roughly 20 times larger than the memory used by ESC-Streaming. We do significantly better than random selection, especially on the Vertex Cover problem, which for sparse graphs is more challenging than the Dominating Set problem. Since running the greedy algorithm on the "uk-2014" graph goes beyond our computing infrastructure, we include another instance of the dominating set problem on a smaller graph, "Friendster", an online gaming network [29], to compare against the offline greedy algorithm. This graph has 65.6 million nodes and 1.8 billion edges. The memory required by ESC-Streaming is less than 30 MB for α = 2. We let offline greedy run for 2 days and gathered data for 2000 greedy iterations. Figure 6.2 (Right) shows that our performance almost matches the greedy solutions we managed to compute. 7 Conclusion In this paper, we consider the SC problem in the streaming setting, where we select the least number of elements that can achieve a certain utility, measured by a submodular function.
We prove that there cannot exist any single-pass streaming algorithm that achieves a non-trivial approximation of SSC using sublinear memory if the utility has to be met exactly. Consequently, we develop an efficient approximation algorithm, ESC-Streaming, which finds solution sets, slightly larger than the optimal solution, that partially cover the desired utility. We rigorously analyze the approximation guarantees of ESC-Streaming and compare these guarantees against those of the offline greedy algorithm. We demonstrate the performance of ESC-Streaming on real-world problems. We believe that our algorithm is an important step towards solving streaming and large-scale submodular cover problems, which lie at the heart of many modern machine learning applications.
Figure 6.2: (Left) Vertex cover on "uk-2014"; (Middle) Dominating set on "uk-2014"; (Right) Dominating set on "Friendster".
Acknowledgments We would like to thank Michael Kapralov and Ola Svensson for useful discussions. This work was supported in part by the European Commission under ERC Future Proof, SNF 200021-146750, SNF CRSII2-147633, NCCR Marvel, and ERC Starting Grant 335288-OptApprox. References [1] Sepehr Assadi, Sanjeev Khanna, and Yang Li. Tight bounds for single-pass streaming complexity of the set cover problem. arXiv preprint arXiv:1603.05715, 2016. [2] Ashwinkumar Badanidiyuru, Baharan Mirzasoleiman, Amin Karbasi, and Andreas Krause. Streaming submodular maximization: Massive data summarization on the fly.
In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 671–680. ACM, 2014. [3] Paolo Boldi, Andrea Marino, Massimo Santini, and Sebastiano Vigna. BUbiNG: Massive crawling for the masses. In Proceedings of the Companion Publication of the 23rd International Conference on World Wide Web, pages 227–228. International World Wide Web Conferences Steering Committee, 2014. [4] Paolo Boldi, Marco Rosa, Massimo Santini, and Sebastiano Vigna. Layered label propagation: A multiresolution coordinate-free ordering for compressing social networks. In Sadagopan Srinivasan, Krithi Ramamritham, Arun Kumar, M. P. Ravindra, Elisa Bertino, and Ravi Kumar, editors, Proceedings of the 20th international conference on World Wide Web, pages 587–596. ACM Press, 2011. [5] Paolo Boldi and Sebastiano Vigna. The WebGraph framework I: Compression techniques. In Proc. of the Thirteenth International World Wide Web Conference (WWW 2004), pages 595–601, Manhattan, USA, 2004. ACM Press. [6] Amit Chakrabarti, Graham Cormode, and Andrew McGregor. Robust lower bounds for communication and stream computation. In Proceedings of the fortieth annual ACM symposium on Theory of computing, pages 641–650. ACM, 2008. [7] Amit Chakrabarti and Tony Wirth. Incidence geometries and the pass complexity of semi-streaming set cover. arXiv preprint arXiv:1507.04645, 2015. [8] Chandra Chekuri, Shalmoli Gupta, and Kent Quanrud. Streaming algorithms for submodular function maximization. In Automata, Languages, and Programming, pages 318–330. Springer, 2015. [9] Erik D Demaine, Piotr Indyk, Sepideh Mahabadi, and Ali Vakilian. On streaming and communication complexity of the set cover problem. In Distributed Computing, pages 484–498. Springer, 2014. [10] Irit Dinur and David Steurer. Analytical approach to parallel repetition. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, STOC ’14, pages 624–633, New York, NY, USA, 2014. ACM.
[11] Yuval Emek and Adi Rosén. Semi-streaming set cover. In Automata, Languages, and Programming, pages 453–464. Springer, 2014. [12] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4):634–652, 1998. [13] Santo Fortunato. Community detection in graphs. Physics reports, 486(3):75–174, 2010. [14] Piotr Indyk, Sepideh Mahabadi, and Ali Vakilian. Towards tight bounds for the streaming set cover problem. arXiv preprint arXiv:1509.00118, 2015. [15] David Kempe, Jon Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 137–146. ACM, 2003. [16] Gunhee Kim, Eric P Xing, Li Fei-Fei, and Takeo Kanade. Distributed cosegmentation via submodular optimization on anisotropic diffusion. In Computer Vision (ICCV), 2011 IEEE International Conference on, pages 169–176. IEEE, 2011. [17] Andreas Krause and Daniel Golovin. Submodular function maximization. Tractability: Practical Approaches to Hard Problems, 3:19, 2012. [18] Ravi Kumar, Benjamin Moseley, Sergei Vassilvitskii, and Andrea Vattani. Fast greedy algorithms in mapreduce and streaming. ACM Transactions on Parallel Computing, 2(3):14, 2015. [19] Michel Minoux. Accelerated greedy algorithms for maximizing submodular set functions. In Optimization Techniques, pages 234–243. Springer, 1978. [20] Baharan Mirzasoleiman, Amin Karbasi, Ashwinkumar Badanidiyuru, and Andreas Krause. Distributed submodular cover: Succinctly summarizing massive data. In Advances in Neural Information Processing Systems, pages 2863–2871, 2015. [21] George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. An analysis of approximations for maximizing submodular set functions—I. Mathematical Programming, 14(1):265–294, 1978. [22] George L Nemhauser and Leonard A Wolsey. Best algorithms for approximating the maximum of a submodular set function.
Mathematics of operations research, 3(3):177–188, 1978. [23] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning (Adaptive Computation and Machine Learning). The MIT Press, 2005. [24] Matthias Rupp. Machine learning for quantum mechanics in a nutshell. International Journal of Quantum Chemistry, 115(16):1058–1073, 2015. [25] Barna Saha and Lise Getoor. On maximum coverage in the streaming model & application to multi-topic blog-watch. In SDM, volume 9, pages 697–708. SIAM, 2009. [26] Matthias Seeger. Greedy forward selection in the informative vector machine. Technical report, Technical report, University of California at Berkeley, 2004. [27] Sebastian Tschiatschek, Rishabh K Iyer, Haochen Wei, and Jeff A Bilmes. Learning mixtures of submodular functions for image collection summarization. In Advances in Neural Information Processing Systems, pages 1413–1421, 2014. [28] Laurence A Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982. [29] Jaewon Yang and Jure Leskovec. Defining and evaluating network communities based on ground-truth. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, MDS ’12, pages 3:1–3:8, New York, NY, USA, 2012. ACM. 9
Fundamental Limits of Budget-Fidelity Trade-off in Label Crowdsourcing Farshad Lahouti Electrical Engineering Department, California Institute of Technology lahouti@caltech.edu Babak Hassibi Electrical Engineering Department, California Institute of Technology hassibi@caltech.edu Abstract Digital crowdsourcing (CS) is a modern approach to performing certain large projects using small contributions of a large crowd. In CS, a taskmaster typically breaks the project down into small batches of tasks and assigns them to so-called workers with imperfect skill levels. The crowdsourcer then collects and analyzes the results for inference and for serving the purpose of the project. In this work, the CS problem, as a human-in-the-loop computation problem, is modeled and analyzed in an information-theoretic rate-distortion framework. The purpose is to identify the ultimate fidelity that one can achieve by any form of query from the crowd and any decoding (inference) algorithm with a given budget. The results are established via a joint source-channel (de)coding scheme, which represents the query scheme and inference, over parallel noisy channels, which model workers with imperfect skill levels. We also present and analyze a query scheme dubbed k-ary incidence coding and study optimized query pricing in this setting. 1 Introduction Digital crowdsourcing (CS) is a modern approach to performing certain large projects using small contributions of a large crowd. Crowdsourcing is usually used when the tasks involved may better suit humans than machines, or in situations where they require some form of human participation. As such, crowdsourcing is categorized as a form of human-based computation or human-in-the-loop computation system. This article examines the fundamental performance limits of crowdsourcing and sheds light on the design of optimized crowdsourcing systems.
Crowdsourcing is used in many machine learning projects for labeling large sets of unlabeled data, and Amazon Mechanical Turk (AMT) serves as a popular platform to this end. Crowdsourcing is also useful in very subjective matters such as rating of different goods and services, as is now widely popular in online rating platforms and applications such as Yelp. Another example is classifying a large number of images as suitable or unsuitable for children. In so-called citizen research projects, a large number of (often human-deployed or human-operated) sensors contribute to accomplish a wide array of crowdsensing objectives, e.g., [2] and [3]. In crowdsourcing, a taskmaster typically breaks the project down into small batches of tasks, recruits so-called workers and assigns them the tasks accordingly. The crowdsourcer then collects and analyzes the results collectively to address the purpose of the project. The workers' pay is often low or non-existent. In cases such as labeling, the work is typically tedious and hence the workers usually handle only a small batch of work in a given project.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
[Figure 1: (a) an information-theoretic crowdsourcing model; (b) valid responses for 3IC with N = 2; (c) invalid responses.]
The workers are often non-specialists and as such there may be errors in their completion of assigned tasks. Due to the nature of task assignment by the taskmaster, the workers and their skill levels are typically unknown a priori. In the case of rating systems such as Yelp, there is no pay for regular reviewers, only non-monetary personal incentives; however, there are illegal reviewers who are paid to write fake reviews. In many cases of crowdsourcing, the ground truth is not known at all.
The transitory or fleeting characteristic of workers, their unknown and imperfect skill levels and their possible motivations for spamming make the design of crowdsourcing projects and the analysis of the obtained results particularly challenging. Researchers have studied the optimized design of crowdsourcing systems within the described setting for enhanced reliability. Most research reported so far is devoted to the optimized design and analysis of aggregation and inference schemes, possibly using redundant task assignment. In AMT-type crowdsourcing, two popular approaches for aggregation are majority voting and the Dawid and Skene (DS) algorithm [5]. The former sets the estimate based on what the majority of the crowd agrees on, and is provably suboptimal [8]. Majority voting is susceptible to error when there are spammers in the crowd, as it weighs the opinion of everybody in the crowd equally. The DS algorithm, within a probabilistic framework, aims at joint estimation of the workers' skill levels and a reliable label based on the data collected from the crowd. The scheme runs as an expectation maximization (EM) formulation in an iterative manner. More recent research with similar EM formulations and a variety of probabilistic models is reported in [9, 12, 10]. In [8], a label inference algorithm for CS is presented that runs iteratively over a bipartite graph. In [1], the CS problem is posed as a so-called bandit survey problem, for which the trade-offs of cost and reliability in the context of worker selection are studied. Schemes for identifying workers with low skill levels are studied in, e.g., [6]. In [13], an analysis of the DS algorithm is given and an improved inference algorithm is presented. Another class of works on crowdsourcing for clustering relies on convex optimization formulations of inferring clusters within probabilistic graphical models, e.g., [11] and the references therein.
In this work, a crowdsourcing problem is modeled and analyzed in an information theoretic setting. The purpose is to seek ultimate performance bounds, in terms of the CS budget (or equivalently the number of queries per item) and a CS fidelity, that one can achieve by any form of query from the workers and any inference algorithm. Two particular scenarios of interest include the case where the workers' skill levels are unknown both to the taskmaster and the crowdsourcer, and the case where the skill levels are perfectly estimated during inference by the crowdsourcer. Within the presented framework, we also investigate a class of query schemes dubbed k-ary incidence coding and analyze its performance. At the end, we comment on an associated query pricing strategy. 2 Modeling Crowdsourcing In this Section, we present a communication system model for crowdsourcing. The model, as depicted in Figure 1a, enables the analysis of the fundamental performance limits of crowdsourcing. 2.1 Data Set: Source Consider a dataset X = {X_1, ..., X_L} composed of L items, e.g., images. In practice, there is a certain function B(X) ∈ \mathcal{B}(X) of the items that is of interest in crowdsourcing and is here considered as the source. The value of this function is to be determined by the crowd for the given dataset. In the case of crowdsourced clustering, B(X_i) = B_j ∈ \mathcal{B}(X) = {B_1, ..., B_N} indicates the bin or cluster to which the item X_i ideally belongs. We write B(X_1, ..., X_n) = B(X^n) = (B(X_1), ..., B(X_n)). The number of clusters, |\mathcal{B}(X)| = N, may or may not be known a priori. 2.2 Crowd Workers: Channels The crowd is modeled by a set of parallel noisy channels in which each channel C_i, i = 1, ..., W, represents the ith worker. The channel input is a query that is designed based on the source. The channel output is the worker's response to the query.
The output may or may not be the correct answer to the query, depending on the skill level of the worker; the noisy channel is meant to model possible errors by the worker. A suitable model for C_i is a discrete channel model. The channels may be assumed independent, on the basis that different individuals have different knowledge sets. Related probabilistic models representing the possible error in completion of a task by a worker are reviewed in [8]. Formally, a channel (worker) is represented by a probability distribution P(v|u), u ∈ U, v ∈ V, where U is the set of possible responses to a query and V is the set of choices offered to the worker in responding to a query. For the example of images suitable for children, in general we may consider a shade of possible responses to the query, U, including the extremes of totally suitable and totally unsuitable; as the possible choices offered to the worker to answer the query, V, we may consider the two options suitable and unsuitable. As described below, in this work we consider two channel models representing possibly erroneous responses of the workers: an M-ary symmetric channel model (MSC) and a spammer-hammer channel model (SHC). An MSC model with parameter ϵ is a symmetric discrete memoryless channel without feedback [4], with input u ∈ U and output v ∈ V (|U| = |V| = M), characterized by the transition probability
P(v|u) = \begin{cases} \epsilon/(M-1) & v \neq u \\ 1-\epsilon & v = u. \end{cases}   (1)
If we consider a sequence of channel inputs u^n = (u_1, ..., u_n) and the corresponding output sequence v^n, we have P(v^n|u^n) = \prod_{i=1}^{n} P(v_i|u_i), which holds because of the memoryless and no-feedback assumptions. In the case of clustering with the MSC, the probability of misclassifying any input from a given cluster into another cluster depends only on the worker and not on the corresponding clusters.
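As a concrete illustration (our own sketch, not code from the paper), the MSC worker of equation (1) can be simulated directly: answer correctly with probability 1 − ϵ, otherwise pick a wrong answer uniformly. The function name and the Monte Carlo check are assumptions for illustration.

```python
import random

def msc_response(u, M, eps, rng=random):
    """Simulate one MSC(eps) worker answering a query with true answer u.

    With probability 1-eps the worker answers correctly; otherwise an
    incorrect answer is chosen uniformly among the remaining M-1 options
    (sketch of equation (1); answer alphabet is {0, ..., M-1})."""
    if rng.random() < 1 - eps:
        return u
    wrong = [v for v in range(M) if v != u]
    return rng.choice(wrong)

# Empirical check: the error rate should concentrate around eps.
rng = random.Random(0)
eps, M, n = 0.2, 4, 100_000
errors = sum(msc_response(1, M, eps, rng) != 1 for _ in range(n))
print(abs(errors / n - eps) < 0.01)
```

With 10^5 trials the empirical error rate sits within about one percent of ϵ, matching the memoryless model.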
In the spammer-hammer channel model with probability q of being a hammer (SHC(q)), a spammer chooses a valid response to a query uniformly at random, and a hammer answers the query perfectly [8]. The corresponding discrete memoryless channel model without feedback, with input u ∈ U, output v ∈ V and state C ∈ {S, H}, P(C = H) = q, is described as follows:
P(v|u, C) = \begin{cases} 0 & C = H \text{ and } v \neq u \\ 1 & C = H \text{ and } v = u \\ 1/|V| & C = S \end{cases}   (2)
where C ∈ {S, H} indicates whether the worker (channel) is a spammer or a hammer. In the case of our current interest, |U| = |V| = M, and we have P(v^n|u^n, C^n) = \prod_{i=1}^{n} P(v_i|u_i, C_i). In the sequel, we consider the following two scenarios: when the workers' skill levels are unknown (SL-UK) and when they are perfectly known by the crowdsourcer (SL-CS). In both cases, we assume that the skill levels are not known to the taskmaster (transmitter). The presented framework can also accommodate other, more general scenarios of interest. For example, the feedforward link in Figure 1a could be used to model a channel whose state is affected by the input, e.g., the difficulty of questions. These extensions remain for future studies. 2.3 Query Scheme and Inference: Coding In the system model presented in Figure 1a, encoding captures the way the queries are posed. A basic query asks the worker for the value of B(X). In the example of crowdsourcing for labeling images that are suitable for children, the query is "This image suits children; true or false?" The decoder, or the crowdsourcer, collects the responses of workers to the queries and attempts to infer the right label (cluster) for each of the images, even though the collected responses could in general be incomplete or erroneous. In the case of crowdsourcing for labeling a large set of dog images with their breeds, a query may be formed by showing two pictures at once and inquiring whether they are from the same breed [11].
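Returning to the worker models of Section 2.2, the SHC(q) channel of equation (2) is equally simple to simulate (again our own sketch, with hypothetical names): a hammer echoes the truth, a spammer draws uniformly from the valid choices.

```python
import random

def shc_response(u, choices, q, rng=random):
    """Simulate one worker drawn from the spammer-hammer pool SHC(q).

    A hammer (probability q) answers correctly; a spammer picks a valid
    response uniformly at random (sketch of equation (2); `choices` is
    the set V of valid responses and is assumed to contain u)."""
    if rng.random() < q:
        return u                  # hammer: perfect answer
    return rng.choice(choices)    # spammer: uniform over valid responses

rng = random.Random(1)
q, choices, n = 0.3, [0, 1, 2, 3], 100_000
errs = sum(shc_response(2, choices, q, rng) != 2 for _ in range(n))
# A spammer is wrong with probability 1 - 1/|V|, so the overall error
# rate should concentrate around (1 - q) * (1 - 1/4) = 0.525.
print(abs(errs / n - 0.525) < 0.01)
```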
The queries are in fact posed as showing elements of a binary incidence matrix, A, whose rows and columns correspond to X. Here A(X_1, X_2) = 1 indicates that the two items are members of the same cluster (breed) and A(X_1, X_2) = 0 indicates otherwise. The matrix is symmetric and its diagonal is 1. We refer to this query scheme as binary incidence coding. If we show three pictures at once and ask the user to classify them (put the pictures in similar or distinct bins), it is as if we ask about three elements of the same matrix, i.e., A(X_1, X_2), A(X_1, X_3) and A(X_2, X_3) (ternary incidence coding). In general, if we show k pictures as a single query, it is equivalent to inquiring about \binom{k}{2} entries of the matrix (k-ary incidence coding, or kIC). As we elaborate below, out of the 2^{\binom{k}{2}} possibilities, a number of the choices are invalid, and this provides an error correction capability for kIC. Figures 1b and 1c show the graphical representation of 3IC and the choices a worker would have in clustering with this code. The nodes denote the items and the edges indicate whether they are in the same cluster. In 3IC, if X_1 and X_2 are each in the same cluster as X_3, then all three of them are in the same cluster. It is straightforward to see that in 3IC and for N = 2, we have only four valid responses (Figure 1b) to a query, as opposed to 2^{\binom{3}{2}} = 8. The first item in Figure 1c is invalid because there are only two clusters (N = 2); if we do not know the number of clusters, or N ≥ 3, it would remain a valid response. In this setting, the encoded signal u can be one of the four valid symbols in the set U; similarly, what the workers may select, v (the signal decoded over the channel), is from the set V, where U = V. As such, since in kIC the obviously erroneous answers are removed from the choices a worker can make in responding to a query, one expects an improved overall CS performance, i.e., an error correction capability for kIC.
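The counting argument above can be checked by brute force: a response to a kIC query is an assignment of the \binom{k}{2} pairwise "same cluster" bits, and it is valid exactly when the bits describe a consistent clustering into at most N clusters. The sketch below (our own code, not the paper's) enumerates all assignments and recovers the four valid 3IC responses for N = 2.

```python
from itertools import combinations, product

def valid_kic_responses(k, N):
    """Count valid responses to a k-ary incidence query: assignments of
    the C(k,2) pairwise same-cluster bits that describe a consistent
    clustering of k items into at most N clusters (sketch of the kIC
    counting argument; not code from the paper)."""
    pairs = list(combinations(range(k), 2))
    count = 0
    for bits in product([0, 1], repeat=len(pairs)):
        same = list(zip(pairs, bits))
        # Union-find over the "same cluster" edges.
        parent = list(range(k))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (i, j), b in same:
            if b:
                parent[find(i)] = find(j)
        # Consistency: two items are marked "same" iff they ended up
        # in the same connected component (transitivity holds).
        ok = all(b == (find(i) == find(j)) for (i, j), b in same)
        if ok and len({find(i) for i in range(k)}) <= N:
            count += 1
    return count

print(valid_kic_responses(3, 2))   # 4 valid responses out of 2**3 = 8
print(valid_kic_responses(3, 3))   # 5 once "all separate" becomes valid
```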
In Section 4, we study the performance of this code in greater detail. Note that in clustering with kIC (k ≥ 2) as described above, the code identifies clusters only up to their specific labellings. While we presented kIC as a concrete example, there may be many other forms of query or coding schemes. Formally, the code is composed of encoder and decoder mappings:
C : \mathcal{B}(X)^n \to \mathcal{U}^{\sum_{i=1}^{W} m_i}, \quad C' : \mathcal{V}^{\sum_{i=1}^{W} m_i} \to \hat{\mathcal{B}}(X)^n,   (3)
where n is the block size, or the number of items in each encoding (we assume n | L), and m_i is the number of uses of channel C_i, i.e., the number of queries that worker i, 1 ≤ i ≤ W, handles. In many practical cases of interest we have \hat{\mathcal{B}}(X) = \mathcal{B}(X), and we may have n = L. The rate of the code is R = \sum_{i=1}^{W} m_i / n queries per item. In this setting, C'(C(B(X^n))) = \hat{B}(X^n). Depending on the availability of feedback from the decoder to the encoder, the code can adapt for optimized performance; the feedback could provide the encoder with the results of prior queries. We here focus on non-adaptive codes in (3), which are static and remain unchanged over the period of the crowdsourcing project. We will elaborate on code design in Section 3. Depending on the type of code in use and the objectives of crowdsourcing, one may design different decoding schemes. For instance, in the simple case of directly asking workers about the function B(X_i), with multiple queries for each item, popular approaches are majority voting and EM-style decoding due to Dawid and Skene [5], which attempts to jointly estimate the workers' skill levels and decode B(X). In the case of clustering with 2IC, an inference scheme based on convex optimization is presented in [11]. The rate of the code is proportional to the CS budget, and we use the rate as a proxy for budget throughout this analysis.
However, since different types of query have different costs, both financially (in crowdsourcing platforms) and in terms of the time or effort they demand from the worker, one needs to be careful in comparing the results of different coding schemes. We elaborate on this issue for the case of kIC in Appendix E. 2.4 Distortion and the Design Problem In the framework of Figure 1a, we are interested in designing the CS code, i.e., the query and inference schemes, such that with a given budget a certain CS fidelity is optimized. We consider the fidelity as an average distortion with respect to the source (dataset). For a distance function d(B(x), \hat{B}(x)), for which d(B(x^n), \hat{B}(x^n)) = \frac{1}{n} \sum_{i=1}^{n} d(B(x_i), \hat{B}(x_i)), the average distortion is
D(B(X), \hat{B}(X)) = E\, d(B(X^n), \hat{B}(X^n)) = \sum_{X^n} P(B(X^n)) P(\hat{B}(X^n)|B(X^n))\, d(B(X^n), \hat{B}(X^n)),   (4)
where P(B(X^n)) = P(B(X))^n for iid B(X). The design problem is therefore one of CS fidelity-query budget optimization (or distortion-rate, D(R), optimization) and may be expressed as follows:
D^*(R_t) = \min_{C, C', R \leq R_t} D(B(X), \hat{B}(X))   (5)
where R_t is a target rate or query budget. The optimization is with respect to the coding and decoding schemes, the type of feedback (if applicable), and query assignment and rate allocation. The optimum solution to the above problem is referred to as the distortion-rate function, D^*(R_t) (or the CS fidelity-query budget function). A basic distance function, for the case where B(X) is discrete, is \ell_0(B(X), \hat{B}(X)), i.e., the Hamming distance. In this case, the average distortion D(B(X), \hat{B}(X)) reflects the average probability of error. As such, the D(R) optimization problem may be rewritten as
D^*(R_t) = \min_{C, C', R \leq R_t} P(E : \hat{B}(X) \neq B(X)).   (6)
In the case of crowdsourcing for clustering, this quantifies the performance in terms of the overall probability of error in clustering. For other crowdsourcing problems, we may consider other distortion functions.
Equivalently, we may consider minimizing the rate subject to a distortion constraint. The R(D) problem is expressed as
R^*(D_t) = \min_{C, C' :\, D(B(X), \hat{B}(X)) \leq D_t} R = \min_{C, C' :\, D(B(X), \hat{B}(X)) \leq D_t} \sum_{i=1}^{W} m_i / n   (7)
where D_t is a target distortion or average probability of error. The optimum solution to the above problem is referred to as the rate-distortion function, R^*(D_t) (the CS query budget-fidelity function). In case the taskmaster does not know the skill levels of the workers, different workers, disregarding their skill levels, receive the same number of queries (m_i = m', ∀i); the code design then involves designing the query and inference schemes. 3 Information Theoretic CS Budget-Fidelity Limits In the CS budget-fidelity optimization problem in (5), the code providing the optimized solution needs to balance two opposing design criteria to meet the target CS fidelity: on one hand, the design aims at efficiency of the query, making as few queries as possible; on the other hand, the code needs to take into account the imperfection of worker responses and incorporate sufficient redundancy. In the information theory (coding theory) realm, the former corresponds to source coding (compression), the latter corresponds to channel coding (error control coding), and coding that serves both purposes is a joint source-channel code. In this Section, we first present a brief overview of joint source-channel coding and related results in information theory. Next, we present the CS budget-fidelity function in the two cases of SL-UK and SL-CS described in Section 2.2. 3.1 Background Consider the communication of a random source Z from a finite alphabet \mathcal{Z} over a discrete memoryless channel. The source is first processed by an encoder C whose output is communicated over the channel. The channel output is processed by a decoder C', which reconstructs the source as \hat{Z} \in \hat{\mathcal{Z}}, and we often have \mathcal{Z} = \hat{\mathcal{Z}}.
From a rate-distortion theory perspective, we first consider the case where the channel is error free. The source is iid with probability mass function P(Z) and, by Shannon's source coding theorem, is characterized by a rate-distortion function,
R^*(D_t) = \min_{C, C' :\, D(Z, \hat{Z}) \leq D_t} I(Z; \hat{Z}),   (8)
where I(\cdot\,; \cdot) denotes the mutual information between two random variables. The source coding is defined by the following two mappings:
C : \mathcal{Z}^n \to \{1, \dots, 2^{nR}\}, \quad C' : \{1, \dots, 2^{nR}\} \to \hat{\mathcal{Z}}^n   (9)
The average distortion is defined in (4) and D_t is the target performance. The optimization in source coding with distortion is with respect to the source coding or compression scheme, described probabilistically as P(\hat{Z}|Z) in information theory. The proof of the source coding theorem follows in two steps. In the first step, we prove that any rate R ≥ R^*(D_t) is achievable, in the sense that there exists a family (indexed by n) of codes {C_n, C'_n} for which, as n grows to infinity, the resulting average distortion satisfies the desired constraint. In the second step, the converse, we prove that any code with rate R < R^*(D_t) results in an average distortion that violates the desired constraint. This establishes the described rate-distortion function as the fundamental limit for lossy compression of a source with a desired maximum average distortion. From the perspective of Shannon's channel coding theorem, we consider the source as an iid uniform source and the channel as a discrete memoryless channel characterized by P(V|U), where U ∈ \mathcal{U} is the channel input and V ∈ \mathcal{V} is the channel output. The channel coding is defined by the following two mappings:
C : \{1, \dots, |\mathcal{Z}|\} \to \mathcal{U}^n, \quad C' : \mathcal{V}^n \to \{1, \dots, |\mathcal{Z}|\}   (10)
The theorem establishes the capacity of the channel as C = \max_{C, C'} I(Z; \hat{Z}) and states that for a rate R there exists a channel code providing reliable communication over the noisy channel if and only if R ≤ C.
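For intuition about (8), the rate-distortion function of a Bernoulli(p) source under Hamming distortion has the classical closed form R(D) = H_b(p) − H_b(D) for D ≤ min(p, 1−p), and 0 beyond (a standard result, cf. [4]). A minimal sketch, with our own function names:

```python
from math import log2

def hb(p):
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def rate_distortion_bernoulli(p, D):
    """Rate-distortion function of a Bernoulli(p) source under Hamming
    distortion: R(D) = H_b(p) - H_b(D) for D < min(p, 1-p), else 0."""
    if D >= min(p, 1 - p):
        return 0.0
    return hb(p) - hb(D)

# A fair-coin source costs 1 bit/symbol losslessly, and about 0.531
# bits/symbol if we tolerate 10% symbol errors.
print(rate_distortion_bernoulli(0.5, 0.0))   # 1.0
print(round(rate_distortion_bernoulli(0.5, 0.1), 3))
```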
Again the proof follows in two steps. First, we establish achievability, i.e., we show that for any rate R ≤ C there exists a family of codes (indexed by the length n) for which the average probability of error P(\hat{Z} \neq Z) goes to zero as n grows to infinity. Next, we prove the converse, i.e., we show that for any rate R > C the probability of error is bounded away from zero and grows exponentially fast to 1/2 as R goes beyond C. This establishes the described capacity as the fundamental limit for transmission of an iid uniform source over a discrete memoryless channel. For the problem of our interest, i.e., the transmission of an iid source (not necessarily uniform) over a discrete memoryless channel, the joint source-channel coding theorem, also known as the source-channel separation theorem, is instrumental. The theorem states that in this setting a code exists that can facilitate reconstruction of the source with distortion D(Z, \hat{Z}) \leq D_t if and only if R^*(D_t) < C. For completeness, we reproduce the theorem from [4] below. Theorem 1 Let Z be a finite-alphabet iid source which is encoded as a sequence of n input symbols U^n of a discrete memoryless channel with capacity C. The output of the channel V^n is mapped onto the reconstruction alphabet \hat{Z}^n = C'(V^n). Let D(Z^n, \hat{Z}^n) be the average distortion achieved by this joint source-channel coding scheme. Then distortion D_t is achievable if and only if C > R^*(D_t). The proof follows the two-step approach described above and assumes a large block length (n → ∞). The result is important from a communication theoretic perspective, as a concatenation of a source code, which removes the redundancy and produces an iid uniform output at a rate R > R^*(D_t), and a channel code, which communicates this reliably over the noisy channel at a rate R < C, achieves the same fundamental limit.
3.2 Basic Information Theoretic Bounds We here consider crowdsourcing within the presented framework and derive basic information theoretic bounds. Following Section 2.1, we examine the case where a large dataset X (L → ∞) and a function of interest B(X), with associated probability mass function P(B(X)), are available. We consider the MSC worker pool model described in Section 2.2, where the skill levels of workers come from a discrete set E = {ϵ_1, ϵ_2, ..., ϵ_{W'}} with probability P(ϵ), ϵ ∈ E. The number of workers in each skill level class is assumed large. We here study the two scenarios of SL-UK and SL-CS. At any given instance, a query is posed to a random worker with a random skill level within the set E. We assume there is no feedback available from the decoder (non-adaptive coding) and that the queries do not influence the channel probabilities (no feedforward). Extensions remain for future work. The following theorem identifies the information theoretic minimum number of queries per item to perform at least as well as a target fidelity in case the skill levels are not known (SL-UK). The bound is oblivious to the type of code used and serves as an ultimate performance bound. Theorem 2 In crowdsourcing for a large dataset of an N-ary discrete source B(X) \sim P(B(X)) with Hamming distortion, when a large number of unknown workers with skill levels ϵ ∈ E, ϵ \sim P(ϵ), from an MSC population participate (SL-UK), the minimum number of queries per item to obtain an overall error probability of at most \hat{\epsilon} is given by
R_{\min} = \begin{cases} \frac{H(B(X)) - H_N(\hat{\epsilon})}{\log_2 M - H_M(E(\epsilon))} & \hat{\epsilon} \leq \min\{1 - p_{\max},\, 1 - \frac{1}{N}\} \\ 0 & \text{otherwise,} \end{cases}   (11)
in which H_N(\epsilon) \triangleq H(1 - \epsilon, \epsilon/(N-1), \dots, \epsilon/(N-1)), and p_{\max} = \max_{B(X) \in \mathcal{B}(X)} P(B(X)). The proof is provided in Appendix A. Another interesting scenario is when the crowdsourcer attempts to estimate the worker skill levels from the data it has collected as part of the inference.
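The SL-UK bound (11) is straightforward to evaluate numerically. The sketch below is our own, and it assumes that H_M denotes the same entropy functional as H_N with N replaced by M (the number of answer choices); the function names are illustrative.

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * log2(p) for p in probs if p > 0)

def H_N(eps, N):
    """H_N(eps) = H(1-eps, eps/(N-1), ..., eps/(N-1)) from Theorem 2."""
    return entropy([1 - eps] + [eps / (N - 1)] * (N - 1))

def rmin_sluk(source_probs, N, M, skill_dist, eps_hat):
    """Numerical sketch of the Theorem 2 bound (SL-UK, MSC workers):
    Rmin = (H(B(X)) - H_N(eps_hat)) / (log2 M - H_M(E[eps])) when
    eps_hat <= min(1 - p_max, 1 - 1/N), else 0. `skill_dist` maps each
    worker error rate eps to its probability."""
    p_max = max(source_probs)
    if eps_hat > min(1 - p_max, 1 - 1 / N):
        return 0.0
    mean_eps = sum(e * p for e, p in skill_dist.items())
    return (entropy(source_probs) - H_N(eps_hat, N)) / (log2(M) - H_N(mean_eps, M))

# Uniform binary labels, M = 2 answer choices, workers with error 0.1 or 0.3.
r = rmin_sluk([0.5, 0.5], N=2, M=2, skill_dist={0.1: 0.5, 0.3: 0.5}, eps_hat=0.05)
print(r > 1)  # several queries per item are needed at 5% target error
```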
In case this estimation is done perfectly, the next theorem identifies the corresponding fundamental limit on the crowdsourcing rate. The proof is provided in Appendix B. Theorem 3 In crowdsourcing for a large dataset of an N-ary discrete source B(X) \sim P(B(X)) with Hamming distortion, when a large number of workers with skill levels ϵ ∈ E, ϵ \sim P(ϵ), known to the crowdsourcer (SL-CS), from an MSC population participate, the minimum number of queries per item to obtain an overall error probability of at most \hat{\epsilon} is given by
R_{\min} = \begin{cases} \frac{H(B(X)) - H_N(\hat{\epsilon})}{\log_2 M - E(H_M(\epsilon))} & \hat{\epsilon} \leq \min\{1 - p_{\max},\, 1 - \frac{1}{N}\} \\ 0 & \text{otherwise.} \end{cases}   (12)
Comparing the results in Theorems 2 and 3, the following interesting observation can be made. In case the worker skill levels are unknown, the CS system provides the overall work quality (capacity) of an average worker, whereas when the skill levels are known at the crowdsourcer, the system provides an overall work quality corresponding to the average of the work quality of the workers. 4 k-ary Incidence Coding In this Section, we examine the performance of the k-ary incidence coding introduced in Section 2.3. The k-ary incidence code poses a query as a set of k ≥ 2 items and asks the workers to identify those with the same label. In the sequel, we begin by deriving a lower bound on the performance of kIC with a spammer-hammer worker pool. We then present numerical results along with the information theoretic lower bounds presented in the previous Section. 4.1 Performance of kIC with an SHC Worker Pool We consider kIC for crowdsourcing in the following setting. The items X in the dataset are iid with N = 2. There is no feedback from the decoder to the task manager (encoder), i.e., the code is non-adaptive. Since the task manager has no knowledge of the workers' skill levels, it queries the workers at the same fixed rate of R queries per item. To compose a query, the items are drawn uniformly at random from the dataset.
We assume that the workers are drawn from the SHC(q) model elaborated in Section 2.2. The purpose is to obtain a lower bound on the performance assuming an oracle decoder that can perfectly identify the workers' skill levels (here, spammer or hammer) and perform an optimal decoding. Specifically, we consider
\min_{C',\, C:\,\mathrm{kIC}} P(E : \hat{B}(X) \neq B(X))   (13)
where the minimization is with respect to the choice of a decoder for a given kIC code. We note that the code length is governed by how the decoder operates, and could often be as long as the dataset. As evident in (2), in the SHC model the channel error rate (worker reliability) is explicitly influenced by the code and the parameter k. In the model of Figure 1a, this implies that a certain static feedforward exists in this setting. We first present a lemma, which is then used to establish Theorem 4 on kIC performance. The proofs are provided in Appendix C and Appendix D, respectively. Lemma 1 In crowdsourcing for binary labeling (N = 2) of a uniformly distributed dataset, with kIC and an SHC worker pool, the probability of error in the labeling of an item by a spammer (C = S) is given by
\bar{\epsilon}_S = P(E : \hat{B}(X) \neq B(X) \mid C = S) = \begin{cases} \frac{1}{k 2^{k-1}} \sum_{i=0}^{\lfloor (k-1)/2 \rfloor} i \binom{k}{i} & k \text{ odd} \\ \frac{1}{k 2^{k-1}} \left[ \sum_{i=0}^{\lfloor (k-1)/2 \rfloor} i \binom{k}{i} + \frac{k}{4} \binom{k}{k/2} \right] & k \text{ even.} \end{cases}
Theorem 4 Assuming crowdsourcing using a non-adaptive kIC over a uniformly distributed dataset (k ≥ 2), if the number of queries per item, R, is less than \frac{1}{k \ln(1-q)} \ln \frac{\hat{\epsilon}}{\bar{\epsilon}_S}, then no decoder can achieve an average probability of labeling error less than \hat{\epsilon} for any L under the SHC(q) worker model. To interpret and use the result in Theorem 4, we consider the following points: (i) The theorem presents a necessary condition, i.e., the minimum rate (budget) requirement identified here for kIC with a given fidelity is a lower bound.
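Lemma 1 and the Theorem 4 rate threshold are both easy to compute. The sketch below is ours (illustrative names, directly transcribing the two formulas); note that for \hat{\epsilon} < \bar{\epsilon}_S both logarithms are negative, so the bound is positive.

```python
from math import comb, log

def spammer_item_error(k):
    """Lemma 1: probability that a spammer's uniform response to a kIC
    query mislabels a given item (binary labels, uniform dataset)."""
    s = sum(i * comb(k, i) for i in range((k - 1) // 2 + 1))
    if k % 2 == 0:
        s += (k / 4) * comb(k, k // 2)
    return s / (k * 2 ** (k - 1))

def kic_rate_bound(k, q, eps_hat):
    """Theorem 4: with fewer than ln(eps_hat / eps_S) / (k ln(1-q))
    queries per item, no decoder achieves error below eps_hat under
    the SHC(q) worker model."""
    return log(eps_hat / spammer_item_error(k)) / (k * log(1 - q))

for k in (2, 3, 4, 5):
    print(k, spammer_item_error(k))   # 0.25, 0.25, 0.3125, 0.3125
print(round(kic_rate_bound(3, 0.5, 0.05), 3))
```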
This is due to the fact that we are considering an oracle CS decoder that can perfectly identify the workers' skill levels and correctly label an item if the item is labeled by at least one hammer out of the R' times it is processed by the workers. (ii) In the current setting, where the taskmaster does not know the workers' skill levels, each item is included in exactly R' ∈ Z^+ k-ary queries; this is due to the nature of the code. (iii) As discussed in Appendix E, Theorem 4 can also be used to establish an approximate rule of thumb for pricing. Specifically, considering two query schemes k_1IC and k_2IC, the query price π is to be set as π(k_1)/π(k_2) ≈ k_1/k_2. 4.2 Numerical Results To obtain an information theoretic benchmark, the next corollary specializes Theorem 3 to the setting of interest in this Section. Corollary 1 In crowdsourcing for binary labeling of a uniformly distributed dataset with an SHC(q) worker pool known to the crowdsourcer (SL-CS), and with M choices in responding to a query, the minimum rate for any coding scheme to obtain a probability of error of at most \hat{\epsilon} is
R_{\min} = \begin{cases} \frac{1 - H_b(\hat{\epsilon})}{q \log_2 M} & 0 \leq \hat{\epsilon} \leq 0.5 \\ 0 & \text{otherwise} \end{cases} \quad \text{queries per item.}   (14)
Figure 2 shows the information theoretic limit of Corollary 1 and the bound obtained in Theorem 4. For rates (budgets) greater than the former bound, there exists a code which provides crowdsourcing with the desired fidelity; for rates below this bound, no such code exists. The coding theoretic lower bounds for kIC depend on k, q and the fidelity, and improve as k and q grow. The kIC bound for k = 1 is equivalent to the analysis leading to Lemma 1 of [8].
[Figure 2: kIC performance bound and the information theoretic limit.]
References
[1] I. Abraham, O. Alonso, V. Kandylas, and A. Slivkins. Adaptive crowdsourcing algorithms for the bandit survey problem. In 26th Conference on Learning Theory (COLT), 2013.
[2] Audubon. History of the Christmas Bird Count, 2015. URL http://birds.audubon.org.
[3] Caltech.
Community seismic network project, 2016. URL http://csn.caltech.edu.
[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New Jersey, USA, 2006.
[5] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society, Series C (Applied Statistics), 28(1):20–28, 1979.
[6] O. Dekel and O. Shamir. Vox populi: Collecting high-quality labels from a crowd. In Proceedings of the Twenty-Second Annual Conference on Learning Theory, June 2009.
[7] A. El Gamal and Y.-H. Kim. Network Information Theory. Cambridge University Press, New York, USA, 2011.
[8] D. R. Karger, S. Oh, and D. Shah. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research, 61(1):1–24, 2014.
[9] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, 2010.
[10] P. Smyth, U. Fayyad, M. Burl, P. Perona, and P. Baldi. Inferring ground truth from subjective labeling of Venus images. Advances in Neural Information Processing Systems, pages 1085–1092, 1995.
[11] R. Korlakai Vinayak, S. Oymak, and B. Hassibi. Graph clustering with missing data: Convex algorithms and analysis. In Advances in Neural Information Processing Systems 27, Montreal, Quebec, Canada, pages 2996–3004, 2014.
[12] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. Advances in Neural Information Processing Systems, 22(1):2035–2043, 2009.
[13] Y. Zhang, X. Chen, D. Zhou, and M. I. Jordan. Spectral methods meet EM: A provably optimal algorithm for crowdsourcing. In Advances in Neural Information Processing Systems 27, Montreal, Quebec, Canada, pages 1260–1268, 2014.
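For reference, the Corollary 1 benchmark (14) can be evaluated in a few lines; the sketch below is our own (H_b is the binary entropy).

```python
from math import log2

def hb(p):
    """Binary entropy H_b(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def rmin_shc(eps_hat, q, M):
    """Corollary 1: information-theoretic minimum queries per item for
    binary labeling with an SHC(q) pool known to the crowdsourcer:
    Rmin = (1 - H_b(eps_hat)) / (q log2 M) for eps_hat <= 0.5, else 0."""
    if not 0 <= eps_hat <= 0.5:
        return 0.0
    return (1 - hb(eps_hat)) / (q * log2(M))

# Half-hammer pool, M = 4 answer choices, 5% target error.
print(round(rmin_shc(0.05, q=0.5, M=4), 3))  # 0.714 queries per item
```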
Beyond Exchangeability: The Chinese Voting Process Moontae Lee Dept. of Computer Science Cornell University Ithaca, NY 14853 moontae@cs.cornell.edu Seok Hyun Jin Dept. of Computer Science Cornell University Ithaca, NY 14853 sj372@cornell.edu David Mimno Dept. of Information Science Cornell University Ithaca, NY 14853 mimno@cornell.edu Abstract Many online communities present user-contributed responses such as reviews of products and answers to questions. User-provided helpfulness votes can highlight the most useful responses, but voting is a social process that can gain momentum based on the popularity of responses and the polarity of existing votes. We propose the Chinese Voting Process (CVP) which models the evolution of helpfulness votes as a self-reinforcing process dependent on position and presentation biases. We evaluate this model on Amazon product reviews and more than 80 StackExchange forums, measuring the intrinsic quality of individual responses and behavioral coefficients of different communities. 1 Introduction With the expansion of online social platforms, user-generated content has become increasingly influential. Customer reviews in e-commerce like Amazon are often more helpful than editorial reviews [14], and question answers in Q&A forums such as StackOverflow and MathOverflow are highly useful for coders and researchers [9, 18]. Due to the diversity and abundance of user content, promoting better access to more useful information is critical for both users and service providers. Helpfulness voting is a powerful means to evaluate the quality of user responses (i.e., reviews/answers) by the wisdom of crowds. While these votes are generally valuable in aggregate, estimating the true quality of the responses is difficult because users are heavily influenced by previous votes. We propose a new model that is capable of learning the intrinsic quality of responses by considering their social contexts and momentum. 
Previous work in self-reinforcing social behaviors shows that although inherent quality is an important factor in overall ranking, users are susceptible to position bias [12, 13]. Displaying items in an order affects users: top-ranked items gain popularity, while low-ranked items remain in obscurity. We find that sensitivity to order also differs across communities: some value a range of opinions, while others prefer a single authoritative answer. Summary information displayed together can lead to presentation bias [19]. As the current voting scores are visibly presented with responses, users inevitably perceive the score before reading the contents of responses. Such exposure could immediately nudge user evaluations toward the majority opinion, making high-scored responses more attractive. We also find that the relative length of each response affects the polarity of future votes.

Res   Votes          Diff   Ratio   Relative Quality
1     + + + − − −    0      0.5     quite negative
2     + − + − + −    0      0.5     moderately negative
3     − + − + − +    0      0.5     moderately positive
4     − − − + + +    0      0.5     quite positive

Table 1: Quality interpretation for each sequence of six votes.

Standard discrete models for self-reinforcing processes include the Chinese Restaurant Process and the Pólya urn model. Since these models are exchangeable, the order of events does not affect the probability of a sequence. However, Table 1 suggests how different contexts of votes cause different impacts. While the four sequences have equal numbers of positive and negative votes in aggregate, the fourth votes in the first and last responses are given against a clear majority opinion. Our model treats objection as a more challenging decision, thereby deserving higher weight. In contrast, the middle two sequences receive alternating votes. As each vote is a relatively weaker disagreement, the underlying quality is moderate compared to the other two responses.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Furthermore, if these are responses to one item, the order between them also matters. If the initial three votes on the fourth response pushed its display position to the next page, for example, it might never get the future votes that would recover its reputation. The Chinese Voting Process (CVP) models the generation of responses and votes, formalizing the evolution of helpfulness under positional and presentational reinforcement. Whereas most previous work on helpfulness prediction [7, 5, 8, 4, 11, 10, 15] has involved a single snapshot, the CVP estimates the intrinsic quality of responses solely from selection and voting trajectories over multiple snapshots. The resulting model shows significant improvements in predictive probability for helpfulness votes, especially in the critical early stages of a trajectory. We find that the CVP-estimated intrinsic quality ranks responses better than the existing system rank, correlating more consistently with the sentiment of comments associated with each response. Finally, we qualitatively compare different characteristics of self-reinforcing behavior between communities using two learned coefficients: Trendiness and Conformity. The two-dimensional embedding in Figure 1 characterizes different opinion dynamics from Judaism to Javascript (in StackOverflow).

Figure 1: 2D community embedding. Each of 83 communities is represented by two behavioral coefficients (Trendiness, Conformity). Eleven clusters are grouped based on their common focus. The MEAN community is synthesized by sampling 20 questions from every community (except Amazon, due to the different user interface).

Related work. There is strong evidence that helpfulness voting is socially influenced. Helpfulness ratings on Amazon product reviews differ significantly from independent human annotators [8]. Votes are generally more positive, and the number of votes decreases exponentially based on displayed page position.
Review polarity is biased towards matching the consensus opinion [4]: when two reviews contain essentially the same text but differ in star rating, the review closer to the consensus star rating is considered more helpful. There is also evidence that users vote strategically to correct perceived mismatches in review rank [16]. Many studies have attempted to predict helpfulness given review-content features [7, 5, 11, 10, 15]. Each of these examples predicts helpfulness based on text, star-ratings, sales, and badges, but only at a single snapshot. Our work differs in two ways. First, we combine data on Amazon helpfulness votes from [16] with a much larger collection of helpfulness votes from 82 StackExchange forums. Second, instead of considering text-based features (which we hold out for evaluation) within a single snapshot, we attempt to predict the next vote at each stage based on the previous voting trajectory over multiple snapshots, without considering textual contents.

2 The Chinese Voting Process

Our goal is to model helpfulness voting as a two-phase self-reinforcing stochastic process. In the selection phase, each user either selects an existing response based on their positions or writes a new response. The positional reinforcement is inspired by the Chinese Restaurant Process (CRP) and Distance Dependent Chinese Restaurant Process (ddCRP). In the voting phase, when one response is selected, the user chooses one of two feedback options: a positive or negative vote, based on the intrinsic quality and the presentational factors. The presentational reinforcement is modeled by a log-linear model with time-varying features based on the Pólya urn model. The CVP implements rich-get-richer dynamics as an interplay of these two preferential reinforcements, learning latent qualities of individual responses as motivated by Table 1. Specifically, each user at time t interested in the item i follows the generative story in Table 2.
Generative process (user at time t, item i):
1. Evaluate the j-th response: $p(z_i^{(t)} = j \mid z_i^{(1:t-1)}; \alpha) \propto f_i^{(t-1)}(j)$
   (a) 'Yes': $p(v_i^{(t)} = 1 \mid \theta) = \mathrm{logit}^{-1}\big(q_{ij} + g_i^{(t-1)}(j)\big)$
   (b) 'No': $p(v_i^{(t)} = 0 \mid \theta) = 1 - p(v_i^{(t)} = 1 \mid \theta)$
2. Or write a new response: $p(z_i^{(t)} = J_i + 1 \mid z_i^{(1:t-1)}; \alpha) \propto \alpha$
   (a) Sample $q_{i(J+1)}$ from $N(0, \sigma^2)$.

Sample parametrization (Amazon):
$f_i^{(t)}(j) = \Big(\frac{1}{1 + \text{display-rank}_i^{(t)}(j)}\Big)^{\tau}$, $\quad g_i^{(t)}(j) = \lambda r_{ij}^{(t)} + \mu s_{ij}^{(t)} + \nu_i u_{ij}^{(t)}$, $\quad \theta = \{\{q_{ij}\}, \lambda, \mu, \{\nu_i\}\}$, $\quad J_i = J_i^{(t-1)}$ (abbreviated notation).

Table 2: The generative story and the parametrization of the Chinese Voting Process (CVP).

2.1 Selection phase

The CRP [1, 2] is a self-reinforcing decision process over an infinite discrete set. For each item (product/question) i, the first user writes a new response (review/answer). The t-th subsequent user can choose an existing response j out of J_i^(t−1) possible responses with probability proportional to the number of votes n_j^(t−1) given to the response j by time t − 1, whereas the probability of writing a new response J_i^(t−1) + 1 is proportional to a constant α. While the CRP models self-reinforcement (each vote for a response makes that response more likely to be selected later), there is evidence that the actual selection rate in an ordered list decays with display rank [6]. Since such rankings are mechanism-specific and not always clearly known in advance, we need a more flexible model that can specify various degrees of positional preference. The ddCRP [3] introduces a function f that decays with respect to some distance measure. In our formulation, the distance function varies over time and is further configurable with respect to the specific interface of service providers. Specifically, the function f_i^(t)(j) in the CVP evaluates the popularity of the j-th response in the item i at time t. Since we assume that the popularity of responses is decided by their positional accessibility, we can parametrize f to be inversely proportional to their display ranks.
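The selection phase then amounts to normalizing the popularity scores f against the new-response mass α. A minimal sketch of these probabilities (our own function names, not the authors' code):

```python
def selection_probs(display_ranks, tau, alpha):
    """Selection-phase probabilities of the CVP (a sketch of the model above).

    display_ranks: current display rank of each existing response (1 = top).
    Returns (probabilities of picking each response, probability of writing
    a new response). The popularity f(j) = (1 / (1 + rank(j)))**tau decays
    harmonically with display rank; larger tau favors top-ranked responses.
    """
    f = [(1.0 / (1.0 + r)) ** tau for r in display_ranks]
    z = alpha + sum(f)
    return [fj / z for fj in f], alpha / z
```

With tau > 0 the top-ranked response is the most likely pick; a negative tau inverts the preference, matching the discussion of users who favor low-ranked responses.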
The exponent τ determines sensitivity to popularity in the selection phase by controlling the degree of harmonic penalization over ranks. Larger τ > 0 indicates that users are more sensitive to trendy responses displayed near the top. If τ < 0, users often select low-ranked responses over high-ranked ones (footnote 1). Note that even if the user at time t does not vote on the j-th response, f_i^(t)(j) could differ from f_i^(t−1)(j) in the CVP (footnote 2), whereas n_ij^(t) = n_ij^(t−1) in the CRP. Thus one can view the selection phase of the CVP as a non-exchangeable extension of the CRP via a time-varying function f.

2.2 Voting phase

We next construct a self-reinforcing process for the inner voting phase. The Pólya urn model is a self-reinforcing decision process over a finite discrete set, but because it is exchangeable, it is unable to capture contextual information encoded in each sequence of votes. We instead use a log-linear formulation with urn-based features, allowing other presentational features to be flexibly incorporated based on the modeler's observations. Each response initially has x = x^(0) positive and y = y^(0) negative votes, which could be fractional pseudo-votes. For each draw of a vote, we return w + 1 votes with the same polarity, thus self-reinforcing when w > 0. Table 3 shows the time-evolving positive/negative ratios r_j^(t) = x_j^(t)/(x_j^(t) + y_j^(t)) and s_j^(t) = y_j^(t)/(x_j^(t) + y_j^(t)) of the first two responses j ∈ {1, 2} in Table 1, with the corresponding ratio gain ∆_j^(t) = r_j^(t) − r_j^(t−1) (if v_j^(t) = 1, i.e., +) or s_j^(t) − s_j^(t−1) (if v_j^(t) = 0, i.e., −).

Footnote 1: This sometimes happens, especially in the early stage when only a few responses exist.
Footnote 2: Say the rank of another response j′ was lower than j's at time t − 1. If the t-th vote, given to the response j′, raises its rank above the rank of the response j, then f_i^(t)(j) < f_i^(t−1)(j), assuming τ > 0.
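The urn update behind these ratios can be sketched in a few lines. With the pseudo-votes (x, y, w) = (1, 1, 1) used in Table 3, the sketch below (function name ours) reproduces the ratio columns of that table:

```python
def urn_trajectory(votes, x0=1.0, y0=1.0, w=1.0):
    """Track the Polya-urn ratio features (r, s) for one response.

    votes: sequence of booleans (True = positive vote). Each vote adds w
    pseudo-votes of its own polarity, so with (x0, y0, w) = (1, 1, 1) the
    sequence '+ + + - - -' yields r = 2/3, 3/4, 4/5, then s = 2/6, 3/7, 4/8.
    """
    x, y, history = x0, y0, []
    for v in votes:
        if v:
            x += w
        else:
            y += w
        history.append((x / (x + y), y / (x + y)))  # (r_t, s_t)
    return history
```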
t    v1   r1    s1    ∆1      q1^T     v2   r2    s2    ∆2      q2^T
0         1/2   1/2                         1/2   1/2
1    +    2/3   1/3   0.167   −        +    2/3   1/3   0.167   −
2    +    3/4   1/4   0.083   0.363    −    2/4   2/4   0.167   −0.363
3    +    4/5   1/5   0.050   0.574    +    3/5   2/5   0.100   0.004
4    −    4/6   2/6   0.133   0.237    −    3/6   3/6   0.100   −0.230
5    −    4/7   3/7   0.095   0.004    +    4/7   3/7   0.071   0.007
6    −    4/8   4/8   0.071   −0.175   −    4/8   4/8   0.071   −0.166

Table 3: Change of the quality estimate q_j over time for the first two example responses in Table 1 with the initial pseudo-votes (x, y, w) = (1, 1, 1). The estimated quality of the first response sharply decreases when receiving the first majority-against vote at t = 4. The first response ends up being more negative than the second, even though they receive the same number of votes in aggregate. These non-exchangeable behaviors cannot be modeled with a simple exchangeable process.

In this toy setting, the polarity of a vote on a response is an outcome of its intrinsic quality as well as presentational factors: positive and negative votes. Thus we model each sequence of votes by ℓ2-regularized logistic regression with the latent intrinsic quality and the Pólya urn ratios (footnote 3).

$$\max_{\theta} \; \log \prod_{t=2}^{T} \mathrm{logit}^{-1}\!\big( q_j^T + \lambda r_j^{(t-1)} + \mu s_j^{(t-1)} \big) \; - \; \frac{1}{2}\|\theta\|_2^2 \qquad \text{where } \theta = \big( q_j^T, \lambda, \mu \big) \tag{1}$$

The {q_j^T} in Table 3 show the results from solving (1) up to the T-th vote for each j ∈ {1, 2}. The initial vote given at t = 1 is disregarded in training due to its arbitrariness under the uniform prior (x_0 = y_0). Since it is quite possible to have only positive or only negative votes, Gaussian regularization is necessary. Note that using the urn-based ratio features is essential to encode contextual information. If we instead use raw count features (only the numerators of r_j and s_j), for example in the first response, the estimated quality q_1^T keeps increasing even after getting negative votes from time 4 to 6. Log raw count features are unable to infer the negative quality.
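Equation (1) is a small ℓ2-regularized logistic regression, so it can be solved by plain gradient ascent. The sketch below is our own minimal solver (not the paper's implementation), treating each vote as a Bernoulli observation given the preceding urn ratios:

```python
import math

def fit_quality(votes, ratios, reg=1.0, lr=0.1, steps=2000):
    """Fit eq. (1) by gradient ascent: l2-regularized logistic regression of
    each vote on the urn ratios (r, s) plus an intercept q, interpreted as
    the response's intrinsic quality. A sketch, not the authors' solver.

    votes:  list of 0/1 votes for t = 2..T
    ratios: list of (r, s) pairs observed just before each vote
    """
    q = lam = mu = 0.0
    for _ in range(steps):
        gq = glam = gmu = 0.0
        for v, (r, s) in zip(votes, ratios):
            p = 1.0 / (1.0 + math.exp(-(q + lam * r + mu * s)))
            err = v - p          # gradient of the Bernoulli log-likelihood
            gq += err
            glam += err * r
            gmu += err * s
        # subtract the gradient of the (1/2)||theta||^2 penalty
        q += lr * (gq - reg * q)
        lam += lr * (glam - reg * lam)
        mu += lr * (gmu - reg * mu)
    return q, lam, mu
```

On a uniformly positive (or negative) vote sequence the regularized intercept settles at a moderate positive (or negative) value, mirroring how Gaussian regularization keeps q finite in the one-sided cases discussed above.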
In the first response, ∆_1^(t) shows the decreasing gain in positive ratios from t = 1 to 3 and in negative ratios from t = 4 to 6, whereas it gains a relatively large momentum at the first negative vote at t = 4. ∆_2^(t) converges to 0 in the second response, implying that future votes have less effect than earlier votes for alternating +/− votes. q_2^T also converges to 0, as we expect neutral quality in the limit. Overall the model is capable of learning intrinsic quality as desired in Table 1, where relative gains can be further controlled by tuning the initial pseudo-votes (x, y). In the real setting, the polarity score function g_i^(t)(j) in the CVP evaluates presentational factors of the j-th response in the item i at time t. Because we adopt a log-linear formulation, one can easily add additional information about responses. In addition to the positive ratio r_ij^(t) and the negative ratio s_ij^(t), g also contains a length feature u_ij^(t) (as given in Table 2), which is the relative length of the response j against the average length of responses in the item i at a particular time t. Users in some items may prefer shorter responses for brevity, whereas users in other items may blindly believe that longer responses are more credible before reading their contents. The parameter ν_i explains length-wise preferential idiosyncrasy as a per-item bias: ν_i < 0 means a preference toward shorter responses. Note that g_i^(t)(j) could differ from g_i^(t−1)(j) even if the user at time t does not choose to vote (footnote 4). All together, the voting phase of the CVP generates non-exchangeable votes.

3 Inference

Each phase of the CVP depends on the result of all previous stages, so decoupling these related problems is crucial for efficient inference. We need to estimate community-level parameters, item-level length preferences, and response-level intrinsic qualities. The graphical model of the CVP and the corresponding parameters to estimate are illustrated in Table 4.
We further compute two community-level behavioral coefficients, Trendiness and Conformity, which are useful summary statistics for exploring different voting patterns and explaining macro characteristics across different communities.

Footnote 3: One might think (1) can be equivalently achieved with only two parameters because r_j^(t) + s_j^(t) = 1 for all t. However, such a reparametrization adds inconsistent translations to q_j^T and makes it difficult to interpret different inclinations between positive and negative votes for various communities.
Footnote 4: If a new response is written at time t, u_ij^(t) ≠ u_ij^(t−1), as the new response changes the average length.

Process parameters:
α: hyper-parameter for response growth
σ²: hyper-parameter for quality variance
τ: community-level sensitivity to popularity
λ: community-level preference for positive ratio
µ: community-level preference for negative ratio
ν_i: item-level preference for response length
q_ij: response-level hidden intrinsic quality
m: # of items (e.g., products/questions)
J_i: # of responses of item i (e.g., reviews/answers)

Table 4: Graphical model and parameters for the CVP. Only three time steps are unrolled for visualization.

Parameter inference. The goal is to infer the parameters θ = {{q_ij}, λ, µ, {ν_i}}. We sometimes use f and g instead to compactly indicate the parameters associated with each function. The likelihood of one CVP step in the item i at time t is

$$L_i^{(t)}(\tau, \theta; \alpha, \sigma) = \left\{ \frac{\alpha}{\alpha + \sum_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)} \, N\big(q_{i,z_i^{(t)}}; 0, \sigma^2\big) \right\}^{[z_i^{(t)} = J_i^{(t-1)}+1]} \left\{ \frac{f_i^{(t-1)}(z_i^{(t)})}{\alpha + \sum_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)} \, p\big(v_i^{(t)} \mid q_{i,z_i^{(t)}}, g_i^{(t-1)}(j)\big) \right\}^{[z_i^{(t)} \le J_i^{(t-1)}]}$$

where the two terms correspond to writing a new response and selecting an existing response to vote. The fractions in each term respectively indicate the probability of writing a new response and choosing existing responses in the selection phase.
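The two cases of this per-step likelihood translate directly into code. A sketch (our own interface, not the authors') of the log of L_i^(t) for a single observed step:

```python
import math

def step_log_likelihood(f_vals, alpha, choice, vote_prob=None, q_new=None, sigma=1.0):
    """Log-likelihood of one CVP step, following the two-case form above.

    f_vals: popularity scores f of the existing responses.
    choice: index of the selected response, or None for writing a new response.
    vote_prob: model probability of the observed vote (when choice is not None).
    q_new: sampled quality of the new response (when choice is None).
    """
    z = alpha + sum(f_vals)
    if choice is None:
        # write a new response: selection mass times Gaussian quality density
        gauss = math.exp(-q_new ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)
        return math.log(alpha / z) + math.log(gauss)
    # choose an existing response: selection mass times vote probability
    return math.log(f_vals[choice] / z) + math.log(vote_prob)
```

Summing this quantity over all items and time steps gives the full log-likelihood that is split into the selection and voting pieces below.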
The other two probability expressions in each term describe quality sampling from a normal distribution and the logistic regression in the voting phase. It is important to note that our setting differs from many CRP-based models. The CRP is typically used to represent a non-parametric prior over the choice of latent cluster assignments that must themselves be inferred from noisy observations. In our case, the result of each choice is directly observable because we have the complete trajectory of helpfulness votes. As a result, we only need to infer the continuous parameters of the process, and not combinatorial configurations of discrete variables. Since we know the complete trajectory, where the rank inside the function f is part of the true observations, we can view each vote as an independent sample. Denoting the last timestamp of the item i by T_i, the log-likelihood becomes $\ell(\tau, \theta; \alpha, \sigma) = \sum_{i=1}^{m} \sum_{t=1}^{T_i} \log L_i^{(t)}$ and is further separated into two pieces:

$$\ell_v(\theta; \sigma) = \sum_{i=1}^{m} \sum_{t=1}^{T_i} \Big\{ [\text{write}] \cdot \log N\big(q_{i,z_i^{(t)}}; 0, \sigma^2\big) + [\text{choose}] \cdot \log p\big(v_i^{(t)} \mid q_{i,z_i^{(t)}}, g_i^{(t-1)}(j)\big) \Big\}, \tag{2}$$

$$\ell_s(\tau; \alpha) = \sum_{i=1}^{m} \sum_{t=1}^{T_i} \Big\{ [\text{write}] \cdot \log \frac{\alpha}{\alpha + \sum_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)} + [\text{choose}] \cdot \log \frac{f_i^{(t-1)}(z_i^{(t)})}{\alpha + \sum_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)} \Big\}.$$

Inferring a whole trajectory based only on the final snapshots would likely be intractable for a non-exchangeable model. Due to the continuous interaction between f and g at every time step, small mis-predictions in the earlier stages would cause entirely different configurations. Moreover, the rank function inside f is in many cases site-specific (footnote 5). It is therefore vital to observe the full trajectories of the random variables {z_i^(t), v_i^(t)}: decoupling f and g reduces the inference problem to estimating parameters separately for the selection phase and the voting phase. Maximizing ℓ_v can be efficiently solved by ℓ2-regularized logistic regression, as demonstrated for (1).
If the hyper-parameter α is fixed, maximizing ℓ_s becomes a convex optimization because τ appears in both the numerator and the denominator. Since the gradient for each parameter in θ is straightforward, we only include the gradient of ℓ_{s,i}^(t) for a particular item i at time t with respect to τ; then ∂ℓ_s/∂τ = Σ_{i=1}^m Σ_{t=1}^{T_i} ∂ℓ_{s,i}^(t)/∂τ, where

$$\frac{\partial \ell_{s,i}^{(t)}}{\partial \tau} = \frac{1}{\tau}\left\{ [z_i^{(t)} \le J_i^{(t-1)}] \cdot \frac{f_i^{(t-1)}(z_i^{(t)}) \log f_i^{(t-1)}(z_i^{(t)})}{f_i^{(t-1)}(z_i^{(t)})} - \frac{\sum_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j) \log f_i^{(t-1)}(j)}{\alpha + \sum_{j=1}^{J_i^{(t-1)}} f_i^{(t-1)}(j)} \right\} \tag{3}$$

Footnote 5: We generally know that Amazon decides the display order by the portion of positive votes and the total number of votes on each response, but the relative weights between them are not known. We do not know how StackExchange forums break ties, which matters greatly in the early stages of voting.

                  Selection      Voting                                                    Residual      Bumpiness
Community         CRP    CVP    q_ij   λ     ν_i   q_ij,λ q_ij,ν_i λ,ν_i  Full    Rank   Qual   Rank   Qual
SOF(22925)        2.152  1.989  .107  .103  .108  .100   .106    .100   .096    .005   .003   .080   .038
math(6245)        1.841  1.876  .071  .064  .067  .062   .066    .060   .059    .014   .008   .280   .139
english(5242)     1.969  1.924  .160  .146  .152  .141   .147    .137   .135    .018   .007   .285   .149
mathOF(2255)      1.992  1.910  .049  .046  .049  .045   .047    .046   .045    .009   .007   .185   .119
physics(1288)     1.824  1.801  .174  .155  .166  .150   .156    .146   .142    .032   .014   .497   .273
stats(598)        1.889  1.822  .051  .044  .048  .043   .046    .042   .042    .030   .019   .613   .347
judaism(504)      2.039  1.859  .135  .124  .132  .121   .125    .118   .116    .046   .018   .875   .403
amazon(363)       2.597  2.261  .266  .270  .262  .254   .243    .253   .240    .023   .016   .392   .345
meta.SOF(294)     1.411  1.575  .261  .241  .270  .229   .243    .232   .225    .018   .013   .281   .255
cstheory(279)     1.893  1.795  .052  .040  .053  .039   .049    .039   .038    .032   .029   .485   .553
cs(123)           1.825  1.780  .128  .100  .118  .099   .113    .097   .096    .069   .040   .725   .673
linguistics(107)  1.993  1.789  .133  .127  .130  .122   .123    .120   .116    .074   .038   .778   .656
AVERAGE           2.050  1.945  .109  .103  .108  .099   .105    .098   .095    .011   .006   .186   .101

Table 5: Predictive analysis on the first 50
votes: In the selection phase, the CVP shows better negative log-likelihood in almost all forums. In the voting phase, the full model shows better negative log-likelihood than all subsets of features. Quality analysis at the final snapshot: smaller residuals and bumpiness show that the order based on the estimated quality q_ij correlates more coherently with the average sentiments of the associated comments than the order by display rank. (SOF = StackOverflow, OF = Overflow, rest = Exchange; Blue: p ≤ 0.001, Green: p ≤ 0.01, Red: p ≤ 0.05)

Behavioral coefficients. To succinctly measure overall voting behaviors across different communities, we propose two community-level coefficients. Trendiness indicates the sensitivity to positional popularity in the selection phase. While the community-level τ parameter renders Trendiness simply, to avoid overly-complicated models, one can easily extend the CVP with a per-item τ_i to better fit the data. In that case, Trendiness would be a summary statistic for {τ_i}. Conformity captures users' receptiveness to the prevailing polarity in the voting phase. To count every single vote, we define Conformity κ to be a geometric mean of odds ratios between majority-following votes and majority-disagreeing votes. Let V_i be the set of time steps when users vote rather than write responses in the item i, and let n be the total number of votes across all items in the target community. Then Conformity is defined as

$$\kappa = \left( \prod_{i=1}^{m} \prod_{t \in V_i} \left( \frac{P(v_i^{(t+1)} = 1 \mid q_{i,z_i^{(t+1)}}^{t}, \lambda^t, \mu^t, \nu_i^t)}{P(v_i^{(t+1)} = 0 \mid q_{i,z_i^{(t+1)}}^{t}, \lambda^t, \mu^t, \nu_i^t)} \right)^{h_i^{(t)}} \right)^{1/n} \quad \text{where } h_i^{(t)} = \begin{cases} 1 & n_{ij}^{+(t)} \ge n_{ij}^{-(t)} \\ -1 & n_{ij}^{+(t)} < n_{ij}^{-(t)} \end{cases}$$

To compute Conformity κ, we need to learn θ^t = {q_ij^t, λ^t, µ^t, ν_i^t} for each t, which is the set of parameters learned on the data only up to time t. This is because the user at time t cannot see any votes given later than time t. Note that θ^{t+1} can be efficiently learned by warm-starting at θ^t.
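Given an already-fitted sequence of models, Conformity reduces to a geometric mean of signed odds ratios. A sketch under our own simplified interface (each event supplies the model's positive-vote probability and the vote counts visible to the voter; names are ours):

```python
import math

def conformity(vote_events):
    """Conformity coefficient as a geometric mean of odds ratios.

    vote_events: list of (p_pos, n_pos, n_neg), where p_pos is the fitted
    model's probability of a positive next vote and n_pos/n_neg are the
    positive/negative counts the voter saw. The exponent h is +1 when
    positives hold the majority (ties included), else -1, so votes cast
    with the majority raise the coefficient and votes against it lower it.
    """
    log_sum = 0.0
    for p_pos, n_pos, n_neg in vote_events:
        h = 1.0 if n_pos >= n_neg else -1.0
        log_sum += h * math.log(p_pos / (1.0 - p_pos))
    return math.exp(log_sum / len(vote_events))
```

A community whose fitted model consistently assigns high probability to the majority side thus gets κ well above 1, while κ near 1 indicates indifference to the prevailing polarity.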
In addition, while positive votes are mostly dominant in the end, the dominant mood up to time t could be negative exactly when the user at time t + 1 tries to vote. In this case, h_i^(t) becomes −1, inverting the fraction to be the ratio of following the majority against the minority. By summarizing the learned parameters in terms of the two coefficients (τ, κ), we can compare different selection/voting behaviors across various communities.

4 Experiments

We evaluate the CVP on product reviews from Amazon and 82 issue-specific forums from the StackExchange network. The Amazon dataset [16] originally consisted of 595 products with daily snapshots of writing/voting trajectories from Oct 2012 to Mar 2013. After eliminating duplicate products (footnote 6) and products with fewer than five reviews or fragmented trajectories (footnote 7), 363 products are left. For the StackExchange dataset (footnote 8), we filter out questions from each community with fewer than five answers besides the answer chosen by the question owner (footnote 9). We drop communities with fewer than 100 questions after pre-processing. Many of these are "Meta" forums where users discuss policies and logistics for their original forums.

Footnote 6: Different seasons of the same TV shows have different ASIN codes but share the same reviews.
Footnote 7: If the number of total votes between the last snapshot of the early fragment and the first snapshot of the later fragment is less than 3, we fill in the missing information simply with the last snapshot of the earlier fragment.
Footnote 8: Dataset and statistics are available at https://archive.org/details/stackexchange.
Footnote 9: The answer selected by the question owner is displayed first regardless of voting scores.

Figure 2: Comment and likelihood analysis on the StackOverflow forum. The left panels show that responses with higher ranks tend to have more comments (top) and more positive sentiments (bottom). The middle panels show that responses have more comments at both high and low intrinsic quality q_ij (top).
The corresponding sentiment correlates more cohesively with the quality score (bottom). Each blue dot is approximately an average over 1k responses, and we parse 337k comments given on 104k responses in total. The right panels show predictive power for the selection phase (top) and the voting phase (bottom) up to t < 50 (lower is better).

Predictive analysis. In each community, our prediction task is to learn the model up to time t and predict the action at t + 1. We align all items at their initial time steps and compute the average negative log-likelihood of the next actions based on the current model. Since the complete trajectory enables us to separate the selection and voting phases in inference, we also measure the predictive power of these two tasks separately against their own baselines. For the selection phase, the baseline is the CRP, which selects responses proportional to the number of accumulated votes or writes a new response with probability proportional to α (footnote 10). When t < 50, as shown in the first column of Table 5, the CVP significantly outperforms the CRP based on paired t-tests (two-tailed). Using the function f based on display rank and the Trendiness parameter τ is indeed a more precise representation of positional accessibility. Especially in the early stages, users often select responses displayed at lower ranks with fewer votes. While the CRP has no ability to assign high scores in these cases, the CVP properly models them by decreasing τ. The comparative advantage of the CVP declines as more votes become available and the correlation between display rank and the number of votes increases. For items with t ≥ 50, there is no significant difference between the two models, as exemplified in the third column of Figure 2. These results are coherent across other communities (p > 0.07). Improving predictive power on the voting phase is difficult because positive votes dominate in every community.
We compare the fully parametrized model to simpler partial models in which certain parameters are set to zero. For example, a model with all parameters but λ knocked out is comparable to a plain Pólya urn. As illustrated in the second column of Table 5, we verify that every sub-model is significantly different from the full model in all major communities based on a one-way ANOVA test, implying that each feature adds distinctive and meaningful information. Having the item-specific length bias ν_i provides significant improvements, as does having the intrinsic quality q_ij and the current opinion counts λ. While we omit the log-likelihood results with t ≥ 50, every model predicts true polarity better when t ≥ 50, because the log-linear model obtains a more robust estimate of community-level parameters as it acquires more training samples.

Quality analysis. The primary advantage of the CVP is its ability to learn an "intrinsic quality" for each response that filters out noise from self-reinforcing voting processes. We validate these scores by comparing them to another source of user feedback: both StackExchange and Amazon allow users to attach comments to responses along with votes. For each response, we record the number of comments and the average sentiment of those comments as estimated by [17]. As a baseline, we also calculate the final display rank of each response, which we convert to a z-score to make it more comparable to the quality scores q_ij. After sorting responses based on display rank and quality rank, we measure the association between the two rankings and comment sentiment with linear regression. Results are shown for StackOverflow in Figure 2. As expected, highly-ranked responses have more comments, but we also find that there are more comments for both high and low values of intrinsic quality.

Footnote 10: We fix α to 0.5 after searching over a wide range of values.
Both better display rank and higher quality score q_ij are clearly associated with more positive comments (slope ∈ [0.47, 0.64]), but the residuals of quality rank (0.012) are on average less than half the residuals of display rank (0.028). In addition, we also calculate the "bumpiness" of these plots by computing the mean variation of two consecutive slopes between each adjacent pair of data points. Quality rank reduces the bumpiness of display rank from 0.391 to 0.226 on average, implying that the estimated intrinsic quality yields a locally as well as globally consistent ranking (footnote 11).

Figure 3: Sub-community embedding for StackOverflow.

Community analysis. The 2D embedding in Figure 1 shows that we can compare and contrast the different evaluation cultures of communities using the two inferred behavioral coefficients: Trendiness τ and Conformity κ. Communities are sized according to the number of items and colored based on a manual clustering. Related communities collocate in the same neighborhood. Religion, scholarship, and meta-discussions cluster towards the bottom left, where users are interested in many different opinions and are happy to disagree with each other. Going from left to right, communities become more trendy: users in trendier communities tend to select and vote mostly on already highly-ranked responses. Going from bottom to top, users become increasingly likely to conform to the majority opinion on any given response. By comparing related communities we can observe that the characteristics of user communities determine voting behavior more than technical similarity. Highly theoretical and abstract communities (cstheory) have low Trendiness but high Conformity. More applied, but still graduate-level, communities in similar fields (cs, mathoverflow, stats) show less Conformity but greater Trendiness. Finally, more practical homework-oriented forums (physics, math) are even more trendy. In contrast, users in english are trendy but argumentative.
Users in Amazon are most sensitive to trendy reviews and least afraid of voicing minority opinions. StackOverflow is by far the largest community, and it is reasonable to wonder whether the Trendiness parameter is simply a proxy for size. When we subdivide StackOverflow by programming languages, however (see Figure 3), individual community averages can be distinguished, but they all remain in the same region. Javascript programmers are more satisfied with trendy responses than those using c/c++. Mobile developers tend to be more conformist, while Perl hackers are more likely to argue.

5 Conclusions

Helpfulness voting is a powerful tool to evaluate user-generated responses such as product reviews and question answers. However, such votes can be socially reinforced by positional accessibility and existing evaluations by other users. In contrast to many exchangeable random processes, the CVP takes into account sequences of votes, assigning different weights based on the context in which each vote was cast. Instead of trying to model the response ordering function f, which is mechanism-specific and often changes based on service providers' strategies, we leverage the fully observed trajectories of votes, estimating the hidden intrinsic quality of each response and inferring two behavioral coefficients for community-level exploration. The proposed log-linear urn model is capable of generating non-exchangeable votes and scales readily to incorporate other factors such as length bias or other textual features. As we become more able to observe social interactions as they occur, and not just summarized after the fact, we will increasingly be able to use models beyond exchangeability.

Footnote 11: All numbers and p-values in these paragraphs are weighted averages over all 83 communities, whereas Table 5 only includes results for the major communities and their own weighted averages due to space limits.

References
[1] D. J. Aldous. Exchangeability and related topics.
In École d’Été St Flour 1983, pages 1–198. SpringerVerlag, 1985. [2] D. Blei, T. Griffiths, M. Jordan, and J. Tenenbaum. Hierarchical topic models and the nested chinese restaurant process. In Advances in Neural Information Processing System, NIPS ’03, 2003. [3] D. M. Blei and P. I. Frazier. Distance dependent chinese restaurant processes. Journal of Machine Learning Learning Research, pages 2461–2488, 2011. [4] C. Danescu-Niculescu-Mizil, G. Kossinets, J. Kleinberg, and L. Lee. How opinions are received by online communities: A case study on Amazon.Com helpfulness votes. In Proceedings of World Wide Web, WWW ’09, pages 141–150, 2009. [5] A. Ghose and P. G. Ipeirotis. Designing novel review ranking systems: Predicting the usefulness and impact of reviews. In Proceedings of the Ninth International Conference on Electronic Commerce, ICEC ’07, pages 303–310, 2007. [6] T. Joachims, L. Granka, B. Pan, H. Hembrooke, F. Radlinski, and G. Gay. Evaluating the accuracy of implicit feedback from clicks and query reformulations in web search. ACM Transactions on Information Systems, 25(2), 2007. [7] S.-M. Kim, P. Pantel, T. Chklovski, and M. Pennacchiotti. Automatically assessing review helpfulness. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP ’06, 2006. [8] J. Liu, Y. Cao, C.-Y. Lin, Y. Huang, and M. Zhou. Low-quality product review detection in opinion summarization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’07, pages 334–342, 2007. [9] L. Mamykina, B. Manoim, M. Mittal, G. Hripcsak, and B. Hartmann. Design lessons from the fastest q&a site in the west. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’11, 2011. [10] L. Martin and P. Pu. Prediction of helpful reviews using emotion extraction. 
In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence, AAAI ’14, pages 1551–1557, 2014. [11] J. Otterbacher. ’helpfulness’ in online communities: A measure of message quality. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’09, pages 955–964, 2009. [12] M. J. Salganik, P. S. Dodds, and D. J. Watts. Experimental study of inequality and unpredictability in an artificial cultural market. Science, 311:854–856, 2006. [13] M. J. Salganik and D. J. Watts. Leading the herd astray: An experimental study of self-fulfilling prophecies in an artificial cultural mmrket. Social Psychology Quarterly, 71:338–355, 2008. [14] W. Shandwick. Buy it, try it, rate it: Study of consumer electronics purchase deicisions in the engagement era. KRC Research, 2012. [15] S. Siersdorfer, S. Chelaru, J. S. Pedro, I. S. Altingovde, and W. Nejdl. Analyzing and mining comments and comment ratings on the social web. ACM Trans. Web, pages 17:1–17:39, 2014. [16] R. Sipos, A. Ghosh, and T. Joachims. Was this review helpful to you?: It depends! context and voting patterns in online content. In International Conference on World Wide Web, WWW ’14, pages 337–348, 2014. [17] R. Socher, A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1631–1642. Association for Computational Linguistics, 2013. [18] Y. R. Tausczik, A. Kittur, and R. E. Kraut. Collaborative problem solving: A study of mathoverflow. In Computer-Supported Cooperative Work and Social Computing, CSCW’ 14, 2014. [19] Y. Yue, R. Patel, and H. Roehrig. Beyond position bias: Examining result attractiveness as a source of presentation bias in clickthrough data. In Proceedings of the 19th International Conference on World Wide Web, WWW ’10, 2010. 9
Robust Spectral Detection of Global Structures in the Data by Learning a Regularization
Pan Zhang
Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China
panzhang@itp.ac.cn
Abstract
Spectral methods are popular for detecting global structures in data that can be represented as a matrix. However, when the data matrix is sparse or noisy, classic spectral methods usually fail, due to localization of eigenvectors (or singular vectors) induced by the sparsity or noise. In this work, we propose a general method to solve the localization problem by learning a regularization matrix from the localized eigenvectors. Using matrix perturbation analysis, we demonstrate that the learned regularizations suppress the eigenvalues associated with localized eigenvectors and enable us to recover the informative eigenvectors representing the global structure. We show applications of our method to several inference problems: community detection in networks, clustering from pairwise similarities, rank estimation, and matrix completion. Through extensive experiments, we illustrate that our method solves the localization problem and works down to the theoretical detectability limits in different kinds of synthetic data. This is in contrast with existing spectral algorithms based on the data matrix, the non-backtracking matrix, Laplacians, and those with rank-one regularizations, which perform poorly in the sparse case with noise.
1 Introduction
In many statistical inference problems, the task is to detect, from given data, a global structure such as a low-rank structure or a clustering. The task is usually hard because modern datasets have high dimensionality. When the dataset can be represented as a matrix, spectral methods are popular, as they give a natural way to reduce the dimensionality of the data using eigenvectors or singular vectors.
From the point of view of inference, data can be seen as measurements of the underlying structure, so more data gives more precise information about that structure. However, in many situations we do not have enough measurements, i.e. the data matrix is sparse, and standard spectral methods then suffer from localization problems and do not work well. One example is community detection in sparse networks, where the task is to partition nodes into groups such that there are many edges connecting nodes within the same group and comparatively few edges connecting nodes in different groups. It is well known that when the graph has a large connectivity c, simply using the first few eigenvectors of the adjacency matrix A ∈ {0, 1}^{n×n} (with A_ij = 1 denoting an edge between nodes i and j, and A_ij = 0 otherwise) gives a good result. In this case, as for a sufficiently dense Erdős-Rényi (ER) random graph with average degree c, the spectral density follows Wigner’s semicircle rule, P(λ) = √(4c − λ²)/(2πc), and there is a gap between the edge of the bulk of eigenvalues and the informative eigenvalue that represents the underlying community structure. However, when the network is large and sparse, the spectral density of the adjacency matrix deviates from the semicircle and the informative eigenvalue is hidden in the bulk of eigenvalues, as displayed in Fig. 1 (left). The eigenvectors associated with the largest eigenvalues (which are roughly proportional to log n / log log n for ER random graphs) are localized on the large-degree nodes, and thus reveal only local structures around large degrees rather than the underlying global structure. Other standard matrices for spectral clustering [19, 22], e.g. the Laplacian, the random walk matrix, and the normalized Laplacian, all have localization problems, but on different local structures such as dangling trees.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
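As a quick numerical illustration of the dense regime (our sketch, not from the paper; it assumes numpy), one can check the second moment of the adjacency spectrum against the semicircle law: for P(λ) = √(4c − λ²)/(2πc) the second moment is c, and for any graph the trace identity (1/n)Σᵢλᵢ² = (1/n)tr(A²) makes it exactly equal to the empirical average degree.

```python
import numpy as np

rng = np.random.default_rng(0)

def er_adjacency(n, c, rng):
    """Adjacency matrix of an Erdos-Renyi random graph with average degree ~c."""
    upper = np.triu((rng.random((n, n)) < c / n).astype(float), 1)
    return upper + upper.T

n, c = 1000, 30.0  # dense enough for the semicircle law to be a good fit
A = er_adjacency(n, c, rng)
lam = np.linalg.eigvalsh(A)

# (1/n) sum_i lambda_i^2 = (1/n) tr(A^2) = average degree, which
# concentrates around c; this is the semicircle's second moment.
second_moment = float(np.mean(lam ** 2))
avg_degree = float(A.sum() / n)
```

The identity holds exactly (up to floating point), while `avg_degree` fluctuates around c across realizations.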
Another example is the matrix completion problem, which asks to infer the missing entries of a matrix A ∈ R^{m×n} with rank r ≪ √mn from only a few observed entries. A popular method for this problem is based on the singular value decomposition (SVD) of the data matrix. However, it is well known that when the matrix is sparse, SVD-based methods perform very poorly, because the singular vectors corresponding to the largest singular values are localized, i.e. highly concentrated on high-weight columns or rows. A simple way to ease the pain of localization induced by high degrees or weights is trimming [6, 13], which sets to zero the columns or rows with a large degree or weight. However, trimming throws away part of the information; it does not work all the way down to the theoretical limit in the community detection problem [6, 15], and it performs worse than other methods in the matrix completion problem [25]. In recent years, many methods have been proposed for the sparsity problem. One class of methods uses new linear operators related to belief propagation and the Bethe free energy, such as the non-backtracking matrix [15] and the Bethe Hessian [24]. Another class adds to the data matrix or its variant a rank-one regularization matrix [2, 11, 16–18, 23]. These methods are quite successful on some inference problems in the sparse regime. However, in our understanding, none of them solves the localization problem in a general way. For instance, the non-backtracking matrix and the Bethe Hessian work very well when the graph has a locally tree-like structure, but they again have localization problems when the system has short loops or sub-structures like triangles and cliques; moreover, their performance is sensitive to noise in the data [10]. Rank-one regularizations have long been used in practice; the most famous example is the “teleportation” term in the Google matrix.
However, there is no satisfactory way to determine the optimal amount of regularization in general. Moreover, analogous to the non-backtracking matrix and the Bethe Hessian, the rank-one regularization approach is also sensitive to noise, as we will show in this paper. The main contribution of this paper is to show how to solve the localization problem of spectral methods for general inference problems in the sparse regime and with noise, by learning a proper regularization, specific to the given data matrix, from its localized eigenvectors. In the following text we first discuss in Sec. 2 how all three methods for community detection in sparse graphs can be put into the framework of regularization; the drawbacks of existing methods can then be seen as improper choices of regularization. In Sec. 3 we investigate how to choose a good regularization dedicated to the given data, rather than taking a fixed-form regularization as in existing approaches. We use matrix perturbation analysis to illustrate how the regularization works: it penalizes the localized eigenvectors and makes the informative eigenvectors, which correlate with the global structure, float to the top positions of the spectrum. In Sec. 4 we use extensive numerical experiments to validate our approach on several well-studied inference problems, including community detection in sparse graphs, clustering from sparse pairwise entries, rank estimation, and matrix completion from few entries.
Figure 1: Spectral density of the adjacency matrix (left) and the X-Laplacian (right) of a graph generated by the stochastic block model with n = 10000 nodes, average degree c = 3, q = 2 groups and ϵ = 0.125. Red arrows point to eigenvalues out of the bulk.
2 Regularization as a unified framework
We see that the above three methods for the community detection problem in sparse graphs, i.e.
trimming, non-backtracking/Bethe Hessian, and rank-one regularization, can be understood as different ways of regularizing. In this framework, we consider a regularized matrix
L = Â + R̂.  (1)
Here the matrix Â is the data matrix or a (symmetric) variant of it, such as Ã = D^{−1/2} A D^{−1/2} with D denoting the diagonal matrix of degrees, and the matrix R̂ is a regularization matrix. The rank-one regularization approaches [2, 11, 16–18, 23] fall naturally into this framework, as they set R̂ to be a rank-one matrix, −ζ11^T, with ζ a tunable parameter controlling the strength of the regularization. It is also easy to see that in trimming, Â is set to the adjacency matrix and R̂ contains entries that remove the columns or rows with high degrees from A. For spectral algorithms using the non-backtracking matrix, the relation to the form of Eq. (1) is not straightforward. However, we can link them using the theory of the graph zeta function [8], which says that an eigenvalue µ of the non-backtracking operator satisfies the quadratic eigenvalue equation
det[µ²I − µA + (D − I)] = 0,
where I is the identity matrix. It indicates that a particular vector v, related to the eigenvector of the non-backtracking matrix, satisfies (A − (D − I)/µ)v = µv. Thus spectral clustering using the non-backtracking matrix is equivalent to spectral clustering using a matrix of the form of Eq. (1), with Â = A, R̂ = −(D − I)/µ, and µ acting as a parameter. We note that the parameter need not be an eigenvalue of the non-backtracking matrix; in practice a range of parameters works well, for example those estimated from the spin-glass transition of the system [24]. So we have related the different approaches for resolving localization of spectral algorithms in sparse graphs within the framework of regularization.
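The quadratic eigenvalue relation det[µ²I − µA + (D − I)] = 0 can be checked numerically through a standard companion linearization (a sketch; the 2n × 2n block matrix below is our construction, whose eigenvalues are exactly the roots µ of the quadratic equation):

```python
import numpy as np

rng = np.random.default_rng(1)

# A small random undirected graph
n = 30
A = np.triu((rng.random((n, n)) < 0.15).astype(float), 1)
A = A + A.T
D = np.diag(A.sum(axis=1))
I = np.eye(n)

# Companion linearization: mu solves det[mu^2 I - mu A + (D - I)] = 0
# exactly when mu is an eigenvalue of this 2n x 2n block matrix.
B = np.block([[A, -(D - I)],
              [I, np.zeros((n, n))]])
mus = np.linalg.eigvals(B)

def quadratic_residual(mu):
    """Smallest singular value of mu^2 I - mu A + (D - I); ~0 iff singular."""
    M = mu ** 2 * I - mu * A + (D - I)
    return float(np.linalg.svd(M, compute_uv=False)[-1])

residuals = [quadratic_residual(mu) for mu in mus[:5]]
```

Each eigenvalue µ of the block matrix makes µ²I − µA + (D − I) singular, so every residual is numerically zero.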
Although this relation was derived in the context of community detection in networks, we think it is a general point of view that applies when the data matrix has a general form rather than being a {0, 1} matrix. As we argued in the introduction, the above three ways of regularizing work only from case to case and have different problems, especially when the system has noise. That is, within the framework of regularization, the effective regularization matrix R̂ added by these methods does not work in a general way and is not robust. In our understanding, the problem arises from the fact that in all these methods the form of the regularization is fixed for all kinds of data, regardless of the cause of localization. One way to solve this problem is therefore to look for regularizations that are specific to the given data, treated as a feature of it. In the following section we introduce our method, which explicitly addresses how to learn such regularizations from the localized eigenvectors of the data matrix.
3 Learning regularizations from localized eigenvectors
The reason that the informative eigenvectors are hidden in the bulk is that some random eigenvectors have large eigenvalues, due to localization that reflects the local structures of the system. Conversely, if these eigenvectors were not localized, they would have smaller eigenvalues than the informative ones, which reveal the global structures of the graph. This is the main assumption our approach is based on. In this work we use the inverse participation ratio (IPR), I(v) = Σ_{i=1}^n v_i⁴, to quantify the amount of localization of a (normalized) eigenvector v. The IPR has been used frequently in physics, for example to distinguish extended states from localized states when applied to wave functions [3]. It is easy to check that I(v) ranges from 1/n, for the uniform vector (1/√n, 1/√n, ..., 1/√n), to 1, for a standard basis vector (0, ..., 0, 1, 0, ..., 0). That is, a larger I(v) indicates more localization in the vector v.
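The IPR and its two extreme values can be sketched directly (a minimal numpy sketch; the defensive normalization is ours):

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio I(v) = sum_i v_i^4 of a normalized vector."""
    v = np.asarray(v, dtype=float)
    v = v / np.linalg.norm(v)  # normalize defensively
    return float(np.sum(v ** 4))

n = 100
uniform = np.full(n, 1.0)          # delocalized: IPR = 1/n after normalization
basis = np.zeros(n); basis[3] = 1.0  # fully localized: IPR = 1
```

For n = 100 the uniform vector gives IPR 0.01 and the basis vector gives IPR 1, the two ends of the range stated in the text.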
Our idea is to create a matrix LX with a structure similar to A but with non-localized leading eigenvectors. We call the resulting matrix the X-Laplacian, defined as LX = A + X, where A is the data matrix (or a variant of it), and X is learned by the procedure below:
Algorithm 1: Regularization Learning
Input: real symmetric matrix A, number of eigenvectors q, learning rate η = O(1), threshold ∆.
Output: X-Laplacian, LX, whose leading eigenvectors reveal the global structures in A.
1. Set X to the all-zero matrix.
2. Find the set of eigenvectors U = {u1, u2, ..., uq} associated with the q largest (algebraic) eigenvalues of LX.
3. Identify the eigenvector v with the largest inverse participation ratio among the q eigenvectors in U, i.e. v = argmax_{u∈U} I(u).
4. If I(v) < ∆, return LX = A + X; otherwise set X_ii ← X_ii − ηv_i² for all i, and go to step 2.
We can see that the regularization matrix X is diagonal, and its diagonal entries are learned gradually from the most localized vector among the first several eigenvectors. The effect of X is to penalize the localized eigenvectors by suppressing their associated eigenvalues. Learning continues until all q leading eigenvectors are delocalized, and thus correlate with the global structure rather than local structures. As an example, we show the effect of X on the spectrum in Fig. 1, which compares the spectrum of the adjacency matrix (i.e. before learning X, left panel) with that of the X-Laplacian (i.e. after learning X, right panel) for a sparse network generated by the stochastic block model with q = 2 groups. For the adjacency matrix, localized eigenvectors have large eigenvalues and contribute a tail to the semicircle that covers the informative eigenvalue, leaving only one eigenvalue out of the bulk, corresponding to the eigenvector that essentially sorts vertices according to their degree.
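Algorithm 1 can be sketched in code as follows (a minimal numpy sketch with toy parameter choices; the learning rate and the ring-plus-hub test matrix are ours, not the paper's defaults of η = 10 and an SBM network):

```python
import numpy as np

def ipr(v):
    """Inverse participation ratio I(v) = sum_i v_i^4 of a unit-norm vector."""
    return float(np.sum(v ** 4))

def x_laplacian(A, q, eta=0.5, delta=None, max_iter=500):
    """Sketch of Algorithm 1: learn a diagonal regularization X so that the
    top-q eigenvectors of L_X = A + X delocalize (IPR below delta)."""
    n = A.shape[0]
    delta = 5.0 / n if delta is None else delta
    x = np.zeros(n)  # diagonal of X, starts at zero (step 1)
    for _ in range(max_iter):
        lam, U = np.linalg.eigh(A + np.diag(x))
        top = U[:, -q:]                      # q largest algebraic eigenvalues
        iprs = [ipr(top[:, k]) for k in range(q)]
        k = int(np.argmax(iprs))             # most localized of the top q
        if iprs[k] < delta:
            break
        x -= eta * top[:, k] ** 2            # step 4: penalize that direction
    return A + np.diag(x), x

# Toy data: a ring of 60 nodes plus one hub, which localizes the top eigenvector.
n = 60
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A[0, 1:20] = A[1:20, 0] = 1.0
L_X, x_diag = x_laplacian(A, q=2)
```

After learning, the hub's diagonal entry has been pushed strongly negative and the top two eigenvectors of L_X are delocalized.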
The spectral density of the X-Laplacian is shown in the right panel of Fig. 1. We can see that the right corner of the continuous part of the spectral density that appears in the spectrum of the adjacency matrix is missing here: due to the effect of X, the eigenvalues associated with localized eigenvectors of the adjacency matrix have been pushed into the bulk, leaving a gap between the edge of the bulk and the informative eigenvalue (pointed to by the left red arrow in the figure). The key step of the algorithm is the learning in step 4, which updates the diagonal of X using the most localized eigenvector v. Throughout the paper, by default we use learning rate η = 10 and threshold ∆ = 5/n. As η = O(1) and v_i² = O(1/n), we can treat the entries learned in each step, L̂, as a perturbation of the matrix LX. After applying this perturbation, an eigenvalue of LX changes from λ_i to λ_i + λ̂_i, and an eigenvector changes from u_i to u_i + û_i. If we assume that the matrix LX is not ill-conditioned and that the first few eigenvalues we care about are distinct, then to first order
λ̂_i = u_i^T L̂ u_i.
The derivation of this expression is straightforward; for completeness we include it in the SI text. In our algorithm, L̂ is a diagonal matrix with entries L̂_ii = −η v_i², where v is the identified eigenvector with the largest inverse participation ratio, so the last equation can be written as λ̂_i = −η Σ_k v_k² u_ik². For the identified vector v itself, we further have
λ̂_v = −η Σ_i v_i⁴ = −η I(v).  (2)
That is, the eigenvalue of the identified eigenvector, with inverse participation ratio I(v), is decreased by the amount η I(v): the more localized the eigenvector, the larger the penalty on its eigenvalue. In addition to this penalty on the localized eigenvalues, we see that the leading eigenvectors delocalize during learning.
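The first-order relation λ̂_i = u_i^T L̂ u_i and Eq. (2) can be checked numerically (our sketch; the synthetic matrix has eigenvalues 1, 2, ..., 8 so that the spectral gaps are well separated and the first-order approximation is accurate):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8

# Symmetric test matrix with well-separated eigenvalues 1, 2, ..., 8
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
L = Q @ np.diag(np.arange(1.0, n + 1)) @ Q.T
lam, U = np.linalg.eigh(L)

# Diagonal perturbation L_hat with entries -eta * v_i^2, where v is the
# (here: leading) eigenvector singled out by the algorithm and eta is small.
eta = 1e-4
v = U[:, -1]
L_hat = np.diag(-eta * v ** 2)

lam_new = np.linalg.eigvalsh(L + L_hat)
actual = lam_new - lam
pred = np.array([U[:, i] @ L_hat @ U[:, i] for i in range(n)])  # first order

ipr_v = float(np.sum(v ** 4))  # Eq. (2): the shift of v's eigenvalue is -eta*I(v)
```

The first-order prediction matches the actual eigenvalue shifts up to O(η²) corrections, and the predicted shift for v itself equals −η I(v) exactly.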
We have analyzed the change of the eigenvectors after the perturbation given by the identified vector v, and obtained (see SI for the derivation) the change of an eigenvector û_i as a function of all the other eigenvalues and eigenvectors,
û_i = −η Σ_{j≠i} [ (Σ_k u_jk v_k² u_ik) / (λ_i − λ_j) ] u_j.
The inverse participation ratio of the new vector u_i + û_i can then be written as
I(u_i + û_i) = I(u_i) − 4η Σ_{l=1}^n Σ_{j≠i} (u_jl² v_l² u_il⁴) / (λ_i − λ_j) − 4η Σ_{l=1}^n Σ_{j≠i} Σ_{k≠l} (u_il³ v_k² u_jk u_ik u_jl) / (λ_i − λ_j).  (3)
As the eigenvectors u_i and u_j are orthogonal to each other, the term 4η Σ_{l=1}^n Σ_{j≠i} (u_jl² v_l² u_il⁴)/(λ_i − λ_j), which enters Eq. (3) with a minus sign, can be seen as a signal term, and the last term can be seen as a cross-talk noise with zero mean. The cross-talk noise has a small variance, and empirically its effect can be neglected. For the leading eigenvector, corresponding to the largest eigenvalue λ_i = λ_1, it is straightforward to see that the signal term is strictly positive, since λ_1 − λ_j > 0 for all j. Thus if the learning is slow enough, the perturbation always decreases the inverse participation ratio of the leading eigenvector. This is essentially an argument for the convergence of the algorithm. For the other top eigenvectors, i.e. the second and third eigenvectors and so on, λ_i − λ_j is not strictly positive, but there are many more positive than negative terms in the sum, so the signal is positive with high probability. One can thus conclude that the process of learning X delocalizes the first few eigenvectors. An example illustrating the learning process is shown in Fig. 2, where we plot the second eigenvector against the third eigenvector at several time steps during learning, for a network generated by the stochastic block model with q = 3 groups. At t = 0, i.e. without learning, both eigenvectors are localized, with a wide range of entry values. The color of the eigenvector entries encodes the group membership in the planted partition.
At t = 0 the three colors are mixed together, indicating that the two eigenvectors are not correlated with the planted partition. At t = 4 the three colors begin to separate, and the range of entry values becomes smaller, indicating weaker localization. At t = 25 the three colors are well separated, and the partition obtained by applying the k-means algorithm to these vectors successfully recovers 70% of the group memberships. Moreover, the range of the eigenvector entries shrinks to [−0.06, 0.06], giving a small inverse participation ratio.
Figure 2: The second eigenvector V2 plotted against the third eigenvector V3 of LX for a network at three steps, t = 0, 4 and 25, during learning. The network has n = 42000 nodes, q = 3 groups, average degree c = 3, ϵ = 0.08; the three colors represent group labels in the planted partition.
4 Numerical evaluations
In this section we validate our approach with experiments on several inference problems: community detection, clustering from sparse pairwise entries, rank estimation, and matrix completion from a few entries. We compare the performance of the X-Laplacian (using the mean-removed data matrix) with recently proposed state-of-the-art spectral methods in the sparse regime.
4.1 Community detection
First we use synthetic networks generated by the stochastic block model [9] and its variant with noise [10]. The standard stochastic block model (SBM), also called the planted partition model, is a popular model for generating ensembles of networks with community structure. There are q groups of nodes and a planted partition {t*_i} ∈ {1, ..., q}. Edges are generated independently according to a q × q matrix {p_ab}. Without loss of generality, we discuss the commonly studied case where the q groups have equal size and {p_ab} has only two distinct entries: p_ab = c_in/n if a = b, and c_out/n if a ≠ b.
Given the average degree c of the graph, there is a so-called detectability transition ϵ* = c_out/c_in = (√c − 1)/(√c − 1 + q) [7], beyond which it is not possible to obtain any information about the planted partition. It is also known that spectral algorithms based on the non-backtracking matrix succeed all the way down to this transition [15]. The transition was recently established rigorously in the case q = 2 [20, 21]. Comparisons of spectral methods using different matrices are shown in Fig. 3 (left). From the figure we see that the X-Laplacian works as well as the non-backtracking matrix, down to the detectability transition, while direct use of the adjacency matrix, i.e. LX before learning, stops working well once ϵ exceeds about 0.1. In the right panel of Fig. 3, each network is generated by the stochastic block model with the same parameters as in the left panel, but with 10 extra cliques, each consisting of 10 randomly selected nodes. These cliques do not carry information about the planted partition and hence act as noise in the system. In addition to the non-backtracking matrix, the X-Laplacian, and the adjacency matrix, we include results obtained using other classic and newly proposed matrices: the Bethe Hessian [24], the normalized Laplacian (N. Laplacian) L_sym = I − Ã, and the regularized and normalized Laplacian (R.N. Laplacian) L_A = Ã − ζ11^T with an optimized regularization ζ (we scanned the whole range of ζ and chose a value giving the largest overlap, i.e. fraction of correctly reconstructed labels, in most cases). From the figure we see that with the noise added, only the X-Laplacian works down to the original transition (of the SBM without cliques); all the other matrices fail to detect the community structure for ϵ > 0.15. We have also tested other kinds of noisy models, including the noisy stochastic block model proposed in [10].
Our results show that the X-Laplacian works well (see SI text), while all the other spectral methods do not work at all on this dataset [10]. Moreover, in addition to the classic stochastic block model, we have extensively evaluated our method on networks generated by the degree-corrected stochastic block model [12] and by the stochastic block model with extensive triangles. We obtained qualitatively similar results to those in Fig. 3: the X-Laplacian works as well as the state-of-the-art spectral methods on these datasets. The figures and detailed results can be found in the SI text. We have also tested real-world networks with an expert division, and found that although the expert division is usually easy to detect by directly using the adjacency matrix, the X-Laplacian significantly improves the accuracy of detection. For example, on the political blogs network [1], spectral clustering using the adjacency matrix gives 83 mis-classified labels out of a total of 1222 labels, while the X-Laplacian gives only 50 mis-classified labels.
Figure 3: Accuracy of community detection, represented by the overlap (fraction of correctly reconstructed labels) between the inferred partition and the planted partition, for several methods on networks generated by the stochastic block model with average degree c = 3 (left) and with 10 extra size-10 cliques (right). All networks have n = 10000 nodes and q = 2 groups; ϵ = c_out/c_in. The black dashed lines denote the theoretical detectability transition. Each data point is averaged over 20 realizations.
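The SBM generation, the detectability threshold ϵ*, and the overlap measure used in Fig. 3 can be sketched as follows (a minimal numpy sketch; the function names are ours):

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)

def detectability_eps(c, q):
    """eps* = c_out/c_in = (sqrt(c) - 1)/(sqrt(c) - 1 + q), as in the text."""
    return (np.sqrt(c) - 1.0) / (np.sqrt(c) - 1.0 + q)

def sbm_adjacency(n, q, c_in, c_out, rng):
    """Adjacency matrix and planted labels of a stochastic block model."""
    labels = rng.integers(0, q, size=n)
    same = labels[:, None] == labels[None, :]
    upper = np.triu(rng.random((n, n)) < np.where(same, c_in / n, c_out / n), 1)
    return (upper + upper.T).astype(float), labels

def overlap(pred, truth, q):
    """Fraction of correctly reconstructed labels, maximized over the q!
    permutations of the group names (the accuracy measure of Fig. 3)."""
    best = 0.0
    for perm in permutations(range(q)):
        best = max(best, np.mean([perm[p] == t for p, t in zip(pred, truth)]))
    return best

# Fig. 3 setting: average degree c = 3, q = 2 groups, eps below the transition.
c, q, eps = 3.0, 2, 0.1
eps_star = detectability_eps(c, q)    # ~0.268 for c = 3, q = 2
c_in = q * c / (1.0 + (q - 1) * eps)  # from c = (c_in + (q-1) c_out) / q
c_out = eps * c_in
A, labels = sbm_adjacency(2000, q, c_in, c_out, rng)
```

The overlap of a perfect labeling is 1 under either naming of the two groups, which is why the maximum over permutations is taken.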
4.2 Clustering from sparse pairwise measurements
Consider the problem of grouping n items into clusters based on a similarity matrix S ∈ R^{n×n}, where S_ij is the pairwise similarity between items i and j. Here we do not use all pairwise similarities, but only O(n) random samples of them. In other words, the similarity graph that encodes the information about the global clustering structure is sparse rather than complete. There are many motivations for such sparse observations; for example, in some cases all measurements are simply not available or cannot even be stored. In this section we use the generative model recently proposed in [26], since it comes with a theoretical limit that can be used to evaluate algorithms. Without loss of generality, we consider the problem with only q = 2 clusters. The model in [26] first assigns the items hidden clusters {t_i} ∈ {1, 2}^n, then generates similarities between randomly sampled pairs of items according to probability distributions p_in and p_out, depending on the memberships of the two items. There is a theoretical limit ĉ satisfying
1/ĉ = (1/q) ∫ ds (p_in(s) − p_out(s))² / [p_in(s) + (q − 1) p_out(s)],
such that with c < ĉ no algorithm can obtain any partial information about the planted clusters, while with c > ĉ some algorithms, e.g. spectral clustering using the Bethe Hessian [26], achieve partial recovery of the planted clusters. As with community detection in sparse graphs, spectral algorithms that directly use the eigenvectors of the similarity matrix S do not work well, due to the localization of eigenvectors induced by the sparsity. To evaluate whether our method, the X-Laplacian, solves the localization problem, and how it compares with the Bethe Hessian, in Fig. 4 we plot the performance (in overlap, the fraction of correctly reconstructed group labels) of the three algorithms on the same set of similarity matrices.
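The threshold ĉ can be evaluated numerically for the Gaussian p_in/p_out used in Fig. 4 (a stdlib-only sketch; the midpoint-rule grid and the function names are ours):

```python
import math

def gaussian(s, mu, sigma=1.0):
    """Density of a Gaussian with mean mu and standard deviation sigma."""
    return math.exp(-0.5 * ((s - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def c_hat(mu_in=0.75, mu_out=-0.75, q=2, lo=-12.0, hi=12.0, steps=20000):
    """Midpoint-rule evaluation of
       1/c_hat = (1/q) * integral ds (p_in - p_out)^2 / (p_in + (q-1) p_out)
    for unit-variance Gaussian p_in, p_out (the Fig. 4 setting)."""
    ds = (hi - lo) / steps
    integral = 0.0
    for i in range(steps):
        s = lo + (i + 0.5) * ds
        p_in, p_out = gaussian(s, mu_in), gaussian(s, mu_out)
        denom = p_in + (q - 1) * p_out
        if denom > 0.0:
            integral += (p_in - p_out) ** 2 / denom * ds
    return q / integral
```

As a sanity check, pushing the two means further apart makes the clusters easier to separate, so the threshold ĉ decreases.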
For all datasets there are two groups, with p_in and p_out Gaussian with unit variance and means 0.75 and −0.75 respectively. In the left panel of Fig. 4 the topology of the pairwise entries is a random graph; the Bethe Hessian works down to the theoretical limit, while direct use of the measurement matrix performs poorly. We can also see that the X-Laplacian fixes the localization problem of the raw measurement matrix and works almost as well as the Bethe Hessian. We note that the Bethe Hessian needs to know the model parameters (i.e. the parameters of the distributions p_in and p_out), while the X-Laplacian does not use them at all. In the right panel of Fig. 4, on top of the ER random-graph topology we add noisy local structures by randomly selecting 20 nodes and connecting the neighbors of each selected node to each other. The weights of these local pairwise entries were set to 1, so that the noisy structures do not contain information about the underlying clustering. We can see that the Bethe Hessian is influenced by the noisy local structures and fails, while the X-Laplacian solves the localization problems induced by sparsity and is robust to the noise. We have also tested other kinds of noise, such as added cliques or hubs, and obtained similar results (see SI text).
Figure 4: Spectral clustering using sparse pairwise measurements, comparing the raw pairwise measurement matrix, the Bethe Hessian, and the X-Laplacian. The x-axis denotes the average number c of pairwise measurements per data point, and the y-axis the fraction of correctly reconstructed labels, maximized over permutations. The model used to generate the pairwise measurements is the one proposed in [26]; see the text for a detailed description. In the left panel, the topologies of the pairwise measurements are random graphs.
In the right panel, in addition to the random-graph topology, there are 20 randomly selected nodes with all of their neighbors connected to each other. Each point in the figure is averaged over 20 realizations of size 10⁴.
4.3 Rank estimation and matrix completion
The last problem we consider for evaluating the X-Laplacian is the completion of a low-rank matrix from few entries. This problem has many applications, including the famous collaborative filtering. A closely related problem is estimating the rank from the revealed entries; indeed, rank estimation is usually the first step before actually doing matrix completion. The problem is defined as follows: let A_true = UV^T, where U ∈ R^{n×r} and V ∈ R^{m×r} are chosen uniformly at random and r ≪ √nm is the ground-truth rank. Only a few, say c√mn, entries of the matrix A_true are revealed. That is, we are given a matrix A ∈ R^{n×m} that contains only a subset of the entries of A_true, with the other elements set to zero. Many algorithms have been proposed for matrix completion, including nuclear norm minimization [5] and methods based on the singular value decomposition [4], among others. Trimming, which sets to zero all rows and columns with many revealed entries, is usually introduced to control the localization of the singular vectors and to estimate the rank using the gap in the singular values [14]. Analogous to the community detection problem, trimming is not expected to work optimally when the matrix A is sparse. Indeed, the authors of [25] reported that their approach based on the Bethe Hessian outperforms trimming+SVD when the topology of the revealed entries is a sparse random graph; they also showed that the number of negative eigenvalues of the Bethe Hessian gives a more accurate estimate of the rank of A than estimates based on trimming+SVD.
However, if the topology is not locally tree-like but contains some noise, for example some additional cliques, both trimming of the data matrix and the Bethe Hessian perform much worse: they report a wrong rank and give a large reconstruction error, as illustrated in Fig. 5. In the left panel of the figure we plot the eigenvalues of the Bethe Hessian and the singular values of the trimmed matrix A with true rank $r_{\text{true}} = 2$. We can see that both are continuously distributed: there is no clear gap in the singular values of the trimmed A, and the Bethe Hessian has many negative eigenvalues. In this case, since the matrix A may be non-square, we define the X-Laplacian as
$$L_X = \begin{pmatrix} 0 & A \\ A^T & 0 \end{pmatrix} - X.$$
The eigenvalues of $L_X$ are also plotted in Fig. 5, where one can see clearly that there is a gap between the second largest eigenvalue and the third one. Thus the correct rank can be estimated from the gap between consecutive eigenvalues, as suggested in [14]. After estimating the rank of the matrix, matrix completion is carried out with a local optimization algorithm [27], starting from initial matrices obtained from the first r singular vectors of trimming+SVD, the first r eigenvectors of the Bethe Hessian, and the first r eigenvectors of the X-Laplacian, respectively, with estimated rank r. The results are shown in the right panel of Fig. 5, where we plot the probability that the obtained root mean square error (RMSE) is smaller than 10^−7 as a function of the average number of revealed entries per row c, for the ER random-graph topology plus noise in the form of several cliques. We can see that the X-Laplacian outperforms the Bethe Hessian and trimming+SVD for c ≥ 13. Moreover, when c ≥ 18, only the X-Laplacian gives an accurate completion for all instances.
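For intuition, here is a minimal sketch of the bipartite construction and of eigenvalue-gap rank estimation. It uses X = 0, i.e. the plain unregularized matrix (learning X is the paper's iterative procedure and is omitted here), and a consecutive-eigenvalue-ratio criterion in the spirit of [14]; both function names are ours.

```python
import numpy as np

def bipartite_operator(A, X=None):
    """B = [[0, A], [A^T, 0]] - X for a possibly non-square A.
    With X = None this is the plain, unregularized bipartite matrix."""
    n, m = A.shape
    B = np.zeros((n + m, n + m))
    B[:n, n:] = A
    B[n:, :n] = A.T
    return B if X is None else B - X

def estimate_rank(B, kmax=10):
    """Rank estimate from the drop between consecutive eigenvalues:
    pick the k minimizing |w_{k+1}| / |w_k| among the top eigenvalues."""
    w = np.sort(np.linalg.eigvalsh(B))[::-1][:kmax + 1]   # largest first
    ratios = np.abs(w[1:]) / (np.abs(w[:-1]) + 1e-30)
    return int(np.argmin(ratios)) + 1
```

On a noiseless low-rank matrix the top eigenvalues of the bipartite operator are the singular values of A followed by a sharp drop, so the ratio criterion recovers the rank; on noisy sparse topologies this is exactly where the learned X is needed.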
Figure 5: (Left:) Singular values of the sparse data matrix with trimming, and eigenvalues of the Bethe Hessian and the X-Laplacian. The data matrix has rank 2, generated as the product $UV^T$ of two factors of size 1000 × 2 whose entries are Gaussian random variables with mean zero and unit variance. The topology of the revealed observations is a random graph with average degree c = 8 plus 10 random cliques of size 20. (Right:) Fraction of samples for which the RMSE is smaller than 10^−7, among 100 samples of a rank-3 data matrix $UV^T$ of size 1000 × 1000, with the entries of U and V drawn from a Gaussian distribution with mean 0 and unit variance. The topology of the revealed entries is a random graph with varying average degree c plus 10 size-20 cliques.

5 Conclusion and discussion

We have presented the X-Laplacian, a general approach for detecting latent global structure in a given data matrix. It is a completely data-driven approach that learns different forms of regularization for different data, to solve the problem of localization of eigenvectors or singular vectors. The mechanism by which eigenvectors are de-localized during the learning of the regularization has been illustrated using matrix perturbation analysis. We have validated our method using extensive numerical experiments, and shown that it outperforms state-of-the-art algorithms on various inference problems in the sparse regime and in the presence of noise. In this paper we discussed the X-Laplacian using the (mean-removed) data matrix A directly, but we note that the data matrix is not the only choice. We have also tested approaches using various variants of A, such as the normalized data matrix Ã, and found that they work as well.
We also tried learning regularizations for the Bethe Hessian, and found that this succeeds in repairing the Bethe Hessian when it suffers from the localization problem. This indicates that our scheme of regularization-learning is a general spectral approach for hard inference problems. A (MATLAB) demo of our method can be found at http://panzhang.net.

References

[1] L. A. Adamic and N. Glance. The political blogosphere and the 2004 US election: divided they blog. In Proceedings of the 3rd International Workshop on Link Discovery, pages 36–43. ACM, 2005.
[2] A. A. Amini, A. Chen, P. J. Bickel, and E. Levina. Pseudo-likelihood methods for community detection in large sparse networks. The Annals of Statistics, 41(4):2097–2122, 2013.
[3] R. Bell and P. Dean. Atomic vibrations in vitreous silica. Discussions of the Faraday Society, 50:55–61, 1970.
[4] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[5] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.
[6] A. Coja-Oghlan. Graph partitioning via adaptive spectral techniques. Combinatorics, Probability and Computing, 19:227–284, 2010.
[7] A. Decelle, F. Krzakala, C. Moore, and L. Zdeborová. Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications. Phys. Rev. E, 84:066106, 2011.
[8] K.-i. Hashimoto. Zeta functions of finite graphs and representations of p-adic groups. Advanced Studies in Pure Mathematics, 15:211–280, 1989.
[9] P. W. Holland, K. B. Laskey, and S. Leinhardt. Stochastic blockmodels: First steps. Social Networks, 5(2):109–137, 1983.
[10] A. Javanmard, A. Montanari, and F. Ricci-Tersenghi. Phase transitions in semidefinite relaxations. Proceedings of the National Academy of Sciences, 113(16):E2218, 2016.
[11] A. Joseph and B. Yu.
Impact of regularization on spectral clustering. arXiv preprint arXiv:1312.1733, 2013.
[12] B. Karrer and M. E. J. Newman. Stochastic blockmodels and community structure in networks. Phys. Rev. E, 83:016107, 2011.
[13] R. H. Keshavan, A. Montanari, and S. Oh. Low-rank matrix completion with noisy observations: a quantitative comparison. In 47th Annual Allerton Conference on Communication, Control, and Computing, pages 1216–1222. IEEE, 2009.
[14] R. H. Keshavan, S. Oh, and A. Montanari. Matrix completion from a few entries. In IEEE International Symposium on Information Theory (ISIT), pages 324–328. IEEE, 2009.
[15] F. Krzakala, C. Moore, E. Mossel, J. Neeman, A. Sly, L. Zdeborová, and P. Zhang. Spectral redemption in clustering sparse networks. Proc. Natl. Acad. Sci. USA, 110(52):20935–20940, 2013.
[16] C. M. Le, E. Levina, and R. Vershynin. Sparse random graphs: regularization and concentration of the Laplacian. arXiv preprint arXiv:1502.03049, 2015.
[17] C. M. Le and R. Vershynin. Concentration and regularization of random graphs. arXiv preprint arXiv:1506.00669, 2015.
[18] J. Lei and A. Rinaldo. Consistency of spectral clustering in stochastic block models. The Annals of Statistics, 43(1):215–237, 2014.
[19] U. von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
[20] L. Massoulié. Community detection thresholds and the weak Ramanujan property. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 694–703. ACM, 2014.
[21] E. Mossel, J. Neeman, and A. Sly. Stochastic block models and reconstruction. arXiv preprint arXiv:1202.1499, 2012.
[22] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[23] T. Qin and K. Rohe. Regularized spectral clustering under the degree-corrected stochastic blockmodel.
In Advances in Neural Information Processing Systems, pages 3120–3128, 2013.
[24] A. Saade, F. Krzakala, and L. Zdeborová. Spectral clustering of graphs with the Bethe Hessian. In Advances in Neural Information Processing Systems, pages 406–414, 2014.
[25] A. Saade, F. Krzakala, and L. Zdeborová. Matrix completion from fewer entries: Spectral detectability and rank estimation. In Advances in Neural Information Processing Systems 28, pages 1261–1269. Curran Associates, Inc., 2015.
[26] A. Saade, M. Lelarge, F. Krzakala, and L. Zdeborová. Clustering from sparse pairwise measurements. In IEEE International Symposium on Information Theory (ISIT). IEEE, arXiv:1601.06683, 2016.
[27] S. G. Johnson. The NLopt nonlinear-optimization package, 2014.
Optimal spectral transportation with application to music transcription

Rémi Flamary, Université Côte d'Azur, CNRS, OCA, remi.flamary@unice.fr
Cédric Févotte, CNRS, IRIT, Toulouse, cedric.fevotte@irit.fr
Nicolas Courty, Université de Bretagne Sud, CNRS, IRISA, courty@univ-ubs.fr
Valentin Emiya, Aix-Marseille Université, CNRS, LIF, valentin.emiya@lif.univ-mrs.fr

Abstract

Many spectral unmixing methods rely on the non-negative decomposition of spectral data onto a dictionary of spectral templates. In particular, state-of-the-art music transcription systems decompose the spectrogram of the input signal onto a dictionary of representative note spectra. The typical measures of fit used to quantify the adequacy of the decomposition compare the data and template entries frequency-wise. As such, small displacements of energy from one frequency bin to another, as well as variations of timbre, can disproportionately harm the fit. We address these issues by means of optimal transportation and propose a new measure of fit that treats the frequency distributions of energy holistically, as opposed to frequency-wise. Building on the harmonic nature of sound, the new measure is invariant to shifts of energy to harmonically related frequencies, as well as to small and local displacements of energy. Equipped with this new measure of fit, the dictionary of note templates can be considerably simplified to a set of Dirac vectors located at the target fundamental frequencies (musical pitch values). This in turn gives rise to a very fast and simple decomposition algorithm that achieves state-of-the-art performance on real musical data.

1 Context

Many of today's spectral unmixing techniques rely on non-negative matrix decompositions. This concerns for example hyperspectral remote sensing (with applications in Earth observation, astronomy, chemistry, etc.) or audio signal processing.
The spectral sample $v_n$ (the spectrum of light observed at a given pixel n, or the audio spectrum in a given time frame n) is decomposed onto a dictionary W of elementary spectral templates, characteristic of pure materials or sound objects, such that $v_n \approx W h_n$. The composition of sample n can be inferred from the non-negative expansion coefficients $h_n$. This paradigm has led to state-of-the-art results for various tasks (recognition, classification, denoising, separation) in the aforementioned areas, and in particular in music transcription, the central application of this paper. In state-of-the-art music transcription systems, the spectrogram V (with columns $v_n$) of a musical signal is decomposed onto a dictionary of pure notes (in so-called multi-pitch estimation) or chords. V typically consists of (power-)magnitude values of a regular short-time Fourier transform (Smaragdis and Brown, 2003). It may also consist of an audio-specific spectral transform such as the Mel-frequency transform, as in (Vincent et al., 2010), or the constant-Q transform, as in (Oudre et al., 2011). The success of the transcription system depends of course on the adequacy of the time-frequency transform and the dictionary to represent the data V. In particular, the matrix W must be able to accurately represent a diversity of real notes. It may be trained with individual notes using annotated data (Boulanger-Lewandowski et al., 2012), have a parametric form (Rigaud et al., 2013), or be learnt from the data itself using a harmonic subspace constraint (Vincent et al., 2010). One important challenge of such methods lies in their ability to cope with the variability of real notes.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
A simplistic dictionary model will assume that a note characterised by fundamental frequency $\nu_0$ (e.g., $\nu_0 = 440$ Hz for note A4) is represented by a spectral template with non-zero coefficients placed at $\nu_0$ and at its multiples (the harmonic frequencies). In reality, many instruments, such as the piano, produce musical notes with either slight frequency misalignments (so-called inharmonicities) with respect to the theoretical values of the fundamental and harmonic frequencies, or amplitude variations at the harmonic frequencies depending on recording conditions or the instrument played (variations of timbre). Handling these variabilities by enlarging the dictionary with more templates is typically unrealistic, and adaptive dictionaries have been considered in (Vincent et al., 2010; Rigaud et al., 2013). In these papers, the spectral shape of the columns of W is adjusted to the data at hand, using specific time-invariant semi-parametric models. However, the note realisations may vary in time, something which is not handled by these approaches. This work presents a new spectral unmixing method based on optimal transportation (OT) that is fully flexible and remedies the latter difficulties. Note that Typke et al. (2004) have previously applied OT to notated music (e.g., score sheets) for search-by-query in databases, while we address here music transcription from audio spectral data.

2 A relevant baseline: PLCA

Before presenting our contributions, we start by introducing the PLCA method of Smaragdis et al. (2006), which is heavily used in audio signal processing. It is based on the Probabilistic Latent Semantic Analysis (PLSA) of Hofmann (2001) (used in text retrieval) and is a particular form of non-negative matrix factorisation (NMF). Simplifying a bit, in PLCA the columns of V are normalised to sum to one. Each vector $v_n$ is then treated as a discrete probability distribution of "frequency quanta" and is approximated as $V \approx WH$.
The matrices W and H are of size M × K and K × N, respectively, and their columns are constrained to sum to one. As a result, the columns of the approximation $\hat V = WH$ sum to one as well, and each distribution vector $v_n$ is approximated by the counterpart distribution $\hat v_n$ in $\hat V$. Under the assumption that W is known, the approximation is found by solving the optimisation problem
$$\min_{H \ge 0} D_{KL}(V \,|\, WH) \quad \text{s.t.} \quad \forall n,\ \|h_n\|_1 = 1, \qquad (1)$$
where $D_{KL}(v|\hat v) = \sum_i v_i \log(v_i/\hat v_i)$ is the KL divergence between discrete distributions, and by extension $D_{KL}(V|\hat V) = \sum_n D_{KL}(v_n|\hat v_n)$. An important characteristic of the KL divergence is its separability with respect to the entries of its arguments. It operates a frequency-wise comparison in the sense that, at every frame n, the spectral coefficient $v_{in}$ at frequency i is compared to its counterpart $\hat v_{in}$, and the results of the comparisons are summed over i. In particular, a small displacement in the frequency support of one observation may disproportionately harm the divergence value. For example, if $v_n$ is a pure note with fundamental frequency $\nu_0$, a small inharmonicity that shifts energy from $\nu_0$ to an adjacent frequency bin will unreasonably increase the divergence value when $v_n$ is compared with a purely harmonic spectral template with fundamental frequency $\nu_0$. As explained in Section 1, such local displacements of frequency energy are very common when dealing with real data. A measure of fit invariant to small perturbations of the frequency support would be desirable in such a setting, and this is precisely what OT can bring.

3 Elements of optimal transportation

Given a discrete probability distribution v (a non-negative real-valued column vector of dimension M summing to one) and a target distribution $\hat v$ (with the same properties), OT computes a transportation matrix T belonging to the set
$$\Theta \stackrel{\text{def}}{=} \Big\{ T \in \mathbb{R}_+^{M \times M} \ \Big|\ \forall i, j = 1, \dots, M,\ \sum_{j=1}^M t_{ij} = v_i,\ \sum_{i=1}^M t_{ij} = \hat v_j \Big\}.$$
T establishes a bi-partite graph connecting the two distributions.
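For reference, problem (1) with W fixed is routinely solved with EM-type multiplicative updates. The sketch below is our own illustration of that standard scheme, not the authors' implementation; the function name and stopping rule are ours.

```python
import numpy as np

def plca_activations(V, W, n_iter=200, eps=1e-12):
    """EM/multiplicative updates for problem (1): min_H KL(V|WH)
    with the columns of H on the simplex (W fixed, columns sum to one)."""
    K, N = W.shape[1], V.shape[1]
    H = np.full((K, N), 1.0 / K)                 # uniform initialisation
    for _ in range(n_iter):
        R = V / (W @ H + eps)                    # ratio v_in / vhat_in
        H *= W.T @ R                             # EM step
        H /= H.sum(axis=0, keepdims=True) + eps  # project back to simplex
    return H
```

Each update is guaranteed not to increase the KL objective; for a dictionary W whose columns sum to one and normalised data, the simplex projection is essentially a no-op after the first iteration.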
In simple words, an amount (or, in typical OT parlance, a "mass") of every coefficient of the vector v is transported to an entry of $\hat v$. The sum of the amounts transported to the jth entry of $\hat v$ must equal $\hat v_j$. The value of $t_{ij}$ is the amount transported from the ith entry of v to the jth entry of $\hat v$. In our particular setting, the vector v is a distribution of spectral energies $v_1, \dots, v_M$ at sampling frequencies $f_1, \dots, f_M$. Without additional constraints, the problem of finding a non-negative matrix $T \in \Theta$ has an infinite number of solutions. As such, OT takes into account the cost of transporting an amount from the ith entry of v to the jth entry of $\hat v$, denoted $c_{ij}$ (a non-negative real-valued number). Equipped with this cost function, OT involves solving the optimisation problem
$$\min_{T} \ J(T \,|\, v, \hat v, C) = \sum_{ij} c_{ij} t_{ij} \quad \text{s.t.} \quad T \in \Theta, \qquad (2)$$
where C is the non-negative square matrix of size M with elements $c_{ij}$. Eq. (2) defines a convex linear program. The value of the function $J(T|v, \hat v, C)$ at its minimum is denoted $D_C(v|\hat v)$. When C is a symmetric matrix such that $c_{ij} = \|f_i - f_j\|_p^p$, where we recall that $f_i$ and $f_j$ are the frequencies in Hertz indexed by i and j, $D_C(v|\hat v)$ defines a metric (i.e., a symmetric divergence that satisfies the triangle inequality) coined the Wasserstein distance or earth mover's distance (Rubner et al., 1998; Villani, 2009). In other cases, in particular when the matrix C is not symmetric as in the next section, $D_C(v|\hat v)$ is not a metric in general, but is still a valid measure of fit. For generality, we will refer to it as the "OT divergence". By construction, the OT divergence can explicitly embed a form of invariance to displacements of support, as defined by the transportation cost matrix C. For example, in the spectral decomposition setting, the matrix with entries of the form $c_{ij} = (f_i - f_j)^2$ will increasingly penalise frequency displacements as the distance between frequency bins increases.
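As a concrete illustration, problem (2) can be handed to a generic LP solver by flattening T and encoding the two marginal constraints as linear equalities. This is our own sketch using scipy's `linprog` (function name ours); for the dimensions encountered with real spectra, dedicated OT solvers would be preferable.

```python
import numpy as np
from scipy.optimize import linprog

def ot_divergence(v, vhat, C):
    """Solve the linear program (2): min_{T in Theta} <T, C>.
    The variable is the flattened M x M matrix T (row-major); the
    constraints T 1 = v and T^T 1 = vhat are linear equalities."""
    M = len(v)
    A_eq = np.zeros((2 * M, M * M))
    for i in range(M):
        A_eq[i, i * M:(i + 1) * M] = 1.0   # row sum:    sum_j t_ij = v_i
        A_eq[M + i, i::M] = 1.0            # column sum: sum_i t_ij = vhat_j
    b_eq = np.concatenate([v, vhat])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.fun, res.x.reshape(M, M)
```

Moving a unit mass by one bin under the quadratic cost $c_{ij} = (f_i - f_j)^2$ then costs exactly the squared bin distance, which makes the "small displacements are cheap" behaviour explicit.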
This precisely remedies the limitation of the separable KL divergence presented in Section 2. As such, the next section addresses variants of spectral unmixing based on the Wasserstein distance.

4 Optimal spectral transportation (OST)

Unmixing with OT. In light of the above discussion, a direct solution to the sensitivity of PLCA to small frequency displacements consists in replacing the KL divergence with the OT divergence. This amounts to solving the optimisation problem
$$\min_{H \ge 0} D_C(V \,|\, WH) \quad \text{s.t.} \quad \forall n,\ \|h_n\|_1 = 1, \qquad (3)$$
where $D_C(V|\hat V) = \sum_n D_C(v_n|\hat v_n)$, W is fixed and populated with pure note spectra, and C penalises large displacements of frequency support. This approach is a particular case of NMF with the Wasserstein distance, which has been considered in a face recognition setting by Sandler and Lindenbaum (2011), with subsequent developments by Zen et al. (2014) and Rolet et al. (2016). This approach is relevant to our spectral unmixing scenario but, as will be discussed in Section 5, is on the downside computationally intensive. It also requires the columns of W to be set to realistic note templates, which is still constraining. The next two sections describe a computationally friendlier approach which additionally removes the difficulty of choosing W appropriately.

Harmonic-invariant transportation cost. In the approach above, the harmonic modelling is conveyed by the dictionary W (consisting of comb-like pure note spectra) and the invariance to small frequency displacements is introduced via the matrix C. In this section we propose to model both harmonicity and local invariance through the transportation cost matrix C. Loosely speaking, we want to define an equivalence class between musical spectra that takes into account their inherent harmonic nature. As such, we essentially impose that a harmonic frequency (i.e., a close multiple of its fundamental) can be considered equivalent to its fundamental, the only target of multi-pitch estimation.
As such, we assume that a mass at one frequency can be transported to a divisor frequency with no cost. In other words, a mass at frequency $f_i$ can be transported with no cost to $f_i/2$, $f_i/3$, $f_i/4$, and so on, down to the sampling resolution. One possible cost matrix that embeds this property is
$$c_{ij} = \min_{q=1,\dots,q_{\max}} (f_i - q f_j)^2 + \epsilon\, \delta_{q \neq 1}, \qquad (4)$$
where $q_{\max}$ is the ceiling of $f_i/f_j$ and $\epsilon$ is a small value. The term $\epsilon\, \delta_{q \neq 1}$ favours the discrimination of octaves. Indeed, it penalises the transportation of a note of fundamental frequency $2\nu_0$ or $\nu_0/2$ to the spectral template with fundamental frequency $\nu_0$, which would be costless without this additive term. Let us denote by $C_h$ the transportation cost matrix defined by Eq. (4). Fig. 1 compares $C_h$ to the more standard quadratic cost $C_2$ defined by $c_{ij} = (f_i - f_j)^2$.

Figure 1: Comparison of transportation cost matrices $C_2$ and $C_h$ (full matrices and selected columns).

Figure 2: Three example spectra $v_n$ compared to a given template $\hat v$ (left) and computed divergences (right). The template is a mere Dirac vector placed at a particular frequency $\nu_0$. $D_{\ell_2}$ denotes the standard quadratic error $\|x - y\|_2^2$. The computed divergences are:

              D_ℓ2    D_KL    D_C2      D_Ch
D(v1|v̂)      1.13    72.92   145.00    134.32
D(v2|v̂)      1.13    5.42    10.00     10.00
D(v3|v̂)      0.91    2.02    1042.67   1.00

By construction of $D_{C_h}$, sample $v_3$, which is harmonically related to the template, returns a very good fit with the latter OT divergence. Note that it does not make sense to compare output values of different divergences; only the relative comparison of output values of the same divergence for different input samples is meaningful.

With the quadratic cost, only local displacements are permissible. In contrast, the harmonic-invariant cost additionally permits larger displacements to divisor frequencies, improving robustness to variations of timbre in addition to inharmonicities.

Dictionary of Dirac vectors. Having designed an OT divergence that encodes inherent properties of musical signals, we still need to choose a dictionary W that will encode the fundamental frequencies of the notes to identify. Typically, these will consist of the physical frequencies of the 12 notes of the chromatic scale (from note A to note G, including half-tones), over several octaves. As mentioned in Section 1, one possible strategy is to populate W with spectral note templates. However, as also discussed, the performance of the resulting unmixing method will be capped by the representativeness of the chosen set of templates. A most welcome consequence of using the OT divergence built on the harmonic-invariant cost matrix $C_h$ is that we may use for W a mere set of Dirac vectors placed at the fundamental frequencies $\nu_1, \dots, \nu_K$ of the notes to identify and separate. Indeed, under the proposed setting, a real note spectrum (composed of one fundamental and multiple harmonic frequencies) can be transported with no cost to its fundamental. Similarly, a spectral sample composed of several notes can be transported to a mixture of Dirac vectors placed at their fundamental frequencies. This simply eliminates the problem of choosing a representative dictionary! This very appealing property is illustrated in Fig. 2. Furthermore, the particularly simple structure of the dictionary leads to a very efficient unmixing algorithm, as explained in the next section. In the following, the unmixing method consisting of the combined use of the harmonic-invariant cost matrix $C_h$ and the dictionary of Dirac vectors will be coined "optimal spectral transportation" (OST).
At this level, we assume for simplicity that the set of K fundamental frequencies $\{\nu_1, \dots, \nu_K\}$ is contained in the set of sampled frequencies $\{f_1, \dots, f_M\}$. This means that $w_k$ (the kth column of W) is zero everywhere except at the entry i such that $f_i = \nu_k$, where $w_{ik} = 1$. This is typically not the case in practice, where the sampled frequencies are fixed by the sampling rate, of the form $f_i = 0.5\,(i/T) f_s$, and where the fundamental frequencies $\nu_k$ are fixed by music theory. Our approach can actually deal with such a discrepancy, as will be explained later in Section 5.

5 Optimisation

OT unmixing with linear programming. We start by describing optimisation for the state-of-the-art OT unmixing problem described by Eq. (3) and proposed by Sandler and Lindenbaum (2011). First, since the objective function is separable with respect to samples, the optimisation problem decouples with respect to the activation columns $h_n$. Dropping the sample index n and combining Eqs. (2) and (3), optimisation thus reduces to solving, for every sample, a problem of the form
$$\min_{h \ge 0,\, T \ge 0} \ \langle T, C \rangle = \sum_{ij} t_{ij} c_{ij} \quad \text{s.t.} \quad T \mathbf{1}_M = v,\ T^\top \mathbf{1}_M = Wh, \qquad (5)$$
where $\mathbf{1}_M$ is a vector of dimension M containing only ones and $\langle \cdot, \cdot \rangle$ is the Frobenius inner product. Vectorising the variables T and h into a single vector of dimension $M^2 + K$, problem (5) can be turned into a canonical linear program. Because of the large dimension of the variable (typically in the order of $10^5$), its resolution can however be very demanding, as will be shown in the experiments.

Optimisation for OST. We now assume that W is a set of Dirac vectors, as explained at the end of Section 4. We also assume that K < M, which is the usual scenario. Indeed, K is typically in the order of a few tens, while M is in the order of a few hundreds. In such a setting, $\hat v = Wh$ contains by design at most K non-zero coefficients, located at the entries such that $f_i = \nu_k$. We denote this set of frequency indices by S.
Hence, for $j \notin S$ we have $\hat v_j = 0$ and thus $\sum_i t_{ij} = 0$, by the second constraint of Eq. (5). By the non-negativity of T, this implies that T has only K non-zero columns, indexed by $j \in S$. Denoting by $\tilde T$ this subset of columns, and by $\tilde C$ the corresponding subset of columns of C, problem (5) reduces to
$$\min_{h \ge 0,\, \tilde T \ge 0} \ \langle \tilde T, \tilde C \rangle \quad \text{s.t.} \quad \tilde T \mathbf{1}_K = v,\ \tilde T^\top \mathbf{1}_M = h. \qquad (6)$$
This is an optimisation problem of significantly reduced dimension $(M + 1)K$. Even more appealing, the problem has a simple closed-form solution. Indeed, the variable h plays a virtual role in problem (6): it only appears in the second constraint, which de facto becomes a free constraint. Thus problem (6) can be solved with respect to $\tilde T$ regardless of h, and h is then simply obtained by summing the columns of $\tilde T^\top$ at the solution. Now, the problem
$$\min_{\tilde T \ge 0} \ \langle \tilde T, \tilde C \rangle \quad \text{s.t.} \quad \tilde T \mathbf{1}_K = v \qquad (7)$$
decouples with respect to the rows $\tilde t_i$ of $\tilde T$ and becomes, for all $i = 1, \dots, M$,
$$\min_{\tilde t_i \ge 0} \ \sum_k \tilde t_{ik} \tilde c_{ik} \quad \text{s.t.} \quad \sum_k \tilde t_{ik} = v_i. \qquad (8)$$
The solution is simply given by $\tilde t_{i k_i^\star} = v_i$ for $k_i^\star = \arg\min_k \{\tilde c_{ik}\}$, and $\tilde t_{ik} = 0$ for $k \neq k_i^\star$. Introducing the labelling matrix L, which is everywhere zero except for the indices $(i, k_i^\star)$ where it equals 1, the solution to OST is trivially given by $\hat h = L^\top v$. Thus, under the specific assumption that W is a set of Dirac vectors, the challenging problem (5) has been reduced to an effortless assignment problem for T and a simple sum for h. Note that the algorithm is independent of the particular structure of C. In the end, the complexity per frame of OST reduces to O(M), which starkly contrasts with the complexity of PLCA, in the order of O(KM) per iteration. In Section 4, we assumed for simplicity that the set of fundamental frequencies $\{\nu_k\}_k$ was contained in the set of sampled frequencies $\{f_i\}_i$. As a matter of fact, this assumption can be trivially lifted in the proposed setting of OST.
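The closed-form solution of Eq. (8) amounts to a row-wise argmin followed by a sum; a minimal sketch (function name ours):

```python
import numpy as np

def ost(v, C_tilde):
    """Closed-form OST: every frequency bin i sends all of its mass v_i
    to the note k minimizing c~_ik, so h = L^T v with L the 0/1
    labelling matrix.  C_tilde has shape (M, K)."""
    M, K = C_tilde.shape
    k_star = np.argmin(C_tilde, axis=1)     # winner note per bin
    L = np.zeros((M, K))
    L[np.arange(M), k_star] = 1.0
    return L.T @ v
```

Once $\tilde C$ is precomputed, decomposing a frame is a single pass over the M bins, which is where the O(M) per-frame complexity comes from.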
Indeed, we may construct the cost matrix $\tilde C$ (of dimensions M × K) by replacing the target frequencies $f_j$ in Eq. (4) by the theoretical fundamental frequencies $\nu_k$. Namely, we may simply set the coefficients of $\tilde C$ to $\tilde c_{ik} = \min_q (f_i - q \nu_k)^2 + \epsilon\, \delta_{q \neq 1}$ in the implementation. The matrix $\tilde T$ then indicates how each sample v is transported to the Dirac vectors placed at the fundamental frequencies $\{\nu_k\}_k$, without the need for the actual Dirac vectors themselves, which elegantly solves the frequency sampling problem.

OST with entropic regularisation (OSTe). The procedure described above leads to a winner-takes-all transportation of all of $v_i$ to its cost-minimum target entry $k_i^\star$. We found it useful in practice to relax this hard assignment and distribute energies more evenly by using the entropic regularisation of Cuturi (2013). It consists of penalising the fit $\langle \tilde T, \tilde C \rangle$ in Eq. (6) with an additional term $\Omega_e(\tilde T) = \sum_{ik} \tilde t_{ik} \log(\tilde t_{ik})$, weighted by the hyper-parameter $\lambda_e$. The negentropic term $\Omega_e(\tilde T)$ promotes the transportation of $v_i$ to several entries, leading to a smoother estimate of $\tilde T$. As explained in the supplementary material, one can show that the negentropy-regularised problem is a Bregman projection (Benamou et al., 2015) and again has a closed-form solution $\hat h = L_e^\top v$, where $L_e$ is the M × K matrix with coefficients $l_{ik} = \exp(-\tilde c_{ik}/\lambda_e) / \sum_p \exp(-\tilde c_{ip}/\lambda_e)$. The limiting cases $\lambda_e = 0$ and $\lambda_e = \infty$ return the unregularised OST estimate and the maximum-entropy estimate $h_k = 1/K$, respectively. Because $L_e$ becomes a full matrix, the complexity per frame of OSTe becomes O(KM).

OST with group regularisation (OSTg). We explained above that the transportation matrix T has a strong group structure, in the sense that it contains by construction M − K null columns, so that only the subset $\tilde T$ needs to be considered. Because a small number of the K possible notes will be played at every time frame, the matrix $\tilde T$ will additionally have a significant number of null columns.
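The OSTe closed form above is a row-wise softmax of the negated, scaled costs; a minimal sketch (function name ours):

```python
import numpy as np

def ost_entropic(v, C_tilde, lam):
    """Entropy-regularised OST: the hard argmin of OST becomes the
    row-wise softmax L_e[i,k] = exp(-c~_ik/lam) / sum_p exp(-c~_ip/lam),
    and h = L_e^T v, still in closed form."""
    Z = -C_tilde / lam
    Z -= Z.max(axis=1, keepdims=True)    # stabilise before exponentiating
    L_e = np.exp(Z)
    L_e /= L_e.sum(axis=1, keepdims=True)
    return L_e.T @ v
```

Small $\lambda_e$ reproduces the winner-takes-all OST assignment, while large $\lambda_e$ spreads each bin's mass uniformly over the K notes, matching the two limiting cases stated above.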
This heavily suggests using group-sparse regularisation in the estimation of $\tilde T$. As such, we also consider problem (6) penalised by the additional term $\Omega_g(\tilde T) = \sum_k \sqrt{\|\tilde t_k\|_1}$, which promotes group-sparsity at the column level (Huang et al., 2009). Unlike OST or OSTe, OSTg does not offer a closed-form solution. Following Courty et al. (2014), a majorisation-minimisation procedure based on the local linearisation of $\Omega_g(\tilde T)$ can be employed; the details are given in the supplementary material. The resulting algorithm consists in iteratively applying unregularised OST, as in Eq. (6), with the iteration-dependent transportation cost matrix $\tilde C^{(\text{iter})} = \tilde C + \tilde R^{(\text{iter})}$, where $\tilde R^{(\text{iter})}$ is the M × K matrix with coefficients $\tilde r^{(\text{iter})}_{ik} = \tfrac{1}{2} \|\tilde t^{(\text{iter})}_k\|_1^{-1/2}$. Note that the proposed group-regularisation of $\tilde T$ corresponds to a sparse regularisation of h. This is because $h_k = \|\tilde t_k\|_1$ and thus $\Omega_g(\tilde T) = \sum_k \sqrt{h_k}$. Finally, note that OSTe and OSTg can be implemented simultaneously, leading to OSTe+g, by considering the optimisation of the doubly-penalised objective function $\langle \tilde T, \tilde C \rangle + \lambda_e\, \Omega_e(\tilde T) + \lambda_g\, \Omega_g(\tilde T)$, addressed in the supplementary material.

6 Experiments

Toy experiments with simulated data. In this section we illustrate the robustness, the flexibility and the efficiency of OST on two simulated examples. The top plots of Fig. 3 display a synthetic dictionary of 8 harmonic spectral templates, referred to as the "harmonic dictionary". They have been generated as Gaussian kernels placed at a fundamental frequency and its multiples, using exponential dampening of the amplitudes. As everywhere in the paper, the spectral templates are normalised to sum to one. Note that the 8th template is the upper octave of the first one. We compare the unmixing performance of five methods in two different scenarios. The five methods are as follows. PLCA is the method described in Section 2, where the dictionary W is the harmonic dictionary.
Convergence is stopped when the relative difference of the objective function between two iterations falls below $10^{-5}$ or when the number of iterations (per frame) exceeds 1000. OTh is the unmixing method with the OT divergence, as in the first paragraph of Section 4, using the harmonic transportation cost matrix $C_h$ and the harmonic dictionary. OST is like OTh, but using a dictionary of Dirac vectors (placed at the 8 fundamental frequencies characterising the harmonic dictionary). OSTe, OSTg and OSTe+g are the regularised variants of OST described in Section 5. The iterative procedure in the group-regularised variants is run for 10 iterations (per frame). In the first experimental scenario, reported in Fig. 3 (a), the data sample is generated by mixing the 1st and 4th elements of the harmonic dictionary, but introducing a small shift of the true fundamental frequencies (with the shift propagated to the harmonic frequencies). This mimics the effect of possible inharmonicities or of an ill-tuned instrument. The middle plot of Fig. 3 (a) displays the generated sample, together with the "theoretical sample", i.e., without the frequency shift. This shows how a slight shift of the fundamental frequencies can greatly impact the overall spectral distribution. The bottom plot displays the true activation vector and the estimates returned by the five methods. The table reports the value of the (arbitrary) error measure $\|\hat h - h_{\text{true}}\|_1$ together with the run time (on an average desktop PC using a MATLAB implementation) for every method.
The results show that the group-regularised variants of OST lead to the best performance with a very light computational burden, and without using the true harmonic dictionary.

Figure 3: Unmixing under model misspecification. See text for details.

(a) Unmixing with shifted fundamental frequencies
Method    PLCA   OTh    OST    OSTg   OSTe   OSTe+g
ℓ1 error  0.900  0.340  0.534  0.021  0.660  0.015
Time (s)  0.057  6.541  0.006  0.007  0.007  0.013

(b) Unmixing with wrong harmonic amplitudes
Method    PLCA   OTh    OST    OSTg   OSTe   OSTe+g
ℓ1 error  0.791  0.430  0.971  0.045  0.911  0.048
Time (s)  0.019  6.529  0.006  0.006  0.005  0.010

In the second experimental scenario, reported in Fig. 3 (b), the data sample is generated by mixing the 1st and 6th elements of the harmonic dictionary, with the right fundamental and harmonic frequencies, but where the spectral amplitudes at the latter do not follow the exponential dampening of the template dictionary (a variation of timbre). Here again the group-regularised variants of OST outperform the state-of-the-art approaches, both in accuracy and run time.

Transcription of real musical data. We consider in this section the transcription of a selection of real piano recordings, obtained from the MAPS dataset (Emiya et al., 2010). The data comes with a ground-truth binary “piano-roll” which indicates the active notes at every time. The note fundamental frequencies are given in MIDI, a standard musical integer-valued frequency scale that matches the keys of a piano, with 12 half-tones (i.e., piano keys) per octave. The spectrogram of each recording is computed with a Hann window of size 93 ms and 50% overlap (fs = 44.1 kHz). The columns (time frames) are then normalised to produce V. Each recording is decomposed with PLCA, OST and OSTe, with K = 60 notes (5 octaves). Half of the recording is used for validation of the hyper-parameters and the other half is used as test data.
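The spectrogram preprocessing just described (Hann window, 50% overlap, columns normalised to sum to one) can be sketched as follows. The function name and defaults are ours, and rfft magnitudes stand in for whatever spectrogram routine the authors actually used; this is an illustration, not their code.

```python
import numpy as np

def normalized_spectrogram(signal, fs, win_ms=93, overlap=0.5):
    """Magnitude spectrogram with a Hann window and 50% overlap, columns
    (time frames) normalised to sum to one so that each frame is a
    distribution over frequencies, as required by the OT-based fits."""
    win = int(round(win_ms * 1e-3 * fs))          # window length in samples
    hop = int(win * (1 - overlap))                # hop size for 50% overlap
    w = np.hanning(win)
    frames = [signal[s:s + win] * w
              for s in range(0, len(signal) - win + 1, hop)]
    V = np.abs(np.fft.rfft(np.array(frames), axis=1)).T   # freq x time
    return V / V.sum(axis=0, keepdims=True)               # columns sum to 1
```

Each column of the returned V can then be fed frame by frame to the unmixing methods compared above.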
For PLCA, we validated 4 and 3 values of the width and amplitude dampening of the Gaussian kernels used to synthesise the dictionary. For OST, we set ϵ_q = √q ϵ₀ in Eq. (4), which was found to satisfactorily improve the discrimination of octaves increasingly with frequency, and validated 5 orders of magnitude of ϵ₀. For OSTe, we additionally validated 4 orders of magnitude of λ_e. Each of the three methods returns an estimate of H. The estimate is turned into a 0/1 piano-roll by only retaining the support of its P_n maximum entries at every frame n, where P_n is the ground-truth number of notes played in frame n. The estimated piano-roll is then numerically compared to its ground truth using the F-measure, a global recognition measure which accounts both for precision and recall and which is bounded between 0 (critically wrong) and 1 (perfect recognition). Our evaluation framework follows standard practice in music transcription evaluation; see for example (Daniel et al., 2008). As detailed in the supplementary material, it can be shown that OSTg and OSTe+g do not change the location of the maximum entries in the estimates of H returned by OST and OSTe, respectively, but only their amplitude. As such, they lead to the same F-measures as OST and OSTe, and we did not include them in the experiments of this section. We first illustrate the complexity of real-data spectra in Fig. 4, where the amplitudes of the first six partials (the components corresponding to the harmonic frequencies) of a single piano note are represented along time. Depending on the partial order q, the amplitude evolves with asynchronous beats and with various slopes. This behaviour is characteristic of piano sounds, in which each note comes from the vibration of up to three coupled strings. As a consequence, the spectral envelope of such notes cannot be well modelled by a fixed amplitude pattern. Fig.
4 shows that, thanks to its flexibility, OSTe can perfectly recover the true fundamental frequency (MIDI 50) while PLCA is prone to octave errors (confusions between MIDI 50 and MIDI 62).

Figure 4: First 6 partials and transcription of a single piano note (note D3, ν0 = 147 Hz, MIDI 50). (Panels: (a) thresholded OSTe transcription and (b) thresholded PLCA transcription, pitch (MIDI) against time (s).)

Table 1: Recognition performance (F-measure values) and average computational unmixing times.
MAPS dataset file IDs   PLCA    PLCA+noise  OST    OST+noise  OSTe   OSTe+noise
chpn_op25_e4_ENSTDkAm   0.679   0.671       0.566  0.564      0.695  0.695
mond_2_SptkBGAm         0.616   0.713       0.470  0.534      0.610  0.607
mond_2_SptkBGCl         0.645   0.687       0.583  0.676      0.695  0.730
muss_1_ENSTDkAm         0.613   0.478       0.513  0.550      0.671  0.667
muss_2_AkPnCGdD         0.587   0.574       0.531  0.611      0.667  0.675
mz_311_1_ENSTDkCl       0.561   0.593       0.580  0.628      0.625  0.665
mz_311_1_StbgTGd2       0.663   0.617       0.701  0.718      0.747  0.747
Average                 0.624   0.619       0.563  0.612      0.673  0.684
Time (s)                14.861  15.420      0.004  0.005      0.210  0.202

Then, Table 1 reports the F-measures returned by the three competing approaches on seven 15-s extracts of pieces from Chopin, Beethoven, Mussorgski and Mozart. For each of the three methods, we have also included a variant that incorporates a flat component in the dictionary that can account for noise or non-harmonic components. In PLCA, this merely consists in adding a constant vector with coefficients w_f(K+1) = 1/M to W. In OST or OSTe this consists in adding a constant column to C̃, whose amplitude has also been validated over 3 orders of magnitude. OST performs comparably or slightly inferiorly to PLCA, but with an impressive gain in computational time (∼3000× speedup). The best overall performance is obtained with OSTe+noise, with an average ∼10% performance gain over PLCA and a ∼750× speedup. A Python implementation of OST and a real-time demonstrator are available at https://github.com/rflamary/OST.

7 Conclusions

In this paper we have introduced a new paradigm for spectral dictionary-based music transcription. As compared to state-of-the-art approaches, we have proposed a holistic measure of fit which is robust to local and harmonically-related displacements of frequency energies. It is based on a new form of transportation cost matrix that takes into account the inherent harmonic structure of musical signals. The proposed transportation cost matrix in turn allows the use of a simplistic dictionary composed of Dirac vectors placed at the target fundamental frequencies, eliminating the problem of choosing a meaningful dictionary. Experimental results have shown the robustness and accuracy of the proposed approach, which strikingly does not come at the price of computational efficiency. Instead, the particular structure of the dictionary allows for a simple algorithm that is far faster than state-of-the-art NMF-like approaches. The proposed approach offers new foundations, with promising results and room for improvement. In particular, we believe exciting avenues of research concern the learning of Ch from examples and extensions to other areas, such as remote sensing, using application-specific forms of C.

Acknowledgments. This work is supported in part by the European Research Council (ERC) under the European Union’s Horizon 2020 research & innovation programme (project FACTORY) and by the French ANR JCJC program MAD (ANR-14-CE27-0002). Many thanks to Antony Schutz for generating & providing some of the musical data.

References

J.-D. Benamou, G. Carlier, M. Cuturi, L. Nenna, and G. Peyré. Iterative Bregman projections for regularized transportation problems. SIAM Journal on Scientific Computing, 37(2):A1111–A1138, 2015.
N. Boulanger-Lewandowski, Y. Bengio, and P. Vincent. Discriminative non-negative matrix factorization for multiple pitch estimation. In Proc. International Society for Music Information Retrieval Conference (ISMIR), 2012.
N.
Courty, R. Flamary, and D. Tuia. Domain adaptation with regularized optimal transport. In Proc. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2014.
M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transportation. In Advances in Neural Information Processing Systems (NIPS), 2013.
A. Daniel, V. Emiya, and B. David. Perceptually-based evaluation of the errors usually made when automatically transcribing music. In Proc. International Society for Music Information Retrieval Conference (ISMIR), 2008.
V. Emiya, R. Badeau, and B. David. Multipitch estimation of piano sounds using a new probabilistic spectral smoothness principle. IEEE Trans. Audio, Speech, and Language Processing, 18(6):1643–1654, 2010.
T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 42(1):177–196, 2001.
J. Huang, S. Ma, H. Xie, and C.-H. Zhang. A group bridge approach for variable selection. Biometrika, 96(2):339–355, 2009.
L. Oudre, Y. Grenier, and C. Févotte. Chord recognition by fitting rescaled chroma vectors to chord templates. IEEE Trans. Audio, Speech, and Language Processing, 19(7):2222–2233, 2011.
F. Rigaud, B. David, and L. Daudet. A parametric model and estimation techniques for the inharmonicity and tuning of the piano. The Journal of the Acoustical Society of America, 133(5):3107–3118, 2013.
A. Rolet, M. Cuturi, and G. Peyré. Fast dictionary learning with a smoothed Wasserstein loss. In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS), 2016.
Y. Rubner, C. Tomasi, and L. Guibas. A metric for distributions with applications to image databases. In Proc. International Conference on Computer Vision (ICCV), 1998.
R. Sandler and M. Lindenbaum. Nonnegative matrix factorization with earth mover’s distance metric for image analysis. IEEE Trans. Pattern Analysis and Machine Intelligence, 33(8):1590–1602, 2011.
P. Smaragdis and J. C. Brown. Non-negative matrix factorization for polyphonic music transcription. In Proc. IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), 2003.
P. Smaragdis, B. Raj, and M. V. Shashanka. A probabilistic latent variable model for acoustic modeling. In Proc. NIPS Workshop on Advances in Models for Acoustic Processing, 2006.
R. Typke, R. C. Veltkamp, and F. Wiering. Searching notated polyphonic music using transportation distances. In Proc. ACM International Conference on Multimedia, 2004.
C. Villani. Optimal Transport: Old and New. Springer, 2009.
E. Vincent, N. Bertin, and R. Badeau. Adaptive harmonic spectral decomposition for multiple pitch estimation. IEEE Trans. Audio, Speech, and Language Processing, 18:528–537, 2010.
G. Zen, E. Ricci, and N. Sebe. Simultaneous ground metric learning and matrix factorization with earth mover’s distance. In Proc. International Conference on Pattern Recognition (ICPR), 2014.
Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
Michaël Defferrard, Xavier Bresson, Pierre Vandergheynst
EPFL, Lausanne, Switzerland
{michael.defferrard,xavier.bresson,pierre.vandergheynst}@epfl.ch

Abstract

In this work, we are interested in generalizing convolutional neural networks (CNNs) from low-dimensional regular grids, where image, video and speech are represented, to high-dimensional irregular domains, such as social networks, brain connectomes or word embeddings, all of which can be represented by graphs. We present a formulation of CNNs in the context of spectral graph theory, which provides the necessary mathematical background and efficient numerical schemes to design fast localized convolutional filters on graphs. Importantly, the proposed technique offers the same linear computational complexity and constant learning complexity as classical CNNs, while being universal to any graph structure. Experiments on MNIST and 20NEWS demonstrate the ability of this novel deep learning system to learn local, stationary, and compositional features on graphs.

1 Introduction

Convolutional neural networks [19] offer an efficient architecture to extract highly meaningful statistical patterns in large-scale and high-dimensional datasets. The ability of CNNs to learn local stationary structures and compose them to form multi-scale hierarchical patterns has led to breakthroughs in image, video, and sound recognition tasks [18]. Precisely, CNNs extract the local stationarity property of the input data or signals by revealing local features that are shared across the data domain. These similar features are identified with localized convolutional filters or kernels, which are learned from the data. Convolutional filters are shift- or translation-invariant filters, meaning they are able to recognize identical features independently of their spatial locations.
Localized kernels or compactly supported filters refer to filters that extract local features independently of the input data size, with a support size that can be much smaller than the input size. User data on social networks, gene data on biological regulatory networks, log data on telecommunication networks, or text documents on word embeddings are important examples of data lying on irregular or non-Euclidean domains that can be structured with graphs, which are universal representations of heterogeneous pairwise relationships. Graphs can encode complex geometric structures and can be studied with strong mathematical tools such as spectral graph theory [6]. A generalization of CNNs to graphs is not straightforward, as the convolution and pooling operators are only defined for regular grids. This makes the extension challenging, both theoretically and implementation-wise. The major bottleneck of generalizing CNNs to graphs, and one of the primary goals of this work, is the definition of localized graph filters which are efficient to evaluate and learn. Precisely, the main contributions of this work are summarized below.

1. Spectral formulation. A spectral graph theoretical formulation of CNNs on graphs built on established tools in graph signal processing (GSP) [31].
2. Strictly localized filters. Enhancing [4], the proposed spectral filters are provably strictly localized in a ball of radius K, i.e. K hops from the central vertex.
3. Low computational complexity. The evaluation complexity of our filters is linear w.r.t. the filter support size K and the number of edges |E|. Importantly, as most real-world graphs are highly sparse, we have |E| ≪ n² and |E| = kn for the widespread k-nearest neighbor (NN) graphs, leading to a linear complexity w.r.t. the input data size n. Moreover, this method avoids the Fourier basis altogether, and thus the expensive eigenvalue decomposition (EVD) necessary to compute it, as well as the need to store the basis, a matrix of size n². That is especially relevant when working with limited GPU memory. Besides the data, our method only requires storing the Laplacian, a sparse matrix of |E| non-zero values.
4. Efficient pooling. We propose an efficient pooling strategy on graphs which, after a rearrangement of the vertices as a binary tree structure, is analog to pooling of 1D signals.
5. Experimental results. We present multiple experiments that ultimately show that our formulation is (i) a useful model, (ii) computationally efficient and (iii) superior both in accuracy and complexity to the pioneering spectral graph CNN introduced in [4]. We also show that our graph formulation performs similarly to a classical CNN on MNIST and study the impact of various graph constructions on performance. The TensorFlow [1] code to reproduce our results and apply the model to other data is available as open-source software.1

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Architecture of a CNN on graphs and the four ingredients of a (graph) convolutional layer. (Pipeline: input graph signals, e.g. bags of words; graph signal filtering: 1. convolution, 2. non-linear activation; graph coarsening: 3. sub-sampling, 4. pooling; fully connected layers; output signals, e.g. labels.)

2 Proposed Technique

Generalizing CNNs to graphs requires three fundamental steps: (i) the design of localized convolutional filters on graphs, (ii) a graph coarsening procedure that groups together similar vertices and (iii) a graph pooling operation that trades spatial resolution for higher filter resolution.

2.1 Learning Fast Localized Spectral Filters

There are two strategies to define convolutional filters: either from a spatial approach or from a spectral approach. By construction, spatial approaches provide filter localization via the finite size of the kernel.
However, although graph convolution in the spatial domain is conceivable, it faces the challenge of matching local neighborhoods, as pointed out in [4]. Consequently, there is no unique mathematical definition of translation on graphs from a spatial perspective. On the other hand, a spectral approach provides a well-defined localization operator on graphs via convolutions with a Kronecker delta implemented in the spectral domain [31]. The convolution theorem [22] defines convolutions as linear operators that diagonalize in the Fourier basis (represented by the eigenvectors of the Laplacian operator). However, a filter defined in the spectral domain is not naturally localized, and translations are costly due to the O(n²) multiplication with the graph Fourier basis. Both limitations can however be overcome with a special choice of filter parametrization.

Graph Fourier Transform. We are interested in processing signals defined on undirected and connected graphs G = (V, E, W), where V is a finite set of |V| = n vertices, E is a set of edges and W ∈ R^(n×n) is a weighted adjacency matrix encoding the connection weight between two vertices. A signal x : V → R defined on the nodes of the graph may be regarded as a vector x ∈ R^n where x_i is the value of x at the ith node. An essential operator in spectral graph analysis is the graph Laplacian [6], whose combinatorial definition is L = D − W ∈ R^(n×n), where D ∈ R^(n×n) is the diagonal degree matrix with D_ii = Σ_j W_ij, and whose normalized definition is L = I_n − D^(−1/2) W D^(−1/2), where I_n is the identity matrix. As L is a real symmetric positive semidefinite matrix, it has a complete set of orthonormal eigenvectors {u_l}, l = 0, . . . , n−1, known as the graph Fourier modes, and their associated ordered real nonnegative eigenvalues {λ_l}, identified as the frequencies of the graph. The Laplacian is indeed diagonalized by the Fourier basis U = [u_0, . . . , u_(n−1)] ∈ R^(n×n) such that L = U Λ U^T, where Λ = diag([λ_0, . . . , λ_(n−1)]) ∈ R^(n×n). The graph Fourier transform of a signal x ∈ R^n is then defined as x̂ = U^T x ∈ R^n, and its inverse as x = U x̂ [31]. As on Euclidean spaces, that transform enables the formulation of fundamental operations such as filtering.

1 https://github.com/mdeff/cnn_graph

Spectral filtering of graph signals. As we cannot express a meaningful translation operator in the vertex domain, the convolution operator on graphs ∗G is defined in the Fourier domain such that x ∗G y = U((U^T x) ⊙ (U^T y)), where ⊙ is the element-wise Hadamard product. It follows that a signal x is filtered by g_θ as

y = g_θ(L) x = g_θ(U Λ U^T) x = U g_θ(Λ) U^T x. (1)

A non-parametric filter, i.e. a filter whose parameters are all free, would be defined as

g_θ(Λ) = diag(θ), (2)

where the parameter θ ∈ R^n is a vector of Fourier coefficients.

Polynomial parametrization for localized filters. There are however two limitations with non-parametric filters: (i) they are not localized in space and (ii) their learning complexity is in O(n), the dimensionality of the data. These issues can be overcome with the use of a polynomial filter

g_θ(Λ) = Σ_(k=0)^(K−1) θ_k Λ^k, (3)

where the parameter θ ∈ R^K is a vector of polynomial coefficients. The value at vertex j of the filter g_θ centered at vertex i is given by (g_θ(L) δ_i)_j = (g_θ(L))_(i,j) = Σ_k θ_k (L^k)_(i,j), where the kernel is localized via a convolution with a Kronecker delta function δ_i ∈ R^n. By [12, Lemma 5.2], d_G(i, j) > K implies (L^K)_(i,j) = 0, where d_G is the shortest path distance, i.e. the minimum number of edges connecting two vertices on the graph. Consequently, spectral filters represented by Kth-order polynomials of the Laplacian are exactly K-localized. Besides, their learning complexity is O(K), the support size of the filter, and thus the same complexity as classical CNNs.

Recursive formulation for fast filtering.
While we have shown how to learn localized filters with K parameters, the cost to filter a signal x as y = U g_θ(Λ) U^T x is still high, with O(n²) operations, because of the multiplication with the Fourier basis U. A solution to this problem is to parametrize g_θ(L) as a polynomial function that can be computed recursively from L, as K multiplications by a sparse L cost O(K|E|) ≪ O(n²). One such polynomial, traditionally used in GSP to approximate kernels (like wavelets), is the Chebyshev expansion [12]. Another option, the Lanczos algorithm [33], which constructs an orthonormal basis of the Krylov subspace K_K(L, x) = span{x, Lx, . . . , L^(K−1)x}, seems attractive because of the coefficients’ independence. It is however more convoluted and is thus left as future work. Recall that the Chebyshev polynomial T_k(x) of order k may be computed by the stable recurrence relation T_k(x) = 2x T_(k−1)(x) − T_(k−2)(x), with T_0 = 1 and T_1 = x. These polynomials form an orthogonal basis for L²([−1, 1], dy/√(1 − y²)), the Hilbert space of square integrable functions with respect to the measure dy/√(1 − y²). A filter can thus be parametrized as the truncated expansion

g_θ(Λ) = Σ_(k=0)^(K−1) θ_k T_k(Λ̃), (4)

of order K − 1, where the parameter θ ∈ R^K is a vector of Chebyshev coefficients and T_k(Λ̃) ∈ R^(n×n) is the Chebyshev polynomial of order k evaluated at Λ̃ = 2Λ/λ_max − I_n, a diagonal matrix of scaled eigenvalues that lie in [−1, 1]. The filtering operation can then be written as y = g_θ(L)x = Σ_(k=0)^(K−1) θ_k T_k(L̃) x, where T_k(L̃) ∈ R^(n×n) is the Chebyshev polynomial of order k evaluated at the scaled Laplacian L̃ = 2L/λ_max − I_n. Denoting x̄_k = T_k(L̃) x ∈ R^n, we can use the recurrence relation to compute x̄_k = 2 L̃ x̄_(k−1) − x̄_(k−2), with x̄_0 = x and x̄_1 = L̃ x. The entire filtering operation y = g_θ(L) x = [x̄_0, . . . , x̄_(K−1)] θ then costs O(K|E|) operations.

Learning filters.
The jth output feature map of the sample s is given by

y_(s,j) = Σ_(i=1)^(F_in) g_(θ_(i,j))(L) x_(s,i) ∈ R^n, (5)

where the x_(s,i) are the input feature maps and the F_in × F_out vectors of Chebyshev coefficients θ_(i,j) ∈ R^K are the layer’s trainable parameters. When training multiple convolutional layers with the backpropagation algorithm, one needs the two gradients

∂E/∂θ_(i,j) = Σ_(s=1)^S [x̄_(s,i,0), . . . , x̄_(s,i,K−1)]^T ∂E/∂y_(s,j)  and  ∂E/∂x_(s,i) = Σ_(j=1)^(F_out) g_(θ_(i,j))(L) ∂E/∂y_(s,j), (6)

where E is the loss energy over a mini-batch of S samples. Each of the above three computations boils down to K sparse matrix-vector multiplications and one dense matrix-vector multiplication, for a cost of O(K|E| F_in F_out S) operations. These can be efficiently computed on parallel architectures by leveraging tensor operations. Eventually, [x̄_(s,i,0), . . . , x̄_(s,i,K−1)] only needs to be computed once.

2.2 Graph Coarsening

The pooling operation requires meaningful neighborhoods on graphs, where similar vertices are clustered together. Doing this for multiple layers is equivalent to a multi-scale clustering of the graph that preserves local geometric structures. It is however known that graph clustering is NP-hard [5] and that approximations must be used. While there exist many clustering techniques, e.g. the popular spectral clustering [21], we are most interested in multilevel clustering algorithms where each level produces a coarser graph which corresponds to the data domain seen at a different resolution. Moreover, clustering techniques that reduce the size of the graph by a factor of two at each level offer precise control over the coarsening and pooling size. In this work, we make use of the coarsening phase of the Graclus multilevel clustering algorithm [9], which has been shown to be extremely efficient at clustering a large variety of graphs. Algebraic multigrid techniques on graphs [28] and the Kron reduction [32] are two methods worth exploring in future works.
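Before turning to the coarsening details, the recursive Chebyshev filtering of Section 2.1 (y = Σ_k θ_k T_k(L̃)x computed via x̄_k = 2L̃x̄_(k−1) − x̄_(k−2)) can be sketched as follows. This is an illustrative NumPy version with a dense Laplacian; a practical implementation would keep L sparse, as the O(K|E|) cost argument requires, and would use a cheap upper bound on λ_max. The function name is ours.

```python
import numpy as np

def chebyshev_filter(L, x, theta, lmax=None):
    """Apply y = sum_k theta[k] * T_k(L_tilde) x without any Fourier basis,
    using the recurrence x_k = 2 L_tilde x_{k-1} - x_{k-2}.
    L: graph Laplacian (dense here for clarity), theta: K Chebyshev
    coefficients. Cost: K matrix-vector products, i.e. O(K|E|) if L is sparse."""
    n = x.shape[0]
    if lmax is None:
        lmax = np.linalg.eigvalsh(L).max()   # in practice, a cheap upper bound
    L_tilde = 2.0 * L / lmax - np.eye(n)     # rescale spectrum into [-1, 1]
    x_prev, x_curr = x, L_tilde @ x          # x_0 = x, x_1 = L_tilde x
    y = theta[0] * x_prev
    if len(theta) > 1:
        y = y + theta[1] * x_curr
    for k in range(2, len(theta)):
        x_next = 2.0 * (L_tilde @ x_curr) - x_prev   # Chebyshev recurrence
        y = y + theta[k] * x_next
        x_prev, x_curr = x_curr, x_next
    return y
```

The result agrees with the direct spectral computation y = U g_θ(Λ̃) U^T x, but never forms or stores the Fourier basis U.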
Graclus [9], built on Metis [16], uses a greedy algorithm to compute successive coarser versions of a given graph and is able to minimize several popular spectral clustering objectives, from which we chose the normalized cut [30]. Graclus’ greedy rule consists, at each coarsening level, in picking an unmarked vertex i and matching it with one of its unmarked neighbors j that maximizes the local normalized cut W_ij (1/d_i + 1/d_j). The two matched vertices are then marked and the coarsened weights are set as the sum of their weights. The matching is repeated until all nodes have been explored. This is a very fast coarsening scheme which divides the number of nodes by approximately two (there may exist a few singletons, i.e. non-matched nodes) from one level to the next coarser level.

2.3 Fast Pooling of Graph Signals

Pooling operations are carried out many times and must be efficient. After coarsening, the vertices of the input graph and its coarsened versions are not arranged in any meaningful way. Hence, a direct application of the pooling operation would need a table to store all matched vertices. That would result in a memory-inefficient, slow, and hardly parallelizable implementation. It is however possible to arrange the vertices such that a graph pooling operation becomes as efficient as a 1D pooling. We proceed in two steps: (i) create a balanced binary tree and (ii) rearrange the vertices. After coarsening, each node has either two children, if it was matched at the finer level, or one, if it was not, i.e. the node was a singleton. From the coarsest to the finest level, fake nodes, i.e. disconnected nodes, are added to pair with the singletons such that each node has two children. This structure is a balanced binary tree: regular nodes (and singletons) have either (i) two regular nodes as children (e.g. level 1 vertex 0 in Figure 2) or (ii) one singleton and a fake node as children (e.g. level 2 vertex 0), and (iii) fake nodes always have two fake nodes as children (e.g. level 1 vertex 1). Input signals are initialized with a neutral value at the fake nodes, e.g. 0 when using a ReLU activation with max pooling. Because these nodes are disconnected, filtering does not impact the initial neutral value. While those fake nodes do artificially increase the dimensionality, and thus the computational cost, we found that, in practice, the number of singletons left by Graclus is quite low. Arbitrarily ordering the nodes at the coarsest level, then propagating this ordering to the finest levels, i.e. node k has nodes 2k and 2k + 1 as children, produces a regular ordering in the finest level. Regular in the sense that adjacent nodes are hierarchically merged at coarser levels. Pooling such a rearranged graph signal is analog to pooling a regular 1D signal. Figure 2 shows an example of the whole process.

Figure 2: Example of graph coarsening and pooling. Let us carry out a max pooling of size 4 (or two poolings of size 2) on a signal x ∈ R^8 living on G_0, the finest graph given as input. Note that it originally possesses n_0 = |V_0| = 8 vertices, arbitrarily ordered. For a pooling of size 4, two coarsenings of size 2 are needed: let Graclus give G_1 of size n_1 = |V_1| = 5, then G_2 of size n_2 = |V_2| = 3, the coarsest graph. Sizes are thus set to n_2 = 3, n_1 = 6, n_0 = 12, and fake nodes (in blue) are added to V_1 (1 node) and V_0 (4 nodes) to pair with the singletons (in orange), such that each node has exactly two children. Nodes in V_2 are then arbitrarily ordered and nodes in V_1 and V_0 are ordered consequently. At that point the arrangement of vertices in V_0 permits a regular 1D pooling on x ∈ R^12 such that z = [max(x_0, x_1), max(x_4, x_5, x_6), max(x_8, x_9, x_10)] ∈ R^3, where the signal components x_2, x_3, x_7, x_11 are set to a neutral value.
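The coarsening rule of Section 2.2 and the pooling example of Figure 2 can be sketched as follows. The matching function is a simplified stand-in for Graclus (not the actual implementation), and the signal values in the pooling example are hypothetical; only the fake-node indices 2, 3, 7, 11 and the pooling pattern come from the Figure 2 description.

```python
import numpy as np

def greedy_coarsen_level(W):
    """One coarsening level, sketching Graclus' greedy rule: pick an unmarked
    vertex i, match it with the unmarked neighbour j maximising the local
    normalized cut W[i, j] * (1/d_i + 1/d_j), mark both, and sum the weights
    of matched vertices to form the coarser graph."""
    n = W.shape[0]
    d = W.sum(axis=1)
    marked = np.zeros(n, dtype=bool)
    clusters = -np.ones(n, dtype=int)   # coarse-vertex id of each fine vertex
    c = 0
    for i in range(n):
        if marked[i]:
            continue
        marked[i] = True
        clusters[i] = c
        best_score, best_j = 0.0, -1
        for j in np.nonzero(W[i])[0]:
            if not marked[j]:
                score = W[i, j] * (1.0 / d[i] + 1.0 / d[j])
                if score > best_score:
                    best_score, best_j = score, j
        if best_j >= 0:                 # matched pair; otherwise i is a singleton
            marked[best_j] = True
            clusters[best_j] = c
        c += 1
    A = np.zeros((n, c))                # fine-to-coarse assignment matrix
    A[np.arange(n), clusters] = 1.0
    return clusters, A.T @ W @ A        # coarsened weights are summed weights

def graph_max_pool(x, p, fake=()):
    """Max-pool a rearranged graph signal exactly like a 1D signal: after the
    binary-tree reordering (node k has children 2k and 2k+1), pooling of size
    p is a reshape + max; fake nodes are first set to the neutral value."""
    x = np.asarray(x, dtype=float).copy()
    x[list(fake)] = -np.inf             # neutral value for max pooling
    return x.reshape(-1, p).max(axis=1)

# The Figure 2 setting: 12 rearranged vertices with fake nodes 2, 3, 7, 11
# (signal values below are hypothetical), pooled with a single pooling of
# size 4, i.e. z = [max(x0, x1), max(x4, x5, x6), max(x8, x9, x10)].
x = np.array([5., 1., 0., 0., 6., 4., 8., 0., 10., 9., 3., 0.])
z = graph_max_pool(x, 4, fake=(2, 3, 7, 11))
```

Because the fake entries hold the neutral value, the reshape-and-max is exactly the pooling described in the caption, with no lookup table of matched vertices.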
This regular arrangement makes the operation very efficient and satisfies parallel architectures such as GPUs, as memory accesses are local, i.e. matched nodes do not have to be fetched.

3 Related Works

3.1 Graph Signal Processing

The emerging field of GSP aims at bridging the gap between signal processing and spectral graph theory [6, 3, 21], a blend of graph theory and harmonic analysis. A goal is to generalize fundamental analysis operations for signals from regular grids to irregular structures embodied by graphs. We refer the reader to [31] for an introduction to the field. Standard operations on grids such as convolution, translation, filtering, dilation, modulation or downsampling do not extend directly to graphs and thus require new mathematical definitions while keeping the original intuitive concepts. In this context, the authors of [12, 8, 10] revisited the construction of wavelet operators on graphs, and techniques to perform multi-scale pyramid transforms on graphs were proposed in [32, 27]. The works of [34, 25, 26] redefined uncertainty principles on graphs and showed that while intuitive concepts may be lost, enhanced localization principles can be derived.

3.2 CNNs on Non-Euclidean Domains

The Graph Neural Network framework [29], simplified in [20], was designed to embed each node in a Euclidean space with an RNN and use those embeddings as features for classification or regression of nodes or graphs. By setting their transition function f as a simple diffusion instead of a neural net with a recursive relation, their state vector becomes s = f(x) = Wx. Their point-wise output function g_θ can further be set as x̂ = g_θ(s, x) = θ(s − Dx) + x = θLx + x instead of another neural net. The Chebyshev polynomials of degree K can then be obtained with a K-layer GNN, to be followed by a non-linear layer and a graph pooling operation. Our model can thus be interpreted as multiple layers of diffusions and node-local operations.
The works of [11, 7] introduced the concept of constructing a local receptive field to reduce the number of learned parameters. The idea is to group together features based upon a measure of similarity such as to select a limited number of connections between two successive layers. While this model reduces the number of parameters by exploiting the locality assumption, it did not attempt to exploit any stationarity property, i.e. no weight-sharing strategy. The authors of [4] used this idea for their spatial formulation of graph CNNs. They use a weighted graph to define the local neighborhood and compute a multiscale clustering of the graph for the pooling operation. Inducing weight sharing in a spatial construction is however challenging, as it requires to select and order the neighborhoods when a problem-specific ordering (spatial, temporal, or otherwise) is missing. A spatial generalization of CNNs to 3D-meshes, a class of smooth low-dimensional non-Euclidean spaces, was proposed in [23]. The authors used geodesic polar coordinates to define the convolution on mesh patches, and formulated a deep learning architecture which allows comparison across different manifolds. They obtained state-of-the-art results for 3D shape recognition.

Table 1: Classification accuracies of the proposed graph CNN and a classical CNN on MNIST.
Model               Architecture           Accuracy
Classical CNN       C32-P4-C64-P4-FC512    99.33
Proposed graph CNN  GC32-P4-GC64-P4-FC512  99.14

The first spectral formulation of a graph CNN, proposed in [4], defines a filter as

g_θ(Λ) = Bθ, (7)

where B ∈ R^(n×K) is the cubic B-spline basis and the parameter θ ∈ R^K is a vector of control points. They later proposed a strategy to learn the graph structure from the data and applied the model to image recognition, text categorization and bioinformatics [13]. This approach does not scale up, however, due to the necessary multiplications by the graph Fourier basis U.
Despite the cost of computing this matrix, which requires an EVD on the graph Laplacian, the dominant cost is the need to multiply the data by this matrix twice (forward and inverse Fourier transforms) at a cost of O(n²) operations per forward and backward pass, a computational bottleneck already identified by the authors. Besides, as they rely on smoothness in the Fourier domain, via the spline parametrization, to bring localization in the vertex domain, their model does not provide a precise control over the local support of their kernels, which is essential to learn localized filters. Our technique leverages this work, and we have shown how to overcome these limitations and go beyond them.

4 Numerical Experiments

In the sequel, we refer to the non-parametric and non-localized filters (2) as Non-Param, to the filters (7) proposed in [4] as Spline, and to the proposed filters (4) as Chebyshev. We always use the Graclus coarsening algorithm introduced in Section 2.2 rather than the simple agglomerative method of [4]. Our motivation is to compare the learned filters, not the coarsening algorithms. We use the following notation when describing network architectures: FCk denotes a fully connected layer with k hidden units, Pk denotes a (graph or classical) pooling layer of size and stride k, and GCk and Ck denote a (graph) convolutional layer with k feature maps. All FCk, Ck and GCk layers are followed by a ReLU activation max(x, 0). The final layer is always a softmax regression and the loss energy E is the cross-entropy with an ℓ2 regularization on the weights of all FCk layers. Mini-batches are of size S = 100.

4.1 Revisiting Classical CNNs on MNIST

To validate our model, we applied it to the Euclidean case on the benchmark MNIST classification problem [19], a dataset of 70,000 digits represented on a 2D grid of size 28 × 28.
For our graph model, we construct an 8-NN graph of the 2D grid, which produces a graph of n = |V| = 976 nodes (28² = 784 pixels and 192 fake nodes as explained in Section 2.3) and |E| = 3198 edges. Following standard practice, the weights of a k-NN similarity graph (between features) are computed as

    W_ij = exp(−‖z_i − z_j‖₂² / σ²),    (8)

where z_i is the 2D coordinate of pixel i. This is an important sanity check for our model, which must be able to extract features on any graph, including the regular 2D grid. Table 1 shows the ability of our model to achieve a performance very close to a classical CNN with the same architecture. The gap in performance may be explained by the isotropic nature of the spectral filters, i.e. the fact that edges in a general graph do not possess an orientation (like up, down, right and left for pixels on a 2D grid). Whether this is a limitation or an advantage depends on the problem and should be verified, as for any invariance. Moreover, rotational invariance has been sought: (i) many data augmentation schemes have used rotated versions of images and (ii) models have been developed to learn this invariance, like the Spatial Transformer Networks [14]. Other explanations are the lack of experience on architecture design and the need to investigate better suited optimization or initialization strategies.

Model                    Accuracy
Linear SVM               65.90
Multinomial Naive Bayes  68.51
Softmax                  66.28
FC2500                   64.64
FC2500-FC500             65.76
GC32                     68.26
Table 2: Accuracies of the proposed graph CNN and other methods on 20NEWS.

The LeNet-5-like network architecture and the following hyper-parameters are borrowed from the TensorFlow MNIST tutorial (https://www.tensorflow.org/versions/r0.8/tutorials/mnist/pros): dropout probability of 0.5, regularization weight of 5 × 10⁻⁴, initial
learning rate of 0.03, learning rate decay of 0.95, momentum of 0.9. Filters are of size 5 × 5 and graph filters have the same support of K = 25. All models were trained for 20 epochs.

Figure 3: Time to process a mini-batch of S = 100 20NEWS documents w.r.t. the number of words n.

Accuracy
Dataset  Architecture            Non-Param (2)  Spline (7) [4]  Chebyshev (4)
MNIST    GC10                    95.75          97.26           97.48
MNIST    GC32-P4-GC64-P4-FC512   96.28          97.15           99.14
Table 3: Classification accuracies for different types of spectral filters (K = 25).

Time (ms)
Model               Architecture            CPU   GPU  Speedup
Classical CNN       C32-P4-C64-P4-FC512     210   31   6.77x
Proposed graph CNN  GC32-P4-GC64-P4-FC512   1600  200  8.00x
Table 4: Time to process a mini-batch of S = 100 MNIST images.

4.2 Text Categorization on 20NEWS

To demonstrate the versatility of our model to work with graphs generated from unstructured data, we applied our technique to the text categorization problem on the 20NEWS dataset, which consists of 18,846 text documents (11,314 for training and 7,532 for testing) associated with 20 classes [15]. We extracted the 10,000 most common words from the 93,953 unique words in this corpus. Each document x is represented using the bag-of-words model, normalized across words. To test our model, we constructed a 16-NN graph with (8), where z_i is the word2vec embedding [24] of word i, which produced a graph of n = |V| = 10,000 nodes and |E| = 132,834 edges. All models were trained for 20 epochs by the Adam optimizer [17] with an initial learning rate of 0.001. The architecture is GC32 with support K = 5. Table 2 shows decent performance: while the proposed model does not outperform the multinomial naive Bayes classifier on this small dataset, it does beat fully connected networks, which require many more parameters.
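The graph construction of Eq. (8), used above for both the MNIST grid and the 20NEWS word graph, can be sketched as follows (a simplified dense numpy version; the function name is ours and real pipelines would use an approximate nearest-neighbor search for large n):

```python
import numpy as np

def knn_gaussian_graph(z, k, sigma):
    """Build a symmetric weighted k-NN graph with Gaussian weights
    W_ij = exp(-||z_i - z_j||^2 / sigma^2) as in Eq. (8).
    z: (n, d) node features (pixel coordinates or word embeddings)."""
    n = len(z)
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / sigma ** 2)
    np.fill_diagonal(d2, np.inf)                         # exclude self from neighbors
    np.fill_diagonal(W, 0.0)                             # no self-loops
    keep = np.zeros_like(W, dtype=bool)
    nearest = np.argsort(d2, axis=1)[:, :k]              # k nearest neighbors per node
    keep[np.repeat(np.arange(n), k), nearest.ravel()] = True
    return np.where(keep | keep.T, W, 0.0)               # symmetrize by union
```

Symmetrizing by the union of neighborhoods keeps the graph undirected, which the Laplacian-based filtering requires.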
4.3 Comparison between Spectral Filters and Computational Efficiency

Table 3 reports that the proposed parametrization (4) outperforms (7) from [4] as well as non-parametric filters (2), which are not localized and require O(n) parameters. Moreover, Figure 4 gives a sense of how the validation accuracy and the loss E converge w.r.t. the filter definitions. Figure 3 validates the low computational complexity of our model, which scales as O(n) while [4] scales as O(n²). The measured runtime is the total training time divided by the number of gradient steps. Table 4 shows a speedup similar to that of classical CNNs when moving to GPUs. This exemplifies the parallelization opportunity offered by our model, which relies solely on matrix multiplications. Those are efficiently implemented by cuBLAS, the linear algebra routines provided by NVIDIA.

4.4 Influence of Graph Quality

For any graph CNN to be successful, the statistical assumptions of locality, stationarity, and compositionality regarding the data must be fulfilled on the graph where the data resides. Therefore, the learned filters' quality and thus the classification performance critically depend on the quality of the graph.

Figure 4: Plots of validation accuracy and training loss for the first 2000 iterations on MNIST.

Architecture            8-NN on 2D Euclidean grid  random
GC32                    97.40                      96.88
GC32-P4-GC64-P4-FC512   99.14                      95.39
Table 5: Classification accuracies with different graph constructions on MNIST.

                word2vec
bag-of-words    pre-learned  learned  approximate  random
67.50           66.98        68.26    67.86        67.75
Table 6: Classification accuracies of GC32 with different graph constructions on 20NEWS.
For data lying on a Euclidean space, the experiments in Section 4.1 show that a simple k-NN graph of the grid is good enough to recover almost exactly the performance of standard CNNs. We also noticed that the value of k does not have a strong influence on the results. We can witness the importance of a graph satisfying the data assumptions by comparing its performance with a random graph. Table 5 reports a large drop of accuracy when using a random graph, that is, when the data structure is lost and the convolutional layers can no longer extract meaningful features. While images can be structured by a grid graph, a feature graph has to be built for text documents represented as bag-of-words. We investigate three ways to represent a word z: the simplest option is to represent each word by its corresponding column in the bag-of-words matrix; another approach is to learn an embedding for each word with word2vec [24] or to use the pre-learned embeddings provided by the authors. For larger datasets, an approximate nearest neighbors algorithm may be required, which is the reason we tried LSHForest [2] on the learned word2vec embeddings. Table 6 reports classification results which highlight the importance of a well constructed graph.

5 Conclusion and Future Work

In this paper, we have introduced the mathematical and computational foundations of an efficient generalization of CNNs to graphs using tools from GSP. Experiments have shown the ability of the model to extract local and stationary features through graph convolutional layers. Compared with the first work on spectral graph CNNs introduced in [4], our model provides a strict control over the local support of filters, is computationally more efficient by avoiding an explicit use of the graph Fourier basis, and experimentally shows a better test accuracy.
Besides, we addressed the three concerns raised by [13]: (i) we introduced a model whose computational complexity is linear in the dimensionality of the data, (ii) we confirmed that the quality of the input graph is of paramount importance, (iii) we showed that the statistical assumptions of local stationarity and compositionality made by the model are verified for text documents as long as the graph is well constructed. Future work will investigate two directions. On one hand, we will enhance the proposed framework with newly developed tools in GSP. On the other hand, we will explore applications of this generic model to important fields where the data naturally lies on graphs, which may then incorporate external information about the structure of the data rather than artificially created graphs whose quality may vary, as seen in the experiments. Another natural future direction, pioneered in [13], would be to alternate the learning of the CNN parameters and the graph.

References

[1] M. Abadi et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems. 2016.
[2] M. Bawa, T. Condie, and P. Ganesan. LSH Forest: Self-Tuning Indexes for Similarity Search. In International Conference on World Wide Web, pages 651–660, 2005.
[3] M. Belkin and P. Niyogi. Towards a Theoretical Foundation for Laplacian-based Manifold Methods. Journal of Computer and System Sciences, 74(8):1289–1308, 2008.
[4] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun. Spectral Networks and Deep Locally Connected Networks on Graphs. arXiv:1312.6203, 2013.
[5] T.N. Bui and C. Jones. Finding Good Approximate Vertex and Edge Partitions is NP-hard. Information Processing Letters, 42(3):153–159, 1992.
[6] F.R.K. Chung. Spectral Graph Theory, volume 92. American Mathematical Society, 1997.
[7] A. Coates and A.Y. Ng. Selecting Receptive Fields in Deep Networks. In Neural Information Processing Systems (NIPS), pages 2528–2536, 2011.
[8] R.R. Coifman and S. Lafon.
Diffusion Maps. Applied and Computational Harmonic Analysis, 21(1):5–30, 2006.
[9] I. Dhillon, Y. Guan, and B. Kulis. Weighted Graph Cuts Without Eigenvectors: A Multilevel Approach. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 29(11):1944–1957, 2007.
[10] M. Gavish, B. Nadler, and R. Coifman. Multiscale Wavelets on Trees, Graphs and High Dimensional Data: Theory and Applications to Semi Supervised Learning. In International Conference on Machine Learning (ICML), pages 367–374, 2010.
[11] K. Gregor and Y. LeCun. Emergence of Complex-like Cells in a Temporal Product Network with Local Receptive Fields. arXiv:1006.0448, 2010.
[12] D. Hammond, P. Vandergheynst, and R. Gribonval. Wavelets on Graphs via Spectral Graph Theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011.
[13] M. Henaff, J. Bruna, and Y. LeCun. Deep Convolutional Networks on Graph-Structured Data. arXiv:1506.05163, 2015.
[14] M. Jaderberg, K. Simonyan, A. Zisserman, et al. Spatial Transformer Networks. In Advances in Neural Information Processing Systems, pages 2017–2025, 2015.
[15] T. Joachims. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. Technical Report CMU-CS-96-118, Carnegie Mellon University, 1996.
[16] G. Karypis and V. Kumar. A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs. SIAM Journal on Scientific Computing (SISC), 20(1):359–392, 1998.
[17] D. Kingma and J. Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980, 2014.
[18] Y. LeCun, Y. Bengio, and G. Hinton. Deep Learning. Nature, 521(7553):436–444, 2015.
[19] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[20] Y. Li, D. Tarlow, M. Brockschmidt, and R. Zemel. Gated Graph Sequence Neural Networks.
[21] U. von Luxburg. A Tutorial on Spectral Clustering.
Statistics and Computing, 17(4):395–416, 2007.
[22] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, 1999.
[23] J. Masci, D. Boscaini, M. Bronstein, and P. Vandergheynst. Geodesic Convolutional Neural Networks on Riemannian Manifolds. In Proceedings of the IEEE International Conference on Computer Vision Workshops, pages 37–45, 2015.
[24] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient Estimation of Word Representations in Vector Space. In International Conference on Learning Representations, 2013.
[25] B. Pasdeloup, R. Alami, V. Gripon, and M. Rabbat. Toward an Uncertainty Principle for Weighted Graphs. In Signal Processing Conference (EUSIPCO), pages 1496–1500, 2015.
[26] N. Perraudin, B. Ricaud, D. Shuman, and P. Vandergheynst. Global and Local Uncertainty Principles for Signals on Graphs. arXiv:1603.03030, 2016.
[27] I. Ram, M. Elad, and I. Cohen. Generalized Tree-based Wavelet Transform. IEEE Transactions on Signal Processing, 59(9):4199–4209, 2011.
[28] D. Ron, I. Safro, and A. Brandt. Relaxation-based Coarsening and Multiscale Graph Organization. SIAM Journal on Multiscale Modeling and Simulation, 9:407–423, 2011.
[29] F. Scarselli, M. Gori, A.C. Tsoi, M. Hagenbuchner, and G. Monfardini. The Graph Neural Network Model. IEEE Transactions on Neural Networks, 20(1):61–80, 2009.
[30] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 22(8):888–905, 2000.
[31] D. Shuman, S. Narang, P. Frossard, A. Ortega, and P. Vandergheynst. The Emerging Field of Signal Processing on Graphs: Extending High-Dimensional Data Analysis to Networks and other Irregular Domains. IEEE Signal Processing Magazine, 30(3):83–98, 2013.
[32] D.I. Shuman, M.J. Faraji, and P. Vandergheynst. A Multiscale Pyramid Transform for Graph Signals. IEEE Transactions on Signal Processing, 64(8):2119–2134, 2016.
[33] A. Susnjara, N. Perraudin, D. Kressner, and P. Vandergheynst. Accelerated Filtering on Graphs using Lanczos Method.
arXiv:1509.04537, 2015.
[34] M. Tsitsvero and S. Barbarossa. On the Degrees of Freedom of Signals on Graphs. In Signal Processing Conference (EUSIPCO), pages 1506–1510, 2015.
MoCap-guided Data Augmentation for 3D Pose Estimation in the Wild
Grégory Rogez  Cordelia Schmid
Inria Grenoble Rhône-Alpes, Laboratoire Jean Kuntzmann, France

Abstract

This paper addresses the problem of 3D human pose estimation in the wild. A significant challenge is the lack of training data, i.e., 2D images of humans annotated with 3D poses. Such data is necessary to train state-of-the-art CNN architectures. Here, we propose a solution to generate a large set of photorealistic synthetic images of humans with 3D pose annotations. We introduce an image-based synthesis engine that artificially augments a dataset of real images with 2D human pose annotations using 3D Motion Capture (MoCap) data. Given a candidate 3D pose, our algorithm selects for each joint an image whose 2D pose locally matches the projected 3D pose. The selected images are then combined to generate a new synthetic image by stitching local image patches in a kinematically constrained manner. The resulting images are used to train an end-to-end CNN for full-body 3D pose estimation. We cluster the training data into a large number of pose classes and tackle pose estimation as a K-way classification problem. Such an approach is viable only with large training sets such as ours. Our method outperforms the state of the art in terms of 3D pose estimation in controlled environments (Human3.6M) and shows promising results for in-the-wild images (LSP). This demonstrates that CNNs trained on artificial images generalize well to real images.

1 Introduction

Convolutional Neural Networks (CNNs) have been very successful for many different tasks in computer vision. However, training these deep architectures requires large scale datasets which are not always available or easily collectable.
This is particularly the case for 3D human pose estimation, for which an accurate annotation of 3D articulated poses in large collections of real images is non-trivial: annotating 2D images with 3D pose information is impractical [6], while large-scale 3D pose capture is only available through marker-based systems in constrained environments [13]. The images captured in such conditions do not match real environments well. This has limited the development of end-to-end CNN architectures for in-the-wild 3D pose understanding. Learning architectures usually augment existing training data by applying synthetic perturbations to the original images, e.g. jittering exemplars or applying more complex affine or perspective transformations [15]. Such data augmentation has proven to be a crucial stage, especially for training deep architectures. Recent work [14, 23, 34, 40] has introduced the use of data synthesis as a solution to train CNNs when only limited data is available. Synthesis can potentially provide infinite training data by rendering 3D CAD models from any camera viewpoint [23, 34, 40]. Fisher et al. [8] generate a synthetic "Flying Chairs" dataset to learn optical flow with a CNN and show that networks trained on this unrealistic data still generalize very well to existing datasets. In the context of scene text recognition, Jaderberg et al. [14] trained solely on data produced by a synthetic text generation engine. In this case, the synthetic data is highly realistic and sufficient to replace real data. Although synthesis seems like an appealing solution, there often exists a large domain shift from synthetic to real data [23]. Integrating a human 3D model into a given background in a realistic way is not trivial. Rendering a collection of photo-realistic images (in terms of color, texture, context, shadow) that would cover the variations in pose, body shape, clothing and scenes is a challenging task.
Instead of rendering a human 3D model, we propose an image-based synthesis approach that makes use of Motion Capture (MoCap) data to augment an existing dataset of real images with 2D pose annotations. Our system synthesizes a very large number of new in-the-wild images showing more pose configurations and, importantly, it provides the corresponding 3D pose annotations (see Fig. 1). For each candidate 3D pose in the MoCap library, our system combines several annotated images to generate a synthetic image of a human in this particular pose. This is achieved by "copy-pasting" the image information corresponding to each joint in a kinematically constrained manner. Given this large "in-the-wild" dataset, we implement an end-to-end CNN architecture for 3D pose estimation. Our approach first clusters the 3D poses into K pose classes. Then, a K-way CNN classifier is trained to return a distribution over probable pose classes given a bounding box around the human in the image. Our method outperforms state-of-the-art results in terms of 3D pose estimation in controlled environments and shows promising results on images captured "in-the-wild".

Figure 1: Image-based synthesis engine. Input: real images with manual annotation of 2D poses, and 3D poses captured with a Motion Capture (MoCap) system. Output: 220×220 synthetic images and associated 3D poses.

1.1 Related work

3D human pose estimation in monocular images. Recent approaches employ CNNs for 3D pose estimation in monocular images [20] or in videos [44]. Due to the lack of large scale training data, they are usually trained (and tested) on 3D MoCap data in constrained environments [20]. Pose understanding in natural images is usually limited to 2D pose estimation [7, 36, 37]. Recent work also tackles 3D pose understanding from 2D poses [2, 10]. Some approaches use as input the 2D joints automatically provided by a 2D pose detector [32, 38], while others jointly solve the 2D and 3D pose estimation [31, 43].
Most similar to ours is the approach of Iqbal et al. [42], who use a dual-source approach that combines 2D pose estimation with 3D pose retrieval. Our method uses the same two training sources, i.e., images with annotated 2D poses and 3D MoCap data. However, we combine both sources off-line to generate a large training set that is used to train an end-to-end CNN 3D pose classifier. This is shown to improve over [42], which can be explained by the fact that training is performed in an end-to-end fashion.

Synthetic pose data. A number of works have considered the use of synthetic data for human pose estimation. Synthetic data have been used for upper body [29], full-body silhouettes [1], hand-object interactions [28], full-body pose from depth [30] or egocentric RGB-D scenes [27]. Recently, Zuffi and Black [45] used a 3D mesh-model to sample synthetic exemplars and fit 3D scans. In [11], a scene-specific pedestrian detector was learned without real data, while [9] synthesized virtual samples with a generative model to enhance the classification performance of a discriminative model. In [12], pictures of 2D characters were animated by fitting and deforming a 3D mesh model. Later, [25] augmented labelled training images with small perturbations in a similar way. These methods require a perfect segmentation of the humans in the images. Park and Ramanan [22] synthesized hypothetical poses for tracking purposes by applying geometric transformations to the first frame of a video sequence. We also use image-based synthesis to generate images, but our rendering engine combines image regions from several images to create images with associated 3D poses.

2 Image-based synthesis engine

At the heart of our approach is an image-based synthesis engine that artificially generates "in-the-wild" images with 3D pose annotations. Our method takes as input a dataset of real images with 2D annotations and a library of 3D Motion Capture (MoCap) data, and generates a large number of synthetic images with associated 3D poses (Fig. 1). We introduce an image-based rendering engine that augments the existing database of annotated images with a very large set of photorealistic images covering more body pose configurations than the original set. This is done by selecting and stitching image patches in a kinematically constrained manner using the MoCap 3D poses. Our synthesis process consists of two stages: a MoCap-guided mosaic construction stage that stitches image patches together and a pose-aware blending process that improves image quality and erases patch seams. These are discussed in the following subsections. Fig. 2 summarizes the overall process.

Figure 2: Synthesis engine. From left to right: for each joint j of a 2D query pose p (centered in a 220 × 220 bounding box), we align all the annotated 2D poses w.r.t. the limb and search for the best pose match, obtaining a list of n matches {(I′_j, q′_j), j = 1...n} where I′_j is obtained after transforming I_j with T_{q_j → q′_j}. For each retrieved pair, we compute a probability map p_j[u, v]. These n maps are used to compute index[u, v] ∈ {1...n}, pointing to the image I′_j that should be used for a particular pixel (u, v). Finally, our blending algorithm computes each pixel value of the synthetic image M[u, v] as the weighted sum over all aligned images I′_j, the weights being calculated using a histogram of indexes in a squared region R_{u,v} around (u, v).

2.1 MoCap-guided image mosaicing

Given a 3D pose with n joints P ∈ R^{n×3}, and its projected 2D joints p = {p_j, j = 1...n} in a particular camera view, we want to find for each joint j ∈ {1...n} an image whose annotated 2D pose presents a similar kinematic configuration around j. To do so, we define a distance function between two different 2D poses p and q, conditioned on joint j, as:

    D_j(p, q) = Σ_{k=1}^{n} d_E(p_k, q′_k)    (1)

where d_E is the Euclidean distance.
q′ is the aligned version of q with respect to joint j after applying a rigid transformation T_{q_j → q′_j}, which respects q′_j = p_j and q′_i = p_i, where i is the farthest directly connected joint to j in p. This function D_j measures the similarity between two joints by aligning and taking into account the entire poses. To increase the influence of neighboring joints, we weight the distances d_E between each pair of joints {(p_k, q′_k), k = 1...n} according to their distance to the query joint j in both poses. Eq. 1 becomes:

    D_j(p, q) = Σ_{k=1}^{n} (w^j_k(p) + w^j_k(q)) d_E(p_k, q′_k)    (2)

where the weight w^j_k is inversely proportional to the distance between joint k and the query joint j, i.e., w^j_k(p) = 1/d_E(p_k, p_j), and normalized so that Σ_k w^j_k(p) = 1. For each joint j of the query pose p, we retrieve from our dataset Q = {(I_1, q_1) ... (I_N, q_N)} of images and annotated 2D poses (in practice, we do not search for occluded joints):

    q_j = argmin_{q ∈ Q} D_j(p, q)  ∀j ∈ {1...n}.    (3)

We obtain a list of n matches {(I′_j, q′_j), j = 1...n} where I′_j is the cropped image obtained after transforming I_j with T_{q_j → q′_j}. Note that a same pair (I, q) can appear multiple times in the list of candidates, i.e., be a good match for several joints.

Finally, to render a new image, we need to select the candidate images I′_j to be used for each pixel (u, v). Instead of using regular patches, we compute a probability map p_j[u, v] associated with each pair (I′_j, q′_j) based on local matches measured by d_E(p_k, q′_k) in Eq. 1. To do so, we first apply a Delaunay triangulation to the set of 2D joints in {q′_j}, obtaining a partition of the image into triangles according to the selected pose. Then, we assign the probability p_j(q′_k) = exp(−d_E(p_k, q′_k)²/σ²) to each vertex q′_k. We finally compute a probability map p_j[u, v] by interpolating values from these vertices using barycentric interpolation inside each triangle.
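A minimal numpy sketch of the alignment and of the weighted distance of Eq. (2) follows (the function names, the eps regularizer, and the exclusion of the query joint from the weight normalization are our assumptions; the transform maps two point pairs exactly, so it includes a scale factor even though the text calls it rigid):

```python
import numpy as np

def align(q, p, j, i):
    """Similarity transform T_{q_j -> q'_j}: maps q_j onto p_j and q_i onto
    p_i, where i is the anchor joint. q, p: (n, 2) arrays of 2D joints."""
    a, b = q[i] - q[j], p[i] - p[j]
    s = np.linalg.norm(b) / np.linalg.norm(a)              # scale
    t = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])    # rotation angle
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return s * (q - q[j]) @ R.T + p[j]

def pose_distance(p, q_aligned, j, eps=1e-8):
    """Weighted distance D_j(p, q) of Eq. (2): per-joint Euclidean distances,
    weighted inversely to each joint's distance to the query joint j."""
    dE = np.linalg.norm(p - q_aligned, axis=1)
    w_p = 1.0 / (np.linalg.norm(p - p[j], axis=1) + eps)
    w_q = 1.0 / (np.linalg.norm(q_aligned - q_aligned[j], axis=1) + eps)
    w_p[j] = w_q[j] = 0.0          # query joint excluded from the weighting
    return ((w_p / w_p.sum() + w_q / w_q.sum()) * dE).sum()
```

Retrieval per Eq. (3) would then simply take, for each joint j, the dataset pose minimizing `pose_distance` after `align`.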
The resulting n probability maps are concatenated and an index map index[u, v] ∈ {1...n} can be computed as follows:

    index[u, v] = argmax_{j ∈ {1...n}} p_j[u, v],    (4)

this map pointing to the training image I′_j that should be used for each pixel (u, v). A mosaic M[u, v] can be generated by "copy-pasting" the image information at pixel (u, v) indicated by index[u, v]:

    M[u, v] = I′_{j*}[u, v]  with  j* = index[u, v].    (5)

2.2 Pose-aware image blending

The mosaic M[u, v] resulting from the previous stage presents significant artifacts at the boundaries between image regions. Smoothing is necessary to prevent the learning algorithm from interpreting these artifacts as discriminative pose-related features. We first experimented with off-the-shelf image filtering and alpha blending algorithms, but the results were not satisfactory. Instead, we propose a new pose-aware blending algorithm that maintains image information on the human body while erasing most of the stitching artifacts. For each pixel (u, v), we select a surrounding squared region R_{u,v} whose size varies with the distance of pixel (u, v) to the pose: R_{u,v} will be larger when far from the body and smaller nearby. Then, we evaluate how much each image I′_j should contribute to the value of pixel (u, v) by building a histogram of the image indexes inside the region R_{u,v}:

    w_j[u, v] = Hist(index(R_{u,v}))  ∀j ∈ {1...n},    (6)

where the weights are normalized so that Σ_j w_j[u, v] = 1. The final mosaic M[u, v] (see examples in Fig. 1) is then computed as the weighted sum over all aligned images:

    M[u, v] = Σ_j w_j[u, v] I′_j[u, v].    (7)

This procedure produces plausible images that are kinematically correct and locally photorealistic.

3 CNN for full-body 3D pose estimation

Human pose estimation has been addressed as a classification problem in the past [4, 21, 27, 26]. Here, the 3D pose space is partitioned into K clusters and a K-way classifier is trained to return a distribution over pose classes.
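A minimal sketch of such a partition of the pose space follows (plain numpy k-means with farthest-point seeding; the paper only states K = 5000 classes and does not specify its clustering algorithm, so this is an assumption):

```python
import numpy as np

def kmeans_poses(X, K, iters=20):
    """Partition orientated 3D poses (flattened to vectors) into K pose
    classes with a basic k-means. Farthest-point seeding keeps the sketch
    deterministic; a sketch only, not the paper's exact procedure."""
    centers = [X[0]]
    for _ in range(1, K):                       # farthest-point seeding
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):                      # standard Lloyd iterations
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(K):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(0)
    return labels, centers
```

At test time, the classifier's top-scoring class would be mapped back to the average pose of its cluster, as described below.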
Such a classification approach allows modeling multimodal outputs in ambiguous cases, and produces multiple hypotheses that can be rescored, e.g. using temporal information. Training such a classifier requires a reasonable amount of data per class, which implies a well-defined and limited pose space (e.g. walking action) [26, 4], a large-scale synthetic dataset [27] or both [21]. Here, we introduce a CNN-based classification approach for full-body 3D pose estimation. Inspired by the DeepPose algorithm [37], where the AlexNet CNN architecture [19] is used for full-body 2D pose regression, we select the same architecture and adapt it to the task of 3D body pose classification. This is done by adapting the last fully-connected layer to output a distribution of scores over pose classes as illustrated in Fig. 3. Training such a classifier requires a large amount of training data that we generate using our image-based synthesis engine. Given a library of MoCap data and a set of camera views, we synthesize for each 3D pose a 220×220 image. This size has proved to be adequate for full-body pose estimation [37]. The 3D poses are then aligned with respect to the camera center and translated to the center of the torso. In that way, we obtain orientated 3D poses that also contain the viewpoint information. We cluster the resulting 3D poses to define our classes, which correspond to groups of similar orientated 3D poses. We empirically found K = 5000 clusters to be sufficient. For evaluation, we return the average 2D and 3D poses of the top scoring class. To compare with [37], we also train a holistic pose regressor, which regresses to 2D and 3D poses (not only 2D). To do so, we concatenate the 3D coordinates, expressed in meters and normalized to the range [−1, 1], with the 2D pose coordinates, also normalized to the range [−1, 1] following [37].

Figure 3: CNN-based pose classifier.
We show the different layers with their corresponding dimensions, with convolutional layers depicted in blue and fully connected ones in green. The output is a distribution over K pose classes. Pose estimation is obtained by taking the highest score in this distribution. We show on the right the 3D poses for the 3 highest scores.

4 Experiments

We address 3D pose estimation in the wild. However, there does not exist a dataset of real-world images with 3D annotations. We thus evaluate our method in two different settings using existing datasets: (1) we validate our 3D pose predictions using Human3.6M [13], which provides accurate 3D and 2D poses for 15 different actions captured in a controlled indoor environment; (2) we evaluate on the Leeds Sport dataset (LSP) [16], which presents in-the-wild images together with full-body 2D pose annotations. We demonstrate competitive results with state-of-the-art methods for both of them. Our image-based rendering engine requires two different training sources: 1) a 2D source of images with 2D pose annotations and 2) a MoCap 3D source. We consider two different datasets for each: for 3D poses we use the CMU Motion Capture Dataset (http://mocap.cs.cmu.edu) and the Human3.6M 3D poses [13], and for 2D pose annotations the MPII-LSP-extended dataset [24] and the Human3.6M 2D poses and images.

MoCap 3D source. The CMU Motion Capture dataset consists of 2500 sequences and a total of 140,000 3D poses. We align the 3D poses w.r.t. the torso and select a subset of 12,000 poses, ensuring that selected poses have at least one joint 5 cm apart. In that way, we densely populate our pose space and avoid repeating common poses (e.g. neutral standing or walking poses, which are over-represented in the dataset). For each of the 12,000 original MoCap poses, we sample 180 random virtual views with azimuth angle spanning 360 degrees and elevation angles in the range [−45, 45]. We generate over 2 million pairs of 3D/2D pose configurations (articulated poses + camera position and angle).
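The 5 cm pose subsampling described above can be sketched as a greedy filter (our own minimal version; the paper does not detail its selection procedure):

```python
import numpy as np

def subsample_poses(poses, min_dist=0.05):
    """Greedily keep a pose only if at least one of its joints is more than
    min_dist (5 cm, with coordinates in meters) away from every pose already
    kept, so that near-duplicate poses (e.g. neutral standing) are not
    over-represented. poses: iterable of (n_joints, 3) arrays."""
    kept = []
    for P in poses:
        if all(np.linalg.norm(P - Q, axis=1).max() > min_dist for Q in kept):
            kept.append(P)
    return kept
```

The "at least one joint apart" criterion corresponds to the max over per-joint distances exceeding the threshold.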
For Human3.6M, we randomly selected a subset of 190,000 orientated 3D poses, discarding similar poses, i.e., when the average Euclidean distance of the joints is less than 15 mm, as in [42].

2D source. For the training dataset of real images with 2D pose annotations, we use the MPII-LSP-extended dataset [24], which is a concatenation of the extended LSP [17] and the MPII dataset [3]. Some of the poses were manually corrected, as a non-negligible number of annotations are not accurate enough or completely wrong (e.g., right-left inversions or bad ordering of the joints along a limb). We mirror the images to double the size of the training set, obtaining a total of 80,000 images with 2D pose annotations. For Human3.6M, we consider the 4 cameras and create a pool of 17,000 images and associated 2D poses that we also mirror. We ensure that most similar poses have at least one joint 5 cm apart in 3D.

4.1 Evaluation on Human3.6M Dataset (H3.6M)

To compare our results with very recent work in 3D pose estimation [42], we follow the protocol introduced in [18] and employed in [42]: we consider six subjects (S1, S5, S6, S7, S8 and S9) for training, use every 64th frame of subject S11 for testing and evaluate the 3D pose error (mm) averaged over the 13 joints. We refer to this protocol as P1. As in [42], we consider a 3D pose error that measures the accuracy of the pose after alignment by a rigid transformation, but also report the absolute error. We first evaluate the impact of our synthetic data on the performance of both the regressor and the classifier. The results are reported in Tab. 1. We can observe that when considering few training images (17,000), the regressor clearly outperforms the classifier, which, in turn, reaches better performance when trained on larger sets. This can be explained by the fact that the classification approach requires a sufficient amount of examples.
We then compare results when training both the regressor and the classifier on the same 190,000 poses, considering a) synthetic data generated from H3.6M, b) the real images corresponding to the 190,000 poses, and c) the synthetic and real images together.

²http://mocap.cs.cmu.edu

Table 1: 3D pose estimation results on Human3.6M (protocol P1).

Method  Type of images  2D source size  3D source size  Error (mm)
Reg.    Real            17,000          17,000          112.9
Class.  Real            17,000          17,000          149.7
Reg.    Synth           17,000          190,000         101.9
Class.  Synth           17,000          190,000         97.2
Reg.    Real            190,000         190,000         139.6
Class.  Real            190,000         190,000         97.7
Reg.    Synth + Real    207,000         190,000         125.5
Class.  Synth + Real    207,000         190,000         88.1

Table 2: Comparison with state-of-the-art results on Human3.6M. The average 3D pose error (mm) is reported before (Abs.) and after rigid 3D alignment for 2 different protocols. See text for details.

Method                  Abs. Error (P1)  Error (P1)  Abs. Error (P2)  Error (P2)
Bo & Sminchisescu [5]   -                117.9       -                -
Kostrikov & Gall [18]   -                115.7       -                -
Yasin et al. [42]       -                108.3       -                -
Li et al. [20]          -                -           121.31           -
Tekin et al. [35]       -                -           124.97           -
Zhou et al. [44]        -                -           113.01           -
Ours                    126              88.1        121.2            87.3

We observe that the classifier has similar performance when trained on synthetic or real images, which means that our image-based rendering engine synthesizes useful data. Furthermore, the classifier performs much better when trained on synthetic and real images together. This means that our data is different from the original data and allows the classifier to learn better features. Note that we retrain AlexNet from scratch; we found that it performed better than fine-tuning a model pre-trained on ImageNet (3D error of 88.1 mm vs 98.3 mm with fine-tuning). In Tab. 2, we compare our results to state-of-the-art approaches. We also report results for a second protocol (P2), employed in [20, 44, 35], where all the frames from subjects S9 and S11 are used for testing and only S1, S5, S6, S7 and S8 are used for training.
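The "error after rigid 3D alignment" reported in Tables 1 and 2 is the mean per-joint distance after fitting the best rotation and translation between predicted and ground-truth poses. A minimal sketch using the Kabsch/orthogonal-Procrustes construction follows; whether the protocol also allows a scale factor is not specified here, so this sketch uses a rigid transform only.

```python
import numpy as np

def rigid_align_error(pred, gt):
    """Mean per-joint 3D error (same unit as input) after aligning `pred`
    to `gt` with the best rigid transform (rotation + translation, Kabsch).
    Both inputs are (J, 3) joint arrays. Illustrative sketch only."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    P, G = pred - mu_p, gt - mu_g
    U, _, Vt = np.linalg.svd(P.T @ G)
    d = np.sign(np.linalg.det(U @ Vt))          # guard against reflections
    R = (U @ np.diag([1.0, 1.0, d]) @ Vt).T     # optimal rotation
    aligned = (pred - mu_p) @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=1).mean()
```

The absolute error in the tables is the same mean per-joint distance computed without this alignment step.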
Our best classifier, trained with a combination of synthetic and real data, outperforms state-of-the-art results in terms of 3D pose estimation for single frames. Zhou et al. [44] report better performance, but they integrate temporal information. Note that our method estimates an absolute pose (including orientation w.r.t. the camera), which is not the case for other methods such as Bo et al. [5], who estimate a relative pose and do not provide the 3D orientation.

4.2 Evaluation on Leeds Sport Dataset (LSP)

We now train our pose classifier using different combinations of training sources and use them to estimate 3D poses on images captured in the wild, i.e., LSP. Since 3D pose evaluation is not possible on this dataset, we instead compare 2D pose errors expressed in pixels, measured on the normalized 220 × 220 images following [44]. We compute the average 2D pose error over the 13 joints on both LSP and H3.6M (see Table 3). As expected, we observe that when using a pool of in-the-wild images to generate the synthetic data, the performance increases on LSP and drops on H3.6M, showing the importance of realistic images for good performance in the wild and the lack of generalizability of models trained on constrained indoor images. The error slightly increases in both cases when using the same number (190,000) of CMU 3D poses. The same drop was observed by [42] and can be explained by the fact that the CMU data covers a larger portion of the 3D pose space, resulting in a worse fit. The results improve on both test sets when considering more poses and synthetic images (2 million). The larger drop in Abs 3D error and 2D error compared to the aligned 3D error means that a better camera view is estimated when using more synthetic data. In all cases, the performance (in pixels) is lower on LSP than on H3.6M, due to the fact that the poses observed in LSP differ more from the ones in the CMU MoCap data. In Fig.
4, we visualize the 2D pose error on LSP and Human3.6M 1) for different pools of annotated 2D images, 2) varying the number of synthesized training images, and 3) considering different numbers of pose classes K. As expected, using a bigger set of annotated images improves the performance in the wild. The pose error converges on both LSP and H3.6M when using 1.5 million images; using more than K = 5000 classes does not further improve the performance.

Table 3: Pose error on LSP and H3.6M using different sources for rendering the synthetic images.

2D source  3D source  Num. of 3D poses  H3.6M Abs Error (mm)  H3.6M Error (mm)  H3.6M Error (pix)  LSP Error (pix)
H3.6M      H3.6M      190,000           130.1                 97.2              8.8                31.1
MPII+LSP   H3.6M      190,000           248.9                 122.1             17.3               20.7
MPII+LSP   CMU        190,000           320.0                 150.6             19.7               22.4
MPII+LSP   CMU        2,000,000         216.5                 138.0             11.2               13.8

Table 4: State-of-the-art results on LSP (2D pose error in pixels on normalized 220 × 220 images).

Method                  Feet  Knees  Hips  Hands  Elbows  Shoulder  Head  All
Wei et al. [39]         6.6   5.3    4.8   8.6    7.0     5.2       5.3   6.2
Pishchulin et al. [24]  10.0  6.8    5.0   11.1   8.2     5.7       5.9   7.6
Chen & Yuille [7]       15.7  11.5   8.1   15.6   12.1    8.6       6.8   11.5
Yang et al. [41]        15.5  11.5   8.0   14.7   12.2    8.9       7.4   11.5
Ours (Alexnet)          19.1  13.0   4.9   21.4   16.6    10.5      10.3  13.8
Ours (VGG)              16.2  10.6   4.1   17.7   13.0    8.4       9.8   11.5

Figure 4: 2D pose error on LSP and Human3.6M using different pools of annotated images to generate 2 million synthetic training images (left), varying the number of synthetic training images (center), and considering different numbers of pose classes K (right).

To further improve the performance, we also experiment with fine-tuning a VGG-16 architecture [33] for pose classification. By doing so, the average (normalized) 2D pose error decreases by 2.3 pixels. In Table 4, we compare our results on LSP to state-of-the-art 2D pose estimation methods. Although our approach is designed to estimate a coarse 3D pose, its performance is comparable to recent 2D pose estimation methods [7, 41].
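The 2D errors in Tables 3 and 4 are measured in pixels after rescaling each person crop to a normalized 220 × 220 image. A minimal sketch of that normalization is below; the exact crop definition used in the protocol is an assumption of this sketch.

```python
import numpy as np

def normalized_2d_error(pred_2d, gt_2d, crop_size, target=220.0):
    """Average per-joint 2D error in pixels after rescaling joint
    coordinates from a square `crop_size`-pixel person crop to a
    normalized target x target image (220 x 220 in the evaluation
    protocol). Inputs are (J, 2) arrays in crop coordinates."""
    scale = target / float(crop_size)
    return float(np.linalg.norm(pred_2d * scale - gt_2d * scale, axis=1).mean())
```

The normalization makes errors comparable across people imaged at different scales.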
The qualitative results in Fig. 5 show that our algorithm correctly estimates the global 3D pose. After a visual analysis of the results, we found that failures occur in two cases: 1) when the observed pose does not belong to the MoCap training database, which is a limitation of purely holistic approaches, or 2) when there is a possible right-left or front-back confusion. We observed that in this latter case, subsequent top-scoring poses are often correct. This highlights a property of our approach, which can keep multiple pose hypotheses that could be rescored adequately, for instance, using temporal information in videos.

Figure 5: Qualitative results on LSP. We show correct 3D pose estimations (top 2 rows) and typical failure cases (bottom row) corresponding to unseen poses or right-left and front-back confusions.

5 Conclusion

In this paper, we introduce an approach for creating a synthetic training dataset of "in-the-wild" images and their corresponding 3D poses. Our algorithm artificially augments a dataset of real images with new synthetic images showing new poses and, importantly, with 3D pose annotations. We show that CNNs can be trained on artificial images and generalize well to real images. We train an end-to-end CNN classifier for 3D pose estimation and show that, with our synthetic training images, our method outperforms state-of-the-art results in terms of 3D pose estimation in controlled environments and shows promising results for in-the-wild images (LSP). In this paper, we have estimated a coarse 3D pose by returning the average pose of the top-scoring cluster. In future work, we will investigate how the top-scoring classes could be re-ranked and also how the pose could be refined.

Acknowledgments. This work was supported by the European Commission under FP7 Marie Curie IOF grant (PIOF-GA-2012-328288) and partially supported by ERC advanced grant Allegro. We acknowledge the support of NVIDIA with the donation of the GPUs used for this research.
We thank P. Weinzaepfel for his help and the anonymous reviewers for their comments and suggestions.

References

[1] A. Agarwal and B. Triggs. Recovering 3D human pose from monocular images. PAMI, 28(1):44–58, 2006.
[2] I. Akhter and M. Black. Pose-conditioned joint angle limits for 3D human pose reconstruction. In CVPR, 2015.
[3] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. 2D human pose estimation: New benchmark and state-of-the-art analysis. In CVPR, 2014.
[4] A. Bissacco, M.-H. Yang, and S. Soatto. Detecting humans via their pose. In NIPS, 2006.
[5] L. Bo and C. Sminchisescu. Twin Gaussian processes for structured prediction. IJCV, 87(1-2):28–52, 2010.
[6] L. Bourdev and J. Malik. Poselets: Body part detectors trained using 3D human pose annotations. In ICCV, 2009.
[7] X. Chen and A. L. Yuille. Articulated pose estimation by a graphical model with image dependent pairwise relations. In NIPS, 2014.
[8] A. Dosovitskiy, P. Fischer, E. Ilg, P. Häusser, C. Hazirbas, V. Golkov, P. van der Smagt, D. Cremers, and T. Brox. FlowNet: Learning optical flow with convolutional networks. In ICCV, 2015.
[9] M. Enzweiler and D. M. Gavrila. A mixed generative-discriminative framework for pedestrian classification. In CVPR, 2008.
[10] X. Fan, K. Zheng, Y. Zhou, and S. Wang. Pose locality constrained representation for 3D human pose reconstruction. In ECCV, 2014.
[11] H. Hattori, V. N. Boddeti, K. M. Kitani, and T. Kanade. Learning scene-specific pedestrian detectors without real data. In CVPR, 2015.
[12] A. Hornung, E. Dekkers, and L. Kobbelt. Character animation from 2D pictures and 3D motion data. ACM Trans. Graph., 26(1), 2007.
[13] C. Ionescu, D. Papava, V. Olaru, and C. Sminchisescu. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. PAMI, 36(7):1325–1339, 2014.
[14] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Reading text in the wild with convolutional neural networks. IJCV, 116(1):1–20, 2016.
[15] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu. Spatial transformer networks. In NIPS, 2015.
[16] S. Johnson and M. Everingham. Clustered pose and nonlinear appearance models for human pose estimation. In BMVC, 2010.
[17] S. Johnson and M. Everingham. Learning effective human pose estimation from inaccurate annotation. In CVPR, 2011.
[18] I. Kostrikov and J. Gall. Depth sweep regression forests for estimating 3D human pose from images. In BMVC, 2014.
[19] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[20] S. Li, W. Zhang, and A. B. Chan. Maximum-margin structured learning with deep networks for 3D human pose estimation. In ICCV, 2015.
[21] R. Okada and S. Soatto. Relevant feature selection for human pose estimation and localization in cluttered images. In ECCV, 2008.
[22] D. Park and D. Ramanan. Articulated pose estimation with tiny synthetic videos. In CVPRW, 2015.
[23] X. Peng, B. Sun, K. Ali, and K. Saenko. Learning deep object detectors from 3D models. In ICCV, 2015.
[24] L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. V. Gehler, and B. Schiele. DeepCut: Joint subset partition and labeling for multi person pose estimation. In CVPR, 2016.
[25] L. Pishchulin, A. Jain, M. Andriluka, T. Thormählen, and B. Schiele. Articulated people detection and pose estimation: Reshaping the future. In CVPR, 2012.
[26] G. Rogez, J. Rihan, C. Orrite, and P. Torr. Fast human pose detection using randomized hierarchical cascades of rejectors. IJCV, 99(1):25–52, 2012.
[27] G. Rogez, J. Supancic, and D. Ramanan. First-person pose recognition using egocentric workspaces. In CVPR, 2015.
[28] J. Romero, H. Kjellstrom, and D. Kragic. Hands in action: real-time 3D reconstruction of hands in interaction with objects. In ICRA, 2010.
[29] G. Shakhnarovich, P. A. Viola, and T. Darrell. Fast pose estimation with parameter-sensitive hashing. In ICCV, 2003.
[30] J. Shotton, A. W. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake. Real-time human pose recognition in parts from single depth images. In CVPR, 2011.
[31] E. Simo-Serra, A. Quattoni, C. Torras, and F. Moreno-Noguer. A joint model for 2D and 3D pose estimation from a single image. In CVPR, 2013.
[32] E. Simo-Serra, A. Ramisa, G. Alenyà, C. Torras, and F. Moreno-Noguer. Single image 3D human pose estimation from noisy observations. In CVPR, 2012.
[33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. CoRR, abs/1409.1556, 2014.
[34] H. Su, C. Ruizhongtai Qi, Y. Li, and L. J. Guibas. Render for CNN: viewpoint estimation in images using CNNs trained with rendered 3D model views. In ICCV, 2015.
[35] B. Tekin, A. Rozantsev, V. Lepetit, and P. Fua. Direct prediction of 3D body poses from motion compensated sequences. In CVPR, 2016.
[36] J. J. Tompson, A. Jain, Y. LeCun, and C. Bregler. Joint training of a convolutional network and a graphical model for human pose estimation. In NIPS, 2014.
[37] A. Toshev and C. Szegedy. DeepPose: Human pose estimation via deep neural networks. In CVPR, 2014.
[38] C. Wang, Y. Wang, Z. Lin, A. L. Yuille, and W. Gao. Robust estimation of 3D human poses from a single image. In CVPR, 2014.
[39] S.-E. Wei, V. Ramakrishna, T. Kanade, and Y. Sheikh. Convolutional pose machines. In CVPR, 2016.
[40] Z. Wu, S. Song, A. Khosla, F. Yu, L. Zhang, X. Tang, and J. Xiao. 3D ShapeNets: A deep representation for volumetric shapes. In CVPR, 2015.
[41] W. Yang, W. Ouyang, H. Li, and X. Wang. End-to-end learning of deformable mixture of parts and deep convolutional neural networks for human pose estimation. In CVPR, 2016.
[42] H. Yasin, U. Iqbal, B. Krüger, A. Weber, and J. Gall. A dual-source approach for 3D pose estimation from a single image. In CVPR, 2016.
[43] F. Zhou and F. De la Torre. Spatio-temporal matching for human detection in video. In ECCV, 2014.
[44] X. Zhou, M. Zhu, S. Leonardos, K. Derpanis, and K. Daniilidis. Sparseness meets deepness: 3D human pose estimation from monocular video. In CVPR, 2016.
[45] S. Zuffi and M. J. Black. The stitched puppet: A graphical model of 3D human shape and pose. In CVPR, 2015.
A Constant-Factor Bi-Criteria Approximation Guarantee for k-means++

Dennis Wei
IBM Research
Yorktown Heights, NY 10598, USA
dwei@us.ibm.com

Abstract

This paper studies the k-means++ algorithm for clustering as well as the class of Dℓ sampling algorithms to which k-means++ belongs. It is shown that for any constant factor β > 1, selecting βk cluster centers by Dℓ sampling yields a constant-factor approximation to the optimal clustering with k centers, in expectation and without conditions on the dataset. This result extends the previously known O(log k) guarantee for the case β = 1 to the constant-factor bi-criteria regime. It also improves upon an existing constant-factor bi-criteria result that holds only with constant probability.

1 Introduction

The k-means problem and its variants constitute one of the most popular paradigms for clustering [15]. Given a set of n data points, the task is to group them into k clusters, each defined by a cluster center, such that the sum of distances from points to cluster centers (raised to a power ℓ) is minimized. Optimal clustering in this sense is known to be NP-hard [11, 3, 20, 6]. In practice, the most widely used algorithm remains Lloyd's [19] (often referred to as the k-means algorithm), which alternates between updating centers given cluster assignments and re-assigning points to clusters. In this paper, we study an enhancement to Lloyd's algorithm known as k-means++ [4] and the more general class of Dℓ sampling algorithms to which k-means++ belongs. These algorithms select cluster centers randomly from the given data points with probabilities proportional to their current costs. The clustering can then be refined using Lloyd's algorithm. Dℓ sampling is attractive for two reasons: First, it is guaranteed to yield an expected O(log k) approximation to the optimal clustering with k centers [4]. Second, it is as simple as Lloyd's algorithm, both conceptually and computationally, with O(nkd) running time in d dimensions.
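The seeding procedure just described (each center drawn with probability proportional to its current cost, uniformly for the first draw) can be sketched in a few lines. This is an illustrative numpy sketch, not the paper's code; with ℓ = 2 and Euclidean distance it is k-means++ seeding.

```python
import numpy as np

def d_ell_sampling(X, t, ell=2, rng=None):
    """D^ell sampling: pick t centers from the rows of X (n x d), each
    chosen with probability proportional to its current cost
    min_j D(x_i, c_j)^ell, with uniform weighting for the first center.
    ell=2 with Euclidean D is k-means++ seeding. Runs in O(n*t*d),
    matching the running time stated for the algorithm."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(X)
    cost = np.ones(n)                      # uniform weighting for the first draw
    centers = []
    for _ in range(t):
        probs = cost / cost.sum()
        idx = rng.choice(n, p=probs)
        centers.append(X[idx])
        # Update each point's cost: distance to its nearest center so far.
        dist = np.linalg.norm(X - X[idx], axis=1) ** ell
        cost = dist if len(centers) == 1 else np.minimum(cost, dist)
    return np.array(centers)
```

Because already-selected points have zero cost, they are never re-selected, and each new center tends to land in an expensive (badly covered) region of the data.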
The particular focus of this paper is on the setting where an optimal k-clustering remains the benchmark but more than k cluster centers can be sampled to improve the approximation. Specifically, it is shown that for any constant factor β > 1, if βk centers are chosen by Dℓ sampling, then a constant-factor approximation to the optimal k-clustering is obtained. This guarantee holds in expectation and for all datasets, like the guarantee in [4], and improves upon the O(log k) factor therein. Such a result is known as a constant-factor bi-criteria approximation, since both the optimal cost and the relevant degrees of freedom (k in this case) are exceeded, but only by constant factors. In the context of clustering, bi-criteria approximation guarantees can be valuable because an appropriate number of clusters k is almost never known or pre-specified in practice. Approaches to determining k from the data are ideally based on knowing how the optimal cost decreases as k increases, but obtaining this optimal trade-off between cost and k is NP-hard, as mentioned earlier. Alternatively, a simpler algorithm (like k-means++) with a constant-factor bi-criteria guarantee would ensure that the trade-off curve generated by this algorithm deviates by no more than constant factors along both axes from the optimal curve. This may be more appealing than a deviation along the cost axis that grows as O(log k). Furthermore, if a solution with a specified number of clusters k is truly required, then linear programming techniques can be used to select a k-subset from the βk cluster centers while still maintaining a constant-factor approximation [1, 8].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

The next section reviews existing work on Dℓ sampling and other clustering approximations. Section 2 formally states the problem, the Dℓ sampling algorithm, and existing lemmas regarding the algorithm.
Section 3 states the main results and compares them to previous results. Proofs are presented in Section 4, with the more algebraic proofs deferred to the supplementary material.

1.1 Related Work

Approximation algorithms for k-means (ℓ = 2), k-medians (ℓ = 1), and related problems span a wide range in the trade-off between tighter approximation factors and lower algorithm complexity. At one end, while exact algorithms [14] and polynomial-time approximation schemes (PTAS) (see [22, 18, 9, 12, 13, 10] and references therein) may have polynomial running times in n, the dependence on k and/or the dimension d is exponential or worse. Simpler local search [17, 5] and linear programming [8, 16] algorithms offer constant-factor approximations but still with high-order polynomial running times in n, and some rely on dense discretizations of size O(nϵ^(−d) log(1/ϵ)). In contrast to the above, this paper focuses on highly practical algorithms in the Dℓ sampling class, including k-means++. As mentioned, it was proved in [4] that Dℓ sampling results in an O(log k) approximation, in expectation and for all datasets. The current work extends this guarantee to the constant-factor bi-criteria regime, also for all datasets. The authors of [4] also provided a matching lower bound, exhibiting a dataset on which k-means++ achieves an expected Ω(log k) approximation. Improved O(1) approximation factors have been shown for sampling algorithms like k-means++ provided that the dataset satisfies certain conditions. Such results were established in [24] for k-means++ and other variants of Lloyd's algorithm, under the condition that the dataset is well-suited in a certain sense to partitioning into k clusters, and for an algorithm called successive sampling [23] with O(n(k + log n) + k² log² n) running time, subject to a bound on the dispersion of the points.
In a similar direction to the one pursued in the present work, [1] showed that if the number of cluster centers is increased to a constant factor times k, then k-means++ can achieve a constant-factor approximation, albeit only with constant probability. An O(1) factor was also obtained independently by [2] using more centers, of order O(k log k). It is important to note that the constant-probability result of [1] in no way implies the main results herein, which hold in expectation and are therefore stronger guarantees. Furthermore, Section 3.1 shows that a constant-probability corollary of Theorem 1 improves upon [1] by more than a factor of 2. Recently, [21, 7] have also established constant-factor bi-criteria results for the k-means problem. These works differ from the present paper in studying more complex local search and linear programming algorithms applied to large discretizations, of size n^O(log(1/ϵ)/ϵ²) (a high-order polynomial) in [21] and O(nϵ^(−d) log(1/ϵ)) in [7], the latter the same as in [17]. Moreover, [7] employs search neighborhoods that are also of exponential size in d (requiring doubly exponential running time).

2 Preliminaries

2.1 Problem Definition

We are given n points x_1, ..., x_n in a real metric space X with metric D(x, y). The objective is to choose t cluster centers c_1, ..., c_t in X and assign points to the nearest cluster center to minimize the potential function

    φ = Σ_{i=1}^{n} min_{j=1,...,t} D(x_i, c_j)^ℓ.    (1)

A cluster is thus defined by the points x_i assigned to a center c_j, where ties (multiple closest centers) are broken arbitrarily. For a subset of points S, define φ(S) = Σ_{x_i ∈ S} min_{j=1,...,t} D(x_i, c_j)^ℓ to be the contribution to the potential from S; φ(x_i) is the contribution from a single point x_i. The exponent ℓ ≥ 1 in (1) is regarded as a problem parameter. Letting ℓ = 2 and D be the Euclidean distance, we have what is usually known as the k-means problem, so called because the optimal cluster centers are means of the points assigned to them. The choice ℓ = 1 is also popular and corresponds to the k-medians problem. Throughout this paper, an optimal clustering will always refer to one that minimizes (1) over solutions with t = k clusters, where k ≥ 2 is given. Likewise, the term optimal cluster and the symbol A will refer to one of the k clusters from this optimal solution. The goal is to approximate the potential φ* of this optimal k-clustering using t = βk cluster centers for β ≥ 1.

2.2 Dℓ Sampling Algorithm

The Dℓ sampling algorithm chooses cluster centers randomly from x_1, ..., x_n with probabilities proportional to their current contributions to the potential, as detailed in Algorithm 1.

Algorithm 1: Dℓ Sampling
  Input: Data points x_1, ..., x_n, number of clusters t.
  Initialize φ(x_i) = 1 for i = 1, ..., n.
  for j = 1 to t do
    Select the jth center c_j = x_i with probability φ(x_i)/φ.
    Update φ(x_i) for i = 1, ..., n.

Following [4], the case ℓ = 2 is referred to as the k-means++ algorithm, and the non-uniform probabilities used after the first iteration are referred to as D² weighting (hence Dℓ in general). For t cluster centers, the running time of Dℓ sampling is O(ntd) in d dimensions. In practice, Algorithm 1 is used as an initialization for Lloyd's algorithm, which usually produces further decreases in the potential. The analysis herein pertains only to Algorithm 1 and not to the subsequent improvement due to Lloyd's algorithm.

2.3 Existing Lemmas Regarding Dℓ Sampling

The following lemmas synthesize useful results from [4] that bound the expected potential within a single optimal cluster due to selecting a center from that cluster with uniform or Dℓ weighting.

Lemma 1. [4, Lemmas 3.1 and 5.1] Given an optimal cluster A, let φ be the potential resulting from selecting a first cluster center randomly from A with uniform weighting. Then E[φ(A)] ≤ r_u^(ℓ) φ*(A) for any A, where

    r_u^(ℓ) = 2 if ℓ = 2 and D is Euclidean, and r_u^(ℓ) = 2^ℓ otherwise.

Lemma 2.
[4, Lemma 3.2] Given an optimal cluster A and an initial potential φ, let φ′ be the potential resulting from adding a cluster center selected randomly from A with Dℓ weighting. Then E[φ′(A)] ≤ r_D^(ℓ) φ*(A) for any A, where r_D^(ℓ) = 2^ℓ r_u^(ℓ).

The factor of 2^ℓ between r_u^(ℓ) and r_D^(ℓ) for general ℓ is explained just before Theorem 5.1 in [4].

3 Main Results

The main results of this paper are stated below in terms of the single-cluster approximation ratio r_D^(ℓ) defined by Lemma 2. Subsequently, in Section 3.1, the results are discussed in the context of previous work.

Theorem 1. Let φ be the potential resulting from selecting βk cluster centers according to Algorithm 1, where β ≥ 1. The expected approximation ratio is then bounded as

    E[φ]/φ* ≤ r_D^(ℓ) ( 1 + min{ ϕ(k − 2) / ((β − 1)k + ϕ), H_{k−1} } − Θ(1/n) ),

where ϕ = (1 + √5)/2 ≈ 1.618 is the golden ratio and H_k = 1 + 1/2 + · · · + 1/k ∼ log k is the kth harmonic number.

In the proof of Theorem 1 in Section 4.2, it is shown that the 1/n term is indeed non-positive and can therefore be omitted, with negligible loss for large n.

The approximation ratio bound in Theorem 1 is stated as a function of k. The following corollary confirms that the theorem also implies a constant-factor bi-criteria approximation.

Corollary 1. With the same definitions as in Theorem 1, the expected approximation ratio is bounded as

    E[φ]/φ* ≤ r_D^(ℓ) ( 1 + ϕ/(β − 1) ).

Proof. The minimum in Theorem 1 is bounded by its first term. This term is in turn increasing in k with asymptote ϕ/(β − 1), which can therefore be taken as a k-independent bound.

It follows from Corollary 1 that a constant "oversampling" ratio β > 1 leads to a constant-factor approximation. Theorem 1 offers a further refinement for finite k. The bounds in Theorem 1 and Corollary 1 consist of two factors. As β increases, the second, parenthesized factor decreases to 1 either exactly or approximately as 1/(β − 1).
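To make the constants concrete, the bounds of Theorem 1 (dropping the non-positive Θ(1/n) term) and of Corollary 1 can be evaluated numerically. The sketch below uses r_D^(ℓ) = 8 for Euclidean k-means (ℓ = 2, so r_u = 2 and r_D = 2² · 2 = 8); the helper names are of course illustrative.

```python
import math

PHI = (1 + math.sqrt(5)) / 2   # golden ratio
R_D = 8                        # r_D^(l) for Euclidean k-means: 2^2 * r_u with r_u = 2

def harmonic(k):
    """k-th harmonic number H_k."""
    return sum(1.0 / i for i in range(1, k + 1))

def theorem1_bound(k, beta, r_d=R_D):
    """Theorem 1 bound on E[phi]/phi*, with the Theta(1/n) term dropped."""
    term = PHI * (k - 2) / ((beta - 1) * k + PHI)
    return r_d * (1 + min(term, harmonic(k - 1)))

def corollary1_bound(beta, r_d=R_D):
    """k-independent bound of Corollary 1."""
    return r_d * (1 + PHI / (beta - 1))

# The finite-k bound never exceeds the k-independent one:
for k in (2, 10, 100):
    for beta in (1.5, 2.0, 4.0):
        assert theorem1_bound(k, beta) <= corollary1_bound(beta) + 1e-12
```

For example, doubling the number of centers (β = 2) already pins the Euclidean k-means approximation ratio below 8(1 + ϕ) for every k and every dataset.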
The first factor of r_D^(ℓ), however, is no smaller than 4, and is a direct consequence of Lemma 2. Any future work on improving Lemma 2 would therefore strengthen the approximation factors above.

3.1 Comparisons to Existing Results

A comparison of Theorem 1 to results in [4] is implicit in its statement, since the H_{k−1} term in the minimum comes directly from [4, Theorems 3.1 and 5.1]. For k = 2, 3, the first term in the minimum is smaller than H_{k−1} for any β ≥ 1, and hence Theorem 1 is always an improvement. For k > 3, Theorem 1 improves upon [4] for β greater than the critical value

    β_c = 1 + ϕ(k − 2 − H_{k−1}) / (k H_{k−1}).

Numerical evaluation of β_c shows that it reaches a maximum value of 1.204 at k = 22 and then decreases back toward 1, roughly as 1/H_{k−1}. It can be concluded that for any k, at most 20% oversampling is required for Theorem 1 to guarantee a better approximation than [4]. The most closely related result to Theorem 1 and Corollary 1 is found in [1, Theorem 1]. The latter establishes a constant-factor bi-criteria approximation that holds only with constant probability, as opposed to in expectation. Since a bound on the expectation implies a bound with constant probability via Markov's inequality (but not the other way around), a direct comparison with [1] is possible. Specifically, for ℓ = 2 and the t = ⌈16(k + √k)⌉ cluster centers assumed in [1], Theorem 1 in the present work implies that

    E[φ]/φ* ≤ 8 ( 1 + min{ ϕ(k − 2) / (⌈15k + 16√k⌉ + ϕ), H_{k−1} } ) ≤ 8 ( 1 + ϕ/15 ),

after taking k → ∞. Then by Markov's inequality,

    φ/φ* ≤ (8/0.97) ( 1 + ϕ/15 ) ≈ 9.137

with probability at least 1 − 0.97 = 0.03, as in [1]. This 9.137 approximation factor is less than half the factor of 20 in [1]. Corollary 1 may also be compared to the results in [21], which are obtained through more complex algorithms applied to a large discretization, of size n^O(log(1/ϵ)/ϵ²) for reasonably small ϵ. The main difference between Corollary 1 and the bounds in [21] is the extra factor of r_D^(ℓ).
As discussed above, this factor is due to Lemma 2 and is unlikely to be intrinsic to the Dℓ sampling algorithm.

4 Proofs

The overall strategy used to prove Theorem 1 is similar to that in [4]. The key intermediate result is Lemma 3 below, which relates the potential at a later iteration in Algorithm 1 to the potential at an earlier iteration. Section 4.1 is devoted to proving Lemma 3. Subsequently, in Section 4.2, Theorem 1 is proven by an application of Lemma 3.

In the sequel, we say that an optimal cluster A is covered by a set of cluster centers if at least one of the centers lies in A. Otherwise A is uncovered. Also define ρ = r_D^(ℓ) φ* as an abbreviation, so that ρ(S) = r_D^(ℓ) φ*(S) for a set of points S.

Lemma 3. For an initial set of centers leaving u optimal clusters uncovered, let φ denote the potential, U the union of the uncovered clusters, and V the union of the covered clusters. Let φ′ denote the potential resulting from adding t ≥ u centers, each selected randomly with Dℓ weighting as in Algorithm 1. Then the new potential is bounded in expectation as

    E[φ′ | φ] ≤ c_V(t, u) φ(V) + c_U(t, u) ρ(U)

for coefficients c_V(t, u) and c_U(t, u) that depend only on t, u. This holds in particular for

    c_V(t, u) = (t + au + b) / (t − u + b) = 1 + (a + 1)u / (t − u + b),    (2a)

    c_U(t, u) = c_V(t − 1, u − 1) for u > 0, and c_U(t, u) = 0 for u = 0,    (2b)

where the parameters a and b satisfy a + 1 ≥ b > 0 and ab ≥ 1. The choice of a, b that minimizes c_V(t, u) in (2a) is a + 1 = b = ϕ.

4.1 Proof of Lemma 3

Lemma 3 is proven using induction, showing that if it holds for (t, u) and (t, u + 1), then it also holds for (t + 1, u + 1), similar to the proof of [4, Lemma 3.3]. The proof is organized into three parts. Section 4.1.1 provides base cases. In Section 4.1.2, sufficient conditions on the coefficients c_V(t, u), c_U(t, u) are derived that allow the inductive step to be completed.
In Section 4.1.3, it is shown that the closed-form expressions in (2) are consistent with the base cases in Section 4.1.1 and satisfy the sufficient conditions from Section 4.1.2, thus completing the proof.

4.1.1 Base cases

This subsection exhibits two base cases of Lemma 3. The first case corresponds to u = 0, for which we have φ(V) = φ. Since adding centers cannot increase the potential, i.e., φ′ ≤ φ deterministically, Lemma 3 holds with

    c_V(t, 0) = 1,  c_U(t, 0) = 0,  t ≥ 0.    (3)

The second base case occurs for t = u, u ≥ 1. For this purpose, a slightly strengthened version of [4, Lemma 3.3] is used, as given next.

Lemma 4. With the same definitions as in Lemma 3 except with t ≤ u, we have

    E[φ′ | φ] ≤ (1 + H_t) φ(V) + (1 + H_{t−1}) ρ(U) + ((u − t)/u) φ(U),

where we define H_0 = 0 and H_{−1} = −1 for convenience.

The improvement is in the coefficient in front of ρ(U), from (1 + H_t) to (1 + H_{t−1}). The proof follows that of [4, Lemma 3.3] with some differences and is deferred to the supplementary material. Specializing to the case t = u, Lemma 4 coincides with Lemma 3 with coefficients

    c_V(u, u) = 1 + H_u,  c_U(u, u) = 1 + H_{u−1}.    (4)

4.1.2 Sufficient conditions on coefficients

We now assume inductively that Lemma 3 holds for (t, u) and (t, u + 1). The induction to the case (t + 1, u + 1) is then completed under the following sufficient conditions on the coefficients:

    c_V(t, u + 1) ≥ 1,    (5a)

    (c_V(t, u + 1) − c_U(t, u + 1)) c_V(t, u)² ≥ (c_U(t, u + 1) − c_V(t, u))²,    (5b)

and

    c_V(t + 1, u + 1) ≥ (1/2) [ c_V(t, u) + ( c_V(t, u)² + 4 max{c_V(t, u + 1) − c_V(t, u), 0} )^(1/2) ],    (6a)

    c_U(t + 1, u + 1) ≥ c_V(t, u).    (6b)

The first pair of conditions (5) applies to the coefficients involved in the inductive hypothesis for (t, u) and (t, u + 1). The second pair (6) can be seen as a recursive specification of the new coefficients for (t + 1, u + 1). This inductive step together with the base cases (3) and (4) is sufficient to extend Lemma 3 to all t > u, starting with (t, u) = (1, 0) and (t, u + 1) = (1, 1).
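As a sanity check (not part of the proof), the closed-form coefficients (2) with the golden-ratio choice a + 1 = b = ϕ can be swept numerically against the sufficient conditions (5) and (6). Several of the inequalities turn out to hold with equality along the diagonal t = u + 1, which is why a small tolerance is used.

```python
import math

PHI = (1 + math.sqrt(5)) / 2
A, B = PHI - 1, PHI            # a + 1 = b = phi, so a + 1 >= b > 0 and ab = 1

def c_V(t, u):
    """Closed-form coefficient (2a) with the golden-ratio parameters."""
    return 1 + (A + 1) * u / (t - u + B)

def c_U(t, u):
    """Coefficient (2b)."""
    return c_V(t - 1, u - 1) if u > 0 else 0.0

# Sweep small (t, u) with t > u and check conditions (5) and (6).
for t in range(1, 30):
    for u in range(0, t):
        # (5a) and (5b) for the pair (t, u) and (t, u+1):
        assert c_V(t, u + 1) >= 1
        lhs = (c_V(t, u + 1) - c_U(t, u + 1)) * c_V(t, u) ** 2
        rhs = (c_U(t, u + 1) - c_V(t, u)) ** 2
        assert lhs + 1e-9 >= rhs
        # (6a) and (6b) for the new coefficients at (t+1, u+1):
        gap = max(c_V(t, u + 1) - c_V(t, u), 0.0)
        rec = 0.5 * (c_V(t, u) + math.sqrt(c_V(t, u) ** 2 + 4 * gap))
        assert c_V(t + 1, u + 1) + 1e-9 >= rec
        assert c_U(t + 1, u + 1) + 1e-9 >= c_V(t, u)
```

Condition (6b) holds with equality by the very definition (2b), and the sweep confirms (6a) is tight for t = u + 1, consistent with ϕ being the minimizing choice.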
The inductive step is broken down into a series of three lemmas, each building upon the last. The first lemma applies the inductive hypothesis to derive a bound on the potential that depends not only on φ(V) and ρ(U) but also on φ(U).

Lemma 5. Assume that Lemma 3 holds for (t, u) and (t, u + 1). Then for the case (t + 1, u + 1), i.e., φ corresponding to u + 1 uncovered clusters and φ′ resulting after adding t + 1 centers,

    E[φ′ | φ] ≤ min{ [c_V(t, u) φ(U) + c_V(t, u + 1) φ(V)] / (φ(U) + φ(V)) · φ(V)
                     + [c_V(t, u) φ(U) + c_U(t, u + 1) φ(V)] / (φ(U) + φ(V)) · ρ(U),
                     φ(U) + φ(V) }.

Proof. We consider the two cases in which the first of the t + 1 new centers is chosen from either the covered set V or the uncovered set U. Denote by φ₁ the potential after adding the first new center.

Covered case: This case occurs with probability φ(V)/φ and leaves the covered and uncovered sets unchanged. We then invoke Lemma 3 with (t, u + 1) (one fewer center to add) and φ₁ playing the role of φ. The contribution to E[φ′ | φ] from this case is then bounded by

    (φ(V)/φ) ( c_V(t, u + 1) φ₁(V) + c_U(t, u + 1) ρ(U) ) ≤ (φ(V)/φ) ( c_V(t, u + 1) φ(V) + c_U(t, u + 1) ρ(U) ),    (7)

noting that φ₁(S) ≤ φ(S) for any set S.

Uncovered case: We consider each uncovered cluster A ⊆ U separately. With probability φ(A)/φ, the first new center is selected from A, moving A from the uncovered to the covered set and reducing the number of uncovered clusters by one. Applying Lemma 3 for (t, u), the contribution to E[φ′ | φ] is bounded by

    (φ(A)/φ) ( c_V(t, u) (φ₁(V) + φ₁(A)) + c_U(t, u) (ρ(U) − ρ(A)) ).

Taking the expectation with respect to possible centers in A and using Lemma 2 and φ₁(V) ≤ φ(V), we obtain the further bound

    (φ(A)/φ) [ c_V(t, u) (φ(V) + ρ(A)) + c_U(t, u) (ρ(U) − ρ(A)) ].

Summing over A ⊆ U yields

    (φ(U)/φ) ( c_V(t, u) φ(V) + c_U(t, u) ρ(U) ) + [(c_V(t, u) − c_U(t, u))/φ] Σ_{A⊆U} φ(A) ρ(A)
        ≤ (φ(U)/φ) c_V(t, u) (φ(V) + ρ(U)),    (8)

using the inner product bound Σ_{A⊆U} φ(A) ρ(A) ≤ φ(U) ρ(U).
The result follows from summing (7) and (8) and combining with the trivial bound E[φ′ | φ] ≤ φ = φ(U) + φ(V).

The bound in Lemma 5 depends on φ(U), the potential over uncovered clusters, which can be arbitrarily large or small. In the next lemma, φ(U) is eliminated by maximizing with respect to it.

Lemma 6. Assume that Lemma 3 holds for (t, u) and (t, u + 1) with c_V(t, u + 1) ≥ 1. Then for the case (t + 1, u + 1) in the sense of Lemma 5,
\[ \mathbb{E}[\phi' \mid \phi] \le \tfrac{1}{2}\, c_V(t, u)\bigl(\phi(V) + \rho(U)\bigr) + \tfrac{1}{2} \max\Bigl\{ c_V(t, u)\bigl(\phi(V) + \rho(U)\bigr),\ \sqrt{Q} \Bigr\}, \]
where
\[ Q = \bigl(c_V(t, u)^2 - 4c_V(t, u) + 4c_V(t, u + 1)\bigr)\phi(V)^2 + 2\bigl(c_V(t, u)^2 - 2c_V(t, u) + 2c_U(t, u + 1)\bigr)\phi(V)\rho(U) + c_V(t, u)^2\rho(U)^2. \]

Proof. Let B₁(φ(U)) and B₂(φ(U)) denote the two terms inside the minimum in Lemma 5 (i.e. B₂(φ(U)) = φ(U) + φ(V)). The derivative of B₁(φ(U)) with respect to φ(U) is given by
\[ B_1'(\phi(U)) = \frac{\phi(V)}{(\phi(U) + \phi(V))^2}\Bigl[ \bigl(c_V(t, u) - c_V(t, u + 1)\bigr)\phi(V) + \bigl(c_V(t, u) - c_U(t, u + 1)\bigr)\rho(U) \Bigr], \]
which does not change sign as a function of φ(U). The two cases B₁′(φ(U)) ≥ 0 and B₁′(φ(U)) < 0 are considered separately below. Taking the maximum of the resulting bounds (9), (10) establishes the lemma.

Case B₁′(φ(U)) ≥ 0: Both B₁(φ(U)) and B₂(φ(U)) are non-decreasing functions of φ(U). The former has the finite supremum
\[ c_V(t, u)\bigl(\phi(V) + \rho(U)\bigr), \tag{9} \]
whereas the latter increases without bound. Therefore B₁(φ(U)) eventually becomes the smaller of the two and (9) can be taken as an upper bound on min{B₁(φ(U)), B₂(φ(U))}.

Case B₁′(φ(U)) < 0: At φ(U) = 0, we have B₁(0) = c_V(t, u + 1)φ(V) + c_U(t, u + 1)ρ(U) and B₂(0) = φ(V). The assumption c_V(t, u + 1) ≥ 1 implies that B₁(0) ≥ B₂(0). Since B₁(φ(U)) is now a decreasing function, the two functions must intersect and the point of intersection then provides an upper bound on min{B₁(φ(U)), B₂(φ(U))}. The supplementary material provides some algebraic details on solving for the intersection. The resulting bound is
\[ \tfrac{1}{2}\, c_V(t, u)\bigl(\phi(V) + \rho(U)\bigr) + \tfrac{1}{2}\sqrt{Q}. \tag{10} \]
The bound in Lemma 6 is a nonlinear function of φ(V) and ρ(U), in contrast to the desired form in Lemma 3. The next step is to linearize the bound by imposing the additional conditions (5).

Lemma 7. Assume that Lemma 3 holds for (t, u) and (t, u + 1) with coefficients satisfying (5). Then for the case (t + 1, u + 1) in the sense of Lemma 5,
\[ \mathbb{E}[\phi' \mid \phi] \le \tfrac{1}{2}\Bigl[ c_V(t, u) + \bigl( c_V(t, u)^2 + 4\max\{c_V(t, u + 1) - c_V(t, u),\, 0\} \bigr)^{1/2} \Bigr]\phi(V) + c_V(t, u)\,\rho(U). \]

Proof. It suffices to linearize the √Q term in Lemma 6, specifically by showing that Q ≤ (aφ(V) + bρ(U))² for all φ(V), ρ(U) with
\[ a = \bigl( c_V(t, u)^2 + 4(c_V(t, u + 1) - c_V(t, u)) \bigr)^{1/2} \quad \text{and} \quad b = c_V(t, u). \]
The proof of this inequality is provided in the supplementary material. Incorporating the inequality into Lemma 6 proves the result.

Given conditions (5) and Lemma 7, the inductive step for Lemma 3 can be completed by defining c_V(t + 1, u + 1) and c_U(t + 1, u + 1) recursively as in (6).

4.1.3 Proof with specific form for coefficients

We now prove that Lemma 3 holds for coefficients c_V(t, u), c_U(t, u) given by (2) with a + 1 ≥ b > 0 and ab ≥ 1. Given the inductive approach and the results established in Sections 4.1.1 and 4.1.2, the proof requires the remaining steps below. First, it is shown that the base cases (3), (4) from Section 4.1.1 imply that Lemma 3 is true for the same base cases but with c_V(t, u), c_U(t, u) given by (2) instead. Second, (2) is shown to satisfy conditions (5) for all t > u, thus permitting Lemma 7 to be used. Third, (2) is also shown to satisfy (6), which combined with Lemma 7 completes the induction.

Considering the base cases, for u = 0, (3) and (2) coincide so there is nothing to prove. For the case t = u, u ≥ 1, Lemma 3 with coefficients given by (4) implies the same with coefficients given by (2) provided that
\[ (1 + H_u)\,\phi(V) + (1 + H_{u-1})\,\rho(U) \le \Bigl( 1 + \frac{(a + 1)u}{b} \Bigr)\phi(V) + \Bigl( 1 + \frac{(a + 1)(u - 1)}{b} \Bigr)\rho(U) \]
for all φ(V), ρ(U). This in turn is ensured if the coefficients satisfy H_u ≤ (a + 1)u/b for all u ≥ 1.
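The sufficiency of this harmonic-number condition is easy to check numerically. The sketch below (ours, not part of the paper) verifies H_u ≤ (a+1)u/b for sample coefficient pairs with a + 1 ≥ b > 0; indeed, since H_u ≤ u and (a+1)u/b ≥ u whenever a + 1 ≥ b, the inequality holds for every u ≥ 1.

```python
from fractions import Fraction

def harmonic(u):
    """H_u = 1 + 1/2 + ... + 1/u, with H_0 = 0."""
    return sum(Fraction(1, i) for i in range(1, u + 1))

# Sample (a, b) pairs with a + 1 >= b > 0; the second pair is a rational
# approximation of the golden-ratio choice (a, b) = (phi - 1, phi).
for a, b in [(Fraction(1), Fraction(2)), (Fraction(3, 5), Fraction(8, 5))]:
    assert a + 1 >= b > 0
    for u in range(1, 200):
        assert harmonic(u) <= (a + 1) * u / b  # H_u <= (a+1)u/b
```

Exact rational arithmetic (`Fraction`) avoids any floating-point doubt in the comparison.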
The most stringent case is u = 1 and corresponds to the assumption a + 1 ≥ b.

For the second step of establishing (5), it is clear that (5a) is satisfied by (2a). A direct calculation presented in the supplementary material shows that (5b) is also true.

Lemma 8. Condition (5b) is satisfied for all t > u if c_V(t, u), c_U(t, u) are given by (2) and ab ≥ 1.

Similarly for the third step, it suffices to show that (2a) satisfies recursion (6a), since (2b) automatically satisfies (6b). A proof is provided in the supplementary material.

Lemma 9. Recursion (6a) is satisfied for all t > u if c_V(t, u) is given by (2a) and ab ≥ 1.

Lastly, we minimize c_V(t, u) in (2a) with respect to a, b, subject to a + 1 ≥ b > 0 and ab ≥ 1. For fixed a, minimizing with respect to b yields b = a + 1 and
\[ c_V(t, u) = 1 + \frac{(a + 1)u}{t - u + a + 1}. \]
Minimizing with respect to a then results in setting ab = a(a + 1) = 1. The solution satisfying a + 1 > 0 is a = ϕ − 1 and b = ϕ.

4.2 Proof of Theorem 1

Denote by n_A the number of points in optimal cluster A. In the first iteration of Algorithm 1, the first cluster center is selected from some A with probability n_A/n. Conditioned on this event, Lemma 3 is applied with covered set V = A, u = k − 1 uncovered clusters, and t = βk − 1 remaining cluster centers. This bounds the final potential φ′ as
\[ \mathbb{E}[\phi' \mid \phi] \le c_V(\beta k - 1, k - 1)\,\phi(A) + c_U(\beta k - 1, k - 1)\,\bigl(\rho - \rho(A)\bigr), \]
where c_V(t, u), c_U(t, u) are given by (2) with a + 1 = b = ϕ. Taking the expectation over possible centers in A and using Lemma 1,
\[ \mathbb{E}[\phi' \mid A] \le r_u^{(\ell)}\, c_V(\beta k - 1, k - 1)\,\phi^*(A) + c_U(\beta k - 1, k - 1)\,\bigl(\rho - \rho(A)\bigr). \]
Taking the expectation over clusters A and recalling that ρ = r_D^{(ℓ)} φ*,
\[ \mathbb{E}[\phi'] \le r_D^{(\ell)}\, c_U(\beta k - 1, k - 1)\,\phi^* - C \sum_A \frac{n_A}{n}\,\phi^*(A), \tag{11} \]
where C = r_D^{(ℓ)} c_U(βk − 1, k − 1) − r_u^{(ℓ)} c_V(βk − 1, k − 1). Using (2) and r_D^{(ℓ)} = 2^ℓ r_u^{(ℓ)} from Lemma 2,
\[ C = r_u^{(\ell)}\, \frac{2^\ell\bigl((\beta - 1)k + \varphi(k - 1)\bigr) - (\beta - 1 + \varphi)k}{(\beta - 1)k + \varphi} = r_u^{(\ell)}\, \frac{(2^\ell - 1)(\beta - 1)k + \varphi\bigl((2^\ell - 1)(k - 1) - 1\bigr)}{(\beta - 1)k + \varphi}. \]
The last expression for C is seen to be non-negative for β ≥ 1, k ≥ 2, and ℓ ≥ 1.
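A quick numerical sanity check of the last two claims (our sketch, not from the paper): the golden-ratio solution satisfies ab = a(a + 1) = 1 with a + 1 = b, and the closed-form expression for C/r_u^{(ℓ)} is non-negative on a grid of admissible (β, k, ℓ):

```python
import math

phi = (1 + math.sqrt(5)) / 2           # golden ratio
a, b = phi - 1, phi
assert abs(a * (a + 1) - 1) < 1e-12    # ab = a(a+1) = 1
assert a + 1 >= b > 0                  # constraint met (with equality)

def C_over_ru(beta, k, l):
    """C / r_u^(l), using the closed form derived in the proof."""
    num = (2**l - 1) * (beta - 1) * k + phi * ((2**l - 1) * (k - 1) - 1)
    return num / ((beta - 1) * k + phi)

# Non-negativity for beta >= 1, k >= 2, l >= 1; the minimum 0 is attained
# at (beta, k, l) = (1, 2, 1).
for beta in (1, 1.5, 2, 5):
    for k in (2, 3, 10, 100):
        for l in (1, 2, 5):
            assert C_over_ru(beta, k, l) >= -1e-12
```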
Furthermore, since n_A = 1 (a singleton cluster) implies that φ*(A) = 0, we have
\[ \sum_A n_A\, \phi^*(A) = \sum_{A : n_A \ge 2} n_A\, \phi^*(A) \ge 2\phi^*. \tag{12} \]
Substituting (2) and (12) into (11), we obtain
\[ \frac{\mathbb{E}[\phi']}{\phi^*} \le r_D^{(\ell)} \Bigl( 1 + \frac{\varphi(k - 2)}{(\beta - 1)k + \varphi} \Bigr) - \frac{2C}{n}. \tag{13} \]
The last step is to recall [4, Theorems 3.1 and 5.1], which together state that
\[ \frac{\mathbb{E}[\phi']}{\phi^*} \le r_D^{(\ell)}\,(1 + H_{k-1}) \tag{14} \]
for φ′ resulting from selecting exactly k cluster centers. In fact, (14) also holds for βk centers, β ≥ 1, since adding centers cannot increase the potential. The proof is completed by taking the minimum of (13) and (14).

References

[1] A. Aggarwal, A. Deshpande, and R. Kannan. Adaptive sampling for k-means clustering. In Proc. 12th Int. Workshop and 13th Int. Workshop on Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, pages 15–28, August 2009.
[2] N. Ailon, R. Jaiswal, and C. Monteleoni. Streaming k-means approximation. In Adv. Neural Information Processing Systems 22, pages 10–18, December 2009.
[3] D. Aloise, A. Deshpande, P. Hansen, and P. Popat. NP-hardness of Euclidean sum-of-squares clustering. Mach. Learn., 75(2):245–248, May 2009.
[4] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proc. 18th ACM-SIAM Symp. Discrete Algorithms, pages 1027–1035, January 2007.
[5] V. Arya, N. Garg, R. Khandekar, A. Meyerson, K. Munagala, and V. Pandit. Local search heuristics for k-median and facility location problems. SIAM J. Comput., 33(3):544–562, March 2004.
[6] P. Awasthi, M. Charikar, R. Krishnaswamy, and A. K. Sinop. The hardness of approximation of Euclidean k-means. In Proc. 31st Int. Symp. Computational Geometry, pages 754–767, June 2015.
[7] S. Bandyapadhyay and K. Varadarajan. On variants of k-means clustering. Technical Report arXiv:1512.02985, December 2015.
[8] M. Charikar, S. Guha, E. Tardos, and D. B. Shmoys. A constant-factor approximation algorithm for the k-median problem. J. Comput. Syst. Sci., 65(1):129–149, August 2002.
[9] K. Chen.
On coresets for k-median and k-means clustering in metric and Euclidean spaces and their applications. SIAM J. Comput., 39(3):923–947, September 2009.
[10] V. Cohen-Addad, P. N. Klein, and C. Mathieu. Local search yields approximation schemes for k-means and k-median in Euclidean and minor-free metrics. Technical Report arXiv:1603.09535, March 2016.
[11] S. Dasgupta. The hardness of k-means clustering. Technical Report CS2008-0916, Department of Computer Science and Engineering, University of California, San Diego, 2008.
[12] D. Feldman, M. Monemizadeh, and C. Sohler. A PTAS for k-means clustering based on weak coresets. In Proc. 23rd Int. Symp. Computational Geometry, pages 11–18, June 2007.
[13] Z. Friggstad, M. Rezapour, and M. R. Salavatipour. Local search yields a PTAS for k-means in doubling metrics. Technical Report arXiv:1603.08976, March 2016.
[14] M. Inaba, N. Katoh, and H. Imai. Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering. In Proc. 10th Int. Symp. Computational Geometry, pages 332–339, 1994.
[15] A. K. Jain. Data clustering: 50 years beyond k-means. Pattern Recogn. Lett., 31(8):651–666, June 2010.
[16] K. Jain and V. V. Vazirani. Approximation algorithms for metric facility location and k-median problems using the primal-dual schema and Lagrangian relaxation. J. ACM, 48(2):274–296, March 2001.
[17] T. Kanungo, D. M. Mount, N. S. Netanyahu, C. D. Piatko, R. Silverman, and A. Y. Wu. A local search approximation algorithm for k-means clustering. Comput. Geom., 28(2–3):89–112, June 2004.
[18] A. Kumar, Y. Sabharwal, and S. Sen. Linear-time approximation schemes for clustering problems in any dimensions. J. ACM, 57(2):5:1–5:32, January 2010.
[19] S. Lloyd. Least squares quantization in PCM. Technical report, Bell Laboratories, 1957.
[20] M. Mahajan, P. Nimbhorkar, and K. Varadarajan. The planar k-means problem is NP-hard. In Proc. 3rd Int. Workshop Algorithms and Computation, pages 274–285, February 2009.
[21] K.
Makarychev, Y. Makarychev, M. Sviridenko, and J. Ward. A bi-criteria approximation algorithm for k-means. Technical Report arXiv:1507.04227, August 2015.
[22] J. Matoušek. On approximate geometric k-clustering. Discrete & Comput. Geom., 24(1):61–84, January 2000.
[23] R. R. Mettu and C. G. Plaxton. Optimal time bounds for approximate clustering. Mach. Learn., 56(1–3):35–60, June 2004.
[24] R. Ostrovsky, Y. Rabani, L. J. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. J. ACM, 59(6):28, December 2012.
CNNpack: Packing Convolutional Neural Networks in the Frequency Domain Yunhe Wang1,3, Chang Xu2, Shan You1,3, Dacheng Tao2, Chao Xu1,3 1Key Laboratory of Machine Perception (MOE), School of EECS, Peking University 2Centre for Quantum Computation and Intelligent Systems, School of Software, University of Technology Sydney 3Cooperative Medianet Innovation Center, Peking University wangyunhe@pku.edu.cn, Chang.Xu@uts.edu.au, youshan@pku.edu.cn, Dacheng.Tao@uts.edu.au, xuchao@cis.pku.edu.cn Abstract Deep convolutional neural networks (CNNs) are successfully used in a number of applications. However, their storage and computational requirements have largely prevented their widespread use on mobile devices. Here we present an effective CNN compression approach in the frequency domain, which focuses not only on smaller weights but on all the weights and their underlying connections. By treating convolutional filters as images, we decompose their representations in the frequency domain as common parts (i.e., cluster centers) shared by other similar filters and their individual private parts (i.e., individual residuals). A large number of low-energy frequency coefficients in both parts can be discarded to produce high compression without significantly compromising accuracy. We relax the computational burden of convolution operations in CNNs by linearly combining the convolution responses of discrete cosine transform (DCT) bases. The compression and speed-up ratios of the proposed algorithm are thoroughly analyzed and evaluated on benchmark image datasets to demonstrate its superiority over state-of-the-art methods. 1 Introduction Thanks to the large amount of accessible training data and computational power of GPUs, deep learning models, especially convolutional neural networks (CNNs), have been successfully applied to various computer vision (CV) applications such as image classification [19], human face verification [20], object recognition, and object detection [7, 17]. 
However, most of the widely used CNNs can only be used on desktop PCs or even workstations due to their demanding storage and computational resource requirements. For example, over 232MB of memory and over 7.24 × 108 multiplications are required to launch AlexNet and VGG-Net per image, preventing them from being used in mobile terminal apps on smartphones or tablet PCs. Nevertheless, CV applications are growing in importance for mobile device use and there is, therefore, an imperative to develop and use CNNs for this purpose. Considering the lack of GPU support and the limited storage and CPU performance of mainstream mobile devices, compressing and accelerating CNNs is essential. Although CNNs can have millions of neurons and weights, recent research [9] has highlighted that over 85% of weights are useless and can be set to 0 without an obvious deterioration in performance. This suggests that the gap in demands made by large CNNs and the limited resources offered by mobile devices may be bridged. Some effective algorithms have been developed to tackle this challenging problem. [8] utilized vector quantization to allow similar connections to share the same cluster center. [6] showed that the weight matrices can be reduced by low-rank decomposition approaches.[4] proposed a network architecture 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Figure 1: The flowchart of the proposed CNNpack. using the “hashing trick” and [4] then transferred the HashedNet into the discrete cosine transform (DCT) frequency domain [3]. [16, 5] proposed binaryNet, whose weights were -1/1 or -1/0/1 [2]. [15] utilizes a sparse decomposition to reduce the redundancy of weights and computational complexity of CNNs. [9] employed pruning [10], quantization, and Huffman coding to obtain a greater than 35× compression ratio and 3× speed improvement, thereby producing state-of-the-art CNNs compression to the best of our knowledge. 
The effectiveness of the pruning strategy relies on the principle that if the absolute value of a weight in a CNN is sufficiently small, its influence on the output is often negligible. However, these methods tend to disregard the properties of larger weights, which might also provide opportunities for compression. Moreover, independently considering each weight ignores the contextual information of other weights. To address the aforementioned problems, we propose handling convolutional filters in the frequency domain using DCT (see Fig. 1). In practice, convolutional filters can be regarded as small and smooth image patches. Hence, any operation on the convolutional filter frequency coefficients in the frequency domain is equivalent to an operation performed simultaneously over all weights of the convolutional filters in the spatial domain. We factorize the representation of the convolutional filter in the frequency domain as the composition of common parts shared with other similar filters and its private part describing some unique information. Both parts can be significantly compressed by discarding a large number of subtle frequency coefficients. Furthermore, we develop an extremely fast convolution calculation scheme that exploits the relationship between the feature maps of DCT bases and frequency coefficients. We have theoretically discussed the compression and the speed-up of the proposed algorithm. Experimental results on benchmark datasets demonstrate that our proposed algorithm can consistently outperform state-of-the-art competitors, with higher compression ratios and speed gains. 2 Compressing CNNs in the Frequency Domain Recently developed CNNs contain a large number of convolutional filters. We regard convolutional filters as small images with intrinsic patterns, and present an approach to compress CNNs in the frequency domain with the help of the DCT. 
2.1 The Discrete Cosine Transform (DCT)

The DCT plays an important role in JPEG compression [22], which is regarded as an approximate KL-transformation for 2D images [1]. In JPEGs, the original image is usually divided into several square patches. For an image patch P ∈ R^{n×n}, its DCT coefficient C ∈ R^{n×n} in the frequency domain is defined as:
\[ C_{j_1 j_2} = \mathcal{D}(P)_{j_1 j_2} = s_{j_1} s_{j_2} \sum_{i_1=0}^{n-1} \sum_{i_2=0}^{n-1} \alpha(i_1, i_2, j_1, j_2)\, P_{i_1 i_2} = c_{j_1}^T P\, c_{j_2}, \tag{1} \]
where s_j = √(1/n) if j = 0 and s_j = √(2/n) otherwise, and C = C^T P C is the matrix form of the DCT, where C = [c_1, ..., c_d] ∈ R^{d×d} is the transformation matrix. The basis of this DCT is S_{j_1 j_2} = c_{j_1} c_{j_2}^T, and α(i_1, i_2, j_1, j_2) denotes the cosine basis function:
\[ \alpha(i_1, i_2, j_1, j_2) = \cos\Bigl( \frac{\pi(2i_1 + 1)j_1}{2n} \Bigr) \cos\Bigl( \frac{\pi(2i_2 + 1)j_2}{2n} \Bigr), \tag{2} \]
and c_j(i) = s_j cos(π(2i + 1)j / (2n)). The DCT is a lossless transformation, thus we can recover the original image by simply utilizing the inverse DCT, i.e.,
\[ P_{i_1 i_2} = \mathcal{D}^{-1}(C)_{i_1 i_2} = \sum_{j_1=0}^{n-1} \sum_{j_2=0}^{n-1} s_{j_1} s_{j_2}\, \alpha(i_1, i_2, j_1, j_2)\, C_{j_1 j_2}, \tag{3} \]
whose matrix form is P = C C C^T. Furthermore, to facilitate the notation we denote the DCT and the inverse DCT for vectors as vec(C) = D(vec(P)) = (C ⊗ C) vec(P) and vec(P) = D^{−1}(vec(C)) = (C ⊗ C)^T vec(C), where vec(·) is the vectorization operation and ⊗ is the Kronecker product.

2.2 Convolutional Layer Compression

Computing Residuals in the Frequency Domain. For a given convolutional layer L_i, we first extract its convolutional filters F^(i) = {F^(i)_1, ..., F^(i)_{N_i}}, where the size of each convolutional filter is d_i × d_i and N_i is the number of filters in L_i. Each filter can then be transformed into a vector, and together they form a matrix X_i = [x^(i)_1, ..., x^(i)_{N_i}] ∈ R^{d_i² × N_i}, where x^(i)_j = vec(F^(i)_j), ∀j = 1, ..., N_i. The DCT has been widely used for image compression, since DCT coefficients exhibit a well-characterized energy distribution in the frequency domain.
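The matrix form of Fcn. 1 and its inverse in Fcn. 3 can be sketched in a few lines of numpy. This is our illustration (the helper name `dct_matrix` is ours), building the orthonormal DCT matrix directly from the definition of c_j:

```python
import numpy as np

def dct_matrix(n):
    """Transformation matrix with columns c_j(i) = s_j cos(pi(2i+1)j/(2n))."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    s = np.where(j == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
    return s * np.cos(np.pi * (2 * i + 1) * j / (2 * n))

n = 8
P = np.random.randn(n, n)        # an image patch (or a small filter)
Cmat = dct_matrix(n)
coef = Cmat.T @ P @ Cmat         # Fcn. 1 in matrix form: C = C^T P C
P_rec = Cmat @ coef @ Cmat.T     # inverse DCT, Fcn. 3: P = C C C^T
assert np.allclose(P, P_rec)                   # the DCT is lossless
assert np.allclose(Cmat.T @ Cmat, np.eye(n))   # the transform is orthonormal
```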
Energies of high-frequency coefficients are usually much smaller than those of low-frequency coefficients for 2D natural images, i.e., the high frequencies tend to have values equal or close to zero [22]. Hence, we propose to transfer X_i into the DCT frequency domain and obtain its frequency representation C = D(X_i) = [C_1, ..., C_{N_i}]. Since a number of convolutional filters will share some similar components, we divide them into several groups G = {G_1, ..., G_K} by exploiting K centers U = [μ_1, ..., μ_K] ∈ R^{d_i² × K} with the following minimization problem:
\[ \arg\min_{G} \sum_{k=1}^{K} \sum_{C \in G_k} \lVert C - \mu_k \rVert_2^2, \tag{4} \]
where μ_k is the cluster center of G_k. Fcn. 4 can easily be solved with the conventional k-means algorithm [9, 8]. For each C_j, we denote its residual with respect to its corresponding cluster center as R_j = C_j − μ_{k_j}, where k_j = arg min_k ||C_j − μ_k||_2. Hence, each convolutional filter is represented by its corresponding cluster center, shared by other similar filters, and its private part R_j in the frequency domain. We further employ the following ℓ1-penalized optimization problem to control redundancy in the private parts for each convolutional filter:
\[ \arg\min_{\hat{R}_j} \lVert \hat{R}_j - R_j \rVert_2^2 + \lambda \lVert \hat{R}_j \rVert_1, \tag{5} \]
where λ is a parameter balancing the reconstruction error and the sparsity penalty. The solution to Fcn. 5 is:
\[ \hat{R}_j = \mathrm{sign}(R_j) \odot \max\Bigl\{ |R_j| - \frac{\lambda}{2},\ 0 \Bigr\}, \tag{6} \]
where sign(·) is the sign function. We can control the redundancy in the cluster centers using the above approach as well.

Quantization, Encoding, and Fine-tuning. The sparse data obtained through Fcn. 6 is continuous, which is not convenient for storage and compression. Hence we need to represent similar values with a common value, e.g., [0.101, 0.100, 0.102, 0.099] → 0.100.
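The soft-thresholding solution in Fcn. 6 is a one-liner; the following sketch (ours) reproduces its behavior on a toy residual vector:

```python
import numpy as np

def soft_threshold(R, lam):
    """Fcn. 6: sign(R) * max(|R| - lam/2, 0), applied elementwise."""
    return np.sign(R) * np.maximum(np.abs(R) - lam / 2, 0.0)

R = np.array([0.30, -0.01, 0.05, -0.40])
# With lam = 0.1, entries with |R| <= 0.05 are zeroed; the rest shrink by 0.05.
assert np.allclose(soft_threshold(R, 0.1), [0.25, 0.0, 0.0, -0.35])
```

A larger λ zeroes more coefficients, which is exactly why it drives the compression ratio in the analysis that follows.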
Inspired by the conventional JPEG algorithm, we use the following function to quantize R̂_j:
\[ \bar{R}_j = \mathcal{Q}\bigl(\hat{R}_j, \Omega, b\bigr) = \Omega \cdot \mathcal{I}\Bigl\{ \frac{\mathrm{Clip}(\hat{R}_j, -b, b)}{\Omega} \Bigr\}, \tag{7} \]
where Clip(x, −b, b) = max(−b, min(b, x)) with boundary b > 0, and Ω is a large integer with similar functionality to the quantization table in the JPEG algorithm. It is useful to note that the quantized values produced by applying Fcn. 7 can be regarded as the one-dimensional k-means centers in [9]; thus a dictionary can consist of the unique values in all {R̄_1, ..., R̄_{N_i}}. Since the occurrence probabilities of elements in the codebook are unbalanced, we employ Huffman encoding for a more compact storage. Finally, the Huffman-encoded data is stored in the compressed sparse row (CSR) format, denoted as E_i. It has also been shown that fine-tuning after compression further enhances network accuracy [10, 9]. In our algorithm, we also employ the fine-tuning approach, holding fixed the weights that have been discarded so that the fine-tuning operation does not change the compression ratio. After generating a new model, we apply Fcn. 7 again to quantize the new model's parameters until convergence. The above scheme for compressing convolutional layers stores four types of data: the compressed data E_i, the Huffman dictionary with H_i quantized values, the k-means centers U ∈ R^{d_i² × K}, and the indexes mapping residual data to centers. It is obvious that if all filters from every layer can share the same Huffman dictionary and cluster centers, the compression ratio will be significantly increased.

2.3 CNNpack for CNN Compression

Global Compression Scheme. To enable all convolutional filters to share the same cluster centers U in the frequency domain, we must convert them into a fixed-dimensional space. It is intuitive to directly resize all convolutional filters into matrices of the same dimensions and then apply k-means. However, this simple resizing method increases the amount of data that needs to be stored.
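For concreteness, one plausible reading of the quantization step in Fcn. 7 (our interpretation: I{·} rounds to the nearest integer, and the effective grid spacing is 1/Ω, which with Ω = 500 is consistent with the ≈5-bit codes reported later) can be sketched as:

```python
import numpy as np

def quantize(R, omega, b):
    """Clip to [-b, b], then snap to a uniform grid of spacing 1/omega.
    The exact placement of omega in Fcn. 7 is our interpretation."""
    clipped = np.clip(R, -b, b)
    return np.round(clipped * omega) / omega

# The running example from the text; a coarse grid (omega = 100, spacing
# 0.01) is used here so that all four similar values collapse to 0.100.
R = np.array([0.101, 0.100, 0.102, 0.099])
assert np.allclose(quantize(R, 100, np.inf), 0.100)
```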
Considering d̄ as the target dimension and d_i × d_i as the convolutional filter size of the i-th layer, the weight matrix would be inaccurately reshaped in the case of d_i < d̄ or d_i > d̄. A more reasonable approach is to resize the DCT coefficient matrices of convolutional filters in the frequency domain, because high-frequency coefficients are generally small and discarding them has only a small impact on the result (d_i > d̄). On the other hand, the introduced zeros will be immediately compressed by CSR, since we do not need to encode or store them (d_i < d̄). Formally, the resizing operation for convolutional filters in the DCT frequency domain can be defined as:
\[ \hat{C}_{j_1, j_2} = \Gamma(C, \bar{d}) = \begin{cases} C_{j_1, j_2}, & \text{if } j_1, j_2 \le \bar{d}, \\ 0, & \text{otherwise}, \end{cases} \tag{8} \]
where d̄ × d̄ is the fixed filter size, C ∈ R^{d_i × d_i} is the DCT coefficient matrix of a filter in the i-th layer, and Ĉ ∈ R^{d̄ × d̄} is the coefficient matrix after resizing. After applying Fcn. 8, we can pack all the coefficient matrices together and use only one set of cluster centers to compute the residual data and then compress the network. We extend the individual compression scheme into an integrated method that is convenient and effective for compressing deep CNNs, which we call CNNpack. The procedures of the proposed algorithm are detailed in the supplementary materials. CNNpack has five hyper-parameters: λ, d̄, K, b, and Ω. Its compression ratio can be calculated by:
\[ r_c = \frac{\sum_{i=1}^{p} 32 N_i d_i^2}{\sum_{i=1}^{p} \bigl( N_i \log K + B_i \bigr) + 32H + 32\bar{d}^2 K}, \tag{9} \]
where p is the number of convolutional layers and H is the number of bits for storing the Huffman dictionary. It is instructive to note that a larger λ (Fcn. 5) puts more emphasis on the common parts of convolutional filters, which leads to a higher compression ratio r_c. A decrease in any of b, d̄, and Ω will increase r_c accordingly. The parameter K is related to the sparseness of E_i: a larger K contributes more to E_i's sparseness but leads to a higher storage requirement.
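The resizing operator Γ in Fcn. 8 amounts to keeping the top-left d̄ × d̄ block of coefficients, truncating high frequencies when the filter is larger than the target and zero-padding (cheap under sparse storage) when it is smaller. A minimal numpy sketch (ours; the function name is hypothetical):

```python
import numpy as np

def resize_coeffs(C, d_target):
    """Fcn. 8: keep the top-left d_target x d_target block of DCT coefficients,
    zero-padding when the source matrix is smaller than the target."""
    d_i = C.shape[0]
    out = np.zeros((d_target, d_target))
    m = min(d_i, d_target)
    out[:m, :m] = C[:m, :m]
    return out

C = np.arange(25, dtype=float).reshape(5, 5)   # a 5x5 coefficient matrix
assert resize_coeffs(C, 3).shape == (3, 3)     # high frequencies discarded
assert resize_coeffs(C, 7)[5:, :].sum() == 0   # padding rows stay zero
```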
A detailed investigation of all these parameters is presented in Section 4, where we also demonstrate and validate the trade-off between the compression ratio and the accuracy of the convolutional neural network (i.e., classification accuracy [19]).

3 Speeding Up Convolutions

According to Fcn. 9, we can obtain a good compression ratio by converting the convolutional filters into the DCT frequency domain and representing them using their corresponding cluster centers and residual data R_i = [R_1, ..., R_{N_i}]. This is an effective strategy to reduce CNN storage requirements and transmission consumption. However, two other important issues need to be emphasized: memory usage and complexity. This section focuses on a single layer throughout; we drop the layer index i in order to simplify notation. In practice, the residual data R in the frequency domain cannot be used directly to calculate convolutions, so we would first have to transform them back into the original filters in the spatial domain, thus saving no memory; moreover, this transformation would further increase the algorithm's complexity. It is thus necessary to explore a scheme that enables the proposed compression method to calculate the feature maps of the original filters directly in the DCT frequency domain. Given a convolutional layer L with its filters F = {F_q}_{q=1}^N of size d × d, we denote the input data as X ∈ R^{H×W} and its output feature maps as Y = {Y_1, Y_2, ..., Y_N} with size H′ × W′, where Y_q = F_q ∗ X for the convolution operation ∗. Here we propose a scheme which decomposes the calculation of the conventional convolutions in the DCT frequency domain. For the DCT matrix C = [c_1, ..., c_d], the d × d convolutional filter F_q can be represented by its DCT coefficient matrix C^(q) with DCT bases {S_{j_1, j_2}}_{j_1, j_2 = 1}^{d} defined as S_{j_1, j_2} = c_{j_1} c_{j_2}^T, namely, F_q = Σ_{j_1=1}^d Σ_{j_2=1}^d C^(q)_{j_1, j_2} S_{j_1, j_2}. In this way, the feature maps of X through F can be calculated as Y_q = Σ_{j_1, j_2 = 1}^d C^(q)_{j_1, j_2} (S_{j_1, j_2} ∗ X), where M = d² is the number of DCT bases.
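The decomposition above, and the identity that follows it (the response of each DCT basis is exactly a DCT coefficient of the input), can be verified numerically. This sketch is ours; it treats the single-position case of a d × d input, where the basis response reduces to an elementwise product and sum:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT matrix with columns c_j(i) = s_j cos(pi(2i+1)j/(2n))."""
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    s = np.where(j == 0, np.sqrt(1.0 / n), np.sqrt(2.0 / n))
    return s * np.cos(np.pi * (2 * i + 1) * j / (2 * n))

d = 5
M = dct_matrix(d)
F = np.random.randn(d, d)       # a convolutional filter
coef = M.T @ F @ M              # its DCT coefficients C^(q)

# Fcn. 10: the filter is a linear combination of rank-1 DCT bases S_{j1,j2}.
F_rec = sum(coef[j1, j2] * np.outer(M[:, j1], M[:, j2])
            for j1 in range(d) for j2 in range(d))
assert np.allclose(F, F_rec)

# For a d x d input X, the basis response (elementwise product + sum) equals
# the corresponding DCT coefficient of X, so one DCT yields all M responses.
X = np.random.randn(d, d)
XC = M.T @ X @ M
for j1 in range(d):
    for j2 in range(d):
        S = np.outer(M[:, j1], M[:, j2])
        assert np.isclose(np.sum(S * X), XC[j1, j2])
```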
Fortunately, since the DCT is an orthogonal transformation, all of its bases are rank-1 matrices. Note that S_{j_1, j_2} ∗ X = (c_{j_1} c_{j_2}^T) ∗ X, and thus the feature map Y_q can be written as
\[ Y_q = F_q * X = \sum_{j_1, j_2 = 1}^{d} C^{(q)}_{j_1, j_2} \bigl( S_{j_1, j_2} * X \bigr) = \sum_{j_1, j_2 = 1}^{d} C^{(q)}_{j_1, j_2} \bigl[ c_{j_1} * (c_{j_2}^T * X) \bigr]. \tag{10} \]
One drawback of this scheme is that when the number of filters in this layer is relatively small, i.e., M ≈ N, Fcn. 10 will increase the computational cost. Fortunately, the complexity can be further reduced given the fascinating fact that the feature maps calculated by these DCT bases are exactly the DCT frequency coefficients of the input data. For a d × d matrix X, consider the matrix form of its DCT, i.e., C = C^T X C, and the DCT bases {S_{j_1, j_2}}; then the coefficient C_{j_1, j_2} can be calculated as
\[ C_{j_1, j_2} = c_{j_1}^T X c_{j_2} = c_{j_1}^T \bigl( c_{j_2}^T * X \bigr) = c_{j_1} * \bigl( c_{j_2}^T * X \bigr) = \bigl( c_{j_1} * c_{j_2}^T \bigr) * X = \bigl( c_{j_1} c_{j_2}^T \bigr) * X = S_{j_1, j_2} * X, \tag{11} \]
thus we can obtain the feature maps of all M DCT bases by applying the DCT only once. The computational complexity of our proposed scheme is analyzed in Proposition 1.

Proposition 1. Given a convolutional layer with N filters, denote by M = d × d the number of DCT bases and by C ∈ R^{d²×N} the compressed coefficients of the filters in this layer. Suppose δ is the ratio of non-zero elements in C, while η is the ratio of non-zero elements in the K′ active cluster centers of this layer. The computational complexity of our proposed scheme is O((d² log d + ηMK′ + δMN) H′W′).

The proof of Proposition 1 can be found in the supplementary materials. According to Proposition 1, the proposed compression scheme in the frequency domain can also improve the speed. Compared to the original CNN, for a convolutional layer, the speed-up of the proposed method is
\[ r_s = \frac{d^2 N H' W'}{\bigl( d^2 \log d + \eta K' M + \delta N M \bigr) H' W'} \approx \frac{N}{\eta K' + \delta N}. \tag{12} \]
Obviously, the speed-up ratio of the proposed method is directly related to η and δ, which correspond to λ in Fcn. 6.

4 Experimental Results

Baselines and Models.
We compared the proposed approach with 4 baseline approaches: Pruning [10], P+QH (Pruning + Quantization and Huffman encoding) [9], SVD [6], and XNOR-Net [16]. The evaluation was conducted using the MNIST and ILSVRC2012 datasets. We applied the proposed compression approach to four baseline CNNs: LeNet [14, 21], AlexNet [13], VGG-16 Net [19], and ResNet-50 [11]. All methods were implemented using MatConvNet [21] and run on NVIDIA Titan X graphics cards. Model parameters were stored and updated as 32-bit floating-point values.

Impact of parameters. As discussed above, the proposed compression method has several important parameters: λ, d̄, K, b, and Ω. We first tested their impact on the network accuracy by conducting an experiment using MNIST [21], where the network has two convolutional layers and two fully-connected layers of size 5 × 5 × 1 × 20, 5 × 5 × 20 × 50, 4 × 4 × 50 × 500, and 1 × 1 × 500 × 10, respectively. The model accuracy was 99.06%. The compression results for different λ and d̄ after fine-tuning for 20 epochs are shown in Fig. 2, in which K was set to 16. b was set to +∞, since it did not make an obvious contribution to the compression ratio even when set to a relatively small value (e.g., b = 0.05) but caused a reduction in accuracy. Ω was set to 500, making the average code length of weights in the frequency domain about 5 bits, a bit larger than that in [9] but more flexible and with relatively better performance. Note that all training parameters used their default settings, such as the number of epochs, learning rates, etc. It can be seen from Fig. 2 that although a lower d̄ slightly improves the compression ratio and speed-up ratio simultaneously, this comes at the cost of decreased overall network accuracy; thus, we kept d̄ = max{d_i}, ∀i = 1, ..., p, in CNNpack. Overall, λ is clearly the most important parameter in the proposed scheme; it is sensitive but monotonic, so it only needs to be adjusted according to demand and restrictions.
Furthermore, we tested the impact of the number of cluster centers K. As mentioned above, K is special in that its impact on performance is not intuitive: as it becomes larger, E becomes sparser but needs more space for storing the cluster centers U and the indexes. Fig. 3 shows that K = 16 provides the best trade-off between compression performance and accuracy.

Figure 2: The performance of the proposed approach with different λ and d̄ (accuracy, compression ratio, and speed-up ratio versus λ, for d̄ = 3, 4, 5).

Figure 3: The performance of the proposed approach with different numbers of cluster centers K (accuracy, compression ratio, and speed-up ratio versus λ, for K = 0, 16, 64, 128).

We also report the compression results obtained by directly compressing the DCT frequency coefficients of the original filters C as before (i.e., K = 0, the black line in Fig. 3). It can be seen that the clustering number does not affect accuracy, but a suitable K does enhance the compression ratio. Another interesting phenomenon is that the speed-up ratio without decomposition is larger than that of the proposed scheme, because this network is extremely small and the clustering introduces additional computational cost, as shown in Fcn. 12. However, recent networks contain many more filters per convolutional layer, far more than K = 16. Based on the above analysis, we kept λ = 0.04 and K = 16 for this network (an accuracy of 99.14%). Accordingly, the compression ratio is r_c = 32.05× and the speed-up ratio is r_s = 8.34×, which is the best trade-off between accuracy and compression performance.

Filter visualization.
The proposed algorithm operates in the frequency domain, and although we do not need to invert the compressed net when calculating convolutions, we can reconstruct the convolutional filters in the spatial domain to provide insights into our approach. Reconstructed convolution filters obtained from the LeNet trained on MNIST are shown in Fig. 4.

Figure 4: Visualization of example filters learned on MNIST: (a) the original convolutional filters, (b) filters after pruning, (c) convolutional filters compressed by the proposed algorithm.

The proposed approach is fundamentally different from the previously used pruning algorithm. According to Fig. 4(b), weights with smaller magnitudes are pruned, while opportunities to compress the larger weights are disregarded. In contrast, our proposed algorithm not only handles the smaller weights but also considers the impact of the larger weights. Most importantly, we accomplish the compression task by exploring the underlying connections between all the weights in a convolutional filter (see Fig. 4(c)).

Compressing AlexNet and VGG-16 Net on ImageNet. We next employed CNNpack for CNN compression on the ImageNet ILSVRC-2012 dataset [18], which contains over 1.2M training images and 50k validation images. First, we examined two conventional models: AlexNet [13], with over 61M parameters and a top-5 accuracy of 80.8%; and VGG-16 Net, which is much larger than AlexNet, with over 138M parameters and a top-5 accuracy of 90.1%. Table 1 shows the detailed compression and speed-up ratios of AlexNet with λ = 0.04 and K = 16. The results for VGG-16 Net can be found in the supplementary materials. The reported multiplications are for computing one image.

Table 1: Compression statistics for AlexNet.
Layer | Num of Weights      | Memory   | rc   | Multiplications | rs
conv1 | 11 × 11 × 3 × 96    | 0.13MB   | 878× | 1.05×10^8       | 127×
conv2 | 5 × 5 × 48 × 256    | 1.17MB   | 94×  | 2.23×10^8       | 28×
conv3 | 3 × 3 × 256 × 384   | 3.37MB   | 568× | 1.49×10^8       | 33×
conv4 | 3 × 3 × 192 × 384   | 2.53MB   | 42×  | 1.12×10^8       | 15×
conv5 | 3 × 3 × 192 × 256   | 1.68MB   | 43×  | 0.74×10^8       | 12×
fc6   | 6 × 6 × 256 × 4096  | 144MB    | 148× | 0.37×10^8       | 100×
fc7   | 1 × 1 × 4096 × 4096 | 64MB     | 15×  | 0.16×10^8       | 8×
fc8   | 1 × 1 × 4096 × 1000 | 15.62MB  | 121× | 0.04×10^8       | 60×
Total | 60954656            | 232.52MB | 39×  | 7.24×10^8       | 25×

We achieved a 39× compression ratio for AlexNet and a 46× compression ratio for VGG-16 Net. Layers with relatively larger filter sizes had larger compression ratios because they contain more subtle high-frequency coefficients. In contrast, the highest speed-up ratios were often obtained on layers whose filter number N is much larger than their filter size, e.g., the fc6 layer of AlexNet. We obtained about a 9× speed-up ratio on VGG-16 Net, which is lower than that on AlexNet, since complexity is related to the feature map size and the first several layers have many more multiplications. Unfortunately, their filter numbers are relatively small and their compression ratios are all small, so the overall speed-up ratio is lower than that of AlexNet. In comparison, when we set K = 0, the compression and speed-up ratios of AlexNet were close to 35× and 22×, and those of VGG-16 Net were near 28× and 7×. This gap arises because these two networks are relatively large and contain many similar filters; moreover, the filter number in each layer is larger than the number of cluster centers, i.e., N > K. Thus, cluster centers can effectively reduce memory consumption and computational complexity simultaneously.

ResNet-50 on ImageNet. Here we discuss a more recent model, ResNet-50 [11], which has more than 150 layers, 54 of which are convolutional. This model achieves a top-5 error of 7.71% and a top-1 error of 24.62% with only about 95MB of parameters [21].
Moreover, since most of its convolutional filters are 1 × 1, i.e., its architecture is very thin with little redundancy in the weights, it is very hard to compress. For the experiment on ResNet-50, we set K = 0, since the functionality of Fcn. 7 for 1-dimensional filters is similar to that of k-means clustering; thus cluster centers are dispensable for this model. Further, we set λ = 0.04 and therefore discarded about 84% of the original weights in the frequency domain.

Figure 5: Compression statistics for ResNet-50 (better viewed in color): (a) compression ratios of all convolutional layers; (b) speed-up ratios of all convolutional layers.

After fine-tuning, we obtained a 7.82% top-5 error on the ILSVRC2012 dataset. Fig. 5 shows the detailed compression statistics of ResNet-50 using the proposed CNNpack. In summary, the memory used to store its filters was squeezed by a factor of 12.28×. It is worth mentioning that larger filters seem to have more redundant weights and connections, since the compression ratios on layers with 1×1 filters are smaller than those on layers with 3×3 filters. On the other hand, these 1×1 filters account for a larger proportion of the multiplications in ResNet, so we obtained about a 4× speed-up on this network.

Comparison with state-of-the-art methods. We detail a comparison with state-of-the-art methods for compressing DNNs in Table 2. CNNpack clearly achieves the best performance in terms of both the compression ratio (rc) and the speed-up ratio (rs). Note that although Pruning+QH (P+QH) achieves a similar compression ratio to the proposed method, the filters in their algorithm are stored in a modified CSC format that records index differences, which means the model must be decoded before any calculation. Hence, the compression ratio of P+QH will be lower than that reported in [9] if we consider only online memory usage. In contrast, the compressed data produced by our method can be used directly for network calculation.
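To make the frequency-domain mechanism underlying these comparisons concrete, here is a minimal, hypothetical sketch of the core idea only: transform a filter with the 2-D DCT, drop coefficients below a threshold tied to λ, and reconstruct. It deliberately omits everything else in the CNNpack pipeline (k-means clustering of coefficients, quantization, Huffman coding, and fine-tuning), and the 4×4 filter values are invented for illustration:

```python
import math

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: row k holds the k-th cosine basis vector.
    return [[(math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
             * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
             for i in range(n)] for k in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def compress_filter(F, lam):
    """Zero out DCT coefficients below lam * max|coefficient|, then reconstruct."""
    n = len(F)
    C = dct_matrix(n)
    D = matmul(matmul(C, F), transpose(C))            # 2-D DCT of the filter
    peak = max(abs(v) for row in D for v in row)
    D_sparse = [[v if abs(v) >= lam * peak else 0.0 for v in row] for row in D]
    recon = matmul(matmul(transpose(C), D_sparse), C)  # inverse 2-D DCT
    kept = sum(1 for row in D_sparse for v in row if v != 0.0)
    return D_sparse, recon, kept

# A smooth (hence frequency-compressible) toy filter.
filt = [[0.9, 0.8, 0.1, 0.0],
        [0.8, 0.7, 0.1, 0.0],
        [0.1, 0.1, 0.0, 0.0],
        [0.0, 0.0, 0.0, 0.1]]
coefs, recon, kept = compress_filter(filt, 0.04)
```

Because the DCT is orthonormal, setting the threshold to zero recovers the filter exactly; a positive threshold trades reconstruction error for sparsity of the stored coefficients.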
In practice, online memory usage is the real constraint on mobile devices, and the proposed method is superior to previous works in terms of both the compression ratio and the speed-up ratio.

Table 2: An overall comparison of state-of-the-art methods for deep neural network compression and speed-up, where rc is the compression ratio and rs is the speed-up ratio ("—" denotes a value not reported).

Model   | Evaluation | Original | Pruning [10] | P+QH [9] | SVD [6] | XNOR [16] | CNNpack
AlexNet | rc         | 1        | 9×           | 35×      | 5×      | 64×       | 39×
AlexNet | rs         | 1        | —            | —        | 2×      | 58×       | 25×
AlexNet | top-1 err  | 41.8%    | 42.7%        | 42.7%    | 44.0%   | 56.8%     | 41.6%
AlexNet | top-5 err  | 19.2%    | 19.6%        | 19.7%    | 20.5%   | 31.8%     | 19.2%
VGG16   | rc         | 1        | 13×          | 49×      | —       | —         | 46×
VGG16   | rs         | 1        | —            | 3.5×     | —       | —         | 9.4×
VGG16   | top-1 err  | 28.5%    | 31.3%        | 31.1%    | —       | —         | 29.7%
VGG16   | top-5 err  | 9.9%     | 10.8%        | 10.9%    | —       | —         | 10.4%

5 Conclusion

Neural network compression techniques are desirable so that CNNs can be used on mobile devices. We therefore presented an effective compression scheme in the DCT frequency domain, namely CNNpack. In contrast to state-of-the-art methods, we tackle the problem in the frequency domain, which offers the potential for higher compression and speed-up ratios. Moreover, we no longer consider each weight independently, since the calculation of each frequency coefficient involves all weights in the spatial domain. Building on the proposed compression approach, we explored a much cheaper convolution calculation based on the sparsity of the compressed net in the frequency domain. Although the compressed network produced by our approach is sparse in the frequency domain, it has the same functionality as the original network, since the intrinsic structure of the filters in the spatial domain is preserved. Our experiments show that both the compression ratio and the speed-up ratio are higher than those of state-of-the-art methods. The proposed CNNpack approach builds a bridge between traditional signal and image compression and CNN compression theory, allowing us to further explore CNN approaches in the frequency domain.
Acknowledgements This work was supported by the National Natural Science Foundation of China under Grants NSFC 61375026 and 2015BAF15B00, and Australian Research Council Projects FT130101457, DP-140102164 and LE-140100061.

References
[1] Nasir Ahmed, T Natarajan, and Kamisetty R Rao. Discrete cosine transform. IEEE Transactions on Computers, 100(1):90–93, 1974.
[2] Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In ICML, 2014.
[3] Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing convolutional neural networks. arXiv preprint arXiv:1506.04449, 2015.
[4] Wenlin Chen, James T Wilson, Stephen Tyree, Kilian Q Weinberger, and Yixin Chen. Compressing neural networks with the hashing trick. 2015.
[5] Matthieu Courbariaux and Yoshua Bengio. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
[6] Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In NIPS, 2014.
[7] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[8] Yunchao Gong, Liu Liu, Ming Yang, and Lubomir Bourdev. Compressing deep convolutional networks using vector quantization. arXiv preprint arXiv:1412.6115, 2014.
[9] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. 2016.
[10] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In NIPS, 2015.
[11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[12] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, 2014.
[13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[15] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In CVPR, 2015.
[16] Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. XNOR-Net: ImageNet classification using binary convolutional neural networks. arXiv preprint arXiv:1603.05279, 2016.
[17] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015.
[18] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
[19] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[20] Yi Sun, Yuheng Chen, Xiaogang Wang, and Xiaoou Tang. Deep learning face representation by joint identification-verification. In NIPS, 2014.
[21] Andrea Vedaldi and Karel Lenc. MatConvNet: Convolutional neural networks for MATLAB. In Proceedings of the 23rd Annual ACM Conference on Multimedia, 2015.
[22] Gregory K Wallace. The JPEG still picture compression standard. IEEE Transactions on Consumer Electronics, 38(1):xviii–xxxiv, 1992.
Feature-distributed sparse regression: a screen-and-clean approach
Jiyan Yang† Michael W. Mahoney‡ Michael A. Saunders† Yuekai Sun§
† Stanford University ‡ University of California at Berkeley § University of Michigan
jiyan@stanford.edu mmahoney@stat.berkeley.edu saunders@stanford.edu yuekai@umich.edu

Abstract

Most existing approaches to distributed sparse regression assume the data is partitioned by samples. However, for high-dimensional data (D ≫ N), it is more natural to partition the data by features. We propose an algorithm for distributed sparse regression when the data is partitioned by features rather than samples. Our approach allows the user to tailor our general method to various distributed computing platforms by trading off the total amount of data (in bits) sent over the communication network against the number of rounds of communication. We show that an implementation of our approach is capable of solving ℓ1-regularized ℓ2 regression problems with millions of features in minutes.

1 Introduction

Explosive growth in the size of modern datasets has fueled the recent interest in distributed statistical learning. For examples, we refer to [2, 20, 9] and the references therein. The main computational bottleneck in distributed statistical learning is usually the movement of data between compute nodes, so the overarching goal of algorithm design is the minimization of such communication costs. Most work on distributed statistical learning assumes the data is partitioned by samples. However, for high-dimensional datasets, it is more natural to partition the data by features. Unfortunately, methods suited to such feature-distributed problems are scarce. A possible explanation for this paucity is that feature-distributed problems are harder than their sample-distributed counterparts. If the data is distributed by samples, each machine has a complete view of the problem (albeit a partial view of the dataset).
Given only its local data, each machine can fit the full model. On the other hand, if the data is distributed by features, each machine no longer has a complete view of the problem; it can only fit a (generally mis-specified) submodel. Thus communication among the machines is necessary to solve feature-distributed problems. In this paper, our goal is to develop algorithms that minimize the amount of data (in bits) sent over the network across all rounds for feature-distributed sparse linear regression. The sparse linear model is

y = Xβ* + ϵ,  (1)

where X ∈ R^{N×D} are features, y ∈ R^N are responses, β* ∈ R^D are (unknown) regression coefficients, and ϵ ∈ R^N are unobserved errors. The model is sparse because β* is s-sparse; i.e., the cardinality of S := supp(β*) is at most s. Although it is an idealized model, the sparse linear model has proven itself useful in a wide variety of applications. A popular way to fit a sparse linear model is the lasso [15, 3]:

$\hat\beta \leftarrow \arg\min_{\|\beta\|_1 \le 1} \tfrac{1}{2N}\|y - X\beta\|_2^2,$

where we assume the problem is scaled so that ∥β*∥₁ = 1. There is a well-developed theory of the lasso ensuring that the lasso estimator β̂ is nearly as close to β* as an oracle estimator $X_S^\dagger y$, where S ⊂ [D] is the support of β* [11]. Formally, under some conditions on the Gram matrix (1/N) XᵀX, the (in-sample) prediction error of the lasso is roughly (s log D)/N. Since the prediction error of the oracle estimator is roughly s/N, the lasso estimator is almost as good as the oracle estimator. We refer to [8] for the details. We propose an approach to feature-distributed sparse regression that attains the convergence rate of the lasso estimator. Our approach, which we call SCREENANDCLEAN, consists of two stages: a screening stage, where we reduce the dimensionality of the problem by discarding irrelevant features; and a cleaning stage, where we fit a sparse linear model to a sketched problem.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
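As a concrete, purely illustrative instance of model (1) in the feature-partitioned setting, the sketch below draws an s-sparse β*, generates y = Xβ* + ϵ, and assigns each machine a contiguous block of feature columns. All sizes (N, D, s, m) and the ±1 coefficient values are assumptions made for the example:

```python
import random

random.seed(0)
N, D, s, m = 50, 200, 5, 4          # samples, features, sparsity, machines

# Draw an s-sparse coefficient vector, as in model (1).
support = random.sample(range(D), s)
beta = [0.0] * D
for j in support:
    beta[j] = random.choice([-1.0, 1.0])

# Gaussian design and noisy responses y = X beta* + eps.
X = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
y = [sum(X[i][j] * beta[j] for j in range(D)) + random.gauss(0, 0.1)
     for i in range(N)]

# Feature partitioning: machine k holds the k-th block of columns,
# so no single machine sees the whole design matrix.
blocks = [list(range(k * D // m, (k + 1) * D // m)) for k in range(m)]
```

Each machine can form X_k β_k for its own block, but computing Xβ (and hence fitting the full model) requires communication, which is the bottleneck the paper targets.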
The key features of the proposed approach are:

• We reduce the best-known communication cost (in bits) of feature-distributed sparse regression from O(mN²) to O(mNs) bits, where N is the sample size, m is the number of machines, and s is the sparsity. To our knowledge, the proposed approach is the only one that exploits sparsity to reduce communication cost.

• As a corollary, we show that constrained Newton-type methods converge linearly (up to a statistical tolerance) on high-dimensional problems that are not strongly convex. Moreover, the convergence rate depends only weakly on the condition number of the problem.

• Another benefit of our approach is that it allows users to trade off the amount of data (in bits) sent over the network against the number of rounds of communication. At one extreme, it is possible to reduce the number of bits sent over the network to Õ(mNs) (at the cost of log(N/(s log D)) rounds of communication). At the other extreme, it is possible to reduce the total number of iterations to a constant at the cost of sending Õ(mN²) bits over the network.

Related work. DECO [17] is a recently proposed method that addresses the same problem we address. At a high level, DECO is based on the observation that if the features on separate machines are uncorrelated, the sparse regression problem decouples across machines. To ensure this, DECO first applies a decorrelation step to the features. The method is communication-efficient in that it requires only a single round of communication, in which O(mN²) bits of data are sent over the network. We refer to [17] for the details of DECO. As we shall see, in the cleaning stage of our approach we use sub-Gaussian sketches. In fact, other sketches, e.g., sketches based on the Hadamard transform [16] and sparse sketches [4], may also be used. An overview of various sketching techniques can be found in [19].
The cleaning stage of our approach is operationally very similar to the iterative Hessian sketch (IHS) of Pilanci and Wainwright for constrained least squares problems [12]. Similar Newton-type methods that rely on sub-sampling rather than sketching were also studied by [14]. However, they are chiefly concerned with the convergence of the iterates to the (stochastic) minimizer of the least squares problem, while we are chiefly concerned with the convergence of the iterates to the unknown regression coefficients β*. Further, their assumptions on the sketching matrix are stated in terms of the transformed tangent cone at the minimizer of the least squares problem, while our assumptions are stated in terms of the tangent cone at β*. Finally, we wish to point out that our results are similar in spirit to those on the fast convergence of first-order methods [1, 10] on high-dimensional problems in the presence of restricted strong convexity. However, those results are also chiefly concerned with the convergence of the iterates to the (stochastic) minimizer of the least squares problem, and they concern first-order, rather than second-order, methods.

2 A screen-and-clean approach

Our approach SCREENANDCLEAN consists of two stages:
1. Screening Stage: reduce the dimension of the problem from D to d = O(N) by discarding irrelevant features.
2. Cleaning Stage: fit a sparse linear model to the O(N) selected features.

We note that it is possible to avoid communication in the screening stage by using a method based on the marginal correlations between the features and the response. Further, by exploiting sparsity, it is possible to reduce the amount of communication to O(mNs) bits (ignoring polylogarithmic factors). To the authors' knowledge, all existing one-shot approaches to feature-distributed sparse regression that involve only a single round of communication require sending O(mN²) bits over the network.
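A screen based on marginal correlations, of the kind referred to above, keeps the features whose |(1/N) x_jᵀy| is largest; since each machine can rank its own columns, no communication is needed. A minimal single-machine sketch, where the sizes, signal strength, and noise level are all invented for illustration:

```python
import random

random.seed(1)
N, D, s = 100, 500, 5

# Planted sparse signal: strong coefficients so the screen reliably finds them.
support = random.sample(range(D), s)
beta = {j: 3.0 for j in support}
X = [[random.gauss(0, 1) for _ in range(D)] for _ in range(N)]
y = [sum(X[i][j] * b for j, b in beta.items()) + random.gauss(0, 0.5)
     for i in range(N)]

def marginal_screen(X, y, keep):
    # Rank features by |(1/N) x_j^T y| and keep the `keep` largest.
    n = len(y)
    corr = [abs(sum(X[i][j] * y[i] for i in range(n)) / n)
            for j in range(len(X[0]))]
    order = sorted(range(len(corr)), key=lambda j: corr[j], reverse=True)
    return set(order[:keep])

selected = marginal_screen(X, y, keep=2 * N)   # screen down to d = 2N features
```

The screen is deliberately permissive: most of the 2N retained features are irrelevant, but with high probability every relevant feature survives, which is all the later cleaning stage requires.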
In the first stage of SCREENANDCLEAN, the k-th machine selects a subset Ŝ_k of potentially relevant features, where |Ŝ_k| = d_k ≲ N. To avoid discarding any relevant features, we use a screening method that has the sure screening property:

$P\big(\mathrm{supp}(\beta_k^*) \subset \hat S_k \text{ for all } k \in [m]\big) \to 1,  (2)$

where β*_k is the k-th block of β*. We remark that we do not require the selection procedure to be variable-selection consistent; that is, we do not require it to select only relevant features. In fact, we permit the possibility that most of the selected features are irrelevant. There are many existing methods that, under some conditions on the strength of the signal, have the sure screening property. A prominent example is sure independence screening (SIS) [6]:

$\hat S_{\rm SIS} \leftarrow \big\{ i \in [D] : \tfrac1N x_i^T y \text{ is among the } \lfloor \tau N \rfloor \text{ largest entries of } \tfrac1N X^T y \big\}.  (3)$

SIS requires no communication among the machines, making it particularly amenable to distributed implementation. Other methods include HOLP [18]. In the second stage of SCREENANDCLEAN, which is presented as Algorithm 1, we solve the reduced sparse regression problem in an iterative manner. At a high level, our approach is a constrained quasi-Newton method. At the beginning of the second stage, each machine sketches the features that are stored locally:

$\tilde X_k \leftarrow \tfrac{1}{\sqrt{nT}}\, S X_{k,\hat S_k},$

where S ∈ R^{nT×N} is a sketching matrix and X_{k,Ŝ_k} ∈ R^{N×d_k} comprises the features stored on the k-th machine that were selected by the screening stage. For notational convenience, we divide X̃_k row-wise into T blocks, X̃_k = [X̃_{k,1}; …; X̃_{k,T}], where each block is an n × d_k matrix. We emphasize that the sketching matrix is identical on all the machines; to ensure this, it is necessary to synchronize the random number generators on the machines. We restrict our attention to sub-Gaussian sketches; i.e., the rows of S are i.i.d. sub-Gaussian random vectors.
Formally, a random vector x ∈ R^d is 1-sub-Gaussian if $P(\theta^T x \ge \epsilon) \le e^{-\epsilon^2/2}$ for any θ ∈ S^{d−1}, ϵ > 0. Two examples of sub-Gaussian sketches are the standard Gaussian sketch, where the S_{i,j} are i.i.d. N(0, 1), and the Rademacher sketch, where the S_{i,j} are i.i.d. Rademacher random variables. After each machine sketches the features stored locally, it sends the sketched features X̃_k and the correlations of the screened features with the response, $\hat\gamma_k := \tfrac1N X_{k,\hat S_k}^T y$, to a central machine, which solves a sequence of T regularized quadratic programs (QPs) to estimate β*:

$\tilde\beta_t \leftarrow \arg\min_{\beta \in \mathbb B_1^d}\ \tfrac12 \beta^T \tilde\Gamma_t \beta - (\hat\gamma - \hat\Gamma \tilde\beta_{t-1} + \tilde\Gamma_t \tilde\beta_{t-1})^T \beta,$

where $\hat\gamma = [\hat\gamma_1^T, \dots, \hat\gamma_m^T]^T$ are the correlations of the screened features with the response, $\hat\Gamma = \tfrac1N X_{\hat S}^T X_{\hat S}$ is the Gram matrix of the features selected by the screening stage, and $\tilde\Gamma_t := [\tilde X_{1,t}, \dots, \tilde X_{m,t}]^T [\tilde X_{1,t}, \dots, \tilde X_{m,t}]$. As we shall see, despite the absence of strong convexity, the sequence $\{\tilde\beta_t\}_{t=1}^\infty$ converges q-linearly to β* up to the statistical precision.

Algorithm 1 Cleaning Stage
Sketching
1: Each machine computes the sketches $\tfrac{1}{\sqrt{nT}} S_t X_{k,\hat S_k}$ and sufficient statistics $\tfrac1N X_{k,\hat S_k}^T y$, t ∈ [T].
2: A central machine collects the sketches and sufficient statistics and forms $\tilde\Gamma_t$, the Gram matrix of the horizontally concatenated sketched blocks $\tfrac{1}{\sqrt{nT}}[\dots, S_t X_{k,\hat S_k}, \dots]$, and $\hat\gamma$, the vertical stack of the $\tfrac1N X_{k,\hat S_k}^T y$.
Optimization
3: for t ∈ [T] do
4:   The cluster computes $\hat\Gamma \tilde\beta_{t-1}$ in a distributed fashion: $\hat y_{t-1} \leftarrow \sum_{k\in[m]} X_{k,\hat S_k} \tilde\beta_{t-1,k}$, and $\hat\Gamma \tilde\beta_{t-1}$ is the vertical stack of the $\tfrac1N X_{k,\hat S_k}^T \hat y_{t-1}$.
5:   $\tilde\beta_t \leftarrow \arg\min_{\beta\in\mathbb B_1^d}\ \tfrac12 \beta^T \tilde\Gamma_t \beta - (\hat\gamma - \hat\Gamma\tilde\beta_{t-1} + \tilde\Gamma_t\tilde\beta_{t-1})^T \beta$
6: end for
7: The central machine pads $\tilde\beta_T$ with zeros to obtain an estimator of β*.

The cleaning stage involves 2T + 1 rounds of communication: step 2 involves a single round, and step 4 involves two rounds per iteration. We remark that T is a small integer in practice; consequently, the number of rounds of communication is also a small integer.
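To see why a sub-Gaussian sketch preserves the Gram information that the QP needs, here is a small numerical check (sizes are arbitrary, chosen only for the illustration) that a Rademacher sketch X̃ = SX/√n gives (1/N) X̃ᵀX̃ ≈ (1/N) XᵀX:

```python
import random

random.seed(2)
N, p, n = 400, 3, 200   # samples, (screened) features, sketch size

X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(N)]

# Rademacher sketch S (n x N) with i.i.d. +/-1 entries; E[S^T S] = n * I,
# so the sketched Gram matrix is unbiased for the original Gram matrix.
S = [[random.choice([-1.0, 1.0]) for _ in range(N)] for _ in range(n)]
Xs = [[sum(S[i][t] * X[t][j] for t in range(N)) / (n ** 0.5)
       for j in range(p)] for i in range(n)]

def gram(A, scale):
    q = len(A[0])
    return [[sum(row[i] * row[j] for row in A) / scale for j in range(q)]
            for i in range(q)]

G  = gram(X, N)    # (1/N) X^T X
Gs = gram(Xs, N)   # (1/N) Xs^T Xs, computed from the much smaller sketch
```

With n = 200 sketch rows instead of N = 400 samples, the entries of the sketched Gram matrix fluctuate around those of the exact one; the paper's Conditions 3.3 and 3.4 formalize exactly this kind of closeness, restricted to the cone K(S).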
In terms of the amount of data (in bits) sent over the network, the communication cost of the cleaning stage grows as O(dnmT), where d is the number of features selected by the screening stage and n is the sketch size. The communication cost of step 2 is O(dmnT + d), while that of step 4 is O(d + N). Thus the dominant term is the O(dnmT) incurred by the machines sending sketches to the central machine.

3 Theoretical properties of the screen-and-clean approach

In this section, we establish our main theoretical result regarding the SCREENANDCLEAN approach, given as Theorem 3.5. Recall that a key element of our approach is that the first stage of SCREENANDCLEAN establishes the sure screening property, i.e., (2). To this end, we begin by stating a result by Fan and Lv that establishes sufficient conditions for SIS, i.e., (3), to possess the sure screening property.

Theorem 3.1 (Fan and Lv (2008)). Let Σ be the covariance of the predictors and Z = XΣ^{−1/2} be the whitened predictors. We assume Z satisfies the concentration property: there are c, c₁ > 1 and C₁ > 0 such that

$P\big(\lambda_{\max}(\tilde d^{-1}\tilde Z \tilde Z^T) > c_1 \ \text{or}\ \lambda_{\min}(\tilde d^{-1}\tilde Z \tilde Z^T) < c_1^{-1}\big) \le e^{-C_1 n}$

for any N × d̃ submatrix Z̃ of Z. Further,
1. the rows of Z are spherically symmetric, and the ϵ_i are i.i.d. N(0, σ²) for some σ > 0;
2. var(y) ≲ 1, $\min_{j\in S} |\beta_j^*| \ge c_2 N^{-\kappa}$, and $\min_{j\in S} |\mathrm{cov}(y, x_j)| \ge c_3 |\beta_j^*|$ for some κ > 0 and c₂, c₃ > 0;
3. there is c₄ > 0 such that λ_max(Σ) ≤ c₄.

As long as κ < 1/2, there is some θ < 1 − 2κ such that if τ = cN^{−θ} for some c > 0, we have

$P(S \subset \hat S_{\rm SIS}) = 1 - C_2 \exp\big(-\tfrac{C N^{1-2\kappa}}{\log N}\big)$

for some C, C₂ > 0, where Ŝ_SIS is given by (3).

The assumptions of Theorem 3.1 are discussed at length in [6], Section 5. We remark that the most stringent is the assumption on the signal-to-noise ratio (SNR): it rules out the possibility that a relevant variable is (marginally) uncorrelated with the response. We continue our analysis by studying the convergence rate of our approach.
We begin by describing three structural conditions we impose on the problem. In the rest of the section, let K(S) := {β ∈ R^d : ∥β_{S^c}∥₁ ≤ ∥β_S∥₁}.

Condition 3.2 (RE condition). There is α₁ > 0 s.t. $\|\beta\|_{\hat\Gamma}^2 \ge \alpha_1 \|\beta\|_2^2$ for any β ∈ K(S).

Condition 3.3. There is α₂ > 0 s.t. $\|\beta\|_{\tilde\Gamma_t}^2 \ge \alpha_2 \|\beta\|_{\hat\Gamma}^2$ for any β ∈ K(S).

Condition 3.4. There is α₃ > 0 s.t. $|\beta_1^T(\tilde\Gamma_t - \hat\Gamma)\beta_2| \le \alpha_3 \|\beta_1\|_{\hat\Gamma}\|\beta_2\|_{\hat\Gamma}$ for any β₁, β₂ ∈ K(S).

The preceding conditions deserve elaboration. The cone K(S) is an object that appears in the study of the statistical properties of constrained M-estimators: it is the set to which the error of the constrained lasso, β̂ − β*, belongs. Its image under X_Ŝ is the transformed tangent cone, which contains the prediction error $X_{\hat S}(\hat\beta - \beta^*)$. Condition 3.2 is a common assumption in the literature on high-dimensional statistics; it is a specialization of the notion of restricted strong convexity that plays a crucial part in the study of constrained M-estimators. Conditions 3.3 and 3.4 are conditions on the sketch: at a high level, they state that the action of the sketched Gram matrix $\tilde\Gamma_t$ on K(S) is similar to that of $\hat\Gamma$ on K(S). As we shall see, they are satisfied with high probability by sub-Gaussian sketches. The following theorem is our main result regarding the SCREENANDCLEAN method.

Theorem 3.5. Under Conditions 3.2, 3.3, and 3.4, for any T > 0 such that $\|\tilde\beta_t - \beta^*\|_{\hat\Gamma} \ge \frac{\sqrt L}{\sqrt s}\|\hat\beta - \beta^*\|_1$ for all t ≤ T, we have

$\|\tilde\beta_t - \beta^*\|_{\hat\Gamma} \le \gamma^{t-1}\|\tilde\beta_1 - \beta^*\|_{\hat\Gamma} + \frac{\epsilon_{\rm st}(N, D)}{1-\gamma},$

where $\gamma = \frac{c_\gamma \alpha_3}{\alpha_2}$ is the contraction factor ($c_\gamma > 0$ is an absolute constant) and

$\epsilon_{\rm st}(N, D) = \frac{2(1 + 12\alpha_3)\lambda_{\max}(\hat\Gamma)^{1/2}}{\alpha_2 \sqrt s}\|\hat\beta - \beta^*\|_1 + \frac{24\sqrt s}{\alpha_2\sqrt{\alpha_1}}\|\hat\Gamma\beta^* - \hat\gamma\|_\infty.$

To interpret Theorem 3.5, recall that $\|\hat\beta - \beta^*\|_2 \lesssim_P \sqrt s\,\|\hat\Gamma\beta^* - \hat\gamma\|_\infty$ and $\|\hat\beta - \beta^*\|_1 \lesssim_P s\,\|\hat\Gamma\beta^* - \hat\gamma\|_\infty$, where β̂ is the lasso estimator. Further, the prediction error of the lasso estimator is (up to a constant) $\frac{\sqrt L}{\sqrt s}\|\hat\beta - \beta^*\|_1$, which (up to a constant) is exactly the statistical precision ϵ_st(N, D).
Theorem 3.5 states that the prediction error of β̃_t decreases q-linearly to that of the lasso estimator. We emphasize that the convergence rate is linear despite the absence of strong convexity, which is usually the case when N < D. A direct consequence is that only logarithmically many iterations suffice to ensure a desired suboptimality, as stated in the following corollary.

Corollary 3.6. Under the conditions of Theorem 3.5,

$T = \frac{\log\big(\epsilon - \frac{\epsilon_{\rm st}(N,D)}{1-\gamma}\big)^{-1} - \log\frac{1}{\epsilon_1}}{\log\frac{1}{\gamma}} \approx \log\frac{1}{\epsilon}$

iterations of the constrained quasi-Newton method, where $\epsilon_1 = \|\tilde\beta_1 - \beta^*\|_{\hat\Gamma}$, are enough to produce an iterate whose prediction error is smaller than

$\epsilon > \max\Big\{\tfrac{\lambda_{\max}(\hat\Gamma)^{1/2}}{\sqrt s}\|\hat\beta - \beta^*\|_1,\ \tfrac{\epsilon_{\rm st}(N,D)}{1-\gamma}\Big\} \approx \|\hat\beta - \beta^*\|_{\hat\Gamma}.$

Theorem 3.5 is vacuous if the contraction factor $\gamma = \frac{c_\gamma\alpha_3}{\alpha_2}$ is not smaller than 1. To ensure γ < 1, it is enough to choose the sketch size n so that $\frac{\alpha_3}{\alpha_2} < c_\gamma^{-1}$. Consider the "good event"

$E(\delta) := \big\{\alpha_2 \ge 1 - \delta,\ \alpha_3 \le \tfrac\delta2\big\}.  (4)$

If the rows of S_t are sub-Gaussian, to ensure E(δ) occurs with high probability, Pilanci and Wainwright show that it is enough to choose

$n > \frac{c_s}{\delta^2}\, W\big(X_{\hat S}(K(S) \cap S^{d-1})\big)^2,  (5)$

where c_s > 0 is an absolute constant and W(S) is the Gaussian width of the set S ⊂ R^d [13].

Lemma 3.7 (Pilanci and Wainwright (2014)). For any sketching matrix whose rows are independent 1-sub-Gaussian vectors, as long as the sketch size n satisfies (5), $P(E(\delta)) \ge 1 - c_5\exp(-c_6 n\delta^2)$, where c₅, c₆ are absolute constants.

As a result, when the sketch size n satisfies (5), Theorem 3.5 is non-trivial.

Tradeoffs depending on sketch size. We remark that the contraction coefficient in Theorem 3.5 depends on the sketch size: as the sketch size n increases, the contraction coefficient decays, and vice versa. Thus the sketch size allows practitioners to trade off the total number of rounds of communication against the total amount of data (in bits) sent over the network. A larger sketch size results in fewer rounds of communication but more bits per round, and vice versa.
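Ignoring the statistical-precision term, the iteration count in Corollary 3.6 reduces to roughly log(ϵ₁/ϵ)/log(1/γ). A toy calculator (all numeric values are illustrative, not from the paper) shows how a smaller contraction factor, i.e., a larger sketch, cuts the number of rounds:

```python
import math

def iterations_needed(eps1, eps, gamma):
    """Rounds to shrink the error from eps1 to eps with per-round contraction
    gamma, i.e. the Corollary 3.6 count with the statistical term dropped."""
    return math.ceil((math.log(1.0 / eps) - math.log(1.0 / eps1))
                     / math.log(1.0 / gamma))

# Starting error 1.0, target 1e-3: halving each round takes 10 iterations,
# while a stronger contraction of 1/4 per round takes only 5.
T_half    = iterations_needed(eps1=1.0, eps=1e-3, gamma=0.5)   # 10
T_quarter = iterations_needed(eps1=1.0, eps=1e-3, gamma=0.25)  # 5
```

Each saved iteration saves two rounds of communication in Algorithm 1, which is the rounds-versus-bits trade-off discussed above.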
Recall [5] that the communication cost of an algorithm is rounds × overhead + bits × bandwidth⁻¹. By tweaking the sketch size, users can trade off rounds against bits, thereby minimizing the communication cost of our approach on various distributed computing platforms. For example, the user of a cluster of commodity machines is more concerned with overhead than the user of a purpose-built high-performance cluster [7]. In the following, we study the two extremes of the trade-off. At one extreme, users are solely concerned with the total amount of data sent over the network. On such platforms, users should use smaller sketches to reduce the total amount of data sent, at the expense of performing a few extra iterations (rounds of communication).

Corollary 3.8. Under the conditions of Theorem 3.5 and Lemma 3.7, selecting d := ⌊τN⌋ features by SIS, where τ = cN^{−θ} for some c > 0 and θ < 1 − 2κ, and letting

$n > \frac{c_s(c_\gamma + 2)^2}{4}\, W\big(X_{\hat S}(K(S)\cap S^{d-1})\big)^2, \qquad T = \frac{\log\frac{1}{\epsilon_{\rm st}(N,D)} - \log\frac{1}{\epsilon_1}}{\log 2}$

in Algorithm 1 ensures $\|\tilde\beta_T - \beta^*\|_{\hat\Gamma} \le 3\epsilon_{\rm st}(N, D)$ with probability at least

$1 - c_4 T\exp(-c_2 n\delta^2) - C_2\exp\big(-\tfrac{C N^{1-2\kappa}}{\log N}\big),$

where c, c_γ, c_s, c₂, c₄, C, C₂ are absolute constants.

We state the corollary in terms of the statistical precision ϵ_st(N, D) and the Gaussian width to keep the expressions concise. It is known that the Gaussian width of the transformed tangent cone that appears in Corollary 3.8 is O((s log d)^{1/2}) [13]. Thus it is possible to keep the sketch size n on the order of s log d. Recalling d = ⌊τN⌋, where τ is specified in the statement of Theorem 3.1, and $\epsilon_{\rm st}(N, D) \le \big(\tfrac{s\log D}{N}\big)^{1/2}$, we deduce that the communication cost of the approach is

$O(dnmT) = O\big(N(s\log d)\,m\log\tfrac{N}{s\log D}\big) = \tilde O(mNs),$

where Õ ignores polylogarithmic terms. The takeaway is that it is possible to obtain an O(ϵ_st(N, D))-accurate solution by sending Õ(mNs) bits over the network. Compared to the O(mN²) communication cost of DECO, we see that our approach exploits sparsity to reduce communication cost.
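The two regimes can be compared with a rough bit count based on the O(dnmT) cost model, taking d ≈ N and either a small sketch n ≈ s log d with log(N/(s log D)) rounds, or a large sketch n ≈ N with a single round. Everything here, including the specific sizes, is an order-of-magnitude illustration with all constants dropped, not the paper's exact expressions:

```python
import math

def comm_bits(m, N, s, D, one_shot):
    # Screening keeps d = O(N) features; cost model is d * n * m * T.
    d = N
    if one_shot:
        n, T = N, 1                          # big sketch, single round
    else:
        n = math.ceil(s * math.log(d))       # small sketch ...
        T = max(1, math.ceil(math.log(N / (s * math.log(D)))))  # ... more rounds
    return d * n * m * T

few_bits  = comm_bits(m=8, N=10_000, s=20, D=1_000_000, one_shot=False)
one_round = comm_bits(m=8, N=10_000, s=20, D=1_000_000, one_shot=True)
# The sparsity-exploiting regime sends far fewer total bits than the one-shot
# regime, at the price of a handful of extra communication rounds.
```

At these (assumed) sizes the small-sketch regime sends roughly an order of magnitude fewer bits, mirroring the Õ(mNs) versus Õ(mN²) comparison with DECO.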
At the other extreme, there is a line of work in statistics that studies estimators whose evaluation requires only a single round of communication; DECO is such a method. In our approach, it is possible to obtain an ϵ_st(N, D)-accurate solution in a single iteration by choosing the sketch size large enough to ensure the contraction factor γ is on the order of ϵ_st(N, D).

Corollary 3.9. Under the conditions of Theorem 3.5 and Lemma 3.7, selecting d := ⌊τN⌋ features by SIS, where τ = cN^{−θ} for some c > 0 and θ < 1 − 2κ, and letting

$n > \frac{c_s\big(c_\gamma\,\epsilon_{\rm st}(N, D)^{-1} + 2\big)^2}{4}\, W\big(X_{\hat S}(K(S)\cap S^{d-1})\big)^2$

and T = 1 in Algorithm 1 ensures $\|\tilde\beta_T - \beta^*\|_{\hat\Gamma} \le 3\epsilon_{\rm st}(N, D)$ with probability at least $1 - c_4 T\exp(-c_2 n\delta^2) - C_2\exp\big(-\tfrac{C N^{1-2\kappa}}{\log N}\big)$, where c, c_γ, c_s, c₂, c₄, C, C₂ are absolute constants.

Figure 1: Plots of the statistical error $\log\|\tilde X(\hat\beta - \beta^*)\|_2^2$ versus iteration, for sketch sizes m ∈ {231, 277, 369, 553, 922} and for the lasso: (a) x_i i.i.d. ∼ N(0, I_D); (b) x_i i.i.d. ∼ AR(1). Each plot shows the convergence of 10 runs of Algorithm 1 on the same problem instance. We see that the statistical error decreases linearly up to the statistical precision of the problem.

Recalling $\epsilon_{\rm st}(N, D)^2 \approx \tfrac{s\log D}{N}$ and $W\big(X_{\hat S}(K(S)\cap S^{d-1})\big)^2 \approx s\log d$, we deduce that the communication cost of the one-shot approach is

$O(dnmT) = O\big(N^2 m\log\tfrac{N}{s\log D}\big) = \tilde O(mN^2),$

which matches the communication cost of DECO.

4 Simulation results

In this section, we provide empirical evaluations of our main algorithm SCREENANDCLEAN on synthetic datasets. In most of the experiments, the performance of the methods is evaluated in terms of the prediction error, defined as $\|\tilde X(\hat\beta - \beta^*)\|_2^2$. All the experiments are implemented in Matlab on a shared-memory machine with 512 GB of RAM and four 6-core Intel Xeon E7540 2 GHz processors.
We use TFOCS as a solver for any optimization problem involved, e.g., step 5 in Algorithm 1. For brevity, we refer to our approach as SC in the rest of the section.

4.1 Impact of number of iterations and sketch size

First, we confirm the prediction of Theorem 3.5 by simulation. Figure 1 shows the prediction error of the iterates of Algorithm 1 for different sketch sizes m. We generate a random instance of a sparse regression problem of size 1000 × 10000 with sparsity s = 10, and apply Algorithm 1 to estimate the regression coefficients. Since Algorithm 1 is a randomized algorithm, for a given (fixed) dataset its error is reported as the median of the results from 11 independent trials. The two subfigures show the results for two random designs: standard Gaussian (left) and AR(1) (right). Within each subfigure, each curve corresponds to a sketch size, and the dashed black line shows the prediction error of the lasso estimator. On the logarithmic scale, a linearly convergent sequence of points appears as a straight line. As predicted by Theorem 3.5, the iterates of Algorithm 1 converge linearly up to the statistical precision, which is (roughly) the prediction error of the lasso estimator, and then plateau. As expected, the larger the sketch size, the fewer iterations are needed. These results are consistent with our theoretical findings.

4.2 Impact of sample size N

Next, we evaluate the statistical performance of our SC algorithm as N grows. For completeness, we also evaluate several competing methods, namely the lasso, SIS [6], and DECO [17]. The synthetic datasets used in our experiments are based on model (1), with X ∼ N(0, I_D) or X ∼ N(0, Σ) with all predictors equally correlated with correlation 0.7, and ϵ ∼ N(0, 1). Similar to the setting in [17], the support S of β* satisfies |S| = 5 with coordinates randomly chosen from {1, . . . , D}, and

$\beta_i^* = \begin{cases} (-1)^{\mathrm{Ber}(0.5)}\big(|N(0, 1)| + 5\sqrt{\log D / N}\big) & i \in S,\\ 0 & i \notin S.\end{cases}$
We generate datasets with fixed D = 3000 and N ranging from 50 to 600. For each N, 20 synthetic datasets are generated and the plots are made by averaging the results. In order to compare with methods such as DECO, which is concerned with the Lagrangian formulation of the lasso, we modify our algorithm accordingly. That is, in step 5 of Algorithm 1 we solve

β̃_t ← argmin_{β ∈ R^d} ½ βᵀ Γ̃_t β − (γ̂ − Γ̂ β̃_{t−1})ᵀ β + λ∥β∥₁.

In our experiments, the regularization parameter is set to λ = 2∥Xᵀϵ∥∞. Also, for SIS and SC, the screening size is set to 2N. For SC, we run it with sketch size n = 2s log(N), where s = 5, and 3 iterations. For DECO, the dataset is partitioned into m = 3 subsets and it is implemented without the refinement step. The results for the two kinds of design matrix are presented in Figure 2.

Figure 2: Plots of the statistical error log ∥X̃(β̂ − β*)∥²₂ versus log N; (a) is generated on datasets with independent predictors, xi i.i.d. ∼ N(0, I_D), and (b) on datasets with correlated predictors, xi i.i.d. ∼ N(0, Σ). Besides our main algorithm SC, several competing methods, namely lasso, SIS and DECO, are evaluated. Here D = 3000. For each N, 20 independent simulated datasets are generated and the averaged results are plotted.

As can be seen, SIS achieves errors similar to the lasso. Indeed, upon closer inspection, we find that in the cases where predictors are highly correlated, i.e., Figure 2(b), usually fewer than 2 non-zero coefficients are recovered by sure independence screening. Nevertheless, this does not degrade the accuracy much. Moreover, SC's performance is comparable to both SIS and lasso, as its prediction error decreases at the same rate, and SC outperforms DECO in our experiments.
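A standard way to solve an ℓ1-regularized quadratic subproblem of the kind arising in step 5 is proximal gradient descent (ISTA) with soft-thresholding. The following is a minimal sketch under our own naming (`ista`, `soft_threshold`); the paper itself uses TFOCS for this step.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(Gamma, b, lam, n_steps=500):
    """Minimize 0.5 * beta^T Gamma beta - b^T beta + lam * ||beta||_1
    by proximal gradient descent with a fixed step 1/L."""
    beta = np.zeros(Gamma.shape[0])
    L = np.linalg.norm(Gamma, 2)   # spectral norm = Lipschitz constant
    for _ in range(n_steps):
        grad = Gamma @ beta - b
        beta = soft_threshold(beta - grad / L, lam / L)
    return beta
```

For the subproblem in step 5, one would take Gamma to be the sketched matrix Γ̃_t and b the corresponding linear term; the sketch only makes the quadratic part cheaper to form, the solver is unchanged.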
Figure 3: Running time of a Spark implementation of SC versus number of machines.

Finally, in order to demonstrate that our approach is amenable to distributed computing environments, we implement it using Spark1 on a modern cluster with 20 nodes, each of which has 12 executor cores. We run our algorithm on an independent Gaussian problem instance of size 6000 × 200,000 with sparsity s = 20. The screening size is 2400, the sketch size is 700, and the number of iterations is 3. To show the scalability, we report the running time using 1, 2, 4, 8, and 16 machines, respectively. As most of the steps in our approach are embarrassingly parallel, the running time is almost halved each time the number of machines is doubled.

5 Conclusion and discussion

We presented an approach to feature-distributed sparse regression that exploits the sparsity of the regression coefficients to reduce communication cost. Our approach relies on sketching to compress the information that has to be sent over the network. Empirical results verify our theoretical findings.

1http://spark.apache.org/

Acknowledgments. We would like to thank the Army Research Office and the Defense Advanced Research Projects Agency for providing partial support for this work.

References

[1] Alekh Agarwal, Sahand Negahban, and Martin J. Wainwright. Fast global convergence of gradient methods for high-dimensional statistical recovery. The Annals of Statistics, 40(5):2452–2482, 2012.
[2] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[3] Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders. Atomic decomposition by basis pursuit. SIAM Review, 43(1):129–159, 2001.
[4] K. L. Clarkson and D. P. Woodruff.
Low rank approximation and regression in input sparsity time. In Symposium on Theory of Computing (STOC), 2013. [5] Jim Demmel. Communication avoiding algorithms. In 2012 SC Companion: High Performance Computing, Networking Storage and Analysis, pages 1942–2000. IEEE, 2012. [6] Jianqing Fan and Jinchi Lv. Sure independence screening for ultra-high dimensional feature space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 70(5):849–911, 2008. [7] Alex Gittens, Aditya Devarakonda, Evan Racah, Michael F. Ringenburg, Lisa Gerhardt, Jey Kottalam, Jialin Liu, Kristyn J. Maschhoff, Shane Canon, Jatin Chhugani, Pramod Sharma, Jiyan Yang, James Demmel, Jim Harrell, Venkat Krishnamurthy, Michael W. Mahoney, and Prabhat. Matrix factorization at scale: a comparison of scientific data analytics in spark and C+MPI using three case studies. arXiv preprint arXiv:1607.01335, 2016. [8] Trevor J. Hastie, Robert Tibshirani, and Martin J. Wainwright. Statistical Learning with Sparsity: The Lasso and Its Generalizations. CRC Press, 2015. [9] Jason D. Lee, Yuekai Sun, Qiang Liu, and Jonathan E. Taylor. Communication-efficient sparse regression: a one-shot approach. arXiv preprint arXiv:1503.04337, 2015. [10] Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. Ann. Statist., 40(3):1637–1664, 06 2012. [11] Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for highdimensional analysis of m-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012. [12] Mert Pilanci and Martin J. Wainwright. Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares. arXiv preprint arXiv:1411.0347, 2014. [13] Mert Pilanci and Martin J. Wainwright. Randomized sketches of convex programs with sharp guarantees. Information Theory, IEEE Transactions on, 61(9):5096–5115, 2015. 
[14] Farbod Roosta-Khorasani and Michael W. Mahoney. Sub-sampled Newton methods II: Local convergence rates. arXiv preprint arXiv:1601.04738, 2016. [15] Robert Tibshirani. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol., pages 267–288, 1996. [16] Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Adv. Adapt. Data Anal., 3(1-2):115–126, 2011. [17] Xiangyu Wang, David Dunson, and Chenlei Leng. Decorrelated feature space partitioning for distributed sparse regression. arXiv preprint arXiv:1602.02575, 2016. [18] Xiangyu Wang and Chenlei Leng. High dimensional ordinary least squares projection for screening variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2015. [19] David P. Woodruff. Sketching as a tool for numerical linear algebra. arXiv preprint arXiv:1411.4357, 2014. [20] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Communication-efficient algorithms for statistical optimization. Journal of Machine Learning Research, 14:3321–3363, 2013. 9
Generating Images with Perceptual Similarity Metrics based on Deep Networks

Alexey Dosovitskiy and Thomas Brox
University of Freiburg
{dosovits, brox}@cs.uni-freiburg.de

Abstract

We propose a class of loss functions, which we call deep perceptual similarity metrics (DeePSiM), that allow generating sharp high-resolution images from compressed abstract representations. Instead of computing distances in the image space, we compute distances between image features extracted by deep neural networks. This metric reflects perceptual similarity of images much better and thus leads to better results. We demonstrate two use cases of the proposed loss: (1) networks that invert the AlexNet convolutional network; (2) a modified variational autoencoder that generates realistic high-resolution random images.

1 Introduction

Recently there has been a surge of interest in training neural networks to generate images. These are being used for a wide variety of applications: generative models, analysis of learned representations, learning of 3D representations, and future prediction in videos. Nevertheless, there is little work on loss functions appropriate for the image generation task. The widely used squared Euclidean (SE) distance between images often yields blurry results; see Fig. 1(b). This is especially the case when there is inherent uncertainty in the prediction. For example, suppose we aim to reconstruct an image from its feature representation. The precise location of all details is not preserved in the features. A loss in image space leads to averaging over all likely locations of details, hence the reconstruction looks blurry. However, exact locations of all fine details are not important for perceptual similarity of images. What is important is the distribution of these details.
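The averaging effect can be seen in a toy computation. The following sketch is our own illustration, not from the paper: when the exact position of a detail is uncertain, the prediction minimizing expected squared Euclidean loss is the mean of the plausible images, which is blurred.

```python
import numpy as np

# Plausible "images": a bright dot whose exact position is uncertain.
candidates = np.zeros((5, 16))
for k in range(5):
    candidates[k, 5 + k] = 1.0          # dot shifted by k pixels

# The SE-optimal prediction is the mean over plausible images,
# which smears the dot over all positions -> a blurry image.
mean_img = candidates.mean(axis=0)

def total_se(pred, samples):
    return float(((pred - samples) ** 2).sum())

# The mean beats every individual sharp candidate in SE loss ...
assert all(total_se(mean_img, candidates) < total_se(c, candidates)
           for c in candidates)
# ... yet its peak intensity drops from 1.0 to 0.2: blur.
print(mean_img.max())
```

So a network trained with SE loss is rewarded for predicting the blurry mean rather than any one of the sharp, equally plausible images.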
Our main insight is that invariance to irrelevant transformations and sensitivity to local image statistics can be achieved by measuring distances in a suitable feature space. In fact, convolutional networks provide a feature representation with desirable properties: they are invariant to small, smooth deformations but sensitive to perceptually important image properties, like salient edges and textures.

Using a distance in feature space alone does not yet yield a good loss function; see Fig. 1(d). Since feature representations are typically contractive, feature similarity does not automatically imply image similarity. In practice this leads to high-frequency artifacts (Fig. 1(d)). To force the network to generate realistic images, we introduce a natural image prior based on adversarial training, as proposed by Goodfellow et al. [1]1. We train a discriminator network to distinguish the output of the generator from real images based on local image statistics. The objective of the generator is to trick the discriminator, that is, to generate images that the discriminator cannot distinguish from real ones. A combination of similarity in an appropriate feature space with adversarial training yields the best results; see Fig. 1(e). Results produced with the adversarial loss alone (Fig. 1(c)) are clearly inferior to those of our approach, so the feature space loss is crucial.

1An interesting alternative would be to explicitly analyze feature statistics, similar to Gatys et al. [2]. However, our preliminary experiments with this approach were not successful.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Reconstructions from AlexNet FC6 with different components of the loss: (a) original, (b) image loss only, (c) image + adversarial loss, (d) image + feature loss, (e) our full loss.

Figure 2: Schematic of our model. Black solid lines denote the forward pass. Dashed lines with arrows on both ends are the losses. Thin dashed lines denote the flow of gradients.
The new loss function is well suited for generating images from highly compressed representations. We demonstrate this in two applications: inversion of the AlexNet convolutional network and a generative model based on a variational autoencoder. Reconstructions obtained with our method from high-level activations of AlexNet are significantly better than with existing approaches. They reveal that even the predicted class probabilities contain rich texture, color, and position information. As an example of a true generative model, we show that a variational autoencoder trained with the new loss produces sharp and realistic high-resolution 227 × 227 pixel images. 2 Related work There is a long history of neural network based models for image generation. A prominent class of probabilistic models of images are restricted Boltzmann machines [3] and their deep variants [4, 5]. Autoencoders [6] have been widely used for unsupervised learning and generative modeling, too. Recently, stochastic neural networks [7] have become popular, and deterministic networks are being used for image generation tasks [8]. In all these models, loss is measured in the image space. By combining convolutions and un-pooling (upsampling) layers [5, 1, 8] these models can be applied to large images. There is a large body of work on assessing the perceptual similarity of images. Some prominent examples are the visible differences predictor [9], the spatio-temporal model for moving picture quality assessment [10], and the perceptual distortion metric of Winkler [11]. The most popular perceptual image similarity metric is the structural similarity metric (SSIM) [12], which compares the local statistics of image patches. We are not aware of any work making use of similarity metrics for machine learning, except a recent pre-print of Ridgeway et al. [13]. They train autoencoders by directly maximizing the SSIM similarity of images. This resembles in spirit what we do, but technically is very different. 
Because of its shallow and local nature, SSIM does not have the invariance properties needed for the tasks we are solving in this paper.

Generative adversarial networks (GANs) have been proposed by Goodfellow et al. [1]. In theory, this training procedure can lead to a generator that perfectly models the data distribution. In practice, training GANs is difficult and often leads to oscillatory behavior, divergence, or modeling only part of the data distribution. Recently, several modifications have been proposed that make GAN training more stable. Denton et al. [14] employ a multi-scale approach, gradually generating higher resolution images. Radford et al. [15] make use of an up-convolutional architecture and batch normalization. GANs can be trained conditionally by feeding the conditioning variable to both the discriminator and the generator [16]. Usually this conditioning variable is a one-hot encoding of the object class in the input image; such GANs learn to generate images of objects from a given class. Recently, Mathieu et al. [17] used GANs for predicting future frames in videos by conditioning on previous frames.

Our approach looks similar to a conditional GAN. However, in a GAN there is no loss directly comparing the generated image to some ground truth. As Fig. 1 shows, the feature loss introduced in the present paper is essential to train on the complicated tasks we are interested in.

Several concurrent works [18–20] share the general idea: to measure the similarity not in the image space but rather in a feature space. These differ from our work both in the details of the method and in the applications. Larsen et al. [18] only run relatively small-scale experiments on images of faces, and they measure the similarity between features extracted from the discriminator, while we study different “comparators” (in fact, we also experimented with features from the discriminator and were not able to get satisfactory results on our applications with those). Lamb et al.
[19] and Johnson et al. [20] use features from different layers, including the lower ones, to measure image similarity, and therefore do not need the adversarial loss. While this approach may be suitable for tasks which allow for nearly perfect solutions (e.g. super-resolution with low magnification), it is not applicable to more complicated problems such as extreme super-resolution or inversion of highly invariant feature representations.

3 Model

Suppose we are given a supervised image generation task and a training set of input-target pairs {yi, xi}, consisting of high-level image representations yi ∈ R^I and images xi ∈ R^(W×H×C). The aim is to learn the parameters θ of a differentiable generator function Gθ(·): R^I → R^(W×H×C) which optimally approximates the input-target dependency according to a loss function L(Gθ(y), x). Typical choices are the squared Euclidean (SE) loss L2(Gθ(y), x) = ∥Gθ(y) − x∥²₂ or the ℓ1 loss L1(Gθ(y), x) = ∥Gθ(y) − x∥₁, but these lead to blurred results in many image generation tasks.

We propose a new class of losses, which we call deep perceptual similarity metrics (DeePSiM). These go beyond simple distances in image space and can capture complex and perceptually important properties of images. These losses are weighted sums of three terms: a feature loss Lfeat, an adversarial loss Ladv, and an image space loss Limg:

L = λfeat Lfeat + λadv Ladv + λimg Limg. (1)

They correspond to the network architecture shown in Fig. 2. The architecture consists of three convolutional networks: the generator Gθ that implements the generator function, the discriminator Dϕ that discriminates generated images from natural images, and the comparator C that computes features used to compare the images.

Loss in feature space. Given a differentiable comparator C: R^(W×H×C) → R^F, we define

Lfeat = Σi ∥C(Gθ(yi)) − C(xi)∥²₂. (2)

C may be fixed or may be trained; for example, it can be a part of the generator or the discriminator.
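Eq. (1) can be assembled in a few lines. The following numpy sketch is our own illustration: the function name and the toy comparator are ours, `disc_prob` stands for the discriminator's probability that each generated image is real, and the default weights are the values reported in our training details.

```python
import numpy as np

def deepsim_loss(gen_out, target, comparator, disc_prob,
                 lam_feat=0.01, lam_adv=100.0, lam_img=2e-6):
    """Weighted sum L = lam_feat*L_feat + lam_adv*L_adv + lam_img*L_img
    for one batch of generated images and targets."""
    l_feat = np.sum((comparator(gen_out) - comparator(target)) ** 2)
    l_adv = -np.sum(np.log(disc_prob))   # generator-side adversarial term
    l_img = np.sum((gen_out - target) ** 2)
    return lam_feat * l_feat + lam_adv * l_adv + lam_img * l_img

# Toy check: a perfect generator whose outputs fully fool the
# discriminator (disc_prob = 1) incurs zero loss.
C = lambda imgs: imgs.mean(axis=(1, 2))  # stand-in comparator
x = np.ones((2, 4, 4))
assert deepsim_loss(x, x, C, disc_prob=np.ones(2)) == 0.0
```

Note that all three terms sum over the batch (and, for the first and third, over many spatial locations), which is why the adversarial weight looks large relative to the others.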
Lfeat alone does not provide a good loss for training. Optimizing just for similarity in a high-level feature space typically leads to high-frequency artifacts [21]. This is because for each natural image there are many non-natural images mapped to the same feature vector2. Therefore, a natural image prior is necessary to constrain the generated images to the manifold of natural images.

Adversarial loss. Instead of manually designing a prior, as in Mahendran and Vedaldi [21], we learn it with an approach similar to Generative Adversarial Networks (GANs) of Goodfellow et al. [1]. Namely, we introduce a discriminator Dϕ which aims to discriminate the generated images from real ones, and which is trained concurrently with the generator Gθ. The generator is trained to “trick” the discriminator network into classifying the generated images as real. Formally, the parameters ϕ of the discriminator are trained by minimizing

Ldiscr = −Σi [log(Dϕ(xi)) + log(1 − Dϕ(Gθ(yi)))], (3)

and the generator is trained to minimize

Ladv = −Σi log Dϕ(Gθ(yi)). (4)

2This is unless the feature representation is specifically designed to map natural and non-natural images far apart, such as the one extracted from the discriminator of a GAN.

Loss in image space. Adversarial training is unstable and sensitive to hyperparameter values. To suppress oscillatory behavior and provide strong gradients during training, we add a small squared error term to our loss function:

Limg = Σi ∥Gθ(yi) − xi∥²₂. (5)

We found that this term makes hyperparameter tuning significantly easier, although it is not strictly necessary for the approach to work.

3.1 Architectures

Generators. All our generators make use of up-convolutional (’deconvolutional’) layers [8]. An up-convolutional layer can be seen as up-sampling and a subsequent convolution. We always up-sample by a factor of 2 with ’bed of nails’ upsampling. A basic generator architecture is shown in Table 1.
In all networks we use leaky ReLU nonlinearities, that is, LReLU(x) = max(x, 0) + α min(x, 0). We used α = 0.3 in our experiments. All generators have linear output layers.

Comparators. We experimented with three comparators:
1. AlexNet [22] is a network with 5 convolutional and 2 fully connected layers trained on image classification. More precisely, in all experiments we used a variant of AlexNet called CaffeNet [23].
2. The network of Wang and Gupta [24] has the same architecture as CaffeNet, but is trained without supervision: it maps frames of one video clip close to each other in the feature space and frames from different videos far apart. We refer to this network as VideoNet.
3. AlexNet with random weights.
We found that using CONV5 features for comparison leads to the best results in most cases; we used these features unless specified otherwise.

Discriminator. In our setup the job of the discriminator is to analyze the local statistics of images. Therefore, after five convolutional layers with occasional stride we perform global average pooling. The result is processed by two fully connected layers, followed by a 2-way softmax. We perform 50% dropout after the global average pooling layer and the first fully connected layer. The exact architecture of the discriminator is shown in the supplementary material.

3.2 Training details

The coefficients for the adversarial and image loss were λadv = 100 and λimg = 2 · 10−6, respectively. The feature loss coefficient λfeat depended on the comparator being used; it was set to 0.01 for the AlexNet CONV5 comparator, which we used in most experiments. Note that a high coefficient in front of the adversarial loss does not mean that this loss dominates the error function; it simply compensates for the fact that both the image and the feature loss include summation over many spatial locations. We modified the caffe [23] framework to train the networks.
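The two generator building blocks described above, 'bed of nails' ×2 upsampling and the leaky ReLU, can be sketched in a few lines of numpy (the function names are ours):

```python
import numpy as np

def leaky_relu(x, alpha=0.3):
    """LReLU(x) = max(x, 0) + alpha * min(x, 0), with alpha = 0.3."""
    return np.maximum(x, 0.0) + alpha * np.minimum(x, 0.0)

def bed_of_nails_upsample(x):
    """Upsample an (H, W) map by a factor of 2: each input value goes
    to the top-left cell of a 2x2 block, the rest are zeros. An
    up-convolution is this upsampling followed by a convolution."""
    h, w = x.shape
    out = np.zeros((2 * h, 2 * w), dtype=x.dtype)
    out[::2, ::2] = x
    return out
```

The subsequent learned convolution is what fills in the zeros, so the pair behaves like a trainable interpolation.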
For optimization we used Adam [25] with momentum β1 = 0.9, β2 = 0.999 and initial learning rate 0.0002. To prevent the discriminator from overfitting during adversarial training, we temporarily stopped updating it if the ratio of Ldiscr and Ladv fell below a certain threshold (0.1 in our experiments). We used batch size 64 in all experiments. The networks were trained for 500,000 to 1,000,000 mini-batch iterations.

4 Experiments

4.1 Inverting AlexNet

As our main application, we trained networks to reconstruct images from their features extracted by AlexNet. This is interesting for a number of reasons. First, and most straightforward, it shows which information is preserved in the representation. Second, reconstruction from artificial networks can be seen as a test ground for reconstruction from real neural networks; applying the proposed method to real brain recordings is a very exciting potential extension of our work. Third, in contrast with the standard scheme “generative pre-training for a discriminative task”, we show that “discriminative pre-training for a generative task” can be fruitful. Lastly, we indirectly show that our loss can be useful for unsupervised learning with generative models.

Table 1: Generator architecture for inverting layer FC6 of AlexNet.
Type    fc    fc    fc    reshape  uconv  conv  uconv  conv  uconv  conv  uconv  uconv  uconv
InSize  −     −     −     1        4      8     8      16    16     32    32     64     128
OutCh   4096  4096  4096  256      256    512   256    256   128    128   64     32     3
Kernel  −     −     −     −        4      3     4      3     4      3     4      4      4
Stride  −     −     −     −        ↑2     1     ↑2     1     ↑2     1     ↑2     ↑2     ↑2

Figure 3: Representative reconstructions from higher layers of AlexNet (columns: image, CONV5, FC6, FC7, FC8). General characteristics of images are preserved very well. In some cases (simple objects, landscapes) reconstructions are nearly perfect even from FC8. In the leftmost column the network generates dog images from FC7 and FC8.

Our version of
Thus, in the context of unsupervised learning, it would not be in conflict with learning such features. We describe how our method relates to existing work on feature inversion. Suppose we are given a feature representation Φ, which we aim to invert, and an image I. There are two inverse mappings: Φ−1 R such that Φ(Φ−1 R (φ)) ≈φ, and Φ−1 L such that Φ−1 L (Φ(I)) ≈I. Recently two approaches to inversion have been proposed, which correspond to these two variants of the inverse. Mahendran and Vedaldi [21] apply gradient-based optimization to find an image eI which minimizes ||Φ(I) −Φ(eI)||2 2 + P(eI), (6) where P is a simple natural image prior, such as the total variation (TV) regularizer. This method produces images which are roughly natural and have features similar to the input features, corresponding to Φ−1 R . However, due to the simplistic prior, reconstructions from fully connected layers of AlexNet do not look much like natural images (Fig. 4 bottom row). Dosovitskiy and Brox [26] train up-convolutional networks on a large training set of natural images to perform the inversion task. They use squared Euclidean distance in the image space as loss function, which leads to approximating Φ−1 L . The networks learn to reconstruct the color and rough positions of objects, but produce over-smoothed results because they average all potential reconstructions (Fig. 4 middle row). Our method combines the best of both worlds, as shown in the top row of Fig. 4. The loss in the feature space helps preserve perceptually important image features. Adversarial training keeps reconstructions realistic. Technical details. The generator in this setup takes the features Φ(I) extracted by AlexNet and generates the image I from them, that is, y = Φ(I). In general we followed Dosovitskiy and Brox [26] in designing the generators. The only modification is that we inserted more convolutional layers, giving the network more capacity. We reconstruct from outputs of layers CONV5 –FC8. 
In each layer we also include the processing steps following the layer, that is, pooling and non-linearities. For example, CONV5 means pooled features (pool5), and FC6 means rectified values (relu6).

Figure 4: AlexNet inversion: comparison with Dosovitskiy and Brox [26] and Mahendran and Vedaldi [21] (rows: our method, D&B, M&V; columns: image, CONV5, FC6, FC7, FC8). Our results are significantly better, even our failure cases (second image).

The generator used for inverting FC6 is shown in Table 1. Architectures for other layers are similar, except that for reconstruction from CONV5, fully connected layers are replaced by convolutional ones. We trained on 227 × 227 pixel crops of images from the ILSVRC-2012 training set and evaluated on the ILSVRC-2012 validation set.

Ablation study. We tested whether all components of the loss are necessary. Results with some of these components removed are shown in Fig. 1. Clearly the full model performs best. Training just with the loss in the image space leads to averaging all potential reconstructions, resulting in over-smoothed images. One might imagine that adversarial training alone makes images sharp. This indeed happens, but the resulting reconstructions do not correspond to the actual objects originally contained in the image: any “natural-looking” image which roughly fits the blurry prediction minimizes this loss. Without the adversarial loss, predictions look very noisy because nothing enforces the natural image prior. Results without the image space loss are similar to the full model (see supplementary material), but training was more sensitive to the choice of hyperparameters.

Inversion results. Representative reconstructions from higher layers of AlexNet are shown in Fig. 3. Reconstructions from CONV5 are nearly perfect, combining natural colors and sharpness of details.
Reconstructions from fully connected layers are still strikingly good, preserving the main features of images: colors and positions of large objects. More results are shown in the supplementary material.

For quantitative evaluation we compute the normalized Euclidean error ∥a − b∥₂/N. The normalization coefficient N is the average Euclidean distance between all pairs of different samples from the test set; an error of 100% therefore means that the algorithm performs no better than randomly drawing a sample from the test set. Errors in image space and in feature space (that is, the distance between the features of the image and of the reconstruction) are shown in Table 2. We report all numbers for our best approach, but only some of them for the variants, because of limited computational resources. The method of Mahendran & Vedaldi performs well in feature space but not in image space; the method of Dosovitskiy & Brox, vice versa. The presented approach is fairly good on both metrics. This is further supported by the iterative image re-encoding results shown in Fig. 5: we compute the features of an image, apply our "inverse" network to those, compute the features of the resulting reconstruction, apply the "inverse" net again, and iterate this procedure. The reconstructions start to change significantly only after 4 to 8 iterations of this process.

Nearest neighbors. Does the network simply memorize the training set? For several validation images we show nearest neighbors (NNs) in the training set, based on distances in different feature spaces (see supplementary material). Two main conclusions are: 1) NNs in feature spaces are much more meaningful than in the image space, and 2) the network does more than just retrieve the NNs.

Interpolation. We can morph images into each other by linearly interpolating between their features and generating the corresponding images. Fig. 7 shows that the objects shown in the images smoothly warp into each other.
This capability comes “for free” with our generator networks, but in fact it is very non-trivial, and to the best of our knowledge it has not been previously demonstrated to this extent on general natural images. More examples are shown in the supplementary material.

Table 2: Normalized inversion error (in %) when reconstructing from different layers of AlexNet with different methods. First number in each pair: error in the image space; second: in the feature space.

                 CONV5   FC6    FC7    FC8
M & V [21]       71/19   80/19  82/16  84/09
D & B [26]       35/−    51/−   56/−   58/−
Our image loss   −/−     46/79  −/−    −/−
AlexNet CONV5    43/37   55/48  61/45  63/29
VideoNet CONV5   −/−     51/57  −/−    −/−

Figure 5: Iteratively re-encoding images with AlexNet and reconstructing, for layers CONV5, FC6, FC7 and FC8; the iteration number (1, 2, 4, 8) is shown on the left.

Figure 6: Reconstructions from FC6 with different comparators (image, AlexNet CONV5, AlexNet FC6, VideoNet CONV5, random-weight CONV5). The number indicates the layer from which features were taken.

Figure 7: Interpolation between images by interpolating between their FC6 features (two image pairs).

Different comparators. The AlexNet network we used above as comparator has been trained on a huge labeled dataset. Is this supervision really necessary to learn a good comparator? We show here results with several alternatives to the CONV5 features of AlexNet: 1) FC6 features of AlexNet, 2) CONV5 of AlexNet with random weights, and 3) CONV5 of the network of Wang and Gupta [24], which we refer to as VideoNet. The results are shown in Fig. 6. While the AlexNet CONV5 comparator provides the best reconstructions, the other networks preserve key image features as well.

Sampling pre-images. Given a feature vector y, it would be interesting to not just generate a single reconstruction, but to sample arbitrarily many images from the distribution p(I|y). A straightforward approach would inject noise into the generator along with the features, so that the network could randomize its outputs.
This does not yield the desired result, even if the discriminator is conditioned on the feature vector y: nothing in the loss function forces the generator to output multiple different reconstructions per feature vector. The underlying problem is that in the training data there is only one image per feature vector, i.e., a single sample per conditioning vector. We did not attack this problem in this paper, but we believe it is an interesting research direction.

4.2 Variational autoencoder

We also show an example application of our loss to generative modeling of images, demonstrating its superiority over the usual image space loss. A standard VAE consists of an encoder Enc and a decoder Dec. The encoder maps an input sample x to a distribution over latent variables z ∼ Enc(x) = q(z|x). Dec maps from this latent space to a distribution over images x̃ ∼ Dec(z) = p(x|z). The loss function is

Σi [−E_{q(z|xi)} log p(xi|z) + DKL(q(z|xi) ∥ p(z))], (7)

where p(z) is a prior distribution over latent variables and DKL is the Kullback-Leibler divergence. The first term in Eq. 7 is a reconstruction error. If we assume that the decoder predicts a Gaussian distribution at each pixel, then it reduces to the squared Euclidean error in the image space. The second term pulls the distribution of latent variables towards the prior.

Figure 8: Samples from VAEs: (a) with the squared Euclidean loss, (b), (c) with the DeePSiM loss with AlexNet CONV5 and VideoNet CONV5 comparators, respectively.

Both q(z|x) and p(z) are commonly
Second, instead of predicting a single latent vector z, we predict two vectors µ and σ and sample z = µ + σ ⊙ ε, where ε is standard Gaussian (zero mean, unit variance) and ⊙ is element-wise multiplication. Third, we add the KL divergence term to the loss:

L_KL = (1/2) ∑_i ( ||µ_i||_2^2 + ||σ_i||_2^2 − ⟨log σ_i^2, 1⟩ ).   (8)

We manually set the weight λ_KL of the KL term in the overall loss (we found λ_KL = 20 to work well). A proper probabilistic derivation in the presence of adversarial training is not straightforward, and we leave it for future research. We trained on 227 × 227 pixel crops of 256 × 256 pixel ILSVRC-2012 images. The encoder architecture is the same as AlexNet up to layer FC6, and the decoder architecture is the same as in Table 1. We initialized the encoder with AlexNet weights when using AlexNet as comparator, and at random when using VideoNet as comparator. We sampled from the model by sampling the latent variables from a standard Gaussian z = ε and generating images from that with the decoder. Samples generated with the usual SE loss, as well as with two different comparators (AlexNet CONV5, VideoNet CONV5), are shown in Fig. 8. While the Euclidean loss leads to very blurry samples, our method yields images with realistic statistics. Global structure is lacking, but we believe this can be solved by combining the approach with a GAN. Interestingly, the samples trained with the VideoNet comparator and random initialization look qualitatively similar to the ones with AlexNet, showing that supervised training may not be necessary to yield a good loss function for generative models.
5 Conclusion
We proposed a class of loss functions applicable to image generation that are based on distances in feature spaces and adversarial training. Applying these to two tasks — feature inversion and random natural image generation — reveals that our loss is clearly superior to the typical loss in image space.
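The reparameterization step and the KL term of Eq. 8 can be sketched in a few lines of NumPy. This is a hypothetical stand-alone sketch, not the authors' implementation; the 8-dimensional latent is invented for the example, and the KL is written exactly as in Eq. 8, i.e., up to an additive constant relative to the textbook Gaussian KL:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, sigma):
    # z = mu + sigma * eps with eps ~ N(0, I), as described in the text
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_term(mu, sigma):
    # Eq. (8): L_KL = 1/2 * sum_i (||mu_i||_2^2 + ||sigma_i||_2^2 - <log sigma_i^2, 1>)
    # (KL between N(mu, diag(sigma^2)) and N(0, I), up to an additive constant)
    return 0.5 * float(np.sum(mu**2 + sigma**2 - np.log(sigma**2)))

mu, sigma = np.zeros(8), np.ones(8)
z = reparameterize(mu, sigma)
print(z.shape, kl_term(mu, sigma))  # (8,) 4.0 -- each dimension contributes 0.5*(0+1-0)
```

Note that a posterior matching the prior (µ = 0, σ = 1) does not give exactly zero here, only because the constant term is dropped; the gradient is unaffected.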
In particular, it allows us to generate perceptually important details even from very low-dimensional image representations. Our experiments suggest that the proposed loss function can become a useful tool for generative modeling.
Acknowledgements
We acknowledge funding by the ERC Starting Grant VideoLearn (279401).
References
[1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In NIPS, 2014.
[2] L. A. Gatys, A. S. Ecker, and M. Bethge. Image style transfer using convolutional neural networks. In CVPR, 2016.
[3] G. E. Hinton and T. J. Sejnowski. Learning and relearning in Boltzmann machines. In Parallel Distributed Processing: Volume 1: Foundations, pages 282–317. MIT Press, Cambridge, 1986.
[4] G. E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural Comput., 18(7):1527–1554, 2006.
[5] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML, pages 609–616, 2009.
[6] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, July 2006.
[7] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[8] A. Dosovitskiy, J. T. Springenberg, and T. Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
[9] S. Daly. The visible differences predictor: An algorithm for the assessment of image fidelity. In Digital Images and Human Vision, pages 179–206. MIT Press, 1993.
[10] C. J. van den Branden Lambrecht and O. Verscheure. Perceptual quality measure using a spatio-temporal model of the human visual system. Electronic Imaging: Science & Technology, 1996.
[11] S. Winkler. A perceptual distortion metric for digital color images. In Proc. SPIE, pages 175–184, 1998.
[12] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli.
Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 2004.
[13] K. Ridgeway, J. Snell, B. Roads, R. S. Zemel, and M. C. Mozer. Learning to generate images with perceptual similarity metrics. arXiv:1511.06409, 2015.
[14] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, pages 1486–1494, 2015.
[15] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[16] M. Mirza and S. Osindero. Conditional generative adversarial nets. arXiv:1411.1784, 2014.
[17] M. Mathieu, C. Couprie, and Y. LeCun. Deep multi-scale video prediction beyond mean square error. In ICLR, 2016.
[18] A. B. L. Larsen, S. K. Sønderby, H. Larochelle, and O. Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, pages 1558–1566, 2016.
[19] A. Lamb, V. Dumoulin, and A. Courville. Discriminative regularization for generative models. arXiv:1602.03220, 2016.
[20] J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, pages 694–711, 2016.
[21] A. Mahendran and A. Vedaldi. Understanding deep image representations by inverting them. In CVPR, 2015.
[22] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
[23] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
[24] X. Wang and A. Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015.
[25] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[26] A. Dosovitskiy and T. Brox. Inverting visual representations with convolutional networks.
In CVPR, 2016.
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
Andreas Veit, Michael Wilber, Serge Belongie
Department of Computer Science & Cornell Tech, Cornell University
{av443, mjw285, sjb344}@cornell.edu
Abstract
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
1 Introduction
Most modern computer vision systems follow a familiar architecture, processing inputs from low-level features up to task-specific high-level features. Recently proposed residual networks [5, 6] challenge this conventional view in three ways. First, they introduce identity skip-connections that bypass residual layers, allowing data to flow from any layer directly to any subsequent layer. This is in stark contrast to the traditional strictly sequential pipeline. Second, skip connections give rise to networks that are two orders of magnitude deeper than previous models, with as many as 1202 layers.
This is contrary to architectures like AlexNet [13] and even biological systems [17] that can capture complex concepts within half a dozen layers.1 Third, in initial experiments, we observe that removing single layers from residual networks at test time does not noticeably affect their performance. This is surprising because removing a layer from a traditional architecture such as VGG [18] leads to a dramatic loss in performance. In this work we investigate the impact of these differences. To address the influence of identity skip-connections, we introduce the unraveled view. This novel representation shows residual networks can be viewed as a collection of many paths instead of a single deep network. Further, the perceived resilience of residual networks raises the question whether the paths are dependent on each other or whether they exhibit a degree of redundancy. To find out, we perform a lesion study. The results show ensemble-like behavior in the sense that removing paths from residual networks by deleting layers or corrupting paths by reordering layers only has a modest and smooth impact on performance. Finally, we investigate the depth of residual networks. Unlike traditional models, paths through residual networks vary in length. The distribution of path lengths follows a binomial distribution, meaning that the majority of paths in a network with 110 layers are only about 55 layers deep. Moreover, we show that most gradient during training comes from paths that are even shorter, i.e., 10-34 layers deep. This reveals a tension. On the one hand, residual network performance improves with adding more and more layers [6]. However, on the other hand, residual networks can be seen as collections of many paths and the only effective paths are relatively shallow.
1 Making the common assumption that a layer in a neural network corresponds to a cortical area.
Our results could provide a first explanation: residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network. Rather, they enable very deep networks by shortening the effective paths. For now, short paths still seem necessary to train very deep networks. In this paper we make the following contributions:
• We introduce the unraveled view, which illustrates that residual networks can be viewed as a collection of many paths, instead of a single ultra-deep network.
• We perform a lesion study to show that these paths do not strongly depend on each other, even though they are trained jointly. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths.
• We investigate the gradient flow through residual networks, revealing that only the short paths contribute gradient during training. Deep paths are not required during training.
2 Related Work
The sequential and hierarchical computer vision pipeline Visual processing has long been understood to follow a hierarchical process from the analysis of simple to complex features. This formalism is based on the discovery of the receptive field [10], which characterizes the visual system as a hierarchical and feedforward system. Neurons in early visual areas have small receptive fields and are sensitive to basic visual features, e.g., edges and bars. Neurons in deeper layers of the hierarchy capture basic shapes, and even deeper neurons respond to full objects. This organization has been widely adopted in the computer vision and machine learning literature, from early neural networks such as the Neocognitron [4] and the traditional hand-crafted feature pipeline of Malik and Perona [15] to convolutional neural networks [13, 14].
The recent strong results of very deep neural networks [18, 20] led to the general perception that it is the depth of neural networks that governs their expressive power and performance. In this work, we show that residual networks do not necessarily follow this tradition. Residual networks [5, 6] are neural networks in which each layer consists of a residual module f_i and a skip connection2 bypassing f_i. Since layers in residual networks can comprise multiple convolutional layers, we refer to them as residual blocks in the remainder of this paper. For clarity of notation, we omit the initial pre-processing and final classification steps. With y_{i−1} as its input, the output of the ith block is recursively defined as

y_i ≡ f_i(y_{i−1}) + y_{i−1},   (1)

where f_i(x) is some sequence of convolutions, batch normalization [11], and Rectified Linear Units (ReLU) as nonlinearities. Figure 1 (a) shows a schematic view of this architecture. In the most recent formulation of residual networks [6], f_i(x) is defined by

f_i(x) ≡ W_i · σ(B(W'_i · σ(B(x)))),   (2)

where W_i and W'_i are weight matrices, · denotes convolution, B(x) is batch normalization and σ(x) ≡ max(x, 0). Other formulations are typically composed of the same operations, but may differ in their order. The idea of branching paths in neural networks is not new. For example, in the regime of convolutional neural networks, models based on inception modules [20] were among the first to arrange layers in blocks with parallel paths rather than a strict sequential order. We choose residual networks for this study because of their simple design principle.
Highway networks Residual networks can be viewed as a special case of highway networks [19]. The output of each layer of a highway network is defined as

y_{i+1} ≡ f_{i+1}(y_i) · t_{i+1}(y_i) + y_i · (1 − t_{i+1}(y_i))   (3)

2 We only consider identity skip connections, but this framework readily generalizes to more complex projection skip connections when downsampling is required.
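The recursion of Eq. (1) and the pre-activation module of Eq. (2) can be sketched with NumPy. This is a simplified illustration, not the paper's actual architecture: fully-connected weight matrices stand in for the convolutions, and the toy batch normalization has no learned scale or shift.

```python
import numpy as np

rng = np.random.default_rng(0)

def bn(x, eps=1e-5):
    # toy batch normalization over the batch axis (no learned scale/shift)
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

def relu(x):
    return np.maximum(x, 0.0)

d = 16
W1 = rng.standard_normal((d, d)) * 0.1  # stands in for the convolution W'_i
W2 = rng.standard_normal((d, d)) * 0.1  # stands in for the convolution W_i

def residual_block(y):
    # Eq. (2): f(x) = W * relu(BN(W' * relu(BN(x)))), with matmuls in place of convs
    f = relu(bn(y)) @ W1.T
    f = relu(bn(f)) @ W2.T
    # Eq. (1): the skip connection adds the block input back
    return y + f

y0 = rng.standard_normal((4, d))
y1 = residual_block(y0)
print(y1.shape)  # (4, 16): same shape as the input, so blocks can be stacked
```

Because the residual branch is added to the identity, the block preserves its input shape, which is what allows the recursion y_i = f_i(y_{i−1}) + y_{i−1} to be iterated to arbitrary depth.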
Figure 1: Residual networks are conventionally shown as (a) a conventional 3-block residual network, which is a natural representation of Equation (1). When we expand this formulation to Equation (6), we obtain (b) an unraveled view of a 3-block residual network. Circular nodes represent additions. From this view, it is apparent that residual networks have O(2^n) implicit paths connecting input and output and that adding a block doubles the number of paths.
This follows the same structure as Equation (1). Highway networks also contain residual modules and skip connections that bypass them. However, the output of each path is attenuated by a gating function t, which has learned parameters and is dependent on its input. Highway networks are equivalent to residual networks when t_i(·) = 0.5, in which case data flows equally through both paths. Given an omnipotent solver, highway networks could learn whether each residual module should affect the data. This introduces more parameters and more complexity.
Investigating neural networks Several investigative studies seek to better understand convolutional neural networks. For example, Zeiler and Fergus [23] visualize convolutional filters to unveil the concepts learned by individual neurons. Further, Szegedy et al. [21] investigate the function learned by neural networks and how small changes in the input called adversarial examples can lead to large changes in the output. Within this stream of research, the closest study to our work is from Yosinski et al. [22], which performs lesion studies on AlexNet. They discover that early layers exhibit little co-adaptation and later layers have more co-adaptation. These papers, along with ours, have the common thread of exploring specific aspects of neural network performance. In our study, we focus our investigation on structural properties of neural networks.
Ensembling Since the early days of neural networks, researchers have used simple ensembling techniques to improve performance. Though boosting has been used in the past [16], one simple approach is to arrange a committee [3] of neural networks in a simple voting scheme, where the final output predictions are averaged. Top performers in several competitions use this technique almost as an afterthought [6, 13, 18]. Generally, one key characteristic of ensembles is their smooth performance with respect to the number of members. In particular, the performance increase from additional ensemble members gets smaller with increasing ensemble size. Even though they are not strict ensembles, we show that residual networks behave similarly. Dropout Hinton et al. [7] show that dropping out individual neurons during training leads to a network that is equivalent to averaging over an ensemble of exponentially many networks. Similar in spirit, stochastic depth [9] trains an ensemble of networks by dropping out entire layers during training. In this work, we show that one does not need a special training strategy such as stochastic depth to drop out layers. Entire layers can be removed from plain residual networks without impacting performance, indicating that they do not strongly depend on each other. 3 The unraveled view of residual networks To better understand residual networks, we introduce a formulation that makes it easier to reason about their recursive nature. Consider a residual network with three building blocks from input y0 to output y3. Equation (1) gives a recursive definition of residual networks. The output of each stage is based on the combination of two subterms. 
We can make the shared structure of the residual network apparent by unrolling the recursion into an exponential number of nested terms, expanding one layer at each substitution step:

y_3 = y_2 + f_3(y_2)   (4)
    = [y_1 + f_2(y_1)] + f_3(y_1 + f_2(y_1))   (5)
    = [y_0 + f_1(y_0) + f_2(y_0 + f_1(y_0))] + f_3(y_0 + f_1(y_0) + f_2(y_0 + f_1(y_0)))   (6)

Figure 2: Deleting a layer in residual networks at test time (a, deleting f_2 from the unraveled view) is equivalent to zeroing half of the paths. In ordinary feed-forward networks (b) such as VGG or AlexNet, deleting individual layers alters the only viable path from input to output.
We illustrate this expression tree graphically in Figure 1 (b). With subscripts in the function modules indicating weight sharing, this graph is equivalent to the original formulation of residual networks. The graph makes clear that data flows along many paths from input to output. Each path is a unique configuration of which residual module to enter and which to skip. Conceivably, each unique path through the network can be indexed by a binary code b ∈ {0, 1}^n where b_i = 1 iff the input flows through residual module f_i and 0 if f_i is skipped. It follows that residual networks have 2^n paths connecting input to output layers. In the classical visual hierarchy, each layer of processing depends only on the output of the previous layer. Residual networks cannot strictly follow this pattern because of their inherent structure. Each module f_i(·) in the residual network is fed data from a mixture of 2^{i−1} different distributions generated from every possible configuration of the previous i − 1 residual modules. Compare this to a strictly sequential network such as VGG or AlexNet, depicted conceptually in Figure 2 (b). In these networks, input always flows from the first layer straight through to the last in a single path.
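The equivalence between the recursive form (Eq. 1) and the unraveled sum over binary codes can be checked numerically. With nonlinear modules the path terms are entangled (each f_i sees a sum of earlier path outputs, as Eq. 6 shows), but if the modules are replaced by linear maps — a simplifying assumption made only for this sketch — the unraveled view becomes an exact sum of one term per code b ∈ {0, 1}^n:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 4
# linear stand-ins for the residual modules f_i (real modules are nonlinear)
A = [rng.standard_normal((d, d)) * 0.3 for _ in range(n)]
y0 = rng.standard_normal(d)

# recursive form (Eq. 1): y_i = f_i(y_{i-1}) + y_{i-1}
y = y0
for Ai in A:
    y = Ai @ y + y

# unraveled view: one term per binary code b, where b_i = 1 iff the path
# enters module f_i; with linear modules each term is just the ordered
# product of the chosen matrices applied to y0
unraveled = np.zeros(d)
for b in itertools.product([0, 1], repeat=n):
    x = y0
    for bi, Ai in zip(b, A):
        if bi:
            x = Ai @ x
    unraveled += x

print(np.allclose(y, unraveled))  # True: the 2^3 = 8 path terms sum to the output
```

Adding a fourth module multiplies the number of codes, and hence paths, by two, which is the doubling noted in the Figure 1 caption.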
Written out, the output of a three-layer feed-forward network is

y^{FF}_3 = f^{FF}_3(f^{FF}_2(f^{FF}_1(y_0)))   (7)

where f^{FF}_i(x) is typically a convolution followed by batch normalization and ReLU. In these networks, each f^{FF}_i is only fed data from a single path configuration, the output of f^{FF}_{i−1}(·). It is worthwhile to note that ordinary feed-forward neural networks can also be “unraveled” using the above thought process at the level of individual neurons rather than layers. This renders the network as a collection of different paths, where each path is a unique configuration of neurons from each layer connecting input to output. Thus, all paths through ordinary neural networks are of the same length. However, paths in residual networks have varying length. Further, each path in a residual network goes through a different subset of layers. Based on these observations, we formulate the following questions and address them in our experiments below. Are the paths in residual networks dependent on each other or do they exhibit a degree of redundancy? If the paths do not strongly depend on each other, do they behave like an ensemble? Do paths of varying lengths impact the network differently?
4 Lesion study
In this section, we use three lesion studies to show that paths in residual networks do not strongly depend on each other and that they behave like an ensemble. All experiments are performed at test
Figure 3: Deleting individual layers from VGG and a residual network on CIFAR-10. VGG performance drops to random chance when any one of its layers is deleted, but deleting individual modules from residual networks has a minimal impact on performance.
Removing downsampling modules has a slightly higher impact.
Figure 4: Results when dropping individual blocks from residual networks trained on ImageNet are similar to CIFAR results. However, downsampling layers tend to have more impact on ImageNet.
time on CIFAR-10 [12]. Experiments on ImageNet [2] show comparable results. We train residual networks with the standard training strategy, dataset augmentation, and learning rate policy [6]. For our CIFAR-10 experiments, we train a 110-layer (54-module) residual network with modules of the “pre-activation” type, which contain batch normalization as the first step. For ImageNet we use 200 layers (66 modules). It is important to note that we did not use any special training strategy to adapt the network. In particular, we did not use any perturbations such as stochastic depth during training.
4.1 Experiment: Deleting individual layers from neural networks at test time
As a motivating experiment, we will show that not all transformations within a residual network are necessary by deleting individual modules from the neural network after it has been fully trained. To do so, we remove the residual module from a single building block, leaving the skip connection (or downsampling projection, if any) untouched. That is, we change y_i = y_{i−1} + f_i(y_{i−1}) to y'_i = y_{i−1}. We can measure the importance of each building block by varying which residual module we remove. To compare to conventional convolutional neural networks, we train a VGG network with 15 layers, setting the number of channels to 128 for all layers to allow the removal of any layer. It is unclear whether any neural network can withstand such a drastic change to the model structure.
We expect them to break because dropping any layer drastically changes the input distribution of all subsequent layers. The results are shown in Figure 3. As expected, deleting any layer in VGG reduces performance to chance levels. Surprisingly, this is not the case for residual networks. Removing downsampling blocks does have a modest impact on performance (peaks in Figure 3 correspond to downsampling building blocks), but no other block removal led to a noticeable change. This result shows that, to some extent, the structure of a residual network can be changed at runtime without affecting performance. Experiments on ImageNet show comparable results, as seen in Figure 4. Why are residual networks resilient to dropping layers but VGG is not? Expressing residual networks in the unraveled view provides a first insight. It shows that residual networks can be seen as a collection of many paths. As illustrated in Figure 2 (a), when a layer is removed, the number of paths is reduced from 2^n to 2^{n−1}, leaving half the number of paths valid. VGG only contains a single usable path from input to output. Thus, when a single layer is removed, the only viable path is corrupted. This result suggests that paths in a residual network do not strongly depend on each other although they are trained jointly.
4.2 Experiment: Deleting many modules from residual networks at test-time
Having shown that paths do not strongly depend on each other, we investigate whether the collection of paths shows ensemble-like behavior. One key characteristic of ensembles is that their performance
Figure 5: (a) Error increases smoothly when randomly deleting several modules from a residual network.
(b) Error also increases smoothly when re-ordering a residual network by shuffling building blocks. The degree of reordering is measured by the Kendall Tau correlation coefficient. These results are similar to what one would expect from ensembles.
depends smoothly on the number of members. If the collection of paths were to behave like an ensemble, we would expect the test-time performance of residual networks to correlate smoothly with the number of valid paths. This is indeed what we observe: deleting increasing numbers of residual modules increases error smoothly (Figure 5 (a)). This implies that residual networks behave like ensembles. When deleting k residual modules from a network originally of length n, the number of valid paths decreases to O(2^{n−k}). For example, the original network started with 54 building blocks, so deleting 10 blocks leaves 2^44 paths. Though the collection is now a factor of roughly 10^−3 of its original size, there are still many valid paths and error remains around 0.2.
4.3 Experiment: Reordering modules in residual networks at test-time
Our previous experiments were only about dropping layers, which has the effect of removing paths from the network. In this experiment, we consider changing the structure of the network by re-ordering the building blocks. This has the effect of removing some paths and inserting new paths that have never been seen by the network during training. In particular, it moves high-level transformations before low-level transformations. To re-order the network, we swap k randomly sampled pairs of building blocks with compatible dimensionality, ignoring modules that perform downsampling. We graph error with respect to the Kendall Tau rank correlation coefficient, which measures the amount of corruption. The results are shown in Figure 5 (b). As corruption increases, the error smoothly increases as well. This result is surprising because it suggests that residual networks can be reconfigured to some extent at runtime.
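The path-count arithmetic in the deletion experiment above is easy to verify directly:

```python
n = 54          # residual modules in the original network
paths = 2 ** n  # every module is either entered or skipped

k = 10
remaining = 2 ** (n - k)
# deleting 10 modules leaves 2^44 valid paths, a fraction 2^-10 (about 1e-3)
# of the original collection
print(remaining, remaining / paths)
```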
5 The importance of short paths in residual networks
Now that we have seen that there are many paths through residual networks and that they do not necessarily depend on each other, we investigate their characteristics.
Distribution of path lengths Not all paths through residual networks are of the same length. For example, there is precisely one path that goes through all modules and n paths that go only through a single module. From this reasoning, the distribution of all possible path lengths through a residual network follows a binomial distribution. Thus, we know that the path lengths are closely centered around the mean of n/2. Figure 6 (a) shows the path length distribution for a residual network with 54 modules; more than 95% of paths go through 19 to 35 modules.
Vanishing gradients in residual networks Generally, data flows along all paths in residual networks. However, not all paths carry the same amount of gradient. In particular, the length of the paths through the network affects the gradient magnitude during backpropagation [1, 8]. To empirically investigate the effect of vanishing gradients on residual networks we perform the following experiment. Starting from a trained network with 54 blocks, we sample individual paths of a certain length and measure the norm of the gradient that arrives at the input. To sample a path of length k, we first feed a batch forward through the whole network.
During the backward pass, we randomly sample k residual blocks. For those k blocks, we only propagate through the residual module; for the remaining n − k blocks, we only propagate through the skip connection. Thus, we only measure gradients that flow through the single path of length k. We sample 1,000 measurements for each length k using random batches from the training set. The results show that the gradient magnitude of a path decreases exponentially with the number of modules it went through in the backward pass, Figure 6 (b).
Figure 6: How much gradient do the paths of different lengths contribute in a residual network? To find out, we first show the distribution of all possible path lengths (a). This follows a binomial distribution. Second, we record how much gradient is induced on the first layer of the network through paths of varying length (b), which appears to decay roughly exponentially with the number of modules the gradient passes through. Finally, we can multiply these two functions (c) to show how much gradient comes from all paths of a certain length. Though there are many paths of medium length, paths longer than ∼20 modules are generally too long to contribute noticeable gradient during training. This suggests that the effective paths in residual networks are relatively shallow.
The effective paths in residual networks are relatively shallow Finally, we can use these results to deduce whether shorter or longer paths contribute most of the gradient during training.
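The three panels of Figure 6 can be mimicked end to end with a toy model. This is only a qualitative sketch: the 8-dimensional random Jacobians with gain below one are invented stand-ins for backpropagation through real residual modules, so the exact numbers do not match the paper, only the shape of the result does.

```python
from math import comb

import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 54
# hypothetical per-module Jacobians with gain < 1 (stand-ins for backprop through f_i)
J = [rng.standard_normal((d, d)) * 0.2 / np.sqrt(d) for _ in range(n)]

def mean_path_grad_norm(k, trials=30):
    # (b) gradient reaching the input through one sampled path of length k:
    # the product of the k chosen modules' Jacobians (skip connections are identity)
    norms = []
    for _ in range(trials):
        g = np.eye(d)
        for i in rng.choice(n, size=k, replace=False):
            g = J[i] @ g
        norms.append(np.linalg.norm(g))
    return float(np.mean(norms))

# (a) the number of paths of length k is the binomial coefficient C(n, k)
counts = np.array([comb(n, k) for k in range(n + 1)], dtype=float)
print(counts[19:36].sum() / counts.sum())  # > 0.95: most paths are 19-35 modules long

# (c) total gradient per length = frequency of that length x its per-path magnitude
grads = np.array([mean_path_grad_norm(k) for k in range(n + 1)])
total = counts * grads
print(int(total.argmax()))  # the peak sits well below the mean path length of n/2 = 27
```

Even though paths of length ~27 are by far the most numerous, the exponential decay of per-path gradient pushes the bulk of the total gradient onto much shorter paths, which is the mechanism behind the "effective paths" claim.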
To find the total gradient magnitude contributed by paths of each length, we multiply the frequency of each path length with the expected gradient magnitude. The result is shown in Figure 6 (c). Surprisingly, almost all of the gradient updates during training come from paths between 5 and 17 modules long. These are the effective paths, even though they constitute only 0.45% of all paths through this network. Moreover, in comparison to the total length of the network, the effective paths are relatively shallow. To validate this result, we retrain a residual network from scratch that only sees the effective paths during training. This ensures that no long path is ever used. If the retrained model is able to perform competitively compared to training the full network, we know that long paths in residual networks are not needed during training. We achieve this by only training a subset of the modules during each mini batch. In particular, we choose the number of modules such that the distribution of paths during training aligns with the distribution of the effective paths in the whole network. For the network with 54 modules, this means we sample exactly 23 modules during each training batch. Then, the path lengths during training are centered around 11.5 modules, well aligned with the effective paths. In our experiment, the network trained only with the effective paths achieves a 5.96% error rate, whereas the full model achieves a 6.10% error rate. There is no statistically significant difference. This demonstrates that indeed only the effective paths are needed. 6 Discussion Removing residual modules mostly removes long paths Deleting a module from a residual network mainly removes the long paths through the network. 
In particular, when deleting d residual modules from a network of length n, the fraction of paths remaining per path length x is given by

fraction of remaining paths of length x = C(n−d, x) / C(n, x),  (8)

where C(·, ·) denotes the binomial coefficient. Figure 7 illustrates the fraction of remaining paths after deleting 1, 10 and 20 modules from a 54 module network. It becomes apparent that the deletion of residual modules mostly affects the long paths. Even after deleting 10 residual modules, many of the effective paths between 5 and 17 modules long are still valid. Since mainly the effective paths are important for performance, this result is in line with the experiment shown in Figure 5 (a). Performance only drops slightly up to the removal of 10 residual modules; however, for the removal of 20 modules, we observe a severe drop in performance.

Figure 7: Fraction of paths remaining after deleting individual layers. Deleting layers mostly affects long paths through the networks.

Figure 8: Impact of stochastic depth on resilience to layer deletion. Training with stochastic depth only improves resilience slightly, indicating that plain residual networks already don’t depend on individual layers. Compare to Fig. 3.

Connection to highway networks In highway networks, t_i(·) multiplexes data flow through the residual and skip connections, and t_i(·) = 0.5 means both paths are used equally. For highway networks in the wild, [19] observe empirically that the gates commonly deviate from t_i(·) = 0.5.
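Equation (8) can be evaluated directly to confirm that deleting modules removes mostly long paths; the module counts below follow the 54-module network discussed here.

```python
from math import comb

def remaining_fraction(n: int, d: int, x: int) -> float:
    """Fraction of length-x paths that survive deleting d of n residual
    modules, i.e. C(n - d, x) / C(n, x) from Eq. (8)."""
    return comb(n - d, x) / comb(n, x)

n = 54
# Deleting 10 modules leaves every zero-length path and a fair share of the
# short effective paths, while paths longer than n - d = 44 vanish entirely
# (math.comb returns 0 when the lower index exceeds the upper one).
print(remaining_fraction(n, 10, 0))   # all length-0 paths remain
print(remaining_fraction(n, 10, 5))   # roughly a third of length-5 paths remain
print(remaining_fraction(n, 10, 45))  # no length-45 path remains
```

The fraction is monotonically decreasing in x, which is exactly the pattern shown in Figure 7.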
In particular, they tend to be biased toward sending data through the skip connection; in other words, the network learns to use short paths. Similar to our results, it reinforces the importance of short paths. Effect of stochastic depth training procedure Recently, an alternative training procedure for residual networks has been proposed, referred to as stochastic depth [9]. In that approach a random subset of the residual modules is selected for each mini-batch during training. The forward and backward pass is only performed on those modules. Stochastic depth does not affect the number of paths in the network because all paths are available at test time. However, it changes the distribution of paths seen during training. In particular, mainly short paths are seen. Further, by selecting a different subset of short paths in each mini-batch, it encourages the paths to produce good results independently. Does this training procedure significantly reduce the dependence between paths? We repeat the experiment of deleting individual modules for a residual network trained using stochastic depth. The result is shown in Figure 8. Training with stochastic depth improves resilience slightly; only the dependence on the downsampling layers seems to be reduced. By now, this is not surprising: we know that plain residual networks already don’t depend on individual layers. 7 Conclusion What is the reason behind residual networks’ increased performance? In the most recent iteration of residual networks, He et al. [6] provide one hypothesis: “We obtain these results via a simple but essential concept—going deeper.” While it is true that they are deeper than previous approaches, we present a complementary explanation. First, our unraveled view reveals that residual networks can be viewed as a collection of many paths, instead of a single ultra deep network. 
Second, we perform lesion studies to show that, although these paths are trained jointly, they do not strongly depend on each other. Moreover, they exhibit ensemble-like behavior in the sense that their performance smoothly correlates with the number of valid paths. Finally, we show that the paths through the network that contribute gradient during training are shorter than expected. In fact, deep paths are not required during training as they do not contribute any gradient. Thus, residual networks do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network. This insight reveals that depth is still an open research question. These promising observations provide a new lens through which to examine neural networks. Acknowledgements We would like to thank Sam Kwak and Theofanis Karaletsos for insightful feedback. We also thank the reviewers of NIPS 2016 for their very constructive and helpful feedback and for suggesting the paper title. This work is partly funded by AOL through the Connected Experiences Laboratory (Author 1), an NSF Graduate Research Fellowship award (NSF DGE-1144153, Author 2), and a Google Focused Research award (Author 3). 8 References [1] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994. [2] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Conference on Computer Vision and Pattern Recognition, 2009. [3] Harris Drucker, Corinna Cortes, Lawrence D. Jackel, Yann LeCun, and Vladimir Vapnik. Boosting and other ensemble methods. Neural Computation, 6(6):1289–1301, 1994. [4] Kunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 1980. 
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. [6] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016. [7] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. [8] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen netzen. Master’s thesis, Institut fur Informatik, Technische Universitat, Munchen, 1991. [9] Gao Huang, Yu Sun, Zhuang Liu, Daniel Sedra, and Kilian Weinberger. Deep networks with stochastic depth. arXiv preprint arXiv:1603.09382, 2016. [10] David H Hubel and Torsten N Wiesel. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology, 160(1):106–154, 1962. [11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning, 2015. [12] Alex Krizhevsky. Learning multiple layers of features from tiny images, 2009. [13] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012. [14] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [15] Jitendra Malik and Pietro Perona. Preattentive texture discrimination with early vision mechanisms. Journal of the Optical Society of America, 1990. [16] Robert E Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990. [17] Thomas Serre, Aude Oliva, and Tomaso Poggio. A feedforward architecture accounts for rapid categorization. 
Proceedings of the National Academy of Sciences, 104(15):6424–6429, 2007. [18] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [19] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015. [20] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Conference on Computer Vision and Pattern Recognition, pages 1–9, 2015. [21] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. [22] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? In Advances in Neural Information Processing Systems, 2014. [23] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Computer Vision–ECCV 2014, pages 818–833. Springer, 2014. 9
Low-Rank Regression with Tensor Responses Guillaume Rabusseau and Hachem Kadri Aix Marseille Univ, CNRS, LIF, Marseille, France {firstname.lastname}@lif.univ-mrs.fr Abstract This paper proposes an efficient algorithm (HOLRR) to handle regression tasks where the outputs have a tensor structure. We formulate the regression problem as the minimization of a least squares criterion under a multilinear rank constraint, a difficult non-convex problem. HOLRR efficiently computes an approximate solution of this problem, with solid theoretical guarantees. A kernel extension is also presented. Experiments on synthetic and real data show that HOLRR computes accurate solutions while being computationally very competitive. 1 Introduction Recently, there has been an increasing interest in adapting machine learning and statistical methods to tensors. Data with a natural tensor structure are encountered in many scientific areas including neuroimaging [30], signal processing [4], spatio-temporal analysis [2] and computer vision [16]. Extending multivariate regression methods to tensors is one of the challenging tasks in this area. Most existing works extend linear models to the multilinear setting and focus on the tensor structure of the input data (e.g. [24]). Little has been done, however, to investigate learning methods for tensor-structured output data. We consider a multilinear regression task where outputs are tensors; such a setting can occur in the context of e.g. spatio-temporal forecasting or image reconstruction. In order to leverage the tensor structure of the output data, we formulate the problem as the minimization of a least squares criterion subject to a multilinear rank constraint on the regression tensor. The rank constraint enforces the model to capture low-rank structure in the outputs and to explain dependencies between inputs and outputs in a low-dimensional multilinear subspace. Unlike previous work (e.g.
[22, 24, 27]) we do not rely on a convex relaxation of this difficult non-convex optimization problem. Instead we show that it is equivalent to a multilinear subspace identification problem for which we design a fast and efficient approximation algorithm (HOLRR), along with a kernelized version which extends our approach to the nonlinear setting (Section 3). Our theoretical analysis shows that HOLRR provides good approximation guarantees. Furthermore, we derive a generalization bound for the class of tensor-valued regression functions with bounded multilinear rank (Section 3.3). Experiments on synthetic and real data are presented to validate our theoretical findings and show that HOLRR computes accurate solutions while being computationally very competitive (Section 4). Proofs of all results stated in the paper can be found in supplementary material A. Related work. The problem we consider is a generalization of the reduced-rank regression problem (Section 2.2) to tensor-structured responses. Reduced-rank regression has its roots in statistics [10] but it has also been investigated by the neural network community [3]; non-parametric extensions of this method have been proposed in [18] and [6]. In the context of multi-task learning, a linear model using a tensor-rank penalization of a least squares criterion has been proposed in [22] to take into account the multi-modal interactions between tasks.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

They propose an approach relying on a convex relaxation of the multilinear rank constraint using the trace norms of the matricizations, and a non-convex approach based on alternating minimization. Nonparametric low-rank estimation strategies in reproducing kernel Hilbert spaces (RKHS) based on a multilinear spectral regularization have been proposed in [23, 24].
Their method is based on estimating the regression function in the tensor product of RKHSs and is naturally adapted for tensor covariates. A greedy algorithm to solve a low-rank tensor learning problem has been proposed in [2] in the context of multivariate spatio-temporal data analysis. The linear model they assume is different from the one we propose and is specifically designed for spatio-temporal data. A higher-order extension of partial least squares (HOPLS) has been proposed in [28] along with a kernel extension in [29]. While HOPLS has the advantage of taking the tensor structure of the input into account, the questions of approximation and generalization guarantees were not addressed in [28]. The generalization bound we provide is inspired from works on matrix and tensor completion [25, 19]. 2 Preliminaries We begin by introducing some notations. For any integer k we use [k] to denote the set of integers from 1 to k. We use lower case bold letters for vectors (e.g. v ∈Rd1), upper case bold letters for matrices (e.g. M ∈Rd1×d2) and bold calligraphic letters for higher order tensors (e.g. T ∈Rd1×d2×d3). The identity matrix will be written as I. The ith row (resp. column) of a matrix M will be denoted by Mi,: (resp. M:,i). This notation is extended to slices of a tensor in the straightforward way. If v ∈Rd1 and v′ ∈Rd2, we use v ⊗v′ ∈Rd1·d2 to denote the Kronecker product between vectors, and its straightforward extension to matrices and tensors. Given a matrix M ∈Rd1×d2, we use vec(M) ∈Rd1·d2 to denote the column vector obtained by concatenating the columns of M. 2.1 Tensors and Tucker Decomposition We first recall basic definitions of tensor algebra; more details can be found in [13]. A tensor T ∈Rd1×···×dp can simply be seen as a multidimensional array (T i1,··· ,ip : in ∈[dn], n ∈[p]). The mode-n fibers of T are the vectors obtained by fixing all indices except the nth one, e.g. T :,i2,··· ,ip ∈Rd1. 
The nth mode matricization of T is the matrix having the mode-n fibers of T for columns and is denoted by T(n) ∈Rdn×d1···dn−1dn+1···dp. The vectorization of a tensor is defined by vec(T ) = vec(T(1)). The inner product between two tensors S and T (of the same size) is defined by ⟨S, T ⟩= ⟨vec(S), vec(T )⟩and the Frobenius norm is defined by ∥T ∥2_F = ⟨T , T ⟩. In the following T always denotes a tensor of size d1 × · · · × dp. The mode-n matrix product of the tensor T and a matrix X ∈Rm×dn is a tensor denoted by T ×n X. It is of size d1 × · · · × dn−1 × m × dn+1 × · · · × dp and is defined by the relation Y = T ×n X ⇔ Y(n) = XT(n). The mode-n vector product of the tensor T and a vector v ∈Rdn is a tensor defined by T •n v = T ×n v⊤ ∈Rd1×···×dn−1×dn+1×···×dp. The mode-n rank of T is the dimension of the space spanned by its mode-n fibers, that is rankn(T ) = rank(T(n)). The multilinear rank of T , denoted by rank(T ), is the tuple of mode-n ranks of T : rank(T ) = (R1, · · · , Rp) where Rn = rankn(T ) for n ∈[p]. We will write rank(T ) ≤(S1, · · · , Sp) whenever rank1(T ) ≤S1, rank2(T ) ≤S2, · · · , rankp(T ) ≤Sp. The Tucker decomposition decomposes a tensor T into a core tensor G transformed by an orthogonal matrix along each mode: (i) T = G ×1 U1 ×2 U2 ×3 · · · ×p Up, where G ∈RR1×R2×···×Rp, Ui ∈Rdi×Ri and U⊤_i Ui = I for all i ∈[p]. The number of parameters involved in a Tucker decomposition can be considerably smaller than d1d2 · · · dp. We have the following identities when matricizing and vectorizing a Tucker decomposition: T(n) = UnG(n)(Up ⊗· · ·⊗Un+1 ⊗Un−1 ⊗· · ·⊗U1)⊤ and vec(T ) = (Up ⊗Up−1 ⊗· · ·⊗U1)vec(G). It is well known that T admits the Tucker decomposition (i) iff rank(T ) ≤(R1, · · · , Rp) (see e.g. [13]). Finding an exact Tucker decomposition can be done using the higher-order SVD algorithm (HOSVD) introduced by [5].
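The mode-n matricization and mode-n product defined above are easy to implement. The sketch below uses a fixed unfolding convention (mode-n index first, remaining modes in order), which may differ from the paper's column ordering but satisfies the same defining relation Y = T ×n X ⇔ Y(n) = X T(n) within that convention.

```python
import numpy as np

def unfold(T, n):
    """Mode-n matricization: the mode-n fibers of T become the columns."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: rebuild a tensor of the given shape from M."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_n_product(T, X, n):
    """Mode-n matrix product T x_n X, defined by Y_(n) = X T_(n)."""
    shape = list(T.shape)
    shape[n] = X.shape[0]
    return fold(X @ unfold(T, n), n, shape)

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 4, 5))
X = rng.normal(size=(6, 4))
Y = mode_n_product(T, X, 1)   # shape (3, 6, 5)
```

A quick sanity check: `unfold(Y, 1)` equals `X @ unfold(T, 1)`, and the same product can be written as `np.einsum('abc,jb->ajc', T, X)`.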
Although finding the best approximation with multilinear rank (R1, · · · , Rp) of a tensor T is a difficult problem, the truncated HOSVD algorithm provides good approximation guarantees and often performs well in practice. 2.2 Low-Rank Regression Multivariate regression is the task of recovering a function f : Rd →Rp from a set of input-output pairs {(x(n), y(n))}_{n=1}^N sampled from the model with an additive noise y = f(x) + ε, where ε is the error term. To solve this problem, the ordinary least squares (OLS) approach assumes a linear dependence between input and output data and boils down to finding a matrix W ∈Rd×p that minimizes the squared error ∥XW −Y∥2_F, where X ∈RN×d and Y ∈RN×p denote the input and the output matrices. To prevent overfitting and to avoid numerical instabilities a ridge regularization term (i.e. γ∥W∥2_F) is often added to the objective function, leading to the regularized least squares (RLS) method. It is easy to see that the OLS/RLS approach in the multivariate setting is equivalent to performing p linear regressions for each scalar output {yj}_{j=1}^p independently. Thus it performs poorly when the outputs are correlated and the true dimension of the response is less than p. Low-rank regression (or reduced-rank regression) addresses this issue by solving the rank-penalized problem min_{W∈Rd×p} ∥XW −Y∥2_F + γ∥W∥2_F s.t. rank(W) ≤R for a given integer R. The rank constraint was first proposed in [1], whereas the term reduced-rank regression was introduced in [10]. Adding a ridge regularization was proposed in [18]. In the rest of the paper we will refer to this approach as low-rank regression (LRR). For more description and discussion of reduced-rank regression, we refer the reader to the books [21] and [11]. 3 Low-Rank Regression for Tensor-Valued Functions 3.1 Problem Formulation We consider a multivariate regression task where the input is a vector and the response has a tensor structure.
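The reduced-rank regression problem of Section 2.2 admits a classical closed-form construction: fit the ridge estimator, then project the fitted responses onto their top-R principal directions. The sketch below assumes that construction (a standard result under an identity error covariance); it is a baseline, not the HOLRR algorithm developed later.

```python
import numpy as np

def low_rank_regression(X, Y, R, gamma=0.0):
    """Reduced-rank regression sketch: ridge fit projected onto the top-R
    singular subspace of the fitted outputs, so that rank(W) <= R."""
    d = X.shape[1]
    W_ridge = np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ Y)
    _, _, Vt = np.linalg.svd(X @ W_ridge, full_matrices=False)
    V = Vt[:R].T                 # top-R output directions
    return W_ridge @ V @ V.T     # rank-R regression matrix

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 8))
W_true = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 6))   # rank-2 ground truth
Y = X @ W_true + 0.01 * rng.normal(size=(100, 6))
W = low_rank_regression(X, Y, R=2)
```

Because the ground truth has rank 2, the rank-2 fit recovers it almost exactly, whereas an unconstrained least squares fit would return a (generically) full-rank W that also fits the noise.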
Let f : Rd0 →Rd1×d2×···×dp be the function we want to learn from a sample of input-output data {(x(n), Y(n))}_{n=1}^N drawn from the model Y = f(x) + E, where E is an error term. We assume that f is linear, that is f(x) = W •1 x for some regression tensor W ∈Rd0×d1×···×dp. The vectorization of this relation leads to vec(f(x)) = W⊤_(1) x, showing that this model is equivalent to the standard multivariate linear model. One way to tackle this regression task would be to vectorize each output sample and to perform a standard low-rank regression on the data {(x(n), vec(Y(n)))}_{n=1}^N ⊂Rd0 × Rd1···dp. A major drawback of this approach is that the tensor structure of the output is lost in the vectorization step. The low-rank model tries to capture linear dependencies between components of the output but it ignores higher level dependencies that could be present in a tensor-structured output. For illustration, suppose the output is a matrix encoding the samples of d1 continuous variables at d2 different time steps; one could expect structural relations between the d1 time series, e.g. linear dependencies between the rows of the output matrix. Low-rank regression for tensor responses. To overcome the limitation described above we propose an extension of the low-rank regression method for tensor-structured responses by enforcing low multilinear rank of the regression tensor W. Let {(x(n), Y(n))}_{n=1}^N ⊂ Rd0 × Rd1×d2×···×dp be a training sample of input/output data drawn from the model Y = W •1 x + E, where W is assumed of low multilinear rank. Considering the framework of empirical risk minimization, we want to find a low-rank regression tensor W minimizing the loss on the training data. To avoid numerical instabilities and to prevent overfitting we add a ridge regularization to the objective function, leading to the minimization of Σ_{n=1}^N ℓ(W •1 x(n), Y(n)) + γ∥W∥2_F w.r.t.
the regression tensor W subject to the constraint rank(W) ≤(R0, R1, · · · , Rp) for some given integers R0, R1, · · · , Rp, and where ℓ is a loss function. In this paper, we consider the squared error loss between tensors defined by L(T , T̂ ) = ∥T −T̂ ∥2_F. Using this loss we can rewrite the minimization problem as

min_{W∈Rd0×d1×···×dp} ∥W ×1 X −Y∥2_F + γ∥W∥2_F s.t. rank(W) ≤(R0, R1, · · · , Rp),  (1)

where the input matrix X ∈RN×d0 and the output tensor Y ∈RN×d1×···×dp are defined by X_{n,:} = (x(n))⊤ and Y_{n,:,··· ,:} = Y(n) for n = 1, · · · , N (Y is the tensor obtained by stacking the output tensors along the first mode).

Figure 1: Image reconstruction from noisy measurements: Y = W •1 x + E where W is a color image (RGB). Each image is labeled with the algorithm and the rank parameter.

Low-rank regression function. Let W∗ be a solution of problem (1). It follows from the multilinear rank constraint that W∗= G ×1 U0 ×2 · · · ×p+1 Up for some core tensor G ∈RR0×···×Rp and orthogonal matrices Ui ∈Rdi×Ri for 0 ≤i ≤p. The regression function f∗: x ↦ W∗•1 x can thus be written as f∗: x ↦ G ×1 x⊤U0 ×2 · · · ×p+1 Up. This implies several interesting properties. First, for any x ∈Rd0 we have f∗(x) = T_x ×1 U1 ×2 · · · ×p Up with T_x = G •1 U⊤_0 x, which implies rank(f∗(x)) ≤(R1, · · · , Rp); that is, the image of f∗ is a set of tensors with low multilinear rank. Second, the relation between x and Y = f∗(x) is explained in a low-dimensional subspace of size R0 × R1 × · · · × Rp. Indeed one can decompose the mapping f∗ into the following steps: (i) project x in RR0 as x̄ = U⊤_0 x, (ii) perform a low-dimensional mapping Ȳ = G •1 x̄, (iii) project back into the output space to get Y = Ȳ ×1 U1 ×2 · · · ×p Up. To give an illustrative intuition on the differences between matrix and multilinear rank regularization we present a simple experiment in Figure 1.
We generate data from the model Y = W •1 x + E where the tensor W ∈R3×m×n is a color image of size m × n encoded with three color channels RGB. The components of both x and E are drawn from N(0, 1). This experiment allows us to visualize the tensors returned by RLS, LRR and our method HOLRR that enforces low multilinear rank of the regression function. First, this shows that the function learned by vectorizing the outputs and performing LRR does not enforce any low-rank structure. This is well illustrated in Figure 1, where the regression tensors returned by HOLRR-(3,1,1) are clearly of low rank while the ones returned by LRR-1 are not. This also shows that taking into account the low-rank structure of the model allows one to better eliminate the noise when the true regression tensor is of low rank (Figure 1, left). However, if the ground truth model does not have a low-rank structure, enforcing low multilinear rank leads to underfitting for low values of the rank parameter (Figure 1, right). (An extended version of this experiment is presented in supplementary material B.) 3.2 Higher-Order Low-Rank Regression and its Kernel Extension We now propose an efficient algorithm to tackle problem (1). We first show that the ridge regularization term in (1) can be incorporated in the data fitting term. Let X̃ ∈R(N+d0)×d0 and Ỹ ∈R(N+d0)×d1×···×dp be defined by X̃⊤ = (X⊤ | √γ I) and Ỹ⊤_(1) = (Y⊤_(1) | 0). It is easy to check that the objective function in (1) is equal to ∥W ×1 X̃ −Ỹ∥2_F. Minimization problem (1) is then equivalent to

min_{G∈RR0×R1×···×Rp, Ui∈Rdi×Ri for 0≤i≤p} ∥W ×1 X̃ −Ỹ∥2_F s.t. W = G ×1 U0 ×2 · · · ×p+1 Up, U⊤_i Ui = I for all i.  (2)

We now show that this minimization problem can be reduced to finding p + 1 projection matrices onto subspaces of dimension R0, R1, · · · , Rp. We start by showing that the core tensor G solution of (2) is determined by the factor matrices U0, · · · , Up. Theorem 1.
For given orthogonal matrices U0, · · · , Up the tensor G that minimizes (2) is given by G = Ỹ ×1 (U⊤_0 X̃⊤X̃U0)−1U⊤_0 X̃⊤ ×2 U⊤_1 ×3 · · · ×p+1 U⊤_p. It follows from Theorem 1 that problem (1) can be written as

min_{Ui∈Rdi×Ri, 0≤i≤p} ∥Ỹ ×1 Π0 ×2 · · · ×p+1 Πp −Ỹ∥2_F  (3)

subject to U⊤_i Ui = I for all i, Π0 = X̃U0 (U⊤_0 X̃⊤X̃U0)−1 U⊤_0 X̃⊤, Πi = UiU⊤_i for i ≥1. Note that Π0 is the orthogonal projection onto the space spanned by the columns of X̃U0 and Πi is the orthogonal projection onto the column space of Ui for i ≥1. Hence solving problem (1) is equivalent to finding p + 1 low-dimensional subspaces U0, · · · , Up such that projecting Ỹ onto the spaces X̃U0, U1, · · · , Up along the corresponding modes is close to Ỹ. HOLRR algorithm. Since solving problem (3) for the p + 1 projections simultaneously is a difficult non-convex optimization problem we propose to solve it independently for each projection. This approach has the benefits of both being computationally efficient and providing good theoretical approximation guarantees (see Theorem 2). The following proposition gives the analytic solutions of (3) when each projection is considered independently. Proposition 1. For 0 ≤i ≤p, using the definition of Πi in (3), the optimal solution of min_{Ui∈Rdi×Ri} ∥Ỹ ×i+1 Πi −Ỹ∥2_F s.t. U⊤_i Ui = I is given by the top Ri eigenvectors of (X̃⊤X̃)−1 X̃⊤Ỹ_(1) Ỹ⊤_(1) X̃ if i = 0 and Ỹ_(i+1) Ỹ⊤_(i+1) otherwise. The results from Theorem 1 and Proposition 1 can be rewritten in terms of the original input matrix X and output tensor Y using the identities X̃⊤X̃ = X⊤X + γI, Ỹ ×1 X̃⊤ = Y ×1 X⊤ and Ỹ_(i) Ỹ⊤_(i) = Y_(i)Y⊤_(i) for any i ≥1. The overall Higher-Order Low-Rank Regression procedure (HOLRR) is summarized in Algorithm 1. Note that the Tucker decomposition of the solution returned by HOLRR could be a good initialization point for an Alternating Least Squares method.
However, studying the theoretical and experimental properties of this approach is beyond the scope of this paper and is left for future work. HOLRR Kernel Extension We now design a kernelized version of the HOLRR algorithm by analyzing how it would be instantiated in a feature space. We show that all the steps involved can be performed using the Gram matrix of the input data without having to explicitly compute the feature map. Let φ : Rd0 →RL be a feature map and let Φ ∈RN×L be the matrix with rows φ(x(n))⊤ for n ∈[N]. The higher-order low-rank regression problem in the feature space boils down to the minimization problem

min_{W∈RL×d1×···×dp} ∥W ×1 Φ −Y∥2_F + γ∥W∥2_F s.t. rank(W) ≤(R0, R1, · · · , Rp).  (4)

Following the HOLRR algorithm, one needs to compute the top R0 eigenvectors of the L × L matrix (Φ⊤Φ + γI)−1Φ⊤Y_(1)Y⊤_(1)Φ. The following proposition shows that this can be done using the Gram matrix K = ΦΦ⊤ without explicitly knowing the feature map φ. Proposition 2. If α ∈RN is an eigenvector with eigenvalue λ of the matrix (K + γI)−1Y_(1)Y⊤_(1)K, then v = Φ⊤α ∈RL is an eigenvector with eigenvalue λ of the matrix (Φ⊤Φ + γI)−1Φ⊤Y_(1)Y⊤_(1)Φ. Let A be the top R0 eigenvectors of the matrix (K + γI)−1Y_(1)Y⊤_(1)K. When working with the feature map φ, it follows from the previous proposition that line 1 in Algorithm 1 is equivalent to choosing U0 = Φ⊤A ∈RL×R0, while the updates in line 3 stay the same. The regression tensor W ∈RL×d1×···×dp returned by this algorithm is then equal to W = Y ×1 P ×2 U1U⊤_1 ×3 · · · ×p+1 UpU⊤_p, where P = Φ⊤A (A⊤Φ(Φ⊤Φ + γI)Φ⊤A)−1 A⊤ΦΦ⊤. It is easy to check that P can be rewritten as P = Φ⊤A (A⊤K(K + γI)A)−1 A⊤K. Suppose now that the feature map φ is induced by a kernel k : Rd0 × Rd0 →R. The prediction for an input vector x is then given by W •1 x = C •1 k_x, where the nth component of k_x ∈RN is ⟨φ(x(n)), φ(x)⟩ = k(x(n), x) and the tensor C ∈RN×d1×···×dp is defined by C = G ×1 A ×2 U1 ×3 · · · ×p+1 Up, with G = Y ×1 (A⊤K(K + γI)A)−1 A⊤K ×2 U⊤_1 ×3 · · · ×p+1 U⊤_p. Note that C has multilinear rank (R0, · · · , Rp); hence the low multilinear rank constraint on W in the feature space translates into the low-rank structure of the coefficient tensor C. Let H be the reproducing kernel Hilbert space associated with the kernel k. The overall procedure for kernelized HOLRR is summarized in Algorithm 2. This algorithm returns the tensor C ∈RN×d1×···×dp defining the regression function f : x ↦ C •1 k_x = Σ_{n=1}^N k(x, x(n))C(n), where C(n) = C_{n,:,··· ,:} ∈Rd1×···×dp.

Algorithm 1 HOLRR
Input: X ∈RN×d0, Y ∈RN×d1×···×dp, rank (R0, R1, · · · , Rp) and regularization parameter γ.
1: U0 ← top R0 eigenvectors of (X⊤X + γI)−1X⊤Y_(1)Y⊤_(1)X
2: for i = 1 to p do
3:   Ui ← top Ri eigenvectors of Y_(i+1)Y⊤_(i+1)
4: end for
5: M ← (U⊤_0 (X⊤X + γI)U0)−1 U⊤_0 X⊤
6: G ← Y ×1 M ×2 U⊤_1 ×3 · · · ×p+1 U⊤_p
7: return G ×1 U0 ×2 · · · ×p+1 Up

Algorithm 2 Kernelized HOLRR
Input: Gram matrix K ∈RN×N, Y ∈RN×d1×···×dp, rank (R0, R1, · · · , Rp) and regularization parameter γ.
1: A ← top R0 eigenvectors of (K + γI)−1Y_(1)Y⊤_(1)K
2: for i = 1 to p do
3:   Ui ← top Ri eigenvectors of Y_(i+1)Y⊤_(i+1)
4: end for
5: M ← (A⊤K(K + γI)A)−1 A⊤K
6: G ← Y ×1 M ×2 U⊤_1 ×3 · · · ×p+1 U⊤_p
7: return C = G ×1 A ×2 U1 ×3 · · · ×p+1 Up

3.3 Theoretical Analysis Complexity analysis. HOLRR is a polynomial time algorithm; more precisely, it has a time complexity in O((d0)3 + N((d0)2 + d0d1 · · · dp) + max_{i≥0} Ri(di)2 + Nd1 · · · dp max_{i≥1} di). In comparison, LRR has a time complexity in O((d0)3 + N((d0)2 + d0d1 · · · dp) + (N + R)(d1 · · · dp)2). Since the complexity of HOLRR only has a linear dependence on the product of the output dimensions instead of a quadratic one for LRR, we can conclude that HOLRR will be more efficient than LRR when the output dimensions d1, · · · , dp are large.
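To make Algorithm 1 concrete, here is a minimal NumPy sketch of HOLRR. It is a simplified reading of the pseudocode, not the authors' implementation; the paper's 1-based mode indices become 0-based here.

```python
import numpy as np

def unfold(T, n):
    """Mode-n matricization (0-based): mode-n fibers become columns."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_product(T, U, n):
    """Mode-n matrix product T x_n U."""
    return np.moveaxis(np.tensordot(U, T, axes=(1, n)), 0, n)

def holrr(X, Y, ranks, gamma=0.0):
    """HOLRR sketch. X: (N, d0); Y: (N, d1, ..., dp); ranks: (R0, ..., Rp)."""
    d0 = X.shape[1]
    Y0 = unfold(Y, 0)  # Y_(1) in the paper's notation
    # Line 1: top-R0 eigenvectors of (X^T X + gamma I)^{-1} X^T Y_(1) Y_(1)^T X.
    # This matrix is similar to a symmetric PSD one, so its eigenvalues are real.
    B = np.linalg.solve(X.T @ X + gamma * np.eye(d0), X.T @ Y0 @ Y0.T @ X)
    w, V = np.linalg.eig(B)
    U = [np.real(V[:, np.argsort(-np.real(w))[:ranks[0]]])]
    # Lines 2-4: top-Ri eigenvectors of Y_(i+1) Y_(i+1)^T for each output mode.
    for i in range(1, Y.ndim):
        w, V = np.linalg.eigh(unfold(Y, i) @ unfold(Y, i).T)
        U.append(V[:, np.argsort(-w)[:ranks[i]]])
    # Lines 5-6: core tensor G.
    M = np.linalg.solve(U[0].T @ (X.T @ X + gamma * np.eye(d0)) @ U[0],
                        U[0].T @ X.T)
    G = mode_product(Y, M, 0)
    for i in range(1, Y.ndim):
        G = mode_product(G, U[i].T, i)
    # Line 7: W = G x_1 U0 x_2 ... x_{p+1} Up.
    W = G
    for i, Ui in enumerate(U):
        W = mode_product(W, Ui, i)
    return W
```

By construction the returned W is a Tucker product with an (R0, · · · , Rp) core, so each of its unfoldings has rank at most the corresponding Ri.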
It is worth mentioning that the method proposed in [22] to solve a convex relaxation of problem (2) is an iterative algorithm that needs to compute SVDs of matrices of size di × d1 · · · di−1di+1 · · · dp for each 0 ≤i ≤p at each iteration; it is thus computationally more expensive than HOLRR. Moreover, since HOLRR only relies on simple linear algebra tools, readily available methods could be used to further improve the speed of the algorithm, e.g. randomized SVD [8] and random feature approximation of the kernel function [12, 20]. Approximation guarantees. It is easy to check that problem (1) is NP-hard since it generalizes the problem of fitting a Tucker decomposition [9]. The following theorem shows that HOLRR is a (p + 1)-approximation algorithm for this problem. This result generalizes the approximation guarantees provided by the truncated HOSVD algorithm for the problem of finding the best low multilinear rank approximation of an arbitrary tensor. Theorem 2. Let W∗ be a solution of problem (1) and let W be the regression tensor returned by Algorithm 1. If L : Rd0×···×dp →R denotes the objective function of (1) w.r.t. W, then L(W) ≤(p + 1)L(W∗). Generalization Bound. The following theorem gives an upper bound on the excess risk for the function class F = {x ↦ W •1 x : rank(W) ≤(R0, · · · , Rp)} of tensor-valued regression functions with bounded multilinear rank. Recall that the expected loss of a hypothesis h ∈F w.r.t. the target function f∗ is defined by R(h) = E_x[L(h(x), f∗(x))] and its empirical loss by R̂(h) = (1/N) Σ_{n=1}^N L(h(x(n)), f∗(x(n))). Theorem 3. Let L : Rd1×···×dp →R be a loss function satisfying L(A, B) = (1/(d1 · · · dp)) Σ_{i1,··· ,ip} ℓ(A_{i1,··· ,ip}, B_{i1,··· ,ip}) for some loss function ℓ : R × R →R+ bounded by M.
Then for any δ > 0, with probability at least 1 −δ over the choice of a sample of size N, the following inequality holds for all h ∈F: R(h) ≤ R̂(h) + M √( 2D log(4e(p+2)d0d1 · · · dp / max_{i≥0} di) log(N)/N ) + M √( log(1/δ)/(2N) ), where D = R0R1 · · · Rp + Σ_{i=0}^p Ridi. Proof. (Sketch) The complete proof is given in the supplementary material. It relies on bounding the pseudo-dimension of the class of real-valued functions F̃ = {(x, i1, · · · , ip) ↦ (W •1 x)_{i1,··· ,ip} : rank(W) = (R0, · · · , Rp)}. We show that the pseudo-dimension of F̃ is upper bounded by (R0R1 · · · Rp + Σ_{i=0}^p Ridi) log(4e(p+2)d0d1 · · · dp / max_{i≥0} di). This is done by leveraging the following result originally due to [26]: the number of sign patterns of r polynomials, each of degree at most d, over q variables is at most (4edr/q)^q for all r > q > 2 [25, Theorem 2]. The rest of the proof consists in showing that the risks (resp. empirical risks) of hypotheses in F and F̃ are closely related and invoking standard generalization error bounds in terms of the pseudo-dimension [17, Theorem 10.6]. Note that generalization bounds based on the pseudo-dimension for multivariate regression without a low-rank constraint would involve a term in O(√(d0d1 · · · dp)). In contrast, the bound from the previous theorem only depends on the product of the output dimensions through a term bounded by O(√(log(d1 · · · dp))). In some sense, taking into account the low multilinear rank of the hypothesis allows us to significantly reduce the dependence on the output dimensions from O(√(d0 · · · dp)) to O(√((R0 · · · Rp + Σ_i Ridi)(Σ_i log(di)))). 4 Experiments In this section, we evaluate HOLRR on both synthetic and real-world datasets. Our experimental results are for tensor-structured output regression problems on which we report root mean-squared errors (RMSE) averaged across all the outputs.
We compare HOLRR with the following methods: regularized least squares (RLS); low-rank regression (LRR), described in Section 2.2; a multilinear approach based on tensor trace norm regularization (ADMM) [7, 22]; a non-convex multilinear multitask learning approach (MLMT-NC) [22]; a higher-order extension of partial least squares (HOPLS) [28]; and the greedy tensor approach for multivariate spatio-temporal analysis (Greedy) [2]. For experiments with kernel algorithms we use the readily available kernelized RLS and the LRR kernel extension proposed in [18]. Note that ADMM, MLMT-NC and Greedy consider only a linear dependency between inputs and outputs. The greedy tensor algorithm proposed in [2] was developed specifically for spatio-temporal data, and the implementation provided by the authors is restricted to third-order tensors. Although MLMT-NC is perhaps the closest algorithm to ours, we applied it only to simulated data: MLMT-NC is computationally very expensive and becomes intractable for large data sets. Average running times are reported in supplementary material B.
4.1 Synthetic Data
We generate both linear and nonlinear data. Linear data is drawn from the model Y = W •1 x + E, where W ∈ R^{10×10×10×10} is a tensor of multilinear rank (6, 4, 4, 8) drawn at random, x ∈ R^{10} is drawn from N(0, I), and each component of the error tensor E is drawn from N(0, 0.1). Nonlinear data is drawn from Y = W •1 (x ⊗ x) + E, where W ∈ R^{25×10×10×10} is of rank (5, 6, 4, 2) and x ∈ R^5 and E are generated as above. Hyper-parameters for all algorithms are selected using 3-fold cross-validation on the training data. These experiments were carried out for different sizes of the training data set; 20 trials were executed for each size. The average RMSEs on a test set of size 100 over the 20 trials are reported in Figure 2. We see that the HOLRR algorithm clearly outperforms the other methods on the linear data.
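The synthetic setup above is easy to reproduce. The sketch below is ours, not the paper's code: it draws a random tensor of bounded multilinear rank via a Tucker product (one standard way to do it) and forms Y = W •1 x + E, reading "N(0, 0.1)" as variance 0.1.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_multilinear_rank_tensor(dims, ranks, rng):
    """Random tensor of multilinear rank at most `ranks`, built as a
    Tucker product: core G of shape `ranks`, factors U_i of shape (dims[i], ranks[i])."""
    W = rng.standard_normal(ranks)
    for i, (dim, r) in enumerate(zip(dims, ranks)):
        U = rng.standard_normal((dim, r))
        # mode-i product: contract U's second axis with W's i-th axis
        W = np.moveaxis(np.tensordot(U, W, axes=([1], [i])), 0, i)
    return W

# Linear model Y = W •_1 x + E with W in R^{10x10x10x10} of rank (6, 4, 4, 8)
W = low_multilinear_rank_tensor((10, 10, 10, 10), (6, 4, 4, 8), rng)
x = rng.standard_normal(10)
E = rng.normal(scale=np.sqrt(0.1), size=(10, 10, 10))  # N(0, 0.1): variance convention assumed
Y = np.tensordot(x, W, axes=([0], [0])) + E            # mode-1 contraction W •_1 x
print(Y.shape)  # (10, 10, 10)
```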
The MLMT-NC method achieved the second-best performance; it is, however, much more computationally expensive (see Table 1 in supplementary material B). On the nonlinear data, LRR achieves good performance, but HOLRR is still significantly more accurate, especially with small training data sets.
Figure 2: Average RMSE as a function of the training set size: (left) linear data, (middle) nonlinear data, (right) for different values of the rank parameter.
Table 1: RMSE on the forecasting task.
Data set     ADMM    Greedy  HOPLS   HOLRR   K-HOLRR (poly)  K-HOLRR (rbf)
CCDS         0.8448  0.8325  0.8147  0.8096  0.8275          0.7913
Foursquare   0.1407  0.1223  0.1224  0.1227  0.1223          0.1226
Meteo-UK     0.6140  −       0.625   0.5971  0.6107          0.5886
To see how sensitive HOLRR is w.r.t. the choice of the multilinear rank, we carried out a similar experiment comparing HOLRR performance for different values of the rank parameter; see Fig. 2 (right). In this experiment, the rank of the tensor W used to generate the data is (2, 2, 2, 2), while the input and output dimensions and the noise level are the same as above.
4.2 Real Data
We evaluate our algorithm on a forecasting task on the following real-world data sets:
CCDS: the comprehensive climate data set is a collection of climate records of North America from [15]. It contains monthly observations of 17 variables, such as carbon dioxide and temperature, spanning 1990 to 2001 across 125 observation locations.
Foursquare: the Foursquare data set [14] contains users' check-in records in the Pittsburgh area, categorized by venue type, such as Art and University. It records the number of check-ins by 121 users in each of 15 categories of venues over 1200 time intervals.
Meteo-UK: this data set is collected from the meteorological office of the UK². It contains monthly measurements of 5 variables at 16 stations across the UK from 1960 to 2000.
The forecasting task consists in predicting all variables at times t + 1, . . . , t + k from their values at times t − 2, t − 1 and t.
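For concreteness, training pairs for such a forecasting task could be assembled as below. The (time × stations × variables) array layout, the flattening of the past window, and the helper name are our assumptions; the paper does not fix a data layout.

```python
import numpy as np

def make_forecast_pairs(series, k):
    """series: (T, n_stations, n_vars) array.
    Inputs X hold the values at t-2, t-1, t (flattened);
    output tensors Y have shape (n_stations, n_vars, k) with the values at t+1..t+k."""
    T, s, v = series.shape
    X, Y = [], []
    for t in range(2, T - k):
        X.append(series[t - 2:t + 1].ravel())             # past window, flattened
        Y.append(np.moveaxis(series[t + 1:t + 1 + k], 0, -1))  # (s, v, k)
    return np.stack(X), np.stack(Y)

series = np.random.default_rng(1).standard_normal((40, 16, 5))
X, Y = make_forecast_pairs(series, k=5)
print(X.shape, Y.shape)  # (33, 240) (33, 16, 5, 5)
```

With k = 5 and 16 stations × 5 variables this yields output tensors of size 16 × 5 × 5, matching the Meteo-UK setting described below.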
The first two real data sets were used in [2] with k = 1 (i.e. the outputs are matrices); we consider the same setting for these two data sets here. For the third data set we consider higher-order output tensors by setting k = 5. The output tensors are thus of size 17 × 125, 15 × 121 and 16 × 5 × 5, respectively, for the three data sets. For all the experiments, we use 90% of the available data for training and 10% for testing. All hyper-parameters are chosen by cross-validation. The average test RMSEs over 10 runs are reported in Table 1 (running times are reported in Table 1 of supplementary material B). We see that HOLRR and K-HOLRR outperform the other methods on the CCDS data set, with the kernelized version being orders of magnitude faster (0.61s vs. 75.47s for Greedy and 235.73s for ADMM on average). On the Foursquare data set HOLRR performs as well as Greedy, and on the Meteo-UK data set K-HOLRR with the RBF kernel obtains the best results while being much faster than ADMM (1.66s vs. 40.23s on average).
5 Conclusion
We proposed a low-rank multilinear regression model for tensor-structured output data. We developed a fast and efficient algorithm to tackle the multilinear rank penalized minimization problem and provided theoretical guarantees. Experimental results showed that capturing the low-rank structure of the output data can help improve tensor regression performance.
²http://www.metoffice.gov.uk/public/weather/climate-historic/
Acknowledgments
We thank François Denis and the reviewers for their helpful comments and suggestions. This work was partially supported by the ANR JCJC program MAD (ANR-14-CE27-0002).
References
[1] T. W. Anderson. Estimating linear restrictions on regression coefficients for multivariate normal distributions. Annals of Mathematical Statistics, 22:327–351, 1951.
[2] M. T. Bahadori, Q. R. Yu, and Y. Liu. Fast multivariate spatio-temporal analysis via low rank tensor learning. In NIPS, 2014.
[3] P. Baldi and K. Hornik.
Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2(1):53–58, 1989.
[4] A. Cichocki, R. Zdunek, A. H. Phan, and S. I. Amari. Nonnegative Matrix and Tensor Factorizations. Wiley, 2009.
[5] L. De Lathauwer, B. De Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM Journal on Matrix Analysis and Applications, 21(4):1253–1278, 2000.
[6] R. Foygel, M. Horrell, M. Drton, and J. D. Lafferty. Nonparametric reduced rank regression. In NIPS, 2012.
[7] S. Gandy, B. Recht, and I. Yamada. Tensor completion and low-n-rank tensor recovery via convex optimization. Inverse Problems, 27(2):025010, 2011.
[8] N. Halko, P. Martinsson, and J. A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[9] C. J. Hillar and L. Lim. Most tensor problems are NP-hard. JACM, 60(6):45, 2013.
[10] A. J. Izenman. Reduced-rank regression for the multivariate linear model. Journal of Multivariate Analysis, 5(2):248–264, 1975.
[11] A. J. Izenman. Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning. Springer-Verlag, New York, 2008.
[12] P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, 2012.
[13] T. G. Kolda and B. W. Bader. Tensor decompositions and applications. SIAM Review, 51(3):455–500, 2009.
[14] X. Long, L. Jin, and J. Joshi. Exploring trajectory-driven local geographic topics in Foursquare. In UbiComp, 2012.
[15] A. C. Lozano, H. Li, A. Niculescu-Mizil, Y. Liu, C. Perlich, J. Hosking, and N. Abe. Spatial-temporal causal modeling for climate change attribution. In KDD, 2009.
[16] H. Lu, K. N. Plataniotis, and A. Venetsanopoulos. Multilinear Subspace Learning: Dimensionality Reduction of Multidimensional Data. CRC Press, 2013.
[17] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
[18] A. Mukherjee and J. Zhu.
Reduced rank ridge regression and its kernel extensions. Statistical Analysis and Data Mining, 4(6):612–622, 2011.
[19] M. Nickel and V. Tresp. An analysis of tensor models for learning on structured data. In Machine Learning and Knowledge Discovery in Databases, pages 272–287. Springer, 2013.
[20] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[21] G. C. Reinsel and R. P. Velu. Multivariate Reduced-Rank Regression: Theory and Applications. Lecture Notes in Statistics. Springer, 1998.
[22] B. Romera-Paredes, M. H. Aung, N. Bianchi-Berthouze, and M. Pontil. Multilinear multitask learning. In ICML, 2013.
[23] M. Signoretto, L. De Lathauwer, and J. K. Suykens. Learning tensors in reproducing kernel Hilbert spaces with multilinear spectral penalties. arXiv preprint arXiv:1310.4977, 2013.
[24] M. Signoretto, Q. T. Dinh, L. De Lathauwer, and J. K. Suykens. Learning with tensors: a framework based on convex optimization and spectral regularization. Machine Learning, 1–49, 2013.
[25] N. Srebro, N. Alon, and T. S. Jaakkola. Generalization error bounds for collaborative prediction with low-rank matrices. In NIPS, 2004.
[26] H. E. Warren. Lower bounds for approximation by nonlinear manifolds. Transactions of the American Mathematical Society, 133(1):167–178, 1968.
[27] K. Wimalawarne, M. Sugiyama, and R. Tomioka. Multitask learning meets tensor factorization: task imputation via convex optimization. In NIPS, 2014.
[28] Q. Zhao, C. F. Caiafa, D. P. Mandic, Z. C. Chao, Y. Nagasaka, N. Fujii, L. Zhang, and A. Cichocki. Higher-order partial least squares (HOPLS). IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(7):1660–1673, 2012.
[29] Q. Zhao, G. Zhou, T. Adalı, L. Zhang, and A. Cichocki. Kernel-based tensor partial least squares for reconstruction of limb movements. In ICASSP, 2013.
[30] H. Zhou, L. Li, and H. Zhu. Tensor regression with applications in neuroimaging data analysis.
Journal of the American Statistical Association, 108(502):540–552, 2013.
Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent
Chi Jin (UC Berkeley, chijin@cs.berkeley.edu), Sham M. Kakade (University of Washington, sham@cs.washington.edu), Praneeth Netrapalli (Microsoft Research India, praneeth@microsoft.com)
Abstract
Matrix completion, where we wish to recover a low rank matrix by observing a few of its entries, is a widely studied problem in both theory and practice. Most of the provable algorithms for this problem so far have been restricted to the offline setting, where they provide an estimate of the unknown matrix using all observations simultaneously. However, in many applications the online version, where we observe one entry at a time and dynamically update our estimate, is more appealing. While existing algorithms are efficient for the offline setting, they could be highly inefficient for the online setting. In this paper, we propose the first provable, efficient online algorithm for matrix completion. Our algorithm starts from an initial estimate of the matrix and then performs non-convex stochastic gradient descent (SGD). After every observation, it performs a fast update involving only one row of two tall matrices, giving near-linear total runtime. Our algorithm can naturally be used in the offline setting as well, where it gives sample complexity and runtime competitive with state-of-the-art algorithms. Our proofs introduce a general framework for showing that SGD updates tend to stay away from saddle surfaces, which may be of broader interest for other non-convex problems.
1 Introduction
Low rank matrix completion refers to the problem of recovering a low rank matrix by observing the values of only a tiny fraction of its entries. This problem arises in several applications such as video denoising [13], phase retrieval [3] and, most famously, in movie recommendation engines [15].
In the context of recommendation engines, for instance, the matrix we wish to recover would be the user-item rating matrix, where each row corresponds to a user and each column to an item. Each entry of the matrix is the rating given by a user to an item. The low rank assumption on the matrix is inspired by the intuition that the rating of an item by a user depends on only a few hidden factors, which are much fewer than the number of users or items. The goal is to estimate the ratings of all items by users given only partial ratings, which would then be helpful in recommending new items to users. The seminal work of Candès and Recht [4] first identified regularity conditions under which low rank matrix completion can be solved in polynomial time using convex relaxation; without such regularity assumptions, low rank matrix completion can be ill-posed and NP-hard in general [9]. Since then, a number of works have studied various algorithms under different settings for matrix completion: weighted and noisy matrix completion, fast convex solvers, fast iterative non-convex solvers, parallel and distributed algorithms, and so on. Most of this work, however, deals only with the offline setting, where all the observed entries are revealed at once and the recovery procedure does computation using all of these observations simultaneously. However, in several applications [5, 18], we encounter the online setting, where observations are only revealed sequentially and at each step the recovery algorithm is required to maintain an estimate of the low rank matrix based on the observations so far. Consider for instance recommendation engines, where the low rank matrix we are interested in is the user-item rating matrix.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
While we make an observation only when a user rates an item, at any point in time we should have an estimate of the user-item rating matrix based on all prior observations, so as to be able to continuously recommend items to users. Moreover, this estimate should get better as we observe more ratings. Algorithms for offline matrix completion can be used to solve the online version by rerunning the algorithm after every additional observation. However, performing so much computation for every observation seems wasteful and is also impractical. For instance, using alternating minimization, which is among the fastest known algorithms for the offline problem, would mean taking several passes over the entire data for every additional observation. This is simply not feasible in most settings. Another natural approach is to group observations into batches and update only once per batch. This, however, induces a lag between observations and estimates, which is undesirable. To the best of our knowledge, there is no known provable, efficient, online algorithm for matrix completion. On the other hand, in order to deal with the online matrix completion scenario in practical applications, several heuristics (with no convergence guarantees) have been proposed in the literature [2, 19]. Most of these approaches are based on starting with an estimate of the matrix and doing fast updates of this estimate whenever a new observation is presented. One of the update procedures used in this context is stochastic gradient descent (SGD) applied to the following non-convex optimization problem:
min_{U,V} ∥M − UV⊤∥²_F   s.t. U ∈ R^{d1×k}, V ∈ R^{d2×k},   (1)
where M is the unknown matrix of size d1 × d2, k is the rank of M, and UV⊤ is a low rank factorization of M we wish to obtain.
The algorithm starts with some U0 and V0, and given a new observation (M)_{ij}, SGD updates the ith row of U_t and the jth row of V_t respectively by
U^(i)_{t+1} = U^(i)_t − 2ηd1d2 (U_tV_t⊤ − M)_{ij} V^(j)_t,  and  V^(j)_{t+1} = V^(j)_t − 2ηd1d2 (U_tV_t⊤ − M)_{ij} U^(i)_t,   (2)
where η is an appropriately chosen step size and U^(i) denotes the ith row of matrix U. Each update modifies only one row of each factor matrix, and the computation involves only one row of U, one row of V and the newly observed entry (M)_{ij}; it is therefore extremely fast. These fast updates make SGD extremely appealing in practice. Moreover, SGD, in the context of matrix completion, is also useful for parallelization and distributed implementation [23].
1.1 Our Contributions
In this work we present the first provable efficient algorithm for online matrix completion, by showing that SGD (2) with a good initialization converges to a true factorization of M at a geometric rate. Our main contributions are as follows.
• We provide the first provable, efficient, online algorithm for matrix completion. Starting from a good initialization, after each observation the algorithm makes a quick update taking time O(k³), and it requires O(µdkκ⁴ (k + log(∥M∥_F/ϵ)) log d) observations to reach ϵ accuracy, where µ is the incoherence parameter, d = max(d1, d2), k is the rank and κ is the condition number of M.
• Moreover, our result features both sample complexity and total runtime linear in d, and is competitive with even the best existing offline results for matrix completion (it either improves over these results or is incomparable to them, i.e., better in some parameters and worse in others). See Table 1 for the comparison.
• To obtain our results, we introduce a general framework to show that SGD updates tend to stay away from saddle surfaces.
In order to do so, we consider distances from saddle surfaces, show that they behave like sub-martingales under SGD updates, and use martingale convergence techniques to conclude that the iterates stay away from saddle surfaces. While [24] shows that SGD updates stay away from saddle surfaces, the step sizes it can handle are quite small (scaling as 1/poly(d1, d2)), leading to suboptimal computational complexity. Our framework makes it possible to establish the same statement for much larger step sizes, giving us near-optimal runtime. We believe these techniques may be applicable in other non-convex settings as well.
Table 1: Comparison of sample complexity and runtime of our algorithm with existing algorithms in order to obtain Frobenius norm error ϵ. Õ(·) hides log d factors. See Section 1.2 for more discussion.
Algorithm                        Sample complexity              Total runtime                  Online?
Nuclear Norm [22]                Õ(µdk)                         Õ(d³/√ϵ)                       No
Alternating minimization [14]    Õ(µdkκ⁸ log(1/ϵ))              Õ(µdk²κ⁸ log(1/ϵ))             No
Alternating minimization [8]     Õ(µdk²κ² (k + log(1/ϵ)))       Õ(µdk³κ² (k + log(1/ϵ)))       No
Projected gradient descent [12]  Õ(µdk⁵)                        Õ(µdk⁷ log(1/ϵ))               No
SGD [24]                         Õ(µ²dk⁷κ⁶)                     poly(µ, d, k, κ) log(1/ϵ)      Yes
Our result                       Õ(µdkκ⁴ (k + log(1/ϵ)))        Õ(µdk⁴κ⁴ log(1/ϵ))             Yes
1.2 Related Work
In this section we mention some more related work.
Offline matrix completion: There has been a lot of work on designing offline algorithms for matrix completion; we provide a detailed comparison with our algorithm in Table 1. The nuclear norm relaxation algorithm [22] has near-optimal sample complexity for this problem but is computationally expensive. Motivated by the empirical success of non-convex heuristics, a long line of works, [14, 8, 12, 24] and so on, has obtained convergence guarantees for alternating minimization, gradient descent, projected gradient descent, etc. Even the best of these are suboptimal in sample complexity by poly(k, κ) factors.
Our sample complexity is better than that of [14] and is incomparable to those of [8, 12]. To the best of our knowledge, the only provable online algorithm for this problem is that of Sun and Luo [24]. However, the step sizes they suggest are quite small, leading to computational complexity suboptimal by poly(d1, d2) factors. The runtime of our algorithm is linear in d, a poly(d) improvement over theirs.
Other models for online matrix completion: Another variant of online matrix completion studied in the literature is one where observations are made on a column-by-column basis, e.g., [16, 26]. These models can give improved offline performance in terms of space and could potentially work under relaxed regularity conditions. However, they do not tackle the version where only entries (as opposed to columns) are observed.
Non-convex optimization: Over the last few years, there has also been a significant amount of work on designing other efficient algorithms for solving non-convex problems. Examples include eigenvector computation [6, 11], sparse coding [20, 1], etc. For general non-convex optimization, an interesting line of recent work is [7], which proves that gradient descent with noise can also escape saddle points, but it provides only a polynomial rate without explicit dependence. Later, [17, 21] showed that without noise, the set of points from which gradient descent converges to a saddle point has measure zero. However, they do not provide a rate of convergence. Another piece of work related to ours is [10], which proves global convergence, along with rates of convergence, for the special case of computing the matrix square root.
1.3 Outline
The rest of the paper is organized as follows. In Section 2 we formally describe the problem and all relevant parameters. In Section 3 we present our algorithms and results, and some of the key intuition behind them. In Section 4 we give a proof outline for our main results. We conclude in Section 5.
All formal proofs are deferred to the Appendix.
2 Preliminaries
In this section, we introduce our notation and formally define the matrix completion problem and the regularity assumptions that make the problem tractable.
2.1 Notation
We use [d] to denote {1, 2, · · · , d}. We use bold capital letters A, B to denote matrices and bold lowercase letters u, v to denote vectors. A_{ij} is the (i, j)th entry of matrix A. ∥w∥ denotes the ℓ2-norm of vector w, and ∥A∥ / ∥A∥_F / ∥A∥_∞ denote the spectral/Frobenius/infinity norms of matrix A. σ_i(A) denotes the ith largest singular value of A and σ_min(A) the smallest singular value of A. We also let κ(A) = ∥A∥/σ_min(A) denote the condition number of A (i.e., the ratio of the largest to the smallest singular value). Finally, for an orthonormal basis W of a subspace, we use P_W = WW⊤ to denote the projection onto the subspace spanned by W.
2.2 Problem statement and assumptions
Consider a general rank-k matrix M ∈ R^{d1×d2}. Let Ω ⊂ [d1] × [d2] be a subset of coordinates, sampled uniformly and independently from [d1] × [d2]. We denote by P_Ω(M) the projection of M onto the set Ω, so that [P_Ω(M)]_{ij} = M_{ij} if (i, j) ∈ Ω and 0 if (i, j) ∉ Ω. Low rank matrix completion is the task of recovering M by observing only P_Ω(M). This task is ill-posed and NP-hard in general [9]. In order to make it tractable, we make by now standard assumptions about the structure of M.
Definition 2.1. Let W ∈ R^{d×k} be an orthonormal basis of a subspace of R^d of dimension k. The coherence of W is defined to be µ(W) := (d/k) max_{1≤i≤d} ∥P_W e_i∥² = (d/k) max_{1≤i≤d} ∥e_i⊤ W∥².
Assumption 2.2 (µ-incoherence [4, 22]). We assume M is µ-incoherent, i.e., max{µ(X), µ(Y)} ≤ µ, where X ∈ R^{d1×k} and Y ∈ R^{d2×k} are the left and right singular vectors of M.
3 Main Results and Intuition
In this section, we present our main results. We first state the result for the special case where M is a symmetric positive semi-definite (PSD) matrix, where the algorithm and analysis are much simpler.
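Definition 2.1 is simple to evaluate numerically. The following sketch (dimensions and helper names are ours) computes µ for a subspace basis and illustrates the two extremes: a standard-basis-aligned rank-1 matrix is maximally coherent (µ = d), while a flat all-ones rank-1 matrix has µ = 1.

```python
import numpy as np

def coherence(W):
    """mu(W) = (d/k) * max_i ||e_i^T W||^2 for an orthonormal d x k basis W."""
    d, k = W.shape
    return d / k * np.max(np.sum(W ** 2, axis=1))

def incoherence(M, k):
    """max{mu(X), mu(Y)} over the top-k left/right singular subspaces of M."""
    X, _, Yt = np.linalg.svd(M, full_matrices=False)
    return max(coherence(X[:, :k]), coherence(Yt[:k].T))

d = 100
spiky = np.zeros((d, d)); spiky[0, 0] = 1.0   # mass on a single entry
flat = np.ones((d, d))                        # mass spread evenly
print(round(incoherence(spiky, 1), 6))  # 100.0
print(round(incoherence(flat, 1), 6))   # 1.0
```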
We will then discuss the general case.
3.1 Symmetric PSD Case
Consider the special case where M is symmetric PSD, and let d := d1 = d2. Then we can parametrize a rank-k symmetric PSD matrix by UU⊤ with U ∈ R^{d×k}. Our algorithm for this case is given in Algorithm 1. The algorithm starts by using an initial set of samples Ω_init to construct a crude approximation to the low rank factorization of M. It then observes samples from M one at a time and updates its factorization after every observation. The following theorem provides guarantees on the performance of Algorithm 1.
Theorem 3.1. Let M ∈ R^{d×d} be a rank-k, symmetric PSD matrix with µ-incoherence. There exist absolute constants c0 and c such that if |Ω_init| ≥ c0 µdk²κ²(M) log d and the learning rate satisfies η ≤ c / (µdkκ³(M)∥M∥ log d), then with probability at least 1 − 1/d⁸ we have, for all t ≤ d²,¹
∥U_tU_t⊤ − M∥²_F ≤ (1 − (1/2)η·σ_min(M))^t · ((1/10)σ_min(M))².
¹ W.l.o.g. we can always assume t < d²; otherwise we have already observed the entire matrix.
Algorithm 1 Online Algorithm for PSD Matrix Completion
Input: initial set of uniformly random samples Ω_init of a symmetric PSD matrix M ∈ R^{d×d}, learning rate η, number of iterations T
Output: U such that UU⊤ ≈ M
  U0U0⊤ ← top-k SVD of (d²/|Ω_init|) P_{Ω_init}(M)
  for t = 0, · · · , T − 1 do
    Observe M_{ij} where (i, j) ∼ Unif([d] × [d])
    U_{t+1} ← U_t − 2ηd² (U_tU_t⊤ − M)_{ij} (e_i e_j⊤ + e_j e_i⊤) U_t
  end for
  Return U_T
Remarks:
• The algorithm uses an initial set of observations Ω_init to produce a warm start iterate U0, then enters the online stage, where it performs SGD.
• The sample complexity of the warm start phase is O(µdk²κ²(M) log d). The initialization consists of a top-k SVD of a sparse matrix, whose runtime is O(µdk³κ²(M) log d).
• For the online phase (SGD), if we choose η = c / (µdkκ³(M)∥M∥ log d), the number of observations T required for the error ∥U_TU_T⊤ − M∥_F to be smaller than ϵ is O(µdkκ⁴(M) log d · log(σ_min(M)/ϵ)).
• Since each SGD step modifies two rows of U_t, its runtime is O(k), giving a total runtime for the online phase of O(kT).
Our proof approach is essentially to show that the objective function is well-behaved (i.e., smooth and strongly convex) in a local neighborhood of the warm start region, and then to use standard techniques to show that SGD obtains geometric convergence in this setting. The most challenging and novel part of our analysis consists of showing that the iterates do not leave this local neighborhood while performing SGD updates. Refer to Section 4 for more details on the proof outline.
3.2 General Case
Let us now consider the general case, where M ∈ R^{d1×d2} can be factorized as UV⊤ with U ∈ R^{d1×k} and V ∈ R^{d2×k}. In this scenario, we denote d = max{d1, d2}. Recall from the previous section that our analysis of the performance of SGD depends on the smoothness and strong convexity properties of the objective function in a local neighborhood of the iterates. Having U ≠ V introduces additional challenges in this approach, since for any nonsingular k-by-k matrix C, setting U′ := UC⊤ and V′ := VC⁻¹ gives U′V′⊤ = UV⊤. Suppose for instance that C is a very small scalar times the identity, i.e., C = δI for some small δ > 0. In this case, U′ will be large while V′ will be small. This drastically deteriorates the smoothness and strong convexity properties of the objective function in a neighborhood of (U′, V′).
Algorithm 2 Online Algorithm for Matrix Completion (Theoretical)
Input: initial set of uniformly random samples Ω_init of M ∈ R^{d1×d2}, learning rate η, number of iterations T
Output: U, V such that UV⊤ ≈ M
  U0V0⊤ ← top-k SVD of (d1d2/|Ω_init|) P_{Ω_init}(M)
  for t = 0, · · · , T − 1 do
    W_U D W_V⊤ ← SVD(U_tV_t⊤)
    Ũ_t ← W_U D^{1/2},  Ṽ_t ← W_V D^{1/2}
    Observe M_{ij} where (i, j) ∼ Unif([d1] × [d2])
    U_{t+1} ← Ũ_t − 2ηd1d2 (Ũ_tṼ_t⊤ − M)_{ij} e_i e_j⊤ Ṽ_t
    V_{t+1} ← Ṽ_t − 2ηd1d2 (Ũ_tṼ_t⊤ − M)_{ij} e_j e_i⊤ Ũ_t
  end for
  Return U_T, V_T
To preclude such a scenario, we would ideally like to renormalize after each step by setting Ũ_t ← W_U D^{1/2}, Ṽ_t ← W_V D^{1/2}, where W_U D W_V⊤ is the SVD of the matrix U_tV_t⊤. This algorithm is described in Algorithm 2. However, a naive implementation of Algorithm 2, especially the SVD step, would incur O(min{d1, d2}) computation per iteration, resulting in a runtime overhead of O(d) over both the online PSD case (i.e., Algorithm 1) and the near linear time offline algorithms (see Table 1). It turns out that we can take advantage of the fact that each iteration updates only a single row of U_t and a single row of V_t, and perform efficient (but more involved) update steps instead of an SVD of a d1 × d2 matrix. The resulting algorithm is given in Algorithm 3. The key idea is that in order to implement the updates, it suffices to compute SVDs of U_t⊤U_t and V_t⊤V_t, which are k × k matrices. The runtime of each iteration is therefore at most O(k³). The following lemma shows the equivalence between Algorithms 2 and 3.
Algorithm 3 Online Algorithm for Matrix Completion (Practical)
Input: initial set of uniformly random samples Ω_init of M ∈ R^{d1×d2}, learning rate η, number of iterations T
Output: U, V such that UV⊤ ≈ M
  U0V0⊤ ← top-k SVD of (d1d2/|Ω_init|) P_{Ω_init}(M)
  for t = 0, · · · , T − 1 do
    R_U D_U R_U⊤ ← SVD(U_t⊤U_t)
    R_V D_V R_V⊤ ← SVD(V_t⊤V_t)
    Q_U D Q_V⊤ ← SVD(D_U^{1/2} R_U⊤ R_V D_V^{1/2})
    Observe M_{ij} where (i, j) ∼ Unif([d1] × [d2])
    U_{t+1} ← U_t − 2ηd1d2 (U_tV_t⊤ − M)_{ij} e_i e_j⊤ V_t R_V D_V^{−1/2} Q_V Q_U⊤ D_U^{1/2} R_U⊤
    V_{t+1} ← V_t − 2ηd1d2 (U_tV_t⊤ − M)_{ij} e_j e_i⊤ U_t R_U D_U^{−1/2} Q_U Q_V⊤ D_V^{1/2} R_V⊤
  end for
  Return U_T, V_T
Lemma 3.2. Algorithms 2 and 3 are equivalent in the following sense: given the same observations from M and the same other inputs, the outputs U, V of Algorithm 2 and U′, V′ of Algorithm 3 satisfy UV⊤ = U′V′⊤. Since the outputs of both algorithms are the same, we can analyze Algorithm 2 (which is easier to analyze than Algorithm 3) while implementing Algorithm 3 in practice.
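The balancing idea behind Algorithm 3 can be checked numerically. The sketch below is ours: it uses eigendecompositions (eigh) of the symmetric k × k Gram matrices in place of their SVDs, and verifies that the rebalanced factors have an unchanged product and equal Gram matrices.

```python
import numpy as np

def balance(U, V):
    """Return (U', V') with U'V'^T = UV^T and U'^T U' = V'^T V' diagonal,
    using only k x k decompositions, as in Algorithm 3."""
    e_U, R_U = np.linalg.eigh(U.T @ U)          # U^T U = R_U diag(e_U) R_U^T
    e_V, R_V = np.linalg.eigh(V.T @ V)
    mid = np.sqrt(e_U)[:, None] * (R_U.T @ R_V) * np.sqrt(e_V)[None, :]
    Q_U, D, Q_Vt = np.linalg.svd(mid)           # D: singular values of UV^T
    U_new = (U @ R_U / np.sqrt(e_U)) @ Q_U * np.sqrt(D)
    V_new = (V @ R_V / np.sqrt(e_V)) @ Q_Vt.T * np.sqrt(D)
    return U_new, V_new

rng = np.random.default_rng(0)
U = rng.standard_normal((50, 3))
V = rng.standard_normal((40, 3))
Ub, Vb = balance(U, V)
print(np.allclose(Ub @ Vb.T, U @ V.T),
      np.allclose(Ub.T @ Ub, Vb.T @ Vb))        # True True
```

This mirrors the argument behind Lemma 3.2: writing U = P_U D_U^{1/2} R_U⊤ with P_U orthonormal (and similarly for V) shows the full SVD of UV⊤ can be assembled from k × k pieces.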
The following theorem is the main result of our paper and presents guarantees on the performance of Algorithm 2.
Theorem 3.3. Let M ∈ R^{d1×d2} be a rank-k matrix with µ-incoherence and let d := max(d1, d2). There exist absolute constants c0 and c such that if |Ω_init| ≥ c0 µdk²κ²(M) log d and the learning rate satisfies η ≤ c / (µdkκ³(M)∥M∥ log d), then with probability at least 1 − 1/d⁸ we have, for all t ≤ d²,
∥U_tV_t⊤ − M∥²_F ≤ (1 − (1/2)η·σ_min(M))^t · ((1/10)σ_min(M))².
Remarks:
• Just as in the case of PSD matrix completion (Theorem 3.1), Algorithm 2 needs an initial set of observations Ω_init to provide a warm start U0, V0, after which it performs SGD.
• The sample complexity and runtime of the warm start phase are the same as in the symmetric PSD case. The step size η and the number of observations T needed to achieve ϵ error in the online phase (SGD) are also the same as in the symmetric PSD case.
• However, the runtime of each update step in the online phase is O(k³), for a total online-phase runtime of O(k³T).
The proof of this theorem follows a similar line of reasoning to that of Theorem 3.1: we first show that a local neighborhood of the warm start iterate has good smoothness and strong convexity properties, and then use them to show geometric convergence of SGD. Proving that the iterates do not leave this local neighborhood is, however, significantly more challenging here, due to the renormalization steps in the algorithm. Please see Appendix C for the full proof.
4 Proof Sketch
In this section we provide the intuition and a proof sketch for our main results. For simplicity, and to highlight the most essential ideas, we focus mostly on the symmetric PSD case (Theorem 3.1). For the asymmetric case, though the high-level ideas remain valid, considerable additional effort is required to address the renormalization step in Algorithm 2, which makes the proof more involved. First, note that our algorithm for the PSD case consists of an initialization followed by stochastic descent steps.
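Both phases (spectral warm start, then row-wise SGD) are easy to simulate for the PSD case. The sketch below is ours, not the authors' code: the dimensions, |Ω_init|, step size, and the symmetrized warm-start rescaling are illustrative choices, not tuned to the theory's constants.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 60, 2
A = rng.standard_normal((d, k))
M = A @ A.T                                     # ground-truth rank-k PSD matrix

# --- warm start: top-k factorization of the rescaled sampled matrix ---
m = 2000                                        # |Omega_init| (illustrative)
rows, cols = rng.integers(0, d, m), rng.integers(0, d, m)
P = np.zeros((d, d))
P[rows, cols] = M[rows, cols]
P[cols, rows] = M[cols, rows]                   # M is symmetric; fill both halves
scale = d * d / np.count_nonzero(P)             # crude inverse sampling rate
w, Q = np.linalg.eigh(scale * P)
U = Q[:, -k:] * np.sqrt(np.maximum(w[-k:], 0.0))  # top-k eigenpairs

def sgd_step(U, M, i, j, eta):
    """One online update: only rows i and j of U change."""
    d = M.shape[0]
    r = U[i] @ U[j] - M[i, j]                   # residual at the observed entry
    ui, uj = U[i].copy(), U[j].copy()
    U[i] -= 2 * eta * d * d * r * uj
    U[j] -= 2 * eta * d * d * r * ui
    return U

err0 = np.linalg.norm(U @ U.T - M)
eta = 0.25 / (d * d * np.linalg.norm(M, 2))     # conservative step size
for _ in range(50_000):
    i, j = rng.integers(0, d), rng.integers(0, d)
    U = sgd_step(U, M, i, j, eta)
print(err0, np.linalg.norm(U @ U.T - M))        # error after the online phase
```

Note that when i = j the two row updates compose, reproducing the factor 2 coming from (e_i e_j⊤ + e_j e_i⊤) in Algorithm 1.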
The following lemma provides guarantees on the error achieved by the initial iterate U0.
Lemma 4.1. Let M ∈ R^{d×d} be a rank-k PSD matrix with µ-incoherence. There exists a constant c0 such that if |Ω_init| ≥ c0 µdk²κ²(M) log d, then with probability at least 1 − 1/d¹⁰, the top-k SVD of (d²/|Ω_init|) P_{Ω_init}(M) (denoted U0U0⊤) satisfies
∥M − U0U0⊤∥_F ≤ (1/20)σ_min(M)  and  max_j ∥e_j⊤ U0∥² ≤ (10µkκ(M)/d)∥M∥.   (3)
By Lemma 4.1, the initialization algorithm already gives a U0 in the local region described by Eq. (3). Intuitively, the stochastic descent steps should then keep performing local search within this region. To establish linear convergence of ∥U_tU_t⊤ − M∥²_F and obtain the final result, we first establish several important lemmas describing the properties of this local region. Throughout this section we write SVD(M) = XSX⊤, where X ∈ R^{d×k} and S ∈ R^{k×k} is diagonal. We postpone all formal proofs to the Appendix.
Lemma 4.2. For the function f(U) = ∥M − UU⊤∥²_F and any U1, U2 ∈ {U : ∥U∥ ≤ Γ}, we have ∥∇f(U1) − ∇f(U2)∥_F ≤ 16 max{Γ², ∥M∥} · ∥U1 − U2∥_F.
Lemma 4.3. For the function f(U) = ∥M − UU⊤∥²_F and any U ∈ {U : σ_min(X⊤U) ≥ γ}, we have ∥∇f(U)∥²_F ≥ 4γ² f(U).
Lemma 4.2 says that f is smooth as long as the spectral norm of U is not too large. On the other hand, requiring that σ_min(X⊤U) not be too small requires both that σ_min(U⊤U) is not too small and that σ_min(X⊤W) is not too small, where W is the top-k eigenspace of UU⊤. That is, Lemma 4.3 says that f has a property similar to strong convexity in the standard optimization literature, provided U is rank k in a robust sense (σ_k(U) is not too small) and the angle between the top-k eigenspace of UU⊤ and the top-k eigenspace of M is not large.
Lemma 4.4. Within the region D = {U : ∥M − UU⊤∥_F ≤ (1/10)σ_k(M)}, we have ∥U∥ ≤ √(2∥M∥) and σ_min(X⊤U) ≥ √(σ_k(M)/2).
Lemma 4.4 says that inside the region {U : ∥M − UU⊤∥_F ≤ (1/10)σ_k(M)}, the matrix U always has good spectral properties, which provides the preconditions for both Lemma 4.2 and Lemma 4.3: there, f(U) is both smooth and has a property very similar to strong convexity.
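The gradient underlying these lemmas is ∇f(U) = 4(UU⊤ − M)U for symmetric M. A quick numerical check of this formula (by finite differences) and of the inequality in Lemma 4.3, on a random instance of our choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 30, 3
A = rng.standard_normal((d, k))
M = A @ A.T                                   # symmetric rank-k PSD matrix

f = lambda U: np.linalg.norm(M - U @ U.T, 'fro') ** 2
grad = lambda U: 4 * (U @ U.T - M) @ U        # gradient of f for symmetric M

U = rng.standard_normal((d, k))
G = grad(U)

# central finite difference on one coordinate of U
eps = 1e-5
Up, Um = U.copy(), U.copy()
Up[2, 1] += eps
Um[2, 1] -= eps
fd = (f(Up) - f(Um)) / (2 * eps)
print(abs(fd - G[2, 1]) / abs(G[2, 1]))       # tiny relative error

# Lemma 4.3 with gamma = sigma_min(X^T U), X = top-k eigenspace of M
X = np.linalg.svd(M)[0][:, :k]
gamma = np.linalg.svd(X.T @ U, compute_uv=False)[-1]
print(np.linalg.norm(G, 'fro') ** 2 >= 4 * gamma ** 2 * f(U))
```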
With the above three lemmas in hand, we can already see the intuition behind the linear convergence in Theorem 3.1. Denote the stochastic gradient by

SG(U) = 2d² (UU^⊤ − M)_{ij} (e_i e_j^⊤ + e_j e_i^⊤) U,  (4)

where SG(U) is a random matrix depending on the randomly sampled entry (i, j) of M. The stochastic update step in Algorithm 1 can then be rewritten as:

U_{t+1} ← U_t − η SG(U_t).

Let f(U) = ∥M − UU^⊤∥²_F. A short calculation shows that E SG(U) = ∇f(U); that is, SG(U) is unbiased. Combining Lemma 4.4 with Lemmas 4.2 and 4.3, we know that within the region D specified by Lemma 4.4, the function f(U) is 32∥M∥-smooth and satisfies ∥∇f(U)∥²_F ≥ 2σ_min(M) f(U). Suppose, ideally, that U₀, . . . , U_t all remain inside the region D. This directly gives:

E f(U_{t+1}) ≤ E f(U_t) − η E⟨∇f(U_t), SG(U_t)⟩ + 16η²∥M∥ · E∥SG(U_t)∥²_F
 = E f(U_t) − η E∥∇f(U_t)∥²_F + 16η²∥M∥ · E∥SG(U_t)∥²_F
 ≤ (1 − 2η σ_min(M)) E f(U_t) + 16η²∥M∥ · E∥SG(U_t)∥²_F.

One interesting aspect of our main result is that we show linear convergence even in the presence of noise in the gradient. This holds because, for the second-order (η²) term above, we can see roughly from Eq. (4) that ∥SG(U)∥²_F ≤ h(U) · f(U), where h(U) is a factor that depends on U and is always bounded. That is, SG(U) enjoys a self-bounding property: ∥SG(U)∥²_F goes to zero as the objective f(U) goes to zero. Therefore, by choosing the learning rate η appropriately small, the first-order term always dominates the second-order term, which establishes the linear convergence. The only remaining issue is to prove that U₀, . . . , U_t always stay inside the local region D. In reality, we can only prove this statement with high probability, owing to the stochastic nature of the updates. This is also the most challenging part of our proof; it makes our analysis different from standard convex analyses and is uniquely required by the non-convex setting. Our key theorem is presented as follows.

Theorem 4.5. Let f(U) = ∥UU^⊤ − M∥²_F and g_i(U) = ∥e_i^⊤U∥².
Suppose the initial U₀ satisfies:

f(U₀) ≤ (σ_min(M)/20)²,  max_i g_i(U₀) ≤ (10 µ k κ(M)²/d) ∥M∥.

Then there exists an absolute constant c such that, for any learning rate η < c / (µ d k κ³(M) ∥M∥ log d), with probability at least 1 − T/d¹⁰ we have, for all t ≤ T:

f(U_t) ≤ (1 − ½ η σ_min(M))^t (σ_min(M)/10)²,  max_i g_i(U_t) ≤ (20 µ k κ(M)²/d) ∥M∥.  (5)

Note that the function max_i g_i(U) measures the incoherence of the matrix U. Theorem 4.5 guarantees that if the initial U₀ lies in a local region that is incoherent and in which U₀U₀^⊤ is close to M, then with high probability, for all steps t ≤ T, U_t stays in a slightly relaxed local region and f(U_t) converges linearly. It is not hard to show that all saddle points of f(U) satisfy σ_k(U) = 0, and that all local minima are global minima. Since U₀, . . . , U_t automatically stay in the region f(U) ≤ (σ_min(M)/10)² with high probability, U_t also stays away from all saddle points. The claim that U₀, . . . , U_t stay incoherent is essential for controlling the variance and the almost-sure bound on SG(U_t); this is what allows a large step size and a tight convergence rate.

The major challenge in proving Theorem 4.5 is to prove that U_t stays in the local region while, at the same time, achieving good sample complexity and running time (linear in d); the latter requires the learning rate η in Algorithm 1 to be relatively large. Let E_t denote the good event in which U₀, . . . , U_t satisfy Eq. (5). Theorem 4.5 claims that P(E_T) is large. The essential step in the proof is constructing two supermartingales related to f(U_t)1_{E_t} and g_i(U_t)1_{E_t} (where 1_{(·)} denotes the indicator function), and using a Bernstein inequality to show concentration of these supermartingales. The 1_{E_t} term lets us assume that all previous iterates U₀, . . . , U_t have the desired properties of the local region. Finally, Theorem 3.1 follows as an immediate corollary of Theorem 4.5.

5 Conclusion

In this paper, we presented the first provable, efficient online algorithm for matrix completion, based on nonconvex SGD.
In addition to the online setting, our results are also competitive with state-of-the-art results in the offline setting. We obtain our results by introducing a general framework that helps us show how SGD updates self-regulate to stay away from saddle points. We hope our paper and results help generate interest in online matrix completion, and that our techniques and framework prompt tighter analyses for other nonconvex problems.
Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences

Chi Jin, UC Berkeley, chijin@cs.berkeley.edu; Yuchen Zhang, UC Berkeley, yuczhang@berkeley.edu; Sivaraman Balakrishnan, Carnegie Mellon University, siva@stat.cmu.edu; Martin J. Wainwright, UC Berkeley, wainwrig@berkeley.edu; Michael I. Jordan, UC Berkeley, jordan@cs.berkeley.edu

Abstract

We provide two fundamental results on the population (infinite-sample) likelihood function of Gaussian mixture models with M ≥ 3 components. Our first main result shows that the population likelihood function has bad local maxima even in the special case of equally-weighted mixtures of well-separated and spherical Gaussians. We prove that the log-likelihood value of these bad local maxima can be arbitrarily worse than that of any global optimum, thereby resolving an open question of Srebro [2007]. Our second main result shows that the EM algorithm (or a first-order variant of it) with random initialization will converge to bad critical points with probability at least 1 − e^{−Ω(M)}. We further establish that a first-order variant of EM will not converge to strict saddle points almost surely, indicating that the poor performance of the first-order method can be attributed to the existence of bad local maxima rather than bad saddle points. Overall, our results highlight the necessity of careful initialization when using the EM algorithm in practice, even when applied in highly favorable settings.

1 Introduction

Finite mixture models are widely used in a variety of statistical settings: as models for heterogeneous populations, as flexible models for multivariate density estimation, and as models for clustering. Their ability to model data as arising from underlying subpopulations provides essential flexibility in a wide range of applications [Titterington, 1985].
This combinatorial structure also creates challenges for statistical and computational theory, and many problems associated with the estimation of finite mixtures are still open. These problems are often studied in the setting of Gaussian mixture models (GMMs), reflecting the wide use of GMMs in applications, particularly in the multivariate setting, and this setting will also be our focus in the current paper. Early work [Teicher, 1963] studied the identifiability of finite mixture models, and this problem has continued to attract significant interest (see Allman et al. [2009] for a recent overview). More recent theoretical work has focused on issues related to the use of GMMs for the density estimation problem [Genovese and Wasserman, 2000, Ghosal and Van Der Vaart, 2001]. Focusing on rates of convergence for parameter estimation in GMMs, Chen [1995] established the surprising result that when the number of mixture components is unknown, the standard √n-rate for regular parametric models is not achievable. Recent investigations [Ho and Nguyen, 2015] into exact-fitted, under-fitted and over-fitted GMMs have characterized the achievable rates of convergence in these settings.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

From an algorithmic perspective, the dominant practical method for estimating GMMs is the Expectation-Maximization (EM) algorithm [Dempster et al., 1977]. The EM algorithm is an ascent method for maximizing the likelihood, but it is only guaranteed to converge to a stationary point of the likelihood function. As such, there are no general guarantees on the quality of the estimate produced by the EM algorithm for Gaussian mixture models.¹ This has led researchers to explore various alternative algorithms which are computationally efficient, and for which rigorous statistical guarantees can be given.
Broadly, these algorithms are based either on clustering [Arora et al., 2005, Dasgupta and Schulman, 2007, Vempala and Wang, 2002, Chaudhuri and Rao, 2008] or on the method of moments [Belkin and Sinha, 2010, Moitra and Valiant, 2010, Hsu and Kakade, 2013]. Although general guarantees have not yet emerged, there has nonetheless been substantial progress on the theoretical analysis of EM and its variations. Dasgupta and Schulman [2007] analyzed a two-round variant of EM, which involved over-fitting the mixture and then pruning extra centers. They showed that this algorithm can be used to estimate Gaussian mixture components whose means are separated by at least Ω(d^{1/4}). Balakrishnan et al. [2015] studied the local convergence of the EM algorithm for a mixture of two Gaussians with Ω(1)-separation. Their results show that global optima have relatively large regions of attraction, but still require that the EM algorithm be provided with a reasonable initialization in order to ensure convergence to a near globally optimal solution. To date, computationally efficient algorithms for estimating a GMM provide guarantees under the strong assumption that the samples come from a mixture of Gaussians, i.e., that the model is well-specified. In practice, however, we never expect the data to exactly follow the generative model, and it is important to understand the robustness of our algorithms to this assumption. In fact, maximum likelihood has favorable properties in this regard: maximum-likelihood estimates are well known to be robust to perturbations in the Kullback-Leibler metric of the generative model [Donoho and Liu, 1988]. This mathematical result motivates further study of EM and other likelihood-based methods from the computational point of view.
It would be useful to characterize when efficient algorithms can be used to compute a maximum likelihood estimate, or a solution that is nearly as accurate, and which retains the robustness properties of the maximum likelihood estimate. In this paper, we focus our attention on uniformly weighted mixtures of M isotropic Gaussians. For this favorable setting, Srebro [2007] conjectured that any local maximum of the likelihood function is a global maximum in the limit of infinite samples—in other words, that there are no bad local maxima for the population GMM likelihood function. This conjecture, if true, would provide strong theoretical justification for EM, at least for large sample sizes. For suitably small sample sizes, it is known [Améndola et al., 2015] that configurations of the samples can be constructed which lead to the likelihood function having an unbounded number of local maxima. The conjecture of Srebro [2007] avoids this by requiring that the samples come from the specified GMM, as well as by considering the (infinite-sample-size) population setting. In the context of high-dimensional regression, it has been observed that in some cases despite having a non-convex objective function, every local optimum of the objective is within a small, vanishing distance of a global optimum [see, e.g., Loh and Wainwright, 2013, Wang et al., 2014]. In these settings, it is indeed the case that for sufficiently large sample sizes there are no bad local optima. A mixture of two spherical Gaussians: A Gaussian mixture model with a single component is simply a Gaussian, so the conjecture of Srebro [2007] holds trivially in this case. The first interesting case is a Gaussian mixture with two components, for which empirical evidence supports the conjecture that there are no bad local optima. It is possible to visualize the setting when there are only two components and to develop a more detailed understanding of the population likelihood surface. 
Consider for instance a one-dimensional, equally weighted, unit-variance GMM with true centers µ∗_1 = −4 and µ∗_2 = 4, and consider the log-likelihood as a function of the vector µ := (µ1, µ2). Figure 1 shows both the population log-likelihood, µ ↦ L(µ), and the negative 2-norm of its gradient, µ ↦ −∥∇L(µ)∥₂. Observe that the only local maxima are the vectors (−4, 4) and (4, −4), which are both also global maxima. The only remaining critical point is (0, 0), which is a saddle point. Although points of the form (0, R), (R, 0) have small gradient when |R| is large, the gradient is not exactly zero for any finite R. Rigorously resolving the question of existence or non-existence of local maxima for the setting when M = 2 remains an open problem. In the remainder of our paper, we focus our attention on the setting where there are more than two mixture components and attempt to develop a broader understanding of likelihood surfaces for these models, as well as the consequences for algorithms.

[Figure 1: Illustration of the likelihood and gradient maps for a two-component Gaussian mixture. (a) Plot of the population log-likelihood map µ ↦ L(µ). (b) Plot of the negative Euclidean norm of the gradient map µ ↦ −∥∇L(µ)∥₂.]

¹In addition to issues of convergence to non-maximal stationary points, solutions of infinite likelihood exist for GMMs where both the location and scale parameters are estimated. In practice, several methods exist to avoid such solutions. In this paper, we avoid this issue by focusing on GMMs in which the scale parameters are fixed.

Our first contribution is a negative answer to the open question of Srebro [2007]. We construct a GMM which is a uniform mixture of three spherical, unit-variance, well-separated Gaussians whose population log-likelihood function contains local maxima.
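The claims about the two-component example above can be checked numerically. In the sketch below, a large-sample average stands in for the population log-likelihood, which is our own simplification for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Large sample from the equally weighted, unit-variance mixture with centers -4, 4;
# its average log-likelihood approximates the population L.
n = 200_000
signs = rng.integers(0, 2, size=n) * 8.0 - 4.0      # each sample's center: -4 or +4
x = signs + rng.standard_normal(n)

def L(mu1, mu2):
    """Sample log-likelihood at mu = (mu1, mu2); ~ population L for large n."""
    p = 0.5 * np.exp(-0.5 * (x - mu1) ** 2) + 0.5 * np.exp(-0.5 * (x - mu2) ** 2)
    return np.mean(np.log(p)) - 0.5 * np.log(2 * np.pi)

# The two global maxima score identically by symmetry and dominate the saddle (0, 0).
print(L(-4.0, 4.0), L(4.0, -4.0), L(0.0, 0.0))
```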
We further show that the log-likelihood of these local maxima can be arbitrarily worse than that of the global maxima. This result immediately implies that no local search algorithm can exhibit global convergence (meaning convergence to a global optimum from all possible starting points), even on well-separated mixtures of Gaussians. The mere existence of bad local maxima is not a practical concern unless it turns out that natural algorithms are frequently trapped in these bad local maxima. Our second main result shows that the EM algorithm, as well as a variant thereof known as the first-order EM algorithm, with random initialization, converges to a bad critical point with exponentially high probability. In more detail, we consider the following practical scheme for parameter estimation in an M-component Gaussian mixture:

(a) Draw M i.i.d. points µ1, . . . , µM uniformly at random from the sample set.
(b) Run the EM or first-order EM algorithm to estimate the model parameters, using µ1, . . . , µM as the initial centers.

We note that in the limit of infinite samples, the initialization scheme we consider is equivalent to selecting M initial centers i.i.d. from the underlying mixture distribution. We show that for a universal constant c > 0, with probability at least 1 − e^{−cM}, the EM and first-order EM algorithms converge to a suboptimal critical point whose log-likelihood can be arbitrarily worse than that of the global maximum. Conversely, in order to find a solution with satisfactory log-likelihood via this initialization scheme, one needs to repeat the above scheme exponentially many (in M) times, and then select the solution with the highest log-likelihood. This result strongly indicates that repeated random initialization followed by local search (via either EM or its first-order variant) can fail to produce useful estimates under reasonable constraints on computational complexity.
We further prove that under the same random initialization scheme, the first-order EM algorithm with a suitable stepsize does not converge to a strict saddle point with probability one. This fact strongly suggests that the failure of local search methods for the GMM model is due mainly to the existence of bad local optima, and not to the presence of (strict) saddle points. Our proofs introduce new techniques to reason about the structure of the population log-likelihood, and in particular to show the existence of bad local optima. We expect that these general ideas will aid in developing a better understanding of the behavior of algorithms for non-convex optimization. From a practical standpoint, our results strongly suggest that careful initialization is required for local search methods, even in large-sample settings, and even for extremely well-behaved mixture models. The remainder of this paper is organized as follows. In Section 2, we introduce GMMs, the EM algorithm and its first-order variant, and we formally set up the problem we consider. In Section 3, we state our main theoretical results and develop some of their implications. Section A is devoted to the proofs of our results, with some of the more technical aspects deferred to the appendices.

2 Background and Preliminaries

In this section, we formally define the Gaussian mixture model that we study in the paper. We then describe the EM algorithm, the first-order EM algorithm, as well as the form of random initialization that we analyze. Throughout the paper, we use [M] to denote the set {1, 2, . . . , M}, and N(µ, Σ) to denote the d-dimensional Gaussian distribution with mean vector µ and covariance matrix Σ. We use φ(· | µ, Σ) to denote the probability density function of the Gaussian distribution with mean vector µ and covariance matrix Σ:

φ(x | µ, Σ) := (2π)^{−d/2} det(Σ)^{−1/2} exp( −½ (x − µ)^⊤ Σ^{−1} (x − µ) ).
(1)

2.1 Gaussian Mixture Models

A d-dimensional Gaussian mixture model (GMM) with M components can be specified by a collection µ∗ = {µ∗_1, . . . , µ∗_M} of d-dimensional mean vectors, a vector λ∗ = (λ∗_1, . . . , λ∗_M) of non-negative mixture weights that sum to one, and a collection Σ∗ = {Σ∗_1, . . . , Σ∗_M} of covariance matrices. Given these parameters, the density function of a Gaussian mixture model takes the form

p(x | λ∗, µ∗, Σ∗) = ∑_{i=1}^{M} λ∗_i φ(x | µ∗_i, Σ∗_i),

where the Gaussian density function φ was previously defined in equation (1). In this paper, we focus on the idealized situation in which every mixture component is equally weighted, and the covariance of each mixture component is the identity. This leads to a mixture model of the form

p(x | µ∗) := (1/M) ∑_{i=1}^{M} φ(x | µ∗_i, I),  (2)

which we denote by GMM(µ∗). In this case, the only parameters to be estimated are the mean vectors µ∗ = {µ∗_i}_{i=1}^{M} of the M components. The difficulty of estimating a Gaussian mixture distribution depends on the amount of separation between the mean vectors. More precisely, for a given parameter ξ > 0, we say that the GMM(µ∗) model is ξ-separated if

∥µ∗_i − µ∗_j∥₂ ≥ ξ, for all distinct pairs i, j ∈ [M].  (3)

We say that the mixture is well-separated if condition (3) holds for some ξ = Ω(√d). Suppose that we observe an i.i.d. sequence {x_ℓ}_{ℓ=1}^{n} drawn according to the distribution GMM(µ∗), and our goal is to estimate the unknown collection of mean vectors µ∗. The sample-based log-likelihood function L_n is given by

L_n(µ) := (1/n) ∑_{ℓ=1}^{n} log( (1/M) ∑_{i=1}^{M} φ(x_ℓ | µ_i, I) ).  (4a)

As the sample size n tends to infinity, this sample likelihood converges to the population log-likelihood function L given by

L(µ) = E_{µ∗} log( (1/M) ∑_{i=1}^{M} φ(X | µ_i, I) ).  (4b)

Here E_{µ∗} denotes expectation taken over the random vector X drawn according to the model GMM(µ∗).
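Equations (2) and (4a) translate directly into code. A minimal numpy sketch (the function names are ours, not the paper's):

```python
import numpy as np

def gmm_logpdf(x, mus):
    """log p(x | mu) for the equally weighted, identity-covariance model (2)."""
    M, d = mus.shape
    sq = np.sum((x[None, :] - mus) ** 2, axis=1)        # ||x - mu_i||^2 for each i
    log_phi = -0.5 * sq - 0.5 * d * np.log(2 * np.pi)   # log phi(x | mu_i, I)
    return np.logaddexp.reduce(log_phi) - np.log(M)     # log of the average density

def sample_loglik(X, mus):
    """The sample log-likelihood L_n(mu) of Eq. (4a)."""
    return float(np.mean([gmm_logpdf(x, mus) for x in X]))
```

Working in log space via `logaddexp` avoids underflow when a point is far from every center.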
A straightforward implication of the positivity of the KL divergence is that the population likelihood function is in fact maximized at µ∗ (along with permutations thereof, depending on how we index the mixture components). On the basis of empirical evidence, Srebro [2007] conjectured that this population log-likelihood is in fact well-behaved, in the sense of having no spurious local optima. In Theorem 1, we show that this intuition is false, and provide a simple example of a mixture of M = 3 well-separated Gaussians in dimension d = 1 whose population log-likelihood function has arbitrarily bad local optima.

2.2 Expectation-Maximization Algorithm

A natural way to estimate the mean vectors µ∗ is by attempting to maximize the sample log-likelihood defined by the samples {x_ℓ}_{ℓ=1}^{n}. For a non-degenerate Gaussian mixture model, the log-likelihood is non-concave. Rather than attempting to maximize the log-likelihood directly, the EM algorithm proceeds by iteratively maximizing a lower bound on the log-likelihood. It does so by alternating between two steps:

1. E-step: For each i ∈ [M] and ℓ ∈ [n], compute the membership weight

w_i(x_ℓ) = φ(x_ℓ | µ_i, I) / ∑_{j=1}^{M} φ(x_ℓ | µ_j, I).

2. M-step: For each i ∈ [M], update the mean vector µ_i via

µ_i^{new} = ∑_{ℓ=1}^{n} w_i(x_ℓ) x_ℓ / ∑_{ℓ=1}^{n} w_i(x_ℓ).

In the population setting, the M-step becomes:

µ_i^{new} = E_{µ∗}[w_i(X) X] / E_{µ∗}[w_i(X)].  (5)

Intuitively, the M-step updates the mean vector of each Gaussian component to be a weighted centroid of the samples, for appropriately chosen weights.

First-order EM updates: For a general latent variable model with observed variables X = x, latent variables Z and model parameters θ, by Jensen's inequality the log-likelihood function can be lower bounded as

log P(x | θ′) ≥ E_{Z∼P(·|x;θ)} log P(x, Z | θ′) − E_{Z∼P(·|x;θ)} log P(Z | x; θ′),

where we define Q(θ′ | θ) := E_{Z∼P(·|x;θ)} log P(x, Z | θ′).
Each step of the EM algorithm can also be viewed as optimizing over this lower bound, which gives:

θ^{new} := arg max_{θ′} Q(θ′ | θ).

There are many variants of the EM algorithm which rely on partial updates at each iteration instead of finding the exact optimum of Q(θ′ | θ). One important example, analyzed in the work of Balakrishnan et al. [2015], is the first-order EM algorithm. The first-order EM algorithm takes a step along the gradient of the function Q(θ′ | θ) (with respect to its first argument) in each iteration. Concretely, given a step size s > 0, the first-order EM updates can be written as:

θ^{new} = θ + s ∇_{θ′} Q(θ′ | θ) |_{θ′=θ}.

In the case of the model GMM(µ∗), the gradient EM updates on the population objective take the form

µ_i^{new} = µ_i + s E_{µ∗}[ w_i(X)(X − µ_i) ].  (6)

This update turns out to be equivalent to gradient ascent on the population likelihood L with step size s > 0 (see the paper Balakrishnan et al. [2015] for details).

2.3 Random Initialization

Since the log-likelihood function is non-concave, the point to which the EM algorithm converges depends on the initial value of µ. In practice, it is standard to choose these values by some form of random initialization. For instance, one method is to initialize the mean vectors by sampling uniformly at random from the data set {x_ℓ}_{ℓ=1}^{n}. This scheme is intuitively reasonable, because it automatically adapts to the locations of the true centers. If the true centers have large mutual distances, then the initialized centers will also be scattered. Conversely, if the true centers concentrate in a small region of the space, then the initialized centers will also be close to each other. In practice, initializing µ by drawing uniformly from the data is often more reasonable than drawing µ from a fixed distribution. In this paper, we analyze the EM algorithm and its variants at the population level. We focus on the above practical initialization scheme of selecting µ uniformly at random from the sample set.
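Both updates above can be written compactly at the sample level. The sketch below is a hedged illustration for the equal-weight, identity-covariance model; the function names are ours:

```python
import numpy as np

def membership_weights(X, mus):
    """E-step: w_i(x_l) proportional to phi(x_l | mu_i, I)."""
    sq = ((X[:, None, :] - mus[None, :, :]) ** 2).sum(-1)   # (n, M) squared distances
    logw = -0.5 * sq
    logw -= logw.max(axis=1, keepdims=True)                 # for numerical stability
    w = np.exp(logw)
    return w / w.sum(axis=1, keepdims=True)

def em_step(X, mus):
    """M-step: each mean becomes the weighted centroid of the samples."""
    w = membership_weights(X, mus)
    return (w.T @ X) / w.sum(axis=0)[:, None]

def first_order_em_step(X, mus, s):
    """Sample analogue of Eq. (6): mu_i + s * mean_l w_i(x_l)(x_l - mu_i)."""
    w = membership_weights(X, mus)
    grad = (w[:, :, None] * (X[:, None, :] - mus[None, :, :])).mean(axis=0)
    return mus + s * grad
```

On well-separated data, both iterations move the centers toward the component means, with the first-order variant taking damped steps controlled by s.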
In the idealized population setting, this is equivalent to sampling the initial values of µ i.i.d. from the distribution GMM(µ∗). Throughout this paper, we refer to this particular initialization strategy as random initialization.

3 Main results

We now turn to the statements of our main results, along with a discussion of some of their consequences.

3.1 Structural properties

In our first main result (Theorem 1), for any M ≥ 3, we exhibit an M-component mixture of Gaussians in dimension d = 1 for which the population log-likelihood has a bad local maximum whose log-likelihood is arbitrarily worse than that attained by the true parameters µ∗. This result provides a negative answer to the conjecture of Srebro [2007].

Theorem 1. For any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ∗) and a local maximum µ′ such that

L(µ′) ≤ L(µ∗) − C_gap.

In order to illustrate the intuition underlying Theorem 1, we give a geometrical description of our construction for M = 3. Suppose that the true centers µ∗_1, µ∗_2 and µ∗_3 are such that the distance between µ∗_1 and µ∗_2 is much smaller than the respective distances from µ∗_1 to µ∗_3 and from µ∗_2 to µ∗_3. Now consider the point µ := (µ1, µ2, µ3), where µ1 = (µ∗_1 + µ∗_2)/2, and where the points µ2 and µ3 are both placed at the true center µ∗_3. This assignment does not maximize the population log-likelihood, because only one center is assigned to the two Gaussian components centered at µ∗_1 and µ∗_2, while two centers are assigned to the Gaussian component centered at µ∗_3. However, when the components are well-separated, we are able to show that there is a local maximum in the neighborhood of this configuration.
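The likelihood gap at such a configuration is easy to see numerically. The sketch below uses a hypothetical 1-D instance of our own (a close pair plus one far center), with a Monte Carlo average standing in for the population log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(4)

true_mus = np.array([0.0, 2.0, 50.0])     # mu*_1, mu*_2 close together; mu*_3 far away
bad_mus = np.array([1.0, 50.0, 50.0])     # one center covers the pair, two sit at mu*_3

# Large sample from GMM(true_mus) to approximate the population L of Eq. (4b).
n = 200_000
comp = rng.integers(0, 3, size=n)
x = true_mus[comp] + rng.standard_normal(n)

def L(mus):
    """Monte Carlo estimate of the population log-likelihood."""
    sq = (x[:, None] - mus[None, :]) ** 2
    return np.mean(np.log(np.mean(np.exp(-0.5 * sq), axis=1))) - 0.5 * np.log(2 * np.pi)

print(L(true_mus), L(bad_mus))            # the true configuration scores higher
```

This only demonstrates that the configuration is sub-optimal; establishing that a genuine local maximum sits nearby is the delicate part of the proof.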
In order to establish the existence of a local maximum, we first define a neighborhood of this configuration ensuring that it does not contain any global maximum, and then prove that the log-likelihood on the boundary of this neighborhood is strictly smaller than that of the sub-optimal configuration µ. Since the log-likelihood is bounded from above, this neighborhood must contain at least one maximum of the log-likelihood. Since the global maxima are not in this neighborhood by construction, any maximum in this neighborhood must be a local maximum. See Section A for a detailed proof.

3.2 Algorithmic consequences

An important implication of Theorem 1 is that any iterative algorithm, such as EM or gradient ascent, that attempts to maximize the likelihood based on local updates cannot be globally convergent; that is, it cannot converge to (near) globally optimal solutions from an arbitrary initialization. Indeed, if any such algorithm is initialized at the local maximum, it will remain trapped. However, one might argue that this conclusion is overly pessimistic, in that we have only shown that these algorithms fail when initialized at a certain (adversarially chosen) point. Indeed, the mere existence of bad local maxima need not be a practical concern unless it can be shown that a typical optimization algorithm will frequently converge to one of them. The following result shows that the EM algorithm, when applied to the population likelihood and initialized according to the random scheme described in Section 2.2, converges to a bad critical point with high probability.

Theorem 2. Let µ^t be the t-th iterate of the EM algorithm initialized by the random initialization scheme described previously. There exists a universal constant c such that, for any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ∗) with

P( ∀t ≥ 0 : L(µ^t) ≤ L(µ∗) − C_gap ) ≥ 1 − e^{−cM}.
Theorem 2 shows that, for the specified configuration µ∗, the probability of success for the EM algorithm is exponentially small as a function of M. As a consequence, in order to guarantee recovering a global maximum with at least constant probability, the EM algorithm with random initialization must be executed at least e^{Ω(M)} times. This result strongly suggests that effective initialization schemes, such as those based on pilot estimators utilizing the method of moments [Moitra and Valiant, 2010, Hsu and Kakade, 2013], are critical to finding good maxima in general GMMs. The key idea in the proof of Theorem 2 is the following: suppose that all the true centers are grouped into two clusters that are extremely far apart, and suppose further that we initialize all the centers in the neighborhood of these two clusters, while ensuring that at least one center lies within each cluster. In this situation, all centers will remain trapped within the cluster in which they were first initialized, irrespective of how many steps we take in the EM algorithm. Intuitively, this suggests that the only favorable initialization schemes (from which convergence to a global maximum is possible) are those in which (1) all initialized centers fall in the neighborhood of exactly one cluster of true centers, and (2) the number of centers initialized within each cluster of true centers exactly matches the number of true centers in that cluster. However, this observation alone only suffices to guarantee that the success probability is polynomially small in M. In order to demonstrate that the success probability is exponentially small in M, we need to refine this construction further. In more detail, we construct a Gaussian mixture distribution with a recursive structure: at the top level, its true centers can be grouped into two clusters far apart; then, inside each cluster, the true centers can be further grouped into two well-separated mini-clusters; and so on.
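The trapping phenomenon described above is easy to reproduce in a small simulation. This is a hypothetical 1-D instance of our own, not the recursive construction used in the actual proof:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two clusters of true centers, extremely far apart: {0, 6} and {100}.
true_mus = np.array([0.0, 6.0, 100.0])
n = 3_000
x = true_mus[rng.integers(0, 3, size=n)] + rng.standard_normal(n)

# Unlucky initialization: one center near the left cluster, two near the right one.
mus = np.array([3.0, 99.0, 101.0])

for _ in range(200):                      # EM iterations (E-step, then M-step)
    sq = (x[:, None] - mus[None, :]) ** 2
    logw = -0.5 * sq
    logw -= logw.max(axis=1, keepdims=True)
    w = np.exp(logw)
    w /= w.sum(axis=1, keepdims=True)
    mus = (w * x[:, None]).sum(axis=0) / w.sum(axis=0)

# Each center stays trapped in the cluster where it started: the left center
# settles between the two left true centers, and both right centers stay near 100.
print(np.sort(mus))
```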
We can repeat this structure for Ω(log M) levels. For this GMM instance, even in the case where the number of true centers exactly matches the number of initialized centers in each cluster at the top level, we still need to consider the configuration of the initial centers within the mini-clusters, which further reduces the probability of success for a random initialization. A straightforward calculation then shows that the probability of a favorable random initialization is on the order of e^{−Ω(M)}. The full proof is given in Section A.2.

We devote the remainder of this section to a treatment of the first-order EM algorithm. Our first result in this direction shows that convergence to sub-optimal fixed points remains a problem for the first-order EM algorithm, provided the step-size is not chosen too aggressively.

Theorem 3. Let µ^t be the t-th iterate of the first-order EM algorithm with stepsize s ∈ (0, 1), initialized by the random initialization scheme described previously. There exists a universal constant c such that, for any M ≥ 3 and any constant C_gap > 0, there is a well-separated uniform mixture of M unit-variance spherical Gaussians GMM(µ*) with

P( ∀t ≥ 0, L(µ^t) ≤ L(µ*) − C_gap ) ≥ 1 − e^{−cM}.    (7)

We note that the restriction on the step-size is weak, and is satisfied by the theoretically optimal choice for a mixture of two Gaussians in the setting studied by Balakrishnan et al. [2015]. Recall that the first-order EM updates are identical to gradient ascent updates on the log-likelihood function. As a consequence, we can conclude that the most natural local search heuristics for maximizing the log-likelihood (EM and gradient ascent) fail to provide statistically meaningful estimates when initialized randomly, unless we repeat this procedure exponentially many (in M) times.

Our final result concerns the type of fixed points reached by the first-order EM algorithm in our setting. Pascanu et al.
[2014] argue that for high-dimensional optimization problems, the principal difficulty is the proliferation of saddle points, not the existence of poor local maxima. In our setting, however, we can leverage recent results on gradient methods [Lee et al., 2016, Panageas and Piliouras, 2016] to show that the first-order EM algorithm cannot converge to strict saddle points. More precisely:

Definition 1 (Strict saddle point [Ge et al., 2015]). For a maximization problem, we say that a critical point x_ss of a function f is a strict saddle point if the Hessian ∇²f(x_ss) has at least one strictly positive eigenvalue.

With this definition, we have the following:

Theorem 4. Let µ^t be the t-th iterate of the first-order EM algorithm with constant stepsize s ∈ (0, 1), initialized by the random initialization scheme described previously. Then for any M-component mixture of spherical Gaussians: (a) the iterates µ^t converge to a critical point of the log-likelihood; (b) for any strict saddle point µ_ss, we have P(lim_{t→∞} µ^t = µ_ss) = 0.

Theorems 3 and 4 provide strong support for the claim that the sub-optimal points to which the first-order EM algorithm frequently converges are bad local maxima: the algorithmic failure of the first-order EM algorithm is most likely due to the presence of bad local maxima, as opposed to (strict) saddle points. The proof of Theorem 4 is based on recent work [Lee et al., 2016, Panageas and Piliouras, 2016] on the asymptotic performance of gradient methods. That work relies on the stable manifold theorem from dynamical systems theory and, applied directly to our setting, would require establishing that the population likelihood L is smooth. Our proof technique avoids such a smoothness argument; see Section A.4 for the details. The proof makes use of specific properties of the first-order EM algorithm that do not hold for the EM algorithm.
We conjecture that a similar result is true for the EM algorithm; however, we suspect that a generalized version of the stable manifold theorem will be needed to establish such a result.

4 Conclusion and open problems

In this paper, we resolved an open problem of Srebro [2007] by demonstrating the existence of arbitrarily bad local maxima for the population log-likelihood of Gaussian mixture models, even in the idealized situation where each component is uniformly weighted, spherical with unit variance, and well-separated. We further provided evidence that, even in this favorable setting, random initialization schemes for the population EM algorithm are likely to fail with high probability. Our results carry over in a straightforward way, via standard empirical process arguments, to settings where a large finite sample is provided.

An interesting open question is to resolve the necessity of at least three mixture components in our constructions. In particular, we believe that at least three mixture components are necessary for the log-likelihood to be poorly behaved, and that for a well-separated mixture of two Gaussians the EM algorithm with a random initialization is in fact successful with high probability. In a related vein, understanding the empirical success of EM-style algorithms using random initialization schemes, despite their failure on seemingly benign problem instances, remains an open problem which we hope to address in future work.

Acknowledgements

This work was partially supported by Office of Naval Research MURI grant DOD-002888, Air Force Office of Scientific Research Grant AFOSR-FA9550-14-1-001, the Mathematical Data Science program of the Office of Naval Research under grant number N00014-15-1-2670, and National Science Foundation Grant CIF-31712-23800.

References

Elizabeth S Allman, Catherine Matias, and John A Rhodes. Identifiability of parameters in latent structure models with many observed variables.
Annals of Statistics, 37(6A):3099–3132, 2009.
Carlos Améndola, Mathias Drton, and Bernd Sturmfels. Maximum likelihood estimates for Gaussian mixtures are transcendental. In International Conference on Mathematical Aspects of Computer and Information Sciences, pages 579–590. Springer, 2015.
Sanjeev Arora, Ravi Kannan, et al. Learning mixtures of separated nonspherical Gaussians. The Annals of Applied Probability, 15(1A):69–92, 2005.
Sivaraman Balakrishnan, Martin J Wainwright, and Bin Yu. Statistical guarantees for the EM algorithm: From population to sample-based analysis. Annals of Statistics, 2015.
Mikhail Belkin and Kaushik Sinha. Polynomial learning of distribution families. In 51st Annual IEEE Symposium on Foundations of Computer Science, pages 103–112. IEEE, 2010.
Kamalika Chaudhuri and Satish Rao. Learning mixtures of product distributions using correlations and independence. In 21st Annual Conference on Learning Theory, volume 4, pages 9–1, 2008.
Jiahua Chen. Optimal rate of convergence for finite mixture models. Annals of Statistics, 23(1):221–233, 1995.
Sanjoy Dasgupta and Leonard Schulman. A probabilistic analysis of EM for mixtures of separated, spherical Gaussians. Journal of Machine Learning Research, 8:203–226, 2007.
Arthur P Dempster, Nan M Laird, and Donald B Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
David L Donoho and Richard C Liu. The “automatic” robustness of minimum distance functionals. Annals of Statistics, 16(2):552–586, 1988.
Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points: online stochastic gradient for tensor decomposition. In 28th Annual Conference on Learning Theory, pages 797–842, 2015.
Christopher R Genovese and Larry Wasserman. Rates of convergence for the Gaussian mixture sieve. Annals of Statistics, 28(4):1105–1127, 2000.
Subhashis Ghosal and Aad W Van Der Vaart.
Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities. Annals of Statistics, 29(5):1233–1263, 2001.
Nhat Ho and XuanLong Nguyen. Identifiability and optimal rates of convergence for parameters of multiple types in finite mixtures. arXiv preprint arXiv:1501.02497, 2015.
Daniel Hsu and Sham M Kakade. Learning mixtures of spherical Gaussians: Moment methods and spectral decompositions. In Proceedings of the 4th Conference on Innovations in Theoretical Computer Science, pages 11–20. ACM, 2013.
Jason D Lee, Max Simchowitz, Michael I Jordan, and Benjamin Recht. Gradient descent converges to minimizers. In 29th Annual Conference on Learning Theory, pages 1246–1257, 2016.
Po-Ling Loh and Martin J Wainwright. Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima. In Advances in Neural Information Processing Systems, pages 476–484, 2013.
Ankur Moitra and Gregory Valiant. Settling the polynomial learnability of mixtures of Gaussians. In 51st Annual IEEE Symposium on Foundations of Computer Science, pages 93–102. IEEE, 2010.
Ioannis Panageas and Georgios Piliouras. Gradient descent converges to minimizers: The case of non-isolated critical points. arXiv preprint arXiv:1605.00405, 2016.
Razvan Pascanu, Yann N Dauphin, Surya Ganguli, and Yoshua Bengio. On the saddle point problem for non-convex optimization. arXiv preprint arXiv:1405.4604, 2014.
Nathan Srebro. Are there local maxima in the infinite-sample likelihood of Gaussian mixture estimation? In 20th Annual Conference on Learning Theory, pages 628–629, 2007.
Henry Teicher. Identifiability of finite mixtures. The Annals of Mathematical Statistics, 34(4):1265–1269, 1963.
D Michael Titterington. Statistical Analysis of Finite Mixture Distributions. Wiley, 1985.
Santosh Vempala and Grant Wang. A spectral algorithm for learning mixtures of distributions.
In The 43rd Annual IEEE Symposium on Foundations of Computer Science, pages 113–122. IEEE, 2002.
Zhaoran Wang, Han Liu, and Tong Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Annals of Statistics, 42(6):2164, 2014.
Diffusion-Convolutional Neural Networks

James Atwood and Don Towsley
College of Information and Computer Science
University of Massachusetts
Amherst, MA, 01003
{jatwood|towsley}@cs.umass.edu

Abstract

We present diffusion-convolutional neural networks (DCNNs), a new model for graph-structured data. Through the introduction of a diffusion-convolution operation, we show how diffusion-based representations can be learned from graph-structured data and used as an effective basis for node classification. DCNNs have several attractive qualities, including a latent representation for graphical data that is invariant under isomorphism, as well as polynomial-time prediction and learning that can be represented as tensor operations and efficiently implemented on a GPU. Through several experiments with real structured datasets, we demonstrate that DCNNs are able to outperform probabilistic relational models and kernel-on-graph methods at relational node classification tasks.

1 Introduction

Working with structured data is challenging. On one hand, finding the right way to express and exploit structure in data can lead to improvements in predictive performance; on the other, finding such a representation may be difficult, and adding structure to a model can dramatically increase the complexity of prediction. The goal of this work is to design a flexible model for a general class of structured data that offers improvements in predictive performance while avoiding an increase in complexity. To accomplish this, we extend convolutional neural networks (CNNs) to general graph-structured data by introducing a ‘diffusion-convolution’ operation. Briefly, rather than scanning a ‘square’ of parameters across a grid-structured input like the standard convolution operation, the diffusion-convolution operation builds a latent representation by scanning a diffusion process across each node in a graph-structured input.
This model is motivated by the idea that a representation that encapsulates graph diffusion can provide a better basis for prediction than a graph itself. Graph diffusion can be represented as a matrix power series, providing a straightforward mechanism for including contextual information about entities that can be computed in polynomial time and efficiently implemented on a GPU. In this paper, we present diffusion-convolutional neural networks (DCNNs) and explore their performance on various classification tasks on graphical data. Many techniques include structural information in classification tasks, such as probabilistic relational models and kernel methods; DCNNs offer a complementary approach that provides a significant improvement in predictive performance at node classification tasks. As a model class, DCNNs offer several advantages:

• Accuracy: In our experiments, DCNNs significantly outperform alternative methods for node classification tasks and offer comparable performance to baseline methods for graph classification tasks.

• Flexibility: DCNNs provide a flexible representation of graphical data that encodes node features, edge features, and purely structural information with little preprocessing. DCNNs can be used for a variety of classification tasks with graphical data, including node classification and whole-graph classification.

• Speed: Prediction from a DCNN can be expressed as a series of polynomial-time tensor operations, allowing the model to be implemented efficiently on a GPU using existing libraries.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: DCNN model definition for node and graph classification tasks. (a) Node classification. (b) Graph classification.

The remainder of this paper is organized as follows.
In Section 2, we present a formal definition of the model, including descriptions of prediction and learning procedures. This is followed by several experiments in Section 3 that explore the performance of DCNNs at node and graph classification tasks. We briefly describe the limitations of the model in Section 4; then, in Section 5, we present related work and discuss the relationship between DCNNs and other methods. Finally, conclusions and future work are presented in Section 6.

2 Model

Consider a situation where we have a set of T graphs G = {Gt | t ∈ 1...T}. Each graph Gt = (Vt, Et) is composed of vertices Vt and edges Et. The vertices are collectively described by an Nt × F design matrix Xt of features1, where Nt is the number of nodes in Gt, and the edges Et are encoded by an Nt × Nt adjacency matrix At, from which we can compute a degree-normalized transition matrix Pt that gives the probability of jumping from node i to node j in one step. No constraints are placed on the form of Gt; the graph can be weighted or unweighted, directed or undirected. Either the nodes or the graphs have labels Y associated with them, with the dimensionality of Y differing in each case. We are interested in learning to predict Y; that is, to predict a label for each of the nodes in each graph or a label for each graph itself. In each case, we have access to some labeled entities (be they nodes or graphs), and our task is to predict the values of the remaining unlabeled entities.

This setting can represent several well-studied machine learning tasks. If T = 1 (i.e. there is only one input graph) and the labels Y are associated with the nodes, this reduces to the problem of semi-supervised classification; if there are no edges present in the input graph, this reduces further to standard supervised classification. If T > 1 and the labels Y are associated with each graph, then this represents the problem of supervised graph classification.
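As an illustration of how Pt and its power series might be computed, here is a small sketch (our own, not the paper's code; the NumPy implementation and the epsilon guard against isolated nodes are assumptions):

```python
import numpy as np

def transition_matrix(A):
    """Degree-normalize an adjacency matrix so that row i gives the
    probability of jumping from node i to each neighbor in one step."""
    deg = A.sum(axis=1, keepdims=True)
    return A / np.maximum(deg, 1e-12)  # epsilon avoids division by zero

def power_series(P, H):
    """Stack P^0 ... P^H into the N x (H+1) x N tensor P*, indexing hops
    j from 0 to H as in Eq. (1): P*[i, j, k] = (P^j)[i, k]."""
    terms = [np.eye(P.shape[0])]
    for _ in range(H):
        terms.append(terms[-1] @ P)
    return np.stack(terms, axis=1)
```

For a weighted graph, the same normalization applies to the weighted adjacency matrix.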
DCNNs are designed to perform any task that can be represented within this formulation. A DCNN takes G as input and returns either a hard prediction for Y or a conditional distribution P(Y|X). Each entity of interest (be it a node or a graph) is transformed to a diffusion-convolutional representation, which is an H × F real matrix defined by H hops of graph diffusion over F features, and it is defined by an H × F real-valued weight tensor Wc and a nonlinear differentiable function f that computes the activations. So, for node classification tasks, the diffusion-convolutional representation of graph t, Zt, will be an Nt × H × F tensor, as illustrated in Figure 1a; for graph classification tasks, Zt will be an H × F matrix, as illustrated in Figure 1b.

The model is built on the idea of a diffusion kernel, which can be thought of as a measure of the level of connectivity between any two nodes in a graph when considering all paths between them, with longer paths being discounted more than shorter paths. Diffusion kernels provide an effective basis for node classification tasks [1]. The term ‘diffusion-convolution’ is meant to evoke the ideas of feature learning, parameter tying, and invariance that are characteristic of convolutional neural networks. The core operation of a DCNN is a mapping from nodes and their features to the results of a diffusion process that begins at that node. In contrast with standard CNNs, DCNN parameters are tied according to diffusion search depth rather than their position in a grid. The diffusion-convolutional representation is invariant with respect to node index rather than position; in other words, the diffusion-convolutional activations of two isomorphic input graphs will be the same2. Unlike standard CNNs, DCNNs have no pooling operation.

1 Without loss of generality, we assume that the features are real-valued.

Node Classification Consider a node classification task where a label Y is predicted for each input node in a graph.
Let P*_t be an Nt × H × Nt tensor containing the power series of Pt, defined as follows:

P*_{tijk} = (P_t^j)_{ik}    (1)

The diffusion-convolutional activation Z_{tijk} for node i, hop j, and feature k of graph t is given by

Z_{tijk} = f( W^c_{jk} · Σ_{l=1}^{Nt} P*_{tijl} X_{tlk} )    (2)

The activations can be expressed more concisely using tensor notation as

Z_t = f( W^c ⊙ P*_t X_t )    (3)

where the ⊙ operator represents element-wise multiplication; see Figure 1a. The model only entails O(H × F) parameters, making the size of the latent diffusion-convolutional representation independent of the size of the input.

The model is completed by a dense layer that connects Z to Y. A hard prediction for Y, denoted Ŷ, can be obtained by taking the maximum activation, and a conditional probability distribution P(Y|X) can be found by applying the softmax function:

Ŷ = arg max f( W^d ⊙ Z )    (4)

P(Y|X) = softmax( f( W^d ⊙ Z ) )    (5)

This keeps the same form in the following extensions.

Graph Classification DCNNs can be extended to graph classification by taking the mean activation over the nodes:

Z_t = f( W^c ⊙ 1ᵀ_{Nt} P*_t X_t / Nt )    (6)

where 1_{Nt} is an Nt × 1 vector of ones, as illustrated in Figure 1b.

Purely Structural DCNNs DCNNs can be applied to input graphs with no features by associating a ‘bias feature’ with value 1.0 with each node. Richer structure can be encoded by adding additional structural node features such as PageRank or the clustering coefficient, although this does introduce some hand-engineering and pre-processing.

2 A proof is given in the appendix.
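A direct (unoptimized) NumPy rendering of Eqs. (2), (3), and (6) might look as follows; the function names and the use of einsum are our own illustration, not the paper's implementation:

```python
import numpy as np

def dcnn_node_activations(Pstar, X, Wc, f=np.tanh):
    """Node-level diffusion-convolutional representation (Eqs. 2-3):
    Z[i, j, k] = f(Wc[j, k] * sum_l Pstar[i, j, l] * X[l, k])."""
    # einsum contracts the diffusion tensor (N, H, N) with features (N, F)
    return f(Wc[None, :, :] * np.einsum('ijl,lk->ijk', Pstar, X))

def dcnn_graph_activation(Pstar, X, Wc, f=np.tanh):
    """Graph-level representation (Eq. 6): average the diffused features
    over nodes before applying the nonlinearity."""
    N = X.shape[0]
    return f(Wc * np.einsum('ijl,lk->jk', Pstar, X) / N)
```

With the identity nonlinearity the graph-level activation equals the mean of the node-level activations; with a nonlinear f the two differ, since Eq. (6) averages inside f.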
Figure 2: Learning curves (2a-2b) and effect of search breadth (2c) for the Cora and Pubmed datasets.

Learning DCNNs are learned via stochastic minibatch gradient descent on the backpropagated error. At each epoch, node indices are randomly grouped into several batches. The error of each batch is computed by taking slices of the graph definition power series and propagating the input forward to predict the output, then updating the weights by gradient descent on the back-propagated error. We also make use of windowed early stopping; training is ceased if the validation error of a given epoch is greater than the average of the last few epochs.

3 Experiments

In this section we present several experiments to investigate how well DCNNs perform at node and graph classification tasks. In each case we compare DCNNs to other well-known and effective approaches to the task. In each of the following experiments, we use the AdaGrad algorithm [2] for gradient-based optimization with a learning rate of 0.05. All weights are initialized by sampling from a normal distribution with mean zero and variance 0.01. We choose the hyperbolic tangent for the nonlinear differentiable function f and use the multiclass hinge loss between the model predictions and ground truth as the training objective. The model was implemented in Python using Lasagne and Theano [3].

3.1 Node classification

We ran several experiments to investigate how well DCNNs classify nodes within a single graph.
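The windowed early-stopping rule from the Learning paragraph above can be sketched as follows; the window length and the exact comparison are assumptions, since the text only says "the average of the last few epochs":

```python
def should_stop(val_errors, window=5):
    """Windowed early stopping (sketch): stop once the latest validation
    error exceeds the average of the previous `window` epochs."""
    if len(val_errors) <= window:
        return False  # not enough history yet
    recent = val_errors[-window - 1:-1]
    return val_errors[-1] > sum(recent) / window
```

The training loop would call this once per epoch with the running list of validation errors and break out of training when it returns True.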
The graphs were constructed from the Cora and Pubmed datasets, which each consist of scientific papers (nodes), citations between papers (edges), and subjects (labels).

Protocol In each experiment, the set G consists of a single graph G. During each trial, the input graph’s nodes are randomly partitioned into training, validation, and test sets, with each set having the same number of nodes. During training, all node features X, all edges E, and the labels Y of the training and validation sets are visible to the model. We report classification accuracy as well as micro- and macro-averaged F1; each measure is reported as a mean and confidence interval computed from several trials. We also provide learning curves for the Cora and Pubmed datasets. In this experiment, the validation and test set each contain 10% of the nodes, and the amount of training data is varied between 10% and 100% of the remaining nodes.

                 Cora                              Pubmed
Model        Accuracy  F (micro)  F (macro)   Accuracy  F (micro)  F (macro)
l1logistic    0.7087    0.7087     0.6829      0.8718    0.8718     0.8698
l2logistic    0.7292    0.7292     0.7013      0.8631    0.8631     0.8614
KED           0.8044    0.8044     0.7928      0.8125    0.8125     0.7978
KLED          0.8229    0.8229     0.8117      0.8228    0.8228     0.8086
CRF-LBP       0.8449    –          0.8248      –         –          –
2-hop DCNN    0.8677    0.8677     0.8584      0.8976    0.8976     0.8943

Table 1: A comparison of the performance between baseline ℓ1- and ℓ2-regularized logistic regression models, exponential diffusion and Laplacian exponential diffusion kernel models, loopy belief propagation (LBP) on a partially-observed conditional random field (CRF), and a two-hop DCNN on the Cora and Pubmed datasets. The DCNN offers the best performance according to each measure, and the gain is statistically significant in each case. The CRF-LBP result is quoted from [4], which follows the same experimental protocol.

Baseline Methods ‘l1logistic’ and ‘l2logistic’ indicate ℓ1- and ℓ2-regularized logistic regression, respectively. The inputs to the logistic regression models are the node features alone (i.e.
the graph structure is not used) and the regularization parameter is tuned using the validation set. ‘KED’ and ‘KLED’ denote the exponential diffusion and Laplacian exponential diffusion kernels-on-graphs, respectively, which have previously been shown to perform well on the Cora dataset [1]. These kernel models take the graph structure as input (i.e. node features are not used) and the validation set is used to determine the kernel hyperparameters. ‘CRF-LBP’ indicates a partially-observed conditional random field that uses loopy belief propagation for inference. Results for this model are quoted from prior work [4] that uses the same dataset and experimental protocol.

Node Classification Data The Cora corpus [5] consists of 2,708 machine learning papers and the 5,429 citation edges that they share. Each paper is assigned a label drawn from seven possible machine learning subjects, and each paper is represented by a bit vector where each feature corresponds to the presence or absence of a term drawn from a dictionary with 1,433 unique entries. We treat the citation network as an undirected graph. The Pubmed corpus [5] consists of 19,717 scientific papers from the Pubmed database on the subject of diabetes. Each paper is assigned to one of three classes. The citation network that joins the papers consists of 44,338 links, and each paper is represented by a Term Frequency-Inverse Document Frequency (TFIDF) vector drawn from a dictionary with 500 terms. As with the Cora corpus, we construct an adjacency-based DCNN that treats the citation network as an undirected graph.

Results and Discussion Table 1 compares the performance of a two-hop DCNN with several baselines. The DCNN offers the best performance according to different measures, including classification accuracy and micro- and macro-averaged F1, and the gain is statistically significant in each case with negligible p-values. For all models except the CRF, we assessed this via a one-tailed two-sample Welch’s t-test.
The CRF result is quoted from prior work, so we used a one-tailed one-sample test. Figures 2a and 2b show the learning curves for the Cora and Pubmed datasets. The DCNN generally outperforms the baseline methods on the Cora dataset regardless of the amount of training data available, although the Laplacian exponential diffusion kernel does offer comparable performance when the entire training set is available. Note that the kernel methods were prohibitively slow to run on the Pubmed dataset, so we do not include them in the learning curve. Finally, the impact of diffusion breadth on performance is shown in Figure 2c. Most of the performance is gained as the diffusion breadth grows from zero to three hops, then levels out as the diffusion process converges.

3.2 Graph Classification

We also ran experiments to investigate how well DCNNs can learn to label whole graphs.

Protocol At the beginning of each trial, input graphs are randomly assigned to training, validation, or test sets, with each set having the same number of graphs. During the learning phase, the training and validation graphs, their node features, and their labels are made visible; the training set is used to determine the parameters and the validation set to determine the hyperparameters. At test time, the test graphs and features are made visible and the graph labels are predicted and compared with ground truth. Table 2 reports the mean accuracy, micro-averaged F1, and macro-averaged F1 over several trials. We also provide learning curves for the MUTAG (Figure 3a) and ENZYMES (Figure 3b) datasets.
In these experiments, the validation and test sets each contain 10% of the graphs, and we report the performance of each model as a function of the proportion of the remaining graphs that are made available for training.

Figure 3: Learning curves for the MUTAG (3a) and ENZYMES (3b) datasets, as well as the effect of search breadth (3c).

Baseline Methods As a simple baseline, we apply linear classifiers to the average feature vector of each graph; ‘l1logistic’ and ‘l2logistic’ indicate ℓ1- and ℓ2-regularized logistic regression applied as described. ‘deepwl’ indicates the Weisfeiler-Lehman (WL) subtree deep graph kernel. Deep graph kernels decompose a graph into substructures, treat those substructures as words in a sentence, and fit a word-embedding model to obtain a vectorization [6].

Graph Classification Data We apply DCNNs to a standard set of graph classification datasets consisting of NCI1, NCI109, MUTAG, PTC, and ENZYMES. The NCI1 and NCI109 [7] datasets consist of 4100 and 4127 graphs that represent chemical compounds. Each graph is labeled with whether it has the ability to suppress or inhibit the growth of a panel of human tumor cell lines, and each node is assigned one of 37 (for NCI1) or 38 (for NCI109) possible labels. MUTAG [8] contains 188 nitro compounds that are labeled as either aromatic or heteroaromatic with seven node features. PTC [9] contains 344 compounds labeled with whether they are carcinogenic in rats with 19 node features.
Finally, ENZYMES [10] is a balanced dataset containing 600 proteins with three node features.

Results and Discussion In contrast with the node classification experiments, there is no clear best model choice across the datasets or evaluation measures. In fact, according to Table 2, the only clear choice is the ‘deepwl’ graph kernel model on the ENZYMES dataset, which significantly outperforms the other methods in terms of accuracy and micro- and macro-averaged F measure. Furthermore, as shown in Figure 3, there is no clear benefit to broadening the search breadth H. These results suggest that, while diffusion processes are an effective representation for nodes, they do a poor job of summarizing entire graphs. It may be possible to improve these results by finding a more effective way to aggregate the node operations than a simple mean, but we leave this as future work.

                 NCI1                              NCI109
Model        Accuracy  F (micro)  F (macro)   Accuracy  F (micro)  F (macro)
l1logistic    0.5728    0.5728     0.5711      0.5555    0.5555     0.5411
l2logistic    0.5688    0.5688     0.5641      0.5586    0.5568     0.5402
deepwl        0.6215    0.6215     0.5821      0.5801    0.5801     0.5178
2-hop DCNN    0.6250    0.5807     0.5807      0.6275    0.5884     0.5884
5-hop DCNN    0.6261    0.5898     0.5898      0.6286    0.5950     0.5899

                 MUTAG                             PTC                               ENZYMES
Model        Accuracy  F (micro)  F (macro)   Accuracy  F (micro)  F (macro)   Accuracy  F (micro)  F (macro)
l1logistic    0.7190    0.7190     0.6405      0.5470    0.5470     0.4272      0.1640    0.1640     0.0904
l2logistic    0.7016    0.7016     0.5795      0.5565    0.5565     0.4460      0.2030    0.2030     0.1110
deepwl        0.6563    0.6563     0.5942      0.5113    0.5113     0.4444      0.2155    0.2155     0.1431
2-hop DCNN    0.6635    0.7975     0.79747     0.5660    0.0500     0.0531      0.1590    0.1590     0.0809
5-hop DCNN    0.6698    0.8013     0.8013      0.5530    0.0        0.0526      0.1810    0.1810     0.0991

Table 2: A comparison of the performance between baseline methods and two- and five-hop DCNNs on several graph classification datasets.

4 Limitations

Scalability DCNNs are realized as a series of operations on dense tensors.
Storing the largest tensor (P*, the transition matrix power series) requires O(Nt² H) memory, which can lead to out-of-memory errors on the GPU for very large graphs in practice. As such, DCNNs can be readily applied to graphs of tens to hundreds of thousands of nodes, but not to graphs with millions to billions of nodes.

Locality The model is designed to capture local behavior in graph-structured data. As a consequence of constructing the latent representation from diffusion processes that begin at each node, we may fail to encode useful long-range spatial dependencies between individual nodes or other non-local graph behavior.

5 Related Work

In this section we describe existing approaches to the problems of semi-supervised learning, graph classification, and edge classification, and discuss their relationship to DCNNs.

Other Graph-Based Neural Network Models Other researchers have investigated how CNNs can be extended from grid-structured to more general graph-structured data. [11] propose a spatial method with ties to hierarchical clustering, where the layers of the network are defined via a hierarchical partitioning of the node set. In the same paper, the authors propose a spectral method that extends the notion of convolution to graph spectra. Later, [12] applied these techniques to data where a graph is not immediately present but must be inferred. DCNNs, which fall within the spatial category, are distinct from this work because their parameterization makes them transferable: a DCNN learned on one graph can be applied to another. A related branch of work has focused on extending convolutional neural networks to domains where the structure of the graph itself is of direct interest [13, 14, 15]. For example, [15] construct a deep convolutional model that learns a real-valued fingerprint representation of chemical compounds.
Probabilistic Relational Models DCNNs also share strong ties to probabilistic relational models (PRMs), a family of graphical models that are capable of representing distributions over relational data [16]. In contrast to PRMs, DCNNs are deterministic, which allows them to avoid the exponential blowup in learning and inference that hampers PRMs. Our results suggest that DCNNs outperform partially-observed conditional random fields, the state-of-the-art probabilistic relational model for semi-supervised learning. Furthermore, DCNNs offer this performance at considerably lower computational cost. Learning the parameters of both DCNNs and partially-observed CRFs involves numerically minimizing a nonconvex objective – the backpropagated error in the case of DCNNs and the negative marginal log-likelihood for CRFs. In practice, the marginal log-likelihood of a partially-observed CRF is computed using a contrast-of-partition-functions approach that requires running loopy belief propagation twice; once on the entire graph and once with the observed labels fixed [17]. This algorithm, and thus each step in the numerical optimization, has exponential time complexity O(E_t N_t^{C_t}), where C_t is the size of the maximal clique in G_t [18]. In contrast, the learning subroutine for a DCNN requires only one forward and backward pass for each instance in the training data. The complexity is dominated by the matrix multiplication between the graph definition matrix A and the design matrix V, giving an overall polynomial complexity of O(N_t^2 F).

Kernel Methods Kernel methods define similarity measures either between nodes (so-called kernels on graphs) [1] or between graphs (graph kernels), and these similarities can serve as a basis for prediction via the kernel trick. The performance of graph kernels can be improved by decomposing a graph into substructures, treating those substructures as words in a sentence, and fitting a word-embedding model to obtain a vectorization [6].
DCNNs share ties with the exponential diffusion family of kernels on graphs. The exponential diffusion graph kernel K_ED is a sum of a matrix power series:

K_ED = Σ_{j=0}^{∞} α^j A^j / j! = exp(αA)    (7)

The diffusion-convolution activation given in (3) is also constructed from a power series. However, the representations have several important differences. First, the weights in (3) are learned via backpropagation, whereas the kernel representation is not learned from data. Second, the diffusion-convolutional representation is built from both node features and the graph structure, whereas the exponential diffusion kernel is built from the graph structure alone. Finally, the representations have different dimensions: K_ED is an N_t × N_t kernel matrix, whereas Z_t is an N_t × H × F tensor that does not conform to the definition of a kernel.

6 Conclusion and Future Work
By learning a representation that encapsulates the results of graph diffusion, diffusion-convolutional neural networks offer performance improvements over probabilistic relational models and kernel methods at node classification tasks. We intend to investigate methods for a) improving DCNN performance at graph classification tasks and b) making the model scalable in future work.

7 Appendix: Representation Invariance for Isomorphic Graphs
If two graphs G_1 and G_2 are isomorphic, then their diffusion-convolutional activations are the same. Proof by contradiction; assume that G_1 and G_2 are isomorphic and that their diffusion-convolutional activations are different. The diffusion-convolutional activations can be written as

Z_{1jk} = f( W^c_{jk} ⊙ Σ_{v ∈ V_1} Σ_{v' ∈ V_1} P*_{1vjv'} X_{1v'k} / N_1 )
Z_{2jk} = f( W^c_{jk} ⊙ Σ_{v ∈ V_2} Σ_{v' ∈ V_2} P*_{2vjv'} X_{2v'k} / N_2 )

Note that, by isomorphism,

V_1 = V_2 = V
X_{1vk} = X_{2vk} = X_{vk}    for all v ∈ V, k ∈ [1, F]
P*_{1vjv'} = P*_{2vjv'} = P*_{vjv'}    for all v, v' ∈ V, j ∈ [0, H]
N_1 = N_2 = N

allowing us to rewrite the activations as

Z_{1jk} = f( W^c_{jk} ⊙ Σ_{v ∈ V} Σ_{v' ∈ V} P*_{vjv'} X_{v'k} / N )
Z_{2jk} = f( W^c_{jk} ⊙ Σ_{v ∈ V} Σ_{v' ∈ V} P*_{vjv'} X_{v'k} / N )
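As a numerical sanity check of the identity K_ED = exp(αA) in Eq. (7), the sketch below (our addition, not from the paper) evaluates the truncated power series Σ_j α^j A^j / j! and compares it against exp(αA) computed via eigendecomposition of the symmetric adjacency matrix; the helper names and toy graph are illustrative assumptions.

```python
import numpy as np

def exp_diffusion_kernel(A, alpha, terms=30):
    """Truncated evaluation of K_ED = sum_j alpha^j A^j / j! = exp(alpha*A).
    A is a (symmetric) adjacency matrix; `terms` controls the truncation."""
    N = A.shape[0]
    K = np.zeros((N, N))
    term = np.eye(N)                        # alpha^0 A^0 / 0!
    for j in range(terms):
        K += term
        term = term @ (alpha * A) / (j + 1)  # next series term
    return K

A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
K = exp_diffusion_kernel(A, alpha=0.5)

# Cross-check against exp(alpha*A) computed by eigendecomposition:
# exp(0.5*A) = V diag(exp(w)) V^T for symmetric A.
w, V = np.linalg.eigh(0.5 * A)
K_exact = (V * np.exp(w)) @ V.T
```

Thirty series terms already agree with the closed form to machine precision for this small α and graph, which is why the kernel is usually written directly as the matrix exponential.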
This implies that Z_1 = Z_2, which presents a contradiction and completes the proof.

Acknowledgments
We would like to thank Bruno Ribeiro, Pinar Yanardag, and David Belanger for their feedback on drafts of this paper. This work was supported in part by Army Research Office Contract W911NF12-1-0385 and ARL Cooperative Agreement W911NF-09-2-0053. This work was also supported by NVIDIA through the donation of equipment used to perform experiments.

References
[1] François Fouss, Kevin Francoisse, Luh Yen, Alain Pirotte, and Marco Saerens. An experimental investigation of kernels on graphs for collaborative recommendation and semi-supervised classification. Neural Networks, 31:53–72, July 2012.
[2] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 2011.
[3] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), 2010.
[4] P. Sen and L. Getoor. Link-based classification. Technical Report, 2007.
[5] Prithviraj Sen, Galileo Mark Namata, Mustafa Bilgic, Lise Getoor, Brian Gallagher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 2008.
[6] Pinar Yanardag and S. V. N. Vishwanathan. Deep graph kernels. In Proceedings of the 21st ACM SIGKDD International Conference, pages 1365–1374, New York, New York, USA, 2015. ACM Press.
[7] Nikil Wale, Ian A. Watson, and George Karypis. Comparison of descriptor spaces for chemical compound retrieval and classification. Knowledge and Information Systems, 14(3):347–375, August 2007.
[8] Asim Kumar Debnath, Rosa L. Lopez de Compadre, Gargi Debnath, Alan J. Shusterman, and Corwin Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of Medicinal Chemistry, 34(2):786–797, 1991.
[9] Hannu Toivonen, Ashwin Srinivasan, Ross D. King, Stefan Kramer, and Christoph Helma. Statistical evaluation of the predictive toxicology challenge 2000–2001. Bioinformatics, 19(10):1183–1193, 2003.
[10] Karsten M. Borgwardt, Cheng Soon Ong, Stefan Schönauer, S. V. N. Vishwanathan, Alex J. Smola, and Hans-Peter Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(suppl 1):i47–i56, 2005.
[11] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv.org, 2014.
[12] M. Henaff, J. Bruna, and Y. LeCun. Deep convolutional networks on graph-structured data. arXiv.org, 2015.
[13] F. Scarselli, M. Gori, Ah Chung Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 2009.
[14] A. Micheli. Neural network for graphs: A contextual constructive approach. IEEE Transactions on Neural Networks, 2009.
[15] David K. Duvenaud, Dougal Maclaurin, Jorge Aguilera-Iparraguirre, Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, and Ryan P. Adams. Convolutional networks on graphs for learning molecular fingerprints. NIPS, 2015.
[16] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009.
[17] Jakob Verbeek and William Triggs. Scene segmentation with CRFs learned from partially labeled images. NIPS, 2007.
[18] Trevor Cohn. Efficient inference in large conditional random fields. ECML, 2006.
Fast Distributed Submodular Cover: Public-Private Data Summarization
Baharan Mirzasoleiman (ETH Zurich), Morteza Zadimoghaddam (Google Research), Amin Karbasi (Yale University)

Abstract
In this paper, we introduce the public-private framework of data summarization, motivated by privacy concerns in personalized recommender systems and online social services. Such systems usually have access to massive data generated by a large pool of users. A major fraction of the data is public and is visible to (and can be used for) all users. However, each user can also contribute some private data that should not be shared with other users in order to ensure her privacy. The goal is to provide a succinct summary of the massive dataset, ideally as small as possible, from which customized summaries can be built for each user, i.e., it can contain elements from the public data (for diversity) and users' private data (for personalization). To formalize the above challenge, we assume that the scoring function according to which a user evaluates the utility of her summary satisfies submodularity, a widely used notion in data summarization applications. Thus, we model the data summarization targeted to each user as an instance of a submodular cover problem. However, when the data is massive it is infeasible to use the centralized greedy algorithm to find a customized summary even for a single user. Moreover, for a large pool of users, it is too time consuming to find such summaries separately. Instead, we develop a fast distributed algorithm for submodular cover, FASTCOVER, that provides a succinct summary in one shot and for all users. We show that the solution provided by FASTCOVER is competitive with that of the centralized algorithm, with a number of rounds that is exponentially smaller than state-of-the-art results.
Moreover, we have implemented FASTCOVER with Spark to demonstrate its practical performance on a number of concrete applications, including personalized location recommendation, personalized movie recommendation, and dominating set on tens of millions of data points and varying numbers of users.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction
Data summarization, a central challenge in machine learning, is the task of finding a representative subset of manageable size out of a large dataset. It has found numerous applications, including image summarization [1], recommender systems [2], scene summarization [3], clustering [4, 5], active set selection in non-parametric learning [6], and document and corpus summarization [7, 8], to name a few. A general recipe for obtaining a faithful summary is to define a utility/scoring function that measures the coverage and diversity of the selected subset [1]. In many applications, the utility functions used for summarization exhibit submodularity, a natural diminishing-returns property. In words, submodularity implies that the added value of any given element from the dataset decreases as we include more data points in the summary. Thus, the data summarization problem can be naturally reduced to a submodular cover problem, where the objective is to find the smallest subset whose utility achieves a desired fraction of the utility provided by the entire dataset. It is known that the classical greedy algorithm yields a logarithmic-factor approximation to the optimum summary [9]. It starts with an empty set, and at each iteration adds the element with the maximum added value to the summary selected so far. It is also known that improving upon the logarithmic approximation ratio is NP-hard [10].
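The classical greedy algorithm for submodular cover described above can be sketched in a few lines. This is a toy illustration of ours (with an invented set-cover instance), not the paper's implementation.

```python
def greedy_cover(universe, f, L):
    """Classical greedy for submodular cover: repeatedly add the element
    with the largest marginal gain until f(S) >= L. `f` maps a set of
    element ids to a number; assumed monotone submodular."""
    S = set()
    while f(S) < L:
        gains = {x: f(S | {x}) - f(S) for x in universe - S}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:       # no element can make further progress
            break
        S.add(best)
    return S

# Toy set-cover instance: f counts the ground points covered by chosen sets.
sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6}, 'd': {1}}
f = lambda S: len(set().union(*[sets[x] for x in S]))
solution = greedy_cover(set(sets), f, L=6)
```

On this instance the greedy rule picks the two sets of size three and stops, attaining the cover level L = 6 with two elements; the logarithmic approximation guarantee cited above bounds how far this greedy solution can be from the optimum in general.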
Even though the greedy algorithm produces a near-optimal solution, it is highly impractical for massive datasets, as sequentially selecting elements on a single machine is heavily constrained in terms of speed and memory. Hence, in order to solve the submodular cover problem at scale, we need to make use of MapReduce-style parallel computation models [11, 12]. The greedy algorithm, due to its sequential nature, is poorly suited for parallelization. In this paper, we propose a fast distributed algorithm, FASTCOVER, that enables us to solve the more general problem of covering multiple submodular functions in one run of the algorithm. It relies on three important ingredients: 1) a reduction from multiple submodular cover problems to a single instance of a submodular cover problem [13, 14], 2) a randomized filtration mechanism to select elements with high utility, and 3) a set of carefully chosen threshold functions used for the filtration mechanism. FASTCOVER also provides a natural trade-off between the number of MapReduce rounds and the size of the returned summary. It effectively lets us choose between compact summaries (i.e., smaller solution size) while running more MapReduce rounds, or larger summaries while running fewer MapReduce rounds. This setting is motivated by privacy concerns in many modern applications, including personalized recommender systems, online social services, and the data collected by apps on mobile platforms [15, 16]. In such applications, users have some control over their own data and can mark some part of it private (in a slightly more general case, we can assume that users can make part of their data private to specific groups and public to others). As a result, the dataset consists of public data, shared among all users, and disjoint sets of private data accessible to their owners only.
We call this more general framework for data summarization public-private data summarization, where the private data of one user should not be included in another user's summary (see also [15]). This model naturally reduces to solving one instance of the submodular cover problem for each user, as users' views of the dataset and the specific utility functions specifying their preferences differ across users. When the number of users is small, one can solve the public-private data summarization problem separately for each user, using the greedy algorithm (for datasets of small size) or the recently proposed distributed algorithm DISCOVER [12] (for datasets of moderate size). However, when there are many users or the dataset is massive, none of the prior work truly scales. We report the performance of FASTCOVER using Spark on concrete applications of public-private data summarization, including personalized movie recommendation on a dataset containing 20 million ratings by more than 100K users for 1000 movies, personalized location recommendation based on 20 users and their collected GPS locations, and finding the dominating set on a social network containing more than 65 million nodes and 1.8 billion edges. For small to moderate sized datasets, we compare our results with previous work, namely the classical greedy algorithm and DISCOVER [12]. For truly large-scale experiments, where the data is big and/or there are many users involved (e.g., movie recommendation), we cannot run DISCOVER, as the number of MapReduce rounds in addition to their communication costs is prohibitive. In our experiments, we consistently observe that FASTCOVER provides solutions of size similar to the greedy algorithm (and very often even smaller) with a number of rounds that is orders of magnitude smaller than DISCOVER's. This makes FASTCOVER the first distributed algorithm that solves the public-private data summarization problem fast and at scale.
2 Problem Statement: Public-Private Data Summarization
In this section, we formally define the public-private model of data summarization.¹ Here, we consider a potentially large dataset (sometimes called the universe of items) V of size n and a set of users U. The dataset consists of public data V_P and disjoint subsets of private data V_u for each user u ∈ U. The public-private aspect of data summarization is realized in two dimensions. First, each user u ∈ U has her own utility function f_u(S) according to which she scores the value of a subset S ⊆ V. Throughout this paper we assume that f_u(·) is integer-valued,² non-negative, and monotone submodular. More formally, submodularity means that

f_u(A ∪ {e}) − f_u(A) ≥ f_u(B ∪ {e}) − f_u(B)    for all A ⊆ B ⊂ V and all e ∈ V \ B.

Monotonicity implies that for any A ⊆ V and e ∈ V we have ∆f_u(e|A) := f_u(A ∪ {e}) − f_u(A) ≥ 0. The term ∆f_u(e|A) is called the marginal gain (or added value) of e to the set A. Whenever it is clear from the context we drop f_u from ∆f_u(e|A). Without loss of generality, we normalize all users' functions so that they achieve the same maximum value, i.e., f_u(V) = f_v(V) for all u, v ∈ U. Second, and in contrast to public data that is shared among all users, the private data of a user cannot be shared with others. Thus, a user u ∈ U can only evaluate the public and her own private part of a summary S, i.e., S ∩ (V_P ∪ V_u).

¹ All the results are applicable to submodular cover as a special case where there is only public data.
² For the submodular cover problem it is a standard assumption that the function is integer-valued for the theoretical results to hold. In applications where this assumption is not satisfied, either we can appropriately discretize and rescale the function, or instead of achieving the desired utility Q, try to reach (1 − δ)Q, for some 0 < δ < 1. In the latter case, we can simply replace Q with Q/δ in the theorems to get the correct bounds.
In other words, if the summary S contains private data of a user v ≠ u, then user u cannot access or evaluate v's private part of S, i.e., S ∩ V_v. In public-private data summarization, we would like to find the smallest subset S ⊆ V such that all users reach a desired utility Q ≤ f_u(V) = f_u(V_P ∪ V_u) simultaneously, i.e.,

OPT = argmin_{S ⊆ V} |S|    such that    f_u(S ∩ (V_P ∪ V_u)) ≥ Q    for all u ∈ U.    (1)

A naive way to solve the above problem is to find a separate summary for each user and then return the union of all summaries as S. A more clever way is to realize that problem (1) is in fact equivalent to the following problem [13, 14]:

OPT = argmin_{S ⊆ V} |S|    such that    f(S) := Σ_{u ∈ U} min{f_u(S ∩ (V_P ∪ V_u)), Q} ≥ Q × |U|.    (2)

Note that the surrogate function f(·) is also monotone submodular, as a thresholded submodular function remains submodular. Thus, finding a set S that provides each user with utility Q is equivalent to finding a set S with f(S) ≥ L := Q × |U|. This reduction lets us focus on developing a fast distributed solution for solving a single submodular cover problem. Our method FASTCOVER is explained in detail in Section 4.

Related Work: When the data is small, we can use the centralized greedy algorithm to solve problem (2) (and equivalently problem (1)). The greedy algorithm sequentially picks elements and returns a solution of size (1 + ln M)|OPT| ≈ ln(L)|OPT|, where M = max_{e ∈ V} f(e). As elaborated earlier, when the data is large, one cannot run this greedy algorithm as it requires centralized access to the full dataset. This is why scalable solutions for the submodular cover problem have recently gained a lot of interest. In particular, for the set cover problem (a special case of the submodular cover problem) there have been efficient MapReduce-based implementations proposed in the literature [17, 18, 19]. There have also been recent studies on the streaming set cover problem [20].
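The reduction from problem (1) to the single surrogate objective in (2) can be checked on a toy instance. The sketch below is our illustration (the users, visible sets, and coverage utilities are invented): covering the surrogate f to level Q·|U| coincides with every user reaching utility Q on her visible part of the summary.

```python
def surrogate(S, users, Q):
    """Eq. (2): f(S) = sum_u min(f_u(S ∩ (V_P ∪ V_u)), Q).
    `users` maps each user to (visible_set, f_u)."""
    return sum(min(f_u(S & visible), Q) for visible, f_u in users.values())

# Toy example: coverage utilities over a shared public set plus private items.
V_P = {1, 2, 3}
users = {
    'u': (V_P | {10}, lambda S: len(S)),   # user u sees public data + item 10
    'v': (V_P | {20}, lambda S: len(S)),   # user v sees public data + item 20
}
Q = 3
S = {1, 2, 10, 20}
# u evaluates S ∩ {1,2,3,10} = {1,2,10} -> 3; v evaluates {1,2,20} -> 3,
# so f(S) = 3 + 3 = Q * |U| exactly when both users reach utility Q.
```

The min{·, Q} clipping is what keeps the surrogate submodular while preventing one user's surplus utility from compensating for another user's deficit.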
Perhaps the closest work to our efforts is [12], where the authors proposed a distributed algorithm for the submodular cover problem called DISCOVER. Their method relies on a reduction of the submodular cover problem to multiple instances of the distributed constrained submodular maximization problem [6, 21]. For any fixed 0 < α ≤ 1, DISCOVER returns a solution of size ⌈2αk + 72 log(L)|OPT| √(min(m, α|OPT|))⌉ in ⌈log(α|OPT|) + 36 √(min(m, α|OPT|)) log(L)/α + 1⌉ rounds, where m denotes the number of machines. Even though DISCOVER scales better than the greedy algorithm, the solution it returns is usually much larger. Moreover, the dependency of the number of MapReduce rounds on √(min(m, α|OPT|)) is far from desirable. Note that as we increase the number of machines, the number of rounds may increase (rather than decrease). Instead, in this paper we propose a fast distributed algorithm, FASTCOVER, that truly scales to massive data and produces a solution that is competitive with that of the greedy algorithm. More specifically, for any ϵ > 0, FASTCOVER returns a solution of size at most ⌈ln(L)|OPT|/(1 − ϵ)⌉ within at most ⌈log_{3/2}(n/(m|OPT|)) log(M)/ϵ + log(L)⌉ rounds, where M = max_{e ∈ V} f(e). Thus, in terms of speed, FASTCOVER improves exponentially upon DISCOVER while providing a smaller solution. Moreover, in our work, the number of rounds decreases as the number of machines increases, in sharp contrast to [12].

3 Applications of Public-Private Data Summarization
In this section, we discuss three concrete applications where parts of the data are private and the remaining parts are public. All objective functions are non-negative, monotone, and submodular.

3.1 Personalized Movie Recommendation
Consider a movie recommender system that allows users to anonymously and privately rate movies. The system can use this information to recognize users' preferences using existing matrix completion techniques [22].
A good set of recommended movies should meet two criteria: 1) be correlated with the user's preferences, and 2) be diverse and contain globally popular movies. To this end, we define the following sum-coverage function to score the quality of the selected movies S for a user u:

f_u(S) = α_u Σ_{i ∈ S, j ∈ V_u} s_{i,j} + (1 − α_u) Σ_{i ∈ S, j ∈ V_P \ S} s_{i,j},    (3)

where V_u is the list of highly ranked movies by user u (i.e., private information), V_P is the set of all movies in the database³, and s_{i,j} measures the similarity between movies i and j. The similarity can be easily calculated using the inner product between the corresponding feature vectors of any two movies i and j. The term Σ_{i ∈ S, j ∈ V_u} s_{i,j} measures the similarity between the recommended set S and the user's preferences. The second term, Σ_{i ∈ S, j ∈ V_P \ S} s_{i,j}, encourages diversity. Finally, the parameter 0 ≤ α_u ≤ 1 provides the user the freedom to specify how much she cares about personalization versus diversity, i.e., α_u = 1 indicates that all the recommended movies should be very similar to the movies she highly ranked, and α_u = 0 means that she prefers to receive a set of globally popular movies among all users, irrespective of her own private ratings. Note that in this application, the universe of items (i.e., movies) is public. What is private is the users' ratings, through which we identify the set of highly ranked movies by each user, V_u. The objective is to find the smallest set S of movies from V, from which we can build recommendations for all users in a way that all reach a certain utility.

3.2 Personalized Location Recommendation
Nowadays, many mobile apps collect geolocation data of their users. To comply with privacy concerns, some let their customers have control over their data, i.e., users can mark some part of their data private and disallow the app to share it with other users.
In the personalized location recommendation, a user is interested in identifying a set of locations that are correlated with the places she visited and with popular places everyone else visited. Note that, as nearby locations are likely to be similar, it is very typical to define a kernel matrix K capturing the similarity between data points. A commonly used kernel in practice is the squared exponential kernel K(e_i, e_j) = exp(−||e_i − e_j||₂² / h²). To define the information gain of a set of locations indexed by S, it is natural to use f(S) = log det(I + σ K_{S,S}). The information gain objective captures diversity and is used in many ML applications, e.g., active set selection for nonparametric learning [6], sensor placement [13], and determinantal point processes, among many others. Then, the personalized location recommendation can be modeled by

f_u(S) = α_u f(S ∩ V_u) + (1 − α_u) f(S ∩ V_P),    (4)

where V_u is the set of locations that user u does not want to share with others and V_P is the collection of all publicly disclosed locations. Again, the parameter α_u lets the user indicate to what extent she is willing to receive recommendations based on her private information. The objective is to find the smallest set of locations to recommend to all users such that each reaches a desired threshold. Note that private data is usually small and private functions are fast to compute. Thus, the function evaluation is mainly affected by the amount of public data. Moreover, for many objectives, e.g., information gain, each machine can evaluate f_u(S) by using its own portion of the private data.
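The information gain objective f(S) = log det(I + σK_{S,S}) paired with the squared exponential kernel can be sketched as follows; this is our toy code (the data, bandwidth, and helper names are illustrative), not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, h):
    """Squared exponential kernel K(e_i, e_j) = exp(-||e_i - e_j||^2 / h^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / h ** 2)

def info_gain(S, K, sigma=1.0):
    """f(S) = log det(I + sigma * K_{S,S}); monotone submodular for PSD K."""
    idx = sorted(S)
    if not idx:
        return 0.0
    sub = K[np.ix_(idx, idx)]
    _sign, logdet = np.linalg.slogdet(np.eye(len(idx)) + sigma * sub)
    return logdet

rng = np.random.default_rng(1)
X = rng.random((6, 2))          # 6 toy locations in 2-D
K = rbf_kernel(X, h=0.5)
```

Adding a location can only increase the log determinant (monotonicity), and the increase shrinks as the selected set grows more redundant, which is exactly the diminishing-returns behavior the cover formulation exploits.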
The goal is to find the smallest subset S such that the coverage size is at least some fraction of |V|. This is a trivial instance of public-private data summarization, as all the data is public and there is a single utility function. We use the dominating set problem to run a large-scale application for which DISCOVER terminates in a reasonable amount of time and its performance can be compared to our algorithm FASTCOVER.

³ Two private lists may point to similar movies, but for now we treat the items on each list as unique entities.

4 FASTCOVER for Fast Distributed Submodular Cover
In this section, we explain in detail our fast distributed algorithm FASTCOVER, shown in Alg. 1. It receives a universe of items V and an integer-valued, non-negative, monotone submodular function f : 2^V → R₊. The objective is to find the smallest set S that achieves a value L ≤ f(V). FASTCOVER starts with S = ∅, and keeps adding those items x ∈ V to S whose marginal values ∆(x|S) are at least some threshold τ. In the beginning, τ is set to a conservative initial value M := max_{x ∈ V} f(x). When there are no more items with a marginal value of at least τ, FASTCOVER lowers τ by a factor of (1 − ϵ) and iterates anew through the elements. Thus, τ ranges over τ_0 = M, τ_1 = (1 − ϵ)M, ..., τ_ℓ = (1 − ϵ)^ℓ M, .... FASTCOVER terminates when f(S) ≥ L. The parameter ϵ determines the size of the final solution. When ϵ is small, we expect to find better solutions (i.e., smaller in size) while having to spend more rounds. One of the key ideas behind FASTCOVER is that finding elements with marginal values of at least τ = τ_ℓ can be done in a distributed manner. Effectively, FASTCOVER partitions V into m sets T_1, ..., T_m, one for each cluster node/machine. A naive distributed implementation is the following.
For a given set S (whose elements are communicated to all machines), each machine i finds all of its items x ∈ T_i whose marginal values ∆(x|S) are larger than τ and sends them all to a central machine (note that S is fixed on each machine). Then, this central machine sequentially augments S with elements whose marginal values are more than τ (here S changes with each insertion). The new elements of S are communicated back to all machines, and they run the same procedure, this time with a smaller threshold τ(1 − ϵ). The main problem with this approach is that there might be many items on each machine that satisfy the chosen threshold τ at each round (i.e., many more than |OPT|). A flood of such items from m machines overwhelms the central machine. Instead, what FASTCOVER does is to enforce each machine to randomly pick only k items from its potentially big set of candidates (i.e., the THRESHOLDSAMPLE algorithm shown in Alg. 2). The value k is carefully chosen (line 7). This way, the number of items the central machine processes is never more than O(m|OPT|).

Algorithm 1: FASTCOVER
1  Input: V, ϵ, L, and m
2  Output: S ⊆ V where f(S) ≥ L
3  Find a balanced partition {T_i}_{i=1}^m of V
4  S ← ∅
5  τ ← max_{x ∈ V} f(x)
6  while τ ≥ 1 do
7      k ← ⌈(L − f(S))/τ⌉
8      for all 1 ≤ i ≤ m do
9          <S_i, Full_i> ← ThresholdSample(i, τ, k, S)
10     for all x ∈ ∪_{i=1}^m S_i do
11         if f({x} ∪ S) − f(S) ≥ τ then
12             S ← S ∪ {x}
13             if f(S) ≥ L then break
14     if Full_i = False for all i then
15         if τ > 1 then τ ← max{1, (1 − ϵ)τ}
16         else break
17 Return S

Algorithm 2: THRESHOLDSAMPLE
1  Input: index i, τ, k, and S
2  Output: S_i ⊆ T_i with |S_i| ≤ k
3  S_i ← ∅
4  for all x ∈ T_i do
5      if f(S ∪ {x}) − f(S) ≥ τ then
6          S_i ← S_i ∪ {x}
7  if |S_i| ≤ k then
8      Return <S_i, False>
9  else
10     S_i ← k random items of S_i
11     Return <S_i, True>

Theorem 4.1. FASTCOVER terminates within at most log_{3/2}(n/(|OPT| m))(1 + log(M)/ϵ) + log_2(L) rounds (with high probability) and with a solution of size at most |OPT| ln(L)/(1 − ϵ).
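To make Algorithms 1 and 2 concrete, here is a single-process simulation in Python, a sketch under the paper's assumptions (integer-valued monotone submodular f); the random partitioning and the toy set-cover instance are our own, and this is not the distributed Spark implementation.

```python
import math
import random

def threshold_sample(T_i, f, S, tau, k):
    """Algorithm 2: collect the items of partition T_i whose marginal gain
    to S is at least tau; return at most k of them plus a truncation flag."""
    base = f(S)
    cand = [x for x in T_i if f(S | {x}) - base >= tau]
    if len(cand) <= k:
        return cand, False
    return random.sample(cand, k), True

def fastcover(V, f, L, m=4, eps=0.2):
    """Single-machine simulation of Algorithm 1 (FASTCOVER)."""
    items = list(V)
    random.shuffle(items)
    parts = [set(items[i::m]) for i in range(m)]     # balanced partition
    S = set()
    tau = max(f({x}) for x in V)                     # initial threshold M
    while tau >= 1:
        k = math.ceil((L - f(S)) / tau)
        results = [threshold_sample(T_i, f, S, tau, k) for T_i in parts]
        for S_i, _full in results:
            for x in S_i:
                if f(S | {x}) - f(S) >= tau:         # sequential re-check
                    S.add(x)
                    if f(S) >= L:
                        return S
        if not any(full for _, full in results):     # no machine truncated
            if tau > 1:
                tau = max(1, (1 - eps) * tau)
            else:
                break
    return S

# Toy set-cover instance: ground set 0..9, set i covers {i, (i+1) mod 10}.
random.seed(0)
cover = {i: {i, (i + 1) % 10} for i in range(10)}
f = lambda S: len(set().union(*[cover[x] for x in S]))
sol = fastcover(set(cover), f, L=10)
```

The sequential re-check on the central machine mirrors lines 10-13 of Algorithm 1: candidates were filtered against the old S, so their gains must be re-verified as S grows within the round.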
Although FASTCOVER is distributed and, unlike centralized algorithms, does not enjoy the benefit of accessing all items together, its solution size is truly competitive with the greedy algorithm and is only away by a factor of 1/(1 − ϵ). Moreover, its number of rounds is logarithmic in n and L. This is in sharp contrast with the previously best known algorithm, DISCOVER [12], where the number of rounds scales with √(min(m, |OPT|)).⁴ Thus, FASTCOVER not only improves exponentially over DISCOVER in terms of speed, but its number of rounds also decreases as the number of available machines m increases. Even though FASTCOVER is a simple distributed algorithm, its performance analysis is technical and is deferred to the supplementary materials. Below, we provide the main ideas behind the proof of Theorem 4.1.

Proof sketch. We say that an item has a high value if its marginal value to S is at least τ. We define an epoch to be the rounds during which τ does not change. In the last round of each epoch, all high value items are sent to the central machine (i.e., the set ∪_{i=1}^m S_i) because Full_i is false for all machines. We also add every high value item to S in lines 11-12. So, at the end of each epoch, the marginal values of all items to S are less than τ. Since we reduce τ by a factor of (1 − ϵ), we can always say that τ ≥ (1 − ϵ) max_{x ∈ V} ∆(x|S), which means we are only adding items that have almost the highest marginal values. By the classic analysis of the greedy algorithm for submodular maximization, we can conclude that every item we add has an added value of at least (1 − ϵ)(L − f(S))/|OPT|. Therefore, after adding |OPT| ln(L)/(1 − ϵ) items, f(S) becomes at least L. To upper bound the number of rounds, we divide the rounds into two groups. In a good round, the algorithm adds at least k/2 items to S. The rest are bad rounds.

⁴ Note that √(min(m, |OPT|)) can be as large as n^{1/6} when |OPT| = n^{1/3} and the memory limit of each machine is n^{2/3}, which results in m ≥ n^{1/3}.
In a good round, we add at least k/2 ≥ (L − f(S))/(2τ) items, and each of them increases the value of S by at least τ. Therefore, in a good round, we see at least an (L − f(S))/2 increase in the value of S. In other words, the gap L − f(S) is reduced by a factor of at least 2 in each good round. Since f only takes integer values, once L − f(S) becomes less than 1, we know that f(S) ≥ L. Therefore, there cannot be more than log_2(L) good rounds. Every time we update τ (at the start of an epoch), we decrease it by a factor of 1 − ϵ (except maybe in the last round, for which τ = 1). Therefore, there are at most

1 + log_{1/(1−ϵ)}(M) ≤ 1 + log(M)/log(1/(1 − ϵ)) ≤ 1 + log(M)/ϵ

epochs. In a bad round, a machine with more than k high value items sends k of those to the central machine, and at most k/2 of them are selected. In other words, the addition of these items to S in this bad round caused more than half of the high value items of each machine to become of low value (marginal values less than τ). Since there are n/m items in each machine, and Full_i becomes False once there are at most k high value items in the machine, we conclude that in expectation there should not be more than log_2(n/(km)) bad rounds in each epoch. Summing up the upper bounds yields the bound on the total number of rounds. A finer analysis leads to the high probability claim.

5 Experiments
In this section, we evaluate the performance of FASTCOVER on the three applications that we described in Section 3: personalized movie recommendation, personalized location recommendation, and dominating set on social networks. To validate our theoretical results and demonstrate the effectiveness of FASTCOVER, we compare the performance of our algorithm against DISCOVER and the centralized greedy algorithm (when possible). Our experimental infrastructure was a cluster of 16 quad-core machines with 20GB of memory each, running Spark. The cluster was configured with one master node responsible for resource management, and the remaining 15 machines working as executors.
We set the number of reducers to m = 60. To run FASTCOVER on Spark, we first distributed the data uniformly at random to the machines and performed a map/reduce task to find the highest marginal gain τ = M. Each machine then carries out a sequence of map/reduce tasks, where each map/reduce stage filters out elements with a specific threshold τ on the whole dataset. We then tune the parameter τ, communicate the results back to the machines, and perform another round of map/reduce computation. We continue performing map/reduce tasks until we reach the desired value L.

5.1 Personalized Location Recommendation with Spark

Our location recommendation experiment applies FASTCOVER to the information gain utility function described in Eq. (4). Our dataset consists of 3,056 GPS measurements from 20 users in the form of (latitude, longitude, altitude) collected during bike tours around Zurich [23]. The size of each path is between 50 and 500 GPS coordinates. For each pair of points i and j, we used the corresponding GPS coordinates to calculate their distance in meters d(i, j) and then formed a squared exponential kernel K_{i,j} = exp(−d(i, j)²/h²) with h = 1500. For each user, we marked 20% of her data as private (the data points are chosen consecutively from each path taken by the biker). The parameter α_u is set randomly for each user u. Figures 1a, 1b, 1c compare the performance of FASTCOVER to the benchmarks for building a recommendation set that covers 60%, 80%, and 90% of the maximum utility of each user. We considered running DISCOVER with different values of the parameter α, which trades off the size of the solution against the number of rounds of the algorithm. It can be seen that by avoiding the doubling steps of DISCOVER, our algorithm FASTCOVER is able to return a significantly smaller solution than that of DISCOVER in considerably fewer rounds.
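The kernel construction above can be sketched as follows. The haversine great-circle formula is one reasonable choice for computing d(i, j) in meters (the text does not specify the exact distance computation), and the function names are ours.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in meters between two (lat, lon) pairs given in
    degrees. One plausible choice for d(i, j); the paper does not state how
    the distance in meters was computed."""
    R = 6371000.0  # mean Earth radius in meters
    lat1, lon1, lat2, lon2 = map(math.radians, (p[0], p[1], q[0], q[1]))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def se_kernel(points, h=1500.0):
    """Squared exponential kernel K[i][j] = exp(-d(i, j)^2 / h^2)."""
    n = len(points)
    return [[math.exp(-haversine_m(points[i], points[j]) ** 2 / h ** 2)
             for j in range(n)] for i in range(n)]
```

With h = 1500 the kernel decays to near zero for points more than a few kilometers apart, so only nearby GPS measurements are strongly correlated.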
Interestingly, for small values of ϵ, FASTCOVER returns a solution that is even smaller than that of the centralized greedy algorithm.

5.2 Personalized Movie Recommendation with Spark

Our personalized public-private recommendation experiment applies FASTCOVER to a set of 1,313 movies and 20,000,263 ratings from 138,493 users of the MovieLens database [24]. All selected users rated at least 20 movies. Each movie is associated with a 25-dimensional feature vector calculated from users' ratings. We use the inner product of the non-normalized feature vectors to compute the similarity s_{i,j} between movies i and j [25]. Our final objective function consists of 138,493 coverage functions (one per user) and a global sum-coverage function defined on the whole pool of movies (see Eq. (3)). Each function is normalized by its maximum value to make sure that all functions have the same scale. Figures 1d, 1e, 1f show the ratio of the size of the solutions obtained by FASTCOVER to that of the greedy algorithm. The figures demonstrate the results for 10%, 20%, and 30% covers of all 138,493 users' utility functions. The parameter α_u is set to 0.7 for all users. We scaled down the number of iterations by a factor of 0.01 so that the corresponding bars can be shown in the same figures. Again, FASTCOVER was able to find a considerably smaller solution than the centralized greedy algorithm. Here, we could not run DISCOVER because of its prohibitive running time on Spark. Figure 1g shows the size of the solution set obtained by FASTCOVER for building recommendations from a set of 1000 movies for 1000 users vs. the size of the merged solutions found by computing recommendations separately for each user. It can be seen that FASTCOVER was able to find a much smaller solution by covering all the functions at the same time.
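The inner-product similarity can be computed directly from the feature vectors. Since Eq. (3) itself is not reproduced in this excerpt, the coverage term below is only a generic facility-location style stand-in for a per-user utility, not the paper's exact objective.

```python
def similarity(features):
    """Pairwise similarity s[i][j] as the inner product of the
    (non-normalized) feature vectors, as described in the text."""
    n = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(n)] for i in range(n)]

def facility_location(S, sim, items):
    """A facility-location style stand-in for a coverage utility: each item
    is represented by its most similar selected movie. (Illustrative only;
    Eq. (3) is not reproduced in this excerpt.)"""
    if not S:
        return 0.0
    return sum(max(sim[i][j] for j in S) for i in items)
```

Normalizing such a utility by its maximum value, as the text describes, puts all per-user functions on the same scale before they are combined.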
5.3 Large Scale Dominating Set with Spark

In order to compare the performance of our algorithm with DISCOVER more precisely, we applied FASTCOVER to the Friendster network, which consists of 65,608,366 nodes and 1,806,067,135 edges [26]. This dataset was used in [12] to evaluate the performance of DISCOVER. Figures 1j, 1k, 1l show the performance of FASTCOVER for obtaining covers of 30%, 40%, and 50% of the whole graph, compared to the centralized greedy solution. Again, the size of the solution obtained by FASTCOVER is smaller than that of the greedy algorithm for small values of ϵ. Note that running the centralized greedy algorithm is impractical if the dataset cannot fit into the memory of a single machine. Figure 1h compares the solution set size and the number of rounds for FASTCOVER and DISCOVER with different values of ϵ and α. The points in the bottom left correspond to the solutions obtained by FASTCOVER, which confirms its superior performance. We further measured the actual running time of both algorithms on a smaller instance of the same graph with 14,043,721 nodes. We tuned ϵ and α to obtain solutions of approximately equal size for both algorithms. Figure 1i shows the speedup of FASTCOVER over DISCOVER. It can be observed that as the coverage value L increases, FASTCOVER shows an exponential speedup over DISCOVER.

6 Conclusion

In this paper, we introduced the public-private model of data summarization, motivated by the privacy concerns of recommender systems. We also developed a fast distributed algorithm, FASTCOVER, that provides a succinct summary for all users without violating their privacy. We showed that FASTCOVER returns a solution that is competitive with that of the best centralized, polynomial-time algorithm (i.e., the greedy solution). We also showed that FASTCOVER runs exponentially faster than previously proposed distributed algorithms.
The superior practical performance of FASTCOVER against all the benchmarks was demonstrated through a large set of experiments, including movie recommendation, location recommendation, and dominating set (all implemented with Spark). Our theoretical results, combined with the practical performance of FASTCOVER, make it the only existing distributed algorithm for the submodular cover problem that truly scales to massive data.

Acknowledgment: This research was supported by a Google Faculty Research Award and a DARPA Young Faculty Award (D16AP00046).

Figure 1: Performance of FASTCOVER vs. other baselines. a), b), c) solution set size vs. number of rounds for personalized location recommendation on a set of 3,056 GPS measurements, covering 60%, 80%, 90% of the maximum utility of each user. d), e), f) the same measures for personalized movie recommendation on a set of 1000 movies, 138,493 users and 20,000,263 ratings, covering 10%, 20%, 30% of the maximum utility of each user. g) solution set size vs. coverage for simultaneously covering all users vs. covering users one by one and taking the union, on a set of 1000 movies for 1000 users. h) solution set size vs. number of rounds for FASTCOVER and DISCOVER for covering 50% of the Friendster network with 65,608,366 vertices. i) Exponential speedup of FASTCOVER over DISCOVER on a subgraph of 14M nodes. j), k), l) solution set size vs. number of rounds for covering 30%, 40%, 50% of the Friendster network.

References

[1] Sebastian Tschiatschek, Rishabh Iyer, Haochen Wei, and Jeff Bilmes. Learning mixtures of submodular functions for image collection summarization. In NIPS, 2014.
[2] Khalid El-Arini and Carlos Guestrin. Beyond keyword search: Discovering relevant scientific literature. In KDD, 2011.
[3] Ian Simon, Noah Snavely, and Steven M. Seitz. Scene summarization for online image collections. In ICCV, 2007.
[4] Delbert Dueck and Brendan J. Frey. Non-metric affinity propagation for unsupervised image categorization. In ICCV, 2007.
[5] Ryan Gomes and Andreas Krause. Budgeted nonparametric learning from data streams. In ICML, 2010.
[6] Baharan Mirzasoleiman, Amin Karbasi, Rik Sarkar, and Andreas Krause. Distributed submodular maximization: Identifying representative elements in massive data. In NIPS, 2013.
[7] Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In NAACL-HLT, 2011.
[8] Ruben Sipos, Adith Swaminathan, Pannaga Shivaswamy, and Thorsten Joachims. Temporal corpus summarization using submodular word coverage. In CIKM, 2012.
[9] Laurence A. Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 1982.
[10] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 1998.
[11] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. In OSDI, 2004.
[12] Baharan Mirzasoleiman, Amin Karbasi, Ashwinkumar Badanidiyuru, and Andreas Krause. Distributed submodular cover: Succinctly summarizing massive data. In NIPS, 2015.
[13] Andreas Krause, Brendan McMahan, Carlos Guestrin, and Anupam Gupta. Robust submodular observation selection. JMLR, 2008.
[14] Rishabh K. Iyer and Jeff A. Bilmes. Submodular optimization with submodular cover and submodular knapsack constraints. In NIPS, 2013.
[15] Flavio Chierichetti, Alessandro Epasto, Ravi Kumar, Silvio Lattanzi, and Vahab Mirrokni. Efficient algorithms for public-private social networks. In KDD, 2015.
[16] Baharan Mirzasoleiman, Ashwinkumar Badanidiyuru, and Amin Karbasi. Fast constrained submodular maximization: Personalized data summarization. In ICML, 2016.
[17] Bonnie Berger, John Rompel, and Peter W. Shor. Efficient NC algorithms for set cover with applications to learning and geometry. Journal of Computer and System Sciences, 1994.
[18] Guy E. Blelloch, Richard Peng, and Kanat Tangwongsan. Linear-work greedy parallel approximate set cover and variants. In SPAA, 2011.
[19] Stergios Stergiou and Kostas Tsioutsiouliklis. Set cover at web scale. In KDD, 2015.
[20] Erik D. Demaine, Piotr Indyk, Sepideh Mahabadi, and Ali Vakilian. On streaming and communication complexity of the set cover problem. In DISC, 2014.
[21] Ravi Kumar, Benjamin Moseley, Sergei Vassilvitskii, and Andrea Vattani. Fast greedy algorithms in MapReduce and streaming. TOPC, 2015.
[22] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 2009.
[23] https://refind.com/fphilipe/topics/open-data.
[24] GroupLens. MovieLens 20M dataset. http://grouplens.org/datasets/movielens/20m/.
[25] Erik M. Lindgren, Shanshan Wu, and Alexandros G. Dimakis. Sparse and greedy: Sparsifying submodular facility location problems. In NIPS, 2015.
[26] Jaewon Yang and Jure Leskovec. Defining and evaluating network communities based on ground-truth. Knowledge and Information Systems, 2015.
Completely random measures for modelling block-structured sparse networks

Tue Herlau, Mikkel N. Schmidt, Morten Mørup
DTU Compute, Technical University of Denmark
Richard Petersens plads 31, 2800 Lyngby, Denmark
{tuhe,mns,mmor}@dtu.dk

Abstract

Statistical methods for network data often parameterize the edge-probability by attributing latent traits such as block structure to the vertices and assume exchangeability in the sense of the Aldous-Hoover representation theorem. These assumptions are, however, incompatible with traits found in real-world networks such as a power-law degree distribution. Recently, Caron & Fox (2014) proposed the use of a different notion of exchangeability after Kallenberg (2005) and obtained a network model which permits edge-inhomogeneity, such as a power-law degree distribution, whilst retaining desirable statistical properties. However, this model does not capture latent vertex traits such as block structure. In this work we re-introduce the use of block structure for network models obeying Kallenberg's notion of exchangeability and thereby obtain a collapsed model which admits inference of both block structure and edge inhomogeneity. We derive a simple expression for the likelihood and an efficient sampling method. The obtained model is not significantly more difficult to implement than existing approaches to block modelling and performs well on real network datasets.

1 Introduction

Two phenomena are generally considered important for modelling complex networks. The first is community or block structure, where the vertices are partitioned into non-overlapping blocks (denoted by ℓ = 1, . . . , K in the following) and the probability that two vertices i, j are connected depends on their assignment to blocks:

P(Edge between vertex i and j) = ξ_{ℓm},

where ξ_{ℓm} ∈ [0, 1] is a number depending only on the blocks ℓ, m to which i, j respectively belong. Stochastic block models (SBMs) were first proposed by White et al.
(1976) and today form the basic starting point for many important link-prediction methods such as the infinite relational model (IRM) (Xu et al., 2006; Kemp et al., 2006). While block structure is important for link prediction, the degree distribution of edges in complex networks is often found to follow a power-law (Newman et al., 2001; Strogatz, 2001). This realization has led to many important models of network growth, such as the preferential attachment (PA) model of Barabási (1999). Models such as the IRM and the PA model have different goals. The PA model attempts to explain how network structure, such as the degree distribution, follows from simple rules of network growth, and is not suitable for link prediction. In contrast, the IRM aims to discover latent block structure and predict edges, tasks for which the PA model is unsuitable. In the following, network model will refer to a model with the same aims as the IRM, most notably prediction of missing edges.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1.1 Exchangeability

Invariance is an important theme in Bayesian approaches to network modelling. For network data, the invariance which has received most attention is infinite exchangeability of random arrays. Suppose we represent the network as a subset of an infinite matrix A = (A_{ij})_{i,j≥1} such that A_{ij} is the number of edges between vertex i and j (we will allow multi- and self-edges in the following). Infinite exchangeability of the random array (A_{ij})_{i,j≥1} is the requirement that (Hoover, 1979; Aldous, 1981)

(A_{ij})_{i,j≥1} =^d (A_{σ(i)σ(j)})_{i,j≥1}

for all finite permutations σ of N. The distribution of a finite network is then obtained by marginalization.
According to the Aldous-Hoover theorem (Hoover, 1979; Aldous, 1981), an infinite exchangeable network has a representation in terms of a random function, and furthermore, the number of edges in the network must either scale as the square of the number of vertices or (with probability 1) be zero (Orbanz & Roy, 2015). Neither of these options is compatible with a power-law degree distribution, and one is faced with the dilemma of giving up either the power-law distribution or exchangeability. It is the first horn of this dilemma which has been pursued by much work on Bayesian network modelling (Orbanz & Roy, 2015). It is, however, possible to substitute the notion of infinite exchangeability in the above sense with a different definition due to Kallenberg (2005, chapter 9). The new notion retains many important characteristics of the former, including a powerful representation theorem paralleling the Aldous-Hoover theorem but expressed in terms of a random set. Important progress in exploring network models based on this representation has recently been made by Caron & Fox (2014), who demonstrate the ability to model power-law behaviour of the degree distribution and construct an efficient sampler for parameter inference. The reader is encouraged to consult this reference for more details. In this paper, we will apply the ideas of Caron & Fox (2014) to block-structured network data, thereby obtaining a model based on the same structural invariance, yet able to capture both block structure and degree heterogeneity.
The contribution of this work is fourfold: (i) we propose a general extension of sparse networks to allow latent structure; (ii) using this construction, we implement a block-structured network model which obeys Kallenberg's notion of exchangeability; (iii) we derive a collapsed expression of the posterior distribution which allows efficient sampling; (iv) we demonstrate that the resulting model offers superior link prediction compared to both standard block modelling and the model of Caron & Fox (2014). It should be noted that, independently of this manuscript, Veitch & Roy (2015) introduced a construction similar to our eq. (4), but focusing on the statistical properties of this type of random process, whereas this manuscript focuses on the practical implementation of network models based on the construction.

2 Methods

Before introducing the full method we will describe the construction informally, omitting details relating to completely random measures.

2.1 A simple approach to sparse networks

Suppose the vertices in the network are labelled by real numbers in R₊. An edge e (edges are considered directed and we allow for self-edges) then consists of two numbers (x_{e1}, x_{e2}) ∈ R₊², called the edge endpoints. A network X of L edges (possibly L = ∞) is simply the collection of points X = ((x_{e1}, x_{e2}))_{e=1}^L ⊂ R₊². We adopt the convention that multi-edges imply duplicates in the list of edges. Suppose X is generated by a Poisson process with base measure ξ on R₊²:

X ∼ PP(ξ).     (1)

A finite network X_α can then be obtained by considering the restriction of X to [0, α]²: X_α = X ∩ [0, α]². As an illustration, suppose ξ is the Lebesgue measure. The number of edges is then L ∼ Poisson(α²) and the edge-endpoints x_{e1}, x_{e2} are i.i.d. on [0, α], simply corresponding to selecting L random points in [0, α]².
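The Lebesgue-measure example admits a direct simulation; a minimal sketch (function name ours):

```python
import random

def sample_lebesgue_network(alpha, seed=0):
    """Sample X_alpha when xi is the Lebesgue measure on [0, alpha]^2:
    L ~ Poisson(alpha^2) edges with i.i.d. uniform endpoints. Every endpoint
    is almost surely unique, so the result is the degenerate network with
    2L vertices and L edges."""
    rng = random.Random(seed)
    # Draw L ~ Poisson(alpha^2) by counting unit-rate exponential arrivals.
    mean, L, t = alpha ** 2, 0, rng.expovariate(1.0)
    while t < mean:
        L += 1
        t += rng.expovariate(1.0)
    return [(rng.uniform(0, alpha), rng.uniform(0, alpha)) for _ in range(L)]
```

Because the endpoints never coincide, this base measure can only produce the maximally disconnected graph; a discrete base measure is needed for non-trivial structure, as the text develops next.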
The edges are indicated by the gray squares in figure 1a and the vertices as circles.

Figure 1: (Left:) A network is generated by randomly selecting points from [0, α]² ⊂ R₊² corresponding to edges (squares) and identifying the unique coordinates with vertices (circles), giving the maximally disconnected graph. (Middle:) The edges are restricted to lie at the intersections of randomly generated gray lines at θ_i, each with a mass/sociability parameter w_i, so that p(A_{ij}) = Poisson(w_i w_j). The probability of selecting an intersection is proportional to w_i w_j, giving a non-trivial network structure. (Right:) Each vertex is assigned a latent trait z_i (the assignment to blocks, as indicated by the colors) that modulates the edge probability with a parameter η_{ℓm} ≥ 0, so that p(A_{ij}) = Poisson(w_i w_j η_{z_i z_j}), thus allowing block-structured networks.

Notice the vertices will be distinct with probability 1, and the procedure therefore gives rise to the degenerate but sparse network of 2L vertices and L edges shown in figure 1a. To generate non-trivial networks, the edge-endpoints must coincide with nonzero probability. Similar to Caron & Fox (2014), suppose the coordinates are restricted to take only a countable number of potential values, θ₁, θ₂, · · · ∈ R₊, and each value has an associated sociability (or mass) parameter w₁, w₂, · · · ∈ [0, ∞[ (we use the shorthand (θ_i)_i = (θ_i)_{i=1}^∞ for a series). If we define the measure μ = Σ_{i≥1} w_i δ_{θ_i} and let ξ = μ × μ, then, generating X_α according to the procedure of eqn. (1), the number of edges L is Poisson(T²) distributed, where T = μ([0, α]) = Σ_{i=1}^∞ w_i. The positions of the edges remain identically distributed, but with probability proportional to w_i w_j of selecting coordinate (θ_i, θ_j).
Since the edge-endpoints coincide with non-zero probability, this procedure allows the generation of a non-trivial associative network structure, see figure 1b. With a proper choice of (w_i, θ_i)_{i≥1} these networks exhibit many desirable properties, such as a power-law degree distribution and sparsity (Caron & Fox, 2014). This process can be intuitively extended to block-structured networks, as illustrated in figure 1c. There, each vertex is assigned a latent trait (i.e. a block assignment), here highlighted by the colors. We use the symbol z_i ∈ {1, . . . , K} to indicate the assignment of vertex i to one of the K blocks. We can then consider a measure of the form

ξ = Σ_{i,j≥1} η_{z_i z_j} w_i w_j δ_{(θ_i, θ_j)} = Σ_{ℓ,m=1}^K η_{ℓm} μ_ℓ × μ_m,     (2)

where we have introduced μ_ℓ = Σ_{i : z_i = ℓ} w_i δ_{θ_i}. Defined in this manner, ξ is a measure on [0, α]² and η_{ℓm} parameterizes the interaction strength between communities ℓ and m. Notice the number of edges L_{ℓm} between blocks ℓ and m is, by basic properties of the Poisson process, distributed as L_{ℓm} ∼ Poisson(η_{ℓm} T_ℓ T_m), where T_ℓ = μ_ℓ([0, α]). In figure 1c the locations θ_i of the vertices have been artificially ordered according to color for easy visualization. The following section will show the connection between the above construction of eq. (2) and the exchangeable representation due to Kallenberg (2005). However, for greater generality, we will let the latent trait be a general continuous parameter u_i ∈ [0, 1] and later show that block-structured models can be obtained as a special case.

2.2 Exchangeability and point-process network models

Since the networks in the point-set representation are determined by the properties of the measure ξ, invariance (i.e. exchangeability) of random point-set networks is defined as invariance of this random measure. Recall that infinite exchangeability for infinite matrices requires the distribution of the random matrix to be unchanged by permutation of the rows/columns in the network.
Figure 2: (Step 1:) The potential vertex locations θ_i, latent traits u_i and sociability parameters w_i are generated as Σ_{i≥1} w_i δ_{(θ_i, u_i)} ∼ CRM(ρ_{σ,τ}, R₊ × [0, 1]), where ρ_{σ,τ} is the Lévy intensity of a GGP. (Step 2:) The graphon f : [0, 1]² → R₊ governing the interaction of the latent traits is chosen to be a piece-wise constant function, with (β_ℓ)_{ℓ=1}^K ∼ Dirichlet(α₀/K, · · · , α₀/K) and η_{ℓm} ∼ Gamma(λ_a, λ_b). (Step 3:) Together, these determine the random measure ξ = Σ_{i,j≥1} w_i w_j f(u_i, u_j) δ_{(θ_i, θ_j)}, which is used to generate the network from a Poisson process.

For a random measure on R₊², the corresponding requirement is that it should be possible to partition R₊ into intervals I₁, I₂, I₃, . . . , permute the intervals, and have the random measure be invariant to this permutation. Formally, a random measure ξ on R₊² is said to be jointly exchangeable if

ξ ◦ (ϕ ⊗ ϕ)⁻¹ =^d ξ

for all measure-preserving transformations ϕ of R₊. According to Kallenberg (2005, theorem 9.24), this is ensured provided the measure has a representation of the form

ξ = Σ_{i,j≥1} h(ζ, x_i, x_j) δ_{(θ_i, θ_j)},     (3)

where h is a measurable function, ζ is a random variable and {(x_i, θ_i)}_{i≥1} is a unit rate Poisson process on R₊² (the converse involves five additional terms (Kallenberg, 2005)). In this representation, the locations (θ_i)_i and the parameters (x_i)_i are decoupled; however, we are free to select the random parameters (x_i)_{i≥1} to lie in a more general space than R₊. Specifically, we define x_i = (u_i, v_i) ∈ [0, 1] × R₊, with the interpretation that each v_i corresponds to a random mass w_i through a transformation w_i = g(v_i), and each u_i ∈ [0, 1] is a general latent trait of the vertex.
(In figure 1 this parameter corresponded to the assignment to blocks.) We then consider the following choice:

h(ζ, x_i, x_j) = f(u_i, u_j) g(v_i) g(v_j),     (4)

where f : [0, 1]² → R₊ is a measurable function playing a role similar to the graphon in the Aldous-Hoover representation, and {(u_i, v_i, θ_i)}_{i≥1} follows a unit-rate Poisson process on [0, 1] × R₊². To see the connection with the block-structured model, suppose the function f is a piece-wise constant function

f(u, u′) = Σ_{ℓ,m=1}^K η_{ℓm} 1_{J_ℓ}(u) 1_{J_m}(u′), where J_ℓ = [ Σ_{m=1}^{ℓ−1} β_m, Σ_{m=1}^{ℓ} β_m ),  Σ_{ℓ=1}^K β_ℓ = 1,  β_ℓ > 0,

and z_i = ℓ denotes the event 1_{J_ℓ}(u_i) = 1. Notice this choice of f is exactly equivalent to the graphon for the block-structured network model in the Aldous-Hoover representation (Orbanz & Roy, 2015). The procedure is illustrated in figure 2. Realizations of networks generated by this process using different values of K can be obtained using the simulation methods of Caron & Fox (2014) and can be seen in figure 3. Notice the K = 1, η₁₁ = 1 case corresponds to their method. To fully define the method we must first introduce the relevant prior for the measure μ = Σ_{i≥1} w_i δ_{(θ_i, u_i)}. As a prior we will use the generalized gamma process (GGP) (Hougaard, 1986). In the following section, we will briefly review properties of completely random measures and use these to derive a simple expression of the posterior.

2.3 Random measures

Figure 3: (Top:) Example of four randomly generated networks for K = 1, 2, 3 and 4 (with k = 188, 537, 689 and 1961 vertices, respectively) using the choice of random measure discussed in section 2.3. The other parameters were fixed at α = 20K, τ = 1, σ = 0.5 and λ_a = λ_b = 1. Vertices have been sorted according to their assignment to blocks and sociability parameters. (Bottom:) The same networks as above, but applying a random permutation to the edges within each tile. A standard SBM assumes a network structure of this form.
As a prior for μ we will use completely random measures (CRMs); the reader is referred to (Kallenberg, 2005; Kingman, 1967) for a comprehensive account. Recall first the definition of a CRM. Assume S is a separable complete metric space with the Borel σ-field B(S) (for our purpose S = [0, α]). A random measure μ is a random variable whose values are measures on S. For each measurable set A ∈ B(S), the random measure induces a random variable μ(A), and the random measure μ is said to be completely random if for any finite collection A₁, . . . , Aₙ of disjoint measurable sets the random variables μ(A₁), . . . , μ(Aₙ) are independent. It was shown by Kingman (1967) that the non-trivial part of any random measure μ is discrete almost certainly, with a representation

μ = Σ_{i=1}^∞ w_i δ_{θ_i},     (5)

where the sequence of masses and locations (w_i, θ_i)_i (also known as the atoms) is a Poisson random measure on R₊ × S with mean measure ν, known as the Lévy intensity measure. We will consider homogeneous CRMs, where locations are independent, ν(dw, dθ) = ρ(dw) κ_α(dθ), and assume κ_α is the Lebesgue measure on [0, α]. Since the construction as outlined in figure 1c depends on sampling the edge start- and end-points at random from the locations (θ_i)_i, with probability proportional to w_i, the normalized form of eqn. (5) will be of particular interest. Specifically, the chance of selecting a particular location from a random draw is governed by

P = μ/T = Σ_{i=1}^∞ p_i δ_{θ_i},  p_i = w_i/T,  T = μ(S) = Σ_{i=1}^∞ w_i,     (6)

which is known as the normalized random measure (NRM), and T is the total mass of the CRM μ (Kingman, 1967). A random draw from a Poisson process based on the CRM can thus be realized by first sampling the number of generated points, L ∼ Poisson(T), and then drawing their locations in an i.i.d. manner from the NRM of eqn. (6). The reader is referred to James (2002) for a comprehensive treatment of NRMs.
With the notation in place, we can provide the final form of the generative process for a network X_α. Suppose the CRM μ (restricted to the region [0, α]) has been generated. Assume z_i = ℓ iff u_i ∈ J_ℓ and define the K thinned measures on [0, α] as

μ_ℓ = Σ_{i : z_i = ℓ} w_i δ_{θ_i},

each with total mass T_ℓ = μ_ℓ([0, α]). By basic properties of CRMs, the thinned measures are also CRMs (Pitman, 2006). The number of points in each tile, L_{ℓm}, is then Poisson(η_{ℓm} T_ℓ T_m) distributed, and given L_{ℓm} the edge-endpoints (x_{e1ℓ}, x_{e2m}) between atoms in measures ℓ and m can be drawn from the corresponding NRM. The generative process is then simply:

(β_ℓ)_{ℓ=1}^K ∼ Dirichlet(β₀/K, . . . , β₀/K)
μ ∼ CRM(ρ, U_{[0,1]} × U_{R₊})
η_{ℓm} ∼ Gamma(λ_a, λ_b)  i.i.d.
L_{ℓm} ∼ Poisson(η_{ℓm} T_ℓ T_m)  i.i.d.
for e = 1, . . . , L_{ℓm}:
    x_{e1ℓ} ∼ Categorical((w_i/T_ℓ)_{z_i = ℓ})  i.i.d.
    x_{e2m} ∼ Categorical((w_j/T_m)_{z_j = m})  i.i.d.
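The listed generative process can be sketched as follows under a crude truncation: the GGP is replaced by a fixed number of atoms with gamma-distributed masses, purely for illustration (exact GGP simulation, as in Caron & Fox (2014), is not attempted here), and all names are ours.

```python
import random

def sample_block_network(K, n_atoms=50, beta0=1.0, lam_a=1.0, lam_b=1.0, seed=0):
    """Truncated sketch of the generative process: Dirichlet block weights,
    stand-in atom masses, Gamma interaction strengths, Poisson edge counts
    per tile, and categorical endpoint selection. Returns (edges, z)."""
    rng = random.Random(seed)
    # (beta_l) ~ Dirichlet(beta0/K, ..., beta0/K) via normalized gamma draws
    g = [rng.gammavariate(beta0 / K, 1.0) for _ in range(K)]
    total = sum(g)
    beta = [x / total for x in g]
    # Stand-in for the GGP atoms: masses w_i and traits u_i ~ U[0,1];
    # z_i = l iff u_i falls in the l-th Dirichlet interval J_l.
    w = [rng.gammavariate(0.5, 1.0) for _ in range(n_atoms)]
    z = []
    for _ in range(n_atoms):
        u, acc, ell = rng.random(), beta[0], 0
        while u > acc and ell < K - 1:
            ell += 1
            acc += beta[ell]
        z.append(ell)
    T = [sum(w[i] for i in range(n_atoms) if z[i] == ell) for ell in range(K)]
    eta = [[rng.gammavariate(lam_a, 1.0 / lam_b) for _ in range(K)] for _ in range(K)]

    def poisson(mean):  # Poisson draw by counting unit-rate arrivals
        count, t = 0, rng.expovariate(1.0)
        while t < mean:
            count, t = count + 1, t + rng.expovariate(1.0)
        return count

    def pick_atom(ell):  # Categorical((w_i / T_ell) for z_i = ell)
        r, acc, last = rng.random() * T[ell], 0.0, None
        for i in range(n_atoms):
            if z[i] == ell:
                last, acc = i, acc + w[i]
                if r <= acc:
                    return i
        return last

    edges = []
    for ell in range(K):
        for m in range(K):
            for _ in range(poisson(eta[ell][m] * T[ell] * T[m])):
                edges.append((pick_atom(ell), pick_atom(m)))  # L_lm ~ Poisson(eta*T_l*T_m)
    return edges, z
```

Only atoms that receive at least one edge-endpoint become vertices of the realized network, matching the thinning behaviour discussed in the posterior derivation.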
First notice the distribution of the total mass Tℓof each of the thinned random measures µℓis a tilted σ-stable random variable (Pitman, 2006). If we introduce αℓ≡βℓα, its density gαℓ,σ,τ may be written as gα,σ,τ(t) = θ−1 σ fσ(tθ−1 σ )φλ(tθ−1 σ ) where φλ(t) = eλσ−λt, λ = τθ 1 σ , θ = α σ and fσ is the density of a σ-stable random variable. See Devroye & James (2014) for more details. According to Zolotarev’s integral representation, the function fσ has the following form (Zolotarev, 1964) fσ(x) = σx −1 1−σ π(1 −σ) Z π 0 du A(σ, u)e −A(σ,u) xσ/(1−σ) , A(σ, u) = sin((1−σ)u) sin(σu)σ sin(u)  1 1−σ . (7) Since not all potential vertices (i.e. terms wiδθi in µ) will have edges attached to them, it is useful to introduce a variable which encapsulates this distinction. We therefore define the variable ˜zi = 0, 1, . . . , K with the definition: ˜zi =  zi if there exists (x, y) ∈Xα s.t. θi ∈{x, y}, 0 otherwise. In addition, suppose for each measure µℓ, the end-points of the edges associated with this measure selects kℓ= |{i : ˜zi = ℓ}| unique atoms and k = PK ℓ=1 kℓis the total number of vertices in the network. Next, we consider a specific network (Aij)k i,j=1 and assume it is labelled such that atom (wi, θi) corresponds to a particular vertex i in the network. We also define ni = P j(Aij + Aji) as the number of edge-endpoints that selects atom i, nℓ= P i:˜zi=ℓni as the aggregated edge-endpoints that select measure µℓand nℓm = P ˜zi=ℓ,˜zm=j Aij as the edges between measure µℓand µm. The posterior distribution is then P(A, (zi)i, σ, τ, (αℓ, sℓ, tℓ)ℓ) = Γ(β0) QK ℓ=1 α β0 K −1 ℓ Eℓ Γ( β0 K )Kαβ0 Q ij Aij! Y ℓm G(λa+nℓm, λb+TℓTm) G(λa, λb) , (8) where we have introduced: Eℓ= αkℓsnℓ−kℓσ−1 ℓ Γ(nℓ−kℓσ)eτsℓgαℓ,τ,σ(Tℓ−sℓ) Y ˜zi=ℓ (1 −σ)ni and sℓ= P i:˜zi=ℓwi is the mass of the "occupied" atoms in the measure µℓ. 
The posterior distribution can be seen as the product of K partition functions corresponding to the GGP, multiplied by the K² interaction factors involving the function G(a, b) = Γ(a) b^{−a}, which correspond to the interactions between the measures according to the block-structure assumption. Note that the η = 1 case, corresponding to a collapsed version of Caron & Fox (2014), can be obtained by taking the limit λ_a = λ_b → ∞, in which case

    G(λ_a + n, λ_b + T) / G(λ_a, λ_b) → e^{−T}.

When discussing the K = 1 case, we will assume this limit has been taken.

2.5 Inference

Sampling from the expression in eqn. (8) requires three types of updates: (i) the sequence of block assignments (z_i)_i must be updated; (ii) in the simulations we will consider binary networks, and we therefore need to impute both the integer-valued counts (when A_ij > 0) and the missing values in the network; and (iii) the parameters associated with the random measure, σ and τ, as well as the remaining variables associated with each expression E_ℓ, must be updated. All terms, except the densities g_{α,σ,τ}, are amenable to standard sampling techniques. We opted for the approach of Lomelí et al. (2014), in which u in Zolotarev's integral representation (eqn. 7) is treated as an auxiliary parameter. The full inference procedure can be found in the supplementary material; the main steps are:

Update of (z_i)_i: For each ℓ, impute (w_i)_{˜z_i = ℓ} once per sweep (see supplementary material for details), and then iterate over i and update each z_i using a Gibbs sweep from the likelihood. The Gibbs sweep is no more costly than that of a standard SBM.

Update of A: Impute (η_ℓm)_ℓm and (w_i)_i once per sweep (see supplementary material for details), and then for each (ij) such that the edge is either unobserved or must be imputed (A_ij ≥ 1), generate a candidate a ∼ Poisson(η_ℓm w_i w_j). If the edge is unobserved, simply set A_ij = a; otherwise, if the edge is observed and a = 0, reject the update.

Update of σ, τ: For ℓ = 1, . . .
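The collapsing limit above can be checked numerically with log-gamma arithmetic; a small sketch (the specific values of n, T and λ are arbitrary):

```python
from math import lgamma, log, exp

def log_G(a, b):
    """log G(a, b) with G(a, b) = Gamma(a) * b**(-a)."""
    return lgamma(a) - a * log(b)

def collapsed_ratio(n, T, lam):
    """G(lam + n, lam + T) / G(lam, lam) for lam_a = lam_b = lam."""
    return exp(log_G(lam + n, lam + T) - log_G(lam, lam))

# as lam_a = lam_b -> infinity the ratio approaches exp(-T)
ratio = collapsed_ratio(3, 2.0, 1e6)
```

Expanding log Γ and the logarithms shows the leading correction is O(1/λ), so the ratio converges quickly as λ grows.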
, K, introduce u_ℓ corresponding to u in Zolotarev's integral representation (eqn. 7) and let t_ℓ = T_ℓ − s_ℓ. Update the four variables in Φ_ℓ = (α_ℓ, u_ℓ, s_ℓ, t_ℓ), together with σ and τ, using random-walk Metropolis-Hastings updates.

In terms of computational cost, the inference procedure is of the same order as the SBM, albeit with higher constants owing to the overall complexity of the likelihood and because the parameters (α_ℓ, u_ℓ, s_ℓ, t_ℓ) must be sampled for each CRM. In Caron & Fox (2014), the parameters (w_i)_{i≥1} were sampled using Hamiltonian Monte Carlo, whereas here they are collapsed and re-imputed. The parameters Φ_ℓ and σ, τ are important for determining the sparsity and power-law properties of the network model (Caron & Fox, 2014). To investigate convergence of the sampler for these parameters, we generated a single network problem using α = 25, σ = 0.5, τ = 2 and evaluated 12 samplers with K = 1 on the problem. Autocorrelation plots (mean and standard deviation computed over 12 restarts) can be seen in figure 4a. All parameters mix; however, the different parameters have different mixing times, with u in particular being affected by excursions. This indicates that many sampling updates of Φ_ℓ are required to explore the state space sufficiently, and we therefore applied 50 updates of Φ_ℓ for each update of (z_i)_i and A_ij. Additional validation of the sampling procedure can be found in the supplementary material.

3 Experiments

The proposed method was evaluated on 11 network datasets (a description of how the datasets were obtained and prepared can be found in the supplementary material) using K = 200 in the truncated stick-breaking representation. As the evaluation criterion we chose the AUC score on held-out edges, i.e. predicting the presence or absence of unobserved edges using the imputation method described in the previous section. All networks were initially binarized by thresholding at 0, and vertices with zero edges were removed.
A fraction of 5% of the edges were removed and considered as held-out data. To examine the effect of using blocks, we compared the method against the method of Caron & Fox (2014) (CRM) (corresponding to η_ℓm = 1 and K = 1), a standard block-structured model with Poisson observations (pIRM) (Kemp et al., 2006), and the degree-corrected stochastic block model (DCSBM) (Herlau et al., 2014). The latter allows both block structure and degree heterogeneity, but it is not exchangeable. More details on the simulations and methods are found in the supplementary material. The pIRM was selected since it is the closest block-structured model to the CRMSBM without degree correction. This allows us to determine the relative benefit of inferring the degree distribution compared to only the block structure. For the priors we selected uniform priors for σ, τ, α and a Gamma(2, 1) prior for β_0, λ_a, λ_b. Similar choices were made for the other models.

¹Code available at http://people.compute.dtu.dk/tuhe/crmsbm.

[Figure 4 appears here: (a) autocorrelation plots of the parameters u, s, t, τ, σ, α; (b) AUC scores of CRM, pIRM, DCSBM and CRMSBM on the 11 network datasets.]

Figure 4: (Left:) Autocorrelation plots of the parameters α, σ, τ, s, t and u for a K = 1 network drawn from the prior distribution using α = 25, σ = 0.5 and τ = 2. The plots were obtained by evaluating the proposed sampling procedure for 10^6 iterations, and the shaded region indicates the standard deviation obtained over 12 re-runs. The simulation indicates reasonable mixing for all parameters, with u being the most affected by excursions. (Right:) AUC score on held-out edges for the selected methods (averaged over 4 restarts) on 11 network datasets.
For the same number of blocks, the CRMSBM offers good link-prediction performance compared to the method of Caron & Fox (2014) (CRM), an SBM with Poisson observations (pIRM) and the degree-corrected SBM (DCSBM) (Herlau et al., 2014). Additional information is found in the supplementary material.

All methods were evaluated for T = 2 000 iterations, and the latter half of the chains was used for link prediction. We used 4 random selections of held-out edges per network to obtain the results seen in figure 4b (the same sets of held-out edges were used for all methods). It is evident that block structure is crucial to obtain good link-prediction performance. Among the block-structured methods, the results indicate additional benefits from using models which permit degree heterogeneity on most networks, except the Hagmann brain-connectivity graph. This result is possibly explained by the Hagmann graph having little edge inhomogeneity. Comparing the CRMSBM and the DCSBM, these models perform either on par or with a slight advantage to the CRMSBM.

4 Discussion and Conclusion

Models of networks based on the CRM representation of Kallenberg (2005) offer one of the most important new ideas in statistical modelling of networks in recent years. To our knowledge, Caron and Fox (2014) were the first to realize the benefits of this modelling approach, describe its statistical properties and provide an efficient sampling procedure. The degree distribution of a network is, however, only one of several important characteristics of a complex network. In this work we have examined how the ideas presented in Caron and Fox (2014) can be applied to a simple block-structured network model to obtain a model which admits both block structure and degree correction. Our approach is a fairly straightforward generalization of the methods of Caron and Fox (2014).
However, we have opted to explicitly represent the density of the total mass gαℓ,σ,τ and integrate out the sociability parameters (wi)i, thereby reducing the number of parameters associated with the CRM from the order of vertices to the order of blocks. The resulting model has the increased flexibility of being able to control the degree distribution within each block. In practice, results of the model on 11 real-world datasets indicate that this flexibility offers benefits over purely block-structured approaches to link prediction for most networks, as well as potential benefits over alternative approaches to modelling block-structure and degree-heterogeneity. The results strongly indicate that structural assumptions (such as block-structure) are important to obtain reasonable link prediction. Block-structured network modelling is in turn the simplest structural assumption for block-modelling. The extension of the method of Caron and Fox (2014) to overlapping blocks, possibly using the dependent random measures of Chen et al. (2013), appears fairly straightforward and should potentially offer a generalization of overlapping block models. 8 Acknowledgments This project was funded by the Lundbeck Foundation (grant nr. R105-9813). References Aldous, David J. Representations for partially exchangeable arrays of random variables. Journal of Multivariate Analysis, 11(4):581–598, 1981. Barabási, Albert-László. Emergence of Scaling in Random Networks. Science, 286(5439):509–512, October 1999. ISSN 00368075. doi: 10.1126/science.286.5439.509. Caron, Francois and Fox, Emily B. Bayesian nonparametric models of sparse and exchangeable random graphs. arXiv preprint arXiv:1401.1137, 2014. Chen, Changyou, Rao, Vinayak, Buntine, Wray, and Teh, Yee Whye. Dependent normalized random measures. In Proceedings of The 30th International Conference on Machine Learning, pp. 969–977, 2013. Devroye, Luc and James, Lancelot. On simulation and properties of the stable law. 
Statistical methods & applications, 23(3):307–343, 2014. Herlau, Tue, Schmidt, Mikkel N, and Mørup, Morten. Infinite-degree-corrected stochastic block model. Phys. Rev. E, 90:032819, Sep 2014. doi: 10.1103/PhysRevE.90.032819. Hoover, Douglas N. Relations on probability spaces and arrays of random variables. Preprint, Institute for Advanced Study, Princeton, NJ, 2, 1979. Hougaard, Philip. Survival models for heterogeneous populations derived from stable distributions. Biometrika, 73(2):387–396, 1986. James, Lancelot F. Poisson process partition calculus with applications to exchangeable models and Bayesian nonparametrics. arXiv preprint math/0205093, 2002. Kallenberg, Olaf. Probabilistic Symmetries and Invariance Principles. Number v. 10 in Applied probability. Springer, 2005. ISBN 9780387251158. Kemp, Charles, Tenenbaum, Joshua B, Griffiths, Thomas L, Yamada, Takeshi, and Ueda, Naonori. Learning systems of concepts with an infinite relational model. In AAAI, volume 3, pp. 5, 2006. Kingman, John. Completely random measures. Pacific Journal of Mathematics, 21(1):59–78, 1967. Lomelí, María, Favaro, Stefano, and Teh, Yee Whye. A marginal sampler for σ-stable Poisson-Kingman mixture models. arXiv preprint arXiv:1407.4211, 2014. Newman, M. E. J., Strogatz, S. H., and Watts, D. J. Random graphs with arbitrary degree distributions and their applications. Physical Review E, 64(2), July 2001. ISSN 1063-651X. Orbanz, Peter and Roy, Daniel M. Bayesian models of graphs, arrays and other exchangeable random structures. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 37(2):437–461, 2015. Pitman, Jim. Poisson-Kingman partitions. Lecture Notes-Monograph Series, pp. 1–34, 2003. Pitman, Jim. Combinatorial Stochastic Processes: Ecole D’Eté de Probabilités de Saint-Flour XXXII-2002. Springer, 2006. Strogatz, Steven H. Exploring complex networks. Nature, 410(6825):268–276, 2001. Veitch, Victor. and Roy, Daniel M. 
The Class of Random Graphs Arising from Exchangeable Random Measures. ArXiv e-prints, December 2015. White, Harrison C, Boorman, Scott A, and Breiger, Ronald L. Social structure from multiple networks. I. Blockmodels of roles and positions. American Journal of Sociology, pp. 730–780, 1976. Xu, Zhao, Tresp, Volker, Yu, Kai, and Kriegel, Hans-Peter. Infinite hidden relational models. In Proceedings of the 22nd International Conference on Uncertainty in Artificial Intelligence (UAI 2006), 2006. Zolotarev, Vladimir Mikhailovich. On the representation of stable laws by integrals. Trudy Matematicheskogo Instituta im. VA Steklova, 71:46–50, 1964.
Pruning Random Forests for Prediction on a Budget

Feng Nan, Systems Engineering, Boston University, fnan@bu.edu
Joseph Wang, Electrical Engineering, Boston University, joewang@bu.edu
Venkatesh Saligrama, Electrical Engineering, Boston University, srv@bu.edu

Abstract

We propose to prune a random forest (RF) for resource-constrained prediction. We first construct a RF and then prune it to optimize expected feature cost & accuracy. We pose pruning RFs as a novel 0-1 integer program with linear constraints that encourages feature re-use. We establish total unimodularity of the constraint set to prove that the corresponding LP relaxation solves the original integer program. We then exploit connections to combinatorial optimization and develop an efficient primal-dual algorithm, scalable to large datasets. In contrast to our bottom-up approach, which benefits from a good RF initialization, conventional methods are top-down, acquiring features based on their utility value, and are generally intractable, requiring heuristics. Empirically, our pruning algorithm outperforms existing state-of-the-art resource-constrained algorithms.

1 Introduction

Many modern classification systems, including internet applications (such as web-search engines, recommendation systems, and spam filtering) and security & surveillance applications (such as wide-area surveillance and classification on large video corpora), face the challenge of prediction-time budget constraints [21]. Prediction-time budgets can arise due to monetary costs associated with acquiring information or the computation time (or delay) involved in extracting features and running the algorithm. We seek to learn a classifier, by training on fully annotated datasets, that maintains high accuracy while meeting average resource constraints during prediction time. We consider a system that adaptively acquires features as needed, depending on the instance (example), to achieve high classification accuracy with reduced feature acquisition cost.
We propose a two-stage algorithm. In the first stage, we train a random forest (RF) of trees using an impurity function such as entropy or a more specialized cost-adaptive impurity [16]. Our second stage takes a RF as input and attempts to jointly prune each tree in the forest to meet global resource constraints. During prediction time, an example is routed through all the trees in the ensemble to the corresponding leaf nodes, and the final prediction is based on a majority vote. The total feature cost for a test example is the sum of acquisition costs of the unique features¹ acquired for the example in the entire ensemble of trees in the forest.²

We derive an efficient scheme to learn a globally optimal pruning of a RF minimizing the empirical error and incurred average costs. We formulate the pruning problem as a 0-1 integer linear program that incorporates feature-reuse constraints. By establishing total unimodularity of the constraint set, we show that solving the linear program relaxation of the integer program yields the optimal solution to the integer program, resulting in a polynomial-time algorithm for optimal pruning. We develop a primal-dual algorithm by leveraging results from network-flow theory for scaling the linear program to large datasets. Empirically, this pruning outperforms state-of-the-art resource-efficient algorithms on benchmarked datasets.

¹When an example arrives at an internal node, the feature associated with the node is used to direct the example. If the feature has never been acquired for the example, an acquisition cost is incurred. Otherwise, no acquisition cost is incurred, as we assume that feature values are stored once computed.
²For time-sensitive cases such as web-search we parallelize the implementation by creating parallel jobs across all features and trees. We can then terminate jobs based on what features are returned.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
                 No Usage   1–7 uses   > 7 uses   Cost   Error
    Unpruned RF    7.3%      91.7%       1%       42.0   6.6%
    BudgetPrune   68.3%      31.5%       0.2%     24.3   6.7%

Table 1: Typical feature usage in a 40-tree RF before and after pruning (our algorithm) on the MiniBooNE dataset. Columns 2-4 list the percentage of test examples that do not use the feature, use it 1 to 7 times, and use it more than 7 times, respectively. Before pruning, 91% of examples use the feature only a few (1 to 7) times, paying a significant cost for its acquisition; after pruning, 68% of the examples no longer use this feature, reducing cost with minimal error increase. Column 5 is the average feature cost (the average number of unique features used by test examples). Column 6 is the test error of the RFs. Overall, pruning dramatically reduces average feature cost while maintaining the same error level.

Our approach is motivated by the following considerations: (i) RFs are scalable to large datasets and produce flexible decision boundaries, yielding high prediction-time accuracy. The sequential feature usage of decision trees lends itself to adaptive feature acquisition. (ii) RF feature usage is superfluous: randomness is introduced during construction to increase diversity and generalization, so features are often used redundantly. Selectively pruning features that are sparsely used across trees can therefore yield significant cost reduction with negligible accuracy degradation (due to the majority vote). See Table 1. (iii) Optimal pruning encourages examples to use features either a large number of times, allowing for complex decision boundaries in the space of those features, or not at all, avoiding the cost of acquisition. It exploits the fact that once a feature is acquired for an example, repeated use incurs no additional acquisition cost. Intuitively, features should be repeatedly used to increase discriminative ability without incurring further cost.
(iv) Resource-constrained prediction has conventionally been viewed as a top-down (tree-growing) approach, wherein new features are acquired based on their utility value. This is often an intractable problem with combinatorial (feature subsets) and continuous components (classifiers), requiring several relaxations and heuristics. In contrast, ours is a bottom-up approach that starts with a good initialization (the RF) and prunes to realize an optimal cost-accuracy tradeoff. Indeed, while we do not pursue it, our approach can also be used in conjunction with existing approaches.

Related Work: Learning decision rules to minimize error subject to a budget constraint during prediction time is an area of recent interest, with many approaches proposed to solve the prediction-time budget-constrained problem [9, 22, 19, 20, 12]. These approaches focus on learning complex adaptive decision functions and can be viewed as orthogonal to our work. Conceptually, these are top-down "growing" methods, as described earlier (see (iv)). Our approach is bottom-up: it seeks to prune complex classifiers to trade off cost vs. accuracy.

Our work is based on RF classifiers [3]. Traditionally, feature cost is not incorporated when constructing RFs; however, recent work has involved approximating budget constraints to learn budgeted RFs [16]. The tree-growing algorithm in [16] does not take feature re-use into account. Rather than attempting to approximate the budget constraint during tree construction, our work focuses on pruning ensembles of trees subject to a budget constraint. Methods such as traditional ensemble learning and budgeted random forests can thus be viewed as complementary to ours.

Decision-tree pruning has been studied extensively to improve generalization performance; we are not aware of any existing pruning method that takes the feature costs into account. A popular method for pruning to reduce generalization error is Cost-Complexity Pruning (CCP), introduced by Breiman et al. [4].
CCP trades off classification ability for tree size; however, it does not account for feature costs. As pointed out by Li et al. [15], CCP has undesirable "jumps" in the sequence of pruned tree sizes. To alleviate this, they proposed a Dynamic-Program-based Pruning (DPP) method for binary trees. The DPP algorithm is able to obtain optimally pruned trees of all sizes; however, it faces the curse of dimensionality when pruning an ensemble of decision trees while taking feature cost into account. [23, 18] proposed to solve the pruning problem as a 0-1 integer program; again, their formulations do not account for the feature costs that we focus on in this paper. The coupling nature of feature usage makes our problem much harder. In general, pruning RFs has not been a focus of attention, as it is assumed that overfitting can be avoided by constructing an ensemble of trees. While this is true, it often leads to extremely large prediction-time costs. Kulkarni and Sinha [11] provide a survey of methods to prune RFs in order to reduce ensemble size. However, these methods do not explicitly account for feature costs.

2 Learning with Resource Constraints

In this paper, we consider solving the Lagrangian-relaxed problem of learning under prediction-time resource constraints, also known as the error-cost tradeoff problem:

    min_{f ∈ F}  E_{(x,y)∼P}[ err(y, f(x)) ] + λ E_{x∼P_x}[ C(f, x) ],   (1)

where example/label pairs (x, y) are drawn from a distribution P; err(y, ŷ) is the error function; C(f, x) is the cost of evaluating the classifier f on example x; and λ is a tradeoff parameter. A larger λ places a larger penalty on cost, pushing the classifier to have smaller cost. By adjusting λ we can obtain a classifier satisfying the budget constraint. The family of classifiers F in our setting is the space of RFs, and each RF f is composed of T decision trees T_1, . . . , T_T.

Our approach: Rather than attempting to construct the optimal ensemble by solving Eqn.
(1) directly, we instead propose a two-step algorithm that first constructs an ensemble with low prediction error, then prunes it by solving Eqn. (1) given the input ensemble. By adopting this two-step strategy, we obtain an ensemble with low expected cost while simultaneously preserving low prediction error. There are many existing methods to construct RFs; the focus of this paper is on the second step, where we propose a novel approach to prune RFs to solve the tradeoff problem of Eqn. (1). Our pruning algorithm is capable of taking any RF as input, offering the flexibility to incorporate any state-of-the-art RF algorithm.

3 Pruning with Costs

In this section, we treat the error-cost tradeoff problem of Eqn. (1) as an RF pruning problem. Our key contribution is to formulate pruning as a 0-1 integer program with totally unimodular constraints.

We first define the notation used throughout the paper. A training sample S = {(x^(i), y^(i)) : i = 1, . . . , N} is generated i.i.d. from an unknown distribution, where x^(i) ∈ ℝ^K is the feature vector, with a cost assigned to each of the K features, and y^(i) is the label of the i-th example. In the case of multi-class classification, y ∈ {1, . . . , M}, where M is the number of classes. Given a decision tree T, we index the nodes as h ∈ {1, . . . , |T|}, where node 1 represents the root node. Let T̃ denote the set of leaf nodes of tree T. Finally, the corresponding definitions for T can be extended to an ensemble of T decision trees {T_t : t = 1, . . . , T} by adding a subscript t.

Pruning parametrization: In order to model ensemble pruning as an optimization problem, we parametrize the space of all prunings of an ensemble. The process of pruning a decision tree T at an internal node h involves collapsing the subtree of T rooted at h, making h a leaf node.
We say a pruned tree T^(p) is a valid pruned tree of T if (1) T^(p) is a subtree of T containing the root node 1, and (2) for any h ≠ 1 contained in T^(p), the sibling nodes (the set of nodes that share the same immediate parent node as h in T) must also be contained in T^(p). Specifying a pruning is equivalent to specifying the nodes that are leaves in the pruned tree. We therefore introduce the following binary variable for each node h ∈ T:

    z_h = 1 if node h is a leaf in the pruned tree, and z_h = 0 otherwise.

We call the set {z_h, ∀h ∈ T} the node variables, as they are associated with each node in the tree. On any root-to-leaf path in a tree T, there should be exactly one node in the path that is a leaf node in the pruned tree. Let p(h) denote the set of predecessor nodes of h: the set of nodes (including h) that lie on the path from the root node to h. The set of valid pruned trees can then be represented as the set of node variables satisfying the following constraints:

    Σ_{u ∈ p(h)} z_u = 1,  ∀h ∈ T̃.

Given a valid pruning for a tree, we now seek to parameterize the error of the pruning. Pruning error: As in most supervised empirical risk minimization problems, we aim to minimize the error on training data as a surrogate for minimizing the expected error. In a decision tree T, each node h is associated with a predicted label corresponding to the majority label among the training examples that fall into node h. Let S_h denote the subset of examples in S routed to or through node h in T, and let Pred_h denote the predicted label at h. The number of misclassified examples at h is therefore

    e_h = Σ_{i ∈ S_h} 1[y^(i) ≠ Pred_h].

We can thus estimate the error of tree T in terms of the number of misclassified examples in the leaf nodes: (1/N) Σ_{h ∈ T̃} e_h, where N = |S| is the total number of examples.
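The feasibility constraint has a direct operational reading: pick exactly one node on every root-to-leaf path of the full tree. A small checker, using a hypothetical 5-node binary tree (node 1 with children 2 and 3; node 3 with children 4 and 5) as the example:

```python
def leaf_paths(children, root=1):
    """All root-to-leaf paths p(h), for h a leaf, given a map node -> children."""
    paths, stack = [], [(root, [root])]
    while stack:
        node, path = stack.pop()
        kids = children.get(node, [])
        if not kids:
            paths.append(path)
        for c in kids:
            stack.append((c, path + [c]))
    return paths

def is_valid_pruning(children, z, root=1):
    """Check sum_{u in p(h)} z_u == 1 for every leaf h of the full tree."""
    return all(sum(z.get(u, 0) for u in path) == 1
               for path in leaf_paths(children, root))

tree = {1: [2, 3], 3: [4, 5]}
assert is_valid_pruning(tree, {2: 1, 3: 1})        # prune at node 3
assert is_valid_pruning(tree, {2: 1, 4: 1, 5: 1})  # keep the full tree
assert is_valid_pruning(tree, {1: 1})              # prune at the root
assert not is_valid_pruning(tree, {2: 1, 4: 1})    # path 1-3-5 left uncovered
```

Each feasible z is exactly the indicator of the leaf set of some valid pruned tree, which is what makes the parametrization complete.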
Our goal is to minimize the expected test error of the trees in the random forest, which we empirically approximate, based on the aggregated probability distribution in Step (6) of Algorithm 1, with (1/(TN)) Σ_{t=1}^T Σ_{h ∈ T̃_t} e_h. We can express this error in terms of the node variables: (1/(TN)) Σ_{t=1}^T Σ_{h ∈ T_t} e_h z_h.

Pruning cost: Assume the acquisition costs of the K features, {c_k : k = 1, . . . , K}, are given. The feature acquisition cost incurred by an example is the sum of the acquisition costs of the unique features acquired in the process of running the example through the forest. This cost structure arises from the assumption that an acquired feature is cached, so that subsequent usage by the same example incurs no additional cost. Formally, the feature cost of classifying an example i on the ensemble T_[T] is given by C_feature(T_[T], x^(i)) = Σ_{k=1}^K c_k w_{k,i}, where the binary variables w_{k,i} serve as indicators:

    w_{k,i} = 1 if feature k is used by x^(i) in any T_t, t = 1, . . . , T, and w_{k,i} = 0 otherwise.

The expected feature cost of a test example can then be approximated as (1/N) Σ_{i=1}^N Σ_{k=1}^K c_k w_{k,i}.

In some scenarios, it is useful to account for computation cost along with feature acquisition cost during prediction time. In an ensemble, this corresponds to the expected number of Boolean operations required to run a test example through the trees, which is equal to the expected depth of the trees. This can be modeled as (1/N) Σ_{t=1}^T Σ_{h ∈ T_t} |S_h| d_h z_h, where d_h is the depth of node h.

Putting it together: Having modeled the pruning constraints, prediction performance and costs, we formulate the pruning problem using the relationship between the node variables z_h and the feature usage variables w_{k,i}. Given a tree T, feature k, and example x^(i), let u_{k,i} be the first node associated with feature k on the root-to-leaf path the example follows in T. Feature k is used by x^(i) if and only if none of the nodes between the root and u_{k,i} is a leaf.
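The cost model (unique features, cached across the whole ensemble) can be made concrete with a small routing sketch; the tree encoding used here (a dict of split nodes) is an illustrative choice, not the paper's data structure:

```python
def route_features(tree, x):
    """Return the set of feature indices touched while routing x to a leaf.

    tree: dict node_id -> ("split", feature, threshold, left_id, right_id);
    any node_id absent from the dict is treated as a leaf."""
    used, node = set(), 1
    while node in tree:
        _, k, thr, left, right = tree[node]
        used.add(k)
        node = left if x[k] <= thr else right
    return used

def expected_feature_cost(trees, X, costs):
    """Average over examples of sum_k c_k * w_{k,i}, where w_{k,i} indicates
    that feature k is used by example i anywhere in the ensemble."""
    total = 0.0
    for x in X:
        used = set()
        for tree in trees:
            used |= route_features(tree, x)  # unique features: cached once acquired
        total += sum(costs[k] for k in used)
    return total / len(X)
```

For instance, two stumps splitting on the same feature charge its cost once per example, while stumps on different features charge both costs, which is exactly the coupling the w_{k,i} variables encode.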
We represent this by the constraint w_{k,i} + Σ_{h ∈ p(u_{k,i})} z_h = 1 for every feature k used by example x^(i) in T. Recall that w_{k,i} indicates whether or not feature k is used by example i, and p(u_{k,i}) denotes the set of predecessor nodes of u_{k,i}. Intuitively, this constraint says that either the tree is pruned along the path followed by example i before feature k is acquired, in which case z_h = 1 for some node h ∈ p(u_{k,i}) and w_{k,i} = 0; or w_{k,i} = 1, indicating that feature k is acquired for example i.

We extend the notation to ensemble pruning with a tree index t: z_h^(t) indicates whether node h in T_t is a leaf after pruning; w_{k,i}^(t) indicates whether feature k is used by the i-th example in T_t; w_{k,i} indicates whether feature k is used by the i-th example in any of the T trees T_1, . . . , T_T; u_{t,k,i} is the first node associated with feature k on the root-to-leaf path the example follows in T_t; and K_{t,i} denotes the set of features the i-th example uses in tree T_t. We arrive at the following integer program:

    min_{z_h^(t), w_{k,i}^(t), w_{k,i} ∈ {0,1}}
        (1/(NT)) Σ_{t=1}^T Σ_{h ∈ T_t} e_h^(t) z_h^(t)                      [error]
        + λ [ Σ_{k=1}^K c_k ( (1/N) Σ_{i=1}^N w_{k,i} )                     [feature acquisition cost]
            + (1/N) Σ_{t=1}^T Σ_{h ∈ T_t} |S_h| d_h z_h^(t) ]              [computational cost]     (IP)

    s.t.  Σ_{u ∈ p(h)} z_u^(t) = 1,  ∀h ∈ T̃_t, ∀t ∈ [T],                   (feasible prunings)
          w_{k,i}^(t) + Σ_{h ∈ p(u_{t,k,i})} z_h^(t) = 1,  ∀k ∈ K_{t,i}, ∀i ∈ S, ∀t ∈ [T],   (feature usage per tree)
          w_{k,i}^(t) ≤ w_{k,i},  ∀k ∈ [K], ∀i ∈ S, ∀t ∈ [T].              (global feature usage)

Totally unimodular constraints: Even though integer programs are NP-hard to solve in general, we show that (IP) can be solved exactly by solving its LP relaxation. We prove this in two steps: first, we examine the special structure of the equality constraints; then we examine the inequality constraints that couple the trees. Recall that a network matrix is one in which each column has exactly one element equal to 1, one element equal to −1, and all remaining elements equal to 0.
A network matrix defines a directed graph with the nodes indexing the rows and the arcs indexing the columns. We have the following lemma.

[Figure 1 appears here: the decision tree of the running example (nodes 1–5, with node 1 splitting on feature 1 and node 3 on feature 2; node 1 has children 2 and 3, node 3 has children 4 and 5), together with the constraint matrix and its equivalent network-matrix form. The constraint matrix, with rows r1–r5 and columns (z_1, z_2, z_3, z_4, z_5, w_{1,1}^(1), w_{2,1}^(1)), is

    r1: 1 1 0 0 0 0 0
    r2: 1 0 1 1 0 0 0
    r3: 1 0 1 0 1 0 0
    r4: 1 0 1 0 0 0 1
    r5: 1 0 0 0 0 1 0

and the row operations (−r1, r1−r2, r2−r3, r3−r4, r4−r5, r5) turn it into an equivalent network matrix.]

Figure 1: A decision tree example with node numbers and the associated features as subscripts, together with the constraint matrix and its equivalent network-matrix form.

Lemma 3.1 The equality constraints in (IP) can be turned into an equivalent network-matrix form for each tree.

Proof We observe that the first constraint, Σ_{u ∈ p(h)} z_u^(t) = 1, requires the sum of the node variables along a root-to-leaf path to be 1. The second constraint, w_{k,i}^(t) + Σ_{h ∈ p(u_{t,k,i})} z_h^(t) = 1, has a similar sum except for the variable w_{k,i}^(t). Imagine w_{k,i}^(t) as yet another node variable for a fictitious child node of u_{t,k,i}, and the two equations become essentially equivalent. The rest of the proof follows directly from the construction in Proposition 3 of [18].

Figure 1 illustrates such a construction. The nodes are numbered 1 to 5. The subscripts at nodes 1 and 3 are the feature indices used at those nodes. Since the equality constraints in (IP) separate across the trees, we consider only one tree, and for simplicity one example routed to node 4. The equality constraints can be organized in matrix form as shown in the middle of Figure 1. Through row operations, the constraint matrix can be transformed into an equivalent network matrix. Such a transformation always works as long as the leaf nodes are arranged in pre-order. Next, we deal with the inequality constraints and obtain our main result.
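For a constraint matrix of this small size, total unimodularity can be verified exhaustively: every square submatrix must have determinant in {−1, 0, 1}. A brute-force check (feasible only for toy sizes like this), using the Figure 1 matrix as reconstructed above:

```python
import numpy as np
from itertools import combinations

def is_totally_unimodular_bruteforce(M):
    """Check that every square submatrix has determinant in {-1, 0, 1}.

    Exponential in the matrix size; intended only for tiny illustrative cases."""
    M = np.asarray(M, dtype=float)
    m, n = M.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                d = round(np.linalg.det(M[np.ix_(rows, cols)]))
                if d not in (-1, 0, 1):
                    return False
    return True

# constraint matrix from Figure 1: columns (z1..z5, w_{1,1}, w_{2,1})
M = [[1, 1, 0, 0, 0, 0, 0],   # path to leaf 2
     [1, 0, 1, 1, 0, 0, 0],   # path to leaf 4
     [1, 0, 1, 0, 1, 0, 0],   # path to leaf 5
     [1, 0, 1, 0, 0, 0, 1],   # feature 2 usage (first acquired at node 3)
     [1, 0, 0, 0, 0, 1, 0]]   # feature 1 usage (first acquired at the root)
assert is_totally_unimodular_bruteforce(M)
```

With the leaves in pre-order, each column of M has its ones in consecutive rows (an interval matrix), which is a classical sufficient condition for total unimodularity and is what the check confirms here.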
Theorem 3.2 The LP relaxation of (IP), in which the 0-1 integer constraints are relaxed to interval constraints [0, 1] for all integer variables, has integral optimal solutions.

Due to the space limit, the proof can be found in the Supplementary Material. The main idea is to show that the constraints remain totally unimodular even after adding the coupling constraints, so that the LP-relaxed polyhedron has only integral extreme points [17]. As a result, solving the LP relaxation yields the optimal solution to the integer program (IP), allowing for polynomial-time optimization.³

Algorithm 1 BUDGETPRUNE
During training: input - ensemble (T_1, . . . , T_T), training/validation data with labels, λ
1: Initialize dual variables β_{k,i}^(t) ← 0.
2: Update z_h^(t), w_{k,i}^(t) for each tree t (shortest-path algorithm). Set w_{k,i} = 0 if µ_{k,i} > 0 and w_{k,i} = 1 if µ_{k,i} < 0.
3: β_{k,i}^(t) ← [β_{k,i}^(t) + γ(w_{k,i}^(t) − w_{k,i})]_+ for step size γ, where [·]_+ = max{0, ·}.
4: Go to Step 2 until the duality gap is small enough.
During prediction: input - test example x
5: Run x through each tree to a leaf; obtain the probability distribution p_t over label classes at that leaf.
6: Aggregate p = (1/T) Σ_{t=1}^T p_t. Predict the class with the highest probability in p.

³The nice result of totally unimodular constraints is due to our specific formulation. See the Supplementary Material for an alternative formulation that does not have such a property.

4 A Primal-Dual Algorithm

Even though we can solve (IP) via its LP relaxation, the resulting LP can be too large in practical applications for any general-purpose LP solver. In particular, the number of variables and constraints is roughly O(T × |T_max| + N × T × K_max), where T is the number of trees, |T_max| is the maximum number of nodes in a tree, N is the number of examples, and K_max is the maximum number of features an example uses in a tree. The runtime of the LP thus scales as O(T³) with the number of trees in the ensemble, limiting the application to only small ensembles.
In this section we propose a primal-dual approach that effectively decomposes the optimization into many sub-problems. Each sub-problem corresponds to a tree in the ensemble and can be solved efficiently as a shortest-path problem. The runtime per iteration is O((T/p)(|T_max| + N × K_max) log(|T_max| + N × K_max)), where p is the number of processors. We can thus massively parallelize the optimization and scale to much larger ensembles, as the runtime depends only linearly on T/p. To this end, we assign dual variables β^{(t)}_{k,i} to the inequality constraints w^{(t)}_{k,i} ≤ w_{k,i} and derive the dual problem:

$$\max_{\beta^{(t)}_{k,i}\ge 0}\ \min_{z^{(t)}_h,\,w^{(t)}_{k,i},\,w_{k,i}\in[0,1]}\ \frac{1}{NT}\sum_{t=1}^{T}\sum_{h\in\mathcal{T}_t}\hat e^{(t)}_h z^{(t)}_h \;+\; \lambda\sum_{k=1}^{K} c_k\Big(\frac{1}{N}\sum_{i=1}^{N} w_{k,i}\Big) \;+\; \sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{k\in K_{t,i}}\beta^{(t)}_{k,i}\big(w^{(t)}_{k,i}-w_{k,i}\big)$$
$$\text{s.t.}\quad \sum_{u\in p(h)} z^{(t)}_u = 1,\ \forall h\in\tilde{\mathcal{T}}_t,\ \forall t\in[T];\qquad w^{(t)}_{k,i} + \sum_{h\in p(u_{t,k,i})} z^{(t)}_h = 1,\ \forall k\in K_{t,i},\ \forall i\in S,\ \forall t\in[T],$$

where for simplicity we have combined the coefficients of z^{(t)}_h in the objective of (IP) into ê^{(t)}_h. The primal-dual algorithm is summarized in Algorithm 1. It alternates between updating the primal and the dual variables. The key is to observe that, given the dual variables, the primal problem (the inner minimization) decomposes over the trees in the ensemble and can be solved in parallel as shortest-path problems, due to Lemma 3.1 (see also Suppl. Material). The primal variables w_{k,i} can be solved in closed form: simply compute μ_{k,i} = λc_k/N − Σ_{t∈T_{k,i}} β^{(t)}_{k,i}, where T_{k,i} is the set of trees in which example i encounters feature k. Then w_{k,i} should be set to 0 if μ_{k,i} > 0 and to 1 if μ_{k,i} < 0.

Note that our prediction rule aggregates the leaf distributions from all trees instead of just their predicted labels. In the case where the leaves are pure (each leaf contains only one class of examples), this prediction rule coincides with the majority-vote rule commonly used in random forests.
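The closed-form update for w_{k,i} and the projected dual-ascent step can be sketched as follows; all numbers here (λ, the cost, the dual values, the step size γ, and the per-tree usages) are illustrative assumptions, not values from the paper:

```python
# Toy instance: one feature k, one example i, two trees t = 0, 1.
lam, N = 0.5, 4            # tradeoff parameter and number of examples
c_k = 1.0                  # acquisition cost of feature k
beta = {0: 0.2, 1: 0.05}   # dual variables beta^{(t)}_{k,i}, per tree t
w_tree = {0: 1.0, 1: 0.0}  # per-tree usage w^{(t)}_{k,i} from shortest paths

# closed-form primal update (Step 2):
# mu_{k,i} = lam*c_k/N - sum_t beta^{(t)}_{k,i};  w = 0 if mu > 0 else 1
mu = lam * c_k / N - sum(beta.values())
w = 0.0 if mu > 0 else 1.0

# projected dual ascent (Step 3): beta <- [beta + gamma*(w^{(t)} - w)]_+
gamma = 0.1
beta = {t: max(0.0, b + gamma * (w_tree[t] - w)) for t, b in beta.items()}
```

The projection max(0, ·) keeps the duals feasible; duals on trees that already agree with the global w are left unchanged, while disagreeing trees are pushed toward consensus.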
Whenever the leaves contain mixed classes, this rule takes into account the prediction confidence of each tree, in contrast to majority voting. Empirically, this rule consistently gives lower prediction error than majority voting with pruned trees.

5 Experiments

We test our pruning algorithm BUDGETPRUNE on four benchmark datasets used for prediction-time budget algorithms. The first two datasets have unknown feature acquisition costs, so we assign a cost of 1 to every feature; the aim is to show that BUDGETPRUNE successfully selects a sparse subset of features on average to classify each example with high accuracy.⁴ The last two datasets have real feature acquisition costs measured in terms of CPU time; there BUDGETPRUNE achieves high prediction accuracy while spending much less CPU time on feature acquisition. For each dataset we first train a RF and apply BUDGETPRUNE to it using different λ's to obtain various points on the accuracy-cost tradeoff curve. We use in-bag data to estimate the error probability at each node and the validation data for the feature cost variables w_{k,i}. We implement BUDGETPRUNE using the CPLEX [1] network flow solver for the primal update step. The running time is significantly reduced (from hours down to minutes) compared to directly solving the LP relaxation of (IP) using standard solvers such as Gurobi [10]. Furthermore, the standard solvers simply break on the larger experiments, whereas BUDGETPRUNE handles them with ease. We repeat the experiments 10 times and report means and standard deviations. Details of the datasets and parameter settings of competing methods are included in the Suppl. Material.

Competing methods: We compare against four other approaches. (i) BUDGETRF [16]: the recursive node splitting process for each tree is stopped as soon as the node impurity (entropy or Pairs) falls below a threshold. The threshold is a measure of the impurity tolerated in the leaf nodes. This can be considered a naive pruning method, as it reduces feature acquisition cost while maintaining low impurity in the leaves.

⁴ In contrast to traditional sparse feature selection, our algorithm allows adaptivity, meaning different examples use different subsets of features.

[Figure 2: Comparison of BUDGETPRUNE against CCP [Breiman et al. 1984], BUDGETRF with early stopping [Nan et al. 2015], GREEDYPRUNE and GREEDYMISER [Xu et al. 2012] on 4 real-world datasets: (a) MiniBooNE, (b) Forest Covertype, (c) Yahoo! Rank, (d) Scene15; each panel plots test accuracy (Average Precision@5 for (c)) versus average feature cost. BUDGETPRUNE (red) outperforms competing state-of-the-art methods. GREEDYMISER dominates ASTC [12], CSTC [21] and DAG [20] significantly on all datasets; we omit them in the plots to clearly depict the differences between competing methods.]

(ii) Cost-Complexity Pruning (CCP) [4]: iteratively prunes subtrees such that the resulting tree has low error and small size. We perform CCP on individual trees to different levels to obtain various points on the accuracy-cost tradeoff curve. CCP does not take feature costs into account.
(iii) GREEDYPRUNE: a greedy global feature pruning strategy that we propose; at each iteration it attempts to remove all nodes corresponding to one feature from the RF such that the resulting pruned RF has the lowest training error and average feature cost. The process terminates in at most K iterations, where K is the number of features. The idea is to reduce feature costs by successively removing features that yield a large cost reduction at a small accuracy loss. We also compare against the state of the art in budgeted learning: (iv) GREEDYMISER [22]: a modification of gradient boosted regression trees [8] that incorporates feature costs. Specifically, each weak learner (a low-depth decision tree) is built to minimize the squared loss with respect to the current gradient at the training examples plus the feature acquisition cost. When building each weak learner, the costs of features already used in previous weak learners are set to zero. Other prediction-time budget algorithms such as ASTC [12], CSTC [21] and cost-weighted ℓ1 classifiers have been shown to perform strictly worse than GREEDYMISER by a significant amount [12, 16], so we omit them from our plots. Since only the feature acquisition costs are standardized, for a fair comparison we do not include the computation cost term in the objective of (IP) and focus instead on feature acquisition costs.

MiniBooNE Particle Identification and Forest Covertype datasets [7]: Feature costs are uniform in both datasets. Our base RF consists of 40 trees using the entropy split criterion and choosing from the full set of features at each split. As shown in (a) and (b) of Figure 2, BUDGETPRUNE (in red) achieves the best accuracy-cost tradeoff. The advantage of BUDGETPRUNE is particularly large in (b). GREEDYMISER has lower accuracy in the high-budget region compared to BUDGETPRUNE in (a) and significantly lower accuracy in (b). The gap between BUDGETPRUNE and the other pruning methods is small in (a) but much larger in (b).
This indicates large gains from globally encouraging feature sharing in the case of (b) compared to (a). In both datasets, BUDGETPRUNE successfully prunes away a large number of features while maintaining high accuracy. For example in (a), using only 18 unique features on average instead of 40, we obtain essentially the same accuracy as the original RF.

Yahoo! Learning to Rank [6]: This ranking dataset consists of 473,134 web documents and 19,944 queries. Each example in the dataset contains features of a query-document pair together with the relevance rank of the document to the query. There are 141,397/146,769/184,968 examples in the training/validation/test sets. There are 519 features for each example; each feature is associated with an acquisition cost in the set {1, 5, 20, 50, 100, 150, 200}, which represents the units of CPU time required to extract the feature and is provided by a Yahoo! employee. The labels are binarized so that a document is either relevant or not relevant to its query. The task is to learn a model that takes a new query and its associated set of documents and produces an accurate ranking using as little feature cost as possible. As in [16], we use Average Precision@5 as the performance metric, which gives a high reward for ranking the relevant documents on top. Our base RF consists of 140 trees using the cost-weighted entropy split criterion as in [16] and choosing from a random subset of 400 features at each split. As shown in (c) of Figure 2, BUDGETPRUNE achieves similar ranking accuracy to GREEDYMISER using only 30% of its cost.

Scene15 [13]: This scene recognition dataset contains 4485 images from 15 scene classes (labels). Following [22], we divide it into 1500/300/2685 examples for the training/validation/test sets. We use a diverse set of visual descriptors and object detectors from the Object Bank [14]. We treat each individual detector as an independent descriptor, giving a total of 184 visual descriptors.
The acquisition costs of these visual descriptors range from 0.0374 to 9.2820. For each descriptor we train 15 one-vs-rest kernel SVMs and use the outputs (margins) as features. Once any feature corresponding to a visual descriptor is used for a test example, the acquisition cost of that descriptor is incurred, and subsequent usage of features from the same group is free for that example. Our base RF consists of 500 trees using the entropy split criterion and choosing from a random subset of 20 features at each split. As shown in (d) of Figure 2, BUDGETPRUNE and GREEDYPRUNE significantly outperform the other competing methods. BUDGETPRUNE achieves the same accuracy at a cost of 9 as at the full cost of 32. BUDGETPRUNE and GREEDYPRUNE perform similarly, indicating that the greedy approach happens to solve the global optimization for this particular initial RF.

5.1 Discussion & Concluding Comments

We have empirically evaluated several resource-constrained learning algorithms, including BUDGETPRUNE and its variations, on benchmark datasets here and in the Suppl. Material. We highlight key features of our approach below. (i) STATE-OF-THE-ART METHODS. Recent work has established that GREEDYMISER and BUDGETRF are among the state-of-the-art methods, dominating a number of other methods [12, 21, 20] on these benchmark datasets. GREEDYMISER requires building class-specific ensembles; it tends to perform poorly and is increasingly difficult to tune in multi-class settings. RF, by its nature, can handle multi-class settings efficiently. On the other hand, as we described earlier, [12, 20, 21] are fundamentally "tree-growing" approaches, namely top-down methods acquiring features sequentially based on a surrogate utility value. This is a fundamentally combinatorial problem that is known to be NP-hard [5, 21] and thus requires a number of relaxations and heuristics with no guarantees on performance.
In contrast, our pruning strategy is initialized to realize good performance (RF initialization) and we are able to globally optimize the cost-accuracy objective. (ii) VARIATIONS ON PRUNING. By explicitly modeling feature costs, BUDGETPRUNE outperforms other pruning methods such as early stopping of BUDGETRF and CCP that do not consider costs. GREEDYPRUNE performs well, validating our intuition (see Table 1) that pruning sparsely occurring feature nodes utilized by a large fraction of examples can improve the test-time cost-accuracy tradeoff. Nevertheless, BUDGETPRUNE outperforms GREEDYPRUNE, which indicates that, apart from obvious high-budget regimes, node pruning must account for how the removal of one node may adversely impact other downstream nodes. (iii) SENSITIVITY TO IMPURITY, FEATURE COSTS, & OTHER INPUTS. We explore these issues in the Suppl. Material. We experiment with BUDGETPRUNE under different impurity functions, such as the entropy and Pairs [16] criteria. Pairs impurity tends to build RFs with lower cost but also lower accuracy compared to entropy, and so has poorer performance. We also explored how non-uniform costs can impact the cost-accuracy tradeoff. An elegant approach has been suggested by [2], who proposes an adversarial feature cost proportional to feature utility value. We find that BUDGETPRUNE is robust to such costs. Other RF parameters, including the number of trees and the feature subset size at each split, do impact the cost-accuracy tradeoff in obvious ways, with more trees and a moderate feature subset size improving prediction accuracy while incurring higher cost.

Acknowledgment: We thank Dr. Kilian Weinberger for helpful discussions and Dr. David Castanon for his insights on the primal-dual algorithm. This material is based upon work supported in part by NSF Grants CCF: 1320566, CNS: 1330008, CCF: 1527618, DHS 2013-ST-061-ED0001, ONR Grant 50202168 and US AF contract FA8650-14-C-1728.

References

[1] IBM ILOG CPLEX Optimizer.
http://www-01.ibm.com/software/integration/optimization/cplex-optimizer/, 2010.
[2] Djalel Benbouzid. Sequential prediction for budgeted learning: Application to trigger design. Thesis, Université Paris Sud - Paris XI, February 2014.
[3] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[4] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen. Classification and regression trees. CRC Press, 1984.
[5] Venkatesan T. Chakaravarthy, Vinayaka Pandit, Sambuddha Roy, Pranjal Awasthi, and Mukesh K. Mohania. Decision trees for entity identification: Approximation algorithms and hardness results. ACM Trans. Algorithms, 7(2):15:1–15:22, March 2011.
[6] O. Chapelle, Y. Chang, and T. Liu, editors. Proceedings of the Yahoo! Learning to Rank Challenge, held at ICML 2010, Haifa, Israel, June 25, 2010, 2011.
[7] A. Frank and A. Asuncion. UCI machine learning repository, 2010.
[8] J. H. Friedman. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189–1232, 2000.
[9] T. Gao and D. Koller. Active classification based on value of classifier. In Advances in Neural Information Processing Systems (NIPS), 2011.
[10] Gurobi Optimization Inc. Gurobi optimizer reference manual, 2015.
[11] V. Y. Kulkarni and P. K. Sinha. Pruning of random forest classifiers: A survey and future directions. In International Conference on Data Science Engineering (ICDSE), 2012.
[12] M. Kusner, W. Chen, Q. Zhou, E. Zhixiang, K. Weinberger, and Y. Chen. Feature-cost sensitive learning with submodular trees of classifiers. In AAAI, 2014.
[13] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In IEEE CVPR, 2006.
[14] L. J. Li, H. Su, E. P. Xing, and L. Fei-Fei. Object Bank: A high-level image representation for scene classification and semantic feature sparsification. In NIPS, 2010.
[15] X. Li, J. Sweigart, J. Teng, J. Donohue, and L. Thombs.
A dynamic programming based pruning method for decision trees. INFORMS J. on Computing, 13(4):332–344, September 2001.
[16] F. Nan, J. Wang, and V. Saligrama. Feature-budgeted random forest. In Proceedings of the 32nd International Conference on Machine Learning (ICML-15), 2015.
[17] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978.
[18] H. D. Sherali, A. G. Hobeika, and C. Jeenanunta. An optimal constrained pruning strategy for decision trees. INFORMS Journal on Computing, 21(1):49–61, 2009.
[19] K. Trapeznikov and V. Saligrama. Supervised sequential classification under budget constraints. In International Conference on Artificial Intelligence and Statistics, pages 581–589, 2013.
[20] J. Wang, K. Trapeznikov, and V. Saligrama. Efficient learning by directed acyclic graph for resource constrained prediction. In Advances in Neural Information Processing Systems, 2015.
[21] Z. Xu, M. Kusner, M. Chen, and K. Q. Weinberger. Cost-sensitive tree of classifiers. In Proceedings of the 30th International Conference on Machine Learning, 2013.
[22] Z. E. Xu, K. Q. Weinberger, and O. Chapelle. The greedy miser: Learning under test-time budgets. In Proceedings of the International Conference on Machine Learning (ICML), 2012.
[23] Yi Zhang and Huang Huei-chuen. Decision tree pruning via integer programming. Working paper, 2005.
Synthesis of MCMC and Belief Propagation

Sungsoo Ahn∗ Michael Chertkov† Jinwoo Shin∗
∗School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Korea
†1 Theoretical Division, T-4 & Center for Nonlinear Studies, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
†2 Skolkovo Institute of Science and Technology, 143026 Moscow, Russia
∗{sungsoo.ahn, jinwoos}@kaist.ac.kr †chertkov@lanl.gov

Abstract

Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but which in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus approach, which allows one to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pairwise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pairwise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series.
The main novelty underlying our design is in utilizing the concept of a cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction

GMs express factorization of joint multivariate probability distributions in statistics via a graph of relations between variables. The concept of GM has been used successfully in information theory, physics, artificial intelligence and machine learning [1, 2, 3, 4, 5, 6]. Of the many inference problems one can pose for a GM, computing the partition function (normalization), or equivalently marginalizing the joint distribution, is the most general problem of interest. However, this paradigmatic inference problem is known to be computationally intractable in general, i.e., formally it is #P-hard even to approximate [7, 8]. To address this obstacle, extensive efforts have been made to develop practical approximation methods, among which MCMC-based [9] and BP-based [10] algorithms are, arguably, the most popular and practically successful ones. MCMC is exact, i.e., it converges to the correct answer, but its convergence/mixing is, in general, exponential in the system size. On the other hand, message-passing implementations of BP typically demonstrate fast convergence, but in general lack approximation guarantees for GMs containing loops. Motivated by this complementarity of the MCMC and BP approaches, we aim here to synthesize a hybrid approach benefiting from a joint use of MCMC and BP. At a high level, our proposed scheme uses BP as the first step and then runs MCMC to correct for the approximation error of BP.
To design such an "error-correcting" MCMC, we utilize the Loop Calculus approach [11], which allows, in a nutshell, expressing the BP error as a sum (i.e., series) of the weights of so-called generalized loops (sub-graphs of a special structure). There are several challenges one needs to overcome. First of all, to design an efficient Markov Chain (MC) sampler, one needs a scheme allowing efficient transitions between the generalized loops. Second, even if one designs such an MC capable of accessing all the generalized loops, it may mix slowly. Finally, the weights of generalized loops can be positive or negative, while an individual MCMC can only generate non-negative contributions. Since approximating the full loop series (LS) is intractable in general, we first explore whether we can deal with these challenges at least in the case of the truncated LS corresponding to 2-regular loops. In fact, this problem has been analyzed in the case of planar pairwise binary GMs [12, 13], where it was shown that the 2-regular LS is computable exactly in polynomial time through a reduction to a Pfaffian (or determinant) computation [14]. In particular, the partition function of the Ising model without external field (i.e., where only pairwise factors are present) is computable exactly via the 2-regular LS. Furthermore, the authors show that in the case of general planar pairwise binary GMs, the 2-regular LS provides a highly accurate approximation empirically. Motivated by these results, we address the same question in the general (i.e., non-planar) case of pairwise binary GMs via MCMC. For the choice of MC, we adopt the Worm algorithm [15]. We prove that with some modifications, including rejections, the algorithm allows sampling of 2-regular loops (with probabilities proportional to their respective weights) in polynomial time. Then, we design a novel simulated annealing strategy using the sampler to estimate separately the positive and negative parts of the 2-regular LS.
Given any ε > 0, this leads to an ε-approximation polynomial-time scheme for the 2-regular LS under a mild assumption. We next turn to estimating the full LS. In this part, we ignore the theoretical question of establishing the polynomial mixing time of an MC and instead focus on designing an empirically efficient MCMC scheme. We design an MC using a cycle basis of the graph [16] to sample generalized loops directly, without rejections. It transitions from one generalized loop to another by adding or deleting a random element of the cycle basis. Using the MC sampler, we design a simulated annealing strategy for estimating the full LS, similar to the one used earlier to estimate the 2-regular LS. Notice that even though the prime focus of this paper is on pairwise binary GMs, the proposed MCMC scheme allows a straightforward generalization to general non-binary GMs.

In summary, we propose novel MCMC schemes to estimate the LS correction to the BP estimate of the partition function. Since already the bare BP provides a highly non-trivial estimate of the partition function, it is naturally expected, and confirmed in our experimental results, that the proposed algorithm outperforms other standard (not related to BP) MCMC schemes applied to the original GM. We believe that our approach provides a new angle on approximate inference in GMs and is of broader interest to various applications involving GMs.

2 Preliminaries

2.1 Graphical models and belief propagation

Given an undirected graph G = (V, E) with |V| = n and |E| = m, a pairwise binary Markov Random Field (MRF) defines the following joint probability distribution on x = [x_v ∈ {0, 1} : v ∈ V]:

$$p(x) = \frac{1}{Z}\prod_{v\in V}\psi_v(x_v)\prod_{(u,v)\in E}\psi_{u,v}(x_u,x_v),\qquad Z := \sum_{x\in\{0,1\}^n}\prod_{v\in V}\psi_v(x_v)\prod_{(u,v)\in E}\psi_{u,v}(x_u,x_v),$$

where ψ_v, ψ_{u,v} are some non-negative functions, called compatibility or factor functions, and the normalization constant Z is called the partition function. Without loss of generality, we assume G is connected.
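For intuition, the partition function of a small pairwise binary MRF can be computed by brute-force enumeration of all 2^n states (a sanity-check baseline, feasible only for tiny graphs; the factor values below are made up):

```python
import itertools

# Triangle MRF: trivial unary factors, "agreement" pairwise factors.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]

def psi_e(xu, xv):
    return 2.0 if xu == xv else 1.0  # favors equal neighbors

Z = 0.0
for x in itertools.product([0, 1], repeat=len(V)):
    p = 1.0                          # unary factors psi_v are all 1 here
    for u, v in E:
        p *= psi_e(x[u], x[v])
    Z += p
# 2 fully-agreeing states contribute 8 each; the other 6 contribute 2 each
assert Z == 2 * 8 + 6 * 2
```

The exponential cost of this enumeration is exactly what the BP and MCMC machinery in the rest of the paper is designed to avoid.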
It is known that approximating the partition function is #P-hard in general [8]. Belief Propagation (BP) is a popular message-passing heuristic for approximating the marginal distributions of an MRF. The BP algorithm iterates the following message updates for all (u, v) ∈ E:

$$m^{t+1}_{u\to v}(x_v)\;\propto\;\sum_{x_u\in\{0,1\}}\psi_{u,v}(x_u,x_v)\,\psi_u(x_u)\prod_{w\in N(u)\setminus v}m^{t}_{w\to u}(x_u),$$

where N(v) denotes the set of neighbors of v. In general BP may fail to converge; in that case one may substitute it with a somewhat more involved algorithm provably convergent to its fixed point [22, 23, 24]. Estimates for the marginal probabilities are expressed via the fixed-point messages {m_{u→v} : (u, v) ∈ E} as follows: τ_v(x_v) ∝ ψ_v(x_v) ∏_{u∈N(v)} m_{u→v}(x_v) and

$$\tau_{u,v}(x_u,x_v)\;\propto\;\psi_u(x_u)\,\psi_v(x_v)\,\psi_{u,v}(x_u,x_v)\Big(\prod_{w\in N(u)\setminus v}m_{w\to u}(x_u)\Big)\Big(\prod_{w\in N(v)\setminus u}m_{w\to v}(x_v)\Big).$$

2.2 Bethe approximation and loop calculus

The BP marginals also yield the following Bethe approximation for the partition function Z:

$$\log Z_{\text{Bethe}}=\sum_{v\in V}\sum_{x_v}\tau_v(x_v)\log\psi_v(x_v)+\sum_{(u,v)\in E}\sum_{x_u,x_v}\tau_{u,v}(x_u,x_v)\log\psi_{u,v}(x_u,x_v)$$
$$\qquad-\sum_{v\in V}\sum_{x_v}\tau_v(x_v)\log\tau_v(x_v)-\sum_{(u,v)\in E}\sum_{x_u,x_v}\tau_{u,v}(x_u,x_v)\log\frac{\tau_{u,v}(x_u,x_v)}{\tau_u(x_u)\tau_v(x_v)}.$$

If the graph G is a tree, the Bethe approximation is exact, i.e., Z_Bethe = Z. In general, however, i.e., for graphs with cycles, the BP algorithm provides an often rather accurate but still approximate answer. The Loop Series (LS) [11] expresses the ratio Z/Z_Bethe as the following sum/series:

$$\frac{Z}{Z_{\text{Bethe}}}=Z_{\text{Loop}}:=\sum_{F\in\mathcal{L}}w(F),\qquad w(\emptyset)=1,$$
$$w(F):=\prod_{(u,v)\in E_F}\Big(\frac{\tau_{u,v}(1,1)}{\tau_u(1)\tau_v(1)}-1\Big)\prod_{v\in V_F}\bigg(\tau_v(1)+(-1)^{d_F(v)}\Big(\frac{\tau_v(1)}{1-\tau_v(1)}\Big)^{d_F(v)-1}\tau_v(1)\bigg),$$

where each term/weight is associated with a so-called generalized loop F, and L denotes the set of all generalized loops in the graph G (including the empty subgraph ∅). Here, a subgraph F of G is called a generalized loop if every vertex v ∈ F has degree d_F(v) (in the subgraph) no smaller than 2. Since the number of generalized loops is exponentially large, computing Z_Loop is intractable in general.
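A minimal sketch of the message recursion (our own illustration): run BP on a 3-node chain, read off the node marginals τ_v, and verify against brute-force enumeration; on a tree BP is exact, so the two must agree. The factor values are made-up numbers.

```python
import itertools

# A chain 0-1-2 (a tree, so BP is exact); illustrative factors.
V = [0, 1, 2]
E = [(0, 1), (1, 2)]
nbr = {0: [1], 1: [0, 2], 2: [1]}
psi_v = {0: [1.0, 3.0], 1: [1.0, 1.0], 2: [2.0, 1.0]}
psi_e = {e: [[2.0, 1.0], [1.0, 2.0]] for e in E}

def edge_factor(u, v, xu, xv):
    return psi_e[(u, v)][xu][xv] if (u, v) in psi_e else psi_e[(v, u)][xv][xu]

# m_{u->v}(x_v) ~ sum_{x_u} psi_{u,v} psi_u prod_{w in N(u)\v} m_{w->u}(x_u)
m = {(u, v): [0.5, 0.5] for u in V for v in nbr[u]}
for _ in range(20):                     # iterate updates to a fixed point
    new = {}
    for (u, v) in m:
        vec = []
        for xv in (0, 1):
            s = 0.0
            for xu in (0, 1):
                prod = psi_v[u][xu] * edge_factor(u, v, xu, xv)
                for w in nbr[u]:
                    if w != v:
                        prod *= m[(w, u)][xu]
                s += prod
            vec.append(s)
        t = sum(vec)
        new[(u, v)] = [x / t for x in vec]
    m = new

def tau(v):
    """Node marginal tau_v from the fixed-point messages."""
    vec = [psi_v[v][x] for x in (0, 1)]
    for u in nbr[v]:
        vec = [vec[x] * m[(u, v)][x] for x in (0, 1)]
    t = sum(vec)
    return [x / t for x in vec]

# brute-force marginal P(x_0 = 1) for comparison
Z, num = 0.0, 0.0
for x in itertools.product((0, 1), repeat=3):
    p = 1.0
    for v in V:
        p *= psi_v[v][x[v]]
    for (u, v) in E:
        p *= edge_factor(u, v, x[u], x[v])
    Z += p
    if x[0] == 1:
        num += p
assert abs(tau(0)[1] - num / Z) < 1e-9  # exact on a tree
```

On graphs with cycles the same recursion runs unchanged, but τ then only approximates the true marginals, which is the error the loop series quantifies.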
However, the following truncated sum of Z_Loop, called the 2-regular loop series, is known to be computable in polynomial time if G is planar [12]:¹

$$Z_{\text{2-Loop}}:=\sum_{F\in\mathcal{L}_{\text{2-Loop}}}w(F),$$

where L_2-Loop denotes the set of all 2-regular generalized loops, i.e., F ∈ L_2-Loop if d_F(v) = 2 for every vertex v of F. One can check that Z_Loop = Z_2-Loop for the Ising model without external fields. Furthermore, as stated in [12, 13] for the general case, Z_2-Loop provides a good empirical estimate of Z_Loop.

¹ Note that the number of 2-regular loops is exponentially large in general.

3 Estimating the 2-regular loop series via MCMC

In this section, we describe how the 2-regular loop series Z_2-Loop can be estimated in polynomial time. To this end, we first assume that the maximum degree Δ of the graph G is at most 3. This degree assumption is not really restrictive, since any pairwise binary model can easily be expressed as an equivalent one with Δ ≤ 3; see, e.g., the supplementary material.

The rest of this section consists of two parts. We first propose an algorithm generating a 2-regular loop sample with probability proportional to the absolute value of its weight, i.e.,

$$\pi_{\text{2-Loop}}(F):=\frac{|w(F)|}{Z^{\dagger}_{\text{2-Loop}}},\qquad\text{where}\quad Z^{\dagger}_{\text{2-Loop}}=\sum_{F\in\mathcal{L}_{\text{2-Loop}}}|w(F)|.$$

Note that this 2-regular loop weight allows the following factorization: for any F ∈ L_2-Loop,

$$|w(F)|=\prod_{e\in F}w(e),\qquad\text{where}\quad w(e):=\frac{\tau_{u,v}(1,1)-\tau_u(1)\tau_v(1)}{\sqrt{\tau_u(1)\tau_v(1)(1-\tau_u(1))(1-\tau_v(1))}}.\tag{1}$$

In the second part, we use the sampler constructed in the first part to design a simulated annealing scheme for estimating Z_2-Loop.

3.1 Sampling 2-regular loops

We suggest sampling the 2-regular loops distributed according to π_2-Loop through a version of the Worm algorithm proposed by Prokofiev and Svistunov [15]. It can be viewed as an MC exploring the set L_2-Loop ∪ L_2-Odd, where L_2-Odd is the set of all subgraphs of G with exactly two odd-degree vertices. Given the current state F ∈ L_2-Loop ∪ L_2-Odd, it chooses the next state F′ as follows: 1.
If F ∈ L_2-Loop, pick a random vertex v (uniformly) from V; otherwise, pick a random odd-degree vertex v (uniformly) from F.
2. Choose a random neighbor u of v (uniformly) within G, and set F′ ← F initially.
3. Update F′ ← F ⊕ {u, v} with probability

$$\begin{cases}\min\big(\tfrac{1}{n}\,\tfrac{|w(F\oplus\{u,v\})|}{|w(F)|},\,1\big)&\text{if }F\in\mathcal{L}_{\text{2-Loop}},\\[2pt]\min\big(\tfrac{n}{4}\,\tfrac{|w(F\oplus\{u,v\})|}{|w(F)|},\,1\big)&\text{else if }F\oplus\{u,v\}\in\mathcal{L}_{\text{2-Loop}},\\[2pt]\min\big(\tfrac{d(v)}{2d(u)}\,\tfrac{|w(F\oplus\{u,v\})|}{|w(F)|},\,1\big)&\text{else if }F,\,F\oplus\{u,v\}\in\mathcal{L}_{\text{2-Odd}}.\end{cases}$$

Here, ⊕ denotes the symmetric difference, and for F ∈ L_2-Odd its weight is defined as w(F) = ∏_{e∈F} w(e). In essence, the Worm algorithm consists in either deleting or adding an edge to the current subgraph F. From the Worm algorithm we obtain the following algorithm, which samples 2-regular loops with probability π_2-Loop simply by adding a rejection of F whenever F ∈ L_2-Odd.

Algorithm 1 Sampling 2-regular loops
1: Input: number of trials N; number of iterations T of the Worm algorithm
2: Output: 2-regular loop F
3: for i = 1 → N do
4:   Set F ← ∅ and update it T times by running the Worm algorithm
5:   if F is a 2-regular loop then
6:     BREAK and output F
7:   end if
8: end for
9: Output F = ∅.

The following theorem states that Algorithm 1 can generate a desired random sample in polynomial time.

Theorem 1. Given δ > 0, choose the inputs of Algorithm 1 as N ≥ 1.2 n log(3δ⁻¹) and T ≥ (m − n + 1) log 2 + 4Δmn⁴ log(3nδ⁻¹). Then it follows that

$$\frac{1}{2}\sum_{F\in\mathcal{L}_{\text{2-Loop}}}\Big|\,\mathbb{P}\big[\text{Algorithm 1 outputs }F\big]-\pi_{\text{2-Loop}}(F)\Big|\;\le\;\delta,$$

namely, the total variation distance between π_2-Loop and the output distribution of Algorithm 1 is at most δ.
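The sketch below (ours, not the authors' code) illustrates two ingredients: the per-edge weight w(e) of Eq. (1) computed from hypothetical BP pseudo-marginals, and the state space the Worm chain moves on. For simplicity the chain uses uniform weights and drops the acceptance probabilities; we only check the invariant that every state has either zero or exactly two odd-degree vertices, i.e., stays in L_2-Loop ∪ L_2-Odd.

```python
import math
import random

def edge_weight(tau_u, tau_v, tau_uv11):
    """w(e) from Eq. (1); inputs are hypothetical BP pseudo-marginals."""
    num = tau_uv11 - tau_u * tau_v
    den = math.sqrt(tau_u * tau_v * (1 - tau_u) * (1 - tau_v))
    return num / den

w_e = edge_weight(0.5, 0.5, 0.3)                   # positively correlated edge
assert w_e > 0 and edge_weight(0.5, 0.5, 0.2) < 0  # weights can be negative

# Worm-style moves on the triangle graph: toggle an edge at a "defect".
V = [0, 1, 2]
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
rng = random.Random(0)
F = set()                                # start from the empty loop

def odd_vertices(F):
    deg = {v: 0 for v in V}
    for (u, v) in F:
        deg[u] += 1
        deg[v] += 1
    return [v for v in V if deg[v] % 2 == 1]

for _ in range(200):
    odd = odd_vertices(F)
    v = rng.choice(odd) if odd else rng.choice(V)  # defect vertex, if any
    u = rng.choice(adj[v])
    F ^= {(min(u, v), max(u, v))}                  # F ⊕ {u, v}
    assert len(odd_vertices(F)) in (0, 2)
```

Toggling an edge flips the parity of both endpoints, which is why the chain can never leave L_2-Loop ∪ L_2-Odd; the full algorithm additionally applies the Metropolis-style acceptance probabilities above so the stationary distribution matches the weights.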
The proof is presented in the supplementary material due to the space constraint. In the proof, we first show that the MC induced by the Worm algorithm mixes in polynomial time, and then prove that acceptance of a 2-regular loop, i.e., line 6 of Algorithm 1, occurs with high probability. Notice that the uniform-weight version of the former proof, i.e., fast mixing, was recently established in [18]; for completeness of exposition, we present the proof for the general case of interest to us. The latter proof, i.e., high acceptance probability, requires bounding |L_2-Loop| and |L_2-Odd| to show that the probability of sampling a 2-regular loop under the Worm algorithm is 1/poly(n) for some polynomial function poly(n).

3.2 Simulated annealing for approximating the 2-regular loop series

Here we utilize Theorem 1 to describe an algorithm approximating the 2-regular LS Z_2-Loop in polynomial time. To achieve this goal, we rely on the simulated annealing strategy [19], which requires choosing a monotone cooling schedule β_0, β_1, ..., β_{ℓ−1}, β_ℓ, where β_ℓ corresponds to the target counting problem and β_0 to a relaxed, easy version of it. Thus, designing an appropriate cooling strategy is the first challenge to address. We will also describe how to deal with the issue that Z_2-Loop is a sum of positive and negative terms, while most simulated annealing strategies in the literature have mainly studied sums of non-negative terms. This second challenge is related to the so-called "fermion sign problem" common in the statistical mechanics of quantum systems [25].

Before we describe the proposed algorithm in detail, let us provide an intuitive sketch. The proposed algorithm consists of two parts: (a) estimating Z†_2-Loop via a simulated annealing strategy, and (b) estimating Z_2-Loop/Z†_2-Loop by counting samples corresponding to negative terms in the 2-regular loop series. First consider the following β-parametrized auxiliary distribution over 2-regular loops:

$$\pi_{\text{2-Loop}}(F:\beta)=\frac{1}{Z^{\dagger}_{\text{2-Loop}}(\beta)}\,|w(F)|^{\beta},\qquad 0\le\beta\le 1.\tag{2}$$

Note that one can generate samples approximately distributed according to (2) in polynomial time using Algorithm 1 by setting w ← w^β. Indeed, it follows that for β′ > β,

$$\frac{Z^{\dagger}_{\text{2-Loop}}(\beta')}{Z^{\dagger}_{\text{2-Loop}}(\beta)}=\sum_{F\in\mathcal{L}_{\text{2-Loop}}}|w(F)|^{\beta'-\beta}\,\frac{|w(F)|^{\beta}}{Z^{\dagger}_{\text{2-Loop}}(\beta)}=\mathbb{E}_{\pi_{\text{2-Loop}}(\beta)}\big[|w(F)|^{\beta'-\beta}\big],$$

where the expectation can be estimated using O(1) samples if it is Θ(1), i.e., if β′ is sufficiently close to β.
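This ratio identity telescopes across a cooling schedule, and handling negative terms reduces to estimating one probability. Both facts can be checked numerically with exact expectations on a toy set of made-up loop weights:

```python
# Toy signed loop weights w(F) (made-up numbers).
ws = [0.9, -0.5, 0.25, -0.1]

def Zdag(beta):
    """Z†(beta) = sum_F |w(F)|^beta."""
    return sum(abs(w) ** beta for w in ws)

# Telescoping: Z†(1) = Z†(0) * prod_i E_{pi(beta_i)}[|w|^(beta_{i+1}-beta_i)]
betas = [i / 4 for i in range(5)]       # 0, 0.25, 0.5, 0.75, 1
est = Zdag(0.0)                         # Z†(0) = number of loops here
for b0, b1 in zip(betas, betas[1:]):
    # exact expectation under pi(b0)(F) proportional to |w(F)|^b0
    est *= sum(abs(w) ** b0 * abs(w) ** (b1 - b0) for w in ws) / Zdag(b0)
assert abs(est - Zdag(1.0)) < 1e-9

# Sign split: Z = (1 - 2*P[w(F) < 0]) * Z†, with P taken under pi(1)
zeta = sum(abs(w) for w in ws if w < 0) / Zdag(1.0)
assert abs((1 - 2 * zeta) * Zdag(1.0) - sum(ws)) < 1e-9
```

In the actual algorithm each exact expectation is replaced by an empirical average over samples from Algorithm 1, which is where the sample-size bounds of Theorem 2 come in.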
Then, for any increasing sequence β0 = 0, β1, . . . , βn−1, βn = 1, we derive

  Z†2-Loop = [Z†2-Loop(βn)/Z†2-Loop(βn−1)] · [Z†2-Loop(βn−1)/Z†2-Loop(βn−2)] · · · [Z†2-Loop(β2)/Z†2-Loop(β1)] · [Z†2-Loop(β1)/Z†2-Loop(β0)] · Z†2-Loop(0),

where it is known that Z†2-Loop(0), i.e., the total number of 2-regular loops, is exactly 2^{m−n+1} [16]. This allows us to estimate Z†2-Loop simply by estimating E_{π2-Loop(βi)}[|w(F)|^{βi+1−βi}] for all i. Our next step is to estimate the ratio Z2-Loop/Z†2-Loop. Let L−2-Loop denote the set of negative 2-regular loops, i.e., L−2-Loop := {F : F ∈ L2-Loop, w(F) < 0}. Then the 2-regular loop series can be expressed as

  Z2-Loop = ( 1 − 2 Σ_{F ∈ L−2-Loop} |w(F)| / Z†2-Loop ) · Z†2-Loop = ( 1 − 2 P_{π2-Loop}[w(F) < 0] ) · Z†2-Loop,

where we estimate P_{π2-Loop}[w(F) < 0] again using samples generated by Algorithm 1. We provide the formal description of the proposed algorithm and its error bound as follows.

Algorithm 2 Approximation for Z2-Loop
1: Input: increasing sequence β0 = 0 < β1 < · · · < βn−1 < βn = 1; numbers of samples s1, s2; numbers of trials N1, N2; numbers of iterations T1, T2 for Algorithm 1.
2: for i = 0 → n − 1 do
3:   Generate 2-regular loops F1, . . . , Fs1 for π2-Loop(βi) using Algorithm 1 with inputs N1 and T1, and set Hi ← (1/s1) Σ_j |w(Fj)|^{βi+1−βi}.
4: end for
5: Generate 2-regular loops F1, . . . , Fs2 for π2-Loop using Algorithm 1 with inputs N2 and T2, and set κ ← |{Fj : w(Fj) < 0}| / s2.
6: Output: Ẑ2-Loop ← (1 − 2κ) · 2^{m−n+1} · ∏_i Hi.

Theorem 2. Given ε, ν > 0, choose the inputs of Algorithm 2 as βi = i/n for i = 1, 2, . . . , n − 1, and
  s1 ≥ 18144 n² ε⁻² w_min⁻¹ ⌈log(6nν⁻¹)⌉,
  N1 ≥ 1.2n log(144n ε⁻¹ w_min⁻¹),
  T1 ≥ (m − n + 1) log 2 + 4∆mn⁴ log(48n ε⁻¹ w_min⁻¹),
  s2 ≥ 18144 ζ (1 − 2ζ)⁻² ε⁻² ⌈log(3ν⁻¹)⌉,
  N2 ≥ 1.2n log(144 ε⁻¹ (1 − 2ζ)⁻¹),
  T2 ≥ (m − n + 1) log 2 + 4∆mn⁴ log(48 ε⁻¹ (1 − 2ζ)⁻¹),
where w_min = min_{e∈E} w(e) and ζ = P_{π2-Loop}[w(F) < 0].
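The two-part strategy (telescoping ratio estimation of Z†2-Loop followed by the (1 − 2κ) sign correction) can be illustrated on a toy state space where exact sampling from the annealed distribution is possible; in the actual algorithm that role is played by Algorithm 1. The weights, level count, and sample sizes below are illustrative assumptions.

```python
import random

weights = [2.0, -0.5, 1.0, 3.0]   # toy "loop" weights, one negative term

def sample_beta(beta, rng):
    """Draw a state index with probability ∝ |w|^beta (exact toy sampler;
    in the paper this role is played by Algorithm 1)."""
    probs = [abs(w) ** beta for w in weights]
    r = rng.random() * sum(probs)
    for i, p in enumerate(probs):
        r -= p
        if r <= 0:
            return i
    return len(weights) - 1

def anneal_estimate(levels, s, rng):
    betas = [i / levels for i in range(levels + 1)]
    z = float(len(weights))        # toy analog of Z†(0) = 2^{m-n+1}
    for i in range(levels):
        d = betas[i + 1] - betas[i]
        # estimate the telescoping ratio Z†(β_{i+1}) / Z†(β_i)
        h = sum(abs(weights[sample_beta(betas[i], rng)]) ** d
                for _ in range(s)) / s
        z *= h
    # sign-problem correction: fraction of negative-weight samples at β = 1
    kappa = sum(weights[sample_beta(1.0, rng)] < 0 for _ in range(s)) / s
    return (1 - 2 * kappa) * z

rng = random.Random(0)
est = anneal_estimate(4, 20000, rng)
```

Here the exact answer is Σ w = 5.5 (with Z† = 6.5 and ζ ≈ 0.077), and the estimate lands close to it.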
Then the following statement holds:

  P[ |Ẑ2-Loop − Z2-Loop| / Z2-Loop ≤ ε ] ≥ 1 − ν,

which means that Algorithm 2 estimates Z2-Loop within approximation ratio 1 ± ε with high probability. The proof of the above theorem is presented in the supplementary material due to the space constraint. We note that the constants entering Theorem 2 were not optimized. Theorem 2 implies that the complexity of Algorithm 2 is polynomial with respect to n, 1/ε, 1/ν under the assumption that w_min⁻¹ and (1 − 2 P_{π2-Loop}[w(F) < 0])⁻¹ are polynomially bounded. Both quantities depend on the choice of BP fixed point; however, it is unlikely (barring degeneracies) that they become superpolynomially large. In particular, P_{π2-Loop}[w(F) < 0] = 0 in the case of attractive models [20].

4 Estimating the full loop series via MCMC

In this section, we aim to estimate the full loop series ZLoop. To this end, we design a novel MC sampler for generalized loops which adds (or removes) a cycle of a cycle basis, or a path, to (or from) the current generalized loop. We therefore start this section by introducing the necessary background on cycle bases and then describe the design of the MC sampler for generalized loops. Finally, we describe a simulated annealing scheme similar to the one described in the preceding section and report its experimental performance in comparison with other methods.

4.1 Sampling generalized loops with a cycle basis

A cycle basis C of the graph G is a minimal set of cycles which allows one to represent every Eulerian subgraph of G (i.e., every subgraph containing no odd-degree vertex) as a symmetric difference of cycles in the set [16]. Let us characterize the combinatorial structure of generalized loops using a cycle basis. To this end, consider a set of paths between all pairs of vertices: P = {Pu,v : u ≠ v, u, v ∈ V, Pu,v is a path from u to v}, so that |P| = n(n−1)/2. Then the following theorem allows us to decompose any generalized loop with respect to any chosen C and P.

Theorem 3.
Consider any cycle basis C and path set P. Then, for any generalized loop F, there exists a decomposition B ⊂ C ∪ P such that F can be expressed as a symmetric difference of the elements of B, i.e., F = B1 ⊕ B2 ⊕ · · · ⊕ Bk−1 ⊕ Bk for some Bi ∈ B.

The proof of the above theorem is given in the supplementary material due to the space constraint. Now, given any choice of C, P, consider the following transition from F ∈ L to the next state F′:

1. Choose, uniformly at random, an element B ∈ C ∪ P, and set F′ ← F initially.
2. If F ⊕ B ∈ L, update F′ ← F ⊕ B with probability min{ 1, |w(F ⊕ B)| / |w(F)| }, and keep F′ ← F otherwise.

Due to Theorem 3, it is easy to check that the proposed MC is irreducible and aperiodic, i.e., ergodic, and the distribution of its t-th state converges to the following stationary distribution as t → ∞:

  πLoop(F) = |w(F)| / Z†Loop,   where Z†Loop = Σ_{F ∈ LLoop} |w(F)|.

One also has freedom in choosing C and P. To accelerate mixing of the MC, we suggest choosing the minimum-weight cycle basis C and the shortest paths P with respect to the edge weights {log w(e)} defined in (1), which are computable using the algorithm in [16] and the Bellman-Ford algorithm [21], respectively. This encourages transitions between generalized loops with similar weights.

4.2 Simulated annealing for approximating the full loop series

Algorithm 3 Approximation for ZLoop
1: Input: decreasing sequence β0 > β1 > · · · > βℓ−1 > βℓ = 1; numbers of samples s0, s1, s2; numbers of iterations T0, T1, T2 for the MC described in Section 4.1.
2: Generate generalized loops F1, . . . , Fs0 by running T0 iterations of the MC described in Section 4.1 for πLoop(β0), and set U ← (s0/s*) · |w(F*)|^{β0}, where F* = argmax_{F ∈ {F1, . . . , Fs0}} |w(F)| and s* is the number of times F* was sampled.
3: for i = 0 → ℓ − 1 do
4:   Generate generalized loops F1, . . . , Fs1 by running T1 iterations of the MC described in Section 4.1 for πLoop(βi), and set Hi ← (1/s1) Σ_j |w(Fj)|^{βi+1−βi}.
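The Metropolis move above can be sketched on a toy basis of two elements. In this toy every symmetric-difference combination is treated as a valid state, so the membership test F ⊕ B ∈ L is omitted; the edge weights and basis elements are illustrative assumptions.

```python
import random

edge_w = {0: 2.0, 1: 0.5, 2: 1.5, 3: 0.8}        # toy edge weights
basis = [frozenset({0, 1}), frozenset({2, 3})]   # stand-in for C ∪ P

def abs_weight(F):
    p = 1.0
    for e in F:
        p *= abs(edge_w[e])
    return p

def mc_step(F, rng):
    """One Metropolis move: symmetric difference with a random basis element,
    accepted with probability min(1, |w(F ⊕ B)| / |w(F)|)."""
    B = rng.choice(basis)
    Fp = F ^ B
    if rng.random() < min(1.0, abs_weight(Fp) / abs_weight(F)):
        return Fp
    return F

rng = random.Random(1)
F = frozenset()
counts = {}
for _ in range(50000):
    F = mc_step(F, rng)
    counts[F] = counts.get(F, 0) + 1
freq_empty = counts.get(frozenset(), 0) / 50000
```

The stationary distribution is π(F) ∝ |w(F)|; here the four reachable states have |w| = 1, 1.0, 1.2, 1.2, so the empty state should appear with frequency near 1/4.4 ≈ 0.23.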
5: end for
6: Generate generalized loops F1, . . . , Fs2 by running T2 iterations of the MC described in Section 4.1 for πLoop, and set κ ← |{Fj : w(Fj) < 0}| / s2.
7: Output: ẐLoop ← (1 − 2κ) · ∏_i Hi · U.

Now we are ready to describe a simulated annealing scheme for estimating ZLoop. It is similar in principle to that of Section 3.2. First, we again introduce the β-parametrized auxiliary probability distribution πLoop(F : β) = |w(F)|^β / Z†Loop(β). For any decreasing sequence of annealing parameters β0, β1, . . . , βℓ−1, βℓ = 1, we derive

  Z†Loop = [Z†Loop(βℓ)/Z†Loop(βℓ−1)] · [Z†Loop(βℓ−1)/Z†Loop(βℓ−2)] · · · [Z†Loop(β2)/Z†Loop(β1)] · [Z†Loop(β1)/Z†Loop(β0)] · Z†Loop(β0).

Following procedures similar to those of Section 3.2, one can estimate Z†Loop(β′)/Z†Loop(β) = E_{πLoop(β)}[|w(F)|^{β′−β}] using the sampler described in Section 4.1. Moreover, Z†Loop(β0) = |w(F*)|^{β0} / P_{πLoop(β0)}(F*) is estimated by sampling the generalized loop F* with the highest probability P_{πLoop(β0)}(F*). For large enough β0, the approximation error becomes relatively small since P_{πLoop(β0)}(F*) ∝ |w(F*)|^{β0} dominates the distribution. In combination, this provides the desired approximation for ZLoop. The result is stated formally in Algorithm 3.

Figure 1: Plots of the log-partition function approximation error with respect to (average) interaction strength: (a) Ising model with no external field, (b) Ising model with external fields, and (c) hard-core model. Each point is averaged over 20 (random) models.

4.3 Experimental results

In this section, we report experimental results for computing the partition function of the Ising model and the hard-core model. We compare Algorithm 2 of Section 3 (coined MCMC-BP-2reg) and Algorithm 3 of Section 4.2 (coined MCMC-BP-whole) with the bare Bethe approximation (coined BP) and the popular Gibbs sampler (coined MCMC-Gibbs). To make the comparison fair, we use the same annealing scheme for all MCMC schemes, thus making their running times comparable.
More specifically, we generate each sample after running T1 = 1,000 iterations of an MC and take s1 = 100 samples to compute each estimate (e.g., Hi) at intermediate steps. As the performance measure, we use the log-partition function approximation error defined as |log Z − log Zapprox| / |log Z|, where Zapprox is the output of the respective algorithm. We conducted three experiments on the 4 × 4 grid graph. In the first experimental setting, we consider the Ising model with varying interaction strength and no external (magnetic) field. To prepare the model of interest, we start from the Ising model with uniform (ferromagnetic/attractive or anti-ferromagnetic/repulsive) interaction strength and then add 'glassy' variability in the interaction strength, modeled via i.i.d. Gaussian random variables with mean 0 and variance 0.5², i.e., N(0, 0.5²). In other words, given average interaction strength 0.3, each interaction strength in the model is independently drawn from N(0.3, 0.5²). The second experiment was conducted by adding N(0, 0.5²) corrections to the external fields under the same conditions as in the first experiment. In this case we observe that BP often fails to converge, and we use the Concave-Convex Procedure (CCCP) [23] for finding BP fixed points. Finally, we experiment with the hard-core model on the 4 × 4 grid graph with a varying positive parameter λ > 0, called the 'fugacity' [26]. As seen clearly in Figure 1, BP and MCMC-Gibbs are outperformed by MCMC-BP-2reg and MCMC-BP-whole in most tested regimes of the first experiment with no external field, where in this case the 2-regular loop series (LS) is equal to the full one. Even in the regimes where MCMC-Gibbs outperforms BP, our schemes correct the error of BP and perform at least as well as MCMC-Gibbs. In the experiments, we observe that the advantage of our schemes over BP is more pronounced when the error of BP is large. A theoretical reasoning behind this observation is as follows.
If the performance of BP is good, i.e., if the loop series (LS) is close to 1, the contribution of the empty generalized loop, i.e., w(∅), to the LS is significant, and it becomes harder to sample the other generalized loops accurately.

5 Conclusion

In this paper, we propose new MCMC schemes for approximate inference in GMs. The main novelty of our approach is in designing BP-aware MCs that utilize the non-trivial BP solutions. In experiments, our BP-based MCMC schemes also outperform other alternatives. We anticipate that this new technique will be of interest to many applications where GMs are used for statistical reasoning.

Acknowledgement

This work was supported by the National Research Council of Science & Technology (NST) grant by the Korea government (MSIP) (No. CRC-15-05-ETRI), and by funding from the U.S. Department of Energy's Office of Electricity as part of the DOE Grid Modernization Initiative.

References
[1] J. Pearl, “Probabilistic reasoning in intelligent systems: networks of plausible inference,” Morgan Kaufmann, 2014.
[2] R. G. Gallager, “Low-density parity-check codes,” IRE Transactions on Information Theory 8(1): 21-28, 1962.
[3] F. R. Kschischang and B. J. Frey, “Iterative decoding of compound codes by probability propagation in graphical models,” IEEE Journal on Selected Areas in Communications 16(2): 219-230, 1998.
[4] M. I. Jordan, ed., “Learning in graphical models,” Springer Science & Business Media 89, 1998.
[5] R. J. Baxter, “Exactly solved models in statistical mechanics,” Courier Corporation, 2007.
[6] W. T. Freeman, E. C. Pasztor, and O. T. Carmichael, “Learning low-level vision,” International Journal of Computer Vision 40(1): 25-47, 2000.
[7] V. Chandrasekaran, N. Srebro, and P. Harsha, “Complexity of inference in graphical models,” Conference on Uncertainty in Artificial Intelligence, 2008.
[8] M. Jerrum and A. Sinclair, “Polynomial-time approximation algorithms for the Ising model,” SIAM Journal on Computing 22(5): 1087-1116, 1993.
[9] C. Andrieu, N.
Freitas, A. Doucet, and M. I. Jordan, “An introduction to MCMC for machine learning,” Machine Learning 50(1-2): 5-43, 2003.
[10] J. Pearl, “Reverend Bayes on inference engines: A distributed hierarchical approach,” Association for the Advancement of Artificial Intelligence, 1982.
[11] M. Chertkov and V. Y. Chernyak, “Loop series for discrete statistical models on graphs,” Journal of Statistical Mechanics: Theory and Experiment 2006(6): P06009, 2006.
[12] M. Chertkov, V. Y. Chernyak, and R. Teodorescu, “Belief propagation and loop series on planar graphs,” Journal of Statistical Mechanics: Theory and Experiment 2008(5): P05003, 2008.
[13] V. Gómez, H. J. Kappen, and M. Chertkov, “Approximate inference on planar graphs using Loop Calculus and Belief Propagation,” Journal of Machine Learning Research 11: 1273-1296, 2010.
[14] P. W. Kasteleyn, “The statistics of dimers on a lattice,” in Classic Papers in Combinatorics, Birkhäuser Boston, 281-298, 2009.
[15] N. Prokof'ev and B. Svistunov, “Worm algorithms for classical statistical models,” Physical Review Letters 87(16): 160601, 2001.
[16] J. D. Horton, “A polynomial-time algorithm to find the shortest cycle basis of a graph,” SIAM Journal on Computing 16(2): 358-366, 1987.
[17] H. A. Kramers and G. H. Wannier, “Statistics of the two-dimensional ferromagnet. Part II,” Physical Review 60(3): 263, 1941.
[18] A. Collevecchio, T. M. Garoni, T. Hyndman, and D. Tokarev, “The worm process for the Ising model is rapidly mixing,” arXiv preprint arXiv:1509.03201, 2015.
[19] S. Kirkpatrick, “Optimization by simulated annealing: Quantitative studies,” Journal of Statistical Physics 34(5-6): 975-986, 1984.
[20] N. Ruozzi, “The Bethe partition function of log-supermodular graphical models,” Advances in Neural Information Processing Systems, 2012.
[21] J. Bang-Jensen and G. Z. Gutin, “Digraphs: theory, algorithms and applications,” Springer Science & Business Media, 2008.
[22] Y. W. Teh and M.
Welling, “Belief optimization for binary networks: a stable alternative to loopy belief propagation,” Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence, 493-500, 2001.
[23] A. L. Yuille, “CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent alternatives to belief propagation,” Neural Computation 14(7): 1691-1722, 2002.
[24] J. Shin, “The complexity of approximating a Bethe equilibrium,” IEEE Transactions on Information Theory 60(7): 3959-3969, 2014.
[25] https://www.quora.com/Statistical-Mechanics-What-is-the-fermion-sign-problem
[26] M. Dyer, A. Frieze, and M. Jerrum, “On counting independent sets in sparse graphs,” SIAM Journal on Computing 31(5): 1527-1541, 2002.
[27] J. Schweinsberg, “An O(n²) bound for the relaxation time of a Markov chain on cladograms,” Random Structures & Algorithms 20(1): 59-70, 2002.
Neurons Equipped with Intrinsic Plasticity Learn Stimulus Intensity Statistics

Travis Monk
Cluster of Excellence Hearing4all, University of Oldenburg, 26129 Oldenburg, Germany
travis.monk@uol.de

Cristina Savin
IST Austria, 3400 Klosterneuburg, Austria
csavin@ist.ac.at

Jörg Lücke
Cluster of Excellence Hearing4all, University of Oldenburg, 26129 Oldenburg, Germany
joerg.luecke@uol.de

Abstract

Experience constantly shapes neural circuits through a variety of plasticity mechanisms. While the functional roles of some plasticity mechanisms are well-understood, it remains unclear how changes in neural excitability contribute to learning. Here, we develop a normative interpretation of intrinsic plasticity (IP) as a key component of unsupervised learning. We introduce a novel generative mixture model that accounts for the class-specific statistics of stimulus intensities, and we derive a neural circuit that learns the input classes and their intensities. We analytically show that inference and learning for our generative model can be achieved by a neural circuit with intensity-sensitive neurons equipped with a specific form of IP. Numerical experiments verify our analytical derivations and show robust behavior for artificial and natural stimuli. Our results link IP to non-trivial input statistics, in particular the statistics of stimulus intensities for the classes to which a neuron is sensitive. More generally, our work paves the way toward new classification algorithms that are robust to intensity variations.

1 Introduction

Confronted with the continuous flow of experience, the brain takes amorphous sensory inputs and translates them into coherent objects and scenes. This process requires neural circuits to extract key regularities from their inputs and to use those regularities to interpret novel experiences. Such learning is enabled by a variety of plasticity mechanisms which allow neural networks to represent the statistics of the world.
The most well-studied plasticity mechanism is synaptic plasticity, where the strength of connections between neurons changes as a function of their activity [1]. Other plasticity mechanisms exist and operate in tandem. One example is intrinsic plasticity (IP), where a neuron's response to inputs changes as a function of its own past activity. It is a challenge for computational neuroscience to understand how different plasticity rules jointly contribute to circuit computation. While much is known about the contribution of Hebbian plasticity to different variants of unsupervised learning, including linear and non-linear sparse coding [2–5], ICA [6], PCA [7] or clustering [8–12], other aspects of unsupervised learning remain unclear. First, on the computational side, there are many situations in which the meaning of inputs should be invariant to their overall gain. For example, a visual scene's content does not depend on light intensity, and a word utterance should be recognized irrespective of its volume. Current models do not explicitly take such gain variations into account, and often eliminate them using an ad hoc preprocessing step that normalizes inputs [8, 9, 13]. Second, on the biological side, the roles of other plasticity mechanisms such as IP, and their potential contributions to unsupervised learning, remain poorly understood. IP changes the input-output function of a neuron depending on its past activity. Typically, IP is a homeostatic negative feedback loop that preserves a neuron's activation levels despite its changing input [14, 15]. There is no consensus on which quantities IP regulates, e.g. a neuron's firing rate, its internal Ca concentration, its spiking threshold, etc.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In modeling work, IP is usually implemented as a simple threshold change that controls the mean firing rate, although some models propose more sophisticated rules that also constrain higher-order statistics of the neuron's output [6, 16]. Functionally, while there have been suggestions that IP can play an important role in circuit function [6, 10, 11, 17], its role in unsupervised learning is still not fully understood. Here we show that a neural network that combines specific forms of Hebbian plasticity and IP can learn the statistics of inputs with variable gain. We propose a novel generative model named Product-Poisson-Gamma (PPG) that explicitly accounts for class-specific variation in input gain. We then derive, from first principles, a neural circuit that implements inference and learning for this model. Our derivation yields a novel IP rule as a required component of unsupervised learning given gain variations. Our model is unique in that it directly links IP to the gain variations of the pattern to which a neuron is sensitive, which may be tested experimentally. Beyond neurobiology, the model provides a new class of efficient clustering algorithms that do not require data preprocessing. The learned representations also permit efficient classification from very little labeled data.

2 The Product-Poisson-Gamma model

Intensity can vary drastically across images even though the features present in them are the same.1 This variability constitutes a challenge for learning and is typically eliminated through a preprocessing stage in which the inputs are normalized [9]. While such preprocessing can make learning easier, ad hoc normalizations may be suboptimal, or may require additional parameters to be set by hand. More importantly, input normalization has the side effect of losing information about intensity, which might have helped identify the features themselves.
For instance, in computer vision, objects of the same class are likely to have similar surface properties, resulting in a characteristic distribution of light intensities. Light intensities can therefore aid classification. In the neural context, the overall drive to neurons may vary, e.g. due to attentional gain modulation, despite the underlying encoded features being the same. A principled way to address intensity variations is to model them explicitly in a generative model describing the data. We can then use that generative model to derive optimal inference and learning for such data and map them to a corresponding neural circuit implementation. Let us assume the stimuli are drawn from one of C classes, and let us denote a stimulus by ⃗y. Given a stimulus / data point ⃗y, we wish to infer the class c that generated it (see Figure 1). Let ⃗y depend not only on the class c, but also on a continuous random variable z, representing the intensity of the stimulus, that itself depends on c as well as on some parameters θ. Given these dependencies Pr(⃗y|c, z, θ) and Pr(z|c, θ), Bayes' rule specifies how to infer the class c and hidden variable z given an observation of ⃗y:

  Pr(c, z|⃗y, θ) = Pr(⃗y|c, z, θ) Pr(z|c, θ) Pr(c|θ) / [ Σ_{c′} ∫ Pr(⃗y|c′, z′, θ) Pr(z′|c′, θ) Pr(c′|θ) dz′ ].   (1)

We can obtain neurally-implementable expressions for the posterior if our data generative model is a mixture model with non-negative noise, e.g. a Poisson mixture model [9]. We extend the Poisson mixture model by including an additional statistical description of stimulus intensity. The Gamma distribution is a natural choice due to its conjugacy with the Poisson distribution. Let each of the D elements in the vector ⃗y|z, c, θ (e.g.
pixels in an image) be independent and Poisson-distributed, let z|c, θ be Gamma-distributed, and let the prior of each class be uniform:

  Pr(⃗y|c, z, θ) = ∏_{d=1}^{D} Pois(y_d; zW_cd);   Pr(z|c, θ) = Gam(z; α_c, β_c);   Pr(c|θ) = 1/C,

where W, α, and β together represent the parameters of the model. To avoid ambiguity in scales, we constrain the weights of the model to sum to one, Σ_d W_cd = 1. We call this generative model a Product-Poisson-Gamma (PPG) model. While the multiplicative interaction between features and the intensity or gain variable is reminiscent of the Gaussian Scale Mixture (GSM) generative model, note that the PPG has a separate intensity distribution for each of the classes; each is a Gamma distribution with a (possibly unique) shape parameter α_c and rate parameter β_c. Furthermore, the non-Gaussian observation noise is critical for deriving the circuit dynamics. The model is general and flexible, yet it is sufficiently constrained to allow for closed-form joint posteriors.

1We use images as inputs and intensity as a measure of input gain as a running example. Our arguments apply regardless of the type of sensory input, e.g. the volume of a sound or the concentration of an odor.

As shown in Appendix A, the joint posterior of the class and intensity is:

  Pr(c, z|⃗y, θ) = [ NB(ŷ; α_c, 1/(β_c+1)) exp(Σ_d y_d ln W_cd) / Σ_{c′} NB(ŷ; α_{c′}, 1/(β_{c′}+1)) exp(Σ_d y_d ln W_{c′d}) ] · Gam(z; α_c + ŷ, β_c + 1),

where ŷ = Σ_d y_d and NB denotes the negative binomial distribution. We also obtain a closed-form expression for the posterior marginalized over z, which takes the form of a softmax function weighted by negative binomials:

  Pr(c|⃗y, θ) = NB(ŷ; α_c, 1/(β_c+1)) exp(Σ_d y_d ln W_cd) / Σ_{c′} NB(ŷ; α_{c′}, 1/(β_{c′}+1)) exp(Σ_{d′} y_{d′} ln W_{c′d′}).   (2)

This is a straightforward generalization of the standard softmax used for optimal learning in winner-take-all (WTA) networks [2, 8, 9, 11] and WTA-based microcircuits [18]. Note that Eqn.
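For intuition, Eqn. 2 can be evaluated directly in the log domain. The following sketch is our own illustrative code: it writes the NB term as the Gamma-Poisson marginal of the total count and computes Pr(c|⃗y) for toy parameters (all values below are assumptions, not the paper's settings).

```python
import math

def log_nb(k, alpha, beta):
    """log NB(k; alpha, p) with p = 1/(beta+1): the marginal of a Poisson
    count whose rate is Gamma(alpha, rate=beta) distributed."""
    return (math.lgamma(k + alpha) - math.lgamma(alpha) - math.lgamma(k + 1)
            + alpha * math.log(beta / (beta + 1)) - k * math.log(beta + 1))

def ppg_posterior(y, W, alpha, beta):
    """Pr(c | y) per Eqn. 2: softmax over classes of
    log NB(y_hat; alpha_c, 1/(beta_c+1)) + sum_d y_d log W_cd."""
    yhat = sum(y)
    scores = []
    for c in range(len(W)):
        s = log_nb(yhat, alpha[c], beta[c])
        s += sum(yd * math.log(Wcd) for yd, Wcd in zip(y, W[c]))
        scores.append(s)
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]
```

With two classes of identical intensity statistics but mirrored shapes, an input concentrated on the first pixel is assigned to the first class with near-certainty.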
2 represents the optimal way to integrate evidence for class membership originating from stimulus intensity (parameterized by ⃗α and ⃗β) and pattern 'shape' (parameterized by W). If one of the two is not instructive, the corresponding terms cancel out: if the patterns have identical shapes (W with identical rows), the softmax drops out and only the negative binomial terms remain, and if all pattern classes have the same intensity distribution, the posterior reduces to the standard softmax function as in previous work [2, 8–11]. To facilitate the link to neural dynamics, Eqn. 2 can be simplified by approximating the negative binomial distribution as a Poisson. In the limit α_c → ∞ with the mean λ_c ≡ α_c/β_c held constant, the negative binomial distribution becomes

  lim_{α_c→∞, α_c/β_c=const.} NB(ŷ; α_c, 1/(β_c+1)) = Pois(ŷ; α_c/β_c) ≡ Pois(ŷ; λ_c).

In this limit, Eqn. 2 becomes

  Pr(c|⃗y, θ) ≈ exp(Σ_{d′} y_{d′} ln(W_{cd′} λ_c) − λ_c) / Σ_{c′} exp(Σ_{d′} y_{d′} ln(W_{c′d′} λ_{c′}) − λ_{c′}),   (3)

which can be evaluated by a neural network using soft-WTA dynamics [9].

3 Expectation-Maximization of PPG-generated data

As a starting point for deriving a biologically plausible neural network that learns from PPG-generated data, let us first consider optimal learning as derived from the Expectation-Maximization (EM) algorithm [19]. Given a set of N data points ⃗y^(n), we seek the parameters θ = {W, λ} that maximize the data likelihood under the PPG model defined above. We use the EM formulation introduced in [20] and optimize the free energy

  F(θ_t, θ_{t-1}) = Σ_n Σ_{c′} Pr(c′|⃗y^(n), θ_{t-1}) ( ln Pr(⃗y^(n)|c′, θ_t) + ln Pr(c′|θ_t) ) + H(θ_{t-1}).

Here, H(θ_{t-1}) is the Shannon entropy of the posterior as a function of the previous parameter values. We can find the M-step update rules for the parameters of the model, λ_c and W_cd, by taking the partial derivative of F(θ_t, θ_{t-1}) w.r.t. the desired parameter and setting it to zero.
As shown in Appendix B, the resulting update rule for λ_{c,t} is:

  ∂F(θ_t, θ_{t-1}) / ∂λ_{c,t} = 0  ⇒  λ_{c,t} = Σ_n Pr(c|⃗y^(n), θ_{t-1}) ŷ^(n) / Σ_n Pr(c|⃗y^(n), θ_{t-1}).   (4)

The M-step update rules for the weights W_cd are found by setting the corresponding partial derivative of F(θ_t, θ_{t-1}) to zero, under the constraint Σ_d W_cd = 1. Using Lagrange multipliers Λ_c yields the following update rule (see Appendix B):

  ∂F(θ_t, θ_{t-1}) / ∂W_{cd,t} + ∂/∂W_{cd,t} [ Σ_{c′} Λ_{c′} ( Σ_{d′} W_{c′d′,t} − 1 ) ] = 0  ⇒  W_{cd,t} = Σ_n y_d^(n) Pr(c|⃗y^(n), θ_{t-1}) / Σ_d Σ_n y_d^(n) Pr(c|⃗y^(n), θ_{t-1}).   (5)

As numerical verification, Figure 1 illustrates the evolution of the parameters λ_c and W_cd yielded by the EM algorithm on artificial data. Our artificial data set consists of four classes of rectangles on a grid of 10x10 pixels. Rectangles from different classes have different sizes and positions and are represented by a generative vector W^gen_c. We generate a data set by drawing a large number N of observations of W^gen_c, with each class equiprobable. For each observation we then draw a random variable z from a Gamma distribution with parameters α_c and β_c that depend on the class of the observation. Then, given W^gen_c and z, we create a data vector ⃗y^(n) by adding Poisson noise to each pixel. With a set of N data vectors ⃗y^(n), we then perform EM to find the parameters W_cd and λ_c that maximize the likelihood of the data set (at least locally). The E-step evaluates Equation 2 for each data vector, and the M-step evaluates Equations 4 and 5. Figure 1 shows that, after about five iterations, the EM algorithm returns the values of W_cd and λ_c that were used to generate the data set, i.e. the parameter values that maximize the data likelihood.

Figure 1: The evolution of model parameters yielded by the EM algorithm on artificial data. A: Four classes of rectangles represented by the vector W^gen_c, with the values of λ_c for each class displayed to the left. B: Evolution of the parameters W_cd for successive iterations of the EM algorithm.
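The E-step (using the Poisson approximation of Eqn. 3) and the M-steps (Eqns. 4 and 5) can be combined into a compact batch EM loop. The following sketch generates toy PPG data and recovers the class intensities; all parameter choices (two classes, four dimensions, near-uniform initialization) are our own illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy PPG data (illustrative parameters) ---
W_true = np.array([[0.4, 0.4, 0.1, 0.1], [0.1, 0.1, 0.4, 0.4]])
alpha, beta = np.array([100.0, 100.0]), np.array([2.0, 1.0])  # lambda_true = [50, 100]
N, C, D = 400, 2, 4
c_true = rng.integers(0, C, N)
z = rng.gamma(alpha[c_true], 1.0 / beta[c_true])  # numpy gamma takes shape, scale=1/rate
Y = rng.poisson(z[:, None] * W_true[c_true])

# --- EM: E-step via Eqn. 3, M-steps via Eqns. 4 and 5 ---
W = np.full((C, D), 1.0 / D) * (1 + 0.01 * rng.standard_normal((C, D)))
W /= W.sum(1, keepdims=True)
lam = np.array([0.8, 1.2]) * Y.sum(1).mean()      # crude initial intensities
for _ in range(50):
    logp = Y @ np.log(W * lam[:, None]).T - lam   # E-step scores (Eqn. 3)
    logp -= logp.max(1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(1, keepdims=True)                  # posteriors Pr(c|y)
    lam = (r * Y.sum(1, keepdims=True)).sum(0) / r.sum(0)   # Eqn. 4
    W = r.T @ Y
    W /= W.sum(1, keepdims=True)                  # Eqn. 5
```

The learned intensities approach the generative values λ = [50, 100], and each learned weight row remains normalized, mirroring the constraint Σ_d W_cd = 1.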
C: Evolution of the parameters λc, with dashed lines indicating the values from the generative model. The EM algorithm returns the values of Wcd and λc that were used to generate the data set, i.e. the parameter values that maximize the data likelihood. For these plots, we generated a data set of 2000 inputs. W^gen_c = 100 for white pixels and 1 for black pixels. The shape and rate parameters of the Gamma distributions, from the top class to the bottom, are α = [98, 112, 128, 144] and β = [7, 7.5, 8, 8.5], giving λc = αc/βc = [14, 15, 16, 17].

4 Optimal neural learning for varying stimulus intensities

For PPG-generated data, the posterior distribution of the class given an observation is approximately the softmax function (or soft-WTA, Eqn. 3). Neural networks that implement the softmax function, usually via some form of lateral inhibition, have been extensively investigated [2, 8–11, 21]. Thus, inference in our model reduces to well-understood neural circuit dynamics. The key remaining challenge is to analytically relate optimal learning, as derived by EM, to circuit plasticity. To map abstract random variables to neural counterparts, we consider a complete bipartite neural network, with the input layer corresponding to the observables y and the hidden layer representing the latent causes of the observables, i.e. the classes.2 The network is feedforward; each neuron in the input layer connects to each neuron in the hidden layer via synaptic weights Wcd, where c ∈ [1, C] indexes the C hidden neurons and d ∈ [1, D] indexes the D input neurons. Let each of the hidden neurons have a standard activity variable, sc, and additionally an intrinsic parameter λc that represents its excitability. Let the activity of each hidden neuron be given by Eqn. 2.

2The number of hidden neurons does not necessarily need to equal the number of classes; see Figure 3.
The activity of each hidden neuron is then the posterior probability of one particular class, given the inputs it receives from the input layer, its synaptic weights, and its excitability:

  s_c = exp(I_c) / Σ_{c′} exp(I_{c′});   I_c = Σ_{d′} y_{d′} ln(W_{cd′} λ_c) − λ_c.

The weights of the neural network W_cd are plastic and change according to a Hebbian learning rule with synaptic scaling [22]:

  ∆W_cd = ε_W ( s_c y_d − s_c λ_c W̄_c W_cd ),   (6)

where ε_W is a small, positive learning rate and W̄_c = Σ_d W_cd. The intrinsic parameters λ_c are also plastic and change according to a similar learning rule:

  ∆λ_c = ε_λ s_c ( Σ_d y_d − λ_c ),   (7)

where ε_λ is another small positive learning rate. This type of excitability regulation is homeostatic in form, but differs from standard implementations in that the excitability changes depending not only on the neuron's output, s, but also on the net input to the neuron (see also [17] for a formal link between Σ_d y_d and average incoming inputs). Appendix C shows that these online update rules enforce the desired weight normalization, with W̄_c converging to one. Assuming weight convergence, a small learning rate, and a large set of data points, the weights and intrinsic parameters converge to (see [9] and Appendix C):

  W^conv_cd ≈ Σ_n y_d^(n) s_c / Σ_{d′} Σ_n y_{d′}^(n) s_c;   λ^conv_c = Σ_n s_c ŷ^(n) / Σ_n s_c.

Comparing these convergence expressions with the EM updates (Eqns. 5 and 4) and inserting the definition s_c = Pr(c|⃗y, θ), we see that the neural dynamics given in Eqns. 6 and 7 have the same fixed points as optimal EM learning. The network can therefore find the parameter values that optimize the data likelihood using compact, neurally plausible learning rules. Eqn. 6 is a standard form of Hebbian plasticity with synaptic scaling, while Eqn. 7 states how the excitability of hidden neurons should be governed by the gain of the inputs and the current into the neuron.
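The online rules (Eqns. 6 and 7) can be sketched on the same kind of toy PPG data; the initial values, learning rates, and data parameters below are our own illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy PPG data: two classes with shapes W_true and mean intensities 50 and 100.
W_true = np.array([[0.4, 0.4, 0.1, 0.1], [0.1, 0.1, 0.4, 0.4]])
c_true = rng.integers(0, 2, 1200)
z = rng.gamma(100.0, 1.0 / np.array([2.0, 1.0])[c_true])  # lambda_true = [50, 100]
Y = rng.poisson(z[:, None] * W_true[c_true])

C, D = 2, 4
eps_w, eps_l = 0.001, 0.02
W = np.full((C, D), 1.0 / D) * (1 + 0.01 * rng.standard_normal((C, D)))
W /= W.sum(1, keepdims=True)          # rows start normalized
lam = np.array([60.0, 90.0])          # crude initial excitabilities

for y in Y:
    I = (y * np.log(W * lam[:, None])).sum(1) - lam   # input currents I_c
    s = np.exp(I - I.max())
    s /= s.sum()                                      # soft-WTA activities s_c
    Wbar = W.sum(1)
    W += eps_w * (s[:, None] * y - (s * lam * Wbar)[:, None] * W)  # Eqn. 6
    lam += eps_l * s * (y.sum() - lam)                             # Eqn. 7
```

After one pass over the data, the excitabilities track the class-specific mean intensities and the synaptic-scaling term keeps each row sum W̄_c near one, consistent with the fixed-point analysis above.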
5 Numerical Experiments

To verify our analytical results, we first investigated learning in the derived neural network using data generated according to the PPG model. Figure 2 illustrates the evolution of the parameters λc and Wcd yielded by the neural network on artificial data (the same as used for Figure 1). The neural network learns the synaptic weights and intrinsic parameters that were used to generate the data set, i.e. the parameter values that maximize the data likelihood. Since our artificial data was PPG-generated, one can expect the neural network to learn the classes and intensities quickly and accurately. To test the neural network on more realistic data, we followed a number of related studies [8–12] and used MNIST as a standard dataset containing different stimulus classes. The input to the network was 28x28-pixel images (converted to vectors) from the MNIST dataset. We present our results for the digits 0-3 for visual ease and simulation speed; our results on the full dataset are qualitatively similar. We added an offset of 1 to all pixels and rescaled them so that no pixel was greater than 1. The λc were initialized to the mean intensity of all digit classes as calculated from our modified MNIST training set. Each Wcd was initialized as Wcd ∼ Pois(Wcd; µd) + 1, where µd is the mean of each pixel over all classes, calculated from our modified MNIST training set. Figure 3 shows an example run using C = 16 hidden neurons. It shows the change in both the neural weights and the intrinsic excitabilities λc during learning. We observe that the weights change to represent the digit classes and converge relatively quickly (panels A, B). We verified that they sum to 1

Figure 2: The evolution of model parameters yielded by the neural network on artificial data generated from the same model as that used in Figure 1. A: Four classes of rectangles with the values of λc for each class displayed to the left.
B: Evolution of the synaptic weights Wcd that feed each hidden unit after 0, 20, 40, . . . , 120 time steps, respectively. C: Evolution of the intrinsic parameters λc over 4000 time steps, with dashed lines indicating the values from the generative model. The neural network returns the values of Wcd and λc that were used to generate the data set, i.e. the parameter values that maximize the data likelihood. For these plots, ϵW = ϵλ = .005, D = 100 (for a 10x10 pixel grid), C = 4, initialized weights were uniformly-distributed between .01 and .06, and initialized intrinsic parameters were uniformly-distributed between 10 and 20. Figure 3: The neural network’s performance on a reduced MNIST dataset (the digits 0 to 3). A: Representatives of the input digits. B: The network’s synaptic weights during training. Each square represents the weights feeding one hidden neuron. Each box of 16 squares represents the weights feeding each of the C = 16 hidden neurons after initialization, and after subsequent iterations over the training set. The network learns different writing styles for different digits. C: The network learns the average intensities, i.e. the sum of the pixels in an image, of each class of digit in MNIST. Algorithms that impose ad hoc intensity normalization in their preprocessing cannot learn these intensities. The horizontal dashed lines are the average intensities of each digit, with 1 having the lowest overall luminance and 0 the largest. The average λc for all hidden units representing a given digit converge to those ground truth values. D: The network’s learned intensity differences improve classification performance. The percentage of correct digit classifications by a network with IP (solid lines) is higher than that by a network without IP (dashed lines). This result is robust to the number of iterations over the dataset and the number of labels used to calculate the Bayesian classifier used in [9]. 6 for each class at convergence (not shown). 
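The preprocessing and initialization described above for the MNIST experiment are simple to state in code. A minimal numpy sketch (helper names are ours; the text only specifies the offset, the rescaling, and the Poisson-based weight initialization):

```python
import numpy as np

def preprocess(images):
    """Add an offset of 1 to all pixels, then rescale so that no pixel
    exceeds 1; this keeps every input strictly positive, as the
    Poisson observation model requires."""
    x = np.asarray(images, dtype=float) + 1.0
    return x / x.max()

def initialize(data, n_hidden, rng):
    """lam_c: mean total intensity of the (preprocessed) training set.
    W_cd ~ Pois(mu_d) + 1, where mu_d is the mean of pixel d."""
    mu = data.mean(axis=0)                          # per-pixel mean
    lam = np.full(n_hidden, data.sum(axis=1).mean())
    W = rng.poisson(mu, size=(n_hidden, data.shape[1])).astype(float) + 1.0
    return W, lam
```

The +1 in the weight initialization guarantees strictly positive initial weights, so the log terms in the neuron's net input are well defined from the first step.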
We also observe that the network's IP dynamics allow it to learn the average intensities of each class of digit (panel C). The thin horizontal dashed lines are the true values for $\lambda_c$ as calculated from the MNIST test set using its ground-truth label information. IP drives the network's excitability parameters $\lambda_c$ to converge to their true values. Our network is not only robust to variations in intensity, but learns their class-specific values. A network that learns the excitability parameters $\lambda_c$ exhibits a higher classification rate than a network without IP (panel D). We computed the performance of the network derived in Sec. 4 on unnormalized data in comparison with a network without IP (all else being equal). As a performance measure we used the classification error (computed using the same Bayesian classifier as in [9]). Classification success rates were calculated with very few labels, using 0.5% (thin lines) and 5% (thick lines) of the labels in the training set (both settings for both networks). The network with IP outperforms the network without it. This result suggests that the differences in intensities in MNIST, albeit visually small, are sufficient to aid classification. Finally, Figure 4 shows that the neural network can learn classes that differ only in their intensities. The dataset used for Figure 4 comprises 40000 images of two types of sphere: dull and shiny. The spheres were identical in shape and position, and we generated data points (i.e. images) under a variety of lighting conditions. On average, the shiny spheres were brighter ($\lambda_{\text{shiny}} \approx 720$) than the dull spheres ($\lambda_{\text{dull}} \approx 620$). The network represents the two classes in its learned weights and intensities. Algorithms that utilize ad hoc normalization preprocessing schemes would have serious difficulties learning input statistics for datasets of this kind.

Figure 4: The neural network can learn classes that differ only in their intensities.
The dataset consisted of either dull or shiny spheres. The network had C = 2 hidden neurons. A: Three pairs of squares represent the weights feeding each hidden neuron after initialization (leftmost pair), 10 iterations (center pair), and 200 iterations (rightmost pair) over the training set. Note the rightmost pair, particularly how the right sphere appears brighter than the left sphere. The right sphere corresponds to the shiny class and the left sphere to the dull class. B: Learned mean intensities as a function of iterations over the training set. The dull spheres have an average intensity of 620, and the shiny spheres 720. The network learns the classes and their average intensities, even when data points from different classes have the same sizes and positions. 6 Discussion Neural circuit models are powerful tools for understanding neural learning and information processing. They have attracted attention as inherently parallel information processing devices for analog VLSI, a fast and power-efficient alternative to standard processor architectures [12,23]. Much work has investigated learning with winner-take-all (WTA) type networks [2, 8–12, 18, 21, 24]. A subset of these studies [2, 8–11, 21] link synaptic plasticity in WTA networks to optimal learning, mostly using mixture distributions to model input stimuli [8–11, 21]. Our contribution expands on these results both computationally, by allowing for a robust treatment of variability in input gain, and biologically, by providing a normative justification for intrinsic plasticity during learning. Our analytical results show that the PPG-generative model is tractable and neurally-implementable, while our numerical results show that it is flexible and robust. Our model provides a principled treatment of intensity variations, something ubiquitous in realistic datasets. As a result, it allows for robust learning without requiring normalized input data. 
This addresses the criticisms (see [10]) of earlier WTA-like circuits [8,9] that required normalized data. We found that explicitly accounting for intensity improves classification performance even for datasets that have been size-normalized (e.g. MNIST), presumably by providing an additional dimension for discriminating across latent features. Furthermore, we found that the learned representation of the MNIST data allows for good classification in a semi-supervised setting, when only a small fraction of the data is labeled. Thus, our model provides a starting point for constructing novel clustering and classification algorithms following the general approach in [9]. The treatment of intensity as an explicit variable is not new. The well-investigated class of Gaussian Scale Mixtures (GSM) is built on that idea. Nonetheless, while GSM and PPG share some conceptual similarities, they are mathematically distinct. While GSMs assume 1) Gaussian distributed random variables and 2) a common scale variable [25], PPG assumes 1') Poisson observation noise and 2') class-specific scale variables. Consequently, none of the GSM results carry over to our work, and our PPG assumptions are critical for our derived intrinsic plasticity and Hebbian plasticity rules. It would be interesting to investigate a circuit analog of intensity parameter learning in a GSM. Since this class of models is known to capture many features of afferent sensory neurons, we might make more specific predictions concerning IP in V1. It would also be interesting to compare the classification performance of a GSM with that of PPG on the same dataset. The nature of the GSM generative model (linear combination of features with multiplicative gain modulation) makes it an unusual choice for a classification task. However, in principle, one could use a GSM to learn a representation of a dataset and train a classifier on it.
The optimal circuit implementation of learning in our generative model requires a particular form of IP. The formulation of IP is a phenomenological one, reflecting the biological observation that the excitability of a neuron changes in a negative feedback loop as a function of past activity [14, 15]. Mathematically, our model shares similarities with past IP models [6, 10, 17] with the important difference that the controlled variable is the input current, rather than the output firing rate. Since the two quantities are closely related, we expect it will be difficult to directly disambiguate between IP models experimentally. Nonetheless, our model makes potentially testable predictions in terms of the functional role of IP, by directly linking the excitability of individual neurons to nontrivial statistics of their inputs, namely their average intensity under a Gamma distribution. Since past IP work invariably assumes the target excitability is a fixed parameter, usually shared across neurons, the link between neural excitability and real world statistics is very specific to our model and potentially testable experimentally. Furthermore, our work provides a computational rationale for the dramatic variations in excitability across neurons, even within a local cortical circuit, which could not be explained by traditional models. The functional role for IP identified here complements previous proposals linking the regulation of neuronal excitability to learning priors [11] or as posterior constraints [10, 26]. Ultimately, it is likely that the role of IP is manifold. Recent theoretical work suggests that the net effect of inputs on neural excitability may arise as a complex interaction between several forms of IP, some homeostatic and others not [17]. Furthermore, different experimental paradigms may preferentially expose one IP process over the others, which would explain the confusion within the literature on the exact nature of biological IP. 
Taken together, these models point to a fundamental role of IP for circuit computation in a variety of setups. Given its many possible roles, any approach based on first principles is valuable, as it tightly connects IP to concrete stimulus properties in a way that can translate into better-constrained experiments.

Acknowledgements. We acknowledge funding by the DFG within the Cluster of Excellence EXC 1077/1 (Hearing4all) and by grant LU 1196/5-1 (JL and TM) and the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme (FP7/2007-2013) under REA grant agreement no. 291734 (CS).

References
[1] L F Abbott and S B Nelson. Synaptic plasticity: taming the beast. Nat Neurosci, 3:1178–1183, 2000.
[2] J Lücke and M Sahani. Maximal causes for non-linear component extraction. J Mach Learn Res, 9:1227–67, 2008.
[3] C J Rozell, D H Johnson, R G Baraniuk, and B A Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Comput, 20(10):2526–63, October 2008.
[4] J Lücke. Receptive field self-organization in a model of the fine-structure in V1 cortical columns. Neural Comput, 21(10):2805–45, 2009.
[5] J Zylberberg, J T Murphy, and M R Deweese. A Sparse Coding Model with Synaptically Local Plasticity and Spiking Neurons Can Account for the Diverse Shapes of V1 Simple Cell Receptive Fields. PLoS Comp Biol, 7(10):e1002250, 2011.
[6] C Savin, P Joshi, and J Triesch. Independent Component Analysis in Spiking Neurons. PLoS Comp Biol, 6(4):e1000757, April 2010.
[7] E Oja. A simplified neuron model as a principal component analyzer. J Math Biol, 15:267–273, 1982.
[8] B Nessler, M Pfeiffer, and W Maass. STDP enables spiking neurons to detect hidden causes of their inputs. In Adv Neural Inf Process Syst, pages 1357–1365, 2009.
[9] C Keck, C Savin, and J Lücke. Feedforward inhibition and synaptic scaling–two sides of the same coin? PLoS Comp Biol, 8(3):e1002432, 2012.
[10] S Habenschuss, J Bill, and B Nessler.
Homeostatic plasticity in Bayesian spiking networks as expectation maximization with posterior constraints. In Adv Neural Inf Process Syst, pages 773–781, 2012.
[11] B Nessler, M Pfeiffer, L Buesing, and W Maass. Bayesian computation emerges in generic cortical microcircuits through spike-timing-dependent plasticity. PLoS Comp Biol, 9(4):e1003037, 2013.
[12] M Schmuker, T Pfeil, and M P Nawrot. A neuromorphic network for generic multivariate data classification. Proc Natl Acad Sci, 111(6):2081–2086, 2014.
[13] O Schwartz and E P Simoncelli. Natural sound statistics and divisive normalization in the auditory system. Adv Neural Inf Process Syst, pages 166–172, 2000.
[14] G Daoudal and D Debanne. Long-term plasticity of intrinsic excitability: learning rules and mechanisms. Learn Memory, 10(6):456–465, 2003.
[15] R H Cudmore and G G Turrigiano. Long-term potentiation of intrinsic excitability in LV visual cortical neurons. J Neurophysiol, 92(1):341–348, 2004.
[16] M Stemmler and C Koch. How voltage-dependent conductances can adapt to maximize the information encoded by neuronal firing rate. Nat Neurosci, 2(6):521–527, 1999.
[17] C Savin, P Dayan, and M Lengyel. Optimal Recall from Bounded Metaplastic Synapses: Predicting Functional Adaptations in Hippocampal Area CA3. PLoS Comp Biol, 10(2):e1003489, February 2014.
[18] Rodney J Douglas and Kevan AC Martin. Neuronal circuits of the neocortex. Annu Rev Neurosci, 27:419–451, 2004.
[19] A P Dempster, N M Laird, and D B Rubin. Maximum likelihood from incomplete data via the EM algorithm (with discussion). J R Stat Soc Series B, 39:1–38, 1977.
[20] R Neal and G Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models. Kluwer, 1998.
[21] D J Rezende, D Wierstra, and W Gerstner. Variational learning for recurrent spiking networks. Adv Neural Inf Process Syst, pages 136–144, 2011.
[22] L F Abbott and S B Nelson.
Synaptic plasticity: taming the beast. Nat Neurosci, 3(Supp):1178–1183, November 2000.
[23] E Neftci, J Binas, U Rutishauser, E Chicca, G Indiveri, and R J Douglas. Synthesizing cognition in neuromorphic electronic systems. Proc Natl Acad Sci, 110(37):E3468–E3476, 2013.
[24] J Lücke and C von der Malsburg. Rapid processing and unsupervised learning in a model of the cortical macrocolumn. Neural Comput, 16:501–33, 2004.
[25] M J Wainwright, E P Simoncelli, and A S Willsky. Random cascades on wavelet trees and their use in analyzing and modeling natural images. Appl Comput Harmon Anal, 11(1):89–123, 2001.
[26] S Deneve. Bayesian spiking neurons I: inference. Neural Comput, 20(1):91–117, 2008.
Disease Trajectory Maps Peter Schulam Dept. of Computer Science Johns Hopkins University Baltimore, MD 21218 pschulam@cs.jhu.edu Raman Arora Dept. of Computer Science Johns Hopkins University Baltimore, MD 21218 arora@cs.jhu.edu Abstract Medical researchers are coming to appreciate that many diseases are in fact complex, heterogeneous syndromes composed of subpopulations that express different variants of a related complication. Longitudinal data extracted from individual electronic health records (EHR) offer an exciting new way to study subtle differences in the way these diseases progress over time. In this paper, we focus on answering two questions that can be asked using these databases of longitudinal EHR data. First, we want to understand whether there are individuals with similar disease trajectories and whether there are a small number of degrees of freedom that account for differences in trajectories across the population. Second, we want to understand how important clinical outcomes are associated with disease trajectories. To answer these questions, we propose the Disease Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional representations of sparse and irregularly sampled longitudinal data. We propose a stochastic variational inference algorithm for learning the DTM that allows the model to scale to large modern medical datasets. To demonstrate the DTM, we analyze data collected on patients with the complex autoimmune disease, scleroderma. We find that DTM learns meaningful representations of disease trajectories and that the representations are significantly associated with important clinical outcomes. 1 Introduction Longitudinal data is becoming increasingly important in medical research and practice. This is due, in part, to the growing adoption of electronic health records (EHRs), which capture snapshots of an individual’s state over time. 
These snapshots include clinical observations (apparent symptoms and vital sign measurements), laboratory test results, and treatment information. In parallel, medical researchers are beginning to recognize and appreciate that many diseases are in fact complex, highly heterogeneous syndromes [Craig, 2008] and that individuals may belong to disease subpopulations or subtypes that express similar sets of symptoms over time (see e.g. Saria and Goldenberg [2015]). Examples of such diseases include asthma [Lötvall et al., 2011], autism [Wiggins et al., 2012], and COPD [Castaldi et al., 2014]. The data captured in EHRs can help us better understand these complex diseases. EHRs contain many types of observations, and the ability to track their progression can help bring into focus the subtle differences across individual disease expression. In this paper, we focus on two exploratory questions that we can begin to answer using longitudinal EHR data. First, we want to discover whether there are individuals with similar disease trajectories and whether there are a small number of degrees of freedom that account for differences across a heterogeneous population. A low-dimensional characterization of trajectories and how they differ can yield insights into the biological underpinnings of the disease. In turn, this may motivate new targeted therapies. In the clinic, physicians can analyze an individual's clinical history to estimate the low-dimensional representation of the trajectory and can use this knowledge to make more accurate prognoses and guide treatment decisions by comparing against representations of past trajectories. Second, we would like to know whether individuals with similar clinical outcomes (e.g. death, severe organ damage, or development of comorbidities) have similar disease trajectories.
In complex diseases, individuals are often at risk of developing a number of severe complications and clinicians rarely have access to accurate prognostic biomarkers. Discovering associations between target outcomes and trajectory patterns may both generate new hypotheses regarding the causes of these outcomes and help clinicians to better anticipate the event using an individual's clinical history.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Contributions. Our approach to simultaneously answering these questions is to embed individual disease trajectories into a low-dimensional vector space wherein similarity in the embedded space implies that two individuals have similar trajectories. Such a representation would naturally answer our first question, and could also be used to answer the second by comparing distributions over representations across groups defined by different outcomes. To learn these representations, we introduce a novel probabilistic model of longitudinal data, which we term the Disease Trajectory Map (DTM). In particular, the DTM models the trajectory over time of a single clinical marker, which is an observation or measurement recorded over time by clinicians that is used to track the progression of a disease (see e.g. Schulam et al. [2015]). Examples of clinical markers are pulmonary function tests or creatinine laboratory test results, which track lung and kidney function respectively. The DTM discovers low-dimensional (e.g. 2D or 3D) latent representations of clinical marker trajectories that are easy to visualize. We describe a stochastic variational inference algorithm for estimating the posterior distribution over the parameters and individual-specific representations, which allows our model to be easily applied to large datasets. To demonstrate the DTM, we analyze clinical marker data collected on individuals with the complex autoimmune disease scleroderma (see e.g. Allanore et al. [2015]).
We find that the learned representations capture interesting subpopulations consistent with previous findings, and that the representations suggest associations with important clinical outcomes not captured by alternative representations.

1.1 Background and Related Work Clinical marker data extracted from EHRs is a by-product of an individual's interactions with the healthcare system. As a result, the time series are often irregularly sampled (the time between samples varies within and across individuals), and may be extremely sparse (it is not unusual to have a single observation for an individual). To aid the following discussion, we briefly introduce notation for this type of data. We use $m$ to denote the number of individual disease trajectories recorded in a given dataset. For each individual, we use $n_i$ to denote the number of observations. We collect the observation times for subject $i$ into a column vector $t_i \triangleq [t_{i1}, \ldots, t_{in_i}]^\top$ (sorted in non-decreasing order) and the corresponding measurements into a column vector $y_i \triangleq [y_{i1}, \ldots, y_{in_i}]^\top$. Our goal is to embed the pair $(t_i, y_i)$ into a low-dimensional vector space wherein similarity between two embeddings $x_i$ and $x_j$ implies that the trajectories have similar shapes. This is commonly done using basis representations of the trajectories.

Fixed basis representations. In the statistics literature, $(t_i, y_i)$ is often referred to as unbalanced longitudinal data, and is commonly analyzed using linear mixed models (LMMs) [Verbeke and Molenberghs, 2009]. In their simplest form, LMMs assume the following probabilistic model:
$$w_i \mid \Sigma \sim \mathcal{N}(0, \Sigma), \qquad y_i \mid B_i, w_i, \mu, \sigma^2 \sim \mathcal{N}(\mu + B_i w_i, \sigma^2 I_{n_i}). \tag{1}$$
The matrix $B_i \in \mathbb{R}^{n_i \times d}$ is known as the design matrix, and can be used to capture non-linear relationships between the observation times $t_i$ and measurements $y_i$. Its rows are comprised of $d$-dimensional basis expansions of each observation time: $B_i = [b(t_{i1}), \cdots, b(t_{in_i})]^\top$.
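Because the prior and likelihood in the LMM are conjugate, the posterior over an individual's coefficients $w_i$ is Gaussian and available in closed form. A minimal numpy sketch (our own illustration, with a polynomial basis as one possible choice of $b(\cdot)$):

```python
import numpy as np

def design_matrix(t, degree=2):
    """Rows are basis expansions b(t) = [1, t, ..., t^degree]."""
    return np.vander(np.asarray(t, dtype=float), degree + 1, increasing=True)

def coefficient_posterior(t, y, mu, Sigma, sigma2, degree=2):
    """Posterior N(w_i | m, V) under Eqn. (1):
    V = (Sigma^{-1} + B^T B / sigma2)^{-1},  m = V B^T (y - mu) / sigma2."""
    B = design_matrix(t, degree)
    V = np.linalg.inv(np.linalg.inv(Sigma) + B.T @ B / sigma2)
    m = V @ B.T @ (np.asarray(y, dtype=float) - mu) / sigma2
    return m, V
```

The posterior mean m is the embedding of the trajectory $(t_i, y_i)$; with only a handful of observations, V stays close to the prior covariance, reflecting how little a sparse trajectory pins down the coefficients.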
Common choices of $b(\cdot)$ include polynomials, splines, wavelets, and Fourier series. The particular basis used is often carefully crafted by the analyst depending on the nature of the trajectories and on the desired structure (e.g. invariance to translations and scaling) in the representation [Brillinger, 2001]. The design matrix can therefore make the LMM remarkably flexible despite its simple parametric probabilistic assumptions. Moreover, the prior over $w_i$ and the conjugate likelihood make it straightforward to fit $\mu$, $\Sigma$, and $\sigma^2$ using EM or Bayesian posterior inference. After estimating the model parameters, we can estimate the coefficients $w_i$ of a given clinical marker trajectory using the posterior distribution, which embeds the trajectory in a Euclidean space. To flexibly capture complex trajectory shapes, however, the basis must be high-dimensional, which makes interpretability of the representations challenging. We can use low-dimensional summaries such as the projection onto a principal subspace, but these are not necessarily substantively meaningful. Indeed, much research has gone into developing principal direction post-processing techniques (e.g. Kaiser [1958]) or alternative estimators that enhance interpretability (e.g. Carvalho et al. [2012]).

Data-adaptive basis representations. A set of related, but more flexible, techniques comes from functional data analysis, where observations are functions (i.e. trajectories) assumed to be sampled from a stochastic process and the goal is to find a parsimonious representation for the data [Ramsay et al., 2002]. Functional principal component analysis (FPCA), one of the most popular techniques in functional data analysis, expresses functional data in the orthonormal basis given by the eigenfunctions of the auto-covariance operator. This representation is optimal in the sense that no other representation captures more variation [Ramsay, 2006].
The idea itself can be traced back to early independent work by Karhunen and Loeve and is also referred to as the Karhunen-Loeve expansion [Watanabe, 1965]. While numerous variants of FPCA have been proposed, the one that is most relevant to the problem at hand is that of sparse FPCA [Castro et al., 1986, Rice and Wu, 2001] where we allow sparse, irregularly sampled data as is common in longitudinal data analysis. To deal with the sparsity, Rice and Wu [2001] used LMMs to model the auto-covariance operator. In very sparse settings, however, LMMs can suffer from numerical instability of covariance matrices in high dimensions. James et al. [2000] addressed this by constraining the rank of the covariance matrices—we will refer to this model as the reduced-rank LMM, but note that it is a variant of sparse FPCA. Although sparse FPCA represents trajectories using a data-driven basis, the basis is restricted to lie in a linear subspace of a fixed basis, which may be overly restrictive. Other approaches to learning a functional basis include Bayesian estimation of B-spline parameters (e.g. [Bigelow and Dunson, 2012]) and placing priors over reproducing kernel Hilbert spaces (e.g. [MacLehose and Dunson, 2009]). Although flexible, these two approaches do not learn a low-dimensional representation. Cluster-based representations. Mixture models and clustering approaches are also commonly used to represent and discover structure in time series data. Marlin et al. [2012] cluster time series data from the intensive care unit (ICU) using a mixture model and use cluster membership to predict outcomes. Schulam and Saria [2015] describe a probabilistic model that represents trajectories using a hierarchy of features, which includes “subtype” or cluster membership. LMMs have also been extended to have nonparametric Dirichlet process priors over the coefficients (e.g. Kleinman and Ibrahim [1998]), which implicitly induce clusters in the data. 
Although these approaches flexibly model trajectory data, the structure they recover is a partition, which does not allow us to compare all trajectories in a coherent way as we can in a vector space. Lexicon-based representations. Another line of research has investigated the discovery of motifs or repeated patterns in continuous time-series data for the purposes of succinctly representing the data as a string of elements of the discovered lexicon. These include efforts in the speech processing community to identify sub-word units (parts of words comparable to phonemes) in a data-driven manner [Varadarajan et al., 2008, Levin et al., 2013]. In computational healthcare, Saria et al. [2011] propose a method for discovering deformable motifs that are repeated in continuous time series data. These methods are, in spirit, similar to discretization approaches such as symbolic aggregate approximation (SAX) [Lin et al., 2007] and piecewise aggregate approximation (PAA) [Keogh et al., 2001] that are popular in data mining, and aim to find compact descriptions of sequential data, primarily for the purposes of indexing, search, anomaly detection, and information retrieval. The focus in this paper is to learn representations for entire trajectories rather than discover a lexicon. Furthermore, we focus on learning a representation in a vector space where similarities among trajectories are captured through the standard inner product on Rd. 2 Disease Trajectory Maps To motivate Disease Trajectory Maps (DTM), we begin with the reduced-rank LMM proposed by James et al. [2000]. We show that the reduced-rank LMM defines a Gaussian process with a covariance function that linearly depends on trajectory-specific representations. To define DTMs, we then use the kernel trick to make the dependence non-linear. Let µ ∈R be the marginal mean of the observations, F ∈Rd×q be a rank-q matrix, and σ2 be the variance of measurement errors. 
As a reminder, $y_i \in \mathbb{R}^{n_i}$ denotes the vector of observed trajectory measurements, $B_i \in \mathbb{R}^{n_i \times d}$ denotes the individual's design matrix, and $x_i \in \mathbb{R}^q$ denotes the individual's representation. James et al. [2000] define the reduced-rank LMM using the following conditional distribution:
$$y_i \mid B_i, x_i, \mu, F, \sigma^2 \sim \mathcal{N}(\mu + B_i F x_i, \sigma^2 I_{n_i}). \tag{2}$$
They assume an isotropic normal prior over $x_i$ and marginalize to obtain the observed-data log-likelihood, which is then optimized with respect to $\{\mu, F, \sigma^2\}$. As in Lawrence [2004], we instead optimize $x_i$ and marginalize $F$. By assuming a normal prior $\mathcal{N}(0, \alpha I_q)$ over the rows of $F$ and marginalizing we obtain:
$$y_i \mid B_i, x_i, \mu, \sigma^2, \alpha \sim \mathcal{N}(\mu, \alpha \langle x_i, x_i \rangle B_i B_i^\top + \sigma^2 I_{n_i}). \tag{3}$$
Note that by marginalizing over $F$, we induce a joint distribution over all trajectories in the dataset. Moreover, this joint distribution is a Gaussian process with mean $\mu$ and the following covariance function defined across trajectories, which depends on the times $\{t_i, t_j\}$ and representations $\{x_i, x_j\}$:
$$\mathrm{Cov}(y_i, y_j \mid B_i, B_j, x_i, x_j, \mu, \sigma^2, \alpha) = \alpha \langle x_i, x_j \rangle B_i B_j^\top + \mathbb{I}[i = j]\, \sigma^2 I_{n_i}. \tag{4}$$
This reformulation of the reduced-rank LMM highlights that the covariance across trajectories $i$ and $j$ depends on the inner product between the two representations $x_i$ and $x_j$, and suggests that we can non-linearize the dependency with an inner product in an expanded feature space using the "kernel trick". Let $k(\cdot, \cdot)$ denote a non-linear kernel defined over the representations with parameters $\theta$; then we have:
$$\mathrm{Cov}(y_i, y_j \mid B_i, B_j, x_i, x_j, \mu, \sigma^2, \theta) = k(x_i, x_j) B_i B_j^\top + \mathbb{I}[i = j]\, \sigma^2 I_{n_i}. \tag{5}$$
Let $y \triangleq [y_1^\top, \ldots, y_m^\top]^\top$ denote the column vector obtained by concatenating the measurement vectors from each trajectory. The joint distribution over $y$ is a multivariate normal:
$$y \mid B_{1:m}, x_{1:m}, \mu, \sigma^2, \theta \sim \mathcal{N}(\mu, \Sigma_{\mathrm{DTM}} + \sigma^2 I_n), \tag{6}$$
where $\Sigma_{\mathrm{DTM}}$ is a covariance matrix that depends on the times $t_{1:m}$ (through the design matrices $B_{1:m}$) and the representations $x_{1:m}$.
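To make the block structure of (5)–(6) concrete, the following numpy sketch (our own illustration, not the paper's code) assembles the full covariance from per-trajectory design matrices and representations, using an RBF kernel over the representations:

```python
import numpy as np

def rbf(xi, xj, ell=1.0):
    """RBF kernel over the low-dimensional trajectory representations."""
    return np.exp(-0.5 * np.sum((xi - xj) ** 2) / ell ** 2)

def dtm_covariance(Bs, xs, sigma2):
    """Covariance of Eqn. (6): block (i, j) is k(x_i, x_j) B_i B_j^T,
    with sigma2 * I added on the diagonal blocks, as in Eqn. (5)."""
    sizes = [B.shape[0] for B in Bs]
    offsets = np.concatenate(([0], np.cumsum(sizes)))
    K = np.zeros((offsets[-1], offsets[-1]))
    for i, (Bi, xi) in enumerate(zip(Bs, xs)):
        for j, (Bj, xj) in enumerate(zip(Bs, xs)):
            block = rbf(xi, xj) * (Bi @ Bj.T)
            if i == j:
                block = block + sigma2 * np.eye(sizes[i])
            K[offsets[i]:offsets[i + 1], offsets[j]:offsets[j + 1]] = block
    return K
```

Because each block is an outer product of "lifted" features, the assembled matrix is positive definite whenever sigma2 > 0, so it is a valid covariance even for trajectories of different lengths.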
In particular, $\Sigma_{\mathrm{DTM}}$ is a block-structured matrix with $m$ row blocks and $m$ column blocks. The block at the $i$th row and $j$th column is the covariance between $y_i$ and $y_j$ defined in (5). Finally, we place isotropic Gaussian priors over the $x_i$. We use Bayesian inference to obtain a posterior Gaussian process and to estimate the representations. We tune hyperparameters by maximizing the observed-data log-likelihood. Note that our model is similar to the Bayesian GPLVM [Titsias and Lawrence, 2010], but models functional data instead of finite-dimensional vectors.

2.1 Learning and Inference in the DTM As formulated, the model scales poorly to large datasets. Inference within each iteration of an optimization algorithm, for example, requires storing and inverting $\Sigma_{\mathrm{DTM}}$, which requires $O(n^2)$ space and $O(n^3)$ time respectively, where $n \triangleq \sum_{i=1}^m n_i$ is the number of clinical marker observations. For modern datasets, where $n$ can be in the hundreds of thousands or millions, this is unacceptable. In this section, we approximate the log-likelihood using techniques from Hensman et al. [2013] that allow us to apply stochastic variational inference (SVI) [Hoffman et al., 2013].

Inducing points. Recent work in scaling Gaussian processes to large datasets has focused on the idea of inducing points [Snelson and Ghahramani, 2005, Titsias, 2009], which are a relatively small number of artificial observations of a Gaussian process that approximately capture the information contained in the training data. In general, let $f \in \mathbb{R}^m$ denote observations of a GP at inputs $\{x_i\}_{i=1}^m$ and $u \in \mathbb{R}^p$ denote inducing points at inputs $\{z_i\}_{i=1}^p$. Titsias [2009] constructs the inducing points as variational parameters by introducing an augmented probability model:
$$u \sim \mathcal{N}(0, K_{pp}), \qquad f \mid u \sim \mathcal{N}(K_{mp} K_{pp}^{-1} u, \tilde{K}_{mm}), \tag{7}$$
where $K_{pp}$ is the Gram matrix between inducing points, $K_{mm}$ is the Gram matrix between observations, $K_{mp}$ is the cross Gram matrix between observations and inducing points, and $\tilde{K}_{mm} \triangleq K_{mm} - K_{mp} K_{pp}^{-1} K_{pm}$.
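The computational payoff of the inducing-point construction in (7) is that the low-rank-plus-diagonal matrix $K_{mp} K_{pp}^{-1} K_{pm} + \sigma^2 I$ can be inverted with the Woodbury identity at $O(m p^2)$ cost instead of $O(m^3)$. A numpy sketch of this standard trick (illustrative; not the paper's code):

```python
import numpy as np

def rbf_gram(A, Z, ell=1.0):
    """RBF Gram matrix between the rows of A and the rows of Z."""
    d2 = ((A[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def woodbury_inverse(X, Z, sigma2, ell=1.0):
    """(K_mp K_pp^{-1} K_pm + sigma2 I)^{-1} via the Woodbury identity:
    only a p x p linear system is solved instead of an m x m one."""
    Kmp = rbf_gram(X, Z, ell)
    Kpp = rbf_gram(Z, Z, ell) + 1e-8 * np.eye(len(Z))   # jitter for stability
    inner = sigma2 * Kpp + Kmp.T @ Kmp                  # p x p
    m = len(X)
    return np.eye(m) / sigma2 - Kmp @ np.linalg.solve(inner, Kmp.T) / sigma2
```

A quick check against the direct dense inverse confirms the identity on a small problem; the savings grow with m while p stays fixed.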
We can marginalize over $u$ to construct a low-rank approximate covariance matrix, which is computationally cheaper to invert using the Woodbury identity. Alternatively, Hensman et al. [2013] extends these ideas by explicitly maintaining a variational distribution over $u$ that d-separates the observations and satisfies the conditions required to apply SVI [Hoffman et al., 2013]. Let $y_f = f + \epsilon$, where $\epsilon$ is iid Gaussian noise with variance $\sigma^2$; then we use the following inequality to lower bound our data log-likelihood:
$$\log p(y_f \mid u) \ge \sum_{i=1}^m \mathbb{E}_{p(f_i \mid u)}[\log p(y_{f_i} \mid f_i)]. \tag{8}$$
In the interest of space, we refer the interested reader to Hensman et al. [2013] for details.

DTM evidence lower bound. When marginalizing over the rows of $F$, we induced a Gaussian process over the trajectories, but by doing so we also implicitly induced a Gaussian process over the individual-specific basis coefficients. Let $w_i \triangleq F x_i \in \mathbb{R}^d$ denote the basis weights implied by the mapping $F$ and representation $x_i$ in the reduced-rank LMM, and let $w_{:,k}$ for $k \in [d]$ denote the $k$th coefficient of all individuals in the dataset. After marginalizing the $k$th row of $F$ and applying the kernel trick, we see that the vector of coefficients $w_{:,k}$ has a Gaussian process distribution with mean zero and covariance function $\mathrm{Cov}(w_{ik}, w_{jk}) = k(x_i, x_j)$. Moreover, the Gaussian processes across coefficients are statistically independent of one another. To lower bound the DTM log-likelihood, we introduce $p$ inducing points $u_k$ for each vector of coefficients $w_{:,k}$ with shared inducing point inputs $\{z_i\}_{i=1}^p$. To refer to all inducing points simultaneously, we use $U \triangleq [u_1, \ldots, u_d]$ and $u$ to denote the "vectorized" form of $U$ obtained by stacking its columns. Applying (8) we have:
$$\log p(y \mid u, x_{1:m}) \ge \sum_{i=1}^m \mathbb{E}_{p(w_i \mid u, x_i)}[\log p(y_i \mid w_i)] = \sum_{i=1}^m \left( \log \mathcal{N}(y_i \mid \mu + B_i U^\top K_{pp}^{-1} k_i, \sigma^2 I_{n_i}) - \frac{\tilde{k}_{ii}}{2\sigma^2} \mathrm{Tr}[B_i^\top B_i] \right) \triangleq \sum_{i=1}^m \log \tilde{p}(y_i \mid u, x_i), \tag{9}$$
where $k_i \triangleq [k(x_i, z_1), \ldots, k(x_i, z_p)]^\top$ and $\tilde{k}_{ii}$ is the $i$th diagonal element of $\tilde{K}_{mm}$.
We can then construct the variational lower bound on log p(y):

$$\log p(y) \ge \mathbb{E}_{q(u, x_{1:m})}[\log p(y \mid u, x_{1:m})] - \mathrm{KL}(q(u, x_{1:m}) \,\|\, p(u, x_{1:m})) \tag{10}$$
$$\ge \sum_{i=1}^m \mathbb{E}_{q(u, x_i)}[\log \tilde{p}(y_i \mid u, x_i)] - \mathrm{KL}(q(u, x_{1:m}) \,\|\, p(u, x_{1:m})), \tag{11}$$

where we use the lower bound in (9). Finally, to make the lower bound concrete we specify the variational distribution $q(u, x_{1:m})$ to be a product of independent multivariate normal distributions:

$$q(u, x_{1:m}) \triangleq \mathcal{N}(u \mid m, S) \prod_{i=1}^m \mathcal{N}(x_i \mid m_i, S_i), \tag{12}$$

where the variational parameters to be fit are m, S, and $\{m_i, S_i\}_{i=1}^m$.

Stochastic optimization of the lower bound. To apply SVI, we must be able to compute the gradient of the expected value of $\log \tilde{p}(y_i \mid u, x_i)$ under the variational distributions. Because u and $x_i$ are assumed to be independent in the variational posteriors, we can analyze the expectation in either order. Fix $x_i$; then we see that $\log \tilde{p}(y_i \mid u, x_i)$ depends on u only through the mean of the Gaussian density, which is a quadratic term in the log likelihood. Because q(u) is multivariate normal, we can compute the expectation in closed form:

$$\mathbb{E}_{q(u)}[\log \tilde{p}(y_i \mid u, x_i)] = \mathbb{E}_{q(U)}\big[\log \mathcal{N}(y_i \mid \mu + (B_i \otimes k_i^\top K_{pp}^{-1}) u,\ \sigma^2 I_{n_i})\big] - \frac{\tilde{k}_{ii}}{2\sigma^2} \mathrm{Tr}[B_i^\top B_i]$$
$$= \log \mathcal{N}(y_i \mid \mu + C_i m,\ \sigma^2 I_{n_i}) - \frac{1}{2\sigma^2} \mathrm{Tr}[S C_i^\top C_i] - \frac{\tilde{k}_{ii}}{2\sigma^2} \mathrm{Tr}[B_i^\top B_i],$$

where we have defined $C_i \triangleq (B_i \otimes k_i^\top K_{pp}^{-1})$ to be the extended design matrix and $\otimes$ is the Kronecker product. We now need to compute the expectation of this expression with respect to $q(x_i)$, which entails computing the expectations of $k_i$ (a vector) and $k_i k_i^\top$ (a matrix). In this paper, we assume an RBF kernel, and so the elements of the vector and matrix are all exponentiated quadratic functions of $x_i$. This makes the expectations straightforward to compute given that $q(x_i)$ is multivariate normal.¹ We therefore see that the expected value of $\log \tilde{p}(y_i)$ can be computed in closed form under the assumed variational distribution. We use the standard SVI algorithm to optimize the lower bound.
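The closed-form expectation above follows from a standard Gaussian identity: for $q(u) = \mathcal{N}(m, S)$, averaging a Gaussian log density that is quadratic in u adds exactly $-\mathrm{Tr}[S C_i^\top C_i]/(2\sigma^2)$ to the plug-in value. A small sketch (our own, for illustration):

```python
import numpy as np
from scipy.stats import multivariate_normal

def expected_bound_term(yi, Ci, m, S, ktilde_ii, trBtB, mu, sigma2):
    # E_{q(u)}[log p~(y_i | u, x_i)] for fixed x_i and q(u) = N(m, S):
    # evaluate the Gaussian at the mean of u, then subtract the variance
    # corrections Tr[S Ci' Ci]/(2 s2) and ktilde_ii Tr[Bi' Bi]/(2 s2).
    ni = len(yi)
    ll = multivariate_normal.logpdf(yi, mean=mu + Ci @ m, cov=sigma2 * np.eye(ni))
    return (ll
            - np.trace(S @ Ci.T @ Ci) / (2 * sigma2)
            - ktilde_ii * trBtB / (2 * sigma2))
```

Setting $S = sI$ shifts the value by exactly $-s\,\mathrm{Tr}[C_i^\top C_i]/(2\sigma^2)$ relative to $S = 0$, which gives a quick sanity check of the identity.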
We subsample the data, optimize the likelihood of each example in the batch with respect to the variational parameters over the representation $(m_i, S_i)$, and compute approximate gradients of the global variational parameters (m, S) and the hyperparameters. The likelihood term is conjugate to the prior over u, and so we can compute the natural gradients with respect to the global variational parameters m and S [Hoffman et al., 2013, Hensman et al., 2013]. Additional details on the approximate objective and the gradients required for SVI are given in the supplement. We provide details on initialization, minibatch selection, and learning rates for our experiments in Section 3.

Inference on new trajectories. The variational distribution over the inducing point values u can be used to approximate a posterior process over the basis coefficients $w_i$ [Hensman et al., 2013]. Therefore, given a representation $x_i$, we have that

$$w_{ik} \mid x_i, m, S \sim \mathcal{N}\big(k_i^\top K_{pp}^{-1} m_k,\ \tilde{k}_{ii} + k_i^\top K_{pp}^{-1} S_{kk} K_{pp}^{-1} k_i\big), \tag{13}$$

where $m_k$ is the approximate posterior mean of the kth column of U and $S_{kk}$ is its covariance. The approximate joint posterior distribution over all coefficients can be shown to be multivariate normal. Let $\mu(x_i)$ be the mean of this distribution given representation $x_i$ and $\Sigma(x_i)$ be the covariance; then the posterior predictive distribution over a new trajectory $y_*$ given the representation $x_*$ is

$$y_* \mid x_* \sim \mathcal{N}\big(\mu + B_* \mu(x_*),\ B_* \Sigma(x_*) B_*^\top + \sigma^2 I_{n_*}\big). \tag{14}$$

We can then approximately marginalize with respect to the prior over $x_*$, or a variational approximation of the posterior given a partial trajectory, using a Monte Carlo estimate.

¹Other kernels can be used instead, but the expectations may not have closed form expressions.

3 Experiments

We now use DTM to analyze clinical marker trajectories of individuals with the autoimmune disease scleroderma [Allanore et al., 2015]. Scleroderma is a heterogeneous and complex chronic autoimmune disease.
It can potentially affect many of the visceral organs, such as the heart, lungs, kidneys, and vasculature. Any given individual may experience only a subset of complications, and the timing of the symptoms relative to disease onset can vary considerably across individuals. Moreover, there are no known biomarkers that accurately predict an individual's disease course. Clinicians and medical researchers are therefore interested in characterizing and understanding disease progression patterns. In addition, there are a number of clinical outcomes responsible for the majority of morbidity among patients with scleroderma. These include congestive heart failure, pulmonary hypertension and pulmonary arterial hypertension, gastrointestinal complications, and myositis [Varga et al., 2012]. We use the DTM to study associations between these outcomes and disease trajectories. We study two scleroderma clinical markers. The first is the percent of predicted forced vital capacity (PFVC), a pulmonary function test result measuring lung function. PFVC is recorded in percentage points, and a higher value (near 100) indicates that the individual's lungs are functioning as expected. The second clinical marker that we study is the total modified Rodnan skin score (TSS). Scleroderma is named after its effect on the skin, which becomes hard and fibrous during periods of high disease activity. Because it is the most clinically apparent symptom, many of the current sub-categorizations of scleroderma depend on an individual's pattern of skin disease activity over time [Varga et al., 2012]. To systematically monitor skin disease activity, clinicians use the TSS, which is a quantitative score between 0 and 55 computed by evaluating skin thickness at 17 sites across the body (higher scores indicate more active skin disease).

3.1 Experimental Setup

For our experiments, we extract trajectories from the Johns Hopkins Hospital Scleroderma Center's patient registry, one of the largest in the world.
For both PFVC and TSS, we study the trajectory from the time of first symptom until ten years of follow-up. The PFVC dataset contains trajectories for 2,323 individuals and the TSS dataset contains 2,239 individuals. The median number of observations per individual is 3 for the PFVC data and 2 for the TSS data. The maximum number of observations is 55 and 22 for PFVC and TSS, respectively. We present two sets of results. First, we visualize groups of similar trajectories obtained by clustering the representations learned by DTM. Although not quantitative, we use these visualizations as a way to check that the DTM uncovers subpopulations that are consistent with what is currently known about scleroderma. Second, we use the learned representations of trajectories obtained using the LMM, the reduced-rank LMM (which we refer to as FPCA), and the DTM to statistically test for relationships between important clinical outcomes and learned disease trajectory representations. For all experiments and all models, we use a common 5-dimensional B-spline basis composed of degree-2 polynomials (see e.g. Chapter 20 in Gelman et al. [2014]). We choose knots using the percentiles of observation times across the entire training set [Ramsay et al., 2002]. For LMM and FPCA, we use EM to fit model parameters. To fit the DTM, we use the LMM estimate to set the mean µ and noise σ², and average the diagonal elements of Σ to set the kernel scale α. Length-scales ℓ are set to 1. For these experiments, we do not learn the kernel hyperparameters during optimization. We initialize the variational means over $x_i$ using the first two unit-scaled principal components of $w_i$ estimated by LMM and set the variational covariances to be diagonal with standard deviation 0.1. For both PFVC and TSS, we use minibatches of size 25 and learn for a total of five epochs (passes over the training data). The initial learning rate for m and S is 0.1 and decays as $t^{-1}$ for each epoch t.
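The basis construction described above (a 5-dimensional, degree-2 B-spline basis with knots at percentiles of the pooled observation times) can be sketched with SciPy. This is one possible construction, not the authors' code:

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design(times, pooled_times, n_basis=5, degree=2):
    # Design matrix B with one column per B-spline basis function.
    # Interior knots are placed at percentiles of the pooled observation
    # times; boundary knots are repeated (clamped spline).
    n_interior = n_basis - degree - 1
    qs = np.linspace(0, 100, n_interior + 2)[1:-1]
    interior = np.percentile(pooled_times, qs)
    lo, hi = pooled_times.min(), pooled_times.max()
    knots = np.r_[[lo] * (degree + 1), interior, [hi] * (degree + 1)]
    B = np.zeros((len(times), n_basis))
    for j in range(n_basis):
        coef = np.zeros(n_basis)
        coef[j] = 1.0
        B[:, j] = BSpline(knots, coef, degree, extrapolate=False)(times)
    return np.nan_to_num(B)
```

Inside the observation window the columns are non-negative and sum to one at every time point (a partition of unity), which keeps the fitted trajectories well scaled.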
3.2 Qualitative Analysis of Representations

The DTM returns approximate posteriors over the representations $x_i$ for all individuals in the training set. We examine these posteriors for both the PFVC and TSS datasets to check for consistency with what is currently known about scleroderma disease trajectories.

[Figure 1: (A) Groups of PFVC trajectories obtained by hierarchical clustering of DTM representations (percent of predicted FVC against years since first symptom). (B) Trajectory representations are color-coded and labeled according to groups shown in (A). Contours reflect the posterior GP over the second B-spline coefficient (blue contours denote smaller values, red denote larger values).]

[Figure 2: Same presentation as in Figure 1 but for TSS trajectories.]

In Figure 1 (A) we show groups of trajectories uncovered by clustering the posterior means over the representations, which are plotted in Figure 1 (B). Many of the groups shown here align with other work on scleroderma lung disease subtypes (e.g. Schulam et al. [2015]). In particular, we see rapidly declining trajectories (group [5]), slowly declining trajectories (group [22]), recovering trajectories (group [23]), and stable trajectories (group [34]). Surprisingly, we also see a group of individuals who we describe as "late decliners" (group [28]). These individuals are stable for the first 5-6 years, but begin to decline thereafter. This is surprising because the onset of scleroderma-related lung disease is currently thought to occur early in the disease course [Varga et al., 2012]. In Figure 2 (A) we show clusters of TSS trajectories and the corresponding mean representations in Figure 2 (B). These trajectories corroborate what is currently known about skin disease in scleroderma. In particular, we see individuals who have minimal activity (e.g.
group [1]) and individuals with early activity that later stabilizes (e.g. group [11]), which correspond to what are known as the limited and diffuse variants of scleroderma [Varga et al., 2012]. We also find that there are a number of individuals with increasing activity over time (group [6]) and some whose activity remains high over the ten-year period (group [19]). These patterns are not currently considered to be canonical trajectories and warrant further investigation.

3.3 Associations between Representations and Clinical Outcomes

To quantitatively evaluate the low-dimensional representations learned by the DTM, we statistically test for relationships between the representations of clinical marker trajectories and important clinical outcomes. We compare the inferences of the hypothesis test with those made using representations derived from the LMM and FPCA baselines. For the LMM, we project $w_i$ into its 2-dimensional principal subspace. For FPCA, we learn a rank-2 covariance, which yields 2-dimensional representations. To establish that the models are all equally expressive and achieve comparable generalization error, we present held-out data log-likelihoods in Table 1, which are estimated using 10-fold cross-validation. We see that the models are roughly equivalent with respect to generalization error. To test associations between clinical outcomes and learned representations, we use a kernel density estimator test [Duong et al., 2012] to test the null hypothesis that the distributions across subgroups with and without the outcome are equivalent. The p-values obtained are listed in Table 2.

Table 1: Disease Trajectory Held-out Log-Likelihoods

                     PFVC                                TSS
Model    Subj. LL          Obs. LL           Subj. LL          Obs. LL
LMM      -17.59 (± 1.18)   -3.95 (± 0.04)    -13.63 (± 1.41)   -3.47 (± 0.05)
FPCA     -17.89 (± 1.19)   -4.03 (± 0.02)    -13.76 (± 1.42)   -3.47 (± 0.05)
DTM      -17.74 (± 1.23)   -3.98 (± 0.03)    -13.25 (± 1.38)   -3.32 (± 0.06)

Table 2: P-values under the null hypothesis that the distributions of trajectory representations are the same across individuals with and without clinical outcomes. Lower values indicate stronger support for rejection.

                                           PFVC                        TSS
Outcome                           LMM      FPCA     DTM       LMM      FPCA     DTM
Congestive Heart Failure          0.170    0.081    0.013     0.107    0.383    0.189
Pulmonary Hypertension            0.270    ∗0.000   ∗0.000    0.485    0.606    0.564
Pulmonary Arterial Hypertension   0.013    0.020    ∗0.002    0.712    0.808    0.778
Gastrointestinal Complications    0.328    0.073    0.347     0.026    0.035    0.011
Myositis                          0.337    ∗0.002   ∗0.004    ∗0.000   ∗0.002   ∗0.000
Interstitial Lung Disease         ∗0.000   ∗0.000   ∗0.000    0.553    0.515    0.495
Ulcers and Gangrene               0.410    0.714    0.514     0.573    0.316    ∗0.009

As a point of reference, we include two clinical outcomes that should be clearly related to the two clinical markers. Interstitial lung disease is the most common cause of lung damage in scleroderma [Varga et al., 2012], and so we confirm that the null hypothesis is rejected for all three PFVC representations. Similarly, for TSS we expect ulcers and gangrene to be associated with severe skin disease. In this case, only the representations learned by DTM reveal this relationship. For the remaining outcomes, we see that FPCA and DTM reveal similar associations, but that only DTM suggests a relationship with pulmonary arterial hypertension (PAH). Presence of fibrosis (which drives lung disease progression) has been shown to be a risk factor in the development of PAH (see Chapter 36 of Varga et al. [2012]), but only the representations learned by DTM corroborate this association (see Figure 3).

4 Conclusion

We presented the Disease Trajectory Map (DTM), a novel probabilistic model that learns low-dimensional embeddings of sparse and irregularly sampled clinical time series data.
The DTM is a reformulation of the LMM. We derived it using an approach comparable to that of Lawrence [2004] in deriving the Gaussian process latent variable model (GPLVM) from probabilistic principal component analysis (PPCA) [Tipping and Bishop, 1999], and indeed the DTM can be interpreted as a "twin kernel" GPLVM (briefly discussed in the concluding paragraphs) over functional observations. The DTM can also be viewed as an LMM with a "warped" Gaussian prior over the random effects (see e.g. Damianou et al. [2015] for a discussion of distributions induced by mapping Gaussian random variables through non-linear maps). We demonstrated the model by analyzing data extracted from one of the nation's largest scleroderma patient registries, and found that the DTM discovers structure among trajectories that is consistent with previous findings and also uncovers several surprising disease trajectory shapes. We also explored associations between important clinical outcomes and the DTM's representations, and found statistically significant differences in representations between outcome-defined groups that were not uncovered by two sets of baseline representations.

Acknowledgments. PS is supported by an NSF Graduate Research Fellowship. RA is supported in part by NSF BIGDATA grant IIS-1546482.

[Figure 3: Scatter plots of PFVC representations for the three models, (A) LMM, (B) FPCA, and (C) DTM, color-coded by presence or absence of pulmonary arterial hypertension (PAH). Groups of trajectories with very few cases of PAH are circled in green.]

References

Allanore et al. Systemic sclerosis. Nature Reviews Disease Primers, page 15002, 2015.
Jamie L. Bigelow and David B. Dunson. Bayesian semiparametric joint models for functional predictors. Journal of the American Statistical Association, 2012.
David R. Brillinger. Time series: data analysis and theory, volume 36. SIAM, 2001.
Carvalho et al. High-dimensional sparse factor modeling: applications in gene expression genomics. Journal of the American Statistical Association, 2012.
P. J. Castaldi et al. Cluster analysis in the COPDGene study identifies subtypes of smokers with distinct patterns of airway disease and emphysema. Thorax, 2014.
P. E. Castro, W. H. Lawton, and E. A. Sylvestre. Principal modes of variation for processes with continuous sample curves. Technometrics, 28(4):329–337, 1986.
J. Craig. Complex diseases: Research and applications. Nature Education, 1(1):184, 2008.
A. C. Damianou, M. K. Titsias, and N. D. Lawrence. Variational inference for latent variables and uncertain inputs in Gaussian processes. JMLR, 2, 2015.
T. Duong, B. Goud, and K. Schauer. Closed-form density-based framework for automatic detection of cellular morphology changes. Proceedings of the National Academy of Sciences, 109(22):8382–8387, 2012.
Andrew Gelman et al. Bayesian data analysis, volume 2. Taylor & Francis, 2014.
J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. arXiv:1309.6835, 2013.
M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. JMLR, 14(1):1303–1347, 2013.
G. M. James, T. J. Hastie, and C. A. Sugar. Principal component models for sparse functional data. Biometrika, 87(3):587–602, 2000.
H. F. Kaiser. The varimax criterion for analytic rotation in factor analysis. Psychometrika, 23(3):187–200, 1958.
E. Keogh et al. Locally adaptive dimensionality reduction for indexing large time series databases. ACM SIGMOD Record, 30(2):151–162, 2001.
K. P. Kleinman and J. G. Ibrahim. A semiparametric Bayesian approach to the random effects model. Biometrics, pages 921–938, 1998.
N. D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. Advances in Neural Information Processing Systems, 16(3):329–336, 2004.
K. Levin, K. Henry, A. Jansen, and K. Livescu. Fixed-dimensional acoustic embeddings of variable-length segments in low-resource settings. In ASRU, pages 410–415. IEEE, 2013.
Jessica Lin, Eamonn Keogh, Li Wei, and Stefano Lonardi. Experiencing SAX: a novel symbolic representation of time series. Data Mining and Knowledge Discovery, 15(2):107–144, 2007.
J. Lötvall et al. Asthma endotypes: a new approach to classification of disease entities within the asthma syndrome. Journal of Allergy and Clinical Immunology, 127(2):355–360, 2011.
Richard F. MacLehose and David B. Dunson. Nonparametric Bayes kernel-based priors for functional data analysis. Statistica Sinica, pages 611–629, 2009.
B. M. Marlin et al. Unsupervised pattern discovery in electronic health care data using probabilistic clustering models. In Proc. ACM SIGHIT International Health Informatics Symposium, pages 389–398. ACM, 2012.
James Ramsay et al. Applied functional data analysis: methods and case studies. Springer, 2002.
James O. Ramsay. Functional data analysis. Wiley Online Library, 2006.
J. A. Rice and C. O. Wu. Nonparametric mixed effects models for unequally sampled noisy curves. Biometrics, 57(1):253–259, 2001.
S. Saria and A. Goldenberg. Subtyping: What it is and its role in precision medicine. Intelligent Systems, IEEE, 2015.
S. Saria et al. Discovering deformable motifs in continuous time series data. In IJCAI, volume 22, 2011.
P. Schulam and S. Saria. A framework for individualizing predictions of disease trajectories by exploiting multi-resolution structure. In Advances in Neural Information Processing Systems, pages 748–756, 2015.
P. Schulam, F. Wigley, and S. Saria. Clustering longitudinal clinical marker trajectories from electronic health data: Applications to phenotyping and endotype discovery. In AAAI, pages 2956–2964, 2015.
E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In NIPS, 2005.
M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 61(3):611–622, 1999.
M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, 2009.
M. K. Titsias and N. D. Lawrence. Bayesian Gaussian process latent variable model. In AISTATS, 2010.
B. Varadarajan et al. Unsupervised learning of acoustic sub-word units. In Proc. ACL, pages 165–168, 2008.
J. Varga et al. Scleroderma: From pathogenesis to comprehensive management. Springer, 2012.
G. Verbeke and G. Molenberghs. Linear mixed models for longitudinal data. Springer, 2009.
S. Watanabe. Karhunen-Loève expansion and factor analysis, theoretical remarks and applications. In Proc. 4th Prague Conf. Inform. Theory, 1965.
L. D. Wiggins et al. Support for a dimensional view of autism spectrum disorders in toddlers. Journal of Autism and Developmental Disorders, 42(2):191–200, 2012.
Bayesian optimization for automated model selection Gustavo Malkomes,† Chip Schaff,† Roman Garnett Department of Computer Science and Engineering Washington University in St. Louis St. Louis, MO 63130 {luizgustavo, cbschaff, garnett}@wustl.edu Abstract Despite the success of kernel-based nonparametric methods, kernel selection still requires considerable expertise, and is often described as a “black art.” We present a sophisticated method for automatically searching for an appropriate kernel from an infinite space of potential choices. Previous efforts in this direction have focused on traversing a kernel grammar, only examining the data via computation of marginal likelihood. Our proposed search method is based on Bayesian optimization in model space, where we reason about model evidence as a function to be maximized. We explicitly reason about the data distribution and how it induces similarity between potential model choices in terms of the explanations they can offer for observed data. In this light, we construct a novel kernel between models to explain a given dataset. Our method is capable of finding a model that explains a given dataset well without any human assistance, often with fewer computations of model evidence than previous approaches, a claim we demonstrate empirically. 1 Introduction Over the past decades, enormous human effort has been devoted to machine learning; preprocessing data, model selection, and hyperparameter optimization are some examples of critical and often expert-dependent tasks. The complexity of these tasks has in some cases relegated them to the realm of “black art.” In kernel methods in particular, the selection of an appropriate kernel to explain a given dataset is critical to success in terms of the fidelity of predictions, but the vast space of potential kernels renders the problem nontrivial. We consider the problem of automatically finding an appropriate probabilistic model to explain a given dataset. 
Although our proposed algorithm is general, we will focus on the case where a model can be completely specified by a kernel, as is the case for example for centered Gaussian processes (GPs). Recent work has begun to tackle the kernel-selection problem in a systematic way. Duvenaud et al. [1] and Grosse et al. [2] described generative grammars for enumerating a countably infinite space of arbitrarily complex kernels via exploiting the closure of kernels under additive and multiplicative composition. We adopt this kernel grammar in this work as well. Given a dataset, Duvenaud et al. [1] proposed searching this infinite space of models using a greedy search mechanism. Beginning at the root of the grammar, we traverse the tree greedily attempting to maximize the (approximate) evidence for the data given by a GP model incorporating the kernel. In this work, we develop a more sophisticated mechanism for searching through this space. The greedy search described above only considers a given dataset by querying a model's evidence. Our search performs a metalearning procedure, which, conditional on a dataset, establishes similarities among the models in terms of the space of explanations they can offer for the data. With this viewpoint, we construct a novel kernel between models (a "kernel kernel"). We then approach the model-search problem via Bayesian optimization, treating the model evidence as an expensive black-box function to be optimized as a function of the kernel. The dependence of our kernel between models on the distribution of the data is critical; depending on a given dataset, the kernels generated by a compositional grammar could be especially rich or deceptively so. We develop an automatic framework for exploring a set of potential models, seeking the model that best explains a given dataset.

†These authors contributed equally to this work.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Although we focus on Gaussian process models defined by a grammar, our method could be easily extended to any probabilistic model with a parametric or structured model space. Our search appears to perform competitively with other baselines across a variety of datasets, including the greedy method from [1], especially in terms of the number of models for which we must compute the (expensive) evidence, which typically scales cubically for kernel methods.

2 Related work

There are several works attempting to create more expressive kernels, either by combining kernels or designing custom ones. Multiple kernel learning approaches, for instance, construct a kernel for a given dataset through a weighted sum of a predefined and fixed set of kernels, adjusting the weights to best explain the observed data. Besides limiting the space of kernels considered, these approaches require the hyperparameters of the component kernels to be specified in advance [3, 4]. Another approach is to design flexible kernel families [5–7]. These methods often use Bochner's theorem to reason in spectral space, and can approximate any arbitrary stationary kernel function. In contrast, our method does not depend on stationarity. Other work has developed expressive kernels by combining Gaussian processes with deep belief networks; see, for example, [8–10]. Unfortunately, there is no free lunch; these methods require complicated inference techniques that are much more costly than using standard kernels. The goal of automated machine learning (autoML) is to automate complex machine-learning procedures using insights and techniques from other areas of machine learning. Our work falls into this broad category of research. By applying machine learning methods throughout the entire modeling process, it is possible to create more automated and, eventually, better systems. Bergstra et al. [11] and Snoek et al.
[12], for instance, have shown how to use modern optimization tools such as Bayesian optimization to set the hyperparameters of machine learning methods (e.g., deep neural networks and structured SVMs). Our approach to model search is also based on Bayesian optimization, and its success in similar settings is encouraging for our adoption here. Gardner et al. [13] also considered the automated model selection problem, but in an active learning framework with a fixed set of models. We note that our method could be adapted to their Bayesian active model selection framework with minor changes, but we focus on the classical supervised learning case with a fixed training set.

3 Bayesian optimization for model search

Suppose we face a classical supervised learning problem defined on an input space X and output space Y. We are given a set of training observations D = (X, y), where X represents the design matrix of explanatory variables $x_i \in X$, and $y_i \in Y$ is the respective value or label to be predicted. Ultimately, we want to use D to predict the value $y_*$ associated with an unseen point $x_*$. Given a probabilistic model M, we may accomplish this via formation of the predictive distribution. Suppose, however, that we are given a collection of probabilistic models M that could have plausibly generated the data. Ideally, finding the source of D would let us solve our prediction task with the highest fidelity. Let M ∈ M be a probabilistic model, and let ΘM be the corresponding parameter space. These models are typically parametric families of distributions, each of which encodes a structural assumption about the data, for example, that the data can be described by a linear, quadratic, or periodic trend. Further, the member distributions (Mθ ∈ M, θ ∈ ΘM) of M differ from each other by a particular value of some properties—represented by the hyperparameters θ—related to the data such as amplitude, characteristic length scales, etc.
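For a fixed GP regression model, forming the predictive distribution mentioned above has a standard closed form. A minimal sketch (our own, not the authors' code), given precomputed kernel matrices:

```python
import numpy as np

def gp_predict(Kxx, Kxs, Kss, y, noise_var):
    # Posterior predictive of a centered GP regression model:
    # mean = K_sx (K_xx + s2 I)^{-1} y,
    # cov  = K_ss - K_sx (K_xx + s2 I)^{-1} K_xs,
    # computed stably via a Cholesky factorization.
    C = Kxx + noise_var * np.eye(len(y))
    L = np.linalg.cholesky(C)
    A = np.linalg.solve(L.T, np.linalg.solve(L, Kxs))  # C^{-1} K_xs
    mean = A.T @ y
    cov = Kss - Kxs.T @ A
    return mean, cov
```

With near-zero noise the predictive mean interpolates the training targets, which is a convenient correctness check.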
We wish to select one model from this collection of models M to explain D. From a Bayesian perspective, the principled approach for solving this problem is Bayesian model selection.² The critical value is the model evidence, the probability of generating the observed data given a model M:

$$p(y \mid X, M) = \int_{\Theta_M} p(y \mid X, \theta, M)\, p(\theta \mid M)\, d\theta. \tag{1}$$

The evidence (also called marginal likelihood) integrates over θ to account for all possible explanations of the data offered by the model, under a prior p(θ | M) associated with that model. Our goal is to automatically explore a space of models M to select a model³ M∗ ∈ M that explains a given dataset D as well as possible, according to the model evidence. The essence of our method, which we call Bayesian optimization for model search (BOMS), is viewing the evidence as a function g: M → R to be optimized. We note two important aspects of g. First, for large datasets and/or complex models, g is an expensive function, for example growing cubically with |D| for GP models. Further, gradient information about g is impossible to compute due to the discrete nature of M. We can, however, query a model's evidence as a black-box function. For these reasons, we propose to optimize evidence over M using Bayesian optimization, a technique well-suited for optimizing expensive, gradient-free, black-box objectives [14]. In this framework, we seek an optimal model

$$M^* = \arg\max_{M \in \mathcal{M}} g(M; D), \tag{2}$$

where g(M; D) is the (log) model evidence:

$$g(M; D) = \log p(y \mid X, M). \tag{3}$$

We begin by placing a Gaussian process (GP) prior on g, p(g) = GP(g; µg, Kg), where µg : M → R is a mean function and Kg : M² → R is a covariance function appropriately defined over the model space M. This is a nontrivial task due to the discrete and potentially complex nature of M.

²"Model selection" is unfortunately sometimes also used in the GP literature for the process of hyperparameter learning (selecting some Mθ ∈ M), rather than selecting a model class M, the focus of our work.
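The inner integrand of (1) is the GP log marginal likelihood, which is computable exactly; the outer integral over hyperparameters generally is not, and must be approximated. The sketch below (ours, for illustration) computes the exact inner term and pairs it with a BIC-style penalty as one stand-in for the paper's evidence approximation, whose exact form is not specified here:

```python
import numpy as np

def gp_log_marginal_likelihood(K, y, noise_var):
    # log p(y | X, theta, M): the integrand of (1) for a centered GP with
    # kernel matrix K and iid Gaussian observation noise.
    n = len(y)
    L = np.linalg.cholesky(K + noise_var * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))

def approx_log_evidence(logml_at_optimum, n_hyperparams, n_obs):
    # BIC-style approximation to the evidence integral (1): penalize the
    # optimized marginal likelihood by half a log(n) per hyperparameter.
    # This is an assumed stand-in, not necessarily the authors' scheme.
    return logml_at_optimum - 0.5 * n_hyperparams * np.log(n_obs)
```

The Cholesky form avoids explicitly inverting the covariance and makes the cubic cost in |D| noted above concrete.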
We will suggest useful choices for µg and Kg when M is a space of Gaussian process models below. Now, given observations of the evidence of a selected set of models,

$$\mathcal{D}_g = \big\{ \big(M_i,\ g(M_i; D)\big) \big\}, \tag{4}$$

we may compute the posterior distribution on g conditioned on Dg, which will be an updated Gaussian process [15]. Bayesian optimization uses this probabilistic belief about g to induce an inexpensive acquisition function that selects which model to evaluate next. Here we use the classical expected improvement (EI) [16] acquisition function, or a slight variation described below, because it naturally considers the trade-off between exploration and exploitation. The exact choice of acquisition function, however, is not critical to our proposal. In each round of our model search, we will evaluate the acquisition function for a number of candidate models C(Dg) = {Mi}, and compute the evidence of the candidate where it is maximized:

$$M' = \arg\max_{M \in \mathcal{C}} \alpha_{\mathrm{EI}}(M; \mathcal{D}_g).$$

We then incorporate the chosen model M′ and the observed model evidence g(M′; D) into our model evidence training set Dg, update the posterior on g, select a new set of candidates, and continue. We repeat this iterative procedure until a budget is expended, typically measured in terms of the number of models considered. We have observed that expected improvement [16] works well, especially for small and/or low-dimensional problems. When the dataset is large and/or high-dimensional, training costs can be considerable and variable, especially for complex models. To give better anytime performance on such datasets, we use expected improvement per second, where we divide the expected improvement by an estimate of the time required to compute the evidence. In our experiments, this estimation was performed by fitting a linear regression model to the log time to compute g(M; D) as a function of the number of hyperparameters (the dimension of ΘM), trained on the models available in Dg.
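Both acquisition variants above have short closed forms. A minimal sketch (ours, not the authors' code), operating on the GP posterior mean and standard deviation of each candidate's log evidence:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best):
    # Closed-form EI of each candidate's Gaussian posterior over its log
    # evidence, relative to the best evidence observed so far.
    mu = np.asarray(mu, dtype=float)
    sigma = np.maximum(np.asarray(sigma, dtype=float), 1e-12)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

def ei_per_second(ei, n_hypers, hist_n_hypers, hist_log_times):
    # EI divided by a predicted evaluation time; log time is modeled as a
    # linear function of the number of kernel hyperparameters, fit to the
    # models whose evidence has already been computed.
    slope, intercept = np.polyfit(hist_n_hypers, hist_log_times, 1)
    return np.asarray(ei) / np.exp(intercept + slope * np.asarray(n_hypers))
```

Each round then reduces to `np.argmax` of these scores over the active candidate set.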
The acquisition function allows us to quickly determine which models are more promising than others, given the evidence we have observed so far. Since M is an infinite set of models, we cannot consider every model in every round; instead, we define below a heuristic to evaluate the acquisition function at a smaller set of active candidate models.

3 We could also select a set of models but, for simplicity, we assume that there is one model that best explains the data with overwhelming probability, which would imply that there is no benefit in considering more than one model, e.g., via Bayesian model averaging.

4 Bayesian optimization for Gaussian process kernel search

We introduced above a general framework for searching over a space of probabilistic models M to explain a dataset D without making further assumptions about the nature of the models. In the following, we provide specific suggestions for the case that all members of M are Gaussian process priors on a latent function. We assume that our observations y were generated according to an unknown function f : X → R via a fixed probabilistic observation mechanism p(y | f), where fi = f(xi). In our experiments here, we consider regression with additive Gaussian observation noise, but this is not integral to our approach. We further assume a GP prior distribution on f, p(f) = GP(f; µf, Kf), where µf : X → R is a mean function and Kf : X × X → R is a positive-definite covariance function or kernel. For simplicity, we assume that the prior on f is centered, µf(x) = 0, which lets us fully define the prior on f by the kernel function Kf. We assume that the kernel function is parameterized by hyperparameters that we concatenate into a vector θ. In this restricted context, a model M is completely determined by the choice of kernel function and an associated hyperparameter prior p(θ | M).
Below we briefly review a previously suggested method for constructing an infinite space of potential kernels to model the latent function f, and thus an infinite family of models M. We will then discuss the standardized and automated construction of associated hyperparameter priors.

4.1 Space of compositional Gaussian process kernels

We adopt the same space of kernels defined by Duvenaud et al. [1], which we briefly summarize here; we refer the reader to the original paper for more details. Given a set of simple, so-called base kernels, such as the common squared exponential (SE), periodic (PER), linear (LIN), and rational quadratic (RQ) kernels, we create new and potentially complex kernels by summation and multiplication of these base units. The entire kernel space can be described by the following grammar rules:

1. Any subexpression S can be replaced with S + B, where B is a base kernel.
2. Any subexpression S can be replaced with S × B, where B is a base kernel.
3. Any base kernel B may be replaced with another base kernel B′.

4.2 Creating hyperparameter priors

The base kernels we use are well understood, as are their hyperparameters, which have simple interpretations and can be thematically grouped together. We take advantage of the Bayesian framework to encode prior knowledge over hyperparameters, i.e., p(θ | M). Conveniently, these priors can also mitigate numerical problems during the training of the GPs. Here we derive a consistent method to construct such priors for arbitrary kernels and datasets in regression problems. We first standardize the dataset, i.e., we subtract the mean and divide by the standard deviation of both the predictive features {xi} and the outputs y. This gives each dataset a consistent scale, and we can reason about what real-world datasets usually look like at this scale; for example, we do not typically expect to see datasets spanning 10 000 length scales.
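To make the grammar of §4.1 concrete, here is a simplified sketch that represents a kernel canonically as a sorted sum of sorted products of base kernel names. Note this sketch applies the three rules to whole products and individual base kernels rather than to every subexpression, so it under-generates relative to the full grammar; the representation is ours, not the paper's.

```python
BASE_KERNELS = ["SE", "PER", "LIN", "RQ"]

def neighbors(kernel):
    """One-step expansions of a kernel under the (simplified) grammar.

    A kernel is a sum of products, e.g. SE*LIN + PER is
    (("LIN", "SE"), ("PER",)), with both levels kept sorted."""
    out = set()
    for b in BASE_KERNELS:
        # Rule 1: S -> S + B (add a new summand)
        out.add(tuple(sorted(kernel + ((b,),))))
        for i, prod in enumerate(kernel):
            # Rule 2: S -> S * B (multiply one product term by B)
            new_prod = tuple(sorted(prod + (b,)))
            out.add(tuple(sorted(kernel[:i] + (new_prod,) + kernel[i + 1:])))
            # Rule 3: B -> B' (swap one base kernel occurrence)
            for j, base in enumerate(prod):
                if base != b:
                    swapped = tuple(sorted(prod[:j] + (b,) + prod[j + 1:]))
                    out.add(tuple(sorted(kernel[:i] + (swapped,) + kernel[i + 1:])))
    out.discard(kernel)   # a rule application must change the expression
    return out
```

For example, the single kernel SE has eleven one-step neighbors under this simplification: SE + B and SE × B for each of the four bases, plus the three base swaps.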
Here we encode what we judge to be reasonable priors for groups of thematically related hyperparameters for most datasets. These include three types of hyperparameters common to virtually any problem: length scales ℓ (including, for example, the period parameter of a periodic covariance), signal variance σ, and observation noise σn. We also consider separately three other parameters specific to particular covariances we use here: the α parameter of the rational quadratic covariance [15, (4.19)], the “length scale” of the periodic covariance ℓp [15, ℓ in (4.31)], and the offset σ0 in the linear covariance. We define the following:

p(log ℓ) = N(0.1, 0.7²)    p(log σ) = N(0.4, 0.7²)    p(log σn) = N(0.1, 1²)
p(log α) = N(0.05, 0.7²)   p(log ℓp) = N(2, 0.7²)     p(σ0) = N(0, 2²)

Given these, each model is given an independent prior over each of its hyperparameters, using the appropriate selection from the above for each.

4.3 Approximating the model evidence

The model evidence p(y | X, M) is in general intractable for GPs [17, 15]. Instead, we use a Laplace approximation to compute the model evidence approximately. This approximation makes a second-order Taylor expansion of log p(θ | D, M) around its mode θ̂ and approximates the model evidence as

log p(y | X, M) ≈ log p(y | X, θ̂, M) + log p(θ̂ | M) − ½ log det Σ⁻¹ + (d/2) log 2π,   (5)

where d is the dimension of θ and Σ⁻¹ = −∇² log p(θ | D, M)|θ=θ̂ [18, 19]. We can view (5) as rewarding model fit while penalizing model complexity. Note that the Bayesian information criterion (BIC), commonly used for model selection and also used by Duvenaud et al. [1], can be seen as an approximation to the Laplace approximation [20, 21].

4.4 Creating a “kernel kernel”

In §4.1, §4.2, and §4.3, we focused on modeling a latent function f with a GP, creating an infinite space of models M to explain f (along with associated hyperparameter priors), and approximating the log model evidence function g(M; D).
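The Laplace approximation (5) can be sketched numerically as follows. This is an illustrative stand-in using gradient ascent and finite differences, not the optimizer used in our experiments; the function names are ours.

```python
import math
import numpy as np

def laplace_log_evidence(log_joint, theta0, lr=0.1, iters=2000, eps=1e-5):
    """Laplace approximation (5) to the log evidence.

    log_joint(theta) = log p(y | X, theta, M) + log p(theta | M);
    the mode is found by simple gradient ascent and the Hessian by
    central differences."""
    theta = np.array(theta0, dtype=float)
    d = len(theta)
    I = np.eye(d)

    def num_grad(th):
        return np.array([(log_joint(th + eps * I[i]) - log_joint(th - eps * I[i]))
                         / (2 * eps) for i in range(d)])

    for _ in range(iters):                       # ascend to the mode theta_hat
        theta = theta + lr * num_grad(theta)

    H = np.zeros((d, d))                         # Hessian of log_joint at the mode
    h = 1e-4
    for i in range(d):
        H[i] = (num_grad(theta + h * I[i]) - num_grad(theta - h * I[i])) / (2 * h)
    _, logdet_sigma_inv = np.linalg.slogdet(-H)  # Sigma^{-1} = -Hessian at mode
    return log_joint(theta) - 0.5 * logdet_sigma_inv + 0.5 * d * math.log(2 * math.pi)
```

On a conjugate Gaussian toy model (y ~ N(θ, 1) with θ ~ N(0, 1), so p(y) = N(y; 0, 2)) the posterior is exactly Gaussian and this sketch reproduces the true log evidence.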
The evidence function g is the objective function we are trying to optimize via Bayesian optimization. We described in §3 how this search progresses in the general case, in terms of an arbitrary Gaussian process prior on g. Here we provide specific suggestions for modeling g in the case that the model family M comprises Gaussian process priors on a latent function f, as discussed here and considered in our experiments. Our prior belief about g is given by a GP prior p(g) = GP(g; µg, Kg), which is fully specified by the mean function µg and covariance function Kg. We define the former as a simple constant mean function µg(M) = θµ, where θµ is a hyperparameter to be learned through a regular GP training procedure given a set of observations. The latter we construct as follows. The basic idea in our construction is that we will consider the distribution of the observation locations in our dataset D, X (the design matrix of the underlying problem). We note that selecting a model class M induces a prior distribution over the latent function values at X, p(f | X, M):

p(f | X, M) = ∫ p(f | X, M, θ) p(θ | M) dθ.

This prior distribution is an infinite mixture of multivariate Gaussian prior distributions, each conditioned on specific hyperparameters θ. We consider these prior distributions as different explanations of the latent function f, restricted to the observed locations, offered by the model M. We will compare two models in M according to how different the explanations they offer for f are, a priori. The Hellinger distance is a probability metric that we adopt as a basic measure of similarity between two distributions. Although this quantity is defined between arbitrary probability distributions (and thus could be used with non-GP model spaces), we focus on the multivariate normal case. Suppose that M, M′ ∈ M are two models that we wish to compare, in the context of explaining a fixed dataset D.
For now, suppose that we have conditioned each of these models on arbitrary hyperparameters (that is, we select a particular prior for f from each of these two families), giving Mθ and M′θ′, with θ ∈ ΘM and θ′ ∈ ΘM′. Now, we define the two distributions

P = p(f | X, M, θ) = N(f; µP, ΣP),   Q = p(f | X, M′, θ′) = N(f; µQ, ΣQ).

The squared Hellinger distance between P and Q is

d²H(P, Q) = 1 − (|ΣP|^(1/4) |ΣQ|^(1/4) / |(ΣP + ΣQ)/2|^(1/2)) exp(−(1/8)(µP − µQ)ᵀ((ΣP + ΣQ)/2)⁻¹(µP − µQ)).   (6)

The Hellinger distance will be small when P and Q are highly overlapping, and thus Mθ and M′θ′ provide similar explanations for this dataset. The distance will be larger, conversely, when Mθ and M′θ′ provide divergent explanations. Critically, we note that this distance depends on the dataset under consideration in addition to the GP priors. Observe that the distance above is not sufficient to compare the similarity of two models M, M′, due to the fixing of hyperparameters. To properly account for the different hyperparameters of different models, and the priors associated with them, we define the expected squared Hellinger distance of two models M, M′ ∈ M as

d̄²H(M, M′; X) = E[d²H(Mθ, M′θ′)] = ∬ d²H(Mθ, M′θ′; X) p(θ | M) p(θ′ | M′) dθ dθ′,   (7)

where the distance is understood to be evaluated between the priors on f induced at X.

Figure 1: A demonstration of our model kernel Kg (8) based on expected Hellinger distance of induced latent priors. Left: four simple model classes (SE, RQ, PER, SE+PER) on a 1d domain, showing samples from the prior p(f | M) ∝ p(f | θ, M) p(θ | M). Right: our Hellinger squared exponential covariance evaluated for the grid domains on the left. Increasing intensity indicates stronger covariance. The sets {SE, RQ} and {SE, PER, SE+PER} show strong mutual correlation.
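The squared Hellinger distance (6) between two multivariate Gaussians can be computed stably in the log domain; a sketch (the helper name is ours):

```python
import numpy as np

def hellinger_sq(mu_p, cov_p, mu_q, cov_q):
    """Squared Hellinger distance, as in (6), between N(mu_p, cov_p) and N(mu_q, cov_q)."""
    mu_p, mu_q = np.asarray(mu_p, float), np.asarray(mu_q, float)
    cov_p, cov_q = np.asarray(cov_p, float), np.asarray(cov_q, float)
    avg = (cov_p + cov_q) / 2.0
    _, logdet_p = np.linalg.slogdet(cov_p)     # log-determinants avoid overflow
    _, logdet_q = np.linalg.slogdet(cov_q)
    _, logdet_avg = np.linalg.slogdet(avg)
    log_coef = 0.25 * logdet_p + 0.25 * logdet_q - 0.5 * logdet_avg
    diff = mu_p - mu_q
    quad = diff @ np.linalg.solve(avg, diff)   # Mahalanobis-type quadratic term
    return 1.0 - np.exp(log_coef - quad / 8.0)
```

The distance is 0 for identical priors and approaches 1 as the induced priors diverge; approximating the expectation in (7) then amounts to averaging this quantity over hyperparameter samples (θ, θ′).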
Finally, we construct the Hellinger squared exponential covariance between models as

Kg(M, M′; θg, X) = σ² exp(−d̄²H(M, M′; X) / (2ℓ²)),   (8)

where θg = (σ, ℓ) specifies output and length scale hyperparameters in this kernel/evidence space. This covariance is illustrated in Figure 1 for a few simple kernels on a fictitious domain. We make two notes before continuing. First, computing (6) scales cubically with |X|, so it might appear that we might as well compute the evidence instead. This is misleading for two reasons. The (approximate) computation of a given model’s evidence via either a Laplace approximation or the BIC requires optimizing its hyperparameters; especially for complex models, this can require hundreds to thousands of computations that each require cubic time. Further, we have found that in practice we may approximate (6) and (7) by considering only a small subset of the observation locations X, and that this is usually sufficient to capture the similarity between models in terms of explaining a given dataset. In our experiments, we chose 20 points uniformly at random from those available in each dataset, fixed once for the entire procedure and for all kernels under consideration in the search. We then used these points to compute the distances (6–8), significantly reducing the overall time to compute Kg. Second, we note that the expectation in (7) is intractable. We approximate it via quasi-Monte Carlo, using a low-discrepancy sequence (a Sobol sequence) of the appropriate dimension and inverse transform sampling to give consistent, representative samples from the hyperparameter space of each model. Here we used 100 (θ, θ′) samples with good results.

4.5 Active set of candidate models

Another challenge in exploring an infinite set of models is how to advance the search. In each round, we only compute the acquisition function on a set of candidate models C.
Here we discuss our policy for creating and maintaining this set. From the kernel grammar (§4.1), we can define a model graph in which two models are connected if we can apply one rule to produce one from the other. We seek to traverse this graph, balancing exploration (diversity) against exploitation (models likely to have higher evidence). We begin each round with a set of already chosen candidates C. To encourage exploitation, we add to C all neighbors of the best model seen thus far. To encourage exploration, we perform random walks to create diverse models, which we also add to C. We start each random walk from the empty kernel and repeatedly apply a random number of grammatical transformations; the number of such steps is sampled from a geometric distribution with termination probability 1/3. We find that 15 random walks work well. To constrain the number of candidates, we discard the models with the lowest EI values at the end of each round, keeping |C| no larger than 600.

Table 1: Root mean square error for model-evidence regression experiment.

Dataset     Train %   Mean            k-NN (SP)       k-NN (d̄H)       GP (d̄H)
CONCRETE    20        0.109 (0.000)   0.200 (0.020)   0.233 (0.008)   0.107 (0.001)
            40        0.107 (0.000)   0.260 (0.025)   0.221 (0.007)   0.102 (0.001)
            60        0.107 (0.000)   0.266 (0.007)   0.215 (0.005)   0.097 (0.001)
            80        0.106 (0.000)   0.339 (0.015)   0.200 (0.003)   0.093 (0.002)
HOUSING     20        0.210 (0.001)   0.226 (0.002)   0.347 (0.004)   0.175 (0.002)
            40        0.207 (0.001)   0.235 (0.004)   0.348 (0.004)   0.140 (0.002)
            60        0.206 (0.000)   0.235 (0.004)   0.348 (0.004)   0.123 (0.002)
            80        0.206 (0.000)   0.257 (0.004)   0.344 (0.004)   0.114 (0.002)
MAUNA LOA   20        0.543 (0.002)   0.736 (0.051)   0.685 (0.010)   0.513 (0.003)
            40        0.537 (0.001)   0.878 (0.062)   0.667 (0.005)   0.499 (0.003)
            60        0.535 (0.001)   1.051 (0.058)   0.686 (0.010)   0.487 (0.004)
            80        0.534 (0.001)   1.207 (0.048)   0.707 (0.005)   0.474 (0.004)

5 Experiments

Here we evaluate our proposed algorithm.
We split our evaluation into two parts: first, we show that our GP model for predicting a model’s evidence is suitable; we then demonstrate that our model search method quickly finds a good model for a range of regression datasets. The datasets we consider are publicly available4 and were used in previous related work [1, 3]. AIRLINE, MAUNA LOA, METHANE, and SOLAR are 1d time series, and CONCRETE and HOUSING have, respectively, 8 and 13 dimensions. To facilitate comparison of evidence across datasets, we report log evidence divided by dataset size, redefining

g(M; D) = log p(y | X, M) / |D|.   (9)

We use the aforementioned base kernels {SE, RQ, LIN, PER} when the dataset is one-dimensional. For multi-dimensional datasets, we consider the set {SEi} ∪ {RQi}, where the subscript indicates that the kernel is applied only to the ith dimension. This setup is the same as in [1].

5.1 Predicting a model’s evidence

We first demonstrate that our proposed regression model in model space (i.e., the GP on g: M → R) is sound. We set up a simple prediction task in which we predict model evidence on a set of models given training data. We construct a dataset Dg (4) of 1 000 models as follows. We initialize a set M with the set of base kernels, which varies for each dataset (see above). Then, we select one model uniformly at random from M and add its neighbors in the model grammar to M. We repeat this procedure until |M| = 1 000 and compute g(M; D) for the entire generated set. We train several baselines on a subset of Dg and test their ability to predict the evidence of the remaining models, as measured by the root mean squared error (RMSE). To obtain reliable results, we repeat this experiment ten times. We considered a subset of the datasets (including both high-dimensional problems), because training 1 000 models demands considerable time. We compare with several alternatives:

1. Mean prediction. Predicts the mean evidence of the training models.
2. k-nearest neighbors.
We perform k-NN regression with two distances: shortest-path distance in the directed model graph described in §4.5 (SP), and the expected squared Hellinger distance (7). Inverse distance was used as weights. We select k for both k-NN algorithms through cross-validation, trying all values of k from 1 to 10.

We show the average RMSE along with standard error in Table 1. The GP with our Hellinger distance model covariance universally achieves the lowest error. Both k-NN methods are outperformed by the simple mean prediction. We note that in these experiments, many models perform similarly in terms of evidence (usually because many models are “bad” in the same way, e.g., explaining the dataset away entirely as independent noise). The GP model, however, is able to exploit correlations in deviations from the mean, for example in “good pockets” of model space, to achieve better performance. We also note that both the k-NN and GP models show decreasing error with the number of training models, suggesting our novel model distance is also useful in itself.

4 https://archive.ics.uci.edu/ml/datasets.html

Figure 2: A plot of the best model evidence found (normalized by |D|, (9)) as a function of the number of models evaluated, g(M∗; D), for six of the datasets considered: AIRLINE, METHANE, HOUSING (top row) and SOLAR, MAUNA LOA, CONCRETE (bottom row), comparing CKS and BOMS (identical vertical axis labels omitted for greater horizontal resolution).

5.2 Model search

We also evaluate our method’s ability to quickly find a suitable model to explain a given dataset. We compare our approach with the greedy compositional kernel search (CKS) of [1]. Both algorithms use the same kernel grammar (§4.1), hyperparameter priors (§4.2), and evidence approximation (§4.3, (5)).
We used L-BFGS to optimize model hyperparameters, using multiple restarts to avoid bad local maxima; each restart begins from a sample from p(θ | M). For BOMS, we always began our search by evaluating SE first. The active set of models C (§4.5) was initialized with all models at most two edges distant from the base kernels. To avoid unnecessary re-training of the GP on g, we optimized the hyperparameters of µg and Kg every 10 iterations; this also allows us to perform rank-one updates for fast inference during the intervening iterations. Results are depicted in Figure 2 for a budget of 50 evaluations of the model evidence. In four of the six datasets we substantially outperform CKS; note the vertical axis is in the log domain. The overhead of computing the kernel Kg and performing the inference about g was approximately 10% of the total running time. On MAUNA LOA our method is competitive: we find a model of similar quality, but earlier. The results for METHANE, on the other hand, indicate that our search initially focused on a suboptimal region of the graph, but we eventually catch up.

6 Conclusion

We introduced a novel automated search for an appropriate kernel to explain a given dataset. Our mechanism explores an infinite space of candidate kernels and quickly and effectively selects a promising model. Focusing on the case where the models represent structural assumptions in GPs, we introduced a novel “kernel kernel” to capture the similarity of the prior explanations that two models ascribe to a given dataset. We have empirically demonstrated that our choice of modeling the evidence (or marginal likelihood) with a GP in model space is capable of predicting the evidence of unseen models with enough fidelity to effectively explore model space via Bayesian optimization.

Acknowledgments

This material is based upon work supported by the National Science Foundation (NSF) under award number IIA–1355406.
Additionally, GM acknowledges support from the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES).

References

[1] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. In International Conference on Machine Learning (ICML), 2013.
[2] R. Grosse, R. Salakhutdinov, W. Freeman, and J. Tenenbaum. Exploiting compositionality to explore a large space of model structures. In Conference on Uncertainty in Artificial Intelligence (UAI), 2012.
[3] F. R. Bach. Exploring large feature spaces with hierarchical multiple kernel learning. In Conference on Neural Information Processing Systems (NIPS), 2008.
[4] M. Gonen and E. Alpaydin. Multiple kernel learning algorithms. Journal of Machine Learning Research, 12:2211–2268, 2011.
[5] M. Lázaro-Gredilla, J. Q. Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. Journal of Machine Learning Research, 11:1865–1881, 2010.
[6] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. In International Conference on Machine Learning (ICML), 2013.
[7] A. Wilson, E. Gilboa, J. P. Cunningham, and A. Nehorai. Fast kernel learning for multidimensional pattern extrapolation. In Conference on Neural Information Processing Systems (NIPS), 2014.
[8] A. G. Wilson, D. A. Knowles, and Z. Ghahramani. Gaussian process regression networks. In International Conference on Machine Learning (ICML), 2012.
[9] G. E. Hinton and R. R. Salakhutdinov. Using deep belief nets to learn covariance kernels for Gaussian processes. In Conference on Neural Information Processing Systems (NIPS), 2008.
[10] A. C. Damianou and N. D. Lawrence. Deep Gaussian processes. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
[11] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Conference on Neural Information Processing Systems (NIPS), 2011.
[12] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Conference on Neural Information Processing Systems (NIPS), 2012.
[13] J. Gardner, G. Malkomes, R. Garnett, K. Q. Weinberger, D. Barbour, and J. P. Cunningham. Bayesian active model selection with an application to automated audiometry. In Conference on Neural Information Processing Systems (NIPS), 2015.
[14] E. Brochu, V. M. Cora, and N. de Freitas. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning. arXiv preprint arXiv:1012.2599, 2010.
[15] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[16] D. R. Jones, M. Schonlau, and W. J. Welch. Efficient global optimization of expensive black-box functions. Journal of Global Optimization, 13(4):455–492, 1998.
[17] D. J. C. MacKay. Introduction to Gaussian processes. In C. M. Bishop, editor, Neural Networks and Machine Learning, pages 133–165. Springer, Berlin, 1998.
[18] A. E. Raftery. Approximate Bayes factors and accounting for model uncertainty in generalised linear models. Biometrika, 83(2):251–266, 1996.
[19] J. Kuha. AIC and BIC: Comparisons of assumptions and performance. Sociological Methods and Research, 33(2):188–229, 2004.
[20] G. Schwarz. Estimating the dimension of a model. Annals of Statistics, 6(2):461–464, 1978.
[21] K. P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
Designing smoothing functions for improved worst-case competitive ratio in online optimization

Reza Eghbali
Department of Electrical Engineering, University of Washington, Seattle, WA 98195
eghbali@uw.edu

Maryam Fazel
Department of Electrical Engineering, University of Washington, Seattle, WA 98195
mfazel@uw.edu

Abstract

Online optimization covers problems such as online resource allocation, online bipartite matching, adwords (a central problem in e-commerce and advertising), and adwords with separable concave returns. We analyze the worst-case competitive ratio of two primal-dual algorithms for a class of online convex (conic) optimization problems that contains the previous examples as special cases defined on the positive orthant. We derive a sufficient condition on the objective function that guarantees a constant worst-case competitive ratio (greater than or equal to 1/2) for monotone objective functions. We provide new examples of online problems on the positive orthant that satisfy the sufficient condition. We show how smoothing can improve the competitive ratio of these algorithms, and in particular for separable functions, we show that the optimal smoothing can be derived by solving a convex optimization problem. This result allows us to directly optimize the competitive ratio bound over a class of smoothing functions, and hence design effective smoothing customized for a given cost function.

1 Introduction

Given a proper convex cone K ⊂ R^n, let ψ : K → R be an upper semi-continuous concave function. Consider the optimization problem

maximize ψ(∑_{t=1}^m A_t x_t)
subject to x_t ∈ F_t, ∀t ∈ [m],   (1)

where for all t ∈ [m] := {1, 2, . . . , m}, x_t ∈ R^l are the optimization variables and F_t are compact convex constraint sets. We assume A_t ∈ R^{n×l} maps F_t to K; for example, when K = R^n_+ and F_t ⊂ R^l_+, this assumption is satisfied if A_t has nonnegative entries.
We consider problem (1) in the online setting, where it can be viewed as a sequential game between a player (online algorithm) and an adversary. At each step t, the adversary reveals A_t, F_t and the algorithm chooses x̂_t ∈ F_t. The performance of the algorithm is measured by its competitive ratio, i.e., the ratio of the objective value at x̂_1, . . . , x̂_m to the offline optimum. Problem (1) covers (convex relaxations of) various online combinatorial problems including online bipartite matching [14], the “adwords” problem [16], and the secretary problem [15]. More generally, it covers online linear programming (LP) [6], online packing/covering with convex cost [3, 4, 7], and a generalization of adwords [8]. In this paper, we study the case where ∂ψ(u) ⊂ K∗ for all u ∈ K, i.e., ψ is monotone with respect to the cone K. The competitive performance of online algorithms has been studied mainly under the worst-case model (e.g., in [16]) or stochastic models (e.g., in [15]). In the worst-case model one is interested in lower bounds on the competitive ratio that hold for any (A_1, F_1), . . . , (A_m, F_m). In stochastic models, the adversary chooses a probability distribution from a family of distributions to generate (A_1, F_1), . . . , (A_m, F_m), and the competitive ratio is calculated using the expected value of the algorithm’s objective value. Online bipartite matching and its generalization, the “adwords” problem, are the two main problems that have been studied under the worst-case model. The greedy algorithm achieves a competitive ratio of 1/2, while the optimal algorithm achieves a competitive ratio of 1 − 1/e (as the bid-to-budget ratio goes to zero) [16, 5, 14, 13]. A more general version of adwords in which each agent (advertiser) has a concave cost has been studied in [8].
The majority of algorithms proposed for the problems mentioned above rely on a primal-dual framework [5, 6, 3, 8, 4]. The differentiating point among the algorithms is the method of updating the dual variable at each step, since once the dual variable is updated the primal variable can be assigned using a simple complementarity condition. A simple and efficient method of updating the dual variable is through a first-order online learning step. For example, the algorithm stated in [9] for online linear programming uses mirror descent with entropy regularization (the multiplicative weights update algorithm) once written in the primal-dual language. Recently, the work in [9] was independently extended to the random permutation model in [12, 2, 11]. In [2], the authors provide a competitive difference bound for online convex optimization under the random permutation model as a function of the regret bound for the online learning algorithm applied to the dual. In this paper, we consider two versions of the greedy algorithm for problem (1): a sequential update and a simultaneous update algorithm. The simultaneous update algorithm, Algorithm 2, provides a direct saddle-point representation of what has been described informally in the literature as “continuous updates” of primal and dual variables. This saddle-point representation allows us to generalize this type of update to non-smooth functions. In Section 2, we bound the competitive ratios of the two algorithms. A sufficient condition on the objective function that guarantees a non-trivial worst-case competitive ratio is introduced. We show that the competitive ratio is at least 1/2 for a monotone non-decreasing objective function. Examples that satisfy the sufficient condition (on the positive orthant and the positive semidefinite cone) are given. In Section 3, we derive optimal algorithms, as variants of the greedy algorithm applied to a smoothed version of ψ. For example, Nesterov smoothing provides an optimal algorithm for the adwords problem.
The main contribution of this paper is to show how one can derive the optimal smoothing function (or, from the dual point of view, the optimal regularization function) for separable ψ on the positive orthant by solving a convex optimization problem. This gives an implementable algorithm that achieves the optimal competitive ratio derived in [8]. We also show how this convex optimization can be modified to design smoothing functions specifically for the sequential algorithm; in contrast, [8] only considers continuous updates. The algorithms considered in this paper and their general analysis are the same as those we considered in [10]. In [10], the focus is on non-monotone functions and online problems on the positive semidefinite cone, whereas the focus of this paper is on monotone functions on the positive orthant. Moreover, in [10] we only considered Nesterov smoothing and only derived competitive ratio bounds for the simultaneous algorithm.

Notation. Given a function ψ : R^n → R, ψ∗ denotes the concave conjugate of ψ, defined as ψ∗(y) = inf_u ⟨y, u⟩ − ψ(u) for all y ∈ R^n. For a concave function ψ, ∂ψ(u) denotes the set of supergradients of ψ at u, i.e., the set of all y ∈ R^n such that ψ(u′) ≤ ⟨y, u′ − u⟩ + ψ(u) for all u′ ∈ R^n. The set ∂ψ is related to the concave conjugate function ψ∗ as follows: for an upper semi-continuous concave function ψ we have ∂ψ(u) = argmin_y ⟨y, u⟩ − ψ∗(y). A differentiable function ψ has a Lipschitz continuous gradient with respect to ∥·∥ with continuity parameter 1/µ > 0 if for all u, u′ ∈ R^n, ∥∇ψ(u′) − ∇ψ(u)∥∗ ≤ (1/µ) ∥u − u′∥, where ∥·∥∗ is the dual norm to ∥·∥. The dual cone K∗ of a cone K ⊂ R^n is defined as K∗ = {y | ⟨y, u⟩ ≥ 0 ∀u ∈ K}. Two examples of self-dual cones are the positive orthant R^n_+ and the cone of n × n positive semidefinite matrices S^n_+. A proper cone (pointed convex cone with nonempty interior) K induces a partial ordering on R^n, denoted by ≤K and defined as x ≤K y ⇔ y − x ∈ K.
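As a small check of the conjugacy notation, the concave conjugate can be approximated by a grid search over u. For ψ(u) = √u on the nonnegative reals, ψ∗(y) = −1/(4y) for y > 0, since the minimizer of yu − √u is u = 1/(4y²). This is a toy sketch; the grid and example are illustrative.

```python
import numpy as np

def concave_conjugate(psi, y, u_grid):
    """psi*(y) = inf_u <y, u> - psi(u), approximated over a finite grid of u."""
    vals = y * u_grid - psi(u_grid)
    return vals.min()

# example: psi(u) = sqrt(u) on u >= 0; closed form psi*(y) = -1/(4y) for y > 0
u_grid = np.linspace(0.0, 10.0, 400001)
approx = concave_conjugate(np.sqrt, 2.0, u_grid)   # close to -1/8
```

The supergradient relation ∂ψ(u) = argmin_y ⟨y, u⟩ − ψ∗(y) can be checked the same way: for ψ(u) = √u, minimizing yu + 1/(4y) over y > 0 recovers y = 1/(2√u).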
1.1 Two primal-dual algorithms

The (Fenchel) dual problem for problem (1) is given by

minimize ∑_{t=1}^m σ_t(A_t^T y) − ψ∗(y),   (2)

where the optimization variable is y ∈ R^n, and σ_t denotes the support function of the set F_t, defined as σ_t(z) = sup_{x∈F_t} ⟨x, z⟩. A pair (x∗, y∗) ∈ (F_1 × . . . × F_m) × K∗ is an optimal primal-dual pair if and only if

x∗_t ∈ argmax_{x∈F_t} ⟨x, A_t^T y∗⟩ ∀t ∈ [m],   y∗ ∈ ∂ψ(∑_{t=1}^m A_t x∗_t).

Based on these optimality conditions, we consider two algorithms. Algorithm 1 updates the primal and dual variables sequentially, maintaining a dual variable ŷ_t and using it to assign x̂_t ∈ argmax_{x∈F_t} ⟨x, A_t^T ŷ_t⟩. The algorithm then updates the dual variable based on the second optimality condition. By the assignment rule, we have A_t x̂_t ∈ ∂σ_t(ŷ_t), and the dual variable update can be viewed as ŷ_{t+1} ∈ argmin_y ⟨∑_{s=1}^t A_s x̂_s, y⟩ − ψ∗(y). Therefore, the dual update is the same as the update in the dual averaging [18] or Follow The Regularized Leader (FTRL) [20, 19, 1] algorithm with regularization −ψ∗(y).

Algorithm 1 Sequential Update
  Initialize ŷ_1 ∈ ∂ψ(0)
  for t ← 1 to m do
    Receive A_t, F_t
    x̂_t ∈ argmax_{x∈F_t} ⟨x, A_t^T ŷ_t⟩
    ŷ_{t+1} ∈ ∂ψ(∑_{s=1}^t A_s x̂_s)
  end for

Algorithm 2 updates the primal and dual variables simultaneously, ensuring that

x̃_t ∈ argmax_{x∈F_t} ⟨x, A_t^T ỹ_t⟩,   ỹ_t ∈ ∂ψ(∑_{s=1}^t A_s x̃_s).

This algorithm is inherently more complicated than Algorithm 1, since finding x̃_t involves solving a saddle-point problem. This can be solved by a first-order method such as the mirror descent algorithm for saddle-point problems. In contrast, the primal and dual updates in Algorithm 1 solve two separate maximization and minimization problems.1

Algorithm 2 Simultaneous Update
  for t ← 1 to m do
    Receive A_t, F_t
    (ỹ_t, x̃_t) ∈ argmin_y max_{x∈F_t} ⟨y, A_t x + ∑_{s=1}^{t−1} A_s x̃_s⟩ − ψ∗(y)
  end for

2 Competitive ratio bounds and examples for ψ

In this section, we derive bounds on the competitive ratios of Algorithms 1 and 2 by bounding their respective duality gaps.
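Before turning to the analysis, here is a concrete (illustrative) instantiation of Algorithm 1 for the adwords objective ψ(u) = ∑_i min(u_i, B_i), with A_t the diagonal matrix of bids at step t and F_t the set of unit vectors: the dual variable is a supergradient indicating which budgets remain unspent, and the primal step routes each query to the advertiser maximizing bid times dual. All names, the example instance, and the brute-force baseline are ours, not the paper's code.

```python
import itertools
import numpy as np

def psi(u, budgets):
    # adwords objective: advertiser i pays at most its budget B_i
    return float(np.minimum(u, budgets).sum())

def greedy_adwords(bids, budgets):
    """Sketch of Algorithm 1 (sequential update) specialized to adwords:
    psi(u) = sum_i min(u_i, B_i), A_t = diag(bids[t]), F_t = unit vectors.
    At u_i = B_i the superdifferential is [0, 1]; we take 0."""
    n = len(budgets)
    u = np.zeros(n)
    y = np.ones(n)                            # y_1, a supergradient of psi at 0
    for b in bids:
        i = int(np.argmax(b * y))             # x_t maximizes <x, A_t^T y_t>
        u[i] += b[i]
        y = (u < budgets).astype(float)       # y_{t+1} in the superdifferential
    return psi(u, budgets)

def offline_opt(bids, budgets):
    # brute-force offline optimum (small instances only)
    n, best = len(budgets), 0.0
    for assign in itertools.product(range(n), repeat=len(bids)):
        u = np.zeros(n)
        for t, i in enumerate(assign):
            u[i] += bids[t][i]
        best = max(best, psi(u, budgets))
    return best

budgets = np.array([2.0, 2.0])
bids = [np.array([1.0, 1.0]), np.array([1.0, 1.0]),
        np.array([1.0, 0.5]), np.array([1.0, 0.5]), np.array([1.0, 0.5])]
# on this instance greedy attains 3.5 while the offline optimum is 4.0,
# consistent with the 1/2 worst-case competitive ratio of the greedy algorithm
```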
We begin by stating a sufficient condition on ψ that leads to non-trivial competitive ratios; we assume this condition holds in the rest of the paper. Roughly, one can interpret this assumption as “diminishing returns” with respect to the ordering induced by a cone. Examples of functions that satisfy this assumption appear later in this section.
Assumption 1 Whenever u ≥_K v, there exists y ∈ ∂ψ(u) that satisfies y ≤_{K*} z for all z ∈ ∂ψ(v).
When ψ is differentiable, Assumption 1 simplifies to u ≥_K v ⇒ ∇ψ(u) ≤_{K*} ∇ψ(v); that is, the gradient, as a map from R^n (equipped with ≤_K) to R^n (equipped with ≤_{K*}), is order-reversing. When ψ is twice differentiable, Assumption 1 is equivalent to ⟨w, ∇²ψ(u) v⟩ ≤ 0 for all u, v, w ∈ K. For example, when K = R^n_+ this is equivalent to the Hessian being element-wise non-positive.
Define ỹ_{m+1} to be the minimum element of ∂ψ(Σ_{t=1}^m A_t x̃_t) with respect to the ordering ≤_{K*} (such an element exists in the superdifferential by Assumption 1). Let P_seq = ψ(Σ_{t=1}^m A_t x̂_t) and P_sim = ψ(Σ_{t=1}^m A_t x̃_t) denote the primal objective values for the primal solutions produced by Algorithms 1 and 2, and let D_seq = Σ_{t=1}^m σ_t(A_t^T ŷ_t) − ψ*(ŷ_{m+1}) and D_sim = Σ_{t=1}^m σ_t(A_t^T ỹ_t) − ψ*(ỹ_{m+1}) denote the corresponding dual objective values. The next lemma provides a lower bound on the duality gaps of both algorithms.
Lemma 1 The duality gaps of the two algorithms can be lower bounded as
P_sim − D_sim ≥ ψ*(ỹ_{m+1}) + ψ(0),
P_seq − D_seq ≥ ψ*(ŷ_{m+1}) + ψ(0) + Σ_{t=1}^m ⟨A_t x̂_t, ŷ_{t+1} − ŷ_t⟩.
Furthermore, if ψ has a Lipschitz continuous gradient with parameter 1/µ with respect to ∥·∥,
P_seq − D_seq ≥ ψ*(ŷ_{m+1}) + ψ(0) − (1/(2µ)) Σ_{t=1}^m ∥A_t x̂_t∥².   (3)
Note that the right-hand side of (3) is exactly the regret bound of the FTRL algorithm (with a negative sign) [19]. The proof is given in the appendix. To simplify notation in the rest of the paper, we assume ψ(0) = 0, replacing ψ(u) with ψ(u) − ψ(0) if necessary.
To quantify the competitive ratio of the algorithms, we define
α_ψ = sup{c | ψ*(y) ≥ c ψ(u), y ∈ ∂ψ(u), u ∈ K}.   (4)
Since ψ*(y) + ψ(u) = ⟨y, u⟩ for all y ∈ ∂ψ(u), α_ψ is equivalently
α_ψ = sup{c | ⟨y, u⟩ ≥ (c + 1) ψ(u), y ∈ ∂ψ(u), u ∈ K}.   (5)
Note that −1 ≤ α_ψ ≤ 0, since for any u ∈ K and y ∈ ∂ψ(u), by concavity of ψ and the fact that y ∈ K*, we have 0 ≤ ⟨y, u⟩ ≤ ψ(u) − ψ(0). If ψ is a linear function then α_ψ = 0, while if 0 ∈ ∂ψ(u) for some u ∈ K, then α_ψ = −1. The next theorem provides lower bounds on the competitive ratios of the two algorithms.
Theorem 1 If Assumption 1 holds, we have
P_sim ≥ D*/(1 − α_ψ),  P_seq ≥ (D* + Σ_{t=1}^m ⟨A_t x̂_t, ŷ_{t+1} − ŷ_t⟩)/(1 − α_ψ),
where D* is the dual optimal objective. If ψ has a Lipschitz continuous gradient with parameter 1/µ with respect to ∥·∥,
P_seq ≥ (D* − (1/(2µ)) Σ_{t=1}^m ∥A_t x̂_t∥²)/(1 − α_ψ).   (6)
Proof: Consider the simultaneous update algorithm. We have Σ_{s=1}^t A_s x̃_s ≤_K Σ_{s=1}^m A_s x̃_s for all t, since A_s F_s ⊂ K for all s. Since ỹ_t ∈ ∂ψ(Σ_{s=1}^t A_s x̃_s) and ỹ_{m+1} was picked as the minimum element of ∂ψ(Σ_{s=1}^m A_s x̃_s) with respect to ≤_{K*}, Assumption 1 gives ỹ_t ≥_{K*} ỹ_{m+1}. Since A_t x ∈ K for all x ∈ F_t, we get ⟨A_t x, ỹ_t⟩ ≥ ⟨A_t x, ỹ_{m+1}⟩; therefore σ_t(A_t^T ỹ_t) ≥ σ_t(A_t^T ỹ_{m+1}). Thus
D_sim = Σ_{t=1}^m σ_t(A_t^T ỹ_t) − ψ*(ỹ_{m+1}) ≥ Σ_{t=1}^m σ_t(A_t^T ỹ_{m+1}) − ψ*(ỹ_{m+1}) ≥ D*.
Now Lemma 1 and the definition of α_ψ give the desired result. The proof for Algorithm 1 follows similar steps. □
We now consider examples of ψ that satisfy Assumption 1 and derive lower bounds on α_ψ for those examples.
Examples on the positive orthant. Let K = R^n_+ and note that K* = K. To simplify notation we write ≤ instead of ≤_{R^n_+}. Assumption 1 is satisfied for a twice differentiable function if and only if the Hessian is element-wise non-positive over R^n_+.
¹Also, if the original problem is a convex relaxation of an integer program, meaning that each F_t is the convex hull of a set of integer points in Z^l, then x̂_t can always be chosen to be integral, while integrality may not hold for the solution of the second algorithm.
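The constant α_ψ can also be estimated numerically for scalar examples. A sketch (ours, illustrative) for the budget function ψ(u) = min(u, 1) that appears in the adwords example below: since 0 ∈ ∂ψ(1), characterization (5) forces α_ψ = −1.

```python
# Estimate alpha_psi = sup{ c : <y,u> >= (c+1) psi(u) for all u in K, y in dpsi(u) }
# for the scalar function psi(u) = min(u, 1), using the equivalent form (5).
# Illustrative sketch; the superdifferential below is hand-coded for this psi.

def superdiff(u):
    # superdifferential of min(u, 1): {1} for u < 1, [0, 1] at u = 1, {0} for u > 1
    if u < 1.0:
        return [1.0]
    if u == 1.0:
        return [0.0, 1.0]   # the endpoints of the supergradient interval suffice here
    return [0.0]

def alpha_psi(grid):
    # alpha = inf over u > 0 and y in dpsi(u) of  y*u/psi(u) - 1
    return min(y * u / min(u, 1.0) - 1.0
               for u in grid for y in superdiff(u))

assert alpha_psi([k / 10 for k in range(1, 31)]) == -1.0  # worst case at u = 1, y = 0
```

The supergradient y = 0 at the budget kink makes ⟨y, u⟩ = 0 while ψ(u) > 0, which is exactly the case α_ψ = −1 discussed above.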
If ψ is separable, i.e., ψ(u) = Σ_{i=1}^n ψ_i(u_i), Assumption 1 is satisfied, since by concavity each ψ_i has non-increasing supergradients: u_i ≥ v_i implies ∂ψ_i(u_i) ≤ ∂ψ_i(v_i) in the sense of Assumption 1. In the basic adwords problem, for all t, F_t = {x ∈ R^l_+ | 1^T x ≤ 1}, A_t is a diagonal matrix with non-negative entries, and
ψ(u) = Σ_{i=1}^n u_i − Σ_{i=1}^n (u_i − 1)_+,   (7)
where (·)_+ = max{·, 0}. In this problem, ψ*(y) = 1^T (y − 1). Since 0 ∈ ∂ψ(1), we have α_ψ = −1 by (5); therefore, the competitive ratio of Algorithm 2 is 1/2. Let r = max_{t,i,j} A_{t,i,j} (the bid-to-budget ratio); then |Σ_{t=1}^m ⟨A_t x̂_t, ŷ_{t+1} − ŷ_t⟩| ≤ nr. Therefore, the competitive ratio of Algorithm 1 goes to 1/2 as r goes to zero. In adwords with concave returns, studied in [8], A_t is diagonal for all t and ψ is separable.²
For any p ≥ 1, let B_p denote the l_p-norm ball. We can rewrite the penalty −Σ_{i=1}^n (u_i − 1)_+ in the adwords objective using the distance from B_∞: we have Σ_{i=1}^n (u_i − 1)_+ = d_1(u, B_∞), where d_1(·, C) denotes the l_1-norm distance to the set C. For p ∈ [1, ∞), the function −d_1(u, B_p), although not separable, satisfies Assumption 1. The proof is given in the supplementary material.
Examples on the positive semidefinite cone. Let K = S^n_+ and note that K* = K. Two examples that satisfy Assumption 1 are ψ(U) = log det(U + A_0) and ψ(U) = tr(U^p) with p ∈ (0, 1). We refer the reader to [10] for examples of online problems whose objectives involve log det, and for a competitive ratio analysis of the simultaneous algorithm for those problems.
3 Smoothing of ψ for improved competitive ratio
The technique of “smoothing” a (potentially non-smooth) objective function, or equivalently adding a strongly convex regularization term to its conjugate, has been used in several areas. In convex optimization, a general version is due to Nesterov [17] and has led to faster convergence rates of first-order methods for non-smooth problems.
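As concrete motivation for smoothing, a minimal sketch (the instance and helper names are ours, not the paper's) of the sequential update of Algorithm 1 on a two-query adwords instance with diagonal A_t and simplex F_t, where the unsmoothed greedy rule achieves exactly the 1/2 ratio discussed above:

```python
# Algorithm 1 (sequential update) specialized to adwords: F_t is the simplex,
# A_t = diag(bids_t), psi(u) = sum_i min(u_i, 1). "Greedy" means using a
# supergradient of the unsmoothed psi. Illustrative sketch; ties go to the
# lowest index.

def sequential_update(bid_vectors, supergrad, n):
    u = [0.0] * n                      # accumulated allocation sum_s A_s x_s
    y = [supergrad(0.0)] * n           # dual variable y_1 in dpsi(0)
    for bids in bid_vectors:
        i = max(range(n), key=lambda j: bids[j] * y[j])  # primal step
        u[i] += bids[i]
        y = [supergrad(uj) for uj in u]                  # dual step
    return u

greedy = lambda u: 1.0 if u < 1.0 else 0.0   # supergradient of min(u, 1)

# Worst case for greedy: query 1 bids (1, 1), query 2 bids (1, 0).
u = sequential_update([[1.0, 1.0], [1.0, 0.0]], greedy, 2)
primal = sum(min(ui, 1.0) for ui in u)
# Greedy earns psi(u) = 1, while the offline optimum (query 1 -> advertiser 2,
# query 2 -> advertiser 1) earns 2: the ratio is exactly 1/2.
```

The smoothed algorithms of this section replace `greedy` with the gradient of a carefully chosen concave ψ_S, which is what lifts the guarantee above 1/2.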
In this section, we study how replacing ψ with an appropriately smoothed function ψ_S helps improve the performance of the two algorithms discussed in Section 1.1, and show that it yields the optimal competitive ratio for two of the problems mentioned in Section 2: adwords and online LP. We then show how to maximize the competitive ratio bound of both algorithms for a separable ψ and compute the optimal smoothing by solving a convex optimization problem. This allows us to design the most effective smoothing customized for a given ψ: we maximize the bound on the competitive ratio over the set of smooth functions (see Section 3.2 for details).
Let ψ_S denote an upper semi-continuous concave function (a smoothed version of ψ), and suppose ψ_S satisfies Assumption 1. The algorithms we consider in this section are the same as Algorithms 1 and 2, but with ψ_S replacing ψ. Note that the competitive ratio is still computed with respect to the original problem; that is, the offline primal and dual optimal values remain the same P* and D* as before. From Lemma 1, we have D_sim ≤ ψ_S(Σ_{t=1}^m A_t x̃_t) − ψ*(ỹ_{m+1}) and D_seq ≤ ψ_S(Σ_{t=1}^m A_t x̂_t) − ψ*(ŷ_{m+1}) − Σ_{t=1}^m ⟨A_t x̂_t, ŷ_{t+1} − ŷ_t⟩. To simplify notation, assume ψ_S(0) = 0 as before. Define
α_{ψ,ψS} = sup{c | ψ*(y) ≥ ψ_S(u) + (c − 1) ψ(u), y ∈ ∂ψ_S(u), u ∈ K}.
Then the conclusion of Theorem 1 for Algorithms 1 and 2 applied to the smoothed function holds with α_ψ replaced by α_{ψ,ψS}.
3.1 Nesterov Smoothing
We first consider Nesterov smoothing [17] and apply it to examples on the non-negative orthant. Given a proper upper semi-continuous concave function φ : R^n → R ∪ {−∞}, let ψ_S = (ψ* + φ*)*. Note that ψ_S is the supremal convolution of ψ and φ. If ψ and φ are separable, then ψ_S satisfies Assumption 1 for K = R^n_+. Here we provide an example of Nesterov smoothing for functions on the non-negative orthant.
Adwords: The optimal competitive ratio for the adwords problem is 1 − e⁻¹.
This ratio is achieved by smoothing ψ with φ*(y) = Σ_{i=1}^n (y_i − e/(e−1)) log(e − (e−1) y_i) − 2 y_i, which gives
ψ_{S,i}(u_i) − ψ_{S,i}(0) = (e u_i − exp(u_i) + 1)/(e − 1) for u_i ∈ [0, 1], and 1/(e − 1) for u_i > 1.
²Note that in this case one can remove the assumption that ∂ψ_i ⊂ R_+, since if ỹ_{t,i} = 0 for some t and i, then x̃_{s,i} = 0 for all s ≥ t.
3.2 Computing the optimal smoothing for separable functions on R^n_+
We now tackle the problem of finding the optimal smoothing for separable functions on the positive orthant, which, as an example at the end of this section shows, is not necessarily given by Nesterov smoothing. Given a separable monotone ψ(u) = Σ_{i=1}^n ψ_i(u_i) and ψ_S(u) = Σ_{i=1}^n ψ_{S,i}(u_i) on R^n_+, we have α_{ψ,ψS} ≥ min_i α_{ψi,ψS,i}. To simplify notation, we drop the index i and consider ψ : R_+ → R. We formulate the problem of finding the ψ_S that maximizes α_{ψ,ψS} as an optimization problem; in Section 4 we discuss the relation between this method and the optimal algorithm presented in [8]. We set ψ_S(u) = ∫_0^u y(s) ds with y a continuous function (y ∈ C[0, ∞)), and state an infinite-dimensional convex optimization problem with y as the variable:
minimize β
subject to ∫_0^u y(s) ds − ψ*(y(u)) ≤ β ψ(u) for all u ∈ [0, ∞),
y ∈ C[0, ∞),   (8)
where β = 1 − α_{ψ,ψS} (Theorem 1 describes how the competitive ratios depend on this parameter). Note that we have not imposed any condition requiring y to be non-increasing (i.e., requiring the corresponding ψ_S to be concave). The next lemma establishes that every feasible solution of problem (8) can be turned into a non-increasing one.
Lemma 2 Let (y, β) be a feasible solution of problem (8) and define ȳ(t) = inf_{s≤t} y(s). Then (ȳ, β) is also a feasible solution of problem (8). In particular, if (y, β) is an optimal solution, then so is (ȳ, β).
The proof is given in the supplement. Revisiting the adwords problem, we observe that the optimal solution is given by y(u) = ((e − exp(u))/(e − 1))_+, which is the derivative of the smooth function we derived using Nesterov smoothing in Section 3.1.
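This claim can be sanity-checked numerically. With ψ(u) = min(u, 1) we have ψ*(v) = v − 1 for v ∈ [0, 1], and the candidate y(u) = (e − eᵘ)_+/(e − 1) integrates in closed form; the constraint of problem (8) then holds with equality everywhere for β = 1/(1 − 1/e). A pure-Python sketch (ours, illustrative):

```python
import math

# Check that y(u) = (e - e^u)_+ / (e - 1) is feasible, and in fact tight, for
# problem (8) with psi(u) = min(u, 1), psi*(v) = v - 1, beta = 1/(1 - 1/e).
# Illustrative sketch; psi_S below is the closed-form integral of this y only.

E = math.e
BETA = 1.0 / (1.0 - 1.0 / E)           # = e/(e - 1), i.e. beta = 1 - alpha

def y(u):
    return max(E - math.exp(u), 0.0) / (E - 1.0)

def psi_S(u):
    # integral of y over [0, u]; y vanishes for u >= 1
    if u <= 1.0:
        return (E * u - math.exp(u) + 1.0) / (E - 1.0)
    return 1.0 / (E - 1.0)

def psi_conj(v):
    return v - 1.0                      # concave conjugate of min(u, 1) on [0, 1]

for k in range(201):                    # u in [0, 2]
    u = k / 100
    lhs = psi_S(u) - psi_conj(y(u))
    rhs = BETA * min(u, 1.0)
    assert lhs <= rhs + 1e-9            # feasibility for problem (8)
    assert abs(lhs - rhs) < 1e-9        # tightness of the optimal smoothing
```

Since 1/β = 1 − 1/e, tightness of the constraint is consistent with this smoothing attaining the optimal adwords competitive ratio.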
The optimality of this y can be established by exhibiting a dual certificate: a measure ν corresponding to the inequality constraint that, together with y, satisfies the optimality conditions. If we set dν = exp(1 − u)/(e − 1) du, the optimality conditions are satisfied with β = (1 − 1/e)⁻¹. Also note that if ψ plateaus (e.g., as in the adwords objective), then one can replace problem (8) with a problem over a finite horizon.
Theorem 2 Suppose ψ(t) = c on [u′, ∞) (ψ plateaus). Then problem (8) is equivalent to
minimize β
subject to ∫_0^u y(s) ds − ψ*(y(u)) ≤ β ψ(u) for all u ∈ [0, u′],
y(u′) = 0, y ∈ C[0, u′].   (9)
So for a function ψ with a plateau, one can discretize problem (9) to obtain a finite-dimensional problem:
minimize β
subject to h Σ_{s=1}^t y[s] − ψ*(y[t]) ≤ β ψ(ht) for all t ∈ [d],
y[d] = 0,   (10)
where h = u′/d is the discretization step. Figure 1a shows the optimal smoothing for the piecewise linear function ψ(u) = min(.75, u, .5u + .25), obtained by solving problem (10). We point out that the optimal smoothing for this function is not given by Nesterov smoothing (even though the optimal smoothing can be derived by Nesterov smoothing for a piecewise linear function with only two pieces, such as the adwords cost function). Figure 1d shows the difference between the conjugate of the optimal smoothing function and ψ* for this piecewise linear function, which we can see is not concave. We simulated the performance of the simultaneous algorithm on a dataset with n = m, each F_t the simplex, and each A_t diagonal. We varied m from 1 to 30 and, for each m, computed the smallest competitive ratio achieved by the algorithm over (10m)² random permutations of A_1, . . . , A_m. Figure 1i depicts this quantity versus m for the optimal smoothing and for Nesterov smoothing; for the latter we used φ*(y) = (y − √e/(√e − 1)) log(√e − (√e − 1) y) − (3/2) y. In cases where a bound u_max on Σ_{t=1}^m A_t F_t is known, we can restrict u to [0, u_max] and discretize problem (8) over this interval.
However, the conclusion of Lemma 2 does not hold over a finite horizon, and we need to impose additional linear constraints y[t] ≤ y[t − 1] to ensure the monotonicity of y. We find the optimal smoothing for two examples of this kind: ψ(u) = log(1 + u) over [0, 100] (Figure 1b), and ψ(u) = √u over [0, 100] (Figure 1c). Figure 1e shows the competitive ratio achieved by the optimal smoothing of ψ(u) = log(1 + u) over [0, u_max] as a function of u_max; Figure 1f depicts the same quantity for ψ(u) = √u.
3.3 Competitive ratio bound for the sequential algorithm
In this section, we provide a lower bound on the competitive ratio of the sequential algorithm (Algorithm 1), and then modify problem (8) to find a smoothing function that optimizes this bound.
Theorem 3 Suppose ψ_S is differentiable on an open set containing K and satisfies Assumption 1. In addition, suppose there exists c ∈ K such that A_t F_t ≤_K c for all t. Then
P_seq ≥ D*/(1 − α_{ψ,ψS} + κ_{c,ψ,ψS}),
where κ is given by κ_{c,ψ,ψS} = inf{r | ⟨c, ∇ψ_S(0) − ∇ψ_S(u)⟩ ≤ r ψ(u), u ∈ K}.
Proof: Since ψ_S satisfies Assumption 1, we have ŷ_{t+1} ≤_{K*} ŷ_t. Therefore, we can write
Σ_{t=1}^m ⟨A_t x̂_t, ŷ_t − ŷ_{t+1}⟩ ≤ Σ_{t=1}^m ⟨c, ŷ_t − ŷ_{t+1}⟩ = ⟨c, ŷ_1 − ŷ_{m+1}⟩.   (11)
Combining the duality gap bound of Lemma 1 with (11), we get D_seq ≤ ψ_S(Σ_{t=1}^m A_t x̂_t) − ψ*(ŷ_{m+1}) + ⟨c, ∇ψ_S(0) − ∇ψ_S(Σ_{t=1}^m A_t x̂_t)⟩. The conclusion follows from the definitions of α_{ψ,ψS} and κ_{c,ψ,ψS} and the fact that D_seq ≥ D*. □
Based on this theorem, we can modify the optimization problem of Section 3.2 for separable functions on R^n_+ to maximize the lower bound on the competitive ratio of the sequential algorithm. Note that when ψ and ψ_S are separable, we have κ_{c,ψ,ψS} ≤ max_i κ_{c_i,ψ_i,ψS,i}. Therefore, as in the previous section, to simplify notation we drop the index i and assume ψ is a function of a scalar variable.
The optimization problem for finding the ψ_S that minimizes κ_{c,ψ,ψS} − α_{ψ,ψS} is:
minimize β
subject to ∫_0^u y(s) ds + c (ψ′(0) − y(u)) − ψ*(y(u)) ≤ β ψ(u) for all u ∈ [0, ∞),
y ∈ C[0, ∞).   (12)
For adwords, the optimal solution is given by β = 1/(1 − exp(−1/(c + 1))) and y(u) = β (1 − exp((u − 1)/(1 + c)))_+, which yields a competitive ratio of 1 − exp(−1/(c + 1)). In Figure 1h we plot the competitive ratio achieved by solving problem (12) for ψ(u) = log(1 + u) with u_max = 100, as a function of c. Figure 1g shows the competitive ratio as a function of c for the piecewise linear function ψ(u) = min(.75, u, .5u + .25).
Figure 1: Optimal smoothing for ψ(u) = min(.75, u, .5u + .25) (a), ψ(u) = log(1 + u) over [0, 100] (b), and ψ(u) = √u over [0, 100] (c). The competitive ratio achieved by the optimal smoothing as a function of u_max for ψ(u) = log(1 + u) (e) and ψ(u) = √u (f). ψ*_S − ψ* for the piecewise linear function (d). The competitive ratio achieved by the optimal smoothing for the sequential algorithm as a function of c for ψ(u) = min(.75, u, .5u + .25) (g) and ψ(u) = log(1 + u) with u_max = 100 (h). (i) Competitive ratio of the simultaneous algorithm for ψ(u) = min(.75, u, .5u + .25) as a function of m, with optimal smoothing and with Nesterov smoothing (see text).
4 Discussion and Related Work
We discuss results and papers related to this work from two communities: theoretical computer science and machine learning.
Online optimization. In [8], the authors proposed an optimal algorithm for adwords with differentiable concave returns (see the examples in Section 2). Here, “optimal” means that they construct an instance of the problem for which the competitive ratio bound cannot be improved, showing that the bound is tight. The algorithm is stated and analyzed for a twice differentiable, separable ψ(u), and the assignment rule for the primal variables in their proposed algorithm is described as a continuous process. A closer look reveals that this algorithm falls within the framework of Algorithm 2, the only difference being that at each step (x̃_t, ỹ_t) are chosen such that
x̃_t ∈ argmax_x ⟨x, A_t^T ỹ_t⟩,  ỹ_{t,i} = ∇ψ_i(v_i(u_i)) with u_i = (Σ_{s=1}^t A_s x̃_s)_i, for all i ∈ [n],
where v_i : R_+ → R_+ is an increasing differentiable function given as the solution of a nonlinear differential equation that involves ψ_i and may not have a closed form; the competitive ratio is also given in terms of this differential equation. They prove that this yields the optimal competitive ratio for instances with ψ_1 = ψ_2 = . . . = ψ_n. Note that this is equivalent to setting ψ_{S,i}(u_i) = ψ_i(v_i(u_i)). Since v_i is nondecreasing, ψ_{S,i} is a concave function. Conversely, given a concave function ψ_{S,i} with ψ_{S,i}(R_+) ⊂ ψ_i(R_+), we can set v_i : R_+ → R_+ as v_i(u) = inf{z | ψ_i(z) ≥ ψ_{S,i}(u)}. Our formulation in Section 3.2 provides a constructive way of finding the optimal smoothing, and it also applies to non-smooth ψ.
Online learning. As mentioned before, the dual update in Algorithm 1 is the same as in the Follow The Regularized Leader (FTRL) algorithm with −ψ* as the regularization. This primal-dual perspective has been used in [20] for the design and analysis of online learning algorithms.
In the online learning literature, the goal is to derive a bound on regret that depends optimally on the horizon m, whereas the goal in the present paper is to derive a competitive ratio for the algorithm that depends on the function ψ. Regret provides a bound on the duality gap, and in order to obtain a competitive ratio, the regularization function must be crafted based on ψ. A general choice of regularization that yields an optimal regret bound in terms of m is not enough for a competitive ratio argument; therefore, existing results in online learning do not address our aim.
References
[1] Jacob Abernethy, Elad Hazan, and Alexander Rakhlin. Competing in the dark: An efficient algorithm for bandit linear optimization. In COLT, pages 263–274, 2008.
[2] Shipra Agrawal and Nikhil R. Devanur. Fast algorithms for online stochastic convex programming. arXiv preprint arXiv:1410.7596, 2014.
[3] Yossi Azar, Ilan Reuven Cohen, and Debmalya Panigrahi. Online covering with convex objectives and applications. arXiv preprint arXiv:1412.3507, 2014.
[4] Niv Buchbinder, Shahar Chen, Anupam Gupta, Viswanath Nagarajan, et al. Online packing and covering framework with convex objectives. arXiv preprint arXiv:1412.8347, 2014.
[5] Niv Buchbinder, Kamal Jain, and Joseph (Seffi) Naor. Online primal-dual algorithms for maximizing ad-auctions revenue. In Algorithms–ESA 2007, pages 253–264. Springer, 2007.
[6] Niv Buchbinder and Joseph Naor. Online primal-dual algorithms for covering and packing. Mathematics of Operations Research, 34(2):270–286, 2009.
[7] T.-H. Hubert Chan, Zhiyi Huang, and Ning Kang. Online convex covering and packing problems. arXiv preprint arXiv:1502.01802, 2015.
[8] Nikhil R. Devanur and Kamal Jain. Online matching with concave returns. In Proceedings of the Forty-Fourth Annual ACM Symposium on Theory of Computing, pages 137–144. ACM, 2012.
[9] Nikhil R. Devanur, Kamal Jain, Balasubramanian Sivan, and Christopher A. Wilkens.
Near optimal online algorithms and fast approximation algorithms for resource allocation problems. In Proceedings of the 12th ACM Conference on Electronic Commerce, pages 29–38. ACM, 2011.
[10] R. Eghbali, M. Fazel, and M. Mesbahi. Worst case competitive analysis for online conic optimization. In 55th IEEE Conference on Decision and Control (CDC). IEEE, 2016.
[11] Reza Eghbali, Jon Swenson, and Maryam Fazel. Exponentiated subgradient algorithm for online optimization under the random permutation model. arXiv preprint arXiv:1410.7171, 2014.
[12] Anupam Gupta and Marco Molinaro. How the experts algorithm can help solve LPs online. arXiv preprint arXiv:1407.5298, 2014.
[13] Bala Kalyanasundaram and Kirk R. Pruhs. An optimal deterministic algorithm for online b-matching. Theoretical Computer Science, 233(1):319–325, 2000.
[14] Richard M. Karp, Umesh V. Vazirani, and Vijay V. Vazirani. An optimal algorithm for on-line bipartite matching. In Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing, pages 352–358. ACM, 1990.
[15] Robert Kleinberg. A multiple-choice secretary algorithm with applications to online auctions. In Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 630–631. Society for Industrial and Applied Mathematics, 2005.
[16] Aranyak Mehta, Amin Saberi, Umesh Vazirani, and Vijay Vazirani. Adwords and generalized online matching. Journal of the ACM (JACM), 54(5):22, 2007.
[17] Yu. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127–152, 2005.
[18] Yurii Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221–259, 2009.
[19] Shai Shalev-Shwartz and Yoram Singer. Online learning: Theory, algorithms, and applications. 2007.
[20] Shai Shalev-Shwartz and Yoram Singer. A primal-dual perspective of online learning algorithms. Machine Learning, 69(2-3):115–142, 2007.
Towards Unifying Hamiltonian Monte Carlo and Slice Sampling
Yizhe Zhang, Xiangyu Wang, Changyou Chen, Ricardo Henao, Kai Fan, Lawrence Carin
Duke University, Durham, NC 27708
{yz196, xw56, changyou.chen, ricardo.henao, kf96, lcarin}@duke.edu
Abstract
We unify slice sampling and Hamiltonian Monte Carlo (HMC) sampling, demonstrating their connection via the Hamilton-Jacobi equation from Hamiltonian mechanics. This insight enables extension of HMC and slice sampling to a broader family of samplers, called Monomial Gamma Samplers (MGS). We provide a theoretical analysis of the mixing performance of such samplers, proving that in the limit of a single parameter, the MGS draws decorrelated samples from the desired target distribution. We further show that as this parameter tends toward this limit, performance gains are achieved at the cost of increasing numerical difficulty and some practical convergence issues. Our theoretical results are validated with synthetic data and real-world applications.
1 Introduction
Markov Chain Monte Carlo (MCMC) sampling [1] stands as a fundamental approach for probabilistic inference in many computational statistical problems. In MCMC one typically seeks to design methods to efficiently draw samples from an unnormalized density function. Two popular auxiliary-variable sampling schemes for this task are Hamiltonian Monte Carlo (HMC) [2, 3] and the slice sampler [4]. HMC exploits gradient information to propose samples along a trajectory that follows Hamiltonian dynamics [3], introducing momentum as an auxiliary variable. Extending the random proposal associated with Metropolis-Hastings sampling [4], HMC is often able to propose large moves with acceptance rates close to one [2]. Recent attempts to improve HMC have leveraged geometric manifold information [5] and better numerical integrators [6]. Limitations of HMC include sensitivity to parameter tuning and the restriction to continuous distributions.
These issues can be partially addressed by adaptive approaches [7, 8], and by transforming sampling from discrete distributions into sampling from continuous ones [9, 10]. Seemingly distinct from HMC, the slice sampler [4] alternates between drawing conditional samples based on the target distribution and a uniformly distributed slice variable (the auxiliary variable). One problem with the slice sampler is the difficulty of solving for the slice interval, i.e., the domain of the uniform distribution, especially in high dimensions; as a consequence, adaptive methods are often applied [4]. Alternatively, one recent attempt to perform efficient slice sampling on latent Gaussian models samples from a high-dimensional elliptical curve parameterized by a single scalar [11]. It has been shown that in some cases slice sampling is more efficient than Gibbs sampling and Metropolis-Hastings, owing to the adaptability of the sampler to the scale of the region currently being sampled [4]. Despite the success of slice sampling and HMC, little research has been performed to investigate their connections. In this paper we use the Hamilton-Jacobi equation from classical mechanics to show that slice sampling is equivalent to HMC with a (simply) generalized kinetic function. Further, we also show that different settings of the HMC kinetic function correspond to generalized slice sampling, with a non-uniform conditional slicing distribution.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Based on this relationship, we develop theory to analyze the newly proposed broad family of auxiliary-variable-based samplers. We prove that, under this special family of distributions for the momentum in HMC, as the distribution becomes more heavy-tailed, the one-step autocorrelation of samples from the target distribution converges asymptotically to zero, leading to potentially decorrelated samples.
While of limited practical impact, this theoretical result provides insight into the properties of the proposed family of samplers. We also elaborate on the practical tradeoff between increased computational complexity and improved theoretical sampling efficiency. In the experiments, we validate our theory on both synthetic data and real-world problems, including Bayesian Logistic Regression (BLR) and Independent Component Analysis (ICA), for which we compare the mixing performance of our approach with that of standard HMC and slice sampling.
2 Solving Hamiltonian dynamics via the Hamilton-Jacobi equation
A Hamiltonian system consists of a kinetic function K(p) with momentum variable p ∈ R, and a potential energy function U(x) with coordinate x ∈ R; we elaborate on the multivariate case in the Appendix. The dynamics of a Hamiltonian system are completely determined by a set of first-order partial differential equations (PDEs) known as Hamilton's equations [12]:
∂p/∂τ = −∂H(x, p, τ)/∂x,  ∂x/∂τ = ∂H(x, p, τ)/∂p,   (1)
where H(x, p, τ) = K(p(τ)) + U(x(τ)) is the Hamiltonian and τ is the system time. Solving (1) gives the dynamics of x(τ) and p(τ) as functions of the system time τ. In a Hamiltonian system governed by (1), H(·) is constant for every τ [12]. A specified H(·), together with an initial point {x(0), p(0)}, defines a Hamiltonian trajectory {{x(τ), p(τ)} : ∀τ} in {x, p} space. It is well known that in many practical cases a direct solution of (1) may be difficult [13]. Alternatively, one might seek to transform the original Hamiltonian system {H(·), x, p, τ} to a dual space {H′(·), x′, p′, τ}, in the hope that the transformed PDEs in the dual space become simpler than the original PDEs in (1). One promising approach consists of using the Legendre transformation [12]. This family of transformations defines a unique mapping between primed and original variables, where the system time, τ, is identical.
In the transformed space, the resulting dynamics are often simpler than in the original Hamiltonian system. An important property of the Legendre transformation is that the form of (1) is preserved in the new space [14], i.e.,
∂p′/∂τ = −∂H′(x′, p′, τ)/∂x′,  ∂x′/∂τ = ∂H′(x′, p′, τ)/∂p′.
To guarantee a valid Legendre transformation between the original Hamiltonian system {H(·), x, p, τ} and the transformed Hamiltonian system {H′(·), x′, p′, τ}, both systems should satisfy Hamilton's principle [13], which equivalently expresses Hamilton's equations (1). The form of this Legendre transformation is not unique. One possibility is to use a generating-function approach [13], which requires the transformed variables to satisfy
p · ∂x/∂τ − H(x, p, τ) = p′ · ∂x′/∂τ − H′(x′, p′, τ) + dG(x, x′, p′, τ)/dτ,
where dG(x, x′, p′, τ)/dτ follows from the chain rule and G(·) is a Type-2 generating function defined as G(·) ≜ −x′ · p′ + S(x, p′, τ) [14], with S(x, p′, τ) being Hamilton's principal function [15], defined below. The following holds due to the independence of x, x′ and p′ in the previous transformation (after replacing G(·) by its definition):
p = ∂S(x, p′, τ)/∂x,  x′ = ∂S(x, p′, τ)/∂p′,  H′(x′, p′, τ) = H(x, p, τ) + ∂S(x, p′, τ)/∂τ.   (2)
We then obtain the desired Legendre transformation by setting H′(x′, p′, τ) = 0. The resulting (2) is known as the Hamilton-Jacobi equation (HJE). We refer the reader to [13, 12] for extensive discussions of the Legendre transformation and the HJE. Recall from above that the Legendre transformation preserves the form of (1). Since H′(x′, p′, τ) = 0, {x′, p′} are time-invariant (constant for every τ). Importantly, the time-invariant point {x′, p′} corresponds to a Hamiltonian trajectory in the original space, and it defines the initial point {x(0), p(0)} in the original space {x, p}; hence, given {x′, p′}, one may update the point along the trajectory by specifying the time τ.
A new point {x(τ), p(τ)} along the Hamiltonian trajectory in the original space, at system time τ, can be determined from the transformed point {x′, p′} by solving (2). One typically specifies the kinetic function as K(p) = p² [2], and Hamilton's principal function as S(x, p′, τ) = W(x) − p′τ, where W(x) is a function to be determined (defined below). From (2) and the definition of S(·), we can write
H(x, p, τ) + ∂S/∂τ = H(x, p, τ) − p′ = U(x) + (∂S/∂x)² − p′ = U(x) + (dW(x)/dx)² − p′ = 0,   (3)
where the second equality is obtained by replacing H(x, p, τ) = U(x(τ)) + K(p(τ)), and the third by substituting p from (2) into K(p(τ)). From (3), p′ = H(x, p, τ) represents the total Hamiltonian in the original space {x, p} and uniquely defines a Hamiltonian trajectory in {x, p}. Define X ≜ {x : H(·) − U(x) ≥ 0} as the slice interval, which for constant p′ = H(x, p, τ) corresponds to the set of valid coordinates in the original space {x, p}. Solving (3) for W(x) gives
W(x) = ∫_{x_min}^{x(τ)} f(z)^{1/2} dz + C,  with f(z) = H(·) − U(z) for z ∈ X and f(z) = 0 for z ∉ X,   (4)
where x_min = min{x : x ∈ X} and C is a constant. In addition, from (2) we have
x′ = ∂S(x, p′, τ)/∂p′ = ∂W(x)/∂H − τ = (1/2) ∫_{x_min}^{x(τ)} f(z)^{−1/2} dz − τ,   (5)
where the second equality is obtained by substituting the definition of S(·), and the third by applying Fubini's theorem to (4). Hence, for constant {x′, p′ = H(x, p, τ)}, equation (5) uniquely defines x(τ) for a specified system time τ.
3 Formulating HMC as a Slice Sampler
3.1 Revisiting HMC and Slice Sampling
Figure 1: Representation of HMC sampling. Points {x_t(0), p_t(0)} and {x_{t+1}(0), p_{t+1}(0)} represent HMC samples at iterations t and t + 1, respectively. The trajectories for t and t + 1 correspond to distinct Hamiltonian levels H_t(·) and H_{t+1}(·), denoted as black and red lines, respectively.
Suppose we are interested in sampling a random variable x from an unnormalized density f(x) ∝ exp[−U(x)], where U(x) is the potential energy function. Hamiltonian Monte Carlo (HMC) augments the target density with an auxiliary momentum random variable p that is independent of x. The distribution of p is specified as ∝ exp[−K(p)], where K(p) is the kinetic energy function. Define H(x, p) = U(x) + K(p) as the Hamiltonian; we omit the dependency of H(·), x and p on the system time τ for simplicity. HMC iteratively performs dynamic-evolution and momentum-resampling steps, sampling x_t from the target distribution and p_t from the momentum distribution (Gaussian when K(p) = p²), for iterations t = 1, 2, . . .. Figure 1 illustrates two iterations of this procedure. Starting from the point {x_t(0), p_t(0)} at the t-th (discrete) iteration, HMC leverages the Hamiltonian dynamics governed by Hamilton's equations (1) to propose the next sample {x_t(τ_t), p_t(τ_t)} at system time τ_t. The position at iteration t + 1 is updated as x_{t+1}(0) = x_t(τ_t) (dynamic evolution). A new momentum p_{t+1}(0) is resampled independently from a Gaussian distribution (assuming K(p) = p²), establishing the next initial point {x_{t+1}(0), p_{t+1}(0)} for iteration t + 1 (momentum resampling). The latter point corresponds to the initial point of a new trajectory because the Hamiltonian H(·) is commensurately updated; this means that distinct trajectories correspond to distinct values of H(·). Typically, numerical integrators such as the leap-frog method [2] are employed to numerically approximate the Hamiltonian dynamics. In practice, a random number of discrete numerical integration (leap-frog) steps, uniformly drawn from a fixed range, is often used (corresponding to a random time τ_t along the trajectory), which has been shown to have better convergence properties than a single leap-frog step [16].
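The procedure just described can be sketched in a few lines for a standard Gaussian target U(x) = x²/2, with K(p) = p² as above (so the momentum is Gaussian with variance 1/2, and dx/dτ = 2p). This is an illustrative sketch of ours, not code from the paper; the step size and step-count range are arbitrary choices:

```python
import math, random

# Minimal HMC with K(p) = p^2 and leap-frog integration for U(x) = x^2 / 2,
# i.e. f(x) proportional to exp(-U(x)) (a standard Gaussian). Illustrative only.

def grad_U(x):
    return x

def leapfrog(x, p, eps, n_steps):
    p -= 0.5 * eps * grad_U(x)              # half step for momentum
    for _ in range(n_steps - 1):
        x += eps * 2.0 * p                  # dx/dtau = dK/dp = 2p for K(p) = p^2
        p -= eps * grad_U(x)
    x += eps * 2.0 * p
    p -= 0.5 * eps * grad_U(x)              # final half step
    return x, p

def hmc(n_samples, eps=0.1, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.gauss(0.0, math.sqrt(0.5))  # momentum ~ exp(-p^2)
        n_steps = rng.randint(5, 15)        # random number of leap-frog steps
        x_new, p_new = leapfrog(x, p, eps, n_steps)
        dH = (x_new**2 / 2 + p_new**2) - (x**2 / 2 + p**2)
        if rng.random() < math.exp(min(0.0, -dH)):  # MH correction of the error
            x = x_new
        samples.append(x)
    return samples

s = hmc(20000)
mean = sum(s) / len(s)                      # close to the target mean 0
var = sum(v * v for v in s) / len(s) - mean * mean   # close to the target variance 1
```

Because leap-frog is volume-preserving and reversible for this separable Hamiltonian, the Metropolis-Hastings acceptance step makes the chain exact despite the discretization.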
The discretization error introduced by the numerical integration is corrected by a Metropolis-Hastings (MH) step. Slice sampling is conceptually simpler than HMC. It augments the target unnormalized density f(x) with a random variable y, with joint distribution expressed as p(x, y) = Z₁⁻¹, s.t. 0 < y < f(x), where Z₁ = ∫ f(x)dx is the normalization constant, and the marginal distribution of x exactly recovers the target normalized distribution f(x)/Z₁. To sample from the target density, slice sampling iteratively performs a conditional sampling step from p(x|y) and samples a slice from p(y|x). At iteration t, starting from x_t, a slice y_t is uniformly drawn from (0, f(x_t)). Then, the next sample x_{t+1}, at iteration t + 1, is uniformly drawn from the slice interval {x : f(x) > y_t}. HMC and slice sampling both augment the target distribution with auxiliary variables and can propose long-range moves with high acceptance probability.

3.2 Formulating HMC as a Slice Sampler

Consider the dynamic evolving step in HMC, i.e., {x_t(0), p_t(0)} ↦ {x_t(τ), p_t(τ)} in Figure 1. From Section 2, the Hamiltonian dynamics in {x, p} space with initial point {x(0), p(0)} can be performed by mapping to {x′, p′} space and updating {x(τ), p(τ)} via selecting a τ and solving (5). As we show in the Appendix, from (5) and in univariate cases* the Hamiltonian dynamics has period ∫_X [H(·) − U(z)]^{−1/2} dz and is symmetric about p = 0 (due to the symmetric form of the kinetic function). Also from (5), the system time τ is sampled uniformly from a half-period of the Hamiltonian dynamics, i.e., τ ∼ Uniform(−x′, −x′ + (1/2) ∫_X [H(·) − U(z)]^{−1/2} dz). Intuitively, x′ is the "anchor" of the initial point {x(0), p(0)} w.r.t. the start of the first half-period, i.e., when ∫_X [H(·) − U(z)]^{−1/2} dz = 0. Further, we need only consider half a period because, for a symmetric kinetic function K(p) = p², the Hamiltonian dynamics for the two half-periods are mirrored [14].
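The standard slice sampler of Section 3.1 can be sketched concretely for a target whose slice interval is available in closed form. The example below is our own illustration, not from the paper: it uses the unnormalized Gaussian density f(x) = exp(−x²/2), for which {x : f(x) > y} is simply the interval (−√(−2 log y), √(−2 log y)).

```python
import numpy as np

def slice_sample_gaussian(n_samples, x0=0.0, seed=0):
    """Standard slice sampler for the unnormalized density f(x) = exp(-x^2/2).

    The slice interval {x : f(x) > y} is available in closed form here:
    exp(-x^2/2) > y  <=>  |x| < sqrt(-2*log(y)).
    """
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_samples)
    for t in range(n_samples):
        f_x = np.exp(-x**2 / 2)
        y = rng.uniform(0.0, f_x)          # sampling a slice: y | x ~ U(0, f(x))
        half = np.sqrt(-2.0 * np.log(y))   # slice interval is (-half, half)
        x = rng.uniform(-half, half)       # conditional sampling: x | y uniform on slice
        out[t] = x
    return out

xs = slice_sample_gaussian(50_000)
print(xs.mean(), xs.var())  # should approach 0 and 1
```

When the slice interval cannot be written down analytically, the "stepping-out"/"doubling" and "shrinking" schemes of [4] replace the closed-form interval.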
For the same reason, Figure 1 only shows half of the {x, p} space, where p ≥ 0. Given the sampled τ and the constants {x′, p′}, equation (5) can be solved for x* ≜ x(τ), i.e., the value of x at time τ. Interestingly, the integral in (5) can be interpreted (up to a normalization constant) as a cumulative distribution function (CDF) of x(τ). By the inverse-CDF sampling method, uniformly sampling τ from half of a period and solving (5) for x* is equivalent to directly sampling x* from the density

p(x*|H(·)) ∝ [H(·) − U(x*)]^{−1/2} , s.t. H(·) − U(x*) ≥ 0 .  (6)

We note that this transformation does not make the analytic solution of x(τ) generally tractable. However, it provides the basic setup to reveal the connection between the slice sampler and HMC. In the momentum resampling step of HMC, i.e., {x_t(τ), p_t(τ)} ↦ {x_{t+1}(0), p_{t+1}(0)} in Figure 1, and using the previously described kinetic function K(p) = p², resampling corresponds to drawing p from a Gaussian distribution [2]. The algorithm to analytically sample from HMC (analytic HMC) proceeds as follows: at iteration t, momentum p_t is drawn from a Gaussian distribution. The previously sampled value of x_{t−1} and the newly sampled p_t yield a Hamiltonian H_t(·). Then, the next sample x_t is drawn from (6). This procedure relates HMC to the slice sampler. To see the connection clearly, we denote y_t = e^{−H_t(·)}. Instead of directly sampling {p, x} as just described, we sample {y, x} instead. By substituting H_t(·) with y_t in (6), the conditional updates for this new sampling procedure can be rewritten as below, yielding the HMC slice sampler (HMC-SS), with conditional distributions defined as

Sampling a slice: p(y_t|x_t) = (1/[Γ(a) f(x_t)]) [log f(x_t) − log y_t]^{a−1} , s.t. 0 < y_t < f(x_t) ,  (7)

Conditional sampling: p(x_{t+1}|y_t) = (1/Z₂(y_t)) [log f(x_{t+1}) − log y_t]^{a−1} , s.t. f(x_{t+1}) > y_t ,  (8)

where a = 1/2 (other values of a are considered below), f(x) = e^{−U(x)} is an unnormalized density, and Z₁ ≜ ∫ f(x)dx and Z₂(y) ≜ ∫_{f(x)>y} [log f(x) − log y]^{a−1} dx are the normalization constants. Comparing these two procedures, analytic HMC and HMC-SS, we see that resampling the momentum in analytic HMC corresponds to sampling a slice in HMC-SS. Further, the dynamic evolving in HMC corresponds to the conditional sampling in HMC-SS. We have thus shown that HMC can be equivalently formulated as a slice sampling procedure via (7) and (8).

3.3 Reformulating the Standard Slice Sampler from HMC-SS

In standard slice sampling (described in Section 3.1), both the conditional sampling and the slice sampling steps draw from uniform distributions, whereas the corresponding conditionals for HMC-SS in (7) and (8) are non-uniform. Interestingly, if we change a in (7) and (8) from a = 1/2 to a = 1, we obtain the desired uniform distributions of standard slice sampling. This key observation leads us to consider a generalized form of the kinetic function for HMC, described below.

* For multidimensional cases, the Hamiltonian dynamics are semi-periodic, yet a similar conclusion still holds. Details are discussed in the Appendix.

Consider the generalized family of kinetic functions K(p) = |p|^{1/a} with a > 0. One may rederive equations (3)-(8) using this generalized kinetic energy. As shown in the Appendix, these equations remain unchanged, except that each isolated exponent 2 in these equations is replaced by 1/a, and each exponent −1/2 by a − 1. Sampling p (for the momentum resampling step) with the generalized kinetics corresponds to drawing p from π(p; m, a) = exp(−|p|^{1/a}/m) / [2 m^a Γ(a + 1)], with m = 1. All the formulation in the paper still holds for arbitrary m; see the Appendix for details. We denote this distribution the monomial Gamma (MG) distribution, MG(a, m), where m is the mass parameter and a is the monomial parameter.
Note that this is equivalent to the exponential power distribution with zero mean, described in [17]. We summarize some properties of the MG distribution in the Appendix. To generate random samples from the MG distribution, one can draw G ∼ Gamma(a, m) and a uniform sign variable S ∼ Uniform{−1, 1}; then S · G^a follows the MG(a, m) distribution. We call the HMC sampler based on the generalized kinetic function K(p; a, m) the Monomial Gamma Hamiltonian Monte Carlo (MG-HMC). The algorithm to analytically sample from the MG-HMC is shown in Algorithm 1. The only difference between this procedure and the one previously described is the momentum resampling step: in analytic HMC, p is drawn from a Gaussian instead of from MG(a, m). However, note that the Gaussian distribution is a special case of MG(a, m), obtained when a = 1/2.

Algorithm 1: MG-HMC with HJE
for t = 1 to T do
  Resample momentum: p_t ∼ MG(a, m).
  Compute Hamiltonian: H_t = U(x_{t−1}) + K(p_t).
  Find X ≜ {x ∈ ℝ : U(x) ≤ H_t(·)}.
  Dynamic evolving: sample x_t | H_t(·) ∝ [H_t(·) − U(x_t)]^{a−1}, x_t ∈ X.

Algorithm 2: MG-SS
for t = 1 to T do
  Sampling a slice: sample y_t from (7).
  Conditional sampling: sample x_t from (8).

Interestingly, when a = 1, the Monomial Gamma Slice sampler (MG-SS) in Algorithm 2 recovers exactly the same update formulas as standard slice sampling, described in Section 3.1, where the conditional distributions in (7) and (8) are both uniform. When a ≠ 1, we have to iteratively alternate between sampling from the non-uniform distributions (7) and (8), for both the auxiliary (slicing) variable y and the target variable x. Using the same argument as in the convergence analysis of standard slice sampling [4], the iterative sampling procedure in (7) and (8) converges to an invariant joint distribution (detailed in the Appendix). Further, the marginal distribution of x recovers the target distribution f(x)/Z₁, while the marginal distribution of y is given by p(y) = Z₂(y)/[Γ(a)Z₁].
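The sampling recipe just described (draw G ∼ Gamma(a, m) and a uniform sign S; return S · G^a) is a one-liner. The sketch below also checks the samples against the analytic mean absolute value E|p| = m^a Γ(2a)/Γ(a), which follows from the moments of the Gamma distribution; the check itself is our own addition for verification.

```python
import numpy as np
from math import gamma

def sample_mg(a, m, size, rng):
    """Draw from MG(a, m), i.e. density proportional to exp(-|p|^(1/a) / m)."""
    g = rng.gamma(shape=a, scale=m, size=size)   # G ~ Gamma(a, m)
    s = rng.choice([-1.0, 1.0], size=size)       # uniform random sign S
    return s * g**a                              # S * G^a ~ MG(a, m)

rng = np.random.default_rng(0)
for a in [0.5, 1.0, 2.0]:
    p = sample_mg(a, 1.0, 200_000, rng)
    print(a, abs(p).mean(), gamma(2 * a) / gamma(a))  # empirical vs analytic E|p|
```

As noted above, a = 1/2 recovers a Gaussian (density ∝ exp(−p²/m)); a = 1 gives a Laplace-type distribution.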
The MG-SS can be divided into three broad regimes: 0 < a < 1, a = 1 and a > 1 (illustrated in the Appendix). When 0 < a < 1, the conditional distribution p(y_t|x_t) is skewed towards the current unnormalized density value f(x_t). The conditional draw of p(x_{t+1}|y_t) encourages taking samples with smaller density values (inefficient moves) within the domain of the slice interval X. On the other hand, when a > 1, draws of y_t tend to take smaller values, while draws of x_{t+1} favor points with large density values (efficient moves). The case a = 1 corresponds to the conventional slice sampler. Intuitively, setting a to be small makes the auxiliary variable y_t stay close to f(x_t), so that f(x_{t+1}) is close to f(x_t). As a result, a larger a seems more desirable. This intuition is justified in the following sections.

4 Theoretical analysis

We analyze theoretical properties of the MG sampler. All the proofs, as well as the ergodicity properties of analytic MG-SS, are given in the Appendix.

One-step autocorrelation of analytic MG-SS We present results for the univariate case p(x) ∝ e^{−U(x)}. We first investigate the impact of the monomial parameter a on the one-step autocorrelation function (ACF), ρ_x(1) ≜ ρ(x_t, x_{t+1}) = [E x_t x_{t+1} − (E x)²]/Var(x), as a → ∞. Theorem 1 characterizes the limiting behavior of ρ(x_t, x_{t+1}).

Theorem 1 For a univariate target distribution, i.e., exp[−U(x)] has finite integral over ℝ, under certain regularity conditions, the one-step autocorrelation of the MG-SS parameterized by a asymptotically approaches zero as a → ∞, i.e., lim_{a→∞} ρ_x(1) = 0.

In the Appendix we also show that lim_{a→∞} ρ(y_t, y_{t+1}) = 0. In addition, we show that ρ(y_t, y_{t+h}) is a non-negative decreasing function of the time lag in discrete steps h.
Effective sample size The variance of a Monte Carlo estimator is determined by its Effective Sample Size (ESS) [18], defined as ESS = N/(1 + 2 Σ_{h=1}^{∞} ρ_x(h)), where N is the total number of samples and ρ_x(h) is the h-step autocorrelation function, which can be calculated recursively. We prove in the Appendix that ρ_x(h) is non-negative. Further, assuming the MG sampler is uniformly ergodic and ρ_x(h) is monotonically decreasing, it can be shown that lim_{a→∞} ESS = N. When the ESS approaches the full sample size N, the resulting sampler delivers excellent mixing efficiency [5]. Details and further discussion are provided in the Appendix.

Case study To examine a specific 1D example, we consider sampling from the exponential distribution Exp(θ), with energy function U(x) = x/θ, where x ≥ 0. This case has analytic ρ_x(h) and ESS. After some algebra (details in the Appendix),

ρ_x(1) = 1/(a + 1) ,  ρ_x(h) = 1/(a + 1)^h ,  ESS = Na/(a + 2) ,  x̂_h(x₀) ≜ E(x_h|x₀) = θ + (x₀ − θ)/(a + 1)^h .

These results are in agreement with Theorem 1 and the related arguments about ESS and the monotonicity of the autocorrelation w.r.t. a. Here x̂_h(x₀) denotes the expectation of the h-lag sample, starting from any x₀. The relative difference [x̂_h(x₀) − θ]/(x₀ − θ) decays exponentially in h, with a factor of 1/(a + 1). In fact, ρ_x(1) for the exponential family class of models introduced in [19], with potential energy U(x) = x^ω/θ, where x ≥ 0 and ω, θ > 0, can be calculated analytically. The result, provided in the Appendix, indicates that for this family, ρ_x(1) decays at a rate of O(a^{−1}).

MG-HMC mixing performance In theory, the analytic MG-HMC (when the dynamics in (5) can be solved exactly) is expected to have the same theoretical properties as the analytic MG-SS in unimodal cases, since they are derived from the same setup.
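For the Exp(θ) case study, both MG-SS conditionals can be sampled in closed form: the slice step (7) amounts to drawing w = log f(x_t) − log y_t ∼ Gamma(a, 1), and the conditional step (8) reduces by inverse-CDF to x = L(1 − v^{1/a}) with v ∼ Uniform(0, 1), where L = −θ log y_t = x_t + θw is the slice endpoint. The sketch below is our own derivation from (7)-(8); note a = 1 recovers the standard slice sampler.

```python
import numpy as np

def mg_ss_exponential(a, theta, n_samples, seed=0):
    """Analytic MG-SS for the exponential target f(x) = exp(-x/theta), x >= 0.

    Slice step (7):   log f(x) - log y ~ Gamma(a, 1), so the slice endpoint
                      is L = -theta*log(y) = x + theta*w.
    Cond. step (8):   p(x | y) is proportional to (L - x)^(a-1) on [0, L],
                      sampled by inverse-CDF: x = L * (1 - v^(1/a)), v ~ U(0,1).
    """
    rng = np.random.default_rng(seed)
    x = theta  # arbitrary starting point
    out = np.empty(n_samples)
    for t in range(n_samples):
        w = rng.gamma(shape=a)          # sampling a slice
        L = x + theta * w               # slice interval is [0, L]
        v = rng.uniform()
        x = L * (1.0 - v**(1.0 / a))    # conditional sampling
        out[t] = x
    return out

theta = 2.0
for a in [0.5, 1.0, 2.0]:
    xs = mg_ss_exponential(a, theta, 100_000)
    rho1 = np.corrcoef(xs[:-1], xs[1:])[0, 1]
    print(a, xs.mean(), rho1, 1.0 / (a + 1.0))  # rho_x(1) should approach 1/(a+1)
```

Up to Monte Carlo error, the empirical lag-1 autocorrelation tracks the analytic value ρ_x(1) = 1/(a + 1) given above.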
However, the mixing performance of the two methods can differ significantly when sampling from a multimodal distribution, because the Hamiltonian dynamics may get "trapped" in a single closed trajectory (one of the modes) with low energy, whereas the analytic MG-SS does not suffer from this problem, as it is able to sample from disjoint slice intervals (one per mode). This is a well-known property of slice sampling [4] that arises from (7) and (8). However, if a is large enough, as we show in the Appendix, the probability of reaching a low-energy level associated with more than one Hamiltonian trajectory, which restricts movement between modes, is arbitrarily small. As a result, the analytic MG-HMC with a large value of a is able to approach the stationary mixing performance of MG-SS.

5 MG sampling in practice

MG-HMC with numerical integrator In practice, MG-SS (performing Algorithm 2) requires: 1) analytically solving for the slice interval X, which is typically infeasible in multivariate cases [4]; or 2) analytically computing the integral Z₂(y) over X, implied by the non-uniform conditionals of MG-SS. These are usually computationally infeasible, though adaptive estimation of X could be done using schemes like the "doubling" and "shrinking" strategies from the slice sampling literature [4]. It is more convenient to perform approximate MG-HMC using a numerical integrator, as in traditional HMC: in each iteration, the momentum p is first initialized by sampling from MG(a, m), then second-order Störmer-Verlet integration [2] is performed for the Hamiltonian dynamics updates:

p_{t+1/2} = p_t − (ε/2) ∇U(x_t) ,  x_{t+1} = x_t + ε ∇K(p_{t+1/2}) ,  p_{t+1} = p_{t+1/2} − (ε/2) ∇U(x_{t+1}) ,  (9)

where ∇K(p) = sign(p) · (1/(ma)) |p|^{1/a−1}. When a = 1, [∇K(p)]_d = ±1/m for any dimension d, independent of the magnitudes of x and p. To avoid moving on a grid when a = 1, we employ a random step-size ε drawn uniformly from a non-negative range (r₁, r₂), as suggested in [2].
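The updates in (9) can be sketched directly. The code below is an illustrative implementation (the target, step size and trajectory length are our own choices): MG-HMC with the Störmer-Verlet integrator, MG(a, m) momentum resampling, and the MH correction, demonstrated with a = 1/2 (i.e., standard HMC with Gaussian momentum) on a standard normal target.

```python
import numpy as np

def grad_K(p, a, m=1.0):
    """Gradient of K(p) = |p|^(1/a) / m, as in eq. (9)."""
    return np.sign(p) * np.abs(p)**(1.0 / a - 1.0) / (m * a)

def mg_hmc(grad_U, U, a, x0, n_samples, eps, n_leap, seed=0, m=1.0):
    """MG-HMC with a Stormer-Verlet (leap-frog) integrator and MH correction.

    Momentum is resampled from MG(a, m) as S * G^a with G ~ Gamma(a, m).
    a = 1/2 recovers standard HMC with Gaussian momentum.
    """
    rng = np.random.default_rng(seed)
    x = x0
    out = np.empty(n_samples)
    for t in range(n_samples):
        p = rng.choice([-1.0, 1.0]) * rng.gamma(a, m)**a    # momentum resampling
        x_new, p_new = x, p
        for _ in range(n_leap):                             # leap-frog updates, eq. (9)
            p_new = p_new - 0.5 * eps * grad_U(x_new)
            x_new = x_new + eps * grad_K(p_new, a, m)
            p_new = p_new - 0.5 * eps * grad_U(x_new)
        H_old = U(x) + np.abs(p)**(1.0 / a) / m
        H_new = U(x_new) + np.abs(p_new)**(1.0 / a) / m
        if rng.uniform() < np.exp(min(0.0, H_old - H_new)): # MH correction
            x = x_new
        out[t] = x
    return out

# Standard normal target: U(x) = x^2/2. With a = 1/2, K(p) = p^2 (Gaussian momentum).
xs = mg_hmc(grad_U=lambda x: x, U=lambda x: 0.5 * x**2,
            a=0.5, x0=0.0, n_samples=20_000, eps=0.1, n_leap=20)
print(xs.mean(), xs.var())  # should approach 0 and 1
```

For a > 1/2, the non-differentiable "turnovers" at p = 0 make the reflection fix discussed in the paper relevant; it is omitted here for brevity.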
No free lunch With a numerical integrator for MG-HMC, however, the argument for choosing a large a (of great theoretical advantage, as discussed in the previous section) faces practical issues. First, a large value of a leads to a less accurate numerical integrator: as a gets larger, the trajectory of the total Hamiltonian becomes "stiffer", i.e., its maximum curvature becomes larger. When a > 1/2, the Hamiltonian trajectory in the phase space (x, p) has at least 2D non-differentiable points ("turnovers"), where D denotes the total dimension, at each intersection with the hyperplane p^(d) = 0, d ∈ {1, . . . , D}. As a result, directly applying Störmer-Verlet integration leads to high integration error as D becomes large. Second, if the sampler is initialized in the tail region of a light-tailed target distribution, MG-HMC with a > 1 may converge arbitrarily slowly to the true target distribution, i.e., the burn-in period could take an arbitrarily long time. For example, with a > 1, ∇U(x₀) can be very large when x₀ is in the light-tailed region, leading the update x₀ + ∇K(p₀ + ∇U(x₀)) to be arbitrarily close to x₀, i.e., the sampler does not move. To ameliorate these issues, we provide mitigating strategies. For the first (numerical) issue, we propose two possibilities: 1) as an analog of the "reflection" action of [2], in (9), whenever the d-th dimension(s) of the momentum changes sign, we "recoil" the point in these dimension(s) to the previous iteration and negate the momentum of these dimension(s), i.e., x^(d)_{t+1} = x^(d)_t, p^(d)_{t+1} = −p^(d)_t; 2) substituting the kinetic function K(p) with a "softened" kinetic function, and using importance sampling to sample the momentum. The details and a comparison between the "reflection" action and "softened" kinetics are discussed in the Appendix. For the second (convergence) issue, we suggest using a step-size decay scheme, e.g., ε = max(ε₁ρ^t, ε₀).
In our experiments we use (ε₁, ρ) = (10⁶, 0.9), where ε₀ is problem-specific. This approach empirically alleviates the slow convergence problem; however, we note that a more principled way would be to adaptively select a during sampling, which is left for future investigation. As a compromise between theoretical gains and practical issues, we suggest setting a = 1 (the HMC implementation of a slice sampler) when the dimension is relatively large, because in our experiments, when a > 1, numerical errors and convergence issues tend to overwhelm the theoretical mixing performance gains described in Section 4.

Figure 2: Theoretical and empirical ρ_x(1) and ESS for the exponential distribution (panels a,b), N⁺ (panels c,d) and Gamma (panel e); each panel plots ρ_x(1) or ESS against the monomial parameter a for MG-SS and MG-HMC, together with the theoretical values.

6 Experiments

6.1 Simulation studies

1D unimodal problems We first evaluate the performance of the MG sampler on several univariate distributions: 1) exponential distribution, U(x) = θx, x ≥ 0; 2) truncated Gaussian, U(x) = θx², x ≥ 0; 3) Gamma distribution, U(x) = −(r − 1) log x + θx. Note that the performance of the sampler does not depend on the scale parameter θ > 0. We compare the empirical ρ_x(1) and ESS of the analytic MG-SS and MG-HMC with their theoretical values. In the Gamma case, analytic derivation of the autocorrelations and ESS is difficult, so we resort to a numerical approach to compute ρ_x(1) and ESS. Details are provided in the Appendix. Each method is run for 30,000 iterations with 10,000 burn-in samples.
The number of leap-frog steps is uniformly drawn from (100 − l, 100 + l) with l = 20, as suggested by [16]. We also compared MG-HMC (a = 1) with standard slice sampling using the doubling and shrinking scheme [4]; as expected, the resulting ESS (not shown) for these two methods is almost identical. The experimental settings and results are provided in the Appendix. The acceptance rates decrease from around 0.98 to around 0.77 in each case as a grows from 0.5 to 4. As shown in Figure 2(a)-(d), the results for analytic MG-SS match the theoretical results well; however, MG-HMC seems to suffer from practical difficulties when a is large, evidenced by results gradually deviating from the theoretical values. This issue is more evident in the Gamma case (see Figure 2(e)), where ρ_x(1) first decreases and then increases; meanwhile, the acceptance rates decrease from 0.9 to 0.5.

1D and 2D bimodal problems We further conduct simulation studies to evaluate the efficiency of MG-HMC when sampling 1D and 2D multimodal distributions. In the univariate case, the potential energy is given by U(x) = x⁴ − 2x², whereas U(x) = −0.2(x₁ + x₂)² + 0.01(x₁ + x₂)⁴ − 0.4(x₁ − x₂)² in the bivariate case. We show in the Appendix that if the energy function is symmetric about x = C, where C is a constant, then in theory the analytic MG-SS has ESS equal to the total sample size. However, as shown in Section 4, the analytic MG-HMC is expected to have an ESS less than its corresponding analytic MG-SS, and the gap between the analytic MG-HMC and its analytic MG-SS counterpart should decrease with a. As a result, despite numerical difficulties, we expect the MG-HMC based on numerical integration to have better mixing performance with large a. To verify our theory, we run MG-HMC with a = {0.5, 1, 2} for 30,000 iterations with 10,000 burn-in samples. The parameter settings and the acceptance rates are detailed in the Appendix.
Empirically, we find that the efficiency of HMC is significantly improved with a large a, as shown in Table 1, which coincides with the theory in Section 4. From Figure 3, we observe that the MG-HMC sampler with monomial parameter a = {1, 2} is better at jumping between modes of the target distribution than standard HMC, which confirms the theory in Section 4. We also compared MG-HMC (a = 1) with standard SS [4]. As expected, in the 1D case the standard SS yields an ESS close to the full sample size, while in the 2D case the resulting ESS is lower than that of MG-HMC (a = 1) (details are provided in the Appendix).

Figure 3: 10 MC samples by MG-HMC from a 2D distribution, for different values of a (a = 0.5, 1, 2), shown against the density contour.

Table 1: ESS and ρ_x(1) of MG-HMC for 1D and 2D bimodal distributions.

          1D ESS   1D ρ_x(1)   2D ESS   2D ρ_x(1)
a = 0.5   5175     0.60        4691     0.67
a = 1     10157    0.43        16349    0.60
a = 2     24298    0.11        18007    0.53

6.2 Real data

Bayesian logistic regression We evaluate our methods on 6 real-world datasets from the UCI repository [20]: German credit (G), Australian credit (A), Pima Indian (P), Heart (H), Ripley (R) and Caravan (C) [21]. Feature dimensions range from 7 to 87, and the number of data instances is between 250 and 5822. All datasets are normalized to have zero mean and unit variance. Gaussian priors N(0, 100I) are imposed on the regression coefficients. We draw 5000 iterations with 1000 burn-in samples for each experiment. The number of leap-frog steps is uniformly drawn from (100 − l, 100 + l) with l = 20. Other experimental settings (m and ε) are provided in the Appendix. Results in terms of minimum ESS are summarized in Table 2. Prediction accuracies estimated via cross-validation are almost identical across settings (reported in the Appendix).
It can be seen that MG-HMC with a = 1 outperforms (in terms of ESS) the other two settings, a = 0.5 and a = 2, indicating that increased numerical difficulties counteract the theoretical gains when a becomes large. This can also be seen by noting that the acceptance rates drop from around 0.9 to around 0.7 as a increases from 0.5 to 2. The dimensionality also seems to have an impact on the optimal setting of a: on the high-dimensional dataset Caravan, the improvement of MG-HMC with a = 1 is less significant than on the other datasets, and a = 2 seems to suffer more from numerical difficulties. Comparisons between MG-HMC (a = 1) and standard slice sampling are provided in the Appendix. In general, standard slice sampling with adaptive search underperforms relative to MG-HMC (a = 1).

Table 2: Minimum ESS for each method (dimensionality indicated in parentheses). Left: BLR; right: ICA.

Dataset (dim)   A (15)   G (25)   H (14)   P (8)   R (7)   C (87)             ICA (25)
a = 0.5         3124     3447     3524     3434    3317    33 (median 3987)   2677
a = 1           4308     4353     4591     4664    4226    36 (median 4531)   3029
a = 2           1490     3646     4315     4424    1490    7 (median 740)     1534

ICA We finally evaluate our methods on the MEG [22] dataset for Independent Component Analysis (ICA), with 17,730 time points and 25 feature dimensions. All experiments are based on 5000 MCMC samples. The acceptance rates for a = (0.5, 1, 2) are (0.98, 0.97, 0.77). The running time is almost identical for different a. Settings (including m and ε) are provided in the Appendix. As shown in Table 2, when a = 1, MG-HMC has better mixing performance than the other settings.

7 Conclusion

We demonstrated the connection between HMC and slice sampling, introducing a new method for implementing a slice sampler via an augmented form of HMC. With few modifications to standard HMC, our MG-HMC can be seen as a drop-in replacement in any scenario where HMC and its variants apply, for example Hamiltonian Variational Inference (HVI) [23].
We showed the theoretical advantages of our method over standard HMC, as well as the numerical difficulties associated with it. Several future extensions can be explored to mitigate numerical issues, e.g., performing MG-HMC on a Riemann manifold [5] so that step-sizes can be chosen adaptively, and using a high-order symplectic numerical method [24, 25] to reduce the discretization error introduced by the integrator.

References

[1] Christian Robert and George Casella. Monte Carlo Statistical Methods. Springer Science & Business Media, 2004.
[2] Radford M Neal. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2, 2011.
[3] Simon Duane, Anthony D Kennedy, Brian J Pendleton, and Duncan Roweth. Hybrid Monte Carlo. Physics Letters B, 195(2), 1987.
[4] Radford M Neal. Slice sampling. Annals of Statistics, 2003.
[5] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 73(2), 2011.
[6] Wei-Lun Chao, Justin Solomon, Dominik Michels, and Fei Sha. Exponential integration for Hamiltonian Monte Carlo. In ICML, 2015.
[7] Matthew D Hoffman and Andrew Gelman. The No-U-Turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research, 15(1), 2014.
[8] Ziyu Wang, Shakir Mohamed, and Nando de Freitas. Adaptive Hamiltonian and Riemann manifold Monte Carlo. In ICML, 2013.
[9] Ari Pakman and Liam Paninski. Auxiliary-variable exact Hamiltonian Monte Carlo samplers for binary distributions. In NIPS, 2013.
[10] Yichuan Zhang, Zoubin Ghahramani, Amos J Storkey, and Charles A Sutton. Continuous relaxations for discrete Hamiltonian Monte Carlo. In NIPS, 2012.
[11] Iain Murray, Ryan Prescott Adams, and David JC MacKay. Elliptical slice sampling. ArXiv, 2009.
[12] Vladimir Igorevich Arnol'd. Mathematical Methods of Classical Mechanics, volume 60. Springer Science & Business Media, 2013.
[13] Herbert Goldstein.
Classical Mechanics. Pearson Education India, 1965.
[14] John Robert Taylor. Classical Mechanics. University Science Books, 2005.
[15] LD Landau and EM Lifshitz. Mechanics, 1st edition. Pergamon Press, Oxford, 1976.
[16] Samuel Livingstone, Michael Betancourt, Simon Byrne, and Mark Girolami. On the geometric ergodicity of Hamiltonian Monte Carlo. ArXiv, January 2016.
[17] Saralees Nadarajah. A generalized normal distribution. Journal of Applied Statistics, 32(7), 2005.
[18] Steve Brooks, Andrew Gelman, Galin Jones, and Xiao-Li Meng. Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
[19] Gareth O Roberts and Richard L Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 1996.
[20] Kevin Bache and Moshe Lichman. UCI machine learning repository, 2013.
[21] Peter van der Putten and Maarten van Someren. COIL challenge 2000: The insurance company case. Sentient Machine Research, 9, 2000.
[22] Ricardo Vigário, Veikko Jousmäki, M Hämäläinen, R Haft, and Erkki Oja. Independent component analysis for identification of artifacts in magnetoencephalographic recordings. In NIPS, 1998.
[23] Tim Salimans, Diederik P Kingma, and Max Welling. Markov chain Monte Carlo and variational inference: Bridging the gap. ArXiv, 2014.
[24] Michael Striebel, Michael Günther, Francesco Knechtli, and Michèle Wandelt. Accuracy of symmetric partitioned Runge-Kutta methods for differential equations on Lie-groups. ArXiv, 2011.
[25] Chengxiang Jiang and Yuhao Cong. A sixth order diagonally implicit symmetric and symplectic Runge-Kutta method for solving Hamiltonian systems. Journal of Applied Analysis and Computation, 5(1), 2015.
[26] Ivar Ekeland and Jean-Michel Lasry. On the number of periodic trajectories for a Hamiltonian flow on a convex energy surface. Annals of Mathematics, 1980.
[27] Luke Tierney and Antonietta Mira. Some adaptive Monte Carlo methods for Bayesian inference. Statistics in Medicine, 18(17-18), 1999.
[28] Richard Isaac.
A general version of Doeblin's condition. The Annals of Mathematical Statistics, 1963.
[29] Eric Cancès, Frédéric Legoll, and Gabriel Stoltz. Theoretical and numerical comparison of some sampling methods for molecular dynamics. ESAIM: Mathematical Modelling and Numerical Analysis, 41(02), 2007.
[30] Alicia A Johnson. Geometric Ergodicity of Gibbs Samplers. PhD thesis, University of Minnesota, 2009.
[31] Gareth O Roberts and Jeffrey S Rosenthal. Markov-chain Monte Carlo: Some practical implications of theoretical results. Canadian Journal of Statistics, 26(1), 1998.
[32] Jeffrey S Rosenthal. Minorization conditions and convergence rates for Markov chain Monte Carlo. Journal of the American Statistical Association, 90(430), 1995.
[33] Michael Betancourt, Simon Byrne, and Mark Girolami. Optimizing the integrator step size for Hamiltonian Monte Carlo. ArXiv, 2014.
[34] Aapo Hyvärinen and Erkki Oja. Independent component analysis: algorithms and applications. Neural Networks, 13(4), 2000.
[35] Anoop Korattikara, Yutian Chen, and Max Welling. Austerity in MCMC land: Cutting the Metropolis-Hastings budget. ArXiv, 2013.
Multi-step learning and underlying structure in statistical models

Maia Fraser
Dept. of Mathematics and Statistics, Brain and Mind Research Institute
University of Ottawa, Ottawa, ON K1N 6N5, Canada
mfrase8@uottawa.ca

Abstract

In multi-step learning, where a final learning task is accomplished via a sequence of intermediate learning tasks, the intuition is that successive steps or levels transform the initial data into representations more and more "suited" to the final learning task. A related principle arises in transfer learning, where Baxter (2000) proposed a theoretical framework to study how learning multiple tasks transforms the inductive bias of a learner. The most widespread multi-step learning approach is semi-supervised learning (SSL) with two steps: unsupervised, then supervised. Several authors (Castelli-Cover, 1996; Balcan-Blum, 2005; Niyogi, 2008; Ben-David et al., 2008; Urner et al., 2011) have analyzed SSL, with Balcan-Blum (2005) proposing a version of the PAC learning framework augmented by a "compatibility function" to link the concept class and the unlabeled data distribution. We propose to analyze SSL and other multi-step learning approaches, much in the spirit of Baxter's framework, by defining a learning problem generatively as a joint statistical model on X × Y. This determines in a natural way the class of conditional distributions that are possible with each marginal, and amounts to an abstract form of compatibility function. It also allows us to analyze both discrete and non-discrete settings. As a tool for our analysis, we define a notion of γ-uniform shattering for statistical models. We use this to give conditions on the marginal and conditional models which imply an advantage for multi-step learning approaches.
In particular, we recover a more general version of a result of Poggio et al. (2012): under mild hypotheses, a multi-step approach which learns features invariant under successive factors of a finite group of invariances has sample complexity requirements that are additive rather than multiplicative in the size of the subgroups.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction

The classical PAC learning framework of Valiant (1984) considers a learning problem with unknown true distribution p on X × Y, Y = {0, 1}, and a fixed concept class C consisting of (deterministic) functions f : X → Y. The aim of learning is to select a hypothesis h : X → Y, say from C itself (realizable case), that best recovers f. More formally, the class C is said to be PAC learnable if there is a learning algorithm that with high probability selects h ∈ C having arbitrarily low generalization error for all possible distributions D on X. The distribution D governs both the sampling of points z = (x, y) ∈ X × Y, by which the algorithm obtains a training sample, and also the cumulation of error over all x ∈ X, which gives the generalization error. A modification of this model, together with the notion of learnability with a model of probability (resp. decision rule) (Haussler, 1989; Kearns and Schapire, 1994), allows us to treat non-deterministic functions f : X → Y and the case Y = [0, 1] analogously. Polynomial dependence of the algorithms on sample size and reciprocals
of probability bounds is further required in both frameworks for efficient learning. Not only do these frameworks consider worst-case error, in the sense of requiring the generalization error to be small for arbitrary distributions D on X, they assume the same concept class C regardless of the true underlying distribution D. In addition, the choice of hypothesis class is taken as part of the inductive bias of the algorithm and is not addressed. Various, by now classic, measures of the complexity of a hypothesis space (e.g., VC dimension or Rademacher complexity; see Mohri et al. (2012) for an overview) allow one to prove upper bounds on generalization error in the above setting, and distribution-specific variants of these, such as annealed VC-entropy (see Devroye et al. (1996)) or Rademacher averages (beginning with Koltchinskii (2001)), can be used to obtain more refined upper bounds. The widespread strategy of semi-supervised learning (SSL) is known not to fit well into PAC-style frameworks (Valiant, 1984; Haussler, 1989; Kearns and Schapire, 1994). SSL algorithms perform a first step using unlabeled training data drawn from a distribution on X, followed by a second step using labeled training data from a joint distribution on X × Y. This has been studied by several authors (Balcan and Blum, 2005; Ben-David et al., 2008; Urner et al., 2011; Niyogi, 2013) following the seminal work of Castelli and Cover (1996) comparing the value of unlabeled and labeled data. One immediate observation is that without some tie between the possible marginals D on X and the concept class C, which records the possible conditionals p(y|x), there is no benefit to unlabeled data: if D can be arbitrary then it conveys no information about the true joint distribution that generated the labeled data. Within PAC-style frameworks, however, C and D are completely independent. Balcan and Blum therefore proposed augmenting the PAC learning framework with a compatibility function χ : C × D → [0, 1], which records the amount of compatibility we believe each concept from C to have with each D ∈ D, the class of "all" distributions on X. This function is required to be learnable from D and is then used to reduce the concept class from C to a sub-class to be used for the subsequent (supervised) learning step. If χ is a good compatibility function, this sub-class should have lower complexity than C (Balcan and Blum, 2005).
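As a toy illustration of this reduction (entirely our own construction, not an example from Balcan and Blum), take X to be the real line, let C be threshold classifiers, and let a hypothetical margin-based compatibility function score a threshold by the fraction of the marginal's estimated mass lying outside a margin around it; thresholds that cut through dense regions then score poorly and are pruned before the supervised step.

```python
import numpy as np

rng = np.random.default_rng(0)
# Unlabeled sample from the marginal D: two well-separated clusters on the line
xs = np.concatenate([rng.normal(-2.0, 0.3, 500), rng.normal(2.0, 0.3, 500)])

# Concept class C: threshold classifiers h_t(x) = 1[x > t]
thresholds = np.linspace(-4.0, 4.0, 81)

def compatibility(t, xs, margin=0.5):
    """chi(h_t, D): fraction of D's (estimated) mass outside a margin around t."""
    return float(np.mean(np.abs(xs - t) > margin))

chi = np.array([compatibility(t, xs) for t in thresholds])
# First (unsupervised) step: keep only concepts highly compatible with D
reduced = thresholds[chi > 0.99]
print(len(thresholds), "->", len(reduced))  # the supervised step searches a smaller sub-class
```

In this construction a threshold between the two clusters (e.g. t = 0) is maximally compatible, while one passing through a cluster center is heavily penalized, so the sub-class handed to the supervised step is smaller than C.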
While PAC-style frameworks in essence allow the true joint distribution to be anything in C × D, the existence of a good compatibility function in the sense of Balcan and Blum (2005) implicitly assumes the joint model that we believe in is smaller. We return to this point in Section 2.1.

In this paper we study properties of multi-step learning strategies – those which involve multiple training steps – by considering the advantages of breaking a single learning problem into a sequence of two learning problems. We start by assuming a true distribution which comes from a class of joint distributions, i.e. a statistical model, P on X × Y. We prove that underlying structure of a certain kind in P, together with differential availability of labeled vs. unlabeled data, implies a quantifiable advantage to multi-step learning at finite sample size. The structure we need is the existence of a representation t(x) of x ∈ X which is a sufficient statistic for the classification or regression of interest. Two common settings where this assumption holds are manifold learning and group-invariant feature learning. In these settings we have respectively

1. t = t_{p_X} is determined by the marginal p_X and p_X is concentrated on a submanifold of X,
2. t = t_G is determined by a group action on X and p(y|x) is invariant¹ under this action.

Learning t in these cases corresponds respectively to learning manifold features or group-invariant features; various approaches exist (see (Niyogi, 2013; Poggio et al., 2012) for more discussion) and we do not assume any fixed method. Our framework is also not restricted to these two settings. As a tool for analysis we define a variant of VC dimension for statistical models which we use to prove a useful lower bound on generalization error even² under the assumption that the true distribution comes from P.
This allows us to establish a gap at finite sample size between the error achievable by a single-step purely supervised learner and that achievable by a semi-supervised learner. We do not claim an asymptotic gap. The purpose of our analysis is rather to show that differential finite availability of data can dictate a multi-step learning approach. Our applications are respectively a strengthening of a manifold learning example analyzed by Niyogi (2013) and a group-invariant features example related to a result of Poggio et al. (2012). We also discuss the relevance of these to biological learning.

Our framework has commonalities with a framework of Baxter (2000) for transfer learning. In that work, Baxter considered learning the inductive bias (i.e., the hypothesis space) for an algorithm for a "target" learning task, based on experience from previous "source" learning tasks. For this purpose he defined a learning environment E to be a class of probability distributions on X × Y together with an unknown probability distribution Q on E, and assumed E to restrict the possible joint distributions which may arise. We also make a generative assumption, assuming joint distributions come from P, but we do not use a prior Q. Within his framework Baxter studied the reduction in generalization error for an algorithm to learn a new task, defined by p ∈ E, when given access to a sample from p and a sample from each of m other learning tasks, p_1, . . . , p_m ∈ E, chosen randomly according to Q, compared with an algorithm having access to only a sample from p. The analysis produced upper bounds on generalization error in terms of covering numbers, and a lower bound was also obtained in terms of VC dimension in the specific case of shallow neural networks.

¹ This means there is a group G of transformations of X such that p(y|x) = p(y|g·x) for all g ∈ G.
² Distribution-specific lower bounds are by definition weaker than distribution-free ones.
In proving our lower bound in terms of a variant of VC dimension we use a minimax analysis.

2 Setup

We assume a learning problem is specified by a joint probability distribution p on Z = X × Y and a particular (regression, classification or decision) function f_p : X → R determined entirely by p(y|x). Moreover, we postulate a statistical model P on X × Y and assume p ∈ P. Despite the simplified notation, f_p(x) depends on the conditionals p(y|x) and not the entire joint distribution p.

There are three main types of learning problem our framework addresses (reflected in three types of f_p). When y is noise-free, i.e. p(y|x) is concentrated at a single y-value v_p(x) ∈ {0, 1}, f_p = v_p : X → {0, 1} (classification); here f_p(x) = E_p(y|x). When y is noisy, then either f_p : X → {0, 1} (classification/decision) or f_p : X → [0, 1] (regression) and f_p(x) = E_p(y|x). In all three cases the parameters which define f_p, the learning goal, depend only on p(y|x) (equivalently, on E_p(y|x)). We assume the learner knows the model P and the type of learning problem, i.e., the hypothesis class is the "concept class" C := {f_p : p ∈ P}. To be more precise, for the first type of f_p listed above, this is the concept class (Kearns and Vazirani, 1994); for the second type, it is a class of decision rules; and for the third type, it is a class of p-concepts (Kearns and Schapire, 1994). For specific choices of loss function, we seek worst-case bounds on learning rates, over all distributions p ∈ P. Our results for all three types of learning problem are stated in Theorem 3. To keep the presentation simple, we give a detailed proof for the first two types, i.e., assuming labels are binary. This shows how classic PAC-style arguments for discrete X can be adapted to our framework where X may be smooth. Extending these arguments to handle non-binary Y proceeds by the same modifications as for discrete X (c.f. Kearns and Schapire (1994)).
We remark that in the presence of noise, better bounds can be obtained (see Theorem 3 for details) if a more technical version of Definition 1 is used, but we leave this for a subsequent paper. We define the following probabilistic version of fat-shattering dimension:

Definition 1. Given P, a class of probability distributions on X × {0, 1}, let γ ∈ (0, 1), α ∈ (0, 1/2) and n ∈ N = {0, 1, 2, . . .}. Suppose there exist (disjoint) sets S_i ⊂ X, i ∈ {1, . . . , n} with S = ∪_i S_i, a reference probability measure q on X, and a sub-class P_n ⊂ P of cardinality |P_n| = 2^n with the following properties:

1. q(S_i) ≥ γ/n for every i ∈ {1, . . . , n};
2. q lower bounds the marginals of all p ∈ P_n on S, i.e. ∫_B dp_X ≥ ∫_B dq for any p-measurable subset B ⊂ S;
3. ∀ e ∈ {0, 1}^n, ∃ p ∈ P_n such that E_p(y|x) > 1/2 + α for x ∈ S_i when e_i = 1 and E_p(y|x) < 1/2 − α for x ∈ S_i when e_i = 0;

then we say P α-shatters S_1, . . . , S_n γ-uniformly using P_n. The γ-uniform α-shattering dimension of P is the largest n such that P α-shatters some collection of n subsets of X γ-uniformly.

This provides a measure of complexity of the class P of distributions in the sense that it indicates the variability of the expected y-values for x constrained to lie in the region S with measure at least γ under corresponding marginals. The reference measure q serves as a lower bound on the marginals and ensures that they "uniformly" assign probability at least γ to S. Richness (variability) of conditionals is thus traded off against uniformity of the corresponding marginal distributions.

Remark 2 (Uniformity of measure). The technical requirement of a reference distribution q is automatically satisfied if all marginals p_X for p ∈ P_n are uniform over S. For simplicity this is the situation considered in all our examples. The weaker condition (in terms of q) that we postulate in Definition 1 is however sufficient for our main result, Theorem 3. If f_p is binary and y is noise-free then P shatters S_1, . . .
, S_n γ-uniformly if and only if there is a sub-class P_n ⊂ P with the specified uniformity of measure, such that each f_p(·) = E_p(y|·), p ∈ P_n, is constant on each S_i and the induced set-functions shatter {S_1, . . . , S_n} in the usual (Vapnik–Chervonenkis) sense. In that case, α may be chosen arbitrarily in (0, 1/2) and we omit mention of it. If f_p takes values in [0, 1], or f_p is binary and y noisy, then γ-uniform shattering can be expressed in terms of fat-shattering (both at scale α).

We show that the γ-uniform α-shattering dimension of P can be used to lower bound the sample size required by even the most powerful learner of this class of problems. The proof is in the same spirit as purely combinatorial proofs of lower bounds using VC dimension. Essentially the added condition on P in terms of γ allows one to convert the risk calculation to a combinatorial problem. As a counterpoint to the lower bound result, we consider an alternative two-step learning strategy which makes use of underlying structure in X implied by the model P, and we obtain upper bounds for the corresponding risk.

2.1 Underlying structure

We assume a representation t : X → R^k of the data, such that p(y|x) can be expressed in terms of p(y|t(x)), say f_p(x) = g_θ(t(x)) for some parameter θ ∈ Θ. Such a t is generally known in Statistics as a sufficient dimension reduction for f_p, but here we make no assumption on the dimension k (compared with the dimension of X). This is in keeping with the paradigm of feature extraction for use in kernel machines, where the dimension of t(X) may even be higher than the original dimension of X. As in that setting, what will be important is rather that the intermediate representation t(x) reduce the complexity of the concept space. While t depends on p, we will assume it does so only via X.
For example, t could depend on p through the marginal p_X on X or a possible group action on X; it is a manifestation in the data X, possibly over time, of underlying structure in the true joint distribution p ∈ P. The representation t captures structure in X induced by p. On the other hand, the regression function itself depends only on the conditional p(y|t(x)).

In general, the natural factorization π : P → P_X, p ↦ p_X determines for each marginal q ∈ P_X a collection π⁻¹(q) of possible conditionals, namely those p(y|x) arising from joint p ∈ P that have marginal p_X = q. More generally, any sufficient statistic t induces a similar factorization (c.f. the Fisher–Neyman characterization) π_t : P → P_t, p ↦ p_t, where P_t is the marginal model with respect to t, and only conditionals p(y|t) are needed for learning. As before, given a known marginal q ∈ P_t, this implies a collection π_t⁻¹(q) of possible conditionals p(y|t) relevant to learning. Knowing q thus reduces the original problem, where p(y|x) or p(y|t) can come from any p ∈ P, to one where it comes from p in a reduced class π⁻¹(q) or π_t⁻¹(q) ⊊ P.

Note the similarity with the assumption of Balcan and Blum (2005) that a good compatibility function reduce the concept class. In our case the concept class C consists of f_p defined by p(y|t) in ∪_t P_{Y|t} with P_{Y|t} := {p(y|t) : p ∈ P}, and marginals come from P_t. The joint model P that we postulate, meanwhile, corresponds to a subset of C × P_t (pairs (f_p, q) where f_p uses p ∈ π_t⁻¹(q)). The indicator function χ for this subset is an abstract (binary) version of a compatibility function (recall the compatibility function of Balcan–Blum should be a [0, 1]-valued function on C × D, satisfying further practical conditions that our function typically would not). Thus, in a sense, our assumption of a joint model P and sufficient statistic t amounts to a general form of compatibility function that links C and D without making assumptions on how t might be learned.
This is enough to imply the original learning problem can be factored into first learning the structure t and then learning the parameter θ for f_p(x) = g_θ(t(x)) in a reduced hypothesis space. Our goal is to understand when and why one should do so.

2.2 Learning rates

We wish to quantify the benefits achieved by using such a factorization in terms of the bounds on the expected loss (i.e. risk) for a sample of size m ∈ N drawn i.i.d. from any p ∈ P. We assume the learner is provided with a sample z̄ = (z_1, z_2, . . . , z_m), with z_i = (x_i, y_i) ∈ X × Y = Z, drawn i.i.d. from the distribution p, and uses an algorithm A : Z^m → C = H to select A(z̄) to approximate f_p.

Let ℓ(A(z̄), f_p) denote a specific loss. It might be 0/1, absolute, squared, hinge or logistic loss. We define L(A(z̄), f_p) to be the global expectation or L²-norm of one of those pointwise losses ℓ:

L(A(z̄), f_p) := E_x ℓ(A(z̄)(x), f_p(x)) = ∫_X ℓ(A(z̄)(x), f_p(x)) dp_X(x)   (1)

or

L(A(z̄), f_p) := ||ℓ(A(z̄), f_p)||_{L²(p_X)} = √( ∫_X ℓ(A(z̄)(x), f_p(x))² dp_X ).   (2)

Then the worst-case expected loss (i.e. minimax risk) for the best learning algorithm with no knowledge of t_{p_X} is

R(m) := inf_A sup_{p ∈ P} E_z̄ L(A(z̄), f_p) = inf_A sup_{q ∈ P_X} sup_{p(y|t_q) s.t. p ∈ P, p_X = q} E_z̄ L(A(z̄), f_p),   (3)

while for the best learning algorithm with oracle knowledge of t_{p_X} it is

Q(m) := sup_{q ∈ P_X} inf_A sup_{p(y|t_q) s.t. p ∈ P, p_X = q} E_z̄ L(A(z̄), f_p).   (4)

Some clarification is in order regarding the classes over which the suprema are taken. In principle the worst-case expected loss for a given A is the supremum over P of the expected loss. Since f_p(x) is determined by p(y|t_{p_X}(x)), and t_{p_X} is determined by p_X, this is a supremum over q ∈ P_X of a supremum over p(y|t_q(·)) such that p_X = q. Finding the worst-case expected error for the best A therefore means taking the infimum of the supremum just described.
In the case of Q(m), since the algorithm knows t_q, the order of the supremum over t changes with respect to the infimum: the learner can select the best algorithm A using knowledge of t_q. Clearly R(m) ≥ Q(m) by definition. In the next section, we lower bound R(m) and upper bound Q(m) to establish a gap between R(m) and Q(m).

3 Main Result

We show that γ-uniform shattering dimension n or more implies a lower bound on the worst-case expected error, R(m), when the sample size m is smaller than n. In particular – in the setup specified in the previous Section – if {g_θ(·) : θ ∈ Θ} has much smaller VC dimension than n, this results in a distinct gap between rates for a learner with oracle access to t_{p_X} and a learner without.

Theorem 3. Consider the framework defined in the previous Section with Y = {0, 1}. Assume {g_θ(·) : θ ∈ Θ} has VC dimension d < m and P has γ-uniform α-shattering dimension n ≥ (1 + ε)m. Then, for sample size m,

Q(m) ≤ 16 √( (d log(m+1) + log 8 + 1) / (2m) )   while   R(m) > ε b c γ^{m+1}/8,

where b depends both on the type of loss and the presence of noise, while c depends on noise. Assume the standard definition in (1). If the f_p are binary (in the noise-free or noisy setting), b = 1 for absolute, squared, 0-1, hinge or logistic loss. In the noisy setting, if f_p = E(y|x) ∈ [0, 1], b = α for absolute loss and b = α² for squared loss. In general, c = 1 in the noise-free setting and c = (1/2 + α)^m in the noisy setting. By requiring P to satisfy a stronger notion of γ-uniform α-shattering one can obtain c = 1 even in the noisy case.

Note that for sample size m and γ-uniform α-shattering dimension 2m, we have ε = 1, so the lower bound in its simplest form becomes γ^{m+1}/8. This is the bound we will use in the next Section to derive implications of Theorem 3.

Remark 4. We have stated in the Theorem a simple upper bound, sticking to Y = {0, 1} and using VC dimension, in order to focus the presentation on the lower bound, which uses the new complexity measure. The upper bound could be improved.
It could also be replaced with a corresponding upper bound assuming instead Y = [0, 1] and fat-shattering dimension d.

Proof. The upper bound on Q(m) holds for an ERM algorithm (by the classic argument; see for example Corollary 12.1 in Devroye et al. (1996)). We focus here on the lower bound for R(m). Moreover, we stick to the simpler definition of γ-uniform shattering in Definition 1 and omit the proof of the final statement of the Theorem, which is slightly more involved. We let n = 2m (i.e. ε = 1) and we comment in a footnote on the result for general ε. Let S_1, . . . , S_2m be sets which are γ-uniformly α-shattered using the family P_2m ⊂ P and denote their union by S. By assumption S has measure at least γ under a reference measure q which is dominated by all marginals p_X for p ∈ P_2m (see Definition 1). We divide our argument into three parts.

1. If we prove a lower bound for the average over P_2m,

∀A,   (1/2^{2m}) Σ_{p ∈ P_2m} E_z̄ L(A(z̄), f_p) ≥ b c γ^{m+1}/8,   (5)

it will also be a lower bound for the supremum over P_2m:

∀A,   sup_{p ∈ P_2m} E_z̄ L(A(z̄), f_p) ≥ b c γ^{m+1}/8,

and hence for the supremum over P. It therefore suffices to prove (5).

2. Given x ∈ S, define v_p(x) to be the more likely label for x under the joint distribution p ∈ P_2m. This notation extends to the noisy case the definition of v_p already given for the noise-free case. The uniform shattering condition implies p(v_p(x)|x) > 1/2 + α in the noisy case and p(v_p(x)|x) = 1 in the noise-free case. Given x̄ = (x_1, . . . , x_m) ∈ S^m, write z̄_p(x̄) := (z_1, . . . , z_m) where z_j = (x_j, v_p(x_j)). Then

E_z̄ L(A(z̄), f_p) = ∫_{Z^m} L(A(z̄), f_p) dp^m(z̄) ≥ ∫_{S^m × Y^m} L(A(z̄), f_p) dp^m(z̄) ≥ c ∫_{S^m} L(A(z̄_p(x̄)), f_p) dp_X^m(x̄),

where c is as specified in the Theorem. Note the sets V_ℓ := {x̄ ∈ S^m ⊂ X^m : the x_j occupy exactly ℓ of the S_i} for ℓ = 1, . . . , m define a partition of S^m.
Recall that dp_X ≥ dq on S for all p ∈ P_2m, so

(1/2^{2m}) Σ_{p ∈ P_2m} ∫_{S^m} L(A(z̄_p(x̄)), f_p) dp_X^m(x̄) ≥ (1/2^{2m}) Σ_{p ∈ P_2m} Σ_{ℓ=1}^{m} ∫_{x̄ ∈ V_ℓ} L(A(z̄_p(x̄)), f_p) dq^m(x̄) = Σ_{ℓ=1}^{m} ∫_{x̄ ∈ V_ℓ} I dq^m(x̄),

where I := (1/2^{2m}) Σ_{p ∈ P_2m} L(A(z̄_p(x̄)), f_p) denotes the integrand. We claim I is bounded below by bγ/8 (this computation is performed in part 3, and depends on knowing x̄ ∈ V_ℓ). At the same time, S has measure at least γ under q, so

Σ_{ℓ=1}^{m} ∫_{x̄ ∈ V_ℓ} dq^m(x̄) = ∫_{x̄ ∈ S^m} dq^m(x̄) ≥ γ^m,

which will complete the proof of (5).

3. We now assume a fixed but arbitrary x̄ ∈ V_ℓ and prove I ≥ bγ/8. To simplify the discussion, we will refer to sets S_i which contain a component x_j of x̄ as S_i with data. We also need notation for the elements of P_2m: for each L ⊂ [2m] denote by p(L) the unique element of P_2m such that v_{p(L)}|_{S_i} = 1 if i ∈ L, and v_{p(L)}|_{S_i} = 0 if i ∉ L. Now, let L_x̄ := {i ∈ [2m] : x̄ ∩ S_i ≠ ∅}. These are the indices of sets S_i with data. By assumption |L_x̄| = ℓ, and so |L_x̄^c| = 2m − ℓ. Every subset L ⊂ [2m], and hence every p ∈ P_2m, is determined by L ∩ L_x̄ and L ∩ L_x̄^c. We will collect together all p(L) having the same L ∩ L_x̄, namely for each D ⊂ L_x̄ define P_D := {p(L) ∈ P_2m : L ∩ L_x̄ = D}. These 2^ℓ families partition P_2m and in each P_D there are 2^{2m−ℓ} probability distributions. Most importantly, z̄_p(x̄) is the same for all p ∈ P_D (because D determines v_p on the S_i with data). This implies A(z̄_p(x̄)) : X → R is the same function³ of X for all p in a given P_D. To simplify notation, since we will be working within a single P_D, we write f := A(z̄(x̄)). While f is the hypothesized regression function given data x̄, f_p is the true regression function when p is the underlying distribution.

For each set S_i let v_i be 1 if f is above 1/2 on a majority of S_i using reference measure q (a q-majority) and 0 otherwise. We now focus on the "unseen" S_i where no data lie (i.e., i ∈ L_x̄^c) and use the v_i to specify a 1-1 correspondence between elements p ∈ P_D and subsets K ⊂ L_x̄^c: p ∈ P_D ↔
K_p := {i ∈ L_x̄^c : v_p ≠ v_i}. Take a specific p ∈ P_D with its associated K_p. We have |f(x) − f_p(x)| > α on the q-majority of the set S_i for all i ∈ K_p. The condition |f(x) − f_p(x)| > α with f(x) and f_p(x) on opposite sides of 1/2 implies a lower bound on ℓ(f(x), f_p(x)) for each of the pointwise loss functions ℓ that we consider (0/1, absolute, square, hinge, logistic). The value of b, however, differs from case to case (see Appendix). For now we have

∫_{S_i} ℓ(f(x), f_p(x)) dp_X(x) ≥ ∫_{S_i} ℓ(f(x), f_p(x)) dq(x) ≥ b (1/2) ∫_{S_i} dq(x) ≥ bγ/(4m).

Summing over all i ∈ K_p, and letting k = |K_p|, we obtain (still for the same p) L(f, f_p) ≥ k bγ/(4m) (assuming L is defined by equation (1))⁴. There are (2m−ℓ choose k) possible K with cardinality k, for any k = 0, . . . , 2m − ℓ. Therefore,

Σ_{p ∈ P_D} L(f, f_p) ≥ Σ_{k=0}^{2m−ℓ} (2m−ℓ choose k) k bγ/(4m) = 2^{2m−ℓ} ((2m − ℓ)/2) bγ/(4m) ≥ 2^{2m−ℓ} bγ/8

(using 2m − ℓ ≥ 2m − m = m)⁵. Since D was an arbitrary subset of L_x̄, this same lower bound holds for each of the 2^ℓ families P_D, and so

I = (1/2^{2m}) Σ_{p ∈ P_2m} L(f, f_p) ≥ bγ/8.

In the constructions of the next Section it is often the case that one can prove a different level of shattering for different n, namely γ(n)-uniform shattering of n subsets for various n. The following Corollary is an immediate consequence of the Theorem for such settings. We state it for binary f_p without noise.

Corollary 5. Let C ∈ (0, 1) and M ∈ N. If P γ(n)-uniformly α-shatters n subsets of X and γ(n)^{n+1}/8 > C for all n < M, then no learning algorithm can achieve worst-case expected error below αC using a training sample of size less than M/2. If such uniform shattering holds for all n ∈ N then the same lower bound applies regardless of sample size.

Even when γ(n)-uniform shattering holds for all n ∈ N and lim_{n→∞} γ(n) = 1, if γ(n) approaches 1 sufficiently slowly then it is possible that γ(n)^{n+1} → 0 and there is no asymptotic obstacle to learning. By contrast, the next Section shows an extreme situation where lim_{n→∞} γ(n)^{n+1} ≥ 1/e > 0.
In that case, learning is impossible.

4 Applications and conclusion

Manifold learning. We now describe a simpler, finite-dimensional version of the example in Niyogi (2013). Let X = R^D, D ≥ 2, and Y = {0, 1}. Fix N ∈ N and consider a very simple type of 1-dimensional manifold in X, namely the union of N linear segments, connected in circular fashion (see Figure 1). Let P_X be the collection of marginal distributions, each of which is supported on and assigns uniform probability along a curve of this type. There is a 1-1 correspondence between the elements of P_X and the curves just described.

Figure 1: An example of M with N = 12. The dashed curve is labeled 1, the solid curve 0 (in the next Figure as well).
Figure 2: M with N = 28 = 4(n + 1) pieces, used to prove uniform shattering of n sets (shown for the case n = 6 with e = 010010).

On each curve M, choose two distinct points x′, x″. Removing these disconnects M. Let one component be labeled 0 and the other 1, then label x′ and x″ oppositely. Let P be the class of joint distributions on X × Y with conditionals as described and marginals in P_X. This is a noise-free setting and f_p is binary. Given M (or circular coordinates on M), consider the reduced class P_0 := {p ∈ P : support(p_X) = M}. Then H_0 := {f_p : p ∈ P_0} has VC dimension 3. On the other hand, for n < N/4 − 1 it can be shown that P γ(n)-uniformly shatters n sets with f_p, where γ(n) = 1 − 1/(n+1) (see Appendix and Figure 2). Since (1 − 1/(n+1))^{n+1} → 1/e > 0 as n → ∞, it follows from Corollary 5 that the worst-case expected error is bounded below by 1/(8e) for any sample of size n ≤ N/8 − 1/2. If many linear pieces are allowed (i.e. N is high) this could be an impractical number of labeled examples.

³ Warning: f need not be an element of {f_p : p ∈ P_2m}; we only know f ∈ H = {f_p : p ∈ P}.
⁴ In the L² version, using √x ≥ x for x ∈ [0, 1], the reader can verify the same lower bound holds.
⁵ In the case where we use (1 + ε)m instead of 2m, we would have (1 + ε)m − ℓ ≥ εm here.
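The constants in this example are easy to check numerically. The short script below is our own illustration, not from the paper: it verifies that γ(n)^{n+1} = (1 − 1/(n+1))^{n+1} increases toward 1/e ≈ 0.368, so the Corollary 5 lower-bound level 1/(8e) ≈ 0.046 binds at every n, and compares it against the Theorem 3 oracle upper bound for the VC dimension d = 3 class H_0:

```python
import math

# gamma(n) = 1 - 1/(n+1) from the manifold example; the lower bound of
# Corollary 5 involves gamma(n)^(n+1) = (1 - 1/(n+1))^(n+1) -> 1/e.
for n in [1, 10, 100, 10**4, 10**6]:
    print(n, round((1 - 1 / (n + 1)) ** (n + 1), 6))
assert abs((1 - 1 / (10**6 + 1)) ** (10**6 + 1) - 1 / math.e) < 1e-6

# Oracle upper bound from Theorem 3 with d = 3 (VC dimension of H_0),
# versus the single-step lower-bound level 1/(8e):
def Q_upper(m, d=3):
    return 16 * math.sqrt((d * math.log(m + 1) + math.log(8) + 1) / (2 * m))

lower = 1 / (8 * math.e)          # about 0.046, independent of m
for m in [10**4, 10**6, 10**8]:
    print(m, round(Q_upper(m), 4), round(lower, 4))
# For m large enough (around 10**7 here), the oracle learner's guarantee
# drops below the level that binds any single-step learner in this example.
```

The oracle bound decays like √(log m / m), while the single-step lower bound stays at a constant level, which is the finite-sample gap the Theorem is after.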
By contrast with this example, γ(n) in Niyogi's example cannot be made arbitrarily close to 1.

Group-invariant features. We give a simplified, partially-discrete example (for a smooth version and Figures, see Appendix). Let Y = {0, 1} and let X = J × I where J = {0, 1, . . . , n_1 − 1} × {0, 1, . . . , n_2 − 1} is an n_1-by-n_2 grid (n_i ∈ N) and I = [0, 1] is a real line segment. One should picture X as a rectangular array of vertical sticks. Above each grid point (j_1, j_2) consider two special points on the stick I, one with i = i_+ := 1 − ε and the other with i = i_− := 0 + ε. Let P_X contain only the uniform distribution on X and assume the noise-free setting. For each ē ∈ {+, −}^{n_1 n_2}, on each segment (j_1, j_2) × I assign, via p_ē, the label 1 above the special point (determined by ē) and 0 below the point. This determines a family of 2^{n_1 n_2} conditional distributions and thus a family P := {p_ē : ē ∈ {+, −}^{n_1 n_2}} of 2^{n_1 n_2} joint distributions. The reader can verify that P has 2ε-uniform shattering dimension n_1 n_2.

Note that when the true distribution is p_ē for some ē ∈ {+, −}^{n_1 n_2}, the labels will be invariant under the action a_ē of Z_{n_1} × Z_{n_2} defined as follows. Given (z_1, z_2) ∈ Z_{n_1} × Z_{n_2} and (j_1, j_2) ∈ J, let the group element (z_1, z_2) move the vertical stick at (j_1, j_2) to the one at (z_1 + j_1 mod n_1, z_2 + j_2 mod n_2) without flipping the stick over, just stretching it as needed so the special point i_± determined by ē on the first stick goes to the one on the second stick. The orbit space of the action can be identified with I. Let t : X × Y → I be the projection of X × Y to this orbit space; then there is an induced labelling of this orbit space (because labels were invariant under the action of the group). Given access to t, the resulting concept class has VC dimension 1. On the other hand, given instead access to a projection s for the action of the subgroup Z_{n_1} × {0}, the class P̃ := {p(·|s) : p ∈ P} has 2ε-uniform shattering dimension n_2.
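The invariance claim can be checked mechanically. The sketch below is ours, not the paper's; the piecewise-linear stretch is one hypothetical way to implement "stretching as needed". It draws a random sign pattern ē on a 3-by-4 grid, labels each stick 1 above its special point and 0 below, and verifies that every element of Z_{n_1} × Z_{n_2} preserves the labels:

```python
import itertools, random

# Sketch of the stick-grid example: X = J x I with J a 3 x 4 grid and
# I = [0, 1]. For a sign pattern e, the stick over (j1, j2) is labeled 1
# above its special point (1 - eps for '+', eps for '-') and 0 below it.
n1, n2, eps = 3, 4, 0.1
random.seed(0)
e = {j: random.choice('+-') for j in itertools.product(range(n1), range(n2))}

def thr(j):                       # height of the special point on stick j
    return 1 - eps if e[j] == '+' else eps

def label(j, i):
    return 1 if i > thr(j) else 0

def act(z, j, i):
    """Group element z in Z_n1 x Z_n2: shift the stick, then stretch I
    piecewise-linearly so special point maps to special point."""
    jz = ((z[0] + j[0]) % n1, (z[1] + j[1]) % n2)
    t1, t2 = thr(j), thr(jz)
    iz = i * t2 / t1 if i <= t1 else 1 - (1 - i) * (1 - t2) / (1 - t1)
    return jz, iz

# p(y|x) = p(y|z.x): labels are invariant under every group element.
for z in itertools.product(range(n1), range(n2)):
    for j in e:
        for i in [0.0, eps / 2, 0.5, 1 - eps / 2, 1.0]:
            jz, iz = act(z, j, i)
            assert label(j, i) == label(jz, iz)
print("labels invariant under all", n1 * n2, "group elements")
```

Quotienting by the full group collapses all n_1 n_2 sticks to a single copy of I, which is why the induced concept class is so small.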
Thus we have a general setting where the overall complexity requirements for two-step learning are n_1 + n_2 while for single-step learning they are n_1 n_2.

Conclusion. We used a notion of uniform shattering to demonstrate both manifold learning and invariant feature learning situations where learning becomes impossible unless the learner has access to very large amounts of labeled data or else uses a two-step semi-supervised approach in which suitable manifold- or group-invariant features are learned first in unsupervised fashion. Our examples also provide a complexity manifestation of the advantages, observed by Poggio and Mallat, of forming intermediate group-invariant features according to sub-groups of a larger transformation group.

Acknowledgements. The author is deeply grateful to Partha Niyogi for the chance to have been his student. This paper is directly inspired by discussions with him which were cut short much too soon. The author also thanks Ankan Saha and Misha Belkin for very helpful input on preliminary drafts.

References

M. Ahissar and S. Hochstein. The reverse hierarchy theory of visual perceptual learning. Trends in Cognitive Sciences, 8(10):457–464, 2004.
G. Alain and Y. Bengio. What regularized auto-encoders learn from the data generating distribution. Technical report, 2012. arXiv:1211.4246 [cs.LG].
M.-F. Balcan and A. Blum. A PAC-style model for learning from labeled and unlabeled data. In Learning Theory, volume 3559, pages 111–126. Springer LNCS, 2005.
J. Baxter. A model of inductive bias learning. Journal of Artificial Intelligence Research, 12:149–198, 2000.
M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: a geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
S. Ben-David, T. Lu, and D. Pál. Does unlabeled data provably help? Worst-case analysis of the sample complexity of semi-supervised learning. In COLT, pages 33–44, 2008.
J. Bourne and M. Rosa.
Hierarchical development of the primate visual cortex, as revealed by neurofilament immunoreactivity: early maturation of the middle temporal area (MT). Cerebral Cortex, 16(3):405–414, 2006.
V. Castelli and T. Cover. The relative value of labeled and unlabeled samples in pattern recognition. IEEE Transactions on Information Theory, 42:2102–2117, 1996.
L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition, volume 31 of Applications of Mathematics. Springer, New York, 1996.
D. Haussler. Generalizing the PAC model: sample size bounds from metric dimension-based uniform convergence results. pages 40–45, 1989.
M. Kearns and R. Schapire. Efficient distribution-free learning of probabilistic concepts. Journal of Computer and System Sciences, 48:464–497, 1994.
M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, Massachusetts, 1994.
V. Koltchinskii. Rademacher penalties and structural risk minimization. IEEE Transactions on Information Theory, 47(5):1902–1914, 2001.
S. Mallat. Group invariant scattering. CoRR, abs/1101.2286, 2011. http://arxiv.org/abs/1101.2286.
M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. MIT Press, 2012.
P. Niyogi. Manifold regularization and semi-supervised learning: some theoretical analyses. Journal of Machine Learning Research, 14:1229–1250, 2013.
T. Poggio, J. Mutch, F. Anselmi, L. Rosasco, J. Leibo, and A. Tacchetti. The computational magic of the ventral stream: sketch of a theory (and why some deep architectures work). Technical report MIT-CSAIL-TR-2012-035, Massachusetts Institute of Technology, 2012.
R. Urner, S. Shalev-Shwartz, and S. Ben-David. Access to unlabeled data can speed up prediction time. In ICML, 2011.
L. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984.
The non-convex Burer–Monteiro approach works on smooth semidefinite programs

Nicolas Boumal⋆, Department of Mathematics, Princeton University, nboumal@math.princeton.edu
Vladislav Voroninski⋆, Department of Mathematics, Massachusetts Institute of Technology, vvlad@math.mit.edu
Afonso S. Bandeira, Department of Mathematics and Center for Data Science, Courant Institute of Mathematical Sciences, New York University, bandeira@cims.nyu.edu

Abstract

Semidefinite programs (SDPs) can be solved in polynomial time by interior point methods, but scalability can be an issue. To address this shortcoming, over a decade ago, Burer and Monteiro proposed to solve SDPs with few equality constraints via rank-restricted, non-convex surrogates. Remarkably, for some applications, local optimization methods seem to converge to global optima of these non-convex surrogates reliably. Although some theory supports this empirical success, a complete explanation of it remains an open question. In this paper, we consider a class of SDPs which includes applications such as max-cut, community detection in the stochastic block model, robust PCA, phase retrieval and synchronization of rotations. We show that the low-rank Burer–Monteiro formulation of SDPs in that class almost never has any spurious local optima.

This paper was corrected on April 9, 2018. Theorems 2 and 4 had the assumption that M (1) is a manifold. From this assumption it was stated that T_Y M = {Ẏ ∈ R^{n×p} : A(ẎY⊤ + YẎ⊤) = 0}, which is not true in general. To ensure this identity, the theorems now make the stronger assumption that the gradients of the constraints A(YY⊤) = b are linearly independent for all Y in M. All examples treated in the paper satisfy this assumption. Appendix D gives details.
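For a concrete instance, the corrected assumption is easy to check numerically. The sketch below (ours, not from the paper) does so for the Max-Cut constraints A(YY⊤) = diag(YY⊤) = 1: the gradient of the i-th constraint at Y is the matrix with 2·Y[i] in row i and zeros elsewhere, so the gradients are linearly independent whenever no row of Y vanishes, and the tangent space identity reduces to each row of Ẏ being orthogonal to the matching row of Y:

```python
import numpy as np

# Numeric check of the corrected manifold assumption for the Max-Cut
# constraints diag(Y Y^T) = 1 (our illustration, not the paper's code).
rng = np.random.default_rng(1)
n, p = 6, 3
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # feasible point: unit rows

# Gradient of the i-th constraint ||Y_i||^2 - 1 is 2*Y[i] in row i, else 0.
grads = np.zeros((n, n * p))
for i in range(n):
    G = np.zeros((n, p))
    G[i] = 2 * Y[i]
    grads[i] = G.ravel()
# Disjoint supports and nonzero rows imply linear independence:
assert np.linalg.matrix_rank(grads) == n

# Tangent space: diag(Ydot Y^T + Y Ydot^T) = 0 iff each row of Ydot is
# orthogonal to the matching (unit-norm) row of Y.
Ydot = rng.standard_normal((n, p))
Ydot -= np.sum(Ydot * Y, axis=1, keepdims=True) * Y   # remove radial part
assert np.allclose(np.diag(Ydot @ Y.T + Y @ Ydot.T), 0)
print("constraint gradients independent; tangent space identity verified")
```

Since every feasible Y for Max-Cut has unit-norm (hence nonzero) rows, the linear-independence assumption holds at every feasible point, which is why this example is covered by the corrected theorems.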
1 Introduction

We consider semidefinite programs (SDPs) of the form

f* = min_{X ∈ S^{n×n}} ⟨C, X⟩  subject to  A(X) = b, X ⪰ 0,   (SDP)

where ⟨C, X⟩ = Tr(C⊤X), C ∈ S^{n×n} is the symmetric cost matrix, A : S^{n×n} → R^m is a linear operator capturing m equality constraints with right-hand side b ∈ R^m, and the variable X is symmetric, positive semidefinite. Interior point methods solve (SDP) in polynomial time [Nesterov, 2004]. In practice however, for n beyond a few thousands, such algorithms run out of memory (and time), prompting research for alternative solvers.

⋆ The first two authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

If (SDP) has a compact search space, then it admits a global optimum of rank at most r, where r(r+1)/2 ≤ m [Pataki, 1998, Barvinok, 1995]. Thus, if one restricts the search space of (SDP) to matrices of rank at most p with p(p+1)/2 ≥ m, then the globally optimal value remains unchanged. This restriction is easily enforced by factorizing X = YY⊤ where Y has size n × p, yielding an equivalent quadratically constrained quadratic program:

q* = min_{Y ∈ R^{n×p}} ⟨CY, Y⟩  subject to  A(YY⊤) = b.   (P)

In general, (P) is non-convex, making it a priori unclear how to solve it globally. Still, the benefits are that it is lower dimensional than (SDP) and has no conic constraint. This has motivated Burer and Monteiro [2003, 2005] to try and solve (P) using local optimization methods, with surprisingly good results. They developed theory in support of this observation (details below). About their results, Burer and Monteiro [2005, §3] write (mutatis mutandis): "How large must we take p so that the local minima of (P) are guaranteed to map to global minima of (SDP)?
Our theorem asserts that we need only¹ p(p+1)/2 > m (with the important caveat that positive-dimensional faces of (SDP) which are ‘flat’ with respect to the objective function can harbor non-global local minima).” The caveat—the existence or non-existence of non-global local optima, or their potentially adverse effect for local optimization algorithms—was not further discussed. In this paper, assuming p(p+1)/2 > m, we show that if the search space of (SDP) is compact and if the search space of (P) is a regularly defined smooth manifold, then, for almost all cost matrices C, if Y satisfies first- and second-order necessary optimality conditions for (P), then Y is a global optimum of (P) and, since p(p+1)/2 ≥ m, X = YY⊤ is a global optimum of (SDP). In other words, first- and second-order necessary optimality conditions for (P) are also sufficient for global optimality—an unusual theoretical guarantee in non-convex optimization. Notice that this is a statement about the optimization problem itself, not about specific algorithms. Interestingly, known algorithms for optimization on manifolds converge to second-order critical points,² regardless of initialization [Boumal et al., 2016]. For the specified class of SDPs, our result improves on those of [Burer and Monteiro, 2005] in two important ways. Firstly, for almost all C, we formally exclude the existence of spurious local optima.³ Secondly, we only require the computation of second-order critical points of (P) rather than local optima (which is hard in general [Vavasis, 1991]). Below, we make a statement about computational complexity, and we illustrate the practical efficiency of the proposed methods through numerical experiments.
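As a concrete check of the rank restriction and the cost equivalence, here is a small illustrative sketch (Python/NumPy; not from the paper). It computes the smallest p with p(p+1)/2 > m and verifies numerically that the factorized cost ⟨CY, Y⟩ agrees with the SDP cost ⟨C, YY⊤⟩ for a random Y; the specific dimensions are hypothetical.

```python
import numpy as np

def smallest_p(m):
    """Smallest integer p such that p(p+1)/2 > m."""
    p = 1
    while p * (p + 1) // 2 <= m:
        p += 1
    return p

rng = np.random.default_rng(0)
n, m = 6, 6                 # e.g. one equality constraint per row, as in Max-Cut
p = smallest_p(m)           # smallest p with p(p+1)/2 > 6, namely p = 4

C = rng.standard_normal((n, n))
C = (C + C.T) / 2           # symmetric cost matrix
Y = rng.standard_normal((n, p))
X = Y @ Y.T                 # PSD by construction, rank at most p

# The SDP cost <C, X> equals the factorized cost <CY, Y>.
assert np.isclose(np.trace(C.T @ X), np.trace((C @ Y).T @ Y))
```

The equivalence holds because ⟨C, YY⊤⟩ = Tr(Y⊤CY) = ⟨CY, Y⟩ for symmetric C; the factorization trades the conic constraint X ⪰ 0 for non-convexity in Y.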
SDPs which satisfy the compactness and smoothness assumptions occur in a number of applications including Max-Cut, robust PCA, Z2-synchronization, community detection, cut-norm approximation, phase synchronization, phase retrieval, synchronization of rotations and the trust-region subproblem—see Section 4 for references. A simple example: the Max-Cut problem Given an undirected graph, Max-Cut is the NP-hard problem of clustering the n nodes of this graph in two classes, +1 and −1, such that as many edges as possible join nodes of different signs. If C is the adjacency matrix of the graph, Max-Cut is expressed as max_{x ∈ R^n} (1/4) Σ_{i,j=1}^n C_ij (1 − x_i x_j) s.t. x_1² = · · · = x_n² = 1. (Max-Cut) ¹The condition on p and m is slightly, but inconsequentially, different in [Burer and Monteiro, 2005]. ²Second-order critical points satisfy first- and second-order necessary optimality conditions. ³Before Prop. 2.3 in [Burer and Monteiro, 2005], the authors write: “The change of variables X = YY⊤ does not introduce any extraneous local minima.” This is sometimes misunderstood to mean (P) does not have spurious local optima, when it actually means that the local optima of (P) are in exact correspondence with the local optima of “(SDP) with the extra constraint rank(X) ≤ p,” which is also non-convex and thus also liable to having local optima. Unfortunately, this misinterpretation has led to some confusion in the literature. Introducing the positive semidefinite matrix X = xx⊤, both the cost and the constraints may be expressed linearly in terms of X. Ignoring that X has rank 1 yields the well-known convex relaxation in the form of a semidefinite program (up to an affine transformation of the cost): min_{X ∈ S^{n×n}} ⟨C, X⟩ s.t. diag(X) = 1, X ⪰ 0. (Max-Cut SDP) If a solution X of this SDP has rank 1, then X = xx⊤ for some x which is then an optimal cut.
In the general case of higher rank X, Goemans and Williamson [1995] exhibited the celebrated rounding scheme to produce approximately optimal cuts (within a ratio of .878) from X. The corresponding Burer–Monteiro non-convex problem with rank bounded by p is: min_{Y ∈ R^{n×p}} ⟨CY, Y⟩ s.t. diag(YY⊤) = 1. (Max-Cut BM) The constraint diag(YY⊤) = 1 requires each row of Y to have unit norm; that is: Y is a point on the Cartesian product of n unit spheres in R^p, which is a smooth manifold. Furthermore, all X feasible for the SDP have identical trace equal to n, so that the search space of the SDP is compact. Thus, our results stated below apply: For p = ⌈√(2n)⌉, for almost all C, even though (Max-Cut BM) is non-convex, any local optimum Y is a global optimum (and so is X = YY⊤), and all saddle points have an escape (the Hessian has a negative eigenvalue). We note that, for p > n/2, the same holds for all C [Boumal, 2015]. Notation S^{n×n} is the set of real, symmetric matrices of size n. A symmetric matrix X is positive semidefinite (X ⪰ 0) if and only if u⊤Xu ≥ 0 for all u ∈ R^n. For matrices A, B, the standard Euclidean inner product is ⟨A, B⟩ = Tr(A⊤B). The associated (Frobenius) norm is ∥A∥ = √⟨A, A⟩. Id is the identity operator and I_n is the identity matrix of size n. 2 Main results Our main result establishes conditions under which first- and second-order necessary optimality conditions for (P) are sufficient for global optimality. Under those conditions, it is a fortiori true that global optima of (P) map to global optima of (SDP), so that local optimization methods on (P) can be used to solve the higher-dimensional, cone-constrained (SDP). We now specify the necessary optimality conditions of (P). Under the assumptions of our main result below (Theorem 2), the search space M = M_p = {Y ∈ R^{n×p} : A(YY⊤) = b} (1) is a smooth and compact manifold of dimension np − m.
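For (Max-Cut BM), the manifold M is the product of n unit spheres in R^p. A minimal sketch (hypothetical Python/NumPy illustration, not the paper's code) produces a feasible point by row normalization and checks the properties noted above: diag(YY⊤) = 1, X = YY⊤ positive semidefinite, and Tr(X) = n.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3

# A random point on the product of n unit spheres in R^p:
# normalize each row of an n x p matrix.
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

# Feasibility for (Max-Cut BM): diag(Y Y^T) = 1, i.e. each row has unit norm.
assert np.allclose(np.diag(Y @ Y.T), 1.0)

# The corresponding SDP matrix X = Y Y^T is feasible for (Max-Cut SDP):
X = Y @ Y.T
assert np.all(np.linalg.eigvalsh(X) >= -1e-10)  # X is positive semidefinite
assert np.isclose(np.trace(X), n)               # fixed trace n: compact search space
```

Row normalization is exactly the projection onto M here, which is what makes this manifold convenient for local optimization methods.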
As such, it can be linearized at each point Y ∈ M by a tangent space, differentiating the constraints [Absil et al., 2008, eq. (3.19)]: T_Y M = {Ẏ ∈ R^{n×p} : A(ẎY⊤ + YẎ⊤) = 0}. (2) Endowing the tangent spaces of M with the (restricted) Euclidean metric ⟨A, B⟩ = Tr(A⊤B) turns M into a Riemannian submanifold of R^{n×p}. In general, second-order optimality conditions can be intricate to handle [Ruszczyński, 2006]. Fortunately, here, the smoothness of both the search space (1) and the cost function f(Y) = ⟨CY, Y⟩ (3) make for straightforward conditions. In spirit, they coincide with the well-known conditions for unconstrained optimization. As further detailed in Appendix A, the Riemannian gradient grad f(Y) is the orthogonal projection of the classical gradient of f to the tangent space T_Y M. The Riemannian Hessian of f at Y is a similarly restricted version of the classical Hessian of f to the tangent space. Definition 1. A (first-order) critical point for (P) is a point Y ∈ M such that grad f(Y) = 0, (1st order nec. opt. cond.) where grad f(Y) ∈ T_Y M is the Riemannian gradient at Y of f restricted to M. A second-order critical point for (P) is a critical point Y such that Hess f(Y) ⪰ 0, (2nd order nec. opt. cond.) where Hess f(Y): T_Y M → T_Y M is the Riemannian Hessian at Y of f restricted to M (a symmetric linear operator). Proposition 1. All local (and global) optima of (P) are second-order critical points. Proof. See [Yang et al., 2014, Rem. 4.2 and Cor. 4.2]. We can now state our main result. In the theorem statement below, “for almost all C” means potentially troublesome cost matrices form at most a (Lebesgue) zero-measure subset of S^{n×n}, in the same way that almost all square matrices are invertible. In particular, given any matrix C ∈ S^{n×n}, perturbing C to C + σW where W is a Wigner random matrix results in an acceptable cost matrix with probability 1, for arbitrarily small σ > 0. Theorem 2.
Given constraints A: S^{n×n} → R^m, b ∈ R^m and p satisfying p(p+1)/2 > m, if (i) the search space of (SDP) is compact; and (ii) the search space of (P) is a regularly-defined smooth manifold, in the sense that A_1Y, …, A_mY are linearly independent in R^{n×p} for all Y ∈ M (see Appendix D), then for almost all cost matrices C ∈ S^{n×n}, any second-order critical point of (P) is globally optimal. Under these conditions, if Y is globally optimal for (P), then the matrix X = YY⊤ is globally optimal for (SDP). The assumptions are discussed in the next section. The proof—see Appendix A—follows directly from the combination of two intermediate results: 1. If Y is rank deficient and second-order critical for (P), then it is globally optimal and X = YY⊤ is optimal for (SDP); and 2. If p(p+1)/2 > m, then, for almost all C, every first-order critical Y is rank-deficient. The first step holds in a more general context, as previously established by Burer and Monteiro [2003, 2005]. The second step is new and crucial, as it allows us to formally exclude the existence of spurious local optima, generically in C, thus resolving the caveat mentioned in the introduction. The smooth structure of (P) naturally suggests using Riemannian optimization to solve it [Absil et al., 2008], which is something that was already proposed by Journée et al. [2010] in the same context. Importantly, known algorithms converge to second-order critical points regardless of initialization. We state here a recent computational result to that effect. Proposition 3. Under the numbered assumptions of Theorem 2, the Riemannian trust-region method (RTR) [Absil et al., 2007] initialized with any Y_0 ∈ M returns in O(1/(ε_g² ε_H) + 1/ε_H³) iterations a point Y ∈ M such that f(Y) ≤ f(Y_0), ∥grad f(Y)∥ ≤ ε_g, and Hess f(Y) ⪰ −ε_H Id. Proof. Apply the main results of [Boumal et al., 2016] using that f has locally Lipschitz continuous gradient and Hessian in R^{n×p} and M is a compact submanifold of R^{n×p}.
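Proposition 3 concerns RTR; as a lighter illustration of the same first-order machinery, here is a hypothetical sketch (Python/NumPy, not the paper's method) of plain Riemannian gradient descent for (Max-Cut BM). It projects the Euclidean gradient 2CY onto the tangent space row by row and uses row normalization as the retraction; the fixed step size is an assumption for the toy problem.

```python
import numpy as np

def riem_grad(C, Y):
    """Riemannian gradient of f(Y) = <CY, Y> on the product of unit spheres:
    project the Euclidean gradient 2CY onto each row's tangent space."""
    G = 2 * C @ Y
    # subtract, from each row of G, its component along the corresponding row of Y
    return G - np.sum(G * Y, axis=1, keepdims=True) * Y

def retract(Y):
    """Return to the manifold by renormalizing each row."""
    return Y / np.linalg.norm(Y, axis=1, keepdims=True)

rng = np.random.default_rng(2)
n, p = 10, 4
C = rng.standard_normal((n, n))
C = (C + C.T) / 2
f = lambda Z: np.trace((C @ Z).T @ Z)

Y = retract(rng.standard_normal((n, p)))
f0 = f(Y)
for _ in range(500):                  # fixed small step; RTR would adapt this
    Y = retract(Y - 1e-2 * riem_grad(C, Y))

assert f(Y) < f0                                              # the cost decreased
assert np.allclose(np.sum(riem_grad(C, Y) * Y, axis=1), 0.0)  # gradient is tangent
```

Unlike RTR, such a first-order loop carries no second-order guarantee; it only illustrates the projection and retraction steps on this manifold.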
Essentially, each iteration of RTR requires evaluation of one cost and one gradient, a bounded number of Hessian-vector applications, and one projection from R^{n×p} to M. In many important cases, this projection amounts to Gram–Schmidt orthogonalization of small blocks of Y—see Section 4. Proposition 3 bounds worst-case iteration counts for arbitrary initialization. In practice, a good initialization point may be available, making the local convergence rate of RTR more informative. For RTR, one may expect superlinear or even quadratic local convergence rates near isolated local minimizers [Absil et al., 2007]. While minimizers are not isolated in our case [Journée et al., 2010], experiments show a characteristically superlinear local convergence rate in practice [Boumal, 2015]. This means high accuracy solutions can be achieved, as demonstrated in Appendix B. Thus, under the conditions of Theorem 2, generically in C, RTR converges to global optima. In practice, the algorithm returns after a finite number of steps, and only approximate second-order criticality is guaranteed. Hence, it is interesting to bound the optimality gap in terms of the approximation quality. Unfortunately, we do not establish such a result for small p. Instead, we give an a posteriori computable optimality gap bound which holds for all p and for all C. In the following statement, the dependence of M on p is explicit, as M_p. The proof is in Appendix A. Theorem 4. Let R < ∞ be the maximal trace of any X feasible for (SDP). For any p such that M_p and M_{p+1} are smooth manifolds (even if p(p+1)/2 ≤ m) and for any Y ∈ M_p, form Ỹ = [Y | 0_{n×1}] in M_{p+1}. The optimality gap at Y is bounded as 0 ≤ 2(f(Y) − f*) ≤ √R ∥grad f(Y)∥ − R λ_min(Hess f(Ỹ)). (4) If all feasible X have the same trace R and there exists a positive definite feasible X, then the bound simplifies to 0 ≤ 2(f(Y) − f*) ≤ −R λ_min(Hess f(Ỹ)) (5) so that ∥grad f(Y)∥ need not be controlled explicitly. If p > n, the bounds hold with Ỹ = Y.
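The lift Ỹ = [Y | 0_{n×1}] used in Theorem 4 is simple to form. A hypothetical sketch (Python/NumPy) checks, for the Max-Cut constraints, that appending a zero column preserves feasibility, the cost, and the matrix X = YY⊤:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 2
C = rng.standard_normal((n, n))
C = (C + C.T) / 2

# A feasible point of (Max-Cut BM) in M_p: rows of unit norm.
Y = rng.standard_normal((n, p))
Y /= np.linalg.norm(Y, axis=1, keepdims=True)

# Lift to M_{p+1} by appending a zero column: Y_tilde = [Y | 0].
Y_tilde = np.hstack([Y, np.zeros((n, 1))])

f = lambda Z: np.trace((C @ Z).T @ Z)

assert np.allclose(np.diag(Y_tilde @ Y_tilde.T), 1.0)   # still feasible in M_{p+1}
assert np.isclose(f(Y_tilde), f(Y))                     # same cost
assert np.allclose(Y_tilde @ Y_tilde.T, Y @ Y.T)        # same X = Y Y^T
```

The lift changes nothing about the point as seen by (SDP); what it adds is the extra tangent directions along which Hess f(Ỹ) is probed in the bound.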
In particular, for p = n + 1, the bound can be controlled a priori: approximate second-order critical points are approximately optimal, for any C.⁴ Corollary 5. Under the assumptions of Theorem 4, if p = n + 1 and Y ∈ M satisfies both ∥grad f(Y)∥ ≤ ε_g and Hess f(Y) ⪰ −ε_H Id, then Y is approximately optimal in the sense that 0 ≤ 2(f(Y) − f*) ≤ √R ε_g + R ε_H. Under the same condition as in Theorem 4, the bound can be simplified to R ε_H. This works well with Proposition 3. For any p, equation (4) also implies the following: λ_min(Hess f(Ỹ)) ≤ −(2(f(Y) − f*) − √R ∥grad f(Y)∥)/R. That is, for any p and any C, an approximate critical point Y in M_p which is far from optimal maps to a comfortably-escapable approximate saddle point Ỹ in M_{p+1}. This suggests an algorithm as follows. For a starting value of p such that M_p is a manifold, use RTR to compute an approximate second-order critical point Y. Then, form Ỹ in M_{p+1} and test the left-most eigenvalue of Hess f(Ỹ).⁵ If it is close enough to zero, this provides a good bound on the optimality gap. If not, use an (approximate) eigenvector associated to λ_min(Hess f(Ỹ)) to escape the approximate saddle point and apply RTR from that new point in M_{p+1}; iterate. In the worst-case scenario, p grows to n + 1, at which point all approximate second-order critical points are approximate optima. Theorem 2 suggests p = ⌈√(2m)⌉ should suffice for C bounded away from a zero-measure set. Such an algorithm already features with less theory in [Journée et al., 2010] and [Boumal, 2015]; in the latter, it is called the Riemannian staircase, for it lifts (P) floor by floor. Related work Low-rank approaches to solve SDPs have featured in a number of recent research papers. We highlight just two which illustrate different classes of SDPs of interest. Shah et al.
[2016] tackle SDPs with linear cost and linear constraints (both equalities and inequalities) via low-rank factorizations, assuming the matrices appearing in the cost and constraints are positive semidefinite. They propose a non-trivial initial guess to partially overcome non-convexity with great empirical results, but do not provide optimality guarantees. Bhojanapalli et al. [2016a] on the other hand consider the minimization of a convex cost function over positive semidefinite matrices, without constraints. Such problems could be obtained from generic SDPs by penalizing the constraints in a Lagrangian way. Here too, non-convexity is partially overcome via non-trivial initialization, with global optimality guarantees under some conditions. Also of interest are recent results about the harmlessness of non-convexity in low-rank matrix completion [Ge et al., 2016, Bhojanapalli et al., 2016b]. Similarly to the present work, the authors there show there is no need for special initialization despite non-convexity. ⁴With p = n + 1, problem (P) is no longer lower dimensional than (SDP), but retains the advantage of not involving a positive semidefiniteness constraint. ⁵It may be more practical to test λ_min(S) (14) rather than λ_min(Hess f). Lemma 7 relates the two. See [Journée et al., 2010, §3.3] to construct escape tangent vectors from S. 3 Discussion of the assumptions Our main result, Theorem 2, comes with geometric assumptions on the search spaces of both (SDP) and (P) which we now discuss. Examples of SDPs which fit the assumptions of Theorem 2 are featured in the next section. The assumption that the search space of (SDP), C = {X ∈ S^{n×n} : A(X) = b, X ⪰ 0}, (6) is compact works in tandem with the assumption p(p+1)/2 > m as follows. For (P) to reveal the global optima of (SDP), it is necessary that (SDP) admits a solution of rank at most p.
One way to ensure this is via the Pataki–Barvinok theorems [Pataki, 1998, Barvinok, 1995], which state that all extreme points of C have rank r bounded as r(r+1)/2 ≤ m. Extreme points are faces of dimension zero (such as vertices for a cube). When optimizing a linear cost function ⟨C, X⟩ over a compact convex set C, at least one extreme point is a global optimum [Rockafellar, 1970, Cor. 32.3.2]—this is not true in general if C is not compact. Thus, under the assumptions of Theorem 2, there is a point Y ∈ M such that X = YY⊤ is an optimal extreme point of (SDP); then, of course, Y itself is optimal for (P). In general, the Pataki–Barvinok bound is tight, in that there exist extreme points of rank up to that upper bound (rounded down)—see for example [Laurent and Poljak, 1996] for the Max-Cut SDP and [Boumal, 2015] for the Orthogonal-Cut SDP. Let C (the cost matrix) be the negative of such an extreme point. Then, the unique optimum of (SDP) is that extreme point, showing that p(p+1)/2 ≥ m is necessary for (SDP) and (P) to be equivalent for all C. We further require a strict inequality because our proof relies on properties of rank deficient Y’s in M. The assumption that M (eq. (1)) is a regularly-defined smooth manifold works in tandem with the ambition that the result should hold for (almost) all cost matrices C. The starting point is that, for a given non-convex smooth optimization problem—even a quadratically constrained quadratic program—computing local optima is hard in general [Vavasis, 1991]. Thus, we wish to restrict our attention to efficiently computable points, such as points which satisfy first- and second-order KKT conditions for (P)—see [Burer and Monteiro, 2003, §2.2] and [Ruszczyński, 2006, §3]. This only makes sense if global optima satisfy the latter, that is, if KKT conditions are necessary for optimality. A global optimum Y necessarily satisfies KKT conditions if constraint qualifications (CQs) hold at Y [Ruszczyński, 2006].
The standard CQs for equality constrained programs are Robinson’s conditions or metric regularity (they are here equivalent). They read as follows, assuming A(YY⊤)_i = ⟨A_i, YY⊤⟩ for some matrices A_1, …, A_m ∈ S^{n×n}: CQs hold at Y if A_1Y, …, A_mY are linearly independent in R^{n×p}. (7) Considering almost all C, global optima could, a priori, be almost anywhere in M. To simplify, we require CQs to hold at all Y’s in M rather than only at the (unknown) global optima. Under this condition, the constraints are independent at each point and ensure M is a smooth embedded submanifold of R^{n×p} of codimension m [Absil et al., 2008, Prop. 3.3.3]. Indeed, tangent vectors Ẏ ∈ T_Y M (2) are exactly those vectors that satisfy ⟨A_iY, Ẏ⟩ = 0: under CQs, the A_iY’s form a basis of the normal space to the manifold at Y. Finally, we note that Theorem 2 only applies for almost all C, rather than all C. To justify this restriction, if indeed it is justified, one should exhibit a matrix C that leads to suboptimal second-order critical points while other assumptions are satisfied. We do not have such an example. We do observe that (Max-Cut SDP) on cycles of certain even lengths has a unique solution of rank 1, while the corresponding (Max-Cut BM) with p = 2 has suboptimal local optima (strictly, if we quotient out symmetries). This at least suggests it is not enough, for generic C, to set p just larger than the rank of the solutions of the SDP. (For those same examples, at p = 3, we consistently observe convergence to global optima.) 4 Examples of smooth SDPs The canonical examples of SDPs which satisfy the assumptions in Theorem 2 are those where the diagonal blocks of X or their traces are fixed.
We note that the algorithms and the theory continue to hold for complex matrices, where the set of Hermitian matrices of size n is treated as a real vector space of dimension n² (instead of n(n+1)/2 in the real case) with inner product ⟨H_1, H_2⟩ = ℜ{Tr(H_1*H_2)}, so that occurrences of p(p+1)/2 are replaced by p². Certain concrete examples of SDPs include: min_X ⟨C, X⟩ s.t. Tr(X) = 1, X ⪰ 0; (fixed trace) min_X ⟨C, X⟩ s.t. diag(X) = 1, X ⪰ 0; (fixed diagonal) min_X ⟨C, X⟩ s.t. X_ii = I_d, X ⪰ 0. (fixed diagonal blocks) Their rank-constrained counterparts read as follows (matrix norms are Frobenius norms): min_{Y : n×p} ⟨CY, Y⟩ s.t. ∥Y∥ = 1; (sphere) min_{Y : n×p} ⟨CY, Y⟩ s.t. Y⊤ = [y_1 · · · y_n] and ∥y_i∥ = 1 for all i; (product of spheres) min_{Y : qd×p} ⟨CY, Y⟩ s.t. Y⊤ = [Y_1 · · · Y_q] and Y_i⊤Y_i = I_d for all i. (product of Stiefel) The first example has only one constraint: the SDP always admits an optimal rank 1 solution, corresponding to an eigenvector associated to the left-most eigenvalue of C. This generalizes to the trust-region subproblem as well. For the second example, in the real case, p = 1 forces y_i = ±1, allowing one to capture combinatorial problems such as Max-Cut [Goemans and Williamson, 1995], Z2-synchronization [Javanmard et al., 2016] and community detection in the stochastic block model [Abbe et al., 2016, Bandeira et al., 2016a]. The same SDP is central in a formulation of robust PCA [McCoy and Tropp, 2011] and is used to approximate the cut-norm of a matrix [Alon and Naor, 2006]. Theorem 2 states that for almost all C, p = ⌈√(2n)⌉ is sufficient. In the complex case, p = 1 forces |y_i| = 1, allowing one to capture problems where phases must be recovered; in particular, phase synchronization [Bandeira et al., 2017, Singer, 2011] and phase retrieval via Phase-Cut [Waldspurger et al., 2015]. For almost all C, it is then sufficient to set p = ⌊√n + 1⌋. In the third example, Y of size n × p is divided in q slices of size d × p, with p ≥ d. Each slice has orthonormal rows.
For p = d, the slices are orthogonal (or unitary) matrices, allowing one to capture Orthogonal-Cut [Bandeira et al., 2016b] and the related problems of synchronization of rotations [Wang and Singer, 2013] and permutations. Synchronization of rotations is an important step in simultaneous localization and mapping, for example. Here, it is sufficient for almost all C to let p = ⌈√(d(d+1)q)⌉. SDPs with constraints that are combinations of the above examples can also have the smoothness property; the right-hand sides 1 and I_d can be replaced by any positive definite right-hand sides by a change of variables. Another simple rule to check is if the constraint matrices A_1, …, A_m ∈ S^{n×n} such that A(X)_i = ⟨A_i, X⟩ satisfy A_iA_j = 0 for all i ≠ j (note that this is stronger than requiring ⟨A_i, A_j⟩ = 0), see [Journée et al., 2010]. 5 Conclusions The Burer–Monteiro approach consists in replacing optimization of a linear function ⟨C, X⟩ over the convex set {X ⪰ 0 : A(X) = b} with optimization of the quadratic function ⟨CY, Y⟩ over the non-convex set {Y ∈ R^{n×p} : A(YY⊤) = b}. It was previously known that, if the convex set is compact and p satisfies p(p+1)/2 ≥ m where m is the number of constraints, then these two problems have the same global optimum. It was also known from [Burer and Monteiro, 2005] that spurious local optima Y, if they exist, must map to special faces of the compact convex set, but without statement as to the prevalence of such faces or the risk they pose for local optimization methods. In this paper we showed that, if the set of X’s is compact and the set of Y’s is a regularly-defined smooth manifold, and if p(p+1)/2 > m, then for almost all C, the non-convexity of the problem in Y is benign, in that all Y’s which satisfy second-order necessary optimality conditions are in fact globally optimal.
We further reference the Riemannian trust-region method [Absil et al., 2007] to solve the problem in Y, as it was recently guaranteed to converge from any starting point to a point which satisfies second-order optimality conditions, with global convergence rates [Boumal et al., 2016]. In addition, for p = n + 1, we guarantee that approximate satisfaction of second-order conditions implies approximate global optimality. We note that the 1/ε³ convergence rate in our results may be pessimistic. Indeed, the numerical experiments clearly show that high accuracy solutions can be computed fast using optimization on manifolds, at least for certain applications. A broader class of SDPs, such as those with inequality constraints or equality constraints that may violate our smoothness assumptions, could perhaps be handled by penalizing those constraints in the objective in an augmented Lagrangian fashion. We also note that, algorithmically, the Riemannian trust-region method we use applies just as well to nonlinear costs in the SDP. We believe that extending the theory presented here to broader classes of problems is a good direction for future work. Acknowledgment VV was partially supported by the Office of Naval Research. ASB was supported by NSF Grant DMS-1317308. Part of this work was done while ASB was with the Department of Mathematics at the Massachusetts Institute of Technology. We thank Wotao Yin and Michel Goemans for helpful discussions. References E. Abbe, A.S. Bandeira, and G. Hall. Exact recovery in the stochastic block model. Information Theory, IEEE Transactions on, 62(1):471–487, 2016. P.-A. Absil, C. G. Baker, and K. A. Gallivan. Trust-region methods on Riemannian manifolds. Foundations of Computational Mathematics, 7(3):303–330, 2007. doi:10.1007/s10208-005-0179-9. P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, Princeton, NJ, 2008. ISBN 978-0-691-13298-3. N. Alon and A. Naor.
Approximating the cut-norm via Grothendieck’s inequality. SIAM Journal on Computing, 35(4):787–803, 2006. doi:10.1137/S0097539704441629. R. Andreani, C. E. Echagüe, and M. L. Schuverdt. Constant-rank condition and second-order constraint qualification. Journal of Optimization Theory and Applications, 146(2):255–266, 2010. doi:10.1007/s10957-010-9671-8. A.S. Bandeira, N. Boumal, and V. Voroninski. On the low-rank approach for semidefinite programs arising in synchronization and community detection. In Proceedings of The 29th Conference on Learning Theory, COLT 2016, New York, NY, June 23–26, 2016a. A.S. Bandeira, C. Kennedy, and A. Singer. Approximating the little Grothendieck problem over the orthogonal and unitary groups. Mathematical Programming, pages 1–43, 2016b. doi:10.1007/s10107-016-0993-7. A.S. Bandeira, N. Boumal, and A. Singer. Tightness of the maximum likelihood semidefinite relaxation for angular synchronization. Mathematical Programming, 163(1):145–167, 2017. doi:10.1007/s10107-016-1059-6. A.I. Barvinok. Problems of distance geometry and convex properties of quadratic maps. Discrete & Computational Geometry, 13(1):189–202, 1995. doi:10.1007/BF02574037. S. Bhojanapalli, A. Kyrillidis, and S. Sanghavi. Dropping convexity for faster semi-definite optimization. Conference on Learning Theory (COLT), 2016a. S. Bhojanapalli, B. Neyshabur, and N. Srebro. Global optimality of local search for low rank matrix recovery. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 3873–3881. Curran Associates, Inc., 2016b. N. Boumal. A Riemannian low-rank method for optimization over semidefinite matrices with block-diagonal constraints. arXiv preprint arXiv:1506.00575, 2015. N. Boumal, B. Mishra, P.-A. Absil, and R. Sepulchre. Manopt, a Matlab toolbox for optimization on manifolds. Journal of Machine Learning Research, 15:1455–1459, 2014. URL http://www.manopt.org. N. Boumal, P.-A.
Absil, and C. Cartis. Global rates of convergence for nonconvex optimization on manifolds. arXiv preprint arXiv:1605.08101, 2016. S. Burer and R.D.C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003. doi:10.1007/s10107-002-0352-8. S. Burer and R.D.C. Monteiro. Local minima and convergence in low-rank semidefinite programming. Mathematical Programming, 103(3):427–444, 2005. CVX. CVX: Matlab software for disciplined convex programming. http://cvxr.com/cvx, August 2012. R. Ge, J.D. Lee, and T. Ma. Matrix completion has no spurious local minimum. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 2973–2981. Curran Associates, Inc., 2016. M.X. Goemans and D.P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM), 42(6):1115–1145, 1995. doi:10.1145/227683.227684. C. Helmberg, F. Rendl, R.J. Vanderbei, and H. Wolkowicz. An interior-point method for semidefinite programming. SIAM Journal on Optimization, 6(2):342–361, 1996. doi:10.1137/0806020. A. Javanmard, A. Montanari, and F. Ricci-Tersenghi. Phase transitions in semidefinite relaxations. Proceedings of the National Academy of Sciences, 113(16):E2218–E2223, 2016. M. Journée, F. Bach, P.-A. Absil, and R. Sepulchre. Low-rank optimization on the cone of positive semidefinite matrices. SIAM Journal on Optimization, 20(5):2327–2351, 2010. doi:10.1137/080731359. M. Laurent and S. Poljak. On the facial structure of the set of correlation matrices. SIAM Journal on Matrix Analysis and Applications, 17(3):530–547, 1996. doi:10.1137/0617031. J.M. Lee. Introduction to Smooth Manifolds, volume 218 of Graduate Texts in Mathematics. Springer-Verlag New York, 2 edition, 2012. ISBN 978-1-4419-9981-8. doi:10.1007/978-1-4419-9982-5. M. McCoy and J.A. Tropp.
Two proposals for robust PCA using semidefinite programming. Electronic Journal of Statistics, 5:1123–1160, 2011. doi:10.1214/11-EJS636. Y. Nesterov. Introductory lectures on convex optimization: A basic course, volume 87 of Applied optimization. Springer, 2004. ISBN 978-1-4020-7553-7. G. Pataki. On the rank of extreme matrices in semidefinite programs and the multiplicity of optimal eigenvalues. Mathematics of operations research, 23(2):339–358, 1998. doi:10.1287/moor.23.2.339. R.T. Rockafellar. Convex analysis. Princeton University Press, Princeton, NJ, 1970. A.P. Ruszczyński. Nonlinear optimization. Princeton University Press, Princeton, NJ, 2006. S. Shah, A. Kumar, D. Jacobs, C. Studer, and T. Goldstein. Biconvex relaxation for semidefinite programming in computer vision. arXiv preprint arXiv:1605.09527, 2016. A. Singer. Angular synchronization by eigenvectors and semidefinite programming. Applied and Computational Harmonic Analysis, 30(1):20–36, 2011. doi:10.1016/j.acha.2010.02.001. K.C. Toh, M.J. Todd, and R.H. Tütüncü. SDPT3–a MATLAB software package for semidefinite programming. Optimization Methods and Software, 11(1–4):545–581, 1999. doi:10.1080/10556789908805762. S.A. Vavasis. Nonlinear optimization: complexity issues. Oxford University Press, Inc., 1991. I. Waldspurger, A. d’Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1–2):47–81, 2015. doi:10.1007/s10107-013-0738-9. L. Wang and A. Singer. Exact and stable recovery of rotations for robust synchronization. Information and Inference, 2(2):145–193, 2013. doi:10.1093/imaiai/iat005. Z. Wen and W. Yin. A feasible method for optimization with orthogonality constraints. Mathematical Programming, 142(1–2):397–434, 2013. doi:10.1007/s10107-012-0584-1. W.H. Yang, L.-H. Zhang, and R. Song. Optimality conditions for the nonlinear programming problems on Riemannian manifolds. Pacific Journal of Optimization, 10(2):415–434, 2014.
2016
149
6,049
Exponential Family Embeddings Maja Rudolph Columbia University Francisco J. R. Ruiz Univ. of Cambridge Columbia University Stephan Mandt Columbia University David M. Blei Columbia University Abstract Word embeddings are a powerful approach for capturing semantic similarity among terms in a vocabulary. In this paper, we develop exponential family embeddings, a class of methods that extends the idea of word embeddings to other types of high-dimensional data. As examples, we studied neural data with real-valued observations, count data from a market basket analysis, and ratings data from a movie recommendation system. The main idea is to model each observation conditioned on a set of other observations. This set is called the context, and the way the context is defined is a modeling choice that depends on the problem. In language the context is the surrounding words; in neuroscience the context is close-by neurons; in market basket data the context is other items in the shopping cart. Each type of embedding model defines the context, the exponential family of conditional distributions, and how the latent embedding vectors are shared across data. We infer the embeddings with a scalable algorithm based on stochastic gradient descent. On all three applications—neural activity of zebrafish, users’ shopping behavior, and movie ratings—we found exponential family embedding models to be more effective than other types of dimension reduction. They better reconstruct held-out data and find interesting qualitative structure. 1 Introduction Word embeddings are a powerful approach for analyzing language (Bengio et al., 2006; Mikolov et al., 2013a,b; Pennington et al., 2014). A word embedding method discovers distributed representations of words; these representations capture the semantic similarity between the words and reflect a variety of other linguistic regularities (Rumelhart et al., 1986; Bengio et al., 2006; Mikolov et al., 2013c). 
Fitted word embeddings can help us understand the structure of language and are useful for downstream tasks based on text. There are many variants, adaptations, and extensions of word embeddings (Mikolov et al., 2013a,b; Mnih and Kavukcuoglu, 2013; Levy and Goldberg, 2014; Pennington et al., 2014; Vilnis and McCallum, 2015), but each reflects the same main ideas. Each term in a vocabulary is associated with two latent vectors, an embedding and a context vector. These two types of vectors govern conditional probabilities that relate each word to its surrounding context. Specifically, the conditional probability of a word combines its embedding and the context vectors of its surrounding words. (Different methods combine them differently.) Given a corpus, we fit the embeddings by maximizing the conditional probabilities of the observed text.

In this paper we develop the exponential family embedding (ef-emb), a class of models that generalizes the spirit of word embeddings to other types of high-dimensional data. Our motivation is that other types of data can benefit from the same assumptions that underlie word embeddings, namely that a data point is governed by the other data in its context. In language, this is the foundational idea that words with similar meanings will appear in similar contexts (Harris, 1954). We use the tools of exponential families (Brown, 1986) and generalized linear models (glms) (McCullagh and Nelder, 1989) to adapt this idea beyond language.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

As one example beyond language, we will study computational neuroscience. Neuroscientists measure sequential neural activity across many neurons in the brain. Their goal is to discover patterns in these data with the hope of better understanding the dynamics and connections among neurons. In this example, a context can be defined as the neural activities of other nearby neurons, or as neural activity in the past.
Thus, it is plausible that the activity of each neuron depends on its context. We will use this idea to fit latent embeddings of neurons, representations of neurons that uncover hidden features which help suggest their roles in the brain. Another example we study involves shoppers at the grocery store. Economists collect shopping data (called “market basket data”) and are interested in building models of purchase behavior for downstream econometric analysis, e.g., to predict demand and market changes. To build such models, they seek features of items that are predictive of when they are purchased and in what quantity. Similar to language, purchasing an item depends on its context, i.e., the other items in the shopping cart. In market basket data, Poisson embeddings can capture important econometric concepts, such as items that tend not to occur together but occur in the same contexts (substitutes) and items that co-occur, but never one without the other (complements). We define an ef-emb, such as one for neuroscience or shopping data, with three ingredients. (1) We define the context, which specifies which other data points each observation depends on. (2) We define the conditional exponential family. This involves setting the appropriate distribution, such as a Gaussian for real-valued data or a Poisson for count data, and the way to combine embeddings and context vectors to form its natural parameter. (3) We define the embedding structure, how embeddings and context vectors are shared across the conditional distributions of each observation. These three ingredients enable a variety of embedding models. We describe ef-emb models and develop efficient algorithms for fitting them. We show how existing methods, such as continuous bag of words (cbow) (Mikolov et al., 2013a) and negative sampling (Mikolov et al., 2013b), can each be viewed as an ef-emb. We study our methods on three different types of data—neuroscience data, shopping data, and movie ratings data. 
Mirroring the success of word embeddings, ef-emb models outperform traditional dimension reduction, such as exponential family principal component analysis (pca) (Collins et al., 2001) and Poisson factorization (Gopalan et al., 2015), and find interpretable features of the data.

Related work. ef-emb models generalize cbow (Mikolov et al., 2013a) in the same way that exponential family pca (Collins et al., 2001) generalizes pca, glms (McCullagh and Nelder, 1989) generalize regression, and deep exponential families (Ranganath et al., 2015) generalize sigmoid belief networks (Neal, 1990). A linear ef-emb (which we define precisely below) relates to context-window-based embedding methods such as cbow or the vector log-bilinear language model (vlbl) (Mikolov et al., 2013a; Mnih and Kavukcuoglu, 2013), which model a word given its context. The more general ef-emb relates to embeddings with a nonlinear component, such as the skip-gram (Mikolov et al., 2013a) or the inverse vector log-bilinear language model (ivlbl) (Mnih and Kavukcuoglu, 2013). (These methods might appear linear but, when viewed as a conditional probabilistic model, the normalizing constant of each word induces a nonlinearity.) Researchers have developed different approximations of the word embedding objective to scale the procedure. These include noise contrastive estimation (Gutmann and Hyvärinen, 2010; Mnih and Teh, 2012), hierarchical softmax (Mikolov et al., 2013b), and negative sampling (Mikolov et al., 2013a). We explain in Section 2.2 and Supplement A how negative sampling corresponds to biased stochastic gradients of an ef-emb objective.

2 Exponential Family Embeddings

We consider a matrix $x = x_{1:I}$ of $I$ observations, where each $x_i$ is a $D$-vector. As one example, in language $x_i$ is an indicator vector for the word at position $i$ and $D$ is the size of the vocabulary.
As another example, in neural data $x_i$ is the neural activity measured at index pair $i = (n, t)$, where $n$ indexes a neuron and $t$ indexes a time point; each measurement is a scalar ($D = 1$).

The goal of an exponential family embedding (ef-emb) is to derive useful features of the data. There are three ingredients: a context function, a conditional exponential family, and an embedding structure. These ingredients work together to form the objective. First, the ef-emb models each data point conditional on its context; the context function determines which other data points are at play. Second, the conditional distribution is an appropriate exponential family, e.g., a Gaussian for real-valued data. Its parameter is a function of the embeddings of both the data point and its context. Finally, the embedding structure determines which embeddings are used when the $i$th point appears, either as data or in the context of another point. The objective is the sum of the log probabilities of each data point given its context. We describe each ingredient, followed by the ef-emb objective. Examples are in Section 2.1.

Context. Each data point $i$ has a context $c_i$, which is a set of indices of other data points. The ef-emb models the conditional distribution of $x_i$ given the data points in its context. The context is a modeling choice; different applications will require different types of context. In language, the data point is a word and the context is the set of words in a window around it. In neural data, the data point is the activity of a neuron at a time point and the context is the activity of its surrounding neurons at the same time point. (It can also include neurons at future time or in the past.) In shopping data, the data point is a purchase and the context is the other items in the cart.

Conditional exponential family. An ef-emb models each data point $x_i$ conditional on its context $x_{c_i}$.
The distribution is an appropriate exponential family,
$$x_i \mid x_{c_i} \sim \mathrm{ExpFam}\big(\eta_i(x_{c_i}),\, t(x_i)\big), \qquad (1)$$
where $\eta_i(x_{c_i})$ is the natural parameter and $t(x_i)$ is the sufficient statistic. In language modeling, this family is usually a categorical distribution. Below, we will study Gaussian and Poisson.

We parameterize the conditional with two types of vectors, embeddings and context vectors. The embedding of the $i$th data point helps govern its distribution; we denote it $\rho[i] \in \mathbb{R}^K$. The context vector of the $i$th data point helps govern the distribution of data for which $i$ appears in their context; we denote it $\alpha[i] \in \mathbb{R}^K$.

How to define the natural parameter as a function of these vectors is a modeling choice. It captures how the context interacts with an embedding to determine the conditional distribution of a data point. Here we focus on the linear embedding, where the natural parameter is a function of a linear combination of the latent vectors,
$$\eta_i(x_{c_i}) = f_i\Big(\rho[i]^\top \sum_{j \in c_i} \alpha[j]\, x_j\Big). \qquad (2)$$
Following the nomenclature of generalized linear models (glms), we call $f_i(\cdot)$ the link function. We will see several examples of link functions in Section 2.1. This is the setting of many existing word embedding models, though not all. Other models, such as the skip-gram, determine the probability through a "reverse" distribution of context words given the data point. These non-linear embeddings are still instances of an ef-emb.

Embedding structure. The goal of an ef-emb is to find embeddings and context vectors that describe features of the data. The embedding structure determines how an ef-emb shares these vectors across the data. It is through sharing the vectors that we learn an embedding for the object of primary interest, such as a vocabulary term, a neuron, or a supermarket product. In language the same parameters $\rho[i] = \rho$ and $\alpha[i] = \alpha$ are shared across all positions $i$. In neural data, observations share parameters when they describe the same neuron.
Recall that the index connects to both a neuron and time point, $i = (n, t)$. We share parameters with $\rho[i] = \rho_n$ and $\alpha[i] = \alpha_n$ to find embeddings and context vectors that describe the neurons. Other variants might tie the embedding and context vectors to find a single set of latent variables, $\rho[i] = \alpha[i]$.

The objective function. The ef-emb objective sums the log conditional probabilities of each data point, adding regularizers for the embeddings and context vectors.¹ We use log probability functions as regularizers, e.g., a Gaussian probability leads to $\ell_2$ regularization. We also use regularizers to constrain the embeddings, e.g., to be non-negative. Thus, the objective is
$$\mathcal{L}(\rho, \alpha) = \sum_{i=1}^{I} \Big( \eta_i^\top t(x_i) - a(\eta_i) \Big) + \log p(\rho) + \log p(\alpha). \qquad (3)$$

¹One might be tempted to see this as a probabilistic model that is conditionally specified. However, in general it does not have a consistent joint distribution (Arnold et al., 2001).

We maximize this objective with respect to the embeddings and context vectors. In Section 2.2 we explain how to fit it with stochastic gradients.

Equation (3) can be seen as a likelihood function for a bank of glms (McCullagh and Nelder, 1989). Each data point is modeled as a response conditional on its "covariates," which combine the context vectors and context, e.g., as in Equation (2); the coefficient for each response is the embedding itself. We use properties of exponential families and results around glms to derive efficient algorithms for ef-emb models.

2.1 Examples

We highlight the versatility of ef-emb models with three example models and their variations. We develop the Gaussian embedding (g-emb) for analyzing real observations from a neuroscience application; we also introduce a nonnegative version, the nonnegative Gaussian embedding (ng-emb). We develop two Poisson embedding models, Poisson embedding (p-emb) and additive Poisson embedding (ap-emb), for analyzing count data; these have different link functions.
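To make Equations (2) and (3) concrete, here is a minimal sketch of the linear Gaussian case (identity link, unit variance), where $t(x) = x$ and $a(\eta) = \eta^2/2$. All names (`rho`, `alpha`, `context`, the toy data) are ours, not the paper's; a real implementation would use minibatched stochastic gradients rather than evaluating the full sum.

```python
# Sketch of the linear Gaussian EF-EMB objective (Equations (2)-(3)).
# Assumption: identity link, unit-variance Gaussian, one K-vector per point.

def natural_param(i, x, rho, alpha, context):
    # Equation (2): eta_i = rho[i]^T sum_{j in c_i} alpha[j] * x_j
    K = len(rho[i])
    s = [0.0] * K
    for j in context[i]:
        for k in range(K):
            s[k] += alpha[j][k] * x[j]
    return sum(rho[i][k] * s[k] for k in range(K))

def objective(x, rho, alpha, context, lam=0.1):
    # Gaussian case of Equation (3): t(x) = x and a(eta) = eta^2 / 2,
    # so each data term is eta_i * x_i - eta_i^2 / 2 (up to a constant).
    L = 0.0
    for i in range(len(x)):
        eta = natural_param(i, x, rho, alpha, context)
        L += eta * x[i] - 0.5 * eta * eta
    # Gaussian priors on the latent vectors act as l2 regularizers.
    L -= lam * sum(v * v for vec in rho + alpha for v in vec)
    return L

# Toy data: 3 observations, K = 2, each point's context is the other points.
x = [1.0, -0.5, 0.3]
context = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
rho = [[0.1, 0.0], [0.0, 0.1], [0.1, 0.1]]
alpha = [[0.2, 0.1], [0.1, 0.2], [0.0, 0.1]]
print(objective(x, rho, alpha, context))
```

Maximizing this quantity over `rho` and `alpha` (e.g., by gradient ascent) recovers the fitting procedure described in Section 2.2.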
We present a categorical embedding model that corresponds to the continuous bag of words (cbow) word embedding (Mikolov et al., 2013a). Finally, we present a Bernoulli embedding (b-emb) for binary data. In Section 2.2 we explain how negative sampling (Mikolov et al., 2013b) corresponds to biased stochastic gradients of the b-emb objective. For convenience, these acronyms are in Table 1.

| ef-emb | exponential family embedding |
| g-emb  | Gaussian embedding |
| ng-emb | nonnegative Gaussian embedding |
| p-emb  | Poisson embedding |
| ap-emb | additive Poisson embedding |
| b-emb  | Bernoulli embedding |

Table 1: Acronyms used for exponential family embeddings.

Example 1: Neural data and Gaussian observations. Consider the (calcium) expression of a large population of zebrafish neurons (Ahrens et al., 2013). The data are processed to extract the locations of the $N$ neurons and the neural activity $x_i = x_{(n,t)}$ across location $n$ and time $t$. The goal is to model the similarity between neurons in terms of their behavior, to embed each neuron in a latent space such that neurons with similar behavior are close to each other.

We consider two neurons similar if they behave similarly in the context of the activity pattern of their surrounding neurons. Thus we define the context for data index $i = (n, t)$ to be the indices of the activity of nearby neurons at the same time. We find the $K$-nearest neighbors (knn) of each neuron (using a Ball-tree algorithm) according to their spatial distance in the brain. We use this set to construct the context $c_i = c_{(n,t)} = \{(m, t) \mid m \in \mathrm{knn}(n)\}$. This context varies with each neuron, but is constant over time.

With the context defined, each data point $x_i$ is modeled with a conditional Gaussian. The conditional mean is the inner product from Equation (2), where the context is the simultaneous activity of the nearest neurons and the link function is the identity. The conditionals of two observations share parameters if they correspond to the same neuron.
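The context construction for this neural example can be sketched as follows. We use brute-force distances for clarity where the paper uses a Ball-tree; the function names and toy positions are ours.

```python
import math

# Sketch of the neural context: the context of measurement (n, t) is the
# activity of the k spatially nearest neurons at the same time t.

def knn(positions, k):
    """Return, for each neuron n, the indices of its k nearest neighbors."""
    out = []
    for n, p in enumerate(positions):
        dists = sorted(
            (math.dist(p, q), m) for m, q in enumerate(positions) if m != n
        )
        out.append([m for _, m in dists[:k]])
    return out

def context(n, t, neighbors):
    # c_{(n,t)} = {(m, t) : m in knn(n)} -- varies per neuron, constant in time.
    return [(m, t) for m in neighbors[n]]

# Toy 2-D neuron positions; real data would use 3-D brain coordinates.
positions = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
nb = knn(positions, k=2)
print(context(0, 7, nb))  # the two neurons closest to neuron 0, at time 7
```

Because the neighbor sets depend only on spatial positions, they are computed once and reused for every time frame.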
The embedding structure is thus $\rho[i] = \rho_n$ and $\alpha[i] = \alpha_n$ for all $i = (n, t)$. Similar to word embeddings, each neuron has two distinct latent vectors: the neuron embedding $\rho_n \in \mathbb{R}^K$ and the context vector $\alpha_n \in \mathbb{R}^K$.

These ingredients, along with a regularizer, combine to form a neural embedding objective. g-emb uses $\ell_2$ regularization (i.e., a Gaussian prior); ng-emb constrains the vectors to be nonnegative ($\ell_2$ regularization on the logarithm, i.e., a log-normal prior).

Example 2: Shopping data and Poisson observations. We also study data about people shopping. The data contains the individual purchases of anonymous users in chain grocery and drug stores. There are $N$ different items and $T$ trips to the stores among all households. The data is a sparse $N \times T$ matrix of purchase counts. The entry $x_i = x_{(n,t)}$ indicates the number of units of item $n$ that was purchased on trip $t$. Our goal is to learn a latent representation for each product that captures the similarity between them.

We consider items to be similar if they tend to be purchased with similar groups of other items. The context for observation $x_i$ is thus the other items in the shopping basket on the same trip. For the purchase count at index $i = (n, t)$, the context is $c_i = \{j = (m, t) \mid m \neq n\}$.

We use conditional Poisson distributions to model the count data. The sufficient statistic of the Poisson is $t(x_i) = x_i$, and its natural parameter is the logarithm of the rate (i.e., the mean). We set the natural parameter as in Equation (2), with the link function defined below. The embedding structure is the same as in g-emb, producing embeddings for the items.

We explore two choices for the link function. p-emb uses an identity link function. Since the conditional mean is the exponentiated natural parameter, this implies that the context items contribute multiplicatively to the mean. (We use $\ell_2$-regularization on the embeddings.)
Alternatively, we can constrain the parameters to be nonnegative and set the link function $f(\cdot) = \log(\cdot)$. This is ap-emb, a model with an additive mean parameterization. (We use $\ell_2$-regularization in log-space.) ap-emb only captures positive correlations between items.

Example 3: Text modeling and categorical observations. ef-embs are inspired by word embeddings, such as cbow (Mikolov et al., 2013a). cbow is a special case of an ef-emb; it is equivalent to a multivariate ef-emb with categorical conditionals. In the notation here, each $x_i$ is an indicator vector of the $i$th word. Its dimension is the vocabulary size. The context of the $i$th word are the other words in a window around it (of size $w$), $c_i = \{j \neq i \mid i - w \leq j \leq i + w\}$. The distribution of $x_i$ is categorical, conditioned on the surrounding words $x_{c_i}$; this is a softmax regression. It has natural parameter as in Equation (2) with an identity link function. The embedding structure imposes that parameters are shared across all observed words. The embeddings are shared globally ($\rho[i] = \rho$, $\alpha[i] = \alpha \in \mathbb{R}^{N \times K}$). The word and context embedding of the $n$th word is the $n$th row of $\rho$ and $\alpha$ respectively. cbow does not use any regularizer.

Example 4: Text modeling and binary observations. One way to simplify the cbow objective is with a model of each entry of the indicator vectors. The data are binary and indexed by $i = (n, v)$, where $n$ is the position in the text and $v$ indexes the vocabulary; the variable $x_{n,v}$ is the indicator that word $n$ is equal to term $v$. (This model relaxes the constraint that for any $n$ only one $x_{n,v}$ will be on.) With this notation, the context is $c_i = \{(j, v') \mid \forall v',\; j \neq n,\; n - w \leq j \leq n + w\}$; the embedding structure is $\rho[i] = \rho[(n, v)] = \rho_v$ and $\alpha[i] = \alpha[(n, v)] = \alpha_v$. We can consider different conditional distributions in this setting. As one example, set the conditional distribution to be a Bernoulli with an identity link; we call this the b-emb model for text.
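Returning to Example 2, the contrast between the two Poisson link choices can be sketched directly: with the identity link (p-emb) the Poisson mean is the exponentiated linear combination, so context items act multiplicatively; with $f(\cdot) = \log(\cdot)$ and nonnegative vectors (ap-emb) the mean is the linear combination itself, so they act additively. The function names and toy numbers below are ours.

```python
import math

# Sketch contrasting the p-emb (identity link, multiplicative mean) and
# ap-emb (log link, additive mean) parameterizations from Example 2.

def linear_combo(rho_n, alpha, basket):
    # The inner linear combination of Equation (2): rho_n^T sum_m alpha_m * x_m
    K = len(rho_n)
    s = [0.0] * K
    for m, count in basket:
        for k in range(K):
            s[k] += alpha[m][k] * count
    return sum(rho_n[k] * s[k] for k in range(K))

def pemb_mean(rho_n, alpha, basket):
    # identity link: eta = combo, Poisson mean = exp(eta)  -> multiplicative
    return math.exp(linear_combo(rho_n, alpha, basket))

def apemb_mean(rho_n, alpha, basket):
    # log link with nonnegative vectors: eta = log(combo), mean = combo -> additive
    return linear_combo(rho_n, alpha, basket)

alpha = [[0.5, 0.1], [0.2, 0.3]]   # context vectors for two items (toy values)
basket = [(0, 1), (1, 2)]          # (item index, purchase count) pairs
print(pemb_mean([0.4, 0.2], alpha, basket))
print(apemb_mean([0.4, 0.2], alpha, basket))
```

Because the additive mean must stay nonnegative, ap-emb can only capture positive correlations between items, as noted above.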
In Section 2.2 we show that biased stochastic gradients of the b-emb objective recover negative sampling (Mikolov et al., 2013b). As another example, set the conditional distribution to Poisson with link $f(\cdot) = \log(\cdot)$. The corresponding embedding model relates closely to Poisson approximations of distributed multinomial regression (Taddy et al., 2015).

2.2 Inference and Connection to Negative Sampling

We fit the embeddings $\rho[i]$ and context vectors $\alpha[i]$ by maximizing the objective function in Equation (3). We use stochastic gradient descent (sgd) with Adagrad (Duchi et al., 2011). We can derive the analytic gradient of the objective function using properties of the exponential family (see the Supplement for details). The gradients linearly combine the data in summations we can approximate using subsampled minibatches of data. This reduces the computational cost.

When the data is sparse, we can split the gradient into the summation of two terms: one term corresponding to all data entries $i$ for which $x_i \neq 0$, and one term corresponding to those data entries $x_i = 0$. We compute the first term of the gradient exactly—when the data is sparse there are not many summations to make—and we estimate the second term by subsampling the zero entries. Compared to computing the full gradient, this reduces the complexity when most of the entries $x_i$ are zero. But it retains the strong information about the gradient that comes from the non-zero entries.

This relates to negative sampling, which is used to approximate the skip-gram objective (Mikolov et al., 2013b). Negative sampling re-defines the skip-gram objective to distinguish target (observed) words from randomly drawn words, using logistic regression. The gradient of the stochastic objective is identical to a noisy but biased estimate of the gradient for a b-emb model. To obtain the equivalence, preserve the terms for the non-zero data and subsample terms for the zero data.
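The split-and-subsample trick above can be sketched generically. The per-entry gradient is stood in for by an arbitrary function; rescaling the subsampled zero term gives an unbiased estimate, while dropping the rescaling (as negative sampling effectively does) down-weights the zeros. Names and toy data are ours.

```python
import random

# Sketch of the sparse-gradient estimate: sum the (few) nonzero entries
# exactly, and estimate the zero entries' contribution by subsampling.

def split_gradient_sum(x, per_entry_grad, n_samples, rescale=True, rng=random):
    nonzero = [i for i, v in enumerate(x) if v != 0]
    zero = [i for i, v in enumerate(x) if v == 0]
    total = sum(per_entry_grad(i) for i in nonzero)      # exact part
    if zero and n_samples > 0:
        sampled = [rng.choice(zero) for _ in range(n_samples)]
        est = sum(per_entry_grad(i) for i in sampled)
        if rescale:                                      # unbiased estimate
            est *= len(zero) / n_samples
        total += est                                     # no rescale -> biased
    return total

x = [3, 0, 0, 1, 0, 0, 0, 2]       # mostly-zero toy data
grad = lambda i: float(i + 1)      # stand-in for a per-entry gradient term
print(split_gradient_sum(x, grad, n_samples=2, rng=random.Random(0)))
```

With `rescale=True` the estimate is unbiased for the full sum; with `rescale=False` it corresponds to the biased, zero-down-weighting estimate discussed in the text.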
While an unbiased stochastic gradient would rescale the subsampled terms, negative sampling does not. Thus, negative sampling corresponds to a biased estimate, which down-weights the contribution of the zeros. See the Supplement for the mathematical details.

3 Empirical Study

We study exponential family embedding (ef-emb) models on real-valued and count-valued data, and in different application domains—computational neuroscience, shopping behavior, and movie ratings. We present quantitative comparisons to other dimension reduction methods and illustrate how we can glean qualitative insights from the fitted embeddings.

3.1 Real Valued Data: Neural Data Analysis

Data. We analyze the neural activity of a larval zebrafish, recorded at single cell resolution for 3000 time frames (Ahrens et al., 2013). Through genetic modification, individual neurons express a calcium indicator when they fire. The resulting calcium imaging data is preprocessed by a nonnegative matrix factorization to identify neurons, their locations, and the fluorescence activity $x_t \in \mathbb{R}^N$ of the individual neurons over time (Friedrich et al., 2015). Using this method, our data contains 10,000 neurons (out of a total of 200,000).

| Model         | single neuron: K = 10 | single neuron: K = 100 | 25% held out: K = 10 | 25% held out: K = 100 |
| fa            | 0.290 ± 0.003 | 0.275 ± 0.003 | 0.290 ± 0.003 | 0.276 ± 0.003 |
| g-emb (c=10)  | 0.239 ± 0.006 | 0.239 ± 0.005 | 0.246 ± 0.004 | 0.245 ± 0.003 |
| g-emb (c=50)  | 0.227 ± 0.002 | 0.222 ± 0.002 | 0.235 ± 0.003 | 0.232 ± 0.003 |
| ng-emb (c=10) | 0.263 ± 0.004 | 0.261 ± 0.004 | 0.250 ± 0.004 | 0.261 ± 0.004 |

Table 2: Analysis of neural data: mean squared error and standard errors of neural activity (on the test set) for different models. Both ef-emb models significantly outperform fa; g-emb is more accurate than ng-emb.
We fit all models on the lagged data $\delta x_t = x_t - x_{t-1}$ to filter out correlations based on calcium decay and preprocessing.² The calcium levels can be measured with great spatial resolution but the temporal resolution is poor; the neuronal firing rate is much higher than the sampling rate. Hence we ignore all "temporal structure" in the data and model the simultaneous activity of the neurons. We use the Gaussian embedding (g-emb) and nonnegative Gaussian embedding (ng-emb) from Section 2.1 to model the lagged activity of the neurons conditional on the lags of surrounding neurons. We study context sizes $c \in \{10, 50\}$ and latent dimension $K \in \{10, 100\}$.

Models. We compare ef-emb to probabilistic factor analysis (fa), fitting $K$-dimensional factors for each neuron and $K$-dimensional factor loadings for each time frame. In fa, each entry of the data matrix is Gaussian distributed, with mean equal to the inner product of the corresponding factor and factor loading.

Evaluation. We train each model on a random sample of 90% of the lagged time frames and hold out 5% each for validation and testing. With the test set, we use two types of evaluation. (1) Leave one out: For each neuron $x_i$ in the test set, we use the measurements of the other neurons to form predictions. For fa this means the other neurons are used to recover the factor loadings; for ef-emb this means the other neurons are used to construct the context. (2) Leave 25% out: We randomly split the neurons into 4 folds. Each neuron is predicted using the three sets of neurons that are out of its fold. (This is a more difficult task.) Note in ef-emb, the missing data might change the size of the context of some neurons. See Table 5 in Supplement C for the choice of hyperparameters.

Results. Table 2 reports both types of evaluation. The ef-emb models significantly outperform fa in terms of mean squared error on the test set. g-emb obtains the best results with 100 components and a context size of 50.
Figure 1 illustrates how to use the learned embeddings to hypothesize connections between nearby neurons.

²We also analyzed unlagged data but all methods resulted in better reconstruction on the lagged data.

Figure 1: Top view of the zebrafish brain, with blue circles at the location of the individual neurons. We zoom on 3 neurons and their 50 nearest neighbors (small blue dots), visualizing the "synaptic weights" learned by a g-emb model ($K = 100$). The edge color encodes the inner product of the neural embedding vector and the context vectors $\rho_n^\top \alpha_m$ for each neighbor $m$. Positive values are green, negative values are red, and the transparency is proportional to the magnitude. With these weights we can hypothesize how nearby neurons interact.

(a) Market basket analysis:

| Model       | K = 20        | K = 100       |
| p-emb       | 7.497 ± 0.007 | 7.199 ± 0.008 |
| p-emb (dw)  | 7.110 ± 0.007 | 6.950 ± 0.007 |
| ap-emb      | 7.868 ± 0.005 | 8.414 ± 0.003 |
| hpf         | 7.740 ± 0.008 | 7.626 ± 0.007 |
| Poisson pca | 8.314 ± 0.009 | 11.01 ± 0.01  |

(b) Movie ratings:

| Model       | K = 20        | K = 100       |
| p-emb       | 5.691 ± 0.006 | 5.726 ± 0.005 |
| p-emb (dw)  | 5.790 ± 0.003 | 5.798 ± 0.003 |
| ap-emb      | 5.964 ± 0.003 | 6.118 ± 0.002 |
| hpf         | 5.787 ± 0.006 | 5.859 ± 0.006 |
| Poisson pca | 5.908 ± 0.006 | 7.50 ± 0.01   |

Table 3: Comparison of predictive log-likelihood between p-emb, ap-emb, hierarchical Poisson factorization (hpf) (Gopalan et al., 2015), and Poisson principal component analysis (pca) (Collins et al., 2001) on held out data. The p-emb model outperforms the matrix factorization models in both applications. For the shopping data, downweighting the zeros improves the performance of p-emb.

3.2 Count Data: Market Basket Analysis and Movie Ratings

We study the Poisson models Poisson embedding (p-emb) and additive Poisson embedding (ap-emb) on two applications: shopping and movies.

Market basket data. We analyze the IRI dataset³ (Bronnenberg et al., 2008), which contains the purchases of anonymous households in chain grocery and drug stores. It contains 137,632 trips in 2012.
We remove items that appear fewer than 10 times, leaving a dataset with 7,903 items. The context for each purchase is the other purchases from the same trip.

MovieLens data. We also analyze the MovieLens-100K dataset (Harper and Konstan, 2015), which contains movie ratings on a scale from 1 to 5. We keep only positive ratings, defined to be ratings of 3 or more (we subtract 2 from all ratings and set the negative ones to 0). The context of each rating is the other movies rated by the same user. After removing users who rated fewer than 20 movies and movies that were rated fewer than 50 times, the dataset contains 777 users and 516 movies; the sparsity is about 5%.

Models. We fit the p-emb and the ap-emb models using number of components $K \in \{20, 100\}$. For each $K$ we select the Adagrad constant based on best predictive performance on the validation set. (The parameters we used are in Table 5.) In these datasets, the distribution of the context size is heavy tailed. To handle larger context sizes we pick a link function for the ef-emb model which rescales the sum over the context in Equation (2) by the context size (the number of terms in the sum). We also fit a p-emb model that artificially downweights the contribution of the zeros in the objective function by a factor of 0.1, as done by Hu et al. (2008) for matrix factorization. We denote it as "p-emb (dw)."

³We thank IRI for making the data available. All estimates and analysis in this paper, based on data provided by IRI, are by the authors and not by IRI.

| Maruchan chicken ramen   | Yoplait strawberry yogurt          | Mountain Dew soda        | Dean Foods 1% milk        |
| M. creamy chicken ramen  | Yoplait apricot mango yogurt       | Mtn. Dew orange soda     | Dean Foods 2% milk        |
| M. oriental flavor ramen | Yoplait strawberry orange smoothie | Mtn. Dew lemon lime soda | Dean Foods whole milk     |
| M. roast chicken ramen   | Yoplait strawberry banana yogurt   | Pepsi classic soda       | Dean Foods chocolate milk |

Table 4: Top 3 similar items to example query items (top row).
The p-emb model successfully captures similarities.

We compare the predictive performance with hpf (Gopalan et al., 2015) and Poisson pca (Collins et al., 2001). Both hpf and Poisson pca factorize the data into $K$-dimensional positive vectors of user preferences, and $K$-dimensional positive vectors of item attributes. ap-emb and hpf parameterize the mean additively; p-emb and Poisson pca parameterize it multiplicatively. For the ef-emb models and Poisson pca, we use stochastic optimization with $\ell_2$ regularization. For hpf, we use variational inference. See Table 5 in Supplement C for details.

Evaluation. For the market basket data we hold out 5% of the trips to form the test set, also removing trips with fewer than two distinct purchased items. In the MovieLens data we hold out 20% of the ratings and set aside an additional 5% of the non-zero entries from the test set for validation. We report prediction performance based on the normalized log-likelihood on the test set. For p-emb and ap-emb, we compute the likelihood as the Poisson mean of each nonnegative count (be it a purchase quantity or a movie rating) divided by the sum of the Poisson means for all items, given the context. To evaluate hpf and Poisson pca at a given test observation, we recover the factor loadings using the other test entries we condition on, and we use the factor loadings to form the prediction.

Predictive performance. Table 3 summarizes the test log-likelihood of the four models, together with the standard errors across entries in the test set. In both applications the p-emb model outperforms hpf and Poisson pca. On shopping data p-emb with $K = 100$ provides the best predictions; on MovieLens p-emb with $K = 20$ is best. For p-emb on shopping data, downweighting the contribution of the zeros gives more accurate estimates.

Item similarity in the shopping data. Embedding models can capture qualitative aspects of the data as well.
Table 4 shows four example products and their three most similar items, where similarity is calculated as the cosine distance between embedding vectors. (These vectors are from p-emb with downweighted zeros and $K = 100$.) For example, the most similar items to a soda are other sodas; the most similar items to a yogurt are (mostly) other yogurts.

The p-emb model can also identify complementary and substitutable products. To see this, we compute the inner products of the embedding and the context vectors for all item pairs. A high value of the inner product indicates that the probability of purchasing one item is increased if the second item is in the shopping basket (i.e., they are complements). A low value indicates the opposite effect, and the items might be substitutes for each other. We find that items that tend to be purchased together have a high value of the inner product (e.g., potato chips and beer, potato chips and frozen pizza, or two different types of soda), while items that are substitutes have a negative value (e.g., two different brands of pasta sauce, similar snacks, or soups from different brands). Other items with a negative value of the inner product are not substitutes, but are rarely purchased together (e.g., toast crunch and laundry detergent, milk and a toothbrush). Supplement D gives examples of substitutes and complements.

Topics in the movie embeddings. The embeddings from MovieLens data identify thematically similar movies. For each latent dimension $k$, we sort the context vectors by the magnitude of the $k$th component. This yields a ranking of movies for each component. In Supplement E we show two example rankings. (These are from a p-emb model with $K = 50$.) The first one contains children's movies; the second contains science-fiction/action movies.
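The two qualitative analyses above, cosine similarity between embedding vectors and the embedding/context inner product $\rho_n^\top \alpha_m$, can be sketched with toy vectors. The item names and numbers here are ours, chosen only to illustrate the sign pattern; they are not fitted values.

```python
import math

# Sketch of the two qualitative diagnostics: cosine similarity of embeddings
# (Table 4-style item similarity), and rho_n^T alpha_m (high -> complements,
# negative -> substitutes or rarely co-purchased). Toy vectors, not fitted.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def interaction(rho, alpha, n, m):
    # rho_n^T alpha_m: how much item m in the basket raises item n's rate
    return sum(a * b for a, b in zip(rho[n], alpha[m]))

rho = {"soda_a": (1.0, 0.1), "soda_b": (0.9, 0.2), "chips": (0.1, 1.0)}
alpha = {"soda_a": (-0.8, 0.1), "soda_b": (-0.7, 0.2), "chips": (0.9, 0.8)}

# Similar items have similar embeddings...
print(cosine(rho["soda_a"], rho["soda_b"]))          # close to 1
# ...while substitutes get a negative interaction, complements a positive one.
print(interaction(rho, alpha, "soda_a", "soda_b"))   # negative: substitutes
print(interaction(rho, alpha, "soda_a", "chips"))    # positive: complements
```

Ranking all pairs by these two quantities reproduces the kinds of similarity tables and complement/substitute lists described in the text.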
Acknowledgments

This work is supported by the EU H2020 programme (Marie Skłodowska-Curie grant agreement 706760), NSF IIS-1247664, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, DARPA N66001-15-C-4032, Adobe, the John Templeton Foundation, and the Sloan Foundation.

References

Ahrens, M. B., Orger, M. B., Robson, D. N., Li, J. M., and Keller, P. J. (2013). Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods, 10(5):413–420.

Arnold, B. C., Castillo, E., Sarabia, J. M., et al. (2001). Conditionally specified distributions: an introduction (with comments and a rejoinder by the authors). Statistical Science, 16(3):249–274.

Bengio, Y., Schwenk, H., Senécal, J.-S., Morin, F., and Gauvain, J.-L. (2006). Neural probabilistic language models. In Innovations in Machine Learning, pages 137–186. Springer.

Bronnenberg, B. J., Kruger, M. W., and Mela, C. F. (2008). Database paper: The IRI marketing data set. Marketing Science, 27(4):745–748.

Brown, L. D. (1986). Fundamentals of statistical exponential families with applications in statistical decision theory. Lecture Notes–Monograph Series, 9:i–279.

Collins, M., Dasgupta, S., and Schapire, R. E. (2001). A generalization of principal components analysis to the exponential family. In Neural Information Processing Systems, pages 617–624.

Duchi, J., Hazan, E., and Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159.

Friedrich, J., Soudry, D., Paninski, L., Mu, Y., Freeman, J., and Ahrens, M. (2015). Fast constrained non-negative matrix factorization for whole-brain calcium imaging data. In NIPS workshop on Neural Systems.

Gopalan, P., Hofman, J., and Blei, D. M. (2015). Scalable recommendation with hierarchical Poisson factorization. In Uncertainty in Artificial Intelligence.

Gutmann, M. and Hyvärinen, A. (2010).
Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Journal of Machine Learning Research. Harper, F. M. and Konstan, J. A. (2015). The MovieLens datasets: History and context. ACM Transactions on Interactive Intelligent Systems (TiiS), 5(4):19. Harris, Z. S. (1954). Distributional structure. Word, 10(2-3):146–162. Hu, Y., Koren, Y., and Volinsky, C. (2008). Collaborative filtering for implicit feedback datasets. Data Mining. Levy, O. and Goldberg, Y. (2014). Neural word embedding as implicit matrix factorization. In Neural Information Processing Systems, pages 2177–2185. McCullagh, P. and Nelder, J. A. (1989). Generalized linear models, volume 37. CRC press. Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013a). Efficient estimation of word representations in vector space. ICLR Workshop Proceedings. arXiv:1301.3781. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). Distributed representations of words and phrases and their compositionality. In Neural Information Processing Systems, pages 3111–3119. Mikolov, T., Yih, W.-T. a., and Zweig, G. (2013c). Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746–751. Mnih, A. and Kavukcuoglu, K. (2013). Learning word embeddings efficiently with noise-contrastive estimation. In Neural Information Processing Systems, pages 2265–2273. Mnih, A. and Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models. In International Conference on Machine Learning, pages 1751–1758. Neal, R. M. (1990). Learning stochastic feedforward networks. Department of Computer Science, University of Toronto. Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Conference on Empirical Methods on Natural Language Processing, volume 14, pages 1532–1543. Ranganath, R., Tang, L., Charlin, L., and Blei, D. M. (2015). Deep exponential families. 
Artificial Intelligence and Statistics. Rumelhart, D. E., Hintont, G. E., and Williams, R. J. (1986). Learning representations by back-propagating errors. Nature, 323:9. Taddy, M. et al. (2015). Distributed multinomial regression. The Annals of Applied Statistics, 9(3):1394–1414. Vilnis, L. and McCallum, A. (2015). Word representations via Gaussian embedding. In International Conference on Learning Representations. 9
Minimizing Regret on Reflexive Banach Spaces and Nash Equilibria in Continuous Zero-Sum Games
Maximilian Balandat, Walid Krichene, Claire Tomlin, Alexandre Bayen
Electrical Engineering and Computer Sciences, UC Berkeley
[balandat,walid,tomlin]@eecs.berkeley.edu, bayen@berkeley.edu
Abstract
We study a general adversarial online learning problem, in which we are given a decision set $\mathcal{X}$ in a reflexive Banach space $X$ and a sequence of reward vectors in the dual space of $X$. At each iteration, we choose an action from $\mathcal{X}$, based on the observed sequence of previous rewards. Our goal is to minimize regret. Using results from infinite-dimensional convex analysis, we generalize the method of Dual Averaging to our setting and obtain upper bounds on the worst-case regret that generalize many previous results. Under the assumption of uniformly continuous rewards, we obtain explicit regret bounds in a setting where the decision set is the set of probability distributions on a compact metric space $S$. Importantly, we make no convexity assumptions on either $S$ or the reward functions. We also prove a general lower bound on the worst-case regret for any online algorithm. We then apply these results to the problem of learning in repeated two-player zero-sum games on compact metric spaces. In doing so, we first prove that if both players play a Hannan-consistent strategy, then with probability 1 the empirical distributions of play weakly converge to the set of Nash equilibria of the game. We then show that, under mild assumptions, Dual Averaging on the (infinite-dimensional) space of probability distributions indeed achieves Hannan-consistency.
1 Introduction
Regret analysis is a general technique for designing and analyzing algorithms for sequential decision problems in adversarial or stochastic settings (Shalev-Shwartz, 2012; Bubeck and Cesa-Bianchi, 2012).
Online learning algorithms have applications in machine learning (Xiao, 2010), portfolio optimization (Cover, 1991), online convex optimization (Hazan et al., 2007) and other areas. Regret analysis also plays an important role in the study of repeated play of finite games (Hart and Mas-Colell, 2001). It is well known, for example, that in a two-player zero-sum finite game, if both players play according to a Hannan-consistent strategy (Hannan, 1957), their (marginal) empirical distributions of play almost surely converge to the set of Nash equilibria of the game (Cesa-Bianchi and Lugosi, 2006). Moreover, it can be shown that playing a strategy that achieves sublinear regret almost surely guarantees Hannan-consistency. A natural question then is whether a similar result holds for games with infinite action sets. In this article we provide a positive answer. In particular, we prove that in a continuous two-player zero-sum game over compact (not necessarily convex) metric spaces, if both players follow a Hannan-consistent strategy, then with probability 1 their empirical distributions of play weakly converge to the set of Nash equilibria of the game. This in turn raises another important question: Do algorithms that ensure Hannan-consistency exist in such a setting? More generally, can one develop algorithms that guarantee sub-linear growth of the worst-case regret? We answer these questions affirmatively as well. To this end, we develop a general framework to study the Dual Averaging (or Follow the Regularized Leader) method on reflexive Banach spaces. This framework generalizes a wide range of existing results in the literature, including algorithms for online learning on finite sets (Arora et al., 2012) and finite-dimensional online convex optimization (Hazan et al., 2007).
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Given a convex subset $\mathcal{X}$ of a reflexive Banach space $X$, the generalized Dual Averaging (DA) method maximizes, at each iteration, the cumulative past rewards (which are elements of $X^*$, the dual space of $X$) minus a regularization term $h$. We show that under certain conditions, the maximizer in the DA update is the Fréchet gradient $Dh^*$ of the regularizer's conjugate function. In doing so, we develop a novel characterization of the duality between essential strong convexity of $h$ and essential Fréchet differentiability of $h^*$ in reflexive Banach spaces, which is of independent interest. We apply these general results to the problem of minimizing regret when the rewards are uniformly continuous functions over a compact metric space $S$. Importantly, we do not assume convexity of either $S$ or the rewards, and show that it is possible to achieve sublinear regret under a mild geometric condition on $S$ (namely, the existence of a locally $Q$-regular Borel measure). We provide explicit bounds for a class of regularizers, which guarantee sublinear worst-case regret. We also prove a general lower bound on the regret for any online algorithm and show that DA asymptotically achieves this bound up to a $\sqrt{\log t}$ factor. Our results are related to work by Lehrer (2003) and Sridharan and Tewari (2010); Srebro et al. (2011). Lehrer (2003) gives necessary geometric conditions for Blackwell approachability in infinite-dimensional spaces, but no implementable algorithm guaranteeing Hannan-consistency. Sridharan and Tewari (2010) derive general regret bounds for Mirror Descent (MD) under the assumption that the strategy set is uniformly bounded in the norm of the Banach space. We do not make such an assumption here. In fact, this assumption does not hold in general for our applications in Section 3. The paper is organized as follows: In Section 2 we introduce and provide a general analysis of Dual Averaging in reflexive Banach spaces.
In Section 3 we apply these results to obtain explicit regret bounds on compact metric spaces with uniformly continuous reward functions. We use these results in Section 4 in the context of learning Nash equilibria in continuous two-player zero-sum games, and provide a numerical example. All proofs are given in the supplementary material.
2 Regret Minimization on Reflexive Banach Spaces
Consider a sequential decision problem in which we are to choose a sequence $(x_1, x_2, \dots)$ of actions from some feasible subset $\mathcal{X}$ of a reflexive Banach space $X$, and seek to maximize a sequence $(u_1(x_1), u_2(x_2), \dots)$ of rewards, where the $u_\tau : X \to \mathbb{R}$ are elements of a given subset $\mathcal{U} \subset X^*$, with $X^*$ the dual space of $X$. We assume that $x_t$, the action chosen at time $t$, may depend only on the sequence of previously observed reward vectors $(u_1, \dots, u_{t-1})$. We call any such algorithm an online algorithm. We consider the adversarial setting, i.e., we do not make any distributional assumptions on the rewards. In particular, they could be picked maliciously by some adversary. The notion of regret is a standard measure of performance for such a sequential decision problem. For a sequence $(u_1, \dots, u_t)$ of reward vectors, and a sequence of decisions $(x_1, \dots, x_t)$ produced by an algorithm, the regret of the algorithm w.r.t. a (fixed) decision $x \in \mathcal{X}$ is the gap between the realized reward and the reward under $x$, i.e., $R_t(x) := \sum_{\tau=1}^t u_\tau(x) - \sum_{\tau=1}^t u_\tau(x_\tau)$. The regret is defined as $R_t := \sup_{x \in \mathcal{X}} R_t(x)$. An algorithm is said to have sublinear regret if for any sequence $(u_t)_{t \ge 1}$ in the set of admissible reward functions $\mathcal{U}$, the regret grows sublinearly, i.e., $\limsup_t R_t / t \le 0$.
Example 1. Consider a finite action set $S = \{1, \dots, n\}$, let $X = X^* = \mathbb{R}^n$, and let $\mathcal{X} = \Delta_{n-1}$, the probability simplex in $\mathbb{R}^n$. A reward function can be identified with a vector $u \in \mathbb{R}^n$, such that the $i$-th element $u_i$ is the reward of action $i$. A choice $x \in \mathcal{X}$ corresponds to a randomization over the $n$ actions in $S$.
This is the classic setting of many regret-minimizing algorithms in the literature.
Example 2. Suppose $S$ is a compact metric space with $\mu$ a finite measure on $S$. Consider $X = X^* = L^2(S, \mu)$ and let $\mathcal{X} = \{x \in X : x \ge 0 \text{ a.e.},\ \|x\|_1 = 1\}$. A reward function is an $L^2$-integrable function on $S$, and each choice $x \in \mathcal{X}$ corresponds to a probability distribution (absolutely continuous w.r.t. $\mu$) over $S$. We will explore a more general variant of this problem in Section 3.
In this section, we prove a general bound on the worst-case regret for DA. DA was introduced by Nesterov (2009) for (finite-dimensional) convex optimization, and has also been applied to online learning, e.g. by Xiao (2010). In the finite-dimensional case, the method solves, at each iteration, the optimization problem $x_{t+1} = \arg\max_{x \in \mathcal{X}} \langle \eta_t \sum_{\tau=1}^t u_\tau, x \rangle - h(x)$, where $h$ is a strongly convex regularizer defined on $\mathcal{X} \subset \mathbb{R}^n$ and $(\eta_t)_{t \ge 0}$ is a sequence of learning rates. The regret analysis of the method relies on the duality between strong convexity and smoothness (Nesterov, 2009, Lemma 1). In order to generalize DA to our Banach space setting, we develop an analogous duality result in Theorem 1. In particular, we show that the correct notion of strong convexity is (uniform) essential strong convexity. Equipped with this duality result, we analyze the regret of the Dual Averaging method and derive a general bound in Theorem 2.
2.1 Preliminaries
Let $(X, \|\cdot\|)$ be a reflexive Banach space, and denote by $\langle \cdot, \cdot \rangle : X \times X^* \to \mathbb{R}$ the canonical pairing between $X$ and its dual space $X^*$, so that $\langle x, \xi \rangle := \xi(x)$ for all $x \in X$, $\xi \in X^*$. By the effective domain of an extended real-valued function $f : X \to [-\infty, +\infty]$ we mean the set $\operatorname{dom} f = \{x \in X : f(x) < +\infty\}$. A function $f$ is proper if $f > -\infty$ and $\operatorname{dom} f$ is non-empty. The conjugate or Legendre-Fenchel transform of $f$ is the function $f^* : X^* \to [-\infty, +\infty]$ given by
$$f^*(\xi) = \sup_{x \in X} \langle x, \xi \rangle - f(x) \qquad (1)$$
for all $\xi \in X^*$.
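As a concrete sanity check of the conjugate in (1): for the negative-entropy function on the probability simplex, the Legendre-Fenchel transform has the well-known closed form $f^*(\xi) = \log \sum_i e^{\xi_i}$ (log-sum-exp). A minimal numerical sketch for $n = 2$, where a brute-force grid search stands in for the supremum (the test values are arbitrary):

```python
import math

def f(p):
    """Negative entropy f(x) = sum_i x_i log x_i on the 1-simplex, x = (p, 1-p)."""
    def xlogx(x):
        return 0.0 if x == 0 else x * math.log(x)
    return xlogx(p) + xlogx(1 - p)

def conjugate_numeric(xi, grid=10_000):
    """f*(xi) = sup_x <x, xi> - f(x), approximated by grid search over the simplex."""
    return max(p * xi[0] + (1 - p) * xi[1] - f(p)
               for p in (i / grid for i in range(grid + 1)))

xi = (1.3, -0.4)
closed_form = math.log(math.exp(xi[0]) + math.exp(xi[1]))  # log-sum-exp
print(conjugate_numeric(xi), closed_form)  # the two values agree closely
```

The maximizer attaining the supremum here is the softmax of $\xi$, which is exactly the gradient map $Df^*$ that reappears in the Hedge-type algorithms of Section 3.3.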
If $f$ is proper, lower semicontinuous and convex, its subdifferential $\partial f$ is the set-valued mapping $\partial f(x) = \{\xi \in X^* : f(y) \ge f(x) + \langle y - x, \xi \rangle \text{ for all } y \in X\}$. We define $\operatorname{dom} \partial f := \{x \in X : \partial f(x) \ne \emptyset\}$. Let $\Gamma$ denote the set of all convex, lower semicontinuous functions $\gamma : [0, \infty) \to [0, \infty]$ such that $\gamma(0) = 0$, and let
$$\Gamma_U := \{\gamma \in \Gamma : \forall r > 0,\ \gamma(r) > 0\}, \qquad \Gamma_L := \{\gamma \in \Gamma : \gamma(r)/r \to 0 \text{ as } r \to 0\}. \qquad (2)$$
We now introduce some definitions. Additional results are reviewed in the supplementary material.
Definition 1 (Strömberg, 2011). A proper convex lower semicontinuous function $f : X \to (-\infty, \infty]$ is essentially strongly convex if (i) $f$ is strictly convex on every convex subset of $\operatorname{dom} \partial f$, (ii) $(\partial f)^{-1}$ is locally bounded on its domain, and (iii) for every $x_0 \in \operatorname{dom} \partial f$ there exist $\xi_0 \in X^*$ and $\gamma \in \Gamma_U$ such that
$$f(x) \ge f(x_0) + \langle x - x_0, \xi_0 \rangle + \gamma(\|x - x_0\|) \quad \text{for all } x \in X. \qquad (3)$$
If (3) holds with $\gamma$ independent of $x_0$, $f$ is uniformly essentially strongly convex with modulus $\gamma$.
Definition 2 (Strömberg, 2011). A proper convex lower semicontinuous function $f : X \to (-\infty, \infty]$ is essentially Fréchet differentiable if $\operatorname{int} \operatorname{dom} f \ne \emptyset$, $f$ is Fréchet differentiable on $\operatorname{int} \operatorname{dom} f$ with Fréchet derivative $Df$, and $\|Df(x_j)\|_* \to \infty$ for any sequence $(x_j)_j$ in $\operatorname{int} \operatorname{dom} f$ converging to some boundary point of $\operatorname{dom} f$.
Definition 3. A proper Fréchet differentiable function $f : X \to (-\infty, \infty]$ is essentially strongly smooth if for all $x_0 \in \operatorname{dom} \partial f$ there exist $\xi_0 \in X^*$ and $\kappa \in \Gamma_L$ such that
$$f(x) \le f(x_0) + \langle \xi_0, x - x_0 \rangle + \kappa(\|x - x_0\|) \quad \text{for all } x \in X. \qquad (4)$$
If (4) holds with $\kappa$ independent of $x_0$, $f$ is uniformly essentially strongly smooth with modulus $\kappa$.
With this we are now ready to give our main duality result:
Theorem 1. Let $f : X \to (-\infty, +\infty]$ be proper, lower semicontinuous and uniformly essentially strongly convex with modulus $\gamma \in \Gamma_U$. Then (i) $f^*$ is proper and essentially Fréchet differentiable with Fréchet derivative
$$Df^*(\xi) = \arg\max_{x \in X} \langle x, \xi \rangle - f(x). \qquad (5)$$
If, in addition, $\tilde\gamma(r) := \gamma(r)/r$ is strictly increasing, then
$$\|Df^*(\xi_1) - Df^*(\xi_2)\| \le \tilde\gamma^{-1}\big(\|\xi_1 - \xi_2\|_* / 2\big). \qquad (6)$$
In other words, $Df^*$ is uniformly continuous with modulus of continuity $\chi(r) = \tilde\gamma^{-1}(r/2)$. (ii) $f^*$ is uniformly essentially smooth with modulus $\gamma^*$.
Corollary 1. If $\gamma(r) \ge C r^{1+\kappa}$ for all $r \ge 0$, then $\|Df^*(\xi_1) - Df^*(\xi_2)\| \le (2C)^{-1/\kappa} \|\xi_1 - \xi_2\|_*^{1/\kappa}$.
In particular, with $\gamma(r) = \frac{K}{2} r^2$, Definition 1 becomes the classic definition of $K$-strong convexity, and (6) yields the result familiar from the finite-dimensional case that the gradient $Df^*$ is $1/K$-Lipschitz with respect to the dual norm (Nesterov, 2009, Lemma 1).
2.2 Dual Averaging in Reflexive Banach Spaces
We call a proper convex function $h : X \to (-\infty, +\infty]$ a regularizer function on a set $\mathcal{X} \subset X$ if $h$ is essentially strongly convex and $\operatorname{dom} h = \mathcal{X}$. We emphasize that we do not assume $h$ to be Fréchet-differentiable. Definition 1 in conjunction with Lemma S.1 (supplementary material) implies that for any regularizer $h$, the supremum of any function of the form $\langle \cdot, \xi \rangle - h(\cdot)$ over $\mathcal{X}$, where $\xi \in X^*$, will be attained at a unique element of $\mathcal{X}$, namely $Dh^*(\xi)$, the Fréchet gradient of $h^*$ at $\xi$. DA with regularizer $h$ and a sequence of learning rates $(\eta_t)_{t \ge 1}$ generates a sequence of decisions using the simple update rule $x_{t+1} = Dh^*(\eta_t U_t)$, where $U_t = \sum_{\tau=1}^t u_\tau$ and $U_0 := 0$.
Theorem 2. Let $h$ be a uniformly essentially strongly convex regularizer on $\mathcal{X}$ with modulus $\gamma$ and let $(\eta_t)_{t \ge 1}$ be a positive non-increasing sequence of learning rates. Then, for any sequence of payoff functions $(u_t)_{t \ge 1}$ in $X^*$ for which there exists $M < \infty$ such that $\sup_{x \in \mathcal{X}} |\langle u_t, x \rangle| \le M$ for all $t$, the sequence of plays $(x_t)_{t \ge 0}$ given by
$$x_{t+1} = Dh^*\Big(\eta_t \sum_{\tau=1}^t u_\tau\Big) \qquad (7)$$
ensures that
$$R_t(x) := \sum_{\tau=1}^t \langle u_\tau, x \rangle - \sum_{\tau=1}^t \langle u_\tau, x_\tau \rangle \le \frac{h(x) - \underline{h}}{\eta_t} + \sum_{\tau=1}^t \|u_\tau\|_* \, \tilde\gamma^{-1}\Big(\frac{\eta_{\tau-1}}{2} \|u_\tau\|_*\Big), \qquad (8)$$
where $\underline{h} = \inf_{x \in \mathcal{X}} h(x)$, $\tilde\gamma(r) := \gamma(r)/r$ and $\eta_0 := \eta_1$. It is possible to obtain a regret bound similar to (8) also in a continuous-time setting.
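In the finite-dimensional setting of Example 1, the update (7) with the entropy regularizer reduces to exponential weights, and the sublinear-regret behaviour can be checked empirically. The sketch below is our own illustration, not the paper's experiment: the alternating reward sequence and the learning-rate schedule $\eta_t \approx \sqrt{\log n / t}$ are assumptions made for this example.

```python
import math

def softmax(xi):
    """Dh*(xi) for the entropy regularizer on the simplex."""
    m = max(xi)
    w = [math.exp(v - m) for v in xi]
    s = sum(w)
    return [v / s for v in w]

def da_regret(rewards, eta):
    """Run entropy Dual Averaging (exponential weights) and return the regret
    against the best fixed action in hindsight."""
    n = len(rewards[0])
    cum = [0.0] * n          # U_{t-1}: sum of past reward vectors
    realized = 0.0
    for t, u in enumerate(rewards, start=1):
        x = softmax([eta(t - 1) * c for c in cum])   # x_t = Dh*(eta_{t-1} U_{t-1})
        realized += sum(ui * xi for ui, xi in zip(u, x))
        cum = [c + ui for c, ui in zip(cum, u)]
    return max(cum) - realized

# Alternating (adversarial-looking) rewards on two actions.
T, n = 2000, 2
rewards = [[1.0, 0.0] if t % 2 == 0 else [0.0, 1.0] for t in range(T)]
reg = da_regret(rewards, eta=lambda t: math.sqrt(math.log(n) / max(t, 1)))
print(reg)  # grows like sqrt(T), far below the linear benchmark T
```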
In fact, following Kwon and Mertikopoulos (2014), we derive the bound (8) by first proving a bound on a suitably defined notion of continuous-time regret, and then bounding the difference between the continuous-time and discrete-time regrets. This analysis is detailed in the supplementary material. Note that the condition that $\sup_{x \in \mathcal{X}} |\langle u_t, x \rangle| \le M$ in Theorem 2 is weaker than the one in Sridharan and Tewari (2010), as it does not imply a uniformly bounded strategy set (e.g., if $X = L^2(\mathbb{R})$ and $\mathcal{X}$ is the set of distributions on $X$, then $\mathcal{X}$ is unbounded in $L^2$, but the condition may still hold). Theorem 2 provides a regret bound for a particular choice $x \in \mathcal{X}$. Recall that $R_t := \sup_{x \in \mathcal{X}} R_t(x)$. In Example 1 the set $\mathcal{X}$ is compact, so any continuous regularizer $h$ will be bounded, and hence taking the supremum over $x$ in (8) poses no issue. However, this is not the case in our general setting, as the regularizer may be unbounded on $\mathcal{X}$. For instance, consider Example 2 with the entropy regularizer $h(x) = \int_S x(s) \log(x(s)) \, ds$, which is easily seen to be unbounded on $\mathcal{X}$. As a consequence, obtaining a worst-case bound will in general require additional assumptions on the reward functions and the decision set $\mathcal{X}$. This will be investigated in detail in Section 3.
Corollary 2. Suppose that $\gamma(r) \ge C r^{1+\kappa}$ for all $r \ge 0$, for some $C > 0$ and $\kappa > 0$. Then
$$R_t(x) \le \frac{h(x) - \underline{h}}{\eta_t} + (2C)^{-1/\kappa} \sum_{\tau=1}^t \eta_{\tau-1}^{1/\kappa} \|u_\tau\|_*^{1 + 1/\kappa}. \qquad (9)$$
In particular, if $\|u_t\|_* \le M$ for all $t$ and $\eta_t = \eta \, t^{-\beta}$, then
$$R_t(x) \le \frac{h(x) - \underline{h}}{\eta} \, t^{\beta} + \frac{\kappa}{\kappa - \beta} \Big(\frac{\eta}{2C}\Big)^{1/\kappa} M^{1 + 1/\kappa} \, t^{1 - \beta/\kappa}. \qquad (10)$$
Assuming $h$ is bounded, optimizing over $\beta$ yields a rate of $R_t(x) = O(t^{\kappa/(1+\kappa)})$. In particular, if $\gamma(r) = \frac{K}{2} r^2$, which corresponds to the classic definition of strong convexity, then $R_t(x) = O(\sqrt{t})$. For non-vanishing $u_\tau$ we will need $\eta_t \searrow 0$ for the sum in (9) to converge. Thus we could get potentially tighter control over the rate of this term for $\kappa < 1$, at the expense of larger constants.
3 Online Optimization on Compact Metric Spaces
We now apply the above results to the problem of minimizing regret on compact metric spaces under the additional assumption of uniformly continuous reward functions. We make no assumptions on convexity of either the feasible set or the rewards. Essentially, we lift the non-convex problem of minimizing a sequence of functions over the (possibly non-convex) set $S$ to the convex (albeit infinite-dimensional) problem of minimizing a sequence of linear functionals over a set $\mathcal{X}$ of probability measures (a convex subset of the vector space of measures on $S$).
3.1 An Upper Bound on the Worst-Case Regret
Let $(S, d)$ be a compact metric space, and let $\mu$ be a Borel measure on $S$. Suppose that the reward vectors $u_\tau$ are given by elements in $L^q(S, \mu)$, where $q > 1$. Let $X = L^p(S, \mu)$, where $p$ and $q$ are Hölder conjugates, i.e., $\frac{1}{p} + \frac{1}{q} = 1$. Consider $\mathcal{X} = \{x \in X : x \ge 0 \text{ a.e.},\ \|x\|_1 = 1\}$, the set of probability measures on $S$ that are absolutely continuous w.r.t. $\mu$ with $p$-integrable Radon-Nikodym derivatives. Moreover, denote by $\mathcal{Z}$ the class of non-decreasing functions $\chi : [0, \infty) \to [0, \infty]$ such that $\lim_{r \to 0} \chi(r) = \chi(0) = 0$. The following assumption will be made throughout this section:
Assumption 1. The reward vectors $u_t$ have modulus of continuity $\chi$ on $S$, uniformly in $t$. That is, there exists $\chi \in \mathcal{Z}$ such that $|u_t(s) - u_t(s')| \le \chi(d(s, s'))$ for all $t$ and for all $s, s' \in S$.
Let $B(s, r) = \{s' \in S : d(s, s') < r\}$ and denote by $\mathcal{B}(s, \delta) \subset \mathcal{X}$ the elements of $\mathcal{X}$ with support contained in $B(s, \delta)$. Furthermore, let $D_S := \sup_{s, s' \in S} d(s, s')$. Then we have the following:
Theorem 3. Let $(S, d)$ be compact, and suppose that Assumption 1 holds. Let $h$ be a uniformly essentially strongly convex regularizer on $\mathcal{X}$ with modulus $\gamma$, and let $(\eta_t)_{t \ge 1}$ be a positive non-increasing sequence of learning rates. Then, under (7), for any positive sequence $(\vartheta_t)_{t \ge 1}$,
$$R_t \le \frac{\sup_{s \in S} \inf_{x \in \mathcal{B}(s, \vartheta_t)} h(x) - \underline{h}}{\eta_t} + t \, \chi(\vartheta_t) + \sum_{\tau=1}^t \|u_\tau\|_* \, \tilde\gamma^{-1}\Big(\frac{\eta_{\tau-1}}{2} \|u_\tau\|_*\Big). \qquad (11)$$
Remark 1.
The sequence $(\vartheta_t)_{t \ge 1}$ in Theorem 3 is not a parameter of the algorithm, but rather a parameter in the regret bound. In particular, (11) holds true for any such sequence, and we will use this fact later on to obtain explicit bounds by instantiating (11) with a particular choice of $(\vartheta_t)_{t \ge 1}$. It is important to realize that the infimum over $\mathcal{B}(s, \vartheta_t)$ in (11) may be infinite, in which case the bound is meaningless. This happens for example if $s$ is an isolated point of some $S \subset \mathbb{R}^n$ and $\mu$ is the Lebesgue measure, in which case $\mathcal{B}(s, \vartheta_t) = \emptyset$. However, under an additional regularity assumption on the measure $\mu$ we can avoid such degenerate situations.
Definition 4 (Heinonen et al., 2015). A Borel measure $\mu$ on a metric space $(S, d)$ is (Ahlfors) $Q$-regular if there exist $0 < c_0 \le C_0 < \infty$ such that for any open ball $B(s, r)$,
$$c_0 r^Q \le \mu(B(s, r)) \le C_0 r^Q. \qquad (12)$$
We say that $\mu$ is $r_0$-locally $Q$-regular if (12) holds for all $0 < r \le r_0$. Intuitively, under an $r_0$-locally $Q$-regular measure, the mass in the neighborhood of any point of $S$ is uniformly bounded from above and below. This will allow us, at each iteration $t$, to assign sufficient probability mass around the maximizer(s) of the cumulative reward function.
Example 3. The canonical example of a $Q$-regular measure is the Lebesgue measure $\lambda$ on $\mathbb{R}^n$. If $d$ is the metric induced by the Euclidean norm, then $Q = n$ and the bound (12) is tight with $c_0 = C_0$, a dimensional constant. However, for general sets $S \subset \mathbb{R}^n$, $\lambda$ need not be locally $Q$-regular. A sufficient condition for local regularity of $\lambda$ is that $S$ is $v$-uniformly fat (Krichene et al., 2015).
Assumption 2. The measure $\mu$ is $r_0$-locally $Q$-regular on $(S, d)$.
Under Assumption 2, $\mathcal{B}(s, \vartheta_t) \ne \emptyset$ for all $s \in S$ and $\vartheta_t > 0$, hence we may hope for a bound on $\inf_{x \in \mathcal{B}(s, \vartheta_t)} h(x)$ uniform in $s$. To obtain explicit convergence rates, we have to consider a more specific class of regularizers.
3.2 Explicit Rates for f-Divergences on $L^p(S)$
We consider a particular class of regularizers called f-divergences or Csiszár divergences (Csiszár, 1967). Following Audibert et al. (2014), we define $\omega$-potentials and the associated f-divergence.
Definition 5. Let $\omega \le 0$ and $a \in (-\infty, +\infty]$. A continuous increasing diffeomorphism $\varphi : (-\infty, a) \to (\omega, \infty)$ is an $\omega$-potential if $\lim_{z \to -\infty} \varphi(z) = \omega$, $\lim_{z \to a} \varphi(z) = +\infty$ and $\varphi(0) \le 1$. Associated to $\varphi$ is the convex function $f_\varphi : [0, \infty) \to \mathbb{R}$ defined by $f_\varphi(x) = \int_1^x \varphi^{-1}(z) \, dz$, and the $f_\varphi$-divergence, defined by $h_\varphi(x) = \int_S f_\varphi(x(s)) \, d\mu(s) + \iota_{\mathcal{X}}(x)$, where $\iota_{\mathcal{X}}$ is the indicator function of $\mathcal{X}$ (i.e. $\iota_{\mathcal{X}}(x) = 0$ if $x \in \mathcal{X}$ and $\iota_{\mathcal{X}}(x) = +\infty$ if $x \notin \mathcal{X}$).
A remarkable fact is that for regularizers based on $\omega$-potentials, the DA update (7) can be computed efficiently. More precisely, it can be shown (see Proposition 3 in Krichene (2015)) that the maximizer in this case has a simple expression in terms of the dual problem, and the problem of computing $x_{t+1} = Dh^*(\eta_t \sum_{\tau=1}^t u_\tau)$ reduces to computing a scalar dual variable $\nu^*_t$.
Proposition 1. Suppose that $\mu(S) = 1$, and that Assumption 2 holds with constants $r_0 > 0$ and $0 < c_0 \le C_0 < \infty$. Under the assumptions of Theorem 3, with $h = h_\varphi$ the regularizer associated to an $\omega$-potential $\varphi$, we have that, for any positive sequence $(\vartheta_t)_{t \ge 1}$ with $\vartheta_t \le r_0$,
$$\frac{R_t}{t} \le \frac{\min(C_0 \vartheta_t^Q, \mu(S))}{t \, \eta_t} \, f_\varphi\big(c_0^{-1} \vartheta_t^{-Q}\big) + \chi(\vartheta_t) + \frac{1}{t} \sum_{\tau=1}^t \|u_\tau\|_* \, \tilde\gamma^{-1}\Big(\frac{\eta_{\tau-1}}{2} \|u_\tau\|_*\Big). \qquad (13)$$
For particular choices of the sequences $(\eta_t)_{t \ge 1}$ and $(\vartheta_t)_{t \ge 1}$, we can derive explicit regret rates.
3.3 Analysis for Entropy Dual Averaging (The Generalized Hedge Algorithm)
Taking $\varphi(z) = e^{z-1}$, we have that $f_\varphi(x) = \int_1^x \varphi^{-1}(z) \, dz = x \log x$, and hence the regularizer is $h_\varphi(x) = \int_S x(s) \log x(s) \, d\mu(s)$. Then
$$Dh^*(\xi)(s) = \frac{\exp \xi(s)}{\|\exp \xi\|_1}.$$
This corresponds to a generalized Hedge algorithm (Arora et al., 2012; Krichene et al., 2015) or the entropic barrier of Bubeck and Eldan (2014) for Euclidean spaces.
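On a discretized $S = [0, 1]$, the generalized Hedge play $x_t \propto \exp(\eta_{t-1} U_{t-1})$ can be simulated directly. The sketch below is our own illustration (the non-concave reward, the grid size, and the constants are made up); it shows the play concentrating on the global maximizer of the cumulative reward even though the reward function is not concave:

```python
import math

def entropy_da_on_interval(reward_fns, eta, grid_pts=201):
    """Generalized Hedge on S = [0, 1], discretized on a uniform grid: at round t
    the play is the (surrogate) density x_t(s) proportional to exp(eta_{t-1} * U_{t-1}(s))."""
    S = [i / (grid_pts - 1) for i in range(grid_pts)]
    U = [0.0] * grid_pts                  # cumulative reward function on the grid
    realized = 0.0
    x = [1.0 / grid_pts] * grid_pts
    for t, u in enumerate(reward_fns, start=1):
        scores = [eta(t - 1) * Ui for Ui in U]
        m = max(scores)
        w = [math.exp(v - m) for v in scores]
        Z = sum(w)
        x = [wi / Z for wi in w]
        realized += sum(xi * u(s) for xi, s in zip(x, S))
        U = [Ui + u(s) for Ui, s in zip(U, S)]
    return realized, max(U), x, S

# A Lipschitz but non-concave reward with its global maximum near s = 0.3.
def u(s):
    return 0.8 * math.exp(-50 * (s - 0.3) ** 2) + 0.6 * math.exp(-50 * (s - 0.8) ** 2)

T = 500
realized, best, x, S = entropy_da_on_interval(
    [u] * T, eta=lambda t: math.sqrt(math.log(t + 2) / (t + 1)))
peak = S[max(range(len(x)), key=x.__getitem__)]
print(peak, (best - realized) / T)  # density peaks near 0.3; average regret is small
```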
The regularizer $h_\varphi$ can be shown to be essentially strongly convex with modulus $\gamma(r) = \frac{1}{2} r^2$.
Corollary 3. Suppose that $\mu(S) = 1$, that $\mu$ is $r_0$-locally $Q$-regular with constants $c_0, C_0$, that $\|u_t\|_* \le M$ for all $t$, and that $\chi(r) = C_\alpha r^\alpha$ for $0 < \alpha \le 1$ (that is, the rewards are $\alpha$-Hölder continuous). Then, under Entropy Dual Averaging, choosing $\eta_t = \eta \sqrt{\log t / t}$ with
$$\eta = \frac{1}{M} \Big( \frac{C_0 Q}{2 c_0} \Big( \log(c_0^{-1} \vartheta^{-Q/\alpha}) + \frac{Q}{2\alpha} \Big) \Big)^{1/2}$$
and $\vartheta > 0$, we have that
$$\frac{R_t}{t} \le \bigg( 2M \sqrt{\frac{2 C_0}{c_0} \Big( \log(c_0^{-1} \vartheta^{-Q/\alpha}) + \frac{Q}{2\alpha} \Big)} + C_\alpha \vartheta \bigg) \sqrt{\frac{\log t}{t}} \qquad (14)$$
whenever $\sqrt{\log t / t} < r_0^\alpha \, \vartheta^{-1}$. One can now further optimize over the choice of $\vartheta$ to obtain the best constant in the bound. Note also that the case $\alpha = 1$ corresponds to Lipschitz continuity.
3.4 A General Lower Bound
Theorem 4. Let $(S, d)$ be compact, suppose that Assumption 2 holds, and let $w : \mathbb{R} \to \mathbb{R}$ be any function with modulus of continuity $\chi \in \mathcal{Z}$ such that $\|w(d(\,\cdot\,, s'))\|_q \le M$ for some $s' \in S$ for which there exists $s \in S$ with $d(s, s') = D_S$. Then for any online algorithm, there exists a sequence $(u_\tau)_{\tau=1}^t$ of reward vectors $u_\tau \in X^*$ with $\|u_\tau\|_* \le M$ and modulus of continuity $\chi_\tau < \chi$ such that
$$R_t \ge \frac{w(D_S)}{2\sqrt{2}} \sqrt{t}. \qquad (15)$$
Maximizing the constant in (15) is of interest in order to benchmark the bound against the upper bounds obtained in the previous sections. This problem is however quite challenging, and we will defer this analysis to future work. For Hölder-continuous functions, we have the following result:
Proposition 2. In the setting of Theorem 4, suppose that $\mu(S) = 1$ and that $\chi(r) = C_\alpha r^\alpha$ for some $0 < \alpha \le 1$. Then
$$R_t \ge \frac{\min\big(C_\alpha^{1/\alpha} D_S^\alpha,\ M\big)}{2\sqrt{2}} \sqrt{t}. \qquad (16)$$
Observe that, up to a $\sqrt{\log t}$ factor, the asymptotic rate of this general lower bound for any online algorithm matches that of the upper bound (14) of Entropy Dual Averaging.
4 Learning in Continuous Two-Player Zero-Sum Games
Consider a two-player zero-sum game $G = (S_1, S_2, u)$, in which the strategy spaces $S_1$ and $S_2$ of players 1 and 2, respectively, are Hausdorff spaces, and $u : S_1 \times S_2 \to \mathbb{R}$ is the payoff function of player 1 (as $G$ is zero-sum, the payoff function of player 2 is $-u$). For each $i$, denote by $\mathcal{P}_i := \mathcal{P}(S_i)$ the set of Borel probability measures on $S_i$. Denote $S := S_1 \times S_2$ and $\mathcal{P} := \mathcal{P}_1 \times \mathcal{P}_2$. For a (joint) mixed strategy $x \in \mathcal{P}$, we define the natural extension $\bar{u} : \mathcal{P} \to \mathbb{R}$ by $\bar{u}(x) := \mathbb{E}_x[u] = \int_S u(s^1, s^2) \, dx(s^1, s^2)$, which is the expected payoff of player 1 under $x$. A continuous zero-sum game $G$ is said to have value $V$ if
$$\sup_{x^1 \in \mathcal{P}_1} \inf_{x^2 \in \mathcal{P}_2} \bar{u}(x^1, x^2) = \inf_{x^2 \in \mathcal{P}_2} \sup_{x^1 \in \mathcal{P}_1} \bar{u}(x^1, x^2) = V. \qquad (17)$$
The elements $x^1 \times x^2 \in \mathcal{P}$ at which (17) holds are the (mixed) Nash equilibria of $G$. We denote the set of Nash equilibria of $G$ by $N(G)$. In the case of finite games, it is well known that every two-player zero-sum game has a value. This is not true in general for continuous games, and additional conditions on strategy sets and payoffs are required, see e.g. Glicksberg (1950).
4.1 Repeated Play
We consider repeated play of the continuous two-player zero-sum game. Given a game $G$ and a sequence of plays $(s^1_t)_{t \ge 1}$ and $(s^2_t)_{t \ge 1}$, we say that player $i$ has sublinear (realized) regret if
$$\limsup_{t \to \infty} \frac{1}{t} \bigg( \sup_{s^i \in S_i} \sum_{\tau=1}^t u^i(s^i, s^{-i}_\tau) - \sum_{\tau=1}^t u^i(s^i_\tau, s^{-i}_\tau) \bigg) \le 0, \qquad (18)$$
where we use $-i$ to denote the other player. A strategy $\sigma^i$ for player $i$ is, loosely speaking, a (possibly random) mapping from past observations to its actions. Of primary interest to us are Hannan-consistent strategies:
Definition 6 (Hannan, 1957). A strategy $\sigma^i$ of player $i$ is Hannan-consistent if, for any sequence $(s^{-i}_t)_{t \ge 1}$, the sequence of plays $(s^i_t)_{t \ge 1}$ generated by $\sigma^i$ has sublinear regret almost surely.
Note that the almost-sure statement in Definition 6 is with respect to the randomness in the strategy $\sigma^i$.
The following result is a generalization of its counterpart for discrete games (e.g. Corollary 7.1 in Cesa-Bianchi and Lugosi (2006)):
Proposition 3. Suppose $G$ has value $V$, consider a sequence of plays $(s^1_t)_{t \ge 1}$, $(s^2_t)_{t \ge 1}$, and assume that both players have sublinear realized regret. Then $\lim_{t \to \infty} \frac{1}{t} \sum_{\tau=1}^t u(s^1_\tau, s^2_\tau) = V$.
As in the discrete case (Cesa-Bianchi and Lugosi, 2006), we can also say something about convergence of the empirical distributions of play to the set of Nash equilibria. Since these distributions have finite support for every $t$, we can at best hope for convergence in the weak sense, as follows:
Theorem 5. Suppose that in a repeated two-player zero-sum game $G$ that has a value, both players follow a Hannan-consistent strategy, and denote by $\hat{x}^i_t = \frac{1}{t} \sum_{\tau=1}^t \delta_{s^i_\tau}$ the marginal empirical distribution of play of player $i$ at iteration $t$. Let $\hat{x}_t := (\hat{x}^1_t, \hat{x}^2_t)$. Then $\hat{x}_t \rightharpoonup N(G)$ almost surely, that is, with probability 1 the sequence $(\hat{x}_t)_{t \ge 1}$ weakly converges to the set of Nash equilibria of $G$.
Corollary 4. If $G$ has a unique Nash equilibrium $x^*$, then with probability 1, $\hat{x}_t \rightharpoonup x^*$.
4.2 Hannan-Consistent Strategies
By Theorem 5, if each player follows a Hannan-consistent strategy, then the empirical distributions of play weakly converge to the set of Nash equilibria of the game. But do such strategies exist? Regret-minimizing strategies are intuitive candidates, and the intimate connection between regret minimization and learning in games is well studied in many cases, e.g. for finite games (Cesa-Bianchi and Lugosi, 2006) or potential games (Monderer and Shapley, 1996). Using our results from Section 3, we will show that, under the appropriate assumption on the information revealed to the player, no-regret learning based on Dual Averaging leads to Hannan-consistency in our setting.
Specifically, suppose that after each iteration $t$, each player $i$ observes a partial payoff function $\tilde{u}^i_t : S_i \to \mathbb{R}$ describing their payoff as a function of only their own action $s^i$, holding the action played by the other player fixed. That is, $\tilde{u}^1_t(s^1) := u(s^1, s^2_t)$ and $\tilde{u}^2_t(s^2) := -u(s^1_t, s^2)$.
Remark 2. Note that we do not assume that the players have knowledge of the joint utility function $u$. However, we do assume that each player has full-information feedback, in the sense that they observe the partial reward functions $u(\,\cdot\,, s^{-i}_\tau)$ on their entire action set, as opposed to only observing the reward $u(s^1_\tau, s^2_\tau)$ of the action played (the latter corresponds to the bandit setting).
We denote by $\tilde{U}^i_t = (\tilde{u}^i_\tau)_{\tau=1}^t$ the sequence of partial payoff functions observed by player $i$. We use $\mathcal{U}^i_t$ to denote the set of all possible such histories, and define $\mathcal{U}^i_0 := \emptyset$. A strategy $\sigma^i$ of player $i$ is a collection $(\sigma^i_t)_{t=1}^\infty$ of (possibly random) mappings $\sigma^i_t : \mathcal{U}^i_{t-1} \to S_i$, such that at iteration $t$, player $i$ plays $s^i_t = \sigma^i_t(\tilde{U}^i_{t-1})$. We make the following assumption on the payoff function:
Assumption 3. The payoff function $u$ is uniformly continuous in $s^i$ with modulus of continuity independent of $s^{-i}$ for $i = 1, 2$. That is, for each $i$ there exists $\chi_i \in \mathcal{Z}$ such that $|u(s, s^{-i}) - u(s', s^{-i})| \le \chi_i(d_i(s, s'))$ for all $s^{-i} \in S_{-i}$.
It is easy to see that Assumption 3 implies that the game has a value (see supplementary material). It also makes our setting compatible with that of Section 3. Suppose now that each player randomizes their play according to the sequence of probability distributions on $S_i$ generated by DA with regularizer $h_i$. That is, suppose that each $\sigma^i_t$ is a random variable with the distribution
$$\sigma^i_t \sim Dh_i^*\Big(\eta_{t-1} \sum_{\tau=1}^{t-1} \tilde{u}^i_\tau\Big). \qquad (19)$$
Theorem 6. Suppose that player $i$ uses strategy $\sigma^i$ according to (19), and that the DA algorithm ensures sublinear regret (i.e. $\limsup_t R_t / t \le 0$). Then $\sigma^i$ is Hannan-consistent.
Corollary 5.
If both players use strategies according to (19), with the respective Dual Averaging ensuring that $\limsup_t R_t / t \le 0$, then with probability 1 the sequence $(\hat{x}_t)_{t \ge 1}$ of empirical distributions of play weakly converges to the set of Nash equilibria of $G$.
Example. Consider a zero-sum game $G_1$ between two players on the unit interval with payoff function $u(s^1, s^2) = s^1 s^2 - a_1 s^1 - a_2 s^2$, where $a_1 = \frac{e-2}{e-1}$ and $a_2 = \frac{1}{e-1}$. It is easy to verify that the pair $(x^1, x^2) = \big(\frac{\exp(s)}{e-1}, \frac{\exp(1-s)}{e-1}\big)$ is a mixed-strategy Nash equilibrium of $G_1$. For sequences $(s^1_\tau)_{\tau=1}^t$ and $(s^2_\tau)_{\tau=1}^t$, the cumulative payoff functions for a fixed action $s \in [0, 1]$ are given, respectively, by
$$U^1_t(s^1) = \Big(\sum_{\tau=1}^t s^2_\tau - a_1 t\Big) s^1 - a_2 \sum_{\tau=1}^t s^2_\tau, \qquad U^2_t(s^2) = \Big(a_2 t - \sum_{\tau=1}^t s^1_\tau\Big) s^2 + a_1 \sum_{\tau=1}^t s^1_\tau.$$
If each player $i$ uses the Generalized Hedge Algorithm with learning rates $(\eta_\tau)_{\tau=1}^t$, their strategy in period $t$ is to sample from the distribution $x^i_t(s) \propto \exp(\alpha^i_t s)$, where $\alpha^1_t = \eta_t\big(\sum_{\tau=1}^t s^2_\tau - a_1 t\big)$ and $\alpha^2_t = \eta_t\big(a_2 t - \sum_{\tau=1}^t s^1_\tau\big)$. Interestingly, in this case the sum of the opponent's past plays is a sufficient statistic, in the sense that it completely determines the mixed strategy at time $t$.
Figure 1: Normalized histograms of the empirical distributions of play in $G_1$ (100 bins), for players 1 and 2 at $t = 5000$, $t = 50000$, and $t = 500000$.
Figure 1 shows normalized histograms of the empirical distributions of play at different iterations $t$. As $t$ grows the histograms approach the equilibrium densities $x^1$ and $x^2$, respectively. However, this does not mean that the individual strategies $x^i_t$ converge. Indeed, Figure 2 shows the $\alpha^i_t$ oscillating around the equilibrium parameters 1 and $-1$, respectively, even for very large $t$. We do, however, observe that the time-averaged parameters $\bar{\alpha}^i_t$ converge to the equilibrium values 1 and $-1$.
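The repeated play in $G_1$ is easy to simulate, since sampling from a density $x(s) \propto \exp(\alpha s)$ on $[0, 1]$ has a closed-form inverse CDF, $s = \log(1 + U(e^\alpha - 1))/\alpha$ for uniform $U$. The learning-rate schedule below is a plausible choice of our own (the schedule used for the paper's figures is not stated here), and the empirical means of play should approach the equilibrium means $\mathbb{E}_{x^1}[s] = \frac{1}{e-1}$ and $\mathbb{E}_{x^2}[s] = \frac{e-2}{e-1}$:

```python
import math
import random

def sample_exp_density(alpha, rng):
    """Inverse-CDF sample from the density x(s) proportional to exp(alpha * s) on [0, 1]."""
    u = rng.random()
    if abs(alpha) < 1e-9:
        return u                       # alpha ~ 0: uniform
    return math.log(1.0 + u * (math.exp(alpha) - 1.0)) / alpha

# Both players follow the generalized Hedge strategy (19) in G1.
a1 = (math.e - 2) / (math.e - 1)
a2 = 1 / (math.e - 1)
rng = random.Random(0)
S1 = S2 = 0.0                          # running sums of plays (sufficient statistics)
T = 20000
for t in range(1, T + 1):
    eta = math.sqrt(math.log(t + 1) / t)       # assumed learning-rate schedule
    alpha1 = eta * (S2 - a1 * (t - 1))
    alpha2 = eta * (a2 * (t - 1) - S1)
    S1 += sample_exp_density(alpha1, rng)
    S2 += sample_exp_density(alpha2, rng)
print(S1 / T, S2 / T)   # roughly 1/(e-1) ~ 0.58 and (e-2)/(e-1) ~ 0.42
```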
Figure 2: Evolution of parameters α^i_t and ᾱ^i_t := (1/t) Σ_{τ=1}^t α^i_τ in G1.

In the supplementary material we provide additional numerical examples, including one that illustrates how our algorithms can be utilized as a tool to compute approximate Nash equilibria in continuous zero-sum games on non-convex domains.
Spatiotemporal Residual Networks for Video Action Recognition Christoph Feichtenhofer Graz University of Technology feichtenhofer@tugraz.at Axel Pinz Graz University of Technology axel.pinz@tugraz.at Richard P. Wildes York University, Toronto wildes@cse.yorku.ca Abstract Two-stream Convolutional Networks (ConvNets) have shown strong performance for human action recognition in videos. Recently, Residual Networks (ResNets) have arisen as a new technique to train extremely deep architectures. In this paper, we introduce spatiotemporal ResNets as a combination of these two approaches. Our novel architecture generalizes ResNets for the spatiotemporal domain by introducing residual connections in two ways. First, we inject residual connections between the appearance and motion pathways of a two-stream architecture to allow spatiotemporal interaction between the two streams. Second, we transform pretrained image ConvNets into spatiotemporal networks by equipping them with learnable convolutional filters that are initialized as temporal residual connections and operate on adjacent feature maps in time. This approach slowly increases the spatiotemporal receptive field as the depth of the model increases and naturally integrates image ConvNet design principles. The whole model is trained end-to-end to allow hierarchical learning of complex spatiotemporal features. We evaluate our novel spatiotemporal ResNet using two widely used action recognition benchmarks where it exceeds the previous state-of-the-art. 1 Introduction Action recognition in video is an intensively researched area, with many recent approaches focused on application of Convolutional Networks (ConvNets) to this task, e.g. [13, 20, 26]. As actions can be understood as spatiotemporal objects, researchers have investigated carrying spatial recognition principles over to the temporal domain by learning local spatiotemporal filters [13, 25, 26]. 
However, since the temporal domain is arguably fundamentally different from the spatial one, different treatment of these dimensions has been considered, e.g. by incorporating optical flow networks [20] or by modelling temporal sequences in recurrent architectures [4, 18, 19]. Since the introduction of the “AlexNet” architecture [14] in the 2012 ImageNet competition, ConvNets have dominated state-of-the-art performance across a variety of computer vision tasks, including object detection, image segmentation, image classification, face recognition, human pose estimation and tracking. In conjunction with these advances as well as the evolution of network architectures, several design best practices have emerged [8, 21, 23, 24]. First, information bottlenecks should be avoided: the representation size should gently decrease from the input to the output as the number of feature channels increases with the depth of the network. Second, the receptive field at the end of the network should be large enough that its processing units can base operations on larger regions of the input. This can be achieved by stacking many small filters or by using large filters in the network; notably, the first choice can be implemented with fewer operations (faster, fewer parameters) and also allows inclusion of more nonlinearities. Third, dimensionality reduction (1×1 convolutions) before spatially aggregating filters (e.g. 3×3) is supported by the fact that outputs of neighbouring filters are highly correlated, so these activations can be reduced before aggregation [23]. Fourth, spatial factorization into asymmetric filters can even further reduce computational cost and

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
ease the learning problem. Fifth, it is important to normalize the responses of each feature channel within a batch to reduce internal covariate shift [11]. The last architectural guideline is to use residual connections to facilitate training of very deep models, which are essential for good performance [8].

Figure 1: Our method introduces residual connections in a two-stream ConvNet model [20]. The two networks separately capture spatial (appearance) and temporal (motion) information to recognize the input sequences. We do not use residuals from the spatial into the temporal stream as this would bias both losses towards appearance information.

We carry over these good practices for designing ConvNets in the image domain to the video domain by converting the 1×1 convolutional dimensionality mapping filters in ResNets to temporal filters. By stacking several of these transformed temporal filters throughout the network we provide a large receptive field for the discriminative units at the end of the network. Further, this design allows us to convert spatial ConvNets into spatiotemporal models and thereby exploit the large amount of training data available in image datasets such as ImageNet. We build on the two-stream approach [20], which employs two separate ConvNet streams: a spatial appearance stream, which achieves state-of-the-art action recognition from RGB images, and a temporal motion stream, which operates on optical flow information. The two-stream architecture is inspired by the two-stream hypothesis from neuroscience [6], which postulates two pathways in the visual cortex: the ventral pathway, which responds to spatial features such as shape or colour of objects, and the dorsal pathway, which is sensitive to object transformations and their spatial relationships, as e.g. caused by motion.
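As a minimal illustration of the two-stream idea (a toy sketch of ours, not the paper's code, with invented score vectors), the baseline combines the two streams by late fusion: per-class scores from the appearance and motion models are averaged before taking the argmax.

```python
import numpy as np

def late_fusion(appearance_scores, motion_scores):
    """Average the per-class scores of the two streams, then pick the top class."""
    fused = (np.asarray(appearance_scores, dtype=float)
             + np.asarray(motion_scores, dtype=float)) / 2.0
    return fused, int(np.argmax(fused))

# Toy scores for 3 action classes: appearance mildly prefers class 0,
# motion strongly prefers class 2; the fused prediction is class 2.
app = [2.0, 1.0, 1.8]
mot = [0.5, 0.2, 3.0]
fused, pred = late_fusion(app, mot)
print(fused, pred)   # fused = [1.25, 0.6, 2.4], predicted class 2
```

Averaging pre-softmax scores rather than probabilities mirrors the fusion choice reported later in Sec. 4.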
We extend two-stream ConvNets in the following ways. First, motivated by the recent success of residual networks (ResNets) [8] for numerous challenging recognition tasks on datasets such as ImageNet and MS COCO, we apply ResNets to the task of human action recognition in videos. Here, we initialize our model with pre-trained ResNets for image categorization [8] to leverage a large amount of image-based training data for the action recognition task in video. Second, we demonstrate that injecting residual connections between the two streams (see Fig. 1) and jointly fine-tuning the resulting model achieves improved performance over the two-stream architecture. Third, we overcome the limited temporal receptive field size of the original two-stream approach by extending the model over time. We convert convolutional dimensionality mapping filters to temporal filters that provide the network with learnable residual connections over time. By stacking several of these temporal filters and sampling the input sequence at large temporal strides (i.e. skipping frames), we enable the network to operate over large temporal extents of the input. To demonstrate the benefits of our proposed spatiotemporal ResNet architecture, we evaluate it on two standard action recognition benchmarks, where it greatly boosts the state-of-the-art.

2 Related work

Approaches for action recognition in video can largely be divided into two categories: those that use hand-crafted features with decoupled classifiers and those that jointly learn features and classifiers. Our work is related to the latter, which is outlined in the following. Several approaches have been presented for spatiotemporal feature learning. Unsupervised learning techniques have been applied by stacking ISA or convolutional gated RBMs to learn spatiotemporal features for action recognition [16, 25]. In other work, spatiotemporal features are learned by extending 2D ConvNets into time by stacking consecutive video frames [12].
Yet another study compared several approaches to extending ConvNets into the temporal domain, but with rather disappointing results [13]: the architectures were not particularly sensitive to temporal modelling, with a slow fusion model performing slightly better than early and late fusion alternatives; moreover, similar levels of performance were achieved by a purely spatial network. The recently proposed C3D approach learns 3D ConvNets on a limited temporal support of 16 frames, with all filter kernels having size 3×3×3 [26]. The network structure is similar to earlier deep spatial networks [21]. Another research branch has investigated combining image information in network architectures across longer time periods. A comparison of temporal pooling architectures suggested that temporal pooling of convolutional layers performs better than slow, local, or late pooling, as well as temporal convolution [18]. That work also considered ordered sequence modelling, which feeds ConvNet features into a recurrent network with Long Short-Term Memory (LSTM) cells. Using LSTMs, however, did not yield an improvement over temporal pooling of convolutional features. Other work trained an LSTM on human skeleton sequences to regularize another LSTM that uses an Inception network for frame-level descriptor input [17]. Yet other work uses a multilayer LSTM to let the model attend to relevant spatial parts in the input frames [19]. Further, the inner product of a recurrent model has been replaced with a 2D convolution, thereby converting the fully connected hidden layers of a GRU-RNN into 2D convolutional operations [1]. That approach takes advantage of the local spatial similarity in images; however, it only yields a minor increase over its baseline, a two-stream VGG-16 ConvNet [21] used as the input to the convolutional RNN.
Finally, three recent approaches for action recognition apply ConvNets as follows: in [2] dynamic images are created by weighted averaging of video frames over time; [31] captures the transformation of ConvNet features from the beginning to the end of the video with a Siamese architecture; and [5] introduces a spatiotemporal convolutional fusion layer between the streams of a two-stream architecture. Notably, the work most closely related to ours (and to several of those above) is the two-stream ConvNet architecture [20]. That approach first decomposes video into spatial and temporal components by using RGB and optical flow frames. These components are fed into separate deep ConvNet architectures to learn spatial as well as temporal information about the appearance and movement of the objects in a scene. Each stream initially performs video recognition on its own, and for final classification the softmax scores are combined by late fusion. To date, this is the most effective approach for applying deep learning to action recognition, especially with limited training data. In our work we directly convert image ConvNets into 3D architectures and show greatly improved performance over the two-stream baseline.

3 Technical approach

3.1 Two-Stream residual networks

As our base representation we use deep ResNets [8, 9]. These networks are designed similarly to the VGG networks [21], with small 3×3 spatial filters (except at the first layer), and similarly to the Inception networks [23], with 1×1 filters for learned dimensionality reduction and expansion. The network sees an input of size 224×224 that is reduced five times in the network by stride-2 convolutions, followed by a global average pooling layer over the final 7×7 feature map and a fully-connected classification layer with softmax. Each time the spatial size of the feature map changes, the number of features is doubled to avoid tight bottlenecks.
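As a back-of-the-envelope check of that schedule (our illustration, not code from the paper): the 224×224 input is halved five times down to the 7×7 map that is average-pooled, while the meta-layer output widths grow from 64 to 2048, roughly doubling as the resolution shrinks.

```python
# Five stride-2 stages of a ResNet-50-style network each halve the
# spatial resolution of the 224x224 input.
size, sizes = 224, []
for _ in range(5):
    size //= 2
    sizes.append(size)
print(sizes)   # [112, 56, 28, 14, 7]

# Output channels of the meta layers conv1, conv2_x, ..., conv5_x:
# after the stem, the width doubles whenever the feature map shrinks.
channels = [64, 256, 512, 1024, 2048]
assert all(b == 2 * a for a, b in zip(channels[1:], channels[2:]))
```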
Batch normalization [11] and ReLU [14] are applied after each convolution; the network does not use hidden fc layers, dropout, or max-pooling (except immediately after the first layer). The residual units are defined as [8, 9]:

x_{l+1} = f( x_l + F(x_l; W_l) ),   (1)

where x_l and x_{l+1} are the input and output of the l-th layer, F is a nonlinear residual mapping represented by convolutional filter weights W_l = {W_{l,k} | 1 ≤ k ≤ K} with K ∈ {2, 3}, and f ≡ ReLU [9]. A key advantage of residual units is that their skip connections allow direct signal propagation from the first to the last layer of the network. Especially during backpropagation this arrangement is advantageous: gradients are propagated directly from the loss layer to any previous layer while skipping intermediate weight layers that have the potential to trigger vanishing or deterioration of the gradient signal. We also leverage the two-stream architecture [20]. For both streams, we use the ResNet-50 model [8] pretrained on the ImageNet dataset and replace the last (classification) layer according to the number of classes in the target dataset. The filters in the first layer of the motion stream are further modified by replicating the three RGB filter channels to a size of 2L = 20 for operating over the horizontal and vertical optical flow stacks, each of which has a stack of L = 10 frames. This tack allows us to exploit the availability of a large amount of annotated training data for both streams.

Figure 2: The conv5_x residual units of our architecture.
A residual connection (highlighted in red) between the two streams enables motion interactions. The second residual unit, conv5_2, also includes temporal convolutions (highlighted in green) for learning abstract spacetime features.

A drawback of the two-stream architecture is that it is unable to spatiotemporally register appearance and motion information. Thus, it is not able to represent what (captured by the spatial stream) moves in which way (captured by the temporal stream). Here, we remedy this deficiency by letting the network learn such spatiotemporal cues at several spatiotemporal scales. We enable this interaction by introducing residual connections between the two streams. Just as there can be various types of shortcut connections in a ResNet, there are several ways the two streams can be connected. In preliminary experiments we found that direct connections between identical layers of the two streams led to an increase in validation error. Similarly, bidirectional connections increased the validation error significantly. We conjecture that these results are due to the large change that the signal of one network stream undergoes after injecting a fusion signal from the other stream. Therefore, we developed a more subtle alternative solution based on additive interactions, as follows.

Motion Residuals. We inject a skip connection from the motion stream to the appearance stream's residual unit. To enable learning of spatiotemporal features at all possible scales, this modification is applied before the second residual unit at each spatial resolution of the network (indicated by "skip-stream" in Table 1), as exemplified by the connection at the conv5_x layers in Fig. 2.
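A toy numpy sketch of such a motion-to-appearance skip connection (our illustration, not the authors' code; the residual mapping F is stood in for by a single random linear map and f is ReLU):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
C = 8                                      # toy channel count
W_a = 0.1 * rng.standard_normal((C, C))    # stand-in for the mapping F(.; W_a)

def unit_with_motion_skip(x_a, x_m):
    """Appearance residual unit whose residual branch additionally receives
    the rectified motion features: f(x_a) + F(x_a + f(x_m))."""
    return relu(x_a) + (x_a + relu(x_m)) @ W_a

x_a = rng.standard_normal(C)   # appearance features entering layer l
x_m = rng.standard_normal(C)   # motion features entering layer l
out = unit_with_motion_skip(x_a, x_m)
print(out.shape)   # (8,)

# If the motion features are all negative, ReLU zeroes them and the unit
# reduces to a plain residual unit of the appearance stream.
plain = relu(x_a) + x_a @ W_a
assert np.allclose(unit_with_motion_skip(x_a, -np.ones(C)), plain)
```

The additive (rather than concatenative) injection keeps the appearance path's dimensionality unchanged, which is what allows pretrained weights to be reused.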
Formally, the corresponding appearance stream's residual units (1) are modified according to

x̂^a_{l+1} = f(x^a_l) + F( x^a_l + f(x^m_l), W^a_l ),   (2)

where x^a_l is the input of the l-th layer of the appearance stream, x^m_l the input of the l-th layer of the motion stream, and W^a_l are the weights of the l-th layer residual unit in the appearance stream. For the gradient of the loss function L in the backward pass, the chain rule yields

∂L/∂x^a_l = ∂L/∂x̂^a_{l+1} · ∂x̂^a_{l+1}/∂x^a_l = ∂L/∂x̂^a_{l+1} · ( ∂f(x^a_l)/∂x^a_l + ∂/∂x^a_l F( x^a_l + f(x^m_l), W^a_l ) )   (3)

for the appearance stream and similarly for the motion stream

∂L/∂x^m_l = ∂L/∂x^m_{l+1} · ∂x^m_{l+1}/∂x^m_l + ∂L/∂x̂^a_{l+1} · ∂/∂x^a_l F( x^a_l + f(x^m_l), W^a_l ),   (4)

where the first additive term of (4) is the gradient at the l-th layer in the motion stream and the second term accumulates gradients from the appearance stream. Thus, the residual connection between the streams backpropagates gradients from the appearance stream into the motion stream.

3.2 Convolutional residual connections across time

Spatiotemporal coherence is an important cue when working with time-varying visual data and can be exploited to learn general representations from video in an unsupervised manner [7]. In that case, temporal smoothness is an important property and is enforced by requiring features to vary slowly with respect to time. Further, one can expect that in many cases a ConvNet captures similar features across time. For example, an action with repetitive motion patterns such as "Hammering" would trigger similar features in the appearance and motion streams over time. For such cases the use of temporal residual connections would make perfect sense.
However, for cases where the appearance or the instantaneous motion pattern varies over time, a residual connection would be suboptimal for discriminative learning, since the sum operation corresponds to a low-pass filtering over time and would smooth out potentially important high-frequency temporal variation of the features. Moreover, backpropagation is unable to compensate for that deficit, since at a sum layer all gradients are distributed equally from output to input connections. Based on the above observations, we developed a novel approach to temporal residual connections that builds on the ConvNet design guidelines of chaining small [21] asymmetric [10, 23] filters, noted in Sec. 1.

Figure 3: The temporal receptive field of a single neuron at the fifth meta layer of our motion network stream is highlighted. τ indicates the temporal stride between inputs. The outputs of conv5_3 are max-pooled in time and fed to the fully connected layer of our ST-ResNet*.
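The low-pass concern above is easy to see in a toy example (ours, not from the paper): summing adjacent feature responses across time preserves a slowly varying signal but completely cancels one that alternates every frame.

```python
import numpy as np

t = np.arange(16)
slow = np.sin(2 * np.pi * t / 16)   # slowly varying response over 16 frames
fast = (-1.0) ** t                  # flips sign every frame (highest frequency)

def adjacent_average(x):
    """Residual-style combination of neighbouring frames, rescaled by 1/2."""
    return (x[:-1] + x[1:]) / 2.0

print(np.abs(adjacent_average(slow)).max())  # ~0.96: the slow signal survives
print(np.abs(adjacent_average(fast)).max())  # 0.0: the alternating signal cancels
```

This is why the paper initializes temporal filters as averages but lets backpropagation refine them away from pure summation.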
We extend the ResNet architecture with temporal convolutions by transforming the spatial dimensionality mapping filters in the residual paths into temporal filters. This allows the straightforward use of standard two-stream ConvNets that have been pre-trained on large-scale datasets, e.g. to leverage the massive amounts of training data from the ImageNet challenge. We initialize the temporal weights as residual connections across time and let the network learn to best discriminate image dynamics via backpropagation. We achieve this by replicating the learned spatial 1×1 dimensionality mapping kernels of pretrained ResNets across time. Given the pretrained spatial weights w_l ∈ R^{1×1×C}, the temporal filters ŵ_l ∈ R^{1×1×T′×C} are initialized according to

ŵ_l(i, j, t, c) = w_l(i, j, c) / T′,   ∀t ∈ [1, T′],   (5)

and subsequently refined via backpropagation. In (5), the division by T′ serves to average feature responses across time. We transform filters from both the motion and the appearance ResNets accordingly. Hence, the temporal filters are able to learn the temporal evolution of the appearance and motion features; moreover, by stacking such filters as the depth of the network increases, complex spatiotemporal relationships can be modelled.

3.3 Proposed architecture

Our overall architecture (used for each stream) is summarized in Table 1. The underlying network is a 50-layer ResNet [8]. Each filtering operation is followed by batch normalization [11] and halfway rectification (ReLU). The columns of Table 1 show "metalayers" which share the same output size. From left to right, top to bottom, the first row shows the convolutional and pooling building blocks, with filter and pooling sizes given as (W × H × T, C), denoting width, height, temporal extent and number of feature channels, respectively. Brackets outline residual units equipped with skip connections. The last two rows show the output size of these metalayers as well as the receptive field on which they operate.
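A sketch of the initialization (5) (our illustration, not the released code): the pretrained 1×1 spatial kernel is replicated T′ times along a new temporal axis and divided by T′, so at initialization the filter simply averages feature responses across time.

```python
import numpy as np

def temporalize(w_spatial, T):
    """Turn a pretrained (1, 1, C) mapping kernel into a (1, 1, T, C)
    temporal kernel per eq. (5): replicate across time, divide by T."""
    return np.repeat(w_spatial[:, :, None, :], T, axis=2) / T

C, T = 4, 3
w = np.arange(1.0, C + 1).reshape(1, 1, C)   # toy pretrained weights
w_t = temporalize(w, T)
print(w_t.shape)   # (1, 1, 3, 4)

# Applied to T identical per-frame responses, the temporal filter returns
# exactly the response of the original spatial filter.
assert np.allclose(w_t.sum(axis=2), w)
```

Starting from an exact temporal average means the converted network initially reproduces the pretrained two-stream model, and any deviation is learned.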
One observes that the temporal receptive field is modulated by the temporal stride τ between the input chunks. For example, if the stride is set to τ = 15 frames, a unit at conv5_3 sees a window of 17 · 15 = 255 frames of the input video; see Fig. 3. The pool5 layer receives multiple spatiotemporal features, where the spatial 7 × 7 features are averaged as in [8] and the temporal features are max-pooled within a window of 5, with each of these seeing a window of 705 frames at the input. The pool5 output is classified by a fully connected layer of size 1 × 1 × 1 × 2048; note that this passes several temporally max-pooled chunks to the softmax log-loss layer afterwards. For videos with fewer than 705 frames we reduce the stride between temporal inputs, and for extremely short videos we symmetrically pad the input over time.

Sub-batch normalization. Batch normalization [11] subtracts from all activations the batchwise mean and divides by their variance. These moments are estimated by averaging over spatial locations and multiple images in the batch. After batch normalization a learned, channel-specific affine transformation (scaling and bias) is applied. The noisy bias/variance estimation replaces the need for dropout regularization [8, 24]. We found that lowering the number of samples used for batch normalization can further improve the generalization performance of the model. For example, for the appearance stream we use a low batch size of 4 for moment estimation during training. This practice strongly supports generalization of the model and nontrivially increases validation accuracy (≈4% on UCF101). Interestingly, in comparison to this approach, using dropout after the classification layer (e.g. as in [24]) decreased validation accuracy of the appearance stream. Note that only the batch size used for normalizing the activations is reduced; the batch size in stochastic gradient descent is unchanged.

Table 1: Spatiotemporal ResNet architecture used in both ConvNet streams. Filter and pooling sizes are given as (W×H×T, C). Each bracketed residual unit carries a skip connection, and "skip-stream" denotes a residual connection from the motion to the appearance stream (see Fig. 2 for the conv5_2 building block). Stride-2 downsampling is performed by conv1, pool1, conv3_1, conv4_1 and conv5_1. For both streams, the pool5 layer is followed by a 1 × 1 × 1 × 2048 fully connected layer, a softmax and a loss.

conv1:   7×7×1, 64 (stride 2); output 112×112×11; receptive field 7×7×1
pool1:   3×3×1 max (stride 2); output 56×56×11; receptive field 11×11×1
conv2_x: [1×1×1, 64 / 3×3×1, 64 / 1×1×1, 256] + skip-stream; [1×1×3, 64 / 3×3×1, 64 / 1×1×3, 256]; [1×1×1, 64 / 3×3×1, 64 / 1×1×1, 256]; output 56×56×11; receptive field 35×35×5τ
conv3_x: [1×1×1, 128 / 3×3×1, 128 / 1×1×1, 512] + skip-stream; [1×1×3, 128 / 3×3×1, 128 / 1×1×3, 512]; [1×1×1, 128 / 3×3×1, 128 / 1×1×1, 512] ×2; output 28×28×11; receptive field 99×99×9τ
conv4_x: [1×1×1, 256 / 3×3×1, 256 / 1×1×1, 1024] + skip-stream; [1×1×3, 256 / 3×3×1, 256 / 1×1×3, 1024]; [1×1×1, 256 / 3×3×1, 256 / 1×1×1, 1024] ×4; output 14×14×11; receptive field 291×291×13τ
conv5_x: [1×1×1, 512 / 3×3×1, 512 / 1×1×1, 2048] + skip-stream; [1×1×3, 512 / 3×3×1, 512 / 1×1×3, 2048]; [1×1×1, 512 / 3×3×1, 512 / 1×1×1, 2048]; output 7×7×11; receptive field 483×483×17τ
pool5:   7×7×1 avg; 1×1×5 max; output 1×1×4; receptive field 675×675×47τ

3.4 Model training and evaluation

Our method has been implemented in MatConvNet [28] and we share our code and models at https://github.com/feichtenhofer/st-resnet. We train our model in three optimization steps with the parameters listed in Table 2.
Table 2: Parameters for the three training phases of our model.

Training phase     | SGD batch size | Bnorm batch size | Learning rate (#iterations)           | Temporal chunks / stride τ
Motion stream      | 256            | 86               | 10^-2 (30K), 10^-3 (10K)              | 1 / τ = 1
Appearance stream  | 256            | 8                | 10^-2 (10K), 10^-3 (10K)              | 1 / τ = 1
ST-ResNet          | 128            | 4                | 10^-3 (30K), 10^-4 (30K), 10^-5 (20K) | 5 / τ ∈ [5, 15]
ST-ResNet*         | 128            | 4                | 10^-4 (2K), 10^-5 (2K)                | 11 / τ ∈ [1, 15]

Motion and appearance streams. First, each stream is trained similarly to [20] using Stochastic Gradient Descent (SGD) with a momentum of 0.9. We rescale all videos by keeping the aspect ratio and resizing the smallest side of a frame to 256. The motion network uses optical flow stacking with L = 10 frames and is trained for 30K iterations with a learning rate of 10^-2, followed by 10K iterations at a learning rate of 10^-3. At each iteration, a batch of 256 samples is constructed by randomly sampling a single optical flow stack from a video; however, for batch normalization [11] we only use 86 samples to facilitate generalization. We precompute optical flow [32] before training and store the flow fields as JPEGs (with displacement vectors > 20 pixels clipped). During training, we use the same augmentations as in [1, 31]; i.e. randomly cropping from the borders and centre of the flow stack and sampling the width and height of each crop randomly within {256, 224, 192, 168}, followed by resizing to 224 × 224. The appearance stream is trained identically with batches of 256 RGB frames and a learning rate of 10^-2 for 10K iterations, followed by 10^-3 for another 10K iterations. Notably, here we choose a very small batch size of 8 for normalization. We also apply random cropping and scale augmentations: we randomly jitter the width and height of the 224 × 224 input frame by ±25% and also randomly crop it from a maximum of 25% distance from the image borders. The cropped patch is rescaled to 224 × 224 and passed as input to the network.
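The jittering just described can be sketched as follows (our reading of the text; the exact sampling in the authors' MatConvNet code may differ, and the 340×256 image size and helper name are illustrative): crop width and height are jittered by ±25% around 224, the crop corner is placed within 25% of the distance to the borders, and the patch would then be resized back to 224×224.

```python
import random

def sample_crop(img_w=340, img_h=256, base=224, jitter=0.25, rng=None):
    """Sample a crop (x, y, w, h): sides jittered by +/-25% around `base`,
    corner within `jitter` of the leftover distance to the borders."""
    rng = rng or random.Random(0)
    w = min(int(base * rng.uniform(1 - jitter, 1 + jitter)), img_w)
    h = min(int(base * rng.uniform(1 - jitter, 1 + jitter)), img_h)
    x = int(rng.uniform(0.0, jitter) * (img_w - w))   # offset from left border
    y = int(rng.uniform(0.0, jitter) * (img_h - h))   # offset from top border
    return x, y, w, h

rng = random.Random(0)
crops = [sample_crop(rng=rng) for _ in range(1000)]
# Every sampled crop lies fully inside the image (it would subsequently be
# resized back to 224x224 before being fed to the network).
assert all(0 <= x and 0 <= y and x + w <= 340 and y + h <= 256
           for x, y, w, h in crops)
print(crops[0])
```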
The same rescaling and cropping technique is used to train the next two steps described below. In all our training steps we use random horizontal flipping and do not apply RGB colour jittering [14].

ST-ResNet. Second, to train our spatiotemporal ResNet we sample 5 inputs from a video with a random temporal stride between 5 and 15 frames. This technique can be thought of as frame-rate jittering for the temporal convolutional layers and is important to reduce overfitting of the final model. SGD is used with a batch size of 128 videos, where 5 temporal chunks are extracted from each. Batch normalization uses a smaller batch size of 128/32 = 4. The learning rate is set to 10^-3 and is reduced by a factor of 10 after 30K iterations. Notably, there is no pooling over time, which leads to temporally fully convolutional training with a single loss for each of the 5 inputs and both streams. We found that this strategy significantly reduces the training duration, with the drawback that each loss does not capture all available information. We overcome this with the next training step.

ST-ResNet*. For our final model, we equip the spatiotemporal ResNet with a temporal max-pooling layer after pool5 (see Table 1; temporal average pooling led to inferior results) and continue training as above with the learning rate starting from 10^-4 for 2K iterations, followed by 10^-5. As indicated in Table 2, we now use 11 temporal chunks as input, with the stride τ between them randomly chosen from [1, 15].

Fully convolutional inference. For fair comparison, we follow the evaluation procedure of the original two-stream work [20] by sampling 25 frames (and their horizontal flips). However, rather than using 10 spatial 224 × 224 crops from each of the frames, we apply fully convolutional testing both spatially (smallest side rescaled to 256) and temporally (the 25 frame-chunks) by classifying the video in a single forward pass, which takes ≈250 ms on a Titan X GPU.
For inference, we average the predictions of the fully connected layers (without softmax) over all spatiotemporal locations.

4 Evaluation

We evaluate our approach on two challenging action recognition datasets. First, we consider UCF101 [22], which consists of 13320 videos showing 101 action classes. It provides large diversity in terms of actions, variations in background, illumination, camera motion and viewpoint, as well as object appearance, scale and pose. Second, we consider HMDB51 [15], which has 6766 videos showing 51 different actions and is generally considered more challenging than UCF101 due to the even wider variations in which actions occur. For both datasets, we use the provided evaluation protocol and report mean average accuracy over three splits into training and test sets.

4.1 Two-Stream ResNet with additive interactions

Table 3 shows the results of our two-stream architecture across the three training stages outlined in Sec. 3.4. For stream fusion, we always average the (non-softmaxed) prediction scores of the classification layer, as this produces better results than averaging the softmax scores. Initially, let us consider the performance of the two streams, both initialized with ResNet-50 models trained on ImageNet [8], but without cross-stream residual connections (2) and temporal convolutional layers (5). The accuracies on UCF101 and HMDB51 are 89.47% and 60.59%, respectively (our HMDB51 motion stream is initialized from the UCF101 model). Comparatively, a VGG16 two-stream architecture produces 91.4% and 58.5% [1, 31]. In comparing these results it is notable that the VGG16 architecture is more computationally demanding (19.6 vs. 3.8 billion multiply-add FLOPs) and also holds more model parameters (135M vs. 34M) than a ResNet-50 model.
Dataset    Appearance stream    Motion stream    Two-Streams    ST-ResNet    ST-ResNet*
UCF101     82.29%               79.05%           89.47%         92.76%       93.46%
HMDB51     43.42%               55.47%           60.59%         65.57%       66.41%
Table 3: Classification accuracy on UCF101 and HMDB51 in the three training stages of our model.
We now consider our proposed spatiotemporal ResNet (ST-ResNet), which is initialized by our two-stream ResNet50 model from above and subsequently equipped with 4 residual connections between the streams and 16 transformed temporal convolution layers (initialized as averaging filters). The model is trained end-to-end with the loss layers unchanged (we found that using a single, joint softmax classifier overfits severely to appearance information) and learning parameters chosen as in Table 2. The results are shown in the penultimate column of Table 3. Our architecture significantly improves over the two-stream baseline, indicating the importance of residual connections between the streams as well as temporal convolutional connections over time. Interestingly, research in neuroscience also suggests that the human visual cortex is equipped with connections between the dorsal and the ventral stream to distribute motion information to separate visual areas [3, 27]. Finally, in the last column of Table 3 we show results for our ST-ResNet* architecture, which is further equipped with a temporal max-pooling layer to consider larger temporal windows in training and testing. For training ST-ResNet* we use 11 temporal chunks at the input, and the max-pooling layer pools over 5 chunks to expand the temporal receptive field at the loss layer to a maximum of 705 frames at the input. For testing, where the network sees 25 temporal chunks, we observe that this long-term pooling further improves accuracy over our ST-ResNet by around 1% on both datasets. 4.2 Comparison with the state-of-the-art We compare to the state-of-the-art in action recognition over all three splits of UCF101 and HMDB51 in Table 4 (left).
We use ST-ResNet*, as above, and predict the videos in a single forward pass using fully convolutional testing. When comparing to the original two-stream method [20], we improve by 5.4% on UCF101 and by 7% on HMDB51. Apparently, even though the original two-stream approach has the advantage of multitask learning (HMDB51) and SVM fusion, the benefits of our deeper architecture with its cross-stream residual connections are greater. Another interesting comparison is against the two-stream network in [18], which attaches an LSTM to a two-stream Inception [23] architecture. Their accuracy of 88.6% is to date the best performing approach using LSTMs for action recognition. Here, our gain of 4.8% further underlines the importance of our architectural choices.

Method                              UCF101    HMDB51
Two-Stream ConvNet [20]             88.0%     59.4%
Two-Stream + LSTM [18]              88.6%     –
Two-Stream (VGG16) [1, 31]          91.4%     58.5%
Transformations [31]                92.4%     62.0%
Two-Stream Fusion [5]               92.5%     65.4%
ST-ResNet*                          93.4%     66.4%

Method                              UCF101    HMDB51
IDT [29]                            86.4%     61.7%
C3D + IDT [26]                      90.4%     –
TDD + IDT [30]                      91.5%     65.9%
Dynamic Image Networks + IDT [2]    89.1%     65.2%
Two-Stream Fusion [5]               93.5%     69.2%
ST-ResNet* + IDT                    94.6%     70.3%

Table 4: Mean classification accuracy of the state-of-the-art on HMDB51 and UCF101 for the best ConvNet approaches (left) and methods that additionally use IDT features (right). Our ST-ResNet obtains the best performance on both datasets.
The Transformations [31] method captures the transformation from start to finish of a video by using two VGG16 Siamese streams (that do not share model parameters, i.e. 4 VGG16 models) to discriminatively learn a transformation matrix. This method uses considerably more parameters than our approach, yet is readily outperformed by ours. When comparing with the previously best performing approach [5], we observe that our method provides a consistent performance gain of around 1% on both datasets.
The combination of ConvNet methods with trajectory-based hand-crafted IDT features [29] typically boosts performance nontrivially [2, 26]. Therefore, we further explore the benefits of adding trajectory features to our approach. We achieve this by simply averaging the L2-normalized SVM scores of the FV-encoded IDT descriptors (i.e. HOG, HOF, MBH) [29] with the L2-normalized video predictions of our ST-ResNet*, again without softmax normalization. The results are shown in Table 4 (right), where we observe a notable boost in accuracy of our approach on HMDB51, albeit a smaller one on UCF101. Note that unlike our approach, the other approaches in Table 4 (right) suffer considerably larger performance drops when used without IDT; e.g. C3D [26] reduces to 85.2% on UCF101, while Dynamic Image Networks [2] reduces to 76.9% on UCF101 and 42.8% on HMDB51. These relatively larger performance decrements again underline that our approach is better able to capture the available dynamic information, as there is less to be gained by augmenting it with IDT. Still, there is a benefit from the hand-crafted IDT features even with our approach, which could be attributed to their explicit compensation of camera motion. Overall, our 94.6% on UCF101 and 70.3% on HMDB51 clearly set a new state-of-the-art on these widely used action recognition datasets. 5 Conclusion We have presented a novel spatiotemporal ResNet architecture for video-based action recognition. In particular, our approach is the first to combine the two-stream approach with residual networks and to show the great advantage that results. Our ST-ResNet allows the hierarchical learning of spacetime features by connecting the appearance and motion channels of a two-stream architecture. Furthermore, we transfer both streams from the spatial to the spatiotemporal domain by transforming the dimensionality-mapping filters of a pre-trained model into temporal convolutions, initialized as residual filters over time.
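The late fusion described above can be sketched as follows. This is a minimal NumPy illustration of averaging L2-normalized score vectors; the function and argument names are ours, and the SVM scoring and FV encoding that produce the inputs are not shown:

```python
import numpy as np

def fuse_scores(convnet_scores, idt_scores):
    """Average L2-normalized class scores from the ConvNet and the IDT SVM.

    A minimal sketch of the late fusion described in the text; no softmax
    is applied to either score vector, as in the paper.
    """
    a = convnet_scores / np.linalg.norm(convnet_scores)
    b = idt_scores / np.linalg.norm(idt_scores)
    return (a + b) / 2.0
```

Normalizing each score vector before averaging puts the two modalities on a comparable scale, so neither classifier dominates the fused prediction simply because its raw scores are larger.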
The whole system is trained end-to-end and achieves state-of-the-art performance on two popular action recognition datasets. Acknowledgments. This work was supported by the Austrian Science Fund (FWF) under project P27076 and NSERC. The GPUs used for this research were donated by NVIDIA. Christoph Feichtenhofer is a recipient of a DOC Fellowship of the Austrian Academy of Sciences at the Institute of Electrical Measurement and Measurement Signal Processing, Graz University of Technology. References [1] Nicolas Ballas, Li Yao, Chris Pal, and Aaron Courville. Delving deeper into convolutional networks for learning video representations. In Proc. ICLR, 2016. [2] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi, and S. Gould. Dynamic image networks for action recognition. In Proc. CVPR, 2016. [3] Richard T Born and Roger BH Tootell. Segregation of global and local motion processing in primate middle temporal visual area. Nature, 357(6378):497–499, 1992. [4] Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In Proc. CVPR, 2015. [5] Christoph Feichtenhofer, Axel Pinz, and Andrew Zisserman. Convolutional two-stream network fusion for video action recognition. In Proc. CVPR, 2016. [6] M. A. Goodale and A. D. Milner. Separate visual pathways for perception and action. Trends in Neurosciences, 15(1):20–25, 1992. [7] Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen, and Yann LeCun. Unsupervised feature learning from temporal data. In Proc. ICCV, 2015. [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. arXiv preprint arXiv:1603.05027, 2016.
[10] Yani Ioannou, Duncan Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with low-rank filters for efficient image classification. In Proc. ICLR, 2016. [11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proc. ICML, 2015. [12] S. Ji, W. Xu, M. Yang, and K. Yu. 3D convolutional neural networks for human action recognition. IEEE PAMI, 35(1):221–231, 2013. [13] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In Proc. CVPR, 2014. [14] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. [15] Hildegard Kuehne, Hueihan Jhuang, Estíbaliz Garrote, Tomaso Poggio, and Thomas Serre. HMDB: a large video database for human motion recognition. In Proc. ICCV, 2011. [16] Quoc V Le, Will Y Zou, Serena Y Yeung, and Andrew Y Ng. Learning hierarchical invariant spatiotemporal features for action recognition with independent subspace analysis. In Proc. CVPR, 2011. [17] Behrooz Mahasseni and Sinisa Todorovic. Regularizing long short term memory with 3D human-skeleton sequences for action recognition. [18] Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, and George Toderici. Beyond short snippets: Deep networks for video classification. In Proc. CVPR, 2015. [19] Shikhar Sharma, Ryan Kiros, and Ruslan Salakhutdinov. Action recognition using visual attention. In NIPS workshop on Time Series, 2015. [20] K. Simonyan and A. Zisserman. Two-stream convolutional networks for action recognition in videos. In NIPS, 2014. [21] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Proc. ICLR, 2014. [22] Khurram Soomro, Amir Roshan Zamir, and Mubarak Shah.
UCF101: A dataset of 101 human actions classes from videos in the wild. Technical Report CRCV-TR-12-01, 2012. [23] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. arXiv preprint arXiv:1512.00567, 2015. [24] Christian Szegedy, Sergey Ioffe, and Vincent Vanhoucke. Inception-v4, inception-resnet and the impact of residual connections on learning. arXiv preprint arXiv:1602.07261, 2016. [25] G. W. Taylor, R. Fergus, Y. LeCun, and C. Bregler. Convolutional learning of spatio-temporal features. In Proc. ECCV, 2010. [26] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri. Learning spatiotemporal features with 3D convolutional networks. In Proc. ICCV, 2015. [27] David C Van Essen and Jack L Gallant. Neural mechanisms of form and motion processing in the primate visual system. Neuron, 13(1):1–10, 1994. [28] A. Vedaldi and K. Lenc. MatConvNet – convolutional neural networks for MATLAB. In Proceedings of the ACM Int. Conf. on Multimedia, 2015. [29] Heng Wang and Cordelia Schmid. Action recognition with improved trajectories. In Proc. ICCV, 2013. [30] Limin Wang, Yu Qiao, and Xiaoou Tang. Action recognition with trajectory-pooled deep-convolutional descriptors. In Proc. CVPR, 2015. [31] Xiaolong Wang, Ali Farhadi, and Abhinav Gupta. Actions ~ transformations. In Proc. CVPR, 2016. [32] C. Zach, T. Pock, and H. Bischof. A duality based approach for realtime TV-L1 optical flow. In Proc. DAGM, pages 214–223, 2007.
Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes
Jack W Rae*, Jonathan J Hunt*, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, Timothy P Lillicrap
Google DeepMind
{jwrae, jjhunt, tharley, danihelka, andrewsenior, gregwayne, gravesa, countzero}@google.com
Abstract Neural networks augmented with external memory have the ability to learn algorithmic solutions to complex tasks. These models appear promising for applications such as language modeling and machine translation. However, they scale poorly in both space and time as the amount of memory grows, limiting their applicability to real-world domains. Here, we present an end-to-end differentiable memory access scheme, which we call Sparse Access Memory (SAM), that retains the representational power of the original approaches whilst training efficiently with very large memories. We show that SAM achieves asymptotic lower bounds in space and time complexity, and find that an implementation runs 1,000× faster and with 3,000× less physical memory than non-sparse models. SAM learns with comparable data efficiency to existing models on a range of synthetic tasks and one-shot Omniglot character recognition, and can scale to tasks requiring 100,000s of time steps and memories. As well, we show how our approach can be adapted for models that maintain temporal associations between memories, as with the recently introduced Differentiable Neural Computer. 1 Introduction Recurrent neural networks, such as the Long Short-Term Memory (LSTM) [11], have proven to be powerful sequence learning models [6, 18]. However, one limitation of the LSTM architecture is that the number of parameters grows proportionally to the square of the size of the memory, making them unsuitable for problems requiring large amounts of long-term memory.
Recent approaches, such as Neural Turing Machines (NTMs) [7] and Memory Networks [21], have addressed this issue by decoupling the memory capacity from the number of model parameters. We refer to this class of models as memory augmented neural networks (MANNs). External memory allows MANNs to learn algorithmic solutions to problems that have eluded the capabilities of traditional LSTMs, and to generalize to longer sequence lengths. Nonetheless, MANNs have had limited success in real world application. A significant difficulty in training these models results from their smooth read and write operations, which incur linear computational overhead on the number of memories stored per time step of training. Even worse, they require duplication of the entire memory at each time step to perform backpropagation through time (BPTT). To deal with sufficiently complex problems, such as processing a book, or Wikipedia, this overhead becomes prohibitive. For example, to store 64 memories, a straightforward implementation of the NTM trained over a sequence of length 100 consumes ≈30 MiB physical memory; to store 64,000 memories the overhead exceeds 29 GiB (see Figure 1).
*These authors contributed equally. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
In this paper, we present a MANN named SAM (sparse access memory). By thresholding memory modifications to a sparse subset, and using efficient data structures for content-based read operations, our model is optimal in space and time with respect to memory size, while retaining end-to-end gradient based optimization. To test whether the model is able to learn with this sparse approximation, we examined its performance on a selection of synthetic and natural tasks: algorithmic tasks from the NTM work [7], Babi reasoning tasks used with Memory Networks [17] and Omniglot one-shot classification [16, 12]. We also tested several of these tasks scaled to longer sequences via curriculum learning.
For large external memories we observed improvements in empirical run-time and memory overhead by up to three orders of magnitude over vanilla NTMs, while maintaining near-identical data efficiency and performance. Further, in Supplementary D we demonstrate the generality of our approach by describing how to construct a sparse version of the recently published Differentiable Neural Computer [8]. This Sparse Differentiable Neural Computer (SDNC) is over 400× faster than the canonical dense variant for a memory size of 2,000 slots, and achieves the best reported result in the Babi tasks without supervising the memory access. 2 Background 2.1 Attention and content-based addressing An external memory M \in \mathbb{R}^{N \times M} is a collection of N real-valued vectors, or words, of fixed size M. A soft read operation is defined to be a weighted average over memory words,

r = \sum_{i=1}^{N} w(i) M(i),    (1)

where w \in \mathbb{R}^N is a vector of weights with non-negative entries that sum to one. Attending to memory is formalized as the problem of computing w. A content addressable memory, proposed in [7, 21, 2, 17], is an external memory with an addressing scheme which selects w based upon the similarity of memory words to a given query q. Specifically, for the ith read weight w(i) we define

w(i) = \frac{f(d(q, M(i)))}{\sum_{j=1}^{N} f(d(q, M(j)))},    (2)

where d is a similarity measure, typically Euclidean distance or cosine similarity, and f is a differentiable monotonic transformation, typically a softmax. We can think of this as an instance of kernel smoothing where the network learns to query relevant points q. Because the read operation (1) and content-based addressing scheme (2) are smooth, we can place them within a neural network, and train the full model using backpropagation. 2.2 Memory Networks One recent architecture, Memory Networks, makes use of a content addressable memory that is accessed via a series of read operations [21, 17] and has been successfully applied to a number of question answering tasks [20, 10].
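The soft read of eqs (1) and (2) can be sketched in a few lines of NumPy. This is a minimal illustration assuming cosine similarity for d and a softmax for f; the sharpness parameter beta is our addition, not from the paper:

```python
import numpy as np

def content_read(memory, query, beta=1.0):
    """Soft content-based read, eqs (1)-(2): softmax over similarities.

    A minimal sketch using cosine similarity for d and a softmax for f;
    beta is an assumed sharpness parameter.
    """
    m_norm = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    q_norm = query / np.linalg.norm(query)
    sim = m_norm @ q_norm                       # cosine similarity d(q, M(i))
    w = np.exp(beta * (sim - sim.max()))        # shifted for numerical stability
    w = w / w.sum()                             # read weights, eq (2)
    return w @ memory, w                        # read word r, eq (1)
```

Because both steps are differentiable in `memory` and `query`, this read can sit inside a larger network and be trained by backpropagation, exactly as the text notes.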
In these tasks, the memory is pre-loaded using a learned embedding of the provided context, such as a paragraph of text, and then the controller, given an embedding of the question, repeatedly queries the memory by content-based reads to determine an answer. 2.3 Neural Turing Machine The Neural Turing Machine is a recurrent neural network equipped with a content-addressable memory, similar to Memory Networks, but with the additional capability to write to memory over time. The memory is accessed by a controller network, typically an LSTM, and the full model is differentiable, allowing it to be trained via BPTT. A write to memory,

M_t \leftarrow (1 - R_t) \odot M_{t-1} + A_t,    (3)

consists of a copy of the memory from the previous time step M_{t-1}, decayed by the erase matrix R_t indicating obsolete or inaccurate content, and an addition of new or updated information A_t. The erase matrix R_t = w^W_t e_t^T is constructed as the outer product between a set of write weights w^W_t \in [0, 1]^N and an erase vector e_t \in [0, 1]^M. The add matrix A_t = w^W_t a_t^T is the outer product between the write weights and a new write word a_t \in \mathbb{R}^M, which the controller outputs. 3 Architecture This paper introduces Sparse Access Memory (SAM), a new neural memory architecture with two innovations. Most importantly, all writes to and reads from external memory are constrained to a sparse subset of the memory words, providing similar functionality as the NTM, while allowing computational and memory efficient operation. Secondly, we introduce a sparse memory management scheme that tracks memory usage and finds unused blocks of memory for recording new information. For a memory containing N words, SAM executes a forward and backward step in \Theta(\log N) time, initializes in \Theta(N) space, and consumes \Theta(1) space per time step. Under some reasonable assumptions, SAM is asymptotically optimal in time and space complexity (Supplementary A).
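The erase-and-add write of eq (3) can be sketched directly from its outer-product construction. A minimal NumPy illustration with hypothetical names:

```python
import numpy as np

def ntm_write(memory, w_write, erase_vec, add_word):
    """Dense NTM-style write, eq (3): M_t = (1 - R_t) * M_{t-1} + A_t,
    with erase matrix R_t = w^W e^T and add matrix A_t = w^W a^T."""
    R = np.outer(w_write, erase_vec)   # erase matrix: how much of each cell to decay
    A = np.outer(w_write, add_word)    # add matrix: new content, scaled per row
    return (1.0 - R) * memory + A
```

A write weight of 1 on a row together with an all-ones erase vector fully replaces that row with the new word, while rows with zero write weight are untouched.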
3.1 Read The sparse read operation is defined to be a weighted average over a selection of words in memory,

\tilde{r}_t = \sum_{i=1}^{K} \tilde{w}^R_t(s_i) M_t(s_i),    (4)

where \tilde{w}^R_t \in \mathbb{R}^N contains K non-zero entries with indices s_1, s_2, ..., s_K; K is a small constant, independent of N, typically K = 4 or K = 8. We will refer to sparse analogues of weight vectors w as \tilde{w}, and when discussing operations that are used in both the sparse and dense versions of our model use w. We wish to construct \tilde{w}^R_t such that \tilde{r}_t \approx r_t. For content-based reads where w^R_t is defined by (2), an effective approach is to keep the K largest non-zero entries and set the remaining entries to zero. We can compute \tilde{w}^R_t naively in O(N) time by calculating w^R_t and keeping the K largest values. However, linear-time operation can be avoided. Since the K largest values in w^R_t correspond to the K closest points to our query q_t, we can use an approximate nearest neighbor data structure, described in Section 3.5, to calculate \tilde{w}^R_t in O(\log N) time. Sparse read can be considered a special case of the matrix-vector product defined in (1), with two key distinctions. The first is that we pass gradients for only a constant K number of rows of memory per time step, versus N, which results in a negligible fraction of non-zero error gradient per time step when the memory is large. The second distinction is in implementation: by using an efficient sparse matrix format such as Compressed Sparse Rows (CSR), we can compute (4) and its gradients in constant time and space (see Supplementary A). 3.2 Write The write operation in SAM is an instance of (3) where the write weights \tilde{w}^W_t are constrained to contain a constant number of non-zero entries. This is done by a simple scheme where the controller writes either to previously read locations, in order to update contextually relevant memories, or to the least recently accessed location, in order to overwrite stale or unused memory slots with fresh content.
The introduction of sparsity could be achieved via other write schemes. For example, we could use a sparse content-based write scheme, where the controller chooses a query vector q^W_t and applies writes to similar words in memory. This would allow for direct memory updates, but would create problems when the memory is empty (and shift further complexity to the controller). We decided upon the previously read / least recently accessed addressing scheme for simplicity and flexibility. The write weights are defined as

w^W_t = \alpha_t ( \gamma_t w^R_{t-1} + (1 - \gamma_t) I^U_t ),    (5)

where the controller outputs the interpolation gate parameter \gamma_t and the write gate parameter \alpha_t. The write to the previously read locations w^R_{t-1} is purely additive, while the least recently accessed word I^U_t is set to zero before being written to. When the read operation is sparse (w^R_{t-1} has K non-zero entries), it follows that the write operation is also sparse. We define I^U_t to be an indicator over words in memory, with a value of 1 when the word minimizes a usage measure U_t:

I^U_t(i) = 1 if U_t(i) = \min_{j=1,...,N} U_t(j), and 0 otherwise.    (6)

If there are several words that minimize U_t then we choose arbitrarily between them. We tried two definitions of U_t. The first definition is a time-discounted sum of write weights, U^{(1)}_T(i) = \sum_{t=0}^{T} \lambda^{T-t} (w^W_t(i) + w^R_t(i)), where \lambda is the discount factor. This usage definition is incorporated within Dense Access Memory (DAM), a dense approximation to SAM that is used for experimental comparison in Section 4. The second usage definition, used by SAM, is simply the number of time steps since a non-negligible memory access: U^{(2)}_T(i) = T - \max \{ t : w^W_t(i) + w^R_t(i) > \delta \}. Here, \delta is a tuning parameter that we typically choose to be 0.005. We maintain this usage statistic in constant time using a custom data structure (described in Supplementary A). Finally, we also use the least recently accessed word to calculate the erase matrix.
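Equations (5) and (6) can be sketched with per-slot last-access timestamps standing in for the usage statistic. This is a minimal NumPy illustration; the names and the timestamp representation are our assumptions, and SAM itself maintains the usage statistic with a custom O(1) data structure:

```python
import numpy as np

def write_weights(w_read_prev, last_access, alpha, gamma):
    """Sparse write weights, eq (5): interpolate between the previous read
    locations and the least recently accessed (LRU) slot. `last_access`
    holds the time of each slot's last non-negligible access, so the LRU
    slot is simply its argmin."""
    I_u = np.zeros_like(w_read_prev)
    I_u[np.argmin(last_access)] = 1.0           # LRU indicator, eq (6)
    return alpha * (gamma * w_read_prev + (1.0 - gamma) * I_u)
```

With gamma near 1 the controller refreshes recently read memories; with gamma near 0 it writes fresh content into the stalest slot, and alpha gates whether any write happens at all.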
The erase matrix R_t = I^U_t 1^T is defined as the expansion of this usage indicator, where 1 is a vector of ones. The total cost of the write is constant in time and space for both the forward and backward pass, which improves on the linear space and time of the dense write (see Supplementary A). 3.3 Controller We use a one-layer LSTM for the controller throughout. At each time step, the LSTM receives a concatenation of the external input x_t and the word r_{t-1} read in the previous time step. The LSTM then produces a vector p_t = (q_t, a_t, \alpha_t, \gamma_t) of read and write parameters for memory access via a linear layer. The word read from memory for the current time step, r_t, is then concatenated with the output of the LSTM, and this vector is fed through a linear layer to form the final output, y_t. The full control flow is illustrated in Supplementary Figure 6. 3.4 Efficient backpropagation through time We have already demonstrated how the forward operations in SAM can be efficiently computed in O(T \log N) time. However, when considering the space complexity of MANNs, there remains a dependence on M_t for the computation of the derivatives at the corresponding time step. A naive implementation requires the state of the memory to be cached at each time step, incurring a space overhead of O(NT), which severely limits memory size and sequence length. Fortunately, this can be remedied. Since there are only O(1) words that are written at each time step, we instead track the sparse modifications made to the memory at each time step, and apply them in place to compute M_t in O(1) time and O(T) space. During the backward pass, we can restore the state of M_t from M_{t+1} in O(1) time by reverting the sparse modifications applied at time step t. As such, the memory is actually rolled back to previous states during backpropagation (Supplementary Figure 5). At the end of the backward pass, the memory ends rolled back to the start state.
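The sparse-update log described above can be sketched as a small class that records only the overwritten rows at each step, so the backward pass can restore each earlier memory state by popping the log. Class and method names are ours, a minimal sketch rather than the authors' implementation:

```python
import numpy as np

class RollbackMemory:
    """Track sparse in-place writes so memory states can be restored in
    reverse order during BPTT, using O(1) extra space per time step."""

    def __init__(self, memory):
        self.M = memory
        self.log = []                       # one (rows, old values) entry per step

    def write(self, rows, new_values):
        self.log.append((rows, self.M[rows].copy()))
        self.M[rows] = new_values           # apply the sparse update in place

    def revert(self):
        rows, old_values = self.log.pop()   # undo the most recent step's write
        self.M[rows] = old_values
```

Calling `revert` once per step during the backward pass walks the memory back through its history; after T reverts the memory is in its start state, matching the behaviour described in the text.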
If required, such as when using truncated BPTT, the final memory state can be restored by making a copy of M_T prior to calling backwards in O(N) time, or by re-applying the T sparse updates in O(T) time. 3.5 Approximate nearest neighbors When querying the memory, we can use an approximate nearest neighbor (ANN) index to search over the external memory for the K nearest words. Where a linear KNN search inspects every element in memory (taking O(N) time), an ANN index maintains a structure over the dataset to allow for fast inspection of nearby points in O(\log N) time. In our case, the memory is still a dense tensor that the network directly operates on; however, the ANN is a structured view of its contents. Both the memory and the ANN index are passed through the network and kept in sync during writes. However, there are no gradients with respect to the ANN, as its function is fixed. We considered two types of ANN indexes: FLANN's randomized k-d tree implementation [15], which arranges the datapoints in an ensemble of structured (randomized k-d) trees to search for nearby points via comparison-based search, and one that uses locality sensitive hash (LSH) functions that map points into buckets with distance-preserving guarantees. We used randomized k-d trees for small word sizes and LSHs for large word sizes. For both ANN implementations, there is an O(\log N) cost for insertion, deletion and query. We also rebuild the ANN from scratch every N insertions to ensure it does not become imbalanced. 4 Results 4.1 Speed and memory benchmarks Figure 1: (a) Wall-clock time of a single forward and backward pass. The k-d tree is a FLANN randomized ensemble with 4 trees and 32 checks.
For 1M memories a single forward and backward pass takes 12 s for the NTM and 7 ms for SAM, a speedup of 1600×. (b) Memory used to train over a sequence of 100 time steps, excluding initialization of external memory. The space overhead of SAM is independent of memory size, which we see by the flat line. When the memory contains 64,000 words the NTM consumes 29 GiB whereas SAM consumes only 7.8 MiB, a compression ratio of 3700. We measured the forward and backward times of the SAM architecture versus the dense DAM variant and the original NTM (details of setup in Supplementary E). SAM is over 100 times faster than the NTM when the memory contains one million words and an exact linear index is used, and 1600 times faster with the k-d tree (Figure 1a). With an ANN the model runs in sublinear time with respect to the memory size. SAM's memory usage per time step is independent of the number of memory words (Figure 1b), which empirically verifies the O(1) space claim from Supplementary A. For 64K memory words SAM uses 53 MiB of physical memory to initialize the network and 7.8 MiB to run a 100-step forward and backward pass, compared with the NTM which consumes 29 GiB. 4.2 Learning with sparse memory access We have established that SAM reaps a huge computational and memory advantage over previous models, but can we really learn with SAM's sparse approximations? We investigated the learning cost of inducing sparsity, and the effect of placing an approximate nearest neighbor index within the network, by comparing SAM with its dense variant DAM and some established models, the NTM and the LSTM. We trained each model on three of the original NTM tasks [7]: 1. Copy: copy a random input sequence of length 1–20. 2. Associative Recall: given 3–6 random (key, value) pairs, and subsequently a cue key, return the associated value. 3.
Priority Sort: Given 20 random keys and priority values, return the top 16 keys in descending order of priority. Figure 2: Training curves for sparse (SAM) and dense (DAM, NTM) models. SAM trains comparably for the Copy task, and reaches asymptotic error significantly faster for Associative Recall and Priority Sort. Light colors indicate one standard deviation over 30 random seeds. We chose these tasks because the NTM is known to perform well on them. Figure 2 shows that sparse models are able to learn with comparable efficiency to the dense models and, surprisingly, learn more effectively for some tasks, notably priority sort and associative recall. This shows that sparse reads and writes can actually benefit early-stage learning in some cases. Full hyperparameter details are in Supplementary C. 4.3 Scaling with a curriculum The computational efficiency of SAM opens up the possibility of training on tasks that require storing a large amount of information over long sequences. Here we show this is possible in practice, by scaling tasks to a large scale via an exponentially increasing curriculum. We parametrized three of the tasks described in Section 4.2 (associative recall, copy, and priority sort) with a progressively increasing difficulty level, which characterises the length of the sequence and the number of entries to store in memory. For example, the level specifies the input sequence length for the copy task. We exponentially increased the maximum level h when the network begins to learn the fundamental algorithm.
Since the time taken for a forward and backward pass scales as O(T) with the sequence length T, following a standard linearly increasing curriculum could potentially take O(T^2) time, if the same amount of training was required at each step of the curriculum. Specifically, h was doubled whenever the average training loss dropped below a threshold for a number of episodes. The level was sampled for each minibatch from the uniform distribution over integers U(0, h). We compared the dense models, NTM and DAM, with both SAM with an exact nearest neighbor index (SAM linear) and with locality sensitive hashing (SAM ANN). The dense models contained 64 memory words, while the sparse models had 2 × 10^6 words. These sizes were chosen to ensure all models use approximately the same amount of physical memory when trained over 100 steps. For all tasks, SAM was able to advance further than the other models, and in the associative recall task, SAM was able to advance through the curriculum to sequences greater than 4000 (Figure 3). Note that we did not use truncated backpropagation, so this involved BPTT for over 4000 steps with a memory size in the millions of words. To investigate whether SAM was able to learn algorithmic solutions to tasks, we investigated its ability to generalize to sequences that far exceeded those observed during training. Namely, we trained SAM on the associative recall task up to sequences of length 10,000, and found it was then able to generalize to sequences of length 200,000 (Supplementary Figure 8). 4.4 Question answering on the Babi tasks [20] introduced toy tasks they considered a prerequisite for agents which can reason and understand natural language. They are synthetically generated language tasks with a vocabulary of about 150 words that test various aspects of simple reasoning, such as deduction, induction and coreferencing.
Figure 3: Curriculum training curves for sparse and dense models on (a) Associative Recall, (b) Copy, and (c) Priority Sort (difficulty level vs. episode number). Difficulty level indicates the task difficulty (e.g. the length of the sequence for copy). We see SAM train (and backpropagate over) episodes with thousands of steps, and tasks which require thousands of words to be stored to memory. Each model is averaged across 5 replicas of identical hyper-parameters (light lines indicate individual runs).

We tested the models (including the Sparse Differentiable Neural Computer described in Supplementary D) on this task. The full results and training details are described in Supplementary G. The MANNs, except the NTM, are able to learn solutions comparable to the previous best results, failing at only 2 of the tasks. The SDNC manages to solve all but 1 of the tasks, the best reported result on bAbI that we are aware of. Notably, the best prior results were obtained by supervising memory retrieval: during training the model is provided annotations which indicate which memories should be used to answer a query. More directly comparable prior work with end-to-end memory networks, which did not use supervision [17], fails at 6 of the tasks. Both the sparse and dense models perform comparably on this task, again indicating that the sparse approximations do not impair learning. We believe the NTM may perform poorly because it lacks a mechanism for allocating memory effectively.

4.5 Learning on real world data

Finally, we demonstrate that the model is capable of learning on a non-synthetic dataset.
Omniglot [12] is a dataset of 1623 characters taken from 50 different alphabets, with 20 examples of each character. This dataset is used to test rapid, or one-shot, learning, since there are few examples of each character but many different character classes. Following [16], we generate episodes where a subset of characters are randomly selected from the dataset, rotated and stretched, and assigned a randomly chosen label. At each time step an example of one of the characters is presented, along with the correct label of the preceding character. Each character is presented 10 times in an episode (but each presentation may be any one of the 20 examples of the character). In order to succeed at the task the model must learn to rapidly associate a novel character with the correct label, such that it can correctly classify subsequent examples of the same character class. Again, we used an exponential curriculum, doubling the number of additional characters provided to the model whenever the cost dropped below a threshold. After training all MANNs for the same length of time, a validation task with 500 characters was used to select the best run, which was then evaluated on a test set containing entirely novel characters at various sequence lengths (Figure 4). All of the MANNs were able to perform much better than chance, even on sequences ≈4× longer than those seen during training. SAM outperformed the other models, presumably due to its much larger memory capacity. Previous results on the Omniglot curriculum task [16] are not identical, since we used 1-hot labels throughout and our training curriculum scaled to longer sequences, but our results with the dense models are comparable (≈0.4 errors with 100 characters), while SAM is significantly better (<0.2 errors with 100 characters).

Figure 4: Test errors for the Omniglot task (described in the text) for the best runs (as chosen by the validation set).
The characters used in the test set were not used in validation or training. All of the MANNs were able to perform much better than chance with ≈500 characters (sequence lengths of ≈5000), even though they were trained on sequences of at most ≈130 (chance is 0.002 for 500 characters). This indicates they are learning generalizable solutions to the task. SAM is able to outperform the other approaches, presumably because it can utilize a much larger memory.

5 Discussion

Scaling memory systems is a pressing research direction due to the potential for compelling applications with large amounts of memory. We have demonstrated that neural networks with large memories can be trained via a sparse read and write scheme that makes use of efficient data structures within the network, obtaining significant speedups during training. Although we have focused on a specific MANN (SAM), which is closely related to the NTM, the approach taken here is general and can be applied to many differentiable memory architectures, such as Memory Networks [21]. It should be noted that there are multiple possible routes toward scalable memory architectures. For example, prior work aimed at scaling Neural Turing Machines [22] used reinforcement learning to train a discrete addressing policy. This approach also touches only a sparse set of memories at each time step, but relies on higher-variance estimates of the gradient during optimization. Though we can only guess at what class of memory models will become a staple in machine learning systems of the future, we argue in Supplementary A that they will be no more efficient than SAM in space and time complexity if they address memories based on content. We have experimented with randomized k-d trees and LSH within the network to reduce the forward pass of training to sublinear time, but there may be room for improvement here. K-d trees were not designed specifically for fully online scenarios, and can become imbalanced during training.
Recent work on tree ensemble models, such as Mondrian forests [13], shows promising results in maintaining balanced hierarchical set coverage in the online setting. An alternative approach which may be well suited is LSH forests [3], which adaptively modify the number of hashes used. It would be an interesting empirical investigation to more fully assess different ANN approaches in the challenging context of training a neural network. Humans are able to retain a large, task-dependent set of memories obtained in one pass with a surprising amount of fidelity [4]. Here we have demonstrated architectures that may one day compete with humans at these kinds of tasks.

Acknowledgements

We thank Vyacheslav Egorov, Edward Grefenstette, Malcolm Reynolds, Fumin Wang and Yori Zwols for their assistance, and the Google DeepMind family for helpful discussions and encouragement.

References

[1] Sunil Arya, David M. Mount, Nathan S. Netanyahu, Ruth Silverman, and Angela Y. Wu. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. J. ACM, 45(6):891–923, November 1998.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] Mayank Bawa, Tyson Condie, and Prasanna Ganesan. LSH forest: Self-tuning indexes for similarity search. In Proceedings of the 14th International Conference on World Wide Web, pages 651–660. ACM, 2005.
[4] Timothy F. Brady, Talia Konkle, George A. Alvarez, and Aude Oliva. Visual long-term memory has a massive storage capacity for object details. Proceedings of the National Academy of Sciences, 105(38):14325–14329, 2008.
[5] Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011.
[6] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks.
In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645–6649. IEEE, 2013.
[7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[8] Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. Hybrid computing using a neural network with dynamic external memory. Nature, 2016.
[9] Gaël Guennebaud, Benoît Jacob, Philip Avery, Abraham Bachrach, Sebastien Barthelemy, et al. Eigen v3, 2010.
[10] Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. arXiv preprint arXiv:1511.02301, 2015.
[11] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[12] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
[13] Balaji Lakshminarayanan, Daniel M. Roy, and Yee Whye Teh. Mondrian forests: Efficient online random forests. In Advances in Neural Information Processing Systems, pages 3140–3148, 2014.
[14] Rajeev Motwani, Assaf Naor, and Rina Panigrahy. Lower bounds on locality sensitive hashing. SIAM Journal on Discrete Mathematics, 21(4):930–935, 2007.
[15] Marius Muja and David G. Lowe. Scalable nearest neighbor algorithms for high dimensional data. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 36, 2014.
[16] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, 2016.
[17] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439, 2015.
[18] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc., 2014.
[19] Tijmen Tieleman and Geoffrey Hinton. Lecture 6.5-RMSProp: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4:2, 2012.
[20] Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
[21] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
[22] Wojciech Zaremba and Ilya Sutskever. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521, 2015.
Neurally-Guided Procedural Models: Amortized Inference for Procedural Graphics Programs using Neural Networks

Daniel Ritchie, Anna Thomas, Pat Hanrahan, Noah D. Goodman
Stanford University

Abstract

Probabilistic inference algorithms such as Sequential Monte Carlo (SMC) provide powerful tools for constraining procedural models in computer graphics, but they require many samples to produce desirable results. In this paper, we show how to create procedural models which learn how to satisfy constraints. We augment procedural models with neural networks which control how the model makes random choices based on the output it has generated thus far. We call such models neurally-guided procedural models. As a pre-computation, we train these models to maximize the likelihood of example outputs generated via SMC. They are then used as efficient SMC importance samplers, generating high-quality results with very few samples. We evaluate our method on L-system-like models with image-based constraints. Given a desired quality threshold, neurally-guided models can generate satisfactory results up to 10x faster than unguided models.

1 Introduction

Procedural modeling, or the use of randomized procedures to generate computer graphics, is a powerful technique for creating visual content. It facilitates efficient content creation at massive scale, such as procedural cities [13]. It can generate fine detail that would require painstaking effort to create by hand, such as decorative floral patterns [24]. It can even generate surprising or unexpected results, helping users to explore large or unintuitive design spaces [19]. Many applications demand control over procedural models: making their outputs resemble examples [22, 2], fit a target shape [17, 21, 20], or respect functional constraints such as physical stability [19].
Bayesian inference provides a general-purpose control framework: the procedural model specifies a generative prior, and the constraints are encoded as a likelihood function. Posterior samples can then be drawn via Markov Chain Monte Carlo (MCMC) or Sequential Monte Carlo (SMC). Unfortunately, these algorithms often require many samples to converge to high-quality results, limiting their usability for interactive applications. Sampling is challenging because the constraint likelihood implicitly defines complex (often non-local) dependencies not present in the prior. Can we instead make these dependencies explicit by encoding them in a model's generative logic? Such an explicit model could simply be run forward to generate high-scoring results. In this paper, we propose an amortized inference method for learning an approximation to this perfect explicit model. Taking inspiration from recent work in amortized variational inference, we augment the procedural model with neural networks that control how the model makes random choices based on the partial output it has generated. We call such a model a neurally-guided procedural model. We train these models by maximizing the likelihood of example outputs generated via SMC using a large number of samples, as an offline pre-process. Once trained, they can be used as efficient SMC importance samplers. By investing time up-front generating and training on many examples, our system effectively 'pre-compiles' an efficient sampler that can generate further results much faster.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

For a given likelihood threshold, neurally-guided models can generate results which reliably achieve that threshold using 10-20x fewer particles and up to 10x less compute time than an unguided model. In this paper, we focus on accumulative procedural models that repeatedly add new geometry to a structure.
For our purposes, a procedural model is accumulative if, while executing, it provides a 'current position' p from which geometry generation will continue. Many popular growth models, such as L-systems, are accumulative [16]. We focus on 2D models (p ∈ R²) which generate images, though the techniques we present extend naturally to 3D.

2 Related Work

Guided Procedural Modeling: Procedural models can be guided using non-probabilistic methods. Open L-systems can query their spatial position and orientation, allowing them to prune their growth to an implicit surface [17, 14]. Recent follow-up work supports larger models by decomposing them into separate regions with limited interaction [1]. These methods were specifically designed to fit procedural models to shapes. In contrast, our method learns how to guide procedural models and is generally applicable to constraints expressible as likelihood functions.

Generatively Capturing Dependencies in Procedural Models: A recent system by Dang et al. modifies a procedural grammar so that its output distribution reflects user preference scores given to example outputs [2]. Like us, they use generative logic to capture dependencies induced by a likelihood function (in their case, a Gaussian process regression over user-provided examples). Their method splits non-terminal symbols in the original grammar, giving it more combinatorial degrees of freedom. This works well for discrete dependencies, whereas our method is better suited for continuous constraint functions, such as shape-fitting.

Neural Amortized Inference: Our method is also inspired by recent work in amortized variational inference using neural variational families [11, 18, 8], but it uses a different learning objective. Prior work has also aimed to train efficient neural SMC importance samplers [5, 15].
These efforts focused on time series models and Bayesian networks, respectively; we focus on a class of structured procedural models, the characteristics of which permit different design decisions:

• The likelihood of a partially-generated output can be evaluated at any time and is a good heuristic for the likelihood of the completed output. This is different from e.g. time series models, where the likelihood at each step considers a previously-unseen data point.
• They make many local random choices but have no global/top-level parameters.
• They generate images, which naturally support coarse-to-fine feature extraction.

These properties informed the design of our neurally-guided model architecture.

3 Approach

Consider a simple procedural modeling program chain that recursively generates a random sequence of linear segments, constrained to match a target image. Figure 1a shows the text of this program, along with samples generated from it (drawn in black) against several target images (drawn in gray). Chains generated by running the program forward do not match the targets, since forward sampling is oblivious to the constraint. Instead, we can generate constrained samples using Sequential Monte Carlo (SMC) [20]. This results in final chains that more closely match the target images. However, the algorithm requires many particles—and therefore significant computation—to produce acceptable results. Figure 1a shows that N = 10 particles is not sufficient. In an ideal world, we would not need costly inference algorithms to generate constraint-satisfying results. Instead, we would have access to an 'oracle' program, chain_perfect, that perfectly fills in the target image when run forward. While such an oracle can be difficult or impossible to write by hand, it is possible to learn a program chain_neural that comes close. Figure 1b shows our approach. For each random choice in the program text (e.g.
gaussian, flip), we replace the parameters of that choice with the output of a neural network. This neural network's inputs (abstracted as "...") include the target image as well as the partial output image the program has generated thus far. The network thus shapes the distribution over possible choices, guiding the program's future output based on the target image and its past output. These neural nets affect both continuous choices (e.g. angles) as well as control flow decisions (e.g. recursion): they dictate where the chain goes next, as well as whether it keeps going at all. For continuous choices such as gaussian, we also modify the program to sample from a mixture distribution. This helps the program handle situations where the constraints permit multiple distinct choices (e.g. in which direction to start the chain for the circle-shaped target image in Figure 1). Once trained, chain_neural generates constraint-satisfying results more efficiently than its unguided counterpart.

Figure 1: Turning a linear chain model into a neurally-guided model. (a) The original program:

function chain(pos, ang) {
  var newang = ang + gaussian(0, PI/8);
  var newpos = pos + polarToRect(LENGTH, newang);
  genSegment(pos, newpos);
  if (flip(0.5)) chain(newpos, newang);
}

When outputs (shown in black) are constrained to match a target image (shown in gray), SMC requires many particles to achieve good results. (b) The neurally-guided model, where random choice parameters are computed via neural networks:

function chain_neural(pos, ang) {
  var newang = ang + gaussMixture(nn1(...));
  var newpos = pos + polarToRect(LENGTH, newang);
  genSegment(pos, newpos);
  if (flip(nn2(...))) chain_neural(newpos, newang);
}

Once trained, forward sampling from this model adheres closely to the target image, and SMC with only 10 particles consistently produces good results.
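To make the transformation concrete, here is a hypothetical, framework-free Python sketch of one neurally-guided continuous choice: a Gaussian mixture whose component weights, means, and log-standard-deviations come from a guide network (reduced here to a single linear layer). All names (`guided_gaussian_mixture`, `features`, `W`, `b`) are ours; the paper's actual implementation is in WebPPL with the adnn library.

```python
# Illustrative sketch (not the authors' WebPPL implementation) of a random
# choice whose fixed parameters are replaced by network outputs conditioned
# on features of the partial output. W and b are placeholder learned weights.
import math
import random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def guided_gaussian_mixture(features, W, b, k=4):
    """Sample from a k-component Gaussian mixture whose parameters
    (logits, means, log-sigmas) are a linear function of the input
    features -- standing in for the paper's MLP guide."""
    # One linear layer producing 3*k outputs: k logits, k means, k log-sigmas.
    out = [sum(wi * fi for wi, fi in zip(row, features)) + bi
           for row, bi in zip(W, b)]
    logits, means, log_sigmas = out[:k], out[k:2 * k], out[2 * k:3 * k]
    weights = softmax(logits)
    # Pick a mixture component, then sample from its Gaussian.
    j = random.choices(range(k), weights=weights)[0]
    return random.gauss(means[j], math.exp(log_sigmas[j]))
```

The mixture is what lets the guide express multimodal uncertainty, e.g. the two plausible starting directions for a circle-shaped target.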
Figure 1b shows example outputs: forward samples adhere closely to the target images, and SMC with 10 particles is sufficient to produce chains that fully fill the target shape. The next section describes the process of building and training such neurally-guided procedural models.

4 Method

For our purposes, a procedural model is a generative probabilistic model of the following form:

P_M(x) = ∏_{i=1}^{|x|} p_i(x_i; Φ_i(x_1, ..., x_{i−1}))

Here, x is the vector of random choices the procedural modeling program makes as it executes. The p_i's are local probability distributions from which each successive random choice is drawn. Each p_i is parameterized by a set of parameters (e.g. mean and variance, for a Gaussian distribution), which are determined by some function Φ_i of the previous random choices x_1, ..., x_{i−1}. A constrained procedural model also includes an unnormalized likelihood function ℓ(x, c) that measures how well an output of the model satisfies some constraint c:

P_CM(x|c) = (1/Z) · P_M(x) · ℓ(x, c)

In the chain example, c is the target image, with ℓ(·, c) measuring similarity to that image. A neurally-guided procedural model modifies a procedural model by replacing each parameter function Φ_i with a neural network:

P_GM(x|c; θ) = ∏_{i=1}^{|x|} p̃_i(x_i; NN_i(I(x_1, ..., x_{i−1}), c; θ))

where I(x_1, ..., x_{i−1}) renders the model output after the first i−1 random choices, and θ are the network parameters. p̃_i is a mixture distribution if random choice i is continuous; otherwise, p̃_i = p_i.

Figure 2: Network architecture for neurally-guided procedural models. The outputs are the parameters for a random choice probability distribution.
The inputs come from three sources: Local State Features are the arguments to the function in which the random choice occurs; Partial Output Features come from 3x3 pixel windows of the partial image the model has generated, extracted at multiple resolutions around the procedural model's current position; Target Image Features are analogous windows extracted from the target image, if the constraint requires one.

To train a neurally-guided procedural model, we seek parameters θ such that P_GM is as close as possible to P_CM. This goal can be formalized as minimizing the conditional KL divergence D_KL(P_CM || P_GM) (see the supplemental materials for derivation):

min_θ D_KL(P_CM || P_GM) ≈ max_θ (1/N) ∑_{s=1}^{N} log P_GM(x_s | c_s; θ),   x_s ∼ P_CM(x), c_s ∼ P(c)   (1)

where the x_s are example outputs generated using SMC, given a c_s drawn from some distribution P(c) over constraints, e.g. uniform over a set of training images. This is simply maximizing the likelihood of the x_s under the neurally-guided model. Training then proceeds via stochastic gradient ascent using the gradient

∇ log P_GM(x|c; θ) = ∑_{i=1}^{|x|} ∇ log p̃_i(x_i; NN_i(I(x_1, ..., x_{i−1}), c; θ))   (2)

The trained P_GM(x|c; θ) can then be used as an importance distribution for SMC. It is worth noting that using the other direction of KL divergence, D_KL(P_GM || P_CM), leads to the marginal likelihood lower bound objective used in many black-box variational inference algorithms [23, 6, 11]. This objective requires training samples from P_GM, which are much less expensive to generate than samples from P_CM. When used for procedural modeling, however, it leads to models whose outputs lack diversity, making them unsuitable for generating visually-varied content. This behavior is due to a well-known property of the objective: minimizing it produces approximating distributions that are overly compact, i.e. concentrating their probability mass in a smaller volume of the state space than the true distribution being approximated [10].
Our objective is better suited for training proposal distributions for importance sampling methods (such as SMC), where the target density must be absolutely continuous with respect to the proposal density [3].

4.1 Neural Network Architecture

Each network NN_i should predict a distribution over choice i that is as close as possible to its true posterior distribution. More complex networks capture more dependencies and increase accuracy but require more computation time to execute. We can also increase accuracy at the cost of computation time by running SMC with more particles. If our networks are too complex (i.e. the accuracy provided per unit computation time is too low), then the neurally-guided model P_GM will be outperformed by simply using more particles with the original model P_M. For neural guidance to provide speedups, we require networks that pack as much predictive power into as simple an architecture as possible.

Figure 3: Example images from our datasets (Scribbles, Glyphs, PhyloPic).

Figure 2 shows our network architecture: a multilayer perceptron with n_f inputs, one hidden layer of size n_f/2 with a tanh nonlinearity, and n_p outputs, where n_p is the number of parameters the random choice expects. We found that a simpler linear model did not perform as well per unit time. Since some parameters are bounded (e.g. Gaussian variance must be positive), each output is remapped via an appropriate bounding transform (e.g. e^x for non-negative parameters). The inputs come from several sources, each providing the network with decision-critical information:

Local State Features: The model's current position p, the current orientation of any local reference frame, etc. We access this data via the arguments of the function call in which the random choice occurs, extracting all n_a scalar arguments and normalizing them to [−1, 1].

Partial Output Features: Next, the network needs information about the output the model has already generated.
The raw pixels of the current partial output image I(·) provide too much data; we need to summarize the relevant image contents. We extract 3x3 windows of pixels around the model's current position p at four different resolution levels, with each level computed by downsampling the previous level via a 2x2 box filter. This results in 36c features for a c-channel image. This architecture is similar to the foveated 'glimpses' used in visual attention models [12]. Convolutional networks might also be used here, but this approach provided better performance per unit of computation time.

Target Image Features: Finally, if the constraint being enforced involves a target image, as in the chain example of Section 3, we also extract multi-resolution windows from this image. These additional 36c features allow the network to make appropriate decisions for matching the image.

4.2 Training

We train with stochastic gradient ascent (see Equation 2). We use the Adam algorithm [7] with α = β = 0.75, step size 0.01, and minibatch size one. We terminate training after 20000 iterations.

5 Experiments

In this section, we evaluate how well neurally-guided procedural models capture image-based constraints. We implemented our prototype system in the WebPPL probabilistic programming language [4] using the adnn neural network library.¹ All timing data was collected on an Intel Core i7-3840QM machine with 16GB RAM running OSX 10.10.5.

5.1 Image Datasets

In experiments which require target images, we use the following image collections:

• Scribbles: 49 binary mask images drawn by hand with the brush tool in Photoshop. Includes shapes with thick and thin regions, high and low curvature, and self-intersections.
• Glyphs: 197 glyphs from the FF Tartine Script Bold typeface (all glyphs with only one foreground component and at least 500 foreground pixels when rendered at 129x97).
• PhyloPic: 35 images from the PhyloPic silhouette database.²

We augment the dataset with a horizontally-mirrored copy of each image, and we annotate each image with a starting point and direction from which to initialize the execution of a procedural model. Figure 3 shows some representative images from each collection.

¹ https://github.com/dritchie/adnn
² http://phylopic.org

Figure 4: Constraining a vine-growth procedural model to match a target image. N is the number of SMC particles used. Reference shows an example result after running SMC on the unguided model with a large number of particles (N = 600, 30.26 s). Neurally-guided models generate results of this quality in a couple of seconds (Guided: N = 10, 1.5 s); the unguided model struggles given the same number of particles (N = 10, 0.1 s) or the same computation time (N = 45, 1.58 s).

5.2 Shape Matching

We first train neurally-guided procedural models to match 2D shapes specified as binary mask images. If D is the spatial domain of the image, then the likelihood function for this constraint is

ℓ_shape(x, c) = N( (sim(I(x), c) − sim(0, c)) / (1 − sim(0, c)), 1, σ_shape )   (3)

sim(I_1, I_2) = ( ∑_{p∈D} w(p) · 1{I_1(p) = I_2(p)} ) / ( ∑_{p∈D} w(p) )

w(p) = 1 if I_2(p) = 0;  1 if ||∇I_2(p)|| = 1;  w_filled if ||∇I_2(p)|| = 0

where ∇I(p) is a binary edge mask computed using the Sobel operator. This function encourages the output image I(x) to be similar to the target mask c, where similarity is normalized against c's similarity to an empty image 0. Each pixel p's contribution is weighted by w(p), determined by whether the target mask is empty, filled, or has an edge at that pixel. We use w_filled = 2/3, so empty and edge pixels are worth 1.5 times more than filled pixels. This encourages matching of perceptually-important contours and negative space. σ_shape = 0.02 in all experiments.
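As a sanity check of Equation 3, here is a small numpy sketch of the weighted similarity and its normalized form. This is an approximation under stated assumptions: we stand in for the Sobel edge mask with a numerical gradient test (`np.gradient`), and all function names are ours, not the paper's.

```python
# Sketch of the weighted similarity from Equation 3: empty and edge pixels
# of the target weigh 1, filled non-edge pixels weigh w_filled = 2/3.
# The paper uses a Sobel edge mask; np.gradient is a stand-in here.
import numpy as np

def weighted_similarity(img, target, w_filled=2.0 / 3.0):
    """sim(I1, I2): weighted fraction of pixels where binary images agree."""
    gy, gx = np.gradient(target.astype(float))
    edge = np.hypot(gx, gy) > 0                 # stand-in for the Sobel mask
    w = np.where(target == 0, 1.0,              # empty target pixels
                 np.where(edge, 1.0, w_filled))  # edge vs. filled interior
    return float(np.sum(w * (img == target)) / np.sum(w))

def normalized_similarity(img, target):
    """Argument to the Gaussian in Eq. 3: similarity normalized against
    the target's similarity to an empty image."""
    s0 = weighted_similarity(np.zeros_like(target), target)
    return (weighted_similarity(img, target) - s0) / (1.0 - s0)
```

By construction, a perfect match scores 1 and an empty output scores 0, which is what makes a Gaussian centered at 1 a sensible likelihood.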
We wrote a WebPPL program which recursively generates vines with leaves and flowers and then trained a neurally-guided version of this program to capture the above likelihood. The model was trained on 10000 example outputs, each generated using SMC with 600 particles. Target images were drawn uniformly at random from the Scribbles dataset. Each example took on average 17 seconds to generate; parallelized across four CPU cores, the entire set of examples took approximately 12 hours to generate. Training took 55 minutes in our single-threaded implementation.

Figure 4 shows some outputs from this program. 10-particle SMC produces recognizable results with the neurally-guided model (Guided) but not with the unguided model (Unguided (Equal N)). A more equitable comparison is to give the unguided model the same amount of wall-clock time as the guided model. While the resulting outputs fare better, the target shape is still obscured (Unguided (Equal Time)). We find that the unguided model needs ∼200 particles to reliably match the performance of the guided model. Additional results are shown in the supplemental materials.

Figure 5 shows a quantitative comparison between five different models on the shape matching task:

• Unguided: The original, unguided procedural model.
• Constant Params: The neural network for each random choice is a vector of constant parameters (i.e. a partial mean field approximation [23]).
• + Local State Features: Adding the local state features described in Section 4.1.
• + Target Image Features: Adding the target image features described in Section 4.1.
• All Features: Adding the partial output features described in Section 4.1.

We test each model on the Glyph dataset and report the median normalized similarity-to-target achieved (i.e. argument one to the Gaussian in Equation 3), plotted in Figure 5a.
Figure 5: Shape matching performance comparison. "Similarity" is median normalized similarity to target, averaged over all targets in the test dataset. Bracketing lines show 95% confidence bounds. (a) Performance as the number of SMC particles increases. The neurally-guided model achieves higher average similarity as more features are added. (b) Computation time required as the desired similarity increases. The vertical gap between the two curves indicates speedup (which can be up to 10x).

Figure 6: (a) Using four-component mixtures for continuous random choices boosts performance. (b) The effect of training set size on performance (at 10 SMC particles), plotted on a logarithmic scale. Average similarity-to-target levels off at ∼1000 examples.

The performance of the guided model improves with the addition of more features; at 10 particles, the full model is already approaching an asymptote. Figure 5b shows the wall-clock time required to achieve increasing similarity thresholds. The vertical gap between the two curves shows the speedup given by neural guidance, which can be as high as 10x. For example, the + Local State Features model reaches similarity 0.35 about 5.5 times faster than the Unguided model, the + Target Image Features model is about 1.5 times faster still, and the All Features model is about 1.25 times faster than that.
Note that we trained on the Scribbles dataset but tested on the Glyph dataset; these results suggest that our models can generalize to qualitatively-different, previously-unseen images.

Figure 6a shows the benefit of using mixture guides for continuous random choices. The experimental setup is the same as in Figure 5. We compare a model which uses four-component mixtures with a no-mixture model. Using mixtures boosts performance, as we alluded to in Section 3: at shape intersections, such as the crossing of the letter 't,' the model benefits from multimodal uncertainty. Using more than four mixture components did not improve performance on this test dataset.

We also investigate how the number of training examples affects performance. Figure 6b plots the median similarity at 10 particles as training set size increases. Performance increases rapidly for the first few hundred examples before leveling off, suggesting that ∼1000 sample traces is sufficient (for our particular choice of training set, at least). This may seem surprising, as many published neurally-based learning systems require many thousands to millions of training examples. In our case, each training example contains hundreds to thousands of random choices, each of which provides a learning signal; in this way, the training data is "bigger" than it appears. Our implementation generates 1000 samples in just over an hour using four CPU cores.

Figure 7: Constraining the vine-growth program to generate circuit-like patterns (panels: Reference, N = 600; Guided, N = 15; Unguided (Equal N); Unguided (Equal Time)). Reference outputs took around ∼70 seconds to generate; outputs from the guided model took ∼3.5 seconds.

5.3 Stylized "Circuit" Design

We next train neurally-guided procedural models to capture a likelihood that does not use a target image: constraining the vines program to resemble a stylized circuit design.
To achieve the dense packing of long wire traces that is one of the most striking visual characteristics of circuit boards, we encourage a percentage τ of the image to be filled (τ = 0.5 in our results) and to have a dense, high-magnitude gradient field, as this tends to create many long rectilinear or diagonal edges:

\ell_{\text{circ}}(x) = N\big(\, \text{edge}(I(x)) \cdot (1 - \eta(\text{fill}(I(x)), \tau)),\; 1,\; \sigma_{\text{circ}} \big)    (4)

\text{edge}(I) = \frac{1}{|D|} \sum_{p \in D} \lVert \nabla I(p) \rVert, \qquad \text{fill}(I) = \frac{1}{|D|} \sum_{p \in D} I(p),

where η(x, x̄) is the relative error of x from x̄ and σ_circ = 0.01. We also penalize geometry outside the bounds of the image, encouraging the program to fill in a rectangular "die"-like region. We train on 2000 examples generated using SMC with 600 particles. Example generation took 10 hours and training took under two hours. Figure 7 shows outputs from this program. As with shape matching, the neurally-guided model generates high-scoring results significantly faster than the unguided model.

6 Conclusion and Future Work

This paper introduced neurally-guided procedural models: constrained procedural models that use neural networks to capture constraint-induced dependencies. We showed how to train guides for accumulative models with image-based constraints using a simple-yet-powerful network architecture. Experiments demonstrated that neurally-guided models can generate high-quality results significantly faster than unguided models.

Accumulative procedural models provide a current position p, which is not true of other generative paradigms (e.g. texture generation, which generates content across its entire spatial domain). In such settings, the guide might instead learn what parts of the current partial output are relevant to each random choice using an attention process [12]. Using neural networks to predict random choice parameters is just one possible program transformation for generatively capturing constraints. Other transformations, such as control flow changes, may be necessary to capture more types of constraints.
A first step in this direction would be to combine our approach with the grammar-splitting technique of Dang et al. [2]. Methods like ours could also accelerate inference for other applications of procedural models, e.g. as priors in analysis-by-synthesis vision systems [9]. A robot that perceives a room through an onboard camera, detects chairs, and fits a procedural model to the detected chairs could learn importance distributions for each step of the chair-generating process (e.g. the number of parts, their size, arrangement, etc.). Future work is needed to determine appropriate neural guides for such domains.

References

[1] Bedřich Beneš, Ondřej Šťava, Radomír Měch, and Gavin Miller. Guided Procedural Modeling. In Eurographics 2011.
[2] Minh Dang, Stefan Lienhard, Duygu Ceylan, Boris Neubert, Peter Wonka, and Mark Pauly. Interactive Design of Probability Density Functions for Shape Grammars. In SIGGRAPH Asia 2015.
[3] Charles Geyer. Importance Sampling, Simulated Tempering, and Umbrella Sampling. In S. Brooks, A. Gelman, G. Jones, and X.L. Meng, editors, Handbook of Markov Chain Monte Carlo. CRC Press, 2011.
[4] Noah D Goodman and Andreas Stuhlmüller. The Design and Implementation of Probabilistic Programming Languages. http://dippl.org, 2014. Accessed: 2015-12-23.
[5] Shixiang Gu, Zoubin Ghahramani, and Richard E. Turner. Neural Adaptive Sequential Monte Carlo. In NIPS 2015.
[6] R. Ranganath, S. Gerrish, and D. Blei. Black Box Variational Inference. In AISTATS 2014.
[7] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In ICLR 2015.
[8] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. In ICLR 2014.
[9] T. Kulkarni, P. Kohli, J. B. Tenenbaum, and V. Mansinghka. Picture: An Imperative Probabilistic Programming Language for Scene Perception. In CVPR 2015.
[10] David J. C. MacKay. Information Theory, Inference & Learning Algorithms. Cambridge University Press, 2002.
[11] Andriy Mnih and Karol Gregor. Neural Variational Inference and Learning in Belief Networks. In ICML 2014.
[12] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent Models of Visual Attention. In NIPS 2014.
[13] Pascal Müller, Peter Wonka, Simon Haegler, Andreas Ulmer, and Luc Van Gool. Procedural Modeling of Buildings. In SIGGRAPH 2006.
[14] Radomír Měch and Przemyslaw Prusinkiewicz. Visual Models of Plants Interacting with Their Environment. In SIGGRAPH 1996.
[15] B. Paige and F. Wood. Inference Networks for Sequential Monte Carlo in Graphical Models. In ICML 2016.
[16] P. Prusinkiewicz and Aristid Lindenmayer. The Algorithmic Beauty of Plants. Springer-Verlag New York, Inc., 1990.
[17] Przemyslaw Prusinkiewicz, Mark James, and Radomír Měch. Synthetic Topiary. In SIGGRAPH 1994.
[18] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In ICML 2014.
[19] Daniel Ritchie, Sharon Lin, Noah D. Goodman, and Pat Hanrahan. Generating Design Suggestions under Tight Constraints with Gradient-based Probabilistic Programming. In Eurographics 2015.
[20] Daniel Ritchie, Ben Mildenhall, Noah D. Goodman, and Pat Hanrahan. Controlling Procedural Modeling Programs with Stochastically-Ordered Sequential Monte Carlo. In SIGGRAPH 2015.
[21] Jerry O. Talton, Yu Lou, Steve Lesser, Jared Duke, Radomír Měch, and Vladlen Koltun. Metropolis Procedural Modeling. ACM Trans. Graph., 30(2), 2011.
[22] O. Šťava, S. Pirk, J. Kratt, B. Chen, R. Měch, O. Deussen, and B. Beneš. Inverse Procedural Modelling of Trees. Computer Graphics Forum, 33(6), 2014.
[23] David Wingate and Theophane Weber. Automated Variational Inference in Probabilistic Programming. In NIPS 2012 Workshop on Probabilistic Programming.
[24] Michael T. Wong, Douglas E. Zongker, and David H. Salesin. Computer-generated Floral Ornament. In SIGGRAPH 1998.
Reconstructing Parameters of Spreading Models from Partial Observations

Andrey Y. Lokhov
Center for Nonlinear Studies and Theoretical Division T-4
Los Alamos National Laboratory, Los Alamos, NM 87545, USA
lokhov@lanl.gov

Abstract

Spreading processes are often modelled as a stochastic dynamics occurring on top of a given network, with edge weights corresponding to the transmission probabilities. Knowledge of veracious transmission probabilities is essential for prediction, optimization, and control of diffusion dynamics. Unfortunately, in most cases the transmission rates are unknown and need to be reconstructed from the spreading data. Moreover, in realistic settings it is impossible to monitor the state of each node at every time, and thus the data is highly incomplete. We introduce an efficient dynamic message-passing algorithm, which is able to reconstruct parameters of the spreading model given only partial information on the activation times of nodes in the network. The method is generalizable to a large class of dynamic models, as well as to the case of temporal graphs.

1 Introduction

Knowledge of the underlying parameters of the spreading model is crucial for understanding the global properties of the dynamics and for the development of effective control strategies for an optimal dissemination or mitigation of diffusion [1, 2]. However, in many realistic settings effective transmission probabilities are not known a priori and need to be recovered from a limited number of realizations of the process. Examples of such situations include the spreading of a disease [3], the propagation of information and opinions in a social network [4], correlated infrastructure failures [5], or activation cascades in biological and neural networks [6]: the precise model and parameters, as well as the propagation paths, are often unknown, and one is left at most with several observed diffusion traces.
It can be argued that for many interesting systems, even the functional form of the dynamic model is uncertain. Nevertheless, the reconstruction problem still makes sense in this case: the common approach is to assume some simple and reasonable form of the dynamics, and to recover the parameters of the model which explain the data in the most accurate and minimalistic way; this is crucial for understanding the basic mechanisms of the spreading process, as well as for making further predictions without overfitting. For example, if only a small number of samples is available, a few-parameter model should be used. In practice, it is very costly or even impossible to record the state of each node at every time step of the dynamics: we might only have access to a subset of nodes, or monitor the state of the system at particular times. For instance, surveys may give some information on the health or awareness of certain individuals, but there is no way to get a detailed account for the whole population; neural avalanches are usually recorded in cortical slices, representing only a small part of the brain; it is costly to deploy measurement devices on each unit of a complex infrastructure system; finally, hidden nodes play an important role in artificial learning architectures. This is precisely the setting that we address in this article: reconstruction of the parameters of a propagation model in the presence of nodes with hidden information and/or partial information in time. It is not surprising that this challenging problem turns out to be notably harder than its fully observed counterpart and requires new algorithms which are robust with respect to missing observations.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Related work. The inverse problem of network and coupling reconstruction in the dynamic setting has attracted considerable attention in the past several years.
However, most of the existing works are focused on learning propagation networks under the assumption that full diffusion information is available. The papers [7, 8, 9, 10] developed inference methods based on the maximization of the likelihood of the observed cascades, leading to distributed and convex optimization algorithms in the case of continuous and discrete dynamics, principally for variants of the independent cascade (IC) model [11]. These algorithms have been further improved under the sparse recovery framework [12, 13], which is particularly efficient for structure learning of treelike networks. A careful rigorous analysis of these likelihood-based and alternative [14, 15] reconstruction algorithms gives an estimate of the number of observed cascades required for an exact network recovery with high probability. Precise conditions for parameter recovery at a given accuracy are still lacking. The fact that the aforementioned algorithms rely on a fully observed spreading history represents an important limitation in the case of incomplete information. The case of missing time information has been addressed in two recent papers: focusing primarily on tree graphs, [16] studied the structure learning problem in which only initial and final spreading states are observed; [17] addressed the network reconstruction problem in the case of partial time snapshots of the network, using relaxation optimization techniques and assuming that a full probabilistic trace for each node in the network is available. A standard technique for dealing with incomplete data involves maximizing the likelihood marginalized over the hidden information; for example, this approach has been used in [18] for identifying the diffusion source. In what follows, we use this method for benchmarking our results. Overview of results.
In this article, we propose a different algorithm, based on the recently introduced dynamic message-passing (DMP) equations for cascading processes [19, 20], which will be referred to as DMPREC (DMP-reconstruction) throughout the text. Making use of all available information, it yields significantly more accurate reconstruction results, outperforming the likelihood method while having a substantially lower algorithmic complexity, independent of the number of nodes with unobserved information. More generally, the DMPREC framework can be easily adapted to allow reconstruction of heterogeneous transmission probabilities in a large class of cascading processes, including the IC and threshold models, the SIR and other epidemiological models, rumor spreading dynamics, etc., as well as for processes occurring on dynamically-changing networks.

2 Problem formulation

Model. For the sake of simplicity and definiteness, we assume that cascades follow the dynamics of the stochastic susceptible-infected (SI) model in discrete time, defined on a network G = (V, E) with set of nodes denoted by V and set of directed edges E [3]. Each node i ∈ V at times t = 1, 2, . . . , T can be in either of two states: susceptible (S) or infected (I). At each time step, node i in the I state can activate one of its susceptible neighbors j with probability α_ij.¹ The dynamics is non-recurrent: once a node is activated (infected), it can never change its state back to susceptible. In what follows, the network G is assumed to be known.

Incomplete observations and inference problem. We assume that the input is formed from M independent cascades, where a cascade Σ^c is defined as a collection of activation times of nodes in the network, {τ_i^c}_{i∈V}. Each cascade is observed up to the final observation time T. Notice that T is an important parameter: intuitively, the larger T is, the more information is contained in the cascades, and the fewer samples are needed.
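The discrete-time SI dynamics just described is straightforward to simulate. The sketch below (function and variable names are ours, not the paper's) generates one cascade of activation times, encoding τ_i = T for nodes that are not activated before the horizon:

```python
import random

def simulate_si_cascade(alpha, n_nodes, source, T, rng):
    """One cascade of the discrete-time SI model.
    alpha maps a directed edge (i, j) to its transmission probability;
    tau[i] = T encodes "node i is activated at time T or later"."""
    tau = [T] * n_nodes
    tau[source] = 0
    for t in range(1, T):
        for (i, j), a_ij in alpha.items():
            # a node infected strictly before t retries each susceptible
            # neighbor at every round (unlike the single-attempt IC model)
            if tau[i] < t and tau[j] == T and rng.random() < a_ij:
                tau[j] = t
    return tau

# toy chain 0 -> 1 -> 2 with transmission probability 0.6 on each edge
print(simulate_si_cascade({(0, 1): 0.6, (1, 2): 0.6}, 3, 0, 5, random.Random(0)))
```

Repeating this M times with randomly chosen sources produces the ensemble of cascades from which the couplings are to be reconstructed.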
We assume that T is given and fixed, being related to the availability of a finite-time observation window. If node i in cascade c does not get activated at some time prior to the horizon T, we put by definition τ_i^c = T; hence, τ_i^c = T means that node i changes its state at time T or later. The full information on the cascades, Σ = ∪_c Σ^c, is divided into the observed part Σ_O and the hidden part Σ_H. Thus, in general Σ_O contains only a subset of the activation times in [0, T] for the set of observed nodes O ⊆ V. The task is to reconstruct the couplings {α*_ij}_{(ij)∈E} ≡ G_{α*}, where the star indicates the original transmission probabilities that have been used to generate the data.

¹ We chose this two-state model since it has slightly more general dynamic rules compared to the popular IC model [11], which carries an additional restriction: a node infected at time t has a single chance to activate its susceptible neighbors at time step t+1, while further infection attempts in subsequent rounds are not allowed. The DMPREC method presented below can be easily applied to the case of the IC model by noticing that it corresponds to the SIR model with a recovery probability equal to one, for which the DMP equations are known [20].

Maximization of the likelihood. Similarly to the formulations considered in [7, 8, 10], it is possible to write explicitly the likelihood of the discrete-time SI model in the case of fully available information Σ_O = Σ, under the assumption that the data has been generated using the couplings G_α:

P(\Sigma \mid G_\alpha) = \prod_{i \in V} \prod_{1 \le c \le M} P_i(\tau_i^c \mid \Sigma^c, G_\alpha),    (1)

with

P_i(\tau_i^c \mid \Sigma^c, G_\alpha) = \left[ \prod_{t'=0}^{\tau_i^c - 2} \prod_{k \in \partial i} \left( 1 - \alpha_{ki} \mathbb{1}_{\tau_k^c \le t'} \right) \right] \left[ 1 - \prod_{k \in \partial i} \left( 1 - \alpha_{ki} \mathbb{1}_{\tau_k^c \le \tau_i^c - 1} \right) \right]^{\mathbb{1}_{\tau_i^c < T}},    (2)

where ∂i denotes the set of neighbors of node i in the graph G, and 𝟙 is the indicator function.
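For a fully observed cascade, the per-node factor (2) can be evaluated directly. A minimal sketch (naming conventions are ours; seed nodes with τ_i = 0 are treated as part of the initial condition and contribute a factor of one):

```python
def node_likelihood(i, tau, alpha, in_nbrs, T):
    """Per-node factor P_i(tau_i | cascade, G_alpha) of Eq. (2).
    tau: activation times of all nodes; in_nbrs[i]: nodes k with a
    directed edge (k, i); alpha[(k, i)]: its transmission probability."""
    t_i = tau[i]
    if t_i == 0:
        return 1.0  # seed node: part of the initial condition
    # no infected neighbor transmitted up to time t_i - 2
    silent = 1.0
    for t in range(t_i - 1):            # t' = 0, ..., t_i - 2
        for k in in_nbrs[i]:
            if tau[k] <= t:
                silent *= 1.0 - alpha[(k, i)]
    if t_i == T:                        # exponent 1_{tau_i < T} is zero
        return silent
    # ... and at least one active neighbor transmitted at time t_i - 1
    missed = 1.0
    for k in in_nbrs[i]:
        if tau[k] <= t_i - 1:
            missed *= 1.0 - alpha[(k, i)]
    return silent * (1.0 - missed)

# two-node example: edge 0 -> 1 with alpha = 0.3, node 0 is the seed
print(node_likelihood(1, [0, 2], {(0, 1): 0.3}, {0: [], 1: [0]}, 5))
```

Multiplying these factors over all nodes and all cascades gives the full likelihood (1).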
The expression (2) has the following meaning: the probability that node i has been activated at time τ_i, given the activation times of its neighbors, is equal to the probability that the activation signal has not been transmitted by any infected neighbor of i until time τ_i − 2 (first term in the product), times the probability that at least one of the active neighbors actually transmitted the infection at time τ_i − 1 (second term). A straightforward adaptation of the NETRATE algorithm, suggested in [8], to the present setting implies that the estimate of the transmission probabilities, \hat{G}_{α*}, is obtained as the solution of the convex optimization problem

\hat{G}_{\alpha^*} = \arg\min_{G_\alpha} \left( -\ln P(\Sigma \mid G_\alpha) \right),    (3)

which can be solved locally for each node i and its neighborhood, due to the factorization of the likelihood (1) under the assumption of asymmetric couplings. In the case of partial observations, the optimization problem (3) is not well defined, since it requires full knowledge of the activation times of every node. A simple and natural extension of this scheme, which we will refer to as the maximum likelihood estimator (MLE), is to consider the likelihood function marginalized over the unknown activation times:

P(\Sigma_O \mid G_\alpha) = \sum_{\{\tau_h^c\},\, h \in H} P(\Sigma \mid G_\alpha).    (4)

An exact evaluation of (4) is a computationally hard high-dimensional summation problem, with complexity proportional to T^H in the presence of H nodes with hidden information. In order to correct for this fact, we propose a heuristic scheme which we denote the heuristic two-stage (HTS) algorithm. The idea of HTS consists of completing the missing part {τ_h^c}_{h∈H} of the cascades at each step of the optimization process with the most probable values according to the current estimate of the couplings \hat{G}_α, namely \hat{Σ}_H = arg max P(Σ | \hat{G}_α), and solving the optimization problem (3) using the full information on the cascades Σ = Σ_O ∪ \hat{Σ}_H; these two alternating steps are iterated until the global convergence of the algorithm.
An exact (brute-force) estimation of \hat{Σ}_H requires an exponential number of operations, T^H, as in the original MLE formulation. However, we found that in practice the computational time can be significantly reduced with the use of Monte Carlo sampling. The corresponding approximation is based on the observation that the likelihood (1) is non-zero only for sets {τ_i^c}_{i∈V} forming possible (realizable) cascades. Hence, for each c, we sample L_{H,T} auxiliary cascades and choose the set of {τ_h^c}_{h∈H} maximizing (1). L_{H,T} is typically a large sampling parameter, growing with T and H to ensure proper convergence. This procedure leads to an algorithm with a complexity O(N M |E|^2 L_{H,T}) at each step of the optimization, where |E| denotes the number of edges; see the journal version of the paper [21] for a more in-depth discussion. Hence, both the MLE and HTS algorithms are practically intractable; the remaining part of the paper is devoted to the development of an accurate algorithm with a polynomial-time computational complexity for this hard problem. The next section introduces the dynamic message-passing equations which serve as a basis for such an algorithm.

3 Dynamic message-passing equations

The dynamic message-passing equations for the SI model in continuous [19] and discrete [20] settings allow one to compute the marginal probability that node i is in the state S at time t:

P_S^i(t) = P_S^i(0) \prod_{k \in \partial i} \theta^{k \to i}(t)    (5)

for t > 0 and a given initial condition P_S^i(0). The variables θ^{k→i}(t) represent the probability that node k did not pass the activation signal to node i until time t. The intuition behind the key Equation (5) is that the probability of node i being susceptible at time t is equal to the probability of it being in the S state at the initial time, times the probability that none of its neighbors infected it until time t.
The quantities θ^{k→i}(t) can be computed iteratively using the following expressions:

\theta^{k \to i}(t) = \theta^{k \to i}(t-1) - \alpha_{ki}\, \varphi^{k \to i}(t-1),    (6)

\varphi^{k \to i}(t) = (1 - \alpha_{ki})\, \varphi^{k \to i}(t-1) + P_S^k(0) \left[ \prod_{l \in \partial k \setminus i} \theta^{l \to k}(t-1) - \prod_{l \in \partial k \setminus i} \theta^{l \to k}(t) \right],    (7)

where ∂k\i denotes the set of neighbors of k excluding i. Equation (6) translates the fact that θ^{k→i}(t) can only decrease if the infection is actually transmitted along the directed link (ki) ∈ E; this happens with probability α_{ki} times φ^{k→i}(t−1), where φ^{k→i}(t) denotes the probability that node k is in the state I at time t but has not transmitted the infection to node i until time t. Equation (7), which allows us to close the system of dynamic equations, describes the evolution of the probability φ^{k→i}(t): at time t−1, it decreases if the infection is transmitted (first term in the sum), and increases if node k goes from the state S to the state I (difference of the second and third terms). Note that node i is excluded from the corresponding products over θ-variables because this equation is conditioned on the fact that i is in the state S, and therefore cannot infect k. Equations (6) and (7) are iterated in time starting from the initial conditions θ^{i→j}(0) = 1 and φ^{i→j}(0) = 1 − P_S^i(0), which are consistent with the definitions above. The name "DMP equations" comes from the fact that the whole scheme can be interpreted as a procedure of passing "messages" along the edges of the network.

Theorem 1. The DMP equations for the SI model, defined by Equations (5)-(7), yield exact marginal probabilities on tree networks. On general networks, the quantities P_S^i(t) give a lower bound on the values of the marginal probabilities.

Proof sketch. The exactness of the solution on tree graphs immediately follows from the fact that the DMP equations can be derived from belief propagation equations on time trajectories [20], which provide exact marginals on trees.
The fact that P_S^i(t) computed according to (5) represents a lower bound on the marginal probabilities in general networks can be derived from a counting argument, considering multiple infection paths on a loopy graph which contribute to the computation of P_S^i(t), effectively lowering its value through Equation (5); the proof technique is borrowed from [19], where similar dynamic equations in the continuous-time case have been considered. □

Using the definition (5) of P_S^i(t), it is convenient to define the marginal probability m_i(t) that node i is activated at time t:

m_i(t) = P_S^i(0) \left[ \prod_{k \in \partial i} \theta^{k \to i}(t-1) - \prod_{k \in \partial i} \theta^{k \to i}(t) \right].    (8)

As often happens with message-passing algorithms, although exact only on tree networks, the DMP equations provide accurate results even on loopy networks. An example is provided in Figure 1, where the DMP-predicted marginals are compared with the values obtained from extensive simulations of the dynamics on a network of retweets with N = 96 nodes [22]. This observation will allow us to use the DMP equations as a suitable approximation tool on general networks. In the next section we describe an efficient reconstruction algorithm, DMPREC, which is based on the resolution of the dynamics given by the DMP equations and makes use of all available information.

4 Proposed algorithm: DMPREC

Probability of cascades and free energy. The marginalization over hidden nodes in (4) creates a complex relation between the couplings in the whole graph, resulting in a non-explicit expression. The main idea behind the DMPREC algorithm is to approximate the likelihood of the observed cascades (4) through the marginal probability distributions (5) and (8):

P(\Sigma_O \mid G_\alpha) \approx \prod_{c=1}^{M} \prod_{i \in O} \left[ m_i(\tau_i^c \mid G_\alpha)\, \mathbb{1}_{\tau_i^c < T} + P_S^i(\tau_i^c \mid G_\alpha)\, \mathbb{1}_{\tau_i^c = T} \right].    (9)

The expression (9) is at the core of the suggested algorithm.
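Equations (5)-(7), together with (8), translate into a short message-passing loop over directed edges. The sketch below uses our own data structures, and is checked against the closed form P_S^1(t) = (1 − α)^t for a two-node chain in which node 0 is infected at time 0:

```python
def dmp_si(edges, alpha, P0, T):
    """DMP equations for the discrete-time SI model, Eqs. (5)-(7).
    edges: list of undirected pairs (i, j); messages live on both directions.
    alpha[(k, i)]: transmission probability along the directed edge (k, i).
    P0[i]: probability that node i is susceptible at time 0.
    Returns marginals[t][i] = P_S^i(t)."""
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    nbr = {i: [] for i in P0}
    for k, i in directed:
        nbr[i].append(k)
    theta = {e: 1.0 for e in directed}                  # theta^{k->i}(0) = 1
    phi = {(k, i): 1.0 - P0[k] for k, i in directed}    # phi^{k->i}(0) = 1 - P_S^k(0)
    marginals = [dict(P0)]
    for _ in range(T):
        new_theta = {}
        for k, i in directed:
            new_theta[(k, i)] = theta[(k, i)] - alpha[(k, i)] * phi[(k, i)]  # Eq. (6)
        new_phi = {}
        for k, i in directed:
            prod_old, prod_new = 1.0, 1.0
            for l in nbr[k]:
                if l != i:
                    prod_old *= theta[(l, k)]
                    prod_new *= new_theta[(l, k)]
            new_phi[(k, i)] = ((1.0 - alpha[(k, i)]) * phi[(k, i)]
                               + P0[k] * (prod_old - prod_new))              # Eq. (7)
        theta, phi = new_theta, new_phi
        marg = {}
        for i in P0:
            prod = 1.0
            for k in nbr[i]:
                prod *= theta[(k, i)]
            marg[i] = P0[i] * prod                                           # Eq. (5)
        marginals.append(marg)
    return marginals

# two-node chain, node 0 infected at time 0, alpha = 0.3:
# node 1 stays susceptible with probability (1 - 0.3)^t
marginals = dmp_si([(0, 1)], {(0, 1): 0.3, (1, 0): 0.3}, {0: 0.0, 1: 1.0}, 4)
print([marginals[t][1] for t in range(5)])
```

On this chain, θ^{0→1}(t) = (1 − α)^t, so Eq. (5) reproduces the exact probability that no transmission occurred during t rounds; on loopy graphs the same loop returns the DMP lower bound of Theorem 1.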
Figure 1: Illustration of the accuracy of the DMP equations on a network of retweets with N = 96 nodes [22]. (a) Comparison of the DMP-predicted P_S^i(t) with P_S^i(t) estimated from 10^6 runs of Monte Carlo simulations, with t = 10 and one infected node at the initial time. The couplings {α_ij} have been generated uniformly at random in the range [0, 1]. (b) Visualization of the network topology, created with the Gephi software.

As there is no tractable way to compute the joint probability of partial observations exactly, we approximate it using a mean-field-type approach, as a product of the marginal probabilities provided by the dynamic message-passing equations. The reasoning behind this approach is that each marginal is expressed through an average over all possible realizations of the dynamics with a given initial condition; this is in contrast with the likelihood function, which considers only the particular instance realized in the given cascade. Therefore, equation (9) summarizes the effect of different propagation paths, and the maximization of this probability function will yield the most likely consensus between the ensemble of couplings in the network. Precisely this key property makes the reconstruction possible in the case involving nodes with hidden information, via maximization of the objective (9), which can be interpreted as a cost function representing the product of the individual probabilities of activation taken precisely at the observed infection times. Starting from this expression, one can define the associated "free energy":

f_{\mathrm{DMP}} = -\ln P(\Sigma_O \mid G_\alpha) = \sum_{i \in O} f^i_{\mathrm{DMP}},    (10)

where f^i_{\mathrm{DMP}} = -\sum_c \ln \left[ m_i(\tau_i^c)\, \mathbb{1}_{\tau_i^c \le T-1} + P_S^i(T-1)\, \mathbb{1}_{\tau_i^c = T} \right]. In the last expression for f^i_{\mathrm{DMP}} we used the fact that m_i(T) + P_S^i(T) = P_S^i(T−1). Our goal is to minimize the free energy (10) with respect to {α_ij}_{(ij)∈E}.
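For a two-node chain (node 0 infected at time 0, a single coupling α), the M → ∞ free energy can be written in closed form, since the activation-time distribution of node 1 is m_1(t | α) = α(1 − α)^{t−1} for t ≤ T − 1, plus the "censored" mass P_S^1(T − 1 | α) = (1 − α)^{T−1}. A quick numerical check (our own naming; a grid search stands in for gradient descent) confirms that this free energy is minimized at the true coupling:

```python
import math

ALPHA_TRUE, T = 0.3, 10

def outcome_probs(a):
    # activation-time distribution of node 1 on a two-node chain:
    # m_1(t | a) = a (1-a)^(t-1) for t = 1..T-1, plus P_S^1(T-1 | a)
    probs = [a * (1 - a) ** (t - 1) for t in range(1, T)]
    probs.append((1 - a) ** (T - 1))
    return probs

def free_energy(a):
    # M -> infinity limit of f_DMP: cross-entropy of the outcome
    # distribution under coupling a against the true one
    p_true = outcome_probs(ALPHA_TRUE)
    p = outcome_probs(a)
    return -sum(pt * math.log(q) for pt, q in zip(p_true, p))

grid = [k / 100 for k in range(1, 100)]
best = min(grid, key=free_energy)
print(best)
```

Because the probabilities returned by outcome_probs sum to one, this cross-entropy is uniquely minimized at the true coupling, so the grid argmin lands on α* = 0.3.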
A similar approach has been previously outlined in [23] as a way to learn homogeneous couplings in a spreading-source inference algorithm. In order to carry out this optimization task, we need to develop an efficient way of evaluating the gradient.

Computation of the gradient. The gradient of the free energy reads (note that the indicator functions point to disjoint events):

\frac{\partial f^i_{\mathrm{DMP}}}{\partial \alpha_{rs}} = -\sum_c \left[ \frac{\partial m_i(\tau_i^c \mid G_\alpha) / \partial \alpha_{rs}}{m_i(\tau_i^c \mid G_\alpha)}\, \mathbb{1}_{\tau_i^c \le T-1} + \frac{\partial P_S^i(T-1 \mid G_\alpha) / \partial \alpha_{rs}}{P_S^i(T-1 \mid G_\alpha)}\, \mathbb{1}_{\tau_i^c = T} \right],    (11)

where the derivatives of the marginal probabilities can be computed explicitly by taking the derivative of the DMP equations (5)-(8). Let us denote ∂θ^{k→i}(t)/∂α_rs ≡ p^{k→i}_{rs}(t) and ∂φ^{k→i}(t)/∂α_rs ≡ q^{k→i}_{rs}(t). Since the dynamic messages at the initial time, {θ^{i→j}(0)} and {φ^{i→j}(0)}, do not depend on the couplings, we have p^{k→i}_{rs}(0) = q^{k→i}_{rs}(0) = 0 for all k, i, r, s, and these quantities can be computed iteratively using the analogues of Equations (6) and (7):

p^{k \to i}_{rs}(t) = p^{k \to i}_{rs}(t-1) - \alpha_{ki}\, q^{k \to i}_{rs}(t-1) - \varphi^{k \to i}(t-1)\, \mathbb{1}_{k=r,\, i=s},    (12)

q^{k \to i}_{rs}(t) = (1 - \alpha_{ki})\, q^{k \to i}_{rs}(t-1) - \varphi^{k \to i}(t-1)\, \mathbb{1}_{k=r,\, i=s} + P_S^k(0) \sum_{l \in \partial k \setminus i} p^{l \to k}_{rs}(t-1) \prod_{n \in \partial k \setminus \{i,l\}} \theta^{n \to k}(t-1) - P_S^k(0) \sum_{l \in \partial k \setminus i} p^{l \to k}_{rs}(t) \prod_{n \in \partial k \setminus \{i,l\}} \theta^{n \to k}(t).    (13)

Using these quantities, the derivatives of the marginals entering Equation (11) can be written as

\frac{\partial P_S^i(t)}{\partial \alpha_{rs}} = P_S^i(0) \sum_{k \in \partial i} p^{k \to i}_{rs}(t) \prod_{l \in \partial i \setminus k} \theta^{l \to i}(t), \qquad \frac{\partial m_i(t)}{\partial \alpha_{rs}} = \frac{\partial P_S^i(t-1)}{\partial \alpha_{rs}} - \frac{\partial P_S^i(t)}{\partial \alpha_{rs}}.    (14)

The following observation shows that, at least on tree networks, corresponding to the regime in which the DMP equations have been derived, the gradient of the free energy vanishes at the values of the original transmission probabilities G_{α*}.

Claim 1. On a tree network, in the limit of a large number of samples M → ∞, the derivative of the free energy is equal to zero at the values of the couplings G_{α*} used for generating the cascades.

Proof. Let us first look at samples originating from the same initial condition.
According to Theorem 1, the DMP equations are exact on tree graphs, and hence it is easy to see that

\lim_{M \to \infty} f^i_{\mathrm{DMP}} = -\sum_{t \le T-1} m_i(t \mid G_{\alpha^*}) \ln m_i(t \mid G_\alpha) - P_S^i(T-1 \mid G_{\alpha^*}) \ln P_S^i(T-1 \mid G_\alpha).    (15)

Therefore,

\lim_{M \to \infty} \frac{\partial f^i_{\mathrm{DMP}}}{\partial \alpha_{rs}} \bigg|_{G_{\alpha^*}} = -\frac{\partial}{\partial \alpha_{rs}} \left[ \sum_{t \le T-1} m_i(t \mid G_{\alpha^*}) + P_S^i(T-1 \mid G_{\alpha^*}) \right] = 0,

since the expression inside the brackets sums exactly to one. This result trivially extends to samples with different initial conditions. Combining this result with the definition (10) completes the proof. □

The DMPREC algorithm consists of running the message-passing equations for the derivatives of the dynamic variables, (12) and (13), in parallel with the DMP equations (5)-(7), allowing for the computation of the gradient of the free energy (11) through (14), which is used afterwards in the optimization procedure. Let us analyse the computational complexity of each step of the parameter update. The number of runs is equal to the number of distinct initial conditions in the ensemble of observed cascades, so if all M cascades start from distinct initial conditions, the complexity of the DMPREC algorithm is O(|E|^2 T M) for each update step of {α_rs}_{(rs)∈E}. Hence, in a typical situation where each cascade is initiated at one particular node, the number of runs is bounded by N, and the overall update-step complexity of DMPREC is O(|E|^2 T N).

Missing information in time. On top of inaccessible nodes, the state of the network may be monitored at a lower frequency than the natural time scale of the dynamics. It is easy to adapt the algorithm to the case of observations at K time steps, T ≡ {t_k}_{k∈[1,K]}. Since the activation time τ_i^c of node i in cascade c is now known only up to the interval [t_{k_i^c} + 1, t_{k_i^c + 1}] ≡ δ_{k_i^c}, where t_{k_i^c} < τ_i^c ≤ t_{k_i^c + 1}, one should maximize

\sum_{t \in \delta_{k_i^c}} m_i(t) = P_S^i(t_{k_i^c}) - P_S^i(t_{k_i^c + 1}) \equiv \Delta_{k_i^c} P_S^i(t \mid G_\alpha)

instead of m_i(τ_i^c) in this case.
This leads to obvious modifications of the expressions (10) and (11), using differences of derivatives at the corresponding times instead of the one-step differences in (14). For instance, if the final time is not included in the observations, we have

f^i_{\mathrm{DMP}} = -\sum_c \ln \left( \Delta_{k_i^c} P_S^i(t \mid G_\alpha) \right), \qquad \frac{\partial f^i_{\mathrm{DMP}}}{\partial \alpha_{rs}} = -\sum_c \left[ \frac{\partial \Delta_{k_i^c} P_S^i(t \mid G_\alpha) / \partial \alpha_{rs}}{\Delta_{k_i^c} P_S^i(t \mid G_\alpha)} \right].

5 Numerical results

We evaluate the performance of the DMPREC algorithm on synthetic and real-world networks under the assumption of partial observations. In the numerical experiments, we focus primarily on the presence of inaccessible nodes, which is a more computationally difficult case than the setting of missing information in time. An example involving partial time observations is shown in Section 5.1.

5.1 Tests with synthetic data

Experimental setup. In the tests described in this section, the couplings {α_ij} are sampled uniformly in the range [0, 1], and the final observation time is set to T = 10. Each cascade is generated using the discrete-time SI model defined in Section 2, starting from randomly selected sources. In the case of inaccessible nodes, the activation-time data is hidden in all the samples for H randomly selected nodes. We use the likelihood methods for benchmarking the accuracy of our approach. The MLE algorithm introduced above is not tractable even on small graphs; therefore, we compare the results of DMPREC with the HTS algorithm outlined in Section 2. Still, HTS has a very high computational complexity, and therefore we are bound to run comparative tests on small graphs: a connected component of an artificially-generated network with N = 20, sampled using a power-law degree distribution, and a real directed network of relationships in a New England monastery with N = 18 nodes [24]. Both algorithms are initialized with α_ij = 0.5 for all (ij) ∈ E.
The accuracy of reconstruction is assessed using the ℓ1 norm of the difference between the reconstructed and original couplings, normalized by the number of directed edges in the graph. Intuitively, this measure gives an average expected error for each parameter α_ij.

[Figure 2 plots omitted.]

Figure 2: Tests for DMPREC and HTS on a small power-law network: (a) for a fixed number of nodes with unobserved information, H = 5; (b) for a fixed number of samples, M = 6400. (c) Scatter plot of {α_ij} obtained with DMPREC versus the original parameters {α*_ij} in the case of missing information in time with M = 6400, T = 10; the state of the network is observed every other time step.

[Figure 3 plots omitted.]

Figure 3: Numerical results for the real-world Monastery network of [24]: (a) for a fixed number of nodes with unobserved information, H = 4; (b) for a fixed number of samples, M = 6400. (c) The topology of the network (thickness of edges proportional to the {α*_ij} used for generating cascades).

Results. In Figure 2 we present results for a small power-law network with short loops, which is not a favorable situation for DMP equations derived in the tree-like approximation of the graph. Figures 2(a) and 2(b) show the dependence of the average reconstruction error as a function of M (for fixed H/N = 0.25) and of H (for fixed M = 6400), respectively. DMPREC clearly outperforms the HTS algorithm, yielding surprisingly accurate reconstruction of the transmission probabilities even in the case where half of the network nodes do not report any information.
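The error measure described above is simply a normalized ℓ1 distance between the two sets of couplings; a minimal sketch (the function name and dict-keyed edge representation are illustrative):

```python
def reconstruction_error(alpha_hat, alpha_true):
    """Average absolute error per directed edge:
    (1/|E|) * sum over edges of |alpha_hat_ij - alpha*_ij|."""
    edges = alpha_true.keys()
    return sum(abs(alpha_hat[e] - alpha_true[e]) for e in edges) / len(edges)
```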
Most importantly, DMPREC achieves reconstruction with a significantly lower computational time: for example, while it took more than 24 hours to compute the point corresponding to H = 4 and M = 6400 with HTS (MLE at this test point took several weeks to converge), the computation involving DMPREC converged to the presented level of accuracy in less than 10 minutes on a standard laptop. These times illustrate the hardness of the learning problem involving incomplete information. We have also used this case-study network to test the estimation of transmission probabilities with the DMPREC algorithm when the state of the network is recorded only at a subset of times 𝒯 ⊂ [0, T]. Results for the case where every other time stamp is missing are given in Figure 2(c): couplings estimated with DMPREC are compared to the original values {α*_ij}; despite the fact that only 50% of the time stamps are available, the inferred couplings show an excellent agreement with the ground truth. Equivalent results for the real-world relationship network extracted from the study [24], containing both directed and undirected links, are shown in Figure 3; the ability of DMPREC to capture the mutual dependencies of different couplings through dynamic correlations is even more pronounced in this case, with almost perfect reconstruction of the couplings for large M and a rather weak dependence on the number of nodes with removed observations. (Note that the error measure excludes those few parameters which are impossible to reconstruct: e.g., no algorithm can learn the coupling associated with the incoming edge of a hidden node located at a leaf of the network.) We have also run tests on larger synthetic networks, which show similar reconstruction results for DMPREC, but where comparisons with the likelihood method could not be carried out. In the next section we focus on an application involving real-world data, which represents a more interesting and important case for the validation of the algorithm.
5.2 Tests with real-world data

As a proxy for real statistics, we used the data provided by the Bureau of Transportation Statistics [25], from which we reconstructed a part of the U.S. air transportation network, where airports are the nodes and directed links correspond to traffic between them. The reason behind this choice is the fact that the majority of large-scale influenza pandemics over the past several decades have been air-traffic-mediated epidemics. For illustration purposes, we selected the top N = 30 airports ranked according to the total number of passenger enplanements and commonly classified as large hubs, and extracted the sub-network of flights between them. The weight of each edge is defined by the annual number of transported passengers, aggregated over multiple routes; we have pruned links with relatively low traffic (below 10% of the traffic level on the busiest routes), so that the total number of remaining directed links is |E| = 210. The final weights are based on the assumption that the probability of infection transmission is proportional to the flux; the weights have been renormalized accordingly so that the busiest route receives the coupling α_ij = 0.5. The resulting network is depicted in Figure 4. We have generated M = 10,000 independent cascades in this network, and have hidden the information at H = 15 nodes (50% of the airports) selected at random. We observe that even with such a large portion of missing information, the reconstructed parameters show a good agreement with the original ones.
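The weighting scheme described above (pruning low-traffic routes, then rescaling so the busiest route gets coupling 0.5) amounts to a simple renormalization; a sketch with hypothetical flux data and illustrative names:

```python
def flux_to_couplings(flux, alpha_max=0.5, prune_frac=0.1):
    """Map passenger fluxes to transmission probabilities.

    Drops edges carrying less than prune_frac of the busiest route's
    traffic, then rescales so the busiest remaining route gets alpha_max.
    """
    top = max(flux.values())
    return {e: alpha_max * f / top
            for e, f in flux.items() if f >= prune_frac * top}
```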
[Figure 4 network map and scatter plots omitted.]

Figure 4: Left: Sub-network of flights between major U.S. hubs, where the thickness of edges is proportional to the aggregated traffic between them; nodes which do not report information are indicated in red. Right: Scatter plots of reconstructed {α_ij} versus original {α*_ij} couplings for H = 0 and H = 15 with M = 10,000 (average errors ⟨|α_ij − α*_ij|⟩ = 0.0400 and 0.0473, respectively).

6 Conclusions and path forward

From the algorithmic point of view, inference of spreading parameters in the presence of nodes with incomplete information considerably complicates the problem, because the reconstruction can no longer be performed independently for each neighborhood. In this paper, it is shown how the dynamic interdependence of parameters can be exploited in order to recover the couplings in a setting involving hidden information. Let us discuss several directions for future work.
DMPREC can be straightforwardly generalized to more complicated spreading models using the generic form of the DMP equations [20] and the key approximation ingredient (9), as well as adapted to the case of temporal graphs by encoding the network dynamics via time-dependent coefficients α_ij(t), which might be more appropriate in certain real situations. It would also be useful to extend the present framework to the case of continuous dynamics using the continuous-time version of the DMP equations of [19]. An important direction would be to generalize the learning problem beyond the assumption of a known network, and to formulate precise conditions for the detection of hidden nodes and for perfect network recovery in this case. Finally, in the spirit of active learning, we anticipate that DMPREC could be helpful for problems involving an optimal placement of observers in situations where the collection of full measurements is costly.

Acknowledgements. The author is grateful to M. Chertkov and T. Misiakiewicz for discussions and comments, and acknowledges support from the LDRD Program at Los Alamos National Laboratory by the National Nuclear Security Administration of the U.S. Department of Energy under Contract No. DE-AC52-06NA25396.

References
[1] C. Nowzari, V. Preciado, and G. Pappas. Analysis and control of epidemics: A survey of spreading processes on complex networks. Control Systems, IEEE, 36(1):26–46, 2016.
[2] A. Y. Lokhov and D. Saad. Optimal deployment of resources for maximizing impact in spreading processes. arXiv preprint arXiv:1608.08278, 2016.
[3] R. Pastor-Satorras, C. Castellano, P. Van Mieghem, and A. Vespignani. Epidemic processes in complex networks. Rev. Mod. Phys., 87:925–979, 2015.
[4] S. Boccaletti, V. Latora, Y. Moreno, M. Chavez, and D.-U. Hwang. Complex networks: Structure and dynamics. Physics Reports, 424(4):175–308, 2006.
[5] I. Dobson, B. A. Carreras, V. E. Lynch, and D. E. Newman. Complex systems analysis of series of blackouts: Cascading failure, critical points, and self-organization. Chaos, 17(2):026103, 2007.
[6] R. O'Dea, J. J. Crofts, and M. Kaiser. Spreading dynamics on spatially constrained complex brain networks. J. R. Soc. Interface, 10(81):20130016, 2013.
[7] S. Myers and J. Leskovec. On the convexity of latent social network inference. In Advances in Neural Information Processing Systems, pages 1741–1749, 2010.
[8] M. Gomez-Rodriguez, D. Balduzzi, and B. Schölkopf. Uncovering the temporal dynamics of diffusion networks. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 561–568, 2011.
[9] N. Du, L. Song, M. Yuan, and A. J. Smola. Learning networks of heterogeneous influence. In Advances in Neural Information Processing Systems, pages 2780–2788, 2012.
[10] P. Netrapalli and S. Sanghavi. Learning the graph of epidemic cascades. In ACM SIGMETRICS Performance Evaluation Review, volume 40, pages 211–222. ACM, 2012.
[11] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 137–146. ACM, 2003.
[12] H. Daneshmand, M. Gomez-Rodriguez, L. Song, and B. Schölkopf. Estimating diffusion network structures: Recovery conditions, sample complexity & soft-thresholding algorithm. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), page 793, 2014.
[13] J. Pouget-Abadie and T. Horel. Inferring graphs from cascades: A sparse recovery framework. In Proceedings of the 32nd International Conference on Machine Learning, pages 977–986, 2015.
[14] B. Abrahao, F. Chierichetti, R. Kleinberg, and A. Panconesi. Trace complexity of network inference. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 491–499. ACM, 2013.
[15] V. Gripon and M. Rabbat. Reconstructing a graph from path traces. In Information Theory Proceedings (ISIT), 2013 IEEE International Symposium on, pages 2488–2492. IEEE, 2013.
[16] K. Amin, H. Heidari, and M. Kearns. Learning from contagion (without timestamps). In Proceedings of the 31st International Conference on Machine Learning, pages 1845–1853, 2014.
[17] E. Sefer and C. Kingsford. Convex risk minimization to infer networks from probabilistic diffusion data at multiple scales. In Data Engineering (ICDE), 2015 IEEE 31st International Conference on, 2015.
[18] M. Farajtabar, M. Gomez-Rodriguez, N. Du, M. Zamani, H. Zha, and L. Song. Back to the past: Source identification in diffusion networks from partially observed cascades. In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics (AISTATS), pages 232–240, 2015.
[19] B. Karrer and M. E. Newman. Message passing approach for general epidemic models. Physical Review E, 82(1):016101, 2010.
[20] A. Y. Lokhov, M. Mézard, and L. Zdeborová. Dynamic message-passing equations for models with unidirectional dynamics. Physical Review E, 91(1):012811, 2015.
[21] A. Y. Lokhov and T. Misiakiewicz. Efficient reconstruction of transmission probabilities in a spreading process from partial observations. arXiv preprint arXiv:1509.06893, 2016.
[22] R. Rossi and N. Ahmed. Network repository, 2013. http://networkrepository.com.
[23] F. Altarelli, A. Braunstein, L. Dall'Asta, A. Lage-Castellanos, and R. Zecchina. Bayesian inference of epidemics on networks via belief propagation. Physical Review Letters, 112(11):118701, 2014.
[24] S. F. Sampson. Crisis in a cloister. PhD thesis, Cornell University, Ithaca, 1969.
[25] Bureau of Transportation Statistics. http://www.rita.dot.gov/bts/.
Tracking the Best Expert in Non-stationary Stochastic Environments

Chen-Yu Wei, Yi-Te Hong, Chi-Jen Lu
Institute of Information Science, Academia Sinica, Taiwan
{bahh723, ted0504, cjlu}@iis.sinica.edu.tw

Abstract

We study the dynamic regret of the multi-armed bandit and experts problems in non-stationary stochastic environments. We introduce a new parameter Λ, which measures the total statistical variance of the loss distributions over T rounds of the process, and study how this amount affects the regret. We investigate the interaction between Λ and Γ, which counts the number of times the distributions change, as well as between Λ and V, which measures how far the distributions deviate over time. One striking result we find is that even when Γ, V, and Λ are all restricted to constants, the regret lower bound in the bandit setting still grows with T. The other highlight is that in the full-information setting, a constant regret becomes achievable with constant Γ and Λ, as it can be made independent of T, while with constant V and Λ, the regret still has a T^{1/3} dependency. We not only propose algorithms with upper-bound guarantees, but prove their matching lower bounds as well.

1 Introduction

Many situations in our daily life require us to make repeated decisions which result in some losses corresponding to our chosen actions. This can be abstracted as the well-known online decision problem in machine learning [5]. Depending on how the loss vectors are generated, two different worlds are usually considered. In the adversarial world, loss vectors are assumed to be deterministic and controlled by an adversary, while in the stochastic world, loss vectors are assumed to be sampled independently from some distributions. In both worlds, good online algorithms are known which can achieve a regret of about √T over T time steps, where the regret is the difference between the total loss of the online algorithm and that of the best offline one.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Another distinction is about the information the online algorithm can receive after each action. In the full-information setting, it gets to know the whole loss vector of that step, while in the bandit setting, only the loss value of the chosen action is received. Again, in both settings, a regret of about √T turns out to be achievable. While the regret bounds remain of the same order in the general scenarios discussed above, things become different when some natural conditions are considered. One well-known example is that in the stochastic multi-armed bandit (MAB) problem, when the best arm (or action) is substantially better than the second best, with a constant gap between their means, a much lower regret, of the order of log T, becomes possible. This motivates us to consider other possible conditions which give a finer characterization of the problem in terms of the achievable regret. In the stochastic world, most previous works focused on the stationary setting, in which the loss (or reward) vectors are assumed to be sampled from the same distribution at all time steps. With this assumption, although one needs to balance exploration and exploitation in the beginning, after some trials one can be confident about which action is the best and rest assured that there are no more surprises. On the other hand, the world around us may not be stationary, and existing learning algorithms for the stationary case may then no longer work. In fact, in a non-stationary world, the dilemma between exploration and exploitation persists, as the underlying distribution may drift as time evolves. How does the non-stationarity affect the achievable regret? How does one measure the degree of non-stationarity?
In this paper, we answer the above questions through the notion of dynamic regret, which measures the algorithm's performance against an offline algorithm allowed to select the best arm at every step.

Related Works. One way to measure the non-stationarity of a sequence of distributions is to count the number of times the distribution at a time step differs from its previous one. Let Γ − 1 be this number, so that the whole time horizon can be partitioned into Γ intervals, with each interval having a stationary distribution. In the bandit setting, a regret of about √(ΓT) is achieved by the EXP3.S algorithm in [2], as well as by the discounted UCB and sliding-window UCB algorithms in [8]. The dependency on T can be refined in the full-information setting: AdaNormalHedge [10] and Adapt-ML-Prod [7] can both achieve regret of the form √(ΓC), where C is the total first-order and second-order excess loss, respectively, which is upper-bounded by T. From a slightly different Online Mirror Descent approach, [9] can also achieve a regret of about √(ΓD), where D is the sum of differences between consecutive loss vectors. Another measure of non-stationarity, denoted by V, is to compute the differences between the means of consecutive distributions and sum them up. Note that this allows the possibility for the best arm to change frequently, with a very large Γ, while still having similar distributions with a small V. For such a measure V, [3] provided a bandit algorithm which achieves a regret of about V^{1/3}T^{2/3}. This regret upper bound is unimprovable in general even in the full-information setting, as a matching lower bound was shown in [4]. Again, [9] refined the upper bound in the full-information setting through the introduction of D, achieving a regret of about (ṼDT)^{1/3} for a parameter Ṽ different from but related to V: Ṽ sums the differences between consecutive realized loss vectors, while V measures those between mean loss vectors.
This makes the results of [3] and [9] incomparable. The problem stems from the fact that [9] considers the traditional adversarial setting, while [3] studies the non-stationary stochastic setting. In this paper, we will provide a framework that bridges these two seemingly disparate worlds.

Our Results. We base ourselves in the stochastic world with non-stationary distributions, characterized by the parameters Γ and V. In addition, we introduce a new parameter Λ, which measures the total statistical variance of the distributions. Note that the traditional adversarial setting corresponds to the case with Λ = 0 and Γ ≈ V ≈ T, while the traditional stochastic setting has Λ ≈ T and Γ = V = 1. Clearly, with a smaller Λ, the learning problem becomes easier, and we would like to understand the tradeoff between Λ and the other parameters, including Γ, V, and T. In particular, we would like to know how the bounds described in the related works would change. Would all the dependency on T be replaced by Λ, or would only some partial dependency on T be shifted to Λ?

First, we consider the effect of the variance Λ with respect to the parameter Γ. We show that in the full-information setting, a regret of about √(ΓΛ) + Γ can be achieved, which is independent of T. On the other hand, we show a sharp contrast in the bandit setting: the dependency on T is unavoidable, and a lower bound of the order of √(ΓT) exists. That is, even when there is no variance in the distributions, with Λ = 0, and the distributions only change once, with Γ = 2, any bandit algorithm cannot avoid a regret of about √T, while a full-information algorithm can achieve a constant regret independent of T. Next, we study the tradeoff between Λ and V. We show that in the bandit setting, a regret of about (ΛVT)^{1/3} + √(VT) is achievable. Note that this recovers the V^{1/3}T^{2/3} regret bound of [3], as Λ is at most of the order of T, but our bound becomes better when Λ is much smaller than T.
Again, one may notice the dependency on T and wonder if it can also be removed in the full-information setting. We show that in the full-information setting, the regret upper bound and lower bound are both about (ΛVT)^{1/3} + V. Our upper bound is incomparable to the (ṼDT)^{1/3} bound of [9], since their adversarial setting corresponds to Λ = 0, and their D can be as large as T in our setting. Moreover, we see that while the full-information regret bound is slightly better than that in the bandit setting, there is still an unavoidable T^{1/3} dependency.

Our results provide a big picture of the regret landscape in terms of the parameters Λ, Γ, V, and T, in both the full-information and bandit settings. A table summarizing our bounds as well as previous ones is given in Appendix A in the supplementary material. Finally, let us remark that our effort mostly focuses on characterizing the achievable (minimax) regrets, and most of our upper bounds are achieved by algorithms which need knowledge of the related parameters and may not be practical. To complement this, we also propose a parameter-free algorithm, which still achieves a good regret bound and may be of independent interest.

2 Preliminaries

Let us first introduce some notation. For an integer K > 0, let [K] denote the set {1, . . . , K}. For a vector ℓ ∈ R^K, let ℓ_i denote its i-th component. When we need to refer to a time-indexed vector ℓ_t ∈ R^K, we will write ℓ_{t,i} to denote its i-th component. We will use the indicator function 1_C for a condition C, which gives the value 1 if C holds and 0 otherwise. For a vector ℓ, we let ∥ℓ∥_b denote its L_b-norm. While the standard notation O(·) is used to hide constant factors, we will use Õ(·) to hide logarithmic factors. Next, let us describe the problem we study in this paper. Imagine that a learner is given the choice of a total of K actions, and has to play iteratively for a total of T steps.
At step t, the learner needs to choose an action a_t ∈ [K], and then suffers a corresponding loss ℓ_{t,a_t} ∈ [0, 1], which is independently drawn from a non-stationary distribution with expected loss E[ℓ_{t,i}] = µ_{t,i}, which may drift over time. After that, the learner receives some feedback from the environment. In the full-information setting, the feedback is the whole loss vector ℓ_t = (ℓ_{t,1}, . . . , ℓ_{t,K}), while in the bandit setting, only the loss ℓ_{t,a_t} of the chosen action is revealed. A standard way to evaluate the learner's performance is to measure her (or his) regret, which is the difference between the total loss she suffers and that of an offline algorithm. While most prior works consider offline algorithms which can only play a fixed action for all steps, we consider stronger offline algorithms which can take different actions at different steps. This consideration is natural for non-stationary distributions, although it makes the regret large when compared to such stronger offline algorithms. Formally, we measure the learner's performance by its expected dynamic pseudo-regret, defined as

$$\sum_{t=1}^{T} \mathbb{E}\left[\ell_{t,a_t} - \ell_{t,u^*_t}\right] = \sum_{t=1}^{T} \left(\mu_{t,a_t} - \mu_{t,u^*_t}\right),$$

where u*_t = argmin_i µ_{t,i} is the best action at step t. For convenience, we will simply refer to it as the regret of the learner later in the paper. We will consider the following parameters characterizing different aspects of the environments:

$$\Gamma = 1 + \sum_{t=2}^{T} \mathbb{1}_{\mu_t \neq \mu_{t-1}}, \qquad V = \sum_{t=1}^{T} \|\mu_t - \mu_{t-1}\|_{\infty}, \qquad \text{and} \qquad \Lambda = \sum_{t=1}^{T} \mathbb{E}\left[\|\ell_t - \mu_t\|_2^2\right], \qquad (1)$$

where we let µ_0 be the all-zero vector. Here, Γ − 1 is the number of times the distributions switch, V measures the distance the distributions deviate, and Λ is the total statistical variance of these T distributions. We will call distributions with a small Γ switching distributions, while we will call distributions with a small V drifting distributions and call V the total drift of the distributions.
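The three parameters in (1) can be computed directly from the sequence of mean vectors and the per-step variances; a small sketch (function and variable names are illustrative):

```python
def nonstationarity_params(mus, var_sums):
    """Compute Gamma, V, Lambda of (1) from the mean vectors mu_1..mu_T
    (mu_0 is the all-zero vector) and the per-step summed variances
    E[||l_t - mu_t||_2^2]."""
    prev = [0.0] * len(mus[0])
    gamma, v = 1, 0.0
    for t, mu in enumerate(mus):
        if t >= 1 and mu != mus[t - 1]:
            gamma += 1                                   # a distribution switch
        v += max(abs(a - b) for a, b in zip(mu, prev))   # L_inf drift term
        prev = mu
    return gamma, v, sum(var_sums)
```

For example, a sequence whose mean vector changes once has Γ = 2, while V accumulates the sup-norm of every consecutive difference, including the first step against µ_0 = 0.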
Finally, we will need the following large-deviation bound, known as the empirical Bernstein inequality.

Theorem 2.1. [11] Let X = (X_1, . . . , X_n) be a vector of independent random variables taking values in [0, 1], and let Λ_X = Σ_{1≤i<j≤n} (X_i − X_j)² / (n(n − 1)). Then for any δ > 0, we have

$$\Pr\left[\frac{1}{n}\sum_{i=1}^{n}\left(\mathbb{E}[X_i] - X_i\right) > \rho(n, \Lambda_X, \delta)\right] \le \delta, \qquad \text{for} \qquad \rho(n, \Lambda, \delta) = \sqrt{\frac{2\Lambda \log\frac{2}{\delta}}{n}} + \frac{7\log\frac{2}{\delta}}{3(n-1)}.$$

3 Algorithms

We would like to characterize the achievable regret bounds for both switching and drifting distributions, in both the full-information and bandit settings. In particular, we would like to understand the interplay among the parameters Γ, V, Λ, and T defined in (1). The only known upper bound which is good enough for our purpose is that by [8] for switching distributions in the bandit setting, which is close to the lower bound in our Theorem 4.1. In subsection 3.1, we provide a bandit algorithm for drifting distributions which achieves an almost optimal regret upper bound when given the parameters V, Λ, T. In subsection 3.2, we provide a full-information algorithm which works for both switching and drifting distributions. The regret bounds it achieves are also close to optimal, but it again needs knowledge of the related parameters. To complement this, we provide a full-information algorithm in subsection 3.3 which does not need to know the parameters but achieves slightly larger regret bounds.

3.1 Parameter-Dependent Bandit Algorithm

Algorithm 1 Rerun-UCB-V
  Initialization: Set B according to (2) and δ = 1/(KT).
  for m = 1, . . . , T/B do
    for t = (m − 1)B + 1, . . . , mB do
      Choose arm a_t := argmin_i (µ̂_{t,i} − λ_{t,i}), with µ̂_{t,i} and λ_{t,i} computed according to (3).
    end for
  end for

In this subsection, we consider drifting distributions parameterized by V and Λ. Our main result is a bandit algorithm which achieves a regret of about (ΛVT)^{1/3} + √(VT).
As we aim to achieve smaller regrets for distributions with smaller statistical variances, we adopt a variant of the UCB algorithm developed by [1], called UCB-V, which takes variances into account when building its confidence intervals. Our algorithm divides the time steps into T/B intervals I_1, . . . , I_{T/B}, each having B steps,¹ with

$$B = \sqrt[3]{K^2\Lambda T / V^2} \ \text{ if } K\Lambda^2 \ge TV, \qquad \text{and} \qquad B = \sqrt{KT/V} \ \text{ otherwise.} \qquad (2)$$

For each interval, our algorithm clears all the information from previous intervals, and starts a fresh run of UCB-V. More precisely, before step t in an interval I, it maintains for each arm i its empirical mean µ̂_{t,i}, empirical variance Λ̂_{t,i}, and size of confidence interval λ_{t,i}, defined as

$$\hat\mu_{t,i} = \sum_{s\in S_{t,i}} \frac{\ell_{s,i}}{|S_{t,i}|}, \qquad \hat\Lambda_{t,i} = \sum_{r,s\in S_{t,i}} \frac{(\ell_{r,i} - \ell_{s,i})^2}{|S_{t,i}|(|S_{t,i}| - 1)}, \qquad \text{and} \qquad \lambda_{t,i} = \rho(|S_{t,i}|, \hat\Lambda_{t,i}, \delta), \qquad (3)$$

where S_{t,i} denotes the set of steps before t in I at which arm i was played, and ρ is the function given in Theorem 2.1. Here we use the convention that µ̂_{t,i} = 0 if |S_{t,i}| = 0, while Λ̂_{t,i} = 0 and λ_{t,i} = 1 if |S_{t,i}| ≤ 1. Then at step t, our algorithm selects the optimistic arm a_t := argmin_i (µ̂_{t,i} − λ_{t,i}), receives the corresponding loss, and updates the statistics. Our algorithm is summarized in Algorithm 1, and its regret is guaranteed by the following theorem, which we prove in Appendix B in the supplementary material.

Theorem 3.1. The expected regret of Algorithm 1 is at most Õ((K²ΛVT)^{1/3} + √(KVT)).

3.2 Parameter-Dependent Full-Information Algorithms

In this subsection, we provide full-information algorithms for switching and drifting distributions. In fact, they are based on an existing algorithm from [6], which is known to work in a different setting: the loss vectors are deterministic and adversarial, and the offline comparator cannot switch arms. In that setting, one of their algorithms, based on gradient descent (GD), can achieve a regret of O(√D) where D = Σ_t ∥ℓ_t − ℓ_{t−1}∥²₂, which is small when the loss vectors have small deviation.
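Returning to Algorithm 1 of the previous subsection: putting together the per-arm statistics (3) and the radius ρ of Theorem 2.1 gives a short runnable sketch. All names are illustrative, and for simplicity the block length B is passed in directly rather than set via (2):

```python
import math

def rho(n, lam, delta):
    """Empirical Bernstein radius of Theorem 2.1."""
    return math.sqrt(2 * lam * math.log(2 / delta) / n) \
        + 7 * math.log(2 / delta) / (3 * (n - 1))

def rerun_ucb_v(draw_loss, K, T, B, delta):
    """Rerun-UCB-V sketch: draw_loss(t, i) returns arm i's loss at step t."""
    plays = []
    for start in range(0, T, B):            # restart statistics in every block
        obs = [[] for _ in range(K)]        # per-arm losses within this block
        for t in range(start, min(start + B, T)):
            def index(i):                   # lower confidence index mu_hat - lambda
                s = obs[i]
                mu = sum(s) / len(s) if s else 0.0
                if len(s) <= 1:
                    return mu - 1.0         # convention: lambda = 1 if |S| <= 1
                # empirical variance over unordered pairs, as in (3)
                lam = sum((a - b) ** 2 for a in s for b in s) \
                    / (2 * len(s) * (len(s) - 1))
                return mu - rho(len(s), lam, delta)
            a = min(range(K), key=index)
            obs[a].append(draw_loss(t, a))
            plays.append(a)
    return plays
```

On deterministic losses the behaviour is easy to trace: once an arm's lower confidence index drops below the others', it keeps being played until its radius shrinks.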
Our first observation is that their algorithm can in fact work against a dynamic offline comparator which switches arms fewer than N times, given any N, with its regret becoming O(√(ND)). Our second observation is that when Λ is small, each observed loss vector ℓ_t is likely to be close to its true mean µ_t, and when V is small, ℓ_t is likely to be close to ℓ_{t−1}. These two observations make it possible for us to adapt their algorithm to our setting.

¹For simplicity of presentation, let us assume here and later in the paper that taking divisions and roots to produce blocks of time steps always yields integers. It is easy to modify our analysis to the general case without affecting the order of our regret bound.

Algorithm 2 Full-information GD-based algorithm
  Initialization: Let x_1 = x̂_1 = (1/K, . . . , 1/K)^⊤.
  for t = 1, 2, . . . , T do
    Play x̂_t = argmin_{x̂∈X} (⟨ℓ_{t−1}, x̂⟩ + (1/η_t)∥x̂ − x_t∥²₂), and then receive loss vector ℓ_t.
    Update x_{t+1} = argmin_{x∈X} (⟨ℓ_t, x⟩ + (1/η_t)∥x − x_t∥²₂).
  end for

We show the first algorithm in Algorithm 2, with the feasible set X being the probability simplex. The idea is to use ℓ_{t−1} as an estimate of ℓ_t in order to move x̂_t further in a possibly beneficial direction. Its regret is guaranteed by the following theorem, which we prove in Appendix C in the supplementary material.

Theorem 3.2. For switching distributions parameterized by Γ and Λ, the regret of Algorithm 2 with η_t = η = √(Γ/(Λ + KΓ)) is at most O(√(ΓΛ) + √(KΓ)).

Note that for switching distributions, the regret of Algorithm 2 does not depend on T, which means that it can achieve a constant regret for constant Γ and Λ. Let us remark that although using a variant based on multiplicative updates could result in a better dependency on K, an additional factor of log T would then emerge when using existing techniques for dealing with dynamic comparators. For drifting distributions, one can show that Algorithm 2 still works and has a good regret bound.
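Each round of Algorithm 2 solves two regularized linear problems over the simplex. Setting the gradient ℓ + (2/η)(x − x_t) to zero shows both reduce to a Euclidean projection of the gradient step x_t − (η/2)ℓ onto the simplex. A sketch, using a standard sort-based projection routine (names and routine are generic, not from the paper):

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = sorted(v, reverse=True)
    css = 0.0
    for j, x in enumerate(u, start=1):
        css += x
        if x - (css - 1.0) / j > 0:        # prefix condition holds up to rho
            rho_idx, rho_css = j, css
    theta = (rho_css - 1.0) / rho_idx
    return [max(x - theta, 0.0) for x in v]

def gd_step(x_t, loss_prev, loss_t, eta):
    """One round of Algorithm 2: the argmin of <l, x> + (1/eta)||x - x_t||^2
    over the simplex is the projection of x_t - (eta/2) * l."""
    x_hat = project_simplex([x - 0.5 * eta * g for x, g in zip(x_t, loss_prev)])
    x_next = project_simplex([x - 0.5 * eta * g for x, g in zip(x_t, loss_t)])
    return x_hat, x_next                    # play x_hat, carry x_next forward
```

The optimistic flavour is visible in the first projection: x̂_t is shifted using the previous loss vector before the true loss ℓ_t is revealed.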
However, a slightly better bound can be achieved, as we describe next. The idea is to divide the time steps into T/B intervals of size B, with B = (ΛT/V²)^{1/3} if ΛT > V² and B = 1 otherwise, and to re-run Algorithm 2 in each interval with an adaptive learning rate. One way to have an adaptive learning rate can be found in [9], which works well when there is only one interval. A natural way to adopt it here is to reset the learning rate at the start of each interval, but this does not lead to a good enough regret bound, as it results in some constant regret at the start of every interval. To avoid this, some careful changes are needed. Specifically, in an interval [t_1, t_2], we run Algorithm 2 with the learning rate reset as

$$\eta_t = 1 \Big/ \sqrt{4\sum_{\tau=t_1}^{t-1} \|\ell_\tau - \ell_{\tau-1}\|_2^2} \quad \text{for } t > t_1,$$

with η_{t_1} = ∞ initially for every interval. This has the benefit of incurring small or even no regret at the start of an interval when the loss vectors across the boundary have small or no deviation. The regret of this new algorithm is guaranteed by the following theorem, which we prove in Appendix D in the supplementary material.

Theorem 3.3. For drifting distributions parameterized by V and Λ, the regret of this new algorithm is at most O((VΛT)^{1/3} + √(KV)).

3.3 Parameter-Free Full-Information Algorithm

The reason that our algorithm for Theorem 3.3 needs the related parameters is to set its learning rate properly. To have a parameter-free algorithm, we would like to adjust the learning rate dynamically in a data-driven way. One way of doing this can be found in [7], which is based on the multiplicative-updates variant of the mirror-descent algorithm. It achieves a static regret of about √(Σ_t r²_{t,k}) against any expert k, where r_{t,k} = ⟨p_t, ℓ_t⟩ − ℓ_{t,k} is its instantaneous regret for playing p_t at step t. However, in order to work in our setting, we would like the regret bound to depend on ℓ_t − ℓ_{t−1}, as seen previously.
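The interval-reset learning-rate schedule above can be sketched as follows (a minimal illustration with hypothetical names; the pre-history loss ℓ_0 is taken to be the zero vector, matching Algorithm 2's initialization):

```python
import math

def reset_learning_rates(losses, block):
    """eta_t = 1 / sqrt(4 * sum_{tau=t1}^{t-1} ||l_tau - l_{tau-1}||_2^2)
    inside each interval of length `block`, with eta = infinity at every
    interval's first step; note the sum includes the cross-boundary term."""
    zero = [0.0] * len(losses[0])
    etas = []
    for t in range(len(losses)):
        t1 = (t // block) * block                 # start of current interval
        dev = 0.0
        for s in range(t1, t):
            prev = losses[s - 1] if s >= 1 else zero
            dev += sum((a - b) ** 2 for a, b in zip(losses[s], prev))
        etas.append(math.inf if dev == 0.0 else 1.0 / math.sqrt(4.0 * dev))
    return etas
```

Because the sum starts at τ = t_1 rather than t_1 + 1, an interval whose boundary losses barely move begins with a large (or infinite) learning rate, which is exactly the "no constant regret at each restart" property the text describes.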
This suggests modifying the Adapt-ML-Prod algorithm of [7] using the idea of [6], which takes ℓt−1 as an estimate of ℓt to move pt further in an optimistic direction. Recall that the algorithm of [7] maintains a separate learning rate ηt,k for each arm k at time t, and it updates the weight wt,k as well as ηt,k using the instantaneous regret rt,k. To modify the algorithm using the idea of [6], we would like to have an estimate mt,k of rt,k in order to move pt,k further using mt,k, and to update the learning rate accordingly. More precisely, at step t, we now play pt, with

pt,k = ηt−1,k w̃t−1,k / ⟨ηt−1, w̃t−1⟩,  where w̃t−1,k = wt−1,k exp(ηt−1,k mt,k),  (4)

which uses the estimate mt,k to move further from wt−1,k.

Algorithm 3 Optimistic-Adapt-ML-Prod
Initialization: Let w0,k = 1/K and ℓ0,k = 0 for every k ∈ [K].
for t = 1, 2, . . . , T do
  Play pt according to (4), and then receive loss vector ℓt.
  Update each weight wt,k according to (5) and each learning rate ηt,k according to (6).
end for

Then, after receiving the loss vector ℓt, we update each weight

wt,k = ( wt−1,k exp( ηt−1,k rt,k − η²t−1,k (rt,k − mt,k)² ) )^{ηt,k/ηt−1,k}  (5)

as well as each learning rate

ηt,k = min{ 1/4, √( ln K / ( 1 + Σ_{s∈[t]} (rs,k − ms,k)² ) ) }.  (6)

Our algorithm is summarized in Algorithm 3, and we will show that it achieves a regret of about √(Σt (rt,k − mt,k)²) against arm k. It remains to choose an appropriate estimate mt,k. One attempt is mt,k = rt−1,k, but rt,k − rt−1,k = (⟨pt, ℓt⟩ − ℓt,k) − (⟨pt−1, ℓt−1⟩ − ℓt−1,k), which does not lead to a desirable bound. The other possibility is to set mt,k = ⟨pt, ℓt−1⟩ − ℓt−1,k, for which one can show that (rt,k − mt,k)² ≤ (2∥ℓt − ℓt−1∥∞)². However, it is not clear how to compute such an mt,k, because it depends on pt,k, which in turn depends on mt,k itself. Fortunately, we can approximate it efficiently in the following way. Note that the key quantity is ⟨pt, ℓt−1⟩.
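A hedged one-round sketch of the play rule (4), the weight and learning-rate updates (5)–(6), and a bisection for the key quantity ⟨pt, ℓt−1⟩ (explained in the text that follows). The function names are ours, and the cumulative sum in (6) is threaded through explicitly:

```python
import numpy as np

def play(w, eta_prev, m):
    """Eq. (4): tilt weights by the optimistic estimates m, mix by learning rates."""
    w_tilde = w * np.exp(eta_prev * m)
    p = eta_prev * w_tilde
    return p / p.sum()

def optimistic_alpha(w, eta_prev, prev_loss, iters=40):
    """Bisection for the fixed point alpha = <p_t(alpha), l_{t-1}>, where p_t(alpha)
    is play() with m_k = alpha - l_{t-1,k}; f below is continuous and [0,1]-valued,
    so a sign-preserving bisection on f(alpha) - alpha converges to a fixed point."""
    def f(alpha):
        return play(w, eta_prev, alpha - prev_loss) @ prev_loss
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > mid else (lo, mid)
    return 0.5 * (lo + hi)

def update(w, eta_prev, p, loss, m, cum_sq):
    """Eqs. (5)-(6): regrets r_k = <p_t, l_t> - l_{t,k}, then new weights and rates."""
    r = p @ loss - loss
    cum_sq = cum_sq + (r - m) ** 2
    eta = np.minimum(0.25, np.sqrt(np.log(len(w)) / (1.0 + cum_sq)))
    w_new = (w * np.exp(eta_prev * r - eta_prev ** 2 * (r - m) ** 2)) ** (eta / eta_prev)
    return w_new, eta, cum_sq
```

With m = 0 the round reduces to plain Adapt-ML-Prod behaviour; with the fixed-point mt,k the weights are nudged in the direction suggested by ℓt−1.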
Given its value α, both w̃t−1,k and pt,k can be viewed as functions of α, defined according to (4) as w̃t−1,k(α) = wt−1,k exp(ηt−1,k(α − ℓt−1,k)) and pt,k(α) = ηt−1,k w̃t−1,k(α) / Σi ηt−1,i w̃t−1,i(α). We would then like to show the existence of an α such that ⟨pt(α), ℓt−1⟩ = α, and to find it efficiently. For this, consider the function f(α) = ⟨pt(α), ℓt−1⟩, with pt(α) defined above. It is easy to check that f is a continuous function bounded in [0, 1], which implies the existence of a fixed point α ∈ [0, 1] with f(α) = α. Using a binary search, such an α can be approximated to within error 1/T in log T iterations. As such a small error does not affect the order of the regret, we will ignore it for simplicity of presentation and assume that we indeed have ⟨pt, ℓt−1⟩, and hence mt,k = ⟨pt, ℓt−1⟩ − ℓt−1,k, without error. We then have the following regret bound (cf. [7, Corollary 4]), which we prove in Appendix E in the supplementary material.

Theorem 3.4. The static regret of Algorithm 3 w.r.t. any arm (or expert) k ∈ [K] is at most

Ô( √( Σ_{t∈[T]} (rt,k − mt,k)² ln K ) + ln K ) ≤ Ô( √( Σ_{t∈[T]} ∥ℓt − ℓt−1∥²∞ ln K ) + ln K ),

where the notation Ô(·) hides a ln ln T factor.

The regret in the theorem above is measured against a fixed arm. To achieve a dynamic regret against an offline algorithm which can switch arms, one can use a generic reduction to the so-called sleeping experts problem. In particular, we can use the idea in [7] by creating K̃ = KT sleeping experts and running our Algorithm 3 on these K̃ experts (instead of on the K arms). More precisely, each sleeping expert is indexed by some pair (s, k); it is asleep for steps before s and becomes awake for steps t ≥ s. At step t, the new algorithm calls Algorithm 3 for the distribution p̃t over the K̃ experts, and computes its own distribution pt over the K arms, with pt,k proportional to Σ_{s=1}^{t} p̃t,(s,k). Then it plays pt, receives loss vector ℓt, and feeds some modified loss vector ℓ̃t and estimate vector m̃t to Algorithm 3 for its update.
Here, we set ℓ̃t,(s,k) to the expected loss ⟨pt, ℓt⟩ if expert (s, k) is asleep and to ℓt,k otherwise, while we set m̃t,(s,k) to 0 if expert (s, k) is asleep and to mt,k = ⟨pt, ℓt−1⟩ − ℓt−1,k otherwise. This choice allows us to relate the regret of Algorithm 3 to that of the new algorithm, as can be seen in the proof of the following theorem, given in Appendix F in the supplementary material.

Theorem 3.5. The dynamic expected regret of the new algorithm is Õ(√(ΓΛ ln K) + Γ ln K) for switching distributions and Õ(∛(VΛT ln K) + √(VT ln K)) for drifting distributions.

4 Lower Bounds

We study regret lower bounds in this section. In Subsection 4.1, we show that for switching distributions with Γ − 1 ≥ 1 switches, there is an Ω(√(ΓT)) lower bound for bandit algorithms, even when there is no variance (Λ = 0) and there are constant loss gaps between the optimal and suboptimal arms. We also show a full-information lower bound, which almost matches our upper bound in Theorem 3.2. In Subsection 4.2, we show that for drifting distributions, our upper bounds in Theorem 3.1 and Theorem 3.2 are almost tight. In particular, we show that even for full-information algorithms, a large ∛T dependency in the regret turns out to be unavoidable, even for small V and Λ. This provides a sharp contrast to the upper bound of our Theorem 3.2, which shows that a constant regret is in fact achievable by a full-information algorithm for switching distributions with constant Γ and Λ. For simplicity of presentation, we will only discuss the case of K = 2 actions, as it is not hard to extend our proofs to the general case.

4.1 Switching Distributions

In contrast to the full-information setting, the existence of switches presents a lose-lose dilemma for a bandit algorithm: in order to detect any possible switch early enough, it must explore aggressively, but this has the consequence of playing suboptimal arms too often.
To fool any bandit algorithm, we will switch between two deterministic distributions, with no variance, which have mean vectors ℓ(1) = (1/2, 1)⊤ and ℓ(2) = (1/2, 0)⊤, respectively. Our result is the following.

Theorem 4.1. The worst-case expected regret of any bandit algorithm is Ω(√(ΓT)), for Γ ≥ 2.

Proof. Consider any bandit algorithm A, and partition the T steps into Γ/2 intervals, each consisting of B = 2T/Γ steps. Our goal is to make A suffer an expected regret of Ω(√B) in each interval by switching the loss vectors at most once. As mentioned before, we will only switch between the two deterministic distributions with mean vectors ℓ(1) and ℓ(2). Note that we can view these two distributions simply as two loss vectors, with ℓ(i) having arm i as the optimal arm. In what follows, we focus on one of the intervals and assume that we have chosen the distributions in all previous intervals. We would like to start the interval with the loss vector ℓ(1). Let N2 denote the expected number of steps in which A plays the suboptimal arm 2 in this interval if ℓ(1) is used for the whole interval. If N2 ≥ √B/2, we can in fact use ℓ(1) for the whole interval with no switch, which makes A suffer an expected regret of at least (1/2) · √B/2 = √B/4 in this interval. Thus, it remains to consider the case N2 < √B/2. In this case, A does not explore arm 2 often enough, and we make it pay by choosing an appropriate step at which to switch to the other loss vector ℓ(2) = (1/2, 0)⊤, which has arm 2 as the optimal one. For this, let us divide the B steps of the interval into √B blocks, each consisting of √B steps. As N2 < √B/2, there must be a block in which the expected number of steps that A plays arm 2 is at most N2/√B < 1/2. By Markov's inequality, the probability that A ever plays arm 2 in this block is less than 1/2.
This implies that when given the loss vector ℓ(1) for all the steps up to the end of this block, A never plays arm 2 in the block with probability more than 1/2. Therefore, if we make the switch to the loss vector ℓ(2) = (1/2, 0)⊤ at the beginning of the block, then with probability more than 1/2 A still never plays arm 2 and never notices the switch in this block. As arm 2 is the optimal one with respect to ℓ(2), the expected regret of A in this block is more than (1/2) · (1/2) · √B = √B/4. Now, if we choose the distributions in each interval as described above, then there are at most (Γ/2) · 2 = Γ periods of stationary distribution over the whole horizon, and the total expected regret of A can be made at least (Γ/2) · (√B/4) = (Γ/2) · √(2T/Γ)/4 = Ω(√(ΓT)), which proves the theorem.

For full-information algorithms, we have the following lower bound, which almost matches our upper bound in Theorem 3.2. We provide the proof in Appendix G in the supplementary material.

Theorem 4.2. The worst-case expected regret of any full-information algorithm is Ω(√(ΓΛ) + Γ).

4.2 Drifting Distributions

In this subsection, we show that the regret upper bounds achieved by our bandit algorithm and full-information algorithm are close to optimal by proving almost matching lower bounds. More precisely, we have the following.

Theorem 4.3. The worst-case expected regret of any full-information algorithm is Ω(∛(ΛVT) + V), while that of any bandit algorithm is Ω(∛(ΛVT) + √(VT)).

Proof. Let us first consider the full-information case. When ΛT ≤ 32KV², we immediately have from Theorem 4.2 the regret lower bound Ω(Γ) ≥ Ω(V) ≥ Ω(∛(ΛVT) + V). Thus, let us focus on the case ΛT ≥ 32KV². In this case, V ≤ O(∛(ΛVT)), so it suffices to prove a lower bound of Ω(∛(ΛVT)). Fix any full-information algorithm A; we will show the existence of a sequence of loss distributions for which A suffers such an expected regret.
Following [3], we divide the time steps into T/B intervals of length B, and we set B = ∛(ΛT/(32KV²)) ≥ 1. For each interval, we will pick some arm i as the optimal one and give it some loss distribution P, while the other arms are suboptimal and all have some loss distribution Q. We need P and Q to satisfy the following three conditions: (a) P's mean is smaller than Q's by ϵ; (b) their variances are at most σ²; and (c) their KL divergence satisfies (ln 2) KL(Q, P) ≤ ϵ²/σ², for some ϵ, σ ∈ (0, 1) to be specified later. Their existence is guaranteed by the following lemma, which we prove in Appendix H in the supplementary material.

Lemma 4.4. For any 0 ≤ σ ≤ 1/2 and 0 ≤ ϵ ≤ σ/√2, there exist distributions P and Q satisfying the three conditions above.

Let Di denote the joint distribution of K such distributions, with arm i being the optimal one; we will use the same Di for all the steps in an interval. We will show that for any interval, there is some i such that using Di in this way makes algorithm A suffer a large expected regret in the interval, conditioned on the distributions chosen for previous intervals. Before showing that, note that when we choose distributions in this way, their total variance is at most TKσ² while their total drift is at most (T/B)ϵ. To have them bounded by Λ and V respectively, we choose σ = √(Λ/(4KT)) and ϵ = VB/T, which satisfy the condition of Lemma 4.4 with our choice of B. To find the distributions, we deal with the intervals one by one. Consider any interval, and assume that the distributions for previous intervals have been chosen. Let Ni denote the number of steps A plays arm i in this interval, and let Ei[Ni] denote its expectation when Di is used for every step of the interval, conditioned on the distributions of previous intervals.
One can bound this conditional expectation in terms of a related one, denoted Eunif[Ni], in which every arm has the distribution Q for every step of the interval, again conditioned on the distributions of previous intervals. Specifically, using an almost identical argument to that in [2, proof of Theorem A.2], one can show that

Ei[Ni] ≤ Eunif[Ni] + (B/2) · √( B (2 ln 2) · KL(Q, P) ).²  (7)

According to Lemma 4.4 and our choice of parameters, we have B(2 ln 2) · KL(Q, P) ≤ 2B · (ϵ²/σ²) ≤ 1/4. Summing both sides of (7) over the arms i, and using the fact that Σi Eunif[Ni] = B, we get Σi Ei[Ni] ≤ B + BK/4, which implies the existence of some i such that Ei[Ni] ≤ B/K + B/4 ≤ (3/4)B. Therefore, if we choose this distribution Di, the conditional expected regret of algorithm A in this interval is at least ϵ(B − Ei[Ni]) ≥ ϵB/4. By choosing distributions inductively in this way, we can make A suffer a total expected regret of at least (T/B) · (ϵB/4) ≥ Ω(∛(ΛVT)). This completes the proof for the full-information case.

Next, let us consider the bandit case. From Theorem 4.1, we immediately have a lower bound of Ω(√(ΓT)) ≥ Ω(√(VT)), which implies the required bound when √(VT) ≥ ∛(ΛVT). When √(VT) ≤ ∛(ΛVT), we have V ≤ Λ²/T, which implies that V ≤ ∛(ΛVT), and we can then use the full-information bound of Ω(∛(ΛVT)) just proved. This completes the proof of the theorem.

²Note that inside the square root we use B instead of Eunif[Ni] as in [2]. This is because in their bandit setting, Ni is the number of steps at which arm i is sampled and has its information revealed to the learner, while in our full-information case, information about arm i is revealed at every step and there are at most B steps.

References

[1] Jean-Yves Audibert, Rémi Munos, and Csaba Szepesvári. Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theor. Comput. Sci., 410(19):1876–1902, 2009.

[2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire.
The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.

[3] Omar Besbes, Yonatan Gur, and Assaf J. Zeevi. Stochastic multi-armed-bandit problem with non-stationary rewards. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems (NIPS), December 2014.

[4] Omar Besbes, Yonatan Gur, and Assaf J. Zeevi. Non-stationary stochastic optimization. Operations Research, 63(5):1227–1244, 2015.

[5] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.

[6] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In The 25th Conference on Learning Theory (COLT), June 2012.

[7] Pierre Gaillard, Gilles Stoltz, and Tim van Erven. A second-order bound with excess losses. In The 27th Conference on Learning Theory (COLT), June 2014.

[8] Aurélien Garivier and Eric Moulines. On upper-confidence bound policies for switching bandit problems. In The 22nd International Conference on Algorithmic Learning Theory (ALT), October 2011.

[9] Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, and Karthik Sridharan. Online optimization: Competing with dynamic comparators. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), May 2015.

[10] Haipeng Luo and Robert E. Schapire. Achieving all with no parameters: AdaNormalHedge. In The 28th Conference on Learning Theory (COLT), July 2015.

[11] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample-variance penalization. In The 22nd Conference on Learning Theory (COLT), June 2009.
Statistical Inference for Pairwise Graphical Models Using Score Matching

Ming Yu mingyu@chicagobooth.edu
Varun Gupta varun.gupta@chicagobooth.edu
Mladen Kolar* mladen.kolar@chicagobooth.edu
University of Chicago Booth School of Business, Chicago, IL 60637

Abstract

Probabilistic graphical models have been widely used to model complex systems and aid scientific discoveries. As a result, there is a large body of literature focused on consistent model selection. However, scientists are often interested in understanding the uncertainty associated with the estimated parameters, which the current literature has not addressed thoroughly. In this paper, we propose a novel estimator of edge parameters for pairwise graphical models based on the Hyvärinen scoring rule. The Hyvärinen scoring rule is especially useful in cases where the normalizing constant cannot be obtained efficiently in closed form. We prove that the estimator is √n-consistent and asymptotically normal. This result allows us to construct confidence intervals for edge parameters, as well as hypothesis tests. We establish our results under conditions that are typically assumed in the literature for consistent estimation; however, we do not require that the estimator consistently recover the graph structure. In particular, we prove that the asymptotic distribution of the estimator is robust to model selection mistakes and uniformly valid for a large number of data-generating processes. We illustrate the validity of our estimator through extensive simulation studies.

1 Introduction

Undirected probabilistic graphical models are widely used to explore and represent dependencies between random variables. They have been used in areas ranging from computational biology to neuroscience and finance. See [7] for a recent review. An undirected probabilistic graphical model consists of an undirected graph G = (V, E), where V = {1, . . . , p} is the vertex set and E ⊂ V × V is the edge set, and a random vector X = (X1, . . .
, Xp) ∈ X^p ⊆ R^p. Each coordinate of the random vector X is associated with a vertex in V, and the graph structure encodes the conditional independence assumptions underlying the distribution of X. In particular, Xa and Xb are conditionally independent given all the other variables if and only if (a, b) ∉ E, that is, the nodes a and b are not adjacent in G. One of the fundamental problems in statistics is that of learning the structure of G from i.i.d. samples from X and quantifying the uncertainty of the estimated structure.

*This work is supported by an IBM Corporation Faculty Research Fund at the University of Chicago Booth School of Business. This work was completed in part with resources provided by the University of Chicago Research Computing Center.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

We consider a basic class of pairwise interaction graphical models with densities belonging to an exponential family P = {pθ(x) | θ ∈ Θ} with natural parameter space Θ and

log pθ(x) = Σ_{a∈V} Σ_{k∈[K]} θ(k)_a t(k)_a(xa) + Σ_{(a,b)∈E} Σ_{l∈[L]} θ(l)_ab t(l)_ab(xa, xb) − ψ(θ) + Σ_{a∈V} ha(xa),  x ∈ X ⊆ R^p.  (1)

The functions t(k)_a, t(l)_ab are sufficient statistics and ψ(θ) is the log-partition function. In this paper, the support of the densities is either X = R^p or X = R^p_+, and P is dominated by Lebesgue measure on R^p. To simplify the notation, we write log pθ(x) = θ⊤t(x) − ψ(θ) + h(x), where θ ∈ R^s and t(x) : R^p ↦ R^s with s = (p choose 2) · L + p · K. The natural parameter space has the form Θ = {θ ∈ R^s | ψ(θ) = log ∫_X exp(θ⊤t(x)) dx < ∞}. Under the model in (1), there is no edge between a and b in the corresponding conditional independence graph if and only if θ(1)_ab = · · · = θ(L)_ab = 0. The model in (1) encompasses a large number of graphical models studied in the literature (see, for example, [7, 15] and references therein).
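As a concrete instance of the density in (1), take K = L = 1 with sufficient statistics t_a(xa) = x²a and t_ab(xa, xb) = xa·xb, which yields a Gaussian-type pairwise model. A small sketch evaluating the unnormalized log-density (our illustrative choice of statistics; the log-partition function ψ(θ) is deliberately omitted, which is exactly why score matching is attractive here):

```python
import numpy as np

def pairwise_log_density_unnormalized(x, theta_node, theta_edge):
    """Unnormalized log p(x) of model (1) with K = L = 1, t_a(x_a) = x_a^2 and
    t_ab(x_a, x_b) = x_a * x_b. theta_edge is symmetric with zero diagonal; the
    0.5 factor converts the full quadratic form into a sum over edges (a, b)."""
    node_term = theta_node @ (x ** 2)
    edge_term = 0.5 * x @ theta_edge @ x
    return node_term + edge_term
```

A zero entry theta_edge[a, b] corresponds exactly to the absence of the edge (a, b) in the conditional independence graph.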
The main focus of the paper is the construction of an asymptotically normal estimator of the parameters in (1) and (asymptotic) inference for them. We illustrate a procedure for constructing valid confidence intervals that have nominal coverage, and we propose a statistical test, with nominal size, for the existence of edges in the graphical model. Our inference results are robust to model selection mistakes, which commonly occur in the ultra-high-dimensional setting. The results in this paper complement the existing literature, which is focused on consistent model selection and parameter recovery, as we review in the next section. We use the Hyvärinen scoring rule to estimate θ, as in [15]. However, rather than focusing on consistent model selection, we modify the regularized score matching procedure to construct a regular estimator that is robust to model selection mistakes, and we show how to use its asymptotic distribution for statistical inference. Compared to previous work on high-dimensional inference in graphical models [23, 2, 29, 11], this is the first work on inference in models where computing the normalizing constant is intractable.

Related work. Our work straddles two areas of statistical learning which have attracted significant research of late: model selection and estimation in high-dimensional graphical models, and high-dimensional inference. Our approach to inference for high-dimensional graphical models is based on regularized score matching. We briefly review the literature most relevant to our work and refer the reader to a recent review article for a comprehensive overview [7].

Graphical model selection: Much of the research effort on graphical model selection has been done under the assumption that the data obey the law X ∼ N(0, Σ) (Gaussian graphical models), in which case the edge set E of the graph G is encoded by the non-zero elements of the precision matrix Ω = Σ⁻¹.
More recently, [31] studied estimation of graphical models under the assumption that the node-conditional distributions belong to an exponential family (including, for example, Bernoulli, Gaussian, Poisson, and exponential distributions) via regularized likelihood (see also [13, 6, 30] and references therein). In this paper, we construct a novel √n-consistent estimator of the parameter corresponding to a particular edge in (1). As mentioned earlier, this is the first procedure that obtains a parametric rate of convergence for an edge parameter in a graphical model where computing the normalizing constant is intractable.

High-dimensional inference: Methods for constructing confidence intervals and hypothesis tests for low-dimensional parameters in high-dimensional linear and generalized linear models have been developed in [32, 4, 28, 12]. These methods construct honest, uniformly valid confidence intervals and hypothesis tests based on a first-stage ℓ1-penalized estimator. [16, 23, 5] construct √n-consistent estimators of elements of the precision matrix Ω under a Gaussian assumption. We contribute to the literature on high-dimensional inference by demonstrating how to construct estimators that are robust and uniformly valid under distributional assumptions more general than Gaussianity.

Score matching estimators: Score matching estimators were first proposed in [9, 10]. Score matching offers a computational advantage when the normalizing constant is not available in closed form, making likelihood-based approaches intractable. Despite its power, there have not been any results on inference in high-dimensional models using score matching. In [8], the authors use score matching for inference in Gaussian linear models (and hence in Gaussian graphical models) in the low-dimensional setting. In [15], the authors use ℓ1-regularized score matching to develop consistent estimators for graphical models in the high-dimensional setting.
We present the first high-dimensional inference results using score matching.

2 Score Matching

Let X be a random variable with values in X, and let P be a family of distributions over X. A scoring rule S(x, Q) is a real-valued function that quantifies the accuracy of Q ∈ P upon observing a realized value x ∈ X of X. There is a large number of scoring rules corresponding to different decision problems [20]. Given n independent realizations {xi}_{i∈[n]} of X, one finds the optimal score estimator Q̂ ∈ P that minimizes the empirical score:

Q̂ = argmin_{Q∈P} En[S(xi, Q)].  (2)

When X = R^p and P consists of twice-differentiable densities with respect to Lebesgue measure, the Hyvärinen scoring rule [9] is given by

S(x, Q) = (1/2)∥∇ log q(x)∥²₂ + Δ log q(x),  (3)

where q is the density of Q with respect to Lebesgue measure on X, ∇f(x) = {∂/(∂xj) f(x)}_{j∈[p]} denotes the gradient, and Δf(x) = Σ_{j∈[p]} ∂²/(∂x²j) f(x) the Laplacian operator on R^p. This scoring rule is convenient for learning models that are specified in an unnormalized fashion or whose normalizing constant is difficult to compute. The score matching rule is proper; that is, E_{X∼P} S(X, Q) is minimized over P at Q = P. Under suitable regularity conditions, the score matching rule induces the Fisher divergence between P, Q ∈ P, D(P, Q) = ∫ p(x) ∥∇ log q(x) − ∇ log p(x)∥²₂ dx, where p is the density of P [9]. For a parametric exponential family P = {pθ | θ ∈ Θ} with densities given in (1), minimizing (2) can be done in closed form [9, 8]. An estimator θ̂ obtained in this way can be shown to be asymptotically consistent [9]; however, in general it will not be efficient [8]. Hyvärinen [10] proposed a generalization of the score matching approach to the case of non-negative data. When X = R^p_+, the scoring rule is given by

S+(x, Q) = Σ_{a∈V} [ 2xa · ∂ log q(x)/∂xa + x²a · ∂² log q(x)/∂x²a + (1/2) x²a (∂ log q(x)/∂xa)² ].  (4)
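For intuition about (3): with an isotropic Gaussian q = N(µ, σ²I) we have ∇ log q(x) = −(x − µ)/σ² and Δ log q(x) = −p/σ², so the score can be evaluated without ever touching the normalizing constant. A minimal sketch (the Gaussian choice is ours, purely for illustration):

```python
import numpy as np

def hyvarinen_score_gaussian(x, mu, sigma2):
    """S(x, Q) = 0.5 * ||grad log q(x)||^2 + Laplacian log q(x) for q = N(mu, sigma2*I).
    The normalizing constant of q cancels out of both terms."""
    grad = -(x - mu) / sigma2
    laplacian = -x.size / sigma2
    return 0.5 * grad @ grad + laplacian
```

Propriety is easy to check empirically: averaging the score over samples from the true distribution gives a smaller value at the true parameters than at wrong ones.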
For exponential families, the non-negative score matching loss can again be obtained in closed form, and the estimator is consistent and asymptotically normal under suitable conditions [10]. In the context of probabilistic graphical models, [8] studied score matching for learning Gaussian graphical models with symmetry constraints. [15] proposed a regularized score matching procedure for learning the conditional independence graph in a high-dimensional setting by minimizing En[ℓ(xi, θ)] + λ∥θ∥₁, where the loss ℓ(xi, θ) is either S(xi, Qθ) or S+(xi, Qθ). For Gaussian models, ℓ1-regularized score matching is a simple but state-of-the-art method, which coincides with the method in [17]. Extending the work on estimation of infinite-dimensional exponential families [26], [27] study learning the structure of nonparametric probabilistic graphical models using a score matching estimator. In the next section, we present a new estimator of the components of θ in (1) that is consistent and asymptotically normal, building on [15] and [4].

3 Methodology

In this section, we propose a procedure that constructs a √n-consistent estimator of an element θab of θ. Our procedure is based on three steps, which we describe after introducing some additional notation. We start by describing the procedure for the case X = R^p. For fixed indices a, b ∈ [p], let q^ab_θ(x) := q^ab_θ(xa, xb | x−ab) be the conditional density of (Xa, Xb) given X−ab = x−ab. In particular, log q^ab_θ(x) = ⟨θ^ab, φ(x)⟩ − ψ_ab(θ, x−ab) + h^ab(x), where θ^ab ∈ R^{s0} is the part of the vector θ corresponding to {θ(k)_a, θ(k)_b}_{k∈[K]} and {θ(l)_ac, θ(l)_bc}_{l∈[L], c∈−ab}, and φ(x) = φ^ab(x) ∈ R^{s0} is the corresponding vector of sufficient statistics, with dimension s0 = 2K + 2(p − 2)L. Here ψ_ab(θ, x−ab) is the log-partition function of the conditional distribution and h^ab(x) = ha(xa) + hb(xb). Let ∇_ab f(x) = ((∂/∂xa) f(x), (∂/∂xb) f(x))⊤ ∈ R² be the gradient with respect to xa and xb, and Δ_ab f(x) = [(∂²/∂x²a) + (∂²/∂x²b)] f(x).
With this notation, we introduce the following scoring rule:

S_ab(x, θ) = (1/2)∥∇_ab log q^ab_θ(x)∥²₂ + Δ_ab log q^ab_θ(x) = (1/2) θ⊤Γ(x)θ + θ⊤g(x),  (5)

where Γ(x) = φ1(x)φ1(x)⊤ + φ2(x)φ2(x)⊤ and g(x) = φ1(x)h^ab_1(x) + φ2(x)h^ab_2(x) + Δ_ab φ(x), with φ1 = (∂/∂xa)φ, φ2 = (∂/∂xb)φ, h^ab_1 = (∂/∂xa)h^ab, and h^ab_2 = (∂/∂xb)h^ab. This scoring rule is related to the one in (3); however, rather than using the full density qθ to evaluate the parameter vector, we only consider the conditional density q^ab_θ. We will use this conditional scoring rule to create an asymptotically normal estimator of the element θab. Our motivation for using this estimator comes from the fact that the parameter θab can be identified from the conditional distribution of (Xa, Xb) | X_Mab, where Mab := {c | (a, c) ∈ E or (b, c) ∈ E} is the Markov blanket of (Xa, Xb). Furthermore, the optimization problems arising in steps 1–3 below can be solved much more efficiently, as they are of much smaller dimension. We are now ready to describe our procedure for estimating θab, which proceeds in three steps.

Step 1: We find a pilot estimator of θ^ab by solving the program

θ̂^ab = argmin_{θ∈R^{s0}} En[S_ab(xi, θ)] + λ1∥θ∥₁,  (6)

where λ1 is a tuning parameter. Let M̂1 = M(θ̂^ab) := {(c, d) | θ̂^ab_cd ≠ 0}. Since we are after an asymptotically normal estimator of θab, one may think it sufficient to find θ̃^ab = argmin{En[S_ab(xi, θ)] | M(θ) ⊆ M̂1} and appeal to the results of [21]. Unfortunately, this is not the case. Since θ̃ is obtained via a model selection procedure, it is irregular and its asymptotic distribution cannot be estimated [14, 22]. Therefore, we proceed to create a regular estimator of θab in steps 2 and 3. The idea is to create an estimator θ̃^ab that is insensitive to first-order perturbations of its other components, which we treat as nuisance components.
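Because S_ab(x, θ) in (5) is quadratic in θ, the unpenalized empirical score En[S_ab(xi, θ)] is minimized in closed form by solving En[Γ(xi)] θ = −En[g(xi)]; this is the kind of refit performed (on the selected coordinates) in step 3 below. A minimal numpy sketch; the function name and the optional ridge term, added for numerical stability, are our own:

```python
import numpy as np

def score_matching_refit(Gammas, gs, ridge=0.0):
    """Minimize (1/n) * sum_i [ 0.5 * theta' Gamma(x_i) theta + theta' g(x_i) ]:
    the first-order condition gives theta = -(E_n[Gamma] + ridge*I)^{-1} E_n[g]."""
    G = np.mean(Gammas, axis=0)
    g = np.mean(gs, axis=0)
    return -np.linalg.solve(G + ridge * np.eye(g.size), g)
```

If the per-sample vectors satisfy g(xi) = −Γ(xi)θ* exactly, the refit recovers θ* exactly, which is a convenient sanity check for an implementation.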
The idea of creating an estimator that is robust to perturbations of the nuisance has recently been used in [4]; however, the approach goes back to the work of [19].

Step 2: Let γ̂^ab be a minimizer of

(1/2) En[ (φ1,ab(xi) − φ1,−ab(xi)⊤γ)² + (φ2,ab(xi) − φ2,−ab(xi)⊤γ)² ] + λ2∥γ∥₁.  (7)

The vector (1, −γ̂^{ab,⊤})⊤ approximately computes a row of the inverse of the Hessian in (6).

Step 3: Let M̃ = {(a, b)} ∪ M̂1 ∪ M(γ̂^ab). We obtain our estimator as a solution of the program

θ̃^ab = argmin En[S_ab(xi, θ)]  s.t. M(θ) ⊆ M̃.  (8)

The motivation for this procedure will become clear from the proof of Theorem 1, given in the next section.

Extension to non-negative data. For non-negative data, the procedure is slightly different. Instead of (5), as shown in [15], we define a different scoring rule S^ab_+(x, θ) = (1/2) θ⊤Γ+(x)θ + θ⊤g+(x), with Γ+(x) = x²a · φ1(x)φ1(x)⊤ + x²b · φ2(x)φ2(x)⊤ and g+(x) = φ1(x)h^ab_1(x) + φ2(x)h^ab_2(x) + x²a φ11(x) + x²b φ22(x) + 2xa φ1(x) + 2xb φ2(x), where φ11 = (∂²/∂x²a)φ and φ22 = (∂²/∂x²b)φ. Defining φ̃1 = xa φ1 and φ̃2 = xb φ2, we have Γ+(x) = φ̃1(x)φ̃1(x)⊤ + φ̃2(x)φ̃2(x)⊤, which is of the same form as (5) with φ̃1 and φ̃2 replacing φ1 and φ2, respectively. Thus our three-step procedure for non-negative data proceeds as before.

4 Asymptotic Normality of the Estimator

In this section, we outline the main theoretical properties of our procedure. We start by providing high-level conditions that allow us to establish the properties of each step.

Assumption M. We are given n i.i.d. samples {xi}_{i∈[n]} from pθ* of the form in (1). The parameter vector θ* is sparse, with |M(θ^{ab,*})| ≪ n. Let

γ^{ab,*} = argmin E[ (φ1,ab(xi) − φ1,−ab(xi)⊤γ)² + (φ2,ab(xi) − φ2,−ab(xi)⊤γ)² ]  (9)

and η1i = φ1,ab(xi) − φ1,−ab(xi)⊤γ^{ab,*} and η2i = φ2,ab(xi) − φ2,−ab(xi)⊤γ^{ab,*} for i ∈ [n]. The vector γ^{ab,*} is sparse, with |M(γ^{ab,*})| ≪ n. Let m = |M(θ^{ab,*})| ∨ |M(γ^{ab,*})|.
Assumption M supposes that the parameter to be estimated is sparse, which makes estimation in the high-dimensional setting feasible. An extension to approximately sparse parameters is possible but technical. One benefit of using the conditional score to learn the parameters of the model is that the required sample size depends only on the size of M(θ^{ab,*}) and not on the sparsity of the whole vector θ*, as in [15]. The second part of the assumption states that the inverse of the population Hessian is approximately sparse, which is reasonable since the Markov blanket of (Xa, Xb) is small under the sparsity assumption on θ^{ab,*}. Our next condition assumes that the Hessians in (6) and (7) are well conditioned. Let

φ−(s, A) = inf{ δ⊤Aδ/∥δ∥²₂ : 1 ≤ ∥δ∥₀ ≤ s }  and  φ+(s, A) = sup{ δ⊤Aδ/∥δ∥²₂ : 1 ≤ ∥δ∥₀ ≤ s }

denote the minimal and maximal s-sparse eigenvalues of a semi-definite matrix A, respectively.

Assumption SE. The event E_SE = { φmin ≤ φ−(m · log n, En[Γ(xi)]) ≤ φ+(m · log n, En[Γ(xi)]) ≤ φmax } holds with probability 1 − δ_SE, where 0 < φmin ≤ φmax < ∞.

We choose to impose the sparse eigenvalue condition directly on En[Γ(xi)] rather than on the population quantity E[Γ(xi)]. It is well known that condition SE holds for a large number of models; see, for example, [24] and specifically [31] for exponential family graphical models. Let rjθ = ∥θ̂^ab − θ^{ab,*}∥j and rjγ = ∥γ̂^ab − γ^{ab,*}∥j, for j ∈ {1, 2}, be the rates of estimation in steps 1 and 2. Under assumption SE, on the event Eθ = { ∥En[Γ(xi)θ^{ab,*} + g(xi)]∥∞ ≤ λ1/2 } we have r1θ ≤ c1·mλ/φmin and r2θ ≤ c2·√m·λ/φmin. Similarly, on the event Eγ = { ∥En[η1i φ1,−ab(xi) + η2i φ2,−ab(xi)]∥∞ ≤ λ2/2 } we have r1γ ≤ c1·mλ/φmin and r2γ ≤ c2·√m·λ/φmin, using results of [18]. Again, one needs to verify that the two events hold with high probability for the model at hand; however, this is a routine calculation under suitable tail assumptions. See, for example, Lemma 9 in [31].
The following result establishes a Bahadur representation for $\tilde\theta^{ab}$.

Theorem 1. Suppose that assumptions M and SE hold. Define $w^*$ with $w^*_{ab} = 1$ and $w^*_{-ab} = -\gamma^{ab,*}$, where $\gamma^{ab,*}$ is given in assumption M. On the event $E_\gamma \cap E_\theta$, we have that
$$\sqrt{n}\,\big(\tilde\theta^{ab} - \theta^*_{ab}\big) = -\hat\sigma_n^{-1}\cdot \sqrt{n}\,\mathbb{E}_n\big[w^{*,T}\big(\Gamma(x_i)\theta^{ab,*} + g(x_i)\big)\big] + O\big(\phi_{\max}^2\phi_{\min}^{-4}\cdot \sqrt{n}\,\lambda^2 m\big), \quad (10)$$
where $\lambda = \lambda_1 \vee \lambda_2$ and $\hat\sigma_n = \mathbb{E}_n[\eta_{1i}\varphi_{1,ab}(x_i) + \eta_{2i}\varphi_{2,ab}(x_i)]$.

Theorem 1 is deterministic in nature. It establishes a representation that holds on the event $E_\gamma \cap E_\theta \cap E_{SE}$, which in many cases holds with overwhelming probability. We will show that under suitable conditions the first term converges to a normal distribution. The following is a regularity condition needed for asymptotic normality even in a low-dimensional setting [8].

Assumption R. $\mathbb{E}_{q_{ab}}\big[\|\Gamma(X_a, X_b, x_{-ab})\theta^{ab,*}\|^2\big]$ and $\mathbb{E}_{q_{ab}}\big[\|g(X_a, X_b, x_{-ab})\|^2\big]$ are finite for all values of $x_{-ab}$ in the domain.

Theorem 1 and Lemma 9 together give the following corollary.

Corollary 2. Suppose that the conditions of Theorem 1 hold. In addition, suppose that assumption R holds, $(m\log p)^2/n = o(1)$ and $P(E_\gamma \cap E_\theta \cap E_{SE}) \to 1$. Then
$$\sqrt{n}\,\big(\tilde\theta^{ab} - \theta^*_{ab}\big) \;\to_D\; N(0, V),$$
where $V = (\mathbb{E}[\hat\sigma_n])^{-2}\cdot \mathrm{Var}\big(w^{*,T}(\Gamma(x_i)\theta^{ab} + g(x_i))\big)$ and $\hat\sigma_n$ is as in Theorem 1.

We see that the variance $V$ depends on the true $\theta^{ab}$ and $\gamma^{ab}$, which are unknown. In practice, we estimate $V$ using the following consistent estimator $\widehat V$:
$$\widehat V = e_{ab}^T\big(\mathbb{E}_n[\Gamma(x_i)]_{\widetilde M}\big)^{-1}\Big(\mathbb{E}_n\big[\big(\Gamma(x_i)\tilde\theta^{ab} + g(x_i)\big)_{\widetilde M}\big(\Gamma(x_i)\tilde\theta^{ab} + g(x_i)\big)^T_{\widetilde M}\big]\Big)\big(\mathbb{E}_n[\Gamma(x_i)]_{\widetilde M}\big)^{-1} e_{ab},$$
where $e_{ab}$ is the canonical vector with a 1 in the position of element $ab$. Using this estimate, we can construct a confidence interval with asymptotically nominal coverage. In particular,
$$\lim_{n\to\infty}\ \sup_{\theta^*\in\Theta}\ P_{\theta^*}\Big(\theta^*_{ab} \in \tilde\theta^{ab} \pm z_{\alpha/2}\cdot\sqrt{\widehat V/n}\Big) = 1 - \alpha + o(1).$$

In the next section, we outline the proof of Theorem 1. Proofs of other technical results are relegated to the appendix.

4.1 Proof of Theorem 1

We first introduce some auxiliary estimates.
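Once $\tilde\theta^{ab}$ and $\widehat V$ are computed, the interval $\tilde\theta^{ab} \pm z_{\alpha/2}\sqrt{\widehat V/n}$ is a one-liner. A minimal sketch using only the Python standard library; variable names are ours:

```python
from statistics import NormalDist

def confidence_interval(theta_tilde, v_hat, n, alpha=0.05):
    """Asymptotic (1 - alpha) confidence interval: theta +/- z_{alpha/2} * sqrt(V/n)."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # z_{alpha/2}, about 1.96 for alpha = 0.05
    half_width = z * (v_hat / n) ** 0.5
    return theta_tilde - half_width, theta_tilde + half_width

# e.g. an estimate of 0.5 with estimated variance 2.0 from n = 300 samples
lo, hi = confidence_interval(0.5, v_hat=2.0, n=300)
```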
Let $\tilde\gamma^{ab}$ be a minimizer of the following constrained problem:
$$\min\ \mathbb{E}_n\big[(\varphi_{1,ab}(x_i) - \varphi_{1,-ab}(x_i)^T\gamma)^2 + (\varphi_{2,ab}(x_i) - \varphi_{2,-ab}(x_i)^T\gamma)^2\big] \quad\text{s.t.}\quad M(\gamma) \subseteq \widetilde M, \quad (11)$$
where $\widetilde M$ is defined in step 3 of the procedure. Essentially, $\tilde\gamma^{ab}$ is the refitted estimator from step 2, constrained to have support on $\widetilde M$. Let $\tilde w \in \mathbb{R}^{s_0}$ with $\tilde w_{ab} = 1$, $\tilde w_{\widetilde M} = -\tilde\gamma_{\widetilde M}$, and zero elsewhere.

The solution $\tilde\theta^{ab}$ satisfies the first-order optimality condition $\big(\mathbb{E}_n[\Gamma(x_i)]\tilde\theta^{ab} + \mathbb{E}_n[g(x_i)]\big)_{\widetilde M} = 0$. Multiplying by $\tilde w$, it follows that
$$\tilde w^T\big(\mathbb{E}_n[\Gamma(x_i)]\tilde\theta^{ab} + \mathbb{E}_n[g(x_i)]\big) = (\tilde w - w^*)^T\mathbb{E}_n[\Gamma(x_i)]\big(\tilde\theta^{ab} - \theta^{ab,*}\big) + (\tilde w - w^*)^T\mathbb{E}_n\big[\Gamma(x_i)\theta^{ab,*} + g(x_i)\big]$$
$$\qquad + w^{*,T}\mathbb{E}_n[\Gamma(x_i)]\big(\tilde\theta^{ab} - \theta^{ab,*}\big) + w^{*,T}\mathbb{E}_n\big[\Gamma(x_i)\theta^{ab,*} + g(x_i)\big] \;=:\; L_1 + L_2 + L_3 + L_4 = 0. \quad (12)$$
From Lemma 6 and Lemma 7, we have that $|L_1 + L_2| \le C\cdot\phi_{\max}^2\phi_{\min}^{-4}\cdot\lambda^2 m$. Using Lemma 8, the term $L_3$ can be written as
$$\mathbb{E}_n\big[\eta_{1i}\varphi_{1,ab}(x_i) + \eta_{2i}\varphi_{2,ab}(x_i)\big]\big(\tilde\theta^{ab}_{ab} - \theta^{ab,*}_{ab}\big) + O\big(\phi_{\max}^{1/2}\phi_{\min}^{-2}\cdot\lambda^2 m\big).$$
Putting all the pieces together completes the proof.

5 Synthetic Datasets

In this section we illustrate the finite sample properties of our inference procedure on data simulated from three different exponential family distributions. The first two examples involve Gaussian node-conditional distributions, for which we use regularized score matching. For the third setting, where the node-conditional distributions follow an exponential distribution, we use the regularized non-negative score matching procedure. In each example, we report the mean coverage rate of 95% confidence intervals for several coefficients, averaged over 500 independent simulation runs.

Gaussian Graphical Model. We first consider the simplest case of a Gaussian graphical model. The data is generated according to $X \sim N(0, \Sigma)$. We denote the precision matrix (the inverse of the covariance matrix) by $\Omega = \Sigma^{-1} = (w_{ab})$. For the experiment, we set the diagonal entries of $\Omega$ as $w_{jj} = 1$, and we set the coefficients of the 4-nearest-neighbor lattice graph according to $w_{j,j-1} = w_{j-1,j} = 0.5$ and $w_{j,j-2} = w_{j-2,j} = 0.3$.
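A sketch of this simulation design (ours, not the authors' code): build the banded lattice precision matrix and draw a Gaussian sample from its inverse. The sample size $n = 300$ below matches the experiment.

```python
import numpy as np

def lattice_precision(p):
    """Precision matrix Omega of the 4-nearest-neighbor lattice graph:
    unit diagonal, 0.5 on the first off-diagonals, 0.3 on the second."""
    omega = np.eye(p)
    for j in range(1, p):
        omega[j, j - 1] = omega[j - 1, j] = 0.5
    for j in range(2, p):
        omega[j, j - 2] = omega[j - 2, j] = 0.3
    return omega

p, n = 50, 300
omega = lattice_precision(p)
assert np.linalg.eigvalsh(omega).min() > 0        # positive definite => valid GGM
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(omega), size=n)
```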
We set the sample size to $n = 300$. Table 1 shows the empirical coverage rate for different choices of the number of nodes $p$ for four chosen coefficients. As is evident, our inference procedure performs remarkably well for the Gaussian graphical model studied.

Normal Conditionals. Our second synthetic dataset is sampled from the following exponential family distribution:
$$q(x \mid B, b, b^{(2)}) \propto \exp\Big\{\sum_{j\ne k}\beta_{jk}x_j^2x_k^2 + \sum_{j=1}^p\beta^{(2)}_j x_j^2 + \sum_{j=1}^p\beta_j x_j\Big\},$$
where $b = (\beta_1,\ldots,\beta_p)$ and $b^{(2)} = (\beta^{(2)}_1,\ldots,\beta^{(2)}_p)$ are $p$-dimensional vectors, and $B = \{\beta_{jk}\}$ is a symmetric interaction matrix with diagonal entries set to 0. The above distribution is a special case of a class of exponential family distributions with normal conditionals, whose densities need not be unimodal [1]. This family is intriguing from the perspective of graphical modeling as, in contrast to the Gaussian case, conditional dependence may also express itself in the variances.

Table 1: Empirical Coverage for Gaussian Graphical Model

            w_{1,2}   w_{1,3}   w_{1,4}   w_{1,10}
p = 50      95.4%     92.4%     93.8%     93.2%
p = 200     94.6%     92.4%     92.6%     94.0%
p = 400     94.6%     94.8%     92.6%     93.8%

Table 2: Empirical Coverage for Normal Conditionals

            β_{1,2}   β_{1,3}   β_{1,4}   β_{1,10}
p = 100     93.2%     93.4%     94.6%     95.0%
p = 300     93.2%     93.0%     92.6%     93.0%

Table 3: Empirical Coverage for Exponential Graphical Model

            θ_{1,2}   θ_{1,3}   θ_{1,4}   θ_{1,10}
p = 100     92.0%     90.0%     90.0%     92.4%
p = 300     92.6%     92.0%     92.2%     92.4%

For our experiment we set $\beta_j = 0.4$ and $\beta^{(2)}_j = -2$, and we use a 4-nearest-neighbor lattice dependence graph with interaction matrix $\beta_{j,j-1} = \beta_{j-1,j} = -0.2$ and $\beta_{j,j-2} = \beta_{j-2,j} = -0.2$. Since the univariate conditional distributions are all Gaussian, we generate the data by Gibbs sampling. The first 500 samples were discarded as a 'burn-in' step, and of the remaining samples, we keep one in three. We set the number of samples to $n = 500$. Table 2 shows the empirical coverage rate for $p = 100$ and $p = 300$ nodes.
Again, we see that our inference algorithm behaves well on the above Normal Conditionals model.

Exponential Graphical Model. Our final synthetic example illustrates non-negative score matching for the Exponential Graphical Model. Here the node-conditional distributions obey an exponential distribution, and therefore the variables take only non-negative values. Such exponential distributions are typically used for data describing inter-arrival times between events, among other applications. The density function is given by
$$q(x \mid \theta) \propto \exp\Big\{-\sum_{j=1}^p\theta_j X_j - \sum_{j\ne k}\theta_{jk}X_jX_k\Big\}.$$
To ensure that the distribution is valid and normalizable, we require $\theta_j > 0$ and $\theta_{jk} \ge 0$. Therefore, we can only model negative dependencies via the Exponential graphical model. For the experiment we choose $\theta_j = 2$ and a 2-nearest-neighbor dependence graph with $\theta_{j,j-1} = \theta_{j-1,j} = 0.3$. We set $n = 1000$ and again use Gibbs sampling to generate the data. The empirical coverage rates and histograms of the estimates of four selected coefficients are presented in Table 3 and Figure 1 for $p = 100$ and $p = 300$, respectively.

We should point out that, in general, non-negative score matching is harder than regular score matching. For example, as shown in [15], to recover the structure of a regular Gaussian distribution with high probability, a sample size of about $O(m^2\log p)$ suffices, while to recover the structure of a non-negative Gaussian distribution, we need $O(m^2(\log p)^8)$ samples, which is significantly larger.

Figure 1: Histograms of the estimates of $\theta_{1,2}$, $\theta_{1,3}$, $\theta_{1,4}$ and $\theta_{1,10}$; the first row is for $p = 100$ and the second row is for $p = 300$.
Therefore, we expect that confidence intervals for non-negative score matching require more samples to give accurate inference. We can see this in Table 3, where the empirical coverage rate tends to be about 92% rather than the nominal 95%, which is still impressive for the not-so-large sample size. The histograms in Figure 1 show that the fit is quite good, but better estimation, and hence better coverage, would require more samples.

6 Protein Signaling Dataset

In this section we apply our algorithm to a protein signaling flow cytometry dataset. The dataset records the presence of $p = 11$ proteins in $n = 7466$ cells. It was first analyzed using Bayesian networks in [25], where a directed acyclic graph was fit to the data, while [31] fit their proposed M-estimators for exponential and Gaussian graphical models to the dataset. Figure 2 shows the network structure obtained after applying our method to the data using an Exponential Graphical Model. Since the data is non-negative and skewed, it could also be analyzed after a log transformation, as was done in [31] for fitting a Gaussian graphical model; we instead learn the structure directly from the data without such a transformation. To infer the network structure, we calculate the p-value for each pair of nodes, and keep the edges with p-values smaller than 0.01. Estimated negative conditional dependencies are shown via red edges in the figure. Recall that the exponential graphical model restricts the edge weights to be non-negative, hence only negative dependencies can be estimated. From the figure we see that PKA is a major protein inhibitor in cell signaling networks. This result is consistent with the estimated graph structure in [31], as well as with the Bayesian network of [25]. In addition, we find a significant dependency between PKC and PIP3.
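The edge-selection rule used here (keep a pair when its two-sided p-value is below 0.01) follows directly from the asymptotic normality of the estimator. An illustrative helper, with names of our own choosing:

```python
from statistics import NormalDist

def edge_pvalue(theta_tilde, v_hat, n):
    """Two-sided p-value for H0: theta_ab = 0, using the z-statistic
    theta_tilde / sqrt(V_hat / n) from the asymptotic normal limit."""
    z = theta_tilde / (v_hat / n) ** 0.5
    return 2.0 * (1.0 - NormalDist().cdf(abs(z)))

def keep_edge(theta_tilde, v_hat, n, threshold=0.01):
    # retain the edge (a, b) when the null of no conditional dependence is rejected
    return edge_pvalue(theta_tilde, v_hat, n) < threshold
```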
Figure 2: Estimated structure of the protein signaling dataset (nodes: Raf, Mek, Plcg, PIP2, PIP3, Erk, Akt, PKA, PKC, P38, Jnk).

7 Conclusion

Driven by applications in biology and social networks, there has been a surge of statistical learning models and methods for networks with a large number of nodes. Graphical models provide a very flexible modeling framework for such networks, leading to much work on estimation and inference algorithms for Gaussian graphical models, and more generally for graphical models with node-conditional densities lying in an exponential family, in the high-dimensional setting. Most of this work is based on regularized likelihood loss minimization, which has the disadvantage of being computationally intractable when the normalization constant (partition function) of the conditional densities is not available in closed form. Score matching estimators provide a way around this issue, but so far there has been no work providing inference guarantees for score matching based estimators for high-dimensional graphical models. In this paper we fill this gap for the case where score matching is used to estimate the parameter corresponding to a single edge at a time. An interesting future extension would be to perform inference on the entire model instead of one edge at a time as in the current paper. Another extension would be to extend our results to discrete-valued data.

References

[1] B. C. Arnold, E. Castillo, and J. M. Sarabia. Conditional specification of statistical models. Springer Series in Statistics. Springer-Verlag, New York, 1999. ISBN 0-387-98761-4.
[2] R. F. Barber and M. Kolar. ROCKET: Robust confidence intervals via Kendall's tau for transelliptical graphical models. ArXiv e-prints, arXiv:1502.07641, Feb. 2015.
[3] A. Belloni and V. Chernozhukov. Least squares after model selection in high-dimensional sparse models. Bernoulli, 19(2):521–547, May 2013.
[4] A. Belloni, V. Chernozhukov, and C. B. Hansen.
Inference on treatment effects after selection amongst high-dimensional controls. Rev. Econ. Stud., 81(2):608–650, Nov 2013. [5] M. Chen, Z. Ren, H. Zhao, and H. H. Zhou. Asymptotically normal and efficient estimation of covariateadjusted gaussian graphical model. Journal of the American Statistical Association, 0(ja):00–00, 2015. [6] S. Chen, D. M. Witten, and A. Shojaie. Selection and estimation for mixed graphical models. ArXiv e-prints, arXiv:1311.0085, Nov. 2013. [7] M. Drton and M. H. Maathuis. Structure learning in graphical modeling. To appear in Annual Review of Statistics and Its Application, 3, 2016. [8] P. G. M. Forbes and S. L. Lauritzen. Linear estimating equations for exponential families with application to Gaussian linear concentration models. Linear Algebra Appl., 473:261–283, 2015. [9] A. Hyvärinen. Estimation of non-normalized statistical models by score matching. J. Mach. Learn. Res., 6: 695–709, 2005. [10] A. Hyvärinen. Some extensions of score matching. Comput. Stat. Data Anal., 51(5):2499–2512, 2007. [11] J. Jankova and S. A. van de Geer. Confidence intervals for high-dimensional inverse covariance estimation. ArXiv e-prints, arXiv:1403.6752, Mar. 2014. [12] A. Javanmard and A. Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. J. Mach. Learn. Res., 15(Oct):2869–2909, 2014. [13] J. D. Lee and T. J. Hastie. Learning the structure of mixed graphical models. J. Comput. Graph. Statist., 24(1):230–253, 2015. [14] H. Leeb and B. M. Pötscher. Can one estimate the unconditional distribution of post-model-selection estimators? Econ. Theory, 24(02):338–376, Nov 2007. [15] L. Lin, M. Drton, and A. Shojaie. Estimation of high-dimensional graphical models using regularized score matching. ArXiv e-prints, arXiv:1507.00433, July 2015. [16] W. Liu. Gaussian graphical model estimation with false discovery rate control. Ann. Stat., 41(6):2948–2978, 2013. [17] W. Liu and X. Luo. 
Fast and adaptive sparse precision matrix estimation in high dimensions. J. Multivar. Anal., 135:153–162, 2015. [18] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. Stat. Sci., 27(4):538–557, 2012. [19] J. Neyman. Optimal asymptotic tests of composite statistical hypotheses. Probability and statistics, 57: 213, 1959. [20] M. Parry, A. P. Dawid, and S. L. Lauritzen. Proper local scoring rules. Ann. Stat., 40(1):561–592, Feb 2012. [21] S. L. Portnoy. Asymptotic behavior of likelihood methods for exponential families when the number of parameters tends to infinity. Ann. Stat., 16(1):356–366, 1988. [22] B. M. Pötscher. Confidence sets based on sparse estimators are necessarily large. Sankhy¯a, 71(1, Ser. A): 1–18, 2009. [23] Z. Ren, T. Sun, C.-H. Zhang, and H. H. Zhou. Asymptotic normality and optimalities in estimation of large Gaussian graphical models. Ann. Stat., 43(3):991–1026, 2015. [24] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. 2011. [25] K. Sachs, O. Perez, D. Pe’er, D. A. Lauffenburger, and G. P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523–529, 2005. [26] B. Sriperumbudur, K. Fukumizu, A. Gretton, and A. Hyvärinen. Density estimation in infinite dimensional exponential families. ArXiv e-prints, arXiv:1312.3516, Dec. 2013. [27] S. Sun, M. Kolar, and J. Xu. Learning structured densities via infinite dimensional exponential families. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2287–2295. Curran Associates, Inc., 2015. [28] S. A. van de Geer, P. Bühlmann, Y. Ritov, and R. Dezeure. On asymptotically optimal confidence regions and tests for high-dimensional models. Ann. Stat., 42(3):1166–1202, Jun 2014. [29] J. Wang and M. Kolar. 
Inference for high-dimensional exponential family graphical models. In A. Gretton and C. C. Robert, editors, Proc. of AISTATS, volume 51, pages 751–760, 2016.
[30] E. Yang, Y. Baker, P. Ravikumar, G. I. Allen, and Z. Liu. Mixed graphical models via exponential families. In Proc. 17th Int. Conf. Artif. Intel. Stat., pages 1042–1050, 2014.
[31] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. On graphical models via univariate exponential family distributions. J. Mach. Learn. Res., 16:3813–3847, 2015.
[32] C.-H. Zhang and S. S. Zhang. Confidence intervals for low dimensional parameters in high dimensional linear models. J. R. Stat. Soc. B, 76(1):217–242, Jul 2013.
Learning Structured Sparsity in Deep Neural Networks

Wei Wen, University of Pittsburgh, wew57@pitt.edu
Chunpeng Wu, University of Pittsburgh, chw127@pitt.edu
Yandan Wang, University of Pittsburgh, yaw46@pitt.edu
Yiran Chen, University of Pittsburgh, yic52@pitt.edu
Hai Li, University of Pittsburgh, hal66@pitt.edu

Abstract

High demand for computation resources severely hinders the deployment of large-scale Deep Neural Networks (DNN) in resource-constrained devices. In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the structures (i.e., filters, channels, filter shapes, and layer depth) of DNNs. SSL can: (1) learn a compact structure from a bigger DNN to reduce computation cost; (2) obtain a hardware-friendly structured sparsity of the DNN to efficiently accelerate the DNN's evaluation. Experimental results show that SSL achieves on average 5.1× and 3.1× speedups of convolutional layer computation of AlexNet against CPU and GPU, respectively, with off-the-shelf libraries. These speedups are about twice the speedups of non-structured sparsity; (3) regularize the DNN structure to improve classification accuracy. The results show that for CIFAR-10, regularization on layer depth reduces a 20-layer Deep Residual Network (ResNet) to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still higher than that of the original ResNet with 32 layers. For AlexNet, SSL reduces the error by ∼1%.

1 Introduction

Deep neural networks (DNN), especially deep Convolutional Neural Networks (CNN), have made remarkable success in visual tasks [1][2][3][4][5] by leveraging large-scale networks learning from a huge volume of data. Deployment of such big models, however, is computation-intensive. To reduce computation, many studies have been performed to compress the scale of DNNs, including sparsity regularization [6], connection pruning [7][8] and low rank approximation [9][10][11][12][13].
Sparsity regularization and connection pruning, however, often produce non-structured random connectivity and thus irregular memory access that adversely impacts practical acceleration on hardware platforms. Figure 1 depicts the practical layer-wise speedup of AlexNet, which is non-structurally sparsified by ℓ1-norm regularization. Compared to the original model, the accuracy loss of the sparsified model is controlled within 2%. Because of the poor data locality associated with the scattered weight distribution, the achieved speedups are either very limited or negative even when the actual sparsity is high, say, >95%. We define sparsity as the ratio of zeros in this paper.

In recently proposed low rank approximation approaches, the DNN is trained first, and then each trained weight tensor is decomposed and approximated by a product of smaller factors. Finally, fine-tuning is performed to restore the model accuracy. Low rank approximation is able to achieve practical speedups because it coordinates model parameters in dense matrices and avoids the locality problem of non-structured sparsity regularization. However, low rank approximation can only obtain the compact structure within each layer, and the structures of the layers are fixed during fine-tuning, so that costly reiterations of decomposing and fine-tuning are required to find an optimal weight approximation for performance speedup and accuracy retention.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Evaluation speedups of AlexNet on GPU platforms (Quadro K600, Tesla K40c, GTX Titan) and the sparsity, for conv1 through conv5. conv1 refers to convolutional layer 1, and so forth. The baseline is profiled by GEMM of cuBLAS. The sparse matrices are stored in the Compressed Sparse Row (CSR) format and accelerated by cuSPARSE.
Inspired by the facts that (1) there is redundancy across filters and channels [11]; (2) the shapes of filters are usually fixed as cuboids, but enabling arbitrary shapes can potentially eliminate unnecessary computation imposed by this fixation; and (3) the depth of the network is critical for classification, but deeper layers cannot always guarantee a lower error because of the exploding gradients and degradation problem [5], we propose the Structured Sparsity Learning (SSL) method to directly learn a compressed structure of deep CNNs by group Lasso regularization during training. SSL is a generic regularization that adaptively adjusts multiple structures in a DNN, including the structures of filters, channels, and filter shapes within each layer, and the structure of depth beyond the layers. SSL combines structure regularization (on the DNN, for classification accuracy) with locality optimization (on memory access, for computation efficiency), offering not only well-regularized big models with improved accuracy but also greatly accelerated computation (e.g., 5.1× on CPU and 3.1× on GPU for AlexNet). Our source code can be found at https://github.com/wenwei202/caffe/tree/scnn.

2 Related works

Connection pruning and weight sparsifying. Han et al. [7][8] reduced the parameters of AlexNet and VGG-16 using connection pruning. Since most of the reduction is achieved on fully-connected layers, no practical speedups of convolutional layers are observed, for the same issue as shown in Figure 1. However, convolution is more costly, and many new DNNs use fewer fully-connected layers (e.g., only 3.99% of the parameters of ResNet-152 [5] are in fully-connected layers), so compression and acceleration of convolutional layers become essential. Liu et al. [6] achieved >90% sparsity of convolutional layers in AlexNet with 2% accuracy loss, and bypassed the issue of Figure 1 by hardcoding the sparse weights into the program. In this work, we also focus on convolutional layers.
Compared to the previous techniques, our method coordinates sparse weights in adjacent memory space and achieves higher speedups. Note that hardware and program optimizations based on our method can further boost system performance, which is not covered in this paper due to the space limit.

Low rank approximation. Denil et al. [9] predicted 95% of the parameters in a DNN by exploiting the redundancy across filters and channels. Inspired by this, Jaderberg et al. [11] achieved a 4.5× speedup on CPUs for scene text character recognition, and Denton et al. [10] achieved 2× speedups for the first two layers in a larger DNN. Both works used Low Rank Approximation (LRA) with ∼1% accuracy drop. [13][12] improved and extended LRA to larger DNNs. However, the network structure compressed by LRA is fixed; reiterations of decomposing, training/fine-tuning, and cross-validating are still needed to find an optimal structure for the accuracy and speed trade-off. As the number of hyper-parameters in the LRA method increases linearly with the layer depth [10][13], the search space increases linearly or even exponentially. Compared to LRA, our contributions are: (1) SSL can dynamically optimize the compactness of DNNs with only one hyper-parameter and no reiterations; (2) besides the redundancy within the layers, SSL also exploits the necessity of deep layers and reduces them; (3) DNN filters regularized by SSL have lower-rank approximations, so SSL can work together with LRA for more efficient model compression.

Model structure learning. Group Lasso [14] is an efficient regularization for learning sparse structures. Liu et al. [6] utilized group Lasso to constrain the structure scale of LRA. To adapt the DNN structure to different databases, Feng et al. [16] learned the appropriate number of filters in a DNN. Different from prior art, we apply group Lasso to regularize multiple DNN structures (filters, channels, filter shapes, and layer depth).
A closely related parallel work is Group-wise Brain Damage [17], which is a subset (i.e., learning filter shapes) of our work and further justifies the effectiveness of our techniques.

Figure 2: The proposed Structured Sparsity Learning (SSL) for DNNs. The weights in filters are split into multiple groups. Through group Lasso regularization, a more compact DNN is obtained by removing some groups. The figure illustrates the filter-wise ($W^{(l)}_{n_l,:,:,:}$), channel-wise ($W^{(l)}_{:,c_l,:,:}$), shape-wise ($W^{(l)}_{:,c_l,m_l,k_l}$), and depth-wise ($W^{(l)}$, with shortcuts) structured sparsity that are explored in the work.

3 Structured Sparsity Learning Method for DNNs

We focus mainly on Structured Sparsity Learning (SSL) on convolutional layers to regularize the structure of DNNs. We first propose a generic method to regularize DNN structures in Section 3.1, and then specialize the method to the structures of filters, channels, filter shapes and depth in Section 3.2. Variants of the formulations are also discussed from a computational efficiency viewpoint in Section 3.3.

3.1 Proposed structured sparsity learning for generic structures

Suppose the weights of the convolutional layers in a DNN form a sequence of 4-D tensors $W^{(l)} \in \mathbb{R}^{N_l \times C_l \times M_l \times K_l}$, where $N_l$, $C_l$, $M_l$ and $K_l$ are the dimensions of the $l$-th ($1 \le l \le L$) weight tensor along the axes of filter, channel, spatial height and spatial width, respectively. $L$ denotes the number of convolutional layers. Then the proposed generic optimization target of a DNN with structured sparsity regularization can be formulated as:
$$E(W) = E_D(W) + \lambda\cdot R(W) + \lambda_g\cdot\sum_{l=1}^{L} R_g\big(W^{(l)}\big). \quad (1)$$
Here $W$ represents the collection of all weights in the DNN; $E_D(W)$ is the loss on the data; $R(\cdot)$ is a non-structured regularization applied to every weight, e.g., the ℓ2-norm; and $R_g(\cdot)$ is the structured sparsity regularization on each layer. Because group Lasso can effectively zero out all weights in some groups [14][15], we adopt it in our SSL. The group Lasso regularization on a set of weights $w$ can be represented as $R_g(w) = \sum_{g=1}^{G}\|w^{(g)}\|_g$, where $w^{(g)}$ is a group of partial weights in $w$ and $G$ is the total number of groups. Different groups may overlap. Here $\|\cdot\|_g$ is the group Lasso norm, i.e., $\|w^{(g)}\|_g = \sqrt{\sum_{i=1}^{|w^{(g)}|}\big(w^{(g)}_i\big)^2}$, where $|w^{(g)}|$ is the number of weights in $w^{(g)}$.

3.2 Structured sparsity learning for structures of filters, channels, filter shapes and depth

In SSL, the learned "structure" is decided by the way of splitting the groups $w^{(g)}$. We investigate and formulate the filter-wise, channel-wise, shape-wise, and depth-wise structured sparsity illustrated in Figure 2. For simplicity, the $R(\cdot)$ term of Eq. (1) is omitted in the following formulations.

Penalizing unimportant filters and channels. Suppose $W^{(l)}_{n_l,:,:,:}$ is the $n_l$-th filter and $W^{(l)}_{:,c_l,:,:}$ is the $c_l$-th channel of all filters in the $l$-th layer. The optimization target of learning the filter-wise and channel-wise structured sparsity can be defined as
$$E(W) = E_D(W) + \lambda_n\cdot\sum_{l=1}^{L}\Big(\sum_{n_l=1}^{N_l}\|W^{(l)}_{n_l,:,:,:}\|_g\Big) + \lambda_c\cdot\sum_{l=1}^{L}\Big(\sum_{c_l=1}^{C_l}\|W^{(l)}_{:,c_l,:,:}\|_g\Big). \quad (2)$$
As indicated in Eq. (2), our approach tends to remove less important filters and channels. Note that zeroing out a filter in the $l$-th layer results in a dummy zero output feature map, which in turn makes a corresponding channel in the $(l+1)$-th layer useless. Hence, we combine the filter-wise and channel-wise structured sparsity simultaneously in the learning.

Learning arbitrary shapes of filters.
As illustrated in Figure 2, $W^{(l)}_{:,c_l,m_l,k_l}$ denotes the vector of all corresponding weights located at spatial position $(m_l, k_l)$ of the 2D filters across the $c_l$-th channel. Thus, we define $W^{(l)}_{:,c_l,m_l,k_l}$ as a shape fiber related to learning arbitrary filter shapes, because a homogeneous non-cubic filter shape can be learned by zeroing out some shape fibers. The optimization target of learning the shapes of filters becomes:
$$E(W) = E_D(W) + \lambda_s\cdot\sum_{l=1}^{L}\Big(\sum_{c_l=1}^{C_l}\sum_{m_l=1}^{M_l}\sum_{k_l=1}^{K_l}\|W^{(l)}_{:,c_l,m_l,k_l}\|_g\Big). \quad (3)$$

Regularizing layer depth. We also explore depth-wise sparsity to regularize the depth of DNNs in order to improve accuracy and reduce computation cost. The corresponding optimization target is $E(W) = E_D(W) + \lambda_d\cdot\sum_{l=1}^{L}\|W^{(l)}\|_g$. Different from the other discussed sparsification techniques, zeroing out all the filters in a layer would cut off message propagation in the DNN, so that the output neurons could not perform any classification. Inspired by the structure of highway networks [18] and deep residual networks [5], we propose to leverage shortcuts across layers to solve this issue. As illustrated in Figure 2, even when SSL removes an entire unimportant layer, feature maps are still forwarded through the shortcut.

3.3 Structured sparsity learning for computationally efficient structures

All the schemes proposed in Section 3.2 can learn a compact DNN for computation cost reduction. Moreover, some variants of the formulations of these schemes can directly learn structures that can be efficiently computed.

2D-filter-wise sparsity for convolution. 3D convolution in DNNs is essentially a composition of 2D convolutions. To perform efficient convolution, we explore a fine-grained variant of filter-wise sparsity, namely 2D-filter-wise sparsity, which spatially enforces group Lasso on each 2D filter $W^{(l)}_{n_l,c_l,:,:}$. The saved convolution is proportional to the percentage of removed 2D filters.
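The per-layer penalty terms in Eqs. (2) and (3) are just sums of ℓ2 norms over slices of the 4-D weight tensor, which is easy to state in code. A sketch under our own naming, not the paper's implementation:

```python
import numpy as np

def ssl_group_terms(W):
    """Group-Lasso terms for one layer's weights W of shape (N, C, M, K):
    filter-wise (Eq. 2, groups W[n, :, :, :]), channel-wise (Eq. 2,
    groups W[:, c, :, :]) and shape-wise (Eq. 3, fibers W[:, c, m, k])."""
    filter_term = np.sqrt((W ** 2).sum(axis=(1, 2, 3))).sum()
    channel_term = np.sqrt((W ** 2).sum(axis=(0, 2, 3))).sum()
    shape_term = np.sqrt((W ** 2).sum(axis=0)).sum()
    return filter_term, channel_term, shape_term

W = np.zeros((4, 3, 5, 5))
W[0] = 1.0                              # a single surviving filter
f_term, c_term, s_term = ssl_group_terms(W)
```

During training these terms would be scaled by $\lambda_n$, $\lambda_c$, $\lambda_s$ and added to the data loss; groups driven to zero then correspond to removable filters, channels, or shape fibers.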
The fine-grained version of filter-wise sparsity can more efficiently reduce the computation associated with convolution: because the distance of the weights (in a smaller group) from the origin is shorter, group Lasso more easily obtains a high ratio of zero groups.

Combination of filter-wise and shape-wise sparsity for GEMM. Convolutional computation in DNNs is commonly converted to a GEneral Matrix Multiplication (GEMM) by lowering weight tensors and feature tensors to matrices [19]. For example, in Caffe [20], a 3D filter $W^{(l)}_{n_l,:,:,:}$ is reshaped to a row in the weight matrix, where each column is the collection of weights $W^{(l)}_{:,c_l,m_l,k_l}$ related to shape-wise sparsity. Combining filter-wise and shape-wise sparsity can therefore directly reduce the dimension of the weight matrix in GEMM by removing its zero rows and columns. In this context, we use row-wise and column-wise sparsity as interchangeable terminology for filter-wise and shape-wise sparsity, respectively.

4 Experiments

We evaluate the effectiveness of our SSL using published models on three databases: MNIST, CIFAR-10, and ImageNet. Unless stated otherwise, SSL starts from a network whose weights are initialized by the baseline, and speedups are measured in matrix-matrix multiplication by Caffe on a single-thread Intel Xeon E5-2630 CPU. Hyper-parameters are selected by cross-validation.

4.1 LeNet and multilayer perceptron on MNIST

In the MNIST experiments, we examine the effectiveness of SSL in two types of networks: LeNet [21] implemented by Caffe and a multilayer perceptron (MLP) network. Both networks were trained without data augmentation.

LeNet: When applying SSL to LeNet, we constrain the network with filter-wise and channel-wise sparsity in the convolutional layers to penalize unimportant filters and channels. Table 1 summarizes the remaining filters and channels, floating-point operations (FLOP), and practical speedups.
In the tables, LeNet 1 is the baseline and the others are the results after applying SSL with different strengths of structured sparsity regularization.

Table 1: Results after penalizing unimportant filters and channels in LeNet (entries in the order conv1—conv2)

LeNet #        Error   Filter #   Channel #   FLOP         Speedup
1 (baseline)   0.9%    20—50      1—20        100%—100%    1.00×—1.00×
2              0.8%    5—19       1—4         25%—7.6%     1.64×—5.23×
3              1.0%    3—12       1—3         15%—3.6%     1.99×—7.44×

Table 2: Results after learning filter shapes in LeNet (filter sizes after removing zero shape fibers, in the order conv1—conv2)

LeNet #        Error   Filter size   Channel #   FLOP         Speedup
1 (baseline)   0.9%    25—500        1—20        100%—100%    1.00×—1.00×
4              0.8%    21—41         1—2         8.4%—8.2%    2.33×—6.93×
5              1.0%    7—14          1—1         1.4%—2.8%    5.19×—10.82×

The results show that our method achieves a similar error (±0.1%) with many fewer filters and channels, and saves significant FLOP and computation time.

To demonstrate the impact of SSL on the structures of filters, we present all learned conv1 filters in Figure 3. It can be seen that most filters in LeNet 2 are entirely zeroed out, except for the five most important detectors of stroke patterns, which are sufficient for feature extraction. The accuracy of LeNet 3 (which further removes the weakest and most redundant stroke detector) drops only 0.2% from that of LeNet 2. Compared to the random and blurry filter patterns in LeNet 1, which result from the high freedom of the parameter space, the filters in LeNet 2 & 3 are regularized and converge to smoother and more natural patterns. This explains why our proposed SSL obtains the same level of accuracy with many fewer filters. The smoothness of the filters is also observed in the deeper layers.

The effectiveness of shape-wise sparsity on LeNet is summarized in Table 2. The baseline LeNet 1 has conv1 filters with a regular 5 × 5 square shape (size = 25), while LeNet 5 reduces the dimension to one that can be constrained by a 2 × 4 rectangle (size = 7).
The 3D shapes of the conv2 filters in the baseline are likewise regularized to 2D shapes in LeNet 5, spanning only one channel, indicating that only one conv1 filter is needed. This significantly saves FLOP and computation time.

Figure 3: Learned conv1 filters in LeNet 1 (top), LeNet 2 (middle) and LeNet 3 (bottom)

MLP: Beyond convolutional layers, our proposed SSL can be extended to learn the structure (i.e., the number of neurons) of fully-connected layers. We enforce group Lasso regularization on all the input (or output) connections of each neuron, including those of the input layer. A neuron whose input connections are all zeroed out degenerates to a bias neuron in the next layer; similarly, a neuron degenerates to a removable dummy neuron if all of its output connections are zeroed out. As such, the computation of the GEneral Matrix-Vector (GEMV) product in fully-connected layers can be significantly reduced. The learned structures and FLOP of different MLP networks are summarized below and in Figure 4(a):

MLP #        | Error | Neuron # per layer§ | FLOP per layer§
1 (baseline) | 1.43% | 784–500–300–10      | 100%–100%–100%
2            | 1.34% | 469–294–166–10      | 35.18%–32.54%–55.33%
3            | 1.53% | 434–174–78–10       | 19.26%–9.05%–26.00%
§ In the order of input layer–hidden layer 1–hidden layer 2–output layer

Figure 4: (a) Results of learning the number of neurons in MLP. (b) The connection counts of input neurons (i.e., pixels) in MLP 2 after SSL.

The results show that SSL can not only remove hidden neurons but also discover the sparsity of images. For example, Figure 4(b) depicts the number of connections of each input neuron in MLP 2: 40.18% of input neurons have zero connections, and these concentrate at the boundary of the image. This distribution matches intuition: handwritten digits are usually written in the center, and pixels near the boundary carry little discriminative information for classification.

4.2 ConvNet and ResNet on CIFAR-10

We implemented the ConvNet of [1] and deep residual networks (ResNet) [5] on CIFAR-10. When regularizing filters, channels, and filter shapes, the results and observations for both networks are similar to those of the MNIST experiments. In addition, we simultaneously learn filter-wise and shape-wise sparsity in ConvNet to reduce the dimensions of its weight matrices in GEMM, and we learn depth-wise sparsity in ResNet to regularize the depth of the DNN.

ConvNet: We use the network of Krizhevsky et al. [1] as the baseline and implement it in Caffe. All configurations remain the same as the original implementation, except that we add a dropout layer with a ratio of 0.5 after the fully-connected layer to avoid over-fitting. ConvNet is trained without data augmentation. Table 3 summarizes the results of three ConvNet variants; here, the row/column sparsity of a weight matrix is defined as the percentage of all-zero rows/columns. Figure 5 shows the learned conv1 filters.

Table 3: Learning row-wise and column-wise sparsity of ConvNet on CIFAR-10
ConvNet #    | Error | Row sparsity§    | Column sparsity§ | Speedup§
1 (baseline) | 17.9% | 12.5%–0%–0%      | 0%–0%–0%         | 1.00×–1.00×–1.00×
2            | 17.9% | 50.0%–28.1%–1.6% | 0%–59.3%–35.1%   | 1.43×–3.05×–1.57×
3            | 16.9% | 31.3%–0%–1.6%    | 0%–42.8%–9.8%    | 1.25×–2.01×–1.18×
§ In the order of conv1–conv2–conv3

As Table 3 shows, SSL reduces the size of the weight matrices in ConvNet 2 by 50%, 70.7% and 36.1% for the three convolutional layers, respectively, and achieves good speedups without an accuracy drop.
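The group Lasso term driving all of these experiments simply sums the ℓ2 norms of weight groups. A minimal NumPy sketch over the rows and columns of a lowered weight matrix (the coefficient lambda_g is illustrative, not a value from the paper):

```python
import numpy as np

def group_lasso(W, axis):
    """Sum of l2 norms of the groups along the given axis of W."""
    return np.sqrt((W * W).sum(axis=axis)).sum()

W = np.array([[3.0, 4.0],
              [0.0, 0.0]])

row_penalty = group_lasso(W, axis=1)   # ||(3,4)||_2 + ||(0,0)||_2 = 5.0
col_penalty = group_lasso(W, axis=0)   # ||(3,0)||_2 + ||(4,0)||_2 = 7.0

lambda_g = 1e-3                        # illustrative regularization coefficient
ssl_penalty = lambda_g * (row_penalty + col_penalty)
```

Because the penalty is the norm of each group rather than of each weight, the (sub)gradient pushes whole rows or columns to zero together, which is exactly what makes the zeroed structures removable.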
Surprisingly, even without SSL, four conv1 filters of the baseline are all-zero, as shown in Figure 5, demonstrating the great potential of filter sparsity. When SSL is applied, half of the conv1 filters in ConvNet 2 can be zeroed out without an accuracy drop. On the other hand, ConvNet 3 trained with SSL lowers the error by 1.0% (±0.16%) with a model even smaller than the baseline. In this scenario, SSL acts as a structure regularization that dynamically learns a better network structure (including the number of filters and the filter shapes) to reduce the error.

ResNet: To investigate the depth that DNNs actually need, we apply SSL to a 20-layer deep residual network (ResNet-20) [5] as the baseline. The network has 19 convolutional layers and 1 fully-connected layer. Identity shortcuts connect feature maps of the same dimension, while 1×1 convolutional layers serve as shortcuts between feature maps of different dimensions. Batch normalization [22] is adopted after convolution and before activation. We use the same data augmentation and training hyper-parameters as in [5]; the final error of the baseline is 8.82%. In SSL, the depth of ResNet-20 is regularized by depth-wise sparsity: group Lasso regularization is enforced only on the convolutional layers between each pair of shortcut endpoints, excluding the first convolutional layer and all convolutional shortcuts. After SSL converges, layers whose weights are all zero are removed, and the network is fine-tuned with a base learning rate of 0.01, lower than the 0.1 used for the baseline. Figure 6 plots error vs. the number of layers under different strengths of depth regularization. Compared with the original ResNet of [5], SSL learns a 14-layer network (SSL-ResNet-14) that reaches a lower error than the 20-layer baseline (ResNet-20), while SSL-ResNet-18 and ResNet-32 achieve errors of 7.40% and 7.51%, respectively.
This result implies that SSL can function as a depth regularization that improves classification accuracy. Note that SSL can efficiently learn shallower DNNs without accuracy loss to reduce computation cost; this does not mean, however, that network depth is unimportant. The trend in Figure 6 shows that the test error generally declines as more layers are preserved, and the slight error rise of SSL-ResNet-20 over SSL-ResNet-18 reflects a suboptimal selection of depth within the "32×32" group.

Figure 5: Learned conv1 filters in ConvNet 1 (top), ConvNet 2 (middle) and ConvNet 3 (bottom)

Figure 6: Error vs. number of layers after depth regularization. # is the number of layers, including the final fully-connected layer. ResNet-# is the ResNet of [5]; SSL-ResNet-# is the depth-regularized ResNet learned by SSL. "32×32" indicates the convolutional layers with an output map size of 32×32, etc.

4.3 AlexNet on ImageNet

To show that our method generalizes to large-scale DNNs, we evaluate SSL using AlexNet on ILSVRC 2012. CaffeNet [20], the replication of AlexNet [1] with minor changes, is used in our experiments. All training images are rescaled to 256×256; a 227×227 patch is randomly cropped from each rescaled image and mirrored for data augmentation, and only the center crop is used for validation. The final top-1 validation error is 42.63%. In SSL, AlexNet is first trained with structure regularization; once it converges, zero groups are removed to obtain a DNN with the new structure; finally, the network is fine-tuned without SSL to regain accuracy.
We first study 2D-filter-wise and shape-wise sparsity by exploring the trade-off between computation complexity and classification accuracy. Figure 7(a) shows the 2D-filter sparsity (the ratio of removed 2D filters to total 2D filters) and the FLOP saved in 2D convolutions vs. the validation error. In Figure 7(a), deeper layers generally attain higher sparsity, as the group size shrinks and the number of 2D filters grows. 2D-filter sparsity regularization can reduce the total FLOP by 30%–40% without accuracy loss, or reduce the error of AlexNet by ∼1%, down to 41.69%, while retaining the original number of parameters. Shape-wise sparsity yields similar results: in Table 4, for example, AlexNet 5 achieves on average a 1.4× layer-wise speedup on both CPU and GPU without accuracy loss after shape regularization, and the top-1 error can be reduced to 41.83% if the parameters are retained. In Figure 7(a), the DNN with the lowest error has very low sparsity, indicating that the number of parameters in a DNN is still important for maintaining learning capacity; in this regime, SSL acts as a regularization that imposes a smoothness restriction on the model to avoid overfitting. Figure 7(b) compares dimensionality reduction of the weight tensors in the baseline and in our SSL-regularized AlexNet. The results show that the smoothness restriction confines the parameter search to a lower-dimensional space and enables lower-rank approximation of the DNNs. Therefore, SSL can be combined with low-rank approximation to achieve even higher model compression.
Beyond the analyses above, we compare the computation efficiency of structured and non-structured sparsity in Caffe using standard off-the-shelf libraries: Intel Math Kernel Library on CPU, and CUDA cuBLAS and cuSPARSE on GPU. We use SSL to learn an AlexNet with high column-wise and row-wise sparsity as the representative structured-sparsity method. ℓ1-norm regularization is selected as the representative non-structured-sparsity method instead of connection pruning [7], because ℓ1-norm attains higher sparsity on the convolutional layers, as shown by AlexNet 3 and AlexNet 4 in Table 4. Speedups achieved by SSL are measured on GEMM, where all-zero rows (and columns) of each weight matrix are removed and the remaining ones are concatenated in consecutive memory; compared to the GEMM itself, the overhead of this concatenation is negligible.

Figure 7: (a) 2D-filter-wise sparsity and FLOP reduction vs. top-1 error; the vertical dashed line marks the error of the original AlexNet. (b) Reconstruction error of the weight tensors vs. dimensionality; Principal Component Analysis (PCA) is used for dimensionality reduction, with the eigenvectors of the largest eigenvalues selected as the basis of the lower-dimensional space. Dashed lines denote the baselines and solid lines denote AlexNet 5 of Table 4. (c) Speedups of ℓ1-norm and SSL on various CPUs and GPUs (in the x-axis labels, T# is the maximum number of physical threads of the CPU). AlexNet 1 and AlexNet 2 of Table 4 are used as testbenches.
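The two evaluation styles can be contrasted in a small NumPy/SciPy sketch (sizes and sparsity pattern are illustrative; this checks that the shrunken dense GEMM used for structured sparsity computes the same result as a sparse-format multiply of the full matrix, rather than timing either):

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
W[::2, :] = 0.0                    # structured: half of the rows are all-zero
x = rng.standard_normal((128, 32))

# Structured sparsity: drop the zero rows and run a smaller dense GEMM.
keep = ~np.all(W == 0, axis=1)
y_struct = W[keep] @ x             # shape (32, 32)

# Non-structured sparsity: keep the full shape, use a CSR sparse multiply.
y_csr = sp.csr_matrix(W) @ x       # shape (64, 32)

# The structured result equals the CSR result with the zero rows removed.
assert np.allclose(y_struct, y_csr[keep])
```

The dense-but-smaller multiply stays on the fast GEMM path of BLAS libraries, while the CSR path pays indexing overhead per nonzero, which is consistent with the speedup gap reported next.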
To measure the speedups of ℓ1-norm sparsity, sparse weight matrices are stored in Compressed Sparse Row (CSR) format and computed with sparse-dense matrix multiplication subroutines. Table 4 compares the sparsity and speedups obtained by ℓ1-norm and SSL on CPU (Intel Xeon) and GPU (GeForce GTX TITAN Black) at approximately the same error, i.e., with acceptable or no accuracy loss. For a fair comparison, after ℓ1-norm regularization the DNN is also fine-tuned with all zero-weighted connections disconnected, which recovers, e.g., 1.39% accuracy for AlexNet 1. Our experiments show that DNNs require very high non-structured sparsity to achieve a reasonable speedup (the speedup even falls below 1× when the sparsity is low). SSL, in contrast, always achieves a positive speedup. With an acceptable accuracy loss, SSL achieves on average 5.1× and 3.1× layer-wise acceleration on CPU and GPU, respectively, whereas ℓ1-norm achieves on average only 3.0× and 0.9×. We note that, at the same accuracy, our average speedup is higher than that of [6], which adopts heavy hardware customization to overcome the negative impact of non-structured sparsity. Figure 7(c) shows the speedups of ℓ1-norm and SSL on various platforms, including GPUs (Quadro, Tesla and Titan) and CPUs (Intel Xeon E5-2630). SSL achieves on average a ∼3× speedup on GPU, while non-structured sparsity obtains no speedup on GPU platforms. On CPU platforms, both methods achieve good speedups, and the benefit grows as the processor becomes weaker. Nonetheless, SSL consistently achieves about a 2× speedup over non-structured sparsity.

5 Conclusion

In this work, we propose a Structured Sparsity Learning (SSL) method to regularize the filter, channel, filter-shape, and depth structures of Deep Neural Networks (DNNs). Our method drives a DNN to dynamically learn more compact structures without accuracy loss.
The learned structured compactness yields significant speedups for DNN evaluation on both CPU and GPU with off-the-shelf libraries. Moreover, a variant of SSL can act as a structure regularization that improves the classification accuracy of state-of-the-art DNNs.

Acknowledgments
This work was supported in part by NSF XPS-1337198 and NSF CCF-1615475. The authors thank Drs. Sheng Li and Jongsoo Park for valuable feedback on this work.

Table 4: Sparsity and speedup of AlexNet on ILSVRC 2012
#  Method       Top-1 err.  Statistics        conv1   conv2   conv3   conv4   conv5
1  ℓ1           44.67%      sparsity          67.6%   92.4%   97.2%   96.6%   94.3%
                            CPU ×             0.80    2.91    4.84    3.83    2.76
                            GPU ×             0.25    0.52    1.38    1.04    1.36
2  SSL          44.66%      column sparsity   0.0%    63.2%   76.9%   84.7%   80.7%
                            row sparsity      9.4%    12.9%   40.6%   46.9%   0.0%
                            CPU ×             1.05    3.37    6.27    9.73    4.93
                            GPU ×             1.00    2.37    4.94    4.03    3.05
3  pruning [7]  42.80%      sparsity          16.0%   62.0%   65.0%   63.0%   63.0%
4  ℓ1           42.51%      sparsity          14.7%   76.2%   85.3%   81.5%   76.3%
                            CPU ×             0.34    0.99    1.30    1.10    0.93
                            GPU ×             0.08    0.17    0.42    0.30    0.32
5  SSL          42.53%      column sparsity   0.00%   20.9%   39.7%   39.7%   24.6%
                            CPU ×             1.00    1.27    1.64    1.68    1.32
                            GPU ×             1.00    1.25    1.63    1.72    1.36

References
[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[2] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014.
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[4] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2015.
[5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
[6] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
[7] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In Advances in Neural Information Processing Systems, pages 1135–1143, 2015.
[8] Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015.
[9] Misha Denil, Babak Shakibi, Laurent Dinh, Marc'Aurelio Ranzato, and Nando de Freitas. Predicting parameters in deep learning. In Advances in Neural Information Processing Systems, pages 2148–2156, 2013.
[10] Emily L. Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems, pages 1269–1277, 2014.
[11] Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014.
[12] Yani Ioannou, Duncan P. Robertson, Jamie Shotton, Roberto Cipolla, and Antonio Criminisi. Training CNNs with low-rank filters for efficient image classification. arXiv preprint arXiv:1511.06744, 2015.
[13] Cheng Tai, Tong Xiao, Xiaogang Wang, and Weinan E. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015.
[14] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 68(1):49–67, 2006.
[15] Seyoung Kim and Eric P. Xing. Tree-guided group lasso for multi-task regression with structured sparsity. In Proceedings of the 27th International Conference on Machine Learning, 2010.
[16] Jiashi Feng and Trevor Darrell. Learning the structure of deep convolutional networks. In The IEEE International Conference on Computer Vision (ICCV), 2015.
[17] Vadim Lebedev and Victor Lempitsky. Fast ConvNets using group-wise brain damage. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.
[18] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv preprint arXiv:1505.00387, 2015.
[19] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014.
[20] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[21] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[22] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
Efficient Globally Convergent Stochastic Optimization for Canonical Correlation Analysis

Weiran Wang¹* Jialei Wang²* Dan Garber¹ Nathan Srebro¹
¹Toyota Technological Institute at Chicago ²University of Chicago
{weiranwang,dgarber,nati}@ttic.edu jialei@uchicago.edu

Abstract

We study the stochastic optimization of canonical correlation analysis (CCA), whose objective is nonconvex and does not decouple over training samples. Although several stochastic gradient based optimization algorithms have recently been proposed to solve this problem, no global convergence guarantee was provided by any of them. Inspired by the alternating least squares/power iterations formulation of CCA and the shift-and-invert preconditioning method for PCA, we propose two globally convergent meta-algorithms for CCA, both of which transform the original problem into sequences of least squares problems that need only be solved approximately. We instantiate the meta-algorithms with state-of-the-art SGD methods and obtain time complexities that significantly improve upon those of previous work. Experimental results demonstrate their superior performance.

1 Introduction

Canonical correlation analysis (CCA, [1]) and its extensions are ubiquitous techniques across scientific research areas for revealing the common sources of variability in multiple views of the same phenomenon. In CCA, the training set consists of paired observations from two views, denoted (x_1, y_1), ..., (x_N, y_N), where N is the training set size and x_i ∈ R^{d_x}, y_i ∈ R^{d_y} for i = 1, ..., N. We also denote the data matrices for each view² by X = [x_1, ..., x_N] ∈ R^{d_x×N} and Y = [y_1, ..., y_N] ∈ R^{d_y×N}, and d := d_x + d_y. The objective of CCA is to find linear projections of each view such that the correlation between the projections is maximized:

    max_{u,v}  u^⊤ Σ_xy v    s.t.  u^⊤ Σ_xx u = v^⊤ Σ_yy v = 1,    (1)

where Σ_xy = (1/N) XY^⊤ is the cross-covariance matrix, Σ_xx = (1/N) XX^⊤ + γ_x I and Σ_yy = (1/N) YY^⊤ + γ_y I are the auto-covariance matrices, and (γ_x, γ_y) ≥ 0 are regularization parameters [2]. We denote by (u*, v*) the global optimum of (1), which can be computed in closed form. Define

    T := Σ_xx^{-1/2} Σ_xy Σ_yy^{-1/2} ∈ R^{d_x×d_y},    (2)

and let (φ, ψ) be the (unit-length) left and right singular vector pair associated with T's largest singular value ρ_1. Then the optimal objective value, i.e., the canonical correlation between the views, is ρ_1, achieved by (u*, v*) = (Σ_xx^{-1/2} φ, Σ_yy^{-1/2} ψ). Note that ρ_1 = ‖T‖ ≤ ‖(1/√N) Σ_xx^{-1/2} X‖ · ‖(1/√N) Σ_yy^{-1/2} Y‖ ≤ 1; furthermore, we are guaranteed ρ_1 < 1 if (γ_x, γ_y) > 0.

* The first two authors contributed equally.
² We assume that X and Y are centered at the origin for notational simplicity; if they are not, we can center them as a pre-processing step.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Table 1: Time complexities of different algorithms for achieving an η-suboptimal solution (u, v) to CCA, i.e., min{(u^⊤ Σ_xx u*)², (v^⊤ Σ_yy v*)²} ≥ 1 − η. GD = gradient descent, AGD = accelerated GD, SVRG = stochastic variance reduced gradient, ASVRG = accelerated SVRG. Note that ASVRG provides a speedup over SVRG only when κ̃ > N, and we show the dominant term in its complexity.
Algorithm                                   | Least squares solver | Time complexity
AppGrad [3]                                 | GD    | Õ( dN κ̃ · ρ_1²/(ρ_1²−ρ_2²) · log(1/η) )  (local)
CCALin [6]                                  | AGD   | Õ( dN √κ̃ · ρ_1²/(ρ_1²−ρ_2²) · log(1/η) )
This work: alternating least squares (ALS)  | AGD   | Õ( dN √κ̃ · (ρ_1²/(ρ_1²−ρ_2²))² · log²(1/η) )
                                            | SVRG  | Õ( d(N + κ̃) · (ρ_1²/(ρ_1²−ρ_2²))² · log²(1/η) )
                                            | ASVRG | Õ( d√(Nκ̃) · (ρ_1²/(ρ_1²−ρ_2²))² · log²(1/η) )
This work: shift-and-invert precond. (SI)   | AGD   | Õ( dN √κ̃ · √(1/(ρ_1−ρ_2)) · log²(1/η) )
                                            | SVRG  | Õ( d(N + (κ̃ · 1/(ρ_1−ρ_2))²) · log²(1/η) )
                                            | ASVRG | Õ( d N^{3/4} √κ̃ · √(1/(ρ_1−ρ_2)) · log²(1/η) )

For large, high-dimensional datasets it is time- and memory-consuming to first explicitly form the matrix T (which requires eigen-decompositions of the covariance matrices) and then compute its singular value decomposition (SVD). For such datasets, it is desirable to develop stochastic algorithms that have efficient updates, converge fast, and take advantage of input sparsity. There have been recent attempts to solve (1) with stochastic gradient descent (SGD) methods [3, 4, 5], but none of these works provides a rigorous convergence analysis for its stochastic CCA algorithm. The main contribution of this paper is the proposal of two globally convergent meta-algorithms for solving (1), namely alternating least squares (ALS, Algorithm 2) and shift-and-invert preconditioning (SI, Algorithm 3), both of which transform the original problem (1) into sequences of least squares problems that need only be solved approximately. We instantiate the meta-algorithms with state-of-the-art SGD methods and obtain efficient stochastic optimization algorithms for CCA. In order to measure the alignment between an approximate solution (u, v) and the optimum (u*, v*), we assume that T has a positive singular value gap Δ := ρ_1 − ρ_2 ∈ (0, 1], so that its top left and right singular vector pair is unique (up to a simultaneous change of sign).
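For reference, the closed-form solution described above can be sketched in a few lines of NumPy (the synthetic dataset, its sizes, and the correlation structure are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dx, dy = 500, 5, 4
X = rng.standard_normal((dx, N))
Y = 0.5 * X[:dy] + 0.1 * rng.standard_normal((dy, N))   # correlated views
gx = gy = 1e-3                                          # gamma_x, gamma_y

Sxx = X @ X.T / N + gx * np.eye(dx)
Syy = Y @ Y.T / N + gy * np.eye(dy)
Sxy = X @ Y.T / N

def inv_sqrt(S):
    """Inverse matrix square root of a symmetric positive definite S."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(w ** -0.5) @ V.T

T = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
U, s, Vt = np.linalg.svd(T)
rho1 = s[0]                          # canonical correlation
u_star = inv_sqrt(Sxx) @ U[:, 0]
v_star = inv_sqrt(Syy) @ Vt[0]

# The optimum satisfies the constraints of (1) and attains objective rho1.
assert abs(u_star @ Sxx @ u_star - 1) < 1e-8
assert abs(v_star @ Syy @ v_star - 1) < 1e-8
assert abs(u_star @ Sxy @ v_star - rho1) < 1e-8
```

This is exactly the computation the paper argues is too expensive at scale: it requires eigen-decomposing both covariance matrices before the SVD of T.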
Table 1 summarizes the time complexities of several algorithms for achieving η-suboptimal alignments, where

    κ̃ = max_i max(‖x_i‖², ‖y_i‖²) / min(σ_min(Σ_xx), σ_min(Σ_yy))

upper-bounds the condition numbers of the least squares problems solved in all cases.³ We use the notation Õ(·) to hide poly-logarithmic dependencies (see Sec. 3.1.1 and Sec. 3.2.3 for the hidden factors). Each time complexity may be preferable in a certain regime, depending on the parameters of the problem.

Notation. We use σ_i(A) to denote the i-th largest singular value of a matrix A, and σ_max(A) and σ_min(A) to denote the largest and smallest singular values of A, respectively.

2 Motivation: Alternating least squares

Our solution to (1) is inspired by the alternating least squares (ALS) formulation of CCA [7, Algorithm 5.2], shown in Algorithm 1. Let the nonzero singular values of T be 1 ≥ ρ_1 ≥ ρ_2 ≥ ··· ≥ ρ_r > 0, where r = rank(T) ≤ min(d_x, d_y), and let the corresponding (unit-length) left and right singular vector pairs be (a_1, b_1), ..., (a_r, b_r), with a_1 = φ and b_1 = ψ. Define

    C = [ 0    T ]
        [ T^⊤  0 ]  ∈ R^{d×d}.    (3)

³ For the ALS meta-algorithm, it is enough to consider per-view conditioning. And when using AGD as the least squares solver, the time complexity depends on σ_max(Σ_xx) instead, which is less than max_i ‖x_i‖².

Algorithm 1 Alternating least squares for CCA.
Input: data matrices X ∈ R^{d_x×N}, Y ∈ R^{d_y×N}, regularization parameters (γ_x, γ_y).
Initialize ũ_0 ∈ R^{d_x}, ṽ_0 ∈ R^{d_y}.   {φ̃_0, ψ̃_0}
u_0 ← ũ_0/√(ũ_0^⊤ Σ_xx ũ_0), v_0 ← ṽ_0/√(ṽ_0^⊤ Σ_yy ṽ_0)   {φ_0 ← φ̃_0/‖φ̃_0‖, ψ_0 ← ψ̃_0/‖ψ̃_0‖}
for t = 1, 2, ..., T do
  ũ_t ← Σ_xx^{-1} Σ_xy v_{t-1}   {φ̃_t ← Σ_xx^{-1/2} Σ_xy Σ_yy^{-1/2} ψ_{t-1}}
  ṽ_t ← Σ_yy^{-1} Σ_xy^⊤ u_{t-1}   {ψ̃_t ← Σ_yy^{-1/2} Σ_xy^⊤ Σ_xx^{-1/2} φ_{t-1}}
  u_t ← ũ_t/√(ũ_t^⊤ Σ_xx ũ_t), v_t ← ṽ_t/√(ṽ_t^⊤ Σ_yy ṽ_t)   {φ_t ← φ̃_t/‖φ̃_t‖, ψ_t ← ψ̃_t/‖ψ̃_t‖}
end for
Output: (u_T, v_T) → (u*, v*) as T → ∞.   {(φ_T, ψ_T) → (φ, ψ)}

It is straightforward to check that the nonzero eigenvalues of C are

    ρ_1 ≥ ··· ≥ ρ_r ≥ −ρ_r ≥ ··· ≥ −ρ_1,

with corresponding eigenvectors

    (1/√2)[a_1; b_1], ..., (1/√2)[a_r; b_r], (1/√2)[a_r; −b_r], ..., (1/√2)[a_1; −b_1].

The key observation is that Algorithm 1 effectively runs a variant of power iterations on C to extract its top eigenvector. To see this, make the change of variables

    φ_t = Σ_xx^{1/2} u_t,  ψ_t = Σ_yy^{1/2} v_t,  φ̃_t = Σ_xx^{1/2} ũ_t,  ψ̃_t = Σ_yy^{1/2} ṽ_t.    (4)

Then the steps of Algorithm 1 can be rewritten equivalently in the new variables, as shown in the braces {} of each line. Observe that the iterates are updated as follows from step t−1 to step t:

    [φ̃_t; ψ̃_t] ← [ 0  T; T^⊤  0 ] [φ_{t-1}; ψ_{t-1}],    [φ_t; ψ_t] ← [ φ̃_t/‖φ̃_t‖; ψ̃_t/‖ψ̃_t‖ ].    (5)

Except for the special normalization steps, which rescale the two sets of variables separately, Algorithm 1 is very similar to power iterations [8]. We show the convergence rate of ALS below (the proof is given in Appendix A). The first measure of progress is the alignment of φ_t with φ and of ψ_t with ψ, i.e., (φ_t^⊤ φ)² = (u_t^⊤ Σ_xx u*)² and (ψ_t^⊤ ψ)² = (v_t^⊤ Σ_yy v*)². The maximum value of these alignments is 1, achieved when the iterates completely align with the optimal solution. The second natural measure of progress is the objective of (1), i.e., u_t^⊤ Σ_xy v_t, whose maximum value is ρ_1.

Theorem 1 (Convergence of Algorithm 1). Let µ := min{(u_0^⊤ Σ_xx u*)², (v_0^⊤ Σ_yy v*)²} > 0.⁴ Then for t ≥ ⌈ ρ_1²/(ρ_1²−ρ_2²) · log(1/(µη)) ⌉, we have in Algorithm 1 that min{(u_t^⊤ Σ_xx u*)², (v_t^⊤ Σ_yy v*)²} ≥ 1 − η and u_t^⊤ Σ_xy v_t ≥ ρ_1(1 − 2η).

Remarks. We have assumed a nonzero singular value gap in Theorem 1 to obtain linear convergence in both the alignments and the objective. When there is no singular value gap, the top singular vector pair is not unique and it is no longer meaningful to measure alignments; nonetheless, it is possible to extend our proof to obtain sublinear convergence of the objective in this case.
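The exact updates of Algorithm 1 can be sketched directly in NumPy (synthetic data with a clear singular value gap; each Σ^{-1} application is done with a direct linear solve here, rather than the approximate least squares used later):

```python
import numpy as np

rng = np.random.default_rng(1)
N, dx, dy = 500, 5, 4
X = rng.standard_normal((dx, N))
coeffs = np.array([0.9, 0.5, 0.3, 0.1])                 # distinct correlations -> gap
Y = coeffs[:, None] * X[:dy] + 0.3 * rng.standard_normal((dy, N))
gx = gy = 1e-3

Sxx = X @ X.T / N + gx * np.eye(dx)
Syy = Y @ Y.T / N + gy * np.eye(dy)
Sxy = X @ Y.T / N

u = rng.standard_normal(dx); u /= np.sqrt(u @ Sxx @ u)
v = rng.standard_normal(dy); v /= np.sqrt(v @ Syy @ v)

for _ in range(200):                               # exact ALS iterations
    u_new = np.linalg.solve(Sxx, Sxy @ v)          # u_t <- Sxx^{-1} Sxy v_{t-1}
    v_new = np.linalg.solve(Syy, Sxy.T @ u)        # v_t <- Syy^{-1} Sxy^T u_{t-1}
    u = u_new / np.sqrt(u_new @ Sxx @ u_new)
    v = v_new / np.sqrt(v_new @ Syy @ v_new)

# Compare against the closed-form canonical correlation rho_1.
wx, Vx = np.linalg.eigh(Sxx); isqx = Vx @ np.diag(wx ** -0.5) @ Vx.T
wy, Vy = np.linalg.eigh(Syy); isqy = Vy @ np.diag(wy ** -0.5) @ Vy.T
rho1 = np.linalg.svd(isqx @ Sxy @ isqy, compute_uv=False)[0]
assert abs(u @ Sxy @ v - rho1) < 1e-4
```

With a gap ρ_1 > ρ_2, the iterates converge linearly to the top canonical pair, matching Theorem 1.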
Observe that, besides the normalization to unit length, the basic operation in each iteration of Algorithm 1 has the form

    ũ_t ← Σ_xx^{-1} Σ_xy v_{t-1} = ((1/N) XX^⊤ + γ_x I)^{-1} (1/N) XY^⊤ v_{t-1},

which is equivalent to solving the following regularized least squares (ridge regression) problem:

    min_u (1/2N) ‖u^⊤X − v_{t-1}^⊤Y‖² + (γ_x/2) ‖u‖²  ≡  min_u (1/N) Σ_{i=1}^N (1/2)(u^⊤x_i − v_{t-1}^⊤y_i)² + (γ_x/2) ‖u‖².    (6)

In the next section, we show that it is unnecessary to solve these least squares problems exactly in order to retain the convergence of ALS. This enables us to use state-of-the-art SGD methods to solve (6) to sufficient accuracy, and thus to obtain a globally convergent stochastic algorithm for CCA.

⁴ One can show that µ is bounded away from 0 with high probability under random initialization of (u_0, v_0).

Algorithm 2 The alternating least squares (ALS) meta-algorithm for CCA.
Input: data matrices X ∈ R^{d_x×N}, Y ∈ R^{d_y×N}, regularization parameters (γ_x, γ_y).
Initialize ũ_0 ∈ R^{d_x}, ṽ_0 ∈ R^{d_y}.
ũ_0 ← ũ_0/√(ũ_0^⊤ Σ_xx ũ_0), ṽ_0 ← ṽ_0/√(ṽ_0^⊤ Σ_yy ṽ_0), u_0 ← ũ_0, v_0 ← ṽ_0
for t = 1, 2, ..., T do
  Solve min_u f_t(u) := (1/2N)‖u^⊤X − v_{t-1}^⊤Y‖² + (γ_x/2)‖u‖² with initialization ũ_{t-1}, and output an approximate solution ũ_t satisfying f_t(ũ_t) ≤ min_u f_t(u) + ε.
  Solve min_v g_t(v) := (1/2N)‖v^⊤Y − u_{t-1}^⊤X‖² + (γ_y/2)‖v‖² with initialization ṽ_{t-1}, and output an approximate solution ṽ_t satisfying g_t(ṽ_t) ≤ min_v g_t(v) + ε.
  u_t ← ũ_t/√(ũ_t^⊤ Σ_xx ũ_t), v_t ← ṽ_t/√(ṽ_t^⊤ Σ_yy ṽ_t)
end for
Output: (u_T, v_T) is the approximate solution to CCA.

3 Our algorithms

3.1 Algorithm I: Alternating least squares (ALS) with variance reduction

Our first algorithm consists of two nested loops. The outer loop runs inexact power iterations, while the inner loop uses advanced stochastic optimization methods, e.g., stochastic variance reduced gradient (SVRG, [9]), to compute approximate matrix-vector multiplications. A sketch of our algorithm is provided in Algorithm 2. We make the following observations about this algorithm.
Connection to previous work. At step t, if we optimize f_t(u) and g_t(v) crudely, with a single batch gradient descent step from the initialization (ũ_{t-1}, ṽ_{t-1}), we obtain the following update rule (assuming γ_x = γ_y = 0):

    ũ_t ← ũ_{t-1} − 2ξ X(X^⊤ũ_{t-1} − Y^⊤v_{t-1})/N,    u_t ← ũ_t/√(ũ_t^⊤ Σ_xx ũ_t),
    ṽ_t ← ṽ_{t-1} − 2ξ Y(Y^⊤ṽ_{t-1} − X^⊤u_{t-1})/N,    v_t ← ṽ_t/√(ṽ_t^⊤ Σ_yy ṽ_t),

where ξ > 0 is the stepsize. This coincides with the AppGrad algorithm of [3, Algorithm 3], for which only local convergence is known. Since the objectives f_t(u) and g_t(v) decouple over training samples, it is convenient to apply SGD methods to them; this observation motivated the stochastic CCA algorithms of [3, 4]. We note, however, that no global convergence guarantee was shown for these stochastic CCA algorithms, and the key to our convergent algorithm is to solve the least squares problems to sufficient accuracy.

Warm-start. For different t, the least squares problems f_t(u) differ only in their targets, which change as v_t evolves. Since v_{t-1} is close to v_t (especially near convergence), we may use ũ_t as the initialization for minimizing f_{t+1}(u) with an iterative algorithm.

Normalization. At the end of each outer iteration, Algorithm 2 performs exact normalizations of the form u_t ← ũ_t/√(ũ_t^⊤ Σ_xx ũ_t) to enforce the constraints, where ũ_t^⊤ Σ_xx ũ_t = (1/N)(ũ_t^⊤X)(ũ_t^⊤X)^⊤ + γ_x ‖ũ_t‖² requires computing the projection ũ_t^⊤X of the training set. This introduces no extra computation, because the same projection is needed for the batch gradient used by SVRG (at the beginning of step t+1). In contrast, the stochastic algorithms of [3, 4] (possibly adaptively) estimate the covariance matrices from a minibatch of training samples and use the estimates for normalization; their algorithms normalize after every update and thus must avoid computing the projection of the entire training set frequently.
But as a result, their inexact normalization steps introduce noise to the algorithms. Input sparsity For high dimensional sparse data (such as those used in natural language processing [10]), an advantage of gradient based methods over the closed-form solution is that the former takes into account the input sparsity. For sparse inputs, the time complexity of our algorithm depends on nnz(X, Y), i.e., the total number of nonzeros in the inputs instead of dN. Canonical ridge When (γx, γy) > 0, ft(u) and gt(v) are guaranteed to be strongly convex due to the ℓ2 regularizations, in which case SVRG converges linearly. It is therefore beneficial to use 4 small nonzero regularization for improved computational efficiency, especially for high dimensional datasets where inputs X and Y are approximately low-rank. Convergence By the analysis of inexact power iterations where the least squares problems are solved (or the matrix-vector multiplications are computed) only up to necessary accuracy, we provide the following theorem for the convergence of Algorithm 2 (see its proof in Appendix B). The key to our analysis is to bound the distances between the iterates of Algorithm 2 and that of Algorithm 1 at all time steps, and when the errors of the least squares problems are sufficiently small (at the level of η2), the iterates of the two algorithms have the same quality. Theorem 2 (Convergence of Algorithm 2). Fix T ≥⌈ ρ2 1 ρ2 1−ρ2 2 log  2 µη  ⌉, and set ǫ(T) ≤ η2ρ2 r 128 ·  (2ρ1/ρr)−1 (2ρ1/ρr)T −1 2 in Algorithm 2. Then we have u⊤ T ΣxxuT = v⊤ T ΣyyvT = 1, min (u⊤ T Σxxu∗)2, (v⊤ T Σyyv∗)2 ≥1 −η, and u⊤ T ΣxyvT ≥ρ1(1 −2η). 3.1.1 Stochastic optimization of regularized least squares We now discuss the inner loop of Algorithm 2, which approximately solves problems of the form (6). Owing to the finite-sum structure of (6), several stochastic optimization methods such as SAG [11], SDCA [12] and SVRG [9], provide linear convergence rates. 
All these algorithms can be readily applied to (6); we choose SVRG since it is memory efficient and easy to implement. We also apply recently developed acceleration techniques for first-order optimization methods [13, 14] to obtain an accelerated SVRG (ASVRG) algorithm. We give a sketch of SVRG for (6) in Appendix C. Note that $f(u) = \frac{1}{N}\sum_{i=1}^N f^i(u)$, where each component $f^i(u) = \frac{1}{2}\left(u^\top x_i - v^\top y_i\right)^2 + \frac{\gamma_x}{2}\|u\|^2$ is $\|x_i\|^2$-smooth, and f(u) is $\sigma_{\min}(\Sigma_{xx})$-strongly convex⁵ with $\sigma_{\min}(\Sigma_{xx}) \ge \gamma_x$. We show in Appendix D that the initial suboptimality for minimizing f_t(u) is upper-bounded by a constant when using the warm-starts. We quote the convergence rates of SVRG [9] and ASVRG [14] below.

Lemma 3. The SVRG algorithm [9] finds a vector ũ satisfying⁶ $\mathbb{E}[f(\tilde{u})] - \min_u f(u) \le \epsilon$ in time $O\left(d_x (N + \kappa_x) \log\frac{1}{\epsilon}\right)$, where $\kappa_x = \frac{\max_i \|x_i\|^2}{\sigma_{\min}(\Sigma_{xx})}$. The ASVRG algorithm [14] finds such a solution in time $O\left(d_x \sqrt{N \kappa_x} \log\frac{1}{\epsilon}\right)$.

Remarks As mentioned in [14], the accelerated version provides a speedup over plain SVRG only when κx > N, and we show only the dominant term in the above complexity. By combining the iteration complexity of the outer loop (Theorem 2) with the time complexity of the inner loop (Lemma 3), we obtain a total time complexity of $\tilde{O}\left(d (N + \kappa) \left(\frac{\rho_1^2}{\rho_1^2 - \rho_2^2}\right)^2 \log^2\frac{1}{\eta}\right)$ for ALS+SVRG and $\tilde{O}\left(d \sqrt{N\kappa} \left(\frac{\rho_1^2}{\rho_1^2 - \rho_2^2}\right)^2 \log^2\frac{1}{\eta}\right)$ for ALS+ASVRG, where $\kappa := \max\left(\frac{\max_i \|x_i\|^2}{\sigma_{\min}(\Sigma_{xx})}, \frac{\max_i \|y_i\|^2}{\sigma_{\min}(\Sigma_{yy})}\right)$ and $\tilde{O}(\cdot)$ hides poly-logarithmic dependences on 1/μ and 1/ρr. Our algorithm does not require the initialization to be close to the optimum and converges globally. For comparison, the locally convergent AppGrad has a time complexity [3, Theorem 2.1] of $\tilde{O}\left(d N \kappa' \frac{\rho_1^2}{\rho_1^2 - \rho_2^2} \log\frac{1}{\eta}\right)$, where $\kappa' := \max\left(\frac{\sigma_{\max}(\Sigma_{xx})}{\sigma_{\min}(\Sigma_{xx})}, \frac{\sigma_{\max}(\Sigma_{yy})}{\sigma_{\min}(\Sigma_{yy})}\right)$. Note that in this complexity the dataset size N and the least squares condition number κ′ are multiplied together, because AppGrad essentially uses batch gradient descent as its least squares solver.
Within our framework, we can use accelerated gradient descent (AGD, [15]) instead and obtain a globally convergent algorithm with a total time complexity of $\tilde{O}\left(d N \sqrt{\kappa'} \left(\frac{\rho_1^2}{\rho_1^2 - \rho_2^2}\right)^2 \log^2\frac{1}{\eta}\right)$.

3.2 Algorithm II: Shift-and-invert preconditioning (SI) with variance reduction

The second algorithm is inspired by the shift-and-invert preconditioning method for PCA [16, 17]. Instead of running power iterations on C as defined in (3), we will be running power iterations on
$M_\lambda = (\lambda I - C)^{-1} = \begin{pmatrix} \lambda I & -T \\ -T^\top & \lambda I \end{pmatrix}^{-1} \in \mathbb{R}^{d \times d}, \qquad (7)$
where λ > ρ1. It is straightforward to check that M_λ is positive definite and that its eigenvalues are
$\frac{1}{\lambda - \rho_1} \ge \cdots \ge \frac{1}{\lambda - \rho_r} \ge \cdots \ge \frac{1}{\lambda + \rho_r} \ge \cdots \ge \frac{1}{\lambda + \rho_1},$
with eigenvectors
$\frac{1}{\sqrt{2}}\begin{pmatrix} a_1 \\ b_1 \end{pmatrix}, \ldots, \frac{1}{\sqrt{2}}\begin{pmatrix} a_r \\ b_r \end{pmatrix}, \ldots, \frac{1}{\sqrt{2}}\begin{pmatrix} a_r \\ -b_r \end{pmatrix}, \ldots, \frac{1}{\sqrt{2}}\begin{pmatrix} a_1 \\ -b_1 \end{pmatrix}.$
The main idea behind shift-and-invert power iterations is that when λ − ρ1 = c(ρ1 − ρ2) with c ∼ O(1), the relative eigenvalue gap of M_λ is large, so power iterations on M_λ converge quickly. Our shift-and-invert preconditioning (SI) meta-algorithm for CCA is sketched in Algorithm 3 (in Appendix E due to the space limit) and proceeds in two phases.

⁵We omit the regularization in these constants, which is typically very small, to keep the expressions concise.
⁶The expectation is taken over the random sampling of component functions. High probability error bounds can be obtained using Markov's inequality.

3.2.1 Phase I: shift-and-invert preconditioning for eigenvectors of M_λ

Using an estimate Δ̃ of the singular value gap and starting from an over-estimate of ρ1 (1 + Δ̃ suffices), the algorithm gradually shrinks λ(s) towards ρ1 by crudely estimating the leading eigenvector/eigenvalue of each M_{λ(s)} along the way and shrinking the gap λ(s) − ρ1, until it reaches a λ(f) ∈ (ρ1, ρ1 + c(ρ1 − ρ2)) where c ∼ O(1). Afterwards, the algorithm fixes λ(f) and runs inexact power iterations on M_{λ(f)} to obtain an accurate estimate of its leading eigenvector.
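Stripped of all sampling and inexactness, Phase I rests on classical shift-and-invert power iteration. The dense-matrix sketch below is our illustration, not the paper's Algorithm 3: the paper never forms the inverse and instead solves each linear system only approximately with SGD, as described next.

```python
import numpy as np

def shift_invert_power(C, lam, n_iters=50, seed=0):
    """Power iteration on M = (lam*I - C)^{-1} for symmetric C and
    lam > lambda_max(C); returns an estimate of the top eigenvector
    of C.  A small gap lam - lambda_max(C) makes the relative
    eigenvalue gap of M large, so convergence is fast."""
    rng = np.random.default_rng(seed)
    d = C.shape[0]
    A = lam * np.eye(d) - C          # positive definite by assumption
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(n_iters):
        w = np.linalg.solve(A, w)    # one multiplication by M
        w /= np.linalg.norm(w)
    return w
```

Each `np.linalg.solve` call here plays the role of one inexact matrix-vector multiplication by M_λ in the meta-algorithm.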
Note that in this phase, power iterations implicitly operate on the concatenated variables
$\frac{1}{\sqrt{2}}\begin{pmatrix} \Sigma_{xx}^{1/2}\tilde{u}_t \\ \Sigma_{yy}^{1/2}\tilde{v}_t \end{pmatrix} \quad \text{and} \quad \frac{1}{\sqrt{2}}\begin{pmatrix} \Sigma_{xx}^{1/2} u_t \\ \Sigma_{yy}^{1/2} v_t \end{pmatrix}$
in $\mathbb{R}^d$ (but without ever computing $\Sigma_{xx}^{1/2}$ and $\Sigma_{yy}^{1/2}$).

Matrix-vector multiplication The matrix-vector multiplications in Phase I have the form
$\begin{pmatrix} \tilde{u}_t \\ \tilde{v}_t \end{pmatrix} \leftarrow \begin{pmatrix} \lambda \Sigma_{xx} & -\Sigma_{xy} \\ -\Sigma_{xy}^\top & \lambda \Sigma_{yy} \end{pmatrix}^{-1} \begin{pmatrix} \Sigma_{xx} & \\ & \Sigma_{yy} \end{pmatrix} \begin{pmatrix} u_{t-1} \\ v_{t-1} \end{pmatrix}, \qquad (8)$
where λ varies over time in order to locate λ(f). This is equivalent to solving
$\begin{pmatrix} \tilde{u}_t \\ \tilde{v}_t \end{pmatrix} \leftarrow \min_{u,v}\ \frac{1}{2} \begin{pmatrix} u^\top & v^\top \end{pmatrix} \begin{pmatrix} \lambda \Sigma_{xx} & -\Sigma_{xy} \\ -\Sigma_{xy}^\top & \lambda \Sigma_{yy} \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} - u^\top \Sigma_{xx} u_{t-1} - v^\top \Sigma_{yy} v_{t-1}.$
And as in ALS, this least squares problem can be further written as a finite sum:
$\min_{u,v}\ h_t(u, v) = \frac{1}{N} \sum_{i=1}^N h_t^i(u, v), \qquad (9)$
where
$h_t^i(u, v) = \frac{1}{2} \begin{pmatrix} u^\top & v^\top \end{pmatrix} \begin{pmatrix} \lambda (x_i x_i^\top + \gamma_x I) & -x_i y_i^\top \\ -y_i x_i^\top & \lambda (y_i y_i^\top + \gamma_y I) \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} - u^\top \Sigma_{xx} u_{t-1} - v^\top \Sigma_{yy} v_{t-1}.$
We could directly apply SGD methods to this problem as before.

Normalization The normalization steps in Phase I have the form
$\begin{pmatrix} u_t \\ v_t \end{pmatrix} \leftarrow \sqrt{2}\, \begin{pmatrix} \tilde{u}_t \\ \tilde{v}_t \end{pmatrix} \Big/ \sqrt{\tilde{u}_t^\top \Sigma_{xx} \tilde{u}_t + \tilde{v}_t^\top \Sigma_{yy} \tilde{v}_t},$
and so the following remains true for the normalized iterates in Phase I:
$u_t^\top \Sigma_{xx} u_t + v_t^\top \Sigma_{yy} v_t = 2, \quad \text{for } t = 1, \ldots, T. \qquad (10)$
Unlike the normalizations in ALS, the iterates u_t and v_t in Phase I do not satisfy the original CCA constraints; this is taken care of in Phase II. We have the following convergence guarantee for Phase I (see its proof in Appendix F).

Theorem 4 (Convergence of Algorithm 3, Phase I). Let $\Delta = \rho_1 - \rho_2 \in (0, 1]$, $\tilde{\mu} := \frac{1}{4}\left(u_0^\top \Sigma_{xx} u^* + v_0^\top \Sigma_{yy} v^*\right)^2 > 0$, and $\tilde{\Delta} \in [c_1 \Delta, c_2 \Delta]$ where $0 < c_1 \le c_2 \le 1$. Set $m_1 = \left\lceil 8 \log\frac{16}{\tilde{\mu}} \right\rceil$, $m_2 = \left\lceil \frac{5}{4} \log\frac{128}{\tilde{\mu}\eta^2} \right\rceil$, and
$\tilde{\epsilon} \le \min\left( \frac{1}{3084} \left(\frac{\tilde{\Delta}}{18}\right)^{m_1 - 1},\ \frac{\eta^4}{410} \left(\frac{\tilde{\Delta}}{18}\right)^{m_2 - 1} \right)$
in Algorithm 3. Then the $(u_T, v_T)$ output by Phase I of Algorithm 3 satisfies (10) and
$\frac{1}{4}\left(u_T^\top \Sigma_{xx} u^* + v_T^\top \Sigma_{yy} v^*\right)^2 \ge 1 - \frac{\eta^2}{64}, \qquad (11)$
and the number of calls to the least squares solver of $h_t(u, v)$ is $O\left(\log\frac{1}{\tilde{\mu}} \log\frac{1}{\Delta} + \log\frac{1}{\tilde{\mu}\eta^2}\right)$.

3.2.2 Phase II: final normalization

In order to satisfy the CCA constraints, we perform one last normalization:
$\hat{u} \leftarrow u_T \Big/ \sqrt{u_T^\top \Sigma_{xx} u_T}, \qquad \hat{v} \leftarrow v_T \Big/ \sqrt{v_T^\top \Sigma_{yy} v_T}.$
(12)
We output (û, v̂) as our final approximate solution to (1). We show that this step does not cause much loss in the alignments, as stated below (see its proof in Appendix G).

Theorem 5 (Convergence of Algorithm 3, Phase II). Let Phase I of Algorithm 3 output $(u_T, v_T)$ satisfying (11). Then after (12), we obtain an approximate solution $(\hat{u}, \hat{v})$ to (1) such that $\hat{u}^\top \Sigma_{xx} \hat{u} = \hat{v}^\top \Sigma_{yy} \hat{v} = 1$, $\min\left( (\hat{u}^\top \Sigma_{xx} u^*)^2,\ (\hat{v}^\top \Sigma_{yy} v^*)^2 \right) \ge 1 - \eta$, and $\hat{u}^\top \Sigma_{xy} \hat{v} \ge \rho_1 (1 - 2\eta)$.

3.2.3 Time complexity

We have shown in Theorem 4 that Phase I only approximately solves a small number of instances of (9). The normalization steps (10) require computing projections of the training set, which are reused for computing batch gradients of (9). The final normalization (12) is done only once and costs O(dN). Therefore, the time complexity of our algorithm comes mainly from solving the least squares problems (9) using SGD methods in a black-box fashion, and the time complexity of these SGD methods depends on the condition number of (9). Denote
$Q_\lambda = \begin{pmatrix} \lambda \Sigma_{xx} & -\Sigma_{xy} \\ -\Sigma_{xy}^\top & \lambda \Sigma_{yy} \end{pmatrix} = \begin{pmatrix} \Sigma_{xx}^{1/2} & \\ & \Sigma_{yy}^{1/2} \end{pmatrix} \begin{pmatrix} \lambda I & -T \\ -T^\top & \lambda I \end{pmatrix} \begin{pmatrix} \Sigma_{xx}^{1/2} & \\ & \Sigma_{yy}^{1/2} \end{pmatrix}. \qquad (13)$
It is clear that
$\sigma_{\max}(Q_\lambda) \le (\lambda + \rho_1) \cdot \max\left(\sigma_{\max}(\Sigma_{xx}), \sigma_{\max}(\Sigma_{yy})\right), \qquad \sigma_{\min}(Q_\lambda) \ge (\lambda - \rho_1) \cdot \min\left(\sigma_{\min}(\Sigma_{xx}), \sigma_{\min}(\Sigma_{yy})\right).$
We have shown in the proof of Theorem 4 that $\frac{\lambda + \rho_1}{\lambda - \rho_1} \le \frac{9}{\tilde{\Delta}} \le \frac{9}{c_1 \Delta}$ throughout Algorithm 3 (cf. Lemma 10, Appendix F.2), and thus the condition number for AGD is $\frac{\sigma_{\max}(Q_\lambda)}{\sigma_{\min}(Q_\lambda)} \le \frac{9/c_1}{\rho_1 - \rho_2} \tilde{\kappa}'$, where $\tilde{\kappa}' := \frac{\max(\sigma_{\max}(\Sigma_{xx}), \sigma_{\max}(\Sigma_{yy}))}{\min(\sigma_{\min}(\Sigma_{xx}), \sigma_{\min}(\Sigma_{yy}))}$. For SVRG/ASVRG, the relevant condition number depends on the gradient Lipschitz constant of the individual components. We show in Appendix H (Lemma 12) that this condition number is at most $\frac{9/c_1}{\rho_1 - \rho_2} \tilde{\kappa}$, where $\tilde{\kappa} := \frac{\max_i \max(\|x_i\|^2, \|y_i\|^2)}{\min(\sigma_{\min}(\Sigma_{xx}), \sigma_{\min}(\Sigma_{yy}))}$. An interesting issue for SVRG/ASVRG is that, depending on the value of λ, the individual components $h_t^i(u, v)$ may be nonconvex.
If λ ≥ 1, each component is still guaranteed to be convex; otherwise, some components may be nonconvex, with the overall average $\frac{1}{N}\sum_{i=1}^N h_t^i$ remaining convex. In the latter case, we use the modified analysis of SVRG [16, Appendix B] for its time complexity. We use warm-start in SI as in ALS, and the initial suboptimality for each subproblem can be bounded similarly. The total time complexities of our SI meta-algorithm are given in Table 1. Note that κ̃ (or κ̃′) and 1/(ρ1 − ρ2) are multiplied together, giving the effective condition number. When using SVRG as the least squares solver, we obtain a total time complexity of $\tilde{O}\left(d\left(N + \tilde{\kappa}\frac{1}{\rho_1 - \rho_2}\right) \log^2\frac{1}{\eta}\right)$ if all components are convex, and $\tilde{O}\left(d\left(N + \left(\tilde{\kappa}\frac{1}{\rho_1 - \rho_2}\right)^2\right) \log^2\frac{1}{\eta}\right)$ otherwise. When using ASVRG, we have $\tilde{O}\left(d\sqrt{N}\sqrt{\tilde{\kappa}}\sqrt{\tfrac{1}{\rho_1 - \rho_2}} \log^2\frac{1}{\eta}\right)$ if all components are convex, and $\tilde{O}\left(d N^{3/4}\sqrt{\tilde{\kappa}}\sqrt{\tfrac{1}{\rho_1 - \rho_2}} \log^2\frac{1}{\eta}\right)$ otherwise. Here $\tilde{O}(\cdot)$ hides poly-logarithmic dependences on $1/\tilde{\mu}$ and $1/\Delta$. It is remarkable that the SI meta-algorithm is able to separate the dependence on the dataset size N from the other parameters in the time complexities.

Parallel work In a parallel work [6], the authors independently proposed a similar ALS algorithm⁷ and solve the least squares problems using AGD. The time complexity of their algorithm for extracting the first canonical correlation is $\tilde{O}\left(d N \sqrt{\kappa'} \frac{\rho_1^2}{\rho_1^2 - \rho_2^2} \log\frac{1}{\eta}\right)$, which has linear dependence on $\frac{\rho_1^2}{\rho_1^2 - \rho_2^2} \log\frac{1}{\eta}$ (so their algorithm is linearly convergent, whereas our complexity for ALS+AGD has quadratic dependence on this factor), but typically worse dependence on N and κ′ (see the remarks in Section 3.1.1). Moreover, our SI algorithm tends to significantly outperform ALS both theoretically and empirically. It is future work to remove the extra $\log\frac{1}{\eta}$ dependence from our analysis.

⁷Our arXiv preprint for the ALS meta-algorithm was posted before their paper was accepted to ICML 2016.
[Figure 1 here: suboptimality vs. # passes curves for AppGrad, S-AppGrad, CCALin, ALS-VR, ALS-AVR, SI-VR and SI-AVR; rows are the datasets Mediamill, JW11 and MNIST, columns the regularization settings γx = γy = 10⁻⁵, 10⁻⁴, 10⁻³, 10⁻². Per-column values: Mediamill: κ′ = 53340, 5335, 534.4, 54.34 and δ = 5.345, 4.924, 4.256, 2.548; JW11: κ′ = 2699000, 332800, 34070, 3416 and δ = 11.22, 11.10, 10.58, 9.082; MNIST: κ′ = 2235000, 223500, 22350, 2236 and δ = 12.82, 12.75, 12.30, 9.874.]

Figure 1: Comparison of suboptimality vs. # passes for different algorithms. For each dataset and regularization parameters (γx, γy), we give $\kappa' = \max\left(\frac{\sigma_{\max}(\Sigma_{xx})}{\sigma_{\min}(\Sigma_{xx})}, \frac{\sigma_{\max}(\Sigma_{yy})}{\sigma_{\min}(\Sigma_{yy})}\right)$ and $\delta = \frac{\rho_1^2}{\rho_1^2 - \rho_2^2}$.
Extension to multi-dimensional projections To extend our algorithms to L-dimensional projections, we can extract the dimensions sequentially and remove the explained correlation from Σxy each time we extract a new dimension [18]. For the ALS meta-algorithm, a cleaner approach is to extract the L dimensions simultaneously using (inexact) orthogonal iterations [8], in which case the subproblems become multi-dimensional regressions and our normalization steps take the form $U_t \leftarrow \tilde{U}_t \left(\tilde{U}_t^\top \Sigma_{xx} \tilde{U}_t\right)^{-1/2}$ (the same normalization is used by [3, 4]). This normalization involves the eigenvalue decomposition of an L × L matrix and can be computed exactly, since we typically look for low dimensional projections. Our analysis for L = 1 can be extended to this scenario, and the convergence rate of ALS then depends on the gap between ρL and ρL+1.

4 Experiments

We demonstrate the proposed algorithms, namely ALS-VR, ALS-AVR, SI-VR, and SI-AVR, abbreviated as “meta-algorithm – least squares solver” (VR for SVRG, and AVR for ASVRG), on three real-world datasets: Mediamill [19] (N = 3 × 10⁴), JW11 [20] (N = 3 × 10⁴), and MNIST [21] (N = 6 × 10⁴). We compare our algorithms with batch AppGrad and its stochastic version s-AppGrad [3], as well as the CCALin algorithm of the parallel work [6]. For each algorithm, we compare the canonical correlation estimated by the iterates after different numbers of passes over the data with that of the exact solution computed by SVD. For each dataset, we vary the regularization parameters γx = γy over {10⁻⁵, 10⁻⁴, 10⁻³, 10⁻²} to vary the least squares condition numbers; larger regularization leads to better conditioning. We plot the suboptimality in objective vs. # passes for each algorithm in Figure 1. Experimental details (e.g., SVRG parameters) are given in Appendix I. We make the following observations from the results. First, the proposed stochastic algorithms significantly outperform the batch gradient based methods AppGrad/CCALin.
This is because the least squares condition numbers for these datasets are large, and SVRG enables us to decouple the dependences on the dataset size N and the condition number κ in the time complexity. Second, SI-VR converges faster than ALS-VR as it further decouples the dependence on N and the singular value gap of T. Third, inexact normalizations keep the s-AppGrad algorithm from converging to an accurate solution. Finally, ASVRG improves over SVRG when the condition number is large.

Acknowledgments Research partially supported by NSF BIGDATA grant 1546500.

References
[1] H. Hotelling. Relations between two sets of variates. Biometrika, 28(3/4):321–377, 1936.
[2] H. D. Vinod. Canonical ridge and econometrics of joint production. J. Econometrics, 1976.
[3] Z. Ma, Y. Lu, and D. Foster. Finding linear structure in large datasets with scalable canonical correlation analysis. In ICML, 2015.
[4] W. Wang, R. Arora, N. Srebro, and K. Livescu. Stochastic optimization for deep CCA via nonlinear orthogonal iterations. In ALLERTON, 2015.
[5] B. Xie, Y. Liang, and L. Song. Scale up nonlinear component analysis with doubly stochastic gradients. In NIPS, 2015.
[6] R. Ge, C. Jin, S. Kakade, P. Netrapalli, and A. Sidford. Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis. arXiv, April 13, 2016.
[7] G. Golub and H. Zha. The canonical correlations of matrix pairs and their numerical computation. In Linear Algebra for Signal Processing, pages 27–49. 1995.
[8] G. Golub and C. van Loan. Matrix Computations. Third edition, 1996.
[9] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013.
[10] Y. Lu and D. Foster. Large scale canonical correlation analysis with iterative least squares. In NIPS, 2014.
[11] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient.
Technical Report HAL 00860051, École Normale Supérieure, 2013.
[12] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 2013.
[13] R. Frostig, R. Ge, S. Kakade, and A. Sidford. Un-regularizing: Approximate proximal point and faster stochastic algorithms for empirical risk minimization. In ICML, 2015.
[14] H. Lin, J. Mairal, and Z. Harchaoui. A universal catalyst for first-order optimization. In NIPS, 2015.
[15] Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Springer, 2004.
[16] D. Garber and E. Hazan. Fast and simple PCA via convex optimization. arXiv, 2015.
[17] C. Jin, S. Kakade, C. Musco, P. Netrapalli, and A. Sidford. Robust shift-and-invert preconditioning: Faster and more sample efficient algorithms for eigenvector computation. 2015.
[18] D. Witten, R. Tibshirani, and T. Hastie. A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics, 2009.
[19] C. Snoek, M. Worring, J. van Gemert, J. Geusebroek, and A. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In MULTIMEDIA, 2006.
[20] J. Westbury. X-Ray Microbeam Speech Production Database User's Handbook, 1994.
[21] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proc. IEEE, 86(11):2278–2324, 1998.
[22] M. Warmuth and D. Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension. Journal of Machine Learning Research, 2008.
[23] R. Arora, A. Cotter, K. Livescu, and N. Srebro. Stochastic optimization for PCA and PLS. In ALLERTON, 2012.
[24] A. Balsubramani, S. Dasgupta, and Y. Freund. The fast convergence of incremental PCA. In NIPS, 2013.
[25] O. Shamir. A stochastic PCA and SVD algorithm with an exponential convergence rate. In ICML, 2015.
[26] F. Yger, M. Berar, G. Gasso, and A. Rakotomamonjy. Adaptive canonical correlation analysis based on matrix manifolds. In ICML, 2012.
How Deep is the Feature Analysis underlying Rapid Visual Categorization?

Sven Eberhardt∗ Jonah Cader∗ Thomas Serre
Department of Cognitive Linguistic & Psychological Sciences
Brown Institute for Brain Sciences
Brown University
Providence, RI 02818
{sven2,jonah_cader,thomas_serre}@brown.edu

Abstract

Rapid categorization paradigms have a long history in experimental psychology: Characterized by short presentation times and speeded behavioral responses, these tasks highlight the efficiency with which our visual system processes natural object categories. Previous studies have shown that feed-forward hierarchical models of the visual cortex provide a good fit to human visual decisions. At the same time, recent work in computer vision has demonstrated significant gains in object recognition accuracy with increasingly deep hierarchical architectures. But it is unclear how well these models account for human visual decisions and what they may reveal about the underlying brain processes. We have conducted a large-scale psychophysics study to assess the correlation between computational models and human behavioral responses on a rapid animal vs. non-animal categorization task. We considered visual representations of varying complexity by analyzing the output of different stages of processing in three state-of-the-art deep networks. We found that recognition accuracy increases with higher stages of visual processing (higher-level stages indeed outperforming human participants on the same task) but that human decisions agree best with predictions from intermediate stages. Overall, these results suggest that human participants may rely on visual features of intermediate complexity and that the complexity of visual representations afforded by modern deep network models may exceed the complexity of those used by human participants during rapid categorization.

1 Introduction

Our visual system is remarkably fast and accurate.
The past decades of research in visual neuroscience have demonstrated that visual categorization is possible for complex natural scenes viewed in rapid presentations. Participants can reliably detect and later remember visual scenes embedded in continuous streams of images with exposure times as low as 100 ms [see 15, for review]. Observers can also reliably categorize animal vs. non-animal images (and other classes of objects) even when flashed for 20 ms or less [see 6, for review]. Unlike normal everyday vision, which involves eye movements and shifts of attention, rapid visual categorization is assumed to involve a single feedforward sweep of visual information [see 19, for review] and engages our core object recognition system [reviewed in 5]. Interestingly, incorrect responses during rapid categorization tasks are not uniformly distributed across stimuli (as one would expect from random motor errors) but tend to follow a specific pattern reflecting an underlying visual strategy [1].

∗These authors contributed equally.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Various computational models have been proposed to describe the underlying feature analysis [see 2, for review]. In particular, a feedforward hierarchical model constrained by the anatomy and the physiology of the visual cortex was shown to agree well with human behavioral responses [16]. In recent years, however, the field of computer vision has championed the development of increasingly deep and accurate models – pushing the state of the art on a range of categorization problems from speech and music to text, genome and image categorization [see 12, for a recent review]. From AlexNet [11] to VGG [17] and Microsoft CNTK [8], over the years, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has been won by progressively deeper architectures.
Some of the ILSVRC best performing architectures now include 150 layers of processing [8] and even 1,000 layers for other recognition challenges [9] – arguably orders of magnitude more than the visual system (estimated to be O(10), see [16]). Despite the absence of neuroscience constraints on modern deep learning networks, recent work has shown that these architectures explain neural data better than earlier models [reviewed in 20] and are starting to match human levels of accuracy for difficult object categorization tasks [8]. This raises the question as to whether recent deeper network architectures better account for speeded behavioral responses during rapid categorization tasks or whether they have actually become too deep – instead deviating from human responses. Here, we describe a rapid animal vs. non-animal visual categorization experiment that probes this question. We considered visual representations of varying complexity by analyzing the output of different stages of processing in state-of-the-art deep networks [11, 17]. We show that while recognition accuracy increases with higher stages of visual processing (higher-level stages indeed outperforming human participants on the same task), human decisions agreed best with predictions from intermediate stages.

2 Methods

Image dataset A large set of (target) animal and (distractor) non-animal stimuli was created by sampling images from ImageNet [4]. We balanced the number of images across basic categories from 14 high-level synsets to curb biases that are inherent in Internet images. (We used the invertebrate, bird, amphibian, fish, reptile, mammal, domestic cat, dog, structure, instrumentation, consumer goods, plant, geological formation, and natural object subtrees.) To reduce the prominence of low-level visual cues, images containing animals and objects on a white background were discarded. All pictures were converted to grayscale and normalized for illumination.
Images less than 256 pixels in either dimension were similarly removed, and all other images were cropped to a square and scaled to 256 × 256 pixels. All images were manually inspected; mislabeled images and images containing humans were removed from the set (∼17% of all images). Finally, we drew stimuli uniformly (without replacement) from all basic categories to create balanced sets of 300 images. Each set contained 150 target images (half mammal and half non-mammal animal images) and 150 distractors (half artificial objects and half natural scenes). We created 7 such sets for a total of 2,100 images used for the psychophysics experiment described below.

Participants Rapid visual categorization data was gathered from 281 participants using the Amazon Mechanical Turk (AMT) platform (www.mturk.com). AMT is a powerful tool that allows the recruitment of massive trials of anonymous workers screened with a variety of criteria [3]. All participants provided informed consent electronically and were compensated $4.00 for their time (∼20–30 min per image set, 300 trials). The protocol was approved by the University IRB and was carried out in accordance with the provisions of the World Medical Association Declaration of Helsinki.

Experimental procedure On each trial, the experiment ran as follows: on a white background, (1) a fixation cross appeared for a variable time (1,100–1,600 ms); (2) a stimulus was presented for 50 ms. The order of image presentations was randomized. Participants were instructed to answer as fast and as accurately as possible by pressing either the “S” or “L” key depending on whether they saw an animal (target) or non-animal (distractor) image. Key assignment was randomized for each participant.
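A minimal sketch of the stimulus preprocessing described above (grayscale conversion, rejection of small images, center-cropping to a square, rescaling to 256 × 256) might look as follows. This is our reconstruction in NumPy with nearest-neighbour resampling standing in for proper anti-aliased scaling, not the authors' actual pipeline, and it omits the illumination normalization step.

```python
import numpy as np

def preprocess(img, size=256):
    """img: H x W x 3 uint8 RGB array.  Returns a size x size grayscale
    float array, or None if the image is too small (discarded)."""
    h, w = img.shape[:2]
    if min(h, w) < size:
        return None                              # image too small: discard
    gray = img.astype(float) @ np.array([0.299, 0.587, 0.114])
    s = min(h, w)                                # center-crop to a square
    top, left = (h - s) // 2, (w - s) // 2
    square = gray[top:top + s, left:left + s]
    idx = (np.arange(size) * s) // size          # nearest-neighbour grid
    return square[np.ix_(idx, idx)]
```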
Figure 1: Experimental paradigm and stimulus set: (top) Each trial began with a fixation cross (1,100–1,600 ms), followed by an image presentation (∼50 ms). Participants were forced to answer within 500 ms. A message appeared when participants failed to respond in the allotted time. (bottom) Sample stimuli from the balanced set of animal and non-animal images (n=2,100).

A fast answer time paradigm was used in favor of masking to avoid possible performance biases between different classes caused by the mask [6, 15]. Participants were forced to respond within 500 ms (a message was displayed in the absence of a response past the response deadline). In past studies, this has been shown to yield reliable behavioral data [e.g. 18]. We also ran a control to verify that the maximum response time did not qualitatively affect our results. An illustration of the experimental paradigm is shown in Figure 1. At the end of each block, participants received feedback about their accuracy. An experiment started with a short practice during which participants were familiarized with the task (stimulus presentation was slowed down and participants were provided feedback on their responses). No other feedback was provided to participants during the experiment. We used the psiTurk framework [13] combined with custom JavaScript functions. Each trial (i.e., the fixation cross followed by the stimulus) was converted to an HTML5-compatible video format to provide the fastest reliable presentation time possible in a web browser. Videos were generated to include the initial fixation cross and the post-presentation answer screen with the proper timing as described above. Videos were preloaded before each trial to ensure reliable image presentation times over the Internet. We used a photo-diode to assess the reliability of the timing on different machines, including different OSes, browsers and screens, and found the timing to be accurate to ∼10 ms.
Figure 2: Model decision scores: A classifier (linear SVM) is trained on visual features corresponding to individual layers from representative deep networks. The classifier learns a decision boundary (shown in red) that best discriminates target/animal and distractor/non-animal images. Here, we consider the signed distance from this decision boundary (blue dotted lines) as a measure of the model's confidence in the classification of individual images. A larger distance indicates higher confidence. For example, while images (a) and (b) are both correctly classified, the model's confidence for image (a), correctly classified as animal, is higher than that of (b), correctly classified as non-animal. Incorrectly classified images, such as (c), are assigned negative scores corresponding to how far onto the wrong side of the boundary they fall.

Images were shown at a resolution of 256 × 256. We estimate this to correspond to a stimulus size between approximately 5° and 11° of visual angle, depending on the participants' screen size and specific seating arrangement. The subject pool was limited to users connecting from the United States using either the Firefox or Chrome browser on a non-mobile device. Subjects also needed to have a minimal average approval rating of 95% on past Mechanical Turk tasks. As stated above, we ran 7 experiments altogether for a total of 2,100 unique images. Each experiment lasted 20–30 min and contained a total of 300 trials divided into 6 blocks (50 image presentations / trials each). Six of the experiments followed the standard experimental paradigm described above (1,800 images and 204 participants). The other 300 images and 77 participants were reserved for a control experiment in which the maximum reaction time per block was set to 500 ms, 1,000 ms, and 1,500 ms for two blocks each. (See below.)
Computational models We tested the accuracy of individual layers from state-of-the-art deep networks including AlexNet [11], VGG16 and VGG19 [17]. Feature responses were extracted from different processing stages (Caffe implementation [10] using pre-trained weights). For fully connected layers, features were taken as is; for convolutional layers, a subset of 4,096 features was extracted via random sampling. Model decisions were based on the output of a linear SVM (scikit-learn [14] implementation) trained on 80,000 ImageNet images (C regularization parameter optimized by cross-validation). Qualitatively similar results were obtained with regularized logistic regression. Feature layer accuracy was computed from SVM performance. Model confidence for individual test stimuli was defined as the estimated distance from the decision boundary (see Figure 2). A similar confidence score was computed for human participants by considering the fraction of correct responses for individual images. Spearman's rho rank-order correlation (r_s) was computed between classifier confidence outputs and human decision scores. Bootstrapped 95% confidence intervals (CIs) were calculated on human-model correlations and human classification scores. Bootstrap runs (n=300) were based on 180 participants sampled with replacement from the subject pool. CIs were computed by considering the bottom 2.5% and top 97.5% values as upper and lower bounds.
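The per-layer analysis above can be sketched as follows, using scikit-learn and SciPy as the paper does; the function name, toy data, and sign convention (signed distance toward the true class, negative for misclassified images, as in Figure 2) are our illustration rather than the authors' code.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC

def model_human_agreement(F_train, y_train, F_test, y_test, human_scores, C=1.0):
    """Train a linear SVM on one layer's features; score each test image
    by its signed distance to the decision boundary (negative if
    misclassified) and return Spearman's rho against per-image human
    scores (fraction of correct human responses)."""
    clf = LinearSVC(C=C).fit(F_train, y_train)
    margin = clf.decision_function(F_test)
    conf = margin * np.where(y_test == 1, 1.0, -1.0)  # sign toward true class
    rho, _ = spearmanr(conf, human_scores)
    return rho
```

In the paper, this correlation is computed per layer and per network, and bootstrapped over participants to obtain CIs.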
Accuracy for all models increased monotonically (near linearly) as a function of depth to reach near perfect accuracy for the top layers for the best networks (fine-tuned VGG16). Indeed, all models exceeded human accuracy on this rapid animal vs. non-animal categorization task. Fine-tuning did improve test accuracy slightly from 95.0% correct to 97.0% correct on VGG16 highest layer, but the performance of all networks remained high in the absence of any fine-tuning. To benchmark these models, we assessed human participants’ accuracy and reaction times (RTs) on this animal vs. non-animal categorization task. On average, participants responded correctly with an accuracy of 77.4% (± 1.4%). These corresponded to an average d’ of 1.06 (± 0.06). Trials for which participants failed to answer before the deadline were excluded from the evaluation (13.7% of the total number of trials). The mean RT for correct responses was 429 ms (± 103 ms standard deviation). We computed the minimum reaction time MinRT defined as the first time bin for which correct responses start to significantly outnumber incorrect responses [6]. The MinRT is often considered a floor limit for the entire visuo-motor sequence (feature analysis, decision making, and motor response) and could be completed within a temporal window as short as 370 ms ± 75 ms. We computed this using a binomial test (p < 0.05) on classification accuracy from per-subject RT data sorted into 20 ms bins and found the median value of the corresponding distribution. Confidence scores for each of the 1,800 (animal and non-animal) main experiment images were calculated for human participants and all the computational models. The resulting correlation coefficients are shown in Figure 3 (b). Human inter-subject agreement, measured as Spearman’s rho correlation between 1,000 randomly selected pairs of bootstrap runs, is at ρ = 0.74 (± 0.05%). 
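The MinRT computation just described, i.e. finding the first 20 ms bin in which correct responses significantly outnumber incorrect ones under a binomial test (p < 0.05), can be sketched as below. This is our code; the per-subject binning and the final median over subjects are omitted.

```python
import numpy as np
from scipy.stats import binom

def min_rt(rts_ms, correct, bin_width=20, alpha=0.05):
    """Return the left edge (ms) of the first RT bin whose correct
    responses significantly outnumber incorrect ones (one-sided
    binomial test against chance p = 0.5), or None if no bin does."""
    rts_ms = np.asarray(rts_ms)
    correct = np.asarray(correct, dtype=bool)
    bins = (rts_ms // bin_width).astype(int)
    for b in np.unique(bins):                 # np.unique is ascending
        mask = bins == b
        n, k = int(mask.sum()), int(correct[mask].sum())
        p = binom.sf(k - 1, n, 0.5)           # P(X >= k) under chance
        if p < alpha:
            return int(b) * bin_width
    return None
```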
Unlike individual model layer accuracy, which increases monotonically, the correlation between these same model layers and human participants peaked at intermediate layers and decreased for deeper layers. This drop-off was stable across all tested architectures and started at around 70% of the relative model depth. For comparison, we re-plotted the accuracy of the individual layers and the correlation to human participants for the fine-tuned VGG16 model in Figure 3 (c). The drop-off in correlation to human responses begins after layer conv5_2, where the correlation peaks at 0.383 ± 0.026. Without adjustment, i.e. correlating the answers including correctness, the peak lies at the same layer at 0.829 ± 0.008 (see supplement B for graph). Example images on which humans and the VGG16 top layer disagree are shown in Figure 4. The model typically outperforms humans on elongated animals such as snakes and worms, as well as camouflaged animals, and when objects are presented in an atypical context. Human participants outperform the model on typical, iconic illustrations, such as a cat looking directly at the camera. We verified that the maximum response time allowed (500 ms) did not qualitatively affect our results. We ran a control experiment (77 participants) on a set of 300 images where we systematically varied the maximum response time available (500 ms, 1,000 ms and 2,000 ms). We evaluated differences in categorization accuracy using a one-way ANOVA with Tukey’s HSD for post-hoc correction. The accuracy increased significantly from 500 to 1,000 ms (from 74% to 84%; p < 0.01). However, no significant difference was found between 1,000 and 2,000 ms (both ≈ 84%; p > 0.05). Overall, we found no qualitative difference in the observed pattern of correlation between human and model decision scores for longer response times (results in supplement A). We found an overall slight upward trend for both intermediate and higher layers for longer response times.
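The bootstrap procedure used for the confidence intervals on the human-model correlation can be sketched as follows, resampling participants with replacement and recomputing the correlation on each run. Participant responses and model confidences are synthetic stand-ins:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_subj, n_img = 180, 300
responses = rng.random((n_subj, n_img)) < 0.77  # stand-in per-subject correctness
model_conf = rng.normal(size=n_img)             # stand-in model confidences

rhos = []
for _ in range(300):                            # 300 bootstrap runs, as above
    sample = rng.integers(0, n_subj, size=n_subj)   # resample subjects
    human_scores = responses[sample].mean(axis=0)   # fraction correct per image
    rhos.append(spearmanr(model_conf, human_scores).correlation)

# 95% CI from the bottom 2.5% and top 97.5% of the bootstrap distribution.
lo, hi = np.percentile(rhos, [2.5, 97.5])
print(f"95% CI: [{lo:.3f}, {hi:.3f}]")
```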
Figure 3: Comparison between models and human behavioral data: (a) Accuracy and (b) correlation between decision scores derived from various networks and human behavioral data, plotted as a function of normalized layer depth (normalized by the maximal depth of the corresponding deep net). (c) Superimposed accuracy and correlation between decision scores derived from the best performing network (VGG16 fine-tuned (ft) for animal categorization) and human behavioral data, plotted as a function of the raw layer depth. Lines are fitted polynomials of degree 2 (accuracy) and 3 (correlation). Shaded red background corresponds to the 95% CI estimated via bootstrapping, shown only for the fine-tuned VGG16 model for readability. Gray curve corresponds to human accuracy (CIs shown with dashed lines). Figure 4: Sample images where human participants and model (VGG16 layer fc7) disagree. H: Average human decision score (% correct). M: Model decision score (distance to decision boundary). 4 Discussion The goal of this study was to perform a computational-level analysis aimed at characterizing the visual representations underlying rapid visual categorization. To this end, we have conducted a large-scale psychophysics study using a fast-paced animal vs. non-animal categorization task. This task is ecologically significant and has been extensively used in previous psychophysics studies [reviewed in 6]. We have considered three state-of-the-art deep networks: AlexNet [11] as well as VGG16 and VGG19 [17]. We have performed a systematic analysis of the accuracy of these models’ individual layers for the same animal/non-animal categorization task. We have also assessed the agreement between model and human decision scores for individual images. Overall, we have found that the accuracy of individual layers consistently increased as a function of depth for all models tested.
This result confirms the current trend in computer vision that better performance on recognition challenges is typically achieved by deeper networks. This result is also consistent with an analysis by Yu et al. [21], who have shown that both the sparsity of the representation and the distinctiveness of the matched features increase monotonically with the depth of the network. However, the correlation between model and human decision scores peaked at intermediate layers and decreased for deeper layers. These results suggest that human participants may rely on visual features of intermediate complexity and that the complexity of visual representations afforded by modern deep network models may exceed those used by human participants during rapid categorization. In particular, the top layers (final convolutional and fully connected), while showing an improvement in accuracy, no longer maximize correlation with human data. Whether this result is based on the complexity of the representation or the invariance properties of intermediate layers remains to be investigated. It should be noted that a depth of ∼10 layers of processing has been suggested as an estimate for the number of processing stages in the ventral stream of the visual cortex [16]. How then does the visual cortex achieve greater depth of processing when more time is allowed for categorization? One possibility is that speeded categorization reflects partial visual processing up to intermediate levels, while longer response times would allow for deeper processing in higher stages. We compared the agreement between model and human decision scores for longer response times (500 ms, 1,000 ms and 2,000 ms). While the overall correlation increased slightly for longer response times, this higher correlation did not appear to differentially affect high- vs. mid-level layers.
An alternative hypothesis is that greater depth of processing for longer response times is achieved via recurrent circuits, with greater processing depth achieved through time. The fastest behavioral responses would thus correspond to bottom-up / feed-forward processing. This would be followed by re-entrant and other top-down signals [7] when more time is available for visual processing. Acknowledgments We would like to thank Matt Ricci for his early contribution to this work and further discussions. This work was supported by NSF early career award [grant number IIS-1252951] and DARPA young faculty award [grant number YFA N66001-14-1-4037]. Additional support was provided by the Center for Computation and Visualization (CCV). References [1] M. Cauchoix, S. M. Crouzet, D. Fize, and T. Serre. Fast ventral stream neural activity enables rapid visual categorization. Neuroimage, 125:280–290, 2016. ISSN 1053-8119. doi: 10.1016/j.neuroimage.2015.10.012. [2] S. M. Crouzet and T. Serre. What are the Visual Features Underlying Rapid Object Recognition? Front. Psychol., 2(November):326, jan 2011. ISSN 1664-1078. doi: 10.3389/fpsyg.2011.00326. [3] M. J. C. Crump, J. V. McDonnell, and T. M. Gureckis. Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8(3), 2013. [4] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE, 2009. [5] J. J. DiCarlo, D. Zoccolan, and N. C. Rust. How does the brain solve visual object recognition? Neuron, 73(3):415–34, feb 2012. ISSN 1097-4199. doi: 10.1016/j.neuron.2012.01.010. [6] M. Fabre-Thorpe. The characteristics and limits of rapid visual categorization. Front. Psychol., 2(October):243, jan 2011. ISSN 1664-1078. doi: 10.3389/fpsyg.2011.00243. [7] C. D. Gilbert and W. Li. Top-down influences on visual processing. Nat. Rev.
Neurosci., 14(5):350–63, may 2013. ISSN 1471-0048. doi: 10.1038/nrn3476. [8] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. feb 2015. [9] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. CoRR, abs/1603.05027, 2016. [10] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: convolutional architecture for fast feature embedding. In Proceedings of the 2014 ACM Conference on Multimedia (MM 2014), pages 10005–10014, 2014. [11] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Neural Inf. Process. Syst., Lake Tahoe, Nevada, 2012. [12] Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, may 2015. ISSN 0028-0836. doi: 10.1038/nature14539. [13] J. V. McDonnell, J. B. Martin, D. B. Markant, A. Coenen, A. S. Rich, and T. M. Gureckis. psiTurk (version 1.02) [software]. New York, NY: New York University. Available from https://github.com/nyuccl/psiturk, 2012. [14] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine Learning in Python. J. Mach. Learn. Res., 12:2825–2830, 2011. [15] M. C. Potter. Recognition and memory for briefly presented scenes. Front. Psychol., 3:32, jan 2012. ISSN 1664-1078. doi: 10.3389/fpsyg.2012.00032. [16] T. Serre, A. Oliva, and T. Poggio. A feedforward architecture accounts for rapid categorization. PNAS, 104(15):6424–6429, 2007. [17] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. Technical report, sep 2014. [18] I. Sofer, S. Crouzet, and T. Serre. Explaining the timing of natural scene understanding with a computational model of perceptual categorization.
PLoS Comput Biol, 2015. [19] R. VanRullen. The power of the feed-forward sweep. Adv. Cogn. Psychol., 3(1-2):167–176, 2007. [20] D. L. K. Yamins and J. J. DiCarlo. Using goal-driven deep learning models to understand sensory cortex. Nat. Neurosci., 19(3):356–365, feb 2016. ISSN 1097-6256. doi: 10.1038/nn.4244. [21] W. Yu, K. Yang, Y. Bai, H. Yao, and Y. Rui. Visualizing and comparing convolutional neural networks. arXiv preprint arXiv:1412.6631, 2014.
A Non-parametric Learning Method for Confidently Estimating Patient’s Clinical State and Dynamics William Hoiles Department of Electrical Engineering University of California Los Angeles Los Angeles, CA 90024 whoiles@ucla.edu Mihaela van der Schaar Department of Electrical Engineering University of California Los Angeles Los Angeles, CA 90024 mihaela@ee.ucla.edu Abstract Estimating a patient’s clinical state from multiple concurrent physiological streams plays an important role in determining if a therapeutic intervention is necessary and for triaging patients in the hospital. In this paper we construct a non-parametric learning algorithm to estimate the clinical state of a patient. The algorithm addresses several known challenges with clinical state estimation, such as eliminating the bias introduced by therapeutic intervention censoring, increasing the timeliness of state estimation while ensuring sufficient accuracy, and the ability to detect anomalous clinical states. These benefits are obtained by combining the tools of non-parametric Bayesian inference, permutation testing, and generalizations of the empirical Bernstein inequality. The algorithm is validated using real-world data from a cancer ward in a large academic hospital. 1 Introduction Timely clinical state estimation can significantly improve the quality of care for patients by informing clinicians of patients who have entered a high-risk clinical state. This is a challenging problem as the patient’s clinical state is not directly observable and must be inferred from the patient’s vital signs and the clinician’s domain knowledge. Several methods exist for estimating the patient’s clinical state, including clinical guidelines and risk scores [21, 18]. The limitation of these population-based methods is that they are not personalized (e.g. patient models are not unique), cannot detect anomalous patient dynamics, and, most importantly, are biased due to therapeutic intervention censoring [16].
Therapeutic intervention censoring occurs when a patient’s physiological signals are misclassified in the training data as a result of the effects caused by therapeutic interventions. To improve the quality of patient care, new methods are needed to overcome these limitations. In this paper we develop an algorithm for estimating a patient’s clinical state based on previously recorded electronic health record (EHR) data. A schematic of the algorithm is provided in Fig.1; it contains three primary components: a) learning the patient’s stochastic model, b) using statistical techniques to evaluate the quality of the estimated stochastic model, and c) performing clinical state estimation for new patients based on their estimated models. The works by Fox et al. [10, 9] and Saria et al. [19] on temporal segmentation are the most closely related to our algorithm. However, [10, 19] do not apply formal statistical techniques to validate and iteratively update the hyper-parameters of the non-parametric Bayesian inference, are not personalized, do not remove the bias caused by therapeutic intervention censoring, and do not utilize clinician domain knowledge for clinical state estimation. Additionally, applying fully Bayesian methods [9] for clinical state estimation is computationally prohibitive, as the computational complexity of constructing the stochastic model of all patients grows polynomially with the number of samples and the maximum number of possible states of all patients. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. The computational complexity of our algorithm is only polynomial in the number of samples and states of a single patient. A detailed literature review is provided in the Supporting Material. The proposed algorithm (Fig.1) learns a combinatorial stochastic model for each patient based on their measured vital signs.
A non-parametric Bayesian learning algorithm based on the hierarchical Dirichlet process hidden Markov model (HDP-HMM) [10] is used to learn the patient’s stochastic model, which is composed of a possibly infinite state-space HMM where each state is associated with a unique dynamic model. The algorithm dynamically adjusts the number of detected dynamic models and their temporal duration based on the patient’s vital signs–that is, the algorithm has a data-driven bound on the model complexity (e.g. number of detected states). The patient’s stochastic model provides a fine-grained personalized representation of each patient that is interpretable for clinicians, and accounts for the patient’s specific dynamics which may result from therapeutic interventions and medical complications (e.g. disease, paradoxical reaction to a drug, bone fracture). To ensure that each detected dynamic model is associated with a unique clinical state, the hyper-parameters in the HDP-HMM are updated iteratively using the results from an improved Bonferroni method [2]. This mitigates the major weakness of non-parametric Bayesian inference methods, namely how to select the hyper-parameters [14, 12]. Additionally, the algorithm provides statistical guarantees on the dynamic model parameters using generalizations of the scalar Bernstein inequality [13] to vector-valued and matrix-valued random variables. In clinical applications it is desirable to relate a collection of dynamic models from several patients to a unique clinical state of interest for the clinician (e.g. detecting which patients have entered a high-risk clinical state). The clinician defines a supervised training set that is composed of all previously observed patients’ dynamic models and their associated clinical states, which is then used to construct a similarity metric.
This construction of the similarity metric between dynamic models and clinical states ensures that the bias introduced from therapeutic intervention censoring is removed, and also allows for the detection of anomalous dynamic models that are not associated with a previously defined clinical state. When a new patient arrives, the algorithm will learn their stochastic model, and then use the similarity metric to map the detected dynamic models to their associated clinical states of interest. Though our algorithm is general and can be applied in several medical settings (e.g. mobile health, wireless health), here we focus on detecting the clinical state of patients in hospital wards. Specifically, we apply our algorithm to patients in a cancer ward of a large academic hospital. Figure 1: Schematic of the proposed algorithm for learning the dynamic model and estimating the clinical state of the patient. From $\mathcal{D}$ a valid segmentation $\bar{\mathcal{D}}$ is constructed and provided to the clinician to construct the labeled dataset $\mathcal{L}$. New patient vital signs are labeled using the dataset $\mathcal{L}$. 2 Non-parametric Learning Algorithm for Patient’s Stochastic Model In this section we provide a method to segment patients’ electronic health record data $\mathcal{D} = \{\{y^i_t\}_{t \in \mathcal{T}^i}\}_{i \in \mathcal{I}}$, with $y^i_t \in \mathbb{R}^m$ the vital signs of patient $i \in \mathcal{I}$ at time $t$. To segment the temporal data we assume that the vital signs of each patient originate from a switching multivariate Gaussian (SMG) process. A Bayesian non-parametric learning algorithm is utilized to select the switching times between the unique dynamic models–that is, we consider the observation dynamics and model switching dynamics simultaneously.
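The switching multivariate Gaussian observation model can be sketched as follows: a hidden Markov chain selects which Gaussian emits each vital-sign sample. The transition matrix and per-state parameters below are illustrative, not learned from data:

```python
import numpy as np

rng = np.random.default_rng(6)
P = np.array([[0.95, 0.05],   # illustrative 2-state transition matrix
              [0.10, 0.90]])
mus = [np.array([80.0, 98.0]), np.array([110.0, 92.0])]  # per-state means
covs = [np.eye(2), 2 * np.eye(2)]                        # per-state covariances

T, z = 300, 0
states, obs = [], []
for _ in range(T):
    z = rng.choice(2, p=P[z])                            # z_t ~ pi(. | z_{t-1})
    states.append(z)
    obs.append(rng.multivariate_normal(mus[z], covs[z])) # y_t = eps(z_t)
obs = np.array(obs)
print(obs.shape, "states visited:", sorted(set(states)))
```

The learning problem in the paper is the inverse of this sketch: recover the number of states, the switching times, and the per-state parameters from the observed vitals alone.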
The final result of the segmentation is the dataset:

$\bar{\mathcal{D}} = \{\{y^i_t\}_{t \in \mathcal{T}^i_k},\ k \in \{1, \dots, K^i\} = \mathcal{K}^i\}_{i \in \mathcal{I}} \quad (1)$

with $\mathcal{T}^i_k$ the time samples for segment $k$ and $\mathcal{K}^i$ the set of segments for patient $i$. Statistical methods are used to ensure that each dynamic model is associated with a unique clinical state; refer to Sec.3 for details. We assume that the switching process between models satisfies a HMM where each state of the HMM is associated with a unique dynamic model given by:

$y_t = \varepsilon_t(z_t), \quad \varepsilon_t(z_t) \sim \mathcal{N}(\mu(z_t), \Sigma(z_t)) \quad (2)$

where $z_t \in \mathcal{K}^i$ is the state of the patient, and $\varepsilon_t(z_t)$ is a Gaussian white noise term with covariance matrix $\Sigma(z_t)$. For notational convenience we suppress the index $i$ and include it explicitly only when required. For segmentation, each of the patients is treated independently. Each state $z_t$ is assumed to evolve according to a HMM with $z_t$ associated with a specific segment $k \in \mathcal{K}$. Notice that we must estimate the total number of states $|\mathcal{K}|$, and the associated model parameters $\{\mu(k), \Sigma(k)\}_{k \in \mathcal{K}}$, using only the data $\{y_t\}_{t \in \mathcal{T}}$. To learn the cardinality of the HMM we use the tools of non-parametric Bayesian inference by placing a prior on the HMM parameters to allow a data-driven estimation of the cardinality of the state-space. Recall that non-parametric here indicates that for a larger sample size $T$, the number of possible states (i.e. dynamic models) can also increase. To model the infinite-HMM we use the hierarchical Dirichlet process (HDP) [3, 22]. The HDP can be interpreted as a HMM with a countably infinite state-space. That is, the HDP is a non-parametric prior for the infinite-HMM. The main idea of the HDP is to link a countably infinite set of Dirichlet processes by sharing atoms among the DPs, with each DP associated with a specific state. The stick-breaking construction of the HDP is given by [8, 22]:

$\theta_m \sim H, \quad \phi_0 = \sum_{m=1}^{\infty} \beta_m \delta_{\theta_m}, \quad \beta_m = v_m \prod_{l=1}^{m-1} (1 - v_l), \quad v_m \sim \mathrm{Beta}(1, \gamma),$
$\phi_k = \sum_{m=1}^{\infty} \pi_{km} \delta_{\theta_m}, \quad \pi_k \sim \mathrm{DP}(\alpha, \beta). \quad (3)$

Eq.(3) represents an infinite state HMM with $\pi_{km}$ the probability of transitioning from state $k \in \mathcal{K}$ to state $m \in \mathcal{K}$. $\pi_k$ represents the transition probabilities out of state $k$ of the HMM, $\beta$ is the shared prior parameter of the transition distribution, $H$ is a prior on the transition probability distribution, and $\alpha$ is the concentration of the transition probability distribution of the HMM. The patient’s stochastic model is constructed by combining the SMG (2) with the HDP (or infinite HMM) and is given by:

$v_k \sim \mathrm{Beta}(1, \gamma), \quad \beta_k = v_k \prod_{l=1}^{k-1} (1 - v_l), \quad \pi_k \sim \mathrm{DP}\!\left(\alpha + \kappa, \frac{\alpha\beta + \kappa\delta_k}{\alpha + \kappa}\right) \quad k = 1, 2, \dots$
$z_t \sim \pi(\cdot \mid z_{t-1}) = \pi_{z_{t-1}}, \quad y_t = \varepsilon(z_t) \quad t = 1, 2, \dots, T. \quad (4)$

The parameter $\gamma$ controls how concentrated the state transition function is from state $k$ to state $k'$. This can be seen by setting $\kappa = 0$ and $\alpha = 0$ such that $\mathbb{E}[\pi_k] = \beta$. If $\gamma = 1$ then the parameter $\beta_k$ in $\beta$ decays at approximately a geometric rate for increasing $k$. As $\gamma$ increases, the decay of the elements in $\beta$ decreases. For $\alpha > 0$ and $\kappa > 0$, $\mathbb{E}[\pi_k] = (\alpha\beta + \kappa\delta_k)/(\alpha + \kappa)$; as such, $\kappa$ controls the bias of $\pi_k$ towards self-transitions–that is, $\pi(k|k)$ is given a large weight. The parameter $\alpha + \kappa$ controls the variability of $\pi_k$ about the base state transition distribution $(\alpha\beta + \kappa\delta_k)/(\alpha + \kappa)$. Given the patient’s stochastic model (4), non-parametric Bayesian inference is utilized to estimate the model parameters from the patient’s vital signs $\{y_t\}_{t \in \mathcal{T}}$. To utilize Bayesian inference we define a prior and compute the associated posterior since a $\sigma$-finite density measure is present. The prior distributions on $\beta$ and $\pi$ are given by:

$\beta \sim \mathrm{Dir}(\gamma/L, \dots, \gamma/L), \quad \pi_k \sim \mathrm{Dir}(\alpha\beta_1, \dots, \alpha\beta_k + \kappa, \dots, \alpha\beta_L) \quad k \in \{1, \dots, L\}. \quad (5)$

Eq.(5) is the weak limit approximation with truncation level $L$, where $L$ is the largest number of expected states in the estimated HMM from $\{y_t\}_{t \in \mathcal{T}}$ [25]. Note that as $L \to \infty$, (5) approaches the HDP.
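The weak-limit prior in (4)-(5) can be sketched numerically: draw the global weights by truncated stick-breaking, then draw each transition row from a Dirichlet with extra "sticky" mass $\kappa$ on the self-transition. The hyper-parameter values below are illustrative, not tuned:

```python
import numpy as np

rng = np.random.default_rng(5)
gamma, alpha, kappa, L = 1.0, 4.0, 10.0, 8   # illustrative hyper-parameters

# Stick-breaking: v_k ~ Beta(1, gamma), beta_k = v_k * prod_{l<k}(1 - v_l).
v = rng.beta(1.0, gamma, size=L)
beta = v * np.concatenate(([1.0], np.cumprod(1 - v[:-1])))
beta /= beta.sum()                            # renormalize under truncation L

# Sticky transition rows: pi_k ~ Dir(alpha*beta + kappa*delta_k), as in (5).
pi = np.vstack([
    rng.dirichlet(alpha * beta + kappa * np.eye(L)[k]) for k in range(L)
])
print("mean self-transition probability:", pi.diagonal().mean().round(2))
```

The printed diagonal mean is well above the average off-diagonal mass, illustrating how $\kappa$ biases $\pi_k$ towards self-transitions.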
If clinician domain knowledge on the initial hyper-parameters $\gamma$, $\alpha$, and $\kappa$ is not available, then it is common to place Beta or Gamma priors on these distributions [25]. For the multivariate Gaussian we utilize the Normal-Inverse-Wishart prior distribution [11]:

$p(\mu, \Sigma \mid \mu_0, \lambda, S_0, v) \propto |\Sigma|^{-\frac{v+m+1}{2}} \exp\!\left(-\tfrac{1}{2}\,\mathrm{tr}(v S_0 \Sigma^{-1}) - \tfrac{\lambda}{2}(\mu - \mu_0)'\Sigma^{-1}(\mu - \mu_0)\right) \quad (6)$

where $v$ and $S_0$ are the degrees of freedom and the scale matrix for the inverse-Wishart distribution on $\Sigma$, $\mu_0$ is the prior mean, and $\lambda$ is the number of prior measurements on the $\Sigma$ scale. Given the prior distributions with associated posterior distributions, an MCMC or variational sampler (i.e. Gibbs sampler [10], Beam sampler [25], variational Bayes [6, 7]) can be utilized to estimate the parameters of the patient’s stochastic model (4) given the data $\{y_t\}_{t \in \mathcal{T}}$. 3 Statistical Methods to Evaluate Stochastic Model Quality Given the segmented dataset $\bar{\mathcal{D}}$ (1) generated from all the patients’ estimated stochastic models (4), this section presents methods to evaluate the quality of $\bar{\mathcal{D}}$. This includes testing whether the vital signs $\{y^i_t\}_{t \in \mathcal{T}^i_k}$ for each patient and unique dynamic model are consistent with a multivariate Gaussian distribution, contain sufficient samples to guarantee the accuracy of the dynamic model parameters, and that the detected dynamic models for each patient are unique. If the estimated stochastic models are of low quality, then the hyper-parameters of the non-parametric Bayesian inference algorithm can be iteratively updated to ensure that all the patients’ stochastic models accurately represent their dynamics. This is a vital step in medical applications since the results of the non-parametric Bayesian inference algorithm are sensitive to the selected hyper-parameters [14, 12]. For example, Fig.2(a) illustrates a poor quality segmentation that results from poorly selected hyper-parameters.
3.1 Hypothesis Tests for Model Consistency with Segments To ensure model consistency we must test whether each segment in $\bar{\mathcal{D}}$ is consistent with a multivariate Gaussian process (i.e. samples are independent and normally distributed). To test whether the segment $\{y_t\}_{t \in \mathcal{T}_k} \in \bar{\mathcal{D}}$ contains independent samples, we evaluate the autocorrelation function (ACF) [5] for each segment. For $\{y_t\}_{t \in \mathcal{T}_k}$ the ACF must decay exponentially to zero, which indicates that the segment contains independent samples. Note that it is possible for a spurious autocorrelation structure to be present in the segment if the segment is composed of a mixture of Gaussian processes. If this is suspected, then the hyper-parameters of the non-parametric Bayesian inference algorithm are updated to increase the number of segments (for example by increasing $L$ or decreasing $\kappa$). Since there is no universally most powerful test for multivariate normality, we use the improved Bonferroni method [23], which combines four affine-invariant hypothesis test statistics, alleviating the need to select the most sensitive single test while retaining the benefits of these four multivariate normality tests. 3.2 Data-Driven Confidence Bounds for Dynamic Model Estimation An important consideration when evaluating the quality of the segmentation $\bar{\mathcal{D}}$ is that each segment contains sufficient samples to confidently estimate the mean and covariance $\{\mu, \Sigma\}$ of the SMG model. This is particularly important in medical applications as it provides an estimate of the maximum number of samples needed to confidently estimate $\{\mu, \Sigma\}$, which are used to estimate the clinical state of the patient. Note that the estimated posterior distribution for $\{\mu, \Sigma\}$ cannot be used to bound the number of samples required. To estimate $\{\mu, \Sigma\}$ given $\{y_t\}_{t \in \mathcal{T}_k}$, the maximum likelihood estimators

$\hat{\mu}(k) = \frac{1}{n_k}\sum_{t=1}^{n_k} y_t, \quad \hat{\Sigma}(k) = \frac{1}{n_k}\sum_{t=1}^{n_k} (y_t - \hat{\mu}(k))(y_t - \hat{\mu}(k))' \quad (7)$

are used, where $n_k = |\mathcal{T}_k|$ is the total number of samples in segment $k \in \mathcal{K}$.
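The per-segment checks above can be sketched for one segment: the maximum-likelihood estimators (7), plus a simple autocorrelation computation that should be near zero at small lags when samples are independent. The segment below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic segment: 500 i.i.d. samples of two vital-sign streams.
segment = rng.multivariate_normal([80.0, 98.0], [[4.0, 0.5], [0.5, 1.0]], size=500)

n_k = len(segment)
mu_hat = segment.mean(axis=0)            # eq. (7), sample mean
centered = segment - mu_hat
sigma_hat = centered.T @ centered / n_k  # eq. (7), sample covariance (1/n_k)

def acf(x, lag):
    """Autocorrelation of a 1-D series at a given lag."""
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).sum() / (x * x).sum()

# For independent samples, the ACF at small lags should be near zero.
lag1 = acf(segment[:, 0], lag=1)
print("mu_hat:", mu_hat.round(2), "lag-1 ACF:", round(lag1, 3))
```

A segment whose ACF does not decay toward zero would trigger the hyper-parameter update described above (e.g. increasing $L$ or decreasing $\kappa$).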
If each vital sign is independent (i.e. spherical multivariate Gaussian distribution), then an empirical Bernstein bound [13] can be constructed to estimate the error between the sample mean $\hat{\mu}$ and the actual mean $\mu$. From the empirical Bernstein bound, the minimum number of samples necessary to ensure that $P(\hat{\mu}(k,j) - \mu(k,j) \ge \varepsilon) \le \alpha$ for all segments $k \in \mathcal{K}$ and streams $j \in \{1, \dots, m\}$, for some confidence level $\alpha > 0$ and tolerance $\varepsilon \ge 0$, is given by:

$n(\varepsilon, \alpha) \ge \left(\frac{6\sigma_{\max}^2 + 2\Delta_{\max}\varepsilon}{3\varepsilon^2}\right) \ln\!\left(\frac{1}{\alpha}\right) \quad (8)$

with $\sigma_{\max}^2$ the maximum possible variance and $\Delta_{\max}$ the maximum possible difference between the maximum and minimum values in the vital sign data. To construct a relaxed bound on the sample mean $\hat{\mu} \in \mathbb{R}^m$, and a bound on the sample covariance $\hat{\Sigma} \in \mathbb{R}^{m \times m}$ computed using (7), we generalize the empirical Bernstein bound to the multidimensional case. The goal is to construct a bound of the form $P(\|Z\| \ge \varepsilon) \le \alpha$ where $\|\cdot\|$ denotes the spectral norm if $Z$ is a matrix, or the 2-norm if $Z$ is a vector. To construct a probabilistic bound on the accuracy of the estimated mean we utilize the vector Bernstein inequality given by Theorem 1. Theorem 1 Let $\{Y_1, \dots, Y_n\}$ be a set of independent random vectors with $Y_t \in \mathbb{R}^m$ for $t \in \{1, \dots, n\}$. Assume that each vector has uniformly bounded deviation such that $\|Y_t\| \le L$ for all $t \in \{1, \dots, n\}$. Writing $Z = \sum_{t=1}^n Y_t$, then

$P(\|Z\| \ge \varepsilon) \le (2m)\exp\!\left(\frac{-3\varepsilon^2}{6V(Z) + 2L\varepsilon}\right), \quad V(Z) = \sum_{t=1}^n \mathbb{E}[\|Y_t\|_2^2]. \quad (9)$

The proof of Theorem 1 is provided in the Supporting Material. To construct the bound on the number of samples necessary to estimate the mean, we define $Z = \hat{\mu} - \mu$ with $Y_t = (y_t - \mu)/n$. Using the triangle inequality, Jensen’s inequality, and assuming $\|y_t\|_2 \le B_1$ for some constant $B_1$, we have that:

$L \le \frac{2B_1}{n}, \quad V(Z) \le \frac{1}{n}\left(B_1^2 - \|\mu\|_2^2\right). \quad (10)$

Plugging (10) into (9) results in the minimum number of samples necessary to guarantee that $P(\|\hat{\mu} - \mu\| \ge \varepsilon) \le \alpha$, with the number of samples $n(\varepsilon, \alpha)$ given by:

$n(\varepsilon, \alpha) \ge \left(\frac{6(B_1^2 - \|\mu\|_2^2) + 4B_1\varepsilon}{3\varepsilon^2}\right) \ln\!\left(\frac{2m}{\alpha}\right). \quad (11)$

To bound the number of samples necessary to estimate $\Sigma$, we utilize the corollary of Theorem 1 for real-symmetric matrices with $Z = \hat{\Sigma} - \Sigma$. The bound on the number of samples necessary to guarantee $P(\|\hat{\Sigma} - \Sigma\| \ge \varepsilon) \le \alpha$, assuming $\|\Sigma\| \le \|y_t - \hat{\mu}\| \le B_2$, is given by:

$n(\varepsilon, \alpha) \ge \left(\frac{6B_2^2 + 4B_2\varepsilon}{3\varepsilon^2}\right) \ln\!\left(\frac{2m}{\alpha}\right). \quad (12)$

For a given $\alpha$ and $\varepsilon$, and an estimate of the maximum spectral norm of $\Sigma$ and norm of $\mu$, equations (11) and (12) can be used to estimate the minimum number of samples necessary to sufficiently estimate $\{\mu, \Sigma\}$. To accurately compute the clinical state from the unique dynamic model, each segment must satisfy (11) and (12); otherwise any clinical state estimation may give unreliable results. 3.3 Statistical Tests for Statistically Identical Dynamic Models In this section we construct a novel hypothesis test for mean and covariance equality with a given confidence, with design parameters that control the importance of mean equality relative to covariance equality. The hypothesis test evaluates the quality of the estimated stochastic model, and can also be used to merge statistically identical segments to increase the accuracy of the dynamic model parameter estimates. Given two segments of vital signs, each associated with a supposedly unique dynamic model, we define the null hypothesis $H_0$ as the equality of the means and covariance matrices of the two dynamic models, and the alternate hypothesis $H_1$ that either the means or covariances are not equal. Formally:

$H_0: \Sigma(k) = \Sigma(k') \text{ and } \mu(k) = \mu(k'), \quad H_1: \Sigma(k) \ne \Sigma(k') \text{ or } \mu(k) \ne \mu(k'). \quad (13)$

Several methods exist for testing covariance equality [20] and mean equality [24]; however, we wish to test for both covariance and location equality.
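The sample-size bounds of Sec. 3.2, equations (8), (11) and (12), can be evaluated directly once the problem constants are estimated. A minimal calculator, with purely illustrative constants (B1, B2, the variance and range bounds are not values from the paper), might look like:

```python
import math

def n_scalar(eps, alpha, sigma_max2, delta_max):
    """Eq. (8): per-stream sample-size bound (empirical Bernstein)."""
    return (6 * sigma_max2 + 2 * delta_max * eps) / (3 * eps**2) * math.log(1 / alpha)

def n_mean(eps, alpha, m, B1, mu_norm2):
    """Eq. (11): sample-size bound for the vector mean estimate."""
    return (6 * (B1**2 - mu_norm2) + 4 * B1 * eps) / (3 * eps**2) * math.log(2 * m / alpha)

def n_cov(eps, alpha, m, B2):
    """Eq. (12): sample-size bound for the covariance estimate."""
    return (6 * B2**2 + 4 * B2 * eps) / (3 * eps**2) * math.log(2 * m / alpha)

# Example: m = 5 vital-sign streams, tolerance eps = 1, confidence alpha = 0.05.
print(math.ceil(n_mean(1.0, 0.05, 5, B1=10.0, mu_norm2=25.0)))
print(math.ceil(n_cov(1.0, 0.05, 5, B2=10.0)))
```

A segment shorter than both printed values would be flagged as having insufficient samples for reliable clinical state estimation.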
To test the global hypothesis $H_0$ in (13), note that $H_0$ and $H_1$ can equivalently be stated as a combination of the sub-hypotheses as follows:

$H_0: H_0^1 \cap H_0^2 \quad \text{and} \quad H_1: H_1^1 \cup H_1^2 \quad (14)$

with $H_0^1: \mu(k) = \mu(k')$, $H_1^1: \mu(k) \ne \mu(k')$, $H_0^2: \Sigma(k) = \Sigma(k')$, and $H_1^2: \Sigma(k) \ne \Sigma(k')$. To construct the hypothesis test for $H_0$, the non-parametric permutation testing method [17] is used, which allows us to combine the sub-hypothesis tests for covariance and mean equality into a hypothesis test for $H_0$. To test the null hypothesis $H_0^1$ we utilize Hotelling’s $T^2$ test, as it is asymptotically the most powerful invariant test when the data associated with $k$ and $k'$ are normally distributed [4]. Given that the $y_t$ are generated from a multivariate normal distribution, the test statistic $\tau^1$ follows a $T^2$ distribution such that $\tau^1 \sim T^2(m, n(k) + n(k') - 2)$, where $n(k)$ and $n(k')$ are the number of samples in segments $k$ and $k'$ respectively. To test the null hypothesis $H_0^2$ we utilize the modified likelihood ratio statistic provided by Bartlett [1], written $\Lambda^*$, which is uniformly the most powerful unbiased test for covariance equality [15]. The test statistic for covariance equality is given by:

$\tau^2 = -2\rho \log(\Lambda^*), \quad \rho = 1 - \frac{2m^2 + 3m - 1}{6(m+1)n}\left(n/n(k) + n/n(k') - 1\right), \quad n = n(k) + n(k').$

From Theorem 8.2.7 in [15], the asymptotic cumulative distribution function of $\tau^2$ can be approximated by a linear combination of $\chi^2$ distributions, which has a convergence rate of $O((\rho n)^{-3})$. To construct the permutation test for $H_0$, Tippett’s combining function [17] is used: $\tau = \min(\lambda^1/k_1, \lambda^2/k_2)$, where $\lambda^1$ and $\lambda^2$ are the p-values of the sub-hypothesis tests $H_0^1$ and $H_0^2$ respectively, and $k_1$ and $k_2$ are design parameters. If $k_1 > k_2$ then mean equality is weighted more than covariance equality. If $k_1 = k_2$ then mean equality and covariance equality are weighted equally.
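The combined test can be sketched as follows: Hotelling's $T^2$ for the means, a chi-squared covariance-equality approximation (Box's M, used here as a stand-in for Bartlett's modified LRT), and Tippett's combining function $\tau = \min(\lambda^1/k_1, \lambda^2/k_2)$. Data are synthetic two-sample draws from the same Gaussian:

```python
import numpy as np
from scipy.stats import f, chi2

rng = np.random.default_rng(4)
A = rng.multivariate_normal([0, 0], np.eye(2), size=120)  # segment k
B = rng.multivariate_normal([0, 0], np.eye(2), size=150)  # segment k'

def hotelling_p(a, b):
    """Two-sample Hotelling T^2 p-value via the exact F transformation."""
    n1, n2, m = len(a), len(b), a.shape[1]
    d = a.mean(0) - b.mean(0)
    Sp = ((n1 - 1) * np.cov(a.T) + (n2 - 1) * np.cov(b.T)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(Sp, d)
    F = (n1 + n2 - m - 1) / ((n1 + n2 - 2) * m) * t2
    return f.sf(F, m, n1 + n2 - m - 1)

def boxm_p(a, b):
    """Covariance-equality p-value via Box's M chi-squared approximation."""
    n1, n2, m = len(a), len(b), a.shape[1]
    S1, S2 = np.cov(a.T), np.cov(b.T)
    Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)
    M = ((n1 + n2 - 2) * np.log(np.linalg.det(Sp))
         - (n1 - 1) * np.log(np.linalg.det(S1))
         - (n2 - 1) * np.log(np.linalg.det(S2)))
    c = ((2 * m**2 + 3 * m - 1) / (6 * (m + 1))) * (
        1 / (n1 - 1) + 1 / (n2 - 1) - 1 / (n1 + n2 - 2))
    return chi2.sf(M * (1 - c), df=m * (m + 1) // 2)

k1, k2 = 1.0, 1.0                 # equal weight on mean and covariance equality
p1, p2 = hotelling_p(A, B), boxm_p(A, B)
tau0 = min(p1 / k1, p2 / k2)      # Tippett's combining function
print(f"p_mean={p1:.3f}  p_cov={p2:.3f}  tau={tau0:.3f}")
```

With equal weights, $H_0$ would be rejected when `tau0` falls below the threshold $\delta$ derived from the CDF of $\tau$.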
For the test statistics $\tau^1$ and $\tau^2$, the p-values are given by $\lambda^1 = P(\tau^1 \ge \tau^1_0)$ and $\lambda^2 = P(\tau^2 \ge \tau^2_0)$, where $\tau^1_0$ and $\tau^2_0$ are realizations of the test statistics. To utilize $\tau$ as a test statistic we require the cumulative distribution function of $\tau$. Note that if $H_0^1$ is true (i.e. mean equality), then the distributions of $\tau^1$ and $\tau^2$ are independent since $\tau^1$ follows a $T^2$ distribution, which results in $\lambda^1 \sim U(0,1)$ and $\lambda^2 \sim U(0,1)$ [17]. The cumulative distribution function of $\tau$ is given by $P(\tau \le x) = (k_1 + k_2)x - k_1 k_2 x^2$ for $x \in [0, \min(1/k_1, 1/k_2)]$. Given $P(\tau \le x)$, for a significance level $\alpha$, we reject the null hypothesis $H_0$ if $\tau \le \delta$, where $\delta$ is the solution to $P(\tau \le \delta) = \alpha$. The parameter $\delta$ is given by:

$\delta = \left((k_1 + k_2) - \sqrt{(k_1 + k_2)^2 - 4\alpha k_1 k_2}\right) / (2 k_1 k_2).$

For a given significance level $\alpha$ and design parameters $k_1$ and $k_2$, we can test $H_0$ for the samples $\{y_t\}_{t \in \mathcal{T}_k}$ and $\{y_t\}_{t \in \mathcal{T}_{k'}}$ by evaluating $\tau_0 = \min(\lambda^1_0/k_1, \lambda^2_0/k_2)$, with $\lambda^1_0$ and $\lambda^2_0$ the realizations of the p-values for $\tau^1$ and $\tau^2$. By repeatedly applying this hypothesis test to the segments $\{y_t\}_{t \in \mathcal{T}_k}$ for $k \in \mathcal{K}$, we can detect any segments with equal mean and covariance at significance level $\alpha$. Similar segments can be merged to increase the accuracy of the estimated dynamic model parameters, or be used to evaluate the quality of the patient’s stochastic model. 4 Estimating Patient’s Clinical State using Clinician Domain-Knowledge In this section Algorithm 1 (Fig.1) is presented, which constructs stochastic models of patients based on their historical EHR data and clinician domain-knowledge, and is used to classify the clinical state of new patients. Algorithm 1 is composed of five main steps. Step#1 and Step#2 are used to construct the stochastic models of the patients based on the EHR data $\mathcal{D}$, and to construct the segmented dataset $\bar{\mathcal{D}}$ (1). The stochastic models are constructed using the non-parametric Bayesian inference algorithm from Sec.2.
Step#2 measures the quality of the stochastic models, and iteratively updates the hyper-parameters of the Bayesian inference algorithm to guarantee the quality of the detected dynamic models, as discussed in Sec.3. In Step#3 each segment (e.g. dynamic model) in ¯D is labelled by the clinician, based on the clinical states of interest, to construct the dataset L. Step#4 and Step#5 constitute the online portion of the algorithm, which constructs stochastic models for new patients and estimates their clinical state based on each patient's estimated stochastic model. Step#4 constructs the stochastic model for the new patient; then in Step#5 each unique dynamic model from Step#4 is associated with a clinical state of interest using the labelled dataset L from Step#3. Note that L contains several segments (e.g. dynamic models) that are associated with one clinical state. To estimate the clinical state of the new patient, a similarity metric based on the Bhattacharyya distance, written DB(·), is used. If the minimum Bhattacharyya distance between the new patient's segment k and the closest segment k′ ∈ L is greater than δth, the segment is labelled as anomalous; otherwise the segment is given the label of segment k′ ∈ L. Information on the computational complexity and implementation details of Algorithm 1 is provided in the Supporting Material. 5 Real-World Clinical State Estimation in Cancer Ward In this section Algorithm 1 is applied to a real-world EHR dataset composed of a cohort of patients admitted to a cancer ward. A detailed description of the dataset is provided in the Supporting Material. Algorithm 1 Patient Clinical State Estimation Step#1: Construct stochastic models for each patient using D and the non-parametric Bayesian algorithm presented in Sec.2. Using the stochastic models, construct the dataset ¯D (1).
Step#2: To evaluate the quality of each stochastic model, each segment in ¯D from Step#1 is tested for: i) model consistency, ii) sufficient samples to guarantee accuracy of the dynamic model parameter estimates, and iii) statistical uniqueness of segments, using the methods in Sec.3. If the quality is not sufficient, return to Step#1 with updated hyper-parameters for the non-parametric Bayesian inference algorithm. Step#3: Given ¯D and the clinical states of interest, the clinician constructs the labelled dataset L = {({yi_t}t∈Ti_k, li_k), k ∈ {1, . . . , Ki} = Ki}. Step#4: For a new patient i = 0 with vital signs {y0_t}t∈T0, construct the stochastic model of the patient using the Bayesian non-parametric learning algorithm. Then, based on the stochastic model, construct the segmented vital sign data {{y0_t}t∈T0_k, k ∈ {1, . . . , K0} = K0}. Step#5: To estimate the label l(k), written ˆl(k), of each segment k ∈ K0 from Step#4, compute the solution to the following optimization problem for each k: if min_{k′∈L} DB(k, k′) ≥ δth then ˆl(k) = ∅, else ˆl(k) ∈ argmin_{l∈L} { min_{k′∈Ll} DB(k, k′) / min_{k′∈L−l} DB(k, k′) }, with ∅ the anomalous state, Ll ⊆ L the set of segments labeled with l, L−l ⊆ L the set of all segments not labeled as l, and δth a threshold. Return to Step#4. The first step of Algorithm 1 is to segment the EHR data based on the estimated stochastic models of the patients. Fig.2(a) illustrates the dynamic models of a specific patient's estimated stochastic model for κ = 0.1 and S0 = 0.1Im (Im is the identity matrix), and for κ = 1 and S0 = Im. As seen, for κ = 0.1 and S0 = 0.1Im several segments have insufficient samples for estimating the model parameters, and are not statistically unique.
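As an illustration of the similarity metric in Step#5, the following sketch computes the Bhattacharyya distance between two Gaussian dynamic models and applies a simplified nearest-segment labelling rule (the paper's rule uses a ratio of within- and between-label distances; the function names and dictionary layout here are illustrative):

```python
import numpy as np

def bhattacharyya(mu1, S1, mu2, S2):
    """Bhattacharyya distance between two multivariate Gaussians."""
    S = 0.5 * (S1 + S2)
    d = mu1 - mu2
    term_mean = 0.125 * d @ np.linalg.solve(S, d)
    _, ld = np.linalg.slogdet(S)
    _, ld1 = np.linalg.slogdet(S1)
    _, ld2 = np.linalg.slogdet(S2)
    term_cov = 0.5 * (ld - 0.5 * (ld1 + ld2))
    return term_mean + term_cov

def label_segment(seg, labelled, delta_th):
    """Simplified Step#5: a segment is anomalous (None) if even the closest
    labelled segment is farther than delta_th; otherwise it takes the label
    of the closest labelled segment.

    seg: (mean, covariance); labelled: dict label -> (mean, covariance).
    """
    dists = {l: bhattacharyya(seg[0], seg[1], m, S)
             for l, (m, S) in labelled.items()}
    l_best = min(dists, key=dists.get)
    return None if dists[l_best] >= delta_th else l_best
```

The anomaly branch is what lets the algorithm flag clinical states that were never labelled by the clinician.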
However, the segments resulting from κ = 1 and S0 = Im provide a stochastic model of sufficient quality: each segment contains sufficient samples to accurately estimate the model parameters, the segments are statistically unique, and they satisfy the multivariate normality assumption. Therefore we set κ = 1 and S0 = Im to construct the segmented dataset ¯D from D. The dataset L is constructed by providing the clinician with ¯D, who then labels each segment as either in the ICU-admission clinical state or the non-ICU clinical state. [Figure 2: Dynamic model discovery and performance of Algorithm 1. (a) Dynamic model estimates with {κ, S0} = {0.1, 0.1Im} (dotted) and {1, Im} (solid). (b) Estimated dynamic models for the intervals of patient data in Fig.2(d). (c) Trade-off between the TPR and PPV; the dashed cross-hair indicates the performance of Algorithm 1 for δb = 1. (d) Physiological signals (heart-rate, diastolic and systolic blood pressure) from the patient with discovered models in Fig.2(b).] Of critical importance in medical applications is the accuracy and timeliness of the detection of the clinical state of the patient. Fig.2(c) provides the trade-off between the TPR and PPV for Algorithm 1, the Rothman index [18], a state-of-the-art method utilized in many hospitals today, and MEWS [21], each of which depends on the threshold selected. As seen, Algorithm 1 has superior performance compared to these two popular risk-scoring methods. For example, if we require TPR = 71.9%, then the associated PPV values for the Rothman index and MEWS are 26.1% and 18.0% respectively.
That is, the PPV of Algorithm 1 is 11.3 percentage points higher than that of the Rothman index, and 19.4 percentage points higher than that of MEWS. We also compare with methods commonly used in medical applications, with the results presented in Table 1. As seen, Algorithm 1 outperforms all these methods for estimating the patient's clinical state. There are several possible reasons that Algorithm 1 outperforms these methods, including accounting for therapeutic interventions and utilizing fine-grained personalization. Note that the results in Table 1 are computed 12 hours prior to ICU admission or hospital discharge. Additionally, the average detection time of ICU admission or discharge using Algorithm 1 is approximately 24 hours prior to the clinician's decision. This timeliness ensures that the patient's clinical state estimate provides clinicians with sufficient warning to apply a therapeutic intervention to stabilize the patient.

Table 1: Accuracy of Methods for Predicting ICU Admission
Algorithm             TPR (%)   PPV (%)
Algorithm 1           71.9      37.4
Rothman Index         53.9      34.5
MEWS                  28.1      26.3
Logistic Regression   55.7      30.7
Lasso Regularization  55.8      30.3
Random Forest         44.5      31.1
SVMs                  32.2      29.9

A key feature of Algorithm 1 is that it learns the number of unique dynamic models for each patient, and as more data is collected the number of unique dynamic models discovered may increase. Fig.2(b) illustrates this process for a patient whose physiological signals are given in Fig.2(d). The horizontal dashed lines indicate the intervals and the associated discovered dynamic models. Note that typical hospitalization times for cancer ward patients in the dataset range from 4 hours to over 85 days. As seen, as more samples are obtained for the patient, the number of dynamic models that describe the patient's dynamics increases. Additionally, there is good agreement across the different time intervals on where the patient's dynamics change.
For example, the change point at 40 hours after hospitalization occurs as a result of an increase in the systolic and diastolic blood pressure and a decrease in the heart-rate. At 1700 hours the change in state results from a dramatic increase in both the systolic and diastolic blood pressure and a decrease in the heart-rate. Since, from Fig.2(d), these physiological values were not observed previously, Algorithm 1 correctly detects that this is a new unique state for the patient. Though Algorithm 1 can identify changes in patient state, the clinician's domain-knowledge is required to define the clinical state of the patient. Only dynamic models 8 and 9 are associated with the ICU-admission state. Further results are provided in the Supporting Material that illustrate how current methods for constructing risk scores suffer from the bias introduced by therapeutic intervention censoring, and how a binary threshold δb can be introduced into Algorithm 1 to control the TPR and PPV for clinical state estimation. 6 Conclusion In this paper a novel non-parametric learning algorithm for confidently learning stochastic models of patients and classifying their associated clinical states was presented. Compared to state-of-the-art clinical state estimation methods, our algorithm eliminates the bias caused by therapeutic intervention censoring, is personalized to the patient's specific dynamics resulting from medical complications (e.g. disease, drug interactions, physical contusions or fractures), and can detect anomalous clinical states. The algorithm was applied to real-world patient data from a cancer ward in a large academic hospital, and was found to yield a significant improvement in classifying patients' clinical states, in both accuracy and timeliness, compared with current state-of-the-art methods such as the Rothman index.
The algorithm provides valuable information that allows clinicians to make informed decisions about whether a therapeutic intervention is necessary to improve the clinical state of the patient.

Acknowledgments This research was supported by: NSF ECCS 1462245, and the Air Force DDDAS program.

References
[1] M. Bartlett. Properties of sufficiency and statistical tests. Proc. Roy. Soc. London A, 160:268–282, 1937.
[2] D. Basso, F. Pesarin, L. Salmaso, and A. Solari. Permutation Tests. Springer, 2009.
[3] M. Beal, Z. Ghahramani, and C. Rasmussen. The infinite hidden Markov model. In Advances in Neural Information Processing Systems, pages 577–584, 2001.
[4] M. Bilodeau and D. Brenner. Theory of Multivariate Statistics. Springer, 2008.
[5] P. Brockwell and R. Davis. Time Series: Theory and Methods. Springer Science & Business Media, 2013.
[6] M. Bryant and E. Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Advances in Neural Information Processing Systems, pages 2699–2707, 2012.
[7] T. Campbell, J. Straub, J. Fisher, and J. How. Streaming, distributed variational inference for Bayesian nonparametrics. In Advances in Neural Information Processing Systems, pages 280–288, 2015.
[8] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, pages 209–230, 1973.
[9] E. Fox, M. Jordan, E. Sudderth, and A. Willsky. Sharing features among dynamical systems with beta processes. In Advances in Neural Information Processing Systems, pages 549–557, 2009.
[10] E. Fox, E. Sudderth, M. Jordan, and A. Willsky. An HDP-HMM for systems with state persistence. In Proceedings of the 25th International Conference on Machine Learning, pages 312–319. ACM, 2008.
[11] A. Gelman, J. Carlin, H. Stern, and D. Rubin. Bayesian Data Analysis, volume 2. Taylor & Francis, 2014.
[12] A. Johnson, M. Ghassemi, S. Nemati, K. Niehaus, D. Clifton, and G. Clifford. Machine learning and decision support in critical care. Proceedings of the IEEE, 104(2):444–466, 2016.
[13] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample variance penalization. COLT, 2009.
[14] G. Montanez, S. Amizadeh, and N. Laptev. Inertial hidden Markov models: Modeling change in multivariate time series. In AAAI, pages 1819–1825, 2015.
[15] R. Muirhead. Aspects of Multivariate Statistical Theory. Wiley, 1982.
[16] C. Paxton, A. Niculescu-Mizil, and S. Saria. Developing predictive models using electronic medical records: challenges and pitfalls. In AMIA Annual Symposium Proceedings, volume 2013, pages 1109–1115. American Medical Informatics Association, 2012.
[17] F. Pesarin and L. Salmaso. Permutation Tests for Complex Data: Theory, Applications and Software. John Wiley & Sons, 2010.
[18] M. Rothman, S. Rothman, and J. Beals. Development and validation of a continuous measure of patient condition using the electronic medical record. Journal of Biomedical Informatics, 46(5):837–848, 2013.
[19] S. Saria, D. Koller, and A. Penn. Learning individual and population level traits from clinical temporal data. In Proc. Neural Information Processing Systems (NIPS), Predictive Models in Personalized Medicine Workshop, 2010.
[20] J. Schott. A test for the equality of covariance matrices when the dimension is large relative to the sample sizes. Computational Statistics & Data Analysis, 51(12):6535–6542, 2007.
[21] P. Subbe, M. Kruger, P. Rutherford, and L. Gemmel. Validation of a modified Early Warning Score in medical admissions. QJM, 94(10):521–526, 2001.
[22] Y. W. Teh, M. Jordan, M. Beal, and D. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 2012.
[23] C. Tenreiro. An affine invariant multiple test procedure for assessing multivariate normality. Computational Statistics & Data Analysis, 55(5):1980–1992, 2011.
[24] N. Timm. Applied Multivariate Analysis, volume 1. Springer, 2002.
[25] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In Proceedings of the 25th International Conference on Machine Learning, pages 1088–1095. ACM, 2008.
Regret of Queueing Bandits Subhashini Krishnasamy University of Texas at Austin Rajat Sen University of Texas at Austin Ramesh Johari Stanford University Sanjay Shakkottai University of Texas at Austin Abstract We consider a variant of the multiarmed bandit problem where jobs queue for service, and service rates of different servers may be unknown. We study algorithms that minimize queue-regret: the (expected) difference between the queue-lengths obtained by the algorithm, and those obtained by a “genie”-aided matching algorithm that knows exact service rates. A naive view of this problem would suggest that queue-regret should grow logarithmically: since queue-regret cannot be larger than classical regret, results for the standard MAB problem give algorithms that ensure queue-regret increases no more than logarithmically in time. Our paper shows surprisingly more complex behavior. In particular, the naive intuition is correct as long as the bandit algorithm’s queues have relatively long regenerative cycles: in this case queue-regret is similar to cumulative regret, and scales (essentially) logarithmically. However, we show that this “early stage” of the queueing bandit eventually gives way to a “late stage”, where the optimal queue-regret scaling is O(1/t). We demonstrate an algorithm that (order-wise) achieves this asymptotic queue-regret, and also exhibits close to optimal switching time from the early stage to the late stage. 1 Introduction Stochastic multi-armed bandits (MAB) have a rich history in sequential decision making [1, 2, 3]. In its simplest form, a collection of K arms are present, each having a binary reward (Bernoulli random variable over {0, 1}) with an unknown success probability1 (and different across arms). At each (discrete) time, a single arm is chosen by the bandit algorithm, and a (binary-valued) reward is accrued. 
The MAB problem is to determine which arm to choose at each time in order to minimize the cumulative expected regret, namely, the cumulative loss of reward when compared to a genie that has knowledge of the arm success probabilities. In this paper, we consider the variant of this problem motivated by queueing applications. Formally, suppose that arms are pulled upon arrivals of jobs; each arm is now a server that can serve the arriving job. In this model, the stochastic reward described above is equivalent to service. In other words, if the arm (server) that is chosen results in positive reward, the job is successfully completed and departs the system. However, this basic model fails to capture an essential feature of service in many settings: in a queueing system, jobs wait until they complete service. Such systems are stateful: when the chosen arm results in zero reward, the job being served remains in the queue, and over time the model must track the remaining jobs waiting to be served. The difference between the cumulative number of arrivals and departures, or the queue length, is the most common measure of the quality of the service strategy being employed.
¹Here, the success probability of an arm is the probability that the reward equals '1'.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Queueing is employed in modeling a vast range of service systems, including supply and demand in online platforms (e.g., Uber, Lyft, Airbnb, Upwork, etc.); order flow in financial markets (e.g., limit order books); packet flow in communication networks; and supply chains. In all of these systems, queueing is an essential part of the model: e.g., in online platforms, the available supply (e.g. available drivers in Uber or Lyft, or available rentals in Airbnb) queues until it is "served" by arriving demand (ride requests in Uber or Lyft, booking requests in Airbnb).
Since MAB models are a natural way to capture learning in this entire range of systems, incorporating queueing behavior into the MAB model is an essential challenge. This problem clearly has the explore-exploit tradeoff inherent in the standard MAB problem: since the success probabilities across different servers are unknown, there is a tradeoff between learning (exploring) the different servers and (exploiting) the most promising server from past observations. We refer to this problem as the queueing bandit. Since the queue length is simply the difference between the cumulative number of arrivals and departures (cumulative actual reward; here the reward is 1 if the job is served), the natural notion of regret here is to compare the expected queue length under a bandit algorithm with the corresponding one under a genie policy (with identical arrivals) that always chooses the arm with the highest expected reward. Queueing System: To capture this trade-off, we consider a discrete-time queueing system with a single queue and K servers. Arrivals to the queue and the service offered by the links follow a product Bernoulli distribution, i.i.d. across time slots. The statistical parameters corresponding to the service distributions are unknown. In any time slot, the queue can be served by at most one server, and the problem is to schedule a server in every time slot. The service is pre-emptive and a job returns to the queue if not served. There is at least one server with a service rate higher than the arrival rate, which ensures that the "genie" policy is stable. Let Q(t) be the queue length at time t under a given bandit algorithm, and let Q∗(t) be the corresponding queue length under the "genie" policy that always schedules the optimal server (i.e. always plays the arm with the highest mean). We define the queue-regret as the difference in expected queue lengths for the two policies. That is, the regret is given by: Ψ(t) := E[Q(t) − Q∗(t)].
(1) Here Ψ(t) has the interpretation of the traditional MAB regret, with the caveat that rewards are accumulated only if there is a job that can benefit from this reward. We refer to Ψ(t) as the queue-regret; formally, our goal is to develop bandit algorithms that minimize the queue-regret at a finite time t. To develop some intuition, we compare this to the standard stochastic MAB problem. For the standard problem, well-known algorithms such as UCB, KL-UCB, and Thompson sampling achieve a cumulative regret of O((K − 1) log t) at time t [4, 5, 6], and this result is essentially tight [7]. In the queueing bandit, we can obtain a simple bound on the queue-regret by noting that it cannot be any higher than the traditional regret (where a reward is accrued at each time whether a job is present or not). This leads to an upper bound of O((K − 1) log t) for the queue-regret. However, this upper bound does not tell the whole story for the queueing bandit: we show that there are two "stages" to the queueing bandit. In the early stage, the bandit algorithm is unable to even stabilize the queue, i.e. on average, the queue length increases over time and the queue is continuously backlogged; therefore the queue-regret grows with time, similar to the cumulative regret. Once the algorithm is able to stabilize the queue (the late stage), a dramatic shift occurs in the behavior of the queue-regret. A stochastically stable queue goes through regenerative cycles: a random cyclical behavior where the queue builds up over time, then empties, and the cycle repeats. The associated recurring "zero-queue-length" epochs mean that the sample-path queue-regret essentially "resets" at (stochastically) regular intervals; i.e., the sample-path queue-regret becomes non-positive at these time instants. Thus the queue-regret should fall over time, as the algorithm learns.
Our main results provide lower bounds on queue-regret for both the early and late stages, as well as algorithms that essentially match these lower bounds. We first describe the late stage, and then describe the early stage for a heavily loaded system. 1. The late stage. We first consider what happens to the queue-regret as t → ∞. As noted above, a reasonable intuition for this regime comes from considering a standard bandit algorithm, but where the sample-path queue-regret "resets" at time points of regeneration.² In this case, the queue-regret is approximately a (discrete) derivative of the cumulative regret. Since the optimal cumulative regret scales like log t, asymptotically the optimal queue-regret should scale like 1/t. Indeed, we show that the queue-regret for α-consistent policies is at least C/t infinitely often, where C is a constant independent of t. Further, we introduce an algorithm called Q-ThS for the queueing bandit (a variant of Thompson sampling with explicit structured exploration), and show an asymptotic regret upper bound of O(poly(log t)/t) for Q-ThS, thus matching the lower bound up to poly-logarithmic factors in t. Q-ThS exploits structured exploration: we exploit the fact that the queue regenerates regularly to explore more systematically and aggressively. ²This is inexact since the optimal queueing system and the bandit queueing system may not regenerate at the same time point; but the intuition holds. 2. The early stage. The preceding discussion might suggest that an algorithm that explores aggressively would dominate any algorithm that balances exploration and exploitation. However, an overly aggressive exploration policy will preclude the queueing system from ever stabilizing, which is necessary to induce the regenerative cycles that lead the system to the late stage. To even enter the late stage, therefore, we need an algorithm that exploits enough to actually stabilize the queue (i.e.
choose good arms sufficiently often so that the mean service rate exceeds the expected arrival rate). We refer to the early stage of the system, as noted above, as the period before the algorithm has learned to stabilize the queues. For a heavily loaded system, where the arrival rate approaches the service rate of the optimal server, we show a lower bound of Ω(log t/log log t) on the queue-regret in the early stage. Thus, up to a log log t factor, the early-stage regret behaves similarly to the cumulative regret (which scales like log t). The heavily loaded regime is a natural asymptotic regime in which to study queueing systems, and has been extensively employed in the literature; see, e.g., [9, 10] for surveys. Perhaps more importantly, our analysis shows that the time to switch from the early stage to the late stage scales at least as t = Ω(K/ε), where ε is the gap between the arrival rate and the service rate of the optimal server; thus ε → 0 in the heavy-load setting. In particular, we show that the early-stage lower bound of Ω(log t/log log t) is valid up to t = O(K/ε); on the other hand, we also show that, in the heavy-load limit, depending on the relative scaling between K and ε, the regret of Q-ThS scales like O(poly(log t)/(ε²t)) for times that are arbitrarily close to Ω(K/ε). In other words, Q-ThS is nearly optimal in the time it takes to "switch" from the early stage to the late stage. Our results constitute the first insight into the behavior of regret in this queueing setting; as emphasized, it is quite different from that seen for minimization of cumulative regret in the standard MAB problem. The preceding discussion highlights why minimization of queue-regret presents a subtle learning problem. On one hand, if the queue has been stabilized, the presence of regenerative cycles allows us to establish that queue-regret must eventually decay to zero at rate 1/t under an optimal algorithm (the late stage).
On the other hand, to actually have regenerative cycles in the first place, a learning algorithm needs to exploit enough to actually stabilize the queue (the early stage). Our analysis not only characterizes the regret in both regimes, but also essentially exactly characterizes the transition point between the two regimes. In this way the queueing bandit is a remarkable new example of the tradeoff between exploration and exploitation. 2 Related work MAB algorithms. Stochastic MAB models have been widely used in the past as a paradigm for various sequential decision-making problems in industrial manufacturing, communication networks, clinical trials, online advertising and webpage optimization, and other domains requiring resource allocation and scheduling; see, e.g., [1, 2, 3]. The MAB problem has been studied in two variants, based on different notions of optimality. One considers the mean accumulated loss of rewards, often called regret, as compared to a genie policy that always chooses the best arm. Most effort in this direction is focused on obtaining the best regret bounds possible at any finite time, in addition to designing computationally feasible algorithms [3]. The other line of research models the bandit problem as a Markov decision process (MDP), with the goal of optimizing infinite-horizon discounted or average reward. The aim is to characterize the structure of the optimal policy [2]. Since these policies deal with optimality with respect to infinite-horizon costs, unlike the former body of research, they give steady-state and not finite-time guarantees. Our work uses the regret minimization framework to study the queueing bandit problem. Bandits for queues. There is a body of literature on the application of bandit models to queueing and scheduling systems [2, 11, 12, 13, 14, 15, 16, 17].
These queueing studies focus on infinite-horizon costs (i.e., statistically steady-state behavior, where the focus typically is on conditions for optimality of index policies); further, the models do not typically consider user-dependent server statistics. Our focus here is different: algorithms and analysis to optimize finite-time regret. 3 Problem Setting We consider a discrete-time queueing system with a single queue and K servers. The servers are indexed by k = 1, . . . , K. Arrivals to the queue and the service offered by the links follow a product Bernoulli distribution, i.i.d. across time slots. The mean arrival rate is given by λ and the mean service rates by the vector µ = [µk]_{k∈[K]}, with λ < max_{k∈[K]} µk. In any time slot, the queue can be served by at most one server, and the problem is to schedule a server in every time slot. The scheduling decision at any time t is based on past observations corresponding to the services obtained from the scheduled servers until time t − 1. The statistical parameters corresponding to the service distributions are unknown. The queueing system evolution can be described as follows. Let κ(t) denote the server that is scheduled at time t. Also, let Rk(t) ∈ {0, 1} be the service offered by server k, and let S(t) denote the service offered by the scheduled server at time t, i.e., S(t) = R_{κ(t)}(t). If A(t) is the number of arrivals at time t, then the queue length at time t is given by: Q(t) = (Q(t − 1) + A(t) − S(t))⁺. Our goal in this paper is to focus attention on how queueing behavior impacts regret minimization in bandit algorithms. We evaluate the performance of scheduling policies against the policy that schedules the (unique) optimal server in every time slot, i.e., the server k∗ := arg max_{k∈[K]} µk with the maximum mean rate µ∗ := max_{k∈[K]} µk. Let Q(t) be the queue length at time t under our specified algorithm, and let Q∗(t) be the corresponding queue length under the optimal policy.
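The queue recursion above lends itself to direct simulation. The following sketch compares a learning policy against the genie on an identical arrival sequence; the ε-greedy policy and all numeric values are illustrative stand-ins, not the paper's Q-ThS:

```python
import random

def simulate(T, lam, mu, policy, seed=0):
    """Simulate Q(t) = max(Q(t-1) + A(t) - S(t), 0) for T slots.

    Arrivals are Bernoulli(lam); the scheduled server k offers
    Bernoulli(mu[k]) service. policy(t, pulls, wins, rng) returns the
    server scheduled at time t. Arrival/service randomness is driven by
    `seed`, so two policies run with the same seed see identical arrivals."""
    rng_env = random.Random(seed)       # arrivals and offered services
    rng_pol = random.Random(seed + 1)   # policy's private randomness
    K = len(mu)
    pulls, wins = [0] * K, [0] * K
    Q, traj = 0, []
    for t in range(1, T + 1):
        k = policy(t, pulls, wins, rng_pol)
        a = rng_env.random() < lam      # A(t)
        s = rng_env.random() < mu[k]    # S(t), service offered by server k
        pulls[k] += 1
        wins[k] += s
        Q = max(Q + a - s, 0)
        traj.append(Q)
    return traj

mu = [0.3, 0.5, 0.7]                           # illustrative service rates
genie = lambda t, pulls, wins, rng: 2          # always the best server

def eps_greedy(t, pulls, wins, rng, eps=0.1):
    if rng.random() < eps or min(pulls) == 0:
        return rng.randrange(len(pulls))
    return max(range(len(pulls)), key=lambda k: wins[k] / pulls[k])

Q_alg = simulate(5000, 0.6, mu, eps_greedy, seed=1)
Q_opt = simulate(5000, 0.6, mu, genie, seed=1)
# Averaging Q_alg[t] - Q_opt[t] over many seeds estimates the queue-regret
# Psi(t): it grows in the early stage and decays once the queue stabilizes.
```

Since λ = 0.6 < µ∗ = 0.7 here, the genie's queue is stable and empties recurrently, which is exactly the regenerative behavior discussed above.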
We define the regret as the difference in mean queue lengths for the two policies. That is, the regret is given by: Ψ(t) := E[Q(t) − Q∗(t)]. We use the terms queue-regret or simply regret to refer to Ψ(t). Throughout, when we evaluate the queue-regret, we do so under the assumption that the queueing system starts in the steady-state distribution induced by the optimal policy, as follows. Assumption 1 (Initial State). Both Q(0) and Q∗(0) have the same initial state distribution, and this is chosen to be the stationary distribution of Q∗(t); this distribution is denoted π(λ, µ∗). 4 The Late Stage We analyze the performance of a scheduling algorithm with respect to queue-regret as a function of time and system parameters such as: (a) the load on the system ε := µ∗ − λ, and (b) the minimum difference between the rates of the best and the next-best servers ∆ := µ∗ − max_{k≠k∗} µk. [Figure 1: Queue-regret Ψ(t) under Q-ThS in a system with K = 5, ε = 0.1 and ∆ = 0.17. The plot marks the early stage, where the regret grows between Ω(log t/log log t) and O(log³ t), and the late stage, where it decays between Ω(1/t) and O(log³ t/t).] As a preview of the theoretical results, Figure 1 shows the evolution of the queue-regret with time in a system with 5 servers under a scheduling policy inspired by Thompson sampling. Exact details of the scheduling algorithm can be found in Section 4.2. It is observed that the regret goes through a phase transition. In the initial stage, when the algorithm has not estimated the service rates well enough to stabilize the queue, the regret grows poly-logarithmically, similar to the classical MAB setting. After a critical point, when the algorithm has learned the system parameters well enough to stabilize the queue, the queue length goes through regenerative cycles as the queue becomes empty.
In other words, instead of the queue length being continuously backlogged, the queueing system has a stochastic cyclical behavior where the queue builds up, becomes empty, and the cycle recurs. Thus, at the beginning of every regenerative cycle, there is no accumulation of past errors and the sample-path queue-regret is at most zero. As the algorithm estimates the parameters better with time, the length of the regenerative cycles decreases and the queue-regret decays to zero. Notation: For the results in Section 4, the notation f(t) = O(g(K, ε, t)) for all t ∈ h(K, ε) (here, h(K, ε) is an interval that depends on K, ε) implies that there exist constants C and t0, independent of K and ε, such that f(t) ≤ Cg(K, ε, t) for all t ∈ (t0, ∞) ∩ h(K, ε). 4.1 An Asymptotic Lower Bound We establish an asymptotic lower bound on the regret for the class of α-consistent policies; this class for the queueing bandit is a generalization of the α-consistent class used in the literature for the traditional stochastic MAB problem [7, 18, 19]. The precise definition is given below (1{·} is the indicator function). Definition 1. A scheduling policy is said to be α-consistent (for some α ∈ (0, 1)) if, given any problem instance specified by (λ, µ), E[Σ_{s=1}^{t} 1{κ(s) = k}] = O(t^α) for all k ≠ k∗. Theorem 1 below gives an asymptotic lower bound on the queue-regret for an arbitrary α-consistent policy. Theorem 1. For any problem instance (λ, µ) and any α-consistent policy, the regret Ψ(t) satisfies Ψ(t) ≥ (λ/4) D(µ)(1 − α)(K − 1) (1/t) for infinitely many t, where D(µ) = ∆/KL(µmin, (µ∗ + 1)/2). (2) Outline for Theorem 1. The proof of the lower bound consists of three main steps. First, in Lemma 21, we show that the regret at any time slot is lower bounded by the probability of a sub-optimal schedule in that time slot (up to a constant factor that depends on the problem instance).
The key idea in this lemma is to show the equivalence of any two systems with the same marginal service distributions under bandit feedback. This is achieved through a carefully constructed coupling argument that maps the original system, with independent service across links, to another system whose service process is dependent across links but has the same marginal distributions. As a second step, the lower bound on the regret in terms of the probability of a sub-optimal schedule enables us to lower bound the cumulative queue-regret in terms of the number of sub-optimal schedules. We then use a lower bound on the number of sub-optimal schedules for α-consistent policies (Lemma 19 and Corollary 20) to obtain a lower bound on the cumulative regret. In the final step, we use the lower bound on the cumulative queue-regret to obtain an infinitely-often lower bound on the queue-regret.

4.2 Achieving the Asymptotic Bound

We next focus on algorithms that can (up to a poly-log factor) achieve a scaling of O(1/t). A key challenge is that we need high-probability bounds on the number of times the correct arm is scheduled, and these bounds must hold over the late-stage regenerative cycles of the queue. Recall that these regenerative cycles are random time intervals with Θ(1) expected length under the optimal policy, and that their lengths are correlated with the bandit algorithm's decisions (the queue-length evolution depends on the past history of bandit arm schedules). To address this, we propose a slightly modified version of the Thompson Sampling algorithm. The algorithm, which we call Q-ThS, has an explicit structured-exploration component similar to ε-greedy algorithms. This structured exploration provides sufficiently good estimates for all arms (including sub-optimal ones) in the late stage. We describe the algorithm we employ in detail.
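Before the formal description, a self-contained sketch of Q-ThS and of a Monte-Carlo estimate of queue-regret may help fix ideas. The decision rule below follows the exploration probability min{1, 3K log² t/t} and the Beta posteriors stated in the text; the simulation harness, the coupling of the two queues through a common arrival process, and all parameter values are our own illustrative assumptions, not the authors' code.

```python
import numpy as np

def q_ths_step(t, succ, trials, rng):
    """One Q-ThS decision: structured exploration w.p. min(1, 3K log^2 t / t),
    otherwise Thompson sampling from Beta(successes + 1, failures + 1)."""
    K = len(succ)
    if rng.random() < min(1.0, 3 * K * np.log(t) ** 2 / t):
        return int(rng.integers(K))                      # uniform exploration
    return int(np.argmax(rng.beta(succ + 1, trials - succ + 1)))

def queue_regret(mu, lam, T, n_runs=100, seed=0):
    """Monte-Carlo estimate of Psi(t) = E[Q(t) - Q*(t)] for Bernoulli
    arrivals (rate lam) and Bernoulli services (rates mu)."""
    mu = np.asarray(mu)
    best = int(np.argmax(mu))
    psi = np.zeros(T)
    rng = np.random.default_rng(seed)
    for _ in range(n_runs):
        q = q_opt = 0.0
        succ = np.zeros(len(mu))
        trials = np.zeros(len(mu))
        for t in range(1, T + 1):
            arrival = rng.random() < lam                 # common arrival process
            k = q_ths_step(t, succ, trials, rng)
            served = rng.random() < mu[k]                # bandit feedback
            trials[k] += 1
            succ[k] += served
            q = max(q + arrival - served, 0.0)           # Q-ThS queue
            q_opt = max(q_opt + arrival - (rng.random() < mu[best]), 0.0)
            psi[t - 1] += q - q_opt
    return psi / n_runs
```

Note that, for simplicity, this sketch starts both queues empty rather than in the stationary distribution required by Assumption 1, so the estimate differs from Ψ(t) by a transient.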
Let T_k(t) be the number of times server k is assigned in the first t time-slots, and let μ̂(t) be the vector of empirical mean service rates at time-slot t computed from past observations (up to t − 1). At time-slot t, Q-ThS decides to explore with probability min{1, 3K log² t / t}; otherwise it exploits. When exploring, it chooses a server uniformly at random. The chosen exploration rate ensures that we are able to obtain concentration results for the number of times any link is sampled.³ When exploiting, for each k ∈ [K] we draw a sample θ̂_k(t) from the distribution

Beta(μ̂_k(t) T_k(t − 1) + 1, (1 − μ̂_k(t)) T_k(t − 1) + 1),

and schedule the arm with the largest sample (standard Thompson Sampling for Bernoulli arms [20]). Details of the algorithm are given in Algorithm 1 in the Appendix.

We now show that, for a given problem instance (λ, μ) (and therefore fixed ε), the regret under Q-ThS scales as O(poly(log t)/t). We state the most general form of the asymptotic upper bound in Theorem 2. A slightly weaker version of the result is given in Corollary 3; this corollary is useful for understanding the dependence of the upper bound on the load ε and the number of servers K.

Theorem 2. Consider any problem instance (λ, μ). Let

w(t) = exp((2 log t / Δ)^{2/3}),  v₀(t) = (6K/ε) w(t),  and  v(t) = (24/ε²) log t + (60K/ε) v₀(t) log² t / t.

Then, under Q-ThS the regret Ψ(t) satisfies

Ψ(t) = O(K v(t) log² t / t)

for all t such that w(t)/log t ≥ 2/ε, t ≥ exp(6/Δ²) and v(t) + v₀(t) ≤ t/2.

Corollary 3. Let w(t) be as defined in Theorem 2. Then,

Ψ(t) = O(K log³ t / (ε² t))

for all t such that w(t)/log t ≥ 2/ε, t/w(t) ≥ max{24K/ε, 15K² log t}, t ≥ exp(6/Δ²) and t/log t ≥ 198/ε².

Outline for Theorem 2.
As mentioned earlier, the central idea in the proof is that the sample-path queue-regret is at most zero at the beginning of regenerative cycles, i.e., at instants when the queue becomes empty. The proof consists of two main parts: one gives a high-probability result on the number of sub-optimal schedules in the exploit phase in the late stage, and the other shows that, at any time, the beginning of the current regenerative cycle is not far in the past. The former part is proved in Lemma 9, where we use the structured-exploration component of Q-ThS to show that all links, including the sub-optimal ones, are sampled sufficiently many times to give good estimates of the link rates. This in turn ensures that, with high probability, the algorithm schedules the correct link in the exploit phase in the late stage. For the latter part, we prove a high-probability bound on the last time instant at which the queue was empty (the beginning of the current regenerative cycle) in Lemma 15. Here, we use a recursive argument to obtain a tight bound. More specifically, we first use a coarse high-probability upper bound on the queue-length (Lemma 11) to get a first-cut bound on the beginning of the regenerative cycle (Lemma 12). This bound on the regenerative cycle length is then used recursively to obtain tighter bounds on the queue-length and, in turn, on the start of the current regenerative cycle (Lemmas 14 and 15, respectively). The proof of the theorem combines the two parts above to show that the main contribution to the queue-regret comes from the structured-exploration component in the current regenerative cycle, which gives the stated result.

5 The Early Stage in the Heavily Loaded Regime

In order to study the performance of α-consistent policies in the early stage, we consider the heavily loaded system, where the arrival rate λ is close to the optimal service rate μ*, i.e., ε = μ* − λ → 0.
This is a well-studied asymptotic regime for queueing systems, as it yields fundamental insight into their structure; see, e.g., [9, 10] for extensive surveys. Analyzing queue-regret in the early stage in the heavily loaded regime has the effect that the optimal server is the only one that stabilizes the queue. As a result, in the heavily loaded regime, effective learning and scheduling of the optimal server play a crucial role in determining the transition point from the early stage to the late stage. For this reason, the heavily loaded regime reveals the behavior of regret in the early stage.

³The exploration rate could scale like log t/t if we knew Δ in advance; however, without this knowledge, additional exploration is needed.

Notation: For all the results in this section, the notation f(t) = O(g(K, ε, t)) for all t ∈ h(K, ε) (h(K, ε) is an interval that depends on K, ε) means that there exist numbers C and ε₀ that depend on Δ such that, for all ε ≤ ε₀, f(t) ≤ C g(K, ε, t) for all t ∈ h(K, ε).

Theorem 4 gives a lower bound on the regret in the heavily loaded regime, roughly in the time interval (K^{1/(1−α)}, O(K/ε)), for any α-consistent policy.

Theorem 4. Given any problem instance (λ, μ), any α-consistent policy and any γ > 1/(1−α), the regret Ψ(t) satisfies

Ψ(t) ≥ (D(μ)/2) (K − 1) log t / log log t

for t ∈ [max{C₁ K^γ, τ}, (K − 1) D(μ)/(2ε)], where D(μ) is given by Equation (2), and τ and C₁ are constants that depend on α, γ and the policy.

Outline for Theorem 4. The crucial idea in the proof is to show a lower bound on the queue-regret in terms of the number of sub-optimal schedules (Lemma 22). As in Theorem 1, we then use a lower bound on the number of sub-optimal schedules for α-consistent policies (given by Corollary 20) to obtain a lower bound on the queue-regret. Theorem 4 shows that, for any α-consistent policy, it takes at least Ω(K/ε) time for the queue-regret to transition from the early stage to the late stage.
In this region, the scaling O(log t / log log t) reflects the fact that queue-regret is dominated by the cumulative regret, which grows like O(log t). A reasonable question then arises: after time Ω(K/ε), should we expect the regret to transition into the late-stage regime analyzed in the preceding section? We answer this question by studying when Q-ThS achieves its late-stage regret scaling of O(poly(log t)/(ε² t)); as we will see, in an appropriate sense, Q-ThS is close to optimal in its transition from early stage to late stage when compared with the bound in Theorem 4. Formally, we have Corollary 5, an analog of Corollary 3 in the heavily loaded regime.

Corollary 5. For any problem instance (λ, μ), any γ ∈ (0, 1) and any δ ∈ (0, min(γ, 1 − γ)), the regret under Q-ThS satisfies

Ψ(t) = O(K log³ t / (ε² t))

for all t ≥ C₂ max{(1/ε)^{1/(γ−δ)}, (K/ε)^{1/(1−γ)}, (K²)^{1/(1−γ−δ)}, (1/ε²)^{1/(1−δ)}}, where C₂ is a constant independent of ε (but dependent on Δ, γ and δ).

By combining the result in Corollary 5 with Theorem 4, we can infer that in the heavily loaded regime, the time taken by Q-ThS to achieve the O(poly(log t)/(ε² t)) scaling is, in some sense, order-wise close to optimal within the α-consistent class. Specifically, for any β ∈ (0, 1), there exists a scaling of K with ε such that the queue-regret under Q-ThS scales as O(poly(log t)/(ε² t)) for all t > (K/ε)^β, while the regret under any α-consistent policy scales as Ω(K log t / log log t) for t < K/ε. We conclude by noting that while the transition point from the early stage to the late stage for Q-ThS is near-optimal in the heavily loaded regime, it does not yield optimal regret performance in the early stage in general. In particular, recall that at any time t, the structured-exploration component of Q-ThS is invoked with probability 3K log² t / t.
As a result, we see that, in the early stage, queue-regret under Q-ThS could be a log² t-factor worse than the Ω(log t / log log t) lower bound shown in Theorem 4 for the α-consistent class. This intuition can be formalized: it is straightforward to show an upper bound of 2K log³ t for any t > max{C₃, U}, where C₃ is a constant that depends on Δ but is independent of K and ε; we omit the details.

[Figure 2: Variation of queue-regret Ψ(t) with K and ε under Q-ThS, for ε ∈ {0.05, 0.1, 0.15}: (a) a system with 5 servers; (b) a system with 7 servers. The phase-transition point shifts towards the right as ε decreases, and the efficiency of learning decreases as the size of the system grows.]

6 Simulation Results

In this section we present simulation results for various queueing bandit systems with K servers. These results corroborate our theoretical analysis in Sections 4 and 5. In particular, a phase transition from unstable to stable behavior can be observed in all our simulations, as predicted by our analysis. In the remainder of the section we demonstrate the performance of Algorithm 1 under variations of system parameters such as the traffic (ε), the gap between the optimal and sub-optimal servers (Δ), and the size of the system (K). We also compare the performance of our algorithm with versions of UCB-1 [4] and Thompson Sampling [20] without structured exploration (Figure 3 in the appendix).

Variation with ε and K. Figure 2 shows the evolution of Ψ(t) in systems of size 5 and 7. It can be observed that the regret decays faster in the smaller system, as predicted by Theorem 2 in the late stage and Corollary 5 in the early stage.
The performance of the system under different traffic settings can also be observed in Figure 2. It is evident that the regret of the queueing system grows as ε decreases, in agreement with our analytical results (Corollaries 3 and 5). Figure 2 further shows that the time at which the phase transition occurs shifts towards the right as ε decreases, as predicted by Corollaries 3 and 5.

7 Discussion and Conclusion

This paper provides the first regret analysis of the queueing bandit problem, including a characterization of regret in both early and late stages, together with an analysis of the switching time, and an algorithm (Q-ThS) that is asymptotically optimal (to within poly-logarithmic factors) and essentially exhibits the correct switching behavior between early and late stages. There remain substantial open directions for future work. First, is there a single algorithm that gives optimal performance in both early and late stages, as well as the optimal switching time between them? The price paid for structured exploration by Q-ThS is an inflation of regret in the early stage. An important open question is to find a single, adaptive algorithm that gives good performance over all time. As we note in the appendix, classic (unstructured) Thompson Sampling is an intriguing candidate from this perspective. Second, the most significant technical hurdle in finding a single optimal algorithm is the difficulty of establishing concentration results for the number of sub-optimal arm pulls within a regenerative cycle whose length depends on the bandit strategy. Such concentration results would be needed in two different limits: first, as the start time of the regenerative cycle approaches infinity (for the asymptotic analysis of late-stage regret); and second, as the load of the system increases (for the analysis of early-stage regret in the heavily loaded regime).
Any progress on the open directions described above would likely require substantial progress on these technical questions as well.

Acknowledgement: This work is partially supported by NSF Grants CNS-1161868, CNS-1343383, CNS-1320175, ARO grants W911NF-16-1-0377, W911NF-15-1-0227, W911NF-14-1-0387 and the US DoT supported D-STOP Tier 1 University Transportation Center.

References

[1] J. C. Gittins, “Bandit processes and dynamic allocation indices,” Journal of the Royal Statistical Society, Series B (Methodological), pp. 148–177, 1979.
[2] A. Mahajan and D. Teneketzis, “Multi-armed bandit problems,” in Foundations and Applications of Sensor Management. Springer, 2008, pp. 121–151.
[3] S. Bubeck and N. Cesa-Bianchi, “Regret analysis of stochastic and nonstochastic multi-armed bandit problems,” Machine Learning, vol. 5, no. 1, pp. 1–122, 2012.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer, “Finite-time analysis of the multiarmed bandit problem,” Machine Learning, vol. 47, no. 2-3, pp. 235–256, 2002.
[5] A. Garivier and O. Cappé, “The KL-UCB algorithm for bounded stochastic bandits and beyond,” arXiv preprint arXiv:1102.2490, 2011.
[6] S. Agrawal and N. Goyal, “Analysis of Thompson sampling for the multi-armed bandit problem,” arXiv preprint arXiv:1111.1797, 2011.
[7] T. L. Lai and H. Robbins, “Asymptotically efficient adaptive allocation rules,” Advances in Applied Mathematics, vol. 6, no. 1, pp. 4–22, 1985.
[8] J.-Y. Audibert and S. Bubeck, “Best arm identification in multi-armed bandits,” in COLT 2010 – 23rd Conference on Learning Theory, 2010.
[9] W. Whitt, “Heavy traffic limit theorems for queues: a survey,” in Mathematical Methods in Queueing Theory. Springer, 1974, pp. 307–350.
[10] H. Kushner, Heavy Traffic Analysis of Controlled Queueing and Communication Networks. Springer Science & Business Media, 2013, vol. 47.
[11] J. Niño-Mora, “Dynamic priority allocation via restless bandit marginal productivity indices,” Top, vol. 15, no. 2, pp. 161–198, 2007.
[12] P. Jacko, “Restless bandits approach to the job scheduling problem and its extensions,” Modern Trends in Controlled Stochastic Processes: Theory and Applications, pp. 248–267, 2010.
[13] D. Cox and W. Smith, Queues. Wiley, 1961.
[14] C. Buyukkoc, P. Varaiya, and J. Walrand, “The cμ rule revisited,” Advances in Applied Probability, vol. 17, no. 1, pp. 237–238, 1985.
[15] J. A. Van Mieghem, “Dynamic scheduling with convex delay costs: The generalized cμ rule,” The Annals of Applied Probability, pp. 809–833, 1995.
[16] J. Niño-Mora, “Marginal productivity index policies for scheduling a multiclass delay-/loss-sensitive queue,” Queueing Systems, vol. 54, no. 4, pp. 281–312, 2006.
[17] C. Lott and D. Teneketzis, “On the optimality of an index rule in multichannel allocation for single-hop mobile networks with multiple service classes,” Probability in the Engineering and Informational Sciences, vol. 14, pp. 259–297, 2000.
[18] A. Salomon, J.-Y. Audibert, and I. El Alaoui, “Lower bounds and selectivity of weak-consistent policies in stochastic multi-armed bandit problem,” The Journal of Machine Learning Research, vol. 14, no. 1, pp. 187–207, 2013.
[19] R. Combes, C. Jiang, and R. Srikant, “Bandits with budgets: Regret lower bounds and optimal algorithms,” in Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems. ACM, 2015, pp. 245–257.
[20] W. R. Thompson, “On the likelihood that one unknown probability exceeds another in view of the evidence of two samples,” Biometrika, pp. 285–294, 1933.
[21] S. Bubeck, V. Perchet, and P. Rigollet, “Bounded regret in stochastic multi-armed bandits,” arXiv preprint arXiv:1302.1611, 2013.
[22] V. Perchet, P. Rigollet, S. Chassang, and E. Snowberg, “Batched bandit problems,” arXiv preprint arXiv:1505.00369, 2015.
[23] A. B. Tsybakov, Introduction to Nonparametric Estimation. Springer Science & Business Media, 2008.
[24] O. Chapelle and L. Li, “An empirical evaluation of Thompson sampling,” in Advances in Neural Information Processing Systems, 2011, pp. 2249–2257.
[25] S. L. Scott, “A modern Bayesian look at the multi-armed bandit,” Applied Stochastic Models in Business and Industry, vol. 26, no. 6, pp. 639–658, 2010.
[26] E. Kaufmann, N. Korda, and R. Munos, “Thompson sampling: An asymptotically optimal finite-time analysis,” in Algorithmic Learning Theory. Springer, 2012, pp. 199–213.
[27] D. Russo and B. Van Roy, “Learning to optimize via posterior sampling,” Mathematics of Operations Research, vol. 39, no. 4, pp. 1221–1243, 2014.
Dual Space Gradient Descent for Online Learning
Trung Le, Tu Dinh Nguyen, Vu Nguyen, Dinh Phung
Centre for Pattern Recognition and Data Analytics
Deakin University, Australia
{trung.l, tu.nguyen, v.nguyen, dinh.phung}@deakin.edu.au

Abstract

One crucial goal in kernel online learning is to bound the model size. Common approaches employ budget maintenance procedures to restrict the model size using removal, projection, or merging strategies. Although projection and merging are known in the literature to be the most effective strategies, they demand extensive computation, whilst the removal strategy fails to retain the information of the removed vectors. An alternative way to address the model-size problem is to apply random features to approximate the kernel function. This allows the model to be maintained directly in the random-feature space, hence effectively resolving the curse of kernelization. However, this approach still suffers from a serious shortcoming: it needs a high-dimensional random-feature space to achieve a sufficiently accurate kernel approximation, which leads to a significant increase in computational cost. To address all of these challenges, we present in this paper the Dual Space Gradient Descent (DualSGD), a novel framework that utilizes random features as an auxiliary space to maintain information from data points removed during budget maintenance. Consequently, our approach permits the budget to be maintained in a simple, direct and elegant way while simultaneously mitigating the impact of the dimensionality issue on learning performance. We further provide a convergence analysis and conduct extensive experiments on five real-world datasets to demonstrate the predictive performance and scalability of our proposed method in comparison with state-of-the-art baselines.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Acknowledgment: This work is partially supported by the Australian Research Council under the Discovery Project DP160109394.

1 Introduction

Online learning represents a family of effective and scalable learning algorithms for incrementally building a predictive model from a sequence of data samples [1]. Unlike conventional learning algorithms, which usually require a costly procedure to retrain on the entire dataset when a new instance arrives [2], the goal of online learning is to utilize new incoming instances to improve the model, given knowledge of the correct answers to previously processed data. The seminal line of work in online learning, referred to as linear online learning [3, 4], aims to learn a linear predictor in the input space. The key limitation of this approach lies in its oversimplified assumption that a linear hyperplane can represent data which may possess nonlinear dependencies, as is common in many real-world applications. This inspires the work on kernel online learning [5, 6], which uses a linear model in a feature space to capture the nonlinearity of the input data. However, the kernel online learning approach suffers from the so-called curse of kernelization [7]: the model size grows linearly with the amount of data accumulated over time. A notable approach to address this issue is to use a budget [8, 9, 7, 10, 11]. The work in [7] combined the budgeted approach with stochastic gradient descent (SGD) [12, 13], wherein learning proceeds by SGD and a budget maintenance procedure (e.g., removal, projection, or merging) keeps the model size bounded. Although projection and merging were shown to be effective [7], their associated computational costs render them impractical for large-scale datasets. An alternative way to address the curse of kernelization is to use random features [14] to approximate a kernel function
[15, 16]. The work in [16] proposed to transform data from the input space to a random-feature space and then perform SGD in that space. However, for this approach to achieve a good kernel approximation, an excessive number of random features is required, which can lead to serious computational issues. In this paper, we propose the Dual Space Gradient Descent (DualSGD) to address the computational problems of the projection and merging strategies in the budgeted approach [8, 9, 17, 7] and the excessive number of random features in the random-feature approach [15, 16]. In particular, the proposed DualSGD utilizes the random-feature space as an auxiliary space to store the information of the vectors discarded during budget maintenance. More specifically, DualSGD uses a provision vector in the random-feature space to store the information of all removed vectors. This allows us to propose a novel budget maintenance strategy, named k-merging, which unifies the removal, projection, and merging strategies.

[Figure 1: Comparison of DualSGD with BSGD-M and FOGD on the cod-rna dataset. Left: DualSGD vs. BSGD-M as B is varied. Right: DualSGD vs. FOGD as D is varied.]

Our proposed DualSGD advances the existing work on budgeted and random-feature approaches in two ways. Firstly, since the goal of random features is to approximate the original feature space as closely as possible, the proposed k-merging of DualSGD can preserve the information of the removed vectors more effectively than existing budget maintenance strategies. For example, compared with budgeted SGD using the merging strategy (BSGD-M) [7], as shown in Fig. 1 (left), DualSGD with a small budget size (B = 5) achieves a significantly better mistake rate than BSGD-M with an 80-fold larger budget size (B = 400).
Secondly, since the core part of the model (i.e., the vectors in the support set) is stored in the feature space and the auxiliary part (i.e., the removed vectors) is stored in the random-feature space, DualSGD significantly reduces the influence of the number of random features on learning performance. For example, compared with Fourier Online Gradient Descent (FOGD) [16], as shown in Fig. 1 (right), DualSGD with a small number of random features (D = 20) achieves a mistake rate comparable to that of FOGD with a 40-fold larger number of random features (D = 800), and DualSGD with a medium number of random features (D = 100) achieves a predictive performance not reached by FOGD (a detailed comparison of the computational complexities of DualSGD and FOGD can be found in Section 3 of the supplementary material). To provide a theoretical foundation for DualSGD, we develop an extensive convergence analysis for a wide spectrum of loss functions, including Hinge, Logistic, and smooth Hinge [18] for classification, and ℓ₁ and ε-insensitive for regression. We conduct extensive experiments on five real-world datasets to compare the proposed method with state-of-the-art online learning methods. The experimental results show that our proposed DualSGD achieves the best predictive results in almost all cases, whilst its execution time is much faster than that of the baselines.

2 Dual Space Gradient Descent for Online Learning

2.1 Problem Setting

We propose to solve the following optimization problem: min_w J(w), whose objective function is defined for the online setting as

J(w) ≡ (λ/2) ∥w∥² + E_{(x,y)∼p_{X,Y}}[l(w, x, y)],   (1)

where x ∈ R^M is the data vector, y the label, p_{X,Y} denotes the joint distribution over X × Y with data domain X and label domain Y, l(w, x, y) is a convex loss function with parameters w, and λ ≥ 0 is a regularization parameter.
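As a concrete reading of Eq. (1), the sketch below evaluates the regularized objective for a linear model (taking Φ as the identity) with the Hinge loss on a finite sample, the empirical mean standing in for the expectation over p_{X,Y}; the function name and inputs are our own illustrative assumptions.

```python
import numpy as np

def hinge_objective(w, X, y, lam):
    # Empirical version of Eq. (1): (lam/2)||w||^2 + mean hinge loss,
    # with Phi taken as the identity map for illustration.
    margins = 1.0 - y * (X @ w)
    return 0.5 * lam * (w @ w) + np.maximum(margins, 0.0).mean()
```

For example, with w = (1, 0), two correctly classified points at margin 2, and λ = 0.1, the hinge term vanishes and only the regularizer 0.05 remains.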
A kernelization of the loss function introduces a nonlinear function Φ that maps x from the input space to a feature space. A classic example is the Hinge loss: l(w, x, y) = max(0, 1 − y w⊤Φ(x)).

2.2 The Key Ideas of the Proposed DualSGD

Our key motivations come from the shortcomings of the three current budget maintenance strategies: removal, projection and merging. The removal strategy fails to retain information about the removed vectors. Although the projection strategy can overcome this problem, it requires a costly procedure to compute the inverse of a B × B matrix, where B is the budget size, typically with cubic complexity in B. On the other hand, the merging strategy needs to estimate the preimage of a vector in the feature space, leading to significant information loss and requiring extensive computation. Our aim is to find an approach that simultaneously retains the information of the removed vectors accurately and performs budget maintenance efficiently. To this end, we introduce k-merging, a new budget maintenance approach that unifies the three aforementioned strategies under the following interpretation. For k = 1, the proposed k-merging can be seen as a hybrid of removal and projection. For k = 2, it can be regarded as standard merging. Moreover, our proposed k-merging strategy enables an arbitrary number of vectors to be merged conveniently. Technically, we employ a vector in the random-feature space [14], called the provision vector w̃, to retain the information of all removed vectors. When k-merging is invoked, the k most redundant vectors, say x_{i₁}, ..., x_{i_k}, are selected and we increment w̃ as

w̃ = w̃ + Σ_{j=1}^k α_{i_j} z(x_{i_j}),

where α_{i_j} is the coefficient of the support vector associated with x_{i_j}, and z(x_{i_j}) denotes the mapping of x_{i_j} from the input space to the random-feature space.
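The map z(·) can be instantiated with the random Fourier features of [14]. A sketch for the RBF kernel k(x, x′) = exp(−γ∥x − x′∥²) follows; the bandwidth γ, dimension D and function name are assumptions for illustration, not values from the paper.

```python
import numpy as np

def rff_map(X, D, gamma, rng):
    # Random Fourier feature map z(x) for the RBF kernel
    # k(x, x') = exp(-gamma * ||x - x'||^2): sample frequencies from the
    # kernel's spectral density N(0, 2*gamma*I) and random phases, so that
    # z(x) @ z(x') approximates k(x, x').
    M = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(M, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

The approximation error of the inner products decays like O(1/√D), which is why a plain random-feature learner (FOGD) needs a large D, whereas DualSGD only routes the removed vectors through this space.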
The advantage of using the random-feature space as an auxiliary space is twofold: 1) the information loss is negligible, since the random-feature space is designed to approximate the original feature space; and 2) the operations in the budget maintenance strategy are direct and economical.

Algorithm 1: The learning of Dual Space Gradient Descent.
Input: kernel K, regularization parameter λ, budget B, random-feature dimension D.
1: ŵ₁ = 0; w̃₁ = 0; b = 0; I₀ = ∅
2: for t = 1, ..., T do
3:   (x_t, y_t) ∼ p_{X,Y}
4:   ŵ_{t+1} = ((t−1)/t) ŵ_t; w̃_{t+1} = ((t−1)/t) w̃_t
5:   if ∇_o l(y_t, o_t^h) ≠ 0 then
6:     I_t = I_{t−1} ∪ {t}
7:     ŵ_{t+1} = ŵ_{t+1} − (1/(λt)) ∇_o l(y_t, o_t^h) Φ(x_t)
8:     if |I_t| > B then
9:       invoke k-merging(I_t, ŵ_{t+1}, w̃_{t+1})
10:    end if
11:  end if
12: end for
Output: w^h_{T+1} = ŵ_{T+1} ⊕ w̃_{T+1}

2.3 The Proposed Algorithm

In our proposed DualSGD, the model is distributed over two spaces, the feature and random-feature spaces, with a hybrid vector w_t^h defined as w_t^h ≜ ŵ_t ⊕ w̃_t. Here the kernel part ŵ_t and the provision part w̃_t lie in two different spaces, so for convenience we define an abstract operator ⊕ to allow addition between them; the decision function then crucially depends on both the kernel and provision parts:

⟨w_t^h, x⟩ ≜ ⟨(ŵ_t ⊕ w̃_t), x⟩ ≜ ŵ_t⊤Φ(x) + w̃_t⊤z(x).

We employ a single vector w̃_t in the random-feature space to preserve the information of the discarded vectors, i.e., those outside I_t, the set of indices of all support vectors in ŵ_t. When an instance arrives and the model size exceeds the budget B, the budget maintenance procedure k-merging(I_t, ŵ_{t+1}, w̃_{t+1}) is invoked to adjust ŵ_{t+1} and w̃_{t+1} accordingly. Our proposed DualSGD is summarized in Algorithm 1, where we note that l(y, o) is a representation of the convex loss function in terms of the variable o (e.g., the Hinge loss, given by l(y, o) = max(0, 1 − yo)), and o_t^h = ŵ_t⊤Φ(x_t) + w̃_t⊤z(x_t) is the hybrid output value.
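A minimal Python rendering of this loop may clarify how the kernel and provision parts interact. The sketch below fixes k = 1, uses the Hinge loss with an RBF kernel and random Fourier features, and all names, the toy interface, and hyperparameter values are our own assumptions, not the authors' implementation.

```python
import numpy as np

def dual_sgd_hinge(stream, lam=0.01, B=50, D=100, gamma=1.0, seed=0):
    # Illustrative sketch of Algorithm 1 with k = 1: support vectors live in
    # the kernel part; when the budget B is exceeded, the support vector with
    # the smallest |alpha| is merged into the provision vector in the
    # random-feature space.
    rng = np.random.default_rng(seed)
    W = b = None            # RFF parameters, lazily initialised from input dim
    sv, alpha = [], []      # support vectors and coefficients (kernel part)
    w_tilde = np.zeros(D)   # provision vector (random-feature part)
    mistakes = 0

    def z(x):
        return np.sqrt(2.0 / D) * np.cos(x @ W + b)

    for t, (x, y) in enumerate(stream, start=1):
        if W is None:
            W = rng.normal(0.0, np.sqrt(2 * gamma), size=(len(x), D))
            b = rng.uniform(0.0, 2 * np.pi, size=D)
        # hybrid output o_t^h: kernel part + provision part
        o = sum(a * np.exp(-gamma * np.dot(x - u, x - u)) for a, u in zip(alpha, sv))
        o += w_tilde @ z(x)
        mistakes += int(np.sign(o) != y)
        # step 4: shrink both parts; steps 5-7: hinge subgradient update
        alpha = [a * (t - 1) / t for a in alpha]
        w_tilde *= (t - 1) / t
        if y * o < 1:                       # non-zero hinge subgradient (-y)
            sv.append(x)
            alpha.append(y / (lam * t))
            if len(sv) > B:                 # 1-merging budget maintenance
                j = int(np.argmin(np.abs(alpha)))
                w_tilde += alpha[j] * z(sv[j])
                del sv[j], alpha[j]
    return mistakes, len(sv)
```

On a linearly separable toy stream this keeps at most B support vectors while the merged mass accumulates in w_tilde, which is the point of the construction.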
2.4 k-merging Budget Maintenance Strategy

Crucial to our proposed DualSGD in Algorithm 1 is the k-merging routine, which allows efficient merging of k arbitrary vectors. We summarize the key steps of k-merging in Algorithm 2. In particular, we first select the k support vectors whose coefficients (α_{i₁}, α_{i₂}, ..., α_{i_k}) have the smallest absolute values (cf. line 1). We then approximate them by z(x_{i₁}), ..., z(x_{i_k}) and merge them by updating the provision vector as w̃_{t+1} = w̃_{t+1} + Σ_{j=1}^k α_{i_j} z(x_{i_j}) (cf. line 2). Finally, we remove the chosen vectors from the kernel part ŵ_{t+1} (cf. line 2).

2.5 Convergence Analysis

In this section, we present the convergence analysis for our proposed algorithm. We first prove that, with high probability, f_t^h(x) (the hybrid decision function, cf. Eq. (3)) is a good approximation of f_t(x) for all x and t (cf. Theorem 1). Let w* be the optimal solution of the optimization problem defined in Eq. (1): w* = argmin_w J(w). We then prove that if {w_t}_{t=1}^∞ is constructed as in Eq. (2), this sequence rapidly converges to w*, i.e., f_t(x) = w_t⊤Φ(x) rapidly approaches the optimal decision function (cf. Theorems 2, 3). Therefore, the decision function f_t^h(x) also rapidly approaches the optimal decision function. Our analysis can be generalized to the general k-merging strategy, but for comprehensibility we present the analysis for the 1-merging case (i.e., k = 1). We assume that the loss function used in the analysis satisfies the condition |∇_o l(y, o)| ≤ A for all y, o, where A is a positive constant. A wide spectrum of loss functions, including Hinge, Logistic, smooth Hinge [18], ℓ₁ and ε-insensitive, satisfies this condition and is hence appropriate for this convergence analysis. We further assume that ∥Φ(x)∥ = K(x, x)^{1/2} = 1 for all x. Let β_t be a binary random variable that indicates whether the budget maintenance procedure is performed at iteration t (i.e., the event ∇_o l(y_t, o_t^h) ≠ 0).
We assume that if β_t = 1, the vector Φ(x_{i_t}) is selected to move to the random-feature space. Without loss of generality, we assume i_t = t, since we can arrange the data instances to realize this. We define

g_t^h = λ w_t + ∇_o l(y_t, f_t^h(x_t)) Φ(x_t)  and  w_{t+1} = w_t − η_t g_t^h,   (2)

f_t(x) = w_t⊤Φ(x) = Σ_{j=1}^t α_j K(x_j, x),

f_t^h(x) = ŵ_t⊤Φ(x_t) + w̃_t⊤z(x_t) = Σ_{j=1}^t α_j (1 − β_j) K(x_j, x) + Σ_{j=1}^t α_j β_j K̃(x_j, x),   (3)

where K̃(x, x′) = z(x)⊤z(x′) is the approximate kernel induced by the random-feature space, and the learning rate is η_t = 1/(λt). Theorem 1 establishes that f_t^h(·) is a good approximation of f_t(x) with high probability, followed by Theorem 2, which establishes the bound on the regret.

Algorithm 2: k-merging Budget Maintenance Procedure.
procedure k-merging(I_t, ŵ_{t+1}, w̃_{t+1})   // assume ŵ_{t+1} = Σ_{j∈I_t} α_j Φ(x_j)
1: (i₁, ..., i_k) = k-argmin_{j∈I_t} |α_j|; I_t = I_t \ {i₁, ..., i_k}
2: w̃_{t+1} = w̃_{t+1} + Σ_{j=1}^k α_{i_j} z(x_{i_j}); ŵ_{t+1} = ŵ_{t+1} − Σ_{j=1}^k α_{i_j} Φ(x_{i_j})
end procedure

Theorem 1. With probability at least 1 − θ = 1 − 2⁸ (σ_μ A d_X / (λε)) exp(−Dλ²ε² / (4(M+2)A²)), where M is the dimension of the input space, D is the dimension of the random-feature space, d_X denotes the diameter of the compact set X, and the constant σ_μ is defined as in [14], we have:

i) |f_t(x) − f_t^h(x)| ≤ ε for all t > 0 and x ∈ X;

ii) E[|f_t(x) − f_t^h(x)|] ≤ A⁻¹λε Σ_{j=1}^t E[α_j²]^{1/2} μ_j^{1/2}, where μ_j = p(β_j = 1).

Theorem 1 shows that with high probability f_t^h(x) approximates f_t(x) to ε-precision. It also indicates that, to decrease the gap |f_t(x) − f_t^h(x)| when performing budget maintenance, we should move the vectors whose coefficients have the smallest absolute values to the random-feature space.

Theorem 2. For all T,

E[J(w̄_T)] − J(w*) ≤ E[(1/T) Σ_{t=1}^T J(w_t) − J(w*)] ≤ 8A²(log T + 1)/(λT) + (1/T) W Σ_{t=1}^T E[M_t²]^{1/2},

where w̄_T = (1/T) Σ_{t=1}^T w_t, M_t = ∇_o l(y_t, f_t(x_t)) − ∇_o l(y_t, f_t^h(x_t)), and W = 2A(1 + √5)λ⁻¹.
If a smooth loss function is used, we can quantify the gap in more detail: with high probability, the gap is negligible, as shown in Theorem 3.

Theorem 3. Assume that $l(y, o)$ is a $\gamma$-strongly smooth loss function. With probability at least $1 - 2^8 \left(\frac{\sigma_\mu A d_X}{\lambda \varepsilon}\right) \exp\left(-\frac{D \lambda^2 \varepsilon^2}{4(M+2) A^2}\right)$, we have
$\mathbb{E}[J(\bar{w}_T)] - J(w^\star) \le \mathbb{E}\left[\frac{1}{T} \sum_{t=1}^{T} J(w_t) - J(w^\star)\right] \le \frac{8 A^2 (\log T + 1)}{\lambda T} + \frac{1}{T} W \gamma \varepsilon \sum_{t=1}^{T} \left(\frac{\sum_{j=1}^{t} \mu_j}{t}\right)^{1/2} \le \frac{8 A^2 (\log T + 1)}{\lambda T} + W \gamma \varepsilon$

3 Experiments

In this section, we conduct comprehensive experiments to quantitatively evaluate the performance of our proposed Dual Space Gradient Descent (DualSGD) on binary classification, multiclass classification and regression tasks under online settings. Our main goal is to examine the scalability, classification and regression capabilities of DualSGDs by directly comparing them with several recent state-of-the-art online learning approaches on a number of real-world datasets with a wide range of sizes. In what follows, we present the data statistics, experimental setup, results and our observations.

3.1 Data Statistics and Experimental Setup

We use 5 datasets: ijcnn1, cod-rna, poker, year, and airlines. The datasets were purposely selected with various sizes in order to clearly expose the differences in the scalability of the models. Three of them are large-scale datasets with hundreds of thousands to millions of data points (year: 515,345; poker: 1,025,010; and airlines: 5,929,413), whilst the rest are medium-sized datasets (ijcnn1: 141,691 and cod-rna: 331,152). These datasets can be downloaded from the LIBSVM1 and UCI2 websites, except airlines, which was obtained from the American Statistical Association (ASA3). For the airlines dataset, our aim is to predict whether a flight will be delayed or not under the binary classification setting, and how long (in minutes) the flight will be delayed in terms of departure time under the regression setting.
A flight is considered delayed if its delay time is above 15 minutes, and non-delayed otherwise. Following the procedure in [19], we extract 8 features for flights in the year 2008, and then normalize them into the range [0, 1]. For each dataset, we perform 10 runs of each algorithm with different random permutations of the training data samples. In each run, the model is trained in a single pass through the data. Its prediction result and time spent are then reported by taking the average, together with the standard deviation, over all runs. For comparison, we employ 11 state-of-the-art online kernel learning methods: perceptron [5], online gradient descent (OGD) [6], randomized budget perceptron (RBP) [9], forgetron [8], projectron, projectron++ [20], budgeted passive-aggressive simple (BPAS) [17], budgeted SGD using the merging strategy (BSGD-M) [7], bounded OGD (BOGD) [21], Fourier OGD (FOGD) and Nystrom OGD (NOGD) [16]. Their implementations are published as part of the LIBSVM, BudgetedSVM4 and LSOKL5 toolboxes. We use a Windows machine with a 3.46GHz Xeon processor and 96GB RAM to conduct our experiments.
1https://www.csie.ntu.edu.tw/∼cjlin/libsvmtools/datasets/
2https://archive.ics.uci.edu/ml/datasets.html
3http://stat-computing.org/dataexpo/2009/
4http://www.dabi.temple.edu/budgetedsvm/index.html
5http://lsokl.stevenhoi.com/

3.2 Model Evaluation on the Effect of Hyperparameters

In the first experiment, we investigate the effect of the hyperparameters, i.e., the budget size B, merging size k and random feature dimension D (cf. Section 2), on the behavior of DualSGD. In particular, we conduct an initial analysis to quantitatively evaluate the sensitivity of these hyperparameters and their impact on predictive accuracy and wall-clock time. This analysis provides an approach to find the best setting of the hyperparameters. Here the DualSGD with Hinge loss is trained on the cod-rna dataset under the online classification setting.
Figure 2: The effect of the k-merging size on the mistake rate and running time (left). The effect of the budget size B and random feature dimension D on the mistake rate (middle) and running time (right).

First we set B = 200, D = 100, and vary k in the range {1, 2, 10, 20, 50, 100, 150}. For each setting, we run our models and record the average mistake rates and running time, as shown in Fig. 2 (left). The pattern is that the classification error increases for larger k whilst the wall-clock time decreases. This represents the trade-off, controlled by the number of merged vectors, between the model's discriminative performance and its computational complexity. In this analysis, we choose k = 20 to balance performance and computational cost. Fixing k = 20, we vary B and D over 4 values doubling from 50 to 400 and from 100 to 800, respectively, to evaluate prediction performance and execution time. Fig. 2 depicts the average mistake rates (middle) and running time in seconds (right) as heat maps over these values. These visualizations indicate that higher B and D produce better classification results, but hurt the training speed of the model. We found that increasing the dimension of the random feature space from 100 to 800 at B = 50 significantly reduces the mistake rate by 25%, while increasing the wall-clock time by 76%. The same pattern, with less effect, is observed when increasing the budget size B from 50 to 400 at D = 100 (the mistake rate decreases by 1.5%, time increases by 54%). For a good trade-off between classification performance and computational cost, we select B = 100 and D = 200, which achieves a fairly comparable classification result and running time.

3.3 Online Classification

We now examine the performance of DualSGDs on the online classification task. We use four datasets: cod-rna, ijcnn1, poker and airlines (delayed and non-delayed labels).
We create two versions of our approach: DualSGD with Hinge loss (DualSGD-Hinge) and DualSGD with Logistic loss (DualSGD-Logit). It is worth mentioning that the Hinge loss is not smooth: its gradient is undefined at the point where the classification confidence $yf(x) = 1$. Following the subgradient definition, in our experiments we compute the gradient under the condition $yf(x) < 1$, and set it to 0 otherwise.

Hyperparameters setting. There are a number of hyperparameters across all methods. Each method requires a different set of hyperparameters, e.g., the regularization parameters (λ in DualSGD), the learning rates (η in FOGD and NOGD), and the RBF kernel width (γ in all methods). Thus, for a fair comparison, these hyperparameters are specified using cross-validation on a subset of data. In particular, we further partition the training set into 80% for learning and 20% for validation. For large-scale datasets, we use only 1% of the dataset, so that the search can finish within an acceptable time budget. The hyperparameters are varied over certain ranges and selected for the best performance on the validation set. The ranges are: $C \in \{2^{-5}, 2^{-3}, \ldots, 2^{15}\}$, $\lambda \in \{2^{-4}/N, 2^{-2}/N, \ldots, 2^{16}/N\}$, $\gamma \in \{2^{-8}, 2^{-4}, 2^{-2}, 2^{0}, 2^{2}, 2^{4}, 2^{8}\}$, and $\eta \in \{2^{-4}, 2^{-3}, \ldots, 2^{-1}, 2^{1}, 2^{2}, \ldots, 2^{4}\}$, where N is the number of data points. The budget size B, merging size k and random feature dimension D of DualSGD are selected following the approach described in Section 3.2. For the budget size ˆB in NOGD and the Pegasos algorithm, and the feature dimension ˆD in FOGD, we use, for each dataset, values identical to those used in Section 7.1.1 of [16].

Table 1: Mistake rate (%) and execution time (seconds). The notation [k | B | D | ˆB | ˆD] denotes the merging size k, the budget sizes B and ˆB of DualSGD-based models and other budgeted algorithms, and the numbers of random features D and ˆD of DualSGD and FOGD, respectively.
Dataset                    cod-rna                          ijcnn1
[k | B | D | ˆB | ˆD]      [20 | 100 | 200 | 400 | 1,600]   [20 | 100 | 200 | 1,000 | 4,000]
Algorithm        Mistake Rate      Time       Mistake Rate      Time
Perceptron        9.79±0.04     1,393.56      12.85±0.09      727.90
OGD               7.81±0.03     2,804.01      10.39±0.06      960.44
RBP              26.02±0.39        85.84      15.54±0.21       54.29
Forgetron        28.56±2.22       102.64      16.17±0.26       60.54
Projectron       11.16±3.61        97.38      12.98±0.23       59.37
Projectron++     17.97±15.60    1,799.93       9.97±0.09      749.70
BPAS             11.97±0.09        92.08      10.68±0.05       55.44
BSGD-M            5.33±0.04       184.58       9.14±0.18    1,562.61
BOGD             38.13±0.11       104.60      10.87±0.18       55.99
FOGD              7.15±0.03        53.45       9.41±0.03       25.93
NOGD              7.83±0.06       105.18      10.43±0.08       59.36
DualSGD-Hinge     4.92±0.25        28.29       8.35±0.20       12.12
DualSGD-Logit     4.83±0.21        31.96       8.82±0.24       13.30

Dataset                    poker                              airlines
[k | B | D | ˆB | ˆD]      [20 | 100 | 200 | 1,000 | 4,000]   [20 | 100 | 200 | 1,000 | 4,000]
Algorithm        Mistake Rate      Time       Mistake Rate      Time
FOGD             52.28±0.04       928.89      20.98±0.01    1,270.75
NOGD             44.90±0.16     4,920.33      25.56±0.01    3,553.50
DualSGD-Hinge    46.73±0.22       139.87      19.28±0.00      472.21
DualSGD-Logit    46.65±0.14       133.50      19.28±0.00      523.23

Results. Table 1 reports the average classification results and execution times after the methods see all data samples. Note that for the two largest datasets (poker, airlines), which consist of millions of data points, we only include the fast algorithms FOGD, NOGD and DualSGDs. The other methods would exceed the time limit, which we set to two hours, when running on such data, as they suffer from serious computational issues. From these results, we can draw the key observations below. The budgeted online approaches show their effectiveness with substantially faster computation than the ones without budgets. More specifically, the execution time of our proposed models is several orders of magnitude (100 times) lower than that of regular online algorithms (e.g., 28.29 seconds compared with 2,804 seconds on the cod-rna dataset).
Moreover, our models are twice as fast as the recent fast algorithm FOGD on the cod-rna and ijcnn1 datasets, and approximately eight and three times faster on the very large poker and airlines data, respectively. This is because the DualSGDs maintain a sparse budget of support vectors and a low-dimensional random feature space, whose size and dimensionality are 10 times and 20 times smaller than those of the other methods. Second, in terms of classification, DualSGD-Hinge and DualSGD-Logit outperform the other methods on almost all datasets except the poker data. In particular, the DualSGD-based methods achieve the best mistake rates 4.83±0.21, 8.35±0.20 and 19.28±0.00 for the cod-rna, ijcnn1 and airlines data, which are, respectively, 32.4%, 11.3% and 8.8% lower than the error rates of the second-best models, the two recent approaches FOGD and NOGD. On the poker dataset, our methods obtain results fairly comparable with those of NOGD, but still surpass FOGD by a large margin. The reason is that DualSGD uses a dual space: a kernel space containing core support vectors, and a random feature space keeping the projections of the core vectors that are removed from the budget in kernel space. This minimizes the information loss when the model performs budget maintenance. Finally, the two versions of DualSGD demonstrate similar discriminative performance and computational complexity, with DualSGD-Logit slightly slower due to the additional exponential operators. All of these observations validate the effectiveness and efficiency of our proposed technique. Thus, we believe that our approximation machinery is a promising technique for building scalable online kernel learning algorithms for large-scale classification tasks.

3.4 Online Regression

The last experiment addresses the online regression problem to evaluate the capabilities of our approach with the two proposed loss functions: the ℓ1 and ε-insensitive losses. Incorporating these loss functions creates two versions: DualSGD-ε and DualSGD-ℓ1.
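For concreteness, the (sub)gradients $\nabla_o l(y, o)$ of the losses used by the DualSGD variants can be sketched as below. This is our own illustration, not the authors' code: it shows the bound $|\nabla_o l(y, o)| \le A$ from Section 2.5 (here A = 1), using the Hinge subgradient convention from Section 3.3.

```python
import numpy as np

def hinge_grad(y, o):
    # subgradient convention from the text: -y when y*o < 1, else 0
    return -y if y * o < 1 else 0.0

def logistic_grad(y, o):
    return -y / (1.0 + np.exp(y * o))

def l1_grad(y, o):                        # l(y, o) = |y - o|
    return float(np.sign(o - y))

def eps_insensitive_grad(y, o, eps=0.1):  # l(y, o) = max(|y - o| - eps, 0)
    return float(np.sign(o - y)) if abs(y - o) > eps else 0.0

# all four (sub)gradients are bounded by A = 1, as the analysis requires
for g in (hinge_grad(1, 0.5), logistic_grad(1, 0.5),
          l1_grad(2.0, 1.0), eps_insensitive_grad(2.0, 1.95)):
    assert abs(g) <= 1.0
```

The boundedness is what allows one analysis to cover the classification (Hinge, logistic) and regression (ℓ1, ε-insensitive) variants alike.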
We use two datasets, year and airlines (delay minutes), and six baselines: RBP, Forgetron, Projectron, BOGD, FOGD and NOGD.

Table 2: Root mean squared error (RMSE) and execution time (seconds) of the 6 baselines and 2 versions of our DualSGDs. The notation [k | B | D | ˆB | ˆD] has the same meaning as in Table 1.

Dataset                    year                             airlines
[k | B | D | ˆB | ˆD]      [20 | 100 | 200 | 400 | 1,600]   [20 | 100 | 200 | 1,000 | 2,000]
Algorithm        RMSE          Time        RMSE           Time
RBP              0.19±0.00     605.42      36.51±0.00     3,418.89
Forgetron        0.19±0.00     904.09      36.51±0.00     5,774.47
Projectron       0.14±0.00     605.19      36.14±0.00     3,834.19
BOGD             0.20±0.00     596.10      35.73±0.00     3,058.96
FOGD             0.16±0.00      76.70      53.16±0.01       646.15
NOGD             0.14±0.00     607.37      34.74±0.00     3,324.38
DualSGD-ε        0.13±0.00      48.01      36.20±0.01       457.30
DualSGD-ℓ1       0.12±0.00      47.29      36.20±0.01       443.39

Hyperparameters setting. We adopt the same hyperparameter search procedure as for the online classification task in Section 3.3. Furthermore, for the budget size ˆB and the feature dimension ˆD in FOGD, we follow the same strategy used in Section 7.1.1 of [16]. More specifically, these hyperparameters are set separately for different datasets, as reported in Table 2. They are chosen to be roughly proportional to the number of support vectors produced by the batch SVM algorithm in LIBSVM running on a small subset. The aim is to achieve competitive accuracy using a relatively larger budget size for tackling more challenging regression tasks.

Results. Table 2 reports the average regression errors and computation costs after the methods see all data samples. From these results, we can draw some observations below. Our proposed models enjoy a significant advantage in computational efficiency whilst achieving better (on the year dataset) or competitive (on the airlines dataset) regression results compared with the other methods. DualSGD, again, secures the best performance in terms of model sparsity.
Among the baselines, FOGD is the fastest, so its time costs are the natural comparison for those of our methods, but its regression performance is worse. The remaining algorithms usually obtain better results, but at the cost of scalability. Finally, comparing the two DualSGD variants, both models demonstrate similar regression capabilities and computational complexities, with DualSGD-ℓ1 slightly faster due to its simpler operator for computing the gradient. Besides, its regression scores are also lower than or equal to those of DualSGD-ε. These observations once again verify the effectiveness and efficiency of our proposed techniques. Therefore DualSGD is also a promising machine for performing online regression on large-scale datasets.

4 Conclusion

In this paper, we have proposed Dual Space Gradient Descent (DualSGD), which overcomes the computational problems of the projection and merging strategies in Budgeted SGD (BSGD) and the excessive number of random features in Fourier Online Gradient Descent (FOGD). More specifically, we have employed random features to form an auxiliary space for storing the vectors removed during the budget maintenance process. This makes the budget maintenance operations simple and convenient. We have further presented a convergence analysis that applies to a wide spectrum of loss functions. Finally, we have conducted extensive experiments on several benchmark datasets to demonstrate the efficiency and accuracy of the proposed method.

References [1] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958. [2] C.-C. Chang and C.-J. Lin. Libsvm: A library for support vector machines. ACM Trans. Intell. Syst. Technol., 2(3):27:1–27:27, May 2011. [3] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive-aggressive algorithms. J. Mach. Learn.
Res., 7:551–585, 2006. [4] M. Dredze, K. Crammer, and F. Pereira. Confidence-weighted linear classification. In International Conference on Machine Learning 2008, pages 264–271, 2008. [5] Y. Freund and R. E. Schapire. Large margin classification using the perceptron algorithm. Mach. Learn., 37(3):277–296, December 1999. [6] J. Kivinen, A. J. Smola, and R. C. Williamson. Online Learning with Kernels. IEEE Transactions on Signal Processing, 52:2165–2176, August 2004. [7] Z. Wang, K. Crammer, and S. Vucetic. Breaking the curse of kernelization: Budgeted stochastic gradient descent for large-scale svm training. J. Mach. Learn. Res., 13(1):3103–3131, 2012. [8] O. Dekel, S. Shalev-Shwartz, and Y. Singer. The forgetron: A kernel-based perceptron on a fixed budget. In Advances in Neural Information Processing Systems, pages 259–266, 2005. [9] G. Cavallanti, N. Cesa-Bianchi, and C. Gentile. Tracking the best hyperplane with a simple budget perceptron. Machine Learning, 69(2-3):143–167, 2007. [10] T. Le, V. Nguyen, T. D. Nguyen, and Dinh Phung. Nonparametric budgeted stochastic gradient descent. In The 19th International Conference on Artificial Intelligence and Statistics, May 2016. [11] T. Le, P. Duong, M. Dinh, T. D. Nguyen, V. Nguyen, and D. Phung. Budgeted semi-supervised support vector machine. In The 32th Conference on Uncertainty in Artificial Intelligence, June 2016. [12] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400–407, 1951. [13] S. Shalev-shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for svm. In ICML 2007, pages 807–814, 2007. [14] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in Neural Infomration Processing Systems, 2007. [15] L. Ming, W. Shifeng, and Z. Changshui. On the sample complexity of random fourier features for online learning: How many random fourier features do we need? ACM Trans. Knowl. Discov. 
Data, 8(3):13:1–13:19, June 2014. [16] J. Lu, S. C. H. Hoi, J. Wang, P. Zhao, and Z.-Y. Liu. Large scale online kernel learning. J. Mach. Learn. Res., 2015. [17] Z. Wang and S. Vucetic. Online passive-aggressive algorithms on a budget. In AISTATS, volume 9, pages 908–915, 2010. [18] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss. Journal of Machine Learning Research, 14(1):567–599, 2013. [19] J. Hensman, N. Fusi, and N. D. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence, pages 282–290, 2013. [20] F. Orabona, J. Keshet, and B. Caputo. Bounded kernel-based online learning. J. Mach. Learn. Res., 10:2643–2666, December 2009. [21] P. Zhao, J. Wang, P. Wu, R. Jin, and S. C. H. Hoi. Fast bounded online gradient descent algorithms for scalable kernel-based online learning. CoRR, 2012.
Asynchronous Parallel Greedy Coordinate Descent
Yang You ⇧, + XiangRu Lian†, + Ji Liu † Hsiang-Fu Yu ‡ Inderjit S. Dhillon ‡ James Demmel ⇧ Cho-Jui Hsieh ⇤
+ equally contributed ⇤University of California, Davis † University of Rochester ‡ University of Texas, Austin ⇧University of California, Berkeley
youyang@cs.berkeley.edu, xiangru@yandex.com, jliu@cs.rochester.edu {rofuyu,inderjit}@cs.utexas.edu, demmel@eecs.berkeley.edu chohsieh@cs.ucdavis.edu

Abstract

In this paper, we propose and study an Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) algorithm for minimizing a smooth function with bounded constraints. At each iteration, workers asynchronously conduct greedy coordinate descent updates on a block of variables. In the first part of the paper, we analyze the theoretical behavior of Asy-GCD and prove a linear convergence rate. In the second part, we develop an efficient kernel SVM solver based on Asy-GCD in the shared memory multi-core setting. Since our algorithm is fully asynchronous—each core does not need to idle and wait for the other cores—the resulting algorithm enjoys good speedup and outperforms existing multi-core kernel SVM solvers including asynchronous stochastic coordinate descent and multi-core LIBSVM.

1 Introduction

Asynchronous parallel optimization has recently become a popular way to speed up machine learning algorithms using multiple processors. The key idea of asynchronous parallel optimization is to allow machines to work independently without waiting for synchronization points. It has many successful applications, including linear SVM [13, 19], deep neural networks [7, 15], matrix completion [19, 31] and linear programming [26], and its theoretical behavior has been deeply studied in the past few years [1, 9, 16].
The most widely used asynchronous optimization algorithms are the stochastic gradient method (SG) [7, 9, 19] and coordinate descent (CD) [1, 13, 16], where the workers keep selecting a sample or a variable randomly and conduct the corresponding update asynchronously. Although these stochastic algorithms have been studied deeply, in some important machine learning problems a "greedy" approach can achieve much faster convergence. A very famous example is greedy coordinate descent: instead of randomly choosing a variable, at each iteration the algorithm selects the most important variable to update. If this selection step can be implemented efficiently, greedy coordinate descent can often make bigger progress than stochastic coordinate descent, leading to faster convergence. For example, the decomposition method (a variant of greedy coordinate descent) is widely known as the best solver for kernel SVM [14, 21], and is implemented in LIBSVM and SVMLight. Other successful applications can be found in [8, 11, 29]. In this paper, we study an asynchronous greedy coordinate descent algorithm framework. The variables are partitioned into subsets, and each worker asynchronously conducts greedy coordinate descent in one of the blocks. To our knowledge, this is the first paper to present a theoretical analysis or practical applications of this asynchronous parallel algorithm. In the first part of the paper, we formally define the asynchronous greedy coordinate descent procedure, and prove a linear convergence rate under mild assumptions. In the second part of the paper, we discuss how to apply this algorithm to solve the kernel SVM problem on multi-core machines. Our algorithm achieves linear speedup with the number of cores, and performs better than other multi-core SVM solvers. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. The rest of the paper is outlined as follows. The related work is discussed in Section 2.
We propose the asynchronous greedy coordinate descent algorithm in Section 3 and derive the convergence rate in the same section. In Section 4 we give the details of how to apply this algorithm to training kernel SVM, and the experimental comparisons are presented in Section 5.

2 Related Work

Coordinate Descent. Coordinate descent (CD) has been extensively studied in the optimization community [2], and has become widely used in machine learning. At each iteration, only one variable is chosen and updated while all the other variables remain fixed. CD can be classified into stochastic coordinate descent (SCD), cyclic coordinate descent (CCD) and greedy coordinate descent (GCD) based on the variable selection scheme. In SCD, variables are chosen randomly based on some distribution, and this simple approach has been successfully applied in solving many machine learning problems [10, 25]. The theoretical analysis of SCD has been discussed in [18, 22]. Cyclic coordinate descent updates variables in a cyclic order, and has also been applied to several applications [4, 30].

Greedy Coordinate Descent (GCD). The idea of GCD is to select a good coordinate, instead of a random one, that yields a better reduction of the objective function value. This can often be measured by the magnitude of the gradient, the projected gradient (for constrained minimization) or the proximal gradient (for composite minimization). Since the variable is carefully selected, at each iteration GCD can reduce the objective function more than SCD or CCD, which leads to faster convergence in practice. Unfortunately, selecting the variable with the largest gradient is often time-consuming, so one needs to carefully organize the computation to avoid the overhead, and this is often problem dependent. The most famous application of GCD is the decomposition method [14, 21] used in kernel SVM.
By exploiting the structure of the quadratic program, selecting the variable with the largest gradient magnitude can be done without any overhead; as a result GCD has become the dominant technique for solving kernel SVM, and is implemented in LIBSVM [5] and SVMLight [14]. There are also other applications of GCD, such as non-negative matrix factorization [11] and large-scale linear SVM [29]; moreover, [8] proposed an approximate way to select variables in GCD. Recently, [20] proved an improved convergence bound for greedy coordinate descent. We focus on parallelizing the GS-r rule in this paper, but our analysis can potentially be extended to the GS-q rule mentioned in that paper. To the best of our knowledge, the only literature discussing how to parallelize GCD is [23, 24]. Thread-greedy/block-greedy coordinate descent is a synchronized parallel GCD for L1-regularized empirical risk minimization. At each iteration, each thread randomly selects a block of coordinates from a pre-partitioned block partition and proposes the best coordinate from this block along with its increment (i.e., step size). Then all the threads are synchronized to perform the actual update to the variables. However, the method can potentially diverge; indeed, [23] mentions the potential divergence when the number of threads is large. [24] establishes sub-linear convergence for this algorithm.

Asynchronous Parallel Optimization Algorithms. In a synchronous algorithm each worker conducts local updates, and at the end of each round all workers have to stop and communicate to get the new parameters. This is not efficient when scaling to large problems due to the curse of the last reducer (all the workers have to wait for the slowest one). In contrast, in asynchronous algorithms there is no synchronization point, so the throughput is much higher than in a synchronized system.
As a result, much recent work focuses on developing asynchronous parallel algorithms for machine learning, as well as providing theoretical guarantees for those algorithms [1, 7, 9, 13, 15, 16, 19, 28, 31]. In distributed systems, asynchronous algorithms are often implemented using the concept of parameter servers [7, 15, 28]. In such a setting, each machine asynchronously communicates with the server to read or write the parameters. In our experiments, we focus on another, multi-core shared-memory setting, where multiple cores in a single machine conduct updates independently and asynchronously, and the communication is done implicitly by reading/writing the parameters stored in the shared memory space. This was first discussed in [19] for the stochastic gradient method, and recently proposed for parallelizing stochastic coordinate descent [13, 17]. This is the first work proposing an asynchronous greedy coordinate descent framework. The closest work to ours is [17] on asynchronous stochastic coordinate descent (ASCD). In their algorithm, each worker asynchronously conducts the following updates: (1) randomly select a variable; (2) compute the update and write it to memory or the server. In our Asy-GCD algorithm, each worker selects the best variable to update within a block, which leads to faster convergence. We also compare with the ASCD algorithm in the experimental results for solving the kernel SVM problem.

3 Asynchronous Greedy Coordinate Descent

We consider the following constrained minimization problem:
$\min_{x \in \Omega} f(x),$  (1)
where $f$ is convex and smooth and $\Omega \subset \mathbb{R}^N$ is the constraint set, with $\Omega = \Omega_1 \times \Omega_2 \times \cdots \times \Omega_N$ and each $\Omega_i$, $i = 1, 2, \ldots, N$, a closed subinterval of the real line.
Notation: We denote by $S$ the optimal solution set for (1) and by $P_S(x)$, $P_\Omega(x)$ the Euclidean projections of $x$ onto $S$ and $\Omega$, respectively. We also denote by $f^\star$ the optimal objective function value of (1).
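Since Ω is a product of closed intervals, the Euclidean projection P_Ω reduces to coordinate-wise clipping, and the per-coordinate projected gradient used as the optimality measure in the algorithm can be computed directly from it. A minimal sketch (the bounds and gradient values are hypothetical):

```python
import numpy as np

lo, hi = np.zeros(4), np.full(4, 1.0)     # Omega = [0, 1]^4, a box

def project(x):
    # P_Omega for a product of intervals is coordinate-wise clipping
    return np.clip(x, lo, hi)

def projected_gradient(x, grad):
    # per-coordinate optimality measure: zero iff x is optimal along i
    return x - project(x - grad)

x = np.array([0.0, 0.5, 1.0, 0.2])
g = np.array([-2.0, 0.0, -1.0, 3.0])      # a hypothetical gradient
pg = projected_gradient(x, g)
print(pg)
```

Note how the measure handles the constraints: at $x_2 = 1$ the gradient pushes outside the box, so that coordinate's projected gradient is zero (it is already optimal along that direction), while at $x_0 = 0$ the gradient points into the interior and the projected gradient is nonzero.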
We propose the following Asynchronous parallel Greedy Coordinate Descent (Asy-GCD) for solving (1). Assume the $N$ coordinates are divided into $n$ non-overlapping sets $S_1 \cup \cdots \cup S_n$. Let $k$ be the global counter of the total number of updates. In Asy-GCD, each processor repeatedly runs the following GCD updates:
• Randomly select a set $S_k \in \{S_1, \ldots, S_n\}$ and pick the coordinate $i_k \in S_k$ where the projected gradient (defined in (2)) has the largest absolute value.
• Update the parameter by $x_{k+1} \leftarrow P_\Omega(x_k - \gamma \nabla_{i_k} f(x_k))$, where $\gamma$ is the step size.
Here the projected gradient, defined by
$\nabla_{i_k}^+ f(\hat{x}_k) := x_k - P_\Omega\left(x_k - \nabla_{i_k} f(\hat{x}_k)\right),$  (2)
is a measure of optimality for each variable, where $\hat{x}_k$ is the current point stored in memory, used to calculate the update. The processors run concurrently without synchronization. In order to analyze Asy-GCD, we capture the system-wise global view in Algorithm 1.

Algorithm 1 Asynchronous Parallel Greedy Coordinate Descent (Asy-GCD)
Input: $x_0 \in \Omega$, $\gamma$, $K$. Output: $x_{K+1}$
1: Initialize $k \leftarrow 0$;
2: while $k \le K$ do
3:   Choose $S_k$ from $\{S_1, \ldots, S_n\}$ with equal probability;
4:   Pick $i_k = \arg\max_{i \in S_k} \|\nabla_i^+ f(\hat{x}_k)\|$;
5:   $x_{k+1} \leftarrow P_\Omega(x_k - \gamma \nabla_{i_k} f(\hat{x}_k))$;
6:   $k \leftarrow k + 1$;
7: end while

The update in the $k$th iteration is $x_{k+1} \leftarrow P_\Omega(x_k - \gamma \nabla_{i_k} f(\hat{x}_k))$, where $i_k$ is the coordinate selected in the $k$th iteration, $\hat{x}_k$ is the point used to calculate the gradient, and $\nabla_{i_k} f(\hat{x}_k)$ is a zero vector except that the $i_k$th coordinate is set to the corresponding coordinate of the gradient of $f$ at $\hat{x}_k$. Note that $\hat{x}_k$ may not be equal to the current value of the optimization variable $x_k$ due to asynchrony. Later, in the theoretical analysis, we will assume $\hat{x}_k$ is close to $x_k$ using a bounded delay assumption. In the following we prove the convergence behavior of Asy-GCD. We first make some commonly used assumptions:
Assumption 1.
1. (Bounded Delay) There is a set $J(k) \subset \{k-1, \ldots, k-T\}$ for each iteration $k$ such that
$\hat{x}_k := x_k - \sum_{j \in J(k)} (x_{j+1} - x_j),$  (3)
where $T$ is the upper bound of the staleness.
In this “inconsistent read” model, we assume some of the latest $T$ updates are not yet written back to memory. This model is also used in previous papers [1, 17], and is more general than the “consistent read” model, which assumes $\hat{x}_k$ is equal to some previous iterate.
2. For simplicity, we assume each set $S_i$, $i \in \{1, \ldots, n\}$, has $m$ coordinates.
3. (Lipschitzian Gradient) The gradient of the objective, $\nabla f(\cdot)$, is Lipschitzian. That is to say,
$\|\nabla f(x) - \nabla f(y)\| \le L \|x - y\| \quad \forall x, \forall y.$  (4)
Under the Lipschitzian gradient assumption, we can define three more constants $L_{\mathrm{res}}$, $L_s$ and $L_{\max}$. Define $L_{\mathrm{res}}$ to be the restricted Lipschitz constant satisfying the following inequality:
$\|\nabla f(x) - \nabla f(x + \alpha e_i)\| \le L_{\mathrm{res}} |\alpha|, \quad \forall i \in \{1, 2, \ldots, N\} \text{ and } \alpha \in \mathbb{R} \text{ with } x,\, x + \alpha e_i \in \Omega.$  (5)
Let $\nabla_i$ be the operator producing a zero vector except that the $i$th coordinate is set to the $i$th coordinate of the gradient. Define $L_{(i)}$ for $i \in \{1, 2, \ldots, N\}$ as the minimum constant that satisfies
$\|\nabla_i f(x) - \nabla_i f(x + \alpha e_i)\| \le L_{(i)} |\alpha|.$  (6)
Define $L_{\max} := \max_{i \in \{1,\ldots,N\}} L_{(i)}$. It can be seen that $L_{\max} \le L_{\mathrm{res}} \le L$. Let $s$ be any positive integer bounded by $N$. Define $L_s$ to be the minimal constant satisfying the following inequality: $\forall x \in \Omega$, $\forall S \subset \{1, 2, \ldots, N\}$ with $|S| \le s$:
$\left\|\nabla f(x) - \nabla f\left(x + \sum_{i \in S} \alpha_i e_i\right)\right\| \le L_s \left\|\sum_{i \in S} \alpha_i e_i\right\|.$
4. (Global Error Bound) We assume that our objective $f$ has the following property: when $\gamma = \frac{1}{3 L_{\max}}$, there exists a constant $\kappa$ such that
$\|x - P_S(x)\| \le \kappa \|\tilde{x} - x\|, \quad \forall x \in \Omega,$  (7)
where $\tilde{x}$ is defined by $\operatorname*{argmin}_{x' \in \Omega} \left( \langle \nabla f(x), x' - x \rangle + \frac{1}{2\gamma} \|x' - x\|^2 \right)$. This is satisfied by strongly convex objectives and some weakly convex objectives. For example, it is proved in [27] that the kernel SVM problem (9) satisfies the global error bound even when the kernel is not strictly positive definite.
5. (Independence) All random variables in $\{S_k\}_{k=0,1,\cdots,K}$ in Algorithm 1 are independent of each other.
We then have the following convergence result:
Theorem 2 (Convergence). Choose $\gamma = 1/(3L_{\max})$ in Algorithm 1.
Suppose $n \ge 6$ and that the upper bound $T$ on the staleness satisfies the condition
$T(T+1) \le \frac{\sqrt{n}\, L_{\max}}{4 e L_{\mathrm{res}}}.$  (8)
Under Assumption 1, we have the following convergence rate for Algorithm 1:
$\mathbb{E}\left(f(x_k) - f^\star\right) \le \left(1 - \frac{2 L_{\max} b}{\kappa^2 L n}\right)^k \left(f(x_0) - f^\star\right),$
where $\kappa$ is the global-error-bound constant from Assumption 1 and $b$ is defined as
$b = \left(\frac{L_T^2}{18 \sqrt{n}\, L_{\max} L_{\mathrm{res}}} + 2\right)^{-1}.$
This theorem indicates a linear convergence rate under the global error bound and the condition $T^2 \in O(\sqrt{n})$. Since $T$ is usually proportional to the total number of cores involved in the computation, this result suggests that one can have linear speedup as long as the total number of cores is smaller than $O(n^{1/4})$. Note that for $n = N$ Algorithm 1 reduces to the standard asynchronous coordinate descent algorithm (ASCD), and our result is essentially consistent with the one in [17], although they use the optimally strong convexity assumption for $f(\cdot)$. Optimally strong convexity is a condition similar to the global error bound assumption [32]. Here we briefly discuss the constants involved in the convergence rate. Using Gaussian kernel SVM on covtype as a concrete example: $L_{\max} = 1$ for the Gaussian kernel, $L_{\mathrm{res}}$ is the maximum norm of the columns of the kernel matrix ($\approx 3.5$), $L$ is the 2-norm of $Q$ (21.43 for covtype), and the condition number is $\approx 1190$. As the number of samples increases, the condition number becomes the dominant term, and it also appears in the rate of serial greedy coordinate descent. In terms of speedup when increasing the number of threads ($T$), although $L_T$ may grow, it only appears in $b = \left(\frac{L_T^2}{18\sqrt{n}\, L_{\max} L_{\mathrm{res}}} + 2\right)^{-1}$, where the first term inside $b$ is usually small since there is a $\sqrt{n}$ in the denominator. Therefore, $b \approx 2^{-1}$ in most cases, which means the convergence rate does not slow down much when we increase $T$.

4 Application to Multi-core Kernel SVM

In this section, we demonstrate how to apply asynchronous parallel greedy coordinate descent to solve kernel SVM [3, 6].
We follow the conventional notation for kernel SVM, where the dual variables are $\alpha \in \mathbb{R}^{\ell}$ (instead of x in the previous section). Given training samples $\{a_i\}_{i=1}^{\ell}$ with corresponding labels $y_i \in \{+1, -1\}$, kernel SVM solves the quadratic minimization problem
$$\min_{\alpha \in \mathbb{R}^{\ell}} \; \tfrac{1}{2} \alpha^T Q \alpha - e^T \alpha =: f(\alpha) \quad \text{s.t. } 0 \le \alpha \le C, \quad (9)$$
where $Q$ is an $\ell$ by $\ell$ symmetric matrix with $Q_{ij} = y_i y_j K(a_i, a_j)$ and $K(a_i, a_j)$ is the kernel function. A widely used kernel is the Gaussian kernel, $K(a_i, a_j) = e^{-\gamma \|a_i - a_j\|^2}$. Greedy coordinate descent is the most popular way to solve kernel SVM. In the following, we first introduce greedy coordinate descent for kernel SVM, and then discuss the detailed update rule and implementation issues when applying our proposed Asy-GCD algorithm on multi-core machines.

4.1 Kernel SVM and greedy coordinate descent

When we apply coordinate descent to the dual form of kernel SVM (9), the one-variable update rule for any index i is
$$\delta_i^* = P_{[0,C]}\big(\alpha_i - \nabla_i f(\alpha) / Q_{ii}\big) - \alpha_i, \quad (10)$$
where $P_{[0,C]}$ is the projection onto the interval $[0, C]$ and the gradient is $\nabla_i f(\alpha) = (Q\alpha)_i - 1$. Note that this update rule differs slightly from (2) by setting the step size to $\gamma = 1/Q_{ii}$. For quadratic functions this step size leads to faster convergence, because the $\delta_i^*$ obtained by (10) is the closed-form solution of $\delta^* = \arg\min_{\delta} f(\alpha + \delta e_i)$, where $e_i$ is the i-th indicator vector. As in Algorithm 1, we choose the best coordinate based on the magnitude of the projected gradient, in this case
$$\nabla_i^+ f(\alpha) = \alpha_i - P_{[0,C]}\big(\alpha_i - \nabla_i f(\alpha)\big). \quad (11)$$
The success of GCD in solving kernel SVM is mainly due to the maintenance of the full gradient $g := \nabla f(\alpha) = Q\alpha - \mathbf{1}$. Consider the update rule (10): it requires $O(\ell)$ time to compute $(Q\alpha)_i$, which is also the per-update cost of stochastic coordinate descent or cyclic coordinate descent. However, in the following we show that GCD has the same time complexity per update by using the trick of maintaining g during the whole procedure.
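A minimal NumPy rendition of this procedure, combining the update (10), the selection rule (11), and the $O(\ell)$ maintenance of g, is sketched below. It is illustrative only: `gcd_dual_svm` is our own name, Q is held densely in memory, and a real solver would add kernel caching and a stopping test:

```python
import numpy as np

def gcd_dual_svm(Q, C, iters=20000):
    """Greedy coordinate descent for min 0.5 a'Qa - e'a  s.t.  0 <= a <= C."""
    ell = Q.shape[0]
    alpha = np.zeros(ell)
    g = -np.ones(ell)                          # maintained gradient g = Q alpha - 1
    for _ in range(iters):
        pg = alpha - np.clip(alpha - g, 0, C)  # projected gradient (11), O(ell)
        i = int(np.argmax(np.abs(pg)))         # greedy coordinate choice
        delta = np.clip(alpha[i] - g[i] / Q[i, i], 0, C) - alpha[i]  # eq. (10)
        g += delta * Q[:, i]                   # maintain g in O(ell)
        alpha[i] += delta
    return alpha

# Tiny demonstration on a synthetic well-conditioned problem.
rng = np.random.default_rng(1)
B = rng.standard_normal((10, 10))
Q = B @ B.T + np.eye(10)
alpha = gcd_dual_svm(Q, C=1.0)
pg = alpha - np.clip(alpha - (Q @ alpha - 1), 0, 1.0)  # optimality residual
```

At a solution of (9) the projected gradient vanishes, so `pg` being numerically zero confirms convergence.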
If g is available in memory, each element of the projected gradient (11) can be computed in O(1) time, so selecting the variable with the maximum magnitude of the projected gradient costs only $O(\ell)$ time. The single-variable update (10) takes O(1) time. After the update $\alpha_i \leftarrow \alpha_i + \delta$, g has to be updated by $g \leftarrow g + \delta q_i$, where $q_i$ is the i-th column of Q; this also costs $O(\ell)$ time. Therefore, each GCD update costs only $O(\ell)$ with this trick of maintaining g. Consequently, for solving kernel SVM, GCD is faster than SCD and CCD, since selecting the best variable to update incurs no additional cost. Note that the above discussion assumes Q can be stored in memory. Unfortunately, this is not the case for large-scale problems, because Q is an $\ell$ by $\ell$ dense matrix, where $\ell$ can be in the millions. We discuss how to deal with this issue in Section 4.3. With the trick of maintaining $g = Q\alpha - \mathbf{1}$, GCD for solving (9) can be summarized in Algorithm 2.

Algorithm 2 Greedy Coordinate Descent (GCD) for Dual Kernel SVM
1: Initialize $g = -\mathbf{1}$, $\alpha = 0$
2: For k = 1, 2, ...
3:   step 1: Pick $i = \arg\max_i |\nabla_i^+ f(\alpha)|$ using g (see eq. (11))
4:   step 2: Compute $\delta_i^*$ by eq. (10)
5:   step 3: $g \leftarrow g + \delta_i^* q_i$
6:   step 4: $\alpha_i \leftarrow \alpha_i + \delta_i^*$

4.2 Asynchronous greedy coordinate descent

Suppose we have n threads in a multi-core shared-memory machine, and the dual variables (or, equivalently, the corresponding training samples) are partitioned into the same number of blocks: $S_1 \cup S_2 \cup \dots \cup S_n = \{1, 2, \dots, \ell\}$ with $S_i \cap S_j = \emptyset$ for all $i \neq j$. We now apply the Asy-GCD algorithm to solve (9). For better memory allocation of the kernel cache (see Section 4.3), we bind each thread to a partition. The behavior of the algorithm still follows Asy-GCD because the sequence of updates is asynchronously random. The algorithm is summarized in Algorithm 3.
Algorithm 3 Asy-GCD for Dual Kernel SVM
1: Initialize $g = -\mathbf{1}$, $\alpha = 0$
2: Each thread t repeatedly performs the following updates in parallel:
3:   step 1: Pick $i = \arg\max_{i \in S_t} |\nabla_i^+ f(\alpha)|$ using g (see eq. (11))
4:   step 2: Compute $\delta_i^*$ by eq. (10)
5:   step 3: For $j = 1, 2, \dots, \ell$: $g_j \leftarrow g_j + \delta_i^* Q_{j,i}$ using an atomic update
6:   step 4: $\alpha_i \leftarrow \alpha_i + \delta_i^*$

Note that each thread reads the $\ell$-dimensional vector g in steps 1 and 2, and updates g in step 3, both in shared memory. For the reads, we do not use any atomic operations. For the writes, we maintain the correctness of g by atomic writes; otherwise some updates to g might be overwritten by others, and the algorithm would not converge to the optimal solution. Theorem 2 suggests a linear convergence rate for our algorithm, and in the experimental results we will see that it is much faster than the widely used asynchronous stochastic coordinate descent (Asy-SCD) algorithm [17].

4.3 Implementation Issues

In addition to the main algorithm, some practical issues need to be handled to make the Asy-GCD algorithm scale to large kernel SVM problems. We discuss them here.

Kernel Caching. The main difficulty in scaling kernel SVM to large datasets is the memory required to store the Q matrix, which takes $O(\ell^2)$ space. Step 2 of the GCD algorithm (see eq. (10)) requires a diagonal element of Q, which can be pre-computed and stored in memory. The main difficulty is step 3, where a full column of Q (denoted by $q_i$) is needed. If $q_i$ is in memory, this step takes only $O(\ell)$ time; if not, re-computing the column from scratch takes $O(d\ell)$ time. As a result, keeping the most important columns of Q in memory is an important implementation issue in SVM software. In LIBSVM, the user can specify the amount of memory to use for storing columns of Q.
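A per-core cache of kernel columns with least-recently-used eviction can be sketched with an ordered dictionary. This is illustrative only: `KernelColumnCache` is a hypothetical name, the Gaussian kernel is assumed, and the actual C++ implementation manages raw column buffers rather than Python objects:

```python
from collections import OrderedDict
import numpy as np

class KernelColumnCache:
    """LRU cache of Gaussian-kernel columns; one instance per core."""
    def __init__(self, X, gamma, max_columns):
        self.X, self.gamma, self.max_columns = X, gamma, max_columns
        self.cols = OrderedDict()               # insertion order tracks recency

    def column(self, i):
        if i in self.cols:                      # cache hit: O(d*ell) work avoided
            self.cols.move_to_end(i)            # mark as most recently used
            return self.cols[i]
        d2 = ((self.X - self.X[i]) ** 2).sum(axis=1)  # squared distances to sample i
        q = np.exp(-self.gamma * d2)            # recomputed kernel column
        if len(self.cols) >= self.max_columns:
            self.cols.popitem(last=False)       # evict the least recently used column
        self.cols[i] = q
        return q
```

With a cache of two columns, requesting columns 0, 1, 0 and then 2 evicts column 1, since column 0 was touched more recently.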
The columns of Q are stored in a linked list in memory, and when memory runs out, the least recently used column is evicted (the LRU technique). In our implementation, instead of sharing one LRU cache among all cores, we create an individual LRU cache for each core and place the memory used by a core in a contiguous region. As a result, remote memory accesses happen less often on NUMA systems with more than one CPU in the same computer. Using this technique, our algorithm is able to scale up on a multi-socket machine (see Figure 2).

Variable Partitioning. The theory of Asy-GCD allows any non-overlapping partition of the dual variables. However, we observe that a partition minimizing between-cluster connections often leads to faster convergence. This idea has been used in a divide-and-conquer SVM algorithm [12], and we use the same idea to obtain the partition. More specifically, we partition the data by running the k-means algorithm on a subset of 20,000 training samples to obtain cluster centers $\{c_r\}_{r=1}^{n}$, and then assign each i to the nearest center: $\pi(i) = \arg\min_r \|c_r - a_i\|$. This step can easily be parallelized and costs less than 3 seconds on all the datasets used in the experiments. Note that this k-means time is included in all our experimental comparisons.

5 Experimental Results

We conduct experiments to show that the proposed Asy-GCD method achieves good speedup when parallelizing kernel SVM on multi-core systems. We consider three datasets: ijcnn1, covtype and webspam (see Table 1 for detailed information). We follow the parameter settings in [12], where C

Table 1: Data statistics. $\ell$ is the number of training samples, d the dimensionality, and $\ell_t$ the number of testing samples.
         ℓ        ℓt       d    C   γ
ijcnn1   49,990   91,701   22   32  2
covtype  464,810  116,202  54   32  32
webspam  280,000  70,000   254  8   32

and γ are selected by cross validation.

Figure 1: Comparison of Asy-GCD with 1–20 threads on ijcnn1, covtype and webspam datasets (time vs. objective for each dataset).

All the experiments are run on the same system with 20 CPUs and 256GB memory, where the CPU has two sockets, each with 10 cores. We allocate 64GB for kernel caching for all the algorithms. In our algorithm, the 64GB is divided among the cores; for example, for Asy-GCD with 20 cores, each core has a 3.2GB kernel cache. We include the following algorithms/implementations in our comparison:
1. Asy-GCD: our proposed method, implemented in C++ with OpenMP. Note that the preprocessing time for computing the partition is included in all timing results.
2. PSCD: our implementation of the asynchronous stochastic coordinate descent approach [17] for solving kernel SVM. Instead of forming the whole kernel matrix up front (which cannot scale to the datasets we are using), we use the same kernel caching technique as Asy-GCD to scale up PSCD.
3. LIBSVM (OMP): LIBSVM provides an option to speed up the algorithm in multi-core environments using OpenMP (see http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html#f432). This approach uses multiple cores when computing a column of the kernel matrix ($q_i$ used in step 3 of Algorithm 2).
All implementations are modified from LIBSVM (e.g., they share a similar LRU cache class), so the comparison is fair. We conduct the following two sets of experiments. Note that the recently proposed DC-SVM solver [12] is currently not parallelizable; however, since it is a meta-algorithm that trains a series of SVM subproblems, our algorithm can naturally serve as a building block of DC-SVM.

5.1 Scaling with number of cores

In the first set of experiments, we test the speedup of our algorithm with a varying number of cores.
The results are presented in Figures 1 and 2. We make the following observations:
• Time vs. objective (for 1, 2, 4, 10, 20 cores). From Figure 1(a)-(c), we observe that the objective decreases faster as more CPU cores are used.
• Cores vs. speedup. From Figure 2, we observe good strong scaling as the number of threads increases. Note that our computer has two sockets, each with 10 cores, and our algorithm often achieves a 13-15x speedup. This suggests our algorithm can scale to multiple sockets in a Non-Uniform Memory Access (NUMA) system, whereas previous asynchronous parallel algorithms such as HogWild [19] and PASSCoDe [13] often struggle when scaling to multiple sockets.

5.2 Comparison with other methods

We now compare the efficiency of our proposed algorithm with other multi-core parallel kernel SVM solvers on real datasets in Figure 3. All the algorithms in this comparison use 20 cores and 64GB of memory for kernel caching. Note that LIBSVM solves the kernel SVM problem with a bias term, so its objective value is not directly comparable and is not shown in the figures. We make the following observations:

Figure 2: The scalability of Asy-GCD with up to 20 threads (cores vs. speedup on ijcnn1, webspam and covtype).

Figure 3: Comparison among multi-core kernel SVM solvers (time vs. accuracy and time vs. objective on ijcnn1, covtype and webspam). All the solvers use 20 cores and the same amount of memory.

• Our algorithm converges much faster than PSCD in terms of objective value. This is not surprising: by maintaining g (see Section 4), the greedy approach can select the best variable to update, while the stochastic approach chooses variables randomly.
In terms of accuracy, PSCD is sometimes competitive early on but converges very slowly to the best accuracy. For example, on covtype the accuracy of PSCD remains at 93% after 4000 seconds, while our algorithm reaches 95% accuracy after 1500 seconds.
• LIBSVM (OMP) is slower than our method. The main reason is that it uses multiple cores only when computing kernel values, so computational power is wasted whenever the needed kernel column ($q_i$) is already in memory.

Conclusions

In this paper, we propose an asynchronous parallel greedy coordinate descent (Asy-GCD) algorithm and prove a linear convergence rate under mild conditions. We show that our algorithm is useful for parallelizing greedy coordinate descent for kernel SVM, and the resulting solver is much faster than existing multi-core SVM solvers.

Acknowledgement

XL and JL are supported by the NSF grant CNS-1548078. HFY and ISD are supported by the NSF grants CCF-1320746, IIS-1546459 and CCF-1564000. YY and JD are supported by the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Award Number DE-SC0010200; by the U.S. Department of Energy Office of Science, Office of Advanced Scientific Computing Research under Award Numbers DE-SC0008700 and AC02-05CH11231; by DARPA Award Number HR001112-2-0016; and by Intel, Google, HP, Huawei, LGE, Nokia, NVIDIA, Oracle, Samsung, MathWorks and Cray. CJH also thanks XSEDE and NVIDIA for their support.

References
[1] H. Avron, A. Druinsky, and A. Gupta. Revisiting asynchronous linear solvers: Provable convergence rate through randomization. In IEEE International Parallel and Distributed Processing Symposium, 2014.
[2] D. P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, second edition, 1999.
[3] B. E. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In COLT, 1992.
[4] A. Canutescu and R. Dunbrack.
Cyclic coordinate descent: A robotics algorithm for protein loop closure. Protein Science, 2003.
[5] C.-C. Chang and C.-J. Lin. LIBSVM: Introduction and benchmarks. Technical report, Department of Computer Science and Information Engineering, National Taiwan University, Taipei, Taiwan, 2000.
[6] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20:273–297, 1995.
[7] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, A. Senior, P. Tucker, K. Yang, Q. V. Le, et al. Large scale distributed deep networks. In NIPS, 2012.
[8] I. S. Dhillon, P. Ravikumar, and A. Tewari. Nearest neighbor based greedy coordinate descent. In NIPS, 2011.
[9] J. C. Duchi, S. Chaturapruek, and C. Ré. Asynchronous stochastic convex optimization. arXiv preprint arXiv:1508.00882, 2015.
[10] C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In ICML, 2008.
[11] C.-J. Hsieh and I. S. Dhillon. Fast coordinate descent methods with variable selection for non-negative matrix factorization. In KDD, 2011.
[12] C.-J. Hsieh, S. Si, and I. S. Dhillon. A divide-and-conquer solver for kernel support vector machines. In ICML, 2014.
[13] C.-J. Hsieh, H. F. Yu, and I. S. Dhillon. PASSCoDe: Parallel ASynchronous Stochastic dual Coordinate Descent. In ICML, 2015.
[14] T. Joachims. Making large-scale SVM learning practical. In Advances in Kernel Methods - Support Vector Learning. MIT Press, 1998.
[15] M. Li, D. G. Andersen, J. W. Park, A. J. Smola, A. Ahmed, V. Josifovski, J. Long, E. J. Shekita, and B.-Y. Su. Scaling distributed machine learning with the parameter server. In OSDI, 2014.
[16] J. Liu and S. J. Wright. Asynchronous stochastic coordinate descent: Parallelism and convergence properties. 2014.
[17] J. Liu, S. J. Wright, C. Re, and V. Bittorf. An asynchronous parallel stochastic coordinate descent algorithm. In ICML, 2014.
[18] Y. E. Nesterov.
Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341–362, 2012.
[19] F. Niu, B. Recht, C. Ré, and S. J. Wright. HOGWILD!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, pages 693–701, 2011.
[20] J. Nutini, M. Schmidt, I. H. Laradji, M. Friedlander, and H. Koepke. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In ICML, 2015.
[21] J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, Cambridge, MA, 1998.
[22] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 144:1–38, 2014.
[23] C. Scherrer, M. Halappanavar, A. Tewari, and D. Haglin. Scaling up coordinate descent algorithms for large l1 regularization problems. In ICML, 2012.
[24] C. Scherrer, A. Tewari, M. Halappanavar, and D. Haglin. Feature clustering for accelerating parallel coordinate descent. In NIPS, 2012.
[25] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[26] S. Sridhar, S. Wright, C. Re, J. Liu, V. Bittorf, and C. Zhang. An approximate, efficient LP solver for LP rounding. In NIPS, 2013.
[27] P.-W. Wang and C.-J. Lin. Iteration complexity of feasible descent methods for convex optimization. Journal of Machine Learning Research, 15:1523–1548, 2014.
[28] E. P. Xing, W. Dai, J. Kim, J. Wei, S. Lee, X. Zheng, P. Xie, A. Kumar, and Y. Yu. Petuum: A new platform for distributed machine learning on big data. In KDD, 2015.
[29] I. Yen, C.-F. Chang, T.-W. Lin, S.-W. Lin, and S.-D. Lin. Indexed block coordinate descent for large-scale linear classification with limited memory. In KDD, 2013.
[30] H.-F.
Yu, C.-J. Hsieh, S. Si, and I. S. Dhillon. Parallel matrix factorization for recommender systems. KAIS, 2013.
[31] H. Yun, H.-F. Yu, C.-J. Hsieh, S. Vishwanathan, and I. S. Dhillon. NOMAD: Non-locking, stochastic multi-machine algorithm for asynchronous and decentralized matrix completion. In VLDB, 2014.
[32] H. Zhang. The restricted strong convexity revisited: Analysis of equivalence to error bound and quadratic growth. ArXiv e-prints, 2015.
Catching heuristics are optimal control policies Boris Belousov*, Gerhard Neumann*, Constantin A. Rothkopf**, Jan Peters* *Department of Computer Science, TU Darmstadt **Cognitive Science Center & Department of Psychology, TU Darmstadt Abstract Two seemingly contradictory theories attempt to explain how humans move to intercept an airborne ball. One theory posits that humans predict the ball trajectory to optimally plan future actions; the other claims that, instead of performing such complicated computations, humans employ heuristics to reactively choose appropriate actions based on immediate visual feedback. In this paper, we show that interception strategies appearing to be heuristics can be understood as computational solutions to the optimal control problem faced by a ball-catching agent acting under uncertainty. Modeling catching as a continuous partially observable Markov decision process and employing stochastic optimal control theory, we discover that the four main heuristics described in the literature are optimal solutions if the catcher has sufficient time to continuously visually track the ball. Specifically, by varying model parameters such as noise, time to ground contact, and perceptual latency, we show that different strategies arise under different circumstances. The catcher’s policy switches between generating reactive and predictive behavior based on the ratio of system to observation noise and the ratio between reaction time and task duration. Thus, we provide a rational account of human ball-catching behavior and a unifying explanation for seemingly contradictory theories of target interception on the basis of stochastic optimal control. 1 Introduction Humans exhibit impressive abilities of intercepting moving targets as exemplified in sports such as baseball [6]. Despite the ubiquity of this visuomotor capability, explaining how humans manage to catch flying objects is a long-standing problem in cognitive science and human motor control. 
What makes this problem computationally difficult for humans are the involved perceptual uncertainties, high sensory noise, and long action delays compared to artificial control systems and robots. Thus, understanding action generation in human ball interception from a computational point of view may yield important insights on human visuomotor control. Surprisingly, there is no generally accepted model that explains empirical observations of human interception of airborne balls. McIntyre et al. [15] and Hayhoe et al. [13] claim that humans employ an internal model of the physical world to predict where the ball will hit the ground and how to catch it. Such internal models allow for planning and potentially optimal action generation, e.g., they enable optimal catching strategies where humans predict the interception point and move there as fast as mechanically possible to await the ball. Clearly, there exist situations where latencies of the catching task require such strategies (e.g., when a catcher moves the arm to receive the pitcher’s ball). By contrast, Gigerenzer & Brighton [11] argue that the world is far too complex for sufficiently precise modeling (e.g., a catcher or an outfielder in baseball would have to take air resistance, wind, and spin of the ball into account to predict its trajectory). Thus, humans supposedly extract few simple but robust features that suffice for successful execution of tasks such as catching. Here, immediate feedback is employed to guide action generation instead of detailed modeling. Policies based on these features are called heuristics and the claim is that humans possess a bag of such tricks, the “adaptive toolbox”. For a baseball outfielder, a successful heuristic could be “Fix your gaze on the ball, start running, and adjust your running speed so that the angle of gaze remains constant” [10]. 
Thus, at the core, finding a unifying computational account of the human interception of moving targets also contributes to the long-lasting debate about the nature of human rationality [20]. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. In this paper, we propose that these seemingly contradictory views can be unified using a single computational model based on a continuous partially observable Markov decision process (POMDP). In this model, the intercepting agent is assumed to choose optimal actions that take uncertainty about future movement into account. This model prescribes that both the catcher and the outfielder act optimally for their respective situation and uncertainty. We show that an outfielder agent using a highly stochastic internal model for prediction will indeed resort to purely reactive policies resembling established heuristics from the literature. The intuitive reason for such short-sighted behavior being optimal is that ball predictions over sufficiently long time horizons with highly stochastic models effectively become guessing. Similarly, our model will yield optimally planned actions based on predictions if the uncertainty encountered by the catcher agent is low while the latency is non-negligible in comparison to the movement duration. Moreover, we identify catching scenarios where the only strategy to intercept the ball requires turning away from it and running as fast as possible. While such strategies cannot be explained by the heuristics proposed so far, the optimal control approach yields a plausible policy exhibiting both reactive and feedforward behavior. While other motor tasks (e.g., reaching movements [9, 22], locomotion [1]) have been explained in terms of stochastic optimal control theory, to the best of our knowledge this paper is the first to explain ball catching within this computational framework.
We show that the four previously described empirical heuristics are actually optimal control policies. Moreover, our approach allows predictions for settings that cannot be explained by heuristics and have not been studied before. As catching behavior has previously been described as a prime example of humans not following complex computations but using simple heuristics, this study opens an important perspective on the fundamental question of human rationality.

2 Related work

A number of heuristics have been proposed to explain how humans catch balls; see [27, 8, 16] for an overview. We focus on three theories well supported by experiments: Chapman's theory, the generalized optic acceleration cancellation (GOAC) theory, and the linear optical trajectory (LOT) theory.

Figure 1: Well-known heuristics.

Chapman [6] considered a simple kinematic problem (see Figure 1) where the ball B follows a parabolic trajectory $B_{0:N}$ while the agent C follows $C_{0:N}$ to intercept it. Only the position of the agent is relevant; his gaze is always directed towards the ball. The angle α is the elevation angle; the angle γ is the bearing angle with respect to the direction $C_0 B_0$ (or $C_2 G_2$, which is parallel). Due to delayed reaction, the agent starts running when the ball is already in the air. Chapman proposed two heuristics: the optic acceleration cancellation (OAC), which prescribes maintaining $d \tan\alpha / dt = \text{const}$, and the constant bearing angle (CBA), which requires $\gamma = \text{const}$. However, Chapman did not explain how these heuristics cope with disturbances and observations. To incorporate visual observations, McLeod et al. [16] introduced the field of view of the agent into Chapman's theory and coupled the agent's running velocity to the location of the ball in the visual field. Instead of the CBA heuristic, a tracking heuristic is employed, forming the generalized optic acceleration cancellation (GOAC) theory. This tracking heuristic allows reactions to uncertain observations.
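Chapman's OAC prescription can be verified in closed form for the idealised drag-free setting: if the catcher runs at the constant velocity that brings him to the landing point exactly at landing time, then $\tan\alpha$ grows linearly in t, so $d \tan\alpha / dt$ is constant. A short simulation (all numerical values below are our own illustrative choices) confirms this:

```python
import numpy as np

g, vz, vx = 9.81, 15.0, 10.0      # gravity and launch velocities (assumed values)
T = 2 * vz / g                    # flight time of the drag-free parabola
x_c0 = 40.0                       # catcher's initial distance from the launch point
u = (vx * T - x_c0) / T           # constant speed reaching the landing point at time T

t = np.linspace(0.0, 0.9 * T, 200)
z = vz * t - 0.5 * g * t**2       # ball height
gap = (x_c0 + u * t) - vx * t     # horizontal catcher-ball distance
tan_alpha = z / gap               # tangent of the elevation angle

rates = np.diff(tan_alpha) / np.diff(t)
print(np.allclose(rates, rates[0]))   # prints True: d tan(alpha)/dt is constant
```

Analytically, the gap shrinks as $x_{c0}(1 - t/T)$ while $z = \frac{g}{2} t (T - t)$, so $\tan\alpha = \frac{gT}{2 x_{c0}} t$, which is exactly linear; any deviation from this interception strategy makes the optical acceleration nonzero.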
In our example in Figure 1, the agent might have moved from $C_0$ to $C_2$ while maintaining a constant γ. To keep fulfilling this heuristic, the ball needs to arrive at $B_2$ at the same time. However, if the ball is already at $B'_2$, the agent will see it falling into the right side of his field of view and will speed up. Thus, the agent internally tracks the angle δ between CD and $C_0 B_0$ and attempts to adjust δ to γ. In Chapman's theory and the GOAC theory, the elevation angle α and the bearing angle γ are controlled independently. Such separate control strategies are implausible, so McBeath et al. [14] proposed the linear optical trajectory (LOT) heuristic, which controls both angles jointly. LOT suggests that the catching agent runs such that the projection of the ball trajectory onto the plane perpendicular to the direction CD remains linear, which implies that $\zeta = \angle E_2 B_0 F_2$ remains constant. Since $\tan\zeta = \tan\alpha_2 / \tan\beta_2$ can be read off the pyramid $B_0 F_2 C_2 E_2$ with right angles at $F_2$, there exists a coupling between the elevation angle α and the horizontal optical angle β (defined as the angle between $C B_0$ and CD), which can be used for directing the agent.

In contrast to the literature on the outfielder's catching in baseball, other strands of research in human motor control have focused on predictive models [17] and optimality of behavior [9, 22]. Tasks similar to the catcher's in baseball have yielded evidence for prediction. Humans were shown to anticipate where a tennis ball will hit the floor when thrown with a bounce [13], and humans also appear to use an internal model of gravity to estimate time-to-contact when catching balls [15].
Optimal control theory has been used to explain reaching movements (with cost functions such as minimum jerk [9], minimum torque change [23], and minimum end-point variance [12]), motor coordination [22], and locomotion (as minimizing metabolic energy [1]).

3 Modeling ball catching under uncertainty as an optimal control problem

To parsimoniously model the catching agent, we rely on an optimal control formulation (Sec. 3.1) where the agent is described in terms of state transitions, observations, and a cost function (Sec. 3.2).

3.1 Optimal control under uncertainty

In optimal control, the interaction of the agent with the environment is described by a stochastic dynamic model or system (e.g., describing ball flight and odometry). The system's state at the next time step k + 1,
$$x_{k+1} = f(x_k, u_k) + \epsilon_{k+1}, \quad k = 0, \dots, N-1, \quad (1)$$
is given as a noisy function of the state $x_k \in \mathbb{R}^n$ and the action $u_k \in \mathbb{R}^m$ at the current time step k. The mean state dynamics f are perturbed by zero-mean stationary white Gaussian noise $\epsilon_k \sim \mathcal{N}(0, Q)$ with a constant system noise covariance matrix Q modeling the uncertainty in the system (e.g., the uncertainty in the agent's and the ball's positions). The state of the system is not always fully observed (e.g., the catching agent can only observe the ball when he looks at it), the observations are lower-dimensional than the system's state (e.g., only ball positions can be observed directly), and they are generally noisy (e.g., visuomotor noise affects ball position estimates). Thus, at every time step k, sensory input provides a noisy lower-dimensional measurement $z_k \in \mathbb{R}^p$ of the true underlying system state $x_k \in \mathbb{R}^n$ with p < n, described by
$$z_k = h(x_k) + \delta_k, \quad k = 1, \dots, N, \quad (2)$$
where h is a deterministic observation function and $\delta_k \sim \mathcal{N}(0, R_k)$ is zero-mean non-stationary white Gaussian noise with a state-dependent covariance matrix $R_k = R(x_k)$. For catching, this state dependency is crucial for modeling the effect of the human visual field.
When the ball is at its center, measurements are least uncertain; whereas when the ball is outside the visual field, observations are maximally uncertain. The agent can obviously generate actions based only on the observations collected so far, while affecting his own and the environment's true next state. The history of observations allows forming probability distributions over the state at different time steps, called beliefs. Taking the uncertainty in (1) and (2) into account, the agent needs to plan and control in the belief space (i.e., the space of probability distributions over states) rather than in the state space. We approximate the belief $b_k$ about the state of the system at time k by a Gaussian distribution with mean $\mu_k$ and covariance $\Sigma_k$. For brevity, we write $b_k = (\mu_k, \Sigma_k)$, associating the belief with its sufficient statistics. The belief dynamics $(b_{k-1}, u_{k-1}, z_k) \to b_k$ are approximated by the extended Kalman filter [21, Chapter 3.3]. A cost function J can be a parsimonious description of the agent's objective: at every time step, the agent chooses the next action by optimizing this cost function with respect to all future actions. To make the resulting optimal control computations numerically tractable, future observations need to be assumed to coincide with their most likely values (see, e.g., [19, 5]). Thus, at every time step, the agent solves the constrained nonlinear optimization problem
$$\min_{u_{0:N-1}} J(\mu_{0:N}, \Sigma_{0:N}; u_{0:N-1}) \quad \text{s.t. } u_k \in U_{feasible},\ k = 0, \dots, N-1; \quad \mu_k \in X_{feasible},\ k = 0, \dots, N, \quad (3)$$
which returns an optimal sequence of controls $u_{0:N-1}$ minimizing the objective J. The agent executes the first action, obtains a new observation, and replans; such an approach is known as model predictive control. The resulting policy is sub-optimal because of open-loop planning and the limited time horizon, but with a growing time horizon it approaches the optimal policy. Reaction time $\tau_r$ can be incorporated by delaying the observations.
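One step of the belief recursion $(b_{k-1}, u_{k-1}, z_k) \to b_k$ can be sketched with a generic extended Kalman filter. Function names and signatures below are placeholders, not the paper's code; see [21] for the standard derivation:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, f, F_jac, h, H_jac, Q, R):
    """Generic EKF update of the Gaussian belief (mu, Sigma)."""
    # Predict through the stochastic dynamics (1).
    mu_pred = f(mu, u)
    F = F_jac(mu, u)
    S_pred = F @ Sigma @ F.T + Q
    # Correct with the noisy observation (2); R may depend on the state (gaze model).
    H = H_jac(mu_pred)
    innovation = z - h(mu_pred)
    K = S_pred @ H.T @ np.linalg.inv(H @ S_pred @ H.T + R)
    mu_new = mu_pred + K @ innovation
    S_new = (np.eye(len(mu)) - K @ H) @ S_pred
    return mu_new, S_new
```

For a linear system this reduces to the ordinary Kalman filter: the posterior mean moves toward the measurement, and the covariance contracts, which is exactly the uncertainty reduction that the cost terms of Section 3.2 reward.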
An interesting property of this model is that the catching agent decides on his own, in an optimal way, when to gather information by looking at the ball and when to exploit already acquired knowledge, depending on the level of uncertainty he is willing to tolerate.

3.2 A computational model of the catching agent for belief-space optimal control

Here we explain the modeling assumptions concerning states, actions, state transitions, and observations. After that we describe the cost function that the agent has to minimize.

States and actions. The state x of the system consists of the location and velocity of the ball in 3D space, the location and velocity of the catching agent in the ground plane, and the agent's gaze direction, represented by a unit 3D vector. The agent's actions u consist of the force applied to the center of mass and the rate of change of the gaze direction.

State transitions and observations. Several model components are essential to faithfully describe catching behavior. First, the state transfer is described by the damped dynamics of the agent's center of mass, $\ddot{r}_c = F - \lambda \dot{r}_c$, where $r_c = [x, y]$ are the agent's Cartesian coordinates, F is the applied force resulting from the agent's actions, and λ is the damping coefficient. Damping ensures that the catching agent's velocity does not grow without bound when the maximum force is applied. The magnitude of the maximal force and the friction coefficient are chosen to fit Usain Bolt's sprint data (footnote 1). Second, the gaze vector's direction d is controlled through the first derivatives of the two angles that define it: the angle between d and its projection onto the xy-plane, and the angle between d's projection onto the xy-plane and the x-axis. This parametrization of the actions allows for realistically fast changes of gaze direction. Third, the maximal running speed depends on the gaze direction; e.g., running backwards is slower than running forwards or even sideways.
This relationship can be incorporated through a dependence of the maximal applicable force Fmax on the direction d. It can be expressed by limiting the magnitude of the maximal applicable force, |Fmax(θ)| = F1 + F2 cos θ, where θ is the angle between F (i.e., the direction into which the catcher accelerates) and the projection of the catcher's gaze direction d onto the xy-plane. The parameters F1 and F2 are chosen to fit human data on forward and backwards running2. The resulting continuous-time dynamics of agent and ball are converted into discrete-time state transfers using the classical Runge-Kutta method. Fourth, the observation uncertainty depends on the state, which reflects the fact that humans' visual resolution falls off across the visual field with increasing distance from the fovea. When the ball falls to the side of the agent's field of view, the uncertainty about the ball's position grows according to σo² = s(σmax²(1 − cos Ω) + σmin²), depending on the distance s to the ball and the angle Ω between the gaze direction d and the vector pointing from the agent towards the ball. The parameters {σmin, σmax} control the scale of the noise. The ball is modeled as a parabolic flight perturbed by Gaussian noise with variance σb².

Cost function. The catching agent has to trade off success (i.e., catching the ball) with effort. In other words, he aims at maximizing the probability of catching the ball with minimal effort. A ball is assumed to be caught if it is within reach, i.e., not further away from the catching agent than εthreshold at the final time. Thus, the probability of catching the ball can be expressed as Pr(|µb − µc| ≤ εthreshold), where µb and µc are the predicted positions of the ball and the agent at the final time (i.e., parts of the belief state of the agent). Since such beliefs are modeled as Gaussians, this probability has a unique global maximum at µb = µc and ΣN → 0+.
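The state-dependent observation noise σo² = s(σmax²(1 − cos Ω) + σmin²) can be sketched directly; the parameter values below are illustrative placeholders, not the paper's fitted values:

```python
import math

def obs_noise_var(s, omega, sigma_min=0.01, sigma_max=0.5):
    """Observation variance grows with the distance s to the ball and with
    the angle omega between the gaze direction and the direction to the ball."""
    return s * (sigma_max ** 2 * (1.0 - math.cos(omega)) + sigma_min ** 2)
```

Foveating the ball (Ω = 0) leaves only the baseline term s·σmin², while looking away (Ω = π) adds 2·σmax²·s, which is what makes gaze control informative in this model.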
Therefore, a final cost Jfinal = w0‖µb − µc‖² + w1 tr ΣN can approximate the negated log-probability of successfully catching the ball while rendering the optimal control problem solvable. The weights w0 and w1 are set to optimally approximate this negated log-probability. The desire of the agent to be energy efficient is encoded as a penalty on the control signals, Jenergy = τ Σ_{k=0}^{N−1} ukᵀMuk, with the fixed duration τ of the discretized time steps and a diagonal weight matrix M to trade off controls. Finally, we add a term that penalizes the agent's uncertainty at every time step, Jrunning = τ w2 Σ_{k=0}^{N−1} tr Σk, which encodes the agent's preference of certainty over uncertainty. It appears naturally in optimal control problems when the maximum likelihood observations assumption is relaxed [24] and captures how the final uncertainty distributes over the preceding time steps, but has to be added explicitly within the model predictive control framework in order to account for replanning at every time step. The complete cost function is thus given by the sum

J = Jfinal + Jrunning + Jenergy = w0‖µb − µc‖² (final position) + w1 tr ΣN (final uncertainty) + τ w2 Σ_{k=0}^{N−1} tr Σk (running uncertainty) + τ Σ_{k=0}^{N−1} ukᵀMuk (total energy)

that the catching agent has to minimize in order to successfully intercept the ball.

1 Usain Bolt's world record sprint data http://datagenetics.com/blog/july32013/index.html
2 World records for backwards running http://www.recordholders.org/en/list/backwards-running.html

3.3 Implementation details

To solve Problem (3), we use the covariance-free multiple shooting method [18] for trajectory optimization [7, 3] in the belief space. Derivatives of the cost function are computed using CasADi [2]. Non-linear optimization is carried out by Ipopt [26] with L-BFGS and warm-starts.

4 Simulated experiments and results

In this section, we present the results of two simulated scenarios and a comparative evaluation.
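Before turning to the experiments, the cost J = Jfinal + Jrunning + Jenergy defined in Section 3.2 can be sketched numerically. This is a minimal sketch with placeholder weights, beliefs summarized by their covariance traces, and a diagonal M given as a list; all names are hypothetical:

```python
def catching_cost(mu_b, mu_c, tr_sigma, u, M, tau=0.1, w0=1.0, w1=1.0, w2=0.1):
    """J = w0*||mu_b - mu_c||^2 + w1*tr(Sigma_N)
         + tau*w2*sum_k tr(Sigma_k) + tau*sum_k u_k^T M u_k.
    mu_b, mu_c: predicted final positions (lists of floats);
    tr_sigma:   traces of the belief covariances for k = 0..N;
    u:          control vectors for k = 0..N-1;
    M:          diagonal entries of the control weight matrix."""
    j_final = w0 * sum((b - c) ** 2 for b, c in zip(mu_b, mu_c)) + w1 * tr_sigma[-1]
    j_running = tau * w2 * sum(tr_sigma[:-1])
    j_energy = tau * sum(sum(m * x * x for m, x in zip(M, uk)) for uk in u)
    return j_final + j_running + j_energy
```

A perfect, certain, effortless catch (matching positions, zero covariances, zero controls) has cost zero, and each term penalizes one deviation from that ideal.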
First, using the optimal control approach, we show that continuous tracking (where the ball always remains in the field of view of the outfielder) naturally leads to the heuristics from the literature [6, 16, 14] if the catching agent is sufficiently fast in comparison to the ball, independent of whether he is running forward, backwards, or sideways. Subsequently, we show that more complex behavior arises when the ball is too fast to be caught while running only sideways or backwards (e.g., as in soccer or long passes in American football). Here, tracking is interrupted as the agent needs to turn away from the ball to run forward. While the heuristics break, our optimal control formulation exhibits plausible strategies similar to those employed by human catchers. Finally, we systematically study the effects of noise and time delay on the agent's policy. The optimal control policies arising from our model switch between reactive and predictive behaviors depending on uncertainty and latency.

4.1 Continuous tracking of an outfielder—heuristics hold

To directly compare our model against empirical catching data that has been described as resulting from a heuristic, we reproduce the settings from [16], where a ball flew 15 m in 3 s and a human subject starting about 6 m away from the impact point had to intercept it.

Figure 2: A typical simulated trajectory of a successful catch in the continuous tracking scenario as encountered by the outfielder, viewed from above (distances in m; shown are the catcher's trajectory and gaze, the ball's true and observed trajectories, and the belief trajectory with its covariance). The uncertainty in the belief state is kept low by the agent by fixating the ball. Such empirically observed scenarios [6, 16, 14] have led to the proposition of the heuristics which arise naturally from our optimal control formulation.
The optimal control policy can deal with such situations and yields the behavior observed by McLeod et al. [16]. In fact, even when doubling all distances, the reactive control policy exhibits all four major heuristics (OAC, GOAC, CBA, and LOT) with approximately the same precision as in the original human experiments. Figure 2 shows a typical simulated catch viewed from above. The ball and the agent's true trajectories are depicted in green (note that the ball is frequently hidden behind the belief state trajectory). The agent's observations and the mean belief trajectory of the ball are represented by magenta crosses and a magenta line, respectively. The belief uncertainty is indicated by the cyan ellipsoids that capture 95% of the probability mass. The gaze vectors of the agent are shown as red arrows. The catching agent starts sufficiently close to the interception point to continuously visually track the ball; therefore, he is able to efficiently reduce his uncertainty about the ball's position and successfully intercept it while keeping it in sight. Note that the agent does not follow a straight trajectory but a curved one, in agreement with human experiments [16]. Figure 3 shows plots of the relevant angles over time to compare the behavior exhibited by human catchers to the optimal catching policy. The tangent of the elevation angle tan α grows linearly with time, as predicted by the optic acceleration cancellation heuristic (OAC). The bearing angle γ remains constant (within a 5 deg margin) as predicted by the constant bearing angle heuristic (CBA). The rotation angle δ oscillates around γ as predicted by the generalized optic acceleration cancellation theory (GOAC). The tangent of the horizontal optical angle tan β is proportional to tan α, as predicted by the linear optical trajectory theory (LOT). The small oscillations in the rotation angle and in the horizontal optical angle are due to reaction delay and uncertainty; they are also predicted by GOAC and LOT.
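Checking a heuristic such as OAC (tan α grows linearly in t) reduces to a least-squares line fit and a residual test. A minimal sketch with synthetic data; the tolerance is an arbitrary assumption, not the precision criterion used in the paper:

```python
def fits_line(ts, ys, tol=1e-6):
    """Least-squares fit ys ~ a*ts + b; True if the max residual is below tol."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    a = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) \
        / sum((t - mt) ** 2 for t in ts)
    b = my - a * mt
    return max(abs(y - (a * t + b)) for t, y in zip(ts, ys)) < tol

ts = [0.1 * k for k in range(30)]
assert fits_line(ts, [0.33 * t for t in ts])        # linear tan(alpha): OAC holds
assert not fits_line(ts, [t * t for t in ts])       # quadratic growth: OAC violated
```

The same residual test, applied to a constant fit for γ or a proportional fit of tan β against tan α, covers CBA and LOT.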
Thus, in this well-studied case, the model produces an optimal policy that exhibits behavior which is fully in accordance with the heuristics.

Figure 3: During simulations of successful catches for the continuous tracking scenario encountered by the outfielder (shown in Figure 2), the policies resulting from our optimal control formulation always fulfill the heuristics (OAC, GOAC, CBA, and LOT) from the literature with approximately the same precision as in the original human experiments. [Panels: tan α over time with linear fit (OAC); rotation angle δ and bearing angle γ over time (tracking heuristic, part of GOAC); bearing angle γ over time with constant fit (CBA); tan β versus tan α with linear fit (LOT).]

4.2 Interrupted tracking during long passes—heuristics break but prediction is required

The competing theory to the heuristics claims that a predictive internal model allows humans to intercept the ball [15, 13]. Brancazio [4] points out that "the best outfielders can even turn their backs to the ball, run to the landing point, and then turn and wait for the ball to arrive". Similar behavior is observed in soccer and American football during long passes. To see whether predictions become necessary, we reproduced situations where the agent cannot catch the ball when acting purely reactively.
For example, if the running time to the interception point when running backwards (i.e., the ratio between the distance to the interception point and the maximal backwards running velocity) is substantially higher than the flight time of the ball, no backwards running strategy will be successful. Thus, by varying the initial conditions for the catching agent and the ball, new scenarios can be generated using our optimal control model. The agent's control policy can be tested for reliance on predictions as it is available in the form of a computational model, i.e., if the computed policy makes use of the belief states on future time steps, the agent clearly employs an internal model to predict the interception point. By choosing appropriate initial conditions for the ball and the agent, we can pursue such scenarios. For example, if the ball flies over the agent's head, he has to turn away from it for a moment in order to gain speed by running forward, instead of running backwards or sideways and looking at the ball all the time.

Figure 4: An interception plan that leads to a successful catch despite violating heuristics. Here, the agent would not be able to reach the interception point in time while running backwards and, thus, has to turn forward to run faster. The resulting optimal control policy relies on beliefs on the future generated by an internal model.

Figure 5: For initial conditions (positions of the ball and the agent) which do not allow the agent to reach the interception point by running backwards or sideways, the optimal policy will include running forward with maximal velocity (as shown in Figure 4). In this case, the agent cannot continuously visually track the ball and, expectedly, the heuristics do not hold. [Panels: tan α over time (OAC); rotation angle δ and bearing angle γ over time (tracking heuristic, part of GOAC); bearing angle γ over time (CBA); tan β versus tan α (LOT).]

Figure 4 shows such an interception plan where the agent decides to initially speed up and, when sufficiently close, turn around and track the ball while running sideways. Notice that the future belief uncertainty (i.e., the posterior uncertainty Σ returned by the extended Kalman filter), represented by red ellipses, grows when the catcher is not looking at the ball and shrinks otherwise. The prior uncertainty (obtained by integrating out future observations), shown in yellow, on the other hand, grows towards the end of the trajectory because future observations are not available at planning time. Similar to [5, 25], we can show for our model predictive control law that the sum of prior and posterior uncertainties (shown as green circles) equals the total system uncertainty obtained by propagating the belief state into the future without incorporating future observations. Figure 5 shows that the heuristics fail to explain this catch—even in the final time steps where the catching agent is tracking the ball to intercept it. OAC deviates from linearity, CBA is not constant, the tracking heuristic wildly deviates from the prediction, and LOT is highly non-linear. GOAC and LOT are affected more dramatically because they directly depend on the catcher's gaze, in contrast to OAC and CBA. Since the heuristics were not meant to describe such situations, they predictably do not hold.
Only an internal model can explain the reliance of the optimal policy on the future belief states.

4.3 Switching behaviors when uncertainty and reaction time are varied

The previous experiment has pointed us towards policies that switch between predictive subpolicies based on internal models and reactive policies based on current observations. To systematically study what behaviors arise, we use the scenario from Section 4.2 and vary two essential model parameters: the system-to-observation noise ratio η1 = log(σb²/σo²) and the reaction-time-to-task-duration ratio η2 = τr/T, where T is the duration of the ball flight. The system-to-observation noise ratio effectively determines whether predictions based on the internal model of the dynamics are sufficiently trustworthy for (partially) open-loop behavior or whether reactive control based on the observations of the current state of the system should be preferred. The reaction-time-to-task-duration ratio sets the time scale of the problem. For example, an outfielder in baseball may have about 3 s to catch a ball and his reaction delay of about 200 ms is negligible, whereas a catcher in baseball often has to act within a fraction of a second, and, thus, the reaction latency becomes crucial.

Figure 6: Switches between reactive and feedforward policies are determined by uncertainties and latency.

We run the experiment at different noise levels and time delays and average the results over 10 trials. In all cases, the agent starts at the point (20, 5) looking towards the origin, while the ball flies from the origin towards the point (30, 15) in 3 s. All parameters are kept fixed apart from the reaction time and system noise; in particular, task duration and observation noise are kept fixed. Figure 6 shows how the agent's policy depends on the parameters. Boundaries correspond to contour lines of the number of times the agent turns towards the ball.
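The quantity contoured in Figure 6—how often the agent turns towards the ball—can be sketched as counting onsets of tracking phases in a gaze trajectory. The angle threshold below is a hypothetical assumption; the paper detects turns from gaze direction changes and uncertainty reduction:

```python
def count_turns_towards_ball(angles, threshold=0.35):
    """Count onsets of phases in which the angle (rad) between the gaze and
    the direction to the ball drops below the threshold, i.e., tracking starts."""
    turns = 0
    tracking = False
    for a in angles:
        if a < threshold and not tracking:
            turns += 1
        tracking = a < threshold
    return turns
```

A trajectory that looks at the ball, turns away to sprint, and turns back (as in Figure 4) yields two tracking onsets under this convention.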
We count turns by analyzing trajectories for gaze direction changes and reduction of uncertainty (e.g., in Figure 4 the agent turns once towards the ball). When reaction delays are long and predictions are reliable, the agent turns towards the interception point and runs as fast as he can (purely predictive strategies; lower right corner in Figure 6). When predictions are not sufficiently trustworthy, the agent has to switch multiple times between a reactive policy to gather information and a predictive feedforward strategy to successfully fulfill the task (upper left corner). When reaction time and system noise become sufficiently large, the agent fails to intercept the ball (upper right, grayed-out area). Thus, seemingly substantially different behaviors can be explained by means of a single model. Note that in this figure a purely reactive strategy (as required for only using the heuristics) is not possible. However, if different initial conditions enabling the purely reactive strategy are used, the upper left corner is dominated by the purely reactive strategy.

5 Discussion and conclusion

We have presented a computational model of human interception of a moving target, such as an airborne ball, in the form of a continuous state-action partially observable Markov decision problem. Depending on initial conditions, the optimal control solver either generates continuous tracking behavior or directs the catching agent to turn away from the ball in order to speed up. Interception trajectories in the first case turn out to demonstrate all properties that were previously taken as evidence that humans avoid complex computations by employing simple heuristics. In the second case, we have shown that different regimes of switches between reactive and predictive behavior arise depending on relative uncertainty and latency.
When the agent has sufficient time to gather observations (bottom-left in Figure 6), he turns towards the ball as soon as possible and continuously tracks it until the end (e.g., an outfielder in baseball acts in this regime). If he is confident in the interception point prediction but the task duration is so short relative to the latency that he does not have sufficient time to gather observations (bottom-right), he will rely entirely on the internal model (e.g., a catcher in baseball may act in this regime). If the agent's interception point prediction is rather uncertain (e.g., due to system noise), the agent will gather observations more often, regardless of time delays. Conclusions regarding the trade-off between reactive and predictive behaviors may well generalize beyond ball catching to various motor skills. Assuming an agent has an internal model of a task and gets noisy, delayed, partial observations, he has to tolerate a certain level of uncertainty; if, moreover, the agent has limited time to perform the task, he is compelled to act based on predictions instead of observations. As our optimal control policy can explain both reactive heuristics and predictive feedforward strategies, as well as switches between these two kinds of subpolicies, it can be viewed as a unifying explanation for the two seemingly contradictory theories of target interception. In this paper, we have provided a computational-level explanation for a range of observed human behaviors in ball catching. Importantly, while previous interpretations of whether human catching behavior is the result of complex computations or the result of simple heuristics have been inconclusive, here we have demonstrated that what looks like simple rules of thumb from a bag of tricks is actually the optimal solution to a continuous partially observable Markov decision problem. This result therefore fundamentally contributes to our understanding of human rationality.
Acknowledgements

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 640554.

References

[1] F. C. Anderson and M. G. Pandy. Dynamic optimization of human walking. Journal of Biomechanical Engineering, 123(5):381–390, 2001.
[2] J. Andersson, J. Åkesson, and M. Diehl. CasADi: A symbolic package for automatic differentiation and optimal control. In Recent Advances in Algorithmic Differentiation, pages 297–307. Springer, 2012.
[3] J. T. Betts. Survey of numerical methods for trajectory optimization. Journal of Guidance, Control, and Dynamics, 21(2):193–207, 1998.
[4] P. J. Brancazio. Looking into Chapman's homer: The physics of judging a fly ball. American Journal of Physics, 53(9):849, 1985.
[5] A. Bry and N. Roy. Rapidly-exploring random belief trees for motion planning under uncertainty. In Proceedings - IEEE ICRA, pages 723–730, 2011.
[6] S. Chapman. Catching a baseball. American Journal of Physics, 36(10):868, 1968.
[7] M. Diehl, H. G. Bock, H. Diedam, and P. B. Wieber. Fast direct multiple shooting algorithms for optimal robot control. In Lecture Notes in Control and Information Sciences, volume 340, pages 65–93, 2006.
[8] P. W. Fink, P. S. Foo, and W. H. Warren. Catching fly balls in virtual reality: a critical test of the outfielder problem. Journal of Vision, 9(13):1–8, 2009.
[9] T. Flash and N. Hogan. The coordination of arm movements: an experimentally confirmed mathematical model. The Journal of Neuroscience, 5(7):1688–1703, 1985.
[10] G. Gigerenzer. Gut feelings: The intelligence of the unconscious. Penguin, 2007.
[11] G. Gigerenzer and H. Brighton. Homo Heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1):107–143, 2009.
[12] C. M. Harris and D. M. Wolpert. Signal-dependent noise determines motor planning. Nature, 394(6695):780–4, 1998.
[13] M. M. Hayhoe, N. Mennie, K. Gorgos, J. Semrau, and B. Sullivan.
The role of prediction in catching balls. Journal of Vision, 4(8):156–156, 2004.
[14] M. McBeath, D. Shaffer, and M. Kaiser. How baseball outfielders determine where to run to catch fly balls. Science, 268(5210):569–573, 1995.
[15] J. McIntyre, M. Zago, A. Berthoz, and F. Lacquaniti. Does the brain model Newton's laws? Nature Neuroscience, 4(7):693–694, 2001.
[16] P. McLeod, N. Reed, and Z. Dienes. The generalized optic acceleration cancellation theory of catching. Journal of Experimental Psychology: Human Perception and Performance, 32(1):139–148, 2006.
[17] R. C. Miall and D. M. Wolpert. Forward models for physiological motor control, 1996.
[18] S. Patil, G. Kahn, M. Laskey, and J. Schulman. Scaling up Gaussian belief space planning through covariance-free trajectory optimization and automatic differentiation. Algorithmic Foundations of Robotics XI, pages 515–533, 2015.
[19] R. Platt, R. Tedrake, L. Kaelbling, and T. Lozano-Perez. Belief space planning assuming maximum likelihood observations. In Robotics: Science and Systems, 2010.
[20] H. A. Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1):99–118, 1955.
[21] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. MIT Press, 2005.
[22] E. Todorov and M. I. Jordan. Optimal feedback control as a theory of motor coordination. Nature Neuroscience, 5(11):1226–1235, 2002.
[23] Y. Uno, M. Kawato, and R. Suzuki. Formation and control of optimal trajectory in human multijoint arm movement. Minimum torque-change model. Biological Cybernetics, 61(2):89–101, 1989.
[24] J. van den Berg, S. Patil, and R. Alterovitz. Motion planning under uncertainty using iterative local optimization in belief space. The International Journal of Robotics Research, 31(11):1263–1278, 2012.
[25] M. P. Vitus and C. J. Tomlin. Closed-loop belief space planning for linear, Gaussian systems. In Proceedings - IEEE ICRA, pages 2152–2159, 2011.
[26] A. Wächter and L. T. Biegler.
On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming. Mathematical Programming, 106:25–57, 2006.
[27] M. Zago, J. McIntyre, P. Senot, and F. Lacquaniti. Visuo-motor coordination and internal models for object interception, 2009.
Online Pricing with Strategic and Patient Buyers

Michal Feldman (Tel-Aviv University and MSR Herzliya, michal.feldman@cs.tau.ac.il), Tomer Koren* (Google Brain, tkoren@google.com), Roi Livni* (Princeton University, rlivni@cs.princeton.edu), Yishay Mansour* (Tel-Aviv University, mansour@tau.ac.il), Aviv Zohar* (Hebrew University of Jerusalem, avivz@cs.huji.ac.il)

Abstract

We consider a seller with an unlimited supply of a single good, who is faced with a stream of T buyers. Each buyer has a window of time in which she would like to purchase, and would buy at the lowest price in that window, provided that this price is lower than her private value (and otherwise, would not buy at all). In this setting, we give an algorithm that attains O(T^{2/3}) regret over any sequence of T buyers with respect to the best fixed price in hindsight, and prove that no algorithm can perform better in the worst case.

1 Introduction

Perhaps the most common way to sell items is using a "posted price" mechanism in which the seller publishes the price of an item in advance, and buyers that wish to obtain the item decide whether to acquire it at the given price or to forgo the purchase. Such mechanisms are extremely appealing. The decision made by the buyer in a single-shot interaction is simple: if she values the item at more than the offering price, she should buy, and if her valuation is lower, she should decline. The seller on the other hand needs to determine the price at which she wishes to sell goods. In order to set prices, additive regret can be minimized using, for example, a multi-armed bandit (MAB) algorithm in which arms correspond to different prices, and rewards correspond to the revenue obtained by the seller. Things become much more complicated when the buyers who are facing the mechanism are patient and can choose to wait for the price to drop.
The simplicity of posted price mechanisms is then tainted by strategic considerations, as buyers attempt to guess whether or not the seller will lower the price in the future. The direct application of MABs is no longer adequate, as prices set by such algorithms may fluctuate at every time period. Strategic buyers can make use of this fact to gain the item at a lower price, which lowers the revenue of the seller and, more crucially, changes the seller's feedback for a given price. With patient buyers, the revenue from sales is no longer a result of the price at the current period alone, but rather the combined outcome of prices that were set in surrounding time periods, and of the expectation of buyers regarding future prices. In this paper, we focus on strategic buyers that may delay their purchase in hopes of obtaining a better deal. We assume that each buyer has a valuation for the item, and a "patience level" which represents the length of the time-window during which she is willing to wait in order to purchase the item. Buyers wish to minimize the price during this period. Note that such buyers may interfere with naïve attempts to minimize regret, as consecutive days at which different prices are set are no longer independent.

*Parts of this work were done while the author was at Microsoft Research, Herzliya.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

To regain the simplicity of posted prices for the buyers, we consider a setting in which the seller commits to the price in subsequent time periods in advance, publishing prices for the entire window of the buyers. Strategic buyers that arrive at the market are then able to immediately choose the lowest price within their window.
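The resulting buyer behavior—buy at the lowest posted price within one's window, provided it does not exceed one's value—can be sketched directly. The helper below is hypothetical, not code from the paper:

```python
def buyer_payment(prices, t, patience, value):
    """Price paid by a buyer arriving at time t who sees the posted prices
    p_t, ..., p_{t+patience} and buys at the lowest one, if it does not
    exceed her value; 0.0 means no purchase."""
    p = min(prices[t : t + patience + 1])
    return p if p <= value else 0.0
```

A patient buyer thus benefits from any price drop inside her window, which is exactly what complicates the seller's feedback.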
Thus, given the valuation and patience of the buyers (the number of days they are willing to wait), their actions are clearly determined: buy at the day within the buyer's patience window at which the price is cheapest, provided that it is lower than the valuation. An important aspect of our proposed model is to consider for each buyer a window of time (rather than, for example, discounting). When considering discounting, the buyers, in order to best respond, would have to reason about how other buyers would behave and how the seller would adjust the prices in response to them. By fixing a window of time, and forcing the seller to publish prices for the entire window, the buyers become "price takers" and their behavior becomes tractable to analyze. As in previous works, we focus on minimizing the additive regret of the seller, assuming that the appearance of buyers is adversarial; that is, we do not make any statistical assumptions on the buyers' valuations and window sizes (except for a simple upper bound). Specifically, we assume that the values are in the range [0, 1] and that the window size is in the range {1, . . ., τ̂ + 1}. The regret is measured with respect to the best single price in hindsight. Note that the benchmark of a fixed price p* implies that any buyer with value above p* buys and any buyer with value below p* does not buy. The window size has no effect when we have a fixed price. On the other hand, for the online algorithm, having to deal with various window sizes creates a new challenge. The special case of this model where τ̂ = 0 (and hence all buyers have a window of size exactly one) was previously studied by Kleinberg and Leighton [11], who discussed a few different models for the buyer valuations and derived tight regret bounds for them. When the set of feasible prices is of constant size, their result implies a Θ(√T) regret bound with respect to the best fixed price, which is also proven to be the best possible in that case.
In contrast, in the current paper we focus on the case τ̂ ≥ 1, where the buyers' window sizes may be larger than one, and exhibit the following contributions: (i) We present an algorithm that achieves O(τ̂^{1/3} T^{2/3}) additive regret in an adversarial setting, compared to the best fixed posted price in hindsight. The upper bound relies on creating epochs, where the price within each epoch is fixed and the number of epochs limits the number of times the seller switches prices. The actual algorithm that is used to select prices within an epoch is EXP3 (or can be any other multi-armed bandit algorithm with similar performance). (ii) We exhibit a matching lower bound of Ω(τ̂^{1/3} T^{2/3}) regret. The proof of the lower bound reveals that the difficulty in achieving lower regret stems from the lost revenue that the seller suffers every time she tries to lower prices: buyers from preceding time slots wait and do not purchase the items at the higher prices that prevailed when they arrived. We are thus able to prove a lower bound by reducing to a multi-armed bandit problem with switching costs. Our lower bound uses only two prices. In other words, we see that as soon as the buyers' patience increases from zero to one, the optimal regret rate immediately jumps from Θ(√T) to Θ(T^{2/3}). The rest of the paper is organized as follows. In the remainder of this section we briefly overview related work. We then proceed in Section 2 to provide a formal definition of the model and the statement of our main results. We continue in Section 3 with a presentation of our algorithm and its analysis, present our lower bound in Section 4, and conclude with a brief discussion.

1.1 Related work

As mentioned above, the work most closely related to ours is the paper of Kleinberg and Leighton [11] that studies the case τ̂ = 0, i.e., in which the buyers' windows are limited to be all of size one.
For a fixed set of feasible prices of constant size, their result implies a Θ(√T) regret bound, whereas for a continuum of prices they achieve a Θ(T^{2/3}) regret bound. The Ω(T^{2/3}) lower bound found in [11] is similar to our own in asymptotic magnitude, but stems from the continuous nature of the prices. In our case the lower bound is achieved for buyers with only 2 prices, a case in which Kleinberg and Leighton [11] have a bound of Θ(√T). Hence, we show that such a bound can occur due to the strategic nature of the interaction itself. A line of work appearing in [1, 12, 13] considers a model of a single buyer and a single seller, where the buyer is strategic and has a constant discount factor. The main issue is that the buyer continuously interacts with the seller and thus has an incentive to lower future prices at the cost of current valuations. They define strategic regret and derive near-optimal strategic regret bounds for various valuation models. We differ from this line of work in a few important ways. First, they consider either a fixed unknown valuation or stochastic i.i.d. valuations, while we consider adversarial valuations. Second, they consider a single buyer while we consider a stream of buyers. More importantly, in our model the buyers do not influence the prices they are offered, so the strategic incentives are very different. Third, their model uses discounting to model the decay of buyer valuation over time, while we use a window of time. There is a vast literature in Algorithmic Game Theory on revenue maximization with posted prices, in settings where agents' valuations are drawn from unknown distributions. For the case of a single good of unlimited supply, the goal is to approximate the best price, as a function of the number of samples observed and with a multiplicative approximation ratio. The work of Balcan et al.
[4] gives a generic reduction which can be used to show that one can achieve an ε-optimal pricing with a sample of size O((H/ε²) log(H/ε)), where H is a bound on the maximum valuation. The works of Cole and Roughgarden [8] and Huang et al. [10] show that for regular and Monotone Hazard Rate distributions, sample bounds of Θ(ε^{-3}) and Θ(ε^{-3/2}), respectively, guarantee a multiplicative approximation of 1 − ε. Finally, our setting is somewhat similar to a unit-demand auction in which agents desire a single item out of several offerings. In our case, we can consider items sold at different times as different items, where each agent desires a single item within her window. When agents have unit-demand preferences, posted-price mechanisms can extract a constant fraction of the optimal revenue [5, 6, 7]. Note that a constant-ratio approximation algorithm implies linear regret in our model. On the other hand, these works consider a more involved problem from a buyer's valuation perspective. 2 Setup and Main Results We consider a setting with a single seller and a sequence of T buyers b1, . . ., bT. Every buyer bt is associated with a value vt ∈ [0, 1] and a patience τt. A buyer's patience indicates the time duration in which the buyer stays in the system and may purchase an item. The seller posts prices in advance over some time window. Let τ̂ be the maximum patience, and assume that τt ≤ τ̂ for every t. Let pt denote the price at time t, and assume that all prices are chosen from a discrete (and normalized) predefined set of n prices P = {0, 1/n, 2/n, . . ., 1}. At time t = 1, the seller posts prices p1, . . ., p_{τ̂+1}, and learns the revenue obtained at time t = 1 (the revenue depends on the buyers' behavior, which is explained below). Then, at each time step t, the seller publishes a new price p_{t+τ̂} ∈ P, and learns the revenue obtained at time t, which she can use to set the next prices. Note that at every time step, prices are known for the next τ̂ time steps.
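As a concrete sketch of the buyer behavior in this model (illustrative code, not the authors' implementation): a buyer observes the prices within her patience window and buys at the lowest one, provided it does not exceed her value.

```python
def buyer_revenue(prices, value, patience):
    """Revenue from one buyer: she observes prices[0 : patience+1],
    buys at the lowest price in that window (ties broken toward
    earlier times), and only if that price does not exceed her value."""
    window = prices[:patience + 1]
    best = min(window)
    return best if best <= value else 0.0

# A buyer with value 0.6 and patience 1 waits for the cheaper day:
assert buyer_revenue([0.8, 0.5, 0.9], value=0.6, patience=1) == 0.5
# With patience 0 she only sees the first price, which she cannot afford:
assert buyer_revenue([0.8, 0.5, 0.9], value=0.6, patience=0) == 0.0
```

This is exactly the strategic effect the lower bound exploits: a patient buyer skips a high price whenever a lower one is already announced later in her window.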
The revenue in every time step is determined by the strategic behavior of buyers, which is explained next. Every buyer bt observes the prices pt, . . ., p_{t+τt}, and purchases the item at the lowest price among them (breaking ties toward earlier times), provided it does not exceed her value. The revenue obtained from buyer bt is given by: β(pt, . . ., p_{t+τ̂}; bt) = min{pt, . . ., p_{t+τt}} if min{pt, . . ., p_{t+τt}} ≤ vt, and 0 otherwise. As bt has patience τt, we will sometimes omit the irrelevant prices and write β(pt, . . ., p_{t+τt}; bt) = β(pt, . . ., p_{t+τ̂}; bt). As described, a buyer need not buy the item on her day of appearance and may choose to wait. If the buyer chooses to wait, we observe the feedback from her decision only on the day of purchase. We therefore need to distinguish between the revenue from buyer t and the revenue at time t. Given a sequence of prices p1, . . ., p_{t+τ̂} and a sequence of buyers b1, . . ., bt, we define the revenue at time t to be the sum of all revenues from buyers that preferred to buy at time t. Formally, let It denote the set of all buyers that buy at time t, i.e., It = {bi : t = argmin{i ≤ t′ ≤ i + τi : p_{t′} = β(pi, . . ., p_{i+τ̂}; bi)}}. Then the revenue obtained at time t is given by: Rt(p_{t−τ̂}, . . ., p_{t+τ̂}) = R(p1, . . ., p_{t+τ̂}; b1:t) := Σ_{i∈It} β(pi, . . ., p_{i+τ̂}; bi), where we use the notation b1:T as shorthand for the sequence b1, . . ., bT. The regret of the (possibly randomized) seller A is the difference between the revenue obtained by the best fixed price in hindsight and the expected revenue obtained by the seller A, given a sequence of buyers: Regret_T(A; b1:T) = max_{p*∈P} Σ_{t=1}^{T} R(p*, . . ., p*; b1:t) − E[ Σ_{t=1}^{T} R(p1, . . ., p_{t+τ̂}; b1:t) ]. We further denote by Regret_T(A) the expected regret a seller A incurs for the worst-case sequence, i.e., Regret_T(A) = max_{b1:T} Regret_T(A; b1:T). 2.1 Main Results Our main results are optimal regret rates in the strategic-buyers setting. Theorem 1.
The T-round expected regret of Algorithm 1 for any sequence of buyers b1, . . ., bT with patience at most τ̂ ≥ 1 is upper bounded as Regret_T ≤ 10 (τ̂ n log n)^{1/3} T^{2/3}. Theorem 2. For any τ̂ ≥ 1, n ≥ 2 and for any pricing algorithm, there exists a sequence of buyers b1, . . ., bT with patience at most τ̂ such that Regret_T = Ω(τ̂^{1/3} T^{2/3}). 3 Algorithm In this section we describe and analyze our online pricing algorithm. It is worth starting by highlighting why simply running an "off the shelf" multi-armed bandit algorithm such as EXP3 would fail. Consider a fixed distribution over the actions and assume the buyer has a window size of two. Unlike the standard multi-armed bandit setting, where we get the expected revenue from the price we select, now the buyer would select the lower of the two prices, which would clearly hurt our revenue (there is a slight gain from the increased probability of a sale, but it does not suffice to offset the loss). For this reason, the seller would intuitively like to minimize the number of times she changes prices (more precisely, lowers the prices). Our online pricing algorithm, given in Algorithm 1, is based on the EXP3 algorithm of Auer et al. [3], which we use as a black box. The algorithm divides the time horizon into roughly T^{2/3} epochs, and within each epoch the seller repeatedly announces the same price, chosen by the EXP3 black box at the beginning of the epoch. At the end of the epoch, EXP3 is updated with the overall average performance of the chosen price during the epoch (ignoring the time steps which might be influenced by different prices). Hence, our algorithm changes the posted price only O(T^{2/3}) times, thereby keeping under control the costs associated with price fluctuations due to the patience of the buyers. Algorithm 1: Online posted pricing algorithm. Parameters: horizon T, number of prices n, and maximal patience τ̂; Let B = ⌊τ̂^{2/3} (n log n)^{−1/3} T^{1/3}⌋ and T′ = ⌊T/B⌋; Initialize A ← EXP3(T′, n); for j = 0, . .
., T′ − 1 do: Sample i ∼ A and let p′_j = i/n; for t = Bj + 1, . . ., B(j + 1) do: Announce price p_{t+τ̂} = p′_j (on j = 0, t = 1 announce p1, . . ., p_{τ̂+1} = p′_0); Receive and observe the total revenue Rt(p_{t−τ̂}, . . ., p_{t+τ̂}); Update A with feedback (1/B) Σ_{t=Bj+2τ̂+1}^{B(j+1)} Rt(p_{t−τ̂}, . . ., p_{t+τ̂}); for t = BT′ + 1, . . ., T do: Announce price p_{t+τ̂} = p′_{T′−1}. We now analyze Algorithm 1 and prove Theorem 1. The proof follows standard arguments in adversarial online learning (e.g., Arora et al. [2]); we note, however, that to obtain the optimal dependence on the maximal patience τ̂ one cannot apply existing results directly and has to analyse the effect of accumulating revenues over epochs more carefully, as we do in the proof below. This is mainly because in our model the revenue at time t is not bounded by 1 but by τ̂, hence readily available results would add a factor of τ̂ to the regret. Proof of Theorem 1. For all 0 ≤ j < T′ and for all prices p ∈ P, define R′_j(p) = (1/B) Σ_{t=Bj+2τ̂+1}^{B(j+1)} Rt(p, . . ., p). (Here, the argument p is repeated 2τ̂ + 1 times.) Observe that 0 ≤ R′_j(p) ≤ 1 for all j and p, as the maximal total revenue between rounds Bj + 2τ̂ + 1 and B(j + 1) is at most B; indeed, there are at most B buyers who might make a purchase during that time, and each purchase yields revenue of at most 1. By a similar reasoning, we also have Σ_{t=Bj+1}^{Bj+2τ̂} Rt(p, . . ., p) ≤ 4τ̂ (1) for all j and p. Now, notice that pt = p′_j for all Bj + τ̂ + 1 ≤ t ≤ B(j + 1) + τ̂, hence the feedback fed to A after epoch j is (1/B) Σ_{t=Bj+2τ̂+1}^{B(j+1)} Rt(p_{t−τ̂}, . . ., p_{t+τ̂}) = (1/B) Σ_{t=Bj+2τ̂+1}^{B(j+1)} Rt(p′_j, . . ., p′_j) = R′_j(p′_j). That is, Algorithm 1 is essentially running EXP3 on the reward functions R′_j. By the regret bound of EXP3, we know that Σ_{j=0}^{T′−1} R′_j(p*) − E[ Σ_{j=0}^{T′−1} R′_j(p′_j) ] ≤ 3 √(T′ n log n) for any fixed p* ∈ P, which implies Σ_{j=0}^{T′−1} Σ_{t=Bj+2τ̂+1}^{B(j+1)} Rt(p*, . . ., p*) − E[ Σ_{j=0}^{T′−1} Σ_{t=Bj+2τ̂+1}^{B(j+1)} Rt(p_{t−τ̂}, . . ., p_{t+τ̂}) ] ≤ 3 √(BT n log n).
(2) In addition, due to Eq. (1) and the non-negativity of the revenues, we also have Σ_{j=0}^{T′−1} Σ_{t=Bj+1}^{Bj+2τ̂} Rt(p*, . . ., p*) − E[ Σ_{j=0}^{T′−1} Σ_{t=Bj+1}^{Bj+2τ̂} Rt(p_{t−τ̂}, . . ., p_{t+τ̂}) ] ≤ 4τ̂ T′ ≤ 4τ̂ T / B. (3) Summing Eqs. (2) and (3), and taking into account rounds BT′ + 1, . . ., T during which the total revenue is at most B + 2τ̂, we obtain the regret bound Σ_{t=1}^{T} Rt(p*, . . ., p*) − E[ Σ_{t=1}^{T} Rt(p_{t−τ̂}, . . ., p_{t+τ̂}) ] ≤ 3 √(BT n log n) + 4τ̂ T / B + B + 2τ̂. Finally, for B = ⌊τ̂^{2/3} (n log n)^{−1/3} T^{1/3}⌋, the theorem follows (assuming that τ̂ < T). □ 4 Lower Bound We next briefly overview the lower bound and the proof's main technique. A full proof is given in the supplementary material; for simplicity of exposition, here we assume τ̂ = 1 and n = 2. Our proof relies on two steps. The first step is a reduction from pricing with patience τ̂ = 0 but with switching costs. The second step is to lower bound the regret of pricing with switching costs. This we do again by reduction from the Multi-Armed Bandit (MAB) problem with switching costs. We begin by briefly overviewing these terms and definitions. We recall the standard setting of MAB with two actions and switching cost c. A sequence of losses ℓ1, . . ., ℓT is produced, where each loss is a function ℓt : {1, 2} → {0, 1}. At each round a player chooses an action it ∈ {1, 2} and receives as feedback ℓt(it). The switching-cost regret of player A is given by Sc-Regret_T(A; ℓ1:T) = E[ Σ_{t=1}^{T} ℓt(it) − min_{i*} Σ_{t=1}^{T} ℓt(i*) ] + c · E[ |{t : it ≠ it−1}| ]. We define analogously the switching-cost regret for non-strategic buyers. Namely, given a sequence of buyers b1, . . ., bT, all with patience τ̂ = 0, the switching-cost regret for a seller is given by: Sc-Regret_T(A; b1:T) = E[ max_{p*} Σ_{t=1}^{T} R(p*; bt) − Σ_{t=1}^{T} R(pt; bt) ] + c · E[ |{t : pt ≠ pt−1}| ]. 4.1 Reduction from Switching Cost Regret As stated above, our first step is to show a reduction from switching-cost regret for non-strategic buyers.
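A minimal sketch of how the switching-cost regret above could be computed for patience-0 buyers; the revenue table and function name are illustrative, not from the paper:

```python
def switching_cost_regret(prices, revenues, c):
    """Sc-Regret for patience-0 buyers: the revenue gap to the best
    fixed price in hindsight, plus a cost c per price change.
    revenues[t][p] stands in for R(p; b_t), the (counterfactual)
    revenue price p would earn from buyer b_t."""
    T = len(prices)
    price_set = revenues[0].keys()
    best_fixed = max(sum(revenues[t][p] for t in range(T)) for p in price_set)
    realized = sum(revenues[t][prices[t]] for t in range(T))
    switches = sum(1 for t in range(1, T) if prices[t] != prices[t - 1])
    return best_fixed - realized + c * switches

# Three identical buyers with value 1: always posting price 1 is optimal,
# so one round at price 1/2 plus one switch is penalized.
table = [{0.5: 0.5, 1.0: 1.0}] * 3
r = switching_cost_regret([0.5, 1.0, 1.0], table, c=1 / 12)
assert abs(r - (0.5 + 1 / 12)) < 1e-12
```

The constant c = 1/12 mirrors the S_{1/12}-Regret that appears in the reduction of Theorem 3 below.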
This we do in Theorem 3: Theorem 3. For every (possibly randomized) seller A for strategic buyers with patience at most τ̂ = 1, there exists a randomized seller A′ for non-strategic buyers with patience τ̂ = 0 such that: (1/2) · S_{1/12}-Regret_T(A′) ≤ Regret_T(A). The proof idea is to construct from every sequence of non-strategic buyers b1, . . ., bT a sequence of strategic buyers b̄1, . . ., b̄T such that the regret incurred to A by b̄1:T is at least the switching-cost regret incurred to A′ by b1:T. The idea behind the construction is as follows: at each iteration t we choose with probability half to present to the seller bt, and with probability half we present a noise buyer zt with the following statistics: zt = (v = 1/2, τ = 0) with probability 1/2, and zt = (v = 1, τ = 1) with probability 1/2. (4) That is, zt is with probability 1/2 a buyer with value v = 1/2 and patience τ = 0, and with probability 1/2 a buyer with value v = 1 and patience τ = 1. Observe that if zt always had patience τ = 0 (i.e., even when her value is v = 1), then for any sequence of prices the expected revenue from the zt buyer would always be one half, independent of the prices. In other words, the sequence of noise buyers does not change the performance of the sequence of prices and cannot be exploited for improvement. On the other hand, note that since the value 1 corresponds to patience 1, the seller might lose half whenever she reduces the price from 1 to 1/2. A crucial point is that the seller must post her prices in advance; therefore she cannot in any way predict whether the buyer is willing to wait or not and manipulate prices accordingly. A proof of the following lemma is provided in the supplementary material. Lemma 4. Consider the pricing problem with τ̂ = 1 and n = 2. Let b1, . . ., bT be a sequence of buyers with patience 0. Let z1, . . ., zT be a sequence of stochastic buyers as in Eq. (4). Define b̄t to be a stochastic buyer that is with probability half bt and with probability half zt.
Then, for any seller A, the expected regret A incurs from the sequence b̄1:T is at least E[Regret_T(A; b̄1:T)] ≥ (1/2) E[ max_{p*∈P} Σ_{t=1}^{T} (β(p*; bt) − β(pt; bt)) ] + (1/8) E[ |{t : pt > pt+1}| ], (5) where the expectations are taken with respect to the internal randomization of the seller A and the random bits used to generate the sequence b̄1:T. 4.1.1 Proof of Theorem 3 To construct algorithm A′ from A, we develop a meta-algorithm Ā, depicted in Algorithm 2, that receives an algorithm, or seller, as input. A′ is then the seller obtained by fixing A as the input to Ā. In our reduction we assume that at each iteration algorithm Ā can ask A for one posted price pt, and in turn return a feedback rt to algorithm A, after which a new iteration begins. The idea of the construction is as follows: as an initialization step, algorithm A′ produces a stochastic sequence of buyers of type z1, . . ., zT; the algorithm then chooses a priori whether at step t the buyer b̄t is going to be the buyer bt that she observes or zt (with probability half each). The sequence b̄t is distributed as depicted in Lemma 4. Note that we do not assume that the learner knows the value of bt. At each iteration t, algorithm A′ receives price pt from algorithm A and posts price pt. She then receives as feedback β(pt; bt). Given the revenues β(p1; b1), . . ., β(pt; bt) and her own internal random variables, the algorithm can calculate the revenue for algorithm A with respect to the sequence of buyers b̄1, . . ., b̄t, namely rt = R(p_{t−1}, . . ., p_{t+1}; b̄1:t). In turn, at time t algorithm A′ returns to algorithm A her revenue, or feedback, with respect to b̄1, . . ., b̄T at time t, which is rt. Since algorithm A receives as feedback at time t the value R(p_{t−1}, pt, p_{t+1}; b̄1:t), we obtain that for the sequence of posted prices p1, . . ., pT: Regret_T(A; b̄1:T) = Σ_{t=1}^{T} β(p*, p*; b̄t) − Σ_{t=1}^{T} β(pt, p_{t+1}; b̄t).
Taking expectation, using Lemma 4, and noting that the number of times pt+1 > pt is at least 1/3 of the times pt ≠ pt+1 (since there are only 2 prices), we have that (1/2) S_{1/12}-Regret_T(A′; b1:T) ≤ E_{b̄1:T}[Regret_T(A; b̄1:T)] ≤ Regret_T(A). Since this is true for any sequence b1:T, we obtain the desired result. Algorithm 2: Reduction from pricing with switching costs to pricing with strategic buyers. Input: T, A (an algorithm with bounded regret for strategic buyers); Output: p1, . . ., pT; Set r1 = . . . = rT = 0; Draw i.i.d. z1, . . ., zT (see Eq. (4)); Draw i.i.d. e1, . . ., eT ∈ {0, 1} according to a Bernoulli distribution; for t = 1, . . ., T do: Receive from A a posted price p_{t+1} (at the first round receive two prices p1, p2); post price pt and receive as feedback β(pt; bt); if et = 0 then set rt = rt + β(pt; bt) (here b̄t = bt); else if (pt ≤ p_{t+1}) or (zt has patience 0) then set rt = rt + β(pt; zt); else set r_{t+1} = r_{t+1} + β(pt, p_{t+1}; zt); Return rt as feedback to A. 4.2 From MAB with Switching Costs to Pricing with Switching Costs The section above concluded that pricing with switching costs may be reduced to pricing with strategic buyers. Therefore, our next step is to show that we can produce a sequence of non-strategic buyers with high switching-cost regret. Our proof relies on a further reduction from MAB with switching costs. Theorem 5 (Dekel et al. [9]). Consider the MAB setting with 2 actions. For any randomized player, there exists a sequence of loss functions ℓ1, . . ., ℓT with ℓt : {1, 2} → {0, 1} such that Sc-Regret_T(A; ℓ1:T) ∈ Ω(T^{2/3}), for every c > 0. Here we prove an analogous statement for the pricing setting: Theorem 6. Consider the pricing problem for buyers with patience τ̂ = 0 and n = 2. For any randomized seller, there exists a sequence of buyers b1, . . ., bT such that Sc-Regret_T(A; b1:T) ∈ Ω(T^{2/3}), for every c > 0. The transition from MAB with switching costs to pricing with switching costs is a non-trivial task.
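The key property of the noise buyers z_t from Eq. (4) above can be checked numerically (a sketch with illustrative names, not the authors' code): if z_t always had patience 0, both prices in {1/2, 1} would earn expected revenue exactly 1/2, so the noise reveals nothing about which price is better.

```python
import random

def sample_z():
    """Noise buyer z_t from Eq. (4): value 1/2 with patience 0, or
    value 1 with patience 1, each with probability one half."""
    if random.random() < 0.5:
        return (0.5, 0)  # (value, patience)
    return (1.0, 1)

random.seed(1)
N = 200_000
for price in (0.5, 1.0):
    # Force patience 0: the buyer purchases immediately iff price <= value.
    rev = sum(price if price <= sample_z()[0] else 0.0
              for _ in range(N)) / N
    assert abs(rev - 0.5) < 0.01  # expected revenue is 1/2 at both prices
```

The loss the seller does incur comes only from the patience-1 realizations of z_t when the price is lowered, which is exactly the switching-cost term in the reduction.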
To do so, we have to relate actions to prices and values to loss vectors in a manner that relates the revenue regret to the loss regret. The main challenge, perhaps, is that the structure of the feedback is inherently different in the two problems. In two-armed bandit problems all loss configurations are feasible. In contrast, in the pricing case certain feedbacks collapse to full information: for example, if we sell at price 1 we know the feedback from price 1/2, and if we fail to sell at price 1/2 we obtain full feedback for price 1. Our reduction proceeds roughly along the following lines. We begin by constructing stochastic mappings ν_t : {0, 1}² → {0, 1/2, 1} that turn loss vectors into values. This in turn defines a mapping from sequences of losses ℓt to stochastic sequences of buyers bt. In our reduction we assume we are given an algorithm A that solves the pricing problem; that is, at each iteration we may ask for a price and in turn we return a feedback β(pt; bt). Note that we cannot assume that we have access to, or know, the buyer bt defined by ν_t(ℓt). The buyer bt depends on the full loss vector ℓt: assuming that we can see the full ℓt would not lead to a meaningful reduction for MAB. However, our construction of ν_t is such that each posted price is associated with a single action. This means that for each posted price there is a single action we need to observe in order to calculate the correct feedback, or revenue. This also means that we switch actions only when algorithm A switches prices. Finally, our sequence of transformations has the following property: if i is the action needed in order to discover the revenue for price p, then E[ℓt(i)] = 1/2 − (1/4) E[β(p; bt)]. Thus, the regret of our actions compares to the regret of the seller. 5 Discussion In this work we introduced a new model of strategic buyers, where buyers have a window of time in which they would like to purchase the item.
Our modeling circumvents complicated dynamics between the buyers, since it forces the seller to post prices for the entire window of time in advance. We consider an adversarial setting, where both buyer valuations and window sizes are selected adversarially. We compare our online algorithm to a static fixed price, which is by definition oblivious to the window sizes. We show that the regret is sub-linear, and more precisely Θ(T^{2/3}). The upper bound shows that in this model the average regret per buyer is still vanishing. The lower bound shows that having a window size greater than 1 impacts the regret bounds dramatically. Even for window sizes 1 or 2 and prices 1/2 or 1 we get a regret of Ω(T^{2/3}), compared to a regret of O(T^{1/2}) when all the windows are of size 1. Given the sharp Θ(T^{2/3}) bound, it might be worth revisiting our feedback model. Our model assumes that the feedback for the seller is the revenue obtained at the end of each day. It is worthwhile to consider stronger feedback models, where the seller can gain more information about the buyers, namely their day of arrival and their window size. In terms of the upper bound, our result applies to any feedback model that is stronger, i.e., as long as the seller gets to observe the revenue per day, the O(T^{2/3}) bound holds. As far as the lower bound is concerned, one can observe that our proofs and constructions are valid even for very strong feedback models. Namely, even if the seller gets as feedback the revenue from buyer t at time t (instead of at the time of purchase), and in fact even if she gets to observe the patience of the buyers (i.e., full information with respect to patience), the Ω(T^{2/3}) bound holds, as long as the seller posts prices in advance. We did not consider continuous pricing explicitly, but one can verify that applying our algorithm to a setting of continuous prices gives a regret bound of O(T^{3/4}), by discretizing the continuous prices into T^{1/4} prices.
On the positive side, it shows that we still obtain a vanishing average regret in the continuous case. On the other hand, we were not able to improve our lower bound to match this upper bound. This gap is one of the interesting open problems in our work. References [1] K. Amin, A. Rostamizadeh, and U. Syed. Learning prices for repeated auctions with strategic buyers. In Advances in Neural Information Processing Systems 26, pages 1169–1177, 2013. [2] R. Arora, O. Dekel, and A. Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. arXiv preprint arXiv:1206.6400, 2012. [3] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002. [4] M.-F. Balcan, A. Blum, J. D. Hartline, and Y. Mansour. Reducing mechanism design to algorithm design via machine learning. J. Comput. Syst. Sci., 74(8):1245–1270, 2008. [5] S. Chawla, J. D. Hartline, and R. D. Kleinberg. Algorithmic pricing via virtual valuations. In ACM Conference on Electronic Commerce, pages 243–251, 2007. [6] S. Chawla, J. D. Hartline, D. L. Malec, and B. Sivan. Multi-parameter mechanism design and sequential posted pricing. In STOC, pages 311–320, 2010. [7] S. Chawla, D. L. Malec, and B. Sivan. The power of randomness in Bayesian optimal mechanism design. In the 11th ACM Conference on Electronic Commerce (EC), 2010. [8] R. Cole and T. Roughgarden. The sample complexity of revenue maximization. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 243–252. ACM, 2014. [9] O. Dekel, J. Ding, T. Koren, and Y. Peres. Bandits with switching costs: T^{2/3} regret. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 459–467. ACM, 2014. [10] Z. Huang, Y. Mansour, and T. Roughgarden. Making the most of your samples.
In Proceedings of the Sixteenth ACM Conference on Economics and Computation, EC, pages 45–60, 2015. [11] R. D. Kleinberg and F. T. Leighton. The value of knowing a demand curve: Bounds on regret for online posted-price auctions. In 44th Symposium on Foundations of Computer Science (FOCS), pages 594–605, 2003. [12] M. Mohri and A. Munoz. Optimal regret minimization in posted-price auctions with strategic buyers. In Advances in Neural Information Processing Systems 27, pages 1871–1879, 2014. [13] M. Mohri and A. Munoz. Revenue optimization against strategic buyers. In Advances in Neural Information Processing Systems 28, pages 2530–2538, 2015.
Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling Jiajun Wu* Chengkai Zhang* Tianfan Xue MIT CSAIL MIT CSAIL MIT CSAIL William T. Freeman Joshua B. Tenenbaum MIT CSAIL, Google Research MIT CSAIL Abstract We study the problem of 3D object generation. We propose a novel framework, namely 3D Generative Adversarial Network (3D-GAN), which generates 3D objects from a probabilistic space by leveraging recent advances in volumetric convolutional networks and generative adversarial nets. The benefits of our model are three-fold: first, the use of an adversarial criterion, instead of traditional heuristic criteria, enables the generator to capture object structure implicitly and to synthesize high-quality 3D objects; second, the generator establishes a mapping from a low-dimensional probabilistic space to the space of 3D objects, so that we can sample objects without a reference image or CAD models, and explore the 3D object manifold; third, the adversarial discriminator provides a powerful 3D shape descriptor which, learned without supervision, has wide applications in 3D object recognition. Experiments demonstrate that our method generates high-quality 3D objects, and our unsupervisedly learned features achieve impressive performance on 3D object recognition, comparable with those of supervised learning methods. 1 Introduction What makes a 3D generative model of object shapes appealing? We believe a good generative model should be able to synthesize 3D objects that are both highly varied and realistic. Specifically, for 3D objects to have variations, a generative model should be able to go beyond memorizing and recombining parts or pieces from a pre-defined repository to produce novel shapes; and for objects to be realistic, there need to be fine details in the generated examples. 
In the past decades, researchers have made impressive progress on 3D object modeling and synthesis [Van Kaick et al., 2011, Tangelder and Veltkamp, 2008, Carlson, 1982], mostly based on meshes or skeletons. Many of these traditional methods synthesize new objects by borrowing parts from objects in existing CAD model libraries. Therefore, the synthesized objects look realistic, but not conceptually novel. Recently, with the advances in deep representation learning and the introduction of large 3D CAD datasets like ShapeNet [Chang et al., 2015, Wu et al., 2015], there have been some inspiring attempts in learning deep object representations based on voxelized objects [Girdhar et al., 2016, Su et al., 2015a, Qi et al., 2016]. Different from part-based methods, many of these generative approaches do not explicitly model the concept of parts or retrieve them from an object repository; instead, they synthesize new objects based on learned object representations. This is a challenging problem because, compared to the space of 2D images, it is more difficult to model the space of 3D shapes due to its higher dimensionality. Their current results are encouraging, but often there still exist artifacts (e.g., fragments or holes) in the generated objects. In this paper, we demonstrate that modeling volumetric objects in a generative-adversarial manner could be a promising solution to generate objects that are both novel and realistic. Our approach combines the merits of both generative-adversarial modeling [Goodfellow et al., 2014, Radford et al., 2016] and volumetric convolutional networks [Maturana and Scherer, 2015, Wu et al., 2015]. Different from traditional heuristic criteria, generative-adversarial modeling introduces an adversarial discriminator to classify whether an object is synthesized or real. (* indicates equal contributions. Emails: {jiajunwu, ckzhang, tfxue, billf, jbt}@mit.edu. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.)
This could be a particularly favorable framework for 3D object modeling: as 3D objects are highly structured, a generative-adversarial criterion, but not a voxel-wise independent heuristic one, has the potential to capture the structural difference of two 3D objects. The use of a generative-adversarial loss may also avoid possible criterion-dependent overfitting (e.g., generating mean-shape-like blurred objects when minimizing a mean squared error). Modeling 3D objects in a generative-adversarial way offers additional distinctive advantages. First, it becomes possible to sample novel 3D objects from a probabilistic latent space such as a Gaussian or uniform distribution. Second, the discriminator in the generative-adversarial approach carries informative features for 3D object recognition, as demonstrated in experiments (Section 4). From a different perspective, instead of learning a single feature representation for both generating and recognizing objects [Girdhar et al., 2016, Sharma et al., 2016], our framework learns disentangled generative and discriminative representations for 3D objects without supervision, and applies them on generation and recognition tasks, respectively. We show that our generative representation can be used to synthesize high-quality realistic objects, and our discriminative representation can be used for 3D object recognition, achieving comparable performance with recent supervised methods [Maturana and Scherer, 2015, Shi et al., 2015], and outperforming other unsupervised methods by a large margin. The learned generative and discriminative representations also have wide applications. For example, we show that our network can be combined with a variational autoencoder [Kingma and Welling, 2014, Larsen et al., 2016] to directly reconstruct a 3D object from a 2D input image. 
Further, we explore the space of object representations and demonstrate that both our generative and discriminative representations carry rich semantic information about 3D objects. 2 Related Work Modeling and synthesizing 3D shapes 3D object understanding and generation is an important problem in the graphics and vision community, and the relevant literature is very rich [Carlson, 1982, Tangelder and Veltkamp, 2008, Van Kaick et al., 2011, Blanz and Vetter, 1999, Kalogerakis et al., 2012, Chaudhuri et al., 2011, Xue et al., 2012, Kar et al., 2015, Bansal et al., 2016, Wu et al., 2016]. Since decades ago, AI and vision researchers have made inspiring attempts to design or learn 3D object representations, mostly based on meshes and skeletons. Many of these shape synthesis algorithms are nonparametric, and they synthesize new objects by retrieving and combining shapes and parts from a database. Recently, Huang et al. [2015] explored generating 3D shapes with pre-trained templates, producing both object structure and surface geometry. Our framework synthesizes objects without explicitly borrowing parts from a repository, and requires no supervision during training. Deep learning for 3D data The vision community has witnessed rapid development of deep networks for various tasks. In the field of 3D object recognition, Li et al. [2015], Su et al. [2015b], Girdhar et al. [2016] proposed to learn a joint embedding of 3D shapes and synthesized images, Su et al. [2015a], Qi et al. [2016] focused on learning discriminative representations for 3D object recognition, Wu et al. [2016], Xiang et al. [2015], Choy et al. [2016] discussed 3D object reconstruction from in-the-wild images, possibly with a recurrent network, and Girdhar et al. [2016], Sharma et al. [2016] explored autoencoder-based networks for learning voxel-based object representations. Wu et al. [2015], Rezende et al. [2016], Yan et al.
[2016] attempted to generate 3D objects with deep networks, some using 2D images during training with a 3D to 2D projection layer. Many of these networks can be used for 3D shape classification [Su et al., 2015a, Sharma et al., 2016, Maturana and Scherer, 2015], 3D shape retrieval [Shi et al., 2015, Su et al., 2015a], and single image 3D reconstruction [Kar et al., 2015, Bansal et al., 2016, Girdhar et al., 2016], mostly with full supervision. In comparison, our framework requires no supervision for training, is able to generate objects from a probabilistic space, and comes with a rich discriminative 3D shape representation. Learning with an adversarial net Generative Adversarial Nets (GAN) [Goodfellow et al., 2014] proposed to incorporate an adversarial discriminator into the procedure of generative modeling. More recently, LAPGAN [Denton et al., 2015] and DC-GAN [Radford et al., 2016] adopted GAN with convolutional networks for image synthesis, and achieved impressive performance. Researchers have also explored the use of GAN for other vision problems. To name a few, Wang and Gupta [2016] discussed how to model image style and structure with sequential GANs, Li and Wand [2016] and Zhu et al. [2016] used GAN for texture synthesis and image editing, respectively, and Im et al. [2016] developed a recurrent adversarial network for image generation. While previous approaches focus on modeling 2D images, we discuss the use of an adversarial component in modeling 3D objects. [Figure 1: The generator in 3D-GAN maps a latent vector z through volumetric layers of sizes 512×4×4×4, 256×8×8×8, 128×16×16×16, and 64×32×32×32 to an object G(z) in a 64×64×64 voxel space. The discriminator mostly mirrors the generator.] 3 Models In this section we introduce our model for 3D object generation. We first discuss how we build our framework, 3D Generative Adversarial Network (3D-GAN), by leveraging previous advances on volumetric convolutional networks and generative adversarial nets.
We then show how to train a variational autoencoder [Kingma and Welling, 2014] simultaneously so that our framework can capture a mapping from a 2D image to a 3D object. 3.1 3D Generative Adversarial Network (3D-GAN) As proposed in Goodfellow et al. [2014], the Generative Adversarial Network (GAN) consists of a generator and a discriminator, where the discriminator tries to classify real objects and objects synthesized by the generator, and the generator attempts to confuse the discriminator. In our 3D Generative Adversarial Network (3D-GAN), the generator G maps a 200-dimensional latent vector z, randomly sampled from a probabilistic latent space, to a 64 × 64 × 64 cube, representing an object G(z) in 3D voxel space. The discriminator D outputs a confidence value D(x) of whether a 3D object input x is real or synthetic. Following Goodfellow et al. [2014], we use binary cross entropy as the classification loss, and present our overall adversarial loss function as L_3D-GAN = log D(x) + log(1 − D(G(z))), (1) where x is a real object in a 64 × 64 × 64 space, and z is a noise vector randomly sampled from a distribution p(z). In this work, each dimension of z is sampled i.i.d. from a uniform distribution over [0, 1]. Network structure Inspired by Radford et al. [2016], we design an all-convolutional neural network to generate 3D objects. As shown in Figure 1, the generator consists of five volumetric fully convolutional layers of kernel sizes 4 × 4 × 4 and strides 2, with batch normalization and ReLU layers added in between and a Sigmoid layer at the end. The discriminator basically mirrors the generator, except that it uses Leaky ReLU [Maas et al., 2013] instead of ReLU layers. There are no pooling or linear layers in our network. More details can be found in the supplementary material. Training details A straightforward training procedure is to update both the generator and the discriminator in every batch.
However, the discriminator usually learns much faster than the generator, possibly because generating objects in a 3D voxel space is more difficult than differentiating between real and synthetic objects [Goodfellow et al., 2014, Radford et al., 2016]. It then becomes hard for the generator to extract signals for improvement from a discriminator that is way ahead, as all examples it generates would be correctly identified as synthetic with high confidence. Therefore, to keep the training of both networks in pace, we employ an adaptive training strategy: for each batch, the discriminator only gets updated if its accuracy in the last batch is not higher than 80%. We observe this helps to stabilize the training and to produce better results. We set the learning rate of G to 0.0025 and that of D to 10^-5, and use a batch size of 100. We use ADAM [Kingma and Ba, 2015] for optimization, with β = 0.5.

3.2 3D-VAE-GAN

We have discussed how to generate 3D objects by sampling a latent vector z and mapping it to the object space. In practice, it would also be helpful to infer these latent vectors from observations. For example, if there exists a mapping from a 2D image to the latent representation, we can then recover the 3D object corresponding to that 2D image. Following this idea, we introduce 3D-VAE-GAN as an extension to 3D-GAN. We add an additional image encoder E, which takes a 2D image x as input and outputs the latent representation vector z. This is inspired by VAE-GAN proposed by Larsen et al. [2016], which combines VAE and GAN by sharing the decoder of VAE with the generator of GAN. The 3D-VAE-GAN therefore consists of three components: an image encoder E, a decoder (the generator G in 3D-GAN), and a discriminator D. The image encoder consists of five spatial convolution layers with kernel sizes {11, 5, 5, 5, 8} and strides {4, 2, 2, 2, 1}, respectively.
There are batch normalization and ReLU layers in between, and a sampler at the end to sample a 200-dimensional vector used by the 3D-GAN. The structures of the generator and the discriminator are the same as those in Section 3.1. Similar to VAE-GAN [Larsen et al., 2016], our loss function consists of three parts: an object reconstruction loss L_recon, a cross entropy loss L_{3D-GAN} for 3D-GAN, and a KL divergence loss L_KL to restrict the distribution of the output of the encoder. Formally, these loss functions are

L = L_{3D-GAN} + α₁ L_KL + α₂ L_recon,   (2)

where α₁ and α₂ are weights of the KL divergence loss and the reconstruction loss. We have

L_{3D-GAN} = log D(x) + log(1 − D(G(z))),   (3)
L_KL = D_KL(q(z|y) || p(z)),   (4)
L_recon = ||G(E(y)) − x||₂,   (5)

where x is a 3D shape from the training set, y is its corresponding 2D image, and q(z|y) is the variational distribution of the latent representation z. The KL-divergence pushes this variational distribution toward the prior distribution p(z), so that the generator can sample the latent representation z from the same distribution p(z). In this work, we choose p(z) to be a multivariate Gaussian distribution with zero mean and unit variance. For more details, please refer to Larsen et al. [2016]. Training 3D-VAE-GAN requires both 2D images and their corresponding 3D models. We render 3D shapes in front of background images (16,913 indoor images from the SUN database [Xiao et al., 2010]) in 72 views (from 24 angles and 3 elevations). We set α₁ = 5, α₂ = 10^-4, and use a similar training strategy as in Section 3.1. See our supplementary material for more details.

4 Evaluation

In this section, we evaluate our framework from various aspects. We first show qualitative results of generated 3D objects. We then evaluate the representation learned without supervision by the discriminator, by using it as features for 3D object classification. We show both qualitative and quantitative results on the popular benchmark ModelNet [Wu et al., 2015].
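The KL term in Eq. (4) has a simple closed form when, as is common for VAEs, q(z|y) is taken to be a diagonal Gaussian; that parameterization is an assumption on our part, since the text above only fixes p(z) = N(0, I). A hedged sketch:

```python
import math

def kl_to_standard_normal(mu, log_var):
    # D_KL( N(mu, diag(exp(log_var))) || N(0, I) )
    # = 0.5 * sum over dimensions d of (mu_d^2 + sigma_d^2 - log sigma_d^2 - 1).
    return 0.5 * sum(m * m + math.exp(lv) - lv - 1.0
                     for m, lv in zip(mu, log_var))

# The term vanishes exactly when the encoder's posterior already matches
# the prior, so minimizing it pulls q(z|y) toward N(0, I) as described.
kl_at_prior = kl_to_standard_normal([0.0] * 200, [0.0] * 200)
```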
Further, we evaluate our 3D-VAE-GAN on 3D object reconstruction from a single image, and show both qualitative and quantitative results on the IKEA dataset [Lim et al., 2013].

4.1 3D Object Generation

Figure 2 shows 3D objects generated by our 3D-GAN. For this experiment, we train one 3D-GAN for each object category. For generation, we sample 200-dimensional vectors following an i.i.d. uniform distribution over [0, 1], and render the largest connected component of each generated object. We compare 3D-GAN with Wu et al. [2015], the state-of-the-art in 3D object synthesis from a probabilistic space, and with a volumetric autoencoder, whose variants have been employed by multiple recent methods [Girdhar et al., 2016, Sharma et al., 2016]. Because an autoencoder does not restrict the distribution of its latent representation, we compute the empirical distribution p₀(z) of the latent vector z of all training examples, fit a Gaussian distribution g₀ to p₀, and sample from g₀. Our algorithm produces 3D objects with much higher quality and more fine-grained details. Compared with previous works, our 3D-GAN can synthesize high-resolution 3D objects with detailed geometries. Figure 3 shows both high-res voxels and down-sampled low-res voxels for comparison. Note that it is relatively easy to synthesize a low-res object, but much harder to obtain a high-res one due to the rapid growth of 3D space. However, object details are only revealed in high resolution. A natural concern to our generative model is whether it is simply memorizing objects from training data. To demonstrate that the network can generalize beyond the training set, we compare synthesized objects with their nearest neighbor in the training set. Since objects retrieved based on ℓ2 distance in the voxel space are visually very different from the queries, we use the output of the last convolutional layer in our discriminator (with a 2× pooling) as features for retrieval instead. Figure 2 shows that generated objects are similar, but not identical, to the nearest examples in the training set.

Figure 2: Objects generated by 3D-GAN from vectors, without a reference image/object. Panels show our results (64 × 64 × 64) with nearest neighbors (NN) for guns, chairs, cars, sofas, and tables; objects generated by Wu et al. [2015] (30 × 30 × 30) for tables and cars; and objects generated by a volumetric autoencoder (64 × 64 × 64) for chairs, tables, and sofas. We show, for the last two objects in each row, the nearest neighbor retrieved from the training set. We see that the generated objects are similar, but not identical, to examples in the training set. For comparison, we show objects generated by the previous state-of-the-art [Wu et al., 2015] (results supplied by the authors). We also show objects generated by autoencoders trained on a single object category, with latent vectors sampled from the empirical distribution. See text for details.

Figure 3: We present each object at high resolution (64 × 64 × 64) on the left and at low resolution (down-sampled to 16 × 16 × 16) on the right. While humans can perceive object structure at a relatively low resolution, fine details and variations only appear in high-res objects.

4.2 3D Object Classification

We then evaluate the representations learned by our discriminator. A typical way of evaluating representations learned without supervision is to use them as features for classification. To obtain features for an input 3D object, we concatenate the responses of the second, third, and fourth convolution layers in the discriminator, and apply max pooling of kernel sizes {8, 4, 2}, respectively. We use a linear SVM for classification.

Data We train a single 3D-GAN on the seven major object categories (chairs, sofas, tables, boats, airplanes, rifles, and cars) of ShapeNet [Chang et al., 2015]. We use ModelNet [Wu et al., 2015] for testing, following Sharma et al. [2016], Maturana and Scherer [2015], Qi et al.
[2016].∗ Specifically, we evaluate our model on both ModelNet10 and ModelNet40, two subsets of ModelNet that are often used as benchmarks for 3D object classification. Note that the training and test categories are not identical, which also shows the out-of-category generalization power of our 3D-GAN.

∗For ModelNet, there are two train/test splits typically used. Qi et al. [2016], Shi et al. [2015], Maturana and Scherer [2015] used the train/test split included in the dataset, which we also follow; Wu et al. [2015], Su

Table 1: Classification results on the ModelNet dataset. Our 3D-GAN outperforms other unsupervised learning methods by a large margin, and is comparable to some recent supervised learning frameworks.

Supervision      Pretraining  Method                               ModelNet40  ModelNet10
Category labels  ImageNet     MVCNN [Su et al., 2015a]             90.1%       -
Category labels  ImageNet     MVCNN-MultiRes [Qi et al., 2016]     91.4%       -
Category labels  None         3D ShapeNets [Wu et al., 2015]       77.3%       83.5%
Category labels  None         DeepPano [Shi et al., 2015]          77.6%       85.5%
Category labels  None         VoxNet [Maturana and Scherer, 2015]  83.0%       92.0%
Category labels  None         ORION [Sedaghat et al., 2016]        -           93.8%
Unsupervised     -            SPH [Kazhdan et al., 2003]           68.2%       79.8%
Unsupervised     -            LFD [Chen et al., 2003]              75.5%       79.9%
Unsupervised     -            T-L Network [Girdhar et al., 2016]   74.4%       -
Unsupervised     -            VConv-DAE [Sharma et al., 2016]      75.5%       80.5%
Unsupervised     -            3D-GAN (ours)                        83.3%       91.0%

Figure 4: ModelNet40 classification with limited training data (accuracy for 10 to the full number of training objects per class, for 3D-GAN, VoxNet, and VConv-DAE).

Figure 5: The effects of individual dimensions of the object vector.

Figure 6: Intra/inter-class interpolation between object vectors.

Results We compare with the state-of-the-art methods [Wu et al., 2015, Girdhar et al., 2016, Sharma et al., 2016, Sedaghat et al., 2016] and show per-class accuracy in Table 1. Our representation outperforms other features learned without supervision by a large margin (83.3% vs. 75.5% on ModelNet40, and 91.0% vs. 80.5% on ModelNet10) [Girdhar et al., 2016, Sharma et al., 2016].
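The feature construction in Section 4.2 (responses of the second, third, and fourth convolution layers, max-pooled with kernels {8, 4, 2}) fixes the dimensionality of the classification feature. A small sanity check, assuming the discriminator mirrors the generator shapes in Figure 1 (the layer shapes here are our assumption):

```python
def pooled_feature_dim(responses, pool_kernels):
    # Max-pool each response volume with a non-overlapping cubic kernel,
    # then concatenate everything into one flat feature vector.
    dim = 0
    for (channels, side), k in zip(responses, pool_kernels):
        pooled_side = side // k
        dim += channels * pooled_side ** 3
    return dim

# Assumed shapes: conv2 -> 128 x 16^3, conv3 -> 256 x 8^3, conv4 -> 512 x 4^3,
# pooled with kernels 8, 4, 2 respectively; each becomes channels x 2^3.
feature_dim = pooled_feature_dim([(128, 16), (256, 8), (512, 4)], [8, 4, 2])
```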
Further, our classification accuracy is also higher than some recent supervised methods [Shi et al., 2015], and is close to the state-of-the-art voxel-based supervised learning approaches [Maturana and Scherer, 2015, Sedaghat et al., 2016]. Multi-view CNNs [Su et al., 2015a, Qi et al., 2016] outperform us, though their methods are designed for classification, and require rendered multi-view images and an ImageNet-pretrained model. 3D-GAN also works well with limited training data. As shown in Figure 4, with roughly 25 training samples per class, 3D-GAN achieves comparable performance on ModelNet40 with other unsupervised learning methods trained with at least 80 samples per class.

∗(footnote, cont.): et al. [2015a], Sharma et al. [2016] used 80 training points and 20 test points in each category for experiments, possibly with viewpoint augmentation.

4.3 Single Image 3D Reconstruction

As an application, we show that the 3D-VAE-GAN can perform well on single image 3D reconstruction. Following previous work [Girdhar et al., 2016], we test it on the IKEA dataset [Lim et al., 2013], and show both qualitative and quantitative results.

Data The IKEA dataset consists of images with IKEA objects. We crop the images so that the objects are centered in the images. Our test set consists of 1,039 objects cropped from 759 images (supplied by the author). The IKEA dataset is challenging because all images are captured in the wild, often with heavy occlusions. We test on all six categories of objects: bed, bookcase, chair, desk, sofa, and table.

Results We show our results in Figure 7 and Table 2, with performance of a single 3D-VAE-GAN jointly trained on all six categories, as well as the results of six 3D-VAE-GANs separately trained on
each class. Following Girdhar et al. [2016], we evaluate results at resolution 20 × 20 × 20, use the average precision as our evaluation metric, and attempt to align each prediction with the ground-truth over permutations, flips, and translational alignments (up to 10%), as IKEA ground truth objects are not in a canonical viewpoint. In all categories, our model consistently outperforms previous state-of-the-art in voxel-level prediction and other baseline methods.†

Table 2: Average precision for voxel prediction on the IKEA dataset.†

Method                                Bed   Bookcase  Chair  Desk  Sofa  Table  Mean
AlexNet-fc8 [Girdhar et al., 2016]    29.5  17.3      20.4   19.7  38.8  16.0   23.6
AlexNet-conv4 [Girdhar et al., 2016]  38.2  26.6      31.4   26.6  69.3  19.1   35.2
T-L Network [Girdhar et al., 2016]    56.3  30.2      32.9   25.8  71.7  23.3   40.0
3D-VAE-GAN (jointly trained)          49.1  31.9      42.6   34.8  79.8  33.1   45.2
3D-VAE-GAN (separately trained)       63.2  46.3      47.2   40.7  78.8  42.3   53.1

Figure 7: Qualitative results of single image 3D reconstruction on the IKEA dataset

5 Analyzing Learned Representations

In this section, we look deep into the representations learned by both the generator and the discriminator of 3D-GAN. We start with the 200-dimensional object vector, from which the generator produces various objects. We then visualize neurons in the discriminator, and demonstrate that these units capture informative semantic knowledge of the objects, which justifies its good performance on object classification presented in Section 4.

5.1 The Generative Representation

We explore three methods for understanding the latent space of vectors for object generation. We first visualize what an individual dimension of the vector represents; we then explore the possibility of interpolating between two object vectors and observe how the generated objects change; last, we present how we can apply shape arithmetic in the latent space.
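The latter two of the three explorations listed above reduce to simple vector operations on 200-dimensional object vectors before decoding; a minimal sketch (the function names are ours):

```python
def interpolate(z_a, z_b, alpha):
    # Walk linearly from object vector z_a (alpha = 0) to z_b (alpha = 1);
    # decoding intermediate points gives the smooth transitions of Figure 6.
    return [(1.0 - alpha) * a + alpha * b for a, b in zip(z_a, z_b)]

def shape_arithmetic(z_with, z_without, z_target):
    # Extract an attribute vector (e.g. "arms") as the difference of two
    # object vectors, then add it to a third object, as in Figure 8.
    return [t + (w - wo) for w, wo, t in zip(z_with, z_without, z_target)]

midpoint = interpolate([0.0, 1.0], [1.0, 0.0], 0.5)
shifted = shape_arithmetic([3.0], [2.0], [1.0])
```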
Visualizing the object vector To visualize the semantic meaning of each dimension, we gradually increase its value, and observe how it affects the generated 3D object. In Figure 5, each column corresponds to one dimension of the object vector, where the red region marks the voxels affected by changing values of that dimension. We observe that some dimensions in the object vector carry semantic knowledge of the object, e.g., the thickness or width of surfaces.

Interpolation We show results of interpolating between two object vectors in Figure 6. Earlier works demonstrated interpolation between two 2D images of the same category [Dosovitskiy et al., 2015, Radford et al., 2016]. Here we show interpolations both within and across object categories. We observe that in both cases walking over the latent space gives smooth transitions between objects.

Arithmetic Another way of exploring the learned representations is to show arithmetic in the latent space. Previously, Dosovitskiy et al. [2015], Radford et al. [2016] showed that their generative nets are able to encode semantic knowledge of chair or face images in their latent space; Girdhar et al. [2016] also showed that the learned representation for 3D objects behaves similarly. We show our shape arithmetic in Figure 8. Different from Girdhar et al. [2016], all of our objects are randomly sampled, requiring no existing 3D CAD models as input.

5.2 The Discriminative Representation

We now visualize the neurons in the discriminator. Specifically, we would like to show what input objects, and which parts of them, produce the highest intensity values for each neuron. To do that,

†For methods from Girdhar et al. [2016], the mean values in the last column are higher than the originals in their paper, because we compute per-class accuracy instead of per-instance accuracy.

Figure 8: Shape arithmetic for chairs and tables.
The left images show the obtained “arm” vector can be added to other chairs, and the right ones show the “layer” vector can be added to other tables.

Figure 9: Objects and parts that activate specific neurons in the discriminator. For each neuron, we show five objects that activate it most strongly, with colors representing gradients of activations with respect to input voxels.

for each neuron in the second to last convolutional layer of the discriminator, we iterate through all training objects and exhibit the ones activating the unit most strongly. We further use guided back-propagation [Springenberg et al., 2015] to visualize the parts that produce the activation. Figure 9 shows the results. There are two main observations: first, for a single neuron, the objects producing the strongest activations have very similar shapes, showing the neuron is selective in terms of the overall object shape; second, the parts that activate the neuron, shown in red, are consistent across these objects, indicating the neuron is also learning semantic knowledge about object parts.

6 Conclusion

In this paper, we proposed 3D-GAN for 3D object generation, as well as 3D-VAE-GAN for learning an image-to-3D-model mapping. We demonstrated that our models are able to generate novel objects and to reconstruct 3D objects from images. We showed that the discriminator in GAN, learned without supervision, can be used as an informative feature representation for 3D objects, achieving impressive performance on shape classification. We also explored the latent space of object vectors, and presented results on object interpolation, shape arithmetic, and neuron visualization.

Acknowledgement This work is supported by NSF grants #1212849 and #1447476, ONR MURI N00014-16-1-2007, the Center for Brain, Minds and Machines (NSF STC award CCF-1231216), Toyota Research Institute, Adobe, Shell, IARPA MICrONS, and a hardware donation from Nvidia.

References

Aayush Bansal, Bryan Russell, and Abhinav Gupta.
Marr revisited: 2d-3d alignment via surface normal prediction. In CVPR, 2016. 2 Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In SIGGRAPH, 1999. 2 Wayne E Carlson. An algorithm and data structure for 3d object synthesis using surface patch intersections. In SIGGRAPH, 1982. 1, 2 Angel X Chang, Thomas Funkhouser, Leonidas Guibas, et al. Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012, 2015. 1, 5 Siddhartha Chaudhuri, Evangelos Kalogerakis, Leonidas Guibas, and Vladlen Koltun. Probabilistic reasoning for assembly-based 3d modeling. ACM TOG, 30(4):35, 2011. 2 Ding-Yun Chen, Xiao-Pei Tian, Yu-Te Shen, and Ming Ouhyoung. On visual similarity based 3d model retrieval. CGF, 2003. 6 Christopher B Choy, Danfei Xu, JunYoung Gwak, Kevin Chen, and Silvio Savarese. 3d-r2n2: A unified approach for single and multi-view 3d object reconstruction. In ECCV, 2016. 2 Emily L Denton, Soumith Chintala, and Rob Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, 2015. 2 Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015. 7 Rohit Girdhar, David F Fouhey, Mikel Rodriguez, and Abhinav Gupta. Learning a predictable and generative vector representation for objects. In ECCV, 2016. 1, 2, 4, 6, 7 8 Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. 2, 3 Haibin Huang, Evangelos Kalogerakis, and Benjamin Marlin. Analysis and synthesis of 3d shape families via deep-learned generative models of surfaces. CGF, 34(5):25–38, 2015. 2 Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. Generating images with recurrent adversarial networks. arXiv preprint arXiv:1602.05110, 2016. 2 Evangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, and Vladlen Koltun. 
A probabilistic model for component-based shape synthesis. ACM TOG, 31(4):55, 2012. 2 Abhishek Kar, Shubham Tulsiani, Joao Carreira, and Jitendra Malik. Category-specific object reconstruction from a single image. In CVPR, 2015. 2 Michael Kazhdan, Thomas Funkhouser, and Szymon Rusinkiewicz. Rotation invariant spherical harmonic representation of 3 d shape descriptors. In SGP, 2003. 6 Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. 3 Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014. 2, 3 Anders Boesen Lindbo Larsen, Søren Kaae Sønderby, and Ole Winther. Autoencoding beyond pixels using a learned similarity metric. In ICML, 2016. 2, 4 Chuan Li and Michael Wand. Precomputed real-time texture synthesis with markovian generative adversarial networks. arXiv preprint arXiv:1604.04382, 2016. 2 Yangyan Li, Hao Su, Charles Ruizhongtai Qi, Noa Fish, Daniel Cohen-Or, and Leonidas J Guibas. Joint embeddings of shapes and images via cnn image purification. ACM TOG, 34(6):234, 2015. 2 Joseph J. Lim, Hamed Pirsiavash, and Antonio Torralba. Parsing ikea objects: Fine pose estimation. In ICCV, 2013. 4, 6 Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. Rectifier nonlinearities improve neural network acoustic models. In ICML, 2013. 3 Daniel Maturana and Sebastian Scherer. Voxnet: A 3d convolutional neural network for real-time object recognition. In IROS, 2015. 2, 5, 6 Charles R Qi, Hao Su, Matthias Niessner, Angela Dai, Mengyuan Yan, and Leonidas J Guibas. Volumetric and multi-view cnns for object classification on 3d data. In CVPR, 2016. 1, 2, 5, 6 Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. 2, 3, 7 Danilo Jimenez Rezende, SM Eslami, Shakir Mohamed, Peter Battaglia, Max Jaderberg, and Nicolas Heess. Unsupervised learning of 3d structure from images. In NIPS, 2016. 
2 Nima Sedaghat, Mohammadreza Zolfaghari, and Thomas Brox. Orientation-boosted voxel nets for 3d object recognition. arXiv preprint arXiv:1604.03351, 2016. 6 Abhishek Sharma, Oliver Grau, and Mario Fritz. Vconv-dae: Deep volumetric shape learning without object labels. arXiv preprint arXiv:1604.03755, 2016. 2, 4, 5, 6 Baoguang Shi, Song Bai, Zhichao Zhou, and Xiang Bai. Deeppano: Deep panoramic representation for 3-d shape recognition. IEEE SPL, 22(12):2339–2343, 2015. 2, 5, 6 Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. In ICLR Workshop, 2015. 8 Hang Su, Subhransu Maji, Evangelos Kalogerakis, and Erik Learned-Miller. Multi-view convolutional neural networks for 3d shape recognition. In ICCV, 2015a. 1, 2, 5, 6 Hao Su, Charles R Qi, Yangyan Li, and Leonidas Guibas. Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views. In ICCV, 2015b. 2 Johan WH Tangelder and Remco C Veltkamp. A survey of content based 3d shape retrieval methods. Multimedia tools and applications, 39(3):441–471, 2008. 1, 2 Oliver Van Kaick, Hao Zhang, Ghassan Hamarneh, and Daniel Cohen-Or. A survey on shape correspondence. CGF, 2011. 1, 2 Xiaolong Wang and Abhinav Gupta. Generative image modeling using style and structure adversarial networks. In ECCV, 2016. 2 Jiajun Wu, Tianfan Xue, Joseph J Lim, Yuandong Tian, Joshua B Tenenbaum, Antonio Torralba, and William T Freeman. Single image 3d interpreter network. In ECCV, 2016. 2 Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 3d shapenets: A deep representation for volumetric shapes. In CVPR, 2015. 1, 2, 4, 5, 6 Yu Xiang, Wongun Choi, Yuanqing Lin, and Silvio Savarese. Data-driven 3d voxel patterns for object category recognition. In CVPR, 2015. 2 Jianxiong Xiao, James Hays, Krista Ehinger, Aude Oliva, and Antonio Torralba. 
Sun database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010. 4 Tianfan Xue, Jianzhuang Liu, and Xiaoou Tang. Example-based 3d object reconstruction from line drawings. In CVPR, 2012. 2 Xinchen Yan, Jimei Yang, Ersin Yumer, Yijie Guo, and Honglak Lee. Perspective transformer nets: Learning single-view 3d object reconstruction without 3d supervision. In NIPS, 2016. 2 Jun-Yan Zhu, Philipp Krähenbühl, Eli Shechtman, and Alexei A Efros. Generative visual manipulation on the natural image manifold. In ECCV, 2016. 2 9
Optimistic Gittins Indices

Eli Gutin, Operations Research Center, MIT, Cambridge, MA 02142, gutin@mit.edu
Vivek F. Farias, MIT Sloan School of Management, Cambridge, MA 02142, vivekf@mit.edu

Abstract

Starting with the Thompson sampling algorithm, recent years have seen a resurgence of interest in Bayesian algorithms for the Multi-armed Bandit (MAB) problem. These algorithms seek to exploit prior information on arm biases and while several have been shown to be regret optimal, their design has not emerged from a principled approach. In contrast, if one cared about Bayesian regret discounted over an infinite horizon at a fixed, pre-specified rate, the celebrated Gittins index theorem offers an optimal algorithm. Unfortunately, the Gittins analysis does not appear to carry over to minimizing Bayesian regret over all sufficiently large horizons and computing a Gittins index is onerous relative to essentially any incumbent index scheme for the Bayesian MAB problem. The present paper proposes a sequence of ‘optimistic’ approximations to the Gittins index. We show that the use of these approximations in concert with the use of an increasing discount factor appears to offer a compelling alternative to state-of-the-art index schemes proposed for the Bayesian MAB problem in recent years by offering substantially improved performance with little to no additional computational overhead. In addition, we prove that the simplest of these approximations yields frequentist regret that matches the Lai-Robbins lower bound, including achieving matching constants.

1 Introduction

The multi-armed bandit (MAB) problem is perhaps the simplest example of a learning problem that exposes the tension between exploration and exploitation. Recent years have seen a resurgence of interest in Bayesian MAB problems wherein we are endowed with a prior on arm rewards, and a number of policies that exploit this prior have been proposed and/or analyzed.
These include Thompson Sampling [20], Bayes-UCB [12], KL-UCB [9], and Information Directed Sampling [19]. The ultimate motivation for these algorithms appears to be two-fold: superior empirical performance and light computational burden. The strongest performance results available for these algorithms establish regret bounds that match the Lai-Robbins lower bound [15]. Even among this set of recently proposed algorithms, there is a wide spread in empirically observed performance. Interestingly, the design of the index policies referenced above has been somewhat ad-hoc as opposed to having emerged from a principled analysis of the underlying Markov Decision process. Now if, in contrast to requiring ‘small’ regret for all sufficiently large time horizons, we cared about minimizing Bayesian regret over an infinite horizon, discounted at a fixed, pre-specified rate (or equivalently, maximizing discounted infinite horizon rewards), the celebrated Gittins index theorem provides an optimal, efficient solution. Importing this celebrated result to the fundamental problem of designing algorithms that achieve low regret (either frequentist or Bayesian) simultaneously over all sufficiently large time horizons runs into two substantial challenges:

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

High-Dimensional State Space: Even minor ‘tweaks’ to the discounted infinite horizon objective render the corresponding Markov Decision problem for the Bayesian MAB problem intractable. For instance, it is known that a Gittins-like index strategy is sub-optimal for a fixed horizon [5], let alone the problem of minimizing regret over all sufficiently large horizons.

Computational Burden: Even in the context of the discounted infinite horizon problem, the computational burden of calculating a Gittins index is substantially larger than that required for any of the index schemes for the multi-armed bandit discussed thus far.
The present paper attempts to make progress on these challenges. Specifically, we make the following contribution:

• We propose a class of ‘optimistic’ approximations to the Gittins index that can be computed with significantly less effort. In fact, the computation of the simplest of these approximations is no more burdensome than the computation of indices for the Bayes UCB algorithm, and several orders of magnitude faster than the nearest competitor, IDS.

• We establish that an arm selection rule that is greedy with respect to the simplest of these optimistic approximations achieves optimal regret in the sense of meeting the Lai-Robbins lower bound (including matching constants) provided the discount factor is increased at a certain rate.

• We show empirically that even the simplest optimistic approximation to the Gittins index proposed here outperforms the state-of-the-art incumbent schemes discussed in this introduction by a non-trivial margin. We view this as our primary contribution: the Bayesian MAB problem is fundamental, making the performance improvements we demonstrate important.

Literature review Thompson Sampling [20] was proposed as a heuristic for the MAB problem in 1933, but was largely ignored until the last decade. An empirical study by Chapelle and Li [7] highlighted Thompson Sampling’s superior performance and led to a series of strong theoretical guarantees for the algorithm being proved in [2, 3, 12] (for specific cases when Gaussian and Beta priors are used). Recently, these proofs were generalized to the 1D exponential family of distributions in [13]. A few decades after Thompson Sampling was introduced, Gittins [10] showed that an index policy was optimal for the infinite horizon discounted MAB problem. Several different proofs of the optimality of the Gittins index were shown in [21, 22, 23, 6].
Inspired by this breakthrough, Lai and Robbins [15, 14], while ignoring the original MDP formulation, proved an asymptotic lower bound on achievable (non-discounted) regret and suggested policies that attained it. Simple and efficient UCB algorithms were later developed by Agrawal and Auer et al. [1, 4], with finite time regret bounds. These were followed by the KL-UCB [9] and Bayes UCB [12] algorithms. The Bayes UCB paper drew attention to how well Bayesian algorithms performed in the frequentist setting. In that paper, the authors also demonstrated that a policy using indices similar to Gittins’ had the lowest regret. The use of Bayesian techniques for bandits was explored further in [19] where the authors propose Information Directed Sampling, an algorithm that exploits complex information structures arising from the prior. There is also a very recent paper, [16], which also focuses on regret minimization using approximated Gittins indices. However, in that paper, the time horizon is assumed to be known and fixed, which is different from the focus in this paper on finding a policy that has low regret over all sufficiently long horizons.

2 Model and Preliminaries

We consider a multi-armed bandit problem with a finite set of arms A = {1, . . . , A}. Arm i ∈ A, if pulled at time t, generates a stochastic reward X_{i,N_i(t)}, where N_i(t) denotes the cumulative number of pulls of arm i up to and including time t. (X_{i,s}, s ∈ N) is an i.i.d. sequence of random variables, each distributed according to p_{θ_i}(·), where θ_i ∈ Θ is a parameter. Denote by θ the tuple of all θ_i. The expected reward from the i-th arm is denoted by μ_i(θ_i) := E[X_{i,1} | θ_i]. We denote by μ*(θ) the maximum expected reward across arms, μ*(θ) := max_i μ_i(θ_i), and let i* be an optimal arm. The present paper will focus on the Bayesian setting, and so we suppose that each θ_i is an independent draw from some prior distribution q over Θ. All random variables are defined on a common probability space (Ω, F, P).
We define a policy, π := (π_t, t ∈ N), to be a stochastic process taking values in A. We require that π be adapted to the filtration F_t generated by the history of arm pulls and their corresponding rewards up to and including time t − 1. Over time, the agent accumulates rewards, and we denote by

V(\pi, T, \theta) := \mathbb{E}\left[ \sum_{t=1}^{T} X_{\pi_t, N_{\pi_t}(t)} \,\middle|\, \theta \right]

the reward accumulated up to time T when using policy π. We write V(π, T) := E[V(π, T, θ)]. The regret of a policy over T time periods, for a specific realization θ ∈ Θ^A, is the expected shortfall against always pulling the optimal arm, namely

\text{Regret}(\pi, T, \theta) := T \mu^*(\theta) - V(\pi, T, \theta).

In a seminal paper [15], Lai and Robbins established a lower bound on achievable regret. They considered the class of policies under which, for any choice of θ and positive constant a, any policy in the class achieves o(n^a) regret. They showed that for any policy π in this class, and any θ with a unique maximum, we must have

\liminf_{T} \frac{\text{Regret}(\pi, T, \theta)}{\log T} \ge \sum_{i} \frac{\mu^*(\theta) - \mu_i(\theta_i)}{d_{\mathrm{KL}}(p_{\theta_i}, p_{\theta_{i^*}})}, \quad (1)

where d_KL is the Kullback-Leibler divergence. The Bayes’ risk (or Bayesian regret) is simply the expected regret over draws of θ according to the prior q: Regret(π, T) := T E[μ*(θ)] − V(π, T). In yet another landmark paper, [15] showed that for a restricted class of priors q a similar class of algorithms to those found to be regret optimal in [14] were also Bayes optimal. Interestingly, however, this class of algorithms ignores information about the prior altogether. A number of algorithms that do exploit prior information have in recent years received a good deal of attention; these include Thompson sampling [20], Bayes-UCB [12], KL-UCB [9], and Information Directed Sampling [19]. The Bayesian setting endows us with the structure of a (high dimensional) Markov Decision process. An alternative objective to minimizing Bayes risk is the maximization of the cumulative reward discounted over an infinite horizon.
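For concreteness, the constant on the right-hand side of the bound (1) is easy to evaluate when rewards are Bernoulli, where d_KL has a closed form. An illustrative sketch (not from the paper):

```python
import math

def kl_bernoulli(p, q):
    # d_KL between Bernoulli(p) and Bernoulli(q), for 0 < p, q < 1.
    return p * math.log(p / q) + (1.0 - p) * math.log((1.0 - p) / (1.0 - q))

def lai_robbins_constant(means):
    # Coefficient of log T in the lower bound (1) for Bernoulli arms:
    # sum over suboptimal arms of (mu* - mu_i) / d_KL(p_i, p_*).
    best = max(means)
    return sum((best - m) / kl_bernoulli(m, best) for m in means if m < best)

# Arms that are close in mean are hard to distinguish and inflate the constant.
c_easy = lai_robbins_constant([0.9, 0.1])
c_hard = lai_robbins_constant([0.55, 0.45])
```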
Specifically, for any positive discount factor γ < 1, define

V_γ(π) := E_q[ Σ_{t=1}^∞ γ^{t−1} X_{π_t, N_{π_t}(t)} ].

The celebrated Gittins index theorem provides an optimal, efficient solution to this problem, which we describe in greater detail shortly; unfortunately, as alluded to earlier, even a minor 'tweak' to the objective above (such as maximizing cumulative expected reward over a finite horizon) renders the Gittins index sub-optimal [17]. As a final point of notation, every scheme we consider maintains a posterior on the mean of each arm at every point in time. We denote by q_{i,s} the posterior on the mean of the ith arm after s − 1 pulls of that arm; q_{i,1} := q. Since our prior on θ_i will frequently be conjugate to the distribution of the reward X_i, q_{i,s} will permit a succinct description via a sufficient statistic we denote by y_{i,s}; denote the set of all such sufficient statistics Y. We thus use q_{i,s} and y_{i,s} interchangeably and refer to the latter as the 'state' of the ith arm after s − 1 pulls.

3 Gittins Indices and Optimistic Approximations

One way to compute the Gittins index is via the so-called retirement value formulation [23]. The Gittins index for arm i in state y is the value of λ that solves

λ/(1 − γ) = sup_{τ>1} E[ Σ_{t=1}^{τ−1} γ^{t−1} X_{i,t} + γ^{τ−1} λ/(1 − γ) | y_{i,1} = y ].   (2)

We denote this quantity by ν_γ(y). If one thinks of retiring as receiving a deterministic reward λ in every subsequent period, then the value of λ that solves the equation above can be interpreted as the per-period retirement reward that makes us indifferent between retiring immediately and continuing to play arm i with the option of retiring at some future time. The Gittins index policy can thus succinctly be stated as follows: at time t, play an arm in the set argmax_i ν_γ(y_{i,N_i(t)}). Ignoring computational considerations, we cannot hope for a scheme such as the one above to achieve acceptable regret or Bayes risk.
Specifically, denoting the Gittins policy by π^{G,γ}, we have:

Lemma 3.1. There exists an instance of the multi-armed bandit problem with |A| = 2 for which Regret(π^{G,γ}, T) = Ω(T) for any γ ∈ (0, 1).

The above result is expected. If the posterior means of the two arms are sufficiently far apart, the Gittins index policy will pick the arm with the larger posterior mean. The threshold beyond which the Gittins policy 'exploits' depends on the discount factor, and with a fixed discount factor there is a positive probability that the superior arm is never explored sufficiently to establish that it is, in fact, the superior arm. Fixing this issue requires that the discount factor increase over time. Consider then employing discount factors that increase at roughly the rate 1 − 1/t; specifically, consider setting γ_t = 1 − 1/2^{⌊log₂ t⌋+1} and using the policy that at time t picks an arm from the set argmax_i ν_{γ_t}(y_{i,N_i(t)}). Denote this policy by π^D. The following proposition shows that this 'doubling' policy achieves Bayes risk within a factor of log T of the optimal Bayes risk. Specifically, we have:

Proposition 3.1. Regret(π^D, T) = O(log³ T), where the constant in the big-O term depends on the prior q and on A.

The proof of this simple result (Appendix A.1) relies on showing that the finite-horizon regret achieved by using a Gittins index with an appropriate fixed discount factor is within a constant factor of the optimal finite-horizon regret. The second ingredient is a doubling trick. While increasing discount factors do not appear to get us to the optimal Bayes risk (the achievable lower bound being log² T; see [14]), we conjecture that this is in fact a deficiency in our analysis of Proposition 3.1. In any case, the policy π^D is not the primary subject of the paper but merely a motivation for the discount factor schedule proposed.
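The doubling schedule above is easy to compute directly. The sketch below (function name is ours, not the paper's) implements γ_t = 1 − 1/2^{⌊log₂ t⌋+1} and illustrates why it tracks 1 − 1/t: the effective horizon 1/(1 − γ_t) doubles each time t crosses a power of two, so it always lies between t and 2t while changing only O(log T) times over T rounds.

```python
import math

def doubling_discount(t):
    """Discount factor gamma_t = 1 - 1 / 2^(floor(log2 t) + 1).

    The effective horizon 1 / (1 - gamma_t) equals 2^(floor(log2 t) + 1),
    which doubles each time t crosses a power of two; hence gamma_t tracks
    1 - 1/t up to a factor of two in the effective horizon.
    """
    if t < 1:
        raise ValueError("t must be >= 1")
    return 1.0 - 1.0 / (2 ** (math.floor(math.log2(t)) + 1))
```

Because the index need only be recomputed when the schedule changes, this form of the schedule is also what makes the doubling trick in the proof of Proposition 3.1 go through.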
Putting aside this issue, one is still left with the computational burden associated with π^D, which is clearly onerous relative to any of the incumbent index rules discussed in the introduction.

3.1 Optimistic Approximations to the Gittins Index

The retirement value formulation makes clear that computing a Gittins index is equivalent to solving a discounted, infinite-horizon stopping problem. Since the state space Y associated with this problem is typically at least countable, solving this stopping problem, although not necessarily intractable, is a non-trivial computational task. Consider the following alternative stopping problem, which requires as input the parameters λ (with the same interpretation as before) and K, an integer limiting the number of steps that we need to look ahead. For an arm in state y (recall that the state specifies sufficient statistics for the current prior on the arm's reward), let R(y) be a random variable drawn from the prior on the expected arm reward specified by y. Define the retirement value R_{λ,K}(s, y) according to

R_{λ,K}(s, y) = λ if s < K + 1, and max(λ, R(y)) otherwise.

For a given K, the Optimistic Gittins Index for arm i in state y is now defined as the value of λ that solves

λ/(1 − γ) = sup_{1<τ≤K+1} E[ Σ_{s=1}^{τ−1} γ^{s−1} X_{i,s} + γ^{τ−1} R_{λ,K}(τ, y_{i,τ})/(1 − γ) | y_{i,1} = y ].   (3)

We denote the solution of this equation by ν^K_γ(y). The problem above admits a simple, attractive interpretation: nature reveals the true mean reward of the arm at time K + 1 should we choose not to retire prior to that time, which enables the decision maker to then instantaneously decide whether to retire at time K + 1 or else never retire. In this manner one is better off than in the stopping problem inherent to the definition of the Gittins index, hence the moniker optimistic. Since we need to look ahead at most K steps in solving the stopping problem implicit in the definition above, the computational burden of index computation is limited.
The following lemma formalizes this intuition.

Lemma 3.2. For all discount factors γ and states y ∈ Y, we have ν^K_γ(y) ≥ ν_γ(y) for all K.

Proof. See Appendix A.2.

It is instructive to consider the simplest version of the approximation proposed here, namely the case K = 1. There, equation (3) simplifies to

λ = μ̂(y) + γ E[(λ − R(y))⁺]   (4)

where μ̂(y) := E[R(y)] is the mean reward under the prior given by y. The equation for λ above can also be viewed as an upper confidence bound on the arm's expected reward. Solving equation (4) is often simple in practice, and we list a few examples to illustrate this:

Example 3.1 (Beta). Here y is the pair (a, b), which specifies a Beta prior distribution. The 1-step Optimistic Gittins Index is the value of λ that solves

λ = a/(a+b) + γ E[(λ − Beta(a, b))⁺] = (a/(a+b)) (1 − γ F^β_{a+1,b}(λ)) + γλ F^β_{a,b}(λ)

where F^β_{a,b} is the CDF of a Beta distribution with parameters a, b.

Example 3.2 (Gaussian). Here y = (μ, σ²), which specifies a Gaussian prior, and the corresponding equation is

λ = μ + γ E[(λ − N(μ, σ²))⁺] = μ + γ[ (λ − μ) Φ((λ − μ)/σ) + σ φ((λ − μ)/σ) ]

where φ and Φ denote the standard Gaussian PDF and CDF, respectively.

Notice that in both the Beta and Gaussian examples, the equations for λ are in terms of distribution functions. It is therefore straightforward to compute a derivative of these equations (in terms of the density and CDF of the prior), which makes finding a solution, using a method such as Newton-Raphson, simple and efficient. We summarize the Optimistic Gittins Index (OGI) algorithm succinctly as follows. Assume the state of arm i at time t is given by y_{i,t}, and let γ_t = 1 − 1/t. Play an arm

i* ∈ argmax_i ν^K_{γ_t}(y_{i,t}),

and update the posterior on that arm based on the observed reward.

4 Analysis

We establish a regret bound for Optimistic Gittins Indices when the algorithm is given the parameter K = 1, the prior distribution q is uniform, and arm rewards are Bernoulli.
The result shows that the algorithm, in that case, meets the Lai-Robbins lower bound and is thus asymptotically optimal, in both a frequentist and a Bayesian sense. After stating the main theorem, we briefly discuss two generalizations of the algorithm. In the sequel, whenever x, y ∈ (0, 1), we simplify notation and let d(x, y) := d_KL(Ber(x), Ber(y)). Also, we refer to the Optimistic Gittins Index policy simply as π^{OG}, with the understanding that this refers to the case where K, the 'look-ahead' parameter, equals 1 and a flat Beta prior is used. Moreover, we denote the Optimistic Gittins Index of the ith arm by ν_{i,t} := ν¹_{1−1/t}(y_{i,t}). Now we state the main result:

Theorem 1. Let ε > 0. For the multi-armed bandit problem with Bernoulli rewards and any parameter vector θ ∈ [0, 1]^A, there exist T* = T*(ε, θ) and C = C(ε, θ) such that for all T ≥ T*,

Regret(π^{OG}, T, θ) ≤ Σ_{i ≠ i*} [ (1 + ε)² (θ* − θ_i) / d(θ_i, θ*) ] log T + C(ε, θ)   (5)

where C(ε, θ) is a constant determined only by ε and the parameter θ.

Proof. Because we prove frequentist regret, the first few steps of the proof are similar to those for UCB and Thompson Sampling. Assume w.l.o.g. that arm 1 is uniquely optimal, and therefore θ* = θ₁. Fix an arbitrary suboptimal arm, which for convenience we will say is arm 2. Let j_t and k_t denote the number of pulls of arms 1 and 2, respectively, by (but not including) time t. Finally, let s_t and s'_t be the corresponding integer rewards accumulated from arms 1 and 2, respectively; that is,

s_t = Σ_{s=1}^{j_t} X_{1,s},  s'_t = Σ_{s=1}^{k_t} X_{2,s}.

Therefore, by definition, j₁ = k₁ = s₁ = s'₁ = 0. Let η₁, η₂, η₃ ∈ (θ₂, θ₁) be chosen such that η₁ < η₂ < η₃, d(η₁, η₃) = d(θ₂, θ₁)/(1 + ε) and d(η₂, η₃) = d(η₁, η₃)/(1 + ε). Next, we define L(T) := log T / d(η₂, η₃).
We upper bound the expected number of pulls of the second arm as follows:

E[k_T] ≤ L(T) + Σ_{t=⌊L(T)⌋+1}^{T} P(π^{OG}_t = 2, k_t ≥ L(T))
      ≤ L(T) + Σ_{t=1}^{T} P(ν_{1,t} < η₃) + Σ_{t=1}^{T} P(π^{OG}_t = 2, ν_{1,t} ≥ η₃, k_t ≥ L(T))
      ≤ L(T) + Σ_{t=1}^{T} P(ν_{1,t} < η₃) + Σ_{t=1}^{T} P(π^{OG}_t = 2, ν_{2,t} ≥ η₃, k_t ≥ L(T))
      ≤ (1 + ε)² log T / d(θ₂, θ₁) + Σ_{t=1}^{∞} P(ν_{1,t} < η₃)  [term A]
        + Σ_{t=1}^{T} P(π^{OG}_t = 2, ν_{2,t} ≥ η₃, k_t ≥ L(T))  [term B]   (6)

All that remains is to show that terms A and B are bounded by constants. These bounds are given in Lemmas 4.1 and 4.2, whose proofs we describe at a high level, with details in the Appendix.

Lemma 4.1 (Bound on term A). For any η < θ₁, the following bound holds for some constant C₁ = C₁(ε, θ₁):

Σ_{t=1}^{∞} P(ν_{1,t} < η) ≤ C₁.

Proof outline. The goal is to bound P(ν_{1,t} < η) by an expression that decays fast enough in t that the series converges. To prove this, we express the event {ν_{1,t} < η} in the form {W_t < 1/t} for some sequence of random variables W_t. It turns out that for large enough t,

P(W_t < 1/t) ≤ P(c U^{1/(1+h)} < 1/t)

where U is a uniform random variable and c, h > 0, and therefore P(ν_{1,t} < η) = O(1/t^{1+h}). The full proof is in Appendix A.4.

We remark that the core technique in the proof of Lemma 4.1 is the use of the Beta CDF. As such, our analysis can, in some sense, improve the result for Bayes-UCB. In the main theorem of [12], the authors require the quantile in their algorithm to be 1 − 1/(t log^c T) for some parameter c ≥ 5; however, they show simulations with the quantile 1 − 1/t and suggest that, in practice, it should be used instead. Utilizing the techniques in our analysis, it is possible to prove that using the quantile 1 − 1/t in Bayes-UCB leads to the same optimal regret bound, so the 'scaling' by log^c T is unnecessary.

Lemma 4.2 (Bound on term B). There exist T* = T*(ε, θ) sufficiently large and a constant C₂ = C₂(ε, θ₁, θ₂) such that for any T ≥ T*,

Σ_{t=1}^{T} P(π^{OG}_t = 2, ν_{2,t} ≥ η₃, k_t ≥ L(T)) ≤ C₂.
Proof outline. This relies on a concentration of measure result and the assumption that the second arm has been sampled at least L(T) times. The full proof is given in Appendix A.5.

Lemmas 4.1 and 4.2, together with (6), imply that

E[k_T] ≤ (1 + ε)² log T / d(θ₂, θ₁) + C₁ + C₂,

from which the regret bound follows.

4.1 Generalizations and a tuning parameter

An argument in Agrawal and Goyal [2] shows that any algorithm optimal for the Bernoulli bandit problem can be modified to yield an algorithm with O(log T) regret for general bounded stochastic rewards. Therefore Optimistic Gittins Indices is an effective and practical alternative to policies such as Thompson Sampling and UCB. We also suspect that the proof of Theorem 1 can be generalized to all lookahead values (K > 1) and to general exponential families of distributions. Another important observation is that the discount factor for Optimistic Gittins Indices does not have to be exactly 1 − 1/t. In fact, a tuning parameter α > 0 can be added to make the discount factor γ_{t+α} = 1 − 1/(t + α) instead. An inspection of the proofs of Lemmas 4.1 and 4.2 shows that the result of Theorem 1 still holds with such a tuning parameter. In practice, performance is remarkably robust to the choice of K and α.

5 Experiments

Our goal is to benchmark Optimistic Gittins Indices (OGI) against state-of-the-art algorithms in the Bayesian setting. Specifically, we compare against Thompson Sampling, Bayes-UCB, and IDS. Each of these algorithms has in turn been shown to substantially dominate other extant schemes. We consider the OGI algorithm for two values of the lookahead parameter K (1 and 3), and, in one experiment included for completeness, the case of exact Gittins indices (K = ∞). We used a common discount factor schedule in all experiments, setting γ_t = 1 − 1/(100 + t).
The choice of α = 100 is second order, and our conclusions remain unchanged (and actually appear to improve in an absolute sense) with other choices; we show this in a second set of experiments. A major consideration in running the experiments is that the CPU time required to execute IDS (the closest competitor), based on its currently suggested implementation, is orders of magnitude greater than that of the index schemes or Thompson Sampling. The main bottleneck is that IDS uses numerical integration, requiring the calculation of a CDF over at least hundreds of iterations. By contrast, the version of OGI with K = 1 uses 10 iterations of the Newton-Raphson method. In the remainder of this section, we discuss the results.

Gaussian. This experiment (Table 1) replicates one in [19]. Here the arms generate Gaussian rewards X_{i,t} ∼ N(θ_i, 1), where each θ_i is independently drawn from a standard Gaussian distribution. We simulate 1000 independent trials with 10 arms and 1000 time periods. The implementation of OGI in this experiment uses K = 1. It is difficult to compute exact Gittins indices in this setting, but a classical approximation for Gaussian bandits does exist; see [18], Chapter 6.1.3. We term the use of that approximation 'OGI(1) Approx.'. In addition to regret, we show the average CPU time taken, in seconds, to execute each trial.

Algorithm     | OGI(1) | OGI(1) Approx. | IDS   | TS    | Bayes UCB
Mean Regret   | 49.19  | 47.64          | 55.83 | 67.40 | 60.30
S.D.          | 51.07  | 50.59          | 65.88 | 47.38 | 45.35
1st quartile  | 17.49  | 16.88          | 18.61 | 37.46 | 31.41
Median        | 41.72  | 40.99          | 40.79 | 63.06 | 57.71
3rd quartile  | 73.24  | 72.26          | 78.76 | 94.52 | 86.40
CPU time (s)  | 0.02   | 0.01           | 11.18 | 0.01  | 0.02

Table 1: Gaussian experiment. OGI(1) denotes OGI with K = 1, while OGI(1) Approx. uses the approximation to the Gaussian Gittins index from [18].
The key feature of the results here is that OGI offers an approximately 10% improvement in regret over its nearest competitor, IDS, and larger improvements (roughly 20% and 40%, respectively) over Bayes-UCB and Thompson Sampling. The best-performing policy is OGI with the specialized Gaussian approximation, since it gives a closer approximation to the Gittins index. At the same time, OGI is essentially as fast as Thompson Sampling and three orders of magnitude faster than its nearest competitor (in terms of regret).

Bernoulli. In this experiment regret is simulated over 1000 periods, with 10 arms each having a uniformly distributed Bernoulli parameter, over 1000 independent trials (Table 2). We use the same setup as in [19] for consistency.

Algorithm    | OGI(1) | OGI(3) | OGI(∞) | IDS   | TS    | Bayes UCB
Mean Regret  | 18.12  | 18.00  | 17.52  | 19.03 | 27.39 | 22.71
1st quartile | 6.26   | 5.60   | 4.45   | 5.85  | 14.62 | 10.09
Median       | 15.08  | 14.84  | 12.06  | 14.06 | 23.53 | 18.52
3rd quartile | 27.63  | 27.74  | 24.93  | 26.48 | 36.11 | 30.58
CPU time (s) | 0.19   | 0.89   | hours  | 8.11  | 0.01  | 0.05

Table 2: Bernoulli experiment. OGI(K) denotes the OGI algorithm with a K-step approximation and tuning parameter α = 100. OGI(∞) is the algorithm that uses exact Gittins indices.

Each version of OGI outperforms the other algorithms, and the one that uses (actual) Gittins indices has the lowest mean regret. Perhaps unsurprisingly, when OGI looks ahead 3 steps it performs marginally better than with a single step. Nevertheless, looking ahead 1 step gives a reasonably close approximation to the Gittins index in the Bernoulli problem: the approximation error with an optimistic 1-step approximation is around 15%, and if K is increased to 3, the error drops to around 4%.

Figure 1: Bayesian regret. (a) Gaussian experiment; (b) Bernoulli experiment. In the legend, OGI(K)-α is the format used to indicate parameters K and α. The OGI Approx. policy uses the approximation to the Gittins index from [18].
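For intuition about what OGI(1) computes in the Gaussian experiments, the fixed-point equation of Example 3.2 can be solved by simple fixed-point iteration: the map λ ↦ μ + γE[(λ − X)⁺] has derivative γΦ(·) < 1, so it is a contraction. The sketch below is our illustrative implementation (function names are ours; the paper uses Newton-Raphson, which converges faster but needs the same ingredients).

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ogi_gaussian(mu, sigma, gamma, tol=1e-10, max_iter=100_000):
    """1-step Optimistic Gittins Index for an N(mu, sigma^2) prior.

    Solves lambda = mu + gamma * E[(lambda - X)^+] with X ~ N(mu, sigma^2),
    using E[(lambda - X)^+] = (lambda - mu) * Phi(z) + sigma * phi(z)
    where z = (lambda - mu) / sigma.  The map is a gamma-contraction,
    so fixed-point iteration converges from any starting point.
    """
    lam = mu + sigma
    for _ in range(max_iter):
        z = (lam - mu) / sigma
        new_lam = mu + gamma * ((lam - mu) * norm_cdf(z) + sigma * norm_pdf(z))
        if abs(new_lam - lam) < tol:
            return new_lam
        lam = new_lam
    return lam
```

As expected of an optimistic index, the result always exceeds the posterior mean μ and increases with the discount factor γ (a more patient player values exploration more).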
Longer Horizon and Robustness. For this experiment, we simulate the earlier Bernoulli and Gaussian bandit setups with a longer horizon of 5000 steps and with 3 arms. The arms' parameters are drawn at random in the same manner as in the previous two experiments, and regret is averaged over 100,000 independent trials. Results are shown in Figures 1a and 1b. In the Bernoulli experiment of this section, due to the computational cost, we were only able to simulate OGI with K = 1. In addition, to show robustness with respect to the choice of tuning parameter α, we show results for α = 50, 100, 150. The message here is essentially the same as in the earlier experiments: the OGI scheme offers a non-trivial performance improvement at a tiny fraction of the computational effort required by its nearest competitor. We omit Thompson Sampling and Bayes-UCB from the plots in order to see more clearly the difference between OGI and IDS. The complete graphs can be found in Appendix A.6.

References

[1] Agrawal, R. Sample mean based index policies with O(log n) regret for the multi-armed bandit problem. Advances in Applied Probability (1995), 1054–1078.
[2] Agrawal, S., and Goyal, N. Analysis of Thompson Sampling for the multi-armed bandit problem. In Proceedings of the 25th Conference on Learning Theory (2012), pp. 39.1–39.26.
[3] Agrawal, S., and Goyal, N. Further optimal regret bounds for Thompson Sampling. In Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics (2013), pp. 99–107.
[4] Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning 47, 2-3 (2002), 235–256.
[5] Berry, D. A., and Fristedt, B. Bandit Problems: Sequential Allocation of Experiments (Monographs on Statistics and Applied Probability). Springer, 1985.
[6] Bertsimas, D., and Niño-Mora, J. Conservation laws, extended polymatroids and multiarmed bandit problems; a polyhedral approach to indexable systems. Mathematics of Operations Research 21, 2 (1996), 257–306.
[7] Chapelle, O., and Li, L. An empirical evaluation of Thompson Sampling. In Advances in Neural Information Processing Systems (2011), pp. 2249–2257.
[8] Cover, T. M., and Thomas, J. A. Elements of Information Theory. John Wiley & Sons, 2012.
[9] Garivier, A. The KL-UCB algorithm for bounded stochastic bandits and beyond. In COLT (2011).
[10] Gittins, J. C. Bandit processes and dynamic allocation indices. Journal of the Royal Statistical Society, Series B (Methodological) (1979), 148–177.
[11] Jogdeo, K., and Samuels, S. M. Monotone convergence of binomial probabilities and a generalization of Ramanujan's equation. The Annals of Mathematical Statistics (1968), 1191–1195.
[12] Kaufmann, E., Korda, N., and Munos, R. Thompson Sampling: an asymptotically optimal finite-time analysis. In Algorithmic Learning Theory (2012), Springer, pp. 199–213.
[13] Korda, N., Kaufmann, E., and Munos, R. Thompson Sampling for 1-dimensional exponential family bandits. In Advances in Neural Information Processing Systems (2013), pp. 1448–1456.
[14] Lai, T. L. Adaptive treatment allocation and the multi-armed bandit problem. The Annals of Statistics (1987), 1091–1114.
[15] Lai, T. L., and Robbins, H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics 6, 1 (1985), 4–22.
[16] Lattimore, T. Regret analysis of the finite-horizon Gittins index strategy for multi-armed bandits. In Proceedings of the 29th Conference on Learning Theory (2016), pp. 1–32.
[17] Niño-Mora, J. Computing a classic index for finite-horizon bandits. INFORMS Journal on Computing 23, 2 (2011), 254–267.
[18] Powell, W. B., and Ryzhov, I. O. Optimal Learning, vol. 841. John Wiley & Sons, 2012.
[19] Russo, D., and Van Roy, B. Learning to optimize via information-directed sampling. In Advances in Neural Information Processing Systems (2014), pp. 1583–1591.
[20] Thompson, W. R. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika (1933), 285–294.
[21] Tsitsiklis, J. N. A short proof of the Gittins index theorem. The Annals of Applied Probability (1994), 194–199.
[22] Weber, R. On the Gittins index for multi-armed bandits. The Annals of Applied Probability 2, 4 (1992), 1024–1033.
[23] Whittle, P. Multi-armed bandits and the Gittins index. Journal of the Royal Statistical Society, Series B (Methodological) (1980), 143–149.
Stochastic Gradient Methods for Distributionally Robust Optimization with f-divergences

Hongseok Namkoong, Stanford University, hnamk@stanford.edu
John C. Duchi, Stanford University, jduchi@stanford.edu

Abstract

We develop efficient solution methods for a robust empirical risk minimization problem designed to give calibrated confidence intervals on performance and provide optimal tradeoffs between bias and variance. Our methods apply to distributionally robust optimization problems proposed by Ben-Tal et al., which put more weight on observations inducing high loss via a worst-case approach over a non-parametric uncertainty set on the underlying data distribution. Our algorithm solves the resulting minimax problems with nearly the same computational cost as stochastic gradient descent through the use of several carefully designed data structures. For a sample of size n, the per-iteration cost of our method scales as O(log n), which allows us to give optimality certificates that distributionally robust optimization provides at little extra cost compared to empirical risk minimization and stochastic gradient methods.

1 Introduction

In statistical learning and other data-based decision-making problems, it is desirable to give solutions that come with guarantees on performance, at least to some specified confidence level. For tasks such as driving or medical diagnosis where safety and reliability are crucial, confidence levels have additional importance. Classical techniques in machine learning and statistics, including regularization, stability, concentration inequalities, and generalization guarantees [6, 25], provide such guarantees, though often a more finely tuned certificate (one with calibrated confidence) is desirable. In this paper, we leverage techniques from the robust optimization literature [e.g. 2], building an uncertainty set around the empirical distribution of the data and studying worst-case performance over this uncertainty set.
Recent work [15, 13] shows how this approach can (i) give calibrated statistical optimality certificates for stochastic optimization problems, (ii) perform a natural type of regularization based on the variance of the objective, and (iii) achieve fast rates of convergence under more general conditions than empirical risk minimization by trading off bias (approximation error) and variance (estimation error) optimally. In this paper, we propose efficient algorithms for such distributionally robust optimization problems.

We now provide our formal setting. Let X ⊂ R^d be a compact convex set, and for a convex function f : R₊ → R with f(1) = 0, define the f-divergence between distributions P and Q by D_f(P‖Q) = ∫ f(dP/dQ) dQ. Letting

P_{ρ,n} := { p ∈ R^n : p^T 𝟙 = 1, p ≥ 0, D_f(p ‖ 𝟙/n) ≤ ρ/n }

be an uncertainty set around the uniform distribution 𝟙/n, we develop methods for solving the robust empirical risk minimization problem

minimize_{x ∈ X}  sup_{p ∈ P_{ρ,n}} Σ_{i=1}^n p_i ℓ_i(x).   (1)

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

In problem (1), the functions ℓ_i : X → R₊ are convex and subdifferentiable, and we consider the situation in which ℓ_i(x) = ℓ(x; ξ_i) for ξ_i iid ∼ P₀. We let ℓ(x) = [ℓ₁(x) · · · ℓ_n(x)]^T ∈ R^n denote the vector of convex losses, so the robust objective (1) is sup_{p∈P_{ρ,n}} p^T ℓ(x). A number of authors show how the robust formulation (1) provides guarantees. Duchi et al. [15] show that the objective (1) is a convex approximation to regularizing the empirical risk by variance:

sup_{p∈P_{ρ,n}} Σ_{i=1}^n p_i ℓ_i(x) = (1/n) Σ_{i=1}^n ℓ_i(x) + sqrt{ (ρ/n) Var_{P₀}(ℓ(x; ξ)) } + o_{P₀}(n^{−1/2})   (2)

uniformly in x ∈ X. Since the right-hand side naturally trades off good loss performance (approximation error) against variance (estimation error), whose direct minimization is usually non-convex, the robust formulation (1) provides a convex regularization of the standard empirical risk minimization (ERM) problem.
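For intuition about the uncertainty set, consider the χ² case f(t) = ½(t − 1)². The constraint D_f(p ‖ 𝟙/n) ≤ ρ/n is then the Euclidean ball ‖p − 𝟙/n‖₂ ≤ √(2ρ)/n intersected with the probability simplex, and, ignoring the nonnegativity constraint (which is inactive for small enough ρ), the inner supremum has a closed form: tilt the uniform weights in the direction of the centered losses. The sketch below is our own illustration under that assumption, not the paper's algorithm, and the resulting objective is the empirical mean plus a variance penalty of order √(ρ · Var/n), consistent with the expansion (2).

```python
import numpy as np

def chi2_worst_case(losses, rho):
    """Maximizer of sup_{p} p^T losses over the chi-square uncertainty set
    with f(t) = 0.5*(t-1)^2, ignoring the constraint p >= 0 (the formula
    is exact whenever the returned weights happen to be nonnegative)."""
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    centered = losses - losses.mean()
    norm = np.linalg.norm(centered)
    if norm == 0.0:           # all losses equal: uniform weights are optimal
        return np.full(n, 1.0 / n)
    radius = np.sqrt(2.0 * rho) / n   # radius of the divergence ball in l2
    return 1.0 / n + radius * centered / norm

def chi2_robust_objective(losses, rho):
    """Worst-case expected loss sup_{p} p^T losses under the same caveat."""
    p = chi2_worst_case(losses, rho)
    return float(p @ np.asarray(losses, dtype=float))
```

Note that the worst-case weights upweight exactly the high-loss observations, which is the mechanism by which the robust objective penalizes variance.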
This trading between bias and variance leads to certificates on the optimal value inf_{x∈X} E_{P₀}[ℓ(x; ξ)], so that under suitable conditions we have

lim_{n→∞} P( inf_{x∈X} E_{P₀}[ℓ(x; ξ)] ≤ u_n ) = P(W ≥ −√ρ)  for W ∼ N(0, 1)   (3)

where u_n = inf_{x∈X} sup_{p∈P_{ρ,n}} p^T ℓ(x) is the optimal robust objective. Duchi and Namkoong [13] provide finite-sample guarantees for the special case f(t) = ½(t − 1)², making the expansion (2) more explicit and providing a number of consequences for estimation and optimization based on this expansion (including fast rates for risk minimization). A special case of their results [13, §3.1] is as follows. Let x̂_rob ∈ argmin_{x∈X} sup_{p∈P_{ρ,n}} p^T ℓ(x), let VC(F) denote the VC-(subgraph)-dimension of the class of functions F := {ℓ(x; ·) | x ∈ X}, assume that M ≥ ℓ(x; ξ) for all x ∈ X, ξ ∈ Ξ, and for some fixed δ > 0 define ρ = log(1/δ) + 10 VC(F) log VC(F). Then, with probability at least 1 − δ,

E_{P₀}[ℓ(x̂_rob; ξ)] ≤ u_n + O(1) Mρ/n ≤ inf_{x∈X} { E_{P₀}[ℓ(x; ξ)] + 2 sqrt{ 2ρ Var_{P̂_n}(ℓ(x; ξ)) / n } } + O(1) Mρ/n.   (4)

For large n, evaluating the objective (1) may be expensive; with fixed p = 𝟙/n, this has motivated an extensive literature in stochastic and online optimization [27, 23, 19, 16, 18]. Problem (1) does not admit quite such a straightforward approach. A first idea, common in the robust optimization literature [3], is to obtain a problem that may be written as a sum of individual terms by taking the dual of the inner supremum, yielding the convex problem

inf_{x∈X} sup_{p∈P_{ρ,n}} p^T ℓ(x) = inf_{x∈X, λ≥0, η∈R} (1/n) Σ_{i=1}^n λ f*((ℓ_i(x) − η)/λ) + (ρ/n) λ + η.   (5)

Here f*(s) = sup_{t≥0} {st − f(t)} is the Fenchel conjugate of the convex function f. While the above dual reformulation is jointly convex in (x, λ, η), canonical stochastic gradient descent (SGD) procedures [23] generally fail because the variance of the objective (and of its subgradients) explodes as λ → 0.
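To make the dual form (5) concrete, take the χ² divergence f(t) = ½(t − 1)². A short computation gives the conjugate f*(s) = sup_{t≥0}{st − ½(t − 1)²} = s + s²/2 for s ≥ −1 (where the maximizer t = 1 + s is feasible) and f*(s) = −½ for s < −1 (where t = 0 is optimal). By weak duality, the dual objective at any feasible (λ > 0, η) upper-bounds the inner supremum, and in particular the empirical mean, since the uniform weights are always feasible. The sketch below uses our own function names:

```python
import numpy as np

def fstar_chi2(s):
    """Fenchel conjugate of f(t) = 0.5*(t-1)^2 over t >= 0."""
    return np.where(s >= -1.0, s + 0.5 * s ** 2, -0.5)

def dual_objective(losses, rho, lam, eta):
    """Objective of the dual reformulation (5) at a feasible point (lam, eta).

    For any lam > 0 and eta in R this is an upper bound on
    sup_{p in P_{rho,n}} p^T losses, by weak duality."""
    losses = np.asarray(losses, dtype=float)
    n = losses.size
    return float(np.mean(lam * fstar_chi2((losses - eta) / lam))
                 + rho * lam / n + eta)
```

One can also see the variance blow-up mentioned above directly: the summands λ f*((ℓ_i(x) − η)/λ) behave like (ℓ_i(x) − η)²/(2λ) for λ near 0, so subgradient estimates scale as 1/λ.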
(This is not just a theoretical issue: in extensive simulations, which we omit because they are a bit boring, SGD and other heuristic approaches that impose shrinking bounds of the form λ_t ≥ c_t > 0 at each iteration t all fail to optimize the objective (5).) Instead, we view the robust ERM problem (1) as a game between the x (minimizing) player and the p (maximizing) player. Each player performs a variant of mirror descent (respectively, ascent), and we show that such an approach yields strong convergence guarantees as well as good empirical performance. In particular, we show (for many suitable divergences f) that if each ℓ_i is L-Lipschitz and X has radius bounded by R, then our procedure requires at most O((R²L² + ρ)/ε²) iterations to achieve an ε-accurate solution to problem (1), which is comparable to the number of iterations required by SGD [23]. Our solution strategy builds off of similar algorithms due to Nemirovski et al. [23, Sec. 3] and Ben-Tal et al. [4], and more directly off of procedures developed by Clarkson et al. [10] for solving two-player convex games. Most directly relevant to our approach is that of Shalev-Shwartz and Wexler [26], which solves problem (1) under the assumption that P_{ρ,n} = {p ∈ R^n₊ : p^T 𝟙 = 1} and that there is some x with perfect loss performance, that is, Σ_{i=1}^n ℓ_i(x) = 0. We generalize these approaches to more challenging f-divergence-constrained problems and, for the χ² divergence with f(t) = ½(t − 1)², develop efficient data structures that give a total run-time for solving problem (1) to ε-accuracy scaling as

O( (Cost(grad) + log n) (R²L² + ρ)/ε² ).

Here Cost(grad) is the cost of computing the gradient ∇ℓ_i(x) of a single term and performing a mirror descent step in x. Using SGD to solve the empirical risk minimization problem to ε-accuracy has run-time O(Cost(grad) R²L²/ε²), so we see that we can achieve the guarantees (3)–(4) offered by the robust formulation (1) at little additional computational cost. The remainder of the paper is organized as follows.
We present our abstract algorithm in Section 2 and give guarantees on its performance in Section 3. In Section 4, we give efficient computational schemes for the case f(t) = ½(t − 1)², presenting experiments in Section 5.

2 A Bandit Mirror Descent Algorithm for the Minimax Problem

Under the conditions that ℓ is convex and X is compact, standard results [7] show that there exists a saddle point (x★, p★) ∈ X × P_{ρ,n} for the robust problem (1) satisfying

sup{ p^T ℓ(x★) | p ∈ P_{ρ,n} } ≤ p★^T ℓ(x★) ≤ inf{ p★^T ℓ(x) | x ∈ X }.

We now describe a procedure for finding this saddle point by alternating a linear bandit-convex optimization step [8] for p and a stochastic mirror descent step for x. Our approach builds off of Nemirovski et al.'s [23] development of mirror descent for two-player stochastic games. To describe our algorithm, we require a few standard tools. Let ‖·‖_x denote a norm on the space X with dual norm ‖y‖_{x,*} = sup{⟨x, y⟩ : ‖x‖_x ≤ 1}, and let ψ_x be a differentiable strongly convex function on X, meaning ψ_x(x + Δ) ≥ ψ_x(x) + ∇ψ_x(x)^T Δ + ½‖Δ‖²_x for all Δ. Let ψ_p be a differentiable strictly convex function on P_{ρ,n}. For a differentiable convex function h, we define the Bregman divergence B_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩ ≥ 0. The Fenchel conjugate ψ*_p of ψ_p is

ψ*_p(s) := sup_p {⟨s, p⟩ − ψ_p(p)}  with  ∇ψ*_p(s) = argmax_p {⟨s, p⟩ − ψ_p(p)}.

(ψ*_p is differentiable because ψ_p is strongly convex [20, Chapter X].) We let g_i(x) ∈ ∂ℓ_i(x) be a particular subgradient selection. With this notation in place, we now give our algorithm, which alternates between gradient ascent steps on p and subgradient descent steps on x. Roughly, we would like to alternate gradient ascent steps p_{t+1} ← p_t + α_p ℓ(x_t) for p and descent steps x_{t+1} ← x_t − α_x g_i(x_t) for x, where i is a random index drawn according to p_t.
This procedure is inefficient, requiring time of order n·Cost(grad) in each iteration, so we use stochastic estimates of the loss vector ℓ(x_t) developed in the linear bandit literature [8] and variants of mirror descent to implement our algorithm.

Algorithm 1 Two-player Bandit Mirror Descent
1: Input: stepsizes α_x, α_p > 0; initialize x₁ ∈ X, p₁ = 𝟙/n
2: for t = 1, 2, . . . , T do
3:   Sample I_t ∼ p_t, that is, set I_t = i with probability p_{t,i}
4:   Compute the estimated loss for i ∈ [n]: ℓ̂_{t,i}(x) = (ℓ_i(x)/p_{t,i}) 1{I_t = i}
5:   Update p: w_{t+1} ← ∇ψ*_p(∇ψ_p(p_t) + α_p ℓ̂_t(x_t));  p_{t+1} ← argmin_{p∈P_{ρ,n}} B_{ψ_p}(p, w_{t+1})
6:   Update x: y_{t+1} ← ∇ψ*_x(∇ψ_x(x_t) − α_x g_{I_t}(x_t));  x_{t+1} ← argmin_{x∈X} B_{ψ_x}(x, y_{t+1})
7: end for

We specialize this general algorithm to specific choices of the divergence f and the functions ψ_x and ψ_p presently, first briefly discussing the algorithm. Note that in Step 5, the update for p depends only on a single index I_t ∈ {1, . . . , n} (the vector ℓ̂_t(x_t) is 1-sparse), which, as long as the updates for p are efficiently computable, can yield substantial performance benefits.

3 Regret Bounds

With our algorithm described, we now describe its convergence properties, specializing later to specific families of f-divergences. We begin with the following result on pseudo-regret, which (with minor modifications) is known [23, 10, 26]; we provide a proof for completeness in Appendix A.1.

Lemma 1. Let the sequences x_t and p_t be generated by Algorithm 1, and define x̂_T := (1/T) Σ_{t=1}^T x_t and p̂_T := (1/T) Σ_{t=1}^T p_t. Then for the saddle point (x★, p★) we have

T E[p★^T ℓ(x̂_T) − p̂_T^T ℓ(x★)] ≤ (1/α_x) B_{ψ_x}(x★, x₁) + (α_x/2) Σ_{t=1}^T E[‖g_{I_t}(x_t)‖²_{x,*}]   [T₁: ERM regret]
  + Σ_{t=1}^T E[ℓ̂_t(x_t)^T (p★ − p_t)]   [T₂: robust regret]

where the expectation is taken over the random draws I_t ∼ p_t. Moreover, E[ℓ̂_t(x_t)^T (p − p_t)] = E[ℓ(x_t)^T (p − p_t)] for any vector p.
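The estimator in Step 4 is the standard importance-weighted estimate from the linear-bandit literature: it is 1-sparse yet unbiased, since conditionally on p_t we have E[ℓ̂_{t,i}] = p_{t,i} · ℓ_i/p_{t,i} = ℓ_i, which is exactly the last claim of Lemma 1. A minimal sketch verifying this by enumerating the draw of I_t (function names are ours):

```python
import numpy as np

def loss_estimate(losses, p, it):
    """1-sparse importance-weighted estimate of the loss vector
    (Algorithm 1, Step 4): only coordinate `it` is nonzero."""
    est = np.zeros(len(losses))
    est[it] = losses[it] / p[it]
    return est

def expected_estimate(losses, p):
    """Exact expectation of the estimator over the draw I_t ~ p,
    computed by enumeration rather than Monte Carlo."""
    return sum(p[i] * loss_estimate(losses, p, i) for i in range(len(losses)))
```

The 1/p_{t,i} factor is also the source of the variance issue discussed below: coordinates with small probability produce large estimates when sampled, which is why the choice of ψ_p must keep the p_{t,i} from becoming too small.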
In particular, if B_{ψ_x}(x⋆, x_1) ≤ R² and each ℓ_i(x) is L-Lipschitz, then choosing α_x = (R/L)√(2/T) yields T1 ≤ RL√(2T). Because it is (relatively) easy to bound the term T1, the remainder of our arguments focus on bounding the second term T2, which is the regret that comes as a consequence of the random sampling for the loss vector ℓ̂_t. This regret depends strongly on the distance-generating function ψ_p.

To the end of bounding T2, we use the following bound for the pseudo-regret of p, which is standard [9, Chapter 11], [8, Thm 5.3]. For completeness we outline the proof in Appendix A.2.

Lemma 2. For any p ∈ P_{ρ,n}, Algorithm 1 satisfies

Σ_{t=1}^T ℓ̂_t(x_t)⊤(p − p_t) ≤ B_{ψ_p}(p, p_1)/α_p + (1/α_p) Σ_{t=1}^T B_{ψ_p^∗}(∇ψ_p(p_t) + α_p ℓ̂_t(x_t), ∇ψ_p(p_t)).   (6)

Lemma 2 shows that controlling the Bregman divergences B_{ψ_p} and B_{ψ_p^∗} is sufficient to bound T2 in the basic regret bound of Lemma 1. Now, we narrow our focus slightly to a specialized but broad family of divergences for which we can give more explicit results. For k ∈ ℝ, the Cressie-Read divergence [12] of order k is

f_k(t) = (t^k − kt + k − 1) / (k(k − 1)),   (7)

where f_k(t) = +∞ for t < 0, and for k ∈ {0, 1} we define f_k by its limits as k → 0 or k → 1 (we have f_1(t) = t log t − t + 1 and f_0(t) = −log t + t − 1).

Inspecting expression (6), we might hope that careful choices of ψ_p could yield regret bounds that grow slowly with T and have small dependence on the sample size n. Indeed, this is the case, as we show in the sequel: for each divergence f_k, we may carefully choose ψ_p to achieve small regret. To prove our bounds, however, it is crucial that the importance sampling estimator ℓ̂_t has small variance, which in turn necessitates that p_{t,i} not be too small. Generally, this means that in the update (Alg. 1, Line 5) used to construct p_{t+1}, we choose ψ_p(p) to grow quickly as p_i → 0 (e.g. |∂ψ_p(p)/∂p_i| → ∞), but there is a tradeoff in that this may cause large Bregman divergence terms in (6).
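As a quick numerical check on the family (7) (an illustration, not from the paper), the generic Cressie-Read formula does approach the stated k = 1 and k = 0 limit cases, and behaves as a divergence generator should, vanishing at t = 1 and nonnegative elsewhere:

```python
import numpy as np

def f_k(t, k, eps=1e-12):
    # Cressie-Read divergence generator (7); k in {0, 1} handled via its limits.
    t = np.asarray(t, dtype=float)
    if abs(k - 1.0) < eps:
        return t * np.log(t) - t + 1.0        # f_1
    if abs(k) < eps:
        return -np.log(t) + t - 1.0           # f_0
    return (t ** k - k * t + k - 1.0) / (k * (k - 1.0))

t = np.linspace(0.1, 3.0, 50)
# The generic formula approaches the stated limit cases as k -> 1 and k -> 0.
gap1 = np.max(np.abs(f_k(t, 1.0 + 1e-6) - (t * np.log(t) - t + 1.0)))
gap0 = np.max(np.abs(f_k(t, 1e-6) - (-np.log(t) + t - 1.0)))
```

For k = 2 this recovers f_2(t) = (1/2)(t − 1)², the χ² case used in Section 4.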
In the coming sections, we explore this tradeoff for various k, providing regret bounds for each of the Cressie-Read divergences (7). To control the B_{ψ_p^∗} terms in the bound (6), we use the curvature of ψ_p (dually, smoothness of ψ_p^∗) to show that B_{ψ_p^∗}(u, v) ≈ Σ_i (u_i − v_i)². For this approximation to hold, we shift our loss functions based on the f-divergence. When k ≥ 2, we assume that ℓ(x) ∈ [0, 1]^n. If k < 2, we instead apply Algorithm 1 with shifted losses ℓ′(x) = ℓ(x) − 𝟙, so that ℓ′(x) ∈ [−1, 0]^n. We call the method with ℓ′ Algorithm 1′, noting that ℓ̂_{t,i}(x_t) = ((ℓ_i(x_t) − 1)/p_{t,i}) 1{I_t = i} in this case.

3.1 Power divergences when k ∉ {0, 1}

For our first results, we prove a generic regret bound for Algorithm 1 when k ∉ {0, 1} by taking the distance-generating function ψ_p(p) = (1/(k(k−1))) Σ_{i=1}^n p_i^k, which is differentiable and strictly convex on ℝ₊^n. Before proceeding further, we first note that for p ∈ P_{ρ,n} and p_1 = 𝟙/n, we have

B_{ψ_p}(p, p_1) = ψ_p(p) − ψ_p(p_1) − ∇ψ_p(p_1)⊤(p − p_1) = (n^{−k}/(k(k−1))) Σ_{i=1}^n [(np_i)^k − k·np_i + k − 1] = n^{−k} D_f(p ‖ 𝟙/n) ≤ n^{−k} ρ,   (8)

bounding the first term in expression (6). From Lemma 2, it remains to bound the Bregman divergence terms B_{ψ_p^∗}. Using smoothness of ψ_p^∗ in the positive orthant, we obtain the following bound.

Theorem 1. Assume that ℓ(x) ∈ [0, 1]^n. For any real-valued k ≥ 2 and any p ∈ P_{ρ,n}, Algorithm 1 satisfies

Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] = Σ_{t=1}^T E[ℓ̂_t(x_t)⊤(p − p_t)] ≤ n^{−k}ρ/α_p + (α_p/2) Σ_{t=1}^T E[Σ_{i : p_{t,i} > 0} p_{t,i}^{1−k}].   (9)

For k ≤ 2 with k ∉ {0, 1}, an identical bound holds for Algorithm 1′ with ℓ′(x) = ℓ(x) − 𝟙.

See Appendix A.3 for the proof. We now use Theorem 1 to obtain concrete convergence guarantees for Cressie-Read divergences with parameter k < 1, giving sublinear (in T) regret bounds independent of n. In the corollary, whose proof we provide in Appendix A.4, we let C_{k,ρ} = (1 − k)(1 − kρ)^{−k}, which is positive for k < 0. Corollary 1.
For k ∈ (−∞, 0) and α_p = C_{k,ρ}^{(k−1)/2} n^{−k} √(2ρ/T), Algorithm 1′ with ℓ′(x) = ℓ(x) − 𝟙 ∈ [−1, 0]^n achieves the regret bound

Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] = Σ_{t=1}^T E[ℓ̂_t(x_t)⊤(p − p_t)] ≤ √(2 C_{k,ρ}^{1−k} ρ T).

For k ∈ (0, 1) and α_p = n^{−k}√(2ρ/T), Algorithm 1′ with ℓ′(x) = ℓ(x) − 𝟙 ∈ [−1, 0]^n achieves the regret bound

Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] = Σ_{t=1}^T E[ℓ̂_t(x_t)⊤(p − p_t)] ≤ √(2ρT).

It is worth noting that despite the robustification, the above regret is independent of n. In the special case k ∈ (0, 1), Theorem 1 recovers the regret bound for the implicitly normalized forecaster of Audibert and Bubeck [1] (cf. [8, Ch 5.4]).

3.2 Regret bounds using the KL divergences (k = 1 and k = 0)

The choice f_1(t) = t log t − t + 1 yields D_f(P‖Q) = D_kl(P‖Q), and in this case we take ψ_p(p) = Σ_{i=1}^n p_i log p_i, which means that Algorithm 1 performs entropic gradient ascent. To control the divergence B_{ψ_p^∗}, we use the rescaled losses ℓ′(x) = ℓ(x) − 𝟙 (as we have k < 2). Then we have the following bound, whose proof we provide in Appendix A.5.

Theorem 2. Algorithm 1′ with loss ℓ′(x) = ℓ(x) − 𝟙 yields

Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] = Σ_{t=1}^T E[ℓ̂_t(x_t)⊤(p − p_t)] ≤ ρ/(n α_p) + (α_p/2) n T.   (10)

In particular, when α_p = (1/n)√(2ρ/T), we have Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] ≤ √(2ρT).

Using k = 0, so that f_0(t) = −log t + t − 1, we obtain D_f(P‖Q) = D_kl(Q‖P), which results in a robustification technique identical to Owen's original empirical likelihood [24]. We again use the rescaled losses ℓ′(x) = ℓ(x) − 𝟙, but in this scenario we use the proximal function ψ_p(p) = −Σ_{i=1}^n log p_i in Algorithm 1′. Then we have the following regret bound (see Appendix A.6).

Theorem 3. Algorithm 1′ with loss ℓ′(x) = ℓ(x) − 𝟙 yields

Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] = Σ_{t=1}^T E[ℓ̂_t(x_t)⊤(p − p_t)] ≤ ρ/α_p + (α_p/2) T.

In particular, when α_p = √(2ρ/T), we have Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] ≤ √(2ρT).
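For the k = 1 case above, the mirror step w = ∇ψ_p^∗(∇ψ_p(p) + α ℓ̂) with the negative entropy is multiplicative. The sketch below (illustrative; the KL Bregman projection onto P_{ρ,n} is omitted, only a plain normalization is kept) performs one such entropic ascent step on a 1-sparse loss estimate and checks numerically that the negative-entropy Bregman divergence is the KL divergence on the simplex:

```python
import numpy as np

def entropic_ascent_step(p, lhat, alpha):
    # With psi_p(p) = sum_i p_i log p_i the mirror step is multiplicative:
    # w_i = p_i * exp(alpha * lhat_i), followed here by plain normalization.
    w = p * np.exp(alpha * lhat)
    return w / w.sum()

def negent(p):
    return float(np.sum(p * np.log(p)))

def bregman_negent(p, q):
    # B_psi(p, q) for the negative entropy; on the simplex this equals KL(p || q).
    return negent(p) - negent(q) - float(np.dot(np.log(q) + 1.0, p - q))

p0 = np.full(4, 0.25)
lhat = np.array([2.0, 0.0, 0.0, 0.0])   # 1-sparse estimated loss, as in Step 4
p1 = entropic_ascent_step(p0, lhat, alpha=0.5)
```

The sampled coordinate's weight grows exponentially in its estimated loss, which is exactly why large ℓ̂ values (small p_{t,i}) are dangerous without the loss shift or regularization discussed above.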
In both of these cases, the expected pseudo-regret of our robust gradient procedure is independent of n and grows as √T, which is essentially identical to that achieved by pure online gradient methods.

3.3 Power divergences (k > 1)

Corollary 1 provides convergence guarantees for power divergences f_k with k < 1, but says nothing about the case k > 1; the choice ψ_p(p) = (1/(k(k−1))) Σ_{i=1}^n p_i^k allows the individual probabilities p_{t,i} to be too small, which can cause excess variance of ℓ̂. To remedy this, we regularize the robust problem (1) by re-defining our set of robust empirical distributions, taking

P_{ρ,n,δ} := { p ∈ ℝ₊^n : p ≥ δ𝟙/n, Σ_{i=1}^n f(np_i) ≤ ρ },

where we no longer constrain the weights p to satisfy 𝟙⊤p = 1. Nonetheless, it is still possible to show that the guarantees (2) and (3) hold with P_{ρ,n,δ} replacing P_{ρ,n}. Indeed, we may give bounds for the pseudo-regret of the regularized problem with P_{ρ,n,δ}, where we apply Algorithm 1 with a slightly modified sampling strategy, drawing indices i according to the normalized distribution p_t / Σ_{i=1}^n p_{t,i} and appropriately normalizing the loss estimate via

ℓ̂_{t,i}(x_t) = (Σ_{i=1}^n p_{t,i}) (ℓ_i(x_t)/p_{t,i}) 1{I_t = i}.

This vector is still unbiased for ℓ(x_t). Define the constant C_k := max{t : f_k(t) ≤ t} ∨ (ρ/n) < ∞ (so C_2 = 2 + √3). With our choice ψ_p(p) = (1/(k(k−1))) Σ_{i=1}^n p_i^k and for δ > 0, we obtain the following result, whose proof we provide in Appendix A.7.

Theorem 4. For k ∈ [2, ∞) and any p ∈ P_{ρ,n,δ}, Algorithm 1 with α_p = n^{−k}√(ρ δ^{k−1}/(4 C_k³ T)) yields

Σ_{t=1}^T E[ℓ(x_t)⊤(p − p_t)] = Σ_{t=1}^T E[ℓ̂_t(x_t)⊤(p − p_t)] ≤ 2C_k √(ρ C_k δ^{1−k} T).

For k ∈ (1, 2), assume that ℓ(x) ∈ [−1, 0]^n. Then Algorithm 1 gives identical bounds.

4 Efficient updates when k = 2

The previous section shows that Algorithm 1, with careful choice of ψ_p, yields sublinear regret bounds. The projection step p_{t+1} = argmin_{p∈P_{ρ,n,δ}} B_{ψ_p}(p, w_{t+1}), however, can still take time linear in n despite the sparsity of ℓ̂_t(x_t) (see Appendix B for concrete updates for each of our cases).
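Before turning to the k = 2 computation, the unbiasedness claimed for the normalized loss estimate of Section 3.3 is easy to verify exactly, since the expectation is a finite sum over the n possible draws (illustrative check, not from the paper):

```python
import numpy as np

# Normalized estimate: I ~ q = p / sum(p) and lhat_i = sum(p) * l_i / p_i * 1{I = i}.
# Enumerating the n outcomes of I recovers E[lhat] = l exactly.
rng = np.random.default_rng(1)
n = 6
p = rng.uniform(0.2, 2.0, size=n)   # weights in P_rho_n_delta need not sum to 1
ell = rng.uniform(0.0, 1.0, size=n)
q = p / p.sum()                     # normalized sampling distribution

expectation = np.zeros(n)
for i in range(n):                  # enumerate the outcomes I = i
    lhat = np.zeros(n)
    lhat[i] = p.sum() * ell[i] / p[i]
    expectation += q[i] * lhat
```

The same computation also makes the variance issue visible: each realized estimate scales like sum(p)/p_i, which is why the floor p ≥ δ𝟙/n matters for k > 1.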
In this section, we show how to compute the bandit mirror descent update in Alg. 1, Line 5, in O(log n) time for f_2(t) = (1/2)(t − 1)² and ψ_p(p) = (1/2) Σ_{i=1}^n p_i². Building off of Duchi et al. [14], we use carefully designed balanced binary search trees (BSTs) to this end. The Lagrangian for the update p_{t+1} = argmin_{p∈P_{ρ,n,δ}} B_{ψ_p}(p, w_{t+1}) (suppressing t) is

L(p, λ, θ) = B_{ψ_p}(p, w) − (λ/n²)(ρ − Σ_{i=1}^n f_2(np_i)) − θ⊤(p − δ𝟙/n),

where λ ≥ 0 and θ ∈ ℝ₊^n. The KKT conditions imply (1 + λ)p = w + λ𝟙/n + θ, and strict complementarity yields

p(λ) = [ w/(1 + λ) + (λ/(1 + λ)) 𝟙/n − δ𝟙/n ]₊ + δ𝟙/n,   (11)

where p(λ) = argmin_{p∈P_{ρ,n,δ}} inf_{θ∈ℝ₊^n} L(p, λ, θ). Substituting this into the Lagrangian, we obtain the concave dual objective

g(λ) := sup_θ inf_{p∈P_{ρ,n,δ}} L(p, λ, θ) = B_{ψ_p}(p(λ), w) − (λ/n²)(ρ − Σ_{i=1}^n f_2(np_i(λ))).

We can run a bisection search on the monotone function g′(λ) to find λ such that g′(λ) = 0. After algebraic manipulations, we have

∂g(λ)/∂λ = g_1(λ) Σ_{i∈I(λ)} w_i² + g_2(λ) Σ_{i∈I(λ)} w_i + g_3(λ)|I(λ)| + (1 − δ)²/(2n) − ρ/n²,

where I(λ) := {1 ≤ i ≤ n : w_i ≥ δ/n + (δ/n − 1)λ} and (see expression (18) in Appendix B.4)

g_1(λ) = 1/(1 + λ)²,  g_2(λ) = −2/(n(1 + λ)²),  g_3(λ) = 1/(n²(1 + λ)²) − (1 − δ)²/(2n).

To see that we can solve for a λ⋆ that achieves |g′(λ⋆)| ≤ ε in O(log n + log(1/ε)) time, it suffices to evaluate Σ_{i∈I(λ)} w_i^q for q = 0, 1, 2 in O(log n) time. To this end, we store the w's in a balanced search tree (e.g., a red-black tree) keyed on the weights up to a multiplicative and an additive constant. A key ingredient in our implementation is that the BST stores in each node the sum of the appropriate powers of the values in its left and right subtrees [14]. See Appendix C for detailed pseudocode for all operations required in Algorithm 1: each subroutine (sampling I_t ∼ p_t, updating w, computing λ⋆, and updating p(λ⋆)) requires O(log n) time using standard BST operations.

5 Experiments

In this section, we present experimental results demonstrating the efficiency of our algorithm.
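The dual structure above can be exercised with a naive implementation: bisect on λ and recover p(λ) from (11). The sketch below targets the divergence-constraint residual directly (equivalent to g′(λ) = 0 at an active constraint, by complementary slackness), and each evaluation costs O(n); the paper's BST bookkeeping is what brings this down to O(log n) per update. Illustrative, with invented inputs:

```python
import numpy as np

def f2(t):
    return 0.5 * (t - 1.0) ** 2

def p_of_lambda(w, lam, delta, n):
    # Expression (11): p(lam) = [w/(1+lam) + lam/((1+lam)n) - delta/n]_+ + delta/n
    base = w / (1.0 + lam) + lam / ((1.0 + lam) * n) - delta / n
    return np.maximum(base, 0.0) + delta / n

def project_chi2(w, rho, delta):
    # Bregman projection onto P_rho_n_delta for f_2 and psi_p(p) = 0.5*||p||^2,
    # via bisection on the dual variable lambda (naive O(n) per evaluation).
    n = len(w)
    def h(lam):                     # divergence-constraint residual
        return float(np.sum(f2(n * p_of_lambda(w, lam, delta, n))) - rho)
    if h(0.0) <= 0.0:               # constraint already slack: lambda = 0
        return p_of_lambda(w, 0.0, delta, n)
    lo, hi = 0.0, 1.0
    while h(hi) > 0.0:              # grow the bracket until h changes sign
        hi *= 2.0
    for _ in range(200):            # bisect; h is nonincreasing in lambda
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0.0 else (lo, mid)
    return p_of_lambda(w, hi, delta, n)

w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # an (invented) unprojected iterate
p_proj = project_chi2(w, rho=0.5, delta=0.1)
```

The returned point satisfies both constraints of P_{ρ,n,δ}: the floor p ≥ δ𝟙/n and Σ_i f_2(np_i) ≤ ρ (with the divergence constraint active here).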
We first compare our method with existing algorithms for solving the robust problem (1) on a synthetic dataset, then investigate the robust formulation on real datasets to show how the calibrated confidence guarantees behave in practice, especially in comparison to ERM. We experiment on natural high-dimensional datasets as well as those with many training examples. Our implementation uses the efficient updates outlined in Section 4. Throughout our experiments, we use the best tuned step sizes for all methods. For the first two experiments, we set ρ = χ²_{1,0.9} (the 0.9 quantile of the χ² distribution with one degree of freedom), so that the resulting robust objective (1) is a calibrated 95% upper confidence bound on the optimal population risk. For our last experiment, the asymptotic regime (3) fails to hold due to the high-dimensional nature of the problem, so we choose ρ = 50 (somewhat arbitrarily; other ρ give similar behavior). We take X = {x ∈ ℝ^d : ‖x‖₂ ≤ R} for our experiments.

For the experiment with synthetic data, we compare our algorithm against two benchmark methods for solving the robust problem (1). The first is an interior point method for the dual reformulation (5) using the Gurobi solver [17]. The second is gradient descent, viewing the robust formulation (1) as a minimization problem with objective x ↦ sup_{p∈P_{ρ,n,δ}} p⊤ℓ(x). To efficiently compute the gradient, we bisect over the dual form (5) with respect to λ ≥ 0 and η. We use the best step sizes for both our proposed bandit-based algorithm and gradient descent. To generate the data, we choose a true classifier x⋆ ∈ ℝ^d and sample the feature vectors a_i iid ∼ N(0, I) for i ∈ [n]. We set the labels to b_i = sign(a_i⊤x⋆) and flip each with probability 10%. We use the hinge loss ℓ_i(x) = [1 − b_i a_i⊤x]₊ with n = 2000, d = 500 and R = 10 in our experiment. In Figure 1a, we plot the log optimality ratio (log of the current objective value over the optimal value) against runtime for the three algorithms.
While the interior point method (IPM) obtains accurate solutions, it scales relatively poorly in n and d (the initial flat region in the plot is due to pre-computations for factorization within the solver). Gradient descent performs quite well in this moderately sized example, although each iteration takes Ω(n) time. We also perform experiments on two datasets with larger n: the Adult dataset [22] and the Reuters RCV1 Corpus [21]. The Adult dataset has n = 32,561 training and 16,281 test examples with 123-dimensional features. We use the binary logistic loss ℓ_i(x) = log(1 + exp(−b_i a_i⊤x)) to classify whether the income level exceeds $50K. For the Reuters RCV1 Corpus, our task is to classify whether a document belongs to the Corporate category. With d = 47,236 features, we randomly split the 804,410 examples into 723,969 training (90% of the data) and 80,441 (10% of the data) test examples. We use the hinge loss and solve the binary classification problem for the document type.

To test the efficiency of our method in large-scale settings, we plot the log ratio log(R_n(x)/R_n(x⋆)), where R_n(x) = sup_{p∈P_{ρ,n,δ}} p⊤ℓ(x), versus CPU time for our algorithm and gradient descent in Figure 1b. As is somewhat typical of stochastic gradient-based methods, our bandit-based optimization algorithm quickly obtains a solution with a small optimality gap (about 2% relative error), while the gradient descent method eventually achieves better loss.

In Figures 2a-2d, we plot the loss value and the classification error compared with applying pure stochastic gradient descent to the standard empirical loss, plotting the confidence bound for the robust method as well.

Figure 1: Comparison of solvers. (a) Synthetic data (n = 2000, d = 500). (b) Reuters Corpus (n ≈ 7.2·10⁵, d ≈ 5·10⁴).
Figure 2: Comparison with ERM. (a) Adult: logistic loss. (b) Adult: classification error. (c) Reuters: hinge loss. (d) Reuters: classification error.
As the theory suggests [15, 13], the robust objective provides upper confidence bounds on the true risk (approximated by the average loss on the test sample).

Acknowledgments

JCD and HN were partially supported by the SAIL-Toyota Center for AI Research and National Science Foundation award NSF-CAREER-1553086. HN was also partially supported by a Samsung Fellowship.

References
[1] J.-Y. Audibert and S. Bubeck. Regret bounds and minimax policies under partial monitoring. Journal of Machine Learning Research, pages 2635–2686, 2010.
[2] A. Ben-Tal, L. El Ghaoui, and A. Nemirovski. Robust Optimization. Princeton University Press, 2009.
[3] A. Ben-Tal, D. den Hertog, A. De Waegenaere, B. Melenberg, and G. Rennen. Robust solutions of optimization problems affected by uncertain probabilities. Management Science, 59(2):341–357, 2013.
[4] A. Ben-Tal, E. Hazan, T. Koren, and S. Mannor. Oracle-based robust optimization via online learning. Operations Research, 63(3):628–638, 2015.
[5] J. Borwein, A. J. Guirao, P. Hájek, and J. Vanderwerff. Uniformly convex functions on Banach spaces. Proceedings of the American Mathematical Society, 137(3):1081–1091, 2009.
[6] S. Boucheron, O. Bousquet, and G. Lugosi. Theory of classification: a survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[7] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[8] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.
[9] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[10] K. Clarkson, E. Hazan, and D. Woodruff. Sublinear optimization for machine learning. Journal of the Association for Computing Machinery, 59(5), 2012.
[11] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, 2001.
[12] N. Cressie and T. R. Read. Multinomial goodness-of-fit tests. Journal of the Royal Statistical Society, Series B (Methodological), pages 440–464, 1984.
[13] J. C. Duchi and H. Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. arXiv:1610.02581 [stat.ML], 2016. URL https://arxiv.org/abs/1610.02581.
[14] J. C. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[15] J. C. Duchi, P. W. Glynn, and H. Namkoong. Statistics of robust optimization: A generalized empirical likelihood approach. arXiv:1610.03425 [stat.ML], 2016. URL https://arxiv.org/abs/1610.03425.
[16] S. Ghadimi and G. Lan. Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework. SIAM Journal on Optimization, 22(4):1469–1492, 2012.
[17] Gurobi Optimization, Inc. Gurobi optimizer reference manual, 2015. URL http://www.gurobi.com.
[18] E. Hazan. The convex optimization approach to regret minimization. In Optimization for Machine Learning, chapter 10. MIT Press, 2012.
[19] E. Hazan and S. Kale. An optimal algorithm for stochastic strongly convex optimization. In Proceedings of the Twenty Fourth Annual Conference on Computational Learning Theory, 2011.
[20] J. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I & II. Springer, New York, 1993.
[21] D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research, 5:361–397, 2004.
[22] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[23] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[24] A. B. Owen. Empirical Likelihood. CRC Press, 2001.
[25] S. Shalev-Shwartz and S. Ben-David. Understanding Machine Learning: From Theory to Algorithms. Cambridge University Press, 2014.
[26] S. Shalev-Shwartz and Y. Wexler. Minimizing the maximal loss: How and why? In Proceedings of the 32nd International Conference on Machine Learning, 2016.
[27] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
Breaking the Bandwidth Barrier: Geometrical Adaptive Entropy Estimation

Weihao Gao∗, Sewoong Oh†, and Pramod Viswanath∗
University of Illinois at Urbana-Champaign, Urbana, IL 61801
{wgao9,swoh,pramodv}@illinois.edu

Abstract

Estimators of information theoretic measures such as entropy and mutual information are a basic workhorse for many downstream applications in modern data science. State-of-the-art approaches have been either geometric (nearest neighbor (NN) based) or kernel based (with a globally chosen bandwidth). In this paper, we combine both these approaches to design new estimators of entropy and mutual information that outperform state-of-the-art methods. Our estimator uses local bandwidth choices of k-NN distances with a finite k, independent of the sample size. Such a local and data dependent choice improves performance in practice, but the bandwidth is vanishing at a fast rate, leading to a non-vanishing bias. We show that the asymptotic bias of the proposed estimator is universal; it is independent of the underlying distribution. Hence, it can be precomputed and subtracted from the estimate. As a byproduct, we obtain a unified way of obtaining both kernel and NN estimators. The corresponding theoretical contribution relating the asymptotic geometry of nearest neighbors to order statistics is of independent mathematical interest.

1 Introduction

Unsupervised representation learning is one of the major themes of modern data science; a common theme among the various approaches is to extract maximally "informative" features via information-theoretic metrics (entropy, mutual information and their variations). The primary reason for the popularity of information-theoretic measures is that they are invariant to one-to-one transformations and that they obey natural axioms such as data processing.
Such an approach is evident in many applications, as varied as computational biology [11], sociology [20] and information retrieval [17], with the citations representing a mere smattering of recent works. Within mainstream machine learning, a systematic effort at unsupervised clustering and hierarchical information extraction is conducted in the recent works of [25, 23]. The basic workhorse in all these methods is the computation of mutual information (pairwise and multivariate) from i.i.d. samples. Indeed, sample-efficient estimation of mutual information emerges as the central scientific question of interest in a variety of applications, and is also of fundamental interest to the statistics, machine learning and information theory communities. While these estimation questions have been studied in the past three decades (and summarized in [28]), the renewed importance of estimating information-theoretic measures in a sample-efficient manner is persuasively argued in a recent work [2], where the authors note that existing estimators perform poorly in several key scenarios of central interest (especially when the high dimensional random variables are strongly related to each other). The most common estimators (featured in scientific software packages) are nonparametric and involve k nearest neighbor (NN) distances between the samples. The widely used estimator of mutual information is the one by Kraskov, Stögbauer and Grassberger [10], christened the KSG estimator (nomenclature based on the authors, cf. [2]); while this estimator works well in practice (and performs much better than other approaches such as those based on kernel density estimation procedures), it still suffers in high dimensions.

∗Coordinated Science Lab and Department of Electrical and Computer Engineering
†Coordinated Science Lab and Department of Industrial and Enterprise Systems Engineering
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
The basic issue is that the KSG estimator (and the underlying differential entropy estimator based on nearest neighbor distances by Kozachenko and Leonenko (KL) [9]) does not take advantage of the fact that the samples could lie in a smaller dimensional subspace (more generally, manifold) despite the high dimensionality of the data itself. Such lower dimensional structures effectively act as boundaries, causing the estimator to suffer from what is known as boundary bias. Ameliorating this deficiency is the central theme of recent works [3, 2, 16], each of which aims to improve upon the classical KL (differential) entropy estimator of [9]. A local SVD is used to heuristically improve the density estimate at each sample point in [2], while a local Gaussian density (with empirical mean and covariance weighted by NN distances) is heuristically used for the same purpose in [16]. Both these approaches, while inspired and intuitive, come with no theoretical guarantees (even consistency), and from a practical perspective involve a delicate choice of key hyperparameters. An effort towards a systematic study is initiated in [3], which connects the aforementioned heuristic efforts of [2, 16] to the local log-likelihood density estimation methods [6, 15] from theoretical statistics. The local density estimation method is a strong generalization of traditional kernel density estimation methods, but requires a delicate normalization which necessitates the solution of certain integral equations (cf. Equation (9) of [15]). Indeed, such an elaborate numerical effort is one of the key impediments to the entropy estimator of [3] being practically valuable.
A second key impediment is that theoretical guarantees (such as consistency) can only be provided when the bandwidth is chosen globally (leading to poor sample complexity in practice), and consistency requires the bandwidth h to be chosen such that nh^d → ∞ and h → 0, where n is the sample size and d is the dimension of the random variable of interest. More generally, it appears that a systematic application of local log-likelihood methods to estimate functionals of the unknown density from i.i.d. samples is missing in the theoretical statistics literature (despite local log-likelihood methods for regression and density estimation being standard textbook fare [29, 14]). We resolve each of these deficiencies in this paper by undertaking a comprehensive study of estimating the (differential) entropy and mutual information from i.i.d. samples using sample-dependent bandwidth choices (typically fixed k-NN distances). This effort allows us to connect disparate threads of ideas from seemingly different arenas: NN methods, local log-likelihood methods, asymptotic order statistics, and the sample-dependent heuristic but inspired methods for mutual information estimation suggested in the work of [10].

Main Results: We make the following contributions.

1. Density estimation: Parameterizing the log density by a polynomial of degree p, we derive simple closed form expressions for the local log-likelihood maximization problem for the cases p ≤ 2 in arbitrary dimensions, with Gaussian kernel choices. This derivation, posed as an exercise in [14, Exercise 5.2], significantly improves computational efficiency over similar endeavors in the recent efforts of [3, 16, 26].

2.
Entropy estimation: Using resubstitution of the local density estimate, we derive a simple closed form estimator of the entropy using a sample-dependent bandwidth choice (the k-NN distance, where k is a fixed small integer independent of the sample size); this estimator outperforms state-of-the-art entropy estimators in a variety of settings. Since the bandwidth is data dependent and vanishes too fast (because k is fixed), the estimator has a bias, for which we derive a closed form expression and show that it is independent of the underlying distribution and hence can be easily corrected. This is our main theoretical contribution, and it involves new theorems on asymptotic statistics of nearest neighbors generalizing classical work in probability theory [19], which might be of independent mathematical interest.

3. Generalized view: We show that seemingly very different approaches to entropy estimation, the recent works of [2, 3, 16] and the classical fixed k-NN estimator of Kozachenko and Leonenko [9], can all be cast in the local log-likelihood framework as specific kernel and sample-dependent bandwidth choices. This allows for a unified view, which we theoretically justify by showing that resubstitution entropy estimation for any kernel choice using fixed k-NN distances as bandwidth involves a bias term that is independent of the underlying distribution (but depends on the specific choice of kernel and parametric density family). Thus our work is a strict mathematical generalization of the classical work of [9].

4. Mutual information estimation: The inspired work of [10] constructs a mutual information estimator that subtly altered (in a sample-dependent way) the three KL entropy estimation terms, leading to superior empirical performance.
We show that the underlying idea behind this change can be incorporated into our framework as well, leading to a novel mutual information estimator that combines the two ideas and outperforms state-of-the-art estimators in a variety of settings.

In the rest of this paper we describe these main results, with the sections organized in roughly the same order as the enumerated list.

2 Local likelihood density estimation (LLDE)

Given n i.i.d. samples X_1, . . . , X_n, estimating the unknown density f_X(·) in ℝ^d is a very basic statistical task. Local likelihood density estimators [15, 6] constitute the state of the art and are specified by a weight function K : ℝ^d → ℝ (also called a kernel), a degree p ∈ ℤ₊ of the polynomial approximation, and a bandwidth h ∈ ℝ, and maximize the local log-likelihood

L_x(f) = Σ_{j=1}^n K((X_j − x)/h) log f(X_j) − n ∫ K((u − x)/h) f(u) du,   (1)

where the maximization is over an exponential polynomial family locally approximating f(u) near x:

log_e f_{a,x}(u) = a_0 + ⟨a_1, u − x⟩ + ⟨u − x, a_2(u − x)⟩ + · · · + a_p[u − x, u − x, . . . , u − x],   (2)

parameterized by a = (a_0, . . . , a_p) ∈ ℝ^{1×d×d²×···×d^p}, where ⟨·, ·⟩ denotes the inner product and a_p[u, . . . , u] the p-th order tensor projection. The local likelihood density estimate (LLDE) is defined as f̂_n(x) = f_{â(x),x}(x) = e^{â_0(x)}, where â(x) ∈ argmax_a L_x(f_{a,x}). The maximizer is characterized by a series of nonlinear equations and does not have a closed form in general. We present below a few choices of degrees and weight functions that admit closed form solutions. Concretely, for p = 0, it is known that the LLDE reduces to the standard kernel density estimator (KDE) [15]:

f̂_n(x) = (1/n) Σ_{i=1}^n K((x − X_i)/h) / ∫ K((u − x)/h) du.   (3)

If we choose the step function K(u) = I(‖u‖ ≤ 1) with a local and data-dependent choice of bandwidth h = ρ_{k,x}, where ρ_{k,x} is the k-NN distance from x, then the above estimator recovers the popular k-NN density estimate as a special case, namely, for C_d = π^{d/2}/Γ(d/2 + 1),

f̂_n(x) = (1/n) Σ_{i=1}^n I(‖X_i − x‖ ≤ ρ_{k,x}) / Vol{u ∈ ℝ^d : ‖u − x‖ ≤ ρ_{k,x}} = k / (n C_d ρ_{k,x}^d).   (4)

For higher-degree local likelihood, we provide simple closed form solutions, with a proof in Section D. Somewhat surprisingly, this result has eluded prior works [16, 26] and [3], which specifically attempted the evaluation for p = 2. Part of the subtlety in the result is to critically use the fact that the parametric family (e.g., the polynomial family in (2)) need not be normalized itself; the local log-likelihood maximization ensures that the resulting density estimate is correctly normalized so that it integrates to 1.

Proposition 2.1. [14, Exercise 5.2] For a degree p ∈ {1, 2}, the maximizer of the local likelihood (1) admits a closed form solution when using the Gaussian kernel K(u) = e^{−‖u‖²/2}. In the case p = 1,

f̂_n(x) = (S_0 / (n(2π)^{d/2} h^d)) exp{ −(1/2)(1/S_0²)‖S_1‖² },   (5)

where S_0 ∈ ℝ and S_1 ∈ ℝ^d are defined for given x ∈ ℝ^d and h ∈ ℝ as

S_0 ≡ Σ_{j=1}^n exp(−‖X_j − x‖²/(2h²)),  S_1 ≡ Σ_{j=1}^n (1/h)(X_j − x) exp(−‖X_j − x‖²/(2h²)).   (6)

In the case p = 2, for S_0 and S_1 defined as above,

f̂_n(x) = (S_0 / (n(2π)^{d/2} h^d |Σ|^{1/2})) exp{ −(1/2)(1/S_0²) S_1⊤ Σ^{−1} S_1 },   (7)

where |Σ| is the determinant and S_2 ∈ ℝ^{d×d} and Σ ∈ ℝ^{d×d} are defined as

S_2 ≡ Σ_{j=1}^n (1/h²)(X_j − x)(X_j − x)⊤ exp(−‖X_j − x‖²/(2h²)),  Σ ≡ (S_0 S_2 − S_1 S_1⊤)/S_0²,   (8)

where it follows from Cauchy-Schwarz that Σ is positive semidefinite.

One of the major drawbacks of the KDE and k-NN methods is the increased bias near boundaries. The LLDE provides a principled approach that automatically corrects for the boundary bias, which takes effect only for p ≥ 2 [6, 21].
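The degree-1 closed form (5)-(6) is direct to implement. The sketch below (illustrative, brute-force sums) also makes a useful identity visible: with a Gaussian kernel the prefactor S_0/(n(2π)^{d/2}h^d) in (5) is exactly the KDE (3), so the degree-1 estimate is the KDE times a correction factor exp(−‖S_1‖²/(2S_0²)) ≤ 1, and the two coincide wherever S_1 = 0 (for instance at a point of exact sample symmetry):

```python
import numpy as np

def kde(x, X, h):
    # Degree-0 case, i.e. the standard Gaussian KDE, eq. (3).
    n, d = X.shape
    z = (X - x) / h
    K = np.exp(-0.5 * np.sum(z * z, axis=1))
    return K.sum() / (n * (2 * np.pi) ** (d / 2) * h ** d)

def llde_deg1(x, X, h):
    # Degree-1 local likelihood density estimate, eqs. (5)-(6), Gaussian kernel.
    z = (X - x) / h
    K = np.exp(-0.5 * np.sum(z * z, axis=1))
    S0 = K.sum()                         # S_0 in (6)
    S1 = (z * K[:, None]).sum(axis=0)    # S_1 in (6)
    return kde(x, X, h) * np.exp(-0.5 * np.dot(S1, S1) / S0 ** 2)

x = np.zeros(2)
V = np.array([[1.0, 0.5], [-0.3, 0.8], [0.6, -0.2]])
X_sym = np.vstack([V, -V])   # exactly symmetric about x, so S_1 = 0
```

At an asymmetric point the correction factor strictly shrinks the estimate, which is the mechanism behind the boundary-bias reduction discussed above.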
This explains the performance improvement for p = 2 in the figure below (left panel), and the gap increases with the correlation as boundary effect becomes more prominent. We use the proposed estimators with p ∈{0, 1, 2} to estimate the mutual information between two jointly Gaussian random variables with correlation r, from n = 500 samples, using resubstitution methods explained in the next sections. Each point is averaged over 100 instances. In the right panel, we generate i.i.d. samples from a 2-dimensional Gaussian with correlation 0.9, and found local approximation bf(u −x∗) around x∗denoted by the blue ∗in the center. Standard k-NN approach fits a uniform distribution over a circle enclosing k = 20 nearest neighbors (red circle). The green lines are the contours of the degree-2 polynomial approximation with bandwidth h = ρ20,x. The figure illustrates that k-NN method suffers from boundary effect, where it underestimates the probability by over estimating the volume in (4). However, degree-2 LDDE is able to correctly capture the local structure of the pdf, correcting for boundary biases. Despite the advantages of the LLDE, it requires the bandwidth to be data independent and vanishingly small (sublinearly in sample size) for consistency almost everywhere – both of these are impediments to practical use since there is no obvious systematic way of choosing these hyperparameters. On the other hand, if we restrict our focus to functionals of the density, then both these issues are resolved: this is the focus of the next section where we show that the bandwidth can be chosen to be based on fixed k-NN distances and the resulting universal bias easily corrected. 0.001 0.01 0.1 1 10 0.000001 0.0001 0.001 1 p=0 p=1 p=2 (1 −r) where r is correlation E[(I −bI)2] X1 X2 Figure 1: The boundary bias becomes less significant and the gap closes as correlation decreases for estimating the mutual information (left). Local approximation around the blue ∗in the center. 
The degree-2 local likelihood approximation (contours in green) automatically captures the local structure, whereas the standard k-NN approach (uniform distribution in the red circle) fails (right). 3 k-LNN Entropy Estimator We consider resubstitution entropy estimators of the form bH(X) = −(1/n) Pn i=1 log bfn(Xi) and propose to use the local likelihood density estimator in (7) with a choice of bandwidth that is local (varying for each point x) and adaptive (based on the data). Concretely, we choose, for each sample point Xi, the bandwidth hXi to be the distance to its k-th nearest neighbor, ρk,i. Precisely, we propose the following k-Local Nearest Neighbor (k-LNN) entropy estimator of degree 2: bH(n) kLNN(X) = −(1/n) Pn i=1 { log[ S0,i / (n(2π)d/2ρd k,i|Σi|1/2) ] −(1/2)(1/S2 0,i) ST 1,i Σ−1 i S1,i } −Bk,d , (9) where subtracting Bk,d, defined in Theorem 1, removes the asymptotic bias, and k ∈Z+ is the only hyperparameter, determining the bandwidth. In practice k is a small integer fixed to be in the range 4–8. We only use the subset of the ⌈log n⌉ nearest samples, Ti = {j ∈[n] : j ̸= i and ∥Xi −Xj∥≤ρ⌈log n⌉,i}, in computing the quantities below: S0,i ≡ P j∈Ti e−∥Xj−Xi∥2/(2ρ2 k,i) , S1,i ≡ P j∈Ti (1/ρk,i)(Xj −Xi) e−∥Xj−Xi∥2/(2ρ2 k,i) , S2,i ≡ P j∈Ti (1/ρ2 k,i)(Xj −Xi)(Xj −Xi)T e−∥Xj−Xi∥2/(2ρ2 k,i) , Σi ≡ (S0,iS2,i −S1,iST 1,i)/S2 0,i . (10) The truncation is important for computational efficiency, but the analysis works as long as the truncation level m is O(n1/(2d)−ε) for a positive ε that can be arbitrarily small. For a larger m, for example of Ω(n), those neighbors that are further away have a different asymptotic behavior. We show in Theorem 1 that the asymptotic bias is independent of the underlying distribution and hence can be precomputed and removed, under mild conditions on a twice continuously differentiable pdf f(x) (cf. Lemma 3.1 below). Theorem 1. Suppose k ≥3 and X1, X2, . . . , Xn ∈Rd are i.i.d.
samples from a twice continuously differentiable pdf f(x), then lim n→∞E[ bH(n) kLNN(X)] = H(X) , (11) where Bk,d in (9) is a constant that only depends on k and d. Further, if E[(log f(X))2] < ∞then the variance of the proposed estimator is bounded by Var[ bH(n) kLNN(X)] = O((log n)2/n). This proves the L1 and L2 consistency of the k-LNN estimator; we relegate the proof to Section F for ease of reading the main part of the paper. The proof assumes Ansatz 1 (also stated in Section F), which states that a certain exchange of limit holds. As noted in [18], such an assumption is common in the literature on consistency of k-NN estimators, where it has been implicitly assumed in existing analyses of entropy estimators including [9, 5, 12, 27], without explicitly stating that such assumptions are being made. Our choice of a local adaptive bandwidth hXi = ρk,i is crucial in ensuring that the asymptotic bias Bk,d does not depend on the underlying distribution f(x). This relies on a fundamental connection to the theory of asymptotic order statistics made precise in Lemma 3.1, which also gives the explicit formula for the bias below. The main idea is that the empirical quantities used in the estimate (10) converge in large n limit to similar quantities defined over order statistics. We make this intuition precise in the next section. We define order statistics over i.i.d. standard exponential random variables E1, E2, . . . , Em and i.i.d. random variables ξ1, ξ2, . . . , ξm drawn uniformly (the Haar measure) over the unit sphere in Rd, for a variable m ∈Z+. We define for α ∈{0, 1, 2}, ˜S(m) α ≡ m X j=1 ξ(α) j (Pj ℓ=1 Eℓ)α ( Pk ℓ=1 Eℓ)α exp ( −( Pj ℓ=1 Eℓ)2 2( Pk ℓ=1 Eℓ)2 ) , (12) where ξ(0) j = 1, ξ(1) j = ξj ∈Rd, and ξ(2) j = ξjξT j ∈Rd×d, and let ˜Sα = limm→∞˜S(m) α and eΣ = (1/ ˜S0)2( ˜S0 ˜S2 −˜S1 ˜ST 1 ). 
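As an illustration, the empirical per-point quantities in (10), whose large-n limits are the ˜Sα just defined, can be sketched as follows; this is our reading of (10), and the helper name and the PSD check are ours.

```python
import numpy as np

def klnn_local_stats(i, X, k):
    """Local statistics S0, S1, S2, Sigma of Eq. (10) at sample X[i], with
    bandwidth rho_{k,i} and neighbors truncated at the ceil(log n) nearest."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    dists = np.linalg.norm(X - X[i], axis=1)
    dists[i] = np.inf                        # exclude the point itself
    order = np.argsort(dists)
    rho_k = dists[order[k - 1]]              # k-NN distance: the local bandwidth
    nbrs = order[:int(np.ceil(np.log(n)))]   # truncated neighbor set T_i
    Z = (X[nbrs] - X[i]) / rho_k             # normalized neighbor offsets
    w = np.exp(-0.5 * np.sum(Z ** 2, axis=1))
    S0 = w.sum()
    S1 = Z.T @ w
    S2 = (Z * w[:, None]).T @ Z
    Sigma = (S0 * S2 - np.outer(S1, S1)) / S0 ** 2
    return S0, S1, S2, Sigma, rho_k
```

By the same Cauchy-Schwarz argument as after (8), the returned Sigma is positive semidefinite.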
We show that the limiting ˜Sα's are well-defined (in the proof of Theorem 1) and are directly related to the bias terms in the resubstitution estimator of entropy: Bk,d = E[ log(Pk ℓ=1 Eℓ) + (d/2) log 2π −log Cd −log ˜S0 + (1/2) log |eΣ| + (1/(2 ˜S2 0)) ˜ST 1 eΣ−1 ˜S1 ] . (13) In practice, we propose using a fixed small k such as five. For k ≤3 the estimator has a very large variance, and numerical evaluation of the corresponding bias also converges slowly. For some typical choices of k, we provide approximate evaluations below, where −0.0183(±6) indicates an empirical mean µ = −183 × 10−4 with confidence interval 6 × 10−4. In these numerical evaluations, we truncated the summation at m = 50,000. Although we prove that Bk,d converges in m, in practice one can choose m based on the number of samples, and Bk,d can be evaluated for that m. Theoretical contribution: Our key technical innovation is a fundamental connection between nearest neighbor statistics and asymptotic order statistics, stated below as Lemma 3.1: we show that the (normalized) distances ρℓ,i jointly converge to the standardized uniform order statistics, and the directions (Xjℓ−Xi)/∥Xjℓ−Xi∥ converge to independent uniform distributions (the Haar measure) over the unit sphere.

        k=4          k=5          k=6          k=7          k=8          k=9
d=1  -0.0183(±6)  -0.0233(±6)  -0.0220(±4)  -0.0200(±4)  -0.0181(±4)  -0.0171(±3)
d=2  -0.1023(±5)  -0.0765(±4)  -0.0628(±4)  -0.0528(±3)  -0.0448(±3)  -0.0401(±3)

Table 1: Numerical evaluation of Bk,d, via sampling 1,000,000 instances for each pair (k, d). Conditioned on Xi = x, the proposed estimator uses nearest neighbor statistics on Zℓ,i ≡Xjℓ−x, where Xjℓ is the ℓ-th nearest neighbor from x, so that Zℓ,i = ((Xjℓ−Xi)/∥Xjℓ−Xi∥)ρℓ,i. Naturally, all the techniques we develop in this paper generalize to any estimator that depends on the nearest neighbor statistics {Zℓ,i}i,ℓ∈[n] – and the value of such a general result is demonstrated later (in Section 4) when we evaluate the bias in similarly inspired entropy estimators [2, 3, 16, 9].
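Because Bk,d is distribution-free, it can be approximated by direct Monte Carlo over the order statistics in (12)-(13). A rough sketch follows; our truncation level and sample count are far below the paper's m = 50,000 and 1,000,000 instances, so the output is only a coarse approximation.

```python
import numpy as np
from math import gamma, log, pi

def bias_constant(k, d, m=2000, n_mc=300, seed=0):
    """Coarse Monte Carlo approximation of B_{k,d} in (13), built from the
    exponential order statistics and uniform sphere directions of (12)."""
    rng = np.random.default_rng(seed)
    c_d = pi ** (d / 2) / gamma(d / 2 + 1)           # volume of the unit ball
    vals = []
    for _ in range(n_mc):
        T = np.cumsum(rng.exponential(size=m))       # partial sums of i.i.d. Exp(1)
        r = T / T[k - 1]                             # ratios to the k-th partial sum
        g = rng.standard_normal((m, d))
        xi = g / np.linalg.norm(g, axis=1, keepdims=True)  # uniform on the sphere
        w = np.exp(-0.5 * r ** 2)                    # weights exp(-T_j^2 / (2 T_k^2))
        S0 = w.sum()
        S1 = xi.T @ (r * w)
        S2 = (xi * ((r ** 2) * w)[:, None]).T @ xi
        Sigma = (S0 * S2 - np.outer(S1, S1)) / S0 ** 2
        _, logdet = np.linalg.slogdet(Sigma)
        quad = S1 @ np.linalg.solve(Sigma, S1) / (2.0 * S0 ** 2)
        vals.append(log(T[k - 1]) + 0.5 * d * log(2 * pi) - log(c_d)
                    - log(S0) + 0.5 * logdet + quad)
    return float(np.mean(vals))
```

With the paper's much larger m and instance count, this procedure should reproduce the entries of Table 1.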
Lemma 3.1. Let E1, E2, . . . , Em be i.i.d. standard exponential random variables and ξ1, ξ2, . . . , ξm be i.i.d. random variables drawn uniformly over the unit (d −1)-dimensional sphere in d dimensions, independent of the Ei’s. Suppose f is twice continuously differentiable and x ∈Rd satisfies that there exists ε > 0 such that f(a) > 0, ∥∇f(a)∥= O(1) and ∥Hf(a)∥= O(1) for any ∥a −x∥< ε. Then for any m = O(log n), we have the following convergence conditioned on Xi = x: lim n→∞dTV((cdnf(x))1/d( Z1,i, . . . , Zm,i ) , ( ξ1E1/d 1 , . . . , ξm( m X ℓ=1 Eℓ)1/d )) = 0 . (14) where dTV(·, ·) is the total variation and cd is the volume of unit Euclidean ball in Rd. Empirical contribution: Numerical experiments suggest that the proposed estimator outperforms state-of-the-art entropy estimators, and the gap increases with correlation. The idea of using k-NN distance as bandwidth for entropy estimation was originally proposed by Kozachenko and Leonenko in [9], and is a special case of the k-LNN method we propose with degree 0 and a step kernel. We refer to Section 4 for a formal comparison. Another popular resubstitution entropy estimator is to use KDE in (3) [7], which is a special case of the k-LNN method with degree 0, and the Gaussian kernel is used in simulations. As comparison, we also study a new estimator [8] based on von Mises expansion (as opposed to simple re-substitution) which has an improved convergence rate in the large sample regime. We relegate simulation results to Section. B in the appendix. 4 Universality of the k-LNN approach In this section, we show that Theorem 1 holds universally for a general family of entropy estimators, specified by the choice of k ∈Z+, degree p ∈Z+, and a kernel K : Rd →R, thus allowing a unified view of several seemingly disparate entropy estimators [9, 2, 3, 16]. The template of the entropy estimator is the following: given n i.i.d. 
samples, we first compute the local density estimate by maximizing the local likelihood (1) with bandwidth ρk,i, and then resubstitute it to estimate the entropy: bH(n) k,p,K(X) = −(1/n) Pn i=1 log bfn(Xi). Theorem 2. For the family of estimators described above, under the hypotheses of Theorem 1, if the solution to the maximization ba(x) = arg maxa Lx(fa,x) exists for all x ∈{X1, . . . , Xn}, then for any choice of k ≥p + 1, p ∈Z+, and K : Rd →R, the asymptotic bias is independent of the underlying distribution: limn→∞E[ bH(n) k,p,K(X)] = H(X) + eBk,p,K,d , (15) for some constant eBk,p,K,d that only depends on k, p, K and d. We provide a proof in Section G. Although in general there is no simple analytical characterization of the asymptotic bias eBk,p,K,d, it can be readily numerically computed: since eBk,p,K,d is independent of the underlying distribution, one can run the estimator over i.i.d. samples from any distribution and numerically approximate the bias for any choice of the parameters. However, when the maximization ba(x) = arg maxa Lx(fa,x) admits a closed form solution, as is the case with the proposed k-LNN, then eBk,p,K,d can be characterized explicitly in terms of uniform order statistics. This family of estimators is general: for instance, the popular KL estimator is a special case with p = 0 and a step kernel K(u) = I(∥u∥≤1). [9] showed (in a remarkable result at the time) that the asymptotic bias is independent of the dimension d and can be computed exactly to be log n −ψ(n) + ψ(k) −log k, where ψ is the digamma function defined as ψ(x) = Γ−1(x) dΓ(x)/dx. The dimension-independent nature of this asymptotic bias term (with convergence rate O(n−1/2) for d = 1 in [24, Theorem 1] and O(n−1/d) for general d in [4]) is special to the choice of p = 0 and the step kernel; we explain this in detail in Section G, later in the paper. Analogously, the estimator in [2] can be viewed as a special case with p = 0 and an ellipsoidal step kernel.
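As a reference point, the p = 0 step-kernel special case (the KL estimator of [9]) is simple enough to sketch in full. Subtracting the bias log n − ψ(n) + ψ(k) − log k from the naive resubstitution estimate yields ψ(n) − ψ(k) + log C_d + (d/n) Σ_i log ρ_{k,i}; the small digamma helper is ours.

```python
import numpy as np
from math import gamma, log, pi

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x+1) - 1/x plus an asymptotic series."""
    x = float(x)
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def kl_entropy(X, k=4):
    """Kozachenko-Leonenko entropy estimate: the k-NN resubstitution estimate
    with the log n - psi(n) + psi(k) - log k bias subtracted (O(n^2) brute force)."""
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    rho_k = np.sort(dists, axis=1)[:, k - 1]   # k-NN distance per point
    c_d = pi ** (d / 2) / gamma(d / 2 + 1)
    return digamma(n) - digamma(k) + log(c_d) + d * float(np.mean(np.log(rho_k)))
```

On standard Gaussian samples in d = 1 the estimate should be close to the true entropy (1/2) log(2πe) ≈ 1.419.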
5 k-LNN Mutual information estimator Given an entropy estimator bHKL, mutual information can be estimated as bI3KL = bHKL(X) + bHKL(Y ) −bHKL(X, Y ). In [10], Kraskov, Stögbauer, and Grassberger introduced bIKSG(X; Y ) by coupling the choices of the bandwidths. The joint entropy is estimated in the usual way, but for the marginal entropy, instead of using k-NN distances from {Xj}, the bandwidth hXi = ρk,i(X, Y ) is chosen, which is the k-th nearest neighbor distance from (Xi, Yi) in the joint data {(Xj, Yj)}. Consider bI3LNN(X; Y ) = bHkLNN(X) + bHkLNN(Y ) −bHkLNN(X, Y ). Inspired by [10], we introduce a novel mutual information estimator, denoted by bILNN−KSG(X; Y ), where for the joint (X, Y ) we use the LNN entropy estimator proposed in (9), and for the marginal entropies we use the bandwidth hXi = ρk,i(X, Y ) coupled to the joint estimator. Empirically, we observe that bIKSG outperforms bI3KL everywhere, validating the use of correlated bandwidths. However, the performance of bILNN−KSG is similar to bI3LNN – sometimes better and sometimes worse. Empirical Contribution: Numerical experiments show that for most regimes of correlation, both 3LNN and LNN-KSG outperform other state-of-the-art estimators, and the gap increases with the correlation r. In the large sample limit, all estimators find the correct mutual information, but both 3LNN and LNN-KSG are significantly more robust compared to other approaches. Mutual information estimators have been recently proposed in [2, 3, 16] based on local likelihood maximization. However, they involve heuristic choices of hyperparameters or solving elaborate optimizations and numerical integrations, which are far from easy to implement. Simulation results can be found in Section C in the appendix.
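For concreteness, here is a compact sketch of the bandwidth coupling of [10] in its widely used max-norm form (the digamma helper is ours, and the implementation is O(n²) brute force): the joint k-NN distance sets a per-point bandwidth inside which marginal neighbors are counted.

```python
import numpy as np
from math import log

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x+1) - 1/x plus an asymptotic series."""
    x = float(x)
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    f = 1.0 / (x * x)
    return r + log(x) - 0.5 / x - f * (1.0 / 12 - f * (1.0 / 120 - f / 252))

def ksg_mi(x, y, k=4):
    """KSG mutual information estimate, max-norm variant:
    psi(k) + psi(n) - <psi(n_x + 1) + psi(n_y + 1)>."""
    x = np.asarray(x, float).reshape(len(x), -1)
    y = np.asarray(y, float).reshape(len(y), -1)
    n = len(x)
    dx = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=-1)
    dy = np.max(np.abs(y[:, None, :] - y[None, :, :]), axis=-1)
    dz = np.maximum(dx, dy)                      # max-norm distance in the joint space
    np.fill_diagonal(dz, np.inf)
    rho = np.sort(dz, axis=1)[:, k - 1]          # joint k-NN distance per point
    nx = np.sum(dx < rho[:, None], axis=1) - 1   # strict counts, self excluded
    ny = np.sum(dy < rho[:, None], axis=1) - 1
    avg = np.mean([digamma(a + 1) + digamma(b + 1) for a, b in zip(nx, ny)])
    return digamma(k) + digamma(n) - avg
```

For jointly Gaussian (X, Y) with correlation r, the true value is I(X; Y) = −(1/2) log(1 − r²), which gives a convenient sanity check.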
6 Breaking the bandwidth barrier While k-NN distance based bandwidths are routine in practical usage [21], the main finding of this work is that they also turn out to be the “correct” mathematical choice for the purpose of asymptotically unbiased estimation of an integral functional such as the entropy −R f(x) log f(x); we briefly discuss the ramifications below. Traditionally, when the goal is to estimate f(x), it is well known that the bandwidth should satisfy h →0 and nhd →∞ for KDEs to be consistent. As a rule of thumb, h = 1.06 bσ n−1/5 is suggested when d = 1, where bσ is the sample standard deviation [29, Chapter 6.3]. On the other hand, when estimating entropy, as well as other integral functionals, it is known that resubstitution estimators of the form −(1/n) Pn i=1 log bf(Xi) achieve variances scaling as O(1/n) independent of the bandwidth [13]. This allows for a bandwidth as small as O(n−1/d). The bottleneck in choosing such a small bandwidth is the bias, scaling as O(h2 + (nhd)−1 + En) [13], where the lower order dependence on n, dubbed En, is generally not known. The barrier in choosing a global bandwidth of h = O(n−1/d) is the strictly positive bias, whose value depends on the unknown distribution and cannot be subtracted off. However, perhaps surprisingly, the proposed local and adaptive choice of the k-NN distance admits an asymptotic bias that is independent of the unknown underlying distribution. Manually subtracting off the non-vanishing bias gives an asymptotically unbiased estimator, with a potentially faster convergence as numerically compared below. Figure 2 illustrates how the k-NN based bandwidth significantly improves upon, say, the rule-of-thumb choice of O(n−1/(d+4)) explained above and another choice of O(n−1/(d+2)). In the left figure, we use the setting from Figure 3 (right) but with correlation r = 0.999. On the right, we generate X ∼N(0, 1) and U uniform on [0, 0.01], let Y = X + U, and estimate I(X; Y ).
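The d = 1 rule of thumb and the fixed-bandwidth KDE it feeds are trivial to state in code; this is a sketch for comparison purposes, and the helper names are ours.

```python
import numpy as np

def silverman_bandwidth(x):
    """Rule-of-thumb bandwidth h = 1.06 * sigma_hat * n^(-1/5) for d = 1 [29, Ch. 6.3]."""
    x = np.asarray(x, dtype=float)
    return 1.06 * x.std(ddof=1) * len(x) ** (-0.2)

def kde(x_eval, samples, h):
    """Gaussian KDE with a single global bandwidth h (the fixed-bandwidth baseline)."""
    z = (np.asarray(x_eval, float)[:, None] - np.asarray(samples, float)[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))
```

For n = 2000 standard Gaussian samples this gives h ≈ 0.23, and the KDE at the origin should be near the true density value 1/sqrt(2π) ≈ 0.399 (with a small smoothing bias).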
Following recent advances in [12, 22], the proposed local estimator has the potential to be extended to, for example, Rényi entropy, but with a multiplicative bias as opposed to an additive one. [Figure: both panels plot E[(bI −I)2] against the number of samples n ∈ {100, 200, 400, 800}, comparing the kNN bandwidth with fixed bandwidths n−1/(d+4) and n−1/(d+2).] Figure 2: Local and adaptive bandwidth significantly improves over rule-of-thumb fixed bandwidths. Acknowledgement This work is supported by NSF SaTC award CNS-1527754, NSF CISE award CCF-1553452, NSF CISE award CCF-1617745. We thank the anonymous reviewers for their constructive feedback. References [1] G. Biau and L. Devroye. Lectures on the Nearest Neighbor Method. Springer, 2016. [2] S. Gao, G. Ver Steeg, and A. Galstyan. Efficient estimation of mutual information for strongly dependent variables. International Conference on Artificial Intelligence and Statistics (AISTATS), 2015. [3] S. Gao, G. Ver Steeg, and A. Galstyan. Estimating mutual information by local Gaussian approximation. 31st Conference on Uncertainty in Artificial Intelligence (UAI), 2015. [4] W. Gao, S. Oh, and P. Viswanath. Demystifying fixed k-nearest neighbor information estimators. arXiv preprint arXiv:1604.03006, 2016. [5] M. N. Goria, N. N. Leonenko, V. V. Mergel, and P. L. Novi Inverardi. A new class of random vector entropy estimators and its applications in testing statistical hypotheses. Nonparametric Statistics, 17(3):277–297, 2005. [6] N. Hjort and M. Jones. Locally parametric nonparametric density estimation. The Annals of Statistics, pages 1619–1647, 1996. [7] H. Joe. Estimation of entropy and other functionals of a multivariate density. Annals of the Institute of Statistical Mathematics, 41(4):683–697, 1989. [8] K. Kandasamy, A. Krishnamurthy, B. Poczos, and L. Wasserman.
Nonparametric von Mises estimators for entropies, divergences and mutual informations. In NIPS, pages 397–405, 2015. [9] L. F. Kozachenko and N. N. Leonenko. Sample estimate of the entropy of a random vector. Problemy Peredachi Informatsii, 23(2):9–16, 1987. [10] A. Kraskov, H. Stögbauer, and P. Grassberger. Estimating mutual information. Physical Review E, 69(6):066138, 2004. [11] S. Krishnaswamy, M. Spitzer, M. Mingueneau, S. Bendall, O. Litvin, E. Stone, D. Peer, and G. Nolan. Conditional density-based analysis of T cell signaling in single-cell data. Science, 346:1250689, 2014. [12] N. Leonenko, L. Pronzato, and V. Savani. A class of Rényi information estimators for multidimensional densities. The Annals of Statistics, 36(5):2153–2182, 2008. [13] H. Liu, L. Wasserman, and J. D. Lafferty. Exponential concentration for mutual information estimation with application to forests. In NIPS, pages 2537–2545, 2012. [14] C. Loader. Local regression and likelihood. Springer Science & Business Media, 2006. [15] C. R. Loader. Local likelihood density estimation. The Annals of Statistics, 24(4):1602–1618, 1996. [16] D. Lombardi and S. Pant. Nonparametric k-nearest-neighbor entropy estimator. Physical Review E, 93(1):013310, 2016. [17] C. D. Manning, P. Raghavan, and H. Schütze. Introduction to information retrieval, volume 1. Cambridge University Press, Cambridge, 2008. [18] D. Pál, B. Póczos, and C. Szepesvári. Estimation of Rényi entropy and mutual information based on generalized nearest-neighbor graphs. In Advances in Neural Information Processing Systems, pages 1849–1857, 2010. [19] R.-D. Reiss. Approximate distributions of order statistics: with applications to nonparametric statistics. Springer Science & Business Media, 2012. [20] D. Reshef, Y. Reshef, H. Finucane, S. Grossman, G. McVean, P. Turnbaugh, E. Lander, M. Mitzenmacher, and P. Sabeti. Detecting novel associations in large data sets. Science, 334(6062):1518–1524, 2011. [21] S. J. Sheather. Density estimation.
Statistical Science, 19(4):588–597, 2004. [22] S. Singh and B. Póczos. Analysis of k-nearest neighbor distances with application to entropy estimation. arXiv preprint arXiv:1603.08578, 2016. [23] G. Ver Steeg and A. Galstyan. The information sieve. To appear in ICML, arXiv:1507.02284, 2016. [24] A. B. Tsybakov and E. C. Van der Meulen. Root-n consistent estimators of entropy for densities with unbounded support. Scandinavian Journal of Statistics, pages 75–83, 1996. [25] G. Ver Steeg and A. Galstyan. Discovering structure in high-dimensional data through correlation explanation. In Advances in Neural Information Processing Systems, pages 577–585, 2014. [26] P. Vincent and Y. Bengio. Locally weighted full covariance Gaussian density estimation. Technical Report 1240, 2003. [27] Q. Wang, S. R. Kulkarni, and S. Verdú. Divergence estimation for multidimensional densities via k-nearest-neighbor distances. IEEE Transactions on Information Theory, 55(5):2392–2405, 2009. [28] Q. Wang, S. R. Kulkarni, and S. Verdú. Universal estimation of information measures for analog sources. Foundations and Trends in Communications and Information Theory, 5(3):265–353, 2009. [29] L. Wasserman. All of nonparametric statistics. Springer Science & Business Media, 2006.
Domain Separation Networks Konstantinos Bousmalis∗ Google Brain Mountain View, CA konstantinos@google.com George Trigeorgis∗† Imperial College London London, UK g.trigeorgis@imperial.ac.uk Nathan Silberman Google Research New York, NY nsilberman@google.com Dilip Krishnan Google Research Cambridge, MA dilipkay@google.com Dumitru Erhan Google Brain Mountain View, CA dumitru@google.com Abstract The cost of large scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach circumventing this cost is training models on synthetic data where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms to manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain. We hypothesize that explicitly modeling what is unique to each domain can improve a model’s ability to extract domain-invariant features. Inspired by work on private-shared component analysis, we explicitly learn to extract image representations that are partitioned into two subspaces: one component which is private to each domain and one which is shared across domains. Our model is trained to not only perform the task we care about in the source domain, but also to use the partitioned representation to reconstruct the images from both domains. 
Our novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios and additionally produces visualizations of the private and shared representations enabling interpretation of the domain adaptation process. 1 Introduction The recent success of supervised learning algorithms has been partially attributed to the large-scale datasets [16, 22] on which they are trained. Unfortunately, collecting, annotating, and curating such datasets is an extremely expensive and time-consuming process. An alternative would be creating large-scale datasets in non-realistic but inexpensive settings, such as computer generated scenes. While such approaches offer the promise of effectively unlimited amounts of labeled data, models trained in such settings do not generalize well to realistic domains. Motivated by this, we examine the problem of learning representations that are domain–invariant in scenarios where the data distributions during training and testing are different. In this setting, the source data is labeled for a particular task and we would like to transfer knowledge from the source to the target domain for which we have no ground truth labels. In this work, we focus on the tasks of object classification and pose estimation, where the object of interest is in the foreground of a given image, for both source and target domains. The source and ∗Authors contributed equally. †This work was completed while George Trigeorgis was at Google Brain in Mountain View, CA. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. target pixel distributions can differ in a number of ways. We define “low-level” differences in the distributions as those arising due to noise, resolution, illumination and color. “High-level” differences relate to the number of classes, the types of objects, and geometric variations, such as 3D position and pose. 
We assume that our source and target domains differ mainly in terms of the distribution of low level image statistics and that they have high level parameters with similar distributions and the same label space. We propose a novel architecture, which we call Domain Separation Networks (DSN), to learn domaininvariant representations. Previous work attempts to either find a mapping from representations of the source domain to those of the target [26], or find representations that are shared between the two domains [8, 28, 17]. While this, in principle, is a good idea, it leaves the shared representations vulnerable to contamination by noise that is correlated with the underlying shared distribution [24]. Our model, in contrast, introduces the notion of a private subspace for each domain, which captures domain specific properties, such as background and low level image statistics. A shared subspace, enforced through the use of autoencoders and explicit loss functions, captures representations shared by the domains. By finding a shared subspace that is orthogonal to the subspaces that are private, our model is able to separate the information that is unique to each domain, and in the process produce representations that are more meaningful for the task at hand. Our method outperforms the state-of-the-art domain adaptation techniques on a range of datasets for object classification and pose estimation, while having an interpretability advantage by allowing the visualization of these private and shared representations. In Sec. 2, we survey related work and introduce relevant terminology. Our architecture, loss functions, and learning regime are presented in Sec. 3. Experimental results and discussion are given in Sec. 4. Finally, conclusions and directions for future work are in Sec. 5. 2 Related Work Learning to perform unsupervised domain adaptation is an open theoretical and practical problem. 
While much prior art exists, our literature review focuses primarily on Convolutional Neural Network (CNN) based methods due to their empirical superiority on this problem [8, 17, 26, 29]. Ben-David et al. [4] provide upper bounds on a domain-adapted classifier in the target domain. They introduce the idea of training a binary classifier trained to distinguish source and target domains. The error that this “domain incoherence” classifier provides (along with the error of a source domain specific classifier) combine to give the overall bounds. Mansour et al. [18] extend the theory of [4] to handle the case of multiple source domains. Ganin et al. [7, 8] and Ajakan et al. [2] use adversarial training to find domain–invariant representations in-network. Their Domain–Adversarial Neural Networks (DANN) exhibit an architecture whose first few feature extraction layers are shared by two classifiers trained simultaneously. The first is trained to correctly predict task-specific class labels on the source data while the second is trained to predict the domain of each input. DANN minimizes the domain classification loss with respect to parameters specific to the domain classifier, while maximizing it with respect to the parameters that are common to both classifiers. This minimax optimization becomes possible via the use of a gradient reversal layer (GRL). Tzeng et al. [29] and Long et al. [17] proposed versions of this model where the maximization of the domain classification loss is replaced by the minimization of the Maximum Mean Discrepancy (MMD) metric [11]. The MMD metric is computed between features extracted from sets of samples from each domain. The Deep Domain Confusion Network by Tzeng et al. [29] has an MMD loss at one layer in the CNN architecture while Long et al. [17] proposed the Deep Adaptation Network that has MMD losses at multiple layers. Other related techniques involve learning a transformation from one domain to the other. 
In this setup, the feature extraction pipeline is fixed during the domain adaptation optimization. This has been applied in various non-CNN based approaches [9, 5, 10] as well as the recent CNN-based Correlation Alignment (CORAL) [26] algorithm which “recolors” whitened source features with the covariance of features from the target domain. 3 Method While the Domain Separation Networks (DSNs) could in principle be applicable to other learning tasks, without loss of generality, we mainly use image classification as the cross-domain task. Given a labeled dataset in a source domain and an unlabeled dataset in a target domain, our goal is to train a classifier on data from the source domain that generalizes to the target domain. Figure 1: A shared-weight encoder Ec(x) learns to capture representation components for a given input sample that are shared among domains. A private encoder Ep(x) (one for each domain) learns to capture domain-specific components of the representation. A shared decoder learns to reconstruct the input sample by using both the private and source representations. The private and shared representation components are pushed apart with soft subspace orthogonality constraints Ldifference, whereas the shared representation components are kept similar with a similarity loss Lsimilarity. Like previous efforts [7, 8], our model is trained such that the representations of images from the source domain are similar to those from the target domain. This allows a classifier trained on images from the source domain to generalize as the inputs to the classifier are in theory invariant to the domain of origin. However, these representations might trivially include noise that is highly correlated with the shared representation, as shown by Salzmann et al. [24].
Our main novelty is that, inspired by recent work [14, 24, 30] on shared-space component analysis, DSNs explicitly model both private and shared components of the domain representations. The two private components of the representation are specific to each domain and the shared component of the representation is shared by both domains. To induce the model to produce such split representations, we add a loss function that encourages independence of these parts. Finally, to ensure that the private representations are still useful (avoiding trivial solutions) and to add generalizability, we also add a reconstruction loss. The combination of these objectives is a model that produces a shared representation that is similar for both domains and a private representation that is domain specific. By partitioning the space in such a manner, the classifier trained on the shared representation is better able to generalize across domains as its inputs are uncontaminated with aspects of the representation that are unique to each domain. Let XS = {(xs i, ys i)}Ns i=0 represent a labeled dataset of Ns samples from the source domain where xs i ∼DS and let Xt = {xt i}Nt i=0 represent an unlabeled dataset of Nt samples from the target domain where xt i ∼DT . Let Ec(x; θc) be a function parameterized by θc which maps an image x to a hidden representation hc representing features that are common or shared across domains. Let Ep(x; θp) be an analogous function which maps an image x to a hidden representation hp representing features that are private to each domain. Let D(h; θd) be a decoding function mapping a hidden representation h to an image reconstruction ˆx. Finally, G(h; θg) represents a task-specific function, parameterized by θg that maps from hidden representations h to the task-specific predictions ˆy. The resulting Domain Separation Network (DSN) model is depicted in Fig. 1. 
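To fix ideas, the composition of the four components can be sketched with toy linear stand-ins; everything here (sizes, weight initialization, names) is illustrative and ours, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_h, n_classes = 8, 4, 3   # toy sizes

# Toy linear stand-ins for the four DSN components:
Wc, Wp, Wd, Wg = (0.1 * rng.standard_normal(s) for s in
                  [(dim_h, dim_x), (dim_h, dim_x), (dim_x, dim_h), (n_classes, dim_h)])

Ec = lambda x: Wc @ x        # shared encoder E_c(x; theta_c)
Ep = lambda x: Wp @ x        # private encoder E_p(x; theta_p), one per domain
D = lambda h: Wd @ h         # shared decoder D(h; theta_d)
def G(h):                    # task classifier G(h; theta_g) with softmax output
    z = Wg @ h
    e = np.exp(z - z.max())
    return e / e.sum()

x = rng.standard_normal(dim_x)
x_hat = D(Ec(x) + Ep(x))     # reconstruction uses shared + private parts
y_hat = G(Ec(x))             # the task prediction uses only the shared part
```

The key structural point is visible in the last two lines: the decoder sees the sum of both subspaces, while the classifier sees only the shared one.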
3.1 Learning Inference in a DSN model is given by ˆx = D(Ec(x) + Ep(x)) and ˆy = G(Ec(x)) where ˆx is the reconstruction of the input x and ˆy is the task-specific prediction. The goal of training is to minimize the following loss with respect to parameters Θ = {θc, θp, θd, θg}: L = Ltask + α Lrecon + β Ldifference + γ Lsimilarity (1) 3 where α, β, γ are weights that control the interaction of the loss terms. The classification loss Ltask trains the model to predict the output labels we are ultimately interested in. Because we assume the target domain is unlabeled, the loss is applied only to the source domain. We want to minimize the negative log-likelihood of the ground truth class for each source domain sample: Ltask = − Ns X i=0 ys i · log ˆys i, (2) where ys i is the one-hot encoding of the class label for source input i and ˆys i are the softmax predictions of the model: ˆys i = G(Ec(xs i)). We use a scale-invariant mean squared error term [6] for the reconstruction loss Lrecon which is applied to both domains: Lrecon = Ns X i=1 Lsi_mse(xs i, ˆxs i) + Nt X i=1 Lsi_mse(xt i, ˆxt i) (3) Lsi_mse(x, ˆx) = 1 k ∥x −ˆx∥2 2 −1 k2 ([x −ˆx] · 1k)2, (4) where k is the number of pixels in input x, 1k is a vector of ones of length k; and ∥· ∥2 2 is the squared L2-norm. While a mean squared error loss is traditionally used for reconstruction tasks, it penalizes predictions that are correct up to a scaling term. Conversely, the scale-invariant mean squared error penalizes differences between pairs of pixels. This allows the model to learn to reproduce the overall shape of the objects being modeled without expending modeling power on the absolute color or intensity of the inputs. We validated that this reconstruction loss was indeed the correct choice experimentally in Sec. 4.3 by training a version of our best DSN model with the traditional mean squared error loss instead of the scale-invariant loss in Eq. 3. 
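The scale-invariant reconstruction loss of Eq. (4) is a one-liner; note it is invariant to shifting every pixel of the prediction by a constant, which is why absolute intensity is not penalized. A sketch (the name is ours):

```python
import numpy as np

def si_mse(x, x_hat):
    """Scale-invariant MSE of Eq. (4):
    (1/k) * ||x - x_hat||^2 - (1/k^2) * ([x - x_hat] . 1_k)^2."""
    d = (np.asarray(x, float) - np.asarray(x_hat, float)).ravel()
    k = d.size
    return float(d @ d / k - (d.sum() ** 2) / k ** 2)
```

Adding a constant offset to the reconstruction incurs zero penalty, while any non-constant error is still penalized.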
The difference loss is also applied to both domains and encourages the shared and private encoders to encode different aspects of the inputs. We define the loss via a soft subspace orthogonality constraint between the private and shared representation of each domain. Let Hs c and Ht c be matrices whose rows are the hidden shared representations hs c = Ec(xs) and ht c = Ec(xt) from samples of source and target data respectively. Similarly, let Hs p and Ht p be matrices whose rows are the private representation hs p = Es p(xs) and ht p = Et p(xt) from samples of source and target data respectively3. The difference loss encourages orthogonality between the shared and the private representations: Ldifference = Hs c ⊤Hs p 2 F + Ht c ⊤Ht p 2 F , (5) where ∥·∥2 F is the squared Frobenius norm. Finally, Lsimilarity encourages the hidden representations hs c and ht c from the shared encoder to be as similar as possible irrespective of the domain. We experimented with two similarity losses, which we discuss in detail. 3.2 Similarity Losses The domain adversarial similarity loss [7, 8] is used to train a model to produce representations such that a classifier cannot reliably predict the domain of the encoded representation. Maximizing such “confusion” is achieved via a Gradient Reversal Layer (GRL) and a domain classifier trained to predict the domain producing the hidden representation. The GRL has the same output as the identity function, but reverses the gradient direction. Formally, for some function f(u), the GRL is defined as Q (f(u)) = f(u) with a gradient d duQ(f(u)) = −d duf(u). The domain classifier Z(Q(hc); θz) →ˆd parameterized by θz maps a shared representation vector hc = Ec(x; θc) to a prediction of the label ˆd ∈{0, 1} of the input sample x. 
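The difference loss of Eq. (5) is a squared Frobenius norm of the cross-product of the two representation matrices. Here is a sketch, with the row normalization of footnote 3 implemented under our reading (rows centered to zero mean and scaled to unit l2 norm):

```python
import numpy as np

def difference_loss(h_c, h_p):
    """Soft subspace orthogonality penalty of Eq. (5):
    || Hc^T Hp ||_F^2 after per-row zero-mean, unit-l2-norm normalization."""
    def normalize(h):
        h = np.asarray(h, float)
        h = h - h.mean(axis=1, keepdims=True)
        return h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-12)
    hc, hp = normalize(h_c), normalize(h_p)
    return float(np.sum((hc.T @ hp) ** 2))
```

Because of the normalization, the penalty is nonnegative and unaffected by rescaling either representation.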
Learning with a GRL is adversarial in that $\theta_z$ is optimized to increase $Z$'s ability to discriminate between encodings of images from the source or target domains, while the reversal of the gradient results in the model parameters $\theta_c$ learning representations from which domain-classification accuracy is reduced. Essentially, we maximize the binomial cross-entropy for the domain prediction task with respect to $\theta_z$, while minimizing it with respect to $\theta_c$:

$$\mathcal{L}^{\mathrm{DANN}}_{\mathrm{similarity}} = \sum_{i=0}^{N_s + N_t} \Big\{ d_i \log \hat{d}_i + (1 - d_i) \log(1 - \hat{d}_i) \Big\}, \quad (6)$$

where $d_i \in \{0, 1\}$ is the ground-truth domain label for sample $i$.

The Maximum Mean Discrepancy (MMD) loss [11] is a kernel-based distance function between pairs of samples. We use a biased statistic for the squared population MMD between the shared encodings of the source samples $\mathbf{h}^s_c$ and the shared encodings of the target samples $\mathbf{h}^t_c$:

$$\mathcal{L}^{\mathrm{MMD}}_{\mathrm{similarity}} = \frac{1}{(N^s)^2} \sum_{i,j=0}^{N^s} \kappa(\mathbf{h}^s_{ci}, \mathbf{h}^s_{cj}) - \frac{2}{N^s N^t} \sum_{i,j=0}^{N^s, N^t} \kappa(\mathbf{h}^s_{ci}, \mathbf{h}^t_{cj}) + \frac{1}{(N^t)^2} \sum_{i,j=0}^{N^t} \kappa(\mathbf{h}^t_{ci}, \mathbf{h}^t_{cj}), \quad (7)$$

where $\kappa(\cdot, \cdot)$ is a PSD kernel function. In our experiments we used a linear combination of multiple RBF kernels: $\kappa(\mathbf{x}_i, \mathbf{x}_j) = \sum_n \eta_n \exp\{-\frac{1}{2\sigma_n} \|\mathbf{x}_i - \mathbf{x}_j\|^2\}$, where $\sigma_n$ is the standard deviation and $\eta_n$ the weight of the $n$th RBF kernel. Any additional kernels included in the multi-RBF kernel are additive and guarantee that their linear combination remains characteristic. Having a large range of kernels is therefore beneficial: the distributions of the shared features change during learning, and different components of the multi-RBF kernel might be responsible at different times for making sure we reject a false null hypothesis, i.e. that the loss is sufficiently high when the distributions are not similar [17]. The advantage of using an RBF kernel with the MMD distance is that the Taylor expansion of the Gaussian function allows us to match all the moments of the two populations.

³ The matrices are transformed to have zero mean and unit $\ell_2$ norm.
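A NumPy sketch of the biased MMD statistic of Eq. 7 with a small multi-RBF kernel (the bandwidth values below are illustrative only; the experiments in Sec. 4.2 use a 19-kernel mixture):

```python
import numpy as np

def multi_rbf_kernel(x, y, sigmas=(1.0, 5.0, 10.0), etas=(1.0, 1.0, 1.0)):
    """kappa(x_i, y_j) = sum_n eta_n * exp(-||x_i - y_j||^2 / (2 * sigma_n))."""
    sq = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2.0 * x @ y.T
    return sum(e * np.exp(-np.maximum(sq, 0.0) / (2.0 * s))
               for e, s in zip(etas, sigmas))

def mmd_biased(hs, ht):
    """Biased squared-MMD estimate (Eq. 7) between source and target encodings."""
    return float(multi_rbf_kernel(hs, hs).mean()
                 - 2.0 * multi_rbf_kernel(hs, ht).mean()
                 + multi_rbf_kernel(ht, ht).mean())
```

With this biased estimator the statistic is exactly zero when the two sample sets coincide, and grows as the source and target encoding distributions move apart.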
The caveat is that it requires finding optimal kernel bandwidths $\sigma_n$.

4 Evaluation

We are motivated by the problem of learning models on a clean, synthetic dataset and testing on a noisy, real-world dataset. To this end, we evaluate on object classification datasets used in previous work⁴, including MNIST and MNIST-M [8], the German Traffic Signs Recognition Benchmark (GTSRB) [25], and the Streetview House Numbers (SVHN) [20]. We also evaluate on the cropped LINEMOD dataset, a standard for object instance recognition and 3D pose estimation [12, 31], for which we have synthetic and real data⁵. We tested the following unsupervised domain adaptation scenarios: (a) from MNIST to MNIST-M; (b) from SVHN to MNIST; (c) from synthetic traffic signs to real ones with GTSRB; (d) from synthetic LINEMOD object instances rendered on a black background to the same object instances in the real world.

We evaluate the efficacy of our method with each of the two similarity losses outlined in Sec. 3.2 by comparing against the prevailing visual domain adaptation techniques for neural networks: Correlation Alignment (CORAL) [26], Domain-Adversarial Neural Networks (DANN) [7, 8], and MMD regularization [29, 17]. For each scenario we provide two additional baselines: the performance on the target domain of the respective model with no domain adaptation, trained (a) on the source domain ("Source-only" in Tab. 1) and (b) on the target domain ("Target-only"), as an empirical lower and upper bound respectively.

We have not found a universally applicable way to optimize hyperparameters for unsupervised domain adaptation. Previous work [8] suggests the use of reverse validation. We implemented this (see Supplementary Material for details) but found that the reverse validation accuracy did not always align well with test accuracy.
Ideally we would like to avoid using labels from the target domain, as it can be argued that if one does have target-domain labels, they should be used during training. However, there are applications where a labeled target-domain set cannot be used for training. An example is the labeling of a dataset with the use of AprilTags [21], 2D barcodes that can be used to label the pose of an object, provided that a camera is calibrated and the physical dimensions of the barcode are known. These images should not be used when learning features from pixels, because the model might be able to decipher the tags. However, they can be part of a test set that is not available during training, and an equivalent dataset without the tags could be used for unsupervised domain adaptation. We thus chose to use a small set of labeled target-domain data as a validation set for

⁴ The most commonly used dataset for visual domain adaptation in the context of object classification is Office [23]. However, this dataset exhibits significant variations in both low-level and high-level parameter distributions. Low-level variations are due to the different cameras and background textures in the images (e.g. Amazon versus DSLR). However, there are significant high-level variations due to object identity: e.g. the motorcycle class contains non-motorcycle objects; the backpack class contains a laptop; some domains contain the object in only one pose. Other commonly used datasets such as Caltech-256 suffer from similar problems. We therefore exclude these datasets from our evaluation. For more information, see our Supplementary Material.

⁵ https://cvarlab.icg.tugraz.at/projects/3d_object_detection/

Table 1: Mean classification accuracy (%) for the unsupervised domain adaptation scenarios on which we evaluated all the methods. We have replicated the experiments from Ganin et al. [8]; in parentheses we show the results reported in their paper.
The "Source-only" and "Target-only" rows are the results on the target domain when using no domain adaptation and training only on the source or the target domain respectively.

| Model              | MNIST to MNIST-M | Synth Digits to SVHN | SVHN to MNIST | Synth Signs to GTSRB |
|--------------------|------------------|----------------------|---------------|----------------------|
| Source-only        | 56.6 (52.2)      | 86.7 (86.7)          | 59.2 (54.9)   | 85.1 (79.0)          |
| CORAL [26]         | 57.7             | 85.2                 | 63.1          | 86.9                 |
| MMD [29, 17]       | 76.9             | 88.0                 | 71.1          | 91.1                 |
| DANN [8]           | 77.4 (76.6)      | 90.3 (91.0)          | 70.7 (73.8)   | 92.9 (88.6)          |
| DSN w/ MMD (ours)  | 80.5             | 88.5                 | 72.2          | 92.6                 |
| DSN w/ DANN (ours) | 83.2             | 91.2                 | 82.7          | 93.1                 |
| Target-only        | 98.7             | 92.4                 | 99.5          | 99.8                 |

the hyperparameters of all the methods we compare. All methods were evaluated using the same protocol, so comparison numbers are fair and meaningful. The performance on this validation set can serve as an upper bound of a satisfactory validation metric for unsupervised domain adaptation; to our knowledge, validating the parameters in a fully unsupervised manner is still an open research question, and out of the scope of this work.

4.1 Datasets and Adaptation Scenarios

MNIST to MNIST-M. In this domain adaptation scenario we use the popular MNIST [15] dataset of handwritten digits as the source domain, and MNIST-M, a variation of MNIST proposed for unsupervised domain adaptation by [8], as the target. MNIST-M was created by using each MNIST digit as a binary mask and inverting with it the colors of a background image. The background images are random crops uniformly sampled from the Berkeley Segmentation Data Set (BSDS500) [3]. In all our experiments we follow the experimental protocol of [8]. Out of the 59,001 MNIST-M training examples, we used the labels for 1,000 of them to find optimal hyperparameters for our models. This scenario, like all three digit adaptation scenarios, has 10 class labels.

Synthetic Digits to SVHN. In this scenario we aim to learn a classifier for the Street-View House Numbers dataset (SVHN) [20], our target domain, from a dataset of purely synthesized digits, our source domain.
The synthetic digits [8] dataset was created by rasterizing bitmap fonts in a sequence (one, two, and three digits) with the ground-truth label being the digit in the center of the image, just like in SVHN. The source-domain samples are further augmented by variations in scale, translation, background colors, stroke colors, and Gaussian blurring. We use 479,400 synthetic digits for our source-domain training set, 73,257 unlabeled SVHN samples for domain adaptation, and 26,032 SVHN samples for testing. Similarly to above, we use the labels of 1,000 SVHN training examples for hyperparameter validation.

SVHN to MNIST. Although the SVHN dataset contains significant variations (in scale, background clutter, blurring, embossing, slanting, contrast, rotation, and sequences, to name a few), there is not a lot of variation in the actual digit shapes. This makes it quite distinct from a dataset of handwritten digits, like MNIST, where there are a lot of elastic distortions in the shapes, variations in thickness, and noise on the digits themselves. Since the ground-truth digits in both datasets are centered, this is a well-posed and rather difficult domain adaptation scenario. As above, we used the labels of 1,000 MNIST training examples for validation.

Synthetic Signs to GTSRB. We also perform an experiment adapting from a dataset of synthetic traffic signs [19] to a real-world dataset of traffic signs (GTSRB) [25]. While the three digit adaptation scenarios have 10 class labels, this scenario has 43 different traffic signs. The synthetic signs were obtained by taking relevant pictograms and adding various types of variations, including random backgrounds, brightness, saturation, 3D rotations, Gaussian and motion blur. We use 90,000 synthetic signs for training, 1,280 random GTSRB real-world signs for domain adaptation and validation, and the remaining 37,929 GTSRB real signs as the test set.
Table 2: Mean classification accuracy and pose error for the "Synth Objects to LINEMOD" scenario.

| Method             | Classification Accuracy | Mean Angle Error |
|--------------------|-------------------------|------------------|
| Source-only        | 47.33%                  | 89.2°            |
| MMD                | 72.35%                  | 70.62°           |
| DANN               | 99.90%                  | 56.58°           |
| DSN w/ MMD (ours)  | 99.72%                  | 66.49°           |
| DSN w/ DANN (ours) | 100.00%                 | 53.27°           |
| Target-only        | 100.00%                 | 6.47°            |

Synthetic Objects to LINEMOD. The LINEMOD dataset [31] consists of CAD models of objects in a cluttered environment and a high variance of 3D poses for each object. We use the 11 non-symmetric objects from the cropped version of the dataset, where the images are cropped with the object in the center, for the task of object instance recognition and 3D pose estimation. We train our models on 16,962 images of these objects rendered on a black background without additional noise. We use a target-domain training set of 10,673 real-world images for domain adaptation and validation, and a target-domain test set of 2,655 images for testing. For this scenario our task is both classification and pose estimation; the task loss is therefore

$$\mathcal{L}_{\mathrm{task}} = \sum_{i=0}^{N_s} \Big\{ -\mathbf{y}^s_i \cdot \log \hat{\mathbf{y}}^s_i + \xi \log\big(1 - |\mathbf{q}^s \cdot \hat{\mathbf{q}}^s|\big) \Big\},$$

where $\mathbf{q}^s$ is the positive unit quaternion vector representing the ground-truth 3D pose and $\hat{\mathbf{q}}^s$ is the equivalent prediction. The first term is the classification loss, similar to the rest of the experiments; the second term is the log of a 3D rotation metric for quaternions [13]; and $\xi$ is the weight for the pose loss. In Tab. 2 we report the mean angle the object would need to be rotated (on a fixed 3D axis) to move from the predicted to the ground-truth pose [12].

Figure 2: Reconstructions of the representations of the two domains for "MNIST to MNIST-M" and for "Synth Objects to LINEMOD". Panels: (a) MNIST (source); (b) MNIST-M (target); (c) Synth Objects (source); (d) LINEMOD (target). In each block from left to right: the original image $\mathbf{x}^t$; reconstructed image $D(E_c(\mathbf{x}^t) + E_p(\mathbf{x}^t))$; shared-only reconstruction $D(E_c(\mathbf{x}^t))$; private-only reconstruction $D(E_p(\mathbf{x}^t))$.
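The pose term of the task loss above, and the angle metric reported in Tab. 2, can be sketched for unit quaternions as follows (a NumPy illustration; the function names and the numerical clamping are ours):

```python
import numpy as np

def pose_log_loss(q, q_hat, xi=1.0, eps=1e-8):
    """Pose term xi * log(1 - |q . q_hat|) of Ltask (classification term omitted);
    minimizing it drives |q . q_hat| -> 1, i.e. identical rotations."""
    return xi * np.log(max(1.0 - abs(float(np.dot(q, q_hat))), eps))

def angle_error_deg(q, q_hat):
    """Angle (degrees) of the rotation taking the predicted pose to the ground truth."""
    d = min(abs(float(np.dot(q, q_hat))), 1.0)  # clamp for arccos stability
    return float(np.degrees(2.0 * np.arccos(d)))
```

The absolute value handles the double cover of rotations by quaternions, so $\mathbf{q}$ and $-\mathbf{q}$ are treated as the same pose.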
4.2 Implementation Details

All models were implemented using TensorFlow⁶ [1] and were trained with Stochastic Gradient Descent with momentum [27]. Our initial learning rate was multiplied by 0.9 every 20,000 steps (mini-batches). We used batches of 32 samples from each domain for a total of 64, and the input images were mean-centered and rescaled to [−1, 1]. In order to avoid distracting from the main classification task during the early stages of training, we activate any additional domain adaptation loss after 10,000 steps of training. For all our experiments our CNN topologies are based on the ones used in [8], to be comparable to previous work in unsupervised domain adaptation. The exact architectures for all models are shown in our Supplementary Material.

In our framework, CORAL [26] would be equivalent to fixing our shared representation matrices $\mathbf{H}^s_c$ and $\mathbf{H}^t_c$, normalizing them, and then minimizing $\|A\,{\mathbf{H}^s_c}^\top \mathbf{H}^s_c\,A^\top - {\mathbf{H}^t_c}^\top \mathbf{H}^t_c\|^2_F$ with respect to a weight matrix $A$ that aligns the two correlation matrices. For the CORAL experiments, we follow the suggestions of [26] and extract features for both source and target domains from the penultimate layer of each network. Once the correlation matrices for each domain are aligned, we evaluate, on the target test data, the performance of a linear support vector machine (SVM) classifier trained on the source training data.

⁶ We provide code at https://github.com/tensorflow/models/domain_adaptation.

Table 3: Effect of our difference and reconstruction losses on our best model. The first row is replicated from Tab. 1. In the second row, we remove the soft orthogonality constraint. In the third row, we replace the scale-invariant MSE with regular MSE.

| Model           | MNIST to MNIST-M | Synth. Digits to SVHN | SVHN to MNIST | Synth. Signs to GTSRB |
|-----------------|------------------|-----------------------|---------------|-----------------------|
| All terms       | 83.23            | 91.22                 | 82.78         | 93.01                 |
| No L_difference | 80.26            | 89.21                 | 80.54         | 91.89                 |
| With L^L2_recon | 80.42            | 88.98                 | 79.45         | 92.11                 |
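For reference, the correlation-alignment step of the CORAL baseline has a standard closed form (whiten the source features, then re-color them with the target covariance). The sketch below follows the classic recipe of Sun et al. [26], not necessarily the exact implementation used in these experiments; the names and the small regularization constant are ours:

```python
import numpy as np

def coral_align(hs, ht, eps=1e-6):
    """Classic CORAL: map source features so their covariance matches the
    target's, via A = Cs^(-1/2) Ct^(1/2) with lightly regularized covariances."""
    def cov(h):
        h = h - h.mean(axis=0)
        return h.T @ h / (len(h) - 1)
    def mat_pow(c, p):  # symmetric PSD matrix power via eigendecomposition
        w, v = np.linalg.eigh(c)
        return v @ np.diag(np.maximum(w, eps) ** p) @ v.T
    d = hs.shape[1]
    a = mat_pow(cov(hs) + eps * np.eye(d), -0.5) @ mat_pow(cov(ht) + eps * np.eye(d), 0.5)
    return (hs - hs.mean(axis=0)) @ a
```

After the transform, a classifier trained on the aligned source features is evaluated on the target features, mirroring the SVM protocol described above.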
The SVM penalty parameter was optimized based on the target-domain validation set for each of our domain adaptation scenarios. For MMD regularization, we used a linear combination of 19 RBF kernels (details can be found in the Supplementary Material). Preliminary experiments with applying MMD to more than one layer did not show any performance improvement for our experiments and architectures. For DANN regularization, we applied the GRL and the domain classifier as prescribed in [8] for each scenario. For our Domain Separation Network experiments, the similarity losses are always applied at the first fully-connected layer of each network, after a number of convolutional and max pooling layers. For each private-space encoder network we use a simple convolutional and max pooling structure followed by a fully-connected layer with a number of nodes equal to the number of nodes at the final layer $\mathbf{h}_c$ of the equivalent shared encoder $E_c$. The outputs of the shared and private encoders are added before being fed to the shared decoder $D$.

4.3 Discussion

The DSN with DANN model outperforms all the other methods we experimented with in all our unsupervised domain adaptation scenarios (see Tab. 1 and 2). Our unsupervised domain separation networks improve upon both MMD regularization and DANN. Using DANN as a similarity loss (Eq. 6) worked better than using MMD (Eq. 7) as a similarity loss, which is consistent with the results obtained for domain adaptation using MMD regularization and DANN alone. In order to examine the effect of the soft orthogonality constraints ($\mathcal{L}_{\mathrm{difference}}$), we took our best model, the DSN model with the DANN loss, and removed these constraints by setting the $\beta$ coefficient to 0. Without them, the model performed consistently worse in all scenarios.
We also validated our choice of the scale-invariant mean squared error reconstruction loss, as opposed to the more popular mean squared error loss, by running our best model with $\mathcal{L}^{L_2}_{\mathrm{recon}} = \frac{1}{k} \|\mathbf{x} - \hat{\mathbf{x}}\|^2_2$. With this variation we consistently get worse classification results, as shown in Tab. 3.

The shared and private representations of each domain are combined for the reconstruction of samples. Individually decoding the shared and private representations gives us reconstructions that serve as useful depictions of our domain adaptation process. In Fig. 2 we use the "MNIST to MNIST-M" and the "Synth. Objects to LINEMOD" scenarios for such visualizations. In the former scenario, the model cleanly separates the foreground from the background and produces a shared space that is very similar to the source domain. This is expected, since the target is a transformation of the source. In the latter scenario, the model is able to produce visualizations of the shared representation that look very similar between source and target domains, which are useful for classification and pose estimation.

5 Conclusion

We present in this work a deep learning model that improves upon existing unsupervised domain adaptation techniques. The model does so by explicitly separating representations private to each domain from those shared between source and target domains. By using existing domain adaptation techniques to make the shared representations similar, and soft subspace orthogonality constraints to make private and shared representations dissimilar, our method outperforms all existing unsupervised domain adaptation methods in a number of adaptation scenarios that focus on the synthetic-to-real paradigm.

Acknowledgments

We would like to thank Samy Bengio, Kevin Murphy, and Vincent Vanhoucke for valuable comments on this work. We would also like to thank Yaroslav Ganin and Paul Wohlhart for providing some of the datasets we used.

References

[1] M. Abadi et al.
TensorFlow: Large-scale machine learning on heterogeneous distributed systems. Preprint arXiv:1603.04467, 2016.
[2] H. Ajakan, P. Germain, H. Larochelle, F. Laviolette, and M. Marchand. Domain-adversarial neural networks. Preprint arXiv:1412.4446, 2014.
[3] P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik. Contour detection and hierarchical image segmentation. TPAMI, 33(5):898–916, 2011.
[4] S. Ben-David, J. Blitzer, K. Crammer, A. Kulesza, F. Pereira, and J. W. Vaughan. A theory of learning from different domains. Machine Learning, 79(1-2):151–175, 2010.
[5] R. Caseiro, J. F. Henriques, P. Martins, and J. Batista. Beyond the shortest path: Unsupervised domain adaptation by sampling subspaces along the spline flow. In CVPR, 2015.
[6] D. Eigen, C. Puhrsch, and R. Fergus. Depth map prediction from a single image using a multi-scale deep network. In NIPS, pages 2366–2374, 2014.
[7] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In ICML, pages 513–520, 2015.
[8] Y. Ganin et al. Domain-adversarial training of neural networks. JMLR, 17(59):1–35, 2016.
[9] B. Gong, Y. Shi, F. Sha, and K. Grauman. Geodesic flow kernel for unsupervised domain adaptation. In CVPR, pages 2066–2073. IEEE, 2012.
[10] R. Gopalan, R. Li, and R. Chellappa. Domain adaptation for object recognition: An unsupervised approach. In ICCV, 2011.
[11] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola. A kernel two-sample test. JMLR, pages 723–773, 2012.
[12] S. Hinterstoisser et al. Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. In ACCV, 2012.
[13] D. Q. Huynh. Metrics for 3d rotations: Comparison and analysis. Journal of Mathematical Imaging and Vision, 35(2):155–164, 2009.
[14] Y. Jia, M. Salzmann, and T. Darrell. Factorized latent spaces with structured sparsity. In NIPS, pages 982–990, 2010.
[15] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner.
Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[16] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, pages 740–755. Springer, 2014.
[17] M. Long and J. Wang. Learning transferable features with deep adaptation networks. In ICML, 2015.
[18] Y. Mansour et al. Domain adaptation with multiple sources. In NIPS, 2009.
[19] B. Moiseev, A. Konev, A. Chigorin, and A. Konushin. Evaluation of traffic sign recognition methods trained on synthetically generated data. In ACIVS, pages 576–583, 2013.
[20] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS Workshops, 2011.
[21] E. Olson. AprilTag: A robust and flexible visual fiducial system. In ICRA, pages 3400–3407. IEEE, 2011.
[22] O. Russakovsky et al. ImageNet large scale visual recognition challenge. IJCV, 115(3):211–252, 2015.
[23] K. Saenko et al. Adapting visual category models to new domains. In ECCV. Springer, 2010.
[24] M. Salzmann et al. Factorized orthogonal latent spaces. In AISTATS, pages 701–708, 2010.
[25] J. Stallkamp, M. Schlipsing, J. Salmen, and C. Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012.
[26] B. Sun, J. Feng, and K. Saenko. Return of frustratingly easy domain adaptation. In AAAI, 2016.
[27] I. Sutskever, J. Martens, G. Dahl, and G. Hinton. On the importance of initialization and momentum in deep learning. In ICML, pages 1139–1147, 2013.
[28] E. Tzeng, J. Hoffman, T. Darrell, and K. Saenko. Simultaneous deep transfer across domains and tasks. In CVPR, pages 4068–4076, 2015.
[29] E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell. Deep domain confusion: Maximizing for domain invariance.
Preprint arXiv:1412.3474, 2014.
[30] S. Virtanen, A. Klami, and S. Kaski. Bayesian CCA via group sparsity. In ICML, pages 457–464, 2011.
[31] P. Wohlhart and V. Lepetit. Learning descriptors for object recognition and 3d pose estimation. In CVPR, pages 3109–3118, 2015.
Integrated Perception with Recurrent Multi-Task Neural Networks
Hakan Bilen  Andrea Vedaldi
Visual Geometry Group, University of Oxford
{hbilen,vedaldi}@robots.ox.ac.uk

Abstract

Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks such as image classification, object and part detection, and boundary extraction. However, a major advantage that natural intelligences still have is that they work well for all perceptual problems together, solving them efficiently and coherently in an integrated manner. In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call multinet, in which not only are deep image features shared between tasks, but tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks in standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.

1 Introduction

Natural perception can extract complete interpretations of sensory data in a coherent and efficient manner. By contrast, machine perception remains a collection of disjoint algorithms, each solving specific information extraction sub-problems. Recent advances such as modern convolutional neural networks have dramatically improved the performance of machines in individual perceptual tasks, but it remains unclear how these could be integrated in the same seamless way as natural perception does. In this paper, we consider the problem of learning data representations for integrated perception.
The first question we ask is whether it is possible to learn universal data representations that can be used to solve all sub-problems of interest. In computer vision, fine-tuning or retraining has been shown to be an effective method to transfer deep convolutional networks between different tasks [9, 29]. Here we show that, in fact, it is possible to learn a single, shared representation that performs well on several sub-problems simultaneously, often as well as or even better than specialised ones.

A second question, complementary to the one of feature sharing, is how different perceptual subtasks should be combined. Since each subtask extracts a partial interpretation of the data, the problem is to form a coherent picture of the data as a whole. We consider an incremental interpretation scenario, where subtasks collaborate in parallel or sequentially in order to gradually enrich a shared interpretation of the data, each contributing its own "dimension" to it. Informally, many computer vision systems operate in this stratified manner, with different modules running in parallel or in sequence (e.g. object detection followed by instance segmentation). The question is how this can be done end-to-end and systematically.

In this paper, we develop an architecture, multinet (fig. 1), that provides an answer to such questions. Multinet builds on the idea of a shared representation, called an integration space, which reflects both

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Multinet. We propose a modular multi-task architecture in which several perceptual tasks are integrated in a synergistic manner. The subnetwork $\phi^0_{\mathrm{enc}}$ encodes the data $x^0$ (an image in the example), producing a representation $h$ shared between $K$ different tasks. Each task estimates one of $K$ different labels $x^\alpha$ (object class, location, and parts in the example) using $K$ decoder functions $\psi^\alpha_{\mathrm{dec}}$.
Each task contributes back to the shared representation by means of a corresponding encoder function $\phi^\alpha_{\mathrm{enc}}$. The loop is closed in a recurrent configuration by means of suitable integrator functions (not shown here to avoid cluttering the diagram).

the statistics extracted from the data as well as the result of the analysis carried out by the individual subtasks. As a loose metaphor, one can think of the integration space as a "canvas" which is progressively updated with the information obtained by solving sub-problems. The representation distills this information and makes it available for further task resolution, in a recurrent configuration.

Multinet has several advantages. First, by learning the latent integration space automatically, synergies between tasks can be discovered automatically. Second, tasks are treated in a symmetric manner, by associating to each of them encoder, decoder, and integrator functions, making the system modular and easily extensible to new tasks. Third, the architecture supports incremental understanding, because tasks contribute back to the latent representation, making their output available to other tasks for further processing. Finally, while multinet is applied here to an image understanding setting, the architecture is very general and could be applied to numerous other domains as well.

The new architecture is described in detail in sect. 2 and an instance specialized for computer vision applications is given in sect. 3. The empirical evaluation in sect. 4 demonstrates the benefits of the approach, including that sharing features between different tasks is not only economical, but also sometimes better for accuracy, and that integrating the outputs of different tasks in the shared representation yields further accuracy improvements. Sect. 5 summarizes our findings.

1.1 Related work

Multiple task learning (MTL): Multi-task learning methods [5, 25, 1] have been studied for over two decades by the machine learning community.
These methods are based on the key idea that the tasks share a common low-dimensional representation which is jointly learnt with the task-specific parameters. While MTL trains many tasks in parallel, Mitchell and Thrun [18] propose a sequential transfer method called Explanation-Based Neural Nets (EBNN) which exploits previously learnt domain knowledge to initialise or constrain the parameters of the current task. Breiman and Friedman [3] devise a hybrid method that first learns separate models and then improves their generalisation by exploiting the correlation between the predictions.

Multi-task learning in computer vision: MTL has been shown to improve results in many computer vision problems. Typically, researchers incorporate auxiliary tasks into their target tasks, jointly train them in parallel, and achieve performance gains in object tracking [30], object detection [11], and facial landmark detection [31]. Differently, Dai et al. [8] propose multi-task network cascades in which convolutional layer parameters are shared between three tasks and the tasks are predicted sequentially. Unlike [8], our method can train multiple tasks in parallel and does not require specifying an order of task execution.

Recurrent networks: Our work is also related to recurrent neural networks (RNNs) [22], which have been successfully used in language modelling [17], speech recognition [13], hand-written text recognition [12], semantic image segmentation [20], and human pose estimation [2]. Related to our work, Carreira et al. [4] propose an iterative segmentation model that progressively updates an initial solution by feeding back an error signal. Najibi et al. [19] propose an efficient grid-based object detector that iteratively refines the predicted object coordinates by minimising the training error.

Figure 2: Multinet recurrent architecture. The components in the rounded box are repeated $K$ times, one for each task $\alpha = 1, \dots, K$.
While these methods [4, 19] are also based on an iterative solution-correcting mechanism, our main goal is to improve generalisation performance for multiple tasks by sharing the previous predictions across them and learning output correlations.

2 Method

In this section, we first introduce the multinet architecture for integrated multi-task prediction (sect. 2.1) and then discuss ordinary multi-task prediction as a special case of multinet (sect. 2.2).

2.1 Multinet: integrated multiple-task prediction

We propose a recurrent neural network architecture (fig. 1 and 2) that can simultaneously address multiple data labelling tasks. For symmetry, we drop the usual distinction between input and output spaces and consider instead $K + 1$ label spaces $\mathcal{X}^\alpha$, $\alpha = 0, 1, \dots, K$. A label in the $\alpha$-th space is denoted by the symbol $x^\alpha \in \mathcal{X}^\alpha$. In the following, $\alpha = 0$ is used for the input (e.g. an image) of the network and is not inferred, whereas $x^1, \dots, x^K$ are labels estimated by the neural network (e.g. an object class, location, and parts). One reason why it is useful to keep the notation symmetric is that it is then possible to ground any label $x^\alpha$ and treat it as an input instead.

Each task $\alpha$ is associated with a corresponding encoder function $\phi^\alpha_{\mathrm{enc}}$, which maps the label $x^\alpha$ to a vectorial representation $r^\alpha \in \mathcal{R}^\alpha$ given by

$$r^\alpha = \phi^\alpha_{\mathrm{enc}}(x^\alpha). \quad (1)$$

Each task also has a decoder function $\psi^\alpha_{\mathrm{dec}}$ going in the other direction, from a common representation space $h \in \mathcal{H}$ to the label $x^\alpha$:

$$x^\alpha = \psi^\alpha_{\mathrm{dec}}(h). \quad (2)$$

The information $r^0, r^1, \dots, r^K$ extracted from the data and the different tasks by the encoders is integrated into the shared representation $h$ by using an integrator function $\Gamma$. Since this update operation is incremental, we associate to it an iteration number $t = 0, 1, 2, \dots$. By doing so, the update equation can be written as

$$h_{t+1} = \Gamma(h_t, r^0, r^1_t, \dots, r^K_t). \quad (3)$$
Note that, in the equation above, $r^0$ is constant, as the corresponding variable $x^0$ is the input of the network, which is grounded and not updated. Overall, a task $\alpha$ is specified by the triplet $T^\alpha = (\mathcal{X}^\alpha, \phi^\alpha_{\mathrm{enc}}, \psi^\alpha_{\mathrm{dec}})$ and by its contribution to the update rule (3). Full task modularity can be achieved by decomposing the integrator function as a sequence of task-specific updates $h_{t+1} = \Gamma^K(\cdot, r^K_t) \circ \cdots \circ \Gamma^1(h_t, r^1_t)$, such that each task is a quadruplet $(\mathcal{X}^\alpha, \phi^\alpha_{\mathrm{enc}}, \psi^\alpha_{\mathrm{dec}}, \Gamma^\alpha)$, but this option is not investigated further here.

Given tasks $T^\alpha$, $\alpha = 1, \dots, K$, several variants of the recurrent architecture are possible. A natural one is to process tasks sequentially, but this has the added complication of having to choose a particular order and may in any case be suboptimal; instead, we propose to update all the tasks at each recurrent iteration, as follows:

t = 0 Ordinary multi-task prediction. At the first iteration, the measurement $x^0$ is acquired and the shared representation $h$ is initialized as $h_0 = \phi^0_{\mathrm{enc}}(x^0) = \Gamma(*, r^0, *, \dots, *)$. The symbol $*$ denotes the initial value of a variable (often zero in practice). Given $h_0$, the output $x^\alpha_0 = \psi^\alpha_{\mathrm{dec}}(h_0) = (\psi^\alpha_{\mathrm{dec}} \circ \phi^0_{\mathrm{enc}})(x^0)$ of each task is computed. This step corresponds to ordinary multi-task prediction, as discussed later (sect. 2.2).

t > 0 Iterative updates. Each task $\alpha = 1, \dots, K$ is re-encoded using the equations $r^\alpha_t = \phi^\alpha_{\mathrm{enc}}(x^\alpha_t)$, the shared representation is updated using $h_{t+1} = \Gamma(h_t, r^0, r^1_t, \dots, r^K_t)$, and the labels are predicted again using $x^\alpha_{t+1} = \psi^\alpha_{\mathrm{dec}}(h_{t+1})$.

The idea of feeding back the network output for further processing exists in several existing recurrent architectures [16, 24]; however, in these cases it is used to process sequential data, passing back the output obtained from the last processed element in the sequence. Here, instead, the feedback is used to integrate different and complementary labelling tasks.
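The two phases above can be illustrated with a toy NumPy sketch in which every encoder, every decoder, and the integrator Γ are stand-in linear maps; all shapes, scales, and the additive form of Γ are our assumptions for illustration, not the paper's actual learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H, K = 8, 16, 3  # label dimension, shared-representation dimension, number of tasks

enc0 = 0.1 * rng.standard_normal((D, H))                      # phi^0_enc (input encoder)
decs = [0.1 * rng.standard_normal((H, D)) for _ in range(K)]  # psi^alpha_dec
encs = [0.1 * rng.standard_normal((D, H)) for _ in range(K)]  # phi^alpha_enc

def integrator(h, r0, rs):
    # Gamma: a simple additive update (our stand-in for the learned integrator)
    return 0.5 * h + r0 + sum(rs)

x0 = rng.standard_normal(D)
r0 = x0 @ enc0
h = integrator(np.zeros(H), r0, [np.zeros(H)] * K)   # t = 0: ordinary multi-task step
for t in range(3):                                   # t > 0: iterative updates
    labels = [h @ dec for dec in decs]               # x^alpha_t = psi^alpha_dec(h_t)
    rs = [x @ enc for x, enc in zip(labels, encs)]   # r^alpha_t = phi^alpha_enc(x^alpha_t)
    h = integrator(h, r0, rs)                        # h_{t+1} = Gamma(h_t, r^0, r^1_t, ...)
```

The loop makes explicit how each task's prediction is re-encoded and folded back into the shared representation, so that subsequent decoding rounds can exploit the other tasks' outputs.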
Our model is also reminiscent of encoder/decoder architectures [15, 21, 28]; however, in our case the encoder and decoder functions are associated with the output labels rather than with the input data.

2.2 Ordinary multi-task learning

Ordinarily, multiple-task learning [5, 25, 1] is based on sharing features or parameters between different tasks. Multinet reduces to ordinary multi-task learning when there is no recurrence. At the first iteration t = 0, in fact, multinet simply evaluates K predictor functions ψ^1_dec ∘ φ^0_enc, ..., ψ^K_dec ∘ φ^0_enc, one for each task, which share the common subnetwork φ^0_enc. While multi-task learning from representation sharing is conceptually simple, it is practically important because it allows learning a universal representation function φ^0_enc which works well for all tasks simultaneously. The possibility of learning such a polyvalent representation, which can only be verified empirically, is a non-trivial and useful fact. In particular, in our experiments in image understanding (sect. 4), we will see that, for certain image analysis tasks, it is not only possible and efficient to learn such a shared representation, but that in some cases feature sharing can even improve the performance on the individual sub-problems.

3 A multinet for classification, localization, and part detection

In this section we instantiate multinet for three complementary tasks in computer vision: object classification, object detection, and part detection. The main advantage of multinet compared to ordinary multi-task prediction is that, while sharing parameters across related tasks may improve generalization [5], it is not enough to capture correlations between the task outputs. For example, in our computer vision application ordinary multi-task prediction would not be able to ensure that the detected parts are contained within a detected object.
Multinet can instead capture interactions between the different labels and potentially learn to enforce such constraints. The latter is done in a soft and distributed manner, by integrating back the output of the individual tasks into the shared representation. Next, we discuss in some detail the specific architecture components used in our application.

As a starting point we consider a standard CNN for image classification. While more powerful networks exist, we choose a well-performing model that is at the same time reasonably efficient to train and evaluate, namely the VGG-M-1024 network of [6]. This model is pre-trained for image classification on the ImageNet ILSVRC 2012 data [23] and was extended in [11] to object detection; here we follow those blueprints, and in particular the Fast R-CNN method of [11], to design the subnetworks for the three tasks. These components are described in some detail below, first focusing on the components corresponding to ordinary multi-task prediction, and then moving to the ones used for multiple-task integration.

Ordinary multiple-task components. The first several layers of the VGG-M network can be grouped into five convolutional sections, each comprising linear convolution, a non-linear activation function and, in some cases, max pooling and normalization. These are followed by three fully-connected sections, which are the same as the convolutional ones, but with filter support of the same size as the corresponding input. The last layer is a softmax and computes a posterior probability vector over the 1,000 ImageNet ILSVRC classes.

VGG-M is adapted to the different tasks as follows. For clarity, we use symbolic names for the tasks rather than numeric indexes, and consider α ∈ {img, cls, det, part} instead of α ∈ {0, 1, 2, 3}. The five convolutional sections of VGG-M are used as the image encoder φ^img_enc and hence compute the initial value h_0 of the shared representation.
Cutting VGG-M at the level of the last convolutional layer is motivated by the fact that the fully-connected layers remove, or at least dramatically blur, spatial information, whereas we would like to preserve it for object and part localization. Hence, the shared representation is a tensor h ∈ R^{H×W×C}, where H × W are the spatial dimensions and C is the number of feature channels as determined by the VGG-M configuration (see sect. 4).

Next, φ^img_enc is branched off in three directions, choosing a decoder ψ^α_dec for each task: image classification (α = cls), object detection (α = det), and part detection (α = part). For the image classification branch, the decoder function ψ^cls_dec for the image-level labels is initialized to be the same as the fully-connected layers of the original VGG-M, such that φ^VGG-M_enc = ψ^cls_dec ∘ φ^img_enc. There are however two differences. The first is that the last fully-connected layer is reshaped and reinitialized randomly to predict C^cls possible object classes instead of the 1,000 ImageNet classes. The second difference is that the final output is a vector of binary probabilities obtained using a sigmoid instead of a softmax.

The object and part detection decoders are instead based on the Fast R-CNN architecture [11], and classify individual image regions as belonging to one of the object classes (part types) or to the background. To do so, the Selective Search Windows (SSW) method [26] is used to generate a shortlist of M region (bounding box) proposals B(x^img) = {b_1, ..., b_M} from the image x^img; this set is input to the spatial pyramid pooling (SPP) layer [14, 11] ψ^SPP_dec(h, B(x^img)), which extracts a subset of the feature map h in correspondence with each region using max pooling.
The object detection decoder (and similarly the part detector) is then given by ψ^det_dec(h) = ψ̂^det_dec(ψ^SPP_dec(h, B(x^img))), where ψ̂^det_dec contains fully-connected layers initialized in the same manner as the classification decoder above (hence, before training one also has φ^VGG-M_enc = ψ̂^det_dec ∘ φ^img_enc). The exception is once more the last layer, which is reshaped and reinitialized as needed; here a softmax is still used, as regions can have only one class.

So far, we have described the image encoder φ^img_enc and the decoder branches ψ^cls_dec, ψ^det_dec and ψ^part_dec for the three tasks. These components are sufficient for ordinary multi-task learning, corresponding to the initial multinet iteration. Next, we specify the components that allow multinet to iterate several times.

Recurrent components: integrating multiple tasks. For task integration, we need to construct the encoder functions φ^cls_enc, φ^det_enc and φ^part_enc for each task, as well as the integrator function Γ. While several constructions are possible, here we experiment with simple ones.

In order to encode the image label x^cls, the encoder r^cls = φ^cls_enc(x^cls) takes the vector of C^cls binary probabilities x^cls ∈ R^{C^cls}, one for each of the C^cls possible object classes, and broadcasts the corresponding values to all H × W spatial locations (u, v) in h. Formally, r^cls ∈ R^{H×W×C^cls} and ∀u, v, c : r^cls_{uvc} = x^cls_c.

Encoding the object detection label x^det is similar, but reflects the geometric information captured by such labels. In particular, each bounding box b_m of the M extracted by SSW is associated with a vector of C^cls + 1 probabilities (one for each object class plus one more for the background), x^det_m ∈ R^{C^cls+1}. This is encoded in a heat map r^det ∈ R^{H×W×(C^cls+1)} by max pooling across boxes: ∀u, v, c : r^det_{uvc} = max({x^det_{mc} : (u, v) ∈ b_m} ∪ {0}). The part label x^part is encoded in an entirely analogous manner.

Lastly, we need to construct the integrator function Γ. We experiment with two simple designs.
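As a concrete illustration of the two label encoders described above, the sketch below broadcasts class probabilities across the spatial grid and max-pools per-box scores into a heat map. It is a plain-Python approximation: the function names and the (u0, v0, u1, v1) box format are assumptions made for illustration, not the paper's implementation.

```python
def encode_cls(x_cls, H, W):
    """phi^cls_enc: broadcast the C^cls class probabilities to every
    spatial location (u, v), giving an H x W x C^cls representation."""
    return [[list(x_cls) for _ in range(W)] for _ in range(H)]

def encode_det(x_det, boxes, H, W, C):
    """phi^det_enc: max-pool per-box class probabilities into an
    H x W x C heat map; r[u][v][c] is the max of x_det[m][c] over the
    boxes m covering (u, v), or 0 if no box covers the location."""
    r = [[[0.0] * C for _ in range(W)] for _ in range(H)]
    for (u0, v0, u1, v1), probs in zip(boxes, x_det):
        for u in range(u0, u1 + 1):       # inclusive box extent in
            for v in range(v0, v1 + 1):   # feature-map coordinates
                for c in range(C):
                    r[u][v][c] = max(r[u][v][c], probs[c])
    return r
```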
The first one simply stacks the evidence from the different sources, h = stack(r^img, r^cls, r^det, r^part), so that the update equation is given by

    h_t = Γ(h_{t−1}, r^img, r^cls_t, r^det_t, r^part_t) = stack(r^img, r^cls_t, r^det_t, r^part_t).    (4)

Note that this formulation requires modifying the first fully-connected layers of each decoder ψ̂^cls_dec, ψ̂^det_dec and ψ̂^part_dec, as the shared representation h now has C + 2C^cls + C^part + 2 channels instead of just C as in the original VGG-M architecture. This is done by randomly initializing the additional dimensions of the linear maps.

We also experiment with a second update equation,

    h_t = Γ(h_{t−1}, r^img, r^cls_t, r^det_t, r^part_t) = ReLU(A ∗ stack(h_{t−1}, r^img, r^cls_t, r^det_t, r^part_t)),    (5)

where A ∈ R^{1×1×(2C+2C^cls+C^part+2)×C} is a filter bank whose purpose is to reduce the stacked representation back to the original C channels. This is a useful design as it maintains the same representation dimensionality regardless of the number of tasks added. However, due to the compression, it may perform less well.

Figure 3: Illustration of the multinet instantiation tackling three computer vision problems: image classification, object detection, and part detection.

4 Experiments

4.1 Implementation details and training

The image encoder φ^img_enc is initialized from the pre-trained VGG-M model using sections conv1 to conv5. If the input to the network is an RGB image x^img ∈ R^{H^img×W^img×3}, then, due to downsampling, the spatial dimensions H × W of r^img = φ^img_enc(x^img) are H ≈ H^img/16 and W ≈ W^img/16. The number of feature channels is C = 512. As noted above, the decoders contain respectively the subnetworks ψ^cls_dec, ψ^det_dec, and ψ^part_dec, comprising layers fc6 and fc7 from VGG-M, followed by a randomly-initialized linear predictor with output dimension equal to, respectively, C^cls, C^cls + 1, and C^part + 1. Max pooling in SPP is performed in a grid of 6 × 6 spatial bins as in [14, 11]. The task encoders φ^cls_enc, φ^det_enc, φ^part_enc are given in sect.
2 and contain no parameters.

For training, each task is associated with a corresponding loss function. For the classification task, the objective is to minimize the sum of negative posterior log-probabilities of whether the image contains a certain object type or not (this allows different objects to be present in a single image). Combined with the fact that the classification branch uses a sigmoid, this is the same as binary logistic regression. For the object and part detection tasks, the decoders are optimized to classify the target regions as one of the C^cls or C^part classes or as background (unlike image-level labels, classes in region-level labels are mutually exclusive). Furthermore, we also train a branch performing bounding-box refinement to improve the fit of the selective search regions, as proposed by [11].

The fully-connected layers used for softmax classification and bounding-box regression in the object and part detection tasks are initialized from zero-mean Gaussian distributions with standard deviations 0.01 and 0.001 respectively. The fully-connected layers used for the object classification task and the adaptation layer A (see eq. 5) are initialized from a zero-mean Gaussian with standard deviation 0.01. All layers use a learning rate multiplier of 1 for filters and 2 for biases. We used SGD to optimize the parameters with a learning rate of 0.001 for 6 epochs, lowered to 0.0001 for another 6 epochs. We observe that running two iterations of the recursion is sufficient to reach 99% of the performance, although marginal gains are possible with more. We use the publicly available CNN toolbox MatConvNet [27] in our experiments.

4.2 Results

In this section, we describe and discuss the experimental results of our models on two benchmarks.

PASCAL VOC 2010 [10] and Parts [7]: The dataset contains 4998 training and 5105 validation images for 20 object categories, with ground-truth bounding box annotations for the target categories.
We use the PASCAL-Part dataset [7] to obtain bounding box annotations for object parts; it contains 193 annotated part categories, such as aeroplane engine, bicycle back-wheel, bird left-wing, and person right-upper-leg. After removing annotations that are smaller than 20 pixels on one side and categories with fewer than 50 training samples, the number of part categories reduces to 152. The dataset provides annotations only for the training and validation splits; thus we train our models on the train split and report results on the validation split for all the tasks. We follow the standard PASCAL VOC evaluation and report average precision (AP) for object classification, and AP at 50% intersection-over-union (IoU) of the detected boxes with the ground-truth ones for object detection. For part detection, we follow [7] and report AP at a more relaxed 40% IoU threshold. The results for the tasks are reported in tab. 1.

To establish the first baseline, we train an independent network for each task. Each network is initialized with the VGG-M model, the last classification and regression layers are initialized with random noise, and all layers are fine-tuned for the respective task. For object and part detection, we use our implementation of Fast R-CNN [11]. Note that, for consistency between the baselines and our method, the minimum dimension of each image is scaled to 600 pixels for all the tasks, including object classification. An SPP layer is employed to pool the feature map into a 6 × 6 grid. For the second baseline, we train a multi-task network that shares the convolutional layers across the tasks (the setting called ordinary multi-task prediction in sect. 2.1). We observe in tab. 1 that the multi-task model performs comparably to or better than the independent networks, while being more efficient due to the shared convolutional computations.
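Since detections above are scored by their overlap with ground-truth boxes (50% IoU for objects, 40% for parts), a short sketch of the intersection-over-union computation may help; the (x0, y0, x1, y1) corner convention is an assumption made for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix1 - ix0) * max(0.0, iy1 - iy0)  # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection counts as correct when its IoU with a ground-truth box of the same class exceeds the task's threshold (0.5 for object detection, 0.4 for part detection).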
Since the training images are the same in all cases, this shows that just combining multiple labels together improves efficiency and, in some cases, even performance. Finally, we test the full multinet model in two settings, defined by update rules (1) and (2), corresponding to eq. 4 and 5 respectively. We first see that both models outperform the independent networks as well as the multi-task network. This is remarkable because our model has fewer parameters than the sum of the three independent networks, and yet our best model (update 1) consistently outperforms them by roughly 1.5 points in mean AP. Furthermore, multinet improves over ordinary multi-task prediction by exploiting the correlations in the solutions of the individual tasks. In addition, we observe that update (1) performs better than update (2), which constrains the shared representation space to 512 dimensions regardless of the number of tasks; this can be expected due to the larger capacity. Nevertheless, even with the bottleneck we observe improvements compared to ordinary multi-task prediction.

We also run a test case to verify whether multinet learns to mix the information extracted by the various tasks, as presumed. To do so, we check whether grounding one of the task predictions to its ground-truth labels at test time improves the remaining predictions. Concretely, at test time we ground the classification label r^cls in the first iteration of multinet to the ground-truth class labels and read out the predictions after one iteration. As expected, the performance in the three tasks improves, to 90.1, 58.9 and 39.2 respectively. This shows that the feedback on the class information has a strong effect on the class prediction itself, and a more modest but nevertheless significant effect on the other tasks as well.

PASCAL VOC 2007 [10]: The dataset consists of 2501 training, 2510 validation, and 5011 test images, containing bounding box annotations for 20 object categories.
There are no part annotations available for this dataset; thus, we exclude the part detection task and run the same baselines and our best model for object classification and detection. The results are reported for the test split and shown in tab. 2. Note that our R-CNN implementation for the individual networks obtains the same detection score as in [11]. In line with the former results, our method consistently outperforms both baselines in the classification and detection tasks.

Method / Task           classification  object-detection  part-detection
Independent             76.4            55.5              37.3
Multi-task              76.2            57.1              37.2
Ours                    77.4            57.5              38.8
Ours (with bottleneck)  76.8            57.3              38.5

Table 1: Object classification, detection and part detection results on the PASCAL VOC 2010 validation split.

Method / Task   classification  object-detection
Independent     78.7            59.2
MTL             78.9            60.4
Ours            79.8            61.3

Table 2: Object classification and detection results on the PASCAL VOC 2007 test split.

5 Conclusions

In this paper, we have presented multinet, a recurrent neural network architecture that solves multiple perceptual tasks in an efficient and coordinated manner. In addition to feature and parameter sharing, which is common to most multi-task learning methods, multinet combines the outputs of the different tasks by updating a shared representation iteratively. Our results are encouraging. First, we have shown that such architectures can successfully integrate multiple tasks by sharing a large subset of the data representation, while matching or even outperforming specialised networks. Second, we have shown that the iterative update of a common representation is an effective method for sharing information between different tasks, which further improves performance.

Acknowledgments

This work acknowledges the support of the ERC Starting Grant Integrated and Detailed Image Understanding (EP/L024683/1).

References

[1] J. Baxter. A model of inductive bias learning. J. Artif. Intell. Res. (JAIR), 12:149–198, 2000. [2] V. Belagiannis and A.
Zisserman. Recurrent human pose estimation. arXiv preprint arXiv:1605.02914, 2016. [3] L. Breiman and J. H. Friedman. Predicting multivariate responses in multiple linear regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 59(1):3–54, 1997. [4] J. Carreira, P. Agrawal, K. Fragkiadaki, and J. Malik. Human pose estimation with iterative error feedback. CVPR, 2016. [5] R. Caruana. Multitask learning. Machine Learning, 28(1), 1997. [6] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014. [7] X. Chen, R. Mottaghi, X. Liu, S. Fidler, R. Urtasun, and A. L. Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, pages 1971–1978, 2014. [8] J. Dai, K. He, and J. Sun. Instance-aware semantic segmentation via multi-task network cascades. In CVPR, 2016. [9] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. Decaf: A deep convolutional activation feature for generic visual recognition. CoRR, abs/1310.1531, 2013. 8 [10] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes (VOC) challenge. IJCV, 88(2):303–338, 2010. [11] R. Girshick. Fast r-cnn. In ICCV, 2015. [12] A. Graves, M. Liwicki, S. Fernández, R. Bertolami, H. Bunke, and J. Schmidhuber. A novel connectionist system for unconstrained handwriting recognition. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 31(5):855–868, 2009. [13] A. Graves, A. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In ICASSP, pages 6645–6649. IEEE, 2013. [14] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, pages 346–361, 2014. [15] G. E. Hinton and R. R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. 
[16] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural computation, 9(8):1735– 1780, 1997. [17] T. Mikolov. Statistical Language Models Based on Neural Networks. PhD thesis, Ph. D. thesis, Brno University of Technology, 2012. [18] T. M. Mitchell and S. B. Thrun. Explanation-based neural network learning for robot control. NIPS, pages 287–287, 1993. [19] M. Najibi, M. Rastegari, and L. S. Davis. G-cnn: an iterative grid based object detector. CVPR, 2016. [20] P. H. O. Pinheiro and R. Collobert. Recurrent convolutional neural networks for scene parsing. arXiv preprint arXiv:1306.2795, 2013. [21] M. A. Ranzato, F. J. Huang, Y. Boureau, and Y. LeCun. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In CVPR, pages 1–8, 2007. [22] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Cognitive modeling, 5(3):1, 1988. [23] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, S. Huang, A. Karpathy, A. Khosla, M. Bernstein, A.C. Berg, and F.F. Li. Imagenet large scale visual recognition challenge. IJCV, 2015. [24] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, 2014. [25] S. Thrun and L. Pratt, editors. Learning to Learn. Kluwer Academic Publishers, 1998. [26] K. van de Sande, J. Uijlings, T. Gevers, and A. Smeulders. Segmentation as selective search for object recognition. In ICCV, 2011. [27] A. Vedaldi and K. Lenc. Matconvnet – convolutional neural networks for matlab. In Proceeding of the ACM Int. Conf. on Multimedia, 2015. [28] P. Vincent, H. Larochelle, Y. Bengio, and P. A. Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, pages 1096–1103. ACM, 2008. [29] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. CoRR, abs/1311.2901, 2013. [30] T. Zhang, B. Ghanem, S. Liu, and N. Ahuja. 
Robust visual tracking via structured multi-task sparse learning. IJCV, 101(2):367–383, 2013. [31] Z. Zhang, P. Luo, C. C. Loy, and X. Tang. Facial landmark detection by deep multi-task learning. In ECCV, pages 94–108. Springer, 2014.
A Probabilistic Programming Approach To Probabilistic Data Analysis Feras Saad MIT Probabilistic Computing Project fsaad@mit.edu Vikash Mansinghka MIT Probabilistic Computing Project vkm@mit.edu Abstract Probabilistic techniques are central to data analysis, but different approaches can be challenging to apply, combine, and compare. This paper introduces composable generative population models (CGPMs), a computational abstraction that extends directed graphical models and can be used to describe and compose a broad class of probabilistic data analysis techniques. Examples include discriminative machine learning, hierarchical Bayesian models, multivariate kernel methods, clustering algorithms, and arbitrary probabilistic programs. We demonstrate the integration of CGPMs into BayesDB, a probabilistic programming platform that can express data analysis tasks using a modeling definition language and structured query language. The practical value is illustrated in two ways. First, the paper describes an analysis on a database of Earth satellites, which identifies records that probably violate Kepler’s Third Law by composing causal probabilistic programs with nonparametric Bayes in 50 lines of probabilistic code. Second, it reports the lines of code and accuracy of CGPMs compared with baseline solutions from standard machine learning libraries. 1 Introduction Probabilistic techniques are central to data analysis, but can be difficult to apply, combine, and compare. Such difficulties arise because families of approaches such as parametric statistical modeling, machine learning and probabilistic programming are each associated with different formalisms and assumptions. 
The contributions of this paper are (i) a way to address these challenges by defining CGPMs, a new family of composable probabilistic models; (ii) an integration of this family into BayesDB [10], a probabilistic programming platform for data analysis; and (iii) empirical illustrations of the efficacy of the framework for analyzing a real-world database of Earth satellites. We introduce composable generative population models (CGPMs), a computational formalism that generalizes directed graphical models. CGPMs specify a table of observable random variables with a finite number of columns and countably infinitely many rows. They support complex intra-row dependencies among the observables, as well as inter-row dependencies among a field of latent random variables. CGPMs are described by a computational interface for generating samples and evaluating densities for random variables derived from the base table by conditioning and marginalization. This paper shows how to package discriminative statistical learning techniques, dimensionality reduction methods, arbitrary probabilistic programs, and their combinations, as CGPMs. We also describe algorithms and illustrate new syntaxes in the probabilistic Metamodeling Language for building composite CGPMs that can interoperate with BayesDB. The practical value is illustrated in two ways. First, we describe a 50-line analysis that identifies satellite data records that probably violate their theoretical orbital characteristics. The BayesDB script builds models that combine non-parametric Bayesian structure learning with a causal probabilistic program that implements a stochastic variant of Kepler’s Third Law. Second, we illustrate coverage and conciseness of the CGPM abstraction by quantifying the improvement in accuracy and reduction in lines of code achieved on a representative data analysis task. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
2 Composable Generative Population Models A composable generative population model represents a data generating process for an exchangeable sequence of random vectors (x1, x2, . . . ), called a population. Each member xr is T-dimensional, and element x[r,t] takes values in an observation space Xt, for t ∈[T] and r ∈N. A CGPM G is formally represented by a collection of variables that characterize the data generating process: G = (α, θ, Z = {zr : r ∈N}, X = {xr : r ∈N}, Y = {yr : r ∈N}). • α: Known, fixed quantities about the population, such as metadata and hyperparameters. • θ: Population-level latent variables relevant to all members of the population. • zr = (z[r,1], . . . z[r,L]): Member-specific latent variables that govern only member r directly. • xr = (x[r,1], . . . x[r,T ]): Observable output variables for member r. A subset of these variables may be observed and recorded in a dataset D. • yr = (y[r,1], . . . y[r,I]): Input variables, such as “feature vectors” in a purely discriminative model. A CGPM is required to satisfy the following conditional independence constraint: ∀r ̸= r′ ∈N, ∀t, t′ ∈[T] : x[r,t] ⊥⊥x[r′,t′] | {α, θ, zr, zr′}. (1) Eq (1) formalizes the notion that all dependencies across members r ∈N are completely mediated by the population parameters θ and member-specific variables zr. However, elements x[r,i] and x[r,j] within a member are generally free to assume any dependence structure. Similarly, the memberspecific latents in Z may be either uncoupled or highly-coupled given population parameters θ. CGPMs differ from the standard mathematical definition of a joint density in that they are defined in terms of a computational interface (Listing 1). As computational objects, they explicitly distinguish between the sampler for the random variables from their joint distribution, and the assessor of their joint density. 
In particular, a CGPM is required to sample/assess the joint distribution of a subset of output variables x[r,Q] conditioned on another subset x[r,E], and marginalizing over x[r,[T ]\(Q∪E)]. Listing 1 Computational interface for composable generative population models. • s ←simulate (G, member: r, query: Q = {qk}, evidence : x[r,E], input : yr) Generate a sample from the distribution s ∼G x[r,Q]|{x[r,E], yr, D}. • c ←logpdf (G, member: r, query : x[r,Q], evidence : x[r,E], input : yr) Evaluate the log density log pG(x[r,Q]|{x[r,E], yr, D}). • G′ ←incorporate (G, measurement : x[r,t] or yr) Record a measurement x[r,t] ∈Xt (or yr) into the dataset D. • G′ ←unincorporate (G, member : r) Eliminate all measurements of input and output variables for member r. • G′ ←infer (G, program : T ) Adjust internal latent state in accordance with the learning procedure specified by program T . 2.1 Primitive univariate CGPMs and their statistical data types The statistical data type (Figure 1) of a population variable xt generated by a CGPM provides a more refined taxonomy than its “observation space” Xt. The (parameterized) support of a statistical type is the set in which samples from simulate take values. Each statistical type is also associated with a base measure which ensures logpdf is well-defined. In high-dimensional populations with heterogeneous types, logpdf is taken against the product measure of these base measures. The statistical type also identifies invariants that the variable maintains. For instance, the values of a NOMINAL variable are permutation-invariant. Figure 1 shows statistical data types provided by the Metamodeling Language from BayesDB. The final column shows some examples of primitive CGPMs that are compatible with each statistical type; they implement logpdf directly using univariate probability density functions, and algorithms for simulate are well known [4]. 
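The computational interface of Listing 1 can be rendered as an abstract base class. The sketch below, including the toy Bernoulli CGPM, uses simplified signatures of our own choosing; it is an illustration of the interface, not the BayesDB implementation.

```python
import abc
import math
import random

class CGPM(abc.ABC):
    """Abstract sketch of the CGPM interface (Listing 1)."""

    @abc.abstractmethod
    def simulate(self, member, query, evidence=None, inputs=None):
        """Sample the query variables given evidence and inputs."""

    @abc.abstractmethod
    def logpdf(self, member, query, evidence=None, inputs=None):
        """Evaluate the log density of the query given evidence and inputs."""

    @abc.abstractmethod
    def incorporate(self, member, measurement):
        """Record a measurement into the dataset D."""

    @abc.abstractmethod
    def unincorporate(self, member):
        """Eliminate all measurements for the given member."""

    @abc.abstractmethod
    def infer(self, program=None):
        """Adjust internal latent state according to a learning procedure."""

class BernoulliCGPM(CGPM):
    """Toy primitive CGPM: i.i.d. binary outputs with a single parameter p,
    fit by maximum likelihood in infer (one of the options named in the text)."""

    def __init__(self, p=0.5):
        self.p, self.data = p, {}

    def simulate(self, member, query, evidence=None, inputs=None):
        return {q: int(random.random() < self.p) for q in query}

    def logpdf(self, member, query, evidence=None, inputs=None):
        return sum(math.log(self.p if v == 1 else 1.0 - self.p)
                   for v in query.values())

    def incorporate(self, member, measurement):
        self.data[member] = measurement

    def unincorporate(self, member):
        self.data.pop(member, None)

    def infer(self, program=None):
        if self.data:  # maximum-likelihood estimate of p
            self.p = sum(self.data.values()) / len(self.data)
```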
For infer, their parameters may be fixed, or learned from data using, e.g., maximum likelihood [2, Chapter 7] or Bayesian priors [5]. We refer to an extended version of this paper [14, Section 3] for using these primitives to implement CGPMs for a broad collection of model classes, including non-parametric Bayes, nearest neighbors, PCA, discriminative machine learning, and multivariate kernel methods.

Statistical Data Type  Parameters       Support             Measure/σ-Algebra  Primitive CGPM
BINARY                 –                {0, 1}              (#, 2^{0,1})       BERNOULLI
NOMINAL                symbols: S       {0, ..., S−1}       (#, 2^[S])         CATEGORICAL
COUNT/RATE             base: b          {0, 1/b, 2/b, ...}  (#, 2^N)           POISSON, GEOMETRIC
CYCLIC                 period: p        (0, p)              (λ, B(R))          VON-MISES
MAGNITUDE              –                (0, ∞)              (λ, B(R))          LOGNORMAL, EXPON
NUMERICAL              –                (−∞, ∞)             (λ, B(R))          NORMAL
NUMERICAL-RANGED       low: l, high: h  (l, h) ⊂ R          (λ, B(R))          BETA, NORMAL-TRUNC

Figure 1: Statistical data types for population variables generated by CGPMs available in the BayesDB Metamodeling Language, and samples from their marginal distributions.

2.2 Implementing general CGPMs as probabilistic programs in VentureScript

In this section, we show how to implement simulate and logpdf (Listing 1) for composable generative models written in VentureScript [8], a probabilistic programming language with programmable inference. For simplicity, this section assumes a stronger conditional independence constraint:

    ∃ l, l′ ∈ [L] such that (r, t) ≠ (r′, t′) =⇒ x_[r,t] ⊥⊥ x_[r′,t′] | {α, θ, z_[r,l], z_[r′,l′], y_r, y_r′}.    (2)

In words, for every observable element x_[r,t], there exists a latent variable z_[r,l] which (in addition to θ) mediates all coupling with other variables in the population. The member latents Z may still exhibit arbitrary dependencies.
The approach for simulate and logpdf described below is based on approximate inference in tagged subparts of the Venture trace, which carries a full realization of all random choices (population and member-specific latent variables) made by the program. The runtime system carries a set of K traces {(θ^k, Z^k)}_{k=1}^K sampled from an approximate posterior p_G(θ, Z | D). These traces are assigned weights depending on the user-specified evidence x_[r,E] in the simulate/logpdf function call. G represents the CGPM as a probabilistic program, and the input y_r and latent variables Z^k are treated as ambient quantities in θ^k. The distribution of interest is

    p_G(x_[r,Q] | x_[r,E], D)
      = ∫_θ p_G(x_[r,Q] | x_[r,E], θ, D) p_G(θ | x_[r,E], D) dθ
      = ∫_θ p_G(x_[r,Q] | x_[r,E], θ, D) [ p_G(x_[r,E] | θ, D) p_G(θ | D) / p_G(x_[r,E] | D) ] dθ    (3)
      ≈ (1 / Σ_{k=1}^K w^k) Σ_{k=1}^K p_G(x_[r,Q] | x_[r,E], θ^k, D) w^k,   where θ^k ∼ G | D.    (4)

The weight w^k = p_G(x_[r,E] | θ^k, D) of trace θ^k is the likelihood of the evidence. The weighting scheme (4) is a computational trade-off avoiding the requirement to run posterior inference on population parameters θ for a query about member r. It suffices to derive the distribution for only θ^k:

    p_G(x_[r,Q] | x_[r,E], θ^k, D)
      = ∫_{z^k_r} p_G(x_[r,Q], z^k_r | x_[r,E], θ^k, D) dz^k_r    (5)
      = ∫_{z^k_r} [ Π_{q∈Q} p_G(x_[r,q] | z^k_r, θ^k) ] p_G(z^k_r | x_[r,E], θ^k, D) dz^k_r
      ≈ (1/J) Σ_{j=1}^J Π_{q∈Q} p_G(x_[r,q] | z^{k,j}_r, θ^k),   where z^{k,j}_r ∼ G | {x_[r,E], θ^k, D}.    (6)

Eq (5) suggests that simulate can be implemented by sampling (x_[r,Q], z^k_r) ∼ G | {x_[r,E], θ^k, D} from the joint local posterior, then returning the elements x_[r,Q]. Eq (6) shows that logpdf can be implemented by first sampling the member latents z^k_r ∼ G | {x_[r,E], θ^k, D} from the local posterior; using the conditional independence constraint (2), the query x_[r,Q] then factors into a product of density terms for each element x_[r,q].

To aggregate over {θ^k}_{k=1}^K, for simulate the runtime obtains the queried sample by first drawing k ∼ CATEGORICAL({w^1, ..., w^K}), then returning the sample x_[r,Q] drawn from trace θ^k.
Similarly, logpdf is computed using the weighted Monte Carlo estimator (6). Algorithms 2a and 2b summarize implementations of simulate and logpdf in a general probabilistic programming environment.

Algorithm 2a simulate for CGPMs in a probabilistic programming environment.
1: function SIMULATE(G, r, Q, x[r,E], yr)
2:   for k = 1, . . . , K do                              ▷ for each trace k
3:     if zk_r ∉ Zk then                                  ▷ if member r has unknown local latents
4:       zk_r ∼ G | {θk, Zk, D}                           ▷ sample them from the prior
5:     wk ← Π_{e∈E} pG(x[r,e] | θk, zk_r)                 ▷ weight the trace by likelihood of evidence
6:   k ∼ CATEGORICAL({w1, . . . , wK})                    ▷ importance resample the traces
7:   {x[r,Q], zk_r} ∼ G | {θk, Zk, D ∪ {yr, x[r,E]}}      ▷ run a transition operator leaving target invariant
8:   return x[r,Q]                                        ▷ select query variables from the resampled trace

Algorithm 2b logpdf for CGPMs in a probabilistic programming environment.
1: function LOGPDF(G, r, x[r,Q], x[r,E], yr)
2:   for k = 1, . . . , K do                              ▷ for each trace k
3:     Run steps 2 through 5 from Algorithm 2a            ▷ retrieve the trace weight
4:     for j = 1, . . . , J do                            ▷ obtain J samples of latents in scope of member r
5:       z^{k,j}_r ∼ G | {θk, Zk, D ∪ {yr, x[r,E]}}       ▷ run a transition operator leaving target invariant
6:       h^{k,j} ← Π_{q∈Q} pG(x[r,q] | θk, z^{k,j}_r)     ▷ compute the density estimate
7:     rk ← (1/J) Σ_{j=1}^{J} h^{k,j}                     ▷ aggregate density estimates by simple Monte Carlo
8:     qk ← rk · wk                                       ▷ importance weight the estimate
9:   return log Σ_{k=1}^{K} qk − log Σ_{k=1}^{K} wk       ▷ weighted importance sampling over all traces
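The trace-level control flow of Algorithms 2a and 2b can be sketched in Python. The model-specific work (sampling latents, evaluating likelihoods) is abstracted into `weight_fn`, `local_sample_fn`, and `density_fn` callables, which are illustrative stand-ins rather than the paper's runtime interface:

```python
import math
import random

def simulate_across_traces(traces, weight_fn, local_sample_fn, rng=random):
    """Sketch of Algorithm 2a: weight each trace by the likelihood of the
    evidence, resample one trace in proportion to the weights, then draw
    the query from that trace's local posterior."""
    weights = [weight_fn(t) for t in traces]
    total = sum(weights)
    # Inverse-CDF categorical draw over the trace weights (step 6).
    u, acc, chosen = rng.random() * total, 0.0, traces[-1]
    for t, w in zip(traces, weights):
        acc += w
        if u <= acc:
            chosen = t
            break
    return local_sample_fn(chosen)

def logpdf_across_traces(traces, weight_fn, density_fn):
    """Sketch of Algorithm 2b: combine per-trace density estimates r_k by
    weighted importance sampling: log(sum_k r_k w_k) - log(sum_k w_k)."""
    weights = [weight_fn(t) for t in traces]
    estimates = [density_fn(t) for t in traces]
    num = sum(r * w for r, w in zip(estimates, weights))
    return math.log(num) - math.log(sum(weights))
```

With two traces of weights (1, 3) and density estimates (0.2, 0.4), the logpdf estimate is log((0.2·1 + 0.4·3)/4) = log(0.35).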
Let Ga be a CGPM with output xa_∗ and input ya_∗, and Gb have output xb_∗ and input yb_∗ (the symbol ∗ indexes all members r ∈ N). The composition Gb_B ◦ Ga_A applies the subset of outputs xa[∗,A] of Ga to the inputs yb[∗,B] of Gb, where |A| = |B| and the variables are type-matched (Figure 1). This operation results in a new CGPM Gc with output xa_∗ ∪ xb_∗ and input ya_∗ ∪ yb[∗,\B]. In general, a collection {Gk : k ∈ [K]} of CGPMs can be organized into a generalized directed graph G[K], which is itself a CGPM. Node k is an "internal" CGPM Gk, and the labeled edge aA → bB denotes the composition Gb_B ◦ Ga_A. The directed acyclic edge structure applies only to edges between elements of different CGPMs in the network; elements xk[∗,i], xk[∗,j] within Gk may satisfy the more general constraint (1). Algorithms 3a and 3b show sampling-importance-resampling and ratio-likelihood weighting algorithms that combine simulate and logpdf from each individual Gk to compute queries against the network G[K]. The symbol πk = {(p, t) : xp[∗,t] ∈ yk_∗} refers to the set of all output elements from upstream CGPMs connected to the inputs of Gk, so that {πk : k ∈ [K]} encodes the graph adjacency matrix. Subroutine 3c generates a full realization of all unconstrained variables, and weights forward samples from the network by the likelihood of constraints. Algorithm 3b is based on ratio-likelihood weighting (both terms in line 6 are computed by unnormalized importance sampling) and admits an analysis with known error bounds when logpdf and simulate of each Gk are exact [7].

Algorithm 3a simulate in a directed acyclic network of CGPMs.
1: function SIMULATE(Gk, r, Qk, xk[r,Ek], yk_r, for k ∈ [K])
2:   for j = 1, . . . , J do                              ▷ generate J importance samples
3:     (sj, wj) ← WEIGHTED-SAMPLE({xk[r,Ek] : k ∈ [K]})   ▷ retrieve jth weighted sample
4:   m ∼ CATEGORICAL({w1, . . . , wJ})                    ▷ resample by importance weights
5:   return {xk[r,Qk] ∈ sm : k ∈ [K]}                     ▷ return query variables from the selected sample

Algorithm 3b logpdf in a directed acyclic network of CGPMs.
1: function LOGPDF(Gk, r, xk[r,Qk], xk[r,Ek], yk_r, for k ∈ [K])
2:   for j = 1, . . . , J do                              ▷ generate J importance samples
3:     (sj, wj) ← WEIGHTED-SAMPLE({xk[r,Qk∪Ek] : k ∈ [K]})  ▷ joint density of query/evidence
4:   for j = 1, . . . , J′ do                             ▷ generate J′ importance samples
5:     (s′j, w′j) ← WEIGHTED-SAMPLE({xk[r,Ek] : k ∈ [K]}) ▷ marginal density of evidence
6:   return log(Σ_[J] wj / Σ_[J′] w′j) − log(J/J′)        ▷ return likelihood ratio importance estimate

Algorithm 3c Weighted forward sampling in a directed acyclic network of CGPMs.
1: function WEIGHTED-SAMPLE(constraints: xk[r,Ck], for k ∈ [K])
2:   (s, log w) ← (∅, 0)                                  ▷ initialize empty sample with zero weight
3:   for k ∈ TOPOSORT({π1, . . . , πK}) do                ▷ topologically sort CGPMs using adjacency matrix
4:     ỹk_r ← yk_r ∪ {xp[r,t] ∈ s : (p, t) ∈ πk}          ▷ retrieve required inputs at node k
5:     log w ← log w + logpdf(Gk, r, xk[r,Ck], ∅, ỹk_r)   ▷ update weight by likelihood of constraint
6:     xk[r,\Ck] ← simulate(Gk, r, \Ck, xk[r,Ck], ỹk_r)   ▷ simulate unconstrained nodes
7:     s ← s ∪ xk[r,Ck∪\Ck]                               ▷ append all node values to sample
8:   return (s, w)                                        ▷ return the overall sample and its weight

3 Analyzing satellites using CGPMs built from causal probabilistic programs, discriminative machine learning, and Bayesian non-parametrics

This section outlines a case study applying CGPMs to a database of 1163 satellites maintained by the Union of Concerned Scientists [12]. The dataset contains 23 numerical and categorical features of each satellite, such as its material, functional, physical, orbital and economic characteristics. The list of variables and examples of three representative satellites are shown in Table 1. A detailed study of this database using BayesDB is provided in [10].
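The weighted forward-sampling subroutine (Algorithm 3c) can be sketched in Python. Each node is represented here as a dict with `simulate`/`logpdf` callables taking the already-sampled parent values; this interface, and the Kahn-style topological sort, are illustrative stand-ins for the CGPM contract and TOPOSORT:

```python
def weighted_sample(nodes, parents, constraints):
    """Sketch of Algorithm 3c: topologically traverse a network of CGPMs,
    scoring constrained nodes by their logpdf and forward-simulating the
    unconstrained ones; returns (sample, log-weight)."""
    # Kahn-style topological sort over the parent map (assumes a DAG).
    order, remaining = [], dict(parents)
    while remaining:
        ready = [k for k, ps in remaining.items()
                 if all(p not in remaining for p in ps)]
        order.extend(ready)
        for k in ready:
            del remaining[k]
    sample, log_w = {}, 0.0
    for k in order:
        inputs = {p: sample[p] for p in parents[k]}
        if k in constraints:
            # Constrained node: fix its value, accumulate its likelihood.
            sample[k] = constraints[k]
            log_w += nodes[k]['logpdf'](constraints[k], inputs)
        else:
            # Unconstrained node: forward-simulate given sampled parents.
            sample[k] = nodes[k]['simulate'](inputs)
    return sample, log_w
```

On a two-node chain a → b with b constrained, node a is forward-simulated and the weight is the log-likelihood of the constraint on b given a.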
Here, we compose the baseline CGPM in BayesDB, CrossCat [9], a non-parametric Bayesian structure learner for high dimensional data tables, with several CGPMs: a classical physics model written in VentureScript, a random forest classifier, factor analysis, and an ordinary least squares regressor. These composite models allow us to identify satellites that probably violate their orbital mechanics (Figure 2), as well as accurately infer the anticipated lifetimes of new satellites (Figure 3). We refer to [14, Section 6] for several more experiments on a broader set of data analysis tasks, as well as comparisons to baseline machine learning solutions.

Variable                     | International Space Station           | AAUSat-3                  | Advanced Orion 5 (NRO L-32, USA 223)
Country of Operator          | Multinational                         | Denmark                   | USA
Operator Owner               | NASA/Multinational                    | Aalborg University        | National Reconnaissance Office (NRO)
Users                        | Government                            | Civil                     | Military
Purpose                      | Scientific Research                   | Technology Development    | Electronic Surveillance
Class of Orbit               | LEO                                   | LEO                       | GEO
Type of Orbit                | Intermediate                          | NaN                       | NaN
Perigee km                   | 401                                   | 770                       | 35500
Apogee km                    | 422                                   | 787                       | 35500
Eccentricity                 | 0.00155                               | 0.00119                   | 0
Period minutes               | 92.8                                  | 100.42                    | NaN
Launch Mass kg               | NaN                                   | 0.8                       | 5000
Dry Mass kg                  | NaN                                   | NaN                       | NaN
Power watts                  | NaN                                   | NaN                       | NaN
Date of Launch               | 36119                                 | 41330                     | 40503
Anticipated Lifetime         | 30                                    | 1                         | NaN
Contractor                   | Boeing Satellite Systems/Multinational | Aalborg University       | National Reconnaissance Laboratory
Country of Contractor        | Multinational                         | Denmark                   | USA
Launch Site                  | Baikonur Cosmodrome                   | Satish Dhawan Space Center | Cape Canaveral
Launch Vehicle               | Proton                                | PSLV                      | Delta 4 Heavy
Source Used for Orbital Data | www.satellitedebris.net 12/12         | SC - ASCR                 | SC - ASCR
longitude radians of geo     | NaN                                   | NaN                       | 1.761037215
Inclination radians          | 0.9005899                             | 1.721418241               | 0

Table 1: Variables in the satellite population, and three representative satellites. The records are multivariate, heterogeneously typed, and contain arbitrary patterns of missing data.
1  CREATE TABLE satellites_ucs FROM 'satellites.csv';
2  CREATE POPULATION satellites FOR satellites_ucs WITH SCHEMA ( GUESS STATTYPES FOR (*) );
3
4  CREATE METAMODEL satellites_hybrid FOR satellites WITH BASELINE CROSSCAT (
5
6  OVERRIDE GENERATIVE MODEL FOR type_of_orbit
7  GIVEN apogee_km, perigee_km, period_minutes, users, class_of_orbit
8  USING RANDOM_FOREST (num_categories = 7);
9
10 OVERRIDE GENERATIVE MODEL FOR launch_mass_kg, dry_mass_kg, power_watts, perigee_km, apogee_km
11 USING FACTOR_ANALYSIS (dimensionality = 2);
12
13 OVERRIDE GENERATIVE MODEL FOR period_minutes
14 AND EXPOSE kepler_cluster_id CATEGORICAL, kepler_noise NUMERICAL
15 GIVEN apogee_km, perigee_km USING VENTURESCRIPT (program = '
16 define dpmm_kepler = () -> { // Definition of DPMM Kepler model program.
17   assume keplers_law = (apogee, perigee) -> {
18     (GM, earth_radius) = (398600, 6378);
19     a = .5*(abs(apogee) + abs(perigee)) + earth_radius;
20     2 * pi * sqrt(a**3 / GM) / 60 };
21   // Latent variable priors.
22   assume crp_alpha = gamma(1,1);
23   assume cluster_id_sampler = make_crp(crp_alpha);
24   assume noise_sampler = mem((cluster) -> make_nig_normal(1, 1, 1, 1));
25   // Simulator for latent variables (kepler_cluster_id and kepler_noise).
26   assume sim_cluster_id = mem((rowid, apogee, perigee) -> {
27     cluster_id_sampler() #rowid:1 });
28   assume sim_noise = mem((rowid, apogee, perigee) -> {
29     cluster_id = sim_cluster_id(rowid, apogee, perigee);
30     noise_sampler(cluster_id)() #rowid:2 });
31   // Simulator for observable variable (period_minutes).
32   assume sim_period = mem((rowid, apogee, perigee) -> {
33     keplers_law(apogee, perigee) + sim_noise(rowid, apogee, perigee) });
34   assume outputs = [sim_period, sim_cluster_id, sim_noise]; // List of output variables.
35 };
36 // Procedures for observing the output variables.
37 define obs_cluster_id = (rowid, apogee, perigee, value, label) -> {
38   $label: observe sim_cluster_id( $rowid, $apogee, $perigee) = atom(value); };
39 define obs_noise = (rowid, apogee, perigee, value, label) -> {
40   $label: observe sim_noise( $rowid, $apogee, $perigee) = value; };
41 define obs_period = (rowid, apogee, perigee, value, label) -> {
42   theoretical_period = run(sample keplers_law($apogee, $perigee));
43   obs_noise( rowid, apogee, perigee, value - theoretical_period, label); };
44 define observers = [obs_period, obs_cluster_id, obs_noise]; // List of observer procedures.
45 define inputs = ["apogee", "perigee"]; // List of input variables.
46 define transition = (N) -> { default_markov_chain(N) }; // Transition operator.
47 '));
48 INITIALIZE 10 MODELS FOR satellites_hybrid;
49 ANALYZE satellites_hybrid FOR 100 ITERATIONS;
50 INFER name, apogee_km, perigee_km, period_minutes, kepler_cluster_id, kepler_noise FROM satellites;

[Figure 2, left panel: "Clusters Identified by Kepler CGPM" — scatter plot of period (mins) versus perigee (km), coloring satellites by the four clusters and showing the theoretically feasible orbits; labeled satellites include Orion6, Geotail, Meridian4, Amos5, and NavStar. Right panel: "Empirical Distribution of Orbital Deviations" — histogram of the number of satellites by magnitude of deviation from Kepler's Law (mins²), with regimes labeled negligible, noticeable, large, and extreme.]

Figure 2: A session in BayesDB to detect satellites whose orbits are likely violations of Kepler’s Third Law using a causal composable generative population model written in VentureScript. The dpmm_kepler CGPM (line 17) learns a DPMM on the residuals of each satellite’s deviation from its theoretical orbit. Both the cluster identity and inferred noise are exposed latent variables (line 14). Each dot in the scatter plot (left) is a satellite in the population, and its color represents the latent cluster assignment learned by dpmm_kepler.
The histogram (right) shows that each of the four detected clusters roughly translates to a qualitative description of the deviation: yellow (negligible), magenta (noticeable), green (large), and blue (extreme).

1  CREATE TABLE data_train FROM 'sat_train.csv';
2  .nullify data_train 'NaN';
3
4  CREATE POPULATION satellites FOR data_train
5  WITH SCHEMA(
6    GUESS STATTYPES FOR (*)
7  );
8
9  CREATE METAMODEL crosscat_ols FOR satellites
10 WITH BASELINE CROSSCAT(
11   OVERRIDE GENERATIVE MODEL FOR
12     anticipated_lifetime
13   GIVEN
14     type_of_orbit, perigee_km, apogee_km,
15     period_minutes, date_of_launch,
16     launch_mass_kg
17   USING LINEAR_REGRESSION
18 );
19
20 INITIALIZE 4 MODELS FOR crosscat_ols;
21 ANALYZE crosscat_ols FOR 100 ITERATIONS WAIT;
22
23 CREATE TABLE data_test FROM 'sat_test.csv';
24 .nullify data_test 'NaN';
25 .sql INSERT INTO data_train
26   SELECT * FROM data_test;
27
28 CREATE TABLE predicted_lifetime AS
29 INFER EXPLICIT
30   PREDICT anticipated_lifetime
31   CONFIDENCE prediction_confidence
32 FROM satellites WHERE _rowid_ > 1000;

(a) Full session in BayesDB which loads the training and test sets, creates a hybrid CGPM, and runs the regression using CrossCat+OLS.
import numpy as np
import pandas as pd

def dummy_code_categoricals(frame, maximum=10):

    def dummy_code_categoricals(series):
        categories = pd.get_dummies(series, dummy_na=1)
        if len(categories.columns) > maximum - 1:
            return None
        if sum(categories[np.nan]) == 0:
            del categories[np.nan]
        categories.drop(categories.columns[-1], axis=1, inplace=1)
        return categories

    def append_frames(base, right):
        for col in right.columns:
            base[col] = pd.DataFrame(right[col])

    numerical = frame.select_dtypes([float])
    categorical = frame.select_dtypes([object])
    categorical_coded = filter(
        lambda s: s is not None,
        [dummy_code_categoricals(categorical[c]) for c in categorical.columns])

    joined = numerical
    for sub_frame in categorical_coded:
        append_frames(joined, sub_frame)

    return joined

(b) Ad-hoc Python routine (used by baselines) for coding nominal predictors in a dataframe with missing values and mixed data types.

[Figure 3, left panel: scatter plot of mean squared error versus lines of code (both on log scales) for ridge, ols, lasso, kernel, forest, bayesdb(crosscat+ols), and bayesdb(crosscat).]

Figure 3: In a high-dimensional regression problem with mixed data types and missing data, the composite CGPM improves prediction accuracy over purely generative and purely discriminative baselines. The task is to infer the anticipated lifetime of a held-out satellite given categorical and numerical features such as type of orbit, launch mass, and orbital period. As feature vectors in the test set have missing entries, purely discriminative models (ridge, lasso, OLS) either heuristically impute missing features, or ignore the features and predict the anticipated lifetime using the mean in the training set. The purely generative model (CrossCat) can impute missing features from their joint distribution, but only indirectly mediates dependencies between the predictors and response through latent variables. The composite CGPM (CrossCat+OLS) in panel (a) combines advantages of both approaches; statistical imputation followed by regression on the features leads to improved predictive accuracy.
The reduced code size is a result of using SQL, BQL, and MML for preprocessing, model-building, and predictive querying, as opposed to collections of ad-hoc scripts such as panel (b).

Figure 2 shows the MML program for constructing the hybrid CGPM on the satellites population. In terms of the compositional formalism from Section 2.3, the CrossCat CGPM (specified by the MML BASELINE keyword) learns the joint distribution of variables at the “root” of the network (i.e., all variables from Table 1 which do not appear as arguments to an MML OVERRIDE command). The dpmm_kepler CGPM in line 16 of the top panel of Figure 2 accepts apogee_km and perigee_km as input variables y = (A, P), and produces as output the period_minutes x = (T). These variables characterize the elliptical orbit of a satellite and are constrained by the relationships e = (A − P)/(A + P) and T = 2π·sqrt(((A + P)/2)³/GM), where e is the eccentricity and GM is a physical constant. The program specifies a stochastic version of Kepler’s Law using a Dirichlet process mixture model for the distribution over errors (between the theoretical and observed period),

P ∼ DP(α, NORMAL-INVERSE-GAMMA(m, V, a, b)),
(µr, σ²r) | P ∼ P,
εr | {µr, σ²r, yr} ∼ NORMAL(· | µr, σ²r),  where εr := Tr − KEPLER(Ar, Pr).

The lower panels of Figure 2 illustrate how the dpmm_kepler CGPM clusters satellites based on the magnitude of the deviation from their theoretical orbits; the variables (deviation, cluster identity, etc.) in these figures are obtained from the BQL query on line 50. For instance, the satellite Orion6, shown in the right panel of Figure 2, belongs to a component with “extreme” deviation. Further investigation reveals that Orion6 has a recorded period of 23.94 minutes, most likely a data entry error for the true period of 24 hours (1440 minutes); we have reported such errors to the maintainers of the database.
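The Kepler's-law relationship used by the dpmm_kepler program can be checked directly in Python (constants and formula mirror the keplers_law procedure in the VentureScript listing; altitudes are measured from the surface, so the Earth's radius is added to obtain the semi-major axis):

```python
import math

GM = 398600          # Earth's gravitational parameter [km^3 / s^2]
EARTH_RADIUS = 6378  # [km]

def kepler_period(apogee_km, perigee_km):
    """Theoretical orbital period in minutes from Kepler's Third Law,
    T = 2*pi*sqrt(a^3 / GM), with semi-major axis a measured from the
    Earth's center."""
    a = 0.5 * (abs(apogee_km) + abs(perigee_km)) + EARTH_RADIUS
    return 2 * math.pi * math.sqrt(a ** 3 / GM) / 60

# ISS (apogee 422 km, perigee 401 km): recovers the recorded ~92.8 min.
iss_period = kepler_period(422, 401)

# Advanced Orion 5 (geosynchronous altitude, 35500 km): roughly 1420 min,
# i.e. close to a 24-hour day, far from Orion6's recorded 23.94 minutes.
orion_period = kepler_period(35500, 35500)
```

The huge gap between the theoretical ~24-hour period and the 23.94-minute record is exactly the kind of "extreme" deviation the dpmm_kepler clusters flag.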
The data analysis task in Figure 3 is to infer the anticipated_lifetime xr of a new satellite, given a set of features yr such as its type_of_orbit and perigee_km. A simple OLS regressor with normal errors is used for the response pGols(xr|yr). The CrossCat baseline learns a joint generative model for the covariates pGcrosscat(yr). The composite CGPM crosscat_ols built in Figure 3 (left panel) thus carries the full joint distribution over the predictors and response pG(xr, yr), leading to more accurate predictions. Advantages of this hybrid approach are further discussed in the figure.

4 Related Work and Discussion

This paper has shown that it is possible to use a computational formalism in probabilistic programming to uniformly apply, combine, and compare a broad class of probabilistic data analysis techniques. By integrating CGPMs into BayesDB [10] and expressing their compositions in the Metamodeling Language, we have shown it is possible to combine CGPMs synthesized by automatic model discovery [9] with custom probabilistic programs, which accept and produce multivariate inputs and outputs, into coherent joint probabilistic models. Advantages of this hybrid approach to modeling and inference include combining the strengths of both generative and discriminative techniques, as well as savings in code complexity from the uniformity of the CGPM interface. While our experiments have constructed CGPMs using VentureScript and Python implementations, the general probabilistic programming interface of CGPMs makes it possible for BayesDB to interact with a variety of systems such as BUGS [15], Stan [1], BLOG [11], Figaro [13], and others. Each of these systems provides varying levels of model expressiveness and inference capabilities, and can be used to construct domain-specific CGPMs with different performance properties based on the data analysis task at hand.
Moreover, by expressing the data analysis tasks in BayesDB using the model-independent Bayesian Query Language [10, Section 3], CGPMs can be queried without necessarily exposing their internal structures to end users. Taken together, these characteristics help illustrate the broad utility of the BayesDB probabilistic programming platform and architecture [14, Section 5], which in principle can be used to create and query novel combinations of black-box machine learning, statistical modeling, computer simulation, and probabilistic generative models. Our applications have so far focused on CGPMs for analyzing populations from standard multivariate statistics. A promising area for future work is extending the computational abstraction of CGPMs, as well as the Metamodeling and Bayesian Query Languages, to cover analysis tasks in other domains such as longitudinal populations [3], statistical relational settings [6], or natural language processing and computer vision. Another extension, important in practice, is developing alternative compositional algorithms for querying CGPMs (Section 2.3). The importance sampling strategy used for compositional simulate and logpdf may only be feasible when the networks are shallow and the constituent CGPMs are fairly noisy; better Monte Carlo strategies, or perhaps even variational strategies, may be needed for deeper networks. Additional future work for composite CGPMs includes (i) algorithms for jointly learning the internal parameters of each individual CGPM, using, e.g., imputations from its parents, and (ii) new meta-algorithms for structure learning among a collection of compatible CGPMs, in a similar spirit to the non-parametric divide-and-conquer method from [9]. We hope the formalisms in this paper lead to practical, unifying tools for data analysis that integrate these ideas, and provide abstractions that enable the probabilistic programming community to collaboratively explore these research directions.

References

[1] B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. A. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: A probabilistic programming language. Journal of Statistical Software, 2016.
[2] G. Casella and R. Berger. Statistical Inference. Duxbury Advanced Series in Statistics and Decision Sciences. Thomson Learning, 2002.
[3] M. Davidian and D. M. Giltinan. Nonlinear Models for Repeated Measurement Data, volume 62. CRC Press, 1995.
[4] L. Devroye. Sample-based non-uniform random variate generation. In Proceedings of the 18th Conference on Winter Simulation, pages 260–265. ACM, 1986.
[5] D. Fink. A compendium of conjugate priors. 1997.
[6] N. Friedman, L. Getoor, D. Koller, and A. Pfeffer. Learning probabilistic relational models. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, IJCAI 99, Stockholm, Sweden, pages 1300–1309, 1999.
[7] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[8] V. Mansinghka, D. Selsam, and Y. Perov. Venture: a higher-order probabilistic programming platform with programmable inference. CoRR, abs/1404.0099, 2014.
[9] V. Mansinghka, P. Shafto, E. Jonas, C. Petschulat, M. Gasner, and J. B. Tenenbaum. CrossCat: A fully Bayesian nonparametric method for analyzing heterogeneous, high dimensional data. arXiv preprint arXiv:1512.01272, 2015.
[10] V. Mansinghka, R. Tibbetts, J. Baxter, P. Shafto, and B. Eaves. BayesDB: A probabilistic programming system for querying the probable implications of data. arXiv preprint arXiv:1512.05006, 2015.
[11] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: Probabilistic models with unknown objects. Statistical Relational Learning, page 373, 2007.
[12] Union of Concerned Scientists. UCS Satellite Database, 2015.
[13] A. Pfeffer. Figaro: An object-oriented probabilistic programming language. Charles River Analytics Technical Report, 137, 2009.
[14] F. Saad and V. Mansinghka. Probabilistic data analysis with probabilistic programming. arXiv preprint arXiv:1608.05347, 2016.
[15] D. J. Spiegelhalter, A. Thomas, N. G. Best, W. Gilks, and D. Lunn. BUGS: Bayesian inference using Gibbs sampling, Version 0.5. http://www.mrc-bsu.cam.ac.uk/bugs, 1996.
2016
Assortment Optimization Under the Mallows Model

Antoine Désir (IEOR Department, Columbia University) antoine@ieor.columbia.edu
Vineet Goyal (IEOR Department, Columbia University) vgoyal@ieor.columbia.edu
Srikanth Jagabathula (IOMS Department, NYU Stern School of Business) sjagabat@stern.nyu.edu
Danny Segev (Department of Statistics, University of Haifa) segevd@stat.haifa.ac.il

Abstract

We consider the assortment optimization problem when customer preferences follow a mixture of Mallows distributions. The assortment optimization problem focuses on determining the revenue/profit maximizing subset of products from a large universe of products; it is an important decision that is commonly faced by retailers in determining what to offer their customers. There are two key challenges: (a) the Mallows distribution lacks a closed-form expression (and requires summing an exponential number of terms) to compute the choice probability and, hence, the expected revenue/profit per customer; and (b) finding the best subset may require an exhaustive search. Our key contributions are an efficiently computable closed-form expression for the choice probability under the Mallows model and a compact mixed integer linear program (MIP) formulation for the assortment problem.

1 Introduction

Determining the subset (or assortment) of items to offer is a key decision problem that commonly arises in several application contexts. A concrete setting is that of a retailer who carries a large universe of products U but can offer only a subset of the products in each store, online or offline. The objective of the retailer is typically to choose the offer set that maximizes the expected revenue/profit¹ earned from each arriving customer. Determining the best offer set requires: (a) a demand model and (b) a set optimization algorithm. The demand model specifies the expected revenue from each offer set, and the set optimization algorithm finds (an approximation of) the revenue maximizing subset.
In determining the demand, the demand model must account for product substitution behavior, whereby customers substitute to an available product (say, a dark blue shirt) when their most preferred product (say, a black one) is not offered. The substitution behavior makes the demand for each offered product a function of the entire offer set, increasing the complexity of the demand model. Nevertheless, existing work has shown that demand models that incorporate substitution effects provide significantly more accurate predictions than those that do not. The common approach to capturing substitution is through a choice model that specifies the demand as the probability P(a|S) of a random customer choosing product a from offer set S. The most general and popularly studied class of choice models is the rank-based class [9, 24, 12], which models customer purchase decisions through distributions over preference lists or rankings. These models assume that in each choice instance, a customer samples a preference list specifying a preference ordering over a subset of the products, and chooses the first available product on her list; the chosen product could very well be the no-purchase option.

¹As elaborated below, conversion-rate maximization can be obtained as a special case of revenue/profit maximization by setting the revenue/profit of all the products to be equal.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
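The rank-based choice mechanism described above is easy to state in code: sample a ranking, then pick the highest-ranked available option. A minimal sketch, where product 0 denotes the no-purchase option (assumed always available) and rankings are tuples listed from most to least preferred:

```python
def choice_probabilities(ranking_dist, offer_set):
    """Choice probabilities P(a|S) under a rank-based model: a customer
    samples a ranking and chooses her first available option.
    ranking_dist maps rankings (tuples, most preferred first) to their
    probabilities; 0 is the always-available no-purchase option."""
    probs = {a: 0.0 for a in set(offer_set) | {0}}
    for ranking, p in ranking_dist.items():
        for product in ranking:
            if product == 0 or product in offer_set:
                # First available option on the list gets this mass.
                probs[product] += p
                break
    return probs
```

For example, with rankings (1,2,0), (2,0,1), (0,1,2) having probabilities 0.5, 0.3, 0.2 and offer set {2}, product 2 captures the first two customer types (the first substitutes from the unavailable product 1), giving P(2|{2}) = 0.8 and P(0|{2}) = 0.2.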
The most commonly studied models in this context are the Plackett-Luce (PL) model [22] and its variants, the nested logit (NL) model and the mixture of PL models. The key reason for their popularity is that the assumptions made in these models (such as the Gumbel assumption for the error terms in the PL model) are geared towards obtaining closed-form expressions for the choice probabilities P(a|S). On the other hand, other popular models in the machine learning literature, such as the Mallows model, have largely been ignored because computing choice probabilities under these models has been generally considered to be computationally challenging, requiring marginalization of a distribution with an exponentially large support size. In this paper, we focus on solving the assortment optimization problem under the Mallows model. The Mallows distribution was introduced in the mid-1950s [17] and is the most popular member of the so-called distance-based ranking models, which are characterized by a modal ranking ω and a concentration parameter θ. The probability that a ranking σ is sampled falls exponentially as e^(−θ·d(σ,ω)). Different distance functions result in different models. The Mallows model uses the Kendall-Tau distance, which measures the number of pairwise disagreements between the two rankings. Intuitively, the Mallows model assumes that consumer preferences are concentrated around a central permutation, with the likelihood of large deviations being low. We assume that the parameters of the model are given. Existing techniques in machine learning may be applied to estimate the model parameters. In settings of our interest, data are in the form of choice observations (item i chosen from offer set S), which are often collected as part of purchase transactions.
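The Mallows definition can be made concrete with a brute-force sketch for small n: compute the Kendall-Tau distance and normalize by enumerating all permutations (this is exactly the exponential summation that the paper's closed-form results avoid; the helper names are ours):

```python
import math
from itertools import permutations

def kendall_tau(sigma, omega):
    """Kendall-Tau distance: number of product pairs on which the two
    rankings disagree. Rankings are tuples listing products from most
    to least preferred."""
    items = sorted(sigma)
    pos_s = {a: sigma.index(a) for a in items}
    pos_w = {a: omega.index(a) for a in items}
    n = len(items)
    return sum(
        1
        for i in range(n) for j in range(i + 1, n)
        if (pos_s[items[i]] - pos_s[items[j]])
           * (pos_w[items[i]] - pos_w[items[j]]) < 0)

def mallows_pmf(sigma, omega, theta):
    """Probability of sigma under a Mallows model centered at omega,
    normalized by brute-force enumeration (feasible only for small n)."""
    psi = sum(math.exp(-theta * kendall_tau(tuple(p), omega))
              for p in permutations(omega))
    return math.exp(-theta * kendall_tau(sigma, omega)) / psi
```

At θ = 0 every ranking is equally likely (1/n!), and as θ grows the mass concentrates on the modal ranking ω.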
Existing techniques focus on estimating the parameters of the Mallows model when the observations are complete rankings [8], partitioned preferences [14] (which include top-k/bottom-k items), or a general partial order specified in the form of a collection of pairwise preferences [15]. While the techniques based on complete rankings and partitioned preferences do not apply to this context, the techniques proposed in [15] can be applied to infer the model parameters.

Our results. We address the two key computational challenges that arise in solving our problem: (a) efficiently computing the choice probabilities and, hence, the expected revenue/profit for a given offer set S; and (b) finding the optimal offer set S*. Our main contribution is to propose two alternate procedures to efficiently compute the choice probabilities P(a|S) under the Mallows model. As elaborated below, even computing choice probabilities is a non-trivial computational task because it requires marginalizing the distribution by summing it over an exponential number of rankings. In fact, computing the probability of a general partial order under the Mallows model is known to be a #P-hard problem [15, 3]. Despite this, we show that the Mallows distribution has rich combinatorial structure, which we exploit to derive a closed-form expression for the choice probabilities that takes the form of a discrete convolution. Using the fast Fourier transform, the choice probability expression can be evaluated in O(n² log n) time (see Theorem 3.2), where n is the number of products. In Section 4, we exploit the repeated insertion method (RIM) [7] for sampling rankings according to the Mallows distribution to obtain a dynamic program (DP) for computing the choice probabilities in O(n³) time (see Theorem 4.2). The key advantage of the DP specification is that the choice probabilities are expressed as the unique solution to a system of linear equations.
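For intuition on the RIM sampler mentioned above, here is a sketch of the standard repeated-insertion construction for the Mallows model: the central ranking's items are inserted one at a time, with item i placed at position j (among the first i slots) with probability proportional to exp(−θ(i−j)). The details of [7] are not in this excerpt, so this follows the commonly stated form of RIM rather than the paper's exact presentation:

```python
import math
import random

def rim_sample(omega, theta, rng=random):
    """Sample a ranking from a Mallows model centered at omega via the
    repeated insertion method: item i goes to position j in {1,...,i}
    with probability proportional to exp(-theta * (i - j))."""
    ranking = []
    for i, item in enumerate(omega, start=1):
        weights = [math.exp(-theta * (i - j)) for j in range(1, i + 1)]
        total = sum(weights)
        # Inverse-CDF draw of the insertion position.
        u, acc, pos = rng.random() * total, 0.0, i - 1
        for j, w in enumerate(weights):
            acc += w
            if u <= acc:
                pos = j
                break
        ranking.insert(pos, item)
    return tuple(ranking)
```

At θ = 0 the insertions are uniform (a uniformly random permutation); for large θ each item lands in its last available slot, reproducing ω itself.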
Based on this specification, we formulate the assortment optimization problem as a compact mixed integer linear program (MIP) with O(n) binary variables and O(n³) continuous variables and constraints. The MIP provides a framework to model a large class of constraints on the assortment (often called "business constraints") that are necessarily present in practice, and also extends to the mixture of Mallows models. Using a simulation study, we show that the MIP provides accurate assortment decisions in a reasonable amount of time for practical problem sizes. The exact computation approaches that we propose for computing choice probabilities are necessary building blocks for our MIP formulation. They also provide computationally efficient alternatives to computing choice probabilities via Monte-Carlo simulations using the RIM sampling method. In fact, the simulation approach will require exponentially many samples to obtain reliable estimates when products have exponentially small choice probabilities. Such products commonly occur in practice (such as the tail products in luxury retail). They also often command high prices, because of which discarding them can significantly lower the revenues.

Literature review. A large number of parametric models over rankings have been extensively studied in the areas of statistics, transportation, marketing, economics, and operations management (see [18] for a detailed survey of most of these models). Our work particularly has connections to the work in machine learning and operations management. The existing work in machine learning has focused on designing computationally efficient algorithms for estimating the model parameters from commonly available observations (complete rankings, top-k/bottom-k lists, pairwise comparisons, etc.).
The developed techniques mainly consist of efficient algorithms for computing the likelihood of the observed data [14, 11] and techniques for sampling from the distributions conditioned on observed data [15, 20]. The Plackett-Luce (PL) model, the Mallows model, and their variants have been, by far, the most studied models in this literature. On the other hand, the work in operations management has mainly focused on designing set optimization algorithms to find the best subset efficiently. The multinomial logit (MNL) model has been the most commonly studied model in this literature. The MNL model was made popular by the work of [19] and has been shown by [25] to be equivalent to the PL model, introduced independently by Luce [16] and Plackett [22]. Given the model parameters, the assortment optimization problem has been shown to be efficiently solvable for the MNL model by [23], for variants of the nested logit (NL) model by [6, 10], and for the Markov chain model by [2]. The problem is known to be hard for most other choice models [4], so [13] studies the performance of a local search algorithm for some of the assortment problems that are known to be hard. As mentioned, the literature in operations management has restricted itself to models for which the choice probabilities are known to be efficiently computable. In the context of this literature, our key contribution is to extend a popular model from the machine learning literature to choice contexts and the assortment problem.

2 Model and problem statement

Notation. We consider a universe U of n products. In order to distinguish products from their corresponding ranks, we let U = {a1, . . . , an} denote the universe of products, under an arbitrary indexing. Preferences over this universe are captured by an anti-reflexive, anti-symmetric, and transitive relation ≻, which induces a total ordering (or ranking) over all products; specifically, a ≻ b means that a is preferred to b.
We represent preferences through rankings or permutations. A complete ranking (or simply a ranking) is a bijection σ: U → [n] that maps each product a ∈ U to its rank σ(a) ∈ [n], where [j] denotes the set {1, 2, . . . , j} for any integer j. Lower ranks indicate higher preference, so that σ(a) < σ(b) if and only if a ≻σ b, where ≻σ denotes the preference relation induced by the ranking σ. For simplicity of notation, we also let σi denote the product ranked at position i. Thus, σ1σ2 · · · σn is the list of the products written in increasing order of their ranks. Finally, for any two integers i ≤ j, let [i, j] denote the set {i, i + 1, . . . , j}.

Mallows model. The Mallows model is a member of the distance-based family of ranking models [21]. The model is described by a location parameter ω, which denotes the central permutation, and a scale parameter θ ∈ R+, such that the probability of each permutation σ is given by

λ(σ) = e^{−θ·d(σ,ω)} / ψ(θ),

where ψ(θ) = Σ_σ exp(−θ·d(σ,ω)) is the normalization constant and d(·,·) is the Kendall-Tau distance between permutations, defined as d(σ,ω) = Σ_{i<j} 1l[(σ(ai) − σ(aj)) · (ω(ai) − ω(aj)) < 0]. In other words, d(σ,ω) counts the number of pairwise disagreements between the permutations σ and ω. It can be verified that d(·,·) is a distance function that is right-invariant under the composition of the symmetric group, i.e., d(π1, π2) = d(π1π, π2π) for every π, π1, π2, where the composition σπ is defined as σπ(a) = σ(π(a)). This symmetry can be exploited to show that the normalization constant ψ(θ) has a closed-form expression [18], given by ψ(θ) = ∏_{i=1}^{n} (1 − e^{−i·θ}) / (1 − e^{−θ}). Note that ψ(θ) depends only on the scale parameter θ and not on the location parameter. Intuitively, the Mallows model defines a set of consumers whose preferences are “similar”, in the sense of being centered around a common permutation, where the probability of deviations from it decreases exponentially.
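The closed-form expression for ψ(θ) can be checked against the defining sum over all n! permutations for small n. A small Python sketch (identity central ranking assumed; function names are ours):

```python
import itertools
import math

def kendall_tau(sigma, omega):
    # Number of product pairs ranked in opposite order by sigma and omega;
    # sigma[i] is the rank of product a_i.
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if (sigma[i] - sigma[j]) * (omega[i] - omega[j]) < 0)

def psi_closed_form(n, theta):
    # psi(theta) = prod_{i=1}^{n} (1 - e^{-i*theta}) / (1 - e^{-theta})
    return math.prod((1 - math.exp(-i * theta)) / (1 - math.exp(-theta))
                     for i in range(1, n + 1))

def psi_brute_force(n, theta):
    # Defining sum over all n! rank assignments, identity central ranking.
    omega = tuple(range(n))
    return sum(math.exp(-theta * kendall_tau(sigma, omega))
               for sigma in itertools.permutations(range(n)))

assert abs(psi_closed_form(5, 0.5) - psi_brute_force(5, 0.5)) < 1e-9
```

By right-invariance, the brute-force sum gives the same value for any choice of central ranking, which is why the identity is assumed without loss of generality.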
The similarity of consumer preferences is captured by the Kendall-Tau distance.

Problem statement. We first focus on efficiently computing the probability that a product a will be chosen from an offer set S ⊆ U under the Mallows model. When offered the subset S, the customer is assumed to sample a preference list according to the Mallows model and then choose the most preferred product in S according to the sampled list. Therefore, the probability of choosing product a from the offer set S is given by

P(a|S) = Σ_σ λ(σ) · 1l[σ, a, S],    (1)

where 1l[σ, a, S] indicates whether σ(a) < σ(a′) for all a′ ∈ S, a′ ≠ a. Note that the above sum runs over n! preference lists, so it is a priori unclear whether P(a|S) can be computed efficiently. Once we are able to compute the choice probabilities, we consider the assortment optimization problem. For that, we extend the product universe to include an additional product aq that represents the outside (no-purchase) option, and extend the Mallows model to n + 1 products. Each product a has an exogenously fixed price ra, with the price rq of the outside option set to 0. Then, the goal is to solve the following decision problem:

max_{S ⊆ U} Σ_{a ∈ S} P(a|S ∪ {aq}) · ra.    (2)

The above optimization problem is hard to approximate within a factor O(n^{1−ε}) under a general choice model [1].

3 Choice probabilities: closed-form expression

We now show that the choice probabilities can be computed efficiently under the Mallows model. Without loss of generality, we assume from this point on that the products are indexed such that the central permutation ω ranks product ai at position i, for all i ∈ [n]. The next theorem shows that, when the offer set is contiguous, the choice probabilities take a rather simple form. Using these expressions as building blocks, we then derive a closed-form expression for general offer sets.

Theorem 3.1 (Contiguous offer set) Suppose S = a[i,j] = {ai, . . . , aj} for some 1 ≤ i ≤ j ≤ n.
Then, the probability of choosing product ak ∈ S under the Mallows model with location parameter ω and scale parameter θ is given by

P(ak|S) = e^{−θ·(k−i)} / (1 + e^{−θ} + · · · + e^{−θ·(j−i)}).

The proof of Theorem 3.1 is in Appendix A. The expression for the choice probability under a general offer set is more involved. For that, we need the following additional notation. For a pair of integers 1 ≤ m ≤ q ≤ n, define

ψ(q, θ) = ∏_{s=1}^{q} Σ_{ℓ=0}^{s−1} e^{−θ·ℓ}   and   Φ(q, m, θ) = ψ(m, θ) · ψ(q − m, θ).

In addition, for a collection of M discrete functions hm: Z → R, m = 1, . . . , M, such that hm(r) = 0 for any r < 0, their discrete convolution is defined as

(h1 ⋆ · · · ⋆ hM)(r) = Σ_{r1,...,rM : Σ_m rm = r} h1(r1) · · · hM(rM).

Theorem 3.2 (General offer set) Suppose S = a[i1,j1] ∪ · · · ∪ a[iM,jM], where im ≤ jm for 1 ≤ m ≤ M and jm < im+1 for 1 ≤ m ≤ M − 1. Let Gm = a[jm,im+1] for 1 ≤ m ≤ M − 1, G = G1 ∪ · · · ∪ GM, and C = a[i1,jM]. Then, the probability of choosing ak ∈ a[iℓ,jℓ] can be written as

P(ak|S) = [e^{−θ·(k−i1)} · ∏_{m=1}^{M−1} ψ(|Gm|, θ) / ψ(|C|, θ)] · (f0 ⋆ f̃1 ⋆ · · · ⋆ f̃ℓ ⋆ fℓ+1 ⋆ · · · ⋆ fM)(|G|),

where:
• fm(r) = e^{−θ·r·(jm−i1+1+r/2)} · 1/Φ(|Gm|, r, θ), if 0 ≤ r ≤ |Gm|, for 1 ≤ m ≤ M;
• f̃m(r) = e^{θ·r} · fm(r), for 1 ≤ m ≤ M;
• f0(r) = Φ(|C|, |G| − r, θ) · e^{θ·(|G|−r)²/2} / (1 + e^{−θ} + · · · + e^{−θ·(|S|−1+r)}), for 0 ≤ r ≤ |G|;
• fm(r) = 0, for 0 ≤ m ≤ M and any r outside the ranges described above.

Proof. At a high level, deriving the choice probability expression for a general offer set involves breaking down the probabilistic event of choosing ak ∈ S into simpler events for which we can use the expression given in Theorem 3.1, and then combining these expressions using the symmetries of the Mallows distribution. For a given vector R = (r0, . . . , rM) ∈ R^{M+1} such that r0 + · · · + rM = |G|, let h(R) be the set of permutations which satisfy the following two conditions: (i) among all the products of S, ak is the most preferred, and (ii) for all m ∈ [M], there are exactly rm products from Gm which are preferred to ak.
We denote this subset of products by G̃m for all m ∈ [M]. This implies that there are r0 products from G which are less preferred than ak. With this notation, we can write

P(ak|S) = Σ_{R: r0+···+rM=|G|} Σ_{σ ∈ h(R)} λ(σ),

where recall that λ(σ) = e^{−θ·Σ_{i,j} ξ(σ,i,j)} / ψ(θ), with ξ(σ, i, j) = 1l[(σ(ai) − σ(aj)) · (ω(ai) − ω(aj)) < 0]. For all σ, we break down the sum in the exponent as follows: Σ_{i,j} ξ(σ, i, j) = C1(σ) + C2(σ) + C3(σ), where C1(σ) counts pairs of products (i, j) such that ai ∈ G̃m for some m ∈ [M] and aj ∈ S; C2(σ) counts pairs of products (i, j) such that ai ∈ G̃m for some m ∈ [M] and aj ∈ Gm′ \ G̃m′ for some m′ ≠ m; and C3(σ) counts the remaining pairs. For a fixed R, we show that C1(σ) and C2(σ) are constant for all σ ∈ h(R).

Part 1. C1(σ) counts the number of disagreements (i.e., the number of pairs of products that are oppositely ranked in σ and ω) between some product in S and some product in G̃m for any m ∈ [M]. For all m ∈ [M], a product ai ∈ G̃m induces a disagreement with every product aj ∈ S such that j < i. Therefore, the sum of all these disagreements is equal to

C1(σ) = Σ_{m=1}^{M} Σ_{aj ∈ S, ai ∈ G̃m} ξ(σ, i, j) = Σ_{m=1}^{M} rm Σ_{j=1}^{m} |Sj|,

where Sm = a[im,jm].

Part 2. C2(σ) counts the number of disagreements between some product in any G̃m and some product in any Gm′ \ G̃m′ for m′ ≠ m. The sum of all these disagreements is equal to

C2(σ) = Σ_{m≠m′} Σ_{ai ∈ G̃m, aj ∈ Gm′ \ G̃m′} ξ(σ, i, j) = Σ_{m=2}^{M} rm · Σ_{j=1}^{m−1} (|Gj| − rj)
      = Σ_{m=2}^{M} rm · Σ_{j=1}^{m−1} |Gj| − Σ_{m=2}^{M} rm · Σ_{j=1}^{m−1} rj
      = Σ_{m=2}^{M} rm Σ_{j=1}^{m−1} |Gj| − (1/2)(|G| − r0)² + (1/2) Σ_{m=1}^{M} rm².

Consequently, for all σ ∈ h(R), we can write d(σ, ω) = C1(R) + C2(R) + C3(σ), and therefore

P(ak|S) = Σ_{R: r0+···+rM=|G|} [e^{−θ·(C1(R)+C2(R))} / ψ(θ)] · Σ_{σ ∈ h(R)} e^{−θ·C3(σ)}.

Computing the inner sum requires a similar but more involved partitioning of the permutations, as well as the use of Theorem 3.1. The details are presented in Appendix B.
In particular, we can show that for a fixed R, Σ_{σ ∈ h(R)} e^{−θ·C3(σ)} is equal to

ψ(|G| − r0, θ) · ψ(|S| + r0, θ) · [e^{−θ·(k−1−Σ_{m=1}^{ℓ−1} rm)} / (1 + · · · + e^{−θ·(|S|+r0−1)})] · ∏_{m=1}^{M} ψ(|Gm|, θ) / (ψ(rm, θ) · ψ(|Gm| − rm, θ)).

Putting all the pieces together yields the desired result.

Since P(ak|S) is expressed as a discrete convolution, we can efficiently compute this probability using the fast Fourier transform in O(n² log n) time [5], a dramatic improvement over the exponential sum (1) that defines the choice probabilities. Although Theorem 3.2 allows us to compute the choice probabilities in polynomial time, it is not directly useful in solving the assortment optimization problem under the Mallows model. To this end, we present an alternative (and slightly less efficient) method for computing the choice probabilities by means of dynamic programming.

4 Choice probabilities: a dynamic programming approach

In what follows, we present an alternative algorithm for computing the choice probabilities. Our approach is based on an efficient procedure for sampling a random permutation according to a Mallows distribution with location parameter ω and scale parameter θ. The random permutation is constructed sequentially using the repeated insertion method (RIM) as follows: for i = 1, . . . , n and s = 1, . . . , i, insert ai at position s with probability pi,s = e^{−θ·(i−s)} / (1 + e^{−θ} + · · · + e^{−θ·(i−1)}).

Lemma 4.1 (Theorem 3 in [15]) The repeated insertion method generates a random sample from a Mallows distribution with location parameter ω and scale parameter θ.

Based on the correctness of this procedure, we describe a dynamic program to compute the choice probabilities of a general offer set S. The key idea is to decompose these probabilities according to the position at which a product is chosen. In particular, for i ≤ k and s ∈ [k], let π(i, s, k) be the probability that product ai is chosen (i.e., ranked first among the products in S) at position s after the k-th step of the RIM.
In other words, π(i, s, k) corresponds to a choice probability when U is restricted to the first k products, a1, . . . , ak. With this notation, we have, for all i ∈ [n], P(ai|S) = Σ_{s=1}^{n} π(i, s, n). We compute π(i, s, k) iteratively for k = 1, . . . , n. In particular, in order to compute π(i, s, k + 1), we use the correctness of the sampling procedure. Specifically, starting from a permutation σ that includes the products a1, . . . , ak, the product ak+1 is inserted at position ℓ with probability pk+1,ℓ, and we have two cases to consider.

Case 1: ak+1 ∉ S. In this case, π(k + 1, s, k + 1) = 0 for all s = 1, . . . , k + 1. Consider a product ai for i ≤ k. In order for ai to be chosen at position s after ak+1 is inserted, one of the following events has to occur: (i) ai was already chosen at position s before ak+1 was inserted, and ak+1 is inserted at a position ℓ > s; or (ii) ai was chosen at position s − 1, and ak+1 is inserted at a position ℓ ≤ s − 1. Consequently, for all i ≤ k,

π(i, s, k + 1) = Σ_{ℓ=s+1}^{k+1} pk+1,ℓ · π(i, s, k) + Σ_{ℓ=1}^{s−1} pk+1,ℓ · π(i, s − 1, k)
             = (1 − γk+1,s) · π(i, s, k) + γk+1,s−1 · π(i, s − 1, k),

where γk,s = Σ_{ℓ=1}^{s} pk,ℓ for all k, s.

Case 2: ak+1 ∈ S. Consider a product ai with i ≤ k. This product is chosen at position s only if it was already chosen at position s and ak+1 is inserted at a position ℓ > s. Therefore, for all i ≤ k, π(i, s, k + 1) = (1 − γk+1,s) · π(i, s, k). The product ak+1 itself is chosen at position s only if all products ai for i ≤ k are at positions ℓ ≥ s and ak+1 is inserted at position s, implying that

π(k + 1, s, k + 1) = pk+1,s · Σ_{i≤k} Σ_{ℓ=s}^{n} π(i, ℓ, k).

Algorithm 1 summarizes this procedure.

Algorithm 1 Computing choice probabilities
1: Let S be a general offer set. Without loss of generality, we assume that a1 ∈ S.
2: Let π(1, 1, 1) = 1.
3: For k = 1, . . . , n − 1,
   (a) For all i ≤ k and s = 1, . . . , k + 1, let
       π(i, s, k + 1) = (1 − γk+1,s) · π(i, s, k) + 1l[ak+1 ∉ S] · γk+1,s−1 · π(i, s − 1, k).
   (b) For s = 1, . . . , k + 1, let
       π(k + 1, s, k + 1) = 1l[ak+1 ∈ S] · pk+1,s · Σ_{i≤k} Σ_{ℓ=s}^{n} π(i, ℓ, k).
4: For all i ∈ [n], return P(ai|S) = Σ_{s=1}^{n} π(i, s, n).

Theorem 4.2 For any offer set S, Algorithm 1 returns the choice probabilities under a Mallows distribution with location parameter ω and scale parameter θ.

This dynamic programming approach provides an O(n³)-time algorithm for computing P(a|S) for all products a ∈ S simultaneously. Moreover, as explained in the next section, these ideas lead to an algorithm for solving the assortment optimization problem.

5 Assortment optimization: integer programming formulation

In the assortment optimization problem, each product a has an exogenously fixed price ra. Moreover, there is an additional product aq that represents the outside option (no-purchase), with price rq = 0, that is always included. The goal is to determine the subset of products that maximizes the expected revenue, i.e., to solve (2). Building on Algorithm 1 and introducing a binary variable for each product, we formulate (2) as an MIP with O(n³) variables and constraints, of which only n variables are binary. We assume for simplicity that the first product of S (say a1) is known. Since this product is generally not known a priori, in order to obtain an optimal solution to problem (2), we need to guess the first offered product and solve the above integer program for each of the O(n) guesses. We note that the MIP formulation is quite powerful: it can handle a large class of constraints on the assortment (such as cardinality and capacity constraints) and also extends to the case of the mixture of Mallows model.

Theorem 5.1 Conditional on a1 ∈ S, the optimal solution to (2) is given by S* = {i ∈ [n]: x*_i = 1}, where x* ∈ {0, 1}^n is the optimal solution to the following MIP:

max_{x,π,y,z}  Σ_{i,s} ri · π(i, s, n)
s.t.  π(1, 1, 1) = 1,  π(1, s, 1) = 0,  ∀s = 2, . . . , n
      π(i, s, k + 1) = (1 − γk+1,s) · π(i, s, k) + yi,s,k+1,  ∀i, s, ∀k ≥ 2
      π(k + 1, s, k + 1) = zs,k+1,  ∀s, ∀k ≥ 2
      yi,s,k ≤ γk+1,s−1 · π(i, s − 1, k − 1),  ∀i, s, ∀k ≥ 2
      0 ≤ yi,s,k ≤ γk+1,s−1 · (1 − xk),  ∀i, s, ∀k ≥ 2
      zs,k ≤ pk+1,s · Σ_{ℓ=s}^{n} Σ_{i=1}^{k−1} π(i, ℓ, k − 1),  ∀s, ∀k ≥ 2
      0 ≤ zs,k ≤ pk+1,s · xk,  ∀s, ∀k ≥ 2
      x1 = 1,  xq = 1,  xk ∈ {0, 1}

We present the proof of correctness of this formulation in Appendix C.

6 Numerical experiments

In this section, we examine how the MIP performs in terms of running time. We considered the following simulation setup. Product prices are sampled independently and uniformly at random from the interval [0, 1]. The modal ranking is fixed to the identity ranking, with the outside option ranked at the top. The outside option being ranked at the top is characteristic of applications in which the retailer captures a small fraction of the market and the outside option represents the (much larger) rest of the market. Because the outside option is always offered, we need to solve only a single instance of the MIP (described in Theorem 5.1). Note that in the more general setting, the number of MIPs that must be solved is equal to the minimum of the rank of the outside option and the rank of the highest revenue item². Because the MIPs are independent of each other, they can be solved in parallel. We solved the MIPs using the Gurobi Optimizer version 6.0.0 on a computer with a 2.4 GHz Intel Core i5 processor, 8 GB of RAM, and Mac OS X El Capitan.

Strengthening of the MIP formulation. We use structural properties of the optimal solution to tighten some of the upper bounds involving the binary variables in the MIP formulation. In particular, for all i, s, and m, we replace the constraint yi,s,m ≤ γm+1,s−1 · (1 − xm) with the constraint yi,s,m ≤ γm+1,s−1 · ui,s,m · (1 − xm), where ui,s,m is the probability that product ai is selected at position (s − 1) after the m-th step of the RIM when the offer set is S = {ai*, aq}, i.e., when only the highest-priced product is offered. Since we know that the highest-priced product is always offered in the optimal assortment, this is a valid upper bound on π(i, s − 1, m − 1) and, therefore, a valid strengthening of the constraint. Similarly, for all s and m, we replace the constraint zs,m ≤ pm+1,s · xm with the constraint zs,m ≤ pm+1,s · vs,m · xm, where vs,m is the probability that product ai is selected at some position ℓ = s, . . . , n when the offer set is S = {aq} if ai ≻ω ai*, and S = {aq, ai*} otherwise. Again using the fact that the highest-priced product is always offered in the optimal assortment, we can show that this is a valid upper bound.

Results and discussion. Table 1 shows the running time of the strengthened MIP formulation for different values of e^{−θ} and n. For each pair of parameters, we generated 50 different instances.

Table 1: Running time of the strengthened MIP for various values of e^{−θ} and n.

        Average running time (s)      Max running time (s)
  n     e^{−θ}=0.8   e^{−θ}=0.9      e^{−θ}=0.8   e^{−θ}=0.9
 10         4.60         4.72            5.64         5.80
 15        19.04        21.30           27.08        28.79
 20        48.08       105.30           58.09       189.93
 25       143.21       769.78          183.78     1,817.98

We note that the strengthening improves the running time considerably. Under the initial formulation, the MIP did not terminate after several hours for n = 25, whereas it was able to terminate in a few minutes with the additional strengthening. Our MIP obtains the optimal solution in a reasonable amount of time for the considered parameter values. Outside of this range, i.e., when e^{−θ} is too small or when n is too large, there are potential numerical instabilities. The strengthening we propose is one way to improve the running time of the MIP, but other numerical optimization techniques may be applied to improve the running time even further.
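To make the dynamic program of Section 4 concrete, the following Python sketch implements Algorithm 1 and validates it against brute-force enumeration of the defining sum (1) on a small instance. It assumes the identity modal ranking, 1-indexed products, and the algorithm's convention that a1 ∈ S; all helper names are ours:

```python
import itertools
import math

def rim_probs(n, theta):
    # p[i, s]: probability that product a_i is inserted at position s (s <= i).
    p = {}
    for i in range(1, n + 1):
        z = sum(math.exp(-theta * l) for l in range(i))
        for s in range(1, i + 1):
            p[i, s] = math.exp(-theta * (i - s)) / z
    return p

def choice_probs_dp(n, theta, S):
    # Algorithm 1; assumes the first product a_1 belongs to S.
    p = rim_probs(n, theta)
    gamma = {(k, s): sum(p[k, l] for l in range(1, s + 1))
             for k in range(1, n + 1) for s in range(0, k + 1)}
    pi = {(1, 1, 1): 1.0}
    g = lambda i, s, k: pi.get((i, s, k), 0.0)
    for k in range(1, n):
        for i in range(1, k + 1):
            for s in range(1, k + 2):
                keep = (1 - gamma[k + 1, s]) * g(i, s, k)
                shift = 0.0 if (k + 1) in S else gamma[k + 1, s - 1] * g(i, s - 1, k)
                pi[i, s, k + 1] = keep + shift
        if (k + 1) in S:
            for s in range(1, k + 2):
                pi[k + 1, s, k + 1] = p[k + 1, s] * sum(
                    g(i, l, k) for i in range(1, k + 1) for l in range(s, n + 1))
    return {a: sum(g(a, s, n) for s in range(1, n + 1)) for a in S}

def choice_probs_brute(n, theta, S):
    # Direct evaluation of (1) by enumerating all n! rank assignments.
    tot, z = {a: 0.0 for a in S}, 0.0
    for perm in itertools.permutations(range(1, n + 1)):
        # perm[a-1] is the rank of product a; identity central ranking.
        d = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        w = math.exp(-theta * d)
        z += w
        tot[min(S, key=lambda a: perm[a - 1])] += w
    return {a: v / z for a, v in tot.items()}

n, theta, S = 5, 0.7, {1, 3, 4}
dp, bf = choice_probs_dp(n, theta, S), choice_probs_brute(n, theta, S)
assert all(abs(dp[a] - bf[a]) < 1e-9 for a in S)
```

The dictionary-based DP mirrors the π(i, s, k) recursion directly; a dense O(n³) array implementation would give the running time stated in Theorem 4.2.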
Finally, we emphasize that the MIP formulation is necessary because of its flexibility to handle versatile business constraints (such as cardinality or capacity constraints) that naturally arise in practice.

Extensions and future work. Although the entire development was focused on a single Mallows model, our results extend to a finite mixture of Mallows models. Specifically, for a Mallows model with T mixture components, we can compute the choice probability by setting π̄(i, s, n) = Σ_{t=1}^{T} αt · πt(i, s, n), where π(i, s, n) is the probability term defined in Section 4, πt(·, ·, ·) is the corresponding probability for mixture component t, and αt > 0 are the mixture weights. We then have P(ai|S) = Σ_{s=1}^{n} π̄(i, s, n) for the mixture model. Correspondingly, the MIP in Section 5 also extends naturally. The natural next step is to develop special-purpose algorithms for solving the MIP that exploit the structure of the Mallows distribution, allowing it to scale to large values of n.

²It can be shown that the highest revenue item is always part of the optimal subset.

References
[1] A. Aouad, V. Farias, R. Levi, and D. Segev. The approximability of assortment optimization under ranking preferences. Available at SSRN: http://ssrn.com/abstract=2612947, 2015.
[2] J. H. Blanchet, G. Gallego, and V. Goyal. A Markov chain approximation to choice modeling. In EC, pages 103–104, 2013.
[3] G. Brightwell and P. Winkler. Counting linear extensions is #P-complete. In STOC '91: Proceedings of the Twenty-Third Annual ACM Symposium on Theory of Computing, pages 175–181, 1991.
[4] J. J. Miranda Bront, I. Méndez-Díaz, and G. Vulcano. A column generation algorithm for choice-based network revenue management. Operations Research, 57(3):769–784, 2009.
[5] J. Cooley and J. Tukey. An algorithm for the machine calculation of complex Fourier series. Mathematics of Computation, 19(90):297–301, 1965.
[6] J. M. Davis, G. Gallego, and H. Topaloglu. Assortment optimization under variants of the nested logit model. Operations Research, 62(2):250–273, 2014.
[7] J. Doignon, A. Pekeč, and M. Regenwetter. The repeated insertion model for rankings: Missing link between two subset choice models. Psychometrika, 69(1):33–54, 2004.
[8] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings of the 10th International Conference on World Wide Web, pages 613–622, 2001.
[9] V. Farias, S. Jagabathula, and D. Shah. A nonparametric approach to modeling choice with limited data. Management Science, 59(2):305–322, 2013.
[10] G. Gallego and H. Topaloglu. Constrained assortment optimization for the nested logit model. Management Science, 60(10):2583–2601, 2014.
[11] J. Guiver and E. Snelson. Bayesian inference for Plackett-Luce ranking models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 377–384. ACM, 2009.
[12] D. Honhon, S. Jonnalagedda, and X. Pan. Optimal algorithms for assortment selection under ranking-based consumer choice models. Manufacturing & Service Operations Management, 14(2):279–289, 2012.
[13] S. Jagabathula. Assortment optimization under general choice. Available at SSRN 2512831, 2014.
[14] G. Lebanon and Y. Mao. Non-parametric modeling of partially ranked data. Journal of Machine Learning Research, 9:2401–2429, 2008.
[15] T. Lu and C. Boutilier. Learning Mallows models with pairwise preferences. In Proceedings of the 28th International Conference on Machine Learning, pages 145–152, 2011.
[16] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[17] C. Mallows. Non-null ranking models. Biometrika, 44(1):114–130, 1957.
[18] J. Marden. Analyzing and Modeling Rank Data. Chapman and Hall, 1995.
[19] D. McFadden. Modeling the choice of residential location. Transportation Research Record, (672):72–77, 1978.
[20] M. Meila, K. Phadnis, A. Patterson, and J. Bilmes. Consensus ranking under the exponential model. arXiv preprint arXiv:1206.5265, 2012.
[21] T. Murphy and D. Martin. Mixtures of distance-based models for ranking data. Computational Statistics & Data Analysis, 41(3):645–655, 2003.
[22] R. L. Plackett. The analysis of permutations. Applied Statistics, 24(2):193–202, 1975.
[23] K. Talluri and G. van Ryzin. Revenue management under a general discrete choice model of consumer behavior. Management Science, 50(1):15–33, 2004.
[24] G. van Ryzin and G. Vulcano. A market discovery algorithm to estimate a general class of nonparametric choice models. Management Science, 61(2):281–300, 2014.
[25] J. I. Yellott. The relationship between Luce's choice axiom, Thurstone's theory of comparative judgment, and the double exponential distribution. Journal of Mathematical Psychology, 15(2):109–144, 1977.
An algorithm for ℓ1 nearest neighbor search via monotonic embedding

Xinan Wang∗ UC San Diego xinan@ucsd.edu    Sanjoy Dasgupta UC San Diego dasgupta@cs.ucsd.edu

Abstract

Fast algorithms for nearest neighbor (NN) search have in large part focused on ℓ2 distance. Here we develop an approach for ℓ1 distance that begins with an explicit and exactly distance-preserving embedding of the points into ℓ2² (squared Euclidean distance). We show how this can efficiently be combined with random-projection based methods for ℓ2 NN search, such as locality-sensitive hashing (LSH) or random projection trees. We rigorously establish the correctness of the methodology and show by experimentation using LSH that it is competitive in practice with available alternatives.

1 Introduction

Nearest neighbor (NN) search is a basic primitive of machine learning and statistics. Its utility in practice hinges on two critical issues: (1) picking the right distance function and (2) using algorithms that find the nearest neighbor, or an approximation thereof, quickly. The default distance function is very often Euclidean distance. This is a matter of convenience and can be partially justified by theory: a classical result of Stone [1] shows that k-nearest neighbor classification is universally consistent in Euclidean space. This means that no matter what the distribution of data and labels might be, as the number of samples n goes to infinity, the kn-NN classifier converges to the Bayes-optimal decision boundary, for any sequence (kn) with kn → ∞ and kn/n → 0. The downside is that the rate of convergence can be slow, leading to poor performance on finite data sets. A more careful choice of distance function can help, by better separating the different classes.
For the well-known MNIST data set of handwritten digits, for instance, the 1-NN classifier using Euclidean distance has an error rate of about 3%, whereas a more careful choice of distance function—tangent distance [2] or shape context [3], for instance—brings this below 1%. The second impediment to nearest neighbor search in practice is that a naive search through n candidate neighbors takes O(n) time, ignoring the dependence on dimension. A wide variety of ingenious data structures have been developed to speed this up. The most popular of these fall into two categories: hashing-based and tree-based. Perhaps the best-known hashing approach is locality-sensitive hashing (LSH) [4, 5, 6, 7, 8, 9, 10]. These randomized data structures find approximate nearest neighbors with high probability, where c-approximate solutions are those that are at most c times as far away as the nearest neighbor. Whereas hashing methods create a lattice-like spatial partition, tree methods [11, 12, 13, 14] create a hierarchical partition that can also be used to speed up nearest neighbor search. There are families of randomized trees with strong guarantees on the tradeoff between query time and the probability of finding the exact nearest neighbor [15]. These hashing and tree methods for ℓ2 distance both use the same primitive: random projection [16]. For data in R^d, they (repeatedly) choose a random direction u from the multivariate Gaussian N(0, Id) and then project points x onto this direction: x → u · x. Such projections have many appealing mathematical properties that make it possible to give algorithmic guarantees, and that also produce good performance in practice. For distance functions other than ℓ2, there has been far less work. In this paper, we develop nearest neighbor methods for ℓ1 distance.

∗Supported by UC San Diego Jacobs Fellowship.
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
This is a more natural choice than ℓ2 in many situations, for instance when the data points are probability distributions: documents are often represented as distributions over topics, images as distributions over categories, and so on. Earlier work on ℓ1 search is summarized below. We adopt a different approach, based on a novel embedding. One basic fact is that ℓ1 distance is not embeddable in ℓ2 [17]. That is, given a set of points x1, . . . , xn ∈ R^d, it is in general not possible to find corresponding points z1, . . . , zn ∈ R^q such that ∥xi − xj∥1 = ∥zi − zj∥2. This can be seen even from the four points at the vertices of a square—any embedding of these into ℓ2 induces a multiplicative distortion of at least √2. Interestingly, however, the square root of ℓ1 distance is embeddable in ℓ2 [18]. And the nearest neighbor with respect to ℓ1 distance is the same as the nearest neighbor with respect to ℓ1^{1/2}. This observation is the starting point of our approach. It suggests that we might be able to embed data into ℓ2 and then simply apply well-established methods for ℓ2 nearest neighbor search. However, there are numerous hurdles to overcome. First, the embeddability of ℓ1^{1/2} into ℓ2 is an existential, not algorithmic, fact. Indeed, all that is known for the general case is that there exists such an embedding into Hilbert space. For the special case of data in {0, 1, . . . , M}^d, earlier work has suggested a unary embedding into the Hamming space {0, 1}^{Md} (where 0 ≤ x ≤ M gets mapped to x 1's followed by (M − x) 0's) [19], but this is wasteful of space and inefficient for dimension reduction algorithms [16] when M is large. Our embedding is general and more efficient. Now, given a finite point set x1, . . . , xn ∈ R^d and the knowledge that an embedding exists, we could use multidimensional scaling [20] to find such an embedding. But this takes O(n³) time, which is often not viable.
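For intuition, the unary embedding of [19] mentioned above is easy to state in code: each coordinate value v ∈ {0, . . . , M} becomes v ones followed by M − v zeros, and Hamming distance between the binary strings then coincides with ℓ1 distance between the originals. A sketch (function names are ours):

```python
def unary_embed(x, M):
    # Map each coordinate value v in {0, ..., M} to v ones then M - v zeros.
    return [int(s < v) for v in x for s in range(M)]

def hamming(u, w):
    return sum(a != b for a, b in zip(u, w))

x, y, M = [3, 0, 2], [1, 2, 2], 4
l1 = sum(abs(a - b) for a, b in zip(x, y))
assert hamming(unary_embed(x, M), unary_embed(y, M)) == l1
```

The dM-bit representation makes the space cost (and the inefficiency for large M) immediate.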
Instead, we exhibit an explicit embedding: we give an expression for points z1, . . . , zn ∈ R^{O(nd)} such that ∥xi − xj∥1 = ∥zi − zj∥2². This brings us to the second hurdle. The explicit construction avoids infinite-dimensional space but is still much higher-dimensional than we would like. The space requirement for writing down the n embedded points is O(n²d), which is prohibitive in practice. To deal with this, we recall that the two popular schemes for ℓ2 search described above are both based on Gaussian random projections, and in fact look at the data only through the lens of such projections. We show how to compute these projections without ever constructing the O(nd)-dimensional embeddings explicitly. Finally, even if it is possible to efficiently build a data structure on the n points, how can queries be incorporated? It turns out that if a query point is added to the original n points, our explicit embedding changes significantly. Nonetheless, by again exploiting properties of Gaussian random projections, we show that it is possible to hold on to the random projections of the original n embedded points and to set the projection of the query point so that the correct joint distribution is achieved. Moreover, this can be done very efficiently. Lastly, we run a variety of experiments showing the good practical performance of this approach.

Related work

The k-d tree [11] is perhaps the prototypical tree-based method for nearest neighbor search, and can be used for ℓ1 distance. It builds a hierarchical partition of the data using coordinate-wise splits, and uses geometric reasoning to discard subtrees during NN search. Its query time can degrade badly with increasing dimension, as a result of which several variants have been developed, such as trees in which the cells are allowed to overlap slightly [21]. Various tree-based methods have also been developed for general metrics, such as the metric tree and cover tree [14, 12].
For k-d tree variants, theoretical guarantees are available for exact ℓ2 nearest neighbor search when the split direction is chosen at random from a multivariate Gaussian [15]. For a data set of n points, the tree has size O(n) and the query time is O(2^d log n), where d is the intrinsic dimension of the data. Such an analysis is not available for ℓ1 distance. Also in wide use is locality-sensitive hashing for approximate nearest neighbor search [22]. For a data set of n points, this scheme builds a data structure of size O(n^{1+ρ}) and finds a c-approximate nearest neighbor in time O(n^ρ), for some ρ > 0 that depends on c, on the specific distance function, and on the hash family. For ℓ2 distance, it is known how to achieve ρ ≈ 1/c² [23], although the scheme most commonly used in practice has ρ ≈ 1/c [8]. This works by repeatedly using the following hash function: h(x) = ⌊(v · x + b)/R⌋, where v is chosen at random from a multivariate Gaussian, R > 0 is a constant, and b is uniformly distributed in [0, R). A similar scheme also works for ℓ1, using Cauchy random projection: each coordinate of v is picked independently from a standard Cauchy distribution. This achieves exponent ρ ≈ 1/c, although one downside is the high variance of this distribution. Another LSH family [22, 10] uses a randomly shifted grid for ℓ1 nearest neighbor search. But it is less used in practice, due to its restrictions on the data. For example, if the nearest neighbor is further away than the width of the grid, it may never be found. Besides LSH, random projection is the basis for some other NN search algorithms [24, 25], classification methods [26], and dimension reduction techniques [27, 28, 29]. There are several impediments to developing NN methods for ℓ1 spaces. 1) There is no Johnson-Lindenstrauss-type dimension reduction technique for ℓ1 [30]. 2) The Cauchy random projection does not preserve the ℓ1 distance as a norm, which restricts its usage for norm-based algorithms [31].
3) Useful random properties [26] cannot be formulated exactly; only approximations exist. Fortunately, all three of these problems are absent in ℓ2 space, which motivates developing efficient embedding algorithms from ℓ1 to $\ell_2^2$.

2 Explicit embedding

We begin with an explicit isometric embedding from ℓ1 to $\ell_2^2$ for 1-dimensional data. This extends immediately to multiple dimensions because both ℓ1 and $\ell_2^2$ distance are coordinatewise additive.

2.1 The 1-dimensional case

First, sort the points x1, . . . , xn ∈ R so that x1 ≤ x2 ≤ · · · ≤ xn. Then, construct the embedding φ(x1), φ(x2), . . . , φ(xn) ∈ $\mathbb{R}^{n-1}$ as follows:

$$\varphi(x_1) = \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad \varphi(x_2) = \begin{bmatrix} \sqrt{x_2 - x_1} \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad \varphi(x_3) = \begin{bmatrix} \sqrt{x_2 - x_1} \\ \sqrt{x_3 - x_2} \\ 0 \\ \vdots \\ 0 \end{bmatrix},\ \dots,\ \varphi(x_n) = \begin{bmatrix} \sqrt{x_2 - x_1} \\ \sqrt{x_3 - x_2} \\ \sqrt{x_4 - x_3} \\ \vdots \\ \sqrt{x_n - x_{n-1}} \end{bmatrix} \qquad (1)$$

For any 1 ≤ i < j ≤ n, φ(xi) and φ(xj) agree on all coordinates except i to (j − 1). Therefore,

$$\|\varphi(x_i) - \varphi(x_j)\|_2 = \Big( \sum_{k=i+1}^{j} \big(\sqrt{x_k - x_{k-1}}\big)^2 \Big)^{1/2} = \Big( \sum_{k=i+1}^{j} (x_k - x_{k-1}) \Big)^{1/2} = |x_j - x_i|^{1/2}, \qquad (2)$$

so the embedding preserves the $\ell_1^{1/2}$ distance between these points. Since the construction places no restrictions on the range of x1, x2, . . . , xn, it is applicable to any finite set of points.

2.2 Extension to multiple dimensions

We construct an embedding of d-dimensional points by stacking 1-dimensional embeddings. Consider points x1, x2, . . . , xn ∈ $\mathbb{R}^d$. Suppose we have a collection of embedding maps φ1, φ2, . . . , φd, one per dimension. Each of the embeddings is constructed from the values on a single coordinate: if we let $x_i^{(j)}$ denote the j-th coordinate of xi, for 1 ≤ j ≤ d, then embedding φj is based on $x_1^{(j)}, x_2^{(j)}, \dots, x_n^{(j)} \in \mathbb{R}$. The overall embedding is the concatenation

$$\varphi(x_i) = \Big( \varphi_1\big(x_i^{(1)}\big)^{\tau}, \varphi_2\big(x_i^{(2)}\big)^{\tau}, \dots, \varphi_d\big(x_i^{(d)}\big)^{\tau} \Big)^{\tau} \in \mathbb{R}^{d(n-1)} \qquad (3)$$

where 1 ≤ i ≤ n, and τ denotes transpose.
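The 1-dimensional construction of Eq. (1) and the coordinatewise stacking of Eq. (3) are easy to implement directly. The following is a minimal illustrative sketch (our own code, not from the paper), relying only on NumPy:

```python
import numpy as np

def embed_1d(xs):
    """Embed 1-d points into R^(n-1) so that squared l2 distance equals l1 distance.

    xs: array of n values (any order). Returns an (n, n-1) array whose i-th row is
    phi(x_i) as in Eq. (1): coordinate k holds sqrt(gap between the k-th and
    (k+1)-th smallest points) when x_i is ranked above the k-th smallest, else 0.
    """
    xs = np.asarray(xs, dtype=float)
    order = np.argsort(xs)
    gaps = np.sqrt(np.diff(xs[order]))       # sqrt of consecutive sorted gaps
    n = len(xs)
    ranks = np.empty(n, dtype=int)
    ranks[order] = np.arange(n)
    phi = np.zeros((n, n - 1))
    for i in range(n):
        phi[i, :ranks[i]] = gaps[:ranks[i]]  # prefix of gaps up to the rank of x_i
    return phi

def embed(X):
    """Stack the 1-d embeddings coordinatewise (Eq. 3): R^d -> R^{d(n-1)}."""
    return np.hstack([embed_1d(X[:, j]) for j in range(X.shape[1])])
```

For any pair of points, the squared ℓ2 distance between the embedded points equals the ℓ1 distance between the originals.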
For any 1 ≤ i < j ≤ n,

$$\|\varphi(x_i) - \varphi(x_j)\|_2 = \Big( \sum_{k=1}^{d} \big\|\varphi_k\big(x_i^{(k)}\big) - \varphi_k\big(x_j^{(k)}\big)\big\|_2^2 \Big)^{1/2} \qquad (4)$$

$$= \Big( \sum_{k=1}^{d} \big|x_i^{(k)} - x_j^{(k)}\big| \Big)^{1/2} = \|x_i - x_j\|_1^{1/2} \qquad (5)$$

It may be of independent interest to consider the properties of this explicit embedding. We can represent it by a matrix of n columns with one embedded point per column. The rank of this matrix (and, therefore, the dimensionality of the embedded points) turns out to be O(n). But we can show that the “effective rank” [32] of the centered matrix is just O(d log n); see Appendix B.

3 Incorporating a query

Once again, we begin with the 1-dimensional case and then extend to higher dimension.

3.1 The 1-dimensional case

For nearest neighbor search, we need a joint embedding of the data points S = {x1, x2, . . . , xn} with the subsequent query point q. In fact, we need to embed S first and then incorporate q later, but this is non-trivial since adding q changes the explicit embedding of the other points. We start with an example. Again, assume x1 ≤ x2 ≤ · · · ≤ xn.

Example 1. Suppose the query q has x2 ≤ q < x3. Adding q to the original n points changes the embedding φ(·) ∈ $\mathbb{R}^{n-1}$ of Eq. 1 to φ(·) ∈ $\mathbb{R}^{n}$. Notice that the dimension increases by one.

$$\varphi(x_1) = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad \varphi(x_2) = \begin{bmatrix} \sqrt{x_2 - x_1} \\ 0 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad \varphi(x_3) = \begin{bmatrix} \sqrt{x_2 - x_1} \\ \sqrt{q - x_2} \\ \sqrt{x_3 - q} \\ 0 \\ \vdots \\ 0 \end{bmatrix},\ \dots,\ \varphi(x_n) = \begin{bmatrix} \sqrt{x_2 - x_1} \\ \sqrt{q - x_2} \\ \sqrt{x_3 - q} \\ \sqrt{x_4 - x_3} \\ \vdots \\ \sqrt{x_n - x_{n-1}} \end{bmatrix} \qquad (6)$$

The query point is mapped to $\varphi(q) = (\sqrt{x_2 - x_1}, \sqrt{q - x_2}, 0, \dots, 0)^{\tau} \in \mathbb{R}^n$.

From the example above, it is clear what happens when q lies between some xi and xi+1. There are also two “corner cases” that can occur: q < x1 and q > xn. Fortunately, the embedding of S is almost unchanged in the corner cases: $\varphi(x_i) = (\varphi(x_i)^{\tau}, 0)^{\tau} \in \mathbb{R}^n$, appending a zero at the end. For q < x1, the query is mapped to $\varphi(q) = (0, \dots, 0, \sqrt{x_1 - q})^{\tau} \in \mathbb{R}^n$; for q ≥ xn, the query is mapped to $\varphi(q) = (\sqrt{x_2 - x_1}, \sqrt{x_3 - x_2}, \dots, \sqrt{x_n - x_{n-1}}, \sqrt{q - x_n})^{\tau} \in \mathbb{R}^n$.
3.2 Random projection for the 1-dimensional case

We would like to generate Gaussian random projections of the ℓ2 embeddings of the data points. In this subsection, we mainly focus on the typical case where the query q lies between two data points, and we leave the treatment of the (simpler) corner cases to Alg. 1. The notation follows Section 3.1, and we assume the xi are arranged in increasing order for i = 1, 2, . . . , n.

Setting 1. The query lies between two data points: xα ≤ q < xα+1 for some 1 ≤ α ≤ n − 1.

We will consider two methods for randomly projecting the embedding of S ∪ {q} and show that they yield exactly the same joint distribution.

The first method applies Gaussian random projection to the embedding φ of S ∪ {q}. Sample a multivariate Gaussian vector v from N(0, In). For any x ∈ S ∪ {q}, the projection is

$$p_g(x) := v \cdot \varphi(x) \qquad (7)$$

This is exactly the projection we want. However, it requires both S and q, whereas in practice, we will initially have to project just S by itself, and we will only later be given some (arbitrary) q.

The second method starts by projecting the explicitly embedded points S. Later, it receives the query q and finds a suitable projection for it as well. So, we begin by sampling a multivariate Gaussian vector u from N(0, In−1), and for any x ∈ S, use the projection

$$p_e(x) := u \cdot \varphi(x) \qquad (8)$$

where the subindex e stands for embedding. Conditioned on the value $(p_e(x_{\alpha+1}) - p_e(x_{\alpha}))$, namely $\sqrt{x_{\alpha+1} - x_{\alpha}} \cdot u^{(\alpha)}$, the projection of a subsequent query q is taken to be

$$p_e(q) = p_e(x_{\alpha}) + \Delta, \qquad \Delta \sim \mathcal{N}\!\left( \frac{\sigma_1^2 \, (p_e(x_{\alpha+1}) - p_e(x_{\alpha}))}{\sigma_1^2 + \sigma_2^2},\ \frac{\sigma_1^2 \sigma_2^2}{\sigma_1^2 + \sigma_2^2} \right) \qquad (9)$$

where $\sigma_1^2 = q - x_{\alpha}$ and $\sigma_2^2 = x_{\alpha+1} - q$.

Theorem 1. Fix any x1, . . . , xn, q ∈ R satisfying Setting 1. Consider the joint distribution of [pg(x1), pg(x2), . . . , pg(xn), pg(q)] induced by a random choice of v (as per Eq. 7), and the joint distribution of [pe(x1), pe(x2), . . . , pe(xn), pe(q)] induced by a random choice of u and Δ (as per Eqs. 8 and 9). These distributions are identical.
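Theorem 1 can be checked empirically: draw many samples under each projection method and compare the first two moments of the joint distributions. The sketch below (our own illustration; the specific point set is arbitrary) instantiates Eqs. (7)–(9) for three data points and one in-range query:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# Sorted data points and a query between x2 and x3 (Setting 1, alpha = 2).
x1, x2, x3, q = 0.0, 1.0, 3.0, 1.5

# Method 1: project the joint embedding of S u {q} (Eq. 7).
# The query splits the gap [x2, x3], so the joint embedding lives in R^3.
phi_joint = np.array([
    [0.0, 0.0, 0.0],                                       # phi(x1)
    [np.sqrt(x2 - x1), 0.0, 0.0],                          # phi(x2)
    [np.sqrt(x2 - x1), np.sqrt(q - x2), np.sqrt(x3 - q)],  # phi(x3)
    [np.sqrt(x2 - x1), np.sqrt(q - x2), 0.0],              # phi(q)
])
v = rng.standard_normal((N, 3))
pg = v @ phi_joint.T          # N samples of (pg(x1), pg(x2), pg(x3), pg(q))

# Method 2: project S first (Eq. 8), then extend to q (Eq. 9).
phi_S = np.array([
    [0.0, 0.0],
    [np.sqrt(x2 - x1), 0.0],
    [np.sqrt(x2 - x1), np.sqrt(x3 - x2)],
])
u = rng.standard_normal((N, 2))
pe_S = u @ phi_S.T
s1, s2 = q - x2, x3 - q                  # sigma_1^2 and sigma_2^2
gap = pe_S[:, 2] - pe_S[:, 1]            # pe(x3) - pe(x2)
delta = s1 * gap / (s1 + s2) + np.sqrt(s1 * s2 / (s1 + s2)) * rng.standard_normal(N)
pe = np.column_stack([pe_S, pe_S[:, 1] + delta])

# Both joint distributions are zero-mean Gaussians; compare the covariances.
C_g = np.cov(pg, rowvar=False)
C_e = np.cov(pe, rowvar=False)
print(np.max(np.abs(C_g - C_e)))         # only sampling error remains
```

Both empirical covariance matrices should match the analytic covariance $\varphi \varphi^{\tau}$ of the joint embedding, up to sampling noise.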
The details are in Appendix A: briefly, we show that both joint distributions are multivariate Gaussians, and that they have the same mean and covariance.

We highlight the advantages of our method. First, projecting the data set using Eq. 8 does not require advance knowledge of the query, which is crucial for nearest neighbor search; second, generating the projection for the 1-dimensional query takes O(log n) time, which makes this method efficient. We describe the 1-dimensional algorithm in Alg. 1, where we assume that a permutation that sorts the points, denoted Π, is provided, along with the location of q within this ordering, denoted α. We will resolve this later in Alg. 2.

3.3 Random projection for the higher dimensional case

We will henceforth use ERP (Euclidean random projection) to denote our overall scheme consisting of embedding ℓ1 into $\ell_2^2$, followed by random Gaussian projection (Alg. 2). A competitor scheme, as described earlier, applies Cauchy random projection directly in the ℓ1 space; we refer to this as CRP. The time and space costs of ERP when generating k projections for n data points and m queries in $\mathbb{R}^d$ are shown in Table 1. The costs scale linearly in d, since the constructions and computations proceed dimension by dimension. A detailed analysis follows.

Preprocessing: This involves sorting the points along each coordinate separately and storing the resulting permutations Π1, . . . , Πd. The time and space costs are acceptable, because merely reading or storing the data already takes O(nd).

Project data: The time taken by ERP to project the n points is comparable to that of CRP. But ERP requires a factor O(n) more space, compared to O(kd) for CRP, because it needs to store the projections of each of the individual coordinates of the data points.

Project query: ERP is efficient for query answering. The projection is calculated directly in the original d-dimensional space.
The log n overhead comes from using binary search, coordinatewise, to place the query within the ordering of the data points. Once these ranks are obtained, they can be reused for as many projections as needed.

Algorithm 1 Random projection (1-dimensional case)

function project-data(S, Π)
  input: data set S = (xi : 1 ≤ i ≤ n); sorted indices Π = (πi : 1 ≤ i ≤ n) such that xπ1 ≤ xπ2 ≤ · · · ≤ xπn
  output: projections P = (pi : 1 ≤ i ≤ n) for S
  pπ1 ← 0
  for i = 2, 3, . . . , n do
    ui ← N(0, 1)
    pπi ← pπi−1 + ui · √(xπi − xπi−1)
  end for
  return P

function project-query(q, α, S, Π, P)
  input: query q and its rank α in the data set S; sorted indices Π of S; projections P of S
  output: projection pq for q
  case 1 ≤ α ≤ n − 1:
    σ1² ← q − xπα ;  σ2² ← xπα+1 − q
    Δ ← N( σ1² · (pπα+1 − pπα) / (σ1² + σ2²) ,  σ1²σ2² / (σ1² + σ2²) )
    pq ← pπα + Δ
  case α = 0:
    r ← N(0, 1);  pq ← r · √(xπ1 − q)
  case α = n:
    r ← N(0, 1);  pq ← pπn + r · √(q − xπn)
  return pq

Table 1: Efficiency of the ERP algorithm: generate k projections for n data points and m queries in R^d.

             | Preprocessing | Project data | Project query
Time cost    | O(dn log n)   | O(knd)       | O(md(k + log n))
Space cost   | O(dn)         | O(knd)       | N/A

4 Experiment

In this section, we demonstrate that ERP can be directly used by existing NN search algorithms, such as LSH, for efficient ℓ1 NN search. We choose commonly used data sets for image retrieval and text classification. Besides our method, we also implement the metric tree (a popular tree-type data structure) and Cauchy LSH for comparison.

Data sets When data points represent distributions, ℓ1 distance is natural. We use four such data sets. 1) Corel uci [21], available at [33], contains 68,040 histograms (32-dimensional) of color images from the Corel image collections; 2) Corel hist [34, 21], processed by [21], contains 19,797 histograms (64-dimensional, of which 44 dimensions are non-zero) of color images from the Corel Stock Library; 3) Cade [35] is a collection of documents from Brazilian web pages.
Topics are extracted using the latent Dirichlet allocation algorithm [36]. We use 13,016 documents with distributions over 120 topics (120-dimensional); 4) We download about 35,000 images from ImageNet [37], and process each of them into a probability distribution over 1,000 classes using a trained convolutional neural network [38]. Furthermore, we collapse each distribution into a 100-dimensional representation by summing the probability mass of every 10 consecutive classes. This reduces the training and testing time. In each data set, we remove duplicates. For both parameter optimization and testing, we randomly separate out 10% of the data as queries, so that the query-to-data ratio is 1 : 9.

Performance evaluation We evaluate performance using query cost. For linear scan or the metric tree, this is the average number of points accessed when answering a query. For LSH, we also need to add the overhead of evaluating the LSH functions. The LSH scheme of [8, 39] is summarized as follows. Given three parameters k, L and R (k, L are positive integers, k is even, R is a positive real), the LSH algorithm uses k-tuple hash functions of the form g(x) = (h1(x), h2(x), . . . , hk(x)) to distribute data or queries to their bins. L is the total number of such g-functions. The h-functions are of the form h(x) = ⌊(v · x + b)/R⌋, each either explicitly or implicitly associated with a random vector v and a uniformly distributed variable b ∈ [0, R). As suggested in [39], we implement the reuse of h-functions so that only (k/2) · √(2L) of them are actually evaluated. For ERP-LSH, there is an additional overhead of log n due to the use of binary search. We summarize these costs in Table 2; for conciseness, we have removed the linear dependence on d in both the retrieval cost and the overhead.

Algorithm 2 Overall algorithm for random projection, in the context of NN search

Starting information: data set S = {xi : 1 ≤ i ≤ n} ⊂ R^d
Subsequent arrival: query q ∈ R^d

preprocessing (sort the data along each dimension):
  for j ∈ {1, . . . , d} do
    Sj ← {x_i^{(j)} : 1 ≤ i ≤ n}
    Πj ← index-sort(Sj), where Πj = {πji : 1 ≤ i ≤ n} satisfies x^{(j)}_{πj1} ≤ x^{(j)}_{πj2} ≤ · · · ≤ x^{(j)}_{πjn}
  end for
  save Π = (Π1, Π2, . . . , Πd)

project data:
  for j = 1, 2, . . . , d do
    Pj ← project-data(Sj, Πj), where Pj = {pji : 1 ≤ i ≤ n}
  end for
  save P = (P1, P2, . . . , Pd); the projection of xi ∈ S is Σ_{j=1}^{d} pji

project query:
  for j = 1, 2, . . . , d do
    αj ← binary-search(q^{(j)}, Sj, Πj) satisfying x^{(j)}_{πjαj} ≤ q^{(j)} ≤ x^{(j)}_{πj(αj+1)}
  end for
  save the ranks α for use in multiple projections
  pq ← 0
  for j = 1, 2, . . . , d do
    pq ← pq + project-query(q^{(j)}, αj, Sj, Πj, Pj)
  end for
  the projection of q is pq

Table 2: Performance evaluation: query cost = Tr + To.

                           | Retrieval cost Tr  | Overhead To
Linear scan or metric tree | # accessed points  | 0
CRP-LSH                    | # accessed points  | (k/2) · √(2L)
ERP-LSH                    | # accessed points  | (k/2) · √(2L) + log n

Implementations The linear scan and the metric tree perform exact NN search. We use the code of [40] for the metric tree. For LSH, public code is available only for ℓ2 NN search, so we implement the LSH scheme ourselves, following the manual [39]. In particular, we implement the reuse of the h-functions, so that the number of actually evaluated h-functions is (k/2) · √(2L), in contrast to k · L. We choose approximation factor c = 1.5 (the reported results turn out to be much closer to the true NN), and set the success rate to 0.9, which means that the algorithm should report a c-approximate NN successfully for at least 90% of the queries. Taking the parameter suggestions of [8] into account, we choose R for CRP-LSH from dNN × {1, 5, 10, 50, 100} and R for ERP-LSH from d′NN × {1, 2, 3, 4}, where

$$d_{NN} = \frac{1}{|Q|} \sum_{q \in Q} \|q - x_{NN(q)}\|_1$$

is the average ℓ1 NN distance and

$$d'_{NN} = \frac{1}{|Q|} \sum_{q \in Q} \sqrt{\|q - x_{NN(q)}\|_1}$$

is the average $\ell_1^{1/2}$ NN distance. The term dNN or d′NN normalizes the average NN distance to 1 for LSH. Fixing R, we optimize k and L over the ranges k ∈ {2, 4, . . . , 30} and L ∈ {1, 2, . . . , 40}.
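Putting Algorithms 1 and 2 together, the whole ERP pipeline fits in a few dozen lines. The sketch below is our own illustrative implementation (not the authors' code): it sorts each coordinate, runs the Gaussian random walk of project-data, and extends each walk to a query via binary search and Eq. 9, handling the corner cases as in Alg. 1. Averaging the squared projection differences over many independent projections recovers the ℓ1 distance:

```python
import numpy as np

def erp_project(X, Q, k, rng):
    """k Gaussian projections of the implicit l1 -> l2^2 embedding.

    X: (n, d) data, Q: (m, d) queries. Returns (n, k) and (m, k) projections
    whose squared differences are unbiased estimates of l1 distances.
    Assumes distinct coordinate values (ties would make a gap zero).
    """
    n, d = X.shape
    P_X = np.zeros((n, k))
    P_Q = np.zeros((len(Q), k))
    for j in range(d):                                 # dimension by dimension
        order = np.argsort(X[:, j])
        xs = X[order, j]
        # project-data: a random walk with N(0,1) steps scaled by sqrt of gaps
        steps = np.sqrt(np.diff(xs))[:, None] * rng.standard_normal((n - 1, k))
        p = np.vstack([np.zeros((1, k)), np.cumsum(steps, axis=0)])
        P_X[order] += p
        # project-query: binary search for the rank, then Eq. 9 / corner cases
        for qi, qv in enumerate(Q[:, j]):
            a = np.searchsorted(xs, qv)
            if a == 0:                                 # query below all points
                pq = np.sqrt(xs[0] - qv) * rng.standard_normal(k)
            elif a == n:                               # query above all points
                pq = p[-1] + np.sqrt(qv - xs[-1]) * rng.standard_normal(k)
            else:                                      # in-range query (Eq. 9)
                s1, s2 = qv - xs[a - 1], xs[a] - qv
                gap = p[a] - p[a - 1]
                pq = p[a - 1] + s1 * gap / (s1 + s2) \
                     + np.sqrt(s1 * s2 / (s1 + s2)) * rng.standard_normal(k)
            P_Q[qi] += pq
    return P_X, P_Q
```

With enough projections, the mean of the squared projection differences between a data point and a query concentrates around their ℓ1 distance.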
Results Both CRP-LSH and ERP-LSH are far more efficient than the other two methods. We list the test results in Table 3; the chosen parameters are given in Table 4 in Appendix C.

Table 3: Average query cost and average approximation rate, where applicable (in parentheses).

            | Corel uci (d = 32) | Corel hist (d = 44) | Cade (d = 120)  | ImageNet (d = 100)
Linear scan | 61220              | 17809               | 11715           | 31458
Metric tree | 2575               | 718                 | 9184            | 12375
CRP-LSH     | 329 ± 55 (1.07)    | 245 ± 43 (1.05)     | 292 ± 11 (1.11) | 548 ± 66 (1.09)
ERP-LSH     | 330 ± 18 (1.11)    | 250 ± 15 (1.08)     | 218 ± 8 (1.15)  | 346 ± 15 (1.13)

5 Conclusion

In this paper, we have proposed an explicit embedding from ℓ1 to $\ell_2^2$, together with an algorithm for generating random projections of it that reduces the query-time dependence on n from O(n) to O(log n). In addition, we have observed that the effective rank of the (centered) embedding is as low as O(d ln n), compared to its rank of O(n). Exploring algorithms that take advantage of this low rank remains future work. Our current method takes O(ndm) space to store the parameters of the random vectors, where m is the number of hash functions. We have implemented one empirical scheme [39] for reusing the hash functions; developing other reuse schemes remains open.

Acknowledgement The authors are grateful to the National Science Foundation for support under grant IIS-1162581.

References

[1] C. J. Stone. Consistent nonparametric regression. The Annals of Statistics, 5:595–620, 1977.
[2] P. Y. Simard, Y. A. LeCun, J. S. Denker, and B. Victorri. Transformation invariance in pattern recognition—Tangent distance and tangent propagation. In Neural Networks: Tricks of the Trade, volume 1524, pages 239–274. Springer-Verlag, New York, 1998.
[3] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell., 24(4):509–522, 2002.
[4] A. Broder. On the resemblance and containment of documents.
In Proceedings of Compression and Complexity of Sequences, pages 21–29, 1997.
[5] P. Indyk and R. Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In STOC, pages 604–613, 1998.
[6] A. Broder, M. Charikar, A. Frieze, and M. Mitzenmacher. Min-wise independent permutations. Journal of Computer and System Sciences, 60:630–659, 2000.
[7] M. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, pages 380–388, 2002.
[8] M. Datar, N. Immorlica, P. Indyk, and V. Mirrokni. Locality-sensitive hashing scheme based on p-stable distributions. In SoCG, pages 253–262, 2004.
[9] A. Shrivastava and P. Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In NIPS, pages 2321–2329, 2014.
[10] A. Andoni and P. Indyk. Efficient algorithms for substring near neighbor problem. In SODA, pages 1203–1212, 2006.
[11] J. L. Bentley. Multidimensional binary search trees used for associative searching. Communications of the ACM, 18(9):509–517, 1975.
[12] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. In ICML, pages 97–104, 2006.
[13] S. M. Omohundro. Bumptrees for efficient function, constraint, and classification learning. In NIPS, volume 40, pages 175–179, 1991.
[14] J. K. Uhlmann. Satisfying general proximity/similarity queries with metric trees. Information Processing Letters, 40:175–179, 1991.
[15] S. Dasgupta and K. Sinha. Randomized partition trees for nearest neighbor search. Algorithmica, 72:237–263, 2015.
[16] W. Johnson and J. Lindenstrauss. Extensions of Lipschitz maps into a Hilbert space. Contemporary Mathematics, 26:189–206, 1984.
[17] J. H. Wells and L. R. Williams. Embeddings and Extensions in Analysis, volume 84. Springer-Verlag, New York, 1975.
[18] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Trans. Pattern Anal. Mach. Intell., 34:480–492, 2012.
[19] N. Linial, E. London, and Y. Rabinovich. The geometry of graphs and some of its algorithmic applications. In FOCS, pages 577–591, 1994.
[20] I. Borg and P. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer-Verlag, Berlin, 1997.
[21] T. Liu, A. Moore, A. Gray, and K. Yang. An investigation of practical approximate nearest neighbor algorithms. In NIPS, pages 825–832, 2004.
[22] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. Communications of the ACM, 51(1):117–122, 2008.
[23] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions. In FOCS, pages 459–468, 2006.
[24] J. Kleinberg. Two algorithms for nearest-neighbor search in high dimensions. In STOC, pages 599–608, 1997.
[25] N. Ailon and B. Chazelle. The fast Johnson-Lindenstrauss transform and approximate nearest neighbors. SIAM Journal on Computing, 39:302–322, 2009.
[26] P. Li, G. Samorodnitsky, and J. Hopcroft. Sign Cauchy projections and chi-square kernel. In NIPS, pages 2571–2579, 2013.
[27] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures & Algorithms, 22:60–65, 2003.
[28] D. Achlioptas. Database-friendly random projections. In Proceedings of the Symposium on Principles of Database Systems, pages 274–281, 2001.
[29] R. I. Arriaga and S. Vempala. An algorithmic theory of learning: Robust concepts and random projection. In FOCS, pages 616–623, 1999.
[30] M. Charikar and A. Sahai. Dimension reduction in the L1 norm. In FOCS, pages 551–560, 2002.
[31] P. Indyk. Stable distributions, pseudorandom generators, embeddings, and data stream computation. Journal of the ACM, 53(3):307–323, 2006.
[32] M. Rudelson and R. Vershynin. Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM, 54(4):21, 2007.
[33] https://archive.ics.uci.edu/ml/datasets/Corel+Image+Features.
[34] A. Gionis, P. Indyk, and R. Motwani. Similarity search in high dimensions via hashing. In VLDB, volume 99, pages 518–529, 1999.
[35] A. Cardoso-Cachopo. Improving Methods for Single-label Text Categorization. PhD thesis, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, 2007. Data available at http://ana.cachopo.org/datasets-for-single-label-text-categorization.
[36] http://www.cs.columbia.edu/~blei/topicmodeling_software.html.
[37] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and F. Li. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE, 2009.
[38] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[39] A. Andoni and P. Indyk. E2LSH 0.1 user manual. Technical report, 2005.
[40] J. K. Uhlmann. Implementing metric trees to satisfy general proximity/similarity queries. Manuscript, 1991.
Multi-armed Bandits: Competing with Optimal Sequences

Oren Anava
The Voleon Group
Berkeley, CA
oren@voleon.com

Zohar Karnin
Yahoo! Research
New York, NY
zkarnin@yahoo-inc.com

Abstract

We consider the sequential decision making problem in the adversarial setting, where regret is measured with respect to the optimal sequence of actions and the feedback adheres to the bandit setting. It is well known that obtaining sublinear regret in this setting is impossible in general, which raises the question: when can we do better than linear regret? Previous works show that when the environment is guaranteed to vary slowly, and furthermore we are given prior knowledge regarding its variation (i.e., a limit on the amount of change suffered by the environment), then this task is feasible. The caveat, however, is that such prior knowledge is not likely to be available in practice, which renders the obtained regret bounds somewhat irrelevant. Our main result is a regret guarantee that scales with the variation parameter of the environment, without requiring any prior knowledge about it whatsoever. By that, we also resolve an open problem posed by Gur, Zeevi and Besbes [8]. An important key component in our result is a statistical test for identifying non-stationarity in a sequence of independent random variables. This test either identifies non-stationarity or upper-bounds the absolute deviation of the corresponding sequence of mean values in terms of its total variation. The test is interesting in its own right and has the potential to be useful in additional settings.

1 Introduction

Multi-Armed Bandit (MAB) problems have been studied extensively in the past, with two important special cases: the Stochastic Multi-Armed Bandit, and the Adversarial (Non-Stochastic) Multi-Armed Bandit. In both formulations, the problem can be viewed as a T-round repeated game between a player and nature.
In each round, the player chooses one of k actions¹ and observes the loss corresponding to this action only (the so-called bandit feedback). In the adversarial formulation, it is usually assumed that the losses are chosen by an all-powerful adversary that has full knowledge of our algorithm. In particular, the loss sequences need not comply with any distributional assumptions. On the other hand, in the stochastic formulation each action is associated with some mean value that does not change throughout the game. The feedback from choosing an action is an i.i.d. noisy observation of this action’s mean value. The performance of the player is traditionally measured using the static regret, which compares the total loss of the player with the total loss of the benchmark playing the best fixed action in hindsight. A stronger measure of the player’s performance, sometimes referred to as dynamic regret² (or just regret for brevity), compares the total loss of the player with that of the optimal benchmark, playing the best possible sequence of actions. Notice that in the stochastic formulation both measures coincide, assuming that the benchmark has access to the parameters defining the random process of the losses but not to the random bits generating the loss sequences. In the adversarial formulation this is clearly not the case, and it is not hard to show that attaining sublinear regret is impossible in general, whereas obtaining sublinear static regret is indeed possible. This can perhaps explain why most of the literature is concerned with optimizing the static regret rather than its dynamic counterpart.

¹We sometimes use the terminology arm for an action throughout.
²The dynamic regret is occasionally referred to as shifting regret or tracking regret in the literature.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Previous attempts to tackle the problem of regret minimization in the adversarial formulation mostly took advantage of some niceness parameter of nature (that is, some non-adversarial behavior of the loss sequences). This line of research is becoming more and more popular, as full characterizations of the regret turn out to be feasible with respect to specific niceness parameters. In this work we focus on a broad family of such niceness parameters, usually called variation-type parameters, originating from the work of [8] in the context of (dynamic) regret minimization. Essentially, we consider a MAB setting in which the mean value of each action can vary over time in an adversarial manner, and the feedback to the player is a noisy observation of that mean value. The variation is then defined as the sum of distances between the vectors of mean values over consecutive rounds, or formally,

$$V_T \stackrel{\text{def}}{=} \sum_{t=2}^{T} \max_{i} |\mu_t(i) - \mu_{t-1}(i)|, \qquad (1)$$

where µt(i) denotes the mean value of action i at round t. Despite the presentation of VT using the maximum norm, any other norm leads to similar qualitative formulations. Previous approaches to the problem at hand relied on strong (and sometimes even unrealistic) assumptions on the variation (we refer the reader to Section 1.3, in which related work is discussed in detail). The natural question is whether it is possible to design an algorithm that does not require any assumptions on the variation, yet achieves o(T) regret whenever VT = o(T). In this paper we answer this question in the affirmative and prove the following.

Theorem (Informal). Consider a MAB setting with two arms and time horizon T. Assume that at each round t ∈ {1, . . . , T}, the random variables of obtainable losses correspond to a vector of mean values µt. Then, Algorithm 1 achieves a regret bound of $\tilde{O}\big(T^{0.771} + T^{0.82} V_T^{0.18}\big)$.
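The variation of Eq. (1) is straightforward to compute from the matrix of per-round mean losses; a minimal sketch (our own illustration, not the paper's code):

```python
import numpy as np

def variation(mu):
    """Total variation (Eq. 1) of a (T, k) array of per-round mean losses.

    Per round, take the maximum coordinate change from the previous round,
    then sum over rounds t = 2, ..., T.
    """
    mu = np.asarray(mu, dtype=float)
    return np.abs(np.diff(mu, axis=0)).max(axis=1).sum()
```

A slowly drifting environment yields a small value, while an environment that jumps between extreme mean vectors every round yields variation linear in T.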
Our techniques rely on statistical tests designed to identify changes in the environment on the one hand, while exploiting the best option observed so far when there has been no significant environment change. We elaborate on the key ideas behind our techniques in Section 1.2.

1.1 Model and Motivation

A player is faced with a sequential decision making task: in each round t ∈ {1, . . . , T} = [T], the player chooses an action it ∈ {1, . . . , k} = [k] and observes a loss ℓt(it) ∈ [0, 1]. We assume that $\mathbb{E}[\ell_t(i)] = \mu_t(i)$ for any i ∈ [k] and t ∈ [T], where $\{\mu_t(i)\}_{t=1}^{T}$ are fixed beforehand by the adversary (that is, the adversary is oblivious). For simplicity, we assume that $\{\ell_t(i)\}_{t=1}^{T}$ are also generated beforehand. The goal of the player is to minimize the regret, which is henceforth defined as

$$R_T = \sum_{t=1}^{T} \mu_t(i_t) - \sum_{t=1}^{T} \mu_t(i_t^*), \qquad \text{where } i_t^* = \arg\min_{i \in [k]} \mu_t(i).$$

A sequence of actions $\{i_t\}_{t=1}^{T}$ has no regret if RT = o(T). It is well known that generating no-regret sequences in our setting is generally impossible, unless the benchmark sequence is somehow limited (for example, in its total number of action switches) or, alternatively, some characterization of $\{\mu_t(i)\}_{t=1}^{T}$ is given (in our case, $\{\mu_t(i)\}_{t=1}^{T}$ are characterized via the variation). While limiting the benchmark makes sense only when we have a strong reason to believe that an action sequence from the limited class has satisfactory performance, characterizing the environment is an approach that leads to guarantees of the following type: if the environment is well behaved (w.r.t. our characterization), then our performance is comparable with that of the optimal sequence of actions. If not, then no algorithm is capable of obtaining sublinear regret without further assumptions on the environment. Obtaining algorithms with such a guarantee is an important task in many real-world applications.
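The dynamic regret defined above can be computed directly when the mean losses are known; a minimal sketch (our own illustration):

```python
import numpy as np

def dynamic_regret(mu, plays):
    """Dynamic regret R_T of a play sequence against the best action per round.

    mu: (T, k) array of mean losses; plays: length-T sequence of chosen arms.
    The benchmark picks argmin_i mu_t(i) at every round t.
    """
    mu = np.asarray(mu, dtype=float)
    plays = np.asarray(plays, dtype=int)
    t = np.arange(len(plays))
    return mu[t, plays].sum() - mu.min(axis=1).sum()
```

Note that against a fixed-action benchmark the second term would instead be the minimum of the column sums of mu, which is never smaller.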
For example, an online forecaster must respond to time-related trends in her data, an investor seeks to detect trading trends as quickly as possible, a salesman should adjust himself to the constantly changing taste of his audience, and many other examples can be found. We believe that in many of these examples the environment is likely to change slowly, making guarantees of the type we present highly desirable.

1.2 Our Techniques

An intermediate (noiseless) setting. We begin with an easier setting, in which the observable losses are deterministic. That is, by choosing arm i at round t, rather than observing the realization of a random variable with mean value µt(i), we simply observe µt(i). Note that $\{\mu_t\}_{t=1}^{T}$ are still assumed to be generated adversarially. In this setting, the following intuitive solution can be shown to work (for two arms): pull each arm once and observe two values. Now, in each round pull the arm with the smaller loss w.p. 1 − o(1) and the other arm w.p. o(1), where the latter is decreasing with time. As long as the mean values of the arms have not significantly shifted compared to their original values, continue. Once a significant shift is observed, reset all counters and start over. We note that while the algorithm is simple, its analysis is not straightforward and contains some counterintuitive ideas. In particular, we show that the true (unknown) variation can be replaced with a crude proxy called the observed variation (to be defined later), while still maintaining mathematical tractability of the problem. To see the importance of this proxy, let us first describe a different approach to the problem at hand, which in particular directly extends the approach of [8], who show that if an upper bound on VT is known in advance, then the optimal regret is attainable. Therefore, one might guess that having an unbiased estimator for VT would eliminate the need for this prior knowledge.
Obtaining such an unbiased estimator is not hard (via importance sampling), but it turns out that it is not sufficient: the values of the variation to be identified are simply too small to be accurately estimated. This is where the observed variation comes into the picture; it is loosely defined as the loss difference between two successive pulls of the same arm. Clearly, the true variation is only larger, but as we show, it cannot be much larger without us noticing it. We provide a complete analysis of the noiseless setting in Appendix A. This analysis is not directly used for dealing with the noisy setting, but acts as a warm-up and contains some of the key techniques used for it.

Back to our (noisy) setting. Here too we focus on the case of k = 2 arms. When the losses are stochastic, the same basic ideas apply, but several new major issues come up. In particular, here as well we present an algorithm that resets all counters and starts over once a significant change in the environment is detected. The similarity, however, ends here, mainly because the noisy feedback makes it hard to determine whether the changes we see are due to some environmental shift or due to the stochastic nature of the problem. The straightforward way of overcoming this is to forcefully divide the time into ‘bins’ in which we continuously pull the same arm. By doing this, and averaging the observed losses within a bin, we can obtain feedback that is not as noisy. This meta-technique raises two major issues. The first is: how long should these bins be? A long period would eliminate the noise originating from the stochastic feedback but cripple our adaptive capabilities and make us more vulnerable to changes in the environment. The second issue is: if there was a change in the environment that is in some sense local to a single bin, how can we identify it? And, assuming we did, when should we tolerate it?
The algorithm we present overcomes the first issue by starting with an exploration phase, in which both arms are queried with equal probability. We advance to the next phase only once it is clear that the average loss of one arm is greater than that of the other, and furthermore, we have a solid estimate of the gap between them. In the next, exploitation phase, we mimic the above algorithm for deterministic feedback by pulling the arms in bins of length proportional to the inverse squared gap between the arms. The techniques from above take care of the regret compared to a strategy that must be fixed inside the bins, or alternatively, against the optimal strategy if we were somehow guaranteed that there are no significant environment changes within bins. This leads us to the second issue, but first consider the following example.

Example 1. During the exploration phase, we associated arm #1 with an expected loss of 0.5 and arm #2 with an expected loss of 0.6. Now, consider a bin in which we pull arm #1. In the first half of the bin the expected loss is 0.25 and in the second it is 0.75. The overall expected loss is 0.5; hence, without performing some test w.r.t. the pulls inside the bin, we do not see any change in the environment, and as far as we know we mimicked the optimal strategy. The optimal strategy, however, can clearly do much better, and we suffer a linear regret in this bin. Furthermore, the variation during the bin is constant! The good news is that in this scenario a simple test would determine that the outcome of the arm pulls inside the bin does not resemble that of i.i.d. random variables, meaning that the environment change can be detected.

Figure 1: The optimal policy of an adversary that minimizes variation while maximizing deviation.

Example 1 clearly demonstrates the necessity of a statistical test inside a bin.
However, there are cases in which the changes of the environment are unidentifiable and the regret suffered by any algorithm will be linear, as can be seen in the following example.
Example 2. Assume that arm #1 has a mean value of 0.5 for all t, and arm #2 has a mean value of 1 with probability 0.5 and 0 otherwise. The feedback from pulling arm i at a specific round is a Bernoulli random variable with the mean value of that arm. Clearly, there is no way to distinguish between these arms, and thus any algorithm will suffer linear regret. The point, however, is that the variation in this example is also linear, and thus linear regret is unavoidable in general.
Example 2 shows that if the adversary is willing to put in enough effort (in terms of variation), then linear regret is unavoidable. The intriguing question is whether the adversary can put in less effort (that is, invest less than linear variation) and still cause us to suffer linear regret, while not providing us the ability to notice that the environment has changed. The crux of our analysis is the design of two tests, one per phase (exploration or exploitation), each able to identify changes whenever possible, or to ensure that they do not hurt the regret too much whenever it is not. This building block, along with the 'outer bin regret' analysis mentioned above, allows us to achieve our result in this setting. The essence of our statistical tests is presented here, while formal statements and proofs are deferred to Section 2.
Our statistical tests (informal presentation). Let $X_1, \ldots, X_n \in [0, 1]$ be a sequence of realizations, such that each $X_i$ is generated from an arbitrary distribution with mean value $\mu_i$. Our task is to determine whether it is likely that $\mu_i = \mu_0$ for all $i$, where $\mu_0$ is a given constant.
In case there is not enough evidence to reject this hypothesis, the test is required to bound the absolute deviation of $\mu^n = \{\mu_i\}_{i=1}^n$ (henceforth denoted by $\|\mu^n\|_{\mathrm{ad}}$) in terms of its total variation³ $\|\mu^n\|_{\mathrm{tv}}$. Assume for simplicity that $\bar\mu_{1:n} = \frac{1}{n}\sum_{i=1}^n \mu_i$ is close enough to $\mu_0$ (or even exactly equal to it), which eliminates the need to check the deviation of the average from $\mu_0$. We are thus left with checking the inner-sequence dynamics. It is worthwhile to consider the problem from the adversary's point of view: the adversary has full control of the values of $\{\mu_i\}_{i=1}^n$, and his task is to deviate as much as he can from the average without providing us the ability to identify this deviation. Now, consider a partition of $[n]$ into consecutive segments, such that $(\mu_i - \mu_0)$ has the same sign for any $i$ within a segment. Given this partition, it can be shown that the optimal policy of an adversary that tries to minimize the total variation of $\{\mu_i\}_{i=1}^n$ while maximizing its absolute deviation is to set the $\mu_i$ to be equal within each segment. The length of a segment $[a, b]$ is thus limited to at most $1/|\bar\mu_{a:b} - \mu_0|^2$, or otherwise the deviation is noticeable (this follows by standard concentration arguments). Figure 1 provides a visualization of this optimal policy. Summing the absolute deviation over the segments and using Hölder's inequality ensures that $\|\mu^n\|_{\mathrm{ad}} \le n^{2/3} \|\mu^n\|_{\mathrm{tv}}^{1/3}$, or otherwise there exists a segment in which the distance between the realization average and $\mu_0$ is significantly large. Our test is thus the simple test that measures this distance for every segment. Notice that the test runs in polynomial time; further optimization might improve the polynomial degree, but is outside the scope of this paper. The test presented above aims to bound the absolute deviation w.r.t. some given mean value. As such, it is appropriate only for the exploitation phase of our algorithm, in which a solid estimate of each arm's mean value is given.
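To make these quantities concrete, the following sketch (our own illustration, using the definitions of total variation and absolute deviation from footnote 3) builds a piecewise-constant mean sequence mimicking the adversary's optimal policy, with segment length matched to the squared deviation, and verifies that the bound above holds for it.

```python
def total_variation(mu):
    # ||mu||_tv = sum_{i>=2} |mu_i - mu_{i-1}|   (footnote 3)
    return sum(abs(a - b) for a, b in zip(mu[1:], mu[:-1]))

def absolute_deviation(mu):
    # ||mu||_ad = sum_i |mu_i - mean(mu)|        (footnote 3)
    m = sum(mu) / len(mu)
    return sum(abs(x - m) for x in mu)

# A piecewise-constant sequence mimicking the adversary's optimal policy:
# segments of length L deviating by +/- eps = 1/sqrt(L) around mu0 = 0.5,
# i.e. each segment length equals 1 / |segment mean - mu0|^2.
L, segments = 100, 10
eps = L ** -0.5
mu = []
for s in range(segments):
    mu += [0.5 + (eps if s % 2 == 0 else -eps)] * L

n = len(mu)
# Check the bound ||mu||_ad <= n^{2/3} * ||mu||_tv^{1/3} on this sequence.
print(absolute_deviation(mu) <= n ** (2 / 3) * total_variation(mu) ** (1 / 3))
```

Here the absolute deviation is 100 while the bound evaluates to roughly 121.6, so the inequality holds with room to spare.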
However, it turns out that bounding the absolute deviation with respect to some unknown mean value can be done using similar ideas, yet is slightly more complicated.
³We use standard notions of total variation and absolute deviation. That is, the total variation of a sequence $\mu^n = \{\mu_i\}_{i=1}^n$ is defined as $\|\mu^n\|_{\mathrm{tv}} = \sum_{i=2}^n |\mu_i - \mu_{i-1}|$, and its absolute deviation is $\|\mu^n\|_{\mathrm{ad}} = \sum_{i=1}^n |\mu_i - \bar\mu_{1:n}|$, where $\bar\mu_{1:n} = \frac{1}{n}\sum_{i=1}^n \mu_i$.
Alternative approaches. We point out that the approach of running a meta-bandit algorithm over (logarithmically many) instances of the algorithm proposed by [8] would be very difficult to pursue. In this approach, whenever an EXP3 instance is not chosen by the meta-bandit algorithm, it is still forced to play an arm chosen by a different EXP3 instance. We are not aware of an analysis of EXP3, nor of any other algorithm, equipped to handle such a harsh setting. Another idea that would be hard to pursue is tackling the problem using a doubling trick. This idea is common when parameters needed for the execution of an algorithm are unknown in advance, but can in fact be guessed and updated if necessary. In our case, the variation is not observed due to the bandit feedback, and moreover, estimating it using importance sampling leads to estimators that are too crude to allow a doubling trick.
1.3 Related Work
The question of whether (and when) it is possible to obtain bounds on notions of regret other than the static regret has long been studied in a variety of settings, including Online Convex Optimization (OCO), Bandit Convex Optimization (BCO), Prediction with Expert Advice, and Multi-Armed Bandits (MAB). Stronger notions of regret include the dynamic regret (see for instance [17, 4]), the adaptive regret [11], the strongly adaptive regret [5], and more. From now on, we focus on the dynamic regret only. Regardless of the setting considered, it is not hard to construct a loss sequence such that obtaining sublinear dynamic regret is impossible (in general).
Thus, the problem of minimizing it is usually weakened in one of the two following forms: (1) restricting the benchmark; and (2) characterizing the niceness of the environment. With respect to the first weakening form, [17] showed that in the OCO setting the dynamic regret can be bounded in terms of $C_T = \sum_{t=2}^T \|a_t - a_{t-1}\|$, where $\{a_t\}_{t=1}^T$ is the benchmark sequence. In particular, restricting the benchmark sequence with $C_T = 0$ recovers the standard static regret result. [6] suggested that this type of result is attainable in the BCO setting as well, but we are not familiar with such a result. In the MAB setting, [1] defined the hardness of a benchmark sequence as the number of its action switches, and bounded the dynamic regret in terms of this hardness. Here again, the standard static regret bound is obtained if the hardness is restricted to 0. The concept of bounding the dynamic regret in terms of the total number of action switches was studied by [14] in the setting of Prediction with Expert Advice. With respect to the second weakening form, one can find an immense amount of MAB literature that uses stochastic assumptions to model the environment. In particular, [16] coined the term restless bandits: a model in which the loss sequences change in time according to an arbitrary, yet known in advance, stochastic process. To cope with the hard nature of this model, subsequent works offered approximations, relaxations, and more detailed models [3, 7, 15, 2]. Perhaps the first attempt to handle arbitrary loss sequences in the context of dynamic regret and MAB appears in the work of [8]. In a setting identical to ours, the authors fully characterize the dynamic regret: $\Theta(T^{2/3} V_T^{1/3})$, if a bound on $V_T$ is known in advance. We provide a high-level description of their approach. Roughly speaking, their algorithm divides the time horizon into (equally-sized) blocks and applies the EXP3 algorithm of [1] in each of them. This guarantees sublinear static regret w.r.t.
the best fixed action in the block. Now, since the number of blocks is set to be much larger than the value of $V_T$ (if $V_T = o(T)$), it can be shown that in most blocks the variation inside the block is $o(1)$, and the total loss of the best fixed action (within a block) turns out to be not very far from the total loss of the best sequence of actions. The size of the blocks (which is fixed and determined in advance as a function of $T$ and $V_T$) is tuned accordingly to obtain the optimal rate in this case. The main shortcomings of this algorithm are the reliance on prior knowledge of $V_T$, and the restarting procedure that does not take the variation into account. We also note the work of [13], in which the two forms of weakening are combined to obtain dynamic regret bounds that depend both on the complexity of the benchmark and on the niceness of the environment. Another line of work that is close to ours (at least in spirit) aims to minimize the static regret in terms of the variation (see for instance [9, 10]).
A word about existing statistical tests. There are many statistical tests, such as the z-test, the t-test, and more, that aim to determine whether sample data come from a distribution with a particular mean value. These tests, however, are not suitable for our setting since (1) they mostly require assumptions on the data generation (e.g., Gaussianity), and (2) they lack our desired bound on the total absolute deviation of the mean sequence in terms of its total variation. The latter is especially important in light of Example 2, which demonstrates that a mean sequence can deviate from its average without providing us any hint.
2 Competing with Optimal Sequences
Before presenting our algorithm and analysis we introduce some general notation and definitions. Let $X^n = \{X_i\}_{i=1}^n \in [0, c]^n$ be a sequence of independent random variables, and denote $\mu_i = \mathbb{E}[X_i]$. For any $n_1, n_2 \in [n]$ with $n_1 \le n_2$, we denote by $\bar X_{n_1:n_2}$ the average of $X_{n_1}, \ldots, X_{n_2}$, and by $\bar X^c_{n_1:n_2}$ the average of the other random variables. That is,
$$\bar X_{n_1:n_2} = \frac{1}{n_2 - n_1 + 1} \sum_{i=n_1}^{n_2} X_i \quad\text{and}\quad \bar X^c_{n_1:n_2} = \frac{1}{n - n_2 + n_1 - 1} \left( \sum_{i=1}^{n_1-1} X_i + \sum_{i=n_2+1}^{n} X_i \right).$$
We sometimes use the notation $\sum_{i \notin \{n_1, \ldots, n_2\}}$ for the second sum when $n$ is implied from the context. The expected values of $\bar X_{n_1:n_2}$ and $\bar X^c_{n_1:n_2}$ are denoted by $\bar\mu_{n_1:n_2}$ and $\bar\mu^c_{n_1:n_2}$, respectively. We use two additional quantities defined w.r.t. $n_1, n_2$:
$$\varepsilon_1(n_1, n_2) \stackrel{\mathrm{def}}{=} \left( \frac{1}{n_2 - n_1 + 1} \right)^{1/2} \quad\text{and}\quad \varepsilon_2(n_1, n_2) \stackrel{\mathrm{def}}{=} \left( \frac{1}{n_2 - n_1 + 1} + \frac{1}{n - n_2 + n_1 - 1} \right)^{1/2}.$$
We slightly abuse notation and define $V_{n_1:n_2} \stackrel{\mathrm{def}}{=} \sum_{i=n_1+1}^{n_2} |\mu_i - \mu_{i-1}|$ as the total variation of a mean sequence $\mu^n = \{\mu_i\}_{i=1}^n \in [0, 1]^n$ over the interval $\{n_1, \ldots, n_2\}$.
Definition 2.1. (weakly stationary, non-stationary) We say that $\mu^n = \{\mu_i\}_{i=1}^n \in [0, 1]^n$ is $\alpha$-weakly stationary if $V_{1:n} \le \alpha$. We say that $\mu^n$ is $\alpha$-non-stationary if it is not $\alpha$-weakly stationary⁴.
Throughout the paper, we mostly use these definitions with $\alpha = 1/\sqrt{n}$. In this case we shorten the notation and simply say that a sequence is weakly stationary (or non-stationary). In the sequel, we somewhat abuse notation and use capital letters ($X_1, \ldots, X_n$) both for random variables and realizations. The specific use should be clear from the context, if not spelled out explicitly. Next, we define a notion of a concentrated sequence that depends on a parameter $T$. In what follows, $T$ is always the time horizon.
Definition 2.2. (concentrated, strongly concentrated) We say that a sequence $X^n = \{X_i\}_{i=1}^n \in [0, c]^n$ is concentrated w.r.t. $\mu^n$ if for any $n_1, n_2 \in [n]$ it holds that:
(1) $\left| \bar X_{n_1:n_2} - \bar\mu_{n_1:n_2} \right| \le \left( 2.5\, c^2 \log T \right)^{1/2} \varepsilon_1(n_1, n_2)$;
(2) $\left| \bar X_{n_1:n_2} - \bar X^c_{n_1:n_2} - \bar\mu_{n_1:n_2} + \bar\mu^c_{n_1:n_2} \right| \le \left( 2.5\, c^2 \log T \right)^{1/2} \varepsilon_2(n_1, n_2)$.
We further say that $X^n$ is strongly concentrated w.r.t. $\mu^n$ if any successive sub-sequence $\{X_i\}_{i=n_1}^{n_2} \subseteq X^n$ is concentrated w.r.t. $\{\mu_i\}_{i=n_1}^{n_2}$. Whenever the mean sequence is inferred from the context, we simply say that $X^n$ is concentrated (or strongly concentrated).
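Condition (1) of Definition 2.2 can be checked mechanically over all windows. The sketch below is our own illustration of that check (condition (2), involving the complement average, is analogous and omitted); the absolute value and the threshold $(2.5\,c^2 \log T)^{1/2}\,\varepsilon_1(n_1, n_2)$ follow our reading of the definition.

```python
import math

def is_concentrated(X, mu, T, c=1.0):
    """Check condition (1) of Definition 2.2 for every window [n1, n2].

    A sketch only: condition (2), involving the complement average,
    is analogous and omitted for brevity."""
    n = len(X)
    thresh = math.sqrt(2.5 * c * c * math.log(T))
    for n1 in range(n):
        for n2 in range(n1, n):
            w = n2 - n1 + 1
            x_bar = sum(X[n1:n2 + 1]) / w
            mu_bar = sum(mu[n1:n2 + 1]) / w
            eps1 = w ** -0.5            # epsilon_1(n1, n2)
            if abs(x_bar - mu_bar) > thresh * eps1:
                return False
    return True

# A noiseless constant-mean sequence is trivially concentrated.
print(is_concentrated([0.5] * 20, [0.5] * 20, T=100))
```

A realization that sits far from its claimed mean over a long enough window (so that $\varepsilon_1$ is small) fails the check, which is exactly the mechanism Claim 2.3 below relies on.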
The parameters in the above definition are set so that standard concentration bounds lead to the statement that any sequence of independent random variables is strongly concentrated with high probability. The formal statement is given below and is proven in Appendix B.
Claim 2.3. Let $X^T = \{X_i\}_{i=1}^T \in [0, c]^T$ be a sequence of independent random variables, such that $T \ge 2$ and $c > 0$. Then $X^T$ is strongly concentrated with probability at least $1 - \frac{1}{T}$.
2.1 Statistical Tests for Identifying Non-Stationarity
TEST 1 (the offline test). The goal of the offline test is to determine whether a sequence of realizations $X^n$ is likely to be generated from a mean sequence $\mu^n$ that is close (in a sense) to some given value $\mu_0$. This will later be used to determine whether a series of pulls of the same arm (inside a single bin) in the exploitation phase exhibits the same behavior as observed in the exploration phase. We would like to have a two-sided guarantee. If the means did not significantly shift, the algorithm must state that the sequence is weakly stationary. On the other hand, if the algorithm states that the sequence is weakly stationary, we require the absolute deviation of $\mu^n$ to be bounded in terms of its total variation. We provide an analysis of TEST 1 in Appendix B.
Input: a sequence $X^n = \{X_i\}_{i=1}^n \in [0, c]^n$, and a constant $\mu_0 \in [0, 1]$.
The test: for any two indices $n_1, n_2 \in [n]$ such that $n_1 < n_2$, check whether
$$\left| \bar X_{n_1:n_2} - \mu_0 \right| \ge \left( \sqrt{2.5}\, c + 2 \right) \log^{1/2}(T)\, \varepsilon_1(n_1, n_2).$$
Output: non-stationary if such $n_1, n_2$ were found; weakly stationary otherwise.
TEST 1: (the offline test) The test aims to identify variation during the exploitation phase.
Input: a sequence $X^Q = \{X_i\}_{i=1}^Q \in [0, c]^Q$, revealed gradually (one $X_i$ after the other).
The test: for $n = 2, 3, \ldots, Q$:
(1) observe $X_n$ and set $X^n = \{X_i\}_{i=1}^n$;
(2) for any two indices $n_1, n_2 \in [n]$ such that $n_1 < n_2$, check whether
$$\left| \bar X_{n_1:n_2} - \bar X^c_{n_1:n_2} \right| \ge \left( \sqrt{2.5}\, c + 1 \right) \log^{1/2}(T)\, \varepsilon_2(n_1, n_2),$$
and terminate the loop if such $n_1, n_2$ were found.
Output: non-stationary if the loop was terminated before $n = Q$; weakly stationary otherwise.
TEST 2: (the online test) The test aims to identify variation during the exploration phase.
TEST 2 (the online test). The online test receives a sequence $X^Q$ in an online manner (one variable after the other), and has to stop whenever non-stationarity is exhibited (or the sequence ends). Here, the value of $Q$ is unknown to us beforehand, and might depend on the values of the sequence elements $X_i$. The rationale is the following: in the exploration phase of the main algorithm we sample the arms uniformly until discovering a significant gap between their average losses. While doing so, we would like to make sure that the regret is not large due to environment changes within the exploration process. We require a similar two-sided guarantee as in the previous test, with an additional requirement informally ensuring that if we exit the block in the exploration phase, the bound on the absolute deviation still applies. We provide the formal analysis in Appendix B.
2.2 Algorithm and Analysis
Having this set of testing tools, we proceed to give an informal description of our algorithm. Basically, the algorithm divides the time horizon into blocks according to the variation it identifies. The blocks are denoted by $\{B_j\}_{j=1}^N$, where $N$ is the total number of blocks generated by the algorithm. The rounds within block $B_j$ are split into an exploration and an exploitation phase, henceforth denoted $E_{j,1}$ and $E_{j,2}$ respectively. Each exploitation phase is further divided into bins, where the size of the bins within a block is determined in the exploration phase and does not change throughout the block. The bins within block $B_j$ are denoted by $\{A_{j,a}\}_{a=1}^{N_j}$, where $N_j$ is the total number of bins in block $B_j$. Note that both $N$ and the $N_j$ are random variables.
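Before continuing, note that a direct (unoptimized) implementation of TEST 1 above is simply a scan over all segments. The sketch below is our own reconstruction using the paper's threshold constants, with prefix sums so the scan is quadratic rather than cubic in $n$.

```python
import math

def offline_test(X, mu0, T, c=1.0):
    """TEST 1 sketch: report 'non-stationary' if some segment average
    strays too far from mu0, using the threshold
    (sqrt(2.5)*c + 2) * log^{1/2}(T) * eps1(n1, n2),
    where eps1(n1, n2) = (n2 - n1 + 1)^{-1/2}."""
    n = len(X)
    coef = (math.sqrt(2.5) * c + 2) * math.sqrt(math.log(T))
    prefix = [0.0]
    for x in X:
        prefix.append(prefix[-1] + x)          # prefix sums for O(1) averages
    for n1 in range(n):
        for n2 in range(n1 + 1, n):
            w = n2 - n1 + 1
            seg_avg = (prefix[n2 + 1] - prefix[n1]) / w
            if abs(seg_avg - mu0) >= coef / math.sqrt(w):
                return "non-stationary"
    return "weakly stationary"

print(offline_test([0.5] * 100, mu0=0.5, T=1000))   # no drift: passes
print(offline_test([1.0] * 400, mu0=0.5, T=1000))   # long drift: flagged
```

Note how the threshold shrinks like $w^{-1/2}$: short segments are allowed large fluctuations (they could be noise), while a sustained deviation over a long segment is flagged.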
We use $t(j, \tau)$ to denote the $\tau$-th round in the exploration phase of block $B_j$, and $t(j, a, \tau)$ to denote the $\tau$-th round of the $a$-th bin in the exploitation phase of block $B_j$. As before, notice that $t(j, \tau)$ might vary from one run of the algorithm to another, yet is uniquely defined within a single run of the algorithm (and the same holds for $t(j, a, \tau)$). Our algorithm is formally given in Algorithm 1, and its working scheme is visually presented in Figure 2. We discuss the extension of the algorithm to $k$ arms in Appendix D.
Theorem 2.4. Set $\theta = \frac{1}{2}$ and $\lambda = \frac{\sqrt{37} - 5}{2}$. Then, with probability at least $1 - \frac{10}{T}$, the regret of Algorithm 1 is
$$R_T = \sum_{t=1}^T \mu_t(i_t) - \sum_{t=1}^T \mu_t(i^*_t) \le O\left( \log(T)\, T^{0.82}\, V_T^{0.18} + \log(T)\, T^{0.771} \right).$$
Proof sketch. Notice that the feedback we receive throughout the game is strongly concentrated with high probability, and thus it suffices to prove the theorem for this case. We analyze separately (a) blocks in which the algorithm did not reach the exploitation phase, and (b) blocks in which it did.
⁴We use stationarity-related terms to classify mean sequences. Our definition might not be consistent with stationarity-related definitions in the statistical literature, which are usually used to classify sequences of random variables based on higher moments or CDFs.
Figure 2: The time horizon is divided into blocks, where each block is split into an exploration phase and an exploitation phase. The exploitation phase is further divided into bins.
Input: parameters $\lambda$ and $\theta$.
Algorithm: in each block $j = 1, 2, \ldots$
(Exploration phase) In each round $\tau = 1, 2, \ldots$:
(1) Select action $i_{t(j,\tau)} \sim \mathrm{Uni}\{1, 2\}$ and observe the loss $\ell_{t(j,\tau)}(i_{t(j,\tau)})$.
(2) Set $X_{t(j,\tau)}(i) = 2\,\ell_{t(j,\tau)}(i_{t(j,\tau)})$ if $i = i_{t(j,\tau)}$, and $X_{t(j,\tau)}(i) = 0$ otherwise, and feed $X_{t(j,\tau)}(i)$ (separately, for $i \in \{1, 2\}$) as input to TEST 2.
(3) If the test identifies non-stationarity (on either one of the actions), exit the block. Otherwise, if
$$\Delta \stackrel{\mathrm{def}}{=} \left| \bar X_{t(j,1):t(j,\tau)}(1) - \bar X_{t(j,1):t(j,\tau)}(2) \right| \ge \left( 16\sqrt{10} + 2 \right) \sqrt{2 \log(T)}\; \tau^{-\lambda/2},$$
move to the next phase with $\hat\mu_0(i) = \bar X_{t(j,1):t(j,\tau)}(i)$ for $i \in \{1, 2\}$.
(Exploitation phase) Play in bins, each of size $n = 4/\Delta^2$. During each bin $a = 1, 2, \ldots$:
(1) Select actions $i_{t(j,a,1)}, \ldots, i_{t(j,a,n)} = \arg\min_i \{\hat\mu_0(i)\}$ with probability $1 - a^{-\theta}$, and $\sim \mathrm{Uni}\{1, 2\}$ otherwise, and observe the losses $\{\ell_{t(j,a,\tau)}(i_{t(j,a,\tau)})\}_{\tau=1}^n$.
(2) Run TEST 1 on $\{\ell_{t(j,a,\tau)}(i_{t(j,a,\tau)})\}_{\tau=1}^n$, and exit the block if it returns non-stationary.
Algorithm 1: An algorithm for the non-stationary multi-armed bandit problem.
Analysis of part (a). From TEST 2, we know that as long as the test does not identify non-stationarity in the exploration phase $E_1$, we can "trust" the feedback we observe as if we were in the stationary setting (i.e., standard stochastic MAB), up to an additive factor of $|E_1|^{2/3} V_{E_1}^{1/3}$ in the regret. This argument holds even if TEST 2 identified non-stationarity, by simply excluding the last round. Now, since our stopping condition for the exploration phase is roughly $\Delta \ge \tau^{-\lambda/2}$, we suffer an additional regret of $|E_1|^{1-\lambda/2}$ throughout the exploration phase. This gives an overall bound of $|E_1|^{2/3} V_{E_1}^{1/3} + |E_1|^{1-\lambda/2}$ on the regret (formally proven in Lemma C.4). The terms of the form $|E_1|^{1-\lambda/2}$ are problematic, as summing them may lead to an expression linear in $T$. To avoid this we use a lower bound on the variation $V_{E_1}$, guaranteed by the fact that TEST 2 caused the block to end during the exploration phase. This lower bound allows us to express $|E_1|^{1-\lambda/2}$ as $|E_1|^{1-\lambda/3} V_{E_1}^{\lambda/3}$, leading to a meaningful regret bound on the entire time horizon (as detailed in Lemma C.5).
Analysis of part (b). The regret suffered in the exploration phase is bounded by the same arguments as before, where the bound on $|E_1|^{1-\lambda/2}$ is replaced by $|E_1|^{1-\lambda/2} \le |B|^{1-\lambda/3} V_B^{\lambda/3}$, with $B$ being the set of block rounds.
This bound is achieved via a lower bound on $V_B$, the variation in the block, guaranteed by the algorithm's behavior along with the fact that the block ended in the exploitation phase. For the regret in the exploitation phase, we first utilize the guarantees of TEST 1 to show that, at the expense of an additive cost of $|E_2|^{2/3} V_{E_2}^{1/3}$ to the regret, we may assume that there is no change to the environment inside bins. From here on the analysis becomes very similar to that of the deterministic setting, as the noise corresponding to a bin is guaranteed to be lower than the gap $\Delta$ between the arms, and thus has no effect on the algorithm's performance. The final regret bound for blocks of type (b) comes from adding up the above-mentioned bounds and is formally given in Lemma C.10.
References
[1] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
[2] Mohammad Gheshlaghi Azar, Alessandro Lazaric, and Emma Brunskill. Online stochastic optimization under correlated bandit feedback. In ICML, volume 32 of JMLR Workshop and Conference Proceedings, pages 1557–1565, 2014.
[3] Dimitris Bertsimas and José Niño-Mora. Restless bandits, linear programming relaxations, and a primal-dual index heuristic. Operations Research, 48(1):80–90, 2000.
[4] Olivier Bousquet and Manfred K. Warmuth. Tracking a small set of experts by mixing past posteriors. Journal of Machine Learning Research, 3:363–396, 2002.
[5] Amit Daniely, Alon Gonen, and Shai Shalev-Shwartz. Strongly adaptive online learning. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 1405–1411, 2015.
[6] Abraham Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. In SODA, pages 385–394. SIAM, 2005.
[7] Sudipto Guha and Kamesh Munagala. Approximation algorithms for partial-information based stochastic control with markovian rewards.
In FOCS, pages 483–493. IEEE Computer Society, 2007.
[8] Yonatan Gur, Assaf J. Zeevi, and Omar Besbes. Stochastic multi-armed-bandit problem with non-stationary rewards. In NIPS, pages 199–207, 2014.
[9] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: regret bounded by variation in costs. Machine Learning, 80(2-3):165–188, 2010.
[10] Elad Hazan and Satyen Kale. Better algorithms for benign bandits. Journal of Machine Learning Research, 12:1287–1311, 2011.
[11] Elad Hazan and C. Seshadhri. Efficient learning algorithms for changing environments. In ICML, volume 382 of ACM International Conference Proceeding Series, pages 393–400. ACM, 2009.
[12] Wassily Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301):13–30, 1963.
[13] Ali Jadbabaie, Alexander Rakhlin, Shahin Shahrampour, and Karthik Sridharan. Online optimization: competing with dynamic comparators. In AISTATS, volume 38 of JMLR Workshop and Conference Proceedings, 2015.
[14] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. Inf. Comput., 108(2):212–261, 1994.
[15] Ronald Ortner, Daniil Ryabko, Peter Auer, and Rémi Munos. Regret bounds for restless Markov bandits. Theor. Comput. Sci., 558:62–76, 2014.
[16] Peter Whittle. Restless bandits: activity allocation in a changing world. Journal of Applied Probability, pages 287–298, 1988.
[17] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936. AAAI Press, 2003.
NESTT: A Nonconvex Primal-Dual Splitting Method for Distributed and Stochastic Optimization
Davood Hajinezhad, Mingyi Hong∗, Tuo Zhao†, Zhaoran Wang‡
Abstract
We study a stochastic and distributed algorithm for nonconvex problems whose objective consists of a sum of $N$ nonconvex $L_i/N$-smooth functions, plus a nonsmooth regularizer. The proposed NonconvEx primal-dual SpliTTing (NESTT) algorithm splits the problem into $N$ subproblems, and utilizes an augmented-Lagrangian-based primal-dual scheme to solve it in a distributed and stochastic manner. With a special non-uniform sampling, a version of NESTT achieves an $\epsilon$-stationary solution using $O\big( \big(\sum_{i=1}^N \sqrt{L_i/N}\big)^2 / \epsilon \big)$ gradient evaluations, which can be up to $O(N)$ times better than (proximal) gradient descent methods. It also achieves a Q-linear convergence rate for nonconvex $\ell_1$-penalized quadratic problems with polyhedral constraints. Further, we reveal a fundamental connection between primal-dual based methods and a few primal-only methods such as IAG/SAG/SAGA.
1 Introduction
Consider the following nonconvex and nonsmooth constrained optimization problem
$$\min_{z \in Z} f(z) := \frac{1}{N} \sum_{i=1}^N g_i(z) + g_0(z) + p(z), \qquad (1.1)$$
where $Z \subseteq \mathbb{R}^d$; for each $i \in \{0, \ldots, N\}$, $g_i : \mathbb{R}^d \to \mathbb{R}$ is a smooth, possibly nonconvex function with an $L_i$-Lipschitz continuous gradient; and $p(z) : \mathbb{R}^d \to \mathbb{R}$ is a lower semi-continuous, convex, but possibly nonsmooth function. Define $g(z) := \frac{1}{N} \sum_{i=1}^N g_i(z)$ for notational simplicity. Problem (1.1) is quite general. It arises frequently in applications such as machine learning and signal processing; see a recent survey [7]. In particular, each smooth function $g_i$, $i = 1, \ldots, N$, can represent: 1) a mini-batch of loss functions modeling data fidelity, such as the $\ell_2$ loss, the logistic loss, etc.; 2) nonconvex activation functions for neural networks, such as the logit or tanh functions; 3) nonconvex utility functions used in signal processing and resource allocation; see [4].
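As a toy, hypothetical instance of the problem class (1.1): the data, the squared-sigmoid losses $g_i$ (smooth but nonconvex), and the $\ell_1$ regularizer $p$ in the sketch below are our own choices for illustration only, with $g_0 \equiv 0$ and $Z = \mathbb{R}$.

```python
import math

# A toy instance of (1.1): N nonconvex smooth losses (squared sigmoid fit),
# no extra smooth term g0, and an l1 regularizer p. The data are made up.
data = [(0.5, 1.0), (-1.2, 0.0), (2.0, 1.0), (0.3, 0.0)]   # (feature, label)

def sigmoid(u):
    return 1.0 / (1.0 + math.exp(-u))

def g_i(z, a, b):
    # Smooth but nonconvex in z: an L_i-Lipschitz-gradient component.
    return (sigmoid(a * z) - b) ** 2

def f(z, lam=0.1):
    # f(z) = (1/N) sum_i g_i(z) + p(z), with p(z) = lam * |z|.
    N = len(data)
    return sum(g_i(z, a, b) for a, b in data) / N + lam * abs(z)

print(round(f(0.0), 4))   # -> 0.25  (each g_i equals 0.25 at z = 0)
```

Each of the pieces discussed next (the nonsmooth $p$, the indicator $\iota_Z$, and the smooth $g_0$) would slot into `f` in the obvious way.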
The smooth function $g_0$ can represent smooth nonconvex regularizers such as the non-quadratic penalties [2], or the smooth part of the SCAD or MCP regularizers (which is a concave function) [26]. The convex function $p$ can take the following forms: 1) nonsmooth convex regularizers such as the $\ell_1$ and $\ell_2$ functions; 2) an indicator function for a convex and closed feasible set $Z$, denoted $\iota_Z(\cdot)$; 3) convex functions without a globally Lipschitz continuous gradient, such as $p(z) = z^4$ or $p(z) = 1/z + \iota_{z \ge 0}(z)$. In this work we solve (1.1) in a stochastic and distributed manner. We consider the setting in which $N$ distributed agents each have knowledge of one smooth function $g_i$, and are connected to a cluster center which handles $g_0$ and $p$. At any given time, a randomly selected agent is activated and performs computation to optimize its local objective. Such a distributed computation model has been popular in large-scale machine learning and signal processing [6]. The model is also closely related to the (centralized) stochastic finite-sum optimization problem [1, 9, 14, 15, 21, 22], in which each time the iterate is updated based on the gradient information of a random component function. One of the key differences between these two problem types is that in the distributed setting there can be disagreement between local copies of the optimization variable $z$, while in the centralized setting only one copy of $z$ is maintained.
∗Department of Industrial & Manufacturing Systems Engineering and Department of Electrical & Computer Engineering, Iowa State University, Ames, IA, {dhaji,mingyi}@iastate.edu
†School of Industrial and Systems Engineering, Georgia Institute of Technology, tourzhao@gatech.edu
‡Department of Operations Research, Princeton University, zhaoran@princeton.edu
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Our Contributions. We propose a class of NonconvEx primal-dual SpliTTing (NESTT) algorithms for problem (1.1).
We split $z \in \mathbb{R}^d$ into local copies $x_i \in \mathbb{R}^d$, while enforcing the equality constraints $x_i = z$ for all $i$. That is, we consider the following reformulation of (1.1):
$$\min_{x, z} \;\ell(x, z) := \frac{1}{N} \sum_{i=1}^N g_i(x_i) + g_0(z) + h(z), \quad \text{s.t. } x_i = z, \; i = 1, \ldots, N, \qquad (1.2)$$
where $h(z) := \iota_Z(z) + p(z)$ and $x := [x_1; \cdots; x_N]$. Our algorithm uses the Lagrangian relaxation of the equality constraints, and at each iteration a (possibly non-uniformly) randomly selected primal variable is optimized, followed by an approximate dual ascent step. Note that such a splitting scheme has been popular in the convex setting [6], but not so when the problem becomes nonconvex. NESTT is one of the first stochastic algorithms for distributed nonconvex nonsmooth optimization with provable and nontrivial convergence rates. Our main contributions are given below. First, in terms of certain primal and dual optimality gaps, NESTT converges sublinearly to a point belonging to the stationary solution set of (1.2). Second, NESTT converges Q-linearly for certain nonconvex $\ell_1$-penalized quadratic problems. To the best of our knowledge, this is the first time that linear convergence has been established for stochastic and distributed optimization of problems of this type. Third, we show that a gradient-based NESTT with non-uniform sampling achieves an $\epsilon$-stationary solution of (1.1) using $O\big( \big(\sum_{i=1}^N \sqrt{L_i/N}\big)^2 / \epsilon \big)$ gradient evaluations. Compared with classical gradient descent, which in the worst case requires $O\big( \sum_{i=1}^N L_i / \epsilon \big)$ gradient evaluations to achieve $\epsilon$-stationarity, our rate can be up to $O(N)$ times better when the $L_i$'s are not equal. Our work also reveals a fundamental connection between primal-dual based algorithms and primal-only average-gradient based algorithms such as SAGA/SAG/IAG [5, 9, 22]. With the key observation that the dual variables in NESTT serve as the "memory" of past gradients, one can specialize NESTT to SAGA/SAG/IAG.
Therefore, NESTT naturally generalizes these algorithms to the nonconvex nonsmooth setting. It is our hope that by bridging primal-dual splitting algorithms and primal-only algorithms (in both the convex and nonconvex settings), there can be significant further research developments benefiting both algorithm classes.
Related Work. Many stochastic algorithms have been designed for (1.2) when it is convex. In these algorithms the component functions $g_i$ are randomly sampled and optimized. Popular algorithms include SAG/SAGA [9, 22], SDCA [23], SVRG [14], RPDG [15], and so on. When the problem becomes nonconvex, the well-known incremental-based algorithms can be used [3, 24], but these methods generally lack convergence rate guarantees. An SGD-based method has been studied in [10], with an $O(1/\epsilon^2)$ convergence rate. Recent works [1] and [21] develop algorithms based on SVRG and SAGA for a special case of (1.1) where the entire problem is smooth and unconstrained. To the best of our knowledge there have been no stochastic algorithms with provable and nontrivial convergence rate guarantees for solving problem (1.1). On the other hand, a distributed stochastic algorithm for solving problem (1.1) in the nonconvex setting was proposed in [13], in which each time a randomly picked subset of agents update their local variables. However, there has been no convergence rate analysis for such a distributed stochastic scheme. There have been some recent distributed algorithms designed for (1.1) [17], but again without global convergence rate guarantees.
Preliminaries. The augmented Lagrangian function for problem (1.1) is given by
$$L(x, z; \lambda) = \sum_{i=1}^N \left( \frac{1}{N} g_i(x_i) + \langle \lambda_i, x_i - z \rangle + \frac{\eta_i}{2} \|x_i - z\|^2 \right) + g_0(z) + h(z), \qquad (1.3)$$
where $\lambda := \{\lambda_i\}_{i=1}^N$ is the set of dual variables, and $\eta := \{\eta_i > 0\}_{i=1}^N$ are penalty parameters. We make the following assumptions about problem (1.1) and the function (1.3).
A-(a) The function $f(z)$ is bounded from below over $Z \cap \mathrm{int}(\mathrm{dom}\, f)$: $\underline{f} := \min_{z \in Z} f(z) > -\infty$; $p(z)$ is a convex lower semi-continuous function; $Z$ is a closed convex set.
A-(b) The $g_i$'s and $g$ have Lipschitz continuous gradients, i.e.,
$$\|\nabla g(y) - \nabla g(z)\| \le L \|y - z\| \quad\text{and}\quad \|\nabla g_i(y) - \nabla g_i(z)\| \le L_i \|y - z\|, \quad \forall\, y, z.$$
Algorithm 1: NESTT-G Algorithm
1: for $r = 1$ to $R$ do
2: Pick $i_r \in \{1, 2, \ldots, N\}$ with probability $p_{i_r}$ and update $(x, \lambda)$:
$$x^{r+1}_{i_r} = \arg\min_{x_{i_r}} V_{i_r}(x_{i_r}, z^r, \lambda^r_{i_r}); \qquad (2.4)$$
$$\lambda^{r+1}_{i_r} = \lambda^r_{i_r} + \alpha_{i_r} \eta_{i_r} \big( x^{r+1}_{i_r} - z^r \big); \qquad (2.5)$$
$$\lambda^{r+1}_j = \lambda^r_j, \quad x^{r+1}_j = z^r, \quad \forall\, j \ne i_r. \qquad (2.6)$$
Update $z$:
$$z^{r+1} = \arg\min_{z \in Z} L\big( \{x^{r+1}_i\}, z; \lambda^r \big). \qquad (2.7)$$
3: end for
4: Output: $(z^m, x^m, \lambda^m)$, where $m$ is picked uniformly at random from $\{1, 2, \ldots, R\}$.
Clearly $L \le \frac{1}{N} \sum_{i=1}^N L_i$, and the equality can be achieved in the worst case. For simplicity of analysis we further assume that $L_0 \le \frac{1}{N} \sum_{i=1}^N L_i$.
A-(c) Each $\eta_i$ in (1.3) satisfies $\eta_i > L_i/N$; if $g_0$ is nonconvex, then $\sum_{i=1}^N \eta_i > 3 L_0$.
Assumption A-(c) implies that $L(x, z; \lambda)$ is strongly convex w.r.t. each $x_i$ and $z$, with moduli $\gamma_i := \eta_i - L_i/N$ and $\gamma_z := \sum_{i=1}^N \eta_i - L_0$, respectively [27, Theorem 2.1]. We then define the proximal gradient (pGRAD) for (1.1), which will serve as a measure of stationarity. It can be checked that the pGRAD vanishes at the set of stationary solutions of (1.1) [20].
Definition 1.1. The proximal gradient of problem (1.1) is given by (for any $\gamma > 0$)
$$\tilde\nabla f_\gamma(z) := \gamma \left( z - \mathrm{prox}^\gamma_{p + \iota_Z} \big[ z - \tfrac{1}{\gamma} \nabla \big( g(z) + g_0(z) \big) \big] \right), \quad\text{with}\quad \mathrm{prox}^\gamma_{p + \iota_Z}[u] := \operatorname*{argmin}_{v \in Z}\; p(v) + \frac{\gamma}{2} \|u - v\|^2.$$
2 The NESTT-G Algorithm
Algorithm Description. We present a primal-dual splitting scheme for the reformulated problem (1.2). The algorithm is referred to as NESTT with Gradient step (NESTT-G), since each agent only requires the gradient of its component function. To proceed, let us define the following function (for some constants $\{\alpha_i > 0\}_{i=1}^N$):
$$V_i(x_i, z; \lambda_i) = \frac{1}{N} g_i(z) + \frac{1}{N} \langle \nabla g_i(z), x_i - z \rangle + \langle \lambda_i, x_i - z \rangle + \frac{\alpha_i \eta_i}{2} \|x_i - z\|^2.$$
Note that $V_i(\cdot)$ is related to $\mathcal{L}(\cdot)$ in the following way: it is a quadratic approximation (taken at the point $z$) of $\mathcal{L}(x, z; \lambda)$ w.r.t. $x_i$. The parameters $\alpha := \{\alpha_i\}_{i=1}^N$ give some freedom to the algorithm design, and they are critical in improving convergence rates as well as in establishing connections between NESTT-G and a few primal-only stochastic optimization schemes. The algorithm proceeds as follows. Before each iteration begins, the cluster center broadcasts $z$ to every agent. At iteration $r+1$ a randomly selected agent $i_r \in \{1, 2, \cdots, N\}$ is picked, who minimizes $V_{i_r}(\cdot)$ w.r.t. its local variable $x_{i_r}$, followed by a dual ascent step for $\lambda_{i_r}$. The rest of the agents update their local variables by simply setting them to $z$. The cluster center then minimizes $\mathcal{L}(x, z; \lambda)$ with respect to $z$. See Algorithm 1 for details. We remark that NESTT-G is related to the popular ADMM method for convex optimization [6]. However, our particular update schedule (randomly picking $(x_i, \lambda_i)$ plus deterministically updating $z$), combined with the special $x$-step (minimizing an approximation of $\mathcal{L}(\cdot)$ evaluated at a different block variable $z$), was not known before. These features are critical in the rate analysis that follows.

Convergence Analysis. To proceed, let us define $r(j)$ as the last iteration in which the $j$th block was picked before iteration $r+1$, i.e., $r(j) := \max\{t \mid t < r+1,\; j = i(t)\}$. Define $y^r_j := z^{r(j)}$ if $j \ne i_r$, and $y^r_{i_r} := z^r$. Define the filtration $\mathcal{F}^r$ as the $\sigma$-field generated by $\{i(t)\}_{t=1}^{r-1}$. A few important observations are in order. Combining the $(x, z)$ updates (2.4)–(2.7), we have
$$x^{r+1}_q = z^r - \frac{1}{\alpha_q \eta_q}\Big(\lambda^r_q + \frac{1}{N}\nabla g_q(z^r)\Big), \quad \frac{1}{N}\nabla g_q(z^r) + \lambda^r_q + \alpha_q \eta_q (x^{r+1}_q - z^r) = 0, \quad \text{with } q = i_r; \qquad (2.8a)$$
$$\lambda^{r+1}_{i_r} = -\frac{1}{N}\nabla g_{i_r}(z^r), \quad \lambda^{r+1}_j = -\frac{1}{N}\nabla g_j(z^{r(j)}),\; \forall\, j \ne i_r \;\;\Rightarrow\;\; \lambda^{r+1}_i = -\frac{1}{N}\nabla g_i(y^r_i), \;\forall\, i; \qquad (2.8b)$$
$$x^{r+1}_j \overset{(2.6)}{=} z^r \overset{(2.8b)}{=} z^r - \frac{1}{\alpha_j \eta_j}\Big(\lambda^r_j + \frac{1}{N}\nabla g_j(z^{r(j)})\Big), \quad \forall\, j \ne i_r. \qquad (2.8c)$$
The key here is that the dual variables serve as the "memory" for the past gradients of the $g_i$'s.
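For intuition, the updates (2.4)–(2.7) can be sketched in a few lines of NumPy for the smooth unconstrained case ($h \equiv 0$, $g_0 \equiv 0$, $Z = \mathbb{R}^d$), where the $z$-step (2.7) has the closed form $z^{r+1} = (\sum_i \eta_i x_i^{r+1} + \sum_i \lambda_i^r)/\sum_i \eta_i$. The convex quadratic test problem and all names below are ours, not from the paper; the parameter choices anticipate Theorem 2.1.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 3

# Hypothetical strongly convex quadratic components g_i(x) = 0.5 x'A_i x + b_i'x
# (the paper also covers nonconvex g_i; convex ones make the answer checkable).
A = [np.diag(rng.uniform(1.0, 3.0, d)) for _ in range(N)]
b = [rng.normal(size=d) for _ in range(N)]
grad_g = lambda i, x: A[i] @ x + b[i]

Lip = np.array([np.linalg.norm(Ai, 2) for Ai in A])  # Lipschitz constants L_i
s = np.sqrt(Lip / N)
p = s / s.sum()                # sampling probabilities p_i ~ sqrt(L_i/N)
eta = 3 * s.sum() * s          # penalty parameters eta_i
alpha = p.copy()               # alpha_i = p_i

x = np.zeros((N, d)); lam = np.zeros((N, d)); z = np.zeros(d)
for r in range(3000):
    i = rng.choice(N, p=p)
    lam_old = lam.copy()       # the z-step (2.7) uses the old duals lambda^r
    x[i] = z - (lam[i] + grad_g(i, z) / N) / (alpha[i] * eta[i])  # (2.4), cf. (2.8a)
    lam[i] = lam[i] + alpha[i] * eta[i] * (x[i] - z)              # (2.5)
    x[[j for j in range(N) if j != i]] = z                        # (2.6)
    z = (eta[:, None] * x + lam_old).sum(axis=0) / eta.sum()      # (2.7), closed form

grad_norm = np.linalg.norm(sum(grad_g(i, z) for i in range(N)) / N)
```

With $N = 1$ the loop collapses to plain gradient descent $z^{r+1} = z^r - \nabla g(z^r)/(3L)$, which is a useful sanity check on the update algebra.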
To proceed, we first construct a potential function using an upper bound of $\mathcal{L}(x, z; \lambda)$. Note that
$$\frac{1}{N} g_j(x^{r+1}_j) + \langle \lambda^r_j, x^{r+1}_j - z^r \rangle + \frac{\eta_j}{2}\|x^{r+1}_j - z^r\|^2 = \frac{1}{N} g_j(z^r), \quad \forall\, j \ne i_r; \qquad (2.9)$$
$$\frac{1}{N} g_{i_r}(x^{r+1}_{i_r}) + \langle \lambda^r_{i_r}, x^{r+1}_{i_r} - z^r \rangle + \frac{\eta_{i_r}}{2}\|x^{r+1}_{i_r} - z^r\|^2 \overset{(i)}{\le} \frac{1}{N} g_{i_r}(z^r) + \frac{\eta_{i_r} + L_{i_r}/N}{2}\|x^{r+1}_{i_r} - z^r\|^2 \overset{(ii)}{=} \frac{1}{N} g_{i_r}(z^r) + \frac{\eta_{i_r} + L_{i_r}/N}{2(\alpha_{i_r}\eta_{i_r})^2}\big\|\tfrac{1}{N}\big(\nabla g_{i_r}(y^{r-1}_{i_r}) - \nabla g_{i_r}(z^r)\big)\big\|^2, \qquad (2.10)$$
where (i) uses (2.8b) and applies the descent lemma on the function $\frac{1}{N} g_i(\cdot)$, and (ii) uses (2.5) and (2.8b). Since each $i$ is picked with probability $p_i$, we have
$$\mathbb{E}_{i_r}\big[\mathcal{L}(x^{r+1}, z^r; \lambda^r) \mid \mathcal{F}^r\big] \le \sum_{i=1}^N \frac{1}{N} g_i(z^r) + \sum_{i=1}^N \frac{p_i(\eta_i + L_i/N)}{2(\alpha_i \eta_i)^2}\big\|\tfrac{1}{N}\big(\nabla g_i(y^{r-1}_i) - \nabla g_i(z^r)\big)\big\|^2 + g_0(z^r) + h(z^r) \le \sum_{i=1}^N \frac{1}{N} g_i(z^r) + \sum_{i=1}^N \frac{3 p_i \eta_i}{(\alpha_i \eta_i)^2}\big\|\tfrac{1}{N}\big(\nabla g_i(y^{r-1}_i) - \nabla g_i(z^r)\big)\big\|^2 + g_0(z^r) + h(z^r) := Q^r,$$
where in the last inequality we have used Assumption A-(c). In the following, we will use $\mathbb{E}_{\mathcal{F}^r}[Q^r]$ as the potential function, and show that it decreases at each iteration.

Lemma 2.1. Suppose Assumption A holds, and pick
$$\alpha_i = p_i = \beta \eta_i, \quad \text{where } \beta := \frac{1}{\sum_{i=1}^N \eta_i}, \quad \text{and} \quad \eta_i \ge \frac{9 L_i}{N p_i}, \quad i = 1, \cdots, N. \qquad (2.11)$$
Then the following descent estimate holds true for NESTT-G:
$$\mathbb{E}\big[Q^r - Q^{r-1} \mid \mathcal{F}^{r-1}\big] \le -\frac{\sum_{i=1}^N \eta_i}{8}\,\mathbb{E}_{z^r}\|z^r - z^{r-1}\|^2 - \sum_{i=1}^N \frac{1}{2\eta_i}\big\|\tfrac{1}{N}\big(\nabla g_i(z^{r-1}) - \nabla g_i(y^{r-2}_i)\big)\big\|^2. \qquad (2.12)$$

Sublinear Convergence. Define the optimality gap as
$$\mathbb{E}[G^r] := \mathbb{E}\Big[\big\|\tilde{\nabla} f_{1/\beta}(z^r)\big\|^2\Big] = \frac{1}{\beta^2}\,\mathbb{E}\Big[\big\|z^r - \mathrm{prox}^{1/\beta}_{h}\big[z^r - \beta \nabla(g(z^r) + g_0(z^r))\big]\big\|^2\Big]. \qquad (2.13)$$
Note that when $h, g_0 \equiv 0$, $\mathbb{E}[G^r]$ reduces to $\mathbb{E}[\|\nabla g(z^r)\|^2]$. We have the following result.

Theorem 2.1. Suppose Assumption A holds, and pick (for $i = 1, \cdots, N$)
$$\alpha_i = p_i = \frac{\sqrt{L_i/N}}{\sum_{i=1}^N \sqrt{L_i/N}}, \quad \eta_i = 3\Big(\sum_{i=1}^N \sqrt{L_i/N}\Big)\sqrt{L_i/N}, \quad \beta = \frac{1}{3\big(\sum_{i=1}^N \sqrt{L_i/N}\big)^2}. \qquad (2.14)$$
Then every limit point generated by NESTT-G is a stationary solution of problem (1.2). Further,
$$1)\;\; \mathbb{E}[G^m] \le \frac{80}{3}\Big(\sum_{i=1}^N \sqrt{L_i/N}\Big)^2 \frac{\mathbb{E}[Q^1 - Q^{R+1}]}{R}; \qquad 2)\;\; \mathbb{E}[G^m] + \mathbb{E}\Big[\sum_{i=1}^N 3\eta_i^2 \big\|x^m_i - z^{m-1}\big\|^2\Big] \le \frac{80}{3}\Big(\sum_{i=1}^N \sqrt{L_i/N}\Big)^2 \frac{\mathbb{E}[Q^1 - Q^{R+1}]}{R}.$$
Note that Part (1) is useful in the centralized finite-sum minimization setting, as it shows the sublinear convergence of NESTT-G measured only by the primal optimality gap evaluated at $z^r$. Meanwhile, Part (2) is useful in the distributed setting, as it also shows that the expected constraint violation, which measures the consensus among the agents, shrinks at the same rate. We also comment that the above result suggests that to achieve an $\epsilon$-stationary solution, NESTT-G requires about $O\big(\big(\sum_{i=1}^N \sqrt{L_i/N}\big)^2/\epsilon\big)$ gradient evaluations (for simplicity we have ignored an additive $N$ factor for evaluating the gradient of the entire function at the initial step of the algorithm).

Algorithm 2: NESTT-E
1: for $r = 1$ to $R$ do
2:   Update $z$ by minimizing the augmented Lagrangian:
     $z^{r+1} = \arg\min_{z} \mathcal{L}(x^r, z; \lambda^r)$.  (3.15)
3:   Randomly pick $i_r \in \{1, 2, \cdots, N\}$ with probability $p_{i_r}$:
     $x^{r+1}_{i_r} = \operatorname*{argmin}_{x_{i_r}} U_{i_r}(x_{i_r}, z^{r+1}; \lambda^r_{i_r})$;  (3.16)
     $\lambda^{r+1}_{i_r} = \lambda^r_{i_r} + \alpha_{i_r}\eta_{i_r}\,(x^{r+1}_{i_r} - z^{r+1})$;  (3.17)
     $x^{r+1}_j = x^r_j, \quad \lambda^{r+1}_j = \lambda^r_j, \quad \forall\, j \ne i_r$.  (3.18)
4: end for
5: Output: $(z^m, x^m, \lambda^m)$, where $m$ is randomly picked from $\{1, 2, \cdots, R\}$.

It is interesting to observe that our choice of $p_i$ is proportional to the square root of the Lipschitz constant of each component function, rather than to $L_i$ itself. Because of this choice of sampling probability, the derived convergence rate has a mild dependency on $N$ and the $L_i$'s. Compared with conventional gradient-based methods, our scaling can be up to $N$ times better. Detailed discussion and comparison will be given in Section 4. Note that similar sublinear convergence rates can be obtained for the case $\alpha_i = 1$ for all $i$ (with different scaling constants). However, due to space limitations, we do not present those results here.

Linear Convergence. In this section we show that NESTT-G is capable of linear convergence for a family of nonconvex quadratic problems, which have important applications, for example in high-dimensional statistical learning [16].
To proceed, we make the following assumptions.

B-(a) Each function $g_i(z)$ is a quadratic of the form $g_i(z) = \tfrac{1}{2} z^\top A_i z + \langle b, z \rangle$, where $A_i$ is a symmetric matrix, not necessarily positive semidefinite;
B-(b) The feasible set $Z$ is a closed compact polyhedral set;
B-(c) The nonsmooth function is $p(z) = \mu \|z\|_1$, for some $\mu \ge 0$.

Our linear convergence result is based upon a certain error bound condition around the set of stationary solutions, which has been shown to hold for smooth quadratic problems [18] and has been extended to problems with an $\ell_1$ penalty [25, Theorem 4]. Due to space limitations, the statement of the condition is given in the supplemental material, along with the proof of the following result.

Theorem 2.2. Suppose that Assumptions A and B are satisfied. Then the sequence $\{\mathbb{E}[Q^{r+1}]\}_{r=1}^{\infty}$ converges Q-linearly⁴ to some $Q^* = f(z^*)$, where $z^*$ is a stationary solution of problem (1.1). That is, there exist a finite $\bar{r} > 0$ and $\rho \in (0, 1)$ such that for all $r \ge \bar{r}$, $\mathbb{E}[Q^{r+1} - Q^*] \le \rho\, \mathbb{E}[Q^r - Q^*]$.

Linear convergence of this type for problems satisfying Assumption B has been shown for (deterministic) proximal gradient based methods [25, Theorems 2, 3]. To the best of our knowledge, this is the first result that shows the same linear convergence for a stochastic and distributed algorithm.

3 The NESTT-E Algorithm

Algorithm Description. In this section, we present a variant of NESTT-G, named NESTT with Exact minimization (NESTT-E). Our motivation is the following. First, in NESTT-G every agent must update its local variable at every iteration [cf. (2.4) or (2.6)]. In practice this may not be possible; for example, at any given time a few agents may be in sleeping mode and hence unable to perform (2.6). Second, in the distributed setting it has been generally observed (e.g., see [8, Section V]) that performing exact minimization (whenever possible) instead of taking gradient steps for the local problems can significantly speed up the algorithm.
The NESTT-E algorithm presented in this section is designed to address these issues. To proceed, let us define a new function as follows:
$$U(x, z; \lambda) := \sum_{i=1}^N U_i(x_i, z; \lambda_i) := \sum_{i=1}^N \left( \frac{1}{N} g_i(x_i) + \langle \lambda_i, x_i - z \rangle + \frac{\alpha_i \eta_i}{2}\|x_i - z\|^2 \right).$$
Note that if $\alpha_i = 1$ for all $i$, then $\mathcal{L}(x, z; \lambda) = U(x, z; \lambda) + p(z) + h(z)$. The algorithm details are presented in Algorithm 2.

⁴A sequence $\{x^r\}$ is said to converge Q-linearly to some $\bar{x}$ if $\limsup_r \|x^{r+1} - \bar{x}\| / \|x^r - \bar{x}\| \le \rho$, where $\rho \in (0, 1)$ is some constant; cf. [25] and references therein.

Convergence Analysis. We now analyze NESTT-E. The proof technique is quite different from that of NESTT-G: it is based on using the expected value of the augmented Lagrangian function as the potential function; see [11, 12, 13]. For ease of description we define the following quantities:
$$w := (x, z, \lambda), \quad \beta := \frac{1}{\sum_{i=1}^N \eta_i}, \quad c_i := \frac{L_i^2}{\alpha_i \eta_i N^2} - \frac{\gamma_i}{2} + \frac{1 - \alpha_i}{\alpha_i}\,\frac{L_i}{N}, \quad \alpha := \{\alpha_i\}_{i=1}^N.$$
To measure the optimality of NESTT-E, define the prox-gradient of $\mathcal{L}(x, z; \lambda)$ as:
$$\tilde{\nabla}\mathcal{L}(w) = \Big( z - \mathrm{prox}_h\big[z - \nabla_z(\mathcal{L}(w) - h(z))\big];\; \nabla_{x_1}\mathcal{L}(w);\; \cdots;\; \nabla_{x_N}\mathcal{L}(w) \Big) \in \mathbb{R}^{(N+1)d}. \qquad (3.19)$$
We define the optimality gap by adding to $\|\tilde{\nabla}\mathcal{L}(w)\|^2$ the size of the constraint violation [13]:
$$H(w^r) := \|\tilde{\nabla}\mathcal{L}(w^r)\|^2 + \sum_{i=1}^N \frac{L_i^2}{N^2}\,\|x^r_i - z^r\|^2.$$
It can be verified that $H(w^r) \to 0$ implies that $w^r$ reaches a stationary solution of problem (1.2). We have the following theorem regarding the convergence properties of NESTT-E.

Theorem 3.1. Suppose Assumption A holds, and that $(\eta_i, \alpha_i)$ are chosen such that $c_i < 0$. Then for some constant $\underline{f}$, we have
$$\mathbb{E}[\mathcal{L}(w^r)] \ge \mathbb{E}[\mathcal{L}(w^{r+1})] \ge \underline{f} > -\infty, \quad \forall\, r \ge 0.$$
Further, almost surely every limit point of $\{w^r\}$ is a stationary solution of problem (1.2). Finally, for some function of $\alpha$ denoted as $C(\alpha) = \sigma_1(\alpha)/\sigma_2(\alpha)$, we have
$$\mathbb{E}[H(w^m)] \le \frac{C(\alpha)\,\mathbb{E}[\mathcal{L}(w^1) - \mathcal{L}(w^{R+1})]}{R}, \qquad (3.20)$$
where $\sigma_1 := \max(\hat{\sigma}_1(\alpha), \tilde{\sigma}_1)$ and $\sigma_2 := \max(\hat{\sigma}_2(\alpha), \tilde{\sigma}_2)$, and these constants are given by
$$\hat{\sigma}_1(\alpha) = \max_i \left\{ 4\left( \frac{L_i^2}{N^2} + \eta_i^2 + \Big(\frac{1}{\alpha_i} - 1\Big)^2 \frac{L_i^2}{N^2} \right) + 3\left( \frac{L_i^4}{\alpha_i \eta_i^2 N^4} + \frac{L_i^2}{N^2} \right) \right\}, \quad \tilde{\sigma}_1 = \sum_{i=1}^N 4\eta_i^2 + \Big(2 + \sum_{i=1}^N \eta_i + L_0\Big)^2 + 3\sum_{i=1}^N \frac{L_i^2}{N^2},$$
$$\hat{\sigma}_2(\alpha) = \max_i\; p_i\left( \frac{\gamma_i}{2} - \frac{L_i^2}{N^2 \alpha_i \eta_i} - \frac{1 - \alpha_i}{\alpha_i}\,\frac{L_i}{N} \right), \quad \tilde{\sigma}_2 = \frac{\sum_{i=1}^N \eta_i - L_0}{2}.$$
We remark that the above result shows the sublinear convergence of NESTT-E to the set of stationary solutions. Since $\gamma_i = \eta_i - L_i/N$, a simple derivation shows that $c_i < 0$ is satisfied whenever
$$\eta_i > \frac{L_i\left( (2 - \alpha_i) + \sqrt{(\alpha_i - 2)^2 + 8\alpha_i} \right)}{2 N \alpha_i}.$$
Further, the above result characterizes the dependency of the rate on various parameters of the algorithm. For example, to see the effect of $\alpha$ on the convergence rate, let us set $p_i = L_i / \sum_{i=1}^N L_i$ and $\eta_i = 3 L_i/N$, assume $L_0 = 0$, and consider two different choices of $\alpha$: $\hat{\alpha}_i = 1$ for all $i$, and $\tilde{\alpha}_i = 4$ for all $i$. One can easily check that these two choices lead to
$$C(\hat{\alpha}) = 49 \sum_{i=1}^N L_i/N, \qquad C(\tilde{\alpha}) = 28 \sum_{i=1}^N L_i/N.$$
The key observation is that increasing the $\alpha_i$'s reduces the constant in front of the rate. Hence, we expect that in practice larger $\alpha_i$'s will yield faster convergence.

4 Connections and Comparisons with Existing Works

In this section we compare NESTT-G/E with a few existing algorithms in the literature. First, we present a somewhat surprising observation: NESTT-G takes the same form as some well-known algorithms for convex finite-sum problems. To formally state this relation, we show in the following result that NESTT-G in fact admits a compact primal-only characterization.

Table 1: Comparison of # of gradient evaluations for NESTT-G and GD in the worst case.

                                         NESTT-G                                     GD
# of gradient evaluations                $O\big((\sum_{i=1}^N \sqrt{L_i/N})^2/\epsilon\big)$   $O\big(\sum_{i=1}^N L_i/\epsilon\big)$
Case I:   $L_i = 1$, $\forall i$          $O(N/\epsilon)$                             $O(N/\epsilon)$
Case II:  $O(\sqrt{N})$ terms with $L_i = N$,
          the rest with $L_i = 1$         $O(N/\epsilon)$                             $O(N^{3/2}/\epsilon)$
Case III: $O(1)$ terms with $L_i = N^2$,
          the rest with $L_i = 1$         $O(N/\epsilon)$                             $O(N^2/\epsilon)$

Proposition 4.1. NESTT-G can be written in the following compact form:
$$z^{r+1} = \arg\min_z\; h(z) + g_0(z) + \frac{1}{2\beta}\|z - u^{r+1}\|^2, \qquad (4.21a)$$
$$\text{with} \quad u^{r+1} := z^r - \beta\left( \frac{1}{N \alpha_{i_r}}\big(\nabla g_{i_r}(z^r) - \nabla g_{i_r}(y^{r-1}_{i_r})\big) + \frac{1}{N}\sum_{i=1}^N \nabla g_i(y^{r-1}_i) \right).$$
(4.21b)

Based on this observation, the following comments are in order. (1) Suppose $h \equiv 0$, $g_0 \equiv 0$ and $\alpha_i = 1$, $p_i = 1/N$ for all $i$. Then (4.21) takes the same form as the SAG algorithm presented in [22]. Further, when the component functions $g_i$ are picked cyclically in a Gauss-Seidel manner, iteration (4.21) takes the same form as the IAG algorithm [5]. (2) Suppose $h \ne 0$ and $g_0 \ne 0$, and $\alpha_i = p_i = 1/N$ for all $i$. Then (4.21) is the same as the SAGA algorithm [9], which is designed for optimizing convex nonsmooth finite-sum problems. Note that SAG/SAGA/IAG are all designed for convex problems. Through the lens of primal-dual splitting, our work shows that they can be generalized to nonconvex nonsmooth problems as well. Secondly, NESTT-E is related to the proximal version of the nonconvex ADMM [13, Algorithm 2]. However, the introduction of the $\alpha_i$'s is new; it can significantly improve practical performance, but complicates the analysis. Further, there has been no counterpart of the sublinear and linear convergence rate analysis for a stochastic version of [13, Algorithm 2]. Thirdly, we note that a recent paper [21] has shown that SAGA works for smooth and unconstrained nonconvex problems. Suppose that $h \equiv 0$, $g_0 \ne 0$, $L_i = L_j$ for all $i, j$, and $\alpha_i = p_i = 1/N$; the authors show that SAGA achieves $\epsilon$-stationarity using $O(N^{2/3}(\sum_{i=1}^N L_i/N)/\epsilon)$ gradient evaluations. Compared with GD, which achieves $\epsilon$-stationarity using $O(\sum_{i=1}^N L_i/\epsilon)$ gradient evaluations in the worst case (in the sense that $\sum_{i=1}^N L_i/N = L$), the rate in [21] is $O(N^{1/3})$ times better. However, the algorithm in [21] differs from NESTT-G in two respects: 1) it does not generalize to the nonsmooth constrained problem (1.1); 2) it samples two component functions at each iteration, while NESTT-G samples only one. Further, its analysis and scaling are derived for the case of uniform $L_i$'s, so it is not clear how the algorithm and the rates can be adapted to the nonuniform case.
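The worst-case comparison against GD summarized in Table 1 is easy to verify numerically. The small script below (all names are ours) evaluates the two gradient-evaluation scalings, with the common $1/\epsilon$ factor dropped, for the three cases:

```python
import numpy as np

def grad_counts(L):
    """Worst-case gradient-evaluation scalings (1/eps factor dropped):
    NESTT-G: (sum_i sqrt(L_i/N))^2,  GD: sum_i L_i."""
    N = len(L)
    return np.sqrt(L / N).sum() ** 2, L.sum()

N = 10_000
root_N = int(np.sqrt(N))
cases = {
    "Case I  (L_i = 1 for all i)":          np.ones(N),
    "Case II (sqrt(N) terms with L_i = N)": np.r_[np.full(root_N, N),
                                                  np.ones(N - root_N)],
    "Case III (one term with L_i = N^2)":   np.r_[[N**2], np.ones(N - 1)],
}
results = {name: grad_counts(L) for name, L in cases.items()}
for name, (nestt, gd) in results.items():
    print(f"{name:40s} NESTT-G ~ {nestt:12.0f}   GD ~ {gd:12.0f}")
```

In all three cases the NESTT-G count stays $O(N)$, while the GD count grows to $O(N^{3/2})$ and $O(N^2)$ in Cases II and III, matching Table 1.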
On the other hand, our NESTT works in the general nonsmooth constrained setting. The non-uniform sampling used in NESTT-G is well-suited for problems with non-uniform $L_i$'s, and our scaling can be up to $N$ times better than GD (or its proximal version) in the worst case. Note that problems in which the component functions have non-uniform $L_i$'s are common in applications such as sparse optimization and signal processing. For example, in the LASSO problem the data matrix is often normalized by feature (or "column-normalized" [19]), so the $\ell_2$ norms of the rows of the data matrix (which correspond to the Lipschitz constants of the component functions) can differ dramatically. In Table 1 we list the comparison of the number of gradient evaluations for NESTT-G and GD in the worst case (in the sense that $\sum_{i=1}^N L_i/N = L$). For simplicity, we omitted an additive constant of $O(N)$ for computing the initial gradients.

5 Numerical Results

In this section we evaluate the performance of NESTT. Consider the high-dimensional regression problem with noisy observations [16], where $M$ observations are generated by $y = X\nu + \epsilon$. Here $y \in \mathbb{R}^M$ is the observed data sample; $X \in \mathbb{R}^{M \times P}$ is the covariate matrix; $\nu \in \mathbb{R}^P$ is the ground truth; and $\epsilon \in \mathbb{R}^M$ is the noise. Suppose that the covariate matrix is not perfectly known, i.e., we observe $A = X + W$, where $W \in \mathbb{R}^{M \times P}$ is a noise matrix with known covariance matrix $\Sigma_W$. Let us define $\hat{\Gamma} := \frac{1}{M}(A^\top A) - \Sigma_W$ and $\hat{\gamma} := \frac{1}{M}(A^\top y)$.

Figure 1: Comparison of NESTT-G/E, SAGA, SGD on problem (5.22), in terms of the optimality gap. The x-axis denotes the number of passes of the dataset. Left: uniform sampling ($p_i = 1/N$); right: non-uniform sampling ($p_i = \sqrt{L_i/N}/\sum_{i=1}^N \sqrt{L_i/N}$).

To estimate the ground truth $\nu$, let
Table 2: Optimality gap $\|\tilde{\nabla}_{1/\beta} f(z^r)\|^2$ for different algorithms, after 100 passes over the datasets.

        SGD                  NESTT-E (α = 10)     NESTT-G              SAGA
 N      Uniform   Non-Uni    Uniform   Non-Uni    Uniform   Non-Uni    Uniform   Non-Uni
 10     3.4054    0.2265     2.6E-16   6.16E-19   2.3E-21   6.1E-24    2.7E-17   2.8022
 20     0.6370    6.9087     2.4E-9    5.9E-9     1.2E-10   2.9E-11    7.7E-7    11.3435
 30     0.2260    0.1639     3.2E-6    2.7E-6     4.5E-7    1.4E-7     2.5E-5    0.1253
 40     0.0574    0.3193     5.8E-4    8.1E-5     1.8E-5    3.1E-5     4.1E-5    0.7385
 50     0.0154    0.0409     8.3E-4    7.1E-4     1.2E-4    2.7E-4     2.5E-4    3.3187

us consider the following (nonconvex) optimization problem posed in [16, problem (2.4)] (where $R > 0$ controls sparsity):
$$\min_z\; z^\top \hat{\Gamma} z - \hat{\gamma}^\top z \quad \text{s.t.} \quad \|z\|_1 \le R. \qquad (5.22)$$
Due to the presence of noise, $\hat{\Gamma}$ is not positive semidefinite, hence the problem is not convex. Note that this problem satisfies Assumptions A–B, so by Theorem 2.2 NESTT-G converges Q-linearly. To test the performance of the proposed algorithms, we generate the problem following a setup similar to [16]. Let $X = (X_1; \cdots; X_N) \in \mathbb{R}^{M \times P}$ with $\sum_i N_i = M$, where each $X_i \in \mathbb{R}^{N_i \times P}$ corresponds to $N_i$ data points and is generated from an i.i.d. Gaussian distribution. Here $N_i$ represents the size of each minibatch of samples. Generate the observations $y_i = X_i \nu^* + \epsilon_i \in \mathbb{R}^{N_i}$, where $\nu^*$ is a $K$-sparse vector to be estimated, and $\epsilon_i \in \mathbb{R}^{N_i}$ is random noise. Let $W = [W_1; \cdots; W_N]$, with $W_i \in \mathbb{R}^{N_i \times P}$ generated i.i.d. Gaussian. Therefore we have
$$z^\top \hat{\Gamma} z = \frac{1}{N}\sum_{i=1}^N \frac{N}{M}\, z^\top \big( X_i^\top X_i - W_i^\top W_i \big) z.$$
We set $M = 100{,}000$, $P = 5000$, $N = 50$, $K = 22 \approx \sqrt{P}$, and $R = \|\nu^*\|_1$. We implement NESTT-G/E, SGD, and the nonconvex SAGA proposed in [21] with stepsize $\beta = \frac{1}{3 L_{\max} N^{2/3}}$ (with $L_{\max} := \max_i L_i$). Note that the SAGA proposed in [21] only works for unconstrained problems with uniform $L_i$, so when applied to (5.22) it is not guaranteed to converge; we include it here for comparison purposes only. In Fig. 1 we compare the different algorithms in terms of the gap $\|\tilde{\nabla}_{1/\beta} f(z^r)\|^2$.
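As a side check on the nonconvexity claim, the corrected covariance $\hat{\Gamma}$ can be constructed on a small synthetic instance. The sketch below uses hypothetical dimensions far smaller than the paper's, assumes $\Sigma_W = \sigma_w^2 I$, and deliberately sets $P > M$ so that $A^\top A/M$ is rank-deficient and the indefiniteness of $\hat{\Gamma}$ is guaranteed:

```python
import numpy as np

rng = np.random.default_rng(0)
M, P, K = 100, 200, 10        # hypothetical sizes; P > M forces rank deficiency
sigma_w = 0.5                 # assume a known noise covariance Sigma_W = sigma_w^2 I

nu = np.zeros(P)              # K-sparse ground truth
nu[:K] = rng.normal(size=K)
X = rng.normal(size=(M, P))
y = X @ nu + 0.1 * rng.normal(size=M)
A = X + sigma_w * rng.normal(size=(M, P))          # observed noisy covariates

Gamma_hat = A.T @ A / M - sigma_w**2 * np.eye(P)   # bias-corrected covariance
gamma_hat = A.T @ y / M

# A.T A / M has rank at most M < P, so Gamma_hat has eigenvalue -sigma_w^2
# on the null space of A: the quadratic in (5.22) is indeed nonconvex.
min_eig = np.linalg.eigvalsh(Gamma_hat).min()
```

Any minimizer of (5.22) over the $\ell_1$ ball therefore has to contend with negative curvature directions, which is exactly the regime Theorem 2.2 addresses.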
In the left figure we consider a problem with $N_i = N_j$ for all $i, j$, and we show the performance of the proposed algorithms with uniform sampling (i.e., the probability of picking the $i$th block is $p_i = 1/N$). In the right figure we consider problems in which approximately half of the component functions have twice the $L_i$'s of the rest, and use non-uniform sampling ($p_i = \sqrt{L_i/N}/\sum_{i=1}^N \sqrt{L_i/N}$). Clearly, in both cases the proposed algorithms perform quite well. Furthermore, NESTT-E performs well with large $\alpha := \{\alpha_i\}_{i=1}^N$, which confirms our theoretical rate analysis. It is also worth mentioning that when the $N_i$'s are non-uniform, the proposed algorithms [NESTT-G and NESTT-E (with $\alpha = 10$)] significantly outperform SAGA and SGD. In Table 2 we further compare the different algorithms while changing the number of component functions (i.e., the number of minibatches $N$), with the rest of the setup as above. We run each algorithm for 100 passes over the dataset. As before, our algorithms perform well, while SAGA appears sensitive to the uniformity of the minibatch sizes [note that there is no convergence guarantee for SAGA applied to the nonconvex constrained problem (5.22)].

References

[1] Z. Allen-Zhu and E. Hazan. Variance reduction for faster non-convex optimization. 2016. Preprint, available on arXiv: arXiv:1603.05643.
[2] A. Antoniadis, I. Gijbels, and M. Nikolova. Penalized likelihood regression for generalized linear models with non-quadratic penalties. Annals of the Institute of Statistical Mathematics, 63(3):585–615, 2009.
[3] D. Bertsekas. Incremental gradient, subgradient, and proximal methods for convex optimization: A survey. 2000. LIDS Report 2848.
[4] E. Bjornson and E. Jorswieck. Optimal resource allocation in coordinated multi-cell systems. Foundations and Trends in Communications and Information Theory, 9, 2013.
[5] D. Blatt, A. O. Hero, and H. Gauchman.
A convergent incremental gradient method with a constant step size. SIAM Journal on Optimization, 18(1):29–51, 2007.
[6] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
[7] V. Cevher, S. Becker, and M. Schmidt. Convex optimization for big data: Scalable, randomized, and parallel algorithms for big data analytics. IEEE Signal Processing Magazine, 31(5):32–43, Sept 2014.
[8] T.-H. Chang, M. Hong, and X. Wang. Multi-agent distributed optimization via inexact consensus ADMM. IEEE Transactions on Signal Processing, 63(2):482–497, Jan 2015.
[9] A. Defazio, F. Bach, and S. Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In the Proceedings of NIPS, 2014.
[10] S. Ghadimi and G. Lan. Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013.
[11] D. Hajinezhad, T. H. Chang, X. Wang, Q. Shi, and M. Hong. Nonnegative matrix factorization using ADMM: Algorithm and convergence analysis. In 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4742–4746, March 2016.
[12] D. Hajinezhad and M. Hong. Nonconvex alternating direction method of multipliers for distributed sparse principal component analysis. In the Proceedings of GlobalSIP, 2015.
[13] M. Hong, Z.-Q. Luo, and M. Razaviyayn. Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems. SIAM Journal on Optimization, 26(1):337–364, 2016.
[14] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In the Proceedings of NIPS, 2013.
[15] G. Lan. An optimal randomized incremental gradient method. 2015. Preprint.
[16] P.-L. Loh and M. Wainwright.
High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. The Annals of Statistics, 40(3):1637–1664, 2012.
[17] P. D. Lorenzo and G. Scutari. NEXT: In-network nonconvex optimization. 2016. Preprint.
[18] Z.-Q. Luo and P. Tseng. On the linear convergence of descent methods for convex essentially smooth minimization. SIAM Journal on Control and Optimization, 30(2):408–425, 1992.
[19] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.
[20] M. Razaviyayn, M. Hong, Z.-Q. Luo, and J. S. Pang. Parallel successive convex approximation for nonsmooth nonconvex optimization. In the Proceedings of NIPS, 2014.
[21] S. J. Reddi, S. Sra, B. Poczos, and A. Smola. Fast incremental method for nonconvex optimization. 2016. Preprint, available on arXiv: arXiv:1603.06159.
[22] M. Schmidt, N. L. Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. 2013. Technical report, INRIA.
[23] S. Shalev-Shwartz and T. Zhang. Proximal stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[24] S. Sra. Scalable nonconvex inexact proximal splitting. In Advances in Neural Information Processing Systems (NIPS), 2012.
[25] P. Tseng and S. Yun. A coordinate gradient descent method for nonsmooth separable minimization. Mathematical Programming, 117:387–423, 2009.
[26] Z. Wang, H. Liu, and T. Zhang. Optimal computational and statistical rates of convergence for sparse nonconvex learning problems. Annals of Statistics, 42(6):2164–2201, 2014.
[27] S. Zlobec. On the Liu-Floudas convexification of smooth programs. Journal of Global Optimization, 32:401–407, 2005.
Probing the Compositionality of Intuitive Functions Eric Schulz University College London e.schulz@cs.ucl.ac.uk Joshua B. Tenenbaum MIT jbt@mit.edu David Duvenaud University of Toronto duvenaud@cs.toronto.edu Maarten Speekenbrink University College London m.speekenbrink@ucl.ac.uk Samuel J. Gershman Harvard University gershman@fas.harvard.edu Abstract How do people learn about complex functional structure? Taking inspiration from other areas of cognitive science, we propose that this is accomplished by harnessing compositionality: complex structure is decomposed into simpler building blocks. We formalize this idea within the framework of Bayesian regression using a grammar over Gaussian process kernels. We show that participants prefer compositional over non-compositional function extrapolations, that samples from the human prior over functions are best described by a compositional model, and that people perceive compositional functions as more predictable than their non-compositional but otherwise similar counterparts. We argue that the compositional nature of intuitive functions is consistent with broad principles of human cognition. 1 Introduction Function learning underlies many intuitive judgments, such as the perception of time, space and number. All of these tasks require the construction of mental representations that map inputs to outputs. Since the space of such mappings is infinite, inductive biases are necessary to constrain the plausible inferences. What is the nature of human inductive biases over functions? It has been suggested that Gaussian processes (GPs) provide a good characterization of these inductive biases [15]. As we describe more formally below, GPs are distributions over functions that can encode properties such as smoothness, linearity, periodicity, and other inductive biases indicated by research on human function learning [5, 3]. Lucas et al. 
[15] showed how Bayesian inference with GP priors can unify previous rule-based and exemplar-based theories of function learning [18]. A major unresolved question is how people deal with complex functions that are not easily captured by any simple GP. Insight into this question is provided by the observation that many complex functions encountered in the real world can be broken down into compositions of simpler functions [6, 11]. We pursue this idea theoretically and experimentally, by first defining a hypothetical compositional grammar for intuitive functions (based on [6]) and then investigating whether this grammar quantitatively predicts human function learning performance. We compare the compositional model to a flexible non-compositional model (the spectral mixture representation proposed by [21]). Both models use Bayesian inference to reason about functions, but differ in their inductive biases. We show that (a) participants prefer compositional pattern extrapolations in both forced choice and manual drawing tasks; (b) samples elicited from participants' priors over functions are more consistent with the compositional grammar; and (c) participants perceive compositional functions as more predictable than their non-compositional but otherwise similar counterparts. Taken together, these findings provide support for the compositional nature of intuitive functions.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

2 Gaussian process regression as a theory of intuitive function learning

A GP is a collection of random variables, any finite subset of which is jointly Gaussian-distributed (see [18] for an introduction). A GP can be expressed as a distribution over functions: $f \sim \mathcal{GP}(m, k)$, where $m(x) = \mathbb{E}[f(x)]$ is a mean function modeling the expected output of the function given input $x$, and $k(x, x') = \mathbb{E}\left[(f(x) - m(x))(f(x') - m(x'))\right]$ is a kernel function modeling the covariance between points.
Intuitively, the kernel encodes an inductive bias about the expected smoothness of functions drawn from the GP. To simplify exposition, we follow standard convention in assuming a constant mean of 0. Conditional on data $\mathcal{D} = \{X, y\}$, where $y_n \sim \mathcal{N}(f(x_n), \sigma^2)$, the posterior predictive distribution for a new input $x_\star$ is Gaussian with mean and variance given by:
$$\mathbb{E}[f(x_\star) \mid \mathcal{D}] = k_\star^\top (K + \sigma^2 I)^{-1} y, \qquad (1)$$
$$\mathbb{V}[f(x_\star) \mid \mathcal{D}] = k(x_\star, x_\star) - k_\star^\top (K + \sigma^2 I)^{-1} k_\star, \qquad (2)$$
where $K$ is the $N \times N$ matrix of covariances evaluated at each pair of inputs in $X$, and $k_\star = [k(x_1, x_\star), \ldots, k(x_N, x_\star)]$. As pointed out by Griffiths et al. [10] (see also [15]), the predictive distribution can be viewed as an exemplar (similarity-based) model of function learning [5, 16], since it can be written as a linear combination of the covariances between past and current inputs:
$$f(x_\star) = \sum_{n=1}^{N} \alpha_n k(x_n, x_\star), \qquad (3)$$
with $\alpha = (K + \sigma^2 I)^{-1} y$. Equivalently, by Mercer's theorem any positive definite kernel can be expressed as an outer product of feature vectors:
$$k(x, x') = \sum_{d=1}^{\infty} \lambda_d \phi_d(x) \phi_d(x'), \qquad (4)$$
where $\{\phi_d(x)\}$ are the eigenfunctions of the kernel and $\{\lambda_d\}$ are the eigenvalues. The posterior predictive mean is a linear combination of the features, which from a psychological perspective can be thought of as encoding "rules" mapping inputs to outputs [4, 14]. Thus, a GP can be expressed as both an exemplar (similarity-based) model and a feature (rule-based) model, unifying the two dominant classes of function learning theories in cognitive science [15].

3 Structure learning with Gaussian processes

So far we have assumed a fixed kernel function. However, humans can adapt to a wide variety of structural forms [13, 8], suggesting that they have the flexibility to learn the kernel function from experience. The key question addressed in this paper is what space of kernels humans are optimizing over: how rich is their representational vocabulary?
This vocabulary will in turn act as an inductive bias, making some functions easier to learn and other functions harder to learn. Broadly speaking, there are two approaches to parameterizing the kernel space: a fixed functional form with continuous parameters, or a combinatorial space of functional forms. These approaches are not mutually exclusive; indeed, the success of the combinatorial approach depends on optimizing the continuous parameters for each form. Nonetheless, this distinction is useful because it allows us to separate different forms of functional complexity. A function might have internal structure such that, when this structure is revealed, the apparent functional complexity is significantly reduced. For example, a function composed of many piecewise linear segments might have a long description length under a typical continuous parametrization (e.g., the radial basis kernel described below), because it violates the smoothness assumptions of the prior. However, conditional on the changepoints between segments, the function can be decomposed into independent parts, each of which is well described by a simple continuous parametrization. If internally structured functions are "natural kinds," then the combinatorial approach may be a good model of human intuitive functions. In the rest of this section, we describe three kernel parameterizations. The first two are continuous, differing in their expressiveness. The third is combinatorial, allowing it to capture complex patterns by composing simpler kernels. For all kernels, we take the standard approach of choosing the parameter values that optimize the log marginal likelihood.

3.1 Radial basis kernel

The radial basis kernel is commonly used in machine learning applications, embodying the assumption that the covariance between function values decays exponentially with input distance:
$$k(x, x') = \theta^2 \exp\left( -\frac{|x - x'|^2}{2 l^2} \right), \qquad (5)$$
where $\theta$ is a scaling parameter and $l$ is a length-scale parameter.
This kernel assumes that the same smoothness properties apply globally for all inputs. It provides a standard baseline to compare with more expressive kernels.

3.2 Spectral mixture kernel

The second approach is based on the fact that any stationary kernel can be expressed as an integral using Bochner's theorem. Letting $\tau = |x - x'| \in \mathbb{R}^P$, we have
$$k(\tau) = \int_{\mathbb{R}^P} e^{2\pi i s^\top \tau}\, \psi(ds). \qquad (6)$$
If $\psi$ has a density $S(s)$, then $S$ is the spectral density of $k$; $S$ and $k$ are thus Fourier duals [18]. This means that a spectral density fully defines the kernel, and furthermore that every stationary kernel can be expressed as a spectral density. Wilson & Adams [21] showed that the spectral density can be approximated by a mixture of $Q$ Gaussians, such that
$$k(\tau) = \sum_{q=1}^{Q} w_q \prod_{p=1}^{P} \exp\big( -2\pi^2 \tau_p^2\, \upsilon_q^{(p)} \big) \cos\big( 2\pi \tau_p\, \mu_q^{(p)} \big). \qquad (7)$$
Here, the $q$th component has mean vector $\mu_q = \big( \mu_q^{(1)}, \ldots, \mu_q^{(P)} \big)$ and covariance matrix $M_q = \mathrm{diag}\big( \upsilon_q^{(1)}, \ldots, \upsilon_q^{(P)} \big)$. The result is a non-parametric approach to Gaussian process regression, in which complex kernels are approximated by mixtures of simpler ones. This approach is appealing when simpler kernels fail to capture functional structure. Its main drawback is that because structure is captured implicitly via the spectral density, the building blocks are psychologically less intuitive: humans appear to have preferences for linear [12] and periodic [1] functions, which are not straightforwardly encoded in the spectral mixture (though of course the mixture can approximate them). Since the spectral kernel has been successfully applied to reverse-engineer human kernels [22], it is a useful reference for comparison with more structured compositional approaches.

3.3 Compositional kernel

As positive semidefinite kernels are closed under addition and multiplication, we can create richly structured and interpretable kernels from well-understood base components. For example, by summing kernels, we can model the data as a superposition of independent functions.
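To make the closure property concrete, here is a small NumPy sketch (our own naming and parameter values) that builds the linear, RBF, and periodic base kernels of the grammar, forms the compositions PER + LIN and RBF × PER, and checks that the resulting Gram matrices are still positive semidefinite:

```python
import numpy as np

def lin(x, xp, theta1=0.0):
    """Linear kernel k(x, x') = (x - theta1)(x' - theta1)."""
    return np.outer(x - theta1, xp - theta1)

def rbf(x, xp, theta2=1.0, theta3=0.5):
    """Radial basis kernel k(tau) = theta2^2 exp(-tau^2 / (2 theta3^2))."""
    tau = x[:, None] - xp[None, :]
    return theta2**2 * np.exp(-tau**2 / (2 * theta3**2))

def per(x, xp, theta4=1.0, theta5=1.0, theta6=1.0):
    """Periodic kernel k(tau) = theta4^2 exp(-2 sin^2(pi tau theta5) / theta6^2)."""
    tau = x[:, None] - xp[None, :]
    return theta4**2 * np.exp(-2 * np.sin(np.pi * tau * theta5)**2 / theta6**2)

x = np.linspace(0, 2, 40)
K_sum = per(x, x) + lin(x, x)     # PER + LIN: periodic variation on a trend
K_prod = rbf(x, x) * per(x, x)    # RBF x PER: locally damped periodicity

# Closure check: sums and elementwise (Schur) products of PSD kernel matrices
# remain PSD, so both compositions are valid covariance functions.
min_eigs = [np.linalg.eigvalsh(K).min() for K in (K_sum, K_prod)]
```

Because the compositions remain valid kernels, any of them can be plugged into the GP posterior predictive equations (1)–(2) unchanged, which is what makes the grammar-based search over kernel structures possible.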
Figure 1 shows an example of how different kernels (radial basis, linear, periodic) can be combined. Table 1 summarizes the kernels used in our grammar.

Figure 1: Examples of base and compositional kernels (RBF, LIN, PER, PER+LIN, RBFxPER).

Many other compositional grammars are possible. For example, we could have included a more diverse set of kernels, and other composition operators (e.g., convolution, scaling) that generate valid kernels. However, we believe that our simple grammar is a useful starting point, since the components are intuitive and likely to be psychologically plausible. For tractability, we fix the maximum number of combined kernels to 3. Additionally, we do not allow for repetition of kernels in order to restrict the complexity of the kernel space.

Table 1: Utilized base kernels in our compositional grammar, with τ = |x − x'|.

Linear: k(x, x') = (x - \theta_1)(x' - \theta_1)
Radial basis function: k(\tau) = \theta_2^2 \exp\left( -\frac{\tau^2}{2\theta_3^2} \right)
Periodic: k(\tau) = \theta_4^2 \exp\left( -\frac{2 \sin^2(\pi \tau \theta_5)}{\theta_6^2} \right)

4 Experiment 1: Extrapolation

The first experiment assessed whether people prefer compositional over non-compositional extrapolations. In experiment 1a, functions were sampled from a compositional GP and different extrapolations (mean predictions) were produced using each of the aforementioned kernels. Participants were then asked to choose among the 3 different extrapolations for a given function (see Figure 2). In detail, the outputs for x_learn = [0, 0.1, ..., 7] were used as a training set to which all three kernels were fitted and then used to generate predictions for the test set x_test = [7.1, 7.2, ..., 10]. Their mean predictions were then used to generate one plot for every approach that showed the learned input as a blue line and the extrapolation as a red line. The procedure was repeated for 20 different compositional functions.

Figure 2: Screen shot of first choice experiment.
Predictions in this example (from left to right) were generated by a spectral mixture, a radial basis, and a compositional kernel. 52 participants (mean age = 36.15, SD = 9.11) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to select one of 3 extrapolations (displayed as red lines) they thought best completed a given blue line. Results showed that participants chose compositional predictions 69%, spectral mixture predictions 17%, and radial basis predictions 14% of the time. Overall, the compositional predictions were chosen significantly more often than the other two (χ² = 591.2, p < 0.01), as shown in Figure 3a.

Figure 3: Results of extrapolation experiments. (a) Choice proportion for compositional ground truth. (b) Choice proportion for spectral mixture ground truth. Error bars represent the standard error of the mean.

In experiment 1b, again 20 functions were sampled, but this time from a spectral mixture kernel, and 65 participants (mean age = 30, SD = 9.84) were asked to choose among either compositional or spectral mixture extrapolations and received $0.5 as before. Results (displayed in Figure 3b) showed that participants again chose compositional extrapolations more frequently (68% vs. 32%, χ² = 172.8, p < 0.01), even if the ground truth happened to be generated by a spectral mixture kernel. Thus, people seem to prefer compositional over non-compositional extrapolations in forced choice extrapolation tasks.

5 Markov chain Monte Carlo with people

In a second set of experiments, we assessed participants' inductive biases directly using a Markov chain Monte Carlo with People (MCMCP) approach [19].
Participants accept or reject proposed extrapolations, effectively simulating a Markov chain whose stationary distribution is in this case the posterior predictive. Extrapolations from all possible kernel combinations (up to 3 combined kernels) were generated and stored a priori. These were then used to generate plots of different proposal extrapolations (as in the previous experiment). On each trial, participants chose between their most recently accepted extrapolation and a new proposal.

5.1 Experiment 2a: Compositional ground truth

In the first MCMCP experiment, eight different functions were sampled from various compositional kernels, the input space was split into training and test sets, and then all kernel combinations were used to generate extrapolations. Proposals were sampled uniformly from this set. 51 participants with an average age of 32.55 (SD = 8.21) were recruited via Amazon's Mechanical Turk and paid $1. There were 8 blocks of 30 trials, where each block corresponded to a single training set. We calculated the average proportion of accepted kernels over the last 5 trials, as shown in Figure 4.
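The accept/reject dynamics of MCMCP can be sketched as follows. This is a hypothetical simulation, not the experimental code: we assume a Barker-style (Luce choice) acceptance rule, under which the chain's stationary distribution is proportional to the exponentiated score of each candidate; the candidate labels and scores are invented for illustration:

```python
import math
import random

def mcmcp_chain(candidates, score, n_trials=30, seed=0):
    """Simulate a participant choosing between the current extrapolation
    and a uniformly sampled proposal on every trial."""
    rng = random.Random(seed)
    state = rng.choice(candidates)
    visited = []
    for _ in range(n_trials):
        proposal = rng.choice(candidates)
        # Barker / Luce-choice rule: P(accept) = e^{s'} / (e^{s} + e^{s'})
        p_accept = 1.0 / (1.0 + math.exp(score(state) - score(proposal)))
        if rng.random() < p_accept:
            state = proposal
        visited.append(state)
    return visited

# Toy example: three kernel structures with hypothetical subjective scores.
scores = {"LIN+PER": 2.0, "LIN": 0.5, "RBF": 0.0}
chain = mcmcp_chain(list(scores), lambda k: scores[k], n_trials=2000)
# The highest-scoring structure should dominate the visited states.
```

Because the Barker rule satisfies detailed balance with respect to π(k) ∝ exp(score(k)), the long-run acceptance proportions estimate the participant's subjective distribution over structures.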
Figure 4: Proportions of chosen predictions over the last 5 trials for each of the eight generating compositional kernels (LIN + PER 1–4, LIN x PER, PER x RBF + LIN, PER, LIN + PER + RBF). Generating kernel marked in red.

In all cases participants' subjective probability distribution over kernels corresponded well with the data-generating kernels. Moreover, the inverse marginal likelihood, standardized over all kernels, correlated highly with the subjective beliefs assessed by MCMCP (ρ = 0.91, p < .01). Thus, participants seemed to converge to sensible structures when the functions were generated by compositional kernels.

5.2 Experiment 2b: Naturalistic functions

The second MCMCP experiment assessed what structures people converged to when faced with real world data. 51 participants with an average age of 32.55 (SD = 12.14) were recruited via Amazon Mechanical Turk and received $1 for their participation. The functions were an airline passenger data set, volcano CO2 emission data, the number of gym memberships over 5 years, and the number of times people googled the band “Wham!” over the last 8 years; all shown in Figure 5a.
Participants were not told any information about the data set (including input and output descriptions) beyond the input-output pairs. As periodicity in the real world is rarely purely periodic, we adapted the periodic component of the grammar by multiplying a periodic kernel with a radial basis kernel, thereby locally smoothing the periodic part of the function.1 Apart from the different training sets, the procedure was identical to the last experiment.

Figure 5: Real world data and MCMCP results. (a) Data: airline passengers, gym memberships, volcano, and “Wham!”. (b) Proportions of chosen predictions over the last 5 trials. Error bars represent the standard error of the mean.

Results are shown in Figure 5b, demonstrating that participants converged to intuitively plausible patterns. In particular, for both the volcano and the airline passenger data, participants converged to compositions resembling those found in previous analyses [6]. The correlation between the mean proportion of accepted predictions and the inverse standardized marginal likelihoods of the different kernels was again significantly positive (ρ = 0.83, p < .01).

6 Experiment 3: Manual function completion

In the next experiment, we let participants draw the functions underlying observed data manually.
As all of the prior experiments asked participants to judge between “pre-generated” predictions of functions, we wanted to compare this to how participants generate predictions themselves. On each round of the experiment, functions were sampled from the compositional grammar, the number of points to be presented on each trial was sampled uniformly between 100 and 200, and the noise variance was sampled uniformly between 0 and 25. Finally, the size of an unobserved region of the function was sampled to lie between 5 and 50. Participants were asked to manually draw the function best describing observed data and to inter- and extrapolate this function in two unobserved regions. A screen shot of the experiment is shown in Figure 6.

1See the following page for an example: http://learning.eng.cam.ac.uk/carl/mauna.

Figure 6: Manual pattern completion experiment. Extrapolation region is delimited by vertical lines.

36 participants with a mean age of 30.5 (SD = 7.15) were recruited from Amazon Mechanical Turk and received $2 for their participation. Participants were asked to draw lines in a cloud of dots that they thought best described the given data. To facilitate this process, participants placed black dots into the cloud, which were then automatically connected by a black line based on a cubic Bezier smoothing curve. They were asked to place the first dot on the left boundary and the final dot on the right boundary of the graph. In between, participants were allowed to place as many dots as they liked (from left to right) and could remove previously placed dots. There were 50 trials in total. We assessed the average root mean squared distance between participants' predictions (the line they drew) and the mean predictions of each kernel given the data participants had seen, for both interpolation and extrapolation areas. Results are shown in Figure 7.

(a) Distance for interpolation drawings.
(b) Distance for extrapolation drawings.

Figure 7: Root mean squared distances. Error bars represent the standard error of the mean.

The mean distance from participants' drawings was significantly higher for the spectral mixture kernel than for the compositional kernel in both interpolation (86.96 vs. 58.33, t(1291.1) = −6.3, p < .001) and extrapolation areas (110.45 vs. 83.91, t(1475.7) = 6.39, p < 0.001). The radial basis kernel produced distances similar to the compositional kernel in interpolation (55.8), but predicted participants' drawings significantly worse in extrapolation areas (97.9, t(1459.9) = 3.26, p < 0.01).

7 Experiment 4: Assessing predictability

Compositional patterns might also affect the way in which participants perceive functions a priori [20]. To assess this, we asked participants to judge how well they thought they could predict 40 different functions that were similar on many measures such as their spectral entropy and their average wavelet distance to each other, but 20 of which were sampled from a compositional and 20 from a spectral mixture kernel. Figure 8 shows a screenshot of the experiment. 50 participants with a mean age of 32 (SD = 7.82) were recruited via Amazon Mechanical Turk and received $0.5 for their participation. Participants were asked to rate the predictability of different functions. On each trial participants were shown a total of nj ∈ {50, 60, ..., 100} randomly sampled input-output points of a given function and asked to judge how well they thought they could predict the output for a randomly sampled input point on a scale of 0 (not at all) to 100 (very well). Afterwards, they had to rate which of two functions was easier to predict (Figure 8) on a scale from -100 (left graph is definitely easier to predict) to 100 (right graph is definitely easier to predict).
As shown in Figure 9, compositional functions were perceived as more predictable than spectral functions in isolation (t(948) = 11.422, p < 0.01) and in paired comparisons (t(499) = 13.502, p < 0.01). Perceived predictability increases with the number of observed outputs (r = 0.23, p < 0.01), and the larger the number of observations, the larger the difference between compositional and spectral mixture functions (r = 0.14, p < 0.01).

Figure 8: Screenshot of the predictability experiment. (a) Predictability judgements. (b) Comparative judgements.

Figure 9: Results of the predictability experiment. (a) Predictability judgements. (b) Comparative judgements. Error bars represent the standard error of the mean.

8 Discussion

In this paper, we probed human intuitions about functions and found that these intuitions are best described as compositional. We operationalized compositionality using a grammar over kernels within a GP regression framework and found that people prefer extrapolations based on compositional kernels over other alternatives, such as a spectral mixture or the standard radial basis kernel. Two Markov chain Monte Carlo with people experiments revealed that participants converge to extrapolations consistent with the compositional kernels. These findings were replicated when people manually drew the functions underlying observed data. Moreover, participants perceived compositional functions as more predictable than non-compositional – but otherwise similar – ones. The work presented here is connected to several lines of previous research, most importantly that of Lucas et al. [15], which introduced GP regression as a model of human function learning, and Wilson et al.
[22], which attempted to reverse-engineer the human kernel using a spectral mixture. We see our work as complementary; we need both a theory to describe how people make sense of structure as well as a method to indicate what the final structure might look like when represented as a kernel. Our approach also ties together neatly with past attempts to model structure in other cognitive domains such as motion perception [9] and decision making [7]. Our work can be extended in a number of ways. First, it is desirable to more thoroughly explore the space of base kernels and composition operators, since we used an elementary grammar in our analyses that is probably too simple. Second, the compositional approach could be used in traditional function learning paradigms (e.g., [5, 14]) as well as in active input selection paradigms [17]. Another interesting avenue for future research would be to explore the broader implications of compositional function representations. For example, evidence suggests that statistical regularities reduce perceived numerosity [23] and increase memory capacity [2]; these tasks can therefore provide clues about the underlying representations. If compositional functions alter number perception or memory performance to a greater extent than alternative functions, that suggests that our theory extends beyond simple function learning.

References

[1] L. Bott and E. Heit. Nonmonotonic extrapolation in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30:38–50, 2004.
[2] T. F. Brady, T. Konkle, and G. A. Alvarez. A review of visual memory capacity: Beyond individual items and toward structured representations. Journal of Vision, 11:4–4, 2011.
[3] B. Brehmer. Hypotheses about relations between scaled variables in the learning of probabilistic inference tasks. Organizational Behavior and Human Performance, 11(1):1–27, 1974.
[4] J. D. Carroll.
Functional learning: The learning of continuous functional mappings relating stimulus and response continua. Educational Testing Service, 1963.
[5] E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel. Extrapolation: The sine qua non for abstraction in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(4):968, 1997.
[6] D. Duvenaud, J. R. Lloyd, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Structure discovery in nonparametric regression through compositional kernel search. Proceedings of the 30th International Conference on Machine Learning, pages 1166–1174, 2013.
[7] S. J. Gershman, J. Malmaud, J. B. Tenenbaum, and S. Gershman. Structured representations of utility in combinatorial domains. Decision, 2016.
[8] S. J. Gershman and Y. Niv. Learning latent structure: carving nature at its joints. Current Opinion in Neurobiology, 20:251–256, 2010.
[9] S. J. Gershman, J. B. Tenenbaum, and F. Jäkel. Discovering hierarchical motion structure. Vision Research, 2016.
[10] T. L. Griffiths, C. Lucas, J. Williams, and M. L. Kalish. Modeling human function learning with Gaussian processes. In Advances in Neural Information Processing Systems, pages 553–560, 2009.
[11] R. Grosse, R. R. Salakhutdinov, W. T. Freeman, and J. B. Tenenbaum. Exploiting compositionality to explore a large space of model structures. Uncertainty in Artificial Intelligence, 2012.
[12] M. L. Kalish, T. L. Griffiths, and S. Lewandowsky. Iterated learning: Intergenerational knowledge transmission reveals inductive biases. Psychonomic Bulletin & Review, 14:288–294, 2007.
[13] C. Kemp and J. B. Tenenbaum. Structured statistical models of inductive reasoning. Psychological Review, 116:20–58, 2009.
[14] K. Koh and D. E. Meyer. Function learning: Induction of continuous stimulus-response relations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 17:811–836, 1991.
[15] C. G. Lucas, T. L. Griffiths, J. J. Williams, and M. L. Kalish.
A rational model of function learning. Psychonomic Bulletin & Review, 22(5):1193–1215, 2015.
[16] M. A. McDaniel and J. R. Busemeyer. The conceptual basis of function learning and extrapolation: Comparison of rule-based and associative-based models. Psychonomic Bulletin & Review, 12:24–42, 2005.
[17] P. Parpart, E. Schulz, M. Speekenbrink, and B. C. Love. Active learning as a means to distinguish among prominent decision strategies. In Proceedings of the 37th Annual Meeting of the Cognitive Science Society, pages 1829–1834, 2015.
[18] C. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[19] A. N. Sanborn, T. L. Griffiths, and R. M. Shiffrin. Uncovering mental representations with Markov chain Monte Carlo. Cognitive Psychology, 60(2):63–106, 2010.
[20] E. Schulz, J. B. Tenenbaum, D. N. Reshef, M. Speekenbrink, and S. J. Gershman. Assessing the perceived predictability of functions. In Proceedings of the 37th Annual Meeting of the Cognitive Science Society, pages 2116–2121. Cognitive Science Society, 2015.
[21] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. arXiv preprint arXiv:1302.4245, 2013.
[22] A. G. Wilson, C. Dann, C. Lucas, and E. P. Xing. The human kernel. In Advances in Neural Information Processing Systems, pages 2836–2844, 2015.
[23] J. Zhao and R. Q. Yu. Statistical regularities reduce perceived numerosity. Cognition, 146:217–222, 2016.
Identification and Overidentification of Linear Structural Equation Models

Bryant Chen
University of California, Los Angeles, Computer Science Department
Los Angeles, CA, 90095-1596, USA

Abstract

In this paper, we address the problems of identifying linear structural equation models and discovering the constraints they imply. We first extend the half-trek criterion to cover a broader class of models and apply our extension to finding testable constraints implied by the model. We then show that any semi-Markovian linear model can be recursively decomposed into simpler sub-models, resulting in improved identification and constraint discovery power. Finally, we show that, unlike the existing methods developed for linear models, the resulting method subsumes the identification and constraint discovery algorithms for non-parametric models.

1 Introduction

Many researchers, particularly in economics, psychology, and the social sciences, use linear structural equation models (SEMs) to describe the causal and statistical relationships between a set of variables, predict the effects of interventions and policies, and to estimate parameters of interest. When modeling using linear SEMs, researchers typically specify the causal structure (i.e. exclusion restrictions and independence restrictions between error terms) from domain knowledge, leaving the structural coefficients (representing the strength of the causal relationships) as free parameters to be estimated from data. If these coefficients are known, then total effects, direct effects, and counterfactuals can be computed from them directly (Balke and Pearl, 1994). However, in some cases, the causal assumptions embedded in the model are not enough to uniquely determine one or more coefficients from the probability distribution, and these coefficients therefore cannot be estimated using data. In such cases, we say that the coefficient is not identified or not identifiable1.
In other cases, a coefficient may be overidentified in addition to being identified, meaning that there are at least two minimal sets of logically independent assumptions in the model that are sufficient for identifying a coefficient, and the identified expressions for the coefficient are distinct functions of the covariance matrix (Pearl, 2004). As a result, the model imposes a testable constraint on the probability distribution that the two (or more) identified expressions for the coefficient are equal.

As compact and transparent representations of the model's structure, causal graphs provide a convenient tool to aid in the identification of coefficients. First utilized as a causal inference tool by Wright (1921), graphs have more recently been applied to identify causal effects in non-parametric causal models (Pearl, 2009) and enabled the development of causal effect identification algorithms that are complete for non-parametric models (Huang and Valtorta, 2006; Shpitser and Pearl, 2006). These algorithms can be applied to the identification of coefficients in linear SEMs by identifying non-parametric direct effects, which are closely related to structural coefficients (Tian, 2005; Chen and Pearl, 2014). Algorithms designed specifically for the identification of linear SEMs were developed by Brito and Pearl (2002), Brito (2004), Tian (2005, 2007, 2009), Foygel et al. (2012), and Chen et al. (2014).

1We will also use the term “identified” with respect to individual variables and the model as a whole.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Graphs have also proven to be valuable tools in the discovery of testable implications. It is well known that conditional independence relationships can be easily read from the causal graph using d-separation (Pearl, 2009), and Kang and Tian (2009) gave a procedure for linear SEMs that enumerates a set of conditional independences that imply all others.
In non-parametric models without latent variables or correlated error terms, these conditional independence constraints represent all of the testable implications of the model (Pearl, 2009). In models with latent variables and/or correlated error terms, there may be additional constraints implied by the model. These non-independence constraints, often called Verma constraints, were first noted by Verma and Pearl (1990), and Tian and Pearl (2002b) and Shpitser and Pearl (2008) developed graphical algorithms for systematically discovering such constraints in non-parametric models. In the case of linear models, Chen et al. (2014) applied their aforementioned identification method to the discovery of overidentifying constraints, which in some cases are equivalent to the non-parametric constraints enumerated in Tian and Pearl (2002b) and Shpitser and Pearl (2008). Surprisingly, naively applying algorithms designed for non-parametric models to linear models enables the identification of coefficients and constraints that the aforementioned methods developed for linear models are unable to obtain, despite the latter utilizing the additional assumption of linearity.

In this paper, we first extend the half-trek identification method of Foygel et al. (2012) and apply it to the discovery of half-trek constraints, which generalize the overidentifying constraints given in Chen et al. (2014). Our extensions can be applied to Markovian, semi-Markovian, and non-Markovian models. We then demonstrate how recursive c-component decomposition, which was first utilized in identification algorithms developed for non-parametric models (Tian, 2002; Huang and Valtorta, 2006; Shpitser and Pearl, 2006), can be incorporated into our linear identification and constraint discovery methods for Markovian and semi-Markovian models. We show that doing so allows the identification of additional models and constraints.
Further, we will demonstrate that, unlike existing algorithms, our method subsumes the aforementioned identification and constraint discovery methods developed for non-parametric models when applied to linear SEMs.

2 Preliminaries

A linear structural equation model consists of a set of equations of the form,

X = \Lambda X + \epsilon,

where X = [x_1, ..., x_n]^T is a vector containing the model variables, Λ is a matrix containing the coefficients of the model, which convey the strength of the causal relationships, and ε = [ε_1, ..., ε_n]^T is a vector of error terms, which represents omitted or latent variables. The matrix Λ contains zeroes on the diagonal, and Λij = 0 whenever xi is not a cause of xj. The error terms are normally distributed random variables and induce the probability distribution over the model variables. The covariance matrix of X will be denoted by Σ and the covariance matrix over the error terms, ε, by Ω.

An instantiation of a model M is an assignment of values to the model parameters (i.e. Λ and the non-zero elements of Ω). For a given instantiation mi, let Σ(mi) denote the covariance matrix implied by the model and λk(mi) be the value of coefficient λk.

Definition 1. A coefficient, λk, is identified if for any two instantiations of the model, mi and mj, we have λk(mi) = λk(mj) whenever Σ(mi) = Σ(mj).

In other words, λk is identified if it can be uniquely determined from the covariance matrix, Σ. Now, we define when a structural coefficient, λk, is overidentified.

Definition 2. (Pearl, 2004) A coefficient, λk, is overidentified if there are two or more distinct sets of logically independent assumptions in M such that
(i) each set is sufficient for deriving λk as a function of Σ, λk = f(Σ),
(ii) each set induces a distinct function λk = f(Σ), and
(iii) each assumption set is minimal, that is, no proper subset of those assumptions is sufficient for the derivation of λk.
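To make these definitions concrete, the following sketch computes the covariance matrix implied by a small instantiation and recovers identified coefficients from it. We follow the convention above that Λij carries the edge xi → xj, so that the implied covariance is Σ = (I − Λ)^{-T} Ω (I − Λ)^{-1}; the chain model and parameter values are our own illustration:

```python
import numpy as np

def implied_cov(Lam, Omega):
    """Covariance implied by a linear SEM in which Lam[i, j] is the
    coefficient on the edge x_i -> x_j and Omega = Cov(eps)."""
    n = Lam.shape[0]
    A = np.linalg.inv(np.eye(n) - Lam.T)   # X = A @ eps
    return A @ Omega @ A.T

# Chain x1 -> x2 -> x3 with coefficients a = 0.7 and b = -1.2,
# and independent unit-variance error terms (Omega = I).
a, b = 0.7, -1.2
Lam = np.zeros((3, 3))
Lam[0, 1], Lam[1, 2] = a, b
Sigma = implied_cov(Lam, np.eye(3))

# In this Markovian model both coefficients are identified from Sigma
# by simple regression, so every instantiation with the same Sigma
# yields the same coefficient values:
a_hat = Sigma[0, 1] / Sigma[0, 0]
b_hat = Sigma[1, 2] / Sigma[1, 1]
```

Running this recovers a_hat = 0.7 and b_hat = -1.2 exactly, matching Definition 1: the coefficients are unique functions of Σ.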
The causal graph or path diagram of an SEM is a graph, G = (V, D, B), where V are vertices or nodes, D directed edges, and B bidirected edges. The vertices represent model variables. Directed edges represent the direction of causality, and for each coefficient Λij ≠ 0, an edge is drawn from xi to xj. Each directed edge, therefore, is associated with a coefficient in the SEM, which we will often refer to as its structural coefficient. The error terms, εi, are not represented in the graph. However, a bidirected edge between two variables indicates that their corresponding error terms may be statistically dependent, while the lack of a bidirected edge indicates that the error terms are independent. When the causal graph is acyclic without bidirected edges, then we say that the model is Markovian. Graphs with bidirected edges are non-Markovian, while acyclic graphs with bidirected edges are additionally called semi-Markovian. We will use standard graph terminology with Pa(y) denoting the parents of y, Anc(y) denoting the ancestors of y, De(y) denoting the descendants of y, and Sib(y) denoting the siblings of y, the variables that are connected to y via a bidirected edge. He(E) denotes the heads of a set of directed edges, E, while Ta(E) denotes the tails. Additionally, for a node v, the set of edges for which He(E) = v is denoted Inc(v). We will also utilize d-separation (Pearl, 2009). Finally, we establish a couple of preliminary definitions around half-treks. These definitions and illustrative examples can also be found in Foygel et al. (2012) and Chen et al. (2014).

Definition 3. (Foygel et al., 2012) A half-trek, π, from x to y is a path from x to y that either begins with a bidirected arc and then continues with directed edges towards y or is simply a directed path from x to y.

We will denote the set of nodes that are reachable by half-trek from v by htr(v).

Definition 4.
(Foygel et al., 2012) For any half-trek, π, let Right(π) be the set of vertices in π that have an outgoing directed edge in π (as opposed to a bidirected edge), together with the last node in the trek. In other words, if the trek is a directed path then every node in the path is a member of Right(π). If the trek begins with a bidirected edge then every node other than the first node is a member of Right(π).

Definition 5. (Foygel et al., 2012) A system of half-treks, π1, ..., πn, has no sided intersection if for all πi, πj ∈ {π1, ..., πn} such that πi ≠ πj, Right(πi) ∩ Right(πj) = ∅.

Definition 6. (Chen et al., 2014) For an arbitrary variable, v, let Pa1, Pa2, ..., Pak be the unique partition of Pa(v) such that any two parents are placed in the same subset, Pai, whenever they are connected by an unblocked path (given the empty set). A connected edge set with head v is a set of directed edges from Pai to v for some i ∈ {1, 2, ..., k}.

3 General Half-Trek Criterion

The half-trek criterion is a graphical condition that can be used to determine the identifiability of recursive and non-recursive linear models (Foygel et al., 2012). Foygel et al. (2012) use the half-trek criterion to identify the model variables one at a time, where each identified variable may be able to aid in the identification of other variables. If any variable is not identifiable using the half-trek criterion, then their algorithm returns that the model is not HTC-identifiable. Otherwise the algorithm returns that the model is identifiable. Their algorithm subsumes the earlier methods of Brito and Pearl (2002) and Brito (2004). In this section, we extend the half-trek criterion to allow the identification of arbitrary subsets of edges belonging to a variable. As a result, our algorithm can be utilized to identify as many coefficients as possible, even when the model is not identified. Additionally, this extension improves our ability to identify entire models, as we will show.

Definition 7.
(General Half-Trek Criterion) Let E be a set of directed edges sharing a single head y. A set of variables Z satisfies the general half-trek criterion with respect to E, if
(i) |Z| = |E|,
(ii) Z ∩ (y ∪ Sib(y)) = ∅,
(iii) there is a system of half-treks with no sided intersection from Z to Ta(E), and
(iv) (Pa(y) \ Ta(E)) ∩ htr(Z) = ∅.

A set of directed edges, E, sharing a head y is identifiable if there exists a set, ZE, that satisfies the general half-trek criterion (g-HTC) with respect to E, and ZE consists only of “allowed” nodes. Intuitively, a node z is allowed if Ezy is identified or empty, where Ezy ⊆ Inc(z) is the set of edges belonging to z that lie on half-treks from y to z or lie on unblocked paths (given the empty set) between z and Pa(y) \ Ta(E).2 The following definition formalizes this notion.

Figure 1: The above model is identified using the g-HTC but not the HTC.

Definition 8. A node, z, is g-HT allowed (or simply allowed) for directed edges E with head y if Ezy = ∅ or there exist sequences of sets of nodes, (Z1, ..., Zk), and sets of edges, (E1, ..., Ek), with Ezy ⊆ E1 ∪ ... ∪ Ek such that
(i) Zi satisfies the g-HTC with respect to Ei for all i ∈ {1, ..., k},
(ii) EZ1y1 = ∅, where yi = He(Ei) for all i ∈ {1, ..., k}, and
(iii) EZiyi ⊆ (E1 ∪ ... ∪ Ei−1) for all i ∈ {1, ..., k}.

When a set of allowed nodes, ZE, satisfies the g-HTC for a set of edges E, then we will say that ZE is a g-HT admissible set for E.

Theorem 1. If a g-HT admissible set for directed edges Ey with head y exists then Ey is g-HT identifiable. Further, let ZEy = {z1, ..., zk} be a g-HT admissible set for Ey, Ta(Ey) = {p1, ..., pk}, and Σ be the covariance matrix of the model variables. Define A as

A_{ij} = \begin{cases} [(I - \Lambda)^T \Sigma]_{z_i p_j}, & E_{z_i y} \neq \emptyset \\ \Sigma_{z_i p_j}, & E_{z_i y} = \emptyset \end{cases}   (1)

and b as

b_i = \begin{cases} [(I - \Lambda)^T \Sigma]_{z_i y}, & E_{z_i y} \neq \emptyset \\ \Sigma_{z_i y}, & E_{z_i y} = \emptyset \end{cases}   (2)

Then A is an invertible matrix and A \cdot \Lambda_{Ta(E_y), y} = b.

Proof. See Appendix for proofs of all theorems and lemmas.
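A minimal instance of Theorem 1 is the instrumental-variable model z → x → y with a bidirected edge x ↔ y. Taking E = {x → y} and Z = {z}, z satisfies the g-HTC for E and Ezy = ∅, so the system A · Λ = b reduces to the 1 × 1 equation Σzx λ = Σzy. The sketch below verifies this numerically; the parameter values are our own illustration:

```python
import numpy as np

# Instrumental-variable model: z -> x -> y with x <-> y.
# Ordering (z, x, y); Lam[i, j] is the coefficient on x_i -> x_j.
alpha, lam, c = 0.8, 1.5, 0.5   # c = Cov(eps_x, eps_y), the confounding
Lam = np.zeros((3, 3))
Lam[0, 1], Lam[1, 2] = alpha, lam
Omega = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, c],
                  [0.0, c,   1.0]])
A = np.linalg.inv(np.eye(3) - Lam.T)   # X = A @ eps
Sigma = A @ Omega @ A.T

# Z = {z}: z is not a sibling of y and reaches Ta(E) = {x} by a
# directed path, so Theorem 1 gives Sigma_zx * lam = Sigma_zy.
lam_hat = Sigma[0, 2] / Sigma[0, 1]

# By contrast, a naive regression of y on x is biased by the
# correlated error terms:
lam_naive = Sigma[1, 2] / Sigma[1, 1]
```

Here lam_hat recovers the true value 1.5 exactly, while lam_naive is inflated by the confounding term c, illustrating why the half-trek machinery is needed in semi-Markovian models.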
The g-HTC improves upon the HTC because subsets of a variable's coefficients may be identifiable even when the variable is not. By identifying subsets of a variable's coefficients, we not only allow the identification of as many coefficients as possible in unidentified models, but we are also able to identify additional models as a whole. For example, Figure 1 is not identifiable using the HTC. In order to identify Y, Z2 needs to be identified first, as it is the only variable with a half-trek to X2 that is not a sibling of Y. However, to identify Z2, either Y or W1 needs to be identified. Finally, to identify W1, Y needs to be identified. This cycle implies that the model is not HTC-identifiable. It is, however, g-HTC identifiable, since the g-HTC allows d to be identified independently of f, using {Z1} as a g-HT admissible set, which in turn allows {Y} to be a g-HT admissible set for W1's coefficient, a.

Finding a g-HT admissible set for directed edges, E, with head, y, from a set of allowed nodes, AE, can be accomplished by utilizing the max-flow algorithm described in Chen et al. (2014)3, which we call MaxFlow(G, E, AE). This algorithm returns a maximal set of allowed nodes that satisfies (ii)-(iv) of the g-HTC. In some cases, there may be no g-HT admissible set for E′ but there may be one for E ⊂ E′. In other cases, there may be no g-HT admissible set of variables for a set of edges E but there may be a g-HT admissible set of variables for E′ with E ⊂ E′.

2We will continue to use the EZy notation and allow Z to be a set of nodes.
3Brito (2004) utilized a similar max-flow construction in his identification algorithm.

Figure 2: (a) The graph is not identified using the g-HTC and cannot be decomposed (b) After removing V6 we are able to decompose the graph (c) Graph for c-component, {V2, V3, V5} (d) Graph for c-component, {V1, V4}
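The no-sided-intersection condition of Definition 5 is mechanical to check once Right(π) is known. A minimal sketch (the trek encoding, a node list plus a flag marking whether the trek opens with a bidirected edge, is our own):

```python
def right(trek_nodes, starts_with_bidirected_edge):
    """Right(pi) per Foygel et al. (2012): every node with an outgoing
    directed edge in the trek, union the last node.  For a directed path
    that is every node; if the trek opens with a bidirected edge, every
    node except the first."""
    if starts_with_bidirected_edge:
        return set(trek_nodes[1:])
    return set(trek_nodes)

def no_sided_intersection(treks):
    """Definition 5: the Right-sets of the system are pairwise disjoint.
    Each trek is a (node_list, starts_with_bidirected_edge) pair."""
    seen = set()
    for nodes, bidir in treks:
        r = right(nodes, bidir)
        if seen & r:
            return False
        seen |= r
    return True
```

For example, the system {z1 → x, z2 ↔ w → x} fails the condition because both treks place x in their Right-sets.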
As a result, if a g-HT admissible set does not exist for Ey, where Ey = Inc(y) for some node y, we may have to check whether such a set exists for every subset of Ey in order to identify as many coefficients in Ey as possible. This process can be somewhat simplified by noting that if E is a connected edge set with no g-HT admissible set, then no superset E′ has a g-HT admissible set. An algorithm that utilizes the g-HTC and Theorem 1 to identify as many coefficients as possible in recursive or non-recursive linear SEMs is given in the Appendix. Since we may need to check the identifiability of all subsets of a node's edges, the algorithm runs in polynomial time if the degree of each node is bounded.

4 Generalizing Overidentifying Constraints

Chen et al. (2014) discovered overidentifying constraints by finding two HT-admissible sets for a given connected edge set. When two such sets exist, we obtain two distinct expressions for the identified coefficients, and equating the two expressions gives the overidentifying constraint. However, we may be able to obtain constraints even when |ZE| < |E| and E is not identified. The algorithm MaxFlow returns a maximal set, ZE, for which the equations A · Λ_{Ta(E), y} = b are linearly independent, regardless of whether |ZE| = |E| and E is identified. Therefore, if we are able to find an allowed node w that satisfies the conditions below, then the equation a_w · Λ_{Ta(E), y} = b_w will be a linear combination of the equations A · Λ_{Ta(E), y} = b.

Theorem 2. Let ZE be a set of maximal size that satisfies conditions (ii)-(iv) of the g-HTC for a set of edges, E, with head y. If there exists a node w such that there is a half-trek from w to Ta(E), w ∉ ({y} ∪ Sib(y)), and w is g-HT allowed for E, then we obtain the equality constraint a_w A_right^{-1} b = b_w, where A_right^{-1} is the right inverse of A.

We will call these generalized overidentifying constraints half-trek constraints, or HT-constraints.
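For intuition, the smallest overidentified case (again our own example, not the paper's) is a model with two instruments z1, z2 for the single edge x → y. Taking Z_E = {z1} identifies the edge, and the extra allowed node w = z2 yields the Theorem 2 constraint a_w A_right^{-1} b = b_w, which here reduces to Σ_{z2,x} Σ_{z1,y} / Σ_{z1,x} = Σ_{z2,y}:

```python
import numpy as np

# Variable order: z1, z2, x, y; two instruments for the single edge x -> y.
Lam = np.zeros((4, 4))
Lam[0, 2] = 0.7   # z1 -> x
Lam[1, 2] = 0.5   # z2 -> x
Lam[2, 3] = 2.0   # x -> y
Omega = np.eye(4)
Omega[2, 3] = Omega[3, 2] = 0.3   # x <-> y confounding

M = np.linalg.inv(np.eye(4) - Lam)
Sigma = M.T @ Omega @ M          # implied covariance matrix

# Z_E = {z1}: A = Sigma_{z1,x} (1x1 system), b = Sigma_{z1,y}.
A = np.array([[Sigma[0, 2]]])
b = np.array([Sigma[0, 3]])
A_right_inv = A.T @ np.linalg.inv(A @ A.T)   # right inverse (here just 1/A)

# Extra allowed node w = z2: a_w = Sigma_{z2,x}, b_w = Sigma_{z2,y}.
a_w = np.array([Sigma[1, 2]])
b_w = Sigma[1, 3]
constraint_gap = float(a_w @ A_right_inv @ b - b_w)   # zero iff constraint holds
```

Because the generating model really does have a single coefficient on x → y, the gap is zero; on data from a model violating the structure it would not be.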
An algorithm that identifies coefficients and finds HT-constraints for a recursive or non-recursive linear SEM is given in the Appendix.

5 Decomposition

Tian showed that the identification problem can be simplified in semi-Markovian linear structural equation models by decomposing the model into sub-models according to its c-components (Tian, 2005). Each coefficient is identifiable if and only if it is identifiable in the sub-model to which it belongs (Tian, 2005). In this section, we show that the c-component decomposition can be applied recursively to the model after marginalizing certain variables. This idea was first used to identify interventional distributions in non-parametric models by Tian (2002) and Tian and Pearl (2002a); adapting this technique to linear models will allow us to identify models that the g-HTC, even coupled with (non-recursive) c-component decomposition, is unable to identify. Further, it ensures the identification of all coefficients identifiable using methods developed for non-parametric models, a guarantee that none of the existing methods developed for linear models satisfy.

The graph in Figure 2a consists of a single c-component, and we are unable to decompose it. As a result, we are able to identify a but no other coefficients using the g-HTC. Moreover, f = ∂/∂v4 E[v5 | do(v6, v4, v3, v2, v1)] is identified using identification methods developed for non-parametric models (e.g. do-calculus) but not using the g-HTC or other methods developed for linear models. However, if we remove v6 from the analysis, then the resulting model can be decomposed. Let M be the model depicted in Figure 2a, P(v) be the distribution induced by M, and M′ be a model that is identical to M except that the equation for v6 is removed. M′ induces the distribution ∫_{v6} P(V) dv6, and its associated graph G′ yields two c-components, as shown in Figure 2b. Now, decomposing G′ according to these c-components yields the sub-models depicted in Figures 2c and 2d.
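Splitting a graph into c-components is just a connected-components computation over the bidirected edges alone. A sketch, with a hypothetical bidirected skeleton chosen to reproduce the two c-components of Figure 2b:

```python
def c_components(nodes, bidirected_edges):
    """Partition `nodes` into c-components: connected components of the
    graph restricted to bidirected edges (directed edges are ignored)."""
    adj = {v: set() for v in nodes}
    for a, b in bidirected_edges:
        adj[a].add(b)
        adj[b].add(a)
    components, seen = [], set()
    for v in nodes:
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:              # iterative DFS over bidirected arcs
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        components.append(comp)
    return components

# Bidirected skeleton assumed for illustration (matching Figure 2b's split):
comps = c_components(['v1', 'v2', 'v3', 'v4', 'v5'],
                     [('v2', 'v5'), ('v3', 'v5'), ('v1', 'v4')])
```

This returns the two components {v2, v3, v5} and {v1, v4}; each is then analyzed in its own sub-model.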
Both of these sub-models are identifiable using the half-trek criterion. Thus, all coefficients other than h have been shown to be identifiable. Returning to the graph prior to removal, depicted in Figure 2a, we are now able to identify h because both v4 and v5 are now allowed nodes for h, and the model is identified4. As a result, we can improve our identification and constraint-discovery algorithm by recursively decomposing, applying the g-HTC and Theorem 2, and removing descendant sets5. Note, however, that we must consider every descendant set for removal. It is possible that removing D1 allows identification of a coefficient while removing a superset D2 with D1 ⊂ D2 does not. Conversely, it is possible that removing D2 allows identification while removing a subset D1 does not.

After recursively decomposing the graph, if some of the removed variables were unidentified, we may be able to identify them by returning to the original graph prior to removal, since we may now have a larger set of allowed nodes. For example, we were able to identify h in Figure 2a by "un-removing" v6 after the other coefficients were identified. In some cases, however, we may need to again recursively decompose and remove descendant sets. As a result, in order to fully exploit the power of decomposition and the g-HTC, we must repeat the recursive decomposition process on the original model until all marginalized nodes are identified or no new coefficients are identified in an iteration.

Clearly, recursive decomposition also aids in the discovery of HT-constraints, in the same way that it aids in the identification of coefficients using the g-HTC. However, note that recursive decomposition may also introduce additional d-separation constraints. Prior to decomposition, if a node Z is d-separated from a node V then we trivially obtain the constraint that ΣZV = 0. However, in some cases, Z may become d-separated from V only after decomposition.
In this case, the independence constraint on the covariance matrix of the decomposed c-component corresponds to a non-conditional-independence constraint in the original joint distribution P(V). It is for this reason that we output independence constraints in Algorithm 2 (see Appendix). For example, consider the graph depicted in Figure 3a. Theorem 2 does not yield any constraints for the edges of V7. However, after decomposing the graph we obtain the c-component for {V2, V5, V7}, shown in Figure 3b. In this graph, V1 is d-separated from V7, yielding a non-independence constraint in the original model.

We can systematically identify coefficients and HT-constraints using recursive c-component decomposition by repeating the following steps for the model's graph G until the model has been identified or no new coefficients are identified in an iteration:

(i) Decompose the graph into c-components, {Si}.
(ii) For each c-component, utilize the g-HTC and Theorems 1 and 2 to identify coefficients and find HT-constraints.
(iii) For each descendant set, marginalize the descendant set and repeat steps (i)-(iii) until all variables have been marginalized.

4While v4 and v5 are technically not allowed according to Definition 8, they can be used in g-HT admissible sets to identify h using Theorem 1 since their coefficients have been identified.
5Only the removal of descendant sets can break up c-components. For example, removing {v2} from Figure 2a does not break the c-component, because removing v2 would relegate its influence to the error term of its child, v3. As a result, the graph of the resulting model would include a bidirected arc between v3 and v6, and we would still have a single c-component.
Figure 3: (a) V1 cannot be d-separated from V7 (b) V1 is d-separated from V7 in the graph of the c-component, {V2, V5, V7}

If a coefficient α can be identified using the above method (see also Algorithm 3 in the Appendix, which utilizes recursive decomposition to identify coefficients and output HT-constraints), then we will say that α is g-HTC identifiable. We now show that any direct effect identifiable using non-parametric methods is also g-HTC identifiable.

Theorem 3. Let M be a linear SEM with variables V. Let M′ be a non-parametric SEM with structure identical to M. If the direct effect of x on y for x, y ∈ V is identified in M′, then the coefficient Λxy in M is g-HTC identifiable and can be identified using Algorithm 3 (see Appendix).

6 Non-Parametric Verma Constraints

Tian and Pearl (2002b) and Shpitser and Pearl (2008) provided algorithms for discovering Verma constraints in recursive, non-parametric models. In this section, we will show that the constraints obtained by the above method and Algorithm 3 (see Appendix) subsume the constraints discovered by both methods when applied to linear models. First, we will show that the constraints identified in (Tian and Pearl, 2002b), which we call Q-constraints, are subsumed by HT-constraints. Second, we will show that the constraints given by Shpitser and Pearl (2008), called dormant independences, are in fact equivalent to the constraints given by Tian and Pearl (2002b) for linear models. As a result, both dormant independences and Q-constraints are subsumed by HT-constraints.

6.1 Q-Constraints

We refer to the constraints enumerated in (Tian and Pearl, 2002b) as Q-constraints, since they are discovered by identifying Q-factors, which are defined below.

Definition 9. For any subset, S ⊆ V, the Q-factor, QS, is given by

QS = ∫_{ϵS} ∏_{i | Vi ∈ S} P(vi | pai, ϵi) P(ϵS) dϵS,  (3)

where ϵS contains the error terms of the variables in S.
A Q-factor, QS, is identifiable whenever S is a c-component (Tian and Pearl, 2002a).

Lemma 1. (Tian and Pearl, 2002a) Let {v1, ..., vn} be sorted topologically, S be a c-component, V(i) = {v1, ..., vi}, and V(0) = ∅. Then QS can be computed as QS = ∏_{i | vi ∈ S} P(vi | V(i−1)).

For example, consider again Figure 2b. We have that Q1 = P(v1)P(v4|v3, v2, v1) and Q2 = P(v2|v1)P(v3|v2, v1)P(v5|v4, v3, v2, v1). A Q-factor can also be identified by marginalizing out descendant sets (Tian and Pearl, 2002a). Suppose that QS is identified and D is a descendant set in GS; then

QS\D = Σ_D QS.  (4)

If the marginalization over D yields additional c-components in the marginalized graph, then we can again compute each of them from QS\D (Tian and Pearl, 2002b).

Figure 4: The above graph induces the Verma constraint that Q[v4] is not a function of v1, and equivalently, v4 ⊥ v1 | do(v3).

Tian's method recursively computes the Q-factors associated with c-components, marginalizes descendant sets in the graph for the computed Q-factor, and again computes Q-factors associated with c-components in the marginalized graph. The Q-constraint is obtained in the following way. By its definition in Equation 3, a Q-factor QS is a function of Pa(S) only. However, the equivalent expression given by Lemma 1 and Equation 4 may be a function of additional variables. For example, in Figure 4, {v2, v4} is a c-component, so we can identify Qv2v4 = P(v4|v3, v2, v1)P(v2|v1). The decomposition also makes v2 a leaf node in Gv2v4. As a result, we can identify Qv4 = ∫_{v2} P(v4|v3, v2, v1)P(v2|v1) dv2. Since v1 is not a parent of v4 in Gv4, we obtain the constraint that Qv4 = ∫_{v2} P(v4|v3, v2, v1)P(v2|v1) dv2 is not a function of v1.

Theorem 4. Any Q-constraint, QS ⊥ Z, in a linear SEM has an equivalent set of HT-constraints that can be discovered using Algorithm 3 (see Appendix).
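The Q-constraint derived for Figure 4 can be checked numerically. Below we parametrize the confounded model with binary variables and an explicit latent u standing in for the bidirected arc v2 ↔ v4 (all probability tables are invented for illustration), and verify that Σ_{v2} P(v4|v3, v2, v1) P(v2|v1) does not depend on v1:

```python
import numpy as np

# Binary CPTs; axis order noted in each comment.  All tables are invented.
p_v1 = np.array([0.4, 0.6])                      # P(v1)
p_u = np.array([0.7, 0.3])                       # P(u): latent behind v2 <-> v4
p_v2 = np.array([[[0.8, 0.2], [0.3, 0.7]],
                 [[0.6, 0.4], [0.1, 0.9]]])      # P(v2 | v1, u)
p_v3 = np.array([[0.75, 0.25], [0.2, 0.8]])      # P(v3 | v2)
p_v4 = np.array([[[0.9, 0.1], [0.4, 0.6]],
                 [[0.5, 0.5], [0.15, 0.85]]])    # P(v4 | v3, u)

# Full joint over (v1, u, v2, v3, v4), then marginalize the latent u.
joint = np.einsum('a,b,abc,cd,dbe->abcde', p_v1, p_u, p_v2, p_v3, p_v4)
obs = joint.sum(axis=1)                          # observed P(v1, v2, v3, v4)

p2_given_1 = obs.sum(axis=(2, 3)) / p_v1[:, None]        # P(v2 | v1)
p4_given_321 = obs / obs.sum(axis=3, keepdims=True)      # P(v4 | v1, v2, v3)

# Q[v1, v3, v4] = sum_{v2} P(v4 | v3, v2, v1) P(v2 | v1)
Q = np.einsum('acde,ac->ade', p4_given_321, p2_given_1)
```

Even though the tables were chosen arbitrarily, Q[0] and Q[1] agree, because the graph structure alone forces the Verma constraint.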
6.2 Dormant Independences

Dormant independences have a natural interpretation as independence and conditional independence constraints within identifiable interventional distributions (Shpitser and Pearl, 2008). For example, in Figure 4, the distribution after intervention on v3 can be represented graphically by removing the edge from v2 to v3, since v3 is no longer a function of v2 but is instead a constant. In the resulting graph, v4 is d-separated from v1, implying that v4 is independent of v1 in the distribution P(v4, v2, v1 | do(v3)). In other words, P(v4 | do(v3), v1) = P(v4 | do(v3)). Now, it is not hard to show that P(v4 | v1, do(v3)) is identifiable and equal to Σ_{v2} P(v4|v3, v2, v1)P(v2|v1), and we obtain the constraint that Σ_{v2} P(v4|v3, v2, v1)P(v2|v1) is not a function of v1, which is exactly the Q-constraint we obtained above. It turns out that dormant independences among singletons and Q-constraints are equivalent, as stated by the following lemma.

Lemma 2. Any dormant independence, x ⫫ y | w, do(Z), with x and y singletons has an equivalent Q-constraint, and vice versa.

Since pairwise independence implies independence in normal distributions, Lemma 2 and Theorem 4 imply the following theorem.

Theorem 5. Any dormant independence among sets, x ⫫ y | W, do(Z), in a linear SEM has an equivalent set of HT-constraints that can be discovered by incorporating recursive c-component decomposition with Algorithm 3 (see Appendix).

7 Conclusion

In this paper, we extend the half-trek criterion (Foygel et al., 2012) and generalize the notion of overidentification to discover constraints using the generalized half-trek criterion, even when the coefficients are not identified. We then incorporate recursive c-component decomposition and show that the resulting identification method is able to identify more models and constraints than the existing linear and non-parametric algorithms.
Finally, we note that while we were preparing this manuscript for submission, Drton and Weihs (2016) independently introduced an idea similar to the recursive decomposition discussed in this paper, which they call ancestor decomposition. While ancestor decomposition is more efficient, recursive decomposition is more general in that it enables the identification of a larger set of coefficients.

8 Acknowledgments

I would like to thank Jin Tian and Judea Pearl for helpful comments and discussions. This research was supported in parts by grants from NSF #IIS-1302448 and #IIS-1527490 and ONR #N00014-13-1-0153 and #N00014-13-1-0153.

References

BALKE, A. and PEARL, J. (1994). Probabilistic evaluation of counterfactual queries. In Proceedings of the Twelfth National Conference on Artificial Intelligence, vol. I. MIT Press, Menlo Park, CA, 230–237.

BRITO, C. (2004). Graphical methods for identification in structural equation models. Ph.D. thesis, Computer Science Department, University of California, Los Angeles, CA. URL <http://ftp.cs.ucla.edu/pub/stat_ser/r314.pdf>.

BRITO, C. and PEARL, J. (2002). Generalized instrumental variables. In Uncertainty in Artificial Intelligence, Proceedings of the Eighteenth Conference (A. Darwiche and N. Friedman, eds.). Morgan Kaufmann, San Francisco, 85–93.

CHEN, B. and PEARL, J. (2014). Graphical tools for linear structural equation modeling. Tech. Rep. R-432, <http://ftp.cs.ucla.edu/pub/stat_ser/r432.pdf>, Department of Computer Science, University of California, Los Angeles, CA. Forthcoming, Psychometrika.

CHEN, B., TIAN, J. and PEARL, J. (2014). Testable implications of linear structural equation models. In Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (C. E. Brodley and P. Stone, eds.). AAAI Press, Palo Alto, CA. <http://ftp.cs.ucla.edu/pub/stat_ser/r428-reprint.pdf>.

DRTON, M. and WEIHS, L. (2016). Generic identifiability of linear structural equation models by ancestor decomposition.
Scandinavian Journal of Statistics. doi:10.1111/sjos.12227. URL http://dx.doi.org/10.1111/sjos.12227.

FOYGEL, R., DRAISMA, J. and DRTON, M. (2012). Half-trek criterion for generic identifiability of linear structural equation models. The Annals of Statistics 40 1682–1713.

HUANG, Y. and VALTORTA, M. (2006). Pearl's calculus of intervention is complete. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (R. Dechter and T. Richardson, eds.). AUAI Press, Corvallis, OR, 217–224.

KANG, C. and TIAN, J. (2009). Markov properties for linear causal models with correlated errors. The Journal of Machine Learning Research 10 41–70.

PEARL, J. (2004). Robustness of causal claims. In Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (M. Chickering and J. Halpern, eds.). AUAI Press, Arlington, VA, 446–453.

PEARL, J. (2009). Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press, New York.

SHPITSER, I. and PEARL, J. (2006). Identification of conditional interventional distributions. In Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (R. Dechter and T. Richardson, eds.). AUAI Press, Corvallis, OR, 437–444.

SHPITSER, I. and PEARL, J. (2008). Dormant independence. In Proceedings of the Twenty-Third Conference on Artificial Intelligence. AAAI Press, Menlo Park, CA, 1081–1087.

TIAN, J. (2002). Studies in Causal Reasoning and Learning. Ph.D. thesis, Computer Science Department, University of California, Los Angeles, CA.

TIAN, J. (2005). Identifying direct causal effects in linear models. In Proceedings of the National Conference on Artificial Intelligence, vol. 20. AAAI Press/The MIT Press, Menlo Park, CA.

TIAN, J. (2007). A criterion for parameter identification in structural equation models. In Proceedings of the Twenty-Third Conference on Uncertainty in Artificial Intelligence (UAI-07).
AUAI Press, Corvallis, Oregon.

TIAN, J. (2009). Parameter identification in a class of linear structural equation models. In Proceedings of the Twenty-First International Joint Conference on Artificial Intelligence (IJCAI-09).

TIAN, J. and PEARL, J. (2002a). A general identification condition for causal effects. In Proceedings of the Eighteenth National Conference on Artificial Intelligence. AAAI Press/The MIT Press, Menlo Park, CA, 567–573.

TIAN, J. and PEARL, J. (2002b). On the testable implications of causal models with hidden variables. In Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence (A. Darwiche and N. Friedman, eds.). Morgan Kaufmann, San Francisco, CA, 519–527.

VERMA, T. and PEARL, J. (1990). Equivalence and synthesis of causal models. In Proceedings of the Sixth Conference on Uncertainty in Artificial Intelligence. Cambridge, MA. Also in P. Bonissone, M. Henrion, L. N. Kanal and J. F. Lemmer (Eds.), Uncertainty in Artificial Intelligence 6, Elsevier Science Publishers, B.V., 255–268, 1991.

WRIGHT, S. (1921). Correlation and causation. Journal of Agricultural Research 20 557–585.
An Architecture for Deep, Hierarchical Generative Models

Philip Bachman
phil.bachman@maluuba.com
Maluuba Research

Abstract

We present an architecture which lets us train deep, directed generative models with many layers of latent variables. We include deterministic paths between all latent variables and the generated output, and provide a richer set of connections between computations for inference and generation, which enables more effective communication of information throughout the model during training. To improve performance on natural images, we incorporate a lightweight autoregressive model in the reconstruction distribution. These techniques permit end-to-end training of models with 10+ layers of latent variables. Experiments show that our approach achieves state-of-the-art performance on standard image modelling benchmarks, can expose latent class structure in the absence of label information, and can provide convincing imputations of occluded regions in natural images.

1 Introduction

Training deep, directed generative models with many layers of latent variables poses a challenging problem. Each layer of latent variables introduces variance into gradient estimation which, given current training methods, tends to impede the flow of subtle information about sophisticated structure in the target distribution. Yet, for a generative model to learn effectively, this information needs to propagate from the terminal end of a stochastic computation graph, back to latent variables whose effect on the generated data may be obscured by many intervening sampling steps. One approach to solving this problem is to use recurrent, sequential stochastic generative processes with strong interactions between their inference and generation mechanisms, as introduced in the DRAW model of Gregor et al. [5] and explored further in [1, 19, 22]. Another effective technique is to use lateral connections for merging bottom-up and top-down information in encoder/decoder type models.
This approach is exemplified by the Ladder Network of Rasmus et al. [17], and has been developed further for, e.g., generative modelling and image processing in [8, 23]. Models like DRAW owe much of their success to two key properties: they decompose the process of generating data into many small steps of iterative refinement, and their structure includes direct deterministic paths between all latent variables and the final output. In parallel, models with lateral connections permit different components of a model to operate at well-separated levels of abstraction, thus generating a hierarchy of representations. This property is not explicitly shared by DRAW-like models, which typically reuse the same set of latent variables throughout the generative process. This makes it difficult for any of the latent variables, or steps in the generative process, to individually capture abstract properties of the data. We distinguish between the depth used by DRAW and the depth made possible by lateral connections by describing them respectively as sequential depth and hierarchical depth. These two types of depth are complementary, rather than competing. Our contributions focus on increasing hierarchical depth without forfeiting trainability. We combine the benefits of DRAW-like models and Ladder Networks by developing a class of models which we call Matryoshka Networks (abbr. MatNets), due to their deeply nested structure. In Section 2, we present the general architecture of a MatNet. In the MatNet architecture we:

• Combine the ability of, e.g., LapGANs [3] and Diffusion Nets [21] to learn hierarchically-deep generative models with the power of jointly-trained inference/generation1.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
• Use lateral connections, shortcut connections, and residual connections [7] to provide direct paths through the inference network to the latent variables, and from the latent variables to the generated output; this makes hierarchically-deep models easily trainable in practice.

Section 2 also presents several extensions to the core architecture, including: mixture-based prior distributions, a method for regularizing inference to prevent overfitting in practical settings, and a method for modelling the reconstruction distribution p(x|z) with a lightweight, local autoregressive model. In Section 3, we present experiments showing that MatNets offer state-of-the-art performance on standard benchmarks for modelling simple images and compelling qualitative performance on challenging imputation problems for natural images. Finally, in Section 4 we provide further discussion of related work and promising directions for future work.

2 The Matryoshka Network Architecture

Matryoshka Networks combine three components: a top-down network (abbr. TD network), a bottom-up network (abbr. BU network), and a set of merge modules which merge information from the BU and TD networks. In the context of stochastic variational inference [10], all three components contribute to the approximate posterior distributions used during inference/training, but only the TD network participates in generation. We first describe the MatNet model formally, and then provide a procedural description of its three components. The full architecture is summarized in Fig. 1.

Figure 1: (a) The overall structure of a Matryoshka Network, and how information flows through the network during training. First, we perform a feedforward pass through the bottom-up network to generate a sequence of BU states.
Next, we sample the initial latent variables conditioned on the final BU state. We then begin a stochastic feedforward pass through the top-down network. Whenever this feedforward pass requires sampling some latent variables, we get the sampling distribution by passing the corresponding TD and BU states through a merge module. This module draws conditional samples of the latent variables via reparametrization [10]. These latent samples are then combined with the current TD state, and the feedforward pass continues. Intuitively, this approach allows the TD network to invert the bottom-up network by tracking back along its intermediate states, and eventually recover its original input. (b) Detailed view of a merge module from the network in (a). This module stacks the relevant BU, TD, and merge states on top of each other, and then passes them through a convolutional residual module, as described in Eqn. 10. The output has three parts: the first provides means for the latent variables, the second provides their log-variances, and the third conveys updated state information to subsequent merge modules.

1A significant downside of LapGANs and Diffusion Nets is that they define their inference mechanisms a priori. This is computationally convenient, but prevents the model from learning abstract representations.

2.1 Formal Description

The distribution p(x) generated by a MatNet is encoded in its top-down network. To model p(x), the TD network decomposes the joint distribution p(x, z) over an observation x and a sequence of latent variables z ≡ {z0, ..., zd} into a sequence of simpler conditional distributions:

p(x) = Σ_{(zd, ..., z0)} p(x|zd, ..., z0) p(zd|zd−1, ..., z0) ... p(zi|zi−1, ..., z0) ... p(z0),  (1)

i.e., we marginalize the joint distribution with respect to the latent variables to get p(x). The TD network is designed so that each conditional p(zi|zi−1, ..., z0) can be truncated to p(zi|h^t_i) using an internal TD state h^t_i. See Eqns. 7/8 in Sec. 2.2 for procedural details.
The distribution q(z|x) used for inference in an unconditional MatNet involves the BU network, TD network, and merge modules. This distribution can be written:

q(zd, ..., z0|x) = q(z0|x) q(z1|z0, x) ... q(zi|zi−1, ..., z0, x) ... q(zd|zd−1, ..., z0, x),  (2)

where each conditional q(zi|zi−1, ..., z0, x) can be truncated to q(zi|h^m_{i+1}) using an internal merge state h^m_{i+1} produced by the ith merge module. See Eqns. 10/11 in Sec. 2.2 for procedural details. MatNets can also be applied to conditional generation problems like inpainting or pixel-wise segmentation. For, e.g., inpainting with known pixels xk and missing pixels xu, the predictive distribution of a conditional MatNet is given by:

p(xu|xk) = Σ_{(zd, ..., z0)} p(xu|zd, ..., z0, xk) p(zd|zd−1, ..., z0, xk) ... p(z1|z0, xk) p(z0|xk).  (3)

Each conditional p(zi|zi−1, ..., z0, xk) can be truncated to p(zi|h^{m:g}_{i+1}), where h^{m:g}_{i+1} indicates state in a merge module belonging to the generator network. Crucially, conditional MatNets include BU networks and merge modules that participate in generation, in addition to the BU networks and merge modules used by both conditional and unconditional MatNets during inference/training. The distribution used for inference in a conditional MatNet is given by:

q(zd, ..., z0|xk, xu) = q(zd|zd−1, ..., z0, xk, xu) ... q(z1|z0, xk, xu) q(z0|xk, xu),  (4)

where each conditional q(zi|zi−1, ..., z0, xk, xu) can be truncated to q(zi|h^{m:i}_{i+1}), where h^{m:i}_{i+1} indicates state in a merge module belonging to the inference network. Note that, in a conditional MatNet, the distributions p(·|·) are not allowed to condition on xu, while the distributions q(·|·) can. MatNets are well-suited to training with Stochastic Gradient Variational Bayes [10].
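The two mechanical ingredients of SGVB, the reparametrized sample z = µ + σ ⊙ ε and the closed-form KL between a diagonal Gaussian and the N(0, I) prior, can be sketched in NumPy (a stand-in illustration, not the paper's implementation):

```python
import numpy as np

def reparametrize(mu, logvar, rng):
    """Sample z ~ N(mu, diag(exp(logvar))) as a deterministic transform of
    parameter-free noise eps ~ N(0, I), so gradients flow through mu/logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def kl_to_standard_normal(mu, logvar):
    """Closed-form KL( N(mu, diag(exp(logvar))) || N(0, I) )."""
    return 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)

rng = np.random.default_rng(0)
z = reparametrize(np.ones(4), np.zeros(4), rng)   # one posterior sample
```

A Monte Carlo estimate of the free-energy bound is then log p(x|z) for such a sample minus the KL term; when the posterior matches the prior the KL term vanishes.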
In SGVB, one maximizes a lower bound on the data log-likelihood based on the variational free energy:

log p(x) ≥ E_{z∼q(z|x)} [log p(x|z)] − KL(q(z|x) || p(z)),  (5)

for which p and q must satisfy a few simple assumptions, and KL(q(z|x) || p(z)) indicates the KL divergence between the inference distribution q(z|x) and the model prior p(z). This bound is tight when the inference distribution matches the true posterior p(z|x) in the model joint distribution p(x, z) = p(x|z)p(z), in our case given by Eqns. 1/3. For brevity, we only explicitly write the free-energy bound for a conditional MatNet, which is:

log p(xu|xk) ≥ E_{q(zd, ..., z0|xk, xu)} [log p(xu|zd, ..., z0, xk)] − KL(q(zd, ..., z0|xk, xu) || p(zd, ..., z0|xk)).  (6)

With SGVB we can optimize the bound in Eqn. 6 using the "reparametrization trick" to allow easy backpropagation through the expectation over z ∼ q(z|xk, xu). See [10, 18] for more details about this technique. The bound for unconditional MatNets is nearly identical; it simply removes xk.

2.2 Procedural Description

Structurally, top-down networks in MatNets comprise sequences of modules in which each module f^t_i receives two inputs: a deterministic top-down state h^t_i from the preceding module f^t_{i−1}, and some latent variables z_i. Module f^t_i produces an updated state h^t_{i+1} = f^t_i(h^t_i, z_i; θ^t), where θ^t indicates the TD network's parameters. By defining the TD modules appropriately, we can reproduce the architectures for LapGANs, Diffusion Nets, and Probabilistic Ladder Networks [23]. Motivated by the success of LapGANs and ResNets [7], we use TD modules in which the latent variables are concatenated with the top-down state, then transformed, after which the transformed values are added back to the top-down state prior to further processing. If the adding occurs immediately before, e.g., a ReLU, then the latent variables can effectively gate the top-down state by knocking particular elements below zero.
This allows each stochastic module in the top-down network to apply small refinements to the output of preceding modules. MatNets thus perform iterative stochastic refinement through hierarchical depth, rather than through sequential depth as in DRAW2. More precisely, the top-down modules in our convolutional MatNets compute:

h^t_{i+1} = lrelu(h^t_i + conv(lrelu(conv([h^t_i; z_i], v^t_i)), w^t_i)),  (7)

where [x; x′] indicates tensor concatenation along the "feature" axis, lrelu(·) indicates the leaky ReLU function, conv(h, w) indicates shape-preserving convolution of the input h with the kernel w, and w^t_i/v^t_i indicate the trainable parameters for module i in the TD network. We elide bias terms for brevity. When working with fully-connected models we use stochastic GRU-style state updates rather than the stochastic residual updates in Eqn. 7. Exhaustive descriptions of the modules can be found in our code at: https://github.com/Philip-Bachman/MatNets-NIPS. These TD modules represent each conditional p(zi|zi−1, ..., z0) in Eqn. 1 using p(zi|h^t_i). TD module f^t_i places a distribution over z_i using parameters [µ̄_i; log σ̄²_i] computed as follows:

[µ̄_i; log σ̄²_i] = conv(lrelu(conv(h^t_i, v^t_i)), w^t_i),  (8)

where the bar notation (µ̄, σ̄) distinguishes Gaussian parameters produced by the generator network from those produced by the inference network (see Eqn. 11). The distributions p(·) all depend on the parameters θ^t. Bottom-up networks in MatNets comprise sequences of modules in which each module receives input only from the preceding BU module. Our BU networks are all deterministic and feedforward, but sensibly augmenting them with auxiliary latent variables [16, 15] and/or recurrence is a promising topic for future work. Each non-terminal module f^b_i in the BU network computes an updated state: h^b_i = f^b_i(h^b_{i+1}; θ^b). The final module, f^b_0, provides means and log-variances for sampling z_0 via reparametrization [10].
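Eqns. 7 and 8 can be sketched with plain matrix multiplies standing in for the shape-preserving convolutions (all shapes, names, and parameter values below are illustrative, not the paper's):

```python
import numpy as np

def lrelu(x, slope=0.2):
    return np.where(x > 0.0, x, slope * x)

def td_module(h, z, V, W):
    """Eq. (7): residual refinement of TD state h by latent z; the inner
    matmuls stand in for the shape-preserving convs (bias terms elided)."""
    hz = np.concatenate([h, z])
    return lrelu(h + W @ lrelu(V @ hz))

def td_prior_params(h, V, W):
    """Eq. (8): mean and log-variance of p(z_i | h^t_i) from TD state alone."""
    out = W @ lrelu(V @ h)
    mu, logvar = np.split(out, 2)
    return mu, logvar

rng = np.random.default_rng(0)
d, k = 8, 3                                   # state dim, latent dim
h, z = rng.standard_normal(d), rng.standard_normal(k)
V7, W7 = rng.standard_normal((d, d + k)), rng.standard_normal((d, d))
h_next = td_module(h, z, V7, W7)              # same shape as h (residual path)
V8, W8 = rng.standard_normal((d, d)), rng.standard_normal((2 * k, d))
mu, logvar = td_prior_params(h_next, V8, W8)  # parameters for the next z
```

The residual form keeps the output the same shape as the input state, which is what lets latent samples act as small, gating refinements rather than wholesale rewrites of the state.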
To align BU modules with their counterparts in the TD network, we number them in reverse order of evaluation. We structured the modules in our BU networks to take advantage of residual connections. Specifically, each BU module f^b_i computes:

h^b_i = lrelu(h^b_{i+1} + conv(lrelu(conv(h^b_{i+1}, v^b_i)), w^b_i)),    (9)

with operations defined as for Eq. 7. These updates can be replaced by GRUs, LSTMs, etc. The updates described in Eqns. 7 and 9 both assume that module inputs and outputs have the same shape. We thus construct MatNets using groups of “meta modules”, within which module input/output shapes are constant. To keep our network design (relatively) simple, we use one meta module for each spatial scale in our networks (e.g. scales of 14x14, 7x7, and fully-connected for MNIST). We connect meta modules using layers which may upsample, downsample, or change feature dimension via strided convolution. We use standard convolution layers, possibly with up- or downsampling, to feed data into and out of the bottom-up and top-down networks. During inference, merge modules compare the current top-down state with the state of the corresponding bottom-up module, conditioned on the current merge state, and choose a perturbation of the top-down information to push it towards recovering the bottom-up network’s input (i.e. minimize reconstruction error). The ith merge module outputs [μ_i; log σ²_i; h^m_{i+1}] = f^m_i(h^b_i, h^t_i, h^m_i; θ^m), where μ_i and log σ²_i are the mean and log-variance for sampling z_i via reparametrization, and h^m_{i+1} gives the updated merge state. As in the TD and BU networks, we use a residual update:

h^m_{i+1} = lrelu(h^m_i + conv(lrelu(conv([h^m_i; h^b_i; h^t_i], u^m_i)), v^m_i)),    (10)
[μ_i; log σ²_i] = conv(h^m_{i+1}, w^m_i),    (11)

²Current DRAW-like models can be extended to incorporate hierarchical depth, and our models can be extended to incorporate sequential depth.
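The merge-module computation in Eqns. 10-11 can be sketched in the same simplified style, with matrix products standing in for the paper's convolutions (shapes and parameter names here are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def lrelu(x, alpha=0.1):
    return np.where(x > 0, x, alpha * x)

def merge_module(hm, hb, ht, u, v, w):
    # Eq. 10: compare bottom-up and top-down states, conditioned on the
    # current merge state, via a residual update; Eq. 11: output Gaussian
    # parameters for z_i. conv(., k) is reduced to a matrix product.
    s = np.concatenate([hm, hb, ht], axis=-1)   # [h^m; h^b; h^t]
    hm_next = lrelu(hm + lrelu(s @ u) @ v)      # Eq. 10
    mu, logvar = np.split(hm_next @ w, 2, axis=-1)  # Eq. 11
    return hm_next, mu, logvar

def sample_z(mu, logvar):
    # reparametrization: z = mu + sigma * eps
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

# toy widths: merge/BU/TD states of width 8, latent variables of width 3
hm = rng.standard_normal((4, 8))
hb = rng.standard_normal((4, 8))
ht = rng.standard_normal((4, 8))
u = rng.standard_normal((24, 16)) * 0.1     # (8 + 8 + 8) -> 16
v = rng.standard_normal((16, 8)) * 0.1      # 16 -> 8 (back to merge width)
w = rng.standard_normal((8, 6)) * 0.1       # 8 -> 6 = 2 * latent width
hm_next, mu, logvar = merge_module(hm, hb, ht, u, v, w)
z = sample_z(mu, logvar)
print(hm_next.shape, z.shape)               # (4, 8) (4, 3)
```
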
in which the convolution kernels u^m_i, v^m_i, and w^m_i constitute the trainable parameters of this module. Each merge module thus computes an updated merge state and then reparametrizes a diagonal Gaussian using a linear function of the updated merge state. In our experiments all modules in all networks had their own trainable parameters. We experimented with parameter sharing and GRU-style state in our convolutional models. The stochastic convolutional GRU is particularly interesting when applied depth-wise (rather than time-wise as in [19]), as it implements a stochastic Neural GPU [9] trainable by variational inference and capable of multi-modal dynamics. We saw no performance gains with these changes, but they merit further investigation. In unconditional MatNets, the top-most latent variables z_0 follow a zero-mean, unit-variance Gaussian prior, except in our experiments with mixture-based priors. In conditional MatNets, z_0 follows a distribution conditioned on the known values x_k. Conditional MatNets use parallel sets of BU and merge modules for the conditional generator and the inference network. BU modules in the conditional generator observe a partial input x_k, while BU modules in the inference network observe both x_k and the unknown values x_u (which the model is trained to predict). The generative BU and merge modules in a conditional MatNet interact with the TD modules analogously to the BU and merge modules used for inference. Our models used independent Bernoullis, diagonal Gaussians, or “integrated” Logistics (see [11]) for the final output distribution p(x|z_d, ..., z_0)/p(x_u|z_d, ..., z_0, x_k).

2.3 Model Extensions

We also develop several extensions for the MatNet architecture. The first is to replace the zero-mean, unit-variance Gaussian prior over z_0 with a Gaussian Mixture Model, which we train simultaneously with the rest of the model. When using a mixture prior, we use an analytical approximation to the required KL divergence.
For a Gaussian distribution q, and a Gaussian mixture p with components {p_1, ..., p_k} and uniform mixture weights, we use the KL approximation:

KL(q || p) ≈ log( 1 / Σ_{i=1}^k exp(−KL(q || p_i)) ).    (12)

Our tests with mixture-based priors are only concerned with qualitative behaviour, so we do not worry about the approximation error in Eqn. 12. The second extension is a technique for regularizing the inference model to prevent overfitting beyond that which is present in the generator. This regularization is applied by optimizing:

maximize_q  E_{x∼p(x)}[ E_{z∼q(z|x)}[log p(x|z)] − KL(q(z|x) || p(z)) ].    (13)

This maximizes the free-energy bound for samples drawn from our model, but without changing their true log-likelihood. By maximizing Eqn. 13, we implicitly reduce KL(q(z|x) || p(z|x)), which is the gap between the free-energy bound and the true log-likelihood. A similar regularizer can be constructed for minimizing KL(p(z|x) || q(z|x)). We use (13) to reduce overfitting, and slightly boost test performance, in our experiments with MNIST and Omniglot. The third extension off-loads responsibility for modelling sharp local dynamics in images, e.g. precise edge placements and small variations in textures, from the latent variables onto a local, deterministic autoregressive model. We use a simplified version of the masked convolutions in the PixelCNN of [25], modified to condition on the output of the final TD module in a MatNet. This modification is easy — we just concatenate the final TD module’s output and the true image, and feed this into a PixelCNN with, e.g., five layers. A trick we use to improve gradient flow back to the MatNet is to feed the MatNet’s output directly into each internal layer of the PixelCNN. In the masked convolution layers, connections to the MatNet output are unrestricted, since they are already separated from the ground truth by an appropriately-monitored noisy channel.
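The approximation in Eqn. 12 is easy to compute from the exact per-component KL between diagonal Gaussians. The following sketch implements it and checks the one case where the approximation is exact, namely a single-component "mixture":

```python
import numpy as np

def kl_diag_gauss(mu_q, var_q, mu_p, var_p):
    # exact KL between two diagonal Gaussians, summed over dimensions
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def kl_to_mixture(mu_q, var_q, comps):
    # Eqn. 12: KL(q || p) ~= log( 1 / sum_i exp(-KL(q || p_i)) )
    # for a uniform mixture p with components p_1, ..., p_k
    kls = np.array([kl_diag_gauss(mu_q, var_q, m, v) for m, v in comps])
    return -np.log(np.sum(np.exp(-kls)))

# sanity check: with a single component the approximation is exact
mu_q, var_q = np.array([0.5, -0.3]), np.array([1.2, 0.8])
comp = (np.zeros(2), np.ones(2))
exact = kl_diag_gauss(mu_q, var_q, *comp)
approx = kl_to_mixture(mu_q, var_q, [comp])
print(np.isclose(exact, approx))   # True
```

With several components, the sum over exp(−KL) is dominated by the component closest to q, so the approximation behaves like a soft minimum over the per-component divergences.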
Larger, more powerful mechanisms for combining local autoregressions and conditioning information are explored in [26].

3 Experiments

We measured quantitative performance of MatNets on three datasets: MNIST, Omniglot [13], and CIFAR 10 [12]. We used the 28x28 version of Omniglot described in [2], which can be found at: https://github.com/yburda/iwae. All quantitative experiments measured performance in terms of negative log-likelihood, with the CIFAR 10 scores rescaled to bits-per-pixel and corrected for discrete/continuous observations as described in [24]. We used the IWAE bound from [2] to evaluate our models, with 2500 samples in the bound. We performed additional experiments measuring the qualitative performance of MatNets using Omniglot, CelebA faces [14], LSUN 2015 towers, and LSUN 2015 churches. The latter three datasets are 64x64 color images with significant detail and non-trivial structure.

Figure 2: MatNet performance on quantitative benchmarks. All tables except the lower-right table describe standard unconditional generative NLL results. The lower-right table presents results from the structured prediction task in [22], in which 1-3 quadrants of an MNIST digit are visible, and NLL is measured on predictions for the unobserved quadrants.

Figure 3: Class-like structure learned by a MatNet trained on 28x28 Omniglot, without label information. The model used a GMM prior over z_0 with 50 mixture components. Each group of three columns corresponds to a mixture component. The top row shows validation set examples whose posterior over the mixture components placed them into each component. Subsequent rows show samples drawn by freely resampling latent variables from the model prior, conditioned on the top k layers of latent variables, i.e. {z_0, ..., z_{k−1}}, being drawn from the approximate posterior for the example at the top of the column. From the second row down, we show k = {1, 2, 4, 6, 8, 10}.
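The IWAE bound from [2], used for evaluation above, can be sketched on a toy model where the true marginal likelihood is known in closed form. All model choices below (a 1-d Gaussian latent model with its exact posterior as the proposal) are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_mean_exp(a):
    # numerically stable log of the mean of exp(a)
    m = np.max(a)
    return m + np.log(np.mean(np.exp(a - m)))

def iwae_bound(log_joint, log_q, sample_z, x, k=2500):
    # importance-weighted bound of Burda et al.:
    # log p(x) >= E[ log (1/k) sum_j p(x, z_j) / q(z_j | x) ]
    zs = [sample_z(x) for _ in range(k)]
    log_w = np.array([log_joint(x, z) - log_q(z, x) for z in zs])
    return log_mean_exp(log_w)

# toy 1-d model: p(z) = N(0, 1), p(x|z) = N(z, 1), q(z|x) = N(x/2, 1/2)
log_npdf = lambda y, mu, var: -0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)
log_joint = lambda x, z: log_npdf(z, 0.0, 1.0) + log_npdf(x, z, 1.0)
log_q = lambda z, x: log_npdf(z, x / 2, 0.5)
sample_z = lambda x: x / 2 + np.sqrt(0.5) * rng.standard_normal()

x = 0.7
# true marginal: p(x) = N(0, 2); since q is the exact posterior here,
# the importance weights are constant and the bound is tight
true_log_px = log_npdf(x, 0.0, 2.0)
print(abs(iwae_bound(log_joint, log_q, sample_z, x, k=2500) - true_log_px) < 1e-6)
```

With an imperfect q, the bound is strictly below log p(x) but tightens monotonically in expectation as k grows, which is why a large sample count (2500 above) gives a reliable NLL estimate.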
Complete hyperparameters for model architecture and optimization can be found in the code at https://github.com/Philip-Bachman/MatNets-NIPS. We performed three quantitative tests using MNIST. The first two tests measured generative performance on dynamically-binarized images using a fully-connected model (for comparison with [2, 23]) and on the fixed binarization from [20] using a convolutional model (for comparison with [25, 19]). MatNets improved on existing results in both settings. See the tables in Fig. 2. Our third test with MNIST measured the performance of conditional MatNets for structured prediction. For this, we recreated the tests described in [22]. MatNet performance on these tests was also strong, though the prior results were from a fully-connected model, which skews the comparison. We also measured quantitative performance using the 32x32 color images of CIFAR 10. We trained two models on this data — one with a Gaussian reconstruction distribution and dequantization as described in [24], and the other which added a local autoregression and used the “integrated Logistic” likelihood described in [11]. The Gaussian model fell just short of the best previously reported result for a variational method (from [6]), and well short of the Pixel RNN presented in [25]. Performance on this task seems very dependent on a model’s ability to predict pixel intensities precisely along edges. The ability to efficiently capture global structure has a relatively weak benefit. Mistaking a cat for a dog costs little when amortized over thousands of pixels, while misplacing a single edge can spike the reconstruction cost dramatically. We demonstrate the strength of this effect in Fig. 4, where we plot how the bits paid to encode observations are distributed among the modules in the network over the course of training for MNIST, Omniglot, and CIFAR 10. The plots show a stark difference between these distributions when modelling simple line drawings vs.
when modelling more natural images. For CIFAR 10, almost all of the encoding cost was spent in the 32x32 layers of the network closest to the generated output. This was our motivation for adding a lightweight autoregression to p(x|z), which significantly reduced the gap between our model and the PixelRNN. Fig. 5 shows some samples from our model, which exhibit occasional glimpses of global and local structure. Our final quantitative test used the Omniglot handwritten character dataset, rescaled to 28x28 as in [2]. These tests used the same convolutional architecture as on MNIST. Our model outperformed previous results, as shown in Fig. 2. Using Omniglot we also experimented with placing a mixture-based prior distribution over the top-most latent variables z_0. The purpose of these tests was to determine whether the model could uncover latent class structure in the data without seeing any label information. We visualize results of these tests in Fig. 3. Additional description is provided in the figure caption.

Figure 4: This figure shows per-module divergences KL(q(z_i|h^m_{i+1}) || p(z_i|h^t_i)) over the course of training for models trained on MNIST, Omniglot, and CIFAR 10. The stacked area plots are grouped by “meta module” in the TD network. The MNIST and Omniglot models both had a single FC module and meta modules at spatial dimensions 7x7 and 14x14. The meta modules at 7x7 and 14x14 both comprised 5 TD modules. The CIFAR 10 model (without autoregression) had one FC module, and meta modules at spatial dimensions 8x8, 16x16, and 32x32. These meta modules comprised 2, 4, and 4 modules respectively. Light lines separate modules, and dark lines separate meta modules. The encoding cost on CIFAR 10 is clearly dominated by the low-level details encoded by the latent variables in the full-resolution TD modules closest to the output.
We placed a slight penalty on the entropy of the posterior distributions for each input to the model, to encourage a stronger separation of the mixture components. The inputs assigned to each mixture component (based on their posteriors) exhibit clear stylistic coherence. In addition to qualitative tests exploring our model’s ability to uncover latent factors of variation in Omniglot data, we tested the performance of our models at imputing missing regions of higher resolution images. These tests used images of celebrity faces, churches, and towers. These images include far more detail and variation than those in MNIST/Omniglot/CIFAR 10. We used two-stage models for these tests, in which each stage was a conditional MatNet. The first stage formed an initial guess for the missing image content, and the second stage then refined that guess. Both stages used the same architectures for their inference and generator networks. We sampled imputation problems by placing three 20x20 occluders uniformly at random in the image. Each stage had single TD modules at scales 32x32, 16x16, 8x8, and fully-connected. We trained models for roughly 200k updates, and show imputation performance on images from a test set that was held out during training. Results are shown in Fig. 5.

4 Related Work and Discussion

Previous successful attempts to train hierarchically-deep models largely fall into a class of methods based on deconstructing, and then reconstructing, data. Such approaches are akin to solving mazes by starting at the end and working backwards, or to learning how an object works by repeatedly disassembling and reassembling it. Examples include LapGANs [3], which deconstruct an image by repeatedly downsampling it, and Diffusion Nets [21], which deconstruct arbitrary data by subjecting it to a long sequence of small random perturbations.
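The occlusion sampling used for these imputation tests can be sketched as follows. The array names x_k/x_u mirror the paper's notation for known and unknown values; everything else is an illustrative placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_occlusion_mask(h, w, n_occluders=3, size=20):
    # Sample an imputation problem: place n size x size occluders
    # uniformly at random in an h x w image; mask == 0 marks missing pixels.
    mask = np.ones((h, w))
    for _ in range(n_occluders):
        top = rng.integers(0, h - size + 1)
        left = rng.integers(0, w - size + 1)
        mask[top:top + size, left:left + size] = 0.0
    return mask

mask = sample_occlusion_mask(64, 64)
x = rng.random((64, 64))          # stand-in for a 64x64 image
x_k = x * mask                    # observed context fed to the generator
x_u = x * (1.0 - mask)            # targets the model is trained to predict
print(int((mask == 0).sum()) <= 3 * 20 * 20)   # occluders may overlap: True
```
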
The power of these approaches stems from the way in which gradually deconstructing the data leaves behind a trail of crumbs which can be followed back to a well-formed observation. In the generative models of [3, 21], the deconstruction processes were defined a priori, which avoided the need for trained inference. This makes training significantly easier, but subverts one of the main motivations for working with latent variables and sample-based approximate inference, i.e. the ability to capture salient factors of variation in the inferred relations between latent variables and observed data. This deficiency is beginning to be addressed by, e.g., the Probabilistic Ladder Networks of [23], which are a special case of our architecture in which the deterministic paths from latent variables to observations are removed and the conditioning mechanism in inference is more restricted. Reasoning about data through the posteriors induced by an appropriate generative model motivates some intriguing work at the intersection of machine learning and cognitive science. This work shows that, in the context of an appropriate generative model, powerful inference mechanisms are capable of exposing the underlying factors of variation in fairly sophisticated data. See, e.g., Lake et al. [13].

Figure 5: (a) CIFAR 10 samples; (b) CelebA faces; (c) LSUN churches; (d) LSUN towers. Imputation results on challenging, real-world images. These images show predictions for missing data generated by a two-stage conditional MatNet, trained as described in Section 3. Each occluded region was 20x20 pixels. Locations for the occlusions were selected uniformly at random within the images. One interesting behaviour which emerged in these tests was that our model successfully learned to properly reconstruct the watermark for “shutterstock”, which was a source of many of the LSUN images – see the second input/output pair in the third row of (b).
Techniques for training coupled generation and inference have now reached a level that makes it possible to investigate these ideas while learning models end-to-end [4]. In future work we plan to apply our models to more “interesting” generative modelling problems, including more challenging image data and problems in language/sequence modelling. The strong performance of our models on benchmark problems suggests their potential for solving difficult structured prediction problems. Combining the hierarchical depth of MatNets with the sequential depth of DRAW is also worthwhile.

References

[1] P. Bachman and D. Precup. Data generation as sequential decision making. In Advances in Neural Information Processing Systems (NIPS), 2015.
[2] Y. Burda, R. Grosse, and R. Salakhutdinov. Importance weighted auto-encoders. arXiv:1509.00519v1 [cs.LG], 2015.
[3] E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative models using a Laplacian pyramid of adversarial networks. arXiv:1506.05751 [cs.CV], 2015.
[4] S. M. A. Eslami, N. Heess, T. Weber, Y. Tassa, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. arXiv:1603.08575 [cs.CV], 2016.
[5] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. In International Conference on Machine Learning (ICML), 2015.
[6] K. Gregor, F. Besse, D. J. Rezende, I. Danihelka, and D. Wierstra. Towards conceptual compression. arXiv:1604.08772v1 [stat.ML], 2016.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. arXiv:1512.03385v1 [cs.CV], 2015.
[8] S. Honari, J. Yosinski, P. Vincent, and C. Pal. Recombinator networks: Learning coarse-to-fine feature aggregation. In Computer Vision and Pattern Recognition (CVPR), 2016.
[9] L. Kaiser and I. Sutskever. Neural GPUs learn algorithms. In International Conference on Learning Representations (ICLR), 2016.
[10] D. P. Kingma and M. Welling.
Auto-encoding variational Bayes. In International Conference on Learning Representations (ICLR), 2014.
[11] D. P. Kingma, T. Salimans, and M. Welling. Improving variational inference with inverse autoregressive flow. arXiv:1606.04934 [cs.LG], 2016.
[12] A. Krizhevsky and G. E. Hinton. Learning multiple layers of features from tiny images. Master’s thesis, University of Toronto, 2009.
[13] B. M. Lake, R. Salakhutdinov, and J. B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.
[14] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. In International Conference on Computer Vision (ICCV), 2015.
[15] L. Maaløe, C. K. Sønderby, S. K. Sønderby, and O. Winther. Auxiliary deep generative models. In International Conference on Machine Learning (ICML), 2016.
[16] R. Ranganath, D. Tran, and D. M. Blei. Hierarchical variational models. In International Conference on Machine Learning (ICML), 2016.
[17] A. Rasmus, H. Valpola, M. Honkala, M. Berglund, and T. Raiko. Semi-supervised learning with ladder networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
[18] D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In International Conference on Machine Learning (ICML), 2014.
[19] D. J. Rezende, S. Mohamed, I. Danihelka, K. Gregor, and D. Wierstra. One-shot generalization in deep generative models. In International Conference on Machine Learning (ICML), 2016.
[20] R. Salakhutdinov and I. Murray. On the quantitative analysis of deep belief networks. In International Conference on Machine Learning (ICML), 2008.
[21] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning (ICML), 2015.
[22] K. Sohn, H. Lee, and X. Yan.
Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems (NIPS), 2015.
[23] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther. How to train deep variational autoencoders and probabilistic ladder networks. In International Conference on Machine Learning (ICML), 2016.
[24] L. Theis and M. Bethge. Generative image modeling using spatial LSTMs. In Advances in Neural Information Processing Systems (NIPS), 2015.
[25] A. van den Oord, N. Kalchbrenner, and K. Kavukcuoglu. Pixel recurrent neural networks. In International Conference on Machine Learning (ICML), 2016.
[26] A. van den Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu. Conditional image generation with PixelCNN decoders. arXiv:1606.05328 [cs.CV], 2016.
Towards Conceptual Compression

Karol Gregor, Google DeepMind, karolg@google.com
Frederic Besse, Google DeepMind, fbesse@google.com
Danilo Jimenez Rezende, Google DeepMind, danilor@google.com
Ivo Danihelka, Google DeepMind, danihelka@google.com
Daan Wierstra, Google DeepMind, wierstra@google.com

Abstract

We introduce convolutional DRAW, a homogeneous deep generative model achieving state-of-the-art performance in latent variable image modeling. The algorithm naturally stratifies information into higher and lower level details, creating abstract features, and as such addressing one of the fundamentally desired properties of representation learning. Furthermore, the hierarchical ordering of its latents creates the opportunity to selectively store global information about an image, yielding a high quality ‘conceptual compression’ framework.

1 Introduction

Deep generative models with latent variables can capture image information in a probabilistic manner to answer questions about structure and uncertainty. Such models can also be used for representation learning, and the associated procedures for inferring latent variables are vital to important application areas such as (semi-supervised) classification and compression. In this paper we introduce convolutional DRAW, a new model in this class that is able to transform an image into a progression of increasingly detailed representations, ranging from global conceptual aspects to low level details (see Figure 1). It significantly improves upon earlier variational latent variable models (Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2014). Furthermore, it is simple and fully convolutional, and does not require complex design choices, just like the recently introduced DRAW architecture (Gregor et al., 2015).
It provides an important insight into building good variational auto-encoder models of images: positioning multiple layers of stochastic variables ‘close’ to the pixels (in terms of nonlinear steps in the computational graph) can significantly improve generative performance. Lastly, the system’s ability to stratify information has the side benefit of allowing it to perform high quality lossy compression, by selectively storing a higher level subset of inferred latent variables, while (re)generating the remainder during decompression (see Figure 3). In the following we will first discuss variational auto-encoders and compression. The subsequent sections then describe the algorithm and present results both on generation quality and compression.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Conceptual Compression. The top rows show full reconstructions from the model for Omniglot and ImageNet, respectively. The subsequent rows were obtained by storing the first t iteratively obtained groups of latent variables and then generating the remaining latents and visibles using the model (only a subset of all possible t values are shown, in increasing order). Left: Omniglot reconstructions. Each group of four columns shows different samples at a given compression level. We see that the variations in the latter samples concentrate on small details, such as the precise placement of strokes. Reducing the number of stored bits tends to preserve the overall shape, but increases the symbol variation. Eventually a varied set of symbols is generated. Nevertheless, even in the first row there is a clear difference between variations produced from a given symbol and those between different symbols. Right: ImageNet reconstructions. Here the latent variables were generated with zero variance (i.e. the mean of the latent prior is used). Again the global structure is captured first and the details are filled in later on.

1.1 Variational Auto-Encoders

Numerous deep generative models have been developed recently, ranging from restricted and deep Boltzmann machines (Hinton & Salakhutdinov, 2006; Salakhutdinov & Hinton, 2009), generative adversarial networks (Goodfellow et al., 2014), and autoregressive models (Larochelle & Murray, 2011; Gregor & LeCun, 2011; van den Oord et al., 2016) to variational auto-encoders (Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2014). In this paper we focus on the class of models in the variational auto-encoding framework. Since we are also interested in compression, we present them from an information-theoretic perspective. Variational auto-encoders consist of two neural networks: one that generates samples from latent variables (‘imagination’), and one that infers latent variables from observations (‘recognition’). The two networks share the latent variables. Intuitively speaking, one might think of these variables as specifying, for a given image, at different levels of abstraction, whether a particular object such as a cat or a dog is present in the input, or perhaps what the exact position and intensity of an edge at a given location might be. During the recognition phase the network acquires information about the input and stores it in the latent variables, reducing their uncertainty. For example, at first not knowing whether a cat or a dog is present in the image, the network observes the input and becomes nearly certain that it is a cat. The reduction in uncertainty is quantitatively equal to the amount of information that the network acquired about the input. During generation the network starts with uncertain latent variables and samples their values from a prior distribution. Different choices will produce different visibles.
Variational auto-encoders provide a natural framework for unsupervised learning – we can build hierarchical networks with multiple layers of stochastic variables and expect that, after learning, the representations become more and more abstract at higher levels of the hierarchy. The pertinent questions then are: can such a framework indeed discover such representations, both in principle and in practice, and what techniques are required for its satisfactory performance?

1.2 Conceptual Compression

Variational auto-encoders can not only be used for representation learning but also for compression. The training objective of variational auto-encoders is to compress the total amount of information needed to encode the input. They achieve this by using information-carrying latent variables that express what, before compression, was encoded using a larger amount of information in the input. The information in the layers and the remaining information in the input can be encoded in practice as explained later in this paper. The achievable amount of lossless compression is bounded by the underlying entropy of the image distribution. Most image information, as measured in bits, is contained in the fine details of the image.

Figure 2: Two-layer convolutional DRAW. A schematic depiction of one time slice is shown on the left (encoder modules E1, E2; latents Z1, Z2; decoder modules D1, D2; reconstruction R; input X; with prior/generation and approximate posterior/inference paths). X and R denote input and reconstruction, respectively. On the right, the amount of information (in bits) at different layers and time steps is shown. A two-layer convolutional DRAW was trained on ImageNet, with a convolutional first layer and a fully connected second layer. The amount of information at a given layer and iteration is measured by the KL-divergence between the prior and the posterior (5). When presented with an image, first the top layer acquires information and then the second slowly increases, suggesting that the network first acquires ‘conceptual’ information about the image and only then encodes the remaining details. Note that this is an illustration of a two-layer system, whereas most experiments in this paper, unless otherwise stated, were performed with a one-layer version.

Thus we might reasonably expect that future improvements in lossless compression technology will be bounded in scope. Lossy compression, on the other hand, holds much more potential for improvement. In this case the objective is to compress an image as well as possible in terms of similarity to the original, whilst allowing for some information loss. As an example, at a low level of compression (close to lossless compression), we could start by reducing pixel precision, e.g. from 8 bits to 7 bits. Then, as in JPEG, we could express a local 8x8 neighborhood in a discrete cosine transform basis and store only the most significant components. This way, instead of introducing the quantization artefacts that would appear if we kept decreasing pixel precision, we preserve higher level structures, but to a lower level of precision. Nevertheless, if we want to improve upon this and push the limits of what is possible in compression, we need to be able to identify what the most salient ‘aspects’ of an image are. If we wanted to compress images of cats and dogs down to one bit, what would that bit ideally represent? It is natural to argue that it should represent whether the image contains either a cat or a dog. How would we then produce an image from this single bit? If we have a good generative model, we can simply generate the entire image from this single latent variable by ancestral sampling, yielding an image of a cat if the bit corresponds to ‘cat’, and an image of a dog otherwise.
Now let us imagine that instead of compressing down to one bit we wanted to compress down to ten bits. We can then store some other important properties of the animal as well – e.g. its type, color, and basic pose. Conditioned on this information, everything else can be probabilistically ‘filled in’ by the generative model during decompression. Increasing the number of stored bits further, we can preserve more and more about the image, still filling in the fine pixel-level details such as precise hair structure, or the exact pattern of the floor, etc. Most bits indeed concern such low level details. We refer to this type of compression – compressing by preferentially storing the higher levels of representation while generating/filling-in the remainder – as ‘conceptual compression’. Importantly, if we solve deep representation learning with latent variable generative models that generate high quality samples, we simultaneously achieve the objective of lossy compression mentioned above. We can see this as follows. Assume that the network has learned a hierarchy of progressively more abstract representations. Then, to get different levels of compression, we can store only the corresponding number of topmost layers and generate the rest. By solving unsupervised deep learning, the network would order information according to its importance and store it with that priority.

2 Convolutional DRAW

Below we present the equations for a one-layer system (for a two-layer system the reader is referred to the supplementary material): For t = 1, . . .
, T:

ε_t = x − μ(r_{t−1})    (1)
h^e_t = RNN(x, ε_t, h^e_{t−1}, h^d_{t−1})    (2)
z_t ∼ q_t = q(z_t | h^e_t)    (3)
p_t = p(z_t | h^d_{t−1})    (4)
L^z_t = KL(q_t || p_t)    (5)
h^d_t = RNN(z_t, h^d_{t−1}, r_{t−1})    (6)
r_t = r_{t−1} + W h^d_t    (7)

At the end, at time T:

μ, α = split(r_T)    (8)
p_x = N(μ, exp(α))    (9)
q_x = U(x − s/2, x + s/2)    (10)
L^x = log(q_x / p_x)    (11)
L = β L^x + Σ_{t=1}^T L^z_t    (12)

Long Short-Term Memory networks (LSTM; Hochreiter & Schmidhuber, 1997) are used as the recurrent modules (RNN) and convolutions are used for all linear operations. We follow the computations and explain them and the variables as we go along. The input image is x. The canvas variable r_{t−1}, initialized to a bias, carries information about the current reconstruction of the image: a mean μ(r_{t−1}) and a log standard deviation α(r_{t−1}). We compute the reconstruction error ε_t. This, together with x, is fed to the encoder RNN (E in the diagram), which updates its internal state and produces an output vector h^e_t. This goes into the approximate posterior distribution q_t, from which z_t is sampled. The prior distribution p_t and the latent loss L^z_t are calculated. z_t is passed to the decoder, and L^z_t measures the amount of information about x that is transmitted using z_t to the decoder at this time. The decoder (D in the diagram) updates its state and outputs the vector h^d_t, which is then used to update the canvas r_t. At the end of the recurrence, the canvas consists of the values of μ and α = log σ of the Gaussian distribution p(x | z_1, . . . , z_T) (or analogous parameters for other distributions). This probability is computed for the input x as p_x. Because we use a real-valued distribution, but the original data has 256 values per color channel for a typical image, we encode this discretization as a uniform distribution U(x − s/2, x + s/2) of width equal to the discretization s (typically 1/255) around x. The input cost is then L^x = log(q_x / p_x); it is always non-negative, and measures the number of bits (nats) needed to describe x knowing (z_1, . . .
, z_T). The final cost is the sum of the two costs, L = L^x + Σ_{t=1}^T L^z_t, and equals the amount of information that the model uses to compress x losslessly. This is the loss we use to report the likelihood bounds and is the standard loss for variational auto-encoders. However, we also include a constant β and train models with β ≠ 1 to observe the visual effect on generated data and to perform lossy compression as explained in Section 3. Values β < 1 put less pressure on the network to reconstruct exact pixel details and increase its capacity to learn a better latent representation. The general multi-layer architecture is summarized in Figure 2 (left). The algorithm is loosely inspired by the architecture of the visual cortex (Carlson et al., 2013). We will describe known cortical properties and in brackets the correspondences in our diagram. The visual cortex consists of hierarchically organized areas such as V1, V2, V4, IT (in our case: layers 1, 2, . . .). Each area such as V1 is a composite structure consisting of six sublayers, each most likely performing different functions (in our case: E for encoding, Z for sampling and information measuring, D and R for decoding). Eyes saccade around three times per second with blank periods in between. Thus the cortex has about 250 ms to consider each input. When an input is received, there is a feed-forward computation that progresses to high levels of the hierarchy such as IT in about 100 ms (in our case: the input is passed through the E layers). The architecture is recurrent (our architecture as well) with a large amount of feedback from higher to lower layers (in our case: each D feeds into the E, Z, D, R layers of the next step), and can still perform significant computations before the next input is processed (in our case: the iterations of DRAW). 3 Compression Methodology In this section we show how instances of the variational auto-encoder paradigm (including convolutional DRAW) can be turned into compression algorithms.
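The input cost L^x = log(q_x/p_x) above can be computed directly per dimension; a small numerical sketch (with made-up μ and α values and the s = 1/255 discretization) illustrates the conversion from nats to the bits/dim scale used later:

```python
import math

def input_cost_nats(x, mu, alpha, s=1 / 255):
    """L^x = log(q_x / p_x) for one dimension: q_x is the uniform density 1/s
    around x, p_x the Gaussian density N(mu, exp(alpha)) evaluated at x."""
    sigma = math.exp(alpha)
    log_px = -0.5 * math.log(2 * math.pi) - alpha - (x - mu) ** 2 / (2 * sigma ** 2)
    log_qx = -math.log(s)
    return log_qx - log_px

# A pixel reconstructed accurately (mu close to x, small sigma) costs few nats;
# dividing by ln 2 converts to the bits/dim numbers reported in Table 1.
good = input_cost_nats(x=0.5, mu=0.5, alpha=math.log(0.01))
bad = input_cost_nats(x=0.5, mu=0.3, alpha=math.log(0.01))
print(good / math.log(2), "bits/dim vs", bad / math.log(2), "bits/dim")
```

A poorly reconstructed pixel dominates the cost, which is why scaling β below 1 shifts the model's effort away from exact pixel values.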
Note however that storing subsets of latents as described above results in good compression only if the network separates high-level from low-level information. It is not obvious whether this should occur to a satisfactory extent, or at all. In the following sections we will show that convolutional DRAW does in fact have this desirable property. It stratifies information into a progression of increasingly abstract features, allowing the resulting compression algorithm to select a degree of compression. What is appealing here is that this occurs naturally in such a simple homogeneous architecture.

Figure 3: Lossy Compression. Example images for various methods and levels of compression. Top row: original images. Each subsequent block has four rows corresponding to four methods of compression: (a) JPEG, (b) JPEG2000, (c) convolutional DRAW with full prior variance for generation and (d) convolutional DRAW with zero prior variance. Each block corresponds to a different compression level; in order, the average numbers of bits per input dimension are: 0.05, 0.1, 0.15, 0.2, 0.4, 0.8 (bits per image: 153, 307, 460, 614, 1228, 2457). In the first block, JPEG was left gray because it does not compress to this level. Images are of size 32 × 32. See appendix for 64 × 64 images.

The underlying compression mechanism is arithmetic coding (Witten et al., 1987). Arithmetic coding takes as input a sequence of discrete variables x_1, . . . , x_t and a set of probabilities p(x_t | x_1, . . . , x_{t−1}) that predict the variable at time t from the previous ones. It then compresses this sequence to L = −Σ_t log_2 p(x_t | x_1, . . . , x_{t−1}) bits plus a constant of order one. We can use variational auto-encoders for compression as follows. First, train the model with an approximate posterior q that has a variance independent of the input. After training, discretize the latent variables z to the size of the variance of q.
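The arithmetic-coding code length above is just the sum of the negative log-probabilities of the symbols under the predictive model; a minimal sketch (with a made-up conditional model, not one from the paper) makes the accounting explicit:

```python
import math

def ideal_code_length_bits(symbols, cond_prob):
    """Bits arithmetic coding needs, up to an O(1) constant:
    L = -sum_t log2 p(x_t | x_1 .. x_{t-1})."""
    return -sum(
        math.log2(cond_prob(symbols[:t], symbols[t])) for t in range(len(symbols))
    )

# Toy model over bits: predict repetition of the previous symbol with prob 0.9.
def p(prefix, x):
    if not prefix:
        return 0.5
    return 0.9 if x == prefix[-1] else 0.1

bits = ideal_code_length_bits([0, 0, 0, 1, 1], p)
print(bits)   # ~4.78 bits, cheaper than the 5 bits of uniform coding
```

The better the model predicts the sequence, the shorter the code, which is why the code length equals the (negative log-) likelihood the paper reports.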
When compressing an input, assign z to the discretized point nearest to the mean of q instead of sampling from q. Calculate the discrete probabilities p over the values of z. Retrain the decoder and p to perform well with the discretized values. Now we can use arithmetic coding directly, having the probabilities over discrete values of z. This procedure might require tuning to achieve the best performance. However, such a process is likely to work since there is another, less practical way to compress that is guaranteed to achieve the theoretical value. This second approach uses bits-back coding (Hinton & Van Camp, 1993). We explain only the basic idea here. First, discretize the latents down to a very high level of precision and use p to transmit the information. Because the discretization precision is high, the probabilities for discrete values are easily assigned. That will preserve the information, but it will cost many bits, namely −log_2 p_d(z), where p_d is the prior under that discretization. Now, instead of choosing a random sample z from the approximate posterior q_d under the discretization when encoding, use another stream of bits that needs to be transmitted to choose z, in effect encoding these bits into the choice of z. The encoded amount is −log_2 q_d(z) bits.

Figure 4: Generated samples on Omniglot.

Figure 5: Generated samples on ImageNet for different input cost scales. On the left, 32 × 32 samples are shown with input cost β in (12) equal to {0.2, 0.4, 0.6, 0.8, 1} for each respective block of two rows. On the right, 64 × 64 samples are shown with input cost scale β equal to {0.4, 0.5, 0.6, 0.8, 1} for each row respectively. For smaller values of β the network is less compelled to explain finer details of images, and produces ‘cleaner’ larger structures.

When z is recovered at the receiving end, both the information about the current input and the other information is recovered, and thus the information needed to encode the
current input is −log_2 p_d(z) + log_2 q_d(z) = −log_2(p_d(z)/q_d(z)). The expectation of this quantity is the KL-divergence in (5), which therefore measures the amount of information stored in a given latent layer. The disadvantage of this approach is that we need this extra data to encode a given input. However, this coding scheme works even if the variance of the approximate posterior depends on the input. 4 Results All models (except where otherwise specified) were single-layer, with the number of DRAW time steps n_t = 32, a kernel size of 5 × 5, and stride-2 convolutions between input layers and hidden layers with 12 latent feature maps. We trained the models on Cifar-10, Omniglot and ImageNet with 320, 160 and 160 LSTM feature maps, respectively. We use the version of ImageNet presented in (van den Oord et al., 2016). We train the network with Adam optimization (Kingma & Ba, 2014) with learning rate 5 × 10⁻⁴. We found that the cost occasionally increased dramatically during training. This is probably due to the Gaussian nature of the distribution: occasionally a given variable is produced too far from the mean relative to sigma. We observed this happening approximately once per run. To be able to keep training, we store older parameters, detect such jumps and revert to the old parameters when they occur. In these instances training always continued unperturbed. 4.1 Modeling Quality Omniglot The recently introduced Omniglot dataset (Lake et al., 2015) comprises 1628 character classes drawn from multiple alphabets with just 20 samples per class. Referred to by some as the ‘transpose of MNIST’, it was designed to study conceptual representations and generative models in a low-data regime. Table 1 shows likelihoods of different models compared to ours. For our model, we only calculate the upper bound (variational bound) and therefore underestimate its quality. Samples generated by the model are shown in Figure 4.
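The revert-on-spike trick described above can be sketched as a small training-loop wrapper. The spike threshold and the toy "training step" below are hypothetical illustrations, not values from the paper:

```python
import copy
import random

def train_with_revert(params, step_fn, loss_fn, n_steps, spike_factor=3.0):
    """Run step_fn repeatedly; if the loss jumps by more than spike_factor times
    the last accepted value, revert to the last snapshot and continue."""
    snapshot = copy.deepcopy(params)
    prev_loss = loss_fn(params)
    for _ in range(n_steps):
        step_fn(params)
        loss = loss_fn(params)
        if loss > spike_factor * prev_loss:            # dramatic increase detected
            params.clear()
            params.update(copy.deepcopy(snapshot))     # revert to old parameters
        else:
            snapshot = copy.deepcopy(params)           # accept and refresh snapshot
            prev_loss = loss
    return params

# Toy demonstration: a step that normally shrinks the loss but occasionally
# corrupts the parameters, mimicking the rare cost blow-ups seen in training.
random.seed(0)
def step(p):
    p["w"] *= 0.9
    if random.random() < 0.1:
        p["w"] *= 1e6                                  # simulated blow-up

out = train_with_revert({"w": 100.0}, step, lambda p: abs(p["w"]), 50)
print(out["w"])
```

The wrapper only ever discards the corrupted update, so training proceeds as if the spike had not happened.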
Cifar-10 Table 1 also shows reported likelihoods of different models on Cifar-10. Convolutional DRAW outperforms most previous models. The recently introduced Pixel RNN model (van den Oord et al., 2016) yields better likelihoods, but as it is not a latent variable model, it does not build representations, cannot be used for lossy compression, and is slow to sample from due to its autoregressive nature. At the same time, we must emphasize that the two approaches might be complementary, and could be combined by feeding the output of convolutional DRAW into the recurrent network of Pixel RNN. We also show the likelihood for a (non-recurrent) variational auto-encoder that we obtained internally. We tested architectures with multiple layers, both deterministic and stochastic, but with standard functional forms, and report the best result that we were able to obtain. Convolutional DRAW performs significantly better. ImageNet Additionally, we trained on the version of ImageNet prepared in (van den Oord et al., 2016), which was created with the aim of making a standardized dataset to test generative models. The results are in Table 1. Note that since this is a new dataset, few other methods have yet been applied to it. In Figure 5 we show generations from the model. We trained networks with varying input cost scales as explained in the next section. The generations are sharp and contain many details, unlike previous versions of the variational auto-encoder that tend to generate blurry images. Table 1: Test set performance of different models. Results on 28 × 28 Omniglot are shown in nats, results on CIFAR-10 and ImageNet are shown in bits/dim. Training losses are shown in brackets.
Omniglot (NLL in nats):
  VAE (2 layers, 5 samples)     106.31
  IWAE (2 layers, 50 samples)   103.38
  RBM (500 hidden)              100.46
  DRAW                          < 96.5
  Conv DRAW                     < 92.0

ImageNet (NLL in bits/dim):
  Pixel RNN (32 × 32)   3.86 (3.83)
  Pixel RNN (64 × 64)   3.63 (3.57)
  Conv DRAW (32 × 32)   4.40 (4.35)
  Conv DRAW (64 × 64)   4.10 (4.04)

CIFAR-10 (NLL in bits/dim):
  Uniform Distribution    8.00
  Multivariate Gaussian   4.70
  NICE [1]                4.48
  Deep Diffusion [2]      4.20
  Deep GMMs [3]           4.00
  Pixel RNN [4]           3.00 (2.93)
  Deep VAE                < 4.54
  DRAW                    < 4.13
  Conv DRAW               < 3.58 (3.57)

4.2 Reconstruction vs Latent Cost Scaling Each pixel (and color channel) of the data consists of 256 values, and as such, likelihood and lossless compression are well defined. When compressing the image there is much to be gained in capturing precise correlations between nearby pixels. There are many more bits in these low-level details than in the higher-level structure that we are actually interested in when learning higher-level representations. The network might focus on these details, ignoring higher-level structure. One way to make it focus less on the details is to scale down the cost of the input relative to the latents, that is, setting β < 1 in (12). Generations for different cost scalings are shown in Figure 5, with the original objective being scale β = 1. Visually we can verify that lower scales indeed have a ‘cleaner’ high-level structure. Scale 1 contains a lot of information in the precise pixel values, and the network tries to capture that while not being good enough to properly align details and produce real-looking patterns. Improving this might simply be a matter of network capacity and scaling: increasing layer size and depth, using more iterations, or using better functional forms. 4.3 Information Distribution We look at how much information is contained at different levels and time steps. This information is simply the KL-divergence in (5) during inference. For a two layer system with one convolutional and one fully connected layer, this is shown in Figure 2 (right).
We see that the higher level contains information mainly at the beginning of computation, whereas the lower layer starts with low information which then gradually increases. This is desirable from a conceptual point of view. It suggests that the network first captures the overall structure of the image, and only then proceeds to ‘explain’ the details contained within that structure. Understanding the overall structure rapidly is also convenient if the algorithm needs to respond to observations in a timely manner. For the single layer system used in all other experiments, the information distribution is similar to the blue curve of Figure 2 (right). Thus, while the variables in the last set of iterations contain the most bits, they don’t seem to visually affect the quality of reconstructed images to a large extent, as shown in Figure 1. This demonstrates the separation of information into global aspects that humans consider important from low level details. 4.4 Lossy Compression Results We can compress an image lossily by storing only the subset of the latent variables associated with the earlier iterations of convolutional DRAW, namely those that encode the more high-level information about the image. The units not stored should be generated from the prior distribution (4). This amounts to decompression. We can also generate a more likely image by lowering the variance of the prior Gaussian. We show generations with full variance in row 3 of each block of Figure 3 and with zero variance in row 4. We see that using the original variance, the network generates sharp details. Because the generative model is not perfect, the resulting images are less realistic looking as we lower the number of stored time steps. For zero variance we see that the network starts with rough details making a smooth image and then refines it with more time steps. 
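The two decompression variants compared in Figure 3 — generating the unstored latents with full prior variance (row c) versus zero variance (row d) — differ only in how the prior is sampled. A minimal sketch (with hypothetical prior parameters) of that one knob:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_prior(mean, log_std, variance_scale=1.0, rng=rng):
    """Draw latents that were NOT stored from the prior p_t.
    variance_scale=1 keeps the full prior variance (sharp, sample-like detail);
    variance_scale=0 collapses to the prior mean (smooth, 'most likely' fill-in)."""
    return mean + variance_scale * np.exp(log_std) * rng.normal(size=mean.shape)

mean = np.zeros(5)
log_std = np.log(np.full(5, 0.5))
z_full = sample_prior(mean, log_std, variance_scale=1.0)   # row (c) in Figure 3
z_zero = sample_prior(mean, log_std, variance_scale=0.0)   # row (d) in Figure 3
print(z_full, z_zero)
```

Intermediate scales between 0 and 1 would trade off realism of generated detail against smoothness in the same way.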
All these generations are produced with a single-layer convolutional DRAW, and thus, despite being single-layer, it achieves some level of ‘conceptual compression’ by first capturing the global structure of the image and then focusing on details. There is another dimension we can vary for lossy compression – the input scale introduced in subsection 4.2. Even if we store all the latent variables (but not the input bits), the reconstructed images will get less detailed as we scale down the input cost. To build a high performing compressor, at each compression rate, we need to find which of the networks, input scales and number of time steps would produce visually good images. We have done the following. For several compression levels, we have looked at images produced by different methods and selected qualitatively which network gave the best looking images. We have not done this per image, just per compression level. We then display compressed images that we have not seen with this selection. We compare our results to JPEG and JPEG2000 compression which we obtained using ImageMagick. We found however that these compressors were unable to produce reasonable results for small images (3×32×32) at high compression rates. Instead, we concatenated 100 images into one 3×320×320 image, compressed that and extracted back the compressed small images. The number of bits per image reported is then the number of bits of this image divided by 100. This is actually unfair to our algorithm since any correlations between nearby images can be exploited. Nevertheless we show the comparison in Figure 3. Our algorithm shows better quality than JPEG and JPEG 2000 at all levels where a corruption is easily detectable. Note that even if our algorithm was trained on one specific image size, it can be used on arbitrarily sized images as it contains only convolutional operators. 
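The bits-per-image numbers quoted with Figure 3 are just the bits-per-dimension rate times the 3 × 32 × 32 input dimensions, and the JPEG baseline numbers come from dividing the size of the 100-image mosaic by 100; both conversions are trivial to reproduce:

```python
def bits_per_image(bits_per_dim, channels=3, height=32, width=32):
    """Convert a bits-per-input-dimension rate to whole bits per image."""
    return int(bits_per_dim * channels * height * width)

# The compression levels of Figure 3:
rates = [0.05, 0.1, 0.15, 0.2, 0.4, 0.8]
print([bits_per_image(r) for r in rates])   # [153, 307, 460, 614, 1228, 2457]

def per_image_bits_from_mosaic(total_bits, n_images=100):
    """JPEG/JPEG2000 baseline: compress a 3x320x320 mosaic of 100 images
    and charge each image 1/100 of the total size."""
    return total_bits / n_images
```

As the text notes, the mosaic trick lets JPEG exploit correlations between adjacent images, so the comparison is, if anything, tilted against convolutional DRAW.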
5 Conclusion In this paper we introduced convolutional DRAW, a state-of-the-art latent variable generative model which demonstrates the potential of sequential computation and recurrent neural networks in scaling up the performance of deep generative models. During inference, the algorithm arrives at a natural stratification of information, ranging from global aspects to low-level details. An interesting feature of the method is that, when we restrict ourselves to storing just the high-level latent variables, we arrive at a ‘conceptual compression’ algorithm that rivals the quality of JPEG2000. References Carlson, Thomas, Tovar, David A, Alink, Arjen, and Kriegeskorte, Nikolaus. Representational dynamics of object vision: the first 1000 ms. Journal of Vision, 13(10):1–1, 2013. Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680, 2014. Gregor, Karol and LeCun, Yann. Learning representations by maximizing compression. arXiv preprint arXiv:1108.1169, 2011. Gregor, Karol, Danihelka, Ivo, Mnih, Andriy, Blundell, Charles, and Wierstra, Daan. Deep autoregressive networks. In Proceedings of the 31st International Conference on Machine Learning, 2014. Gregor, Karol, Danihelka, Ivo, Graves, Alex, Rezende, Danilo Jimenez, and Wierstra, Daan. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, 2015. Hinton, Geoffrey E and Salakhutdinov, Ruslan R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006. Hinton, Geoffrey E and Van Camp, Drew. Keeping the neural networks simple by minimizing the description length of the weights. In Proceedings of the Sixth Annual Conference on Computational Learning Theory, pp. 5–13. ACM, 1993. Hochreiter, Sepp and Schmidhuber, Jürgen.
Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Kingma, Diederik P and Welling, Max. Auto-encoding variational Bayes. In Proceedings of the International Conference on Learning Representations (ICLR), 2014. Lake, Brenden M, Salakhutdinov, Ruslan, and Tenenbaum, Joshua B. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015. Larochelle, Hugo and Murray, Iain. The neural autoregressive distribution estimator. Journal of Machine Learning Research, 15:29–37, 2011. Rezende, Danilo J, Mohamed, Shakir, and Wierstra, Daan. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning, pp. 1278–1286, 2014. Salakhutdinov, Ruslan and Hinton, Geoffrey E. Deep Boltzmann machines. In International Conference on Artificial Intelligence and Statistics, pp. 448–455, 2009. van den Oord, Aaron, Kalchbrenner, Nal, and Kavukcuoglu, Koray. Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759, 2016. Witten, Ian H, Neal, Radford M, and Cleary, John G. Arithmetic coding for data compression. Communications of the ACM, 30(6):520–540, 1987.
2016
Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters∗ Zeyuan Allen-Zhu† Princeton University / IAS zeyuan@csail.mit.edu Yang Yuan† Cornell University yangyuan@cs.cornell.edu Karthik Sridharan Cornell University sridharan@cs.cornell.edu Abstract The amount of data available in the world is growing faster than our ability to deal with it. However, if we take advantage of the internal structure, data may become much smaller for machine learning purposes. In this paper we focus on one of the fundamental machine learning tasks, empirical risk minimization (ERM), and provide faster algorithms with the help from the clustering structure of the data. We introduce a simple notion of raw clustering that can be efficiently computed from the data, and propose two algorithms based on clustering information. Our accelerated algorithm ClusterACDM is built on a novel Haar transformation applied to the dual space of the ERM problem, and our variance-reduction based algorithm ClusterSVRG introduces a new gradient estimator using clustering. Our algorithms outperform their classical counterparts ACDM and SVRG respectively. 1 Introduction For large-scale machine learning applications, n, the number of training data examples, is usually very large. To search for the optimal solution, it is often desirable to use stochastic gradient methods which only require one (or a batch of) random example(s) from the given training set per iteration in order to form an estimator of the true gradient. For empirical risk minimization problems (ERM) in particular, stochastic gradient methods have received a lot of attention in the past decade. The original stochastic gradient descent (SGD) [4, 26] simply defines the estimator using one random data example and converges slowly. 
Recently, variance-reduction methods were introduced to improve the running time of SGD [6, 7, 13, 18, 20–22, 24], and accelerated gradient methods were introduced to further improve the running time when the regularization parameter is small [9, 16, 17, 23, 27]. None of the above cited results, however, have considered the internal structure of the dataset, that is, using the stochastic gradient with respect to one data vector p to estimate the stochastic gradients of other data vectors close to p. To illustrate why internal structure can be helpful, consider the following extreme case: if all the data vectors are located at the same spot, then every stochastic gradient represents the full gradient of the entire dataset. In a non-extreme case, if data vectors form clusters, then the stochastic gradient of one data vector could provide a rough estimate for its neighbors. Therefore, one should expect ERM problems to be easier if the data vectors are clustered. More importantly, well-clustered datasets are abundant in big-data scenarios. For instance, although there are more than 1 billion users on Facebook, the intrinsic “feature vectors” of these users can be naturally categorized by the users’ occupations, nationalities, etc. [∗The full version of this paper can be found at https://arxiv.org/abs/1602.02151. †These two authors contributed equally to this paper. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.] As another example, although there are 581,012 vectors in the famous Covtype dataset [8], these vectors can be efficiently categorized into 1,445 clusters of diameter 0.1 — see Section 5. With these examples in mind, we investigate in this paper how to train an ERM problem faster using clustering information. 1.1 Known Result and Our Notion of Raw Clustering In a seminal work by Hofmann et al.
published in NIPS 2015 [11], they introduced N-SAGA, the first ERM training algorithm that takes into account the similarities between data vectors. In each iteration, N-SAGA computes the stochastic gradient of one data vector p and uses this information as a biased representative for a small neighborhood of p (say, the 20 nearest neighbors of p). In this paper, we focus on a more general and powerful notion of clustering that captures only the minimum requirement for a cluster to have similar vectors. Assume without loss of generality that data vectors have norm at most 1. We say that a partition of the data vectors is an (s, δ) raw clustering if the vectors are divided into s disjoint sets, and the average distance between vectors in each set is at most δ. For different values of δ, one can obtain an (s_δ, δ) raw clustering where s_δ is a function of δ. For example, a (1445, 0.1) raw clustering exists for the Covtype dataset, which contains 581,012 data vectors. Raw clustering enjoys the following nice properties. • It allows outliers to exist in a cluster and nearby vectors to be split into multiple clusters. • It allows large clusters. This is in contrast to N-SAGA, which requires each cluster to be very small (say of size 20) due to its algorithmic limitation. Computation Overhead. Since we do not need exactness, raw clusterings can be obtained very efficiently. We directly adopt the approach of Hofmann et al. [11] because finding an approximate clustering is the same as finding approximate neighbors. Hofmann et al. [11] proposed to use approximate nearest neighbor algorithms such as LSH [2, 3] and product quantization [10, 12], and we use LSH in our experiments. Without trying hard to optimize the code, we observed that in time 0.3T we can detect whether a good clustering exists, and if so, in time around 3T we find the actual clustering. Here T is the running time for a stochastic method such as SAGA to perform n iterations (i.e., one pass) on the dataset.
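Whether a given partition satisfies the raw-clustering condition is cheap to check directly. A minimal sketch, using the average squared pairwise distance form that the paper's Definition 2.1 makes precise, with synthetic data:

```python
import numpy as np

def is_raw_clustering(vectors, clusters, delta):
    """Check the (s, delta) raw-clustering condition: for each cluster S_c,
    (1/|S_c|^2) * sum over i,j in S_c of ||a_i - a_j||^2 <= delta."""
    for idx in clusters:
        a = vectors[idx]                                   # |S_c| x d block
        sq = ((a[:, None, :] - a[None, :, :]) ** 2).sum(-1)
        if sq.mean() > delta:                              # mean over |S_c|^2 pairs
            return False
    return True

rng = np.random.default_rng(0)
# Two tight synthetic clusters around distinct centers.
centers = np.array([[1.0, 0.0], [0.0, 1.0]])
pts = np.vstack([c + 0.01 * rng.normal(size=(50, 2)) for c in centers])
clusters = [list(range(50)), list(range(50, 100))]
print(is_raw_clustering(pts, clusters, delta=0.1))            # True: tight clusters
print(is_raw_clustering(pts, [list(range(100))], delta=0.1))  # False: one big cluster
```

Note that the averaging tolerates a few outliers per cluster, exactly the flexibility the bullet points above describe.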
We repeat three remarks from Hofmann et al. First, the better the clustering quality, the better performance we can expect; yet one can always use the trivial clustering as a fallback option. Second, the clustering time should be amortized over multiple runs of the training program: if one performs 30 runs to choose between loss functions and tune parameters, the amortized cost to compute a raw clustering is at most 0.1T. Third, since stochastic gradient methods are sequential methods, increasing the computational cost in a highly parallelizable way may not affect data throughput. NOTE. Clustering can also be obtained for free in some scenarios. If Facebook data are retrieved, one can use the geographic information of the users to form a raw clustering. If one works with the CIFAR-10 dataset, the known CIFAR-100 labels can be used as clustering information too [14]. 1.2 Our New Results We first observe some limitations of N-SAGA. Firstly, it is a biased algorithm and does not converge to the objective minimum.³ Secondly, in order to keep the bias small, N-SAGA only exploits a small neighborhood for every data vector. Thirdly, N-SAGA may need 20 times more computation time per iteration compared to SAGA or SGD, if 20 is the average neighborhood size. We explore in this paper how a given (s, δ) raw clustering can improve the performance of training ERM problems. We propose two unbiased algorithms that we call ClusterACDM and ClusterSVRG. The two algorithms use different techniques. ClusterACDM uses a novel clustering-based transformation in the dual space, and provides a faster algorithm than ACDM [1, 15] both in practice and in terms of asymptotic worst-case performance. ClusterSVRG is built on top of SVRG [13], but uses a new clustering-based gradient estimator to improve the running time. More specifically, consider for simplicity ridge regression where the ℓ2 regularizer has weight λ > 0.
The best known non-accelerated methods (such as SAGA [6] and SVRG [13]) and the best known accelerated methods (such as ACDM or AccSDCA [23]) run in time, respectively,

non-accelerated: Õ(nd + d/λ)  and  accelerated: Õ(nd + (√n/√λ)·d),  (1.1)

where d is the dimension of the data vectors and the Õ notation hides the log(1/ε) factor that depends on the accuracy. Accelerated methods converge faster when λ is smaller than 1/n. [Footnote 3: N-SAGA uses the stochastic gradient of one data vector to completely represent its neighbors. This changes the objective value and therefore cannot give very accurate solutions.] Our ClusterACDM method outperforms (1.1) both in theory and in practice. Given an (s, δ) raw clustering, ClusterACDM enjoys a worst-case running time

Õ(nd + (max{√s, √(δn)}/√λ)·d).  (1.2)

In the ideal case when all the feature vectors are identical, ClusterACDM converges in time Õ(nd + d/√λ). Otherwise, our running time is asymptotically better than that of known accelerated methods by a factor O(min{√(n/s), 1/√δ}) that depends on the clustering quality. Our speed-up also generalizes to other ERM problems such as Lasso. Our ClusterSVRG matches the best non-accelerated result in (1.1) in the worst case;⁴ however, it enjoys a provably smaller variance than SVRG or SAGA, and so runs faster in practice. Techniques Behind ClusterACDM. We highlight our main techniques behind ClusterACDM. Since the vectors of a cluster have almost identical directions when δ is small, we wish to create an auxiliary vector for each cluster representing “moving in the average direction of all vectors in this cluster”. Next, we design a stochastic gradient method that, instead of choosing a random vector uniformly, selects these auxiliary vectors with a much higher probability than ordinary ones. This can lead to a running time improvement because moving in the direction of an auxiliary vector only costs O(d) time but exploits the information of the entire cluster.
We implement the above intuition using optimization insights. In the dual space of the ERM problem, each variable corresponds to a data example in the primal, and the objective is known to be coordinate-wise smooth with the same smoothness parameter per coordinate. In the preprocessing step, ClusterACDM applies a novel Haar transformation to each cluster of the dual coordinates. The Haar transformation rotates the dual space and, for each cluster, automatically reserves a new dual variable that corresponds to the “auxiliary vector” mentioned above. Furthermore, these new dual variables have significantly larger smoothness parameters and will therefore be selected with probability much larger than 1/n if one applies a state-of-the-art accelerated coordinate descent method such as ACDM. Other Related Work. ClusterACDM can be viewed as “preconditioning” the data matrix from the dual variable side. Recently, preconditioning has received some attention in machine learning. In particular, non-uniform sampling can be viewed as using diagonal preconditioners [1, 28]. However, diagonal preconditioning has nothing to do with clustering: for instance, if all data vectors have the same Euclidean norm, the cited results are identical to SVRG or APCG and so do not exploit the clustering information. Some authors also study preconditioning from the primal side using SVD [25]. This is different from us because, for instance, when all the data vectors are the same (thus forming a perfect cluster), the cited result reduces to SVRG and does not improve the running time. 2 Preliminaries Given a dataset consisting of n vectors {a_1, . . . , a_n} ⊂ R^d, we assume without loss of generality that ∥a_i∥₂ ≤ 1 for each i ∈ [n]. Let a clustering of the dataset be a partition of the indices [n] = S_1 ∪ · · · ∪ S_s. We call each set S_c a cluster and use n_c = |S_c| to denote its size. It satisfies Σ_{c=1}^s n_c = n.
We are interested in the following quantification that estimates the clustering quality: Definition 2.1 (raw clustering on vectors). We say a partition [n] = S_1 ∪ · · · ∪ S_s is an (s, δ) raw clustering for the vectors {a_1, . . . , a_n} if for every cluster S_c it satisfies (1/|S_c|²) Σ_{i,j∈S_c} ∥a_i − a_j∥² ≤ δ. We call it a raw clustering because the above definition captures the minimum requirement for each cluster to have similar vectors. For instance, the above “average” definition allows a few outliers to exist in each cluster and allows nearby vectors to be split into different clusters. A raw clustering of the dataset is very easy to obtain: we include in Section 5.1 a simple and efficient algorithm for computing an (s_δ, δ) raw clustering of any quality δ. An assumption similar to our (s, δ) raw clustering assumption in Definition 2.1 was also introduced by Hofmann et al. [11]. Definition 2.2 (Smoothness and strong convexity). For a convex function g: R^n → R, • g is σ-strongly convex if ∀x, y ∈ R^n it satisfies g(y) ≥ g(x) + ⟨∇g(x), y − x⟩ + (σ/2)∥x − y∥². • g is L-smooth if ∀x, y ∈ R^n it satisfies ∥∇g(x) − ∇g(y)∥ ≤ L∥x − y∥. • g is coordinate-wise smooth with parameters (L_1, L_2, . . . , L_n) if for every x ∈ R^n, δ > 0, i ∈ [n], it satisfies |∇_i g(x + δe_i) − ∇_i g(x)| ≤ L_i · δ. [Footnote 4: The asymptotic worst-case running time for non-accelerated methods in (1.1) cannot be improved in general, even if a perfect clustering (i.e., δ = 0) is given.] For strongly convex and coordinate-wise smooth functions g, one can apply the accelerated coordinate descent algorithm (ACDM) to minimize g: Theorem 2.3 (ACDM). If g(x) is σ-strongly convex and coordinate-wise smooth with parameters (L_1, . . . , L_n), the non-uniform accelerated coordinate descent method of [1] produces an output y satisfying g(y) − min_x g(x) ≤ ε in O(Σ_i √(L_i/σ) · log(1/ε)) iterations. Each iteration runs in time proportional to the computation of a coordinate gradient ∇_i g(·) of g. Remark 2.4.
Accelerated coordinate descent admits several variants such as APCG [17], ACDM [15], and NU_ACDM [1]. These variants agree on the running time when L_1 = · · · = L_n, but NU_ACDM is the fastest when L_1, . . . , L_n are non-uniform. More specifically, NU_ACDM selects a coordinate i with probability proportional to √L_i. In contrast, ACDM samples coordinate i with probability proportional to L_i and APCG samples i with probability 1/n. We refer to NU_ACDM as the accelerated coordinate descent method (ACDM) in this paper. 3 ClusterACDM Algorithm Our ClusterACDM method is an accelerated stochastic gradient method just like AccSDCA [23], APCG [17], ACDM [1, 15], SPDC [27], etc. Consider a regularized least-squares problem

Primal: min_{x∈R^d} { P(x) := (1/2n) Σ_{i=1}^n (⟨a_i, x⟩ − l_i)² + r(x) },  (3.1)

where each a_i ∈ R^d is the feature vector of a training example and l_i is the label of a_i. Problem (3.1) becomes ridge regression when r(x) = (λ/2)∥x∥₂², and becomes Lasso when r(x) = λ∥x∥₁. One of the state-of-the-art accelerated stochastic gradient methods to solve (3.1) is through its dual. Consider the following equivalent dual formulation of (3.1) (see for instance [17] for the detailed proof):

Dual: min_{y∈R^n} { D(y) := (1/2n)∥y∥² + (1/n)⟨y, l⟩ + r*(−(1/n)Ay) = (1/n) Σ_{i=1}^n ((1/2)y_i² + y_i·l_i) + r*(−(1/n) Σ_{i=1}^n y_i·a_i) },  (3.2)

where A = [a_1, a_2, . . . , a_n] ∈ R^{d×n} and r*(y) := max_w y^T w − r(w) is the Fenchel dual of r(w). 3.1 Previous Solutions If r(x) is λ-strongly convex in P(x), the dual objective D(y) is both strongly convex and smooth. The following lemma is due to [17] but is also proved in our appendix for completeness. Lemma 3.1. If r(x) is λ-strongly convex, then D(y) is σ = 1/n strongly convex and coordinate-wise smooth with parameters (L_1, . . . , L_n) for L_i = 1/n + ∥a_i∥²/(λn²).
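For the ridge case, Lemma 3.1 is easy to sanity-check numerically: with r(x) = (λ/2)∥x∥², the Fenchel dual is r*(u) = ∥u∥²/(2λ), so D(y) is a quadratic whose Hessian is (1/n)I + AᵀA/(λn²), and L_i is its i-th diagonal entry. A small sketch with random data (sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 6, 4, 0.3
A = rng.normal(size=(d, n))
A /= np.maximum(1.0, np.linalg.norm(A, axis=0))    # enforce ||a_i|| <= 1

# Hessian of D(y) for ridge: (1/n) I from the quadratic-in-y terms,
# plus A^T A / (lam n^2) from r*(-Ay/n) = ||Ay||^2 / (2 lam n^2).
H = np.eye(n) / n + A.T @ A / (lam * n**2)

# Lemma 3.1's coordinate-wise smoothness parameters:
L_lemma = 1 / n + np.linalg.norm(A, axis=0) ** 2 / (lam * n**2)
print(np.allclose(np.diag(H), L_lemma))
```

The off-diagonal entries of the Hessian are what the Haar transformation of the next subsection reshapes: rotating within a cluster concentrates the shared direction into one coordinate with a large L_i.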
For this reason, the authors of [17] proposed to apply accelerated coordinate descent (such as their APCG method) to minimize D(y).⁵ Assuming without loss of generality that ‖a_i‖² ≤ 1 for i ∈ [n], we have $L_i \le \frac{1}{n} + \frac{1}{\lambda n^2}$. Using Theorem 2.3 on D(·), we know that ACDM produces an ε-approximate dual minimizer y in $O(\sum_i \sqrt{L_i/\sigma}\,\log(1/\varepsilon)) = \tilde O(n + \sqrt{n/\lambda})$ iterations, and each iteration runs in time proportional to the computation of ∇_i D(y), which is O(d). This total running time $\tilde O(nd + \sqrt{n/\lambda} \cdot d)$ is the fastest for solving (3.1) when r(x) is λ-strongly convex. Due to space limitations, in the main body we only focus on the case when r(x) is strongly convex; the non-strongly convex case (such as Lasso) can be reduced to this case. See Remark A.1 in the appendix.

3.2 Our New Algorithm

Each dual coordinate y_i naturally corresponds to the i-th feature vector a_i. Therefore, given a raw clustering [n] = S_1 ∪ S_2 ∪ · · · ∪ S_s of the dataset, we can partition the coordinates of the dual vector y ∈ R^n into s blocks, each corresponding to a cluster. Without loss of generality, we assume the coordinates of y are sorted in the order of the cluster indices. In other words, we write y = (y_{S_1}, . . . , y_{S_s}) where each y_{S_c} ∈ R^{n_c}.

(Footnote 5: They showed that, defining x = ∇r*(−Ay/n), if y is a good approximate minimizer of the dual objective D(y), then x is also a good approximate minimizer of the primal objective P(x).)

Algorithm 1 ClusterACDM
Input: a raw clustering S_1 ∪ · · · ∪ S_s.
1: Apply the cluster-based Haar transformation H_cl to get the transformed objective D′(y′).
2: Run ACDM to minimize D′(y′).
3: Transform the solution of D′(y′) back to the original space.

ClusterACDM transforms the dual objective (3.2) into an equivalent form by performing an n_c-dimensional Haar transformation on the c-th block of coordinates for every c ∈ [s]. Formally,

Definition 3.2. Let $R_2 \stackrel{\mathrm{def}}{=} \begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix}$, $R_3 \stackrel{\mathrm{def}}{=} \begin{pmatrix} \sqrt{2}/\sqrt{3} & -\sqrt{2}/(2\sqrt{3}) & -\sqrt{2}/(2\sqrt{3}) \\ 0 & 1/\sqrt{2} & -1/\sqrt{2} \end{pmatrix}$, and more generally

$R_n \stackrel{\mathrm{def}}{=} \begin{pmatrix} \frac{1/a}{\sqrt{1/a+1/b}} \,\cdots\, \frac{1/a}{\sqrt{1/a+1/b}} & \frac{-1/b}{\sqrt{1/a+1/b}} \,\cdots\, \frac{-1/b}{\sqrt{1/a+1/b}} \\ R_a & 0 \\ 0 & R_b \end{pmatrix} \in \mathbb{R}^{(n-1)\times n}$

for a = ⌊n/2⌋ and b = ⌈n/2⌉. Then, define the n-dimensional (normalized) Haar matrix as

$H_n \stackrel{\mathrm{def}}{=} \begin{pmatrix} 1/\sqrt{n} & \cdots & 1/\sqrt{n} \\ & R_n & \end{pmatrix} \in \mathbb{R}^{n \times n}$.

We give a few examples of Haar matrices in Example A.2 in Appendix A. It is easy to verify that:

Lemma 3.3. For every n, $H_n^T H_n = H_n H_n^T = I$, so H_n is a unitary matrix.

Definition 3.4. Given a clustering [n] = S_1 ∪ · · · ∪ S_s, define the cluster-based Haar transformation H_cl ∈ R^{n×n} as the block-diagonal matrix $H_{\mathrm{cl}} \stackrel{\mathrm{def}}{=} \mathrm{diag}\big(H_{|S_1|}, H_{|S_2|}, \ldots, H_{|S_s|}\big)$.

Accordingly, we apply the unitary transformation H_cl to (3.2) and consider

$\min_{y' \in \mathbb{R}^n} \Big\{ D'(y') \stackrel{\mathrm{def}}{=} \frac{1}{2n}\|y'\|^2 + \frac{1}{n}\langle y', H_{\mathrm{cl}} l \rangle + r^*\big({-\tfrac{1}{n}} A H_{\mathrm{cl}}^T y'\big) \Big\}$.   (3.3)

We call D′(y′) the transformed objective function. It is clear that the minimization problem (3.3) is equivalent to (3.2) via the transformation y = H_cl^T y′. Our ClusterACDM algorithm then applies ACDM to minimize this transformed objective D′(y′). We state the running time of ClusterACDM and discuss the high-level intuition in the main body; we defer the detailed analysis to Appendix A.

Theorem 3.5. If r(·) is λ-strongly convex and an (s, δ) raw clustering is given, then ClusterACDM outputs an ε-approximate minimizer of D(·) in time $T = O\big(nd + \frac{\max\{\sqrt{s},\, \sqrt{\delta n}\}}{\sqrt{\lambda}}\, d\big)$.

Compared to the complexity of APCG, ACDM, or AccSDCA (see (1.1)), ClusterACDM is faster by a factor of up to $\Omega\big(\min\{\sqrt{n/s}, \sqrt{1/\delta}\}\big)$.

High-Level Intuition. To see why the Haar transformation is helpful, we focus on one cluster c ∈ [s]. Assume without loss of generality that cluster c has vectors a_1, a_2, . . . , a_{n_c}. After applying the Haar transformation, the new columns 1, 2, . . . , n_c of the matrix A H_cl^T become weighted combinations of a_1, a_2, . . . , a_{n_c}, and the weights are determined by the entries in the corresponding rows of H_{n_c}. Observe that every row except the first in H_{n_c} has entries summing to 0. Therefore, columns 2, . . . , n_c of A H_cl^T will be close to zero vectors and have small norms. In contrast, since the first row of H_{n_c} has all entries equal to 1/√n_c, the first column of A H_cl^T becomes $\sqrt{n_c} \cdot \frac{a_1 + \cdots + a_{n_c}}{n_c}$, the scaled average of all vectors in this cluster; it has a large Euclidean norm.

The first column after the Haar transformation can be viewed as an auxiliary feature vector representing the entire cluster. If we run ACDM with respect to this new matrix, then whenever this auxiliary column is selected, it corresponds to "moving in the average direction of all vectors in this cluster". Since this single auxiliary column cannot represent the entire cluster, the remaining n_c − 1 columns serve as helpers that ensure the algorithm remains unbiased (i.e., converges to the exact minimizer). Most importantly, as discussed in Remark 2.4, ACDM is a stochastic method that samples a dual coordinate i (and thus a primal feature vector a_i) with probability proportional to the square root of its coordinate smoothness (thus roughly proportional to ‖a_i‖). Since auxiliary vectors have much larger Euclidean norms, we expect them to be sampled with probabilities much larger than 1/n. This is how the faster running time of Theorem 3.5 is obtained.

REMARK. The speed-up of ClusterACDM depends on how much "non-uniformity" the underlying coordinate descent method can utilize. Therefore, no speed-up can be obtained if one applies APCG instead of NU_ACDM, which is optimally designed to exploit coordinate non-uniformity.

4 ClusterSVRG Algorithm

Our ClusterSVRG is a non-accelerated stochastic gradient method just like SVRG [13], SAGA [6], SDCA [22], etc. It works directly on the primal objective (as do SVRG and SAGA):

$\min_{x \in \mathbb{R}^d} \Big\{ F(x) \stackrel{\mathrm{def}}{=} f(x) + \Psi(x) \stackrel{\mathrm{def}}{=} \frac{1}{n}\sum_{i=1}^n f_i(x) + \Psi(x) \Big\}$.
(4.1)

Here, $f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x)$ is the finite average of n functions, each f_i(x) is convex and L-smooth, and Ψ(x) is a simple (but possibly non-differentiable) convex function, sometimes called the proximal function. We denote by x* a minimizer of (4.1).

Recall that stochastic gradient methods work as follows. At every iteration t, they perform the update $x_t \leftarrow x_{t-1} - \eta \tilde\nabla_{t-1}$ for some step length η > 0,⁶ where $\tilde\nabla_{t-1}$ is the so-called gradient estimator, whose expectation should equal the full gradient ∇f(x_{t−1}). It is a known fact that the faster the variance $\mathrm{Var}[\tilde\nabla_{t-1}]$ diminishes, the faster the underlying method converges [13].

For instance, SVRG defines the estimator as follows. It has an outer loop of epochs. At the beginning of each epoch, SVRG records the current iterate x as a snapshot point $\tilde x$ and computes its full gradient $\nabla f(\tilde x)$. In each inner iteration within an epoch, SVRG defines $\tilde\nabla_{t-1} \stackrel{\mathrm{def}}{=} \frac{1}{n}\sum_{j=1}^n \nabla f_j(\tilde x) + \nabla f_i(x_{t-1}) - \nabla f_i(\tilde x)$, where i is a random index in [n]. SVRG usually chooses the epoch length m to be 2n, and it is known that $\mathrm{Var}[\tilde\nabla_{t-1}]$ approaches zero as t increases. We denote by $\tilde\nabla_{t-1}^{\mathrm{SVRG}}$ this choice of $\tilde\nabla_{t-1}$ for SVRG.

In ClusterSVRG, we define the gradient estimator $\tilde\nabla_{t-1}$ based on clustering information. Given a clustering [n] = S_1 ∪ · · · ∪ S_s and denoting by c_i ∈ [s] the cluster that index i belongs to, we define

$\tilde\nabla_{t-1} \stackrel{\mathrm{def}}{=} \frac{1}{n}\sum_{j=1}^n \big(\nabla f_j(\tilde x) + \zeta_{c_j}\big) + \nabla f_i(x_{t-1}) - \big(\nabla f_i(\tilde x) + \zeta_{c_i}\big)$.

Above, for each cluster c we introduce an additional term ζ_c, which can be defined in one of the following two ways, initializing ζ_c = 0 at the beginning of the epoch for each cluster c:

• In Option I, after each iteration t is completed, letting i be the random index chosen at iteration t, we update $\zeta_{c_i} \leftarrow \nabla f_i(x_{t-1}) - \nabla f_i(\tilde x)$.
• In Option II, we divide an epoch into subepochs of length s each (recall s is the number of clusters). At the beginning of each subepoch, for each cluster c ∈ [s], we define $\zeta_c \leftarrow \nabla f_j(x) - \nabla f_j(\tilde x)$.
Here, x is the last iterate of the previous subepoch and j is a random index in S_c.

We summarize both options in Algorithm 2. Note that Option I gives the simpler intuition, while Option II leads to a simpler proof.

The intuition behind our new choice of $\tilde\nabla_{t-1}$ can be understood as follows. Observe that in the SVRG estimator $\tilde\nabla_{t-1}^{\mathrm{SVRG}}$, each term $\nabla f_j(\tilde x)$ can be viewed as a "guess" of the true gradient $\nabla f_j(x_{t-1})$ for function f_j. However, these guess terms may be very "outdated", because $\tilde x$ can be m = 2n iterations away from x_{t−1}, and they therefore contribute a large variance. We use raw clusterings to improve these guess terms and reduce the variance. If function f_j belongs to cluster c, then our Option I uses $\nabla f_j(\tilde x) + \nabla f_k(x_{t'}) - \nabla f_k(\tilde x)$ as the new guess of ∇f_j(x_t), where t′ is the last time cluster c was accessed and k is the index of the vector in this cluster that was accessed. This new guess only has an "outdatedness" of roughly s, which can be much smaller than n. Due to space limitations, we defer all technical details of ClusterSVRG to Appendix B and B.3.

(Footnote 6: Or, more generally, the proximal update $x_t \leftarrow \arg\min_x \big\{\frac{1}{2\eta}\|x - x_{t-1}\|^2 + \langle \tilde\nabla_{t-1}, x \rangle + \Psi(x)\big\}$ if Ψ(x) is nonzero.)

Algorithm 2 ClusterSVRG
Input: epoch length m, learning rate η, and a raw clustering S_1 ∪ · · · ∪ S_s.
1: x_0, x ← initial point, t ← 0.
2: for epoch ← 0 to MaxEpoch do
3:   $\tilde x \leftarrow x_t$, and (ζ_1, . . . , ζ_s) ← (0, . . . , 0)
4:   for iter ← 1 to m do
5:     t ← t + 1 and choose i uniformly at random from {1, . . . , n}
6:     $x_t \leftarrow x_{t-1} - \eta\big(\frac{1}{n}\sum_{j=1}^n (\nabla f_j(\tilde x) + \zeta_{c_j}) + \nabla f_i(x_{t-1}) - (\nabla f_i(\tilde x) + \zeta_{c_i})\big)$
7:     Option I: $\zeta_{c_i} \leftarrow \nabla f_i(x_{t-1}) - \nabla f_i(\tilde x)$
8:     Option II: if iter mod s = 0 then for all c = 1, . . . , s,
9:       $\zeta_c \leftarrow \nabla f_j(x_{t-1}) - \nabla f_j(\tilde x)$, where j is randomly chosen from S_c.
10:   end for
11: end for

SVRG vs. SAGA vs. ClusterSVRG. SVRG becomes a special case of ClusterSVRG when all the data vectors belong to the same cluster; SAGA becomes a special case of ClusterSVRG when each
We hope that this interpolation helps experimentalists decide between these methods: (1) if the data vectors are pairwise close to each other then use SVRG; (2) if the data vectors are all very separated from each other then use SAGA; and (3) if the data vectors have nice clustering structures (which one can detect using LSH), then use our ClusterSVRG. 5 Experiments We conduct experiments for three datasets that can be found on the LibSVM website [8]: COVTYPE.BINARY, SENSIT (combined scale), and NEWS20.BINARY. To make easier comparison across datasets, we scale every vector by the average Euclidean norm of all the vectors. This step is for comparison only and not necessary in practice. Note that Covtype and SensIT are two datasets where the feature vectors have a nice clustering structure; in contrast, dataset News20 cannot be well clustered and we include it for comparison purpose only. 5.1 Clustering and Haar Transformation We use the approximate nearest neighbor algorithm library E2LSH [2] to compute raw clusterings. Since this is not the main focus of our paper, we include our implementation in Appendix D. The running time needed for raw clustering is reasonable. In Table 1 in the appendix, we list the running time (1) to sub-sample and detect if good clustering exists and (2) to compute the actual clustering. We also list the one-pass running time of SAGA using sparse implementation for comparison. We conclude two things from Table 1. First, in about the same time as SAGA performing 0.3 pass on the datasets, we can detect clustering structure in the dataset for a given diameter δ. This is a fast-enough preprocessing step to help experimentalists choose to use clustering-based methods or not. Second, in about the same time as SAGA performing 3 passes on well-clustered datasets such as Covtype and SensIT, we obtain the actual raw clustering. As emphasized in the introduction, we view the time needed for clustering as negligible. 
This is not only because 0.3 and 3 are small compared to the average number of passes needed to converge (usually around 20), but also because the clustering time is typically amortized over multiple runs of the training algorithm due to different data analysis tasks, parameter tunings, etc.

In ClusterACDM, we need to pre-compute the matrix A H_cl^T using the Haar transformation. This can be implemented efficiently thanks to the sparsity of Haar matrices. In Table 2 in the appendix, we see that the time needed to do so is roughly 2 passes over the dataset. Again, this time should be amortized over multiple runs of the algorithm and is thus negligible.

5.2 Performance Comparison

We compare our algorithms with SVRG, SAGA, and ACDM. We use the default epoch length m = 2n for SVRG, and m = 2n with Option I for ClusterSVRG. We consider ridge and Lasso regression, and denote by λ the weight of the ℓ2 regularizer for ridge or the ℓ1 regularizer for Lasso.

Parameters. For SVRG and SAGA, we tune the best step size for each test case. To make our comparison even stronger, instead of tuning the best step size for ClusterSVRG, we simply set it to either the best step size of SVRG or that of SAGA in each test case. For ACDM and ClusterACDM, the step size is computed automatically, so tuning is unnecessary. For Lasso, because the objective is not strongly convex, one has to add a dummy ℓ2 regularizer to the objective in order to run ACDM or ClusterACDM.
(This step is needed for every accelerated method, including AccSDCA, APCG, or SPDC.) We choose this dummy regularizer to have weight 10⁻⁷ for Covtype and SensIT, and weight 10⁻⁶ for News20.⁷

[Figure 1: Selected plots on ridge regression; for Lasso and more detailed comparisons, see the Appendix. Panels: (a) Covtype, Ridge λ = 10⁻⁵; (b) Covtype, Ridge λ = 10⁻⁶; (c) Covtype, Ridge λ = 10⁻⁷; (d) SensIT, Ridge λ = 10⁻³; (e) SensIT, Ridge λ = 10⁻⁵; (f) SensIT, Ridge λ = 10⁻⁶. Each panel plots the training loss minus the optimum (log scale, 10⁻¹⁴ to 10⁰) against #grad/n, comparing ClusterACDM-7661-0.06, ClusterSVRG-7661-0.06-0.3, SVRG, SAGA, and ACDM (No Cluster) on Covtype, and ClusterACDM-376-0.2, ClusterSVRG-376-0.2, SVRG, SAGA, and ACDM (No Cluster) on SensIT.]

Plot Format. In our plots, the y-axis represents the objective distance to the minimizer, and the x-axis represents the number of passes over the dataset. (The snapshot computation of SVRG and ClusterSVRG counts as one pass.)
In the legend, we use the format:
• "ClusterSVRG–s–δ–stepsize" for ClusterSVRG,
• "ClusterACDM–s–δ" for ClusterACDM,
• "SVRG/SAGA–stepsize" for SVRG or SAGA,
• "ACDM (no Cluster)" for vanilla ACDM without any clustering information.⁸

Results. Our comprehensive experimental plots are included in the appendix; see Figures 2, 3, 4, 5, 6, and 7. Due to space limitations, here we simply compare all the algorithms on ridge regression for the datasets SensIT and Covtype, choosing only one representative clustering; see Figure 1. Generally, ClusterSVRG outperforms SAGA/SVRG when the regularization parameter λ is large, and ClusterACDM outperforms all other algorithms when λ is small. This is because accelerated methods outperform non-accelerated ones for smaller values of λ, and the complexity advantage of ClusterACDM over ACDM grows as λ decreases (compare (1.1) with (1.2)).⁹

Our other findings can be summarized as follows. Firstly, dataset News20 does not have a nice clustering structure, but our ClusterSVRG and ClusterACDM still perform comparably to SVRG and ACDM, respectively. Secondly, the performance of ClusterSVRG is slightly better with a clustering of smaller diameter δ. In contrast, ClusterACDM performs slightly better with larger δ. This is because ClusterACDM can take advantage of very large but low-quality clusters, which is a very appealing feature in practice.

Sensitivity to Clustering. In Figure 8 in the appendix, we plot the performance curves of ClusterSVRG and ClusterACDM for SensIT and Covtype with 7 different clusterings. From the plots we observe that ClusterSVRG and ClusterACDM are quite insensitive to the clustering quality. As long as one does not choose the most extreme clustering, the performance improvement due to clustering can be significant. Moreover, ClusterSVRG is slightly faster if the clustering has a relatively small diameter δ (say, below 0.1), while ClusterACDM can be fast even for very large δ (say, around 0.6).
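Definition 3.2, Lemma 3.3, and the norm-concentration intuition behind ClusterACDM can all be checked numerically. The NumPy sketch below is our own illustration, not the paper's implementation: it builds H_n recursively, verifies unitarity, confirms Definition 2.1's average pairwise-distance condition on synthetic clusters, and shows that A H_clᵀ concentrates each cluster's mass in one auxiliary column. All data, sizes, and thresholds are invented for illustration.

```python
import numpy as np

def R(n):
    # Recursive sub-block from Definition 3.2; R(n) has shape (n-1, n).
    if n == 1:
        return np.zeros((0, 1))
    a, b = n // 2, (n + 1) // 2                     # a = floor(n/2), b = ceil(n/2)
    top = np.concatenate([np.full(a, 1.0 / a), np.full(b, -1.0 / b)])
    top /= np.sqrt(1.0 / a + 1.0 / b)
    block = np.zeros((n - 2, n))
    block[:a - 1, :a] = R(a)                        # [R_a  0 ]
    block[a - 1:, a:] = R(b)                        # [ 0  R_b]
    return np.vstack([top, block])

def haar(n):
    # n-dimensional normalized Haar matrix H_n: first row is all 1/sqrt(n).
    return np.vstack([np.full(n, 1.0 / np.sqrt(n)), R(n)])

# Lemma 3.3: H_n is unitary.
H = haar(7)
assert np.allclose(H @ H.T, np.eye(7)) and np.allclose(H.T @ H, np.eye(7))

# Two tight clusters of 8 columns each (cluster center plus small noise).
rng = np.random.default_rng(0)
d, nc = 20, 8
centers = rng.standard_normal((2, d))
A = np.hstack([centers[k][:, None] + 0.01 * rng.standard_normal((d, nc))
               for k in range(2)])                  # d x n with n = 2 * nc

# Definition 2.1: average pairwise squared distance within each cluster is tiny.
for c in range(2):
    C = A[:, c * nc:(c + 1) * nc]
    D2 = np.sum((C[:, :, None] - C[:, None, :]) ** 2, axis=0)
    assert D2.mean() <= 0.5                         # an (s=2, delta=0.5) raw clustering

# Cluster-based Haar transformation H_cl = diag(H_nc, H_nc), as in Definition 3.4.
Hcl = np.zeros((2 * nc, 2 * nc))
Hcl[:nc, :nc] = haar(nc)
Hcl[nc:, nc:] = haar(nc)

B = A @ Hcl.T
norms = np.linalg.norm(B, axis=0)
# Only the first transformed column per cluster (sqrt(nc) times the cluster
# average) keeps a large norm; the remaining columns sit at the noise scale.
assert norms[0] > 10 * norms[1:nc].max()
assert norms[nc] > 10 * norms[nc + 1:].max()
```

NU_ACDM run on the transformed matrix B would therefore sample the two auxiliary columns far more often than the near-zero helper columns, which is exactly the mechanism behind Theorem 3.5's speed-up.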
Footnote 7: Choosing a larger dummy regularizer makes the algorithm converge faster but to a worse minimum, and vice versa. In our experiments, we find these choices reasonable for our datasets. Since our main focus is to compare ClusterACDM with ACDM, the comparison is fair as long as we choose the same dummy regularizer.

Footnote 8: ACDM has slightly better performance than APCG, so we adopt ACDM in our experiments [1]. Furthermore, our comparison is fair because ClusterACDM and ACDM are implemented in the same manner.

Footnote 9: The best choice of λ usually requires cross-validation. For instance, by performing 10-fold cross-validation, one can find that the best λ is around 10⁻⁶ for SensIT Ridge, 10⁻⁵ for SensIT Lasso, 10⁻⁷ for Covtype Ridge, and 10⁻⁶ for Covtype Lasso. Therefore, for these two datasets ClusterACDM is preferred.

References

[1] Zeyuan Allen-Zhu, Peter Richtárik, Zheng Qu, and Yang Yuan. Even faster accelerated coordinate descent using non-uniform sampling. In ICML, 2016.
[2] Alexandr Andoni. E2LSH. http://www.mit.edu/~andoni/LSH/, 2004.
[3] Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. In NIPS, pages 1225–1233, 2015.
[4] Léon Bottou. Stochastic gradient descent. http://leon.bottou.org/projects/sgd.
[5] Nicolo Cesa-Bianchi and Gabor Lugosi. Prediction, Learning, and Games. Cambridge University Press, Cambridge, 2006.
[6] Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives. In NIPS, 2014.
[7] Aaron J. Defazio, Tibério S. Caetano, and Justin Domke. Finito: A Faster, Permutable Incremental Gradient Method for Big Data Problems. In ICML, 2014.
[8] Rong-En Fan and Chih-Jen Lin. LIBSVM Data: Classification, Regression and Multi-label. Accessed: 2015-06.
[9] Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford.
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization. In ICML, volume 37, pages 1–28, 2015.
[10] Tiezheng Ge, Kaiming He, Qifa Ke, and Jian Sun. Optimized product quantization. IEEE Trans. Pattern Anal. Mach. Intell., 36(4):744–755, 2014.
[11] Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, and Brian McWilliams. Variance reduced stochastic gradient descent with neighbors. In NIPS, pages 2296–2304, 2015.
[12] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search. IEEE Trans. Pattern Anal. Mach. Intell., 33(1):117–128, 2011.
[13] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, pages 315–323, 2013.
[14] Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images, 2009.
[15] Yin Tat Lee and Aaron Sidford. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In FOCS, pages 147–156. IEEE, 2013.
[16] Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A Universal Catalyst for First-Order Optimization. In NIPS, 2015.
[17] Qihang Lin, Zhaosong Lu, and Lin Xiao. An Accelerated Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization. In NIPS, pages 3059–3067, 2014.
[18] Julien Mairal. Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning. SIAM Journal on Optimization, 25(2):829–855, April 2015. Preliminary version appeared in ICML 2013.
[19] Yurii Nesterov. Introductory Lectures on Convex Optimization: A Basic Course, volume I. Kluwer Academic Publishers, 2004.
[20] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, pages 1–45, 2013. Preliminary version appeared in NIPS 2012.
[21] Shai Shalev-Shwartz and Tong Zhang. Proximal Stochastic Dual Coordinate Ascent. arXiv preprint arXiv:1211.2717, pages 1–18, 2012.
[22] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567–599, 2013.
[23] Shai Shalev-Shwartz and Tong Zhang. Accelerated Proximal Stochastic Dual Coordinate Ascent for Regularized Loss Minimization. In ICML, pages 64–72, 2014.
[24] Lin Xiao and Tong Zhang. A Proximal Stochastic Gradient Method with Progressive Variance Reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[25] Tianbao Yang, Rong Jin, Shenghuo Zhu, and Qihang Lin. On data preconditioning for regularized loss minimization. Machine Learning, pages 1–23, 2014.
[26] Tong Zhang. Solving large scale linear prediction problems using stochastic gradient descent algorithms. In ICML, 2004.
[27] Yuchen Zhang and Lin Xiao. Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization. In ICML, 2015.
[28] Peilin Zhao and Tong Zhang. Stochastic Optimization with Importance Sampling for Regularized Loss Minimization. In ICML, volume 37, pages 1–9, 2015.
Dialog-based Language Learning
Jason Weston
Facebook AI Research, New York. jase@fb.com

Abstract

A long-term goal of machine learning research is to build an intelligent dialog agent. Most research in natural language understanding has focused on learning from fixed training sets of labeled data, with supervision either at the word level (tagging, parsing tasks) or the sentence level (question answering, machine translation). This kind of supervision is not representative of how humans learn, where language is both learned by, and used for, communication. In this work, we study dialog-based language learning, where supervision is given naturally and implicitly in the response of the dialog partner during the conversation. We study this setup in two domains: the bAbI dataset of [23] and large-scale question answering from [3]. We evaluate a set of baseline learning strategies on these tasks, and show that a novel model incorporating predictive lookahead is a promising approach for learning from a teacher's response. In particular, a surprising result is that it can learn to answer questions correctly without any reward-based supervision at all.

1 Introduction

Many of machine learning's successes have come from supervised learning, which typically involves employing annotators to label large quantities of data per task. However, humans can also learn by acting and learning from the consequences of (i.e., the feedback from) their actions. When humans act in dialogs (i.e., make speech utterances), the feedback comes from other humans' responses, which hence contain very rich information. This is perhaps most pronounced in a student/teacher scenario, where the teacher provides positive feedback for successful communication and corrections for unsuccessful ones [8, 22]. However, in general any reply from a dialog partner, teacher or not, is likely to contain an informative training signal for learning how to use language in subsequent conversations.
In this paper we explore whether we can train machine learning models to learn from dialogs. The ultimate goal is to develop an intelligent dialog agent that can learn while conducting conversations. To do that, it needs to learn from feedback that is supplied as natural language. However, most machine learning tasks in the natural language processing literature are not of this form: they are hand-labeled by annotators either at the word level (part-of-speech tagging, named entity recognition), the segment level (chunking), or the sentence level (question answering), and learning algorithms have been developed to learn from that kind of supervision. We therefore need to develop evaluation datasets for the dialog-based language learning setting, as well as models and algorithms able to learn in such a regime. The contribution of the present work is thus:

• We introduce a set of tasks that model natural feedback from a teacher and hence assess the feasibility of dialog-based language learning.
• We evaluate some baseline models on this data, comparing to standard supervised learning.
• We introduce a novel forward prediction model, whereby the learner tries to predict the teacher's replies to its actions, yielding promising results, even with no reward signal at all.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

2 Related Work

In human language learning, the usefulness of social interaction and natural infant-directed conversations is emphasized, see e.g. the review paper [6], although the usefulness of feedback for learning grammar is disputed [10]. Support for the usefulness of feedback is found, however, in second language learning [1] and learning by students [4, 8, 22]. In machine learning, one line of research has focused on supervised learning from dialogs using neural models [18, 3].
Question answering given either a database of knowledge [3] or short stories [23] can be considered a simple case of dialog that is easy to evaluate. Those tasks typically do not consider feedback. There is work on the use of feedback and dialog for learning, notably for collecting knowledge to answer questions [5, 14], the use of natural language instruction for learning symbolic rules [7], and the use of binary feedback (rewards) for learning parsers [2]. Another setting which uses feedback is reinforcement learning; see e.g. [16] for a summary of its use in dialog. However, those approaches often treat the reward as the feedback model rather than exploiting the dialog feedback per se. Nevertheless, reinforcement learning ideas have been used to good effect for other tasks as well, such as understanding text adventure games [12], machine translation, and summarization [15]. Recently, [11] also proposed a reward-based learning framework for learning how to learn. Finally, forward prediction models, which we make use of in this work, have been used for learning eye tracking [17], controlling robot arms [9] and vehicles [21], and action-conditional video prediction in Atari games [13]. We are not aware of their use thus far for dialog.

3 Dialog-Based Supervision Tasks

Dialog-based supervision comes in many forms. As far as we are aware, it is currently an open problem which type of learning strategy will work in which setting. In this section we therefore identify different modes of dialog-based supervision and build a learning problem for each. The goal is then to evaluate learners on each type of supervision.
We thus begin by selecting two existing datasets: (i) the single supporting fact problem from the bAbI datasets [23], which consists of short stories from a simulated world followed by questions; and (ii) the MovieQA dataset [3], which is a large-scale dataset (∼100k questions over ∼75k entities) based on questions with answers in the open movie database (OMDb). For each dataset we then consider ten modes of dialog-based supervision. The supervision modes are summarized in Fig. 1 using a snippet of the bAbI dataset as an example. The same setups are also used for MovieQA, some examples of which are given in Fig. 2. We now describe the supervision setups.

Imitating an Expert Student In Task 1 the dialogs take place between a teacher and an expert student who gives semantically coherent answers. Hence, the task is for the learner to imitate that expert student and become an expert itself. For example, imagine the real-world scenario where a child observes its two parents talking to each other: it can learn from this even though it is not actually taking part in the conversation. Note that our main goal in this paper is to examine how a non-expert can learn to improve its dialog skills while conversing. The rest of our tasks will hence concentrate on that goal. This task can be seen as a natural baseline for the rest of our tasks given the same input dialogs and questions.

Positive and Negative Feedback In Task 2, when the learner answers a question, the teacher then replies with either positive or negative feedback. In our experiments the subsequent responses are variants of "No, that's incorrect" or "Yes, that's right". In the datasets we build there are 6 templates for positive feedback and 6 templates for negative feedback, e.g. "Sorry, that's not it.", "Wrong", etc. To separate the notion of positive from negative (otherwise the signal is just words with no notion that yes is better than no) we assume an additional external reward signal that is not part of the text.
As shown in Fig. 1 Task 2, (+) denotes positive reward external to the dialog (e.g. feedback provided by another medium, such as a nod of the head from the teacher). This is provided with every positive response. Note the difference in supervision compared to Task 1: there, every answer is right and provides positive supervision. Here, only the answers the learner got correct receive positive supervision. This could clearly be a problem when the learner is unskilled: it will supply incorrect answers and never (or hardly ever) receive positive responses.

Figure 1: Sample dialogs with differing supervision signals (tasks 1 to 10). In each case the same example story is given for simplicity (shown first). Black text is spoken by the teacher, red text denotes responses by the learner, blue text is provided by an expert student (which the learner can imitate), (+) denotes positive reward external to the dialog (e.g. feedback provided by another medium, such as a nod of the head from the teacher).

Story Context (same for all examples below):
Mary went to the hallway. John moved to the bathroom. Mary travelled to the kitchen.

Task 1: Imitating an Expert Student
Where is Mary? A:kitchen
Where is John? A:bathroom

Task 2: Positive and Negative Feedback
Where is Mary? A:playground
No, that's incorrect.
Where is John? A:bathroom
Yes, that's right! (+)

Task 3: Answers Supplied by Teacher
Where is Mary? A:bedroom
No, the answer is kitchen.
Where is John? A:bathroom
Correct! (+)

Task 4: Hints Supplied by Teacher
Where is Mary? A:bathroom
No, they are downstairs.
Where is John? A:kitchen
No, they are upstairs.

Task 5: Supporting Facts Supplied by Teacher
Where is Mary? A:kitchen
Yes, that's right! (+)
Where is John? A:hallway
No, because John moved to the bathroom.

Task 6: Partial Feedback
Where is Mary? A:kitchen
Yes, that's right!
Where is John? A:bathroom
Yes, that's correct! (+)

Task 7: No Feedback
Where is Mary? A:kitchen
Yes, that's right!
Where is John? A:bathroom
Yes, that's correct!

Task 8: Imitation and Feedback Mixture
Where is Mary? A:kitchen
Where is John? A:bathroom
That's right! (+)

Task 9: Asking For Corrections
Where is Mary? A:kitchen
Yes, that's right! (+)
Where is John? A:hallway
No, that's not right. A:Can you help me?
Bathroom.

Task 10: Asking For Supporting Facts
Where is Mary? A:kitchen
Yes, that's right! (+)
Where is John? A:hallway
No, that's not right. A:Can you help me?
A relevant fact is John moved to the bathroom.

Figure 2: Samples from the MovieQA dataset [3]. In our experiments we consider 10 different language learning setups as described in Figure 1 and Sec. 3. The examples given here are for tasks 2 and 3; questions are in black and answers in red, and (+) indicates receiving positive reward.

Task 2: Positive and Negative Feedback
What movies are about open source? Revolution OS
That's right! (+)
What movies did Darren McGavin star in? Carmen
Sorry, that's not it.
Who directed the film White Elephant? M. Curtiz
No, that is incorrect.

Task 3: Answers Supplied by Teacher
What films are about Hawaii? 50 First Dates
Correct! (+)
Who acted in Licence to Kill? Billy Madison
No, the answer is Timothy Dalton.
What genre is Saratoga Trunk in? Drama
Yes! (+)

Answers Supplied by Teacher In Task 3 the teacher gives positive and negative feedback as in Task 2; however, when the learner's answer is incorrect, the teacher also responds with the correction. For example, if "where is Mary?" is answered with the incorrect answer "bedroom", the teacher responds "No, the answer is kitchen", see Fig. 1 Task 3. If the learner knows how to use this extra information, it effectively has as much supervision signal as in Task 1, and much more than in Task 2.

Hints Supplied by Teacher In Task 4, the corrections provided by the teacher do not give the exact answer as in Task 3, but only a useful hint. This setting is meant to mimic the real-life occurrence of being provided only partial information about what you did wrong. In our datasets
In our datasets we do this by providing the class of the correct answer, e.g. “No, they are downstairs” if the answer should be kitchen, or “No, it is a director” for the question “Who directed Monsters, Inc.?” (using OMDB metadata). The supervision signal here is hence somewhere in between Tasks 2 and 3.

Supporting Facts Supplied by Teacher In Task 5, another way of providing partial supervision for an incorrect answer is explored. Here, the teacher gives a reason (explanation) why the answer is wrong by referring to a known fact that supports the true answer and that the incorrect answer may contradict. For example, “No, because John moved to the bathroom” for an incorrect answer to “Where is John?”, see Fig. 1 Task 5. This is related to what is termed strong supervision in [23], where supporting facts and answers are given for question answering tasks.

Partial Feedback Task 6 considers the case where external rewards are only given some of (50% of) the time for correct answers; the setting is otherwise identical to Task 3. This attempts to mimic the realistic situation of some learning being more closely supervised (a teacher rewarding you for getting some answers right) whereas other dialogs have less supervision (no external rewards). The task attempts to assess the impact of such partial supervision.

No Feedback In Task 7 external rewards are not given at all, only text; the setting is otherwise identical to Tasks 3 and 6. This task explores whether it is actually possible to learn how to answer at all in such a setting. We find in our experiments that the answer is, surprisingly, yes, at least in some conditions.

Imitation and Feedback Mixture Task 8 combines Tasks 1 and 2. The goal is to see if a learner can learn successfully from both forms of supervision at once. This mimics a child both observing pairs of experts talking (Task 1) while also trying to talk (Task 2).
Asking For Corrections Another natural way of collecting supervision is for the learner to ask questions of the teacher about what it has done wrong. Task 9 tests one of the simplest instances, where asking “Can you help me?” when wrong obtains from the teacher the correct answer. This is thus related to the supervision in Task 3, except the learner must first ask for help in the dialog. This is potentially harder for a model as the relevant information is spread over a larger context.

Asking for Supporting Facts Finally, in Task 10, a second, less direct form of supervision for the learner after asking for help is to receive a hint rather than the correct answer, such as “A relevant fact is John moved to the bathroom” when asking “Can you help me?”, see Fig. 1 Task 10. This is thus related to the supervision in Task 5, except the learner must request help.

In our experiments we constructed the ten supervision tasks for the two datasets, which are all available for download at http://fb.ai/babi. They were built in the following way: for each task we consider a fixed policy1 for performing actions (answering questions) which gets questions correct with probability πacc (i.e. the chance of getting the red text correct in Figs. 1 and 2). We can thus compare different learning algorithms for each task over different values of πacc (0.5, 0.1 and 0.01). In all cases a training, validation and test set is provided. For the bAbI dataset this consists of 1000, 100 and 1000 questions respectively per task, and for MovieQA there are ∼96k, ∼10k and ∼10k respectively. MovieQA also includes a knowledge base (KB) of ∼85k facts from OMDB; the memory network model we employ uses inverted index retrieval based on the question to form relevant memories from this set, see [3] for more details. Note that because the policies are fixed, the experiments in this paper are not in a reinforcement learning setting.
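The fixed-policy data generation just described can be sketched as follows. This is an illustrative reconstruction, not the authors’ generation script; the function and field names are our own, and `reward_prob` is only an assumption used to model the partial-feedback setting of Task 6.

```python
import random

def simulate_policy(qa_pairs, candidates, pi_acc, reward_prob=1.0, seed=0):
    """Simulate a fixed policy that answers correctly with probability pi_acc.

    qa_pairs: list of (question, gold_answer); candidates: answer vocabulary.
    Returns (question, answer, correct, rewarded) tuples. Because the policy
    is fixed, this generation does not depend on the model being trained.
    """
    rng = random.Random(seed)
    episodes = []
    for question, gold in qa_pairs:
        if rng.random() < pi_acc:
            answer = gold                                    # correct with prob. pi_acc
        else:
            answer = rng.choice([c for c in candidates if c != gold])
        correct = answer == gold
        rewarded = correct and rng.random() < reward_prob    # (+) only for correct answers
        episodes.append((question, answer, correct, rewarded))
    return episodes
```

Varying `pi_acc` over {0.5, 0.1, 0.01} then produces the three regimes compared in the experiments.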
4 Learning Models

Our main goal is to explore training strategies that can execute dialog-based language learning. To this end we evaluate four possible strategies: imitation learning, reward-based imitation, forward prediction, and a combination of reward-based imitation and forward prediction. We will subsequently describe each in turn.

We test all of these approaches with the same model architecture: an end-to-end memory network (MemN2N) [20]. Memory networks are a recently introduced class of models that has been shown to do well on a number of text understanding tasks, including question answering, dialog [3] and language modeling [20]. In particular, they outperform LSTMs and other baselines on the bAbI datasets [23], which we employ with dialog-based learning modifications in Sec. 3. They are hence a natural baseline model for us to use in order to explore differing modes of learning in our setup. In the following we will first review memory networks, detailing the explicit choices of architecture we made, and then show how they can be modified and applied to our setting of dialog-based language learning.

1 Since the policy is fixed and actually does not depend on the model being learnt, one could also think of it as coming from another agent (or the same agent in the past), which in either case is an imperfect expert.

Figure 3: Architectures for (reward-based) imitation and forward prediction. (a) Model for (reward-based) imitation learning: a controller module, whose internal state vector is initially the query, repeatedly addresses and reads the memory vectors to produce the output, to which supervision (direct or reward-based) is applied. (b) Model for forward prediction: the same network with an additional addressing step over the candidate answers, incorporating the answer (action taken), in order to predict the response to the answer.
Memory Networks A high-level description of the memory network architecture we use is given in Fig. 3 (a). The input is the last utterance of the dialog, x, as well as a set of memories (context) (c_1, ..., c_N) which can encode both short-term memory, e.g. recent previous utterances and replies, and long-term memories, e.g. facts that could be useful for answering questions. The context inputs c_i are converted into vectors m_i via embeddings and are stored in the memory. The goal is to produce an output â by processing the input x and using that to address and read from the memory, m, possibly multiple times, in order to form a coherent reply. In the figure the memory is read twice, which is termed multiple “hops” of attention.

In the first step, the input x is embedded using a matrix A of size d × V, where d is the embedding dimension and V is the size of the vocabulary, giving q = Ax, where the input x is represented as a bag-of-words vector. Each memory c_i is embedded using the same matrix, giving m_i = A c_i. The output of addressing and then reading from memory in the first hop is:

    o_1 = Σ_i p_i^1 m_i,    p_i^1 = Softmax(q^⊤ m_i).

Here, the match between the input and the memories is computed by taking the inner product followed by a softmax, yielding p^1, a probability vector over the memories. The goal is to select memories relevant to the last utterance x, i.e. the most relevant have large values of p_i^1. The output memory representation o_1 is then constructed as the weighted sum of memories, i.e. weighted by p^1. The memory output is then added to the original input, u_1 = R_1(o_1 + q), to form the new state of the controller, where R_1 is a d × d rotation matrix2. The attention over the memory can then be repeated using u_1 instead of q as the addressing vector, yielding:

    o_2 = Σ_i p_i^2 m_i,    p_i^2 = Softmax(u_1^⊤ m_i).

The controller state is updated again with u_2 = R_2(o_2 + u_1), where R_2 is another d × d matrix to be learnt.
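The two attention hops above can be sketched directly in numpy. This is a toy illustration of the equations, not the authors’ implementation; variable names follow the text, and the bag-of-words encoding of the context is an assumption.

```python
import numpy as np

def softmax(z):
    z = z - z.max()                    # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def memn2n_two_hops(x_bow, context_bows, A, R1, R2):
    """Two attention hops of an end-to-end memory network (sketch).

    x_bow: (V,) bag-of-words input; context_bows: (N, V) bag-of-words
    contexts; A: (d, V) embedding matrix; R1, R2: (d, d).
    Returns the controller state u2.
    """
    q = A @ x_bow                      # embed the input: q = A x
    M = context_bows @ A.T             # memories m_i = A c_i, shape (N, d)
    p1 = softmax(M @ q)                # addressing: p1_i = Softmax(q^T m_i)
    o1 = p1 @ M                        # read: o1 = sum_i p1_i m_i
    u1 = R1 @ (o1 + q)                 # controller update
    p2 = softmax(M @ u1)               # second hop, addressed with u1
    o2 = p2 @ M
    return R2 @ (o2 + u1)              # u2
```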
In a two-hop model the final output is then defined as:

    â = Softmax(u_2^⊤ A y_1, ..., u_2^⊤ A y_C)    (1)

where there are C candidate answers in y. In our experiments the candidate set y is the set of actions that occur in the training set for the bAbI tasks, and for MovieQA it is the set of words retrieved from the KB.

2 Optionally, different dictionaries can be used for inputs, memories and outputs instead of being shared.

Having described the basic architecture, we now detail the possible training strategies we can employ for our tasks.

Imitation Learning This approach involves simply imitating one of the speakers in observed dialogs, which is essentially a supervised learning objective3. This is the setting that most existing dialog learning, as well as question answering systems, employ for learning. Examples arrive as (x, c, a) triples, where a is (assumed to be) a good response to the last utterance x given context c. In our case, the whole memory network model defined above is trained using stochastic gradient descent by minimizing a standard cross-entropy loss between â and the label a.

Reward-based Imitation If some actions are poor choices, then one does not want to repeat them; that is, we should not treat them as a supervised objective. In our setting positive reward is only obtained immediately after (some of) the correct actions, or else is zero. A simple strategy is thus to only apply imitation learning on the rewarded actions. The rest of the actions are simply discarded from the training set. This strategy is derived naturally as the degenerate case one obtains by applying policy gradient [24] in our setting where the policy is fixed (see end of Sec. 3). In more complex settings (i.e. where actions that are made lead to long-term changes in the environment and delayed rewards) applying reinforcement learning algorithms would be necessary; e.g. one could still use policy gradient to train the MemN2N, but applied to the model’s own policy.
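The candidate scoring of eq. (1) and the reward-based filtering can be sketched as follows. The names are illustrative and the dictionary-based episode format is our assumption, not the authors’ data format.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def answer_distribution(u2, A, candidate_bows):
    """Eq. (1): a_hat = Softmax(u2^T A y_1, ..., u2^T A y_C).

    candidate_bows: (C, V) bag-of-words candidates; A: (d, V); u2: (d,).
    """
    Y = candidate_bows @ A.T           # embed candidates A y_i, shape (C, d)
    return softmax(Y @ u2)             # probability over the C candidates

def rbi_training_set(episodes):
    """Reward-based imitation keeps only the rewarded (x, c, a) triples;
    unrewarded actions are simply discarded from the training set."""
    return [ep for ep in episodes if ep["reward"] > 0]
```

The cross-entropy loss is then taken between `answer_distribution(...)` and the label a, exactly as in plain imitation learning, but only on the filtered set.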
Forward Prediction An alternative method of training is to perform forward prediction: the aim is, given an utterance x from speaker 1 and an answer a by speaker 2 (i.e., the learner), to predict x̄, the response to the answer from speaker 1. That is, in general, to predict the changed state of the world after action a, which in this case involves the new utterance x̄.

To learn from such data we propose the following modification to memory networks, also shown in Fig. 3 (b): essentially we chop off the final output from the original network of Fig. 3 (a) and replace it with some additional layers that compute the forward prediction. The first part of the network remains exactly the same and only has access to input x and context c, just as before. The computation up to u_2 = R_2(o_2 + u_1) is thus exactly the same as before.

At this point we observe that the computation of the output in the original network, by scoring candidate answers in eq. (1), looks similar to the addressing of memory. Our key idea is thus to perform another “hop” of attention, but over the candidate answers rather than the memories. Crucially, we also incorporate the information of which action (candidate) was actually selected in the dialog (i.e. which one is a). After this “hop”, the resulting state of the controller is then used to do the forward prediction. Concretely, we compute:

    o_3 = Σ_i p_i^3 (A y_i + β* [a = y_i]),    p_i^3 = Softmax(u_2^⊤ A y_i),    (2)

where β* is a d-dimensional vector, also learnt, that represents in the output o_3 the action that was actually selected. After obtaining o_3, the forward prediction is then computed as:

    x̂ = Softmax(u_3^⊤ A x̄_1, ..., u_3^⊤ A x̄_C̄),    where u_3 = R_3(o_3 + u_2).

That is, it computes the scores of the possible responses to the answer a over C̄ possible candidates. The mechanism in eq. (2) gives the model a way to compare the most likely answers to x with the given answer a, which in terms of supervision we believe is critical.
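The forward-prediction readout of eq. (2) can be sketched as below. This is a toy numpy illustration, not the authors’ code; `action_idx` marks which candidate equals the taken action a, and the bag-of-words encodings are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def forward_prediction(u2, A, candidate_bows, action_idx, beta, R3, response_bows):
    """Eq. (2)-style readout: one more attention "hop", over candidate
    answers rather than memories, marking the taken action with beta.

    u2: (d,) controller state; A: (d, V); candidate_bows: (C, V);
    beta: (d,); R3: (d, d); response_bows: (C_bar, V) candidate responses.
    Returns a distribution over the C_bar possible teacher responses x_bar.
    """
    Y = candidate_bows @ A.T                 # A y_i, shape (C, d)
    p3 = softmax(Y @ u2)                     # p3_i = Softmax(u2^T A y_i)
    marked = Y.copy()
    marked[action_idx] += beta               # add beta* to the selected action
    o3 = p3 @ marked                         # o3 = sum_i p3_i (A y_i + beta [a = y_i])
    u3 = R3 @ (o3 + u2)
    Xbar = response_bows @ A.T               # embed candidate responses
    return softmax(Xbar @ u3)                # x_hat over C_bar candidates
```

Training would then apply a cross-entropy loss between this distribution and the observed response x̄.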
For example, in question answering, if the given answer a is incorrect and the model can assign high p_i^3 to the correct answer, then the output o_3 will contain a small amount of β*; conversely, o_3 has a large amount of β* if a is correct. Thus, o_3 informs the model of the likely response x̄ from the teacher. Training can then be performed using the cross-entropy loss between x̂ and the label x̄, similar to before. In the event of a large number of candidates C̄ we subsample the negatives, always keeping x̄ in the set. The set of answers y can also be similarly sampled, making the method highly scalable.

3 Imitation learning algorithms are not always strictly supervised algorithms; they can also depend on the agent’s actions. That is not the setting we use here, where the task is to imitate one of the speakers in a dialog.

Table 1: Test accuracy (%) on the Single Supporting Fact bAbI dataset for various supervision approaches (training with 1000 examples on each) and different policies πacc. A task is successfully passed if ≥95% accuracy is obtained (shown in blue).
(Each column group is a MemN2N trained with the indicated strategy: imitation learning; reward-based imitation (RBI); forward prediction (FP); RBI + FP.)

Supervision Type                           | imitation      | RBI            | FP             | RBI + FP
(πacc =)                                   | 0.5  0.1  0.01 | 0.5  0.1  0.01 | 0.5  0.1  0.01 | 0.5  0.1  0.01
1 - Imitating an Expert Student            | 100  100  100  | 100  100  100  |  23   30   29  |  99   99  100
2 - Positive and Negative Feedback         |  79   28   21  |  99   92   91  |  93   54   30  |  99   92   96
3 - Answers Supplied by Teacher            |  83   37   25  |  99   96   92  |  99   96   99  |  99  100   98
4 - Hints Supplied by Teacher              |  85   23   22  |  99   91   90  |  97   99   66  |  99  100  100
5 - Supporting Facts Supplied by Teacher   |  84   24   27  | 100   96   83  |  98   99  100  | 100   99  100
6 - Partial Feedback                       |  90   22   22  |  98   81   59  | 100  100   99  |  99  100   99
7 - No Feedback                            |  90   34   19  |  20   22   29  | 100   98   99  |  98   99   99
8 - Imitation + Feedback Mixture           |  90   89   82  |  99   98   98  |  28   64   67  |  99   98   97
9 - Asking For Corrections                 |  85   30   22  |  99   89   83  |  23   15   21  |  95   90   84
10 - Asking For Supporting Facts           |  86   25   26  |  99   96   84  |  23   30   48  |  97   95   91
Number of completed tasks (≥95%)           |   1    1    1  |   9    5    2  |   5    5    4  |  10    8    8

A major benefit of this particular architectural design for forward prediction is that, after training with the forward prediction criterion, at test time one can “chop off” the top of the model again to retrieve the original memory network model of Fig. 3 (a). One can thus use it to predict answers â given only x and c. We can thus evaluate its performance directly for that goal as well. Finally, and importantly, if the answer to the response x̄ carries pertinent supervision information for choosing â, as for example in many of the settings of Sec. 3 (and Fig. 1), then this will be backpropagated through the model. This is simply not the case in the imitation, reward-shaping [19] or reward-based imitation learning strategies, which concentrate on the (x, a) pairs.

Reward-based Imitation + Forward Prediction As our reward-based imitation learning uses the architecture of Fig. 3 (a), and forward prediction uses the same architecture but with the additional layers of Fig. 3 (b), we can learn jointly with both strategies.
One simply shares the weights across the two networks, and performs gradient steps for both criteria, one of each type per action. The former makes use of the reward signal – which when available is a very useful signal – but fails to use potential supervision feedback in the subsequent utterances, as described above. It also effectively ignores dialogs carrying no reward. Forward prediction, in contrast, makes use of dialog-based feedback and can train without any reward. On the other hand, not using rewards when available is a serious handicap. Hence, the mixture of both strategies is a potentially powerful combination.

Table 2: Test accuracy (%) on the MovieQA dataset for various supervision approaches. Numbers in bold are the winners for that task and choice of πacc.

Supervision Type                           | imitation      | RBI            | FP             | RBI + FP
(πacc =)                                   | 0.5  0.1  0.01 | 0.5  0.1  0.01 | 0.5  0.1  0.01 | 0.5  0.1  0.01
1 - Imitating an Expert Student            |  80   80   80  |  80   80   80  |  24   23   24  |  77   77   77
2 - Positive and Negative Feedback         |  46   29   27  |  52   32   26  |  48   34   24  |  68   53   34
3 - Answers Supplied by Teacher            |  48   29   26  |  52   32   27  |  60   57   58  |  69   65   62
4 - Hints Supplied by Teacher              |  47   29   26  |  51   32   28  |  58   58   42  |  70   54   32
5 - Supporting Facts Supplied by Teacher   |  47   28   26  |  51   32   26  |  43   44   33  |  66   53   40
6 - Partial Feedback                       |  48   29   27  |  49   32   24  |  60   58   58  |  70   63   62
7 - No Feedback                            |  51   29   27  |  22   21   21  |  60   53   58  |  61   56   50
8 - Imitation + Feedback Mixture           |  60   50   47  |  63   53   51  |  46   31   23  |  72   69   69
9 - Asking For Corrections                 |  48   29   27  |  52   34   26  |  67   52   44  |  68   52   39
10 - Asking For Supporting Facts           |  49   29   27  |  52   34   27  |  51   44   35  |  69   53   36
Mean Accuracy                              |  52   36   34  |  52   38   34  |  52   45   40  |  69   60   50

5 Experiments

We conducted experiments on the datasets described in Section 3. As described before, for each task we consider a fixed policy for performing actions (answering questions) which gets questions correct with probability πacc. We can thus compare the different training strategies described in Sec.
4 over each task for different values of πacc. Hyperparameters for all methods are optimized on the validation sets. A summary of the results is reported in Table 1 for the bAbI dataset and Table 2 for MovieQA. We observed the following results:

• Imitation learning, ignoring rewards, is a poor learning strategy when imitating inaccurate answers, e.g. for πacc < 0.5. For imitating an expert, however (Task 1), it is hard to beat.

• Reward-based imitation (RBI) performs better when rewards are available, particularly in Table 1, but also degrades when they are too sparse, e.g. for πacc = 0.01.

• Forward prediction (FP) is more robust and has stable performance at different levels of πacc. However, as it only predicts answers implicitly and does not make use of rewards, it is outperformed by RBI on several tasks, notably Tasks 1 and 8 (because it cannot do supervised learning) and Task 2 (because it does not take advantage of positive rewards).

• FP makes use of dialog feedback in Tasks 3-5 whereas RBI does not. This explains why FP does better with useful feedback (Tasks 3-5) than without (Task 2), whereas RBI cannot.

• Supplying full answers (Task 3) is more useful than hints (Task 4), but hints still help FP more than just yes/no answers without extra information (Task 2).

• When positive feedback is sometimes missing (Task 6) RBI suffers, especially in Table 1. FP does not, as it does not use this feedback.

• One of the most surprising results of our experiments is that FP performs well overall, given that it does not use feedback, which we will attempt to explain subsequently. This is particularly evident on Task 7 (no feedback), where RBI has no hope of succeeding as it has no positive examples. FP on the other hand learns adequately.

• Tasks 9 and 10 are harder for FP as the question is not immediately before the feedback.

• Combining RBI and FP ameliorates the failings of each, yielding the best overall results.
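The joint RBI + FP schedule (shared weights, one gradient step of each criterion per action) can be sketched abstractly. `rbi_step` and `fp_step` are hypothetical callbacks standing in for the two loss updates; the episode format is our own illustration.

```python
def train_rbi_plus_fp(model, episodes, rbi_step, fp_step):
    """Joint training sketch for reward-based imitation + forward prediction.

    The two criteria share the model weights. RBI only updates on rewarded
    actions; FP updates on every dialog turn, reward or not. Each callback
    is assumed to take a gradient step in place and return its loss value.
    """
    losses = []
    for ep in episodes:
        if ep["reward"] > 0:                  # RBI uses only rewarded actions
            losses.append(rbi_step(model, ep))
        losses.append(fp_step(model, ep))     # FP trains on every turn
    return losses
```

The asymmetry in the loop mirrors the discussion above: dialogs carrying no reward still contribute through the forward-prediction criterion.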
One of the most interesting aspects of our results is that FP works at all without any rewards. In Task 2 it does not even “know” the difference between words like “yes” or “correct” vs. words like “wrong” or “incorrect”, so why should it tend to predict actions that lead to a response like “yes, that’s right”? This is because there is a natural coherence to predicting true answers that leads to greater accuracy in forward prediction. That is, you cannot predict a “right” or “wrong” response from the teacher if you don’t know what the right answer is.

In our experiments our policies πacc sample negative answers equally, which may make learning simpler. We thus conducted an experiment on Task 2 (positive and negative feedback) of the bAbI dataset with a much more biased policy: it is the same as πacc = 0.5 except that, when the policy predicts incorrectly, there is probability 0.5 of choosing a random guess as before, and 0.5 of choosing the fixed answer bathroom. In this case the FP method obtains 68% accuracy, showing the method still works in this regime, although not as well as before.

6 Conclusion

We have presented a set of evaluation datasets and models for dialog-based language learning. The ultimate goal of this line of research is to move towards a learner capable of talking to humans, such that humans are able to effectively teach it during dialog. We believe the dialog-based language learning approach we described is a small step towards that goal.

This paper only studies some restricted types of feedback, namely positive feedback and corrections of various types. However, potentially any reply in a dialog can be seen as feedback, and should be useful for learning. It should be studied whether forward prediction, and the other approaches we tried, work there too. Future work should also develop further evaluation methodologies to test how the models we presented here, and new ones, work in those settings, e.g.
in more complex settings where actions that are made lead to long-term changes in the environment and delayed rewards, i.e. extending to the reinforcement learning setting, and to full language generation. Finally, dialog-based feedback could also be used as a medium to learn non-dialog-based skills, e.g. natural language dialog for completing visual or physical tasks.

Acknowledgments

We thank Arthur Szlam, Y-Lan Boureau, Marc’Aurelio Ranzato, Ronan Collobert, Michael Auli, David Grangier, Alexander Miller, Sumit Chopra, Antoine Bordes and Leon Bottou for helpful discussions and feedback, and the Facebook AI Research team in general for supporting this work.

References

[1] M. A. Bassiri. Interactional feedback and the impact of attitude and motivation on noticing L2 form. English Language and Literature Studies, 1(2):61, 2011.
[2] J. Clarke, D. Goldwasser, M.-W. Chang, and D. Roth. Driving semantic parsing from the world’s response. In Proceedings of Computational Natural Language Learning, 2010.
[3] J. Dodge, A. Gane, X. Zhang, A. Bordes, S. Chopra, A. Miller, A. Szlam, and J. Weston. Evaluating prerequisite qualities for learning end-to-end dialog systems. arXiv preprint arXiv:1511.06931, 2015.
[4] R. Higgins, P. Hartley, and A. Skelton. The conscientious consumer: Reconsidering the role of assessment feedback in student learning. Studies in Higher Education, 27(1):53–64, 2002.
[5] B. Hixon, P. Clark, and H. Hajishirzi. Learning knowledge graphs for question answering through conversational dialog. In ACL, 2015.
[6] P. K. Kuhl. Early language acquisition: cracking the speech code. Nature Reviews Neuroscience, 5(11):831–843, 2004.
[7] G. Kuhlmann, P. Stone, R. Mooney, and J. Shavlik. Guiding a reinforcement learner with natural language advice: Initial results in RoboCup soccer. In AAAI-2004 Workshop on Supervisory Control, 2004.
[8] A. S. Latham. Learning through feedback. Educational Leadership, 54(8):86–87, 1997.
[9] I. Lenz, R. Knepper, and A. Saxena.
DeepMPC: Learning deep latent features for model predictive control. In Robotics: Science and Systems (RSS), 2015.
[10] G. F. Marcus. Negative evidence in language acquisition. Cognition, 46(1):53–85, 1993.
[11] T. Mikolov, A. Joulin, and M. Baroni. A roadmap towards machine intelligence. arXiv preprint arXiv:1511.08130, 2015.
[12] K. Narasimhan, T. Kulkarni, and R. Barzilay. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941, 2015.
[13] J. Oh, X. Guo, H. Lee, R. L. Lewis, and S. Singh. Action-conditional video prediction using deep networks in Atari games. In Advances in Neural Information Processing Systems, pages 2845–2853, 2015.
[14] A. Pappu and A. Rudnicky. Predicting tasks in goal-oriented spoken dialog systems using semantic knowledge bases. In Proceedings of SIGDIAL, pages 242–250, 2013.
[15] M. Ranzato, S. Chopra, M. Auli, and W. Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
[16] V. Rieser and O. Lemon. Reinforcement Learning for Adaptive Dialogue Systems. Springer Science & Business Media, 2011.
[17] J. Schmidhuber and R. Huber. Learning to generate artificial fovea trajectories for target detection. International Journal of Neural Systems, 2(1–2):125–134, 1991.
[18] A. Sordoni, M. Galley, M. Auli, C. Brockett, Y. Ji, M. Mitchell, J.-Y. Nie, J. Gao, and B. Dolan. A neural network approach to context-sensitive generation of conversational responses. NAACL, 2015.
[19] P.-H. Su, D. Vandyke, M. Gasic, N. Mrksic, T.-H. Wen, and S. Young. Reward shaping with recurrent neural networks for speeding up on-line policy learning in spoken dialogue systems. arXiv preprint arXiv:1508.03391, 2015.
[20] S. Sukhbaatar, J. Weston, R. Fergus, et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, pages 2431–2439, 2015.
[21] G. Wayne and L. Abbott.
Hierarchical control using networks trained with higher-level forward models. Neural Computation, 2014.
[22] M. G. Werts, M. Wolery, A. Holcombe, and D. L. Gast. Instructive feedback: Review of parameters and effects. Journal of Behavioral Education, 5(1):55–75, 1995.
[23] J. Weston, A. Bordes, S. Chopra, and T. Mikolov. Towards AI-complete question answering: a set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.
[24] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992.
Consistent Kernel Mean Estimation for Functions of Random Variables

Carl-Johann Simon-Gabriel*, Adam Ścibior*,†, Ilya Tolstikhin, Bernhard Schölkopf
Department of Empirical Inference, Max Planck Institute for Intelligent Systems
Spemannstraße 38, 72076 Tübingen, Germany
* joint first authors; † also with: Engineering Department, Cambridge University
cjsimon@, adam.scibior@, ilya@, bs@tuebingen.mpg.de

Abstract

We provide a theoretical foundation for non-parametric estimation of functions of random variables using kernel mean embeddings. We show that for any continuous function f, consistent estimators of the mean embedding of a random variable X lead to consistent estimators of the mean embedding of f(X). For Matérn kernels and sufficiently smooth functions we also provide rates of convergence. Our results extend to functions of multiple random variables. If the variables are dependent, we require an estimator of the mean embedding of their joint distribution as a starting point; if they are independent, it is sufficient to have separate estimators of the mean embeddings of their marginal distributions. In either case, our results cover both mean embeddings based on i.i.d. samples as well as “reduced set” expansions in terms of dependent expansion points. The latter serves as a justification for using such expansions to limit memory resources when applying the approach as a basis for probabilistic programming.

1 Introduction

A common task in probabilistic modelling is to compute the distribution of f(X), given a measurable function f and a random variable X. In fact, the earliest instances of this problem date back at least to Poisson (1837). Sometimes this can be done analytically. For example, if f is linear and X is Gaussian, that is f(x) = ax + b and X ∼ N(µ; σ), we have f(X) ∼ N(aµ + b; aσ).
There exist various methods for obtaining such analytical expressions (Mathai, 1973), but outside a small subset of distributions and functions the formulae are either not available or too complicated to be practical. An alternative to the analytical approach is numerical approximation, ideally implemented as a flexible software library. The need for such tools is recognised in the general programming languages community (McKinley, 2016), but no standards were established so far. The main challenge is in finding a good approximate representation for random variables.

Distributions on integers, for example, are usually represented as lists of (x_i, p(x_i)) pairs. For real valued distributions, integral transforms (Springer, 1979), mixtures of Gaussians (Milios, 2009), Laguerre polynomials (Williamson, 1989), and Chebyshev polynomials (Korzeń and Jaroszewicz, 2014) were proposed as convenient representations for numerical computation. For strings, probabilistic finite automata are often used. All those approaches have their merits, but they only work with a specific input type.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

There is an alternative, based on Monte Carlo sampling (Kalos and Whitlock, 2008), which is to represent X by a (possibly weighted) sample {(x_i, w_i)}_{i=1}^n (with w_i ≥ 0). This representation has several advantages: (i) it works for any input type, (ii) the sample size controls the time-accuracy trade-off, and (iii) applying functions to random variables reduces to applying the functions pointwise to the sample, i.e., {(f(x_i), w_i)} represents f(X). Furthermore, expectations of functions of random variables can be estimated as E[f(X)] ≈ Σ_i w_i f(x_i) / Σ_i w_i, sometimes with guarantees for the convergence rate. The flexibility of this Monte Carlo approach comes at a cost: without further assumptions on the underlying input space X, it is hard to quantify the accuracy of this representation.
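The weighted-sample representation and its two basic operations can be sketched in a few lines; this is an illustration of the formulas above, with our own function names.

```python
import numpy as np

def expectation(sample, weights, f=lambda x: x):
    """Monte Carlo estimate E[f(X)] ≈ sum_i w_i f(x_i) / sum_i w_i."""
    w = np.asarray(weights, dtype=float)
    fx = np.array([f(x) for x in sample], dtype=float)
    return float(w @ fx / w.sum())

def pushforward(sample, weights, f):
    """Applying f pointwise to a weighted sample of X yields a weighted
    sample {(f(x_i), w_i)} representing f(X); the weights are unchanged."""
    return [f(x) for x in sample], list(weights)
```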
For instance, given two samples of the same size, {(x_i, w_i)}_{i=1}^n and {(x′_i, w′_i)}_{i=1}^n, how can we tell which one is a better representation of X? More generally, how could we optimize a representation with predefined sample size?

There exists an alternative to the Monte Carlo approach, called Kernel Mean Embeddings (KME) (Berlinet and Thomas-Agnan, 2004; Smola et al., 2007). It also represents random variables as samples, but additionally defines a notion of similarity between sample points. As a result, (i) it keeps all the advantages of the Monte Carlo scheme, (ii) it includes the Monte Carlo method as a special case, (iii) it overcomes its pitfalls described above, and (iv) it can be tailored to focus on different properties of X, depending on the user’s needs and prior assumptions. The KME approach identifies both sample points and distributions with functions in an abstract Hilbert space. Internally the latter are still represented as weighted samples, but the weights can be negative and the straightforward Monte Carlo interpretation is no longer valid.

Schölkopf et al. (2015) propose using KMEs as approximate representations of random variables for the purpose of computing their functions. However, they only provide theoretical justification for it in rather idealised settings, which do not meet practical implementation requirements. In this paper, we build on this work and provide general theoretical guarantees for the proposed estimators. Specifically, we prove statements of the form “if {(x_i, w_i)}_{i=1}^n provides a good estimate for the KME of X, then {(f(x_i), w_i)}_{i=1}^n provides a good estimate for the KME of f(X)”. Importantly, our results do not assume joint independence of the observations x_i (and weights w_i). This makes them a powerful tool. For instance, imagine we are given data {(x_i, w_i)}_{i=1}^n from a random variable X that we need to compress.
Then our theorems guarantee that, whatever compression algorithm we use, as long as the compressed representation {(x′_j, w′_j)}_{j=1}^n still provides a good estimate for the KME of X, the pointwise images {(f(x′_j), w′_j)}_{j=1}^n provide good estimates of the KME of f(X).

In the remainder of this section we first introduce KMEs and discuss their merits. Then we explain why and how we extend the results of Schölkopf et al. (2015). Section 2 contains our main results. In Section 2.1 we show consistency of the relevant estimator in a general setting, and in Section 2.2 we provide finite sample guarantees when Matérn kernels are used. In Section 3 we show how our results apply to functions of multiple variables, both interdependent and independent. Section 4 concludes with a discussion.

1.1 Background on kernel mean embeddings

Let X be a measurable input space. We use a positive definite, bounded and measurable kernel k : X × X → R to represent random variables X ∼ P and weighted samples X̂ := {(x_i, w_i)}_{i=1}^n as two functions µ^k_X and µ̂^k_X in the corresponding Reproducing Kernel Hilbert Space (RKHS) H_k by defining

    µ^k_X := ∫ k(x, .) dP(x)    and    µ̂^k_X := Σ_i w_i k(x_i, .) .

These are guaranteed to exist, since we assume the kernel is bounded (Smola et al., 2007). When clear from the context, we omit the kernel k in the superscript. µ_X is called the KME of P, but we also refer to it as the KME of X. In this paper we focus on computing functions of random variables. For f : X → Z, where Z is a measurable space, and for a positive definite bounded kernel k_z : Z × Z → R, we also write

    µ^{k_z}_{f(X)} := ∫ k_z(f(x), .) dP(x)    and    µ̂^{k_z}_{f(X)} := Σ_i w_i k_z(f(x_i), .) .    (1)

The advantage of mapping random variables X and samples X̂ to functions in the RKHS is that we may now say that X̂ is a good approximation for X if the RKHS distance ‖µ̂_X − µ_X‖ is small. This distance depends on the choice of the kernel, and different kernels emphasise different information about X.
For example, if on $\mathcal{X} := [a, b] \subset \mathbb{R}$ we choose $k(x, x') := x \cdot x' + 1$, then $\mu_X(x) = \mathbb{E}_{X \sim P}[X]\, x + 1$. Thus any two distributions and/or samples with equal means are mapped to the same function in $\mathcal{H}_k$, so the distance between them is zero. Therefore, using this particular $k$, we keep track only of the mean of the distributions. If instead we prefer to keep track of all first $p$ moments, we may use the kernel $k(x, x') := (x \cdot x' + 1)^p$. And if we do not want to lose any information at all, we should choose $k$ such that $\mu^k$ is injective over all probability measures on $\mathcal{X}$. Such kernels are called characteristic. For standard spaces, such as $\mathcal{X} = \mathbb{R}^d$, many widely used kernels have been proven characteristic, such as the Gaussian, Laplacian, and Matérn kernels (Sriperumbudur et al., 2010, 2011). The Gaussian kernel $k(x, x') := e^{-\|x - x'\|^2/(2\sigma^2)}$ may serve as another good illustration of the flexibility of this representation. For any positive bandwidth $\sigma^2 > 0$, we do not lose any information about distributions, because $k$ is characteristic. Nevertheless, as $\sigma^2$ grows, all distributions start looking the same, because their embeddings converge to the constant function $1$. If, on the other hand, $\sigma^2$ becomes small, distributions look increasingly different and $\hat{\mu}_X$ becomes a function with bumps of height $w_i$ at every $x_i$. In the limit as $\sigma^2$ goes to zero, each point is only similar to itself, so $\hat{\mu}_X$ reduces to the Monte Carlo method. Choosing $\sigma^2$ can thus be interpreted as controlling the degree of smoothing in the approximation.

1.2 Reduced set methods

An attractive feature when using KME estimators is the ability to reduce the number of expansion points (i.e., the size of the weighted sample) in a principled way. Specifically, if $\hat{X}' := \{(x'_j, 1/N)\}_{j=1}^N$, then the objective is to construct $\hat{X} := \{(x_i, w_i)\}_{i=1}^n$ that minimises $\|\hat{\mu}_{X'} - \hat{\mu}_X\|$ with $n < N$. Often the resulting $x_i$ are mutually dependent and the $w_i$ certainly depend on them.
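As a concrete sketch of this objective (a deliberately naive reduced set construction of our own, not a specific algorithm from the literature): keep a random subset of expansion points and re-fit their weights by least squares, which amounts to solving the normal equations $K_{nn} w = K_{nN} \mathbf{1}/N$ in the kernel Gram matrices.

```python
# Toy reduced set construction (our own naive variant, for illustration only):
# keep n randomly chosen expansion points and re-fit their weights w to
# minimise ||mu_hat_reduced - mu_hat_full||, a least-squares problem in w.
import numpy as np

def gauss(a, b, s2=0.5):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * s2))

rng = np.random.default_rng(1)
N, n = 500, 25
x_full = rng.normal(3.0, 0.5, size=N)               # original sample, weights 1/N
x_red = rng.choice(x_full, size=n, replace=False)   # reduced expansion points

# Normal equations K_nn w = K_nN 1/N, with a small ridge for stability.
K_nn = gauss(x_red, x_red) + 1e-6 * np.eye(n)
w = np.linalg.solve(K_nn, gauss(x_red, x_full).mean(axis=1))

def emb_dist2(xa, wa, xb, wb):
    # squared RKHS distance between two weighted-sample embeddings
    return (wa @ gauss(xa, xa) @ wa - 2.0 * wa @ gauss(xa, xb) @ wb
            + wb @ gauss(xb, xb) @ wb)

w_full = np.full(N, 1.0 / N)
err_fitted = emb_dist2(x_red, w, x_full, w_full)
err_uniform = emb_dist2(x_red, np.full(n, 1.0 / n), x_full, w_full)
print(err_fitted, err_uniform)  # fitted weights track the full embedding better
```

Note that the fitted weights need not be uniform or even positive, which is exactly why the straightforward Monte Carlo interpretation of the expansion is lost.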
The algorithms for constructing such expansions are known as reduced set methods and have been studied by the machine learning community (Schölkopf and Smola, 2002, Chapter 18). Although reduced set methods provide significant efficiency gains, their application raises certain concerns when it comes to computing functions of random variables. Let $P, Q$ be the distributions of $X$ and $f(X)$ respectively. If $x'_j \sim_{\text{i.i.d.}} P$, then $f(x'_j) \sim_{\text{i.i.d.}} Q$, and so $\hat{\mu}_{f(X')} = \frac{1}{N}\sum_j k(f(x'_j), \cdot)$ reduces to the commonly used $\sqrt{N}$-consistent empirical estimator of $\mu_{f(X)}$ (Smola et al., 2007). Unfortunately, this is not the case after applying reduced set methods, and it is not known under which conditions $\hat{\mu}_{f(X)}$ is a consistent estimator for $\mu_{f(X)}$. Schölkopf et al. (2015) advocate the use of reduced expansion set methods to save computational resources. They also provide some reasoning why this should be the right thing to do for characteristic kernels, but as they state themselves, their rigorous analysis does not cover practical reduced set methods. Motivated by this and other concerns listed in Section 1.4, we provide a generalised analysis of the estimator $\hat{\mu}_{f(X)}$, where we make no assumptions on how the $x_i$ and $w_i$ were generated. Before doing that, however, we first illustrate how the need for reduced set methods naturally emerges in a concrete problem.

1.3 Illustration with functions of two random variables

Suppose that we want to estimate $\mu_{f(X,Y)}$ given i.i.d. samples $\hat{X}' = \{(x'_i, 1/N)\}_{i=1}^N$ and $\hat{Y}' = \{(y'_j, 1/N)\}_{j=1}^N$ from two independent random variables $X \in \mathcal{X}$ and $Y \in \mathcal{Y}$ respectively. Let $Q$ be the distribution of $Z = f(X, Y)$. The first option is to consider what we will call the diagonal estimator $\hat{\mu}_1 := \frac{1}{N}\sum_{i=1}^N k_z\big(f(x'_i, y'_i), \cdot\big)$. Since $f(x'_i, y'_i) \sim_{\text{i.i.d.}} Q$, $\hat{\mu}_1$ is $\sqrt{N}$-consistent (Smola et al., 2007). Another option is to consider the U-statistic estimator $\hat{\mu}_2 := \frac{1}{N^2}\sum_{i,j=1}^N k_z\big(f(x'_i, y'_j), \cdot\big)$, which is also known to be $\sqrt{N}$-consistent.
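The two estimators can be compared numerically. The following sketch (our own toy setup with a Gaussian kernel on $\mathcal{Z}$ and $f(x, y) = xy$, not the paper's experiment code) measures the squared RKHS error of each against a large-sample reference embedding standing in for the unknown truth.

```python
# Toy comparison (our own setup, not the paper's code) of the diagonal and
# U-statistic estimators of mu_{f(X,Y)} for f(x, y) = x * y, measured in
# squared RKHS distance to a large-sample reference embedding.
import numpy as np

def kz(a, b, s2=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * s2))

def dist2(za, wa, zb, wb):
    return (wa @ kz(za, za) @ wa - 2.0 * wa @ kz(za, zb) @ wb
            + wb @ kz(zb, zb) @ wb)

rng = np.random.default_rng(2)
N = 50
x = rng.normal(3.0, 0.5, size=N)
y = rng.normal(4.0, 0.5, size=N)

z1 = x * y                                   # diagonal: N points f(x_i, y_i)
w1 = np.full(N, 1.0 / N)
z2 = (x[:, None] * y[None, :]).ravel()       # U-statistic: N^2 points f(x_i, y_j)
w2 = np.full(N * N, 1.0 / N ** 2)

# stand-in for the unknown true embedding: a much larger i.i.d. sample
M = 2000
zr = rng.normal(3.0, 0.5, size=M) * rng.normal(4.0, 0.5, size=M)
wr = np.full(M, 1.0 / M)

e1 = dist2(z1, w1, zr, wr)   # error of the diagonal estimator
e2 = dist2(z2, w2, zr, wr)   # error of the U-statistic estimator
print(e1, e2)
```

The $N^2$-point expansion of the U-statistic estimator is exactly what makes its memory footprint quadratic, motivating the reduced set variant discussed next.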
Experiments show that $\hat{\mu}_2$ is more accurate and has lower variance than $\hat{\mu}_1$ (see Figure 1). However, the U-statistic estimator $\hat{\mu}_2$ needs $O(N^2)$ memory rather than $O(N)$. For this reason Schölkopf et al. (2015) propose to use a reduced set method on both $\hat{X}'$ and $\hat{Y}'$ to get new samples $\hat{X} = \{(x_i, w_i)\}_{i=1}^n$ and $\hat{Y} = \{(y_j, u_j)\}_{j=1}^n$ of size $n \ll N$, and then estimate $\mu_{f(X,Y)}$ using $\hat{\mu}_3 := \sum_{i,j=1}^n w_i u_j\, k_z\big(f(x_i, y_j), \cdot\big)$. We ran experiments on synthetic data to show how accurately $\hat{\mu}_1$, $\hat{\mu}_2$ and $\hat{\mu}_3$ approximate $\mu_{f(X,Y)}$ with growing sample size $N$. We considered three basic arithmetic operations: multiplication $X \cdot Y$, division $X/Y$, and exponentiation $X^Y$, with $X \sim \mathcal{N}(3; 0.5)$ and $Y \sim \mathcal{N}(4; 0.5)$. As the true embedding $\mu_{f(X,Y)}$ is unknown, we approximated it by a U-statistic estimator based on a large sample (125 points). For $\hat{\mu}_3$, we used the simplest possible reduced set method: we randomly sampled subsets of size $n = 0.01 \cdot N$ of the $x_i$, and optimized the weights $w_i$ and $u_i$ to best approximate $\hat{\mu}_X$ and $\hat{\mu}_Y$. The results are summarised in Figure 1 and corroborate our expectations: (i) all estimators converge, (ii) $\hat{\mu}_2$ converges fastest and has the lowest variance, and (iii) $\hat{\mu}_3$ is worse than $\hat{\mu}_2$, but much better than the diagonal estimator $\hat{\mu}_1$. Note, moreover, that unlike the U-statistic estimator $\hat{\mu}_2$, the reduced set based estimator $\hat{\mu}_3$ can be used with a fixed storage budget even if we perform a sequence of function applications, a situation naturally appearing in the context of probabilistic programming. Schölkopf et al. (2015) prove the consistency of $\hat{\mu}_3$ only for a rather limited case, when the points of the reduced expansions $\{x_i\}_{i=1}^n$ and $\{y_i\}_{i=1}^n$ are i.i.d. copies of $X$ and $Y$, respectively, and the weights $\{(w_i, u_i)\}_{i=1}^n$ are constants. Using our new results we will prove in Section 3.1 the consistency of $\hat{\mu}_3$ under fairly general conditions, even when both expansion points and weights are interdependent random variables.
Figure 1: Error of kernel mean estimators for basic arithmetic functions of two variables, $X \cdot Y$, $X/Y$ and $X^Y$, as a function of sample size $N$. The U-statistic estimator $\hat{\mu}_2$ works best, closely followed by the proposed estimator $\hat{\mu}_3$, which outperforms the diagonal estimator $\hat{\mu}_1$.

1.4 Other sources of non-i.i.d. samples

Although our discussion above focuses on reduced expansion set methods, there are other popular algorithms that produce KME expansions where the samples are not i.i.d. Here we briefly discuss several examples, emphasising that our selection is not comprehensive. They provide additional motivation for stating convergence guarantees in the most general setting possible. An important notion in probability theory is that of a conditional distribution, which can also be represented using a KME (Song et al., 2009). With this representation the standard laws of probability, such as the sum, product, and Bayes' rules, can be stated using KMEs (Fukumizu et al., 2013). Applying those rules results in KME estimators with strong dependencies between samples and their weights. Another possibility is that even though i.i.d. samples are available, they may not produce the best estimator. Various approaches, such as kernel herding (Chen et al., 2010; Lacoste-Julien et al., 2015), attempt to produce a better KME estimator by actively generating pseudo-samples that are not i.i.d. from the underlying distribution.

2 Main results

This section contains our main results regarding consistency and finite sample guarantees for the estimator $\hat{\mu}_{f(X)}$ defined in (1). They are based on the convergence of $\hat{\mu}_X$ and avoid simplifying assumptions about its structure.

2.1 Consistency

If $k_x$ is $c_0$-universal (see Sriperumbudur et al. (2011)), consistency of $\hat{\mu}_{f(X)}$ can be shown in a rather general setting.

Theorem 1. Let $\mathcal{X}$ and $\mathcal{Z}$ be compact Hausdorff spaces equipped with their Borel $\sigma$-algebras, $f : \mathcal{X} \to \mathcal{Z}$ a continuous function, and $k_x, k_z$ continuous kernels on $\mathcal{X}, \mathcal{Z}$ respectively.
Assume $k_x$ is $c_0$-universal and that there exists a constant $C$ such that $\sum_i |w_i| \le C$ independently of $n$. Then the following holds: if $\hat{\mu}_X^{k_x} \to \mu_X^{k_x}$, then $\hat{\mu}_{f(X)}^{k_z} \to \mu_{f(X)}^{k_z}$ as $n \to \infty$.

Proof. Let $P$ be the distribution of $X$ and $\hat{P}_n = \sum_{i=1}^n w_i \delta_{x_i}$. Define a new kernel on $\mathcal{X}$ by $\tilde{k}_x(x_1, x_2) := k_z\big(f(x_1), f(x_2)\big)$. $\mathcal{X}$ is compact and $\{\hat{P}_n \mid n \in \mathbb{N}\} \cup \{P\}$ is a bounded set (in total variation norm) of finite measures, because $\|\hat{P}_n\|_{TV} = \sum_{i=1}^n |w_i| \le C$. Furthermore, $k_x$ is continuous and $c_0$-universal. Using Corollary 52 of Simon-Gabriel and Schölkopf (2016) we conclude that $\hat{\mu}_X^{k_x} \to \mu_X^{k_x}$ implies that $\hat{P}_n$ converges weakly to $P$. Now, $k_z$ and $f$ being continuous, so is $\tilde{k}_x$. Thus, if $\hat{P}_n$ converges weakly to $P$, then $\hat{\mu}_X^{\tilde{k}_x} \to \mu_X^{\tilde{k}_x}$ (Simon-Gabriel and Schölkopf, 2016, Theorem 44, Points (i) and (iii)). Overall, $\hat{\mu}_X^{k_x} \to \mu_X^{k_x}$ implies $\hat{\mu}_X^{\tilde{k}_x} \to \mu_X^{\tilde{k}_x}$. We conclude the proof by showing that convergence in $\mathcal{H}_{\tilde{k}_x}$ leads to convergence in $\mathcal{H}_{k_z}$:
$$\big\|\hat{\mu}_{f(X)}^{k_z} - \mu_{f(X)}^{k_z}\big\|_{k_z}^2 = \big\|\hat{\mu}_X^{\tilde{k}_x} - \mu_X^{\tilde{k}_x}\big\|_{\tilde{k}_x}^2 \to 0.$$
For a detailed version of the above, see Appendix A.

The continuity assumption is rather unrestrictive. All kernels and functions defined on a discrete space are continuous with respect to the discrete topology, so the theorem applies in this case. For $\mathcal{X} = \mathbb{R}^d$, many kernels used in practice are continuous, including Gaussian, Laplacian, Matérn and other radial kernels. The slightly limiting factor of this theorem is that $k_x$ must be $c_0$-universal, which can often be tricky to verify. However, most standard kernels, including all radial, non-constant kernels, are $c_0$-universal (see Sriperumbudur et al., 2011). The assumption that the input domain is compact is satisfied in most applications, since any measurements coming from physical sensors are contained in a bounded range. Finally, the assumption that $\sum_i |w_i| \le C$ can be enforced, for instance, by applying a suitable regularization in reduced set methods.
2.2 Finite sample guarantees

Theorem 1 guarantees that the estimator $\hat{\mu}_{f(X)}$ converges to $\mu_{f(X)}$ when $\hat{\mu}_X$ converges to $\mu_X$. However, it says nothing about the speed of convergence. In this section we provide a convergence rate when working with Matérn kernels, which are of the form
$$k_x^s(x, x') = \frac{2^{1-s}}{\Gamma(s)}\, \|x - x'\|_2^{s - d/2}\, B_{d/2 - s}\big(\|x - x'\|_2\big), \qquad (2)$$
where $B_\alpha$ is a modified Bessel function of the third kind (also known as a Macdonald function) of order $\alpha$, $\Gamma$ is the Gamma function and $s > \frac{d}{2}$ is a smoothness parameter. The RKHS induced by $k_x^s$ is the Sobolev space $W_2^s(\mathbb{R}^d)$ (Wendland, 2004, Theorem 6.13 & Chap. 10) containing $s$-times differentiable functions. The finite-sample bound of Theorem 2 is based on the analysis of Kanagawa et al. (2016), which requires the following assumptions:

Assumptions 1. Let $X$ be a random variable over $\mathcal{X} = \mathbb{R}^d$ with distribution $P$ and let $\hat{X} = \{(x_i, w_i)\}_{i=1}^n$ be random variables over $\mathcal{X}^n \times \mathbb{R}^n$ with joint distribution $S$. There exists a probability distribution $Q$ with full support on $\mathbb{R}^d$ and a bounded density, satisfying the following properties:
(i) $P$ has a bounded density function w.r.t. $Q$;
(ii) there is a constant $D > 0$ independent of $n$, such that
$$\mathbb{E}_S\left[\frac{1}{n}\sum_{i=1}^n g^2(x_i)\right] \le D\, \|g\|_{L_2(Q)}^2\,, \qquad \forall g \in L_2(Q)\,.$$

These assumptions were shown to be fairly general and we refer to Kanagawa et al. (2016, Section 4.1) for various examples where they are met. Next we state the main result of this section.

Theorem 2. Let $\mathcal{X} = \mathbb{R}^d$, $\mathcal{Z} = \mathbb{R}^{d'}$, and $f : \mathcal{X} \to \mathcal{Z}$ be an $\alpha$-times differentiable function ($\alpha \in \mathbb{N}^+$). Take $s_1 > d/2$ and $s_2 > d'$ such that $s_1, s_2/2 \in \mathbb{N}^+$. Let $k_x^{s_1}$ and $k_z^{s_2}$ be Matérn kernels over $\mathcal{X}$ and $\mathcal{Z}$ respectively, as defined in (2). Assume $X \sim P$ and $\hat{X} = \{(x_i, w_i)\}_{i=1}^n \sim S$ satisfy Assumptions 1. Moreover, assume that $P$ and the marginals of $x_1, \ldots, x_n$ have a common compact support. Suppose that, for some constants $b > 0$ and $0 < c \le 1/2$:
(i) $\mathbb{E}_S\big[\|\hat{\mu}_X - \mu_X\|_{k_x^{s_1}}^2\big] = O(n^{-2b})$;
(ii) $\sum_{i=1}^n w_i^2 = O(n^{-2c})$ (with probability 1).
Let $\theta = \min\big(\frac{s_2}{2 s_1}, \frac{\alpha}{s_1}, 1\big)$ and assume $\theta b - (1/2 - c)(1 - \theta) > 0$. Then
$$\mathbb{E}_S \big\|\hat{\mu}_{f(X)} - \mu_{f(X)}\big\|_{k_z^{s_2}}^2 = O\Big((\log n)^{d'}\, n^{-2(\theta b - (1/2 - c)(1 - \theta))}\Big). \qquad (3)$$

Before we provide a short sketch of the proof, let us briefly comment on this result. As a benchmark, remember that when $x_1, \ldots, x_n$ are i.i.d. observations from $X$ and $\hat{X} = \{(x_i, 1/n)\}_{i=1}^n$, we get $\|\hat{\mu}_{f(X)} - \mu_{f(X)}\|^2 = O_P(n^{-1})$, which was recently shown to be a minimax optimal rate (Tolstikhin et al., 2016). How do we compare to this benchmark? In this case we have $b = c = 1/2$ and our rate is determined by $\theta$. If $f$ is smooth enough, say $\alpha > d/2 + 1$, then by setting $s_2 > 2 s_1 = 2\alpha$, we recover the $O(n^{-1})$ rate up to an extra $(\log n)^{d'}$ factor. However, Theorem 2 applies in much more general settings. Importantly, it makes no i.i.d. assumptions on the data points and weights, allowing for complex interdependences. Instead, it asks the convergence of the estimator $\hat{\mu}_X$ to the embedding $\mu_X$ to be sufficiently fast. On the downside, the upper bound is affected by the smoothness of $f$, even in the i.i.d. setting: if $\alpha \ll d/2$ the rate becomes slower, as $\theta = \alpha/s_1$. Also, the rate depends on both $d$ and $d'$. Whether these are artefacts of our proof remains an open question.

Proof. Here we sketch the main ideas of the proof and develop the details in Appendix C. Throughout the proof, $C$ will designate a constant that depends neither on the sample size $n$ nor on the variable $R$ (to be introduced). $C$ may however change from line to line. We start by showing that
$$\mathbb{E}_S \big\|\hat{\mu}_{f(X)}^{k_z} - \mu_{f(X)}^{k_z}\big\|_{k_z}^2 = (2\pi)^{\frac{d'}{2}} \int_{\mathcal{Z}} \mathbb{E}_S\Big(\big[\hat{\mu}_{f(X)}^{h} - \mu_{f(X)}^{h}\big](z)\Big)^2 dz, \qquad (4)$$
where $h$ is the Matérn kernel over $\mathcal{Z}$ with smoothness parameter $s_2/2$. Second, we upper bound the integrand by roughly imitating the proof idea of Theorem 1 of Kanagawa et al. (2016). This eventually yields
$$\mathbb{E}_S\Big(\big[\hat{\mu}_{f(X)}^{h} - \mu_{f(X)}^{h}\big](z)\Big)^2 \le C\, n^{-2\nu}, \qquad (5)$$
where $\nu := \theta b - (1/2 - c)(1 - \theta)$. Unfortunately, this upper bound does not depend on $z$ and cannot be integrated over the whole $\mathcal{Z}$ in (4).
Denoting by $B_R$ the ball of radius $R$ centred at the origin of $\mathcal{Z}$, we thus decompose the integral in (4) as
$$\int_{\mathcal{Z}} \mathbb{E}\Big(\big[\hat{\mu}_{f(X)}^h - \mu_{f(X)}^h\big](z)\Big)^2 dz = \int_{B_R} \mathbb{E}\Big(\big[\hat{\mu}_{f(X)}^h - \mu_{f(X)}^h\big](z)\Big)^2 dz + \int_{\mathcal{Z} \setminus B_R} \mathbb{E}\Big(\big[\hat{\mu}_{f(X)}^h - \mu_{f(X)}^h\big](z)\Big)^2 dz.$$
On $B_R$ we upper bound the integral by (5) times the ball's volume (which grows like $R^{d'}$):
$$\int_{B_R} \mathbb{E}\Big(\big[\hat{\mu}_{f(X)}^h - \mu_{f(X)}^h\big](z)\Big)^2 dz \le C R^{d'} n^{-2\nu}. \qquad (6)$$
On $\mathcal{Z} \setminus B_R$, we upper bound the integral by a value that decreases with $R$, of the form
$$\int_{\mathcal{Z} \setminus B_R} \mathbb{E}\Big(\big[\hat{\mu}_{f(X)}^h - \mu_{f(X)}^h\big](z)\Big)^2 dz \le C n^{1 - 2c} (R - C')^{s_2 - 2} e^{-2(R - C')} \qquad (7)$$
with $C' > 0$ being a constant smaller than $R$. In essence, this upper bound decreases with $R$ because $[\hat{\mu}_{f(X)}^h - \mu_{f(X)}^h](z)$ decays at the same speed as $h$ as $\|z\|$ grows indefinitely. We are now left with two bounds, (6) and (7), which respectively increase and decrease with growing $R$. We complete the proof by balancing these two terms, which results in setting $R \approx (\log n)^{1/2}$.

3 Functions of Multiple Arguments

The previous section applies to functions $f$ of a single variable $X$. However, we can apply its results to functions of multiple variables if we take the argument $X$ to be a tuple containing multiple values. In this section we discuss how to do this using two input variables from spaces $\mathcal{X}$ and $\mathcal{Y}$, but the results also apply to more inputs. To be precise, our input space changes from $\mathcal{X}$ to $\mathcal{X} \times \mathcal{Y}$, the input random variable from $X$ to $(X, Y)$, and the kernel on the input space from $k_x$ to $k_{xy}$. To apply our results from Section 2, all we need is a consistent estimator $\hat{\mu}_{(X,Y)}$ of the joint embedding $\mu_{(X,Y)}$. There are different ways to get such an estimator. One way is to sample $(x'_i, y'_i)$ i.i.d. from the joint distribution of $(X, Y)$ and construct the usual empirical estimator, or approximate it using reduced set methods. Alternatively, we may want to construct $\hat{\mu}_{(X,Y)}$ based only on consistent estimators of $\mu_X$ and $\mu_Y$. For example, this is how $\hat{\mu}_3$ was defined in Section 1.3. Below we show that this can indeed be done if $X$ and $Y$ are independent.
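Numerically, the tensor-product construction from marginal samples can be checked as follows (our own sketch; it exploits that under a product kernel all Gram entries factorise, so the $n^2$-point estimator never has to be materialised):

```python
# Sketch (our own illustration): with a product kernel, the tensor-product
# estimator mu_hat_X (x) mu_hat_Y built from marginal samples stays close to
# the empirical joint embedding built from fresh i.i.d. pairs. All inner
# products factorise, so no n^2-sized Gram matrix is ever formed.
import numpy as np

def k(a, b, s2=1.0):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * s2))

rng = np.random.default_rng(4)
n = 400
x, y = rng.normal(0, 1, n), rng.normal(2, 1, n)      # marginal samples of X, Y
xj, yj = rng.normal(0, 1, n), rng.normal(2, 1, n)    # fresh paired i.i.d. sample

# <mu_hat_X (x) mu_hat_Y, mu_hat_X (x) mu_hat_Y> is a product of marginal terms
A = k(x, x).mean() * k(y, y).mean()
# cross term with the paired empirical joint estimator
C = (k(x, xj).mean(axis=0) * k(y, yj).mean(axis=0)).mean()
# squared norm of the paired estimator under the product kernel
B = (k(xj, xj) * k(yj, yj)).mean()

d2 = A - 2.0 * C + B   # ||mu_hat_X (x) mu_hat_Y - mu_hat_(X,Y)||^2
print(d2)              # small: both estimate the same joint embedding
```

This requires the independence of $X$ and $Y$: only then does the joint embedding factorise as $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$.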
3.1 Application to Section 1.3

Following Schölkopf et al. (2015), we consider two independent random variables $X \sim P_x$ and $Y \sim P_y$. Their joint distribution is $P_x \otimes P_y$. Consistent estimators of their embeddings are given by $\hat{\mu}_X = \sum_{i=1}^n w_i k_x(x_i, \cdot)$ and $\hat{\mu}_Y = \sum_{j=1}^n u_j k_y(y_j, \cdot)$. In this section we show that $\hat{\mu}_{f(X,Y)} = \sum_{i,j=1}^n w_i u_j\, k_z\big(f(x_i, y_j), \cdot\big)$ is a consistent estimator of $\mu_{f(X,Y)}$. We choose a product kernel $k_{xy}\big((x_1, y_1), (x_2, y_2)\big) = k_x(x_1, x_2)\, k_y(y_1, y_2)$, so the corresponding RKHS is a tensor product $\mathcal{H}_{k_{xy}} = \mathcal{H}_{k_x} \otimes \mathcal{H}_{k_y}$ (Steinwart and Christmann, 2008, Lemma 4.6) and the mean embedding of the product random variable $(X, Y)$ is the tensor product of the marginal mean embeddings, $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$. With consistent estimators for the marginal embeddings we can estimate the joint embedding using their tensor product:
$$\hat{\mu}_{(X,Y)} = \hat{\mu}_X \otimes \hat{\mu}_Y = \sum_{i,j=1}^n w_i u_j\, k_x(x_i, \cdot) \otimes k_y(y_j, \cdot) = \sum_{i,j=1}^n w_i u_j\, k_{xy}\big((x_i, y_j), (\cdot\,, \cdot)\big).$$
If the points are i.i.d. and $w_i = u_i = 1/n$, this reduces to the U-statistic estimator $\hat{\mu}_2$ from Section 1.3.

Lemma 3. Let $(s_n)_n$ be any positive real sequence converging to zero. Suppose $k_{xy} = k_x k_y$ is a product kernel, $\mu_{(X,Y)} = \mu_X \otimes \mu_Y$, and $\hat{\mu}_{(X,Y)} = \hat{\mu}_X \otimes \hat{\mu}_Y$. Then
$$\begin{cases} \|\hat{\mu}_X - \mu_X\|_{k_x} = O(s_n) \\ \|\hat{\mu}_Y - \mu_Y\|_{k_y} = O(s_n) \end{cases} \quad\text{implies}\quad \big\|\hat{\mu}_{(X,Y)} - \mu_{(X,Y)}\big\|_{k_{xy}} = O(s_n)\,.$$

Proof. For a detailed expansion of the first inequality see Appendix B.
$$\big\|\hat{\mu}_{(X,Y)} - \mu_{(X,Y)}\big\|_{k_{xy}} \le \|\mu_X\|_{k_x} \|\hat{\mu}_Y - \mu_Y\|_{k_y} + \|\mu_Y\|_{k_y} \|\hat{\mu}_X - \mu_X\|_{k_x} + \|\hat{\mu}_X - \mu_X\|_{k_x} \|\hat{\mu}_Y - \mu_Y\|_{k_y} = O(s_n) + O(s_n) + O(s_n^2) = O(s_n).$$

Corollary 4. If $\hat{\mu}_X \to \mu_X$ and $\hat{\mu}_Y \to \mu_Y$ as $n \to \infty$, then $\hat{\mu}_{(X,Y)} \to \mu_{(X,Y)}$.

Together with the results from Section 2 this lets us reason about estimators resulting from applying functions to multiple independent random variables. Write
$$\hat{\mu}_{XY}^{k_{xy}} = \sum_{i,j=1}^n w_i u_j\, k_{xy}\big((x_i, y_j), \cdot\big) = \sum_{\ell=1}^{n^2} \omega_\ell\, k_{xy}(\xi_\ell, \cdot),$$
where $\ell$ enumerates the $(i, j)$ pairs, $\xi_\ell = (x_i, y_j)$ and $\omega_\ell = w_i u_j$. Now if $\hat{\mu}_X^{k_x} \to \mu_X^{k_x}$ and $\hat{\mu}_Y^{k_y} \to \mu_Y^{k_y}$, then $\hat{\mu}_{XY}^{k_{xy}} \to$
$\mu_{(X,Y)}^{k_{xy}}$ (according to Corollary 4), and Theorem 1 shows that $\sum_{i,j=1}^n w_i u_j\, k_z\big(f(x_i, y_j), \cdot\big)$ is consistent as well. Unfortunately, we cannot apply Theorem 2 to get the speed of convergence, because a product of Matérn kernels is no longer a Matérn kernel. One downside of this overall approach is that the number of expansion points used for the estimation of the joint embedding increases exponentially with the number of arguments of $f$. This can lead to prohibitively large computational costs, especially if the result of such an operation is used as an input to another function of multiple arguments. To alleviate this problem, we may use reduced expansion set methods before or after applying $f$, as we did for example in Section 1.2. To conclude this section, let us summarize the implications of our results for two practical scenarios that should be distinguished.

- If we have separate samples from two random variables $X$ and $Y$, then our results justify how to provide an estimate of the mean embedding of $f(X, Y)$, provided that $X$ and $Y$ are independent. The samples themselves need not be i.i.d.; we can also work with weighted samples computed, for instance, by a reduced set method.

- How about dependent random variables? For instance, imagine that $Y = -X$ and $f(X, Y) = X + Y$. Clearly, in this case the distribution of $f(X, Y)$ is a delta measure on $0$, and there is no way to predict this from separate samples of $X$ and $Y$. However, it should be stressed that our results (consistency and the finite sample bound) apply even to the case where $X$ and $Y$ are dependent. In that case, however, they require a consistent estimator of the joint embedding $\mu_{(X,Y)}$.

- It is also sufficient to have a reduced set expansion of the embedding of the joint distribution. This setting may sound strange, but it potentially has significant applications. Imagine that one has a large database of user data, sampled from a joint distribution.
If we expand the joint’s embedding in terms of synthetic expansion points using a reduced set construction method, then we can pass on these (weighted) synthetic expansion points to a third party without revealing the original data. Using our results, the third party can nevertheless perform arbitrary continuous functional operations on the joint distribution in a consistent manner. 4 Conclusion and future work This paper provides a theoretical foundation for using kernel mean embeddings as approximate representations of random variables in scenarios where we need to apply functions to those random variables. We show that for continuous functions f (including all functions on discrete domains), consistency of the mean embedding estimator of a random variable X implies consistency of the mean embedding estimator of f(X). Furthermore, if the kernels are Matérn and the function f is sufficiently smooth, we provide bounds on the convergence rate. Importantly, our results apply beyond i.i.d. samples and cover estimators based on expansions with interdependent points and weights. One interesting future direction is to improve the finite-sample bounds and extend them to general radial and/or translation-invariant kernels. Our work is motivated by the field of probabilistic programming. Using our theoretical results, kernel mean embeddings can be used to generalize functional operations (which lie at the core of all programming languages) to distributions over data types in a principled manner, by applying the operations to the points or approximate kernel expansions. This is in principle feasible for any data type provided a suitable kernel function can be defined on it. We believe that the approach holds significant potential for future probabilistic programming systems. 
Acknowledgements

We thank Krikamol Muandet for providing the code used to generate Figure 1, Paul Rubenstein, Motonobu Kanagawa and Bharath Sriperumbudur for very useful discussions, and our anonymous reviewers for their valuable feedback. Carl-Johann Simon-Gabriel is supported by a Google European Fellowship in Causal Inference.

References

R. A. Adams and J. J. F. Fournier. Sobolev Spaces. Academic Press, 2003.
C. Bennett and R. Sharpley. Interpolation of Operators. Pure and Applied Mathematics. Elsevier Science, 1988.
A. Berlinet and C. Thomas-Agnan. RKHS in Probability and Statistics. Springer, 2004.
Y. Chen, M. Welling, and A. Smola. Super-samples from kernel herding. In UAI, 2010.
K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. Journal of Machine Learning Research, 14:3753–3783, 2013.
I. S. Gradshteyn and I. M. Ryzhik. Table of Integrals, Series, and Products. Elsevier/Academic Press, Amsterdam, 2007. Edited by Alan Jeffrey and Daniel Zwillinger.
M. Kalos and P. Whitlock. Monte Carlo Methods. Wiley, 2008.
M. Kanagawa, B. K. Sriperumbudur, and K. Fukumizu. Convergence guarantees for kernel-based quadrature rules in misspecified settings. arXiv:1605.07254 [stat], 2016.
Y. Katznelson. An Introduction to Harmonic Analysis. Cambridge University Press, 2004.
M. Korzeń and S. Jaroszewicz. PaCAL: A Python package for arithmetic computations with random variables. Journal of Statistical Software, 57(10), 2014.
S. Lacoste-Julien, F. Lindsten, and F. Bach. Sequential kernel herding: Frank-Wolfe optimization for particle filtering. In Artificial Intelligence and Statistics, volume 38, pages 544–552, 2015.
A. Mathai. A review of the different techniques used for deriving the exact distributions of multivariate test criteria. Sankhyā: The Indian Journal of Statistics, Series A, pages 39–60, 1973.
K. McKinley. Programming the world of uncertain things (keynote).
In ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 1–2, 2016.
D. Milios. Probability Distributions as Program Variables. PhD thesis, University of Edinburgh, 2009.
S. Poisson. Recherches sur la probabilité des jugements en matière criminelle et en matière civile, précédées des règles générales du calcul des probabilités. 1837.
B. Schölkopf and A. J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
B. Schölkopf, K. Muandet, K. Fukumizu, S. Harmeling, and J. Peters. Computing functions of random variables via reproducing kernel Hilbert space representations. Statistics and Computing, 25(4):755–766, 2015.
C. Scovel, D. Hush, I. Steinwart, and J. Theiler. Radial kernels and their reproducing kernel Hilbert spaces. Journal of Complexity, 26, 2014.
C.-J. Simon-Gabriel and B. Schölkopf. Kernel distribution embeddings: Universal kernels, characteristic kernels and kernel metrics on distributions. Technical report, Max Planck Institute for Intelligent Systems, 2016.
A. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In ALT, 2007.
L. Song, J. Huang, A. Smola, and K. Fukumizu. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In International Conference on Machine Learning, pages 1–8, 2009.
M. D. Springer. The Algebra of Random Variables. Wiley, 1979.
B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. Lanckriet. Hilbert space embeddings and metrics on probability measures. Journal of Machine Learning Research, 11:1517–1561, 2010.
B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 12:2389–2410, 2011.
I. Steinwart and A. Christmann. Support Vector Machines. Information Science and Statistics. Springer, 2008.
I. Steinwart and C. Scovel.
Mercer's theorem on general domains: On the interaction between measures, kernels, and RKHSs. Constructive Approximation, 35(3):363–417, 2012.
I. Tolstikhin, B. Sriperumbudur, and K. Muandet. Minimax estimation of kernel mean embeddings. arXiv:1602.04361 [math, stat], 2016.
H. Wendland. Scattered Data Approximation. Cambridge University Press, 2004.
R. Williamson. Probabilistic Arithmetic. PhD thesis, University of Queensland, 1989.
Hierarchical Clustering via Spreading Metrics

Aurko Roy (1) and Sebastian Pokutta (2)
(1) College of Computing, Georgia Institute of Technology, Atlanta, GA, USA. Email: aurko@gatech.edu
(2) ISyE, Georgia Institute of Technology, Atlanta, GA, USA. Email: sebastian.pokutta@isye.gatech.edu

Abstract

We study the cost function for hierarchical clusterings introduced by [16] where hierarchies are treated as first-class objects rather than deriving their cost from projections into flat clusters. It was also shown in [16] that a top-down algorithm returns a hierarchical clustering of cost at most $O(\alpha_n \log n)$ times the cost of the optimal hierarchical clustering, where $\alpha_n$ is the approximation ratio of the Sparsest Cut subroutine used. Thus, using the best known approximation algorithm for Sparsest Cut due to Arora-Rao-Vazirani, the top-down algorithm returns a hierarchical clustering of cost at most $O(\log^{3/2} n)$ times the cost of the optimal solution. We improve this by giving an $O(\log n)$-approximation algorithm for this problem. Our main technical ingredients are a combinatorial characterization of ultrametrics induced by this cost function, deriving an Integer Linear Programming (ILP) formulation for this family of ultrametrics, and showing how to iteratively round an LP relaxation of this formulation by using the idea of sphere growing, which has been extensively used in the context of graph partitioning. We also prove that our algorithm returns an $O(\log n)$-approximate hierarchical clustering for a generalization of this cost function also studied in [16]. We also give constant factor inapproximability results for this problem.

1 Introduction

Hierarchical clustering is an important method in cluster analysis where a data set is recursively partitioned into clusters of successively smaller size.
They are typically represented by rooted trees where the root corresponds to the entire data set, the leaves correspond to individual data points, and the intermediate nodes correspond to clusters of their descendant leaves. Such a hierarchy represents several possible flat clusterings of the data at various levels of granularity; indeed, every pruning of this tree returns a possible clustering. Therefore, in situations where the number of desired clusters is not known beforehand, a hierarchical clustering scheme is often preferred to flat clustering. The most popular algorithms for hierarchical clustering are bottom-up agglomerative algorithms like single linkage, average linkage and complete linkage. In terms of theoretical guarantees, these algorithms are known to correctly recover a ground truth clustering if the similarity function on the data satisfies corresponding stability properties (see, e.g., [5]). Often, however, one wishes to think of a good clustering as optimizing some kind of cost function rather than recovering a hidden "ground truth". This is the standard approach in the classical clustering setting, where popular objectives are k-means, k-median, min-sum and k-center (see Chapter 14, [23]). However, as pointed out by [16], for many popular hierarchical clustering algorithms, including linkage based algorithms, it is hard to pinpoint explicitly the cost function that these algorithms are optimizing. Moreover, many of the existing cost function based approaches towards hierarchical clustering evaluate a hierarchy based on a cost function for flat clustering, e.g., assigning the k-means or k-median cost to a pruning of this tree. Motivated by this, [16] introduced a cost function for hierarchical clustering where the cost takes into account the entire structure of the tree rather than just the projections into flat clusterings.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
This cost function is shown to recover the intuitively correct hierarchies on several synthetic examples like planted partitions and cliques. In addition, a top-down graph partitioning algorithm is presented that outputs a tree with cost at most $O(\alpha_n \log n)$ times the cost of the optimal tree, where $\alpha_n$ is the approximation guarantee of the Sparsest Cut subroutine used. Thus, using the Leighton-Rao algorithm [33] or the Arora-Rao-Vazirani algorithm [3] gives an approximation factor of $O(\log^2 n)$ and $O(\log^{3/2} n)$ respectively. In this work we give a polynomial time algorithm to recover a hierarchical clustering of cost at most $O(\log n)$ times the cost of the optimal clustering according to this cost function. We also analyze a generalization of this cost function studied by [16] and show that our algorithm still returns an $O(\log n)$-approximate clustering in this setting. We do this by giving a combinatorial characterization of the ultrametrics induced by this cost function, writing a convex relaxation for it, and showing how to iteratively round a fractional solution into an integral one using a rounding scheme used in graph partitioning algorithms. We also implement the integer program, its LP relaxation, and the rounding algorithm, and test them on some synthetic and real world data sets to compare the cost of the rounded solutions to the true optimum, as well as to compare their performance to other hierarchical clustering algorithms used in practice. Our experiments suggest that the hierarchies found by this algorithm are often better than the ones found by linkage based algorithms as well as the k-means algorithm, in terms of the error of the best pruning of the tree compared to the ground truth. We conclude with constant factor hardness results for this problem.

1.1 Related Work

The immediate precursor to this work is [16], where the cost function for evaluating a hierarchical clustering was introduced.
Prior to this, there has been a long line of research on hierarchical clustering in the context of phylogenetics and taxonomy (see, e.g., [22]). Several authors have also given theoretical justifications for the success of the popular linkage based algorithms for hierarchical clustering (see, e.g., [1]). In terms of cost functions, one approach has been to evaluate a hierarchy in terms of the k-means or k-median cost that it induces (see [17]). The cost function and the top-down algorithm in [16] can also be seen as a theoretical justification for several graph partitioning heuristics that are used in practice. LP relaxations for hierarchical clustering have also been studied in [2], where the objective is to fit a tree metric to a data set given pairwise dissimilarities. Another work that is indirectly related to our approach is [18], where an ILP was studied in the context of obtaining the closest ultrametric to arbitrary functions on a discrete set. Our approach is to give a combinatorial characterization of the ultrametrics induced by the cost function of [16], which allows us to use the tools from [18] to model the problem as an ILP. The natural LP relaxation of this ILP turns out to be closely related to LP relaxations considered before for several graph partitioning problems (see, e.g., [33, 19, 32]), and we use a rounding technique studied in this context to round this LP relaxation. Recently, we became aware of independent work by Charikar and Chatziafratis [12] obtaining similar results for hierarchical clustering. In particular, they improve the approximation factor to $O(\sqrt{\log n})$ by showing how to round a spreading metric SDP relaxation for this cost function. They also analyze a similar LP relaxation using the divide-and-conquer approximation algorithms using spreading metrics paradigm of [20], together with a result of [7], to prove an $O(\log n)$ approximation. Finally, they also give similar inapproximability results for this problem.
2 Preliminaries

A similarity based clustering problem consists of a dataset V of n points and a similarity function κ : V × V → R such that κ(i, j) is a measure of the similarity between i and j for any i, j ∈ V. We will assume that the similarity function is symmetric, i.e., κ(i, j) = κ(j, i) for every i, j ∈ V. We also require κ ≥ 0 as in [16]; see the supplementary material for a discussion. Note that we do not make any assumptions about the points in V coming from an underlying metric space. For a given instance of a clustering problem we have an associated weighted complete graph K_n with vertex set V and weight function given by κ. A hierarchical clustering of V is a tree T with a designated root r and with the elements of V as its leaves, i.e., leaves(T) = V. For any set S ⊆ V we denote the lowest common ancestor of S in T by lca(S). For pairs of points i, j ∈ V we will abuse notation for the sake of simplicity and denote lca({i, j}) simply by lca(i, j). For a node v of T we denote the subtree of T rooted at v by T[v]. The following cost function was introduced by [16] to measure the quality of the hierarchical clustering T:

cost(T) := Σ_{{i,j}∈E(K_n)} κ(i, j) · |leaves(T[lca(i, j)])|.   (1)

The intuition behind this cost function is as follows. Let T be a hierarchical clustering with designated root r, so that r represents the whole data set V. Since leaves(T) = V, every internal node v ∈ T represents a cluster of its descendant leaves, with the leaves themselves representing singleton clusters of V. Starting from r and going down the tree, every distinct pair of points i, j ∈ V will eventually be separated at the leaves. If κ(i, j) is large, i.e., i and j are very similar to each other, then we would like them to be separated as far down the tree as possible if T is a good clustering of V. This is enforced in the cost function (1): if κ(i, j) is large then the number of leaves of lca(i, j) should be small, i.e., lca(i, j) should be far from the root r of T.
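The cost (1) is simple to compute directly from a tree. As a minimal illustration (our own sketch, not code from the paper: trees are encoded as nested tuples with the points as leaves), the key observation is that the lca of a pair {i, j} is exactly the internal node at which i and j are first split between two different children:

```python
from itertools import combinations

def leaves(tree):
    """Return the set of leaf labels of a tree encoded as nested tuples."""
    if not isinstance(tree, tuple):
        return {tree}
    out = set()
    for child in tree:
        out |= leaves(child)
    return out

def cost(tree, kappa):
    """cost(T) = sum over pairs {i,j} of kappa(i,j) * |leaves(T[lca(i,j)])|.

    A pair is charged at the internal node where it is first split between
    two children; the charge is the number of leaves under that node.
    """
    if not isinstance(tree, tuple):
        return 0
    child_leaves = [leaves(c) for c in tree]
    n_here = sum(len(s) for s in child_leaves)
    total = 0
    for a, b in combinations(range(len(tree)), 2):  # pairs split at this node
        for i in child_leaves[a]:
            for j in child_leaves[b]:
                total += kappa(i, j) * n_here
    for c in tree:                                   # pairs split further down
        total += cost(c, kappa)
    return total
```

For instance, with κ(0, 1) = κ(2, 3) = 10 and all other similarities 1, the tree ((0, 1), (2, 3)) that keeps similar points together has cost 56, while ((0, 2), (1, 3)) has cost 92, matching the intuition that similar pairs should be separated far down the tree.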
Under the cost function (1), one can interpret the tree T as inducing an ultrametric d_T on V given by d_T(i, j) := |leaves(T[lca(i, j)])| − 1. This is an ultrametric since d_T(i, j) = 0 iff i = j, and for any triple i, j, k ∈ V we have d_T(i, j) ≤ max{d_T(i, k), d_T(j, k)}. The following definition introduces the notion of non-trivial ultrametrics. These turn out to be precisely the ultrametrics that are induced by tree decompositions of V corresponding to cost function (1), as we will show in Lemma 5.

Definition 1. An ultrametric d on a set of points V is non-trivial if the following conditions hold.
1. For every non-empty set S ⊆ V, there is a pair of points i, j ∈ S such that d(i, j) ≥ |S| − 1.
2. For any t, if S_t is an equivalence class of V under the relation i ∼ j iff d(i, j) ≤ t, then max_{i,j∈S_t} d(i, j) ≤ |S_t| − 1.

Note that for an equivalence class S_t where d(i, j) ≤ t for every i, j ∈ S_t, it follows from Condition 1 that t ≥ |S_t| − 1. Thus in the case t = |S_t| − 1, the two conditions imply that the maximum distance between any two points in S_t is t and that there is a pair i, j ∈ S_t for which this maximum is attained. The following lemma shows that non-trivial ultrametrics behave well under restrictions to equivalence classes S_t of the form i ∼ j iff d(i, j) ≤ t. Due to page limitations, full proofs are included in the supplementary material.

Lemma 2. Let d be a non-trivial ultrametric on V and let S_t ⊆ V be an equivalence class under the relation i ∼ j iff d(i, j) ≤ t. Then d restricted to S_t is a non-trivial ultrametric on S_t.

The intuition behind the two conditions in Definition 1 is as follows. Condition 1 imposes a certain lower bound by ruling out trivial ultrametrics where, e.g., d(i, j) = 1 for every distinct pair i, j ∈ V. On the other hand, Condition 2 discretizes and imposes an upper bound on d by restricting its range to the set {0, 1, . . . , n − 1} (see Lemma 3).
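For small point sets, both conditions of Definition 1 can be checked by brute force. The following sketch (our own illustration; `d` is assumed to be a genuine ultrametric, so that i ∼ j iff d(i, j) ≤ t is transitive and a single greedy pass recovers the equivalence classes) verifies non-triviality:

```python
from itertools import combinations

def equivalence_classes(V, rel):
    """Group V into classes of a transitive, symmetric relation rel(i, j)."""
    classes = []
    for v in V:
        for c in classes:
            if rel(v, next(iter(c))):
                c.add(v)
                break
        else:
            classes.append({v})
    return classes

def is_nontrivial(d, V):
    """Brute-force check of Definition 1 (exponential; small V only)."""
    V = list(V)
    n = len(V)
    # Condition 1: every set S contains a pair at distance >= |S| - 1
    for size in range(2, n + 1):
        for S in combinations(V, size):
            if max(d(i, j) for i, j in combinations(S, 2)) < size - 1:
                return False
    # Condition 2: every class of (i ~ j iff d(i,j) <= t) has diameter <= |class| - 1
    for t in range(n):
        for c in equivalence_classes(V, lambda i, j: d(i, j) <= t):
            if len(c) >= 2 and max(d(i, j) for i, j in combinations(c, 2)) > len(c) - 1:
                return False
    return True
```

On four points, the tree-induced metric with d = 1 inside {0, 1} and {2, 3} and d = 3 across passes both conditions, while the "flat" ultrametric d ≡ 1 violates Condition 1 (any triple would need a pair at distance ≥ 2).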
This rules out the other extreme of triviality, where for example d(i, j) = n for every distinct pair i, j ∈ V with |V| = n.

Lemma 3. Let d be a non-trivial ultrametric on the set V with |V| = n. Then the range of d is contained in the set {0, 1, . . . , n − 1}.

3 Ultrametrics and Hierarchical Clusterings

In this section we study the combinatorial properties of the ultrametrics induced by cost function (1). We start with the following easy lemma showing that if a subset S ⊆ V has r as its lowest common ancestor, then there must be a pair of points i, j ∈ S for which r = lca(i, j).

Lemma 4. Let S ⊆ V be of size ≥ 2. If r = lca(S) then there is a pair i, j ∈ S such that lca(i, j) = r.

The following lemma shows that non-trivial ultrametrics exactly capture the ultrametrics that are induced by tree decompositions of V using cost function (1). The proof of Lemma 5 is inductive and uses Lemma 4 as a base case. As it turns out, the inductive proof also gives an algorithm to build the corresponding hierarchical clustering from such a non-trivial ultrametric in polynomial time. Since this algorithm is relatively straightforward, we refer the reader to the supplementary material for the details.

Lemma 5. Let T be a hierarchical clustering on V and let d_T be the ultrametric on V induced by cost function (1). Then d_T is a non-trivial ultrametric on V. Conversely, let d be a non-trivial ultrametric on V. Then there is a hierarchical clustering T on V such that for any pair i, j ∈ V we have d_T(i, j) = |leaves(T[lca(i, j)])| − 1 = d(i, j). Moreover this hierarchy can be constructed in time O(n^3), where |V| = n.

Therefore to find the hierarchical clustering of minimum cost, it suffices to minimize ⟨κ, d⟩ over non-trivial ultrametrics d : V × V → {0, . . . , n − 1}. A natural approach is to formulate this problem as an Integer Linear Program (ILP) and then study Linear Programming (LP) relaxations of it. We consider the following ILP for this problem, motivated by [18].
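The constructive direction of Lemma 5 is short enough to sketch: the children of the root are the equivalence classes of i ∼ j iff d(i, j) ≤ diam − 1, and by Lemma 2 the restriction of d to each class is again non-trivial, so the construction recurses. The code below is our own hypothetical rendering (nested tuples as trees; `d` is assumed to be a non-trivial ultrametric, so comparing against one representative per class suffices):

```python
def build_tree(S, d):
    """Build a hierarchical clustering realizing a non-trivial ultrametric d
    (a sketch of the constructive direction of Lemma 5)."""
    S = list(S)
    if len(S) == 1:
        return S[0]
    diam = max(d(i, j) for i in S for j in S)
    # children = equivalence classes of i ~ j iff d(i, j) <= diam - 1
    classes = []
    for v in S:
        for c in classes:
            if d(v, c[0]) <= diam - 1:
                c.append(v)
                break
        else:
            classes.append([v])
    return tuple(build_tree(c, d) for c in classes)
```

For the four-point ultrametric with d = 1 inside {0, 1} and {2, 3} and d = 3 across, this recovers the tree ((0, 1), (2, 3)), and indeed |leaves(T[lca(i, j)])| − 1 reproduces d.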
We have variables x^1_{ij}, . . . , x^{n−1}_{ij} for every distinct pair i, j ∈ V, with x^t_{ij} = 1 if and only if d(i, j) ≥ t. For any positive integer n, let [n] := {1, 2, . . . , n}.

min Σ_{t=1}^{n−1} Σ_{{i,j}∈E(K_n)} κ(i, j) x^t_{ij}   (ILP-ultrametric)
s.t.  x^t_{ij} ≥ x^{t+1}_{ij}   ∀ i, j ∈ V, t ∈ [n−2]   (2)
      x^t_{ij} + x^t_{jk} ≥ x^t_{ik}   ∀ i, j, k ∈ V, t ∈ [n−1]   (3)
      Σ_{i,j∈S} x^t_{ij} ≥ 2   ∀ t ∈ [n−1], S ⊆ V, |S| = t + 1   (4)
      Σ_{i,j∈S} x^{|S|}_{ij} ≤ |S|^2 ( Σ_{i,j∈S} x^t_{ij} + Σ_{i∈S, j∉S} (1 − x^t_{ij}) )   ∀ t ∈ [n−1], S ⊆ V   (5)
      x^t_{ij} = x^t_{ji}, x^t_{ii} = 0   ∀ i, j ∈ V, t ∈ [n−1]   (6)
      x^t_{ij} ∈ {0, 1}   ∀ i, j ∈ V, t ∈ [n−1]   (7)

Note that constraint (3) is the same as the strong triangle inequality, since the variables x^t_{ij} are in {0, 1}. Constraint (6) ensures that the ultrametric is symmetric. Constraint (4) ensures that the ultrametric satisfies Condition 1 of non-triviality: for every S ⊆ V of size t + 1 we know that there must be points i, j ∈ S such that d(i, j) = d(j, i) ≥ t, or in other words x^t_{ij} = x^t_{ji} = 1. Constraint (5) ensures that the ultrametric satisfies Condition 2 of non-triviality. To see this, note that the constraint is active only when Σ_{i,j∈S} x^t_{ij} = 0 and Σ_{i∈S, j∉S} (1 − x^t_{ij}) = 0. In other words, d(i, j) ≤ t − 1 for every i, j ∈ S, and S is a maximal such set, since if i ∈ S and j ∉ S then d(i, j) ≥ t. Thus S is an equivalence class under the relation i ∼ j iff d(i, j) ≤ t − 1, and so for every i, j ∈ S we have d(i, j) ≤ |S| − 1, or equivalently x^{|S|}_{ij} = 0. The ultrametric d represented by a feasible solution x^t_{ij} is given by d(i, j) = Σ_{t=1}^{n−1} x^t_{ij}.

Definition 6. For any {x^t_{ij} | t ∈ [n−1], i, j ∈ V}, let E_t be defined as E_t := {{i, j} | x^t_{ij} = 0}.

Note that if x^t_{ij} is feasible for ILP-ultrametric then E_t ⊆ E_{t+1} for any t, since x^t_{ij} ≥ x^{t+1}_{ij}. The sets {E_t}_{t=1}^{n−1} induce a natural sequence of graphs {G_t}_{t=1}^{n−1}, where G_t = (V, E_t) with V being the data set. For a fixed t ∈ {1, . . . , n − 1} it is instructive to study the combinatorial properties of the so-called layer-t problem, where we fix a choice of t and restrict ourselves to the constraints corresponding to that particular t. In particular, we drop the inter-layer constraint (2), and constraints (3), (4) and (5) only range over i, j, k ∈ V and S ⊆ V with t fixed. The following lemma provides a combinatorial characterization of feasible solutions to the layer-t problem.

Lemma 7. Fix a choice of t ∈ [n−1]. Let G_t = (V, E_t) be the graph as in Definition 6 corresponding to a solution x^t_{ij} to the layer-t problem. Then G_t is a disjoint union of cliques of size ≤ t. Moreover this exactly characterizes all feasible solutions to the layer-t ILP.

By Lemma 7, the layer-t problem is to find a set of edges of minimum weight under κ whose removal from K_n leaves a graph G_t = (V, E_t) that is a disjoint union of cliques of size ≤ t. Our algorithmic approach is to solve an LP relaxation of ILP-ultrametric and then round the solution to get a feasible solution to ILP-ultrametric. The rounding however proceeds iteratively in a layer-wise manner, and so we need to make sure that the rounded solution satisfies the inter-layer constraints (2) and (5). The following lemma gives a combinatorial characterization of solutions that satisfy these two constraints.

Lemma 8. For every t ∈ [n − 1], let x^t_{ij} be feasible for the layer-t problem. Let G_t = (V, E_t) be the graph as in Definition 6 corresponding to x^t_{ij}, so that by Lemma 7, G_t is a disjoint union of cliques K^t_1, . . . , K^t_{l_t}, each of size at most t. Then x^t_{ij} is feasible for ILP-ultrametric if and only if the following conditions hold.
Nested cliques: For any s ≤ t, every clique K^s_p in G_s, for p ∈ [l_s], is a subclique of some clique K^t_q in G_t, where q ∈ [l_t].
Realization: If |K^t_p| = s for some s ≤ t, then G_s contains K^t_p as a component clique, i.e., K^s_q = K^t_p for some q ∈ [l_s].
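Lemma 7 reduces the layer-t problem to a purely combinatorial question: partition V into blocks of size at most t while minimizing the κ-weight of the split pairs. On tiny instances this can be solved by exhaustive enumeration; the sketch below (our own exponential-time illustration, not the algorithm of the paper) does exactly that:

```python
from itertools import combinations

def partitions(items):
    """Yield all set partitions of a list (Bell-number many; small inputs only)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def layer_t_opt(V, kappa, t):
    """Minimum layer-t cost: partition V into cliques of size <= t, paying
    kappa(i, j) for every pair split across blocks (cf. Lemma 7)."""
    V = list(V)
    best = None
    for part in partitions(V):
        if any(len(block) > t for block in part):
            continue
        block_of = {v: b for b, block in enumerate(part) for v in block}
        c = sum(kappa(i, j) for i, j in combinations(V, 2)
                if block_of[i] != block_of[j])
        if best is None or c < best:
            best = c
    return best
```

With four points, κ = 2 inside the pairs {0, 1} and {2, 3} and κ = 1 across, the optimum for t = 2 keeps the heavy pairs together (cost 4), t = 1 forces all singletons (cost 8), and t = 4 allows a single clique (cost 0).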
The combinatorial interpretation of the individual layer-t problems allows us to simplify the formulation of ILP-ultrametric by replacing the constraints for sets of a specific size (constraint (4)) by a global constraint about all sets.

Lemma 9. We may replace constraint (4) of ILP-ultrametric by the following equivalent constraint: Σ_{j∈S} x^t_{ij} ≥ |S| − t, for every t ∈ [n − 1], S ⊆ V and i ∈ S.

4 Rounding an LP relaxation

In this section we consider the following natural LP relaxation for ILP-ultrametric. We keep the variables x^t_{ij} for every t ∈ [n − 1] and i, j ∈ V but relax the integrality constraint on the variables.

min Σ_{t=1}^{n−1} Σ_{{i,j}∈E(K_n)} κ(i, j) x^t_{ij}   (LP-ultrametric)
s.t.  x^t_{ij} ≥ x^{t+1}_{ij}   ∀ i, j ∈ V, t ∈ [n−2]   (8)
      x^t_{ij} + x^t_{jk} ≥ x^t_{ik}   ∀ i, j, k ∈ V, t ∈ [n−1]   (9)
      Σ_{j∈S} x^t_{ij} ≥ |S| − t   ∀ t ∈ [n−1], S ⊆ V, i ∈ S   (10)
      x^t_{ij} = x^t_{ji}, x^t_{ii} = 0   ∀ i, j ∈ V, t ∈ [n−1]   (11)
      0 ≤ x^t_{ij} ≤ 1   ∀ i, j ∈ V, t ∈ [n−1]   (12)

Note that the LP relaxation LP-ultrametric differs from ILP-ultrametric in not having constraint (5). A feasible solution x^t_{ij} to LP-ultrametric induces a sequence {d_t}_{t∈[n−1]} of distance metrics over V defined as d_t(i, j) := x^t_{ij}. Constraint (10) enforces an additional restriction on this metric: informally, points in a "large enough" subset S should be spread apart according to the metric d_t. Metrics of type d_t are called spreading metrics and were first studied by [19, 20] in relation to graph partitioning problems. The following lemma gives a technical interpretation of spreading metrics (see, e.g., [19, 20]).

Lemma 10. Let x^t_{ij} be feasible for LP-ultrametric and, for a fixed t ∈ [n − 1], let d_t be the induced spreading metric. Let i ∈ V be an arbitrary vertex and let S ⊆ V be a set containing i such that |S| > (1 + ε)t for some ε > 0. Then max_{j∈S} d_t(i, j) > ε/(1 + ε).

The following lemma states that we can optimize over LP-ultrametric in polynomial time.

Lemma 11. An optimal solution to LP-ultrametric can be computed in time polynomial in n and log(max_{i,j} κ(i, j)).
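To make the relaxation concrete, LP-ultrametric can be solved by brute-force constraint enumeration for very small n. This is our own illustration and assumes SciPy is available; symmetry (11) is handled by keeping one variable per unordered pair, and (12) by variable bounds. Since constraints (10) range over all subsets S, this only scales to a handful of points:

```python
import numpy as np
from itertools import combinations, permutations
from scipy.optimize import linprog

def lp_ultrametric(V, kappa):
    """Solve LP-ultrametric exactly for tiny V by enumerating (8)-(10)."""
    V = list(V)
    n = len(V)
    pairs = list(combinations(V, 2))
    idx = {frozenset(p): k for k, p in enumerate(pairs)}
    var = lambda t, i, j: (t - 1) * len(pairs) + idx[frozenset((i, j))]
    nv = (n - 1) * len(pairs)
    c = np.zeros(nv)
    for t in range(1, n):
        for i, j in pairs:
            c[var(t, i, j)] = kappa(i, j)
    A, b = [], []
    # (8): x^{t+1}_{ij} - x^t_{ij} <= 0
    for t in range(1, n - 1):
        for i, j in pairs:
            r = np.zeros(nv); r[var(t + 1, i, j)] = 1; r[var(t, i, j)] = -1
            A.append(r); b.append(0)
    # (9): x^t_{ik} - x^t_{ij} - x^t_{jk} <= 0
    for t in range(1, n):
        for i, j, k in permutations(V, 3):
            r = np.zeros(nv)
            r[var(t, i, k)] += 1; r[var(t, i, j)] -= 1; r[var(t, j, k)] -= 1
            A.append(r); b.append(0)
    # (10): -sum_{j in S, j != i} x^t_{ij} <= t - |S|
    for t in range(1, n):
        for size in range(2, n + 1):
            for S in combinations(V, size):
                for i in S:
                    r = np.zeros(nv)
                    for j in S:
                        if j != i:
                            r[var(t, i, j)] = -1
                    A.append(r); b.append(t - size)
    return linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                   bounds=[(0, 1)] * nv, method="highs")
```

For three points with κ ≡ 1, constraint (10) forces x^1 ≡ 1, while at t = 2 the fractional solution x^2 ≡ 1/2 is feasible, giving LP value 4.5 versus the integral optimum 5, a small instance of the gap the rounding must absorb.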
From now on we will simply refer to a feasible solution of LP-ultrametric by the sequence of spreading metrics {d_t}_{t∈[n−1]} it induces. The following definition introduces the notion of an open ball B_U(i, r, t) of radius r centered at i ∈ V according to the metric d_t and restricted to the set U ⊆ V.

Definition 12. Let {d_t | t ∈ [n − 1]} be the sequence of spreading metrics feasible for LP-ultrametric. Let U ⊆ V be an arbitrary subset of V. For a vertex i ∈ U, r ∈ R, and t ∈ [n − 1], we define the open ball B_U(i, r, t) of radius r centered at i as B_U(i, r, t) := {j ∈ U | d_t(i, j) < r} ⊆ U. If U = V then we denote B_U(i, r, t) simply by B(i, r, t).

To round LP-ultrametric into a feasible solution for ILP-ultrametric, we will use the technique of sphere growing, which was introduced in [33] to show an O(log n) approximation for the maximum multicommodity flow problem. The basic idea is to grow a ball around a vertex until the expansion of this ball falls below a certain threshold, chop off this ball and declare it a part of the partition, and recurse on the remaining vertices. Since then this idea has been used by [25, 19, 14] to design approximation algorithms for various graph partitioning problems. The first step is to associate to every ball B_U(i, r, t) a volume vol(B_U(i, r, t)) and a boundary ∂B_U(i, r, t) so that its expansion is defined. For any t ∈ [n − 1] and U ⊆ V, we denote by γ^U_t the value of the layer-t objective for solution d_t restricted to the set U, i.e., γ^U_t := Σ_{i,j∈U, i<j} κ(i, j) d_t(i, j). When U = V we refer to γ^U_t simply as γ_t. Since κ : V × V → R_{≥0}, it follows that γ^U_t ≤ γ_t for any U ⊆ V. We are now ready to define the volume, boundary and expansion of a ball B_U(i, r, t). We use the definition of [19], modified for restrictions to arbitrary subsets U ⊆ V.

Definition 13 ([19]). Let U be an arbitrary subset of V. For a vertex i ∈ U, radius r ∈ R, and t ∈ [n − 1], let B_U(i, r, t) be the ball of radius r as in Definition 12.
Then we define its volume as

vol(B_U(i, r, t)) := γ^U_t / (n log n) + Σ_{j,k∈B_U(i,r,t), j<k} κ(j, k) d_t(j, k) + Σ_{j∈B_U(i,r,t), k∈U∖B_U(i,r,t)} κ(j, k) (r − d_t(i, j)).

The boundary of the ball, ∂B_U(i, r, t), is the partial derivative of the volume with respect to the radius, i.e., ∂B_U(i, r, t) := ∂vol(B_U(i, r, t)) / ∂r. The expansion φ(B_U(i, r, t)) of the ball B_U(i, r, t) is then defined as the ratio of its boundary to its volume, i.e., φ(B_U(i, r, t)) := ∂B_U(i, r, t) / vol(B_U(i, r, t)).

The following theorem establishes that the rounding procedure of Algorithm 1 ensures that the cliques in C_t are "small" and that the cost of the edges removed to form them is not too high. It also shows that Algorithm 1 can be implemented to run in time polynomial in n. Let m_ε := ⌊(n − 1)/(1 + ε)⌋ as in Algorithm 1.

Theorem 14. Let {x^t_{ij} | t ∈ [m_ε], i, j ∈ V} be the output of Algorithm 1 on a feasible solution {d_t}_{t∈[n−1]} of LP-ultrametric and any choice of ε ∈ (0, 1). For any t ∈ [m_ε], x^t_{ij} is feasible for the layer-⌊(1 + ε)t⌋ problem, and there is a constant c(ε) > 0 depending only on ε such that Σ_{{i,j}∈E(K_n)} κ(i, j) x^t_{ij} ≤ c(ε)(log n) γ_t. Moreover, Algorithm 1 can be implemented to run in time polynomial in n.

We are now ready to state the main theorem showing that we can obtain a low cost non-trivial ultrametric from Algorithm 1. The proof idea of the main theorem is to use the combinatorial characterization of Lemma 8 to show that the rounded solution is feasible for ILP-ultrametric, besides using Theorem 14 for the individual layer-t guarantees.

Theorem 15. Let {x^t_{ij} | t ∈ [m_ε], i, j ∈ V} be the output of Algorithm 1 on an optimal solution {d_t}_{t∈[n−1]} of LP-ultrametric for any choice of ε ∈ (0, 1). Define the sequence y^t_{ij} for every t ∈ [n − 1] and i, j ∈ V as y^t_{ij} := x^{⌊t/(1+ε)⌋}_{ij} if t > 1 + ε, and y^t_{ij} := 1 otherwise.
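Definition 13 is easy to evaluate numerically: for radii strictly between consecutive distance values, the boundary is just the κ-weight of the edges leaving the ball. The sketch below is our own (`d(t, i, j)` plays the role of d_t, `gamma_U` of γ^U_t, and U must be a Python set):

```python
import math

def ball(U, i, r, t, d):
    """Open ball B_U(i, r, t) = {j in U : d_t(i, j) < r} (Definition 12)."""
    return {j for j in U if d(t, i, j) < r}

def volume(U, i, r, t, d, kappa, gamma_U, n):
    """vol(B_U(i, r, t)): a seed term gamma^U_t / (n log n), plus the LP mass
    of edges inside the ball, plus the partial mass of edges leaving it."""
    B = ball(U, i, r, t, d)
    inside = sum(kappa(j, k) * d(t, j, k) for j in B for k in B if j < k)
    crossing = sum(kappa(j, k) * (r - d(t, i, j)) for j in B for k in U - B)
    return gamma_U / (n * math.log(n)) + inside + crossing

def boundary(U, i, r, t, d, kappa):
    """dvol/dr: the kappa-weight of the edges crossing the ball's boundary."""
    B = ball(U, i, r, t, d)
    return sum(kappa(j, k) for j in B for k in U - B)

def expansion(U, i, r, t, d, kappa, gamma_U, n):
    """phi(B_U(i, r, t)) = boundary / volume."""
    return boundary(U, i, r, t, d, kappa) / volume(U, i, r, t, d, kappa, gamma_U, n)
```

The seed term γ^U_t / (n log n) keeps the volume strictly positive even at r = 0, which is what makes the log-ratio bound on the chosen radius in Algorithm 1 go through.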
Then y^t_{ij} is feasible for ILP-ultrametric and satisfies Σ_{t=1}^{n−1} Σ_{{i,j}∈E(K_n)} κ(i, j) y^t_{ij} ≤ (2c(ε) log n) · OPT, where OPT is the optimal value of ILP-ultrametric and c(ε) is the constant in the statement of Theorem 14.

Lemma 11 and Theorem 15 imply the following corollary, where we put everything together to obtain a hierarchical clustering of V in time polynomial in n with |V| = n. Let T denote the set of all possible hierarchical clusterings of V.

Algorithm 1: Iterative rounding algorithm to find a low cost ultrametric
  Input: Data set V, {d_t}_{t∈[n−1]} : V × V, ε > 0, κ : V × V → R_{≥0}
  Output: A solution set of the form {x^t_{ij} ∈ {0, 1} | t ∈ [⌊(n−1)/(1+ε)⌋], i, j ∈ V}
  m_ε ← ⌊(n−1)/(1+ε)⌋
  t ← m_ε
  C_{t+1} ← {V}
  Δ ← ε/(1+ε)
  while t ≥ 1 do
    C_t ← ∅
    for U ∈ C_{t+1} do
      if |U| ≤ (1+ε)t then
        C_t ← C_t ∪ {U}
        continue with the next U ∈ C_{t+1}
      end
      while U ≠ ∅ do
        let i be an arbitrary vertex in U
        let r ∈ (0, Δ] be such that φ(B_U(i, r, t)) ≤ (1/Δ) log( vol(B_U(i, Δ, t)) / vol(B_U(i, 0, t)) )
        C_t ← C_t ∪ {B_U(i, r, t)}
        U ← U ∖ B_U(i, r, t)
      end
    end
    x^t_{ij} ← 1 if i ∈ U_1 ∈ C_t, j ∈ U_2 ∈ C_t and U_1 ≠ U_2; else x^t_{ij} ← 0
    t ← t − 1
  end
  return {x^t_{ij} | t ∈ [m_ε], i, j ∈ V}

Corollary 16. Given a data set V of n points and a similarity function κ : V × V → R_{≥0}, there is an algorithm to compute a hierarchical clustering T of V satisfying cost(T) ≤ O(log n) · min_{T′∈T} cost(T′) in time polynomial in n and log(max_{i,j∈V} κ(i, j)).

5 Generalized Cost Function

In this section we study the following natural generalization of cost function (1), also introduced by [16], where the number of leaves under the lowest common ancestor is scaled by a function f : R_{≥0} → R_{≥0}, i.e., cost_f(T) := Σ_{{i,j}∈E(K_n)} κ(i, j) f(|leaves(T[lca(i, j)])|). For this cost function to make sense, f should be strictly increasing and satisfy f(0) = 0. Possible choices for f include x^2, e^x − 1, and log(1 + x).
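The inner loop of Algorithm 1 can be simulated on toy inputs. The sketch below is our own self-contained simplification: the radius of each ball is chosen by scanning a grid for the smallest-expansion radius in (0, Δ], standing in for the exact log-derivative condition that the analysis uses, and the center is chosen deterministically rather than arbitrarily:

```python
import math

def sphere_growing_partition(U, t, d, kappa, eps, grid=50):
    """Partition U by sphere growing under d_t with Delta = eps / (1 + eps):
    repeatedly grow a low-expansion ball around a center and cut it off
    (a simplified sketch of the inner while-loop of Algorithm 1)."""
    U = set(U)
    n = len(U)
    delta = eps / (1 + eps)
    gamma = sum(kappa(i, j) * d(t, i, j) for i in U for j in U if i < j)
    seed = gamma / (n * math.log(max(n, 2))) + 1e-12  # keeps volumes positive
    clusters = []
    while U:
        center = min(U)                  # "arbitrary" vertex, made deterministic
        best_r, best_phi = delta, float("inf")
        for s in range(1, grid + 1):     # scan radii in (0, delta]
            r = delta * s / grid
            B = {j for j in U if d(t, center, j) < r}
            cut = sum(kappa(j, k) for j in B for k in U - B)
            vol = seed
            vol += sum(kappa(j, k) * d(t, j, k) for j in B for k in B if j < k)
            vol += sum(kappa(j, k) * (r - d(t, center, j)) for j in B for k in U - B)
            if cut / vol < best_phi:
                best_phi, best_r = cut / vol, r
        B = {j for j in U if d(t, center, j) < best_r}
        clusters.append(B)               # the ball always contains its center
        U -= B
    return clusters
```

On four points whose layer-t distances are 0 inside {0, 1} and {2, 3} and 1 across, sphere growing recovers exactly those two clusters for any ε, since no ball of radius at most Δ < 1 can cross between the groups.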
The top-down heuristic in [16] finds the optimal hierarchical clustering up to an approximation factor of c_n log n, with c_n defined as c_n := 3α_n max_{1≤n′≤n} f(n′)/f(⌈n′/3⌉), where α_n is the approximation factor of the Sparsest Cut algorithm used. A naive approach to solving this problem using the ideas of Algorithm 1 would be to replace the objective function of ILP-ultrametric by Σ_{{i,j}∈E(K_n)} κ(i, j) f(Σ_{t=1}^{n−1} x^t_{ij}). This makes the corresponding analogue of LP-ultrametric non-linear, however, and for general κ and f it is not clear how to compute an optimum solution in polynomial time. Using a small trick, one can still prove that Algorithm 1 returns a good approximation in this case, as the following theorem states. For more details on the generalized cost function we refer the reader to the supplementary material.

Theorem 17. Let a_n := max_{n′∈[n]}(f(n′) − f(n′ − 1)). Given a data set V of n points and a similarity function κ : V × V → R_{≥0}, there is an algorithm to compute a hierarchical clustering T of V satisfying cost_f(T) ≤ O(log n + a_n) · min_{T′∈T} cost_f(T′) in time polynomial in n, log(max_{i,j∈V} κ(i, j)) and log f(n).

Note that in this case we pay a price of O(log f(n)) in the running time due to binary search.

6 Experiments

Finally, we describe the experiments we performed. We implemented a generalized version of ILP-ultrametric where one can plug in any strictly increasing function f satisfying f(0) = 0. For the sake of exposition, we limited ourselves to {x, x^2, log(1 + x), e^x − 1} for the function f. We used the dual simplex method and separated constraints (9) and (10) to obtain fast computations. For the similarity function κ we limited ourselves to the cosine similarity κ_cos and the Gaussian kernel κ_gauss with σ = 1. Since Algorithm 1 requires κ ≥ 0, in practice we use 1 + κ_cos instead of κ_cos. Note that both Ward's method and the k-means algorithm work on the squared Euclidean distance and thus need vector representations of the data set.
For the linkage based algorithms we use the same similarity function that we use for Algorithm 1. We considered synthetic data sets and some data sets from the UCI database [36]. The synthetic data sets were mixtures of Gaussians in various small dimensional spaces, and for some of the larger data sets we subsampled a smaller number of points uniformly at random, a number of times depending on the performance of the MIP and LP solver. For a comparison of the cost of the hierarchy returned by Algorithm 1 and the optimal hierarchy obtained by solving ILP-ultrametric, see the supplementary material. To compare the different hierarchical clustering algorithms, we prune each hierarchy to get the best k flat clusters and measure its error relative to the ground truth. We use the following notion of error, also known as classification error, that is standard in the literature on hierarchical clustering (see, e.g., [37]).

Definition 18. Given a proposed clustering h : V → {1, . . . , k}, its classification error relative to a target clustering g : V → {1, . . . , k} is denoted by err(g, h) and is defined as err(g, h) := min_{σ∈S_k} Pr_{x∈V}[h(x) ≠ σ(g(x))].

Figure 1 shows that Algorithm 1 often gives better prunings compared to the other standard clustering algorithms with respect to this notion of error.

7 Conclusion

In this work we have studied the cost function introduced by [16] for hierarchical clustering of data under a pairwise similarity function. We have shown a combinatorial characterization of the ultrametrics induced by this cost function, leading to an improved approximation algorithm for this problem. It remains for future work to investigate combinatorial algorithms for this cost function, as well as algorithms for other cost functions of a similar flavor; see the supplementary material for a discussion.
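For small k, Definition 18 can be evaluated exactly by minimizing over all k! label permutations. A short sketch (our own; `g` and `h` map points to labels 1, . . . , k):

```python
from itertools import permutations

def classification_error(g, h, V, k):
    """err(g, h) = min over permutations sigma of Pr_x[h(x) != sigma(g(x))]
    (Definition 18); exact but exponential in k."""
    V = list(V)
    best = 1.0
    for sigma in permutations(range(1, k + 1)):
        relabel = {c: sigma[c - 1] for c in range(1, k + 1)}
        err = sum(h(x) != relabel[g(x)] for x in V) / len(V)
        best = min(best, err)
    return best
```

The minimization over permutations makes the error invariant to renaming clusters: a proposed clustering identical to the ground truth up to swapped labels has error 0.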
Figure 1: Comparison of Algorithm 1 with other algorithms (average, single and complete linkage, Ward's method, k-means) for clustering using 1 + κ_cos (left) and κ_gauss (right); the error with respect to the ground truth is plotted per data set.

Acknowledgments

Research reported in this paper was partially supported by NSF CAREER award CMMI-1452463 and NSF grant CMMI-1333789. The authors thank Kunal Talwar and Mohit Singh for helpful discussions and anonymous reviewers for helping improve the presentation of this paper.

References

[1] Margareta Ackerman, Shai Ben-David, and David Loker. Characterization of linkage-based clustering. In COLT, pages 270–281, 2010.
[2] Nir Ailon and Moses Charikar. Fitting tree metrics: Hierarchical clustering and phylogeny. In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05), pages 73–82. IEEE, 2005.
[3] Sanjeev Arora, Satish Rao, and Umesh Vazirani. Expander flows, geometric embeddings and graph partitioning. Journal of the ACM (JACM), 56(2):5, 2009.
[5] Maria-Florina Balcan, Avrim Blum, and Santosh Vempala. A discriminative framework for clustering via similarity functions. In Proceedings of the Fortieth Annual ACM Symposium on Theory of Computing, pages 671–680. ACM, 2008.
[7] Yair Bartal. Graph decomposition lemmas and their role in metric embedding methods. In European Symposium on Algorithms, pages 89–97. Springer, 2004.
[12] Moses Charikar and Vaggos Chatziafratis. Approximate hierarchical clustering via sparsest cut and spreading metrics. arXiv preprint arXiv:1609.09548, 2016.
[14] Moses Charikar, Venkatesan Guruswami, and Anthony Wirth. Clustering with qualitative information. In 44th Annual IEEE Symposium on Foundations of Computer Science, pages 524–533.
IEEE, 2003.
[16] Sanjoy Dasgupta. A cost function for similarity-based hierarchical clustering. In Daniel Wichs and Yishay Mansour, editors, Proceedings of the 48th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2016, Cambridge, MA, USA, June 18-21, 2016, pages 118–127. ACM, 2016. ISBN 978-1-4503-4132-5. doi: 10.1145/2897518.2897527. URL http://doi.acm.org/10.1145/2897518.2897527.
[17] Sanjoy Dasgupta and Philip M. Long. Performance guarantees for hierarchical clustering. Journal of Computer and System Sciences, 70(4):555–569, 2005.
[18] Marco Di Summa, David Pritchard, and Laura Sanità. Finding the closest ultrametric. Discrete Applied Mathematics, 180:70–80, 2015.
[19] Guy Even, Joseph Naor, Satish Rao, and Baruch Schieber. Fast approximate graph partitioning algorithms. SIAM Journal on Computing, 28(6):2187–2214, 1999.
[20] Guy Even, Joseph (Seffi) Naor, Satish Rao, and Baruch Schieber. Divide-and-conquer approximation algorithms via spreading metrics. Journal of the ACM (JACM), 47(4):585–616, 2000.
[22] Joseph Felsenstein. Inferring Phylogenies, volume 2. Sinauer Associates, Sunderland, 2004.
[23] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics, Springer, Berlin, 2001.
[25] Naveen Garg, Vijay V. Vazirani, and Mihalis Yannakakis. Approximate max-flow min-(multi)cut theorems and their applications. SIAM Journal on Computing, 25(2):235–251, 1996.
[32] Robert Krauthgamer, Joseph (Seffi) Naor, and Roy Schwartz. Partitioning graphs into balanced components. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 942–949. Society for Industrial and Applied Mathematics, 2009.
[33] Tom Leighton and Satish Rao. An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms.
In 29th Annual Symposium on Foundations of Computer Science, pages 422–431. IEEE, 1988.
[36] M. Lichman. UCI machine learning repository, 2013. URL http://archive.ics.uci.edu/ml.
[37] Marina Meilă and David Heckerman. An experimental comparison of model-based clustering methods. Machine Learning, 42(1-2):9–29, 2001.
Combining Fully Convolutional and Recurrent Neural Networks for 3D Biomedical Image Segmentation

Jianxu Chen, University of Notre Dame, jchen16@nd.edu
Lin Yang, University of Notre Dame, lyang5@nd.edu
Yizhe Zhang, University of Notre Dame, yzhang29@nd.edu
Mark Alber, University of Notre Dame, malber@nd.edu
Danny Z. Chen, University of Notre Dame, dchen@nd.edu

Abstract

Segmentation of 3D images is a fundamental problem in biomedical image analysis. Deep learning (DL) approaches have achieved state-of-the-art segmentation performance. To exploit 3D contexts using neural networks, known DL segmentation methods, including 3D convolution, 2D convolution on planes orthogonal to 2D image slices, and LSTM in multiple directions, all suffer from incompatibility with the highly anisotropic dimensions common in 3D biomedical images. In this paper, we propose a new DL framework for 3D image segmentation, based on a combination of a fully convolutional network (FCN) and a recurrent neural network (RNN), which are responsible for exploiting the intra-slice and inter-slice contexts, respectively. To our best knowledge, this is the first DL framework for 3D image segmentation that explicitly leverages 3D image anisotropism. Evaluated on a dataset from the ISBI Neuronal Structure Segmentation Challenge and on in-house image stacks for 3D fungus segmentation, our approach achieves promising results compared to the known DL-based 3D segmentation approaches.

1 Introduction

In biomedical image analysis, a fundamental problem is the segmentation of 3D images, to identify target 3D objects such as neuronal structures [1] and knee cartilage [15]. In biomedical imaging, 3D images often consist of highly anisotropic dimensions [11]; that is, the scale of each voxel in depth (the z-axis) can be much larger (e.g., 5∼10 times) than that in the xy plane.
On various biomedical image segmentation tasks, deep learning (DL) methods have achieved tremendous success in terms of accuracy (outperforming classic methods by a large margin [4]) and generality (being mostly application-independent [16]). For 3D segmentation, known DL schemes can be broadly classified into four categories. (I) 2D fully convolutional networks (FCN), such as U-Net [16] and DCAN [2], can be applied to each 2D image slice, and a 3D segmentation is then generated by concatenating the 2D results. (II) 3D convolutions can be employed to replace 2D convolutions [10], or combined with 2D convolutions into a hybrid network [11]. (III) Tri-planar schemes (e.g., [15]) apply three 2D convolutional networks based on orthogonal planes (i.e., the xy, yz, and xz planes) to perform voxel classification. (IV) 3D segmentation can also be conducted by recurrent neural networks (RNN). The most representative RNN-based scheme is Pyramid-LSTM [18], which uses six generalized long short-term memory networks to exploit the 3D context.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: An overview of our DL framework for 3D segmentation. There are two key components in the architecture: kU-Net and BDC-LSTM. kU-Net is a type of FCN and is applied to 2D slices to exploit intra-slice contexts. BDC-LSTM, a generalized LSTM network, is applied to a sequence of 2D feature maps, from 2D slice z − ρ to 2D slice z + ρ, extracted by kU-Nets, to extract hierarchical features from the 3D contexts. Finally, a softmax function (the green arrows) is applied to the result of each slice in order to build the segmentation probability map.

There are mainly three issues with the known DL-based 3D segmentation methods. First, simply linking 2D segmentations into 3D cannot leverage the spatial correlation along the z-direction.
Second, incorporating 3D convolutions may incur extremely high computation costs (e.g., high memory consumption and long training time [10]). Third, both 3D convolution and other workarounds that reduce the intensive computation of 3D convolution, like tri-planar schemes or Pyramid-LSTM, perform 2D convolutions with isotropic kernels on anisotropic 3D images. This can be problematic, especially for images with substantially lower resolution in depth (the z-axis). For instance, both the tri-planar schemes and Pyramid-LSTM perform 2D convolutions on the xz and yz planes. Two orthogonal one-voxel-wide lines in the xz plane, one along the z-direction and the other along the x-direction, may correspond to two structures at very different scales, and consequently may correspond to different types of objects, or may not even both correspond to objects of interest. But 2D convolutions on the xz plane with an isotropic kernel are not able to differentiate these two lines. On the other hand, 3D objects of the same type, if rotated in 3D, may have very different appearances in the xz or yz plane. This fact makes the features extracted by such 2D isotropic convolutions in the xz or yz plane generalize poorly (e.g., they may cause overfitting). In common practice, a 3D biomedical image is often represented as a sequence of 2D slices (called a z-stack). Recurrent neural networks, especially LSTM [8], are an effective model for processing sequential data [14, 17]. Inspired by these facts, we propose a new framework combining two DL components: a fully convolutional network (FCN) to extract intra-slice contexts, and a recurrent neural network (RNN) to extract inter-slice contexts. Our framework is based on the following ideas. Our FCN component employs a new deep architecture for 2D feature extraction. It aims to efficiently compress the intra-slice information into hierarchical features.
Compared to known FCNs for 2D biomedical imaging (e.g., U-Net [16]), our new FCN is considerably more effective in dealing with objects of very different scales, by simulating human behaviors in perceiving multi-scale information. We introduce a generalized RNN to exploit 3D contexts, which essentially applies a series of 2D convolutions on the xy plane in a recurrent fashion to interpret 3D contexts while propagating contextual information in the z-direction. Our key idea is to hierarchically assemble intra-slice contexts into 3D contexts by leveraging the inter-slice correlations. The insight is that our RNN can distill 3D contexts in the same spirit as a 2D convolutional neural network (CNN) extracting a hierarchy of contexts from a 2D image. Compared to known RNN models for 3D segmentation, such as Pyramid-LSTM [18], our RNN model is free of the problematic isotropic convolutions on anisotropic images, and can exploit 3D contexts more efficiently by combining with an FCN. The essential difference between our new DL framework and the known DL-based 3D segmentation approaches is that we explicitly leverage the anisotropism of 3D images and efficiently construct a hierarchy of discriminative features from 3D contexts by performing systematic 2D operations. Our framework can serve as a new paradigm for migrating 2D DL architectures (e.g., CNN) to effectively exploit 3D contexts and solve 3D image segmentation problems.

2 Methodology

A schematic view of our DL framework is given in Fig. 1. This framework is a combination of two key components: an FCN (called kU-Net) and an RNN (called BDC-LSTM), to exploit intra-slice and inter-slice contexts, respectively.

Figure 2: Illustrating four different ways to organize k submodule U-Nets in kU-Net (here k = 2). U-Net-2 works at a coarser scale (downsampled once from the original image), while U-Net-1 works at a finer scale (directly cropped from the original image). kU-Net propagates high-level information extracted by U-Net-2 to U-Net-1. (A) U-Net-1 fuses the output of U-Net-2 in the downsampling stream. (B) U-Net-1 fuses the output of U-Net-2 in the upsampling stream. (C) U-Net-1 fuses the intermediate result of U-Net-2 in the most abstract layer. (D) U-Net-1 takes every piece of information from U-Net-2 in the commensurate layers. Architecture (A) is finally adopted for kU-Net.

Section 2.1 presents the kU-Net, and Section 2.2 introduces the derivation of the BDC-LSTM. We then show how to combine these two components in the framework to conduct 3D segmentation. Finally, we discuss the training strategy.

2.1 The FCN Component: kU-Net

The FCN component aims to construct a feature map for each 2D slice, from which object-relevant information (e.g., texture, shapes) will be extracted and object-irrelevant information (e.g., uneven illumination, imaging contrast) will be discarded. By doing so, the next RNN component can concentrate on the inter-slice context. A key challenge for the FCN component is the multi-scale issue. Namely, objects in biomedical images, specifically in 2D slices, can have very different scales and shapes. But the common FCN [13] and other known variants for segmenting biomedical images (e.g., U-Net [16]) work on a fixed-size perception field (e.g., a 500 × 500 region in the whole 2D slice). When objects are of larger scale than the pre-defined perception field size, it can be troublesome for such FCN methods to capture the high-level context (e.g., the overall shapes). In the literature, a multi-stream FCN was proposed in ProNet [19] to address this multi-scale issue in natural scene images. In ProNet, the same image is resized to different scales and fed in parallel to a shared FCN with the same parameters. However, the mechanism of shared parameters may make it unsuitable for biomedical images, because objects of different scales may have very different appearances and require different FCNs to process.
We propose a new FCN architecture to simulate how human experts perceive multi-scale information, in which multiple submodule FCNs are employed to work on different image scales systematically. Here, we use U-Net [16] as the submodule FCN and call the new architecture kU-Net. U-Net [16] is chosen because it is a well-known FCN that has achieved huge success in biomedical image segmentation. U-Net [16] consists of four downsampling steps followed by four upsampling steps. Skip-layer connections exist between each downsampled feature map and the commensurate upsampled feature map. We refer to [16] for the detailed structure of U-Net. We observed that, when human experts label the ground truth, they tend to first zoom out of the image to figure out where the target objects are and then zoom in to label the accurate boundaries of those targets. There are two critical mechanisms in kU-Net to simulate such human behaviors. (1) kU-Net employs a sequence of submodule FCNs to extract information at different scales sequentially (from the coarsest scale to the finest scale). (2) The information extracted by the submodule FCN responsible for a coarser scale is propagated to the subsequent submodule FCN to assist the feature extraction at a finer scale. First, we create different scales of an original input 2D image by a series of k − 1 max-pooling layers. Let I_t be the image at scale t (t = 1, . . . , k), i.e., the result after t − 1 max-pooling layers (I_1 is the original image). Each pixel in I_t corresponds to 2^{t−1} pixels in the original image. Then, we use U-Net-t (t = 1, . . . , k), i.e., the t-th submodule, to process I_t. We keep the input window size the same across all U-Nets by using crop layers. Intuitively, U-Net-1 to U-Net-k all have the same input size, while U-Net-1 views the smallest region at the highest resolution and U-Net-k views the largest region at the lowest resolution.
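The construction of the k input scales via repeated max-pooling can be sketched as follows (a minimal numpy illustration; the function names and the toy image are our own, not the paper's code):

```python
import numpy as np

# I_1 is the original image and I_t is obtained from I_{t-1} by one 2x2
# max-pooling, i.e., t-1 poolings in total.

def max_pool2x2(img):
    h, w = img.shape
    h2, w2 = h // 2 * 2, w // 2 * 2          # drop an odd border row/column
    img = img[:h2, :w2]
    return img.reshape(h2 // 2, 2, w2 // 2, 2).max(axis=(1, 3))

def build_scales(image, k):
    scales = [image]                          # [I_1, ..., I_k]
    for _ in range(k - 1):
        scales.append(max_pool2x2(scales[-1]))
    return scales

img = np.arange(64.0).reshape(8, 8)
I = build_scales(img, k=3)
```

Each pooling halves the spatial size, so with the same input window, deeper scales cover exponentially larger regions of the original image.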
In other words, for any 1 ≤ t1 < t2 ≤ k, U-Net-t2 is responsible for a larger image scale than U-Net-t1. Second, we need to propagate the higher level information extracted by U-Net-t (2 ≤ t ≤ k) to the next submodule, i.e., U-Net-(t−1), so that clues from a coarser scale can assist the work at a finer scale. A natural strategy is to copy the result from U-Net-t to the commensurate layer in U-Net-(t−1). As shown in Fig. 2, there are four typical ways to achieve this: (A) U-Net-(t−1) only uses the final result from U-Net-t and uses it at the start; (B) U-Net-(t−1) only uses the final result from U-Net-t and uses it at the end; (C) U-Net-(t−1) only uses the most abstract information from U-Net-t; (D) U-Net-(t−1) uses every piece of information from U-Net-t. In our trial studies, type (A) and type (D) achieved the best performance. Since type (A) has fewer parameters than (D), we chose type (A) as our final architecture for organizing the sequence of submodule FCNs. From a different perspective, each submodule U-Net can be viewed as a "super layer". Therefore, the kU-Net is a "deep" deep learning model. Because the parameter k exponentially increases the input window size of the network, a small k is sufficient to handle many biomedical images (we use k = 2 in our experiments). Appended with a 1×1 convolution (to convert the number of channels in the feature map) and a softmax layer, the kU-Net can be used for 2D segmentation problems. We will show (see Table 1) that kU-Net (i.e., a sequence of collaborative U-Nets) achieves better performance than a single U-Net in terms of segmentation accuracy.

2.2 The RNN Component: BDC-LSTM

In this section, we first review the classic LSTM network [8] and the generalized convolutional LSTM [14, 17, 18] (denoted CLSTM). Next, we describe how our RNN component, called BDC-LSTM, is extended from CLSTM. Finally, we propose a deep architecture for BDC-LSTM and discuss its advantages over other variants.
LSTM and CLSTM: An RNN (e.g., LSTM) is a neural network that maintains a self-connected internal state acting as a "memory". The ability to "remember" what has been seen allows RNNs to attain exceptional performance in processing sequential data. Recently, a generalized LSTM, denoted CLSTM, was developed [14, 17, 18]. CLSTM explicitly assumes that the inputs are images and replaces the vector multiplications in the LSTM gates by convolution operators. It is particularly efficient in exploiting image sequences. For instance, it can be used for image sequence prediction, either in an encoder-decoder framework [17] or by combining with optical flow [14]. Specifically, CLSTM can be formulated as follows:

i_z = σ(x_z ∗ W_xi + h_{z−1} ∗ W_hi + b_i)
f_z = σ(x_z ∗ W_xf + h_{z−1} ∗ W_hf + b_f)
c_z = c_{z−1} ⊙ f_z + i_z ⊙ tanh(x_z ∗ W_xc + h_{z−1} ∗ W_hc + b_c)
o_z = σ(x_z ∗ W_xo + h_{z−1} ∗ W_ho + b_o)
h_z = o_z ⊙ tanh(c_z)        (1)

Here, ∗ denotes convolution and ⊙ denotes element-wise product; σ(·) and tanh(·) are the logistic sigmoid and hyperbolic tangent functions; i_z, f_z, and o_z are the input gate, forget gate, and output gate; b_i, b_f, b_c, b_o are bias terms; and x_z, c_z, h_z are the input, the cell activation state, and the hidden state at slice z. The W matrices are convolution kernels governing the value transitions; for instance, W_hf controls how the forget gate takes values from the hidden state. The input to CLSTM is a feature map of size f_in × l_in × w_in, and the output is a feature map of size f_out × l_out × w_out, with l_out ≤ l_in and w_out ≤ w_in; l_out and w_out depend on the size of the convolution kernels and on whether padding is used.

BDC-LSTM: We extend CLSTM to Bi-Directional Convolutional LSTM (BDC-LSTM). The key extension is to stack two layers of CLSTM, which work in two opposite directions (see Fig. 3(A)). The contextual information carried in the two layers, one in the z−-direction and the other in the z+-direction, is concatenated as output. This can be interpreted as follows.
To determine the hidden state at a slice z, we take the 2D hierarchical features in slice z (i.e., x_z) and the contextual information from both the z+ and z− directions. One layer of CLSTM integrates the information from the z−-direction (resp., z+-direction) and x_z to capture the minus-side (resp., plus-side) context (see Fig. 3(B)). Then, the two one-side contexts (z+ and z−) are fused. In fact, Pyramid-LSTM [18] can be viewed as a different extension of CLSTM, which employs six CLSTMs in six different directions (x+/−, y+/−, and z+/−) and sums up the outputs of the six CLSTMs. However, useful information may be lost during the output summation. Intuitively, the sum of six outputs can only convey a simplified context instead of the exact situation in each direction. It should be noted that concatenating the six outputs would greatly increase the memory consumption, and is thus impractical in Pyramid-LSTM. Hence, besides avoiding the problematic convolutions on the xz and yz planes (as discussed in Section 1), BDC-LSTM is in principle more effective in exploiting inter-slice contexts than Pyramid-LSTM.

Deep Architectures: Multiple BDC-LSTMs can be stacked into a deep structure by taking the output feature map of one BDC-LSTM as the input to another BDC-LSTM. In this sense, each BDC-LSTM can be viewed as a super "layer" in the deep structure. Besides simply taking one output as another input, we can also insert other operations, like max-pooling or deconvolution, between BDC-LSTM layers. As a consequence, deep architectures for 2D CNNs can be easily migrated or generalized to build deep architectures for BDC-LSTM. This is shown in Fig. 3(C)-(D).
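Eq. (1) and the bi-directional concatenation can be illustrated with a small numpy sketch (shapes, kernel sizes, and the random initialization are assumptions for illustration; the actual model uses learned parameters and a Torch implementation):

```python
import numpy as np

def conv_same(x, w):
    # x: (C_in, H, W); w: (C_out, C_in, k, k); zero-padded cross-correlation.
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for dy in range(k):
                for dx in range(k):
                    out[o] += w[o, i, dy, dx] * xp[i, dy:dy + H, dx:dx + W]
    return out

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def clstm_step(x, h_prev, c_prev, P):
    # One step of Eq. (1): input/forget/output gates, cell and hidden states.
    i = sigmoid(conv_same(x, P['Wxi']) + conv_same(h_prev, P['Whi']) + P['bi'])
    f = sigmoid(conv_same(x, P['Wxf']) + conv_same(h_prev, P['Whf']) + P['bf'])
    c = c_prev * f + i * np.tanh(conv_same(x, P['Wxc']) + conv_same(h_prev, P['Whc']) + P['bc'])
    o = sigmoid(conv_same(x, P['Wxo']) + conv_same(h_prev, P['Who']) + P['bo'])
    h = o * np.tanh(c)
    return h, c

def bdc_lstm(stack, P):
    # Run the recurrence in the z- and z+ directions, then concatenate the
    # two hidden states per slice (rather than summing them as in Pyramid-LSTM).
    def run(slices):
        h = c = np.zeros((P['Wxi'].shape[0],) + slices[0].shape[1:])
        out = []
        for x in slices:
            h, c = clstm_step(x, h, c, P)
            out.append(h)
        return out
    fwd = run(stack)
    bwd = run(stack[::-1])[::-1]
    return [np.concatenate([a, b], axis=0) for a, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
cin, ch, k = 2, 3, 3
P = {n: 0.1 * rng.standard_normal((ch, cin if n[1] == 'x' else ch, k, k))
     for n in ['Wxi', 'Whi', 'Wxf', 'Whf', 'Wxc', 'Whc', 'Wxo', 'Who']}
P.update({b: np.zeros((ch, 1, 1)) for b in ['bi', 'bf', 'bc', 'bo']})
stack = [rng.standard_normal((cin, 6, 6)) for _ in range(4)]
out = bdc_lstm(stack, P)
```

Note that the per-slice output has twice as many channels as a single CLSTM, which is exactly the cost of concatenation over summation.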
The underlying relationship between deep BDC-LSTM and deep 2D CNN is the following: deep CNN extracts a hierarchy of non-linear features from a 2D image, where a deeper layer interprets higher level information of the image; likewise, deep BDC-LSTM extracts a hierarchy of contextual features from the 3D context, where a deeper BDC-LSTM layer interprets higher level 3D contexts. In [14, 17, 18], multiple CLSTMs were simply stacked one after another, possibly with different kernel sizes, in which a CLSTM "layer" may be viewed as a degenerate BDC-LSTM "layer". When considering the problem in the context of CNNs, as discussed above, one can see that no feature hierarchy was ever formed in these simple architectures. Usually, convolutional layers are followed by subsampling, such as max-pooling, in order to form the hierarchy. We propose a deep architecture combining max-pooling, dropout, and deconvolution layers with the BDC-LSTM layers. The detailed structure is as follows (the numbers in parentheses indicate the size changes of the feature map in each 2D slice): input (64×126×126); dropout layer with p = 0.5; two BDC-LSTM layers with 64 hidden units and 5×5 kernels (64×118×118); 2×2 max-pooling (64×59×59); dropout layer with p = 0.5; two BDC-LSTM layers with 64 hidden units and 5×5 kernels (64×51×51); 2×2 deconvolution (64×102×102); dropout layer with p = 0.5; 3×3 convolution layer without recurrent connections (64×100×100); 1×1 convolution layer without recurrent connections (2×100×100). (Note: all convolutions in BDC-LSTM use the same kernel size as indicated in the layers.) Thus, to predict the probability map of a 100×100 region, we need as input the 126×126 region centered at the same position. In the evaluation stage, the whole feature map can be processed using the overlapping-tile strategy [16], because deep BDC-LSTM is fully convolutional along the z-direction. Suppose the feature map of a whole slice is of size 64×W×H.
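The feature-map sizes quoted above can be verified with simple arithmetic (a plain-Python check; `valid_conv` is our own helper name):

```python
# Walk the stated layer sequence and track the spatial size of the feature map
# in each 2D slice: valid convolutions shrink the map by k-1 per 2D axis.

def valid_conv(s, k):
    return s - k + 1

s = 126
s = valid_conv(valid_conv(s, 5), 5)   # two BDC-LSTM layers, 5x5 kernels
after_lstm1 = s                        # -> 118
s //= 2                                # 2x2 max-pooling
after_pool = s                         # -> 59
s = valid_conv(valid_conv(s, 5), 5)   # two more BDC-LSTM layers
after_lstm2 = s                        # -> 51
s *= 2                                 # 2x2 deconvolution
after_deconv = s                       # -> 102
s = valid_conv(s, 3)                   # 3x3 convolution
s = valid_conv(s, 1)                   # 1x1 convolution
output = s                             # -> 100
margin = 126 - output                  # 26: why a (W+26)x(H+26) padded input is needed
```

The margin of 26 pixels is exactly the border that has to be supplied around each 100×100 output tile.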
The input tensor is padded with zeros on the borders to resize it to 64×(W+26)×(H+26). Then, a sequence of 64×126×126 patches is processed each time, and the results are stitched together to form the 3D segmentation.

Figure 3: (A) The structure of BDC-LSTM, where two layers of CLSTM modules are connected in a bi-directional manner. (B) A graphical illustration of information propagation through BDC-LSTM along the z-direction. (C) The circuit diagram of BDC-LSTM. The green arrows represent the recurrent connections in opposite directions. When this diagram is rotated by 90 degrees, it has a structure similar to a layer in a CNN, except for the recurrent connections. (D) The deep structure of BDC-LSTM used in our method. BDC-LSTM can be stacked in a way analogous to a layer in a CNN. The red arrows are 5 × 5 convolutions. The yellow and purple arrows indicate max-pooling and deconvolution, respectively. The rightmost blue arrow indicates a 1 × 1 convolution. Dropout is applied (not shown) after the input layer, the max-pooling layer, and the deconvolution layer.

2.3 Combining kU-Net and BDC-LSTM

The motivation for solving 3D segmentation by combining an FCN (kU-Net) and an RNN (BDC-LSTM) is to distribute the burden of exploiting 3D contexts. kU-Net extracts and compresses the hierarchy of intra-slice contexts into feature maps, and BDC-LSTM distills the 3D context from a sequence of abstracted 2D contexts. These two components work in coordination, as follows. Suppose the 3D image consists of Nz 2D slices of size Nx × Ny each. First, kU-Net extracts feature maps of size 64 × Nx × Ny, denoted by f_2D^z, from each slice z. The overlapping-tile strategy [16] is adopted when the 2D images are too big to be processed by kU-Net in one shot. Second, BDC-LSTM works on f_2D^z to build the hierarchy of non-linear features from 3D contexts and generates another 64 × Nx × Ny feature map, denoted by f_3D^z, z = 1, . . . , Nz. For each slice z, f_2D^h (h = z−ρ, . . . , z, . . .
, z+ρ) will serve as the context (ρ = 1 in our implementation). Finally, a softmax function is applied to f_3D^z to generate the 3D segmentation probability map.

2.4 Training Strategy

Our whole network, including kU-Net and BDC-LSTM, can be trained either end-to-end or in a decoupled manner. Sometimes, biomedical images are too big to be processed as a whole. Overlapping-tile is a common workaround [16], but it also reduces the range of the context utilized by the networks. Decoupled training, namely, training kU-Net and BDC-LSTM separately, is especially useful in situations where the effective context of each voxel is very large. Given the same amount of computing resources (e.g., GPU memory), when allocating all resources to train only one component, both kU-Net and BDC-LSTM can take much larger tiles as input. In practice, even though end-to-end training has the advantage of simplicity and consistency, the decoupled training strategy is preferred for challenging problems. kU-Net is initialized using the strategy in [7] and trained using Adam [9], with first moment coefficient β1 = 0.9, second moment coefficient β2 = 0.999, ϵ = 1e−10, and a constant learning rate of 5e−5. BDC-LSTM is trained with RMSProp [6], with smoothing constant α = 0.9 and ϵ = 1e−5. The initial learning rate is set to 1e−3 and is halved every 2000 iterations until it reaches 1e−5. In each iteration, one training example is randomly selected. The training data is augmented with rotation, flipping, and mirroring. To avoid gradient explosion, the gradient is clipped to [−5, 5] in each iteration. The parameters in BDC-LSTM are initialized with random values selected uniformly from [−0.02, 0.02]. We use a weighted cross-entropy loss in both the kU-Net and BDC-LSTM training. In biomedical image segmentation, there are often certain important regions in which errors should be reduced as much as possible. For instance, when two objects touch each other tightly, it is important to obtain a correct segmentation along the separating boundary between the two objects, while errors near the non-touching boundaries are of less importance. Hence, we adopt the idea in [16] to assign a unique weight to each voxel in the loss calculation.

Table 1: Experimental results on the ISBI neuron dataset and the in-house 3D fungus datasets.

Method                  | V_rand (Neuron) | V_info (Neuron) | Pixel Error (Fungus)
Pyramid-LSTM [18]       | 0.9677          | 0.9829          | N/A
U-Net [16]              | 0.9728          | 0.9866          | 0.0263
Tri-Planar [15]         | 0.8462          | 0.9180          | 0.0375
3D Conv [10]            | 0.8178          | 0.9125          | 0.0630
Ours (FCN only)         | 0.9749          | 0.9869          | 0.0242
Ours (FCN+simple RNN)   | 0.9742          | 0.9869          | 0.0241
Ours (FCN+deep RNN)     | 0.9753          | 0.9870          | 0.0215

3 Experiments

Our framework was implemented in Torch7 [5] with the rnn package [12]. We conducted experiments on a workstation with an NVIDIA Tesla K40m GPU (12 GB memory), using the cuDNN library (v5) for GPU acceleration. Our approach was evaluated in two 3D segmentation applications and compared with several state-of-the-art DL methods.

3D Neuron Structures: The first evaluation dataset was from the ISBI challenge on the segmentation of neuronal structures in 3D electron microscopic (EM) images [1]. The objective is to segment the neuron boundaries. Briefly, there are two image stacks of 512 × 512 × 30 voxels, where each voxel measures 4 × 4 × 50 nm. Noise and section alignment errors exist in both stacks. One stack (with ground truth) was used for training, and the other for evaluation. We adopted the same metrics as in [1], i.e., the foreground-restricted Rand score (V_rand) and the information theoretic score (V_info) after border thinning. As shown in [1], V_rand and V_info are good approximations of the difficulty for humans to correct the segmentation errors, and are robust to border variations due to border thickness.

3D Fungus Structures: Our method was also evaluated on in-house datasets for the segmentation of tubular fungus structures in 3D images from a Serial Block-Face Scanning Electron Microscope.
The ratio of the voxel scales is x : y : z = 1 : 1 : 3.45. There are five stacks; in each of them, every slice is a grayscale image of 853 × 877 pixels. We manually labeled the first 16 slices in one stack as the training data and used the other four stacks, each containing 81 sections, for evaluation. The metric used to quantify the segmentation accuracy is the pixel error, defined as the Euclidean distance between the ground truth label (0 or 1) and the segmentation probability (a value in the range [0, 1]). Note that we do not use the same metric as for the neuron dataset, because "border thinning" is not applicable to the fungus datasets. The pixel error was in fact the metric adopted at the time of the ISBI neuron segmentation challenge, and it is a well-recognized way to quantify pixel-level accuracy. It is also worth mentioning that it was impractical to fully label four stacks for evaluation due to the intensive labor required. Hence, we prepared the ground truth for every 5th section in each evaluation stack (i.e., sections 5, 10, 15, . . ., 75, 80). In total, 16 sections were selected to estimate the performance on a whole stack. Namely, all 81 sections in each stack were segmented, but 16 of them were used to compute the evaluation score for the corresponding stack. The reported performance is the average of the scores over all four stacks. Recall the four categories of known deep learning based 3D segmentation methods described in Section 1. We selected one typical method from each category for comparison. (1) U-Net [16], which achieved state-of-the-art segmentation accuracy on 2D biomedical images, is selected as the representative scheme of linking 2D segmentations into 3D results. (Note: We are aware of the method in [3], which is another variant of 2D FCN and achieved excellent performance on the neuron dataset. But, unlike U-Net, the generality of [3] across different applications is not yet clear. Our test of [3] on the in-house datasets showed at least a 5% lower F1 score than U-Net.
Thus, we decided to take U-Net as the representative method in this category.) (2) 3D-Conv [10] is a method using a CNN with 3D convolutions. (3) Tri-Planar [15] is a classic solution to avoid the high computing costs of 3D convolutions, which replaces a 3D convolution with three 2D convolutions on orthogonal planes. (4) Pyramid-LSTM [18] is the best known generalized LSTM network for 3D segmentation.

Figure 4: (A) A cropped region in a 2D fungus image. (B) The result using only the FCN component. (C) The result of combining FCN and RNN. (D) The true fungi to be segmented in (A).

Results: The results on the 3D neuron dataset and the fungus datasets are shown in Table 1. It is evident that our proposed kU-Net, when used alone, achieves considerable improvement over U-Net [16]. Our approach outperforms the known DL methods utilizing 3D contexts. Moreover, one can see that our proposed deep architecture achieves better performance than simply stacking multiple BDC-LSTMs together. As discussed in Section 2.2, adding subsampling layers, as in 2D CNNs, enables the RNN component to perceive higher level 3D contexts. It is worth mentioning that our two evaluation datasets are quite representative. The fungus data has small anisotropism (the z resolution is close to the xy resolution), while the 3D neuron dataset has large anisotropism (the z resolution is much lower than the xy resolution). Together, they demonstrate the effectiveness of our framework in handling and leveraging anisotropism. We should mention that we re-implemented Pyramid-LSTM [18] in Torch7 and tested it on the fungus datasets. But the memory requirement of Pyramid-LSTM, when implemented in Torch7, was too large for our GPU. For the original network structure, the largest possible cubical region that could be processed at a time within our GPU memory capacity was 40 × 40 × 8. Using the same hyper-parameters as in [18], we could not obtain acceptable results with such a limited processing cube.
(The result of Pyramid-LSTM on the 3D neuron dataset was fetched from the ISBI challenge leaderboard¹ on May 10, 2016.) Here, one may see that our method is much more efficient in GPU memory when implemented under the same deep learning framework and tested on the same machine. Some results are shown in Fig. 4 to qualitatively compare the results of using the FCN component alone and of combining RNN and FCN. In general, both methods make nearly no false negative errors. But the RNN component helps to (1) suppress false positive errors by maintaining inter-slice consistency, and (2) make more confident predictions in ambiguous cases by leveraging the 3D context. In a nutshell, FCN collects as much discriminative information as possible within each slice and RNN makes further refinements according to the inter-slice correlation, so that an accurate segmentation can be made at each voxel.

4 Conclusions and Future Work

In this paper, we introduce a new deep learning framework for 3D image segmentation, based on a combination of an FCN (i.e., kU-Net) to exploit 2D contexts and an RNN (i.e., BDC-LSTM) to integrate contextual information along the z-direction. Evaluated in two different 3D biomedical image segmentation applications, our proposed approach achieves state-of-the-art performance and outperforms known DL schemes utilizing 3D contexts. Our framework provides a new paradigm for migrating the superior performance of 2D deep architectures to exploit 3D contexts. Following this new paradigm, we will explore BDC-LSTMs in different deep architectures to achieve further improvement and conduct more extensive evaluations on different datasets, such as BraTS (http://www.braintumorsegmentation.org/) and MRBrainS (http://mrbrains13.isi.uu.nl).

5 Acknowledgement

This research was supported in part by NSF Grants CCF-1217906 and CCF-1617735 and NIH Grants R01-GM095959 and U01-HL116330. Also, we would like to thank Dr.
Viorica Patraucean at the University of Cambridge (UK) for discussions of BDC-LSTM, and Prof. David P. Hughes and Dr. Maridel Fredericksen at Pennsylvania State University (US) for providing the 3D fungus datasets.

¹ http://brainiac2.mit.edu/isbi_challenge/leaders-board-new

References

[1] A. Cardona, S. Saalfeld, S. Preibisch, B. Schmid, A. Cheng, J. Pulokas, P. Tomancak, and V. Hartenstein. An integrated micro- and macroarchitectural analysis of the Drosophila brain by computer-assisted serial section electron microscopy. PLoS Biol, 8(10):e1000502, 2010.
[2] H. Chen, X. Qi, L. Yu, and P.-A. Heng. DCAN: Deep contour-aware networks for accurate gland segmentation. arXiv preprint arXiv:1604.02677, 2016.
[3] H. Chen, X. J. Qi, J. Z. Cheng, and P. A. Heng. Deep contextual networks for neuronal structure segmentation. In AAAI Conference on Artificial Intelligence, 2016.
[4] D. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber. Deep neural networks segment neuronal membranes in electron microscopy images. In NIPS, pages 2843–2851, 2012.
[5] R. Collobert, K. Kavukcuoglu, and C. Farabet. Torch7: A Matlab-like environment for machine learning. In BigLearn, NIPS Workshop, 2011.
[6] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSProp and equilibrated adaptive learning rates for non-convex optimization. arXiv preprint arXiv:1502.04390, 2015.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, pages 1026–1034, 2015.
[8] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[9] D. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[10] M. Lai. Deep learning for medical image segmentation. arXiv preprint arXiv:1505.02000, 2015.
[11] K. Lee, A. Zlateski, V. Ashwin, and H. S. Seung. Recursive training of 2D-3D convolutional networks for neuronal boundary prediction.
In NIPS, pages 3559–3567, 2015.
[12] N. Léonard, S. Waghmare, and Y. Wang. rnn: Recurrent library for Torch. arXiv preprint arXiv:1511.07889, 2015.
[13] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, pages 3431–3440, 2015.
[14] V. Patraucean, A. Handa, and R. Cipolla. Spatio-temporal video autoencoder with differentiable memory. arXiv preprint arXiv:1511.06309, 2015.
[15] A. Prasoon, K. Petersen, C. Igel, F. Lauze, E. Dam, and M. Nielsen. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In MICCAI, pages 246–253, 2013.
[16] O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In MICCAI, pages 234–241, 2015.
[17] X. Shi, Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. arXiv preprint arXiv:1506.04214, 2015.
[18] M. F. Stollenga, W. Byeon, M. Liwicki, and J. Schmidhuber. Parallel multi-dimensional LSTM, with application to fast biomedical volumetric image segmentation. In NIPS, pages 2980–2988, 2015.
[19] C. Sun, M. Paluri, R. Collobert, R. Nevatia, and L. Bourdev. ProNet: Learning to propose object-specific boxes for cascaded neural networks. arXiv preprint arXiv:1511.03776, 2015.
2016
SDP Relaxation with Randomized Rounding for Energy Disaggregation

Kiarash Shaloudegi, Imperial College London, k.shaloudegi16@imperial.ac.uk
András György, Imperial College London, a.gyorgy@imperial.ac.uk
Csaba Szepesvári, University of Alberta, szepesva@ualberta.ca
Wilsun Xu, University of Alberta, wxu@ualberta.ca

Abstract

We develop a scalable, computationally efficient method for the task of energy disaggregation for home appliance monitoring. In this problem the goal is to estimate the energy consumption of each appliance over time based on the total energy-consumption signal of a household. The current state of the art is to model the problem as inference in factorial HMMs, and use quadratic programming to find an approximate solution to the resulting quadratic integer program. Here we take a more principled approach, better suited to integer programming problems, and find an approximate optimum by combining a convex semidefinite relaxation with randomized rounding, as well as a scalable ADMM method that exploits the special structure of the resulting semidefinite program. Simulation results on both synthetic and real-world datasets demonstrate the superiority of our method.

1 Introduction

Energy efficiency is becoming one of the most important issues in our society. Identifying the energy consumption of individual electrical appliances in homes can raise awareness of power consumption and lead to significant savings in utility bills. Detailed feedback about the power consumption of individual appliances helps energy consumers identify potential areas for energy savings, and increases their willingness to invest in more efficient products. Notifying home owners of accidentally running stoves, ovens, etc., may not only result in savings but also improve safety.
Energy disaggregation, or non-intrusive load monitoring (NILM), uses data from utility smart meters to separate individual load consumptions (i.e., the load signals) from the total measured power (i.e., the mixture of the signals) in households. The bulk of the research in NILM has concentrated on applying different data mining and pattern recognition methods to track the footprint of each appliance in the total power measurements. Several techniques, such as artificial neural networks (ANN) [Prudenzi, 2002, Chang et al., 2012, Liang et al., 2010], deep neural networks [Kelly and Knottenbelt, 2015], k-nearest neighbors (k-NN) [Figueiredo et al., 2012, Weiss et al., 2012], sparse coding [Kolter et al., 2010], and ad-hoc heuristic methods [Dong et al., 2012], have been employed. Recent works, rather than turning electrical events into features fed into classifiers, consider the temporal structure of the data [Zia et al., 2011, Kolter and Jaakkola, 2012, Kim et al., 2011, Zhong et al., 2014, Egarter et al., 2015, Guo et al., 2015], resulting in state-of-the-art performance [Kolter and Jaakkola, 2012]. These works usually model the individual appliances by independent hidden Markov models (HMMs), which leads to a factorial HMM (FHMM) model describing the total consumption.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

FHMMs, introduced by Ghahramani and Jordan [1997], are powerful tools for modeling time series generated from multiple independent sources; they are well suited, for example, to modeling speech when multiple people are talking simultaneously [Rennie et al., 2009], or to energy monitoring, which we consider here [Kim et al., 2011]. Doing exact inference in FHMMs is NP-hard; therefore, computationally efficient approximate methods have been the subject of study.
Classic approaches include sampling methods, such as MCMC or particle filtering [Koller and Friedman, 2009], and variational Bayes methods [Wainwright and Jordan, 2007, Ghahramani and Jordan, 1997]. In practice, both approaches are nontrivial to make work, and we are not aware of any prior work that has demonstrated good results, at practical scales, in our application domain with the type of FHMMs we need. In this paper we follow the work of Kolter and Jaakkola [2012] and model the NILM problem by FHMMs. The distinguishing features of FHMMs in this setting are that (i) the output is the sum of the outputs of the underlying HMMs (perhaps with some noise), and (ii) the number of transitions is small in comparison to the signal length. FHMMs with the first property are called additive. In this paper we derive an efficient, convex-relaxation-based method for FHMMs of the above type, which significantly outperforms the state-of-the-art algorithms. Our approach is based on revisiting relaxations of the integer programming formulation of Kolter and Jaakkola [2012]. In particular, we replace the quadratic programming relaxation of Kolter and Jaakkola [2012] with a relaxation to a semidefinite program (SDP), which, based on the relaxation literature, is expected to be tighter and thus better. While SDPs are convex and could in theory be solved using interior-point (IP) methods in polynomial time [Malick et al., 2009], IP scales poorly with the size of the problem and is thus unsuitable for our large-scale problem, which may involve as many as a million variables. To address this problem, capitalizing on the structure of our relaxation coming from the FHMM model, we develop a novel variant of ADMM [Boyd et al., 2011] that uses Moreau-Yosida regularization, and combine it with a version of randomized rounding inspired by the recent work of Park and Boyd [2015].
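As a generic illustration of the relax-and-round idea (not the authors' SDP/ADMM pipeline), one can round a fractional solution of a relaxed {0,1} quadratic program by random sampling and keep the best candidate under the true objective; the objective and all names below are made up for the sketch:

```python
import numpy as np

def objective(x, Q, c):
    # A toy binary quadratic objective x'Qx + c'x (stand-in for the true one).
    return x @ Q @ x + c @ x

def randomized_round(p, Q, c, n_samples=200, rng=None):
    # Draw 0/1 candidates with P(x_j = 1) = p_j; keep the best under the
    # original (unrelaxed) objective.
    rng = rng or np.random.default_rng(0)
    best_x, best_val = None, np.inf
    for _ in range(n_samples):
        x = (rng.random(p.shape) < p).astype(float)
        val = objective(x, Q, c)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

rng = np.random.default_rng(1)
n = 8
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
c = rng.standard_normal(n)
p = rng.random(n)                   # pretend this came from a relaxation
x_hat, val = randomized_round(p, Q, c)
```

In the paper's setting, the fractional solution would come from the SDP relaxation solved by ADMM; the sampling scheme of Park and Boyd [2015] is more structured than this Bernoulli sketch.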
Experiments on synthetic and real data confirm that our method significantly outperforms other algorithms from the literature, and we expect that it may find applications in other FHMM inference problems, too.

1.1 Notation

Throughout the paper, we use the following notation: R denotes the set of real numbers, S^n_+ denotes the set of n×n positive semidefinite matrices, I{E} denotes the indicator function of an event E (that is, it is 1 if the event is true and zero otherwise), and 1 denotes a vector of appropriate dimension whose entries are all 1. For an integer K, [K] denotes the set {1, 2, ..., K}. N(µ, Σ) denotes the Gaussian distribution with mean µ and covariance matrix Σ. For a matrix A, trace(A) denotes its trace and diag(A) denotes the vector formed by the diagonal entries of A.

2 System Model

Following Kolter and Jaakkola [2012], the energy usage of the household is modeled using an additive factorial HMM [Ghahramani and Jordan, 1997]. Suppose there are M appliances in a household. Each of them is modeled via an HMM: let P_i ∈ R^{K_i×K_i} denote the transition-probability matrix of appliance i ∈ [M], and assume that for each state s ∈ [K_i] the energy consumption of the appliance is a constant µ_{i,s} (µ_i denotes the corresponding K_i-dimensional column vector (µ_{i,1}, ..., µ_{i,K_i})^⊤).
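To make this generative model concrete, here is a minimal simulation sketch of an additive FHMM in Python/NumPy. The appliance names and parameter values are hypothetical, chosen only for illustration; the paper's own code is Matlab-based.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_additive_fhmm(P_list, mu_list, T, sigma, rng):
    """Sample an additive FHMM: appliance i follows its own Markov chain
    with transition matrix P_list[i]; the observation y_t is the sum of
    the active state means plus zero-mean Gaussian noise."""
    M = len(P_list)
    states = np.zeros((T, M), dtype=int)
    for i, P in enumerate(P_list):
        s = rng.integers(len(mu_list[i]))            # uniform initial state
        for t in range(T):
            states[t, i] = s
            s = rng.choice(len(P), p=P[s])           # Markov transition
    clean = np.array([sum(mu_list[i][states[t, i]] for i in range(M))
                      for t in range(T)])
    y = clean + sigma * rng.normal(size=T)
    return y, states, clean

# Two hypothetical appliances: a 2-state fridge and a 3-state oven.
P_fridge = np.array([[0.95, 0.05], [0.10, 0.90]])
P_oven = np.array([[0.90, 0.05, 0.05],
                   [0.20, 0.70, 0.10],
                   [0.10, 0.10, 0.80]])
mu_fridge = np.array([0.0, 120.0])                   # watts per state
mu_oven = np.array([0.0, 800.0, 2000.0])

y, states, clean = simulate_additive_fhmm([P_fridge, P_oven],
                                          [mu_fridge, mu_oven],
                                          T=200, sigma=5.0, rng=rng)
```

The few-transitions property discussed above shows up here as long runs of constant `clean` power interrupted by occasional jumps.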
Denoting by x_{t,i} ∈ {0,1}^{K_i} the indicator vector of the state s_{t,i} of appliance i at time t (i.e., x_{t,i,s} = I{s_{t,i} = s}), the total power consumption at time t is Σ_{i∈[M]} µ_i^⊤ x_{t,i}, which we assume is observed with additive zero-mean Gaussian noise of variance σ²:

    y_t ∼ N( Σ_{i∈[M]} µ_i^⊤ x_{t,i}, σ² ).¹

Given this model, the maximum a posteriori estimate of the appliance state vector sequence can be obtained by minimizing the negative log-posterior:

    argmin_{x_{t,i}}  Σ_{t=1}^{T} (y_t − Σ_{i=1}^{M} x_{t,i}^⊤ µ_i)² / (2σ²)  −  Σ_{t=1}^{T−1} Σ_{i=1}^{M} x_{t,i}^⊤ (log P_i) x_{t+1,i}
    subject to  x_{t,i} ∈ {0,1}^{K_i},  1^⊤ x_{t,i} = 1,  i ∈ [M] and t ∈ [T],          (1)

where log P_i denotes the matrix obtained from P_i by taking the logarithm of each entry.

¹Alternatively, we can assume that the power consumption y_{t,i} of each appliance is normally distributed with mean µ_i^⊤ x_{t,i} and variance σ_i², where σ² = Σ_{i∈[M]} σ_i² and y_t = Σ_{i∈[M]} y_{t,i}.

In our particular application, in addition to the signal's temporal structure, large changes in the total power (in comparison to the signal noise) carry valuable information that can be used to further improve the inference results (in fact, solely this information was used for energy disaggregation, e.g., by Dong et al., 2012, 2013, Figueiredo et al., 2012). This observation was used by Kolter and Jaakkola [2012] to amend the posterior with a term that tries to match the large signal changes to the possible changes in the power level when only the state of a single appliance changes. Formally, let Δy_t = y_{t+1} − y_t and Δµ^{(i)}_{m,k} = µ_{i,k} − µ_{i,m}, and define the matrices E_{t,i} ∈ R^{K_i×K_i} by (E_{t,i})_{m,k} = (Δy_t − Δµ^{(i)}_{m,k})² / (2σ_diff²), for some constant σ_diff > 0. Intuitively, (E_{t,i})_{m,k} is the negative log-likelihood (up to a constant) of observing a change Δy_t in the power level when appliance i transitions from state m to state k, under zero-mean Gaussian noise with variance σ_diff².
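The objective in (1) is straightforward to evaluate for any candidate state sequence. The sketch below is an illustration (not the paper's code) and uses integer state labels instead of indicator vectors:

```python
import numpy as np

def neg_log_posterior(states, y, mu_list, logP_list, sigma):
    """Objective of (1) for a candidate state sequence: squared
    reconstruction error plus negative log transition probabilities.
    states[t, i] is the (integer) state of appliance i at time t."""
    T, M = states.shape
    recon = np.array([sum(mu_list[i][states[t, i]] for i in range(M))
                      for t in range(T)])
    data_term = np.sum((y - recon) ** 2) / (2 * sigma ** 2)
    trans_term = sum(logP_list[i][states[t, i], states[t + 1, i]]
                     for t in range(T - 1) for i in range(M))
    return data_term - trans_term

# Tiny sanity check: one 2-state appliance, noiseless observations,
# uniform transition probabilities.
mu_list = [np.array([0.0, 10.0])]
logP_list = [np.log(np.full((2, 2), 0.5))]
states = np.array([[0], [1]])       # the appliance switches on
y = np.array([0.0, 10.0])           # perfectly reconstructed
val = neg_log_posterior(states, y, mu_list, logP_list, sigma=1.0)
```

In this tiny example the data term vanishes, so the objective reduces to −log 0.5 = log 2 for the single transition.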
Making the heuristic approximation that the observation noise and this noise are independent (which clearly does not hold under the previous model), Kolter and Jaakkola [2012] added the term −Σ_{t=1}^{T−1} Σ_{i=1}^{M} x_{t,i}^⊤ E_{t,i} x_{t+1,i} to the objective of (1), arriving at

    argmin_{x_{t,i}}  f(x_1, ..., x_T) := Σ_{t=1}^{T} (y_t − Σ_{i=1}^{M} x_{t,i}^⊤ µ_i)² / (2σ²)  −  Σ_{t=1}^{T−1} Σ_{i=1}^{M} x_{t,i}^⊤ (E_{t,i} + log P_i) x_{t+1,i}
    subject to  x_{t,i} ∈ {0,1}^{K_i},  1^⊤ x_{t,i} = 1,  i ∈ [M] and t ∈ [T].          (2)

In the rest of the paper we derive an efficient approximate solution to (2) and demonstrate that it is superior to the approximate solution derived by Kolter and Jaakkola [2012] with respect to several measures quantifying the accuracy of load disaggregation solutions.

3 SDP Relaxation and Randomized Rounding

There are two major challenges to solving the optimization problem (2) exactly: (i) the optimization is over binary vectors x_{t,i}; and (ii) the objective function f, even when extended to a convex domain, is in general non-convex (due to the second term). As a remedy, we first relax (2) into an integer quadratic programming problem, and then apply an SDP relaxation and randomized rounding to approximately solve the relaxed problem. We start by reviewing the latter methods.

3.1 Approximate Solutions for Integer Quadratic Programming

In this section we consider approximate solutions to the integer quadratic programming problem

    minimize  f(x) = x^⊤ D x + 2 d^⊤ x
    subject to  x ∈ {0,1}^n,          (3)

where D ∈ S^n_+ is positive semidefinite and d ∈ R^n. While an exact solution of (3) can be found by enumerating all possible combinations of binary values within a properly chosen box or ellipsoid, the running time of such exact methods is nearly exponential in the number n of binary variables, making them unfit for large-scale problems.
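For very small n, the exact minimizer of (3) can indeed be found by enumeration, which is useful as a ground-truth baseline when testing relaxations. An illustrative sketch:

```python
import itertools
import numpy as np

def brute_force_binary_qp(D, d):
    """Exact minimizer of x^T D x + 2 d^T x over x in {0,1}^n by full
    enumeration -- O(2^n) work, so only viable for tiny n."""
    n = len(d)
    best_x, best_val = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        val = x @ D @ x + 2 * d @ x
        if val < best_val:
            best_val, best_x = val, x
    return best_x, best_val

D = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive semidefinite
d = np.array([-1.0, 0.5])
x_star, v = brute_force_binary_qp(D, d)  # optimum value 0, attained at (0, 0)
```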
One way to avoid exponential running times is to replace (3) with a convex problem, in the hope that the solution of the convex problem can serve as a good starting point for finding high-quality solutions to (3). The standard approach is to linearize (3) by introducing a new variable X ∈ S^n_+ tied to x through X = x x^⊤, so that x^⊤ D x = trace(DX), and then relax the nonconvex constraints X = x x^⊤, x ∈ {0,1}^n to X ⪰ x x^⊤, diag(X) = x, x ∈ [0,1]^n. This leads to the relaxed SDP problem

    minimize  trace(D^⊤ X) + 2 d^⊤ x
    subject to  [ 1  x^⊤ ; x  X ] ⪰ 0,  diag(X) = x,  x ∈ [0,1]^n.          (4)

By introducing X̂ = [ 1  x^⊤ ; x  X ], this can be written in the compact SDP form

    minimize  trace(D̂^⊤ X̂)
    subject to  X̂ ⪰ 0,  A X̂ = b,          (5)

where D̂ = [ 0  d^⊤ ; d  D ] ∈ S^{n+1}_+, b ∈ R^m, and A : S^n_+ → R^m is an appropriate linear operator. This general SDP optimization problem can be solved with arbitrary precision in polynomial time using interior-point methods [Malick et al., 2009, Wen et al., 2010]. As discussed before, this approach becomes impractical in terms of both running time and memory if either the number of variables or the number of optimization constraints is large [Wen et al., 2010]. We will return to the issue of building scalable solvers for NILM in Section 5. Note that by introducing the new variable X, the problem is lifted into a higher-dimensional space, which is computationally more challenging than simply relaxing the integrality constraint in (3), but leads to a tighter approximation of the optimum (cf. Park and Boyd, 2015; see also Lovász and Schrijver, 1991, Burer and Vandenbussche, 2006). To obtain a feasible point of (3) from the solution of (5), we still need to convert the solution x to a binary vector. This can be done via randomized rounding [Park and Boyd, 2015, Goemans and Williamson, 1995]: instead of letting x ∈ [0,1]^n, the integrality constraint x ∈ {0,1}^n in (3) can be replaced by the inequalities x_i(x_i − 1) ≥ 0 for all i ∈ [n].
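The lifting that leads to (4) rests on the identity x^⊤ D x = trace(D X) with X = x x^⊤; for any binary x, the lifted matrix is feasible for the relaxation. A small NumPy check (illustration only, with an arbitrary symmetric D):

```python
import numpy as np

def lift(x):
    """Lifted matrix [[1, x^T], [x, X]] with X = x x^T, as in (4)-(5)."""
    X = np.outer(x, x)
    top = np.concatenate(([1.0], x))
    bottom = np.hstack((x[:, None], X))
    return np.vstack((top, bottom))

x = np.array([1.0, 0.0, 1.0])            # a binary point
Xhat = lift(x)
eigs = np.linalg.eigvalsh(Xhat)          # PSD: all eigenvalues >= 0

D = np.array([[3.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 4.0]])
direct_obj = x @ D @ x                          # quadratic form
lifted_obj = np.trace(D @ np.outer(x, x))       # linearized version
```

Both objective values coincide, and diag(x x^⊤) = x holds exactly because the entries of x are binary.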
Although these constraints are nonconvex, they admit an interesting probabilistic interpretation: the optimization problem

    minimize   E_{w∼N(µ,Σ)}[ w^⊤ D w + 2 d^⊤ w ]
    subject to E_{w∼N(µ,Σ)}[ w_i(w_i − 1) ] ≥ 0,  i ∈ [n],   µ ∈ R^n,  Σ ⪰ 0

is equivalent to

    minimize   trace((Σ + µ µ^⊤) D) + 2 d^⊤ µ
    subject to Σ_{i,i} + µ_i² − µ_i ≥ 0,  i ∈ [n],          (6)

which is in the form of (4) with X = Σ + µ µ^⊤ and x = µ (above, E_{x∼P}[f(x)] stands for ∫ f(x) dP(x)). This leads to the following rounding procedure: starting from a solution (x*, X*) of (4), we randomly draw several samples w^{(j)} from N(x*, X* − x* x*^⊤), round each coordinate w^{(j)}_i to 0 or 1 to obtain x^{(j)}, and keep the x^{(j)} with the smallest objective value. In a series of experiments, Park and Boyd [2015] found this procedure to be better than naively rounding the coordinates of x*.

4 An Efficient Algorithm for Inference in FHMMs

To arrive at our method, we apply the results of the previous subsection to (2). To do so, as mentioned at the beginning of the section, we need to convert the problem into a convex one, since the terms −x_{t,i}^⊤ (E_{t,i} + log P_i) x_{t+1,i} in the objective of (2) are not convex. To address this issue, we relax the problem by introducing new variables Z_{t,i} = x_{t,i} x_{t+1,i}^⊤, and replace the constraint Z_{t,i} = x_{t,i} x_{t+1,i}^⊤ with two new ones: Z_{t,i} 1 = x_{t,i} and Z_{t,i}^⊤ 1 = x_{t+1,i}. To simplify the presentation, we will assume that K_i = K for all i ∈ [M]. Then problem (2) becomes

    argmin_{x_t, z_t}  Σ_{t=1}^{T} { (y_t − x_t^⊤ µ)² / (2σ²) − p_t^⊤ z_t }
    subject to x_t ∈ {0,1}^{MK}, t ∈ [T],   z_t ∈ {0,1}^{MK²}, t ∈ [T−1],
               1^⊤ x_{t,i} = 1,  t ∈ [T] and i ∈ [M],
               Z_{t,i} 1 = x_{t,i},  Z_{t,i}^⊤ 1 = x_{t+1,i},  t ∈ [T−1] and i ∈ [M],          (7)

where x_t^⊤ = [x_{t,1}^⊤, ..., x_{t,M}^⊤], µ^⊤ = [µ_1^⊤, ..., µ_M^⊤], z_t^⊤ = [vec(Z_{t,1})^⊤, ..., vec(Z_{t,M})^⊤], and p_t^⊤ = [vec(E_{t,1} + log P_1)^⊤, ..., vec(E_{t,M} + log P_M)^⊤], with vec(A) denoting the column vector obtained by concatenating the columns of a matrix A. Expanding the first term of (7) and following the relaxation method of Section 3.1, we get the following SDP problem:²

    argmin_{X_t, z_t}  Σ_{t=1}^{T} trace(D_t^⊤ X_t) + d_t^⊤ z_t
    subject to A X_t = b,   B X_t + C z_t + E X_{t+1} = g,   X_t ⪰ 0,   X_t, z_t ≥ 0.          (8)

Here A : S^{MK+1}_+ → R^m and B, E : S^{MK+1}_+ → R^{m′} are appropriate linear operators, C ∈ R^{MK²×m′}, and the integers m and m′ are determined by the number of equality constraints, while

    D_t = (1/(2σ²)) [ 0  −y_t µ^⊤ ; −y_t µ  µ µ^⊤ ]   and   d_t = p_t.

Notice that (8) is a simple, though huge-dimensional, SDP problem of the form (5) in which D̂ has a special block structure. Next we apply the randomized rounding method of Section 3.1 to provide an approximate solution to our original problem (2). Starting from an optimal solution (z*, X*) of (8), and exploiting the fact that we have an SDP problem for each time step t, we obtain Algorithm 1, which performs the rounding sequentially for t = 1, 2, ..., T. Note that the randomized rounding is run over three consecutive time steps, since X_t appears at time steps t − 1 and t + 1 in addition to time t (cf. equation 9).

Algorithm 1 ADMM-RR: randomized rounding algorithm for a suboptimal solution to (2)
Given: number of iterations itermax, length of input data T
Solve the optimization problem (8): run Algorithm 2 to get X*_t and z*_t
Set xbest_t := z*_t and Xbest_t := X*_t for t = 1, ..., T
for t = 2, ..., T − 1 do
  Set x := [xbest_{t−1}^⊤, xbest_t^⊤, xbest_{t+1}^⊤]^⊤
  Set X := block(Xbest_{t−1}, Xbest_t, Xbest_{t+1}), where block(·) constructs a block-diagonal matrix from its arguments
  Set fbest := ∞
  Form the covariance matrix Σ := X − x x^⊤ and find its Cholesky factorization L L^⊤ = Σ
  for k = 1, 2, ..., itermax do
    Random sampling: z_k := x + L w, where w ∼ N(0, I)
    Round z_k to the nearest integer point x_k that satisfies the constraints of (7)
    If fbest > f_t(x_k), update xbest_t and Xbest_t from the corresponding entries of x_k and x_k x_k^⊤, respectively
  end for
end for
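The Gaussian sampling-and-rounding core of this procedure can be sketched in isolation as follows. This is a simplified Python/NumPy illustration, not the paper's Matlab implementation: the fractional pair (x*, X*) below is a stand-in for an SDP solver's output, and the only constraint enforced is binarity.

```python
import numpy as np

def randomized_round(x_star, X_star, objective, n_samples, rng):
    """Gaussian randomized rounding in the spirit of Park and Boyd [2015]:
    sample w ~ N(x*, X* - x* x*^T), round to {0,1}^n, keep the best."""
    cov = X_star - np.outer(x_star, x_star)
    cov = cov + 1e-9 * np.eye(len(x_star))   # guard against numerical error
    L = np.linalg.cholesky(cov)
    best_x, best_val = None, np.inf
    for _ in range(n_samples):
        w = x_star + L @ rng.normal(size=len(x_star))
        x = np.clip(np.round(w), 0.0, 1.0)   # nearest binary point
        val = objective(x)
        if val < best_val:
            best_val, best_x = val, x
    return best_x, best_val

rng = np.random.default_rng(0)
D = np.array([[2.0, -1.0], [-1.0, 2.0]])
d = np.array([-1.0, -1.0])
obj = lambda x: x @ D @ x + 2 * d @ x
# Pretend an SDP solver returned this fractional solution of (4):
x_star = np.array([0.6, 0.6])
X_star = np.array([[0.6, 0.3], [0.3, 0.6]])
x_round, v = randomized_round(x_star, X_star, obj, n_samples=200, rng=rng)
```

For this toy problem the binary optimum is (1, 1) with objective value −2, which the rounding finds with overwhelming probability given 200 samples.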
Following Park and Boyd [2015], in the experiments we introduce a simple greedy search within Algorithm 1: after finding the initial point x_k, we greedily try to improve the objective value by changing the status of a single appliance at a single time instant. The search stops when no such improvement is possible, and we use the resulting point as the estimate.

²The only modification is that we need to keep the equality constraints in (7) that are missing from (3).

5 ADMM Solver for Large-Scale, Sparse, Block-Structured SDP Problems

Given the relaxation and randomized rounding presented in the previous section, all that remains is to find X*_t, z*_t to initialize Algorithm 1. Although interior-point methods can solve SDP problems efficiently, even for problems with sparse constraints such as (4), the running time to obtain an ε-optimal solution is of the order of n^{3.5} log(1/ε) [Nesterov, 2004, Section 4.3.3], which becomes prohibitive in our case, since the number of variables scales linearly with the time horizon T. As an alternative, first-order methods can be used for large-scale problems [Wen et al., 2010]. Since our problem (8) is an SDP problem whose objective function is separable, ADMM is a promising candidate for finding a near-optimal solution. To apply ADMM, we use Moreau-Yosida quadratic regularization [Malick et al., 2009], which is well suited to the primal formulation we consider. When implementing ADMM over the variables (X_t, z_t)_t, the sparse structure of our constraints allows us to consider the SDP problems for each time step t sequentially:

    argmin_{X_t, z_t}  trace(D_t^⊤ X_t) + d_t^⊤ z_t
    subject to A X_t = b,
               B X_t + C z_t + E X_{t+1} = g,
               B X_{t−1} + C z_{t−1} + E X_t = g,
               X_t ⪰ 0,  X_t, z_t ≥ 0.          (9)

The regularized Lagrangian function for (9) is³

    L_µ = trace(D^⊤ X) + d^⊤ z + (1/(2µ)) ‖X − S‖_F² + (1/(2µ)) ‖z − r‖₂²
          + λ^⊤(b − A X) + ν^⊤(g − B X − C z − E X₊) + ν₋^⊤(g − B X₋ − C z₋ − E X)
          − trace(W^⊤ X) − trace(P^⊤ X) − h^⊤ z,          (10)

where λ, ν, W ≥ 0, P ⪰ 0, and h ≥ 0 are dual variables and µ > 0 is a constant. By taking the derivatives of L_µ and computing the optimal values of X and z, one can derive the standard ADMM updates, which, due to space constraints, are given in Appendix A. The final algorithm, which updates the variables for each t sequentially, is given by Algorithm 2.

Algorithm 2 ADMM for sparse SDPs of the form (8)
Given: length of input data T, number of iterations itermax
Set the initial values to zero: W⁰_t, P⁰_t, S⁰ = 0, λ⁰_t = 0, ν⁰_t = 0, and r⁰_t, h⁰_t = 0
Set µ = 0.001 {default step-size value}
for k = 0, 1, ..., itermax do
  for t = 1, 2, ..., T do
    Update P^k_t, W^k_t, λ^k, S^k_t, r^k_t, h^k_t, and ν^k_t according to (11) (Appendix A)
  end for
end for

Algorithms 1 and 2 together give an efficient algorithm for finding an approximate solution to (2), and thus also for the inference problem of additive FHMMs.

6 Learning the Model

The previous section provided an algorithm to solve the inference part of our energy disaggregation problem. However, to be able to run the inference method, we first need to set up the model. To learn the HMMs describing each appliance, we use the method of Kontorovich et al. [2013] to learn the transition matrices, and the spectral learning method of Anandkumar et al. [2012] (following Mattfeld, 2014) to determine the emission parameters. When it comes to the specific application of NILM, however, the problem of an unknown, time-varying bias also needs to be addressed, which arises due to the presence of unknown/unmodeled appliances in the measured signal. A simple idea, which is also followed by Kolter and Jaakkola [2012], is to use a "generic model" whose contribution to the objective function is downweighted.
Surprisingly, incorporating this idea into the FHMM inference creates some unexpected challenges.⁴ Therefore, in this work we use a practical, heuristic solution tailored to NILM. First we identify all electric events, defined by a large change Δy_t in the power usage (using an ad-hoc threshold). Then we discard all events that are similar to any possible level change Δµ^{(i)}_{m,k}. The remaining large jumps are regarded as coming from a generic HMM model describing the unregistered appliances: they are clustered into K − 1 clusters, and an HMM model is built in which each cluster is regarded as the power usage of a single state of the unregistered appliances. We also allow an "off state" with power usage 0.

³We drop the subscript t and replace t + 1 and t − 1 with + and − signs, respectively.
⁴For example, the incorporation of this generic model breaks the derivation of the algorithm of Kolter and Jaakkola [2012]. See Appendix B for a discussion.

7 Experimental Results

We evaluate the performance of our algorithm in two setups:⁵ we use a synthetic dataset to test the inference method in a controlled environment, while we use the REDD dataset of Kolter and Johnson [2011] to see how the method performs on non-simulated, "real" data. The performance of our algorithm is compared to the structured variational inference (SVI) method of Ghahramani and Jordan [1997], the method of Kolter and Jaakkola [2012], and that of Zhong et al. [2014]; we refer to the last two algorithms as KJ and ZGS, respectively.

⁵Our code is available online at https://github.com/kiarashshaloudegi/FHMM_inference.

7.1 Experimental Results: Synthetic Data

The synthetic dataset was generated randomly (the exact procedure is described in Appendix C). To evaluate the performance, we use the normalized disaggregation error (NDE), as suggested by Kolter and Jaakkola [2012] and also adopted by Zhong et al. [2014]. This measures the reconstruction error for each individual appliance. Given the true output y_{t,i} and the estimated output ŷ_{t,i} (i.e., ŷ_{t,i} = µ_i^⊤ x̂_{t,i}), the error measure is defined as

    NDE = sqrt( Σ_{t,i} (y_{t,i} − ŷ_{t,i})² / Σ_{t,i} y_{t,i}² ).

Figures 1 and 2 show the performance of the algorithms as the number of HMMs (M) and the number of states (K), respectively, are varied. Each plot reports results for T = 1000 steps, averaged over 100 random models and realizations, showing the mean and standard deviation of the NDE. Our method, shown under the label ADMM-RR, runs ADMM for 2500 iterations, runs the local search at the end of every 250 iterations, and chooses the result with the maximum likelihood. ADMM is the algorithm that applies naive rounding. It can be observed that the variational inference method is significantly outperformed by all other methods, while our algorithm consistently obtains better results than its competitors, with KJ coming second and ZGS third.

Figure 1: Disaggregation error varying the number of HMMs.
Figure 2: Disaggregation error varying the number of states.

7.2 Experimental Results: Real Data

In this section we compare the three best methods on the real REDD dataset [Kolter and Johnson, 2011]. We use the first half of the data for training and the second half for testing. Each HMM (i.e.,
appliance) is trained separately using the associated circuit-level data, and the HMM corresponding to unregistered appliances is trained using the main panel data. In this set of experiments we monitor appliances consuming more than 100 watts. ADMM-RR is run for 1000 iterations, the local search is run at the end of every 250 iterations, and the result with the largest likelihood is chosen. To be able to use the ZGS method on this data, we need prior information about the usage of each appliance; the authors suggest using national energy surveys, but lacking this information (as well as information about the number of residents, type of houses, etc.) we used the training data to extract this prior knowledge, which is expected to help this method. Detailed results about the precision and recall of estimating which appliances are 'on' at any given time are given in Table 1.

     Appliance          ADMM-RR         KJ method       ZGS method
 1   Oven-3             61.70/78.30%    27.62/72.32%     5.35/15.04%
 2   Fridge             90.22/97.63%    41.20/97.46%    46.89/87.10%
 3   Microwave          12.40/74.74%    13.40/96.32%     4.55/45.07%
 4   Bath. GFI-12       50.88/60.25%    12.87/51.46%     6.16/42.67%
 5   Kitch. Out.-15     69.23/98.85%    16.66/79.47%     5.69/26.72%
 6   Wash./Dry.-20-A    98.23/93.80%    70.41/98.19%    15.91/35.51%
 7   Unregistered-A     94.27/87.80%    85.35/25.91%    57.43/99.31%
 8   Oven-4             25.41/76.37%    13.60/78.59%     9.52/12.05%
 9   Dishwasher-6       54.53/90.91%    25.20/98.72%    29.42/31.01%
10   Wash./Dryer-10     21.92/63.58%    18.63/25.79%     7.79/3.01%
11   Kitch. Out.-16     17.88/79.04%     8.87/100%       0.00/0.00%
12   Wash./Dry.-20-B    98.19/28.31%    72.13/77.10%    27.44/71.25%
13   Unregistered-B     97.78/91.73%    96.92/73.97%    33.63/99.98%
     Average            60.97/78.56%    38.68/75.02%    17.97/36.22%

Table 1: Comparing the disaggregation performance of three different algorithms: precision/recall. Bold numbers represent statistically better performance on both measures.
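The two evaluation measures used in this section, the NDE of Section 7.1 and the on/off precision/recall of Table 1, are straightforward to compute. A small illustrative sketch (toy data, not from REDD):

```python
import numpy as np

def nde(y_true, y_hat):
    """Normalized disaggregation error:
    sqrt( sum_{t,i} (y_{t,i} - yhat_{t,i})^2 / sum_{t,i} y_{t,i}^2 )."""
    return np.sqrt(np.sum((y_true - y_hat) ** 2) / np.sum(y_true ** 2))

def on_off_precision_recall(true_on, pred_on):
    """Precision/recall of detecting which appliances are 'on', from
    boolean arrays of shape (T, M)."""
    tp = np.sum(true_on & pred_on)
    precision = tp / max(np.sum(pred_on), 1)
    recall = tp / max(np.sum(true_on), 1)
    return precision, recall

# Toy example: T = 3 time steps, M = 2 appliances (watts).
y_true = np.array([[100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
y_hat = np.array([[90.0, 0.0], [110.0, 50.0], [0.0, 40.0]])
err = nde(y_true, y_hat)
prec, rec = on_off_precision_recall(y_true > 0, y_hat > 0)
```

Here the on/off pattern is recovered exactly (precision = recall = 1), while the NDE is nonzero because the assigned power levels are slightly off.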
In Appendix D we also report the error in the total power usage assigned to the different appliances (Table 2), as well as the amount of power assigned to each appliance as a percentage of the total power (Figure 3). In summary, our method consistently outperformed the others, achieving an average precision and recall of 60.97% and 78.56%, with about 50% better precision than KJ at essentially the same recall (38.68/75.02%), while significantly improving upon ZGS (17.97/36.22%). Considering the error in assigning power consumption to the different appliances, our method achieved about 30-35% smaller error (ADMM-RR: 2.87%, KJ: 4.44%, ZGS: 3.94%) than its competitors. In our real-data experiments there are about 1 million decision variables: M = 7 or 6 appliances (for phase A and B power, respectively) with K = 4 states each, over about T = 30,000 time steps for one day (1 sample every 6 seconds). KJ and ZGS solve quadratic programs, increasing their memory usage (14GB, vs. 6GB in our case). On the other hand, our implementation of their method, using the commercial solver MOSEK inside the Matlab-based YALMIP [Löfberg, 2004], runs in 5 minutes, while our algorithm, which is purely Matlab-based, takes 5 hours to finish. We expect that an optimized C++ version of our method could achieve a significant speed-up over our current implementation.

8 Conclusion

FHMMs are widely used in energy disaggregation. However, the resulting model has a huge (factored) state space, making standard FHMM inference algorithms infeasible even for a handful of appliances. In this paper we developed a scalable approximate inference algorithm, based on a semidefinite relaxation combined with randomized rounding, which significantly outperformed the state of the art in our experiments.
A crucial component of our solution is a scalable ADMM method that utilizes the special block-diagonal-like structure of the SDP relaxation and provides a good initialization for the randomized rounding. We expect that our method may prove useful in solving other FHMM inference problems, as well as in large-scale integer quadratic programming.

Acknowledgements

This work was supported in part by the Alberta Innovates Technology Futures through the Alberta Ingenuity Centre for Machine Learning, and by NSERC. K. is indebted to Pooria Joulani and Mohammad Ajallooeian, who provided much useful technical advice, while all authors are grateful to Zico Kolter for sharing his code.

References

A. Anandkumar, D. Hsu, and S. M. Kakade. A Method of Moments for Mixture Models and Hidden Markov Models. In COLT, volume 23, pages 33.1–33.34, 2012.

S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. FTML, 3(1):1–122, 2011.

S. Burer and D. Vandenbussche. Solving Lift-and-Project Relaxations of Binary Integer Programs. SIAM Journal on Optimization, 16(3):726–750, 2006.

H.-H. Chang, K.-L. Chen, Y.-P. Tsai, and W.-J. Lee. A New Measurement Method for Power Signatures of Nonintrusive Demand Monitoring and Load Identification. IEEE T. on Industry Applications, 48:764–771, 2012.

M. Dong, P. C. M. Meira, W. Xu, and W. Freitas. An Event Window Based Load Monitoring Technique for Smart Meters. IEEE Transactions on Smart Grid, 3(2):787–796, June 2012.

M. Dong, P. C. M. Meira, W. Xu, and C. Y. Chung. Non-Intrusive Signature Extraction for Major Residential Loads. IEEE Transactions on Smart Grid, 4(3):1421–1430, Sept. 2013.

D. Egarter, V. P. Bhuvana, and W. Elmenreich. PALDi: Online Load Disaggregation via Particle Filtering. IEEE Transactions on Instrumentation and Measurement, 64(2):467–477, 2015.

M. Figueiredo, A. de Almeida, and B. Ribeiro.
Home Electrical Signal Disaggregation for Non-intrusive Load Monitoring (NILM) Systems. Neurocomputing, 96:66–73, Nov. 2012.

Z. Ghahramani and M. Jordan. Factorial Hidden Markov Models. Machine Learning, 29(2):245–273, 1997.

M. X. Goemans and D. P. Williamson. Improved Approximation Algorithms for Maximum Cut and Satisfiability Problems Using Semidefinite Programming. Journal of the ACM, 42(6):1115–1145, 1995.

Z. Guo, Z. J. Wang, and A. Kashani. Home Appliance Load Modeling From Aggregated Smart Meter Data. IEEE Transactions on Power Systems, 30(1):254–262, Jan. 2015.

J. Kelly and W. Knottenbelt. Neural NILM: Deep Neural Networks Applied to Energy Disaggregation. In BuildSys, pages 55–64, 2015.

H. Kim, M. Marwah, M. F. Arlitt, G. Lyon, and J. Han. Unsupervised Disaggregation of Low Frequency Power Measurements. In ICDM, volume 11, pages 747–758, 2011.

D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. Adaptive Computation and Machine Learning. MIT Press, Cambridge, MA, 2009.

J. Z. Kolter and T. Jaakkola. Approximate Inference in Additive Factorial HMMs with Application to Energy Disaggregation. In AISTATS, pages 1472–1482, 2012.

J. Z. Kolter and M. J. Johnson. REDD: A Public Data Set for Energy Disaggregation Research. In Workshop on Data Mining Applications in Sustainability (SIGKDD), pages 59–62, 2011.

J. Z. Kolter, S. Batra, and A. Y. Ng. Energy Disaggregation via Discriminative Sparse Coding. In Advances in Neural Information Processing Systems, pages 1153–1161, 2010.

A. Kontorovich, B. Nadler, and R. Weiss. On Learning Parametric-Output HMMs. In ICML, pages 702–710, 2013.

J. Liang, S. K. K. Ng, G. Kendall, and J. W. M. Cheng. Load Signature Study - Part I: Basic Concept, Structure, and Methodology. IEEE Transactions on Power Delivery, 25(2):551–560, Apr. 2010.

J. Löfberg. YALMIP: A Toolbox for Modeling and Optimization in MATLAB. In CACSD, 2004.

L. Lovász and A. Schrijver. Cones of Matrices and Set-functions and 0-1 Optimization.
SIAM Journal on Optimization, 1(2):166–190, 1991.

J. Malick, J. Povh, F. Rendl, and A. Wiegele. Regularization Methods for Semidefinite Programming. SIAM Journal on Optimization, 20(1):336–356, Jan. 2009.

C. Mattfeld. Implementing Spectral Methods for Hidden Markov Models with Real-Valued Emissions. arXiv preprint arXiv:1404.7472, 2014.

Y. Nesterov. Introductory Lectures on Convex Optimization: A Basic Course. Kluwer, 2004.

J. Park and S. Boyd. A Semidefinite Programming Method for Integer Convex Quadratic Minimization. arXiv preprint arXiv:1504.07672, 2015.

A. Prudenzi. A Neuron Nets Based Procedure for Identifying Domestic Appliances Pattern-of-Use from Energy Recordings at Meter Panel. In PESW, volume 2, pages 941–946, 2002.

S. J. Rennie, J. R. Hershey, and P. Olsen. Single-Channel Speech Separation and Recognition Using Loopy Belief Propagation. In ICASSP, pages 3845–3848, 2009.

M. J. Wainwright and M. I. Jordan. Graphical Models, Exponential Families, and Variational Inference. FTML, 1(1–2):1–305, 2007.

M. Weiss, A. Helfenstein, F. Mattern, and T. Staake. Leveraging Smart Meter Data to Recognize Home Appliances. In PerCom, pages 190–197, 2012.

Z. Wen, D. Goldfarb, and W. Yin. Alternating Direction Augmented Lagrangian Methods for Semidefinite Programming. Mathematical Programming Computation, 2(3–4):203–230, Dec. 2010.

M. Zhong, N. Goddard, and C. Sutton. Signal Aggregate Constraints in Additive Factorial HMMs, with Application to Energy Disaggregation. In NIPS, pages 3590–3598, 2014.

T. Zia, D. Bruckner, and A. Zaidi. A Hidden Markov Model Based Procedure for Identifying Household Electric Loads. In IECON, pages 3218–3223, 2011.
Finite Sample Prediction and Recovery Bounds for Ordinal Embedding

Lalit Jain, University of Michigan, Ann Arbor, MI 48109, lalitj@umich.edu
Kevin Jamieson, University of California, Berkeley, Berkeley, CA 94720, kjamieson@berkeley.edu
Robert Nowak, University of Wisconsin, Madison, WI 53706, rdnowak@wisc.edu

Abstract

The goal of ordinal embedding is to represent items as points in a low-dimensional Euclidean space given a set of constraints like "item i is closer to item j than item k". Ordinal constraints like this often come from human judgments. The classic approach to solving this problem is known as non-metric multidimensional scaling. To account for errors and variation in judgments, we consider the noisy situation in which the given constraints are independently corrupted by reversing the correct constraint with some probability. The ordinal embedding problem has been studied for decades, but most past work pays little attention to the question of whether accurate embedding is possible, apart from empirical studies. This paper shows that under a generative data model it is possible to learn the correct embedding from noisy distance comparisons. In establishing this fundamental result, the paper makes several new contributions. First, we derive prediction error bounds for embedding from noisy distance comparisons by exploiting the fact that the rank of a distance matrix of points in R^d is at most d + 2. These bounds characterize how well a learned embedding predicts new comparative judgments. Second, we show that the underlying embedding can be recovered by solving a simple convex optimization. This result is highly non-trivial since we show that the linear map corresponding to distance comparisons is non-invertible, but there exists a nonlinear map that is invertible. Third, two new algorithms for ordinal embedding are proposed and evaluated in experiments.
1 Ordinal Embedding

Ordinal embedding aims to represent items as points in R^d so that the distances between items agree as well as possible with a given set of ordinal comparisons such as "item i is closer to item j than to item k". In other words, the goal is to find a geometric representation of data that is faithful to comparative similarity judgments. This problem has been studied and applied for more than 50 years, dating back to the classic non-metric multidimensional scaling (NMDS) [1, 2] approach, and it is widely used to gauge and visualize how people perceive similarities. Despite the widespread application of NMDS and recent algorithmic developments [3, 4, 5, 6, 7], the fundamental question of whether an embedding can be learned from noisy distance/similarity comparisons had not been answered. This paper shows that if the data are generated according to a known probabilistic model, then accurate recovery of the underlying embedding is possible by solving a simple convex optimization, settling this long-standing open question. In the process of answering this question, the paper also characterizes how well a learned embedding predicts new distance comparisons, and presents two new computationally efficient algorithms for solving the optimization problem.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1.1 Related Work

The classic approach to ordinal embedding is NMDS [1, 2]. Recently, several authors have proposed new approaches based on more modern techniques. Generalized NMDS [3] and Stochastic Triplet Embedding (STE) [6] employ hinge or logistic loss measures and convex relaxations of the low-dimensionality (i.e., rank) constraint based on the nuclear norm. These works are most closely related to the theory and methods in this paper. The linear partial order embedding (LPOE) method is similar, but starts with a known Euclidean embedding and learns a kernel/metric in this space based on distance comparison data [7].
The Crowd Kernel [4] and t-STE [6] propose alternative non-convex loss measures based on probabilistic generative models. The main contributions of these papers are new optimization methods and experimental studies, but they did not address the fundamental question of whether an embedding can be recovered under an assumed generative model. Other recent work has looked at the asymptotics of ordinal embedding, showing that embeddings can be learned as the number of items grows and the items densely populate the embedding space [8, 9, 10]. In contrast, this paper focuses on the practical setting involving a finite set of items. Finally, it is known that at least 2dn log n distance comparisons are necessary to learn an embedding of n points in R^d [5].

1.2 Ordinal Embedding from Noisy Data

Consider n points x_1, x_2, ..., x_n ∈ R^d. Let X = [x_1 ··· x_n] ∈ R^{d×n}. The Euclidean distance matrix D* is defined to have elements D*_{ij} = ‖x_i − x_j‖₂². Ordinal embedding is the problem of recovering X given ordinal constraints on distances. This paper focuses on "triplet" constraints of the form D*_{ij} < D*_{ik}, where 1 ≤ i ≠ j ≠ k ≤ n. Furthermore, we only observe noisy indications of these constraints, as follows. Each triplet t = (i, j, k) has an associated probability p_t satisfying

    p_t > 1/2  ⟺  ‖x_i − x_j‖₂ < ‖x_i − x_k‖₂.

Let S denote a collection of triplets drawn independently and uniformly at random, and for each t ∈ S we observe an independent random variable y_t = −1 with probability p_t, and y_t = 1 otherwise. The goal is to recover the embedding X from these data. Exact recovery of D* from such data requires a known link between p_t and D*. To this end, our main focus is the following problem.

Ordinal Embedding from Noisy Data: Consider n points x_1, x_2, ..., x_n in d-dimensional Euclidean space. Let S denote a collection of triplets, and for each t ∈ S observe an independent random variable

    y_t = −1 with probability f(D*_{ij} − D*_{ik}),   y_t = 1 with probability 1 − f(D*_{ij} − D*_{ik}),

where the link function f : R →
[0, 1] is known. Estimate X from S, {y_t}, and f.

For example, if f is the logistic function, then for triplet t = (i, j, k)

p_t = P(y_t = −1) = f(D*_ij − D*_ik) = 1 / (1 + exp(D*_ij − D*_ik)),   (1)

and so D*_ij − D*_ik = log((1 − p_t)/p_t). However, we stress that we only require the existence of a link function for exact recovery of D*. Indeed, if one just wishes to predict the answers to unobserved triplets, then the results of Section 2 hold for arbitrary probabilities p_t. Aspects of the statistical analysis are related to one-bit matrix completion and rank aggregation [11, 12, 13]. However, we use novel methods for the recovery of the embedding based on geometric properties of Euclidean distance matrices.
1.3 Organization of Paper
This paper takes the following approach to ordinal embedding.
1. Our samples are assumed to be independently generated according to a probabilistic model based on an underlying low-rank distance matrix. We use relatively standard statistical learning theory techniques to analyze the minimizer of a bounded, Lipschitz loss with a nuclear norm constraint, and show that an embedding can be learned from the data that predicts nearly as well as the true embedding with O(dn log n) samples (Theorem 1).
2. Next, assuming the form of the probabilistic generative model is known (e.g., logistic), we show that if the learned embedding is a good predictor of the ordinal comparisons, then it must also be a good estimator of the true differences of distances between the embedding points (Theorem 2). This result hinges on the fact that the (linear) observation model acts approximately like an isometry on differences of distances.
3. While the true differences of distances can be estimated, the observation process is "blind" to the mean distance between embedding points. Despite this, we show that the mean is determined by the differences of distances, due to the special properties of Euclidean distance matrices.
Specifically, the second eigenvalue of the "mean-centered" distance matrix (which is well estimated by the data through the estimate of the differences of distances, Theorem 3) is proportional to the mean distance (Theorem 4). This allows us to show that the minimizer of the loss with a nuclear norm constraint indeed recovers an accurate estimate of the underlying true distance matrix.
1.4 Notation and Assumptions
We will use (D*, G*) to denote the distance and Gram matrices of the latent embedding, and (D, G) to denote an arbitrary distance matrix and its corresponding Gram matrix. The observations {y_t} carry information about D*, but distance matrices are invariant to rotation and translation, and therefore it may only be possible to recover X up to a rigid transformation. Without loss of generality, we assume the points x_1, . . . , x_n ∈ R^d are centered at the origin (i.e., Σ_{i=1}^n x_i = 0). Define the centering matrix V := I − (1/n)11^T. If X is centered, XV = X. Note that D* is determined by the Gram matrix G* = X^T X. In addition, X can be determined from G* up to a unitary transformation. Note that if X is centered, the Gram matrix is "centered" so that V G* V = G*. It will be convenient in the paper to work with both the distance and Gram matrix representations, and the following identities will be useful to keep in mind. For any distance matrix D and its centered Gram matrix G,

G = −(1/2) V D V,   (2)
D = diag(G) 1^T − 2G + 1 diag(G)^T,   (3)

where diag(G) is the column vector composed of the diagonal of G. In particular this establishes a bijection between centered Gram matrices and distance matrices. We refer the reader to [14] for an insightful and thorough treatment of the properties of distance matrices. We also define the set of all unique triplets

T := {(i, j, k) : 1 ≤ i ≠ j ≠ k ≤ n, j < k}.

Assumption 1. The observed triplets in S are drawn independently and uniformly from T.
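As a quick sanity check on the identities (2) and (3), the following sketch (our own illustration, not part of the paper) builds a centered configuration and verifies that the Gram-to-distance and distance-to-Gram maps invert each other:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3
X = rng.normal(size=(d, n))
X = X - X.mean(axis=1, keepdims=True)        # center: sum_i x_i = 0

G = X.T @ X                                  # centered Gram matrix G = X^T X
# identity (3): D = diag(G) 1^T - 2 G + 1 diag(G)^T
D = np.diag(G)[:, None] - 2 * G + np.diag(G)[None, :]

V = np.eye(n) - np.ones((n, n)) / n          # centering matrix V = I - (1/n) 11^T
G_back = -0.5 * V @ D @ V                    # identity (2) recovers G from D
```

For a centered configuration, `G_back` equals `G` exactly (up to floating point), illustrating the bijection between centered Gram matrices and distance matrices.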
2 Prediction Error Bounds
For t ∈ T with t = (i, j, k) we define L_t to be the linear operator satisfying L_t(X^T X) = ‖x_i − x_j‖² − ‖x_i − x_k‖² for all t ∈ T. In general, for any Gram matrix G,

L_t(G) := G_jj − 2G_ij − G_kk + 2G_ik.

We can naturally view L_t as a linear operator on S^n_+, the space of n×n symmetric positive semidefinite matrices. We can also represent L_t as a symmetric n×n matrix that is zero everywhere except on the submatrix corresponding to the indices i, j, k, which has the form

[  0  −1   1 ]
[ −1   1   0 ]
[  1   0  −1 ]

and so we will write L_t(G) := ⟨L_t, G⟩, where ⟨A, B⟩ = vec(A)^T vec(B) for any compatible matrices A, B. Ordering the elements of T lexicographically, we arrange all the L_t(G) together to define the n·(n−1 choose 2)-dimensional vector

L(G) = [L_123(G), L_124(G), · · · , L_ijk(G), · · · ]^T.   (4)

Let ℓ(y_t⟨L_t, G⟩) denote a loss function. For example we can consider the 0–1 loss ℓ(y_t⟨L_t, G⟩) = 1{sign(y_t⟨L_t, G⟩) ≠ 1}, the hinge loss ℓ(y_t⟨L_t, G⟩) = max{0, 1 − y_t⟨L_t, G⟩}, or the logistic loss

ℓ(y_t⟨L_t, G⟩) = log(1 + exp(−y_t⟨L_t, G⟩)).   (5)

Let p_t := P(y_t = −1) and take the expectation of the loss with respect to both the uniformly random selection of the triplet t and the observation y_t; we have the risk of G,

R(G) := E[ℓ(y_t⟨L_t, G⟩)] = (1/|T|) Σ_{t∈T} [ p_t ℓ(−⟨L_t, G⟩) + (1 − p_t) ℓ(⟨L_t, G⟩) ].

Given a set of observations S under the model defined in the problem statement, the empirical risk is

R̂_S(G) = (1/|S|) Σ_{t∈S} ℓ(y_t⟨L_t, G⟩),   (6)

which is an unbiased estimator of the true risk: E[R̂_S(G)] = R(G). For any G ∈ S^n_+, let ‖G‖_* denote the nuclear norm and ‖G‖_∞ := max_ij |G_ij|. Define the constraint set

G_{λ,γ} := {G ∈ S^n_+ : ‖G‖_* ≤ λ, ‖G‖_∞ ≤ γ}.   (7)

We estimate G* by Ĝ, the solution of the program

Ĝ := argmin_{G ∈ G_{λ,γ}} R̂_S(G).   (8)

Since G* is positive semidefinite, we expect the diagonal entries of G* to bound the off-diagonal entries. So an infinity norm constraint on the diagonal guarantees that the points x_1, . . . , x_n corresponding to G* live inside a bounded ℓ2 ball.
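The operator L_t, the logistic observation model, and the empirical risk (6) are straightforward to code up. The sketch below is our own illustration (function names are ours, not the paper's); it assumes the logistic link of (1):

```python
import numpy as np

rng = np.random.default_rng(0)

def L_t_matrix(n, t):
    """Symmetric matrix form of L_t: <L_t, G> = G_jj - 2 G_ij - G_kk + 2 G_ik."""
    i, j, k = t
    L = np.zeros((n, n))
    L[j, j], L[k, k] = 1.0, -1.0
    L[i, j] = L[j, i] = -1.0
    L[i, k] = L[k, i] = 1.0
    return L

def sample_observation(G, t):
    """Draw y_t in {-1, +1} with P(y_t = -1) = 1 / (1 + exp(<L_t, G>))."""
    val = np.sum(L_t_matrix(G.shape[0], t) * G)   # <L_t, G> = D_ij - D_ik
    return -1 if rng.random() < 1.0 / (1.0 + np.exp(val)) else 1

def empirical_risk(G, S, y):
    """Logistic empirical risk (6): (1/|S|) sum_t log(1 + exp(-y_t <L_t, G>))."""
    vals = np.array([np.sum(L_t_matrix(G.shape[0], t) * G) for t in S])
    return np.mean(np.log1p(np.exp(-np.asarray(y) * vals)))
```

Note that ⟨L_t, G⟩ equals D_ij − D_ik for the distance matrix of the embedding, so y_t = −1 being likely corresponds to x_i being closer to x_j than to x_k.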
The ℓ∞ constraint in (7) plays two roles: 1) if our loss function is Lipschitz, large magnitude values of ⟨L_t, G⟩ can lead to large deviations of R̂_S(G) from R(G); bounding ‖G‖_∞ bounds |⟨L_t, G⟩|. 2) Later we will define ℓ in terms of the link function f, and as the magnitude of ⟨L_t, G⟩ increases, the magnitude of the derivative of the link function f typically becomes very small, making it difficult to "invert"; bounding ‖G‖_∞ tends to keep ⟨L_t, G⟩ within an invertible regime of f.

Theorem 1. Fix λ, γ and assume G* ∈ G_{λ,γ}. If the loss function ℓ(·) is L-Lipschitz (or |sup_y ℓ(y)| ≤ L max{1, 12γ}), then with probability at least 1 − δ,

R(Ĝ) − R(G*) ≤ (4Lλ/|S|) ( √(18|S| log(n)/n) + (√3/3) log n ) + Lγ √(288 log(2/δ)/|S|).

Proof. The proof follows from standard statistical learning theory techniques; see for instance [15]. By the bounded difference inequality, with probability 1 − δ,

R(Ĝ) − R(G*) = R(Ĝ) − R̂_S(Ĝ) + R̂_S(Ĝ) − R̂_S(G*) + R̂_S(G*) − R(G*)
≤ 2 sup_{G∈G_{λ,γ}} |R̂_S(G) − R(G)|
≤ 2 E[ sup_{G∈G_{λ,γ}} |R̂_S(G) − R(G)| ] + √(2B² log(2/δ)/|S|),

where sup_{G∈G_{λ,γ}} ℓ(y_t⟨L_t, G⟩) − ℓ(y_{t′}⟨L_{t′}, G⟩) ≤ sup_{G∈G_{λ,γ}} L|⟨y_t L_t − y_{t′} L_{t′}, G⟩| ≤ 12Lγ =: B, using the facts that L_t has 6 non-zeros of magnitude 1 and ‖G‖_∞ ≤ γ. Using standard symmetrization and contraction lemmas, we can introduce Rademacher random variables ε_t ∈ {−1, 1} for all t ∈ S so that

E sup_{G∈G_{λ,γ}} |R̂_S(G) − R(G)| ≤ E sup_{G∈G_{λ,γ}} (2L/|S|) | Σ_{t∈S} ε_t ⟨L_t, G⟩ |.

The right-hand side is just the Rademacher complexity of G_{λ,γ}. By definition, {G : ‖G‖_* ≤ λ} = λ · conv({uu^T : ‖u‖ = 1}), where conv(U) is the convex hull of a set U. Since the Rademacher complexity of a set is the same as the Rademacher complexity of its closed convex hull,

E sup_{G∈G_{λ,γ}} | Σ_{t∈S} ε_t ⟨L_t, G⟩ | ≤ λ E sup_{‖u‖=1} | Σ_{t∈S} ε_t ⟨L_t, uu^T⟩ | = λ E sup_{‖u‖=1} | u^T ( Σ_{t∈S} ε_t L_t ) u |,

which we recognize is just λ E‖Σ_{t∈S} ε_t L_t‖. By [16, 6.6.1] we can bound the operator norm ‖Σ_{t∈S} ε_t L_t‖ in terms of the variance of Σ_{t∈S} L_t² and the maximal eigenvalue of max_t L_t.
These are computed in Lemma 1, given in the supplemental materials. Combining these results gives

(2Lλ/|S|) E‖Σ_{t∈S} ε_t L_t‖ ≤ (2Lλ/|S|) ( √(18|S| log(n)/n) + (√3/3) log n ).

We remark that if G is a rank-d < n matrix, then ‖G‖_* ≤ √d ‖G‖_F ≤ √(dn) ‖G‖_∞, so if G* is low rank, we really only need a bound on the infinity norm of our constraint set. Under the assumption that G* is rank d with ‖G*‖_∞ ≤ γ and we set λ = √(dn) γ, then Theorem 1 implies that for |S| > n log n/161,

R(Ĝ) − R(G*) ≤ 8Lγ √(18 dn log(n)/|S|) + Lγ √(288 log(2/δ)/|S|)

with probability at least 1 − δ. The above display says that |S| must scale like dn log(n), which is consistent with known finite sample bounds [5].
3 Maximum Likelihood Embedding
We now turn our attention to recovering metric information about G*. Let S be a collection of triplets sampled uniformly at random with replacement and let f : R → (0, 1) be a known probability function governing the observations. Any link function f induces a natural loss function ℓ_f, namely, the negative log-likelihood of a solution G given an observation y_t, defined as

ℓ_f(y_t⟨L_t, G⟩) = 1{y_t = −1} log(1/f(⟨L_t, G⟩)) + 1{y_t = 1} log(1/(1 − f(⟨L_t, G⟩))).

For example, the logistic link function of (1) induces the logistic loss of (5). Recalling that P(y_t = −1) = f(⟨L_t, G*⟩), we have

E[ℓ_f(y_t⟨L_t, G⟩)] = f(⟨L_t, G*⟩) log(1/f(⟨L_t, G⟩)) + (1 − f(⟨L_t, G*⟩)) log(1/(1 − f(⟨L_t, G⟩)))
= H(f(⟨L_t, G*⟩)) + KL(f(⟨L_t, G*⟩) | f(⟨L_t, G⟩)),

where H(p) = p log(1/p) + (1 − p) log(1/(1 − p)) and KL(p, q) = p log(p/q) + (1 − p) log((1 − p)/(1 − q)) are the entropy and KL divergence of Bernoulli random variables with means p and q. Recall that ‖G‖_∞ ≤ γ controls the magnitude of ⟨L_t, G⟩, so for the moment assume this is small. Then by a Taylor series, f(⟨L_t, G⟩) ≈ 1/2 + f′(0)⟨L_t, G⟩, using the fact that f(0) = 1/2, and by another Taylor series we have

KL(f(⟨L_t, G*⟩) | f(⟨L_t, G⟩)) ≈ KL(1/2 + f′(0)⟨L_t, G*⟩ | 1/2 + f′(0)⟨L_t, G⟩) ≈ 2f′(0)² (⟨L_t, G* − G⟩)².
Thus, recalling the definition of L(G) from (4), we conclude that if G̃ ∈ arg min_G R(G) with R(G) = (1/|T|) Σ_{t∈T} E[ℓ_f(y_t⟨L_t, G⟩)], then one would expect L(G̃) ≈ L(G*). Moreover, since R̂_S(G) is an unbiased estimator of R(G), one expects L(Ĝ) to approximate L(G*). The next theorem, combined with Theorem 1, formalizes this observation; its proof is found in the appendix.

Theorem 2. Let C_f = min_{t∈T} inf_{G∈G_{λ,γ}} |f′(⟨L_t, G⟩)|, where f′ denotes the derivative of f. Then for any G ∈ G_{λ,γ},

(2C_f²/|T|) ‖L(G) − L(G*)‖²_F ≤ R(G) − R(G*).

Note that if f is the logistic link function of (1), then it is straightforward to show that |f′(⟨L_t, G⟩)| ≥ (1/4) exp(−|⟨L_t, G⟩|) ≥ (1/4) exp(−6‖G‖_∞) for any t, G, so it suffices to take C_f = (1/4) exp(−6γ). It remains to see that we can recover G* even given L(G*), much less L(Ĝ). To do this, it is more convenient to work with distance matrices instead of Gram matrices. Analogous to the operators L_t(G) defined above, we define the operators ∆_t for t ∈ T satisfying

∆_t(D) := D_ij − D_ik ≡ L_t(G).

We will view the ∆_t as linear operators on the space of symmetric hollow n×n matrices S^n_h, which includes distance matrices as special cases. As with L, we can arrange all the ∆_t together, ordering the t ∈ T lexicographically, to define the n·(n−1 choose 2)-dimensional vector

∆(D) = [D_12 − D_13, · · · , D_ij − D_ik, · · · ]^T.

We will use the fact that L(G) ≡ ∆(D) heavily. Because ∆(D) consists of differences of matrix entries, ∆ has a non-trivial kernel. However, it is easy to see that D can be recovered given ∆(D) and any one off-diagonal element of D, so the kernel is 1-dimensional. Also, the kernel is easy to identify by example. Consider the regular simplex in d dimensions. The distances between all n = d + 1 vertices are equal, and the distance matrix can easily be seen to be 11^T − I. Thus ∆(D) = 0 in this case. This gives us the following simple result.

Lemma 2. Let S^n_h denote the space of symmetric hollow matrices, which includes all distance matrices.
For any D ∈ S^n_h, the set of linear functionals {∆_t(D), t ∈ T} spans an (n choose 2) − 1 dimensional subspace of S^n_h, and the 1-dimensional kernel is given by the span of 11^T − I.

So we see that the operator ∆ is not invertible on S^n_h. Define J := 11^T − I. For any D, let C, the centered distance matrix, be the component of D orthogonal to the kernel of L (i.e., tr(CJ) = 0). Then we have the orthogonal decomposition

D = C + σ_D J,   where σ_D = trace(DJ)/‖J‖²_F.

Since G is assumed to be centered, the value of σ_D has a simple interpretation:

σ_D = (1/(2(n choose 2))) Σ_{1≤i,j≤n} D_ij = (2/(n − 1)) Σ_{1≤i≤n} ⟨x_i, x_i⟩ = 2‖G‖_*/(n − 1),   (9)

the average of the squared distances, or alternatively a scaled version of the nuclear norm of G. Let D̂ and Ĉ be the distance and centered distance matrices corresponding to Ĝ, the solution to (8). Though ∆ is not invertible on all of S^n_h, it is invertible on the subspace orthogonal to the kernel, namely J⊥. So if ∆(D̂) ≈ ∆(D*), or equivalently L(Ĝ) ≈ L(G*), we expect Ĉ to be close to C*. The next theorem quantifies this.

Theorem 3. Consider the setting of Theorems 1 and 2 and let Ĉ, C* be defined as above. Then

(1/(2(n choose 2))) ‖Ĉ − C*‖²_F ≤ (Lλ/(C_f²|S|)) ( √(18|S| log(n)/n) + (√3/3) log n ) + (Lγ/(4C_f²)) √(288 log(2/δ)/|S|).

Proof. By combining Theorem 2 with the prediction error bounds obtained in Theorem 1, we see that

(2C_f²/(n·(n−1 choose 2))) ‖L(Ĝ) − L(G*)‖²_F ≤ (4Lλ/|S|) ( √(18|S| log(n)/n) + (√3/3) log n ) + Lγ √(288 log(2/δ)/|S|).

Next we employ the following restricted isometry property of ∆ on the subspace J⊥, whose proof is in the supplementary materials.

Lemma 3. Let D and D′ be two distance matrices of n points in R^d and R^{d′}. Let C and C′ be the components of D and D′ orthogonal to J. Then

n‖C − C′‖²_F ≤ ‖∆(C) − ∆(C′)‖² = ‖∆(D) − ∆(D′)‖² ≤ 2(n − 1)‖C − C′‖²_F.

The result then follows. This implies that by collecting enough samples, we can recover the centered distance matrix. By applying the discussion following Theorem 1 when G*
is rank d, we can state an upper bound of

(1/(2(n choose 2))) ‖Ĉ − C*‖²_F ≤ O( (Lγ/C_f²) √((dn log(n) + log(1/δ))/|S|) ).

However, it is still not clear that this is enough to recover D* or G*. Remarkably, despite this unknown component being in the kernel, we show next that it can be recovered.

Theorem 4. Let D be a distance matrix of n points in R^d, let C be the component of D orthogonal to the kernel of L, and let λ_2(C) denote the second largest eigenvalue of C. If n > d + 2, then

D = C + λ_2(C) J.   (10)

This shows that D is uniquely determined as a function of C. Therefore, since ∆(D) = ∆(C) and because C is orthogonal to the kernel of ∆, the distance matrix D can be recovered from ∆(D), even though the linear operator ∆ is non-invertible. We now provide a proof of Theorem 4 in the case where n > d + 3. The result is true in the case when n > d + 2 but requires a more detailed analysis, including the construction of a vector x such that Dx = 1 and 1^T x ≥ 0 for any distance matrix, a result in [17].

Proof. To prove Theorem 4 we need the following lemma, proved in the supplementary materials.

Lemma 4. Let D be a Euclidean distance matrix on n points. Then D is negative semidefinite on the subspace 1⊥ := {x ∈ R^n : 1^T x = 0}. Furthermore, ker(D) ⊂ 1⊥.

For any matrix M, let λ_i(M) denote its ith largest eigenvalue. Under the conditions of the theorem, we show that for σ > 0, λ_2(D − σJ) = σ. Since C = D − σ_D J, this proves the theorem. Note that λ_i(D − σJ) = λ_i(D − σ11^T) + σ for 1 ≤ i ≤ n and σ arbitrary. So it suffices to show that λ_2(D − σ11^T) = 0. By Weyl's Theorem,

λ_2(D − σ11^T) ≤ λ_2(D) + λ_1(−σ11^T).

Since λ_1(−σ11^T) = 0, we have λ_2(D − σ11^T) ≤ λ_2(D). By the Courant–Fischer Theorem,

λ_2(D) = min_{U : dim(U)=n−1} max_{x∈U, x≠0} (x^T D x)/(x^T x) ≤ max_{x∈1⊥, x≠0} (x^T D x)/(x^T x) ≤ 0,

since D is negative semidefinite on 1⊥. Now let v_i denote the ith eigenvector of D with eigenvalue λ_i = 0. Then (D − σ11^T) v_i = D v_i = 0, since v_i^T 1 = 0 by Lemma 4.
So D − σ11^T has at least n − d − 2 zero eigenvalues, since rank(D) ≤ d + 2. In particular, if n > d + 3, then D − σ11^T must have at least two eigenvalues equal to 0. Therefore, λ_2(D − σ11^T) = 0.

The previous theorem along with Theorem 3 guarantees that we can recover G* as we increase the number of triplets sampled. The final theorem, which follows directly from Theorems 3 and 4, summarizes this.

Theorem 5. Assume n > d + 2 and consider the setting of Theorems 1 and 2. As |S| → ∞, D̂ → D*, where D̂ is the distance matrix corresponding to Ĝ (the solution to (8)).

Proof. Recall D̂ = Ĉ + λ_2(Ĉ)J, so as Ĉ → C*, D̂ → D*.

Figure 1: G* generated with n = 64 points in d = 2 and d = 8 dimensions on the left and right.

4 Experimental Study
This section empirically studies the properties of estimators suggested by our theory. It is not an attempt to perform an exhaustive empirical evaluation of different embedding techniques; for that see [18, 4, 6, 3]. In what follows, each of the n points is generated randomly: x_i ∼ N(0, (1/(2d)) I_d) ∈ R^d, i = 1, . . . , n, motivated by the observation that

E[|⟨L_t, G*⟩|] = E[ | ‖x_i − x_j‖² − ‖x_i − x_k‖² | ] ≤ E[ ‖x_i − x_j‖² ] = 2E[ ‖x_i‖² ] = 1

for any triplet t = (i, j, k). We report the prediction error on a holdout set of 10,000 triplets and the error in Frobenius norm of the estimated Gram matrix over 36 random trials. We minimize the logistic MLE objective R̂_S(G) = (1/|S|) Σ_{t∈S} log(1 + exp(−y_t⟨L_t, G⟩)). For each algorithm considered, the domain of the objective variable G is the space of symmetric positive semidefinite matrices. None of the methods impose the constraint max_ij |G_ij| ≤ γ (as done above), since this was used to simplify the analysis and does not have a large impact in practice. Rank-d Projected Gradient Descent (PGD) performs gradient descent on the objective R̂_S(G) with line search, projecting onto the subspace spanned by the top d eigenvalues at each step (i.e., setting the smallest n − d eigenvalues to 0).
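The Rank-d PGD procedure just described can be sketched as follows. This is our own minimal illustration, not the paper's implementation: it uses a fixed step size `eta` instead of line search, and the function names are ours.

```python
import numpy as np

def project_rank_d_psd(G, d):
    """Keep the top-d eigenpairs, clipped to be nonnegative (PSD rank-d projection)."""
    w, U = np.linalg.eigh(G)          # eigenvalues in ascending order
    w[:-d] = 0.0                      # zero out all but the d largest eigenvalues
    w = np.maximum(w, 0.0)
    return (U * w) @ U.T

def rank_d_pgd(S, y, n, d, steps=100, eta=0.5):
    """Minimize the logistic objective by gradient steps plus rank-d projection.

    A sketch with a fixed step size eta; the paper's version uses line search."""
    G = np.zeros((n, n))
    for _ in range(steps):
        grad = np.zeros((n, n))
        for (i, j, k), yt in zip(S, y):
            val = G[j, j] - 2 * G[i, j] - G[k, k] + 2 * G[i, k]       # <L_t, G>
            # d/dval of log(1 + exp(-yt * val)), clipped to avoid overflow
            c = -yt / (1.0 + np.exp(np.clip(yt * val, -500, 500)))
            grad[j, j] += c; grad[k, k] -= c                           # c * L_t
            grad[i, j] -= c; grad[j, i] -= c
            grad[i, k] += c; grad[k, i] += c
        G = project_rank_d_psd(G - (eta / len(S)) * grad, d)
    return G
```

The projection is an eigendecomposition per step, which is what makes this an order of magnitude cheaper than factored gradient descent in the paper's experiments.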
Nuclear Norm PGD performs gradient descent on R̂_S(G), projecting onto the nuclear norm ball with radius ‖G*‖_*, where G* is the Gram matrix of the latent embedding. The nuclear norm projection can have the undesirable effect of shrinking the non-zero eigenvalues toward the origin. To compensate for this potential bias, we employ Nuclear Norm PGD Debiased, which takes the biased output of Nuclear Norm PGD, decomposes it into U E U^T, where U ∈ R^{n×d} are the top d eigenvectors, and outputs U diag(ŝ) U^T, where ŝ = arg min_{s∈R^d} R̂_S(U diag(s) U^T). This last algorithm is motivated by the observation that methods for minimizing ‖·‖_1 or ‖·‖_* are good at identifying the true support of a signal, but output biased magnitudes [19]. Rank-d PGD and Nuclear Norm PGD Debiased are novel ordinal embedding algorithms. Figure 1 presents how the algorithms behave for n = 64 and d = 2, 8. We observe that the debiased nuclear norm solution behaves near-identically to the rank-d solution, and we remark that this was observed in all of our experiments (see the supplementary materials for other values of n, d, and scalings of G*). A popular technique for recovering rank-d embeddings is to perform (stochastic) gradient descent on R̂_S(UU^T) with objective variable U ∈ R^{n×d} taken as the embedding [18, 4, 6]. In all of our experiments this method produced Gram matrices nearly identical to those produced by our Rank-d PGD method, but Rank-d PGD was an order of magnitude faster in our implementation. Also, in light of our isometry theorem, we can show that the Hessian of E[R̂_S(G)] is nearly a scaled identity, leading us to hypothesize that a globally optimal linear convergence result for this non-convex optimization may be possible using the techniques of [20, 21]. Finally, we note that previous literature has reported that nuclear norm optimizations like Nuclear Norm PGD tend to produce less accurate embeddings than those of non-convex methods [4, 6].
Our results suggest that Nuclear Norm PGD Debiased closes the performance gap between the convex and non-convex solutions.

Acknowledgments
This work was partially supported by the NSF grants CCF-1218189 and IIS-1447449, the NIH grant 1 U54 AI117924-01, the AFOSR grant FA9550-13-1-0138, and by ONR awards N00014-15-1-2620 and N00014-13-1-0129. We would also like to thank Amazon Web Services for providing the computational resources used for running our simulations.

References
[1] Roger N Shepard. The analysis of proximities: Multidimensional scaling with an unknown distance function. I. Psychometrika, 27(2):125–140, 1962.
[2] Joseph B Kruskal. Nonmetric multidimensional scaling: a numerical method. Psychometrika, 29(2):115–129, 1964.
[3] Sameer Agarwal, Josh Wills, Lawrence Cayton, Gert Lanckriet, David J Kriegman, and Serge Belongie. Generalized non-metric multidimensional scaling. In International Conference on Artificial Intelligence and Statistics, pages 11–18, 2007.
[4] Omer Tamuz, Ce Liu, Ohad Shamir, Adam Kalai, and Serge J Belongie. Adaptively learning the crowd kernel. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 673–680, 2011.
[5] Kevin G Jamieson and Robert D Nowak. Low-dimensional embedding using adaptively selected ordinal data. In Communication, Control, and Computing (Allerton), 2011 49th Annual Allerton Conference on, pages 1077–1084. IEEE, 2011.
[6] Laurens Van Der Maaten and Kilian Weinberger. Stochastic triplet embedding. In Machine Learning for Signal Processing (MLSP), 2012 IEEE International Workshop on, pages 1–6. IEEE, 2012.
[7] Brian McFee and Gert Lanckriet. Learning multi-modal similarity. The Journal of Machine Learning Research, 12:491–523, 2011.
[8] Matthäus Kleindessner and Ulrike von Luxburg. Uniqueness of ordinal embedding. In COLT, pages 40–67, 2014.
[9] Yoshikazu Terada and Ulrike V Luxburg. Local ordinal embedding.
In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 847–855, 2014.
[10] Ery Arias-Castro. Some theory for ordinal embedding. arXiv preprint arXiv:1501.02861, 2015.
[11] Mark A Davenport, Yaniv Plan, Ewout van den Berg, and Mary Wootters. 1-bit matrix completion. Information and Inference, 3(3), 2014.
[12] Yu Lu and Sahand N Negahban. Individualized rank aggregation using nuclear norm regularization. In 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), pages 1473–1479. IEEE, 2015.
[13] D. Park, J. Neeman, J. Zhang, S. Sanghavi, and I. Dhillon. Preference completion: Large-scale collaborative ranking from pairwise comparisons. Proc. Int. Conf. Machine Learning (ICML), 2015.
[14] Jon Dattorro. Convex Optimization & Euclidean Distance Geometry. Meboo Publishing USA, 2011.
[15] Stéphane Boucheron, Olivier Bousquet, and Gábor Lugosi. Theory of classification: A survey of some recent advances. ESAIM: Probability and Statistics, 9:323–375, 2005.
[16] Joel A. Tropp. An introduction to matrix concentration inequalities, 2015.
[17] Pablo Tarazaga and Juan E. Gallardo. Euclidean distance matrices: new characterization and boundary properties. Linear and Multilinear Algebra, 57(7):651–658, 2009.
[18] Kevin G Jamieson, Lalit Jain, Chris Fernandez, Nicholas J Glattard, and Rob Nowak. NEXT: A system for real-world development, evaluation, and application of active learning. In Advances in Neural Information Processing Systems, pages 2638–2646, 2015.
[19] Nikhil Rao, Parikshit Shah, and Stephen Wright. Conditional gradient with enhancement and truncation for atomic norm regularization. In NIPS Workshop on Greedy Algorithms, 2013.
[20] Samet Oymak, Benjamin Recht, and Mahdi Soltanolkotabi. Sharp time–data tradeoffs for linear inverse problems. arXiv preprint arXiv:1507.04793, 2015.
[21] Jie Shen and Ping Li. A tight bound of hard thresholding. arXiv preprint arXiv:1605.01656, 2016.
Search Improves Label for Active Learning
Alina Beygelzimer, Yahoo Research, New York, NY, beygel@yahoo-inc.com
Daniel Hsu, Columbia University, New York, NY, djhsu@cs.columbia.edu
John Langford, Microsoft Research, New York, NY, jcl@microsoft.com
Chicheng Zhang, UC San Diego, La Jolla, CA, chz038@cs.ucsd.edu

Abstract
We investigate active learning with access to two distinct oracles: LABEL (which is standard) and SEARCH (which is not). The SEARCH oracle models the situation where a human searches a database to seed or counterexample an existing solution. SEARCH is stronger than LABEL while being natural to implement in many situations. We show that an algorithm using both oracles can provide exponentially large problem-dependent improvements over LABEL alone.

1 Introduction
Most active learning theory is based on interacting with a LABEL oracle: An active learner observes unlabeled examples, each with a label that is initially hidden. The learner provides an unlabeled example to the oracle, and the oracle responds with the label. Using LABEL in an active learning algorithm is known to give (sometimes exponentially large) problem-dependent improvements in label complexity, even in the agnostic setting where no assumption is made about the underlying distribution [e.g., Balcan et al., 2006, Hanneke, 2007, Dasgupta et al., 2007, Hanneke, 2014]. A well-known deficiency of LABEL arises in the presence of rare classes in classification problems, frequently the case in practice [Attenberg and Provost, 2010, Simard et al., 2014]. Class imbalance may be so extreme that simply finding an example from the rare class can exhaust the labeling budget. Consider the problem of learning interval functions in [0, 1]. Any LABEL-only active learner needs at least Ω(1/ε) LABEL queries to learn an arbitrary target interval with error at most ε [Dasgupta, 2005].
Given any positive example from the interval, however, the query complexity of learning intervals collapses to O(log(1/ε)), as we can just do a binary search for each of the end points. A natural approach used to overcome this hurdle in practice is to search for known examples of the rare class [Attenberg and Provost, 2010, Simard et al., 2014]. Domain experts are often adept at finding examples of a class by various, often clever means. For instance, when building a hate speech filter, a simple web search can readily produce a set of positive examples. Sending a random batch of unlabeled text to LABEL is unlikely to produce any positive examples at all. Another form of interaction common in practice is providing counterexamples to a learned predictor. When monitoring the stream filtered by the current hate speech filter, a human editor may spot a clear-cut example of hate speech that seeped through the filter. The editor, using all the search tools available to her, may even be tasked with searching for such counterexamples. The goal of the learning system is then to interactively restrict the searchable space, guiding the search process to where it is most effective. Counterexamples can be ineffective or misleading in practice as well. Reconsidering the intervals example above, a counterexample on the boundary of an incorrect interval provides no useful information about any other examples. What is a good counterexample? What is a natural way to restrict the searchable space? How can the intervals problem be generalized? We define a new oracle, SEARCH, that provides counterexamples to version spaces. Given a set of possible classifiers H mapping unlabeled examples to labels, a version space V ⊆ H is the subset of classifiers still under consideration by the algorithm.
A counterexample to a version space is a labeled example which every classifier in the version space classifies incorrectly. When there is no counterexample to the version space, SEARCH returns nothing. How can a counterexample to the version space be used? We consider a nested sequence of hypothesis classes of increasing complexity, akin to Structural Risk Minimization (SRM) in passive learning [see, e.g., Vapnik, 1982, Devroye et al., 1996]. When SEARCH produces a counterexample to the version space, it gives a proof that the current hypothesis class is too simplistic to solve the problem effectively. We show that this guided increase in hypothesis complexity results in a radically lower LABEL complexity than directly learning on the complex space. Sample complexity bounds for model selection in LABEL-only active learning were studied by Balcan et al. [2010], Hanneke [2011]. SEARCH can easily model the practice of seeding discussed earlier. If the first hypothesis class has just the constant always-negative classifier h(x) = −1, a seed example with label +1 is a counterexample to the version space. Our most basic algorithm uses SEARCH just once before using LABEL, but it is clear from inspection that multiple seeds are not harmful, and they may be helpful if they provide the proof required to operate with an appropriately complex hypothesis class. Defining SEARCH with respect to a version space rather than a single classifier allows us to formalize “counterexample far from the boundary” in a general fashion which is compatible with the way LABEL-based active learning algorithms work. Related work. The closest oracle considered in the literature is the Class Conditional Query (CCQ) [Balcan and Hanneke, 2012] oracle. A query to CCQ specifies a finite set of unlabeled examples and a label while returning an example in the subset with the specified label, if one exists. 
In contrast, SEARCH has an implicit query set that is an entire region of the input space rather than a finite set. Simple searches over this large implicit domain can more plausibly discover relevant counterexamples: When building a detector for penguins in images, the input to CCQ might be a set of images and the label “penguin”. Even if we are very lucky and the set happens to contain a penguin image, a search amongst image tags may fail to find it in the subset because it is not tagged appropriately. SEARCH is more likely to discover counterexamples—surely there are many images correctly tagged as having penguins. Why is it natural to define a query region implicitly via a version space? There is a practical reason—it is a concise description of a natural region with an efficiently implementable membership filter [Beygelzimer et al., 2010, 2011, Huang et al., 2015]. (Compare this to an oracle call that has to explicitly enumerate a large set of examples. The algorithm of Balcan and Hanneke [2012] uses samples of size roughly dν/2.) The use of SEARCH in this paper is also substantially different from the use of CCQ by Balcan and Hanneke [2012]. Our motivation is to use SEARCH to assist LABEL, as opposed to using SEARCH alone. This is especially useful in any setting where the cost of SEARCH is significantly higher than the cost of LABEL—we hope to avoid using SEARCH queries whenever it is possible to make progress using LABEL queries. This is consistent with how interactive learning systems are used in practice. For example, the Interactive Classification and Extraction system of Simard et al. [2014] combines LABEL with search in a production environment. The final important distinction is that we require SEARCH to return the label of the optimal predictor in the nested sequence. 
For many natural sequences of hypothesis classes, the Bayes optimal classifier is eventually in the sequence, in which case it is equivalent to assuming that the label in a counterexample is the most probable one, as opposed to a randomly-drawn label from the conditional distribution (as in CCQ and LABEL). Is this a reasonable assumption? Unlike with LABEL queries, where the labeler has no choice of what to label, here the labeler chooses a counterexample. If a human editor finds an unquestionable example of hate speech that seeped through the filter, it is quite reasonable to assume that this counterexample is consistent with the Bayes optimal predictor for any sensible feature representation.

Organization. Section 2 formally introduces the setting. Section 3 shows that SEARCH is at least as powerful as LABEL. Section 4 shows how to use SEARCH and LABEL jointly in the realizable setting where a zero-error classifier exists in the nested sequence of hypothesis classes. Section 5 handles the agnostic setting where LABEL is subject to label noise, and shows an amortized approach to combining the two oracles with a good guarantee on the total cost.

2 Definitions and Setting
In active learning, there is an underlying distribution D over X × Y, where X is the instance space and Y := {−1, +1} is the label space. The learner can obtain independent draws from D, but the label is hidden unless explicitly requested through a query to the LABEL oracle. Let D_X denote the marginal of D over X. We consider learning with a nested sequence of hypothesis classes H_0 ⊂ H_1 ⊂ · · · ⊂ H_k ⊂ · · ·, where H_k ⊆ Y^X has VC dimension d_k. For a set of labeled examples S ⊆ X × Y, let H_k(S) := {h ∈ H_k : ∀(x, y) ∈ S, h(x) = y} be the set of hypotheses in H_k consistent with S. Let err(h) := Pr_{(x,y)∼D}[h(x) ≠ y] denote the error rate of a hypothesis h with respect to distribution D, and err(h, S) be the error rate of h on the labeled examples in S. Let h*_k = arg min_{h∈H_k} err(h), breaking ties arbitrarily, and let k* := arg min_{k≥0} err(h*_k), breaking ties in favor of the smallest such k. For simplicity, we assume the minimum is attained at some finite k*. Finally, define h* := h*_{k*}, the optimal hypothesis in the sequence of classes. The goal of the learner is to learn a hypothesis with error rate not much more than that of h*. In addition to LABEL, the learner can also query SEARCH with a version space.

Oracle SEARCH_H(V) (where H ∈ {H_k}_{k=0}^∞)
input: Set of hypotheses V ⊂ H
output: Labeled example (x, h*(x)) s.t. h(x) ≠ h*(x) for all h ∈ V, or ⊥ if there is no such example.

Thus if SEARCH_H(V) returns an example, this example is a systematic mistake made by all hypotheses in V. (If V = ∅, we expect SEARCH to return some example, i.e., not ⊥.) Our analysis is given in terms of the disagreement coefficient of Hanneke [2007], which has been a central parameter for analyzing active learning algorithms. Define the region of disagreement of a set of hypotheses V as Dis(V) := {x ∈ X : ∃h, h′ ∈ V s.t. h(x) ≠ h′(x)}. The disagreement coefficient of V at scale r is θ_V(r) := sup_{h∈V, r′≥r} Pr_{D_X}[Dis(B_V(h, r′))]/r′, where B_V(h, r′) = {h′ ∈ V : Pr_{x∼D_X}[h(x) ≠ h′(x)] ≤ r′} is the ball of radius r′ around h. The Õ(·) notation hides factors that are polylogarithmic in 1/δ and quantities that do appear, where δ is the usual confidence parameter.

3 The Relative Power of the Two Oracles
Although SEARCH cannot always implement LABEL efficiently, it is as effective at reducing the region of disagreement. The clearest example is learning threshold classifiers H := {h_w : w ∈ [0, 1]} in the realizable case, where h_w(x) = +1 if w ≤ x ≤ 1, and −1 if 0 ≤ x < w. A simple binary search with LABEL achieves an exponential improvement in query complexity over passive learning. The agreement region of any set of threshold classifiers with thresholds in [w_min, w_max] is [0, w_min) ∪ [w_max, 1].
Let h*_k := argmin_{h ∈ H_k} err(h) (breaking ties arbitrarily), and let k* := argmin_{k ≥ 0} err(h*_k) (breaking ties in favor of the smallest such k); for simplicity, we assume the minimum is attained at some finite k*. Finally, define h* := h*_{k*}, the optimal hypothesis in the sequence of classes. The goal of the learner is to learn a hypothesis with error rate not much more than that of h*.

In addition to LABEL, the learner can also query SEARCH with a version space.

Oracle SEARCH_H(V) (where H ∈ {H_k}_{k=0}^∞)
input: Set of hypotheses V ⊂ H
output: Labeled example (x, h*(x)) such that h(x) ≠ h*(x) for all h ∈ V, or ⊥ if there is no such example.

Thus if SEARCH_H(V) returns an example, this example is a systematic mistake made by all hypotheses in V. (If V = ∅, we expect SEARCH to return some example, i.e., not ⊥.)

Our analysis is given in terms of the disagreement coefficient of Hanneke [2007], which has been a central parameter for analyzing active learning algorithms. Define the region of disagreement of a set of hypotheses V as Dis(V) := {x ∈ X : ∃h, h' ∈ V s.t. h(x) ≠ h'(x)}. The disagreement coefficient of V at scale r is θ_V(r) := sup_{h ∈ V, r' ≥ r} Pr_{D_X}[Dis(B_V(h, r'))]/r', where B_V(h, r') = {h' ∈ V : Pr_{x∼D_X}[h(x) ≠ h'(x)] ≤ r'} is the ball of radius r' around h.

The Õ(·) notation hides factors that are polylogarithmic in 1/δ and in the quantities that do appear, where δ is the usual confidence parameter.

3 The Relative Power of the Two Oracles

Although SEARCH cannot always implement LABEL efficiently, it is at least as effective at reducing the region of disagreement. The clearest example is learning threshold classifiers H := {h_w : w ∈ [0, 1]} in the realizable case, where h_w(x) = +1 if w ≤ x ≤ 1, and −1 if 0 ≤ x < w. A simple binary search with LABEL achieves an exponential improvement in query complexity over passive learning. The agreement region of any set of threshold classifiers with thresholds in [w_min, w_max] is [0, w_min) ∪ [w_max, 1].
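To make the oracle concrete, here is a minimal finite-pool simulation (entirely our own illustration; the function names, the pool, and the representation of hypotheses as plain Python functions are assumptions, not part of the paper). SEARCH scans the pool for a point on which every hypothesis in the version space disagrees with h*, and the disagreement region is computed by checking unanimity:

```python
# Hypothetical simulation of the SEARCH oracle over a finite pool of unlabeled
# points. A hypothesis is any function X -> {-1, +1}; h_star plays the role of
# the optimal predictor whose labels SEARCH is assumed to return.

def search(V, h_star, pool):
    for x in pool:
        y = h_star(x)
        if all(h(x) != y for h in V):
            return (x, y)   # a systematic mistake of the whole version space
    return None             # the paper's "bottom": no counterexample exists

def disagreement_region(V, pool):
    # Dis(V) restricted to the pool: points where V is not unanimous.
    return [x for x in pool if any(h(x) != V[0](x) for h in V[1:])]
```

On the threshold classifiers described above, a version space of thresholds {0.8, 0.9} makes a systematic mistake on any truly-positive point in [0.5, 0.8) when the true threshold is 0.5, and its disagreement region is [0.8, 0.9).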
Since SEARCH is allowed to return any counterexample in the agreement region, there is no mechanism for forcing SEARCH to return the label of a particular point we want. However, this is not needed to achieve logarithmic query complexity with SEARCH: if binary search would query the label of x ∈ [0, 1], we can instead query SEARCH_H(V_x), where V_x := {h_w ∈ H : w < x}. If SEARCH returns ⊥, we know that the target satisfies w* ≤ x and can safely reduce the region of disagreement to [0, x). If SEARCH returns a counterexample (x_0, −1) with x_0 ≥ x, we know that w* > x_0 and can reduce the region of disagreement to (x_0, 1].

This observation holds more generally. In the proposition below, we assume that LABEL(x) = h*(x) for simplicity. If LABEL(x) is noisy, the proposition holds for any active learning algorithm that does not eliminate any h ∈ H with h(x) = LABEL(x) from the version space.

Proposition 1. For any call x ∈ X to LABEL such that LABEL(x) = h*(x), we can construct a call to SEARCH that achieves at least as large a reduction in the region of disagreement.

Proof. For any V ⊆ H, let H_SEARCH(V) be the hypotheses in H consistent with the output of SEARCH_H(V): if SEARCH_H(V) returns a counterexample (x, y) to V, then H_SEARCH(V) := {h ∈ H : h(x) = y}; otherwise, H_SEARCH(V) := V. Let H_LABEL(x) := {h ∈ H : h(x) = LABEL(x)}. Also, let V_x := H_{+1}(x) := {h ∈ H : h(x) = +1}. We will show that V_x is such that H_SEARCH(V_x) ⊆ H_LABEL(x), and hence Dis(H_SEARCH(V_x)) ⊆ Dis(H_LABEL(x)). There are two cases to consider. If h*(x) = +1, then SEARCH_H(V_x) returns ⊥. In this case, H_LABEL(x) = H_SEARCH(V_x) = H_{+1}(x), and we are done. If h*(x) = −1, then SEARCH_H(V_x) returns a valid counterexample (possibly (x, −1)) in the region of agreement of H_{+1}(x), eliminating all of H_{+1}(x). Thus H_SEARCH(V_x) ⊆ H \ H_{+1}(x) = H_LABEL(x), and the claim holds in this case as well.

As shown by the problem of learning intervals on the line, SEARCH can be exponentially more powerful than LABEL.
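The SEARCH-only binary search sketched above is easy to simulate. The following sketch (our own; `learn_threshold` and `make_search_oracle` are illustrative names) learns a threshold w* to precision eps. The simulated oracle answers a query at `mid` with ⊥ (here `None`) when w* ≤ mid, and otherwise returns a counterexample (x_0, −1) with x_0 ≥ mid; per the argument above, any such x_0 in [mid, w*) is valid, and this simulation returns the least informative one, mid itself:

```python
# SEARCH-only binary search for a threshold classifier on [0, 1].
# search_oracle(mid) simulates SEARCH_H(V_mid) with V_mid = {h_w : w < mid},
# whose agreement region is [mid, 1] (all of V_mid predicts +1 there).

def learn_threshold(search_oracle, eps):
    lo, hi = 0.0, 1.0                     # invariant: w* stays in [lo, hi]
    while hi - lo > eps:
        mid = (lo + hi) / 2
        counterexample = search_oracle(mid)
        if counterexample is None:        # the oracle's "bottom": w* <= mid
            hi = mid
        else:                             # (x0, -1) with x0 >= mid: w* > x0
            lo = counterexample[0]
    return (lo + hi) / 2

def make_search_oracle(w_star):
    # Any x0 in [mid, w_star) is a valid counterexample; this simulation
    # returns the least informative one, mid itself.
    def oracle(mid):
        return (mid, -1) if w_star > mid else None
    return oracle
```

Each query halves the interval containing w*, so the query count is logarithmic in 1/eps, matching the LABEL-based binary search.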
4 Realizable Case

We now turn to general active learning algorithms that combine SEARCH and LABEL. We focus on algorithms using both SEARCH and LABEL, since LABEL is typically easier to implement than SEARCH and hence should be used where SEARCH has no significant advantage. (Whenever SEARCH is less expensive than LABEL, Section 3 suggests a transformation to a SEARCH-only algorithm.) This section considers the realizable case, in which we assume that the hypothesis h* = h*_{k*} ∈ H_{k*} has err(h*) = 0. This means that LABEL(x) returns h*(x) for any x in the support of D_X.

4.1 Combining LABEL and SEARCH

Our algorithm (shown as Algorithm 1) is called LARCH, because it combines LABEL and SEARCH. Like many selective sampling methods, LARCH uses a version space to determine its LABEL queries. For concreteness, we use (a variant of) the algorithm of Cohn et al. [1994], denoted CAL, as a subroutine in LARCH. The inputs to CAL are a version space V, the LABEL oracle, a target error rate, and a confidence parameter; its output is a set of labeled examples (implicitly defining a new version space). CAL is described in Appendix B; its essential properties are specified in Lemma 1.

LARCH differs from LABEL-only active learners (like CAL) by first calling SEARCH in Step 3. If SEARCH returns ⊥, LARCH checks to see if the last call to CAL resulted in a small enough error, halting if so in Step 6, and decreasing the allowed error rate if not in Step 8. If SEARCH instead returns a counterexample, the hypothesis class H_k must be impoverished, so in Step 12, LARCH increases the complexity of the hypothesis class to the minimum complexity sufficient to correctly classify all known labeled examples in S. After the SEARCH, CAL is called in Step 14 to discover a sufficiently low-error (or at least low-disagreement) version space with high probability.
When LARCH advances to index k (for any k ≤ k*), its set of labeled examples S may imply a version space H_k(S) ⊆ H_k that can be actively learned more efficiently than the whole of H_k. In our analysis, we quantify this through the disagreement coefficient of H_k(S), which may be markedly smaller than that of the full H_k.

The following theorem bounds the oracle query complexity of Algorithm 1 for learning with both SEARCH and LABEL in the realizable setting. The proof is in Section 4.2.

Theorem 1. Assume that err(h*) = 0. For each k ≥ 0, let θ_k(·) be the disagreement coefficient of H_k(S[k]), where S[k] is the set of labeled examples S in LARCH at the first time that the index is at least k. Fix any ε, δ ∈ (0, 1). If LARCH is run with inputs hypothesis classes {H_k}_{k=0}^∞, oracles LABEL and SEARCH, and learning parameters ε, δ, then with probability at least 1 − δ: LARCH halts after at most k* + log_2(1/ε) for-loop iterations and returns a classifier with error rate at most ε; furthermore, it draws at most Õ(k* d_{k*}/ε) unlabeled examples from D_X, makes at most k* + log_2(1/ε) queries to SEARCH, and at most Õ((k* + log(1/ε)) · (max_{k ≤ k*} θ_k(ε)) · d_{k*} · log^2(1/ε)) queries to LABEL.

Algorithm 1 LARCH
input: Nested hypothesis classes H_0 ⊂ H_1 ⊂ · · · ; oracles LABEL and SEARCH; learning parameters ε, δ ∈ (0, 1)
1: initialize S ← ∅, (index) k ← 0, ℓ ← 0
2: for i = 1, 2, . . . do
3:   e ← SEARCH_{H_k}(H_k(S))
4:   if e = ⊥ then  # no counterexample found
5:     if 2^{−ℓ} ≤ ε then
6:       return any h ∈ H_k(S)
7:     else
8:       ℓ ← ℓ + 1
9:     end if
10:  else  # counterexample found
11:    S ← S ∪ {e}
12:    k ← min{k' : H_{k'}(S) ≠ ∅}
13:  end if
14:  S ← S ∪ CAL(H_k(S), LABEL, 2^{−ℓ}, δ/(i^2 + i))
15: end for

Union-of-intervals example. We now show an implication of Theorem 1 in the case where the target hypothesis h* is the union of non-trivial intervals in X := [0, 1], assuming that D_X is uniform. For k ≥ 0, let H_k be the hypothesis class of unions of up to k intervals in [0, 1], with H_0 containing only the always-negative hypothesis. (Thus, h* is the union of k* non-empty intervals.)
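As a sanity check of LARCH's control flow on this union-of-intervals setting, here is a toy pool-based sketch. It is entirely our own simplification, not the paper's algorithm: the domain is a 10-point grid, the "distribution" is uniform on the pool, hypotheses are label tuples, and the CAL subroutine is replaced by exhaustively labeling the current disagreement region (which is what CAL approaches as its target error goes to 0). Note that `all(...)` over an empty version space is vacuously true, so `search(∅, ·)` returns an example, matching the paper's convention that SEARCH(∅) ≠ ⊥:

```python
# Toy LARCH on unions of intervals over a 10-point grid (our simplification).
from itertools import product

POOL = list(range(10))

def num_intervals(labels):
    # number of maximal runs of +1 in a label tuple
    return sum(1 for i, y in enumerate(labels)
               if y == 1 and (i == 0 or labels[i - 1] == -1))

# H_k = all labelings of the grid that are unions of at most k intervals
CLASSES = {k: [h for h in product([-1, 1], repeat=len(POOL))
               if num_intervals(h) <= k]
           for k in range(4)}

def consistent(k, S):
    return [h for h in CLASSES[k] if all(h[x] == y for x, y in S)]

def search(V, h_star):
    # SEARCH: a systematic mistake of V, or None (the paper's "bottom")
    for x in POOL:
        if all(h[x] != h_star[x] for h in V):
            return (x, h_star[x])
    return None

def larch(h_star):
    S, k = set(), 0
    while True:
        V = consistent(k, S)
        e = search(V, h_star)
        if e is not None:
            S.add(e)
            k = min(j for j in CLASSES if consistent(j, S))  # Step 12
            continue
        # CAL stand-in: label the whole disagreement region of the version space
        dis = [x for x in POOL if any(h[x] != V[0][x] for h in V)]
        if not dis:
            return V[0], k  # unanimous and no systematic mistake: done
        S |= {(x, h_star[x]) for x in dis}
```

Running this on a target with two intervals recovers the target exactly and stops at class index 2, mirroring how LARCH's index never overshoots k*.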
The disagreement coefficient of H_1 is Ω(1/ε), and hence LABEL-only active learners like CAL are not very effective at learning such classes. However, the first SEARCH query by LARCH provides a counterexample to H_0, which must be a positive example (x_1, +1). Hence, H_1(S[1]) (where S[1] is defined in Theorem 1) is the class of intervals that contain x_1, with disagreement coefficient θ_1 ≤ 4.

Now consider the inductive case. Just before LARCH advances its index to a value k (for any k ≤ k*), SEARCH returns a counterexample (x, h*(x)) to the version space; every hypothesis in this version space (which could be empty) is a union of fewer than k intervals. If the version space is empty, then S must already contain positive examples from at least k different intervals in h* and at least k − 1 negative examples separating them. If the version space is not empty, then the point x is either a positive example belonging to a previously uncovered interval in h* or a negative example splitting an existing interval. In either case, S[k] contains positive examples from at least k distinct intervals separated by at least k − 1 negative examples. The disagreement coefficient of the set of unions of k intervals consistent with S[k] is at most 4k, independent of ε.

The VC dimension of H_k is O(k), so Theorem 1 implies that with high probability, LARCH makes at most k* + log(1/ε) queries to SEARCH and Õ((k*)^3 log(1/ε) + (k*)^2 log^3(1/ε)) queries to LABEL.

4.2 Proof of Theorem 1

The proof of Theorem 1 uses the following lemma regarding the CAL subroutine, proved in Appendix B. It is similar to a result of Hanneke [2011], but an important difference here is that the input version space V is not assumed to contain h*.

Lemma 1. Assume LABEL(x) = h*(x) for every x in the support of D_X. For any hypothesis set V ⊆ Y^X with VC dimension d < ∞, and any ε, δ ∈ (0, 1), the following holds with probability at least 1 − δ.
CAL(V, LABEL, ε, δ) returns labeled examples T ⊆ {(x, h*(x)) : x ∈ X} such that for any h in V(T), Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(V(T))] ≤ ε; furthermore, it draws at most Õ(d/ε) unlabeled examples from D_X, and makes at most Õ(θ_V(ε) · d · log^2(1/ε)) queries to LABEL.

We now prove Theorem 1. By Lemma 1 and a union bound, there is an event with probability at least 1 − Σ_{i≥1} δ/(i^2 + i) ≥ 1 − δ such that each call to CAL made by LARCH satisfies the high-probability guarantee from Lemma 1. We henceforth condition on this event.

We first establish the guarantee on the error rate of a hypothesis returned by LARCH. By the assumed properties of LABEL and SEARCH, and the properties of CAL from Lemma 1, the labeled examples S in LARCH are always consistent with h*. Moreover, the return property of CAL implies that at the end of any loop iteration, with the present values of S, k, and ℓ, we have Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(H_k(S))] ≤ 2^{−ℓ} for all h ∈ H_k(S). (The same holds trivially before the first loop iteration.) Therefore, if LARCH halts and returns a hypothesis h ∈ H_k(S), then there is no counterexample to H_k(S), and Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(H_k(S))] ≤ ε. These consequences and the law of total probability imply err(h) = Pr_{(x,y)∼D}[h(x) ≠ y ∧ x ∈ Dis(H_k(S))] ≤ ε.

We next consider the number of for-loop iterations executed by LARCH. Let S_i, k_i, and ℓ_i be, respectively, the values of S, k, and ℓ at the start of the i-th for-loop iteration in LARCH. We claim that if LARCH does not halt in the i-th iteration, then one of k and ℓ is incremented by at least one. Clearly, if there is no counterexample to H_{k_i}(S_i) and 2^{−ℓ_i} > ε, then ℓ is incremented by one (Step 8). If, instead, there is a counterexample (x, y), then H_{k_i}(S_i ∪ {(x, y)}) = ∅, and hence k is incremented to some index larger than k_i (Step 12). This proves that k_{i+1} + ℓ_{i+1} ≥ k_i + ℓ_i + 1. We also have k_i ≤ k* (since h* ∈ H_{k*} is consistent with S) and ℓ_i ≤ log_2(1/ε), as long as LARCH does not halt in for-loop iteration i.
So the total number of for-loop iterations is at most k* + log_2(1/ε). Together with Lemma 1, this bounds the number of unlabeled examples drawn from D_X.

Finally, we bound the number of queries to SEARCH and LABEL. The number of queries to SEARCH is the same as the number of for-loop iterations, which is at most k* + log_2(1/ε). By Lemma 1 and the fact that V(S ∪ S') ⊆ V(S) for any hypothesis space V and sets of labeled examples S, S', the number of LABEL queries made by CAL in the i-th for-loop iteration is at most Õ(θ_{k_i}(ε) · d_{k_i} · ℓ_i^2 · polylog(i)). The claimed bound on the number of LABEL queries made by LARCH now readily follows by taking a max over i, and using the facts that k_i ≤ k* and d_k ≤ d_{k*} for all k ≤ k*.

4.3 An Improved Algorithm

LARCH is somewhat conservative in its use of SEARCH, interleaving just one SEARCH query between sequences of LABEL queries (from CAL). Often, it is advantageous to advance to higher-complexity hypothesis classes quickly, as long as there is justification for doing so. Counterexamples from SEARCH provide such justification, and a ⊥ result from SEARCH also provides useful feedback about the current version space: outside of its disagreement region, the version space is in complete agreement with h* (even if the version space does not contain h*). Based on these observations, we propose an improved algorithm for the realizable setting, which we call SEABEL. Due to space limitations, we present it in Appendix C. We prove the following performance guarantee for SEABEL.

Theorem 2. Assume that err(h*) = 0. Let θ_k(·) denote the disagreement coefficient of the version space V_i^{k_i} at the first iteration i in SEABEL where k_i ≥ k. Fix any ε, δ ∈ (0, 1).
If SEABEL is run with inputs hypothesis classes {H_k}_{k=0}^∞, oracles SEARCH and LABEL, and learning parameters ε, δ ∈ (0, 1), then with probability 1 − δ: SEABEL halts and returns a classifier with error rate at most ε; furthermore, it draws at most Õ((d_{k*} + log k*)/ε) unlabeled examples from D_X, makes at most k* + O(log(d_{k*}/ε) + log log k*) queries to SEARCH, and at most Õ(max_{k ≤ k*} θ_k(2ε) · (d_{k*} log^2(1/ε) + log k*)) queries to LABEL.

It is not generally possible to directly compare Theorems 1 and 2 on account of the algorithm-dependent disagreement coefficient bounds. However, in cases where these disagreement coefficients are comparable (as in the union-of-intervals example), the SEARCH complexity in Theorem 2 is slightly higher (by additive log terms), but the LABEL complexity is smaller than that from Theorem 1 by roughly a factor of k*. For the union-of-intervals example, SEABEL would learn a target union of k* intervals with k* + O(log(k*/ε)) queries to SEARCH and Õ((k*)^2 log^2(1/ε)) queries to LABEL.

5 Non-Realizable Case

In this section, we consider the case where the optimal hypothesis h* may have non-zero error rate, i.e., the non-realizable (or agnostic) setting. In this case, the algorithm LARCH, which was designed for the realizable setting, is no longer applicable. First, examples obtained by LABEL and SEARCH are of different quality: those returned by SEARCH always agree with h*, whereas the labels given by LABEL need not agree with h*. Moreover, the version spaces (even when k = k*) as defined by LARCH may always be empty due to the noisy labels.

Another complication arises in our SRM setting that differentiates it from the usual agnostic active learning setting. When working with a specific hypothesis class H_k in the nested sequence, we may observe high error rates because (i) the finite-sample error is too high (but additional labeled examples could reduce it), or (ii) the current hypothesis class H_k is impoverished.
In case (ii), the best hypothesis in H_k may have a much larger error rate than h*, and hence lower bounds [Kääriäinen, 2006] imply that active learning on H_k instead of H_{k*} may be substantially more difficult. These difficulties in the SRM setting are circumvented by an algorithm that adaptively estimates the error of h*. The algorithm, A-LARCH (Algorithm 5), is presented in Appendix D.

Theorem 3. Assume err(h*) = ν. Let θ_k(·) denote the disagreement coefficient of V_i^{k_i} at the first iteration i in A-LARCH where k_i ≥ k. Fix any ε, δ ∈ (0, 1). If A-LARCH is run with inputs hypothesis classes {H_k}_{k=0}^∞, oracles SEARCH and LABEL, learning parameter δ, and unlabeled example budget Õ((d_{k*} + log k*)(ν + ε)/ε^2), then with probability 1 − δ: A-LARCH returns a classifier with error rate ≤ ν + ε; it makes at most k* + O(log(d_{k*}/ε) + log log k*) queries to SEARCH, and Õ(max_{k ≤ k*} θ_k(2ν + 2ε) · (d_{k*} log^2(1/ε) + log k*) · (1 + ν^2/ε^2)) queries to LABEL.

The proof is in Appendix D. The LABEL query complexity is at least a factor of k* better than that in Hanneke [2011], and sometimes exponentially better, thanks to the reduced disagreement coefficient of the version space when consistency constraints are incorporated.

5.1 AA-LARCH: an Opportunistic Anytime Algorithm

In many practical scenarios, termination conditions based on quantities like a target excess error rate ε are undesirable. The target ε is unknown, and we instead prefer an algorithm that performs as well as possible until a cost budget is exhausted. Fortunately, when the primary cost being considered is LABEL queries, there are many LABEL-only active learning algorithms that readily work in such an "anytime" setting [see, e.g., Dasgupta et al., 2007, Hanneke, 2014]. The situation is more complicated when we consider both SEARCH and LABEL: we can often make substantially more progress with SEARCH queries than with LABEL queries (as the error rate of the best hypothesis in H_{k'} for k' > k can be far lower than in H_k).
AA-LARCH (Algorithm 2) shows that although these queries come at a higher cost, the cost can be amortized. AA-LARCH relies on several subroutines: SAMPLE-AND-LABEL, ERROR-CHECK, PRUNE-VERSION-SPACE, and UPGRADE-VERSION-SPACE (Algorithms 6, 7, 8, and 9); the detailed descriptions are deferred to Appendix E. SAMPLE-AND-LABEL performs standard disagreement-based selective sampling using the LABEL oracle: labels of examples in the disagreement region are queried, and the rest are inferred. PRUNE-VERSION-SPACE prunes the version space given the labeled examples collected, based on standard generalization error bounds. ERROR-CHECK checks if the best hypothesis in the version space has large error, and SEARCH is used to find a systematic mistake of the version space; if either event happens, AA-LARCH calls UPGRADE-VERSION-SPACE to increase k, the level of the working hypothesis class.

Theorem 4. Assume err(h*) = ν. Let θ_k(·) denote the disagreement coefficient of V_i at the first iteration i after which k_i ≥ k. Fix any ε ∈ (0, 1). Let n = Õ(max_{k ≤ k*} θ_k(2ν + 2ε) d_{k*}(1 + ν^2/ε^2)) and define C = 2(n + k*τ). Run Algorithm 2 with a nested sequence of hypothesis classes {H_k}_{k=0}^∞, oracles LABEL and SEARCH, confidence parameter δ, cost ratio τ ≥ 1, and upper bound N = Õ(d_{k*}/ε^2). If the cost spent is at least C, then with probability 1 − δ, the current hypothesis h̃ has error at most ν + ε.

The proof is in Appendix E. A comparison to Theorem 3 shows that AA-LARCH is adaptive: for any cost complexity C, the excess error rate ε is roughly at most twice that achieved by A-LARCH.

6 Discussion

The SEARCH oracle captures a powerful form of interaction that is useful for machine learning. Our theoretical analyses of LARCH and variants demonstrate that SEARCH can substantially improve LABEL-based active learners, while being plausibly cheaper to implement than oracles like CCQ.
Algorithm 2 AA-LARCH
input: Nested hypothesis classes H_0 ⊆ H_1 ⊆ · · · ; oracles LABEL and SEARCH; learning parameter δ ∈ (0, 1); SEARCH-to-LABEL cost ratio τ; dataset size upper bound N.
output: hypothesis h̃.
1: Initialize: consistency constraints S ← ∅, counter c ← 0, k ← 0, verified labeled dataset L̃ ← ∅, working labeled dataset L_0 ← ∅, unlabeled examples processed i ← 0, V_i ← H_k(S).
2: loop
3:   Reset counter c ← 0.
4:   repeat
5:     if ERROR-CHECK(V_i, L_i, δ_i) then
6:       (k, S, V_i) ← UPGRADE-VERSION-SPACE(k, S, ∅)
7:       V_i ← PRUNE-VERSION-SPACE(V_i, L̃, δ_i)
8:       L_i ← L̃
9:       continue loop
10:    end if
11:    i ← i + 1
12:    (L_i, c) ← SAMPLE-AND-LABEL(V_{i−1}, LABEL, L_{i−1}, c)
13:    V_i ← PRUNE-VERSION-SPACE(V_{i−1}, L_i, δ_i)
14:  until c = τ or |L_i| = N
15:  e ← SEARCH_{H_k}(V_i)
16:  if e ≠ ⊥ then  # counterexample found
17:    (k, S, V_i) ← UPGRADE-VERSION-SPACE(k, S, {e})
18:    V_i ← PRUNE-VERSION-SPACE(V_i, L̃, δ_i)
19:    L_i ← L̃
20:  else
21:    Update verified dataset L̃ ← L_i.
22:    Store temporary solution h̃ ← argmin_{h ∈ V_i} err(h, L̃).
23:  end if
24: end loop

Are there examples where CCQ is substantially more powerful than SEARCH? This is a key question, because a good active learning system should use minimally powerful oracles. Another key question: can the benefits of SEARCH be provided in a computationally efficient, general-purpose manner?

References

Josh Attenberg and Foster J. Provost. Why label when you can search? Alternatives to active learning for applying human resources to build classification models under extreme class imbalance. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, July 25–28, 2010, pages 423–432, 2010.

Maria-Florina Balcan and Steve Hanneke. Robust interactive learning. In COLT, 2012.

Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In ICML, 2006.

Maria-Florina Balcan, Steve Hanneke, and Jennifer Wortman Vaughan. The true sample complexity of active learning. Machine Learning, 80(2–3):111–139, 2010.
Alina Beygelzimer, Daniel Hsu, John Langford, and Tong Zhang. Agnostic active learning without constraints. In Advances in Neural Information Processing Systems 23, 2010.

Alina Beygelzimer, Daniel Hsu, Nikos Karampatziakis, John Langford, and Tong Zhang. Efficient active learning. In ICML Workshop on Online Trading of Exploration and Exploitation, 2011.

David A. Cohn, Les E. Atlas, and Richard E. Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.

Sanjoy Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems 18, 2005.

Sanjoy Dasgupta, Daniel Hsu, and Claire Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems 20, 2007.

Luc Devroye, László Györfi, and Gábor Lugosi. A Probabilistic Theory of Pattern Recognition. Springer Verlag, 1996.

Steve Hanneke. A bound on the label complexity of agnostic active learning. In ICML, pages 249–278, 2007.

Steve Hanneke. Rates of convergence in active learning. The Annals of Statistics, 39(1):333–361, 2011.

Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2–3):131–309, 2014. ISSN 1935-8237. doi: 10.1561/2200000037.

Tzu-Kuo Huang, Alekh Agarwal, Daniel Hsu, John Langford, and Robert E. Schapire. Efficient and parsimonious agnostic active learning. In Advances in Neural Information Processing Systems 28, 2015.

Matti Kääriäinen. Active learning in the non-realizable case. In Algorithmic Learning Theory, 17th International Conference, ALT 2006, Barcelona, Spain, October 7–10, 2006, Proceedings, pages 63–77, 2006.

Patrice Y. Simard, David Maxwell Chickering, Aparna Lakshmiratan, Denis Xavier Charles, Léon Bottou, Carlos Garcia Jurado Suarez, David Grangier, Saleema Amershi, Johan Verwey, and Jina Suh. ICE: enabling non-experts to build models interactively for large-scale lopsided problems. CoRR, abs/1409.4814, 2014.
URL http://arxiv.org/abs/1409.4814.

Vladimir N. Vapnik. Estimation of Dependences Based on Empirical Data. Springer-Verlag, 1982.

Vladimir N. Vapnik and Alexey Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and Its Applications, 16(2):264–280, 1971.
A Simple Practical Accelerated Method for Finite Sums

Aaron Defazio
Ambiata, Sydney Australia

Abstract

We describe a novel optimization method for finite sums (such as empirical risk minimization problems) building on the recently introduced SAGA method. Our method achieves an accelerated convergence rate on strongly convex smooth problems. Our method has only one parameter (a step size), and is radically simpler than other accelerated methods for finite sums. Additionally it can be applied when the terms are non-smooth, yielding a method applicable in many areas where operator splitting methods would traditionally be applied.

Introduction

A large body of recent developments in optimization have focused on minimization of convex finite sums of the form:

f(x) = (1/n) Σ_{i=1}^n f_i(x),

a very general class of problems including the empirical risk minimization (ERM) framework as a special case. Any function h can be written in this form by setting f_1(x) = h(x) and f_i = 0 for i ≠ 1; however, when each f_i is sufficiently regular in a way that can be made precise, it is possible to optimize such sums more efficiently than by treating them as black-box functions. In most cases recently developed methods such as SAG [Schmidt et al., 2013] can find an ε-minimum faster than either stochastic gradient descent or accelerated black-box approaches, both in theory and in practice. We call this class of methods fast incremental gradient methods (FIG). FIG methods are randomized methods similar to SGD; however, unlike SGD, they are able to achieve linear convergence rates under Lipschitz-smoothness and strong convexity conditions [Mairal, 2014, Defazio et al., 2014b, Johnson and Zhang, 2013, Konečný and Richtárik, 2013].
The linear rate in the first wave of FIG methods directly depended on the condition number L/µ of the problem, whereas recently several methods have been developed that depend on the square root of the condition number [Lan and Zhou, 2015, Lin et al., 2015, Shalev-Shwartz and Zhang, 2013c, Nitanda, 2014], at least when n is not too large. Analogous to the black-box case, these methods are known as accelerated methods. In this work we develop another accelerated method, which is conceptually simpler and requires less tuning than existing accelerated methods. The method we give is a primal approach; however, it makes use of a proximal operator oracle for each f_i instead of a gradient oracle, unlike other primal approaches. The proximal operator is also used by dual methods such as some variants of SDCA [Shalev-Shwartz and Zhang, 2013a].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Algorithm 1
Pick some starting point x^0 and step size γ. Initialize each g_i^0 = f_i'(x^0), where f_i'(x^0) is any gradient/subgradient at x^0. Then at step k + 1:
1. Pick index j from 1 to n uniformly at random.
2. Update x:
   z_j^k = x^k + γ [ g_j^k − (1/n) Σ_{i=1}^n g_i^k ],
   x^{k+1} = prox_j^γ(z_j^k).
3. Update the gradient table: set g_j^{k+1} = (1/γ)(z_j^k − x^{k+1}), and leave the rest of the entries unchanged (g_i^{k+1} = g_i^k for i ≠ j).

1 Algorithm

Our algorithm's main step makes use of the proximal operator for a randomly chosen f_i. For convenience, we define:

prox_i^γ(x) = argmin_y { γ f_i(y) + (1/2) ‖x − y‖^2 }.

This proximal operator can be computed efficiently or in closed form in many cases; see Section 4 for details. Like SAGA, we also maintain a table of gradients g_i, one for each function f_i. We denote the state of g_i at the end of step k by g_i^k. The iterate (our guess at the solution) at the end of step k is denoted x^k. The starting iterate x^0 may be chosen arbitrarily. The full algorithm is given as Algorithm 1.
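The three steps of Algorithm 1 can be exercised end to end on a toy problem. The following is a minimal one-dimensional instantiation (the setup is our own, not from the paper): each f_i(x) = (λ_i/2)(x − b_i)^2 is λ_i-smooth and λ_i-strongly convex, so the prox has the simple closed form used below, and the step size is the γ reconstructed from the paper's formula with L = max_i λ_i and µ = min_i λ_i:

```python
# Toy 1-D Point-SAGA: f_i(x) = (lam[i]/2) * (x - b[i])**2.
import math
import random

def point_saga(lam, b, steps, seed=0):
    rng = random.Random(seed)
    n = len(b)
    L, mu = max(lam), min(lam)
    # step size from the paper (as reconstructed above)
    gamma = math.sqrt((n - 1) ** 2 + 4 * n * L / mu) / (2 * L * n) \
            - (1 - 1 / n) / (2 * L)
    x = 0.0
    g = [lam[i] * (x - b[i]) for i in range(n)]   # g_i^0 = f_i'(x^0)
    g_avg = sum(g) / n                            # cached running average
    for _ in range(steps):
        j = rng.randrange(n)                      # step 1: uniform index
        z = x + gamma * (g[j] - g_avg)            # step 2: z_j^k
        x_new = (z + gamma * lam[j] * b[j]) / (1 + gamma * lam[j])  # prox of gamma*f_j
        g_new = (z - x_new) / gamma               # step 3: new table entry
        g_avg += (g_new - g[j]) / n               # O(1) update of the average
        g[j], x = g_new, x_new
    return x
```

By the prox optimality condition, `g_new` equals f_j'(x_new), so the table always stores genuine gradients, and the cached average makes each step O(1) beyond the prox evaluation.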
The sum of gradients (1/n) Σ_{i=1}^n g_i^k can be cached and updated efficiently at each step, and in most cases, instead of storing a full vector for each g_i, only a single real value needs to be stored. This is the case for linear regression or binary classification with logistic loss or hinge loss, in precisely the same way as for standard SAGA. A discussion of further implementation details is given in Section 4.

With step size

γ = √((n − 1)^2 + 4n(L/µ)) / (2Ln) − (1 − 1/n) / (2L),

the expected convergence rate in terms of squared distance to the solution is given by:

E‖x^k − x*‖^2 ≤ (1 − µγ/(1 + µγ))^k · ((µ + L)/µ) ‖x^0 − x*‖^2,

when each f_i : R^d → R is L-smooth and µ-strongly convex. See Nesterov [1998] for definitions of these conditions. Using big-O notation, the number of steps required to reduce the distance to the solution by a factor ϵ is:

k = O((√(nL/µ) + n) log(1/ϵ)), as ϵ → 0.

This rate matches the lower bound known for this problem [Lan and Zhou, 2015] under the gradient oracle. We conjecture that this rate is optimal under the proximal operator oracle as well. Unlike other accelerated approaches, we have only a single tunable parameter (the step size γ), and the algorithm does not need knowledge of L or µ except for their appearance in the step size.

Compared to the O((L/µ + n) log(1/ϵ)) rate for SAGA and other non-accelerated FIG methods, accelerated FIG methods are significantly faster when n is small compared to L/µ; however, for n ≥ L/µ the performance is essentially the same. All known FIG methods hit a kind of wall at n ≈ L/µ, where they decrease the error at each step by no more than a factor of 1 − 1/n. Indeed, when n ≥ L/µ the problem is so well conditioned that any FIG method can solve it efficiently. This is sometimes called the big data setting [Defazio et al., 2014b].

Our convergence rate can also be compared to that of optimal first-order black-box methods, which have rates of the form k = O(√(L/µ) log(1/ϵ)) per epoch equivalent.
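The two regimes described above are easy to see numerically. The following sketch (purely illustrative, constants dropped) evaluates the stated iteration counts for an accelerated FIG method versus a non-accelerated one such as SAGA:

```python
# Illustrative iteration counts from the stated big-O rates (constants dropped).
import math

def iters_accelerated(n, L, mu, eps):
    # O((sqrt(n L / mu) + n) log(1/eps))
    return (math.sqrt(n * L / mu) + n) * math.log(1 / eps)

def iters_nonaccelerated(n, L, mu, eps):
    # O((L/mu + n) log(1/eps)), as for SAGA
    return (L / mu + n) * math.log(1 / eps)
```

With n small relative to L/µ the accelerated count is far smaller; once n dominates L/µ both are driven by the n log(1/ϵ) term and essentially coincide, which is the "wall" at n ≈ L/µ discussed above.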
We are able to achieve a √n speedup on a per-epoch basis, for n not too large. Of course, all of the mentioned rates are significantly better than the O((L/µ) log(1/ϵ)) rate of gradient descent. For non-smooth but strongly convex problems, we prove a 1/ϵ-type rate under a standard iterate averaging scheme. This rate does not require the use of decreasing step sizes, so our algorithm requires less tuning than other primal approaches on non-smooth problems.

2 Relation to other approaches

Our method is most closely related to the SAGA method. To make the relation clear, we may write our method's main step as:

x^{k+1} = x^k − γ [ f_j'(x^{k+1}) − g_j^k + (1/n) Σ_{i=1}^n g_i^k ],

whereas SAGA has a step of the form:

x^{k+1} = x^k − γ [ f_j'(x^k) − g_j^k + (1/n) Σ_{i=1}^n g_i^k ].

The difference is the point at which the gradient of f_j is evaluated. The proximal operator has the effect of evaluating the gradient at x^{k+1} instead of x^k. While a small difference on the surface, this change has profound effects. It allows the method to be applied directly to non-smooth problems using fixed step sizes, a property not shared by SAGA or other primal FIG methods. Additionally, it allows for much larger step sizes to be used, which is why the method is able to achieve an accelerated rate.

It is also illustrative to look at how the methods behave at n = 1. SAGA degenerates into regular gradient descent, whereas our method becomes the proximal-point method [Rockafellar, 1976]:

x^{k+1} = prox_{γf}(x^k).

The proximal-point method has quite remarkable properties. For strongly convex problems, it converges for any γ > 0 at a linear rate, the downside being the inherent difficulty of evaluating the proximal operator. For the n = 2 case, if each term is an indicator function of a convex set, our algorithm matches Dykstra's projection algorithm if we take γ = 2 and use cyclic instead of random steps.
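The n = 1 special case mentioned above is easy to check numerically. For the quadratic f(x) = (µ/2)x^2, the proximal operator has a closed form, and the proximal-point iteration contracts by a factor 1/(1 + γµ) per step for any γ > 0 (our own illustration, not from the paper):

```python
# Proximal-point iteration x_{k+1} = prox_{gamma f}(x_k) on f(x) = (mu/2) x^2.

def prox_quadratic(gamma, mu, x):
    # argmin_y gamma*(mu/2)*y**2 + 0.5*(x - y)**2  =  x / (1 + gamma*mu)
    return x / (1 + gamma * mu)

def proximal_point(gamma, mu, x0, steps):
    x = x0
    for _ in range(steps):
        x = prox_quadratic(gamma, mu, x)
    return x
```

Note the contrast with gradient descent, which diverges on this problem once γ > 2/µ, while the proximal-point iteration above contracts for every positive γ; larger γ only contracts faster.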
Accelerated incremental gradient methods. Several acceleration schemes have been recently developed as extensions of non-accelerated FIG methods. The earliest approach developed was the ASDCA algorithm [Shalev-Shwartz and Zhang, 2013b,c]. The general approach of applying the proximal-point method as the outer loop of a double-loop scheme has been dubbed the Catalyst algorithm [Lin et al., 2015]; it can be applied to accelerate any FIG method. Recently a very interesting primal-dual approach has been proposed by Lan and Zhou [2015]. All of the prior accelerated methods are significantly more complex than the approach we propose, and have more complex proofs.

3 Theory

3.1 Proximal operator bounds

In this section we restate some simple bounds from proximal operator theory that we will use in this work. Define the shorthand p_{γf}(x) = prox_{γf}(x), and let g_{γf}(x) = (1/γ)(x − p_{γf}(x)), so that p_{γf}(x) = x − γ g_{γf}(x). Note that g_{γf}(x) is a subgradient of f at the point p_{γf}(x). This relation is known as the optimality condition of the proximal operator. Proofs for the following two propositions are in the supplementary material.

Table 1: Notation quick reference
- x^k: current iterate at step k; x^k ∈ R^d
- x*: solution; x* ∈ R^d
- γ: step size
- p_{γf}(x): shorthand in results for generic f; p_{γf}(x) = prox_{γf}(x)
- prox_i^γ(x): proximal operator of γf_i at x; = argmin_y { γf_i(y) + (1/2)‖x − y‖^2 }
- g_i^k: a stored subgradient of f_i as seen at step k
- g_i*: a subgradient of f_i at x*; Σ_{i=1}^n g_i* = 0
- v_i: v_i = x* + γg_i*; x* = prox_i^γ(v_i)
- j: chosen component index (random variable)
- z_j^k: z_j^k = x^k + γ(g_j^k − (1/n) Σ_{i=1}^n g_i^k); x^{k+1} = prox_j^γ(z_j^k)

Proposition 1. (Strengthening firm non-expansiveness under strong convexity) For any x, y ∈ R^d, and any convex function f : R^d → R with strong convexity constant µ ≥ 0,

⟨x − y, p_{γf}(x) − p_{γf}(y)⟩ ≥ (1 + µγ) ‖p_{γf}(x) − p_{γf}(y)‖^2.

In operator theory this property is known as (1 + µγ)-cocoercivity of p_{γf}.

Proposition 2.
(Moreau decomposition) For any x ∈ R^d, and any convex function f : R^d → R with Fenchel conjugate f^*:
\[ p_{\gamma f}(x) = x - \gamma\, p_{\frac{1}{\gamma} f^*}(x/\gamma). \tag{1} \]
Recall our definition g_{γf}(x) = (1/γ)(x − p_{γf}(x)). Combining the two, the following relation holds between the proximal operator of the conjugate f^* and g_{γf}:
\[ p_{\frac{1}{\gamma} f^*}(x/\gamma) = \frac{1}{\gamma}\big(x - p_{\gamma f}(x)\big) = g_{\gamma f}(x). \tag{2} \]

Theorem 3. For any x, y ∈ R^d, and any convex L-smooth function f : R^d → R:
\[ \langle g_{\gamma f}(x) - g_{\gamma f}(y),\; x - y \rangle \ge \gamma\Big(1 + \frac{1}{L\gamma}\Big) \| g_{\gamma f}(x) - g_{\gamma f}(y) \|^2 . \]
Proof. We apply cocoerciveness of the proximal operator of f^* as it appears in the decomposition. Note that L-smoothness of f implies 1/L-strong convexity of f^*. In particular we apply it at the points x/γ and y/γ:
\[ \Big\langle p_{\frac{1}{\gamma} f^*}(x/\gamma) - p_{\frac{1}{\gamma} f^*}(y/\gamma),\; \frac{1}{\gamma}x - \frac{1}{\gamma}y \Big\rangle \ge \Big(1 + \frac{1}{L\gamma}\Big) \Big\| p_{\frac{1}{\gamma} f^*}(x/\gamma) - p_{\frac{1}{\gamma} f^*}(y/\gamma) \Big\|^2 . \]
Pulling the 1/γ from the right side of the inner product out, and plugging in Equation 2, gives the result.

3.2 Notation

Let x^* be the unique minimizer (due to strong convexity) of f. In addition to the notation used in the description of the algorithm, we also fix a set of subgradients g^*_j, one for each f_j at x^*, chosen such that Σ_{j=1}^n g^*_j = 0. We also define v_j = x^* + γg^*_j. Note that at the solution x^*, we want to apply a proximal step for component j of the form:
\[ x^* = \mathrm{prox}^{\gamma}_j(x^* + \gamma g^*_j) = \mathrm{prox}^{\gamma}_j(v_j). \]

Lemma 4 (Technical lemma needed by main proof). Under Algorithm 1, taking the expectation over the random choice of j, conditioning on x^k and each g^k_i, allows us to bound the following inner product at step k:
\[ \mathbb{E}\Big\langle \gamma\Big[g^k_j - \frac{1}{n}\sum_{i=1}^{n} g^k_i\Big] - \gamma g^*_j,\;\; x^k - x^* + \gamma\Big[g^k_j - \frac{1}{n}\sum_{i=1}^{n} g^k_i\Big] - \gamma g^*_j \Big\rangle \le \gamma^2 \frac{1}{n}\sum_{i=1}^{n} \| g^k_i - g^*_i \|^2 . \]
The proof is in the supplementary material.

3.3 Main result

Theorem 5 (single step Lyapunov descent). We define the Lyapunov function T^k of our algorithm (Point-SAGA) at step k as:
\[ T^k = \frac{c}{n}\sum_{i=1}^{n} \| g^k_i - g^*_i \|^2 + \| x^k - x^* \|^2, \quad \text{for } c = 1/\mu L. \]
Then using step size
\[ \gamma = \frac{\sqrt{(n-1)^2 + 4n\frac{L}{\mu}}}{2Ln} - \frac{1 - \frac{1}{n}}{2L}, \]
the expectation of T^{k+1}, over the random choice of j, conditioning on x^k and each g^k_i, is:
\[ \mathbb{E}\big[T^{k+1}\big] \le (1 - \kappa)\, T^k \quad \text{for } \kappa = \frac{\mu\gamma}{1 + \mu\gamma}, \]
when each f_i : R^d → R is L-smooth and µ-strongly convex and 0 < µ < L. This is the same Lyapunov function as used by Hofmann et al. [2015].

Proof. Term 1 of T^{k+1} is straightforward to simplify:
\[ \frac{c}{n}\,\mathbb{E}\sum_{i=1}^{n} \| g^{k+1}_i - g^*_i \|^2 = \Big(1 - \frac{1}{n}\Big)\frac{c}{n}\sum_{i=1}^{n} \| g^k_i - g^*_i \|^2 + \frac{c}{n}\,\mathbb{E}\| g^{k+1}_j - g^*_j \|^2 . \]
For term 2 of T^{k+1} we start by applying cocoerciveness (Proposition 1):
\[ (1 + \mu\gamma)\,\mathbb{E}\| x^{k+1} - x^* \|^2 = (1 + \mu\gamma)\,\mathbb{E}\| \mathrm{prox}^{\gamma}_j(z^k_j) - \mathrm{prox}^{\gamma}_j(v_j) \|^2 \le \mathbb{E}\big\langle \mathrm{prox}^{\gamma}_j(z^k_j) - \mathrm{prox}^{\gamma}_j(v_j),\; z^k_j - v_j \big\rangle = \mathbb{E}\big\langle x^{k+1} - x^*,\; z^k_j - v_j \big\rangle . \]
Now we add and subtract x^k:
\[ = \mathbb{E}\big\langle x^{k+1} - x^k + x^k - x^*,\; z^k_j - v_j \big\rangle = \mathbb{E}\big\langle x^k - x^*,\; z^k_j - v_j \big\rangle + \mathbb{E}\big\langle x^{k+1} - x^k,\; z^k_j - v_j \big\rangle = \| x^k - x^* \|^2 + \mathbb{E}\big\langle x^{k+1} - x^k,\; z^k_j - v_j \big\rangle, \]
where we have pulled out the quadratic term by using E[z^k_j − v_j] = x^k − x^* (we can take the expectation since the left-hand side of the inner product doesn't depend on j). We now expand E⟨x^{k+1} − x^k, z^k_j − v_j⟩ further:
\[ \mathbb{E}\big\langle x^{k+1} - x^k,\; z^k_j - v_j \big\rangle = \mathbb{E}\big\langle x^{k+1} - \gamma g^*_j + \gamma g^*_j - x^k,\; z^k_j - v_j \big\rangle = \mathbb{E}\Big\langle x^k - \gamma g^{k+1}_j + \gamma\Big[g^k_j - \frac{1}{n}\sum_{i=1}^{n} g^k_i\Big] - \gamma g^*_j + \gamma g^*_j - x^k,\;\; x^k - x^* + \gamma\Big[g^k_j - \frac{1}{n}\sum_{i=1}^{n} g^k_i\Big] - \gamma g^*_j \Big\rangle . \tag{3} \]
We further split the left side of the inner product to give two separate inner products:
\[ = \mathbb{E}\Big\langle \gamma\Big[g^k_j - \frac{1}{n}\sum_{i=1}^{n} g^k_i\Big] - \gamma g^*_j,\;\; x^k - x^* + \gamma\Big[g^k_j - \frac{1}{n}\sum_{i=1}^{n} g^k_i\Big] - \gamma g^*_j \Big\rangle + \mathbb{E}\Big\langle \gamma g^*_j - \gamma g^{k+1}_j,\;\; x^k - x^* + \gamma\Big[g^k_j - \frac{1}{n}\sum_{i=1}^{n} g^k_i\Big] - \gamma g^*_j \Big\rangle . \tag{4} \]
The first inner product in Equation 4 is the quantity we bounded in Lemma 4 by γ²(1/n)Σ_{i=1}^n ‖g^k_i − g^*_i‖². The second inner product in Equation 4 can be simplified using Theorem 3 (note that the right side of the inner product is equal to z^k_j − v_j):
\[ -\gamma\,\mathbb{E}\big\langle g^{k+1}_j - g^*_j,\; z^k_j - v_j \big\rangle \le -\gamma^2\Big(1 + \frac{1}{L\gamma}\Big)\mathbb{E}\| g^{k+1}_j - g^*_j \|^2 . \]
Combining these gives the following bound on (1 + µγ)E‖x^{k+1} − x^*‖²:
\[ (1 + \mu\gamma)\,\mathbb{E}\| x^{k+1} - x^* \|^2 \le \| x^k - x^* \|^2 + \gamma^2 \frac{1}{n}\sum_{i=1}^{n} \| g^k_i - g^*_i \|^2 - \gamma^2\Big(1 + \frac{1}{L\gamma}\Big)\mathbb{E}\| g^{k+1}_j - g^*_j \|^2 . \]
Define α = 1/(1 + µγ) = 1 − κ, where κ = µγ/(1 + µγ).
Now we multiply the above inequality through by α and combine with the rest of the Lyapunov function, giving:
\[ \mathbb{E}\big[T^{k+1}\big] \le T^k + \Big(\alpha\gamma^2 - \frac{c}{n}\Big)\frac{1}{n}\sum_{i} \| g^k_i - g^*_i \|^2 + \Big(\frac{c}{n} - \alpha\gamma^2 - \frac{\alpha\gamma}{L}\Big)\mathbb{E}\| g^{k+1}_j - g^*_j \|^2 - \kappa\,\mathbb{E}\| x^k - x^* \|^2 . \]
We want an α convergence rate, so we pull out the required terms:
\[ \mathbb{E}\big[T^{k+1}\big] \le \alpha T^k + \Big(\alpha\gamma^2 + \kappa c - \frac{c}{n}\Big)\frac{1}{n}\sum_{i} \| g^k_i - g^*_i \|^2 + \Big(\frac{c}{n} - \alpha\gamma^2 - \frac{\alpha\gamma}{L}\Big)\mathbb{E}\| g^{k+1}_j - g^*_j \|^2 . \]
Now to complete the proof we note that c = 1/µL and
\[ \gamma = \frac{\sqrt{(n-1)^2 + 4n\frac{L}{\mu}}}{2Ln} - \frac{1 - \frac{1}{n}}{2L} \]
ensure that both terms inside the round brackets are non-positive, giving E[T^{k+1}] ≤ αT^k. These constants were found by setting the bracketed expressions to zero and solving with respect to the two unknowns, γ and c. It is easy to verify that γ is always positive, as a consequence of the condition number L/µ always being at least 1.

Corollary 6 (Smooth case). Chaining Theorem 5 gives a convergence rate for Point-SAGA at step k under the constants given in Theorem 5 of:
\[ \mathbb{E}\| x^k - x^* \|^2 \le (1 - \kappa)^k\, \frac{\mu + L}{\mu}\, \| x^0 - x^* \|^2, \]
if each f_i : R^d → R is L-smooth and µ-strongly convex.

Theorem 7 (Non-smooth case). Suppose each f_i : R^d → R is µ-strongly convex, ‖g^0_i − g^*_i‖ ≤ B and ‖x^0 − x^*‖ ≤ R. Then after k iterations of Point-SAGA with step size γ = R/B√n:
\[ \mathbb{E}\| \bar{x}^k - x^* \|^2 \le \frac{2\sqrt{n}\,\big(1 + \mu(R/B\sqrt{n})\big)}{\mu k}\, RB, \]
where \bar{x}^k = (1/k) E Σ_{t=1}^k x^t. The proof of this theorem is included in the supplementary material.

4 Implementation

Care must be taken for an efficient implementation, particularly in the sparse gradient case. We discuss the key points below. A fast Cython implementation incorporating these techniques is available on the author's website.

Proximal operators: For the most common binary classification and regression methods, implementing the proximal operator is straightforward. We include details of the computation of the proximal operators for the hinge, square and logistic losses in the supplementary material.
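Among the prox computations just mentioned, the logistic loss has no closed form; a hypothetical 1D Newton sketch for the scalar logistic φ(s) = log(1 + e^{−s}) illustrates the idea (the variable and function names here are illustrative, not taken from the supplementary material):

```python
import math

def sigmoid(t):
    # numerically stable logistic sigmoid
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    e = math.exp(t)
    return e / (1.0 + e)

def prox_logistic(z, gamma, iters=20):
    # prox of gamma*phi at z, with phi(s) = log(1 + exp(-s)).
    # Solve the 1D optimality condition h(s) = s - z - gamma*sigmoid(-s) = 0
    # by Newton's method; h is strictly increasing since h'(s) >= 1.
    s = z
    for _ in range(iters):
        h = s - z - gamma * sigmoid(-s)
        dh = 1.0 + gamma * sigmoid(s) * sigmoid(-s)
        s -= h / dh
    return s

# optimality check: s* - z = gamma * sigmoid(-s*)
z, gamma = 0.3, 2.0
s = prox_logistic(z, gamma)
print(abs(s - z - gamma * sigmoid(-s)))  # residual near machine precision
```

Since h' is bounded below by 1 and the curvature of h is mild, the Newton iteration converges rapidly from the warm start s = z, which matches the observation that the prox evaluation is cheap relative to the dot products in the main step.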
The logistic loss does not have a closed form proximal operator; however, it may be computed very efficiently in practice using Newton's method on a 1D subproblem. For problems of a non-trivial dimensionality the cost of the dot products in the main step is much greater than the cost of the proximal operator evaluation. We also detail how to handle a quadratic regularizer within each term's prox operator, which has a closed form in terms of the unregularized prox operator.

Initialization: Instead of setting g^0_i = f'_i(x^0) before commencing the algorithm, we recommend using g^0_i = 0 instead. This avoids the cost of an initial pass over the data. In practical effect this is similar to the SDCA initialization of each dual variable to 0.

5 Experiments

We tested our algorithm, which we call Point-SAGA, against SAGA [Defazio et al., 2014a], SDCA [Shalev-Shwartz and Zhang, 2013a], Pegasos/SGD [Shalev-Shwartz et al., 2011] and the catalyst acceleration scheme [Lin et al., 2015]. SDCA was chosen as the inner algorithm for the catalyst scheme as it doesn't require a step size, making it the most practical of the variants. Catalyst applied to SDCA is essentially the same algorithm as proposed in Shalev-Shwartz and Zhang [2013c]. A single inner epoch was used for each SDCA invocation. Accelerated MISO as well as the primal-dual FIG method [Lan and Zhou, 2015] were excluded as we wanted to test on sparse problems and they are not designed to take advantage of sparsity. The step-size parameter for each method (κ for catalyst-SDCA) was chosen using a grid search of powers of 2. The step size that gives the lowest error at the final epoch is used for each method. We selected a set of commonly used datasets from the LIBSVM repository [Chang and Lin, 2011]. The pre-scaled versions were used when available. Logistic regression with L2 regularization was applied to each problem.
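As an aside, the theoretical step size of Theorem 5 is a cheap closed form in (n, L, µ), so no tuning is needed when the constants are known; a small sketch with illustrative problem constants (an ill-conditioned regime where n < L/µ):

```python
import math

def point_saga_step_size(n, L, mu):
    # gamma from Theorem 5:
    # gamma = sqrt((n-1)^2 + 4n L/mu) / (2 L n) - (1 - 1/n) / (2 L)
    return (math.sqrt((n - 1) ** 2 + 4 * n * L / mu) / (2 * L * n)
            - (1 - 1.0 / n) / (2 * L))

n, L, mu = 10_000, 1.0, 1e-6           # illustrative constants, n < L/mu
gamma = point_saga_step_size(n, L, mu)
kappa = mu * gamma / (1 + mu * gamma)  # per-step contraction factor
assert gamma > 0                       # holds since L/mu >= 1
print(gamma, kappa)
```

The positivity of γ follows, as noted in the proof of Theorem 5, from the condition number L/µ being at least 1.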
The L2 regularization constant for each problem was set by hand to ensure f was not in the big data regime n ≥ L/µ; as noted above, all the methods perform essentially the same when n ≥ L/µ. The constant used is noted beneath each plot. Open source code to exactly replicate the experimental results is available at https://github.com/adefazio/point-saga.

Algorithm scaling with respect to n: The key property that distinguishes accelerated FIG methods from their non-accelerated counterparts is their performance scaling with respect to the dataset size. For large datasets on well-conditioned problems we expect from the theory to see little difference between the methods. To this end, we ran experiments including versions of the datasets subsampled randomly without replacement in 10% and 5% increments, in order to show the scaling with n empirically. The same amount of regularization was used for each subset.

Figure 1 shows the function value sub-optimality for each dataset-subset combination. We see that in general accelerated methods dominate the performance of their non-accelerated counterparts. Both SDCA and SAGA are comparatively much slower on some datasets than others. For example, SDCA is very slow on the 5 and 10% COVTYPE datasets, whereas both SAGA and SDCA are much slower than the accelerated methods on the AUSTRALIAN dataset. These differences reflect known properties of the two methods. SAGA is able to adapt to inherent strong convexity while SDCA can be faster on very well-conditioned problems. There is no clear winner between the two accelerated methods; each gives excellent results on each problem. The Pegasos (stochastic gradient descent) algorithm with its slower-than-linear rate is a clear loser on each problem, appearing as an almost horizontal line on the log scale of these plots.

Non-smooth problems: We also tested the RCV1 dataset on the hinge loss.
In general we did not expect an accelerated rate for this problem, and indeed we observe that Point-SAGA is roughly as fast as SDCA across the different dataset sizes.

[Figure 1: Experimental results. Function suboptimality per epoch for each method (Point-SAGA, Pegasos, SAGA, SDCA, Catalyst-SDCA). Panels: (a) COVTYPE, µ = 2 × 10^-6, 5%/10%/100% subsets; (b) AUSTRALIAN, µ = 10^-4, 5%/10%/100% subsets; (c) MUSHROOMS, µ = 10^-4, 5%/10%/100% subsets; (d) RCV1 with hinge loss, µ = 5 × 10^-5, 5%/10%/100% subsets.]

References

Chih-Chung Chang and Chih-Jen Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1–27:27, 2011.

Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives.
Advances in Neural Information Processing Systems 27 (NIPS 2014), 2014a.

Aaron Defazio, Tiberio Caetano, and Justin Domke. Finito: A faster, permutable incremental gradient method for big data problems. Proceedings of the 31st International Conference on Machine Learning, 2014b.

Thomas Hofmann, Aurelien Lucchi, Simon Lacoste-Julien, and Brian McWilliams. Variance reduced stochastic gradient descent with neighbors. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2296–2304. Curran Associates, Inc., 2015.

Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. NIPS, 2013.

Jakub Konečný and Peter Richtárik. Semi-Stochastic Gradient Descent Methods. ArXiv e-prints, December 2013.

G. Lan and Y. Zhou. An optimal randomized incremental gradient method. ArXiv e-prints, July 2015.

Hongzhou Lin, Julien Mairal, and Zaid Harchaoui. A universal catalyst for first-order optimization. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 3366–3374. Curran Associates, Inc., 2015.

Julien Mairal. Incremental majorization-minimization optimization with application to large-scale machine learning. Technical report, INRIA Grenoble Rhône-Alpes / LJK Laboratoire Jean Kuntzmann, 2014.

Yu. Nesterov. Introductory Lectures On Convex Programming. Springer, 1998.

Atsushi Nitanda. Stochastic proximal gradient descent with acceleration techniques. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 1574–1582. Curran Associates, Inc., 2014.

R. Tyrrell Rockafellar. Monotone operators and the proximal point algorithm. SIAM Journal on Control and Optimization, 14(5):877–898, 1976.

Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient.
Technical report, INRIA, 2013.

Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. JMLR, 2013a.

Shai Shalev-Shwartz and Tong Zhang. Accelerated mini-batch stochastic dual coordinate ascent. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 378–385. Curran Associates, Inc., 2013b.

Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Technical report, The Hebrew University, Jerusalem and Rutgers University, NJ, USA, 2013c.

Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30, 2011.
Coupled Generative Adversarial Networks Ming-Yu Liu Mitsubishi Electric Research Labs (MERL), mliu@merl.com Oncel Tuzel Mitsubishi Electric Research Labs (MERL), oncel@merl.com Abstract We propose coupled generative adversarial network (CoGAN) for learning a joint distribution of multi-domain images. In contrast to the existing approaches, which require tuples of corresponding images in different domains in the training set, CoGAN can learn a joint distribution without any tuple of corresponding images. It can learn a joint distribution with just samples drawn from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint distribution solution over a product of marginal distributions one. We apply CoGAN to several joint distribution learning tasks, including learning a joint distribution of color and depth images, and learning a joint distribution of face images with different attributes. For each task it successfully learns the joint distribution without any tuple of corresponding images. We also demonstrate its applications to domain adaptation and image transformation. 1 Introduction The paper concerns the problem of learning a joint distribution of multi-domain images from data. A joint distribution of multi-domain images is a probability density function that gives a density value to each joint occurrence of images in different domains such as images of the same scene in different modalities (color and depth images) or images of the same face with different attributes (smiling and non-smiling). Once a joint distribution of multi-domain images is learned, it can be used to generate novel tuples of images. In addition to movie and game production, joint image distribution learning finds applications in image transformation and domain adaptation. 
When training data are given as tuples of corresponding images in different domains, several existing approaches [1, 2, 3, 4] can be applied. However, building a dataset with tuples of corresponding images is often a challenging task. This correspondence dependency greatly limits the applicability of the existing approaches. To overcome the limitation, we propose the coupled generative adversarial networks (CoGAN) framework. It can learn a joint distribution of multi-domain images without existence of corresponding images in different domains in the training set. Only a set of images drawn separately from the marginal distributions of the individual domains is required. CoGAN is based on the generative adversarial networks (GAN) framework [5], which has been established as a viable solution for image distribution learning tasks. CoGAN extends GAN for joint image distribution learning tasks. CoGAN consists of a tuple of GANs, each for one image domain. When trained naively, the CoGAN learns a product of marginal distributions rather than a joint distribution. We show that by enforcing a weight-sharing constraint the CoGAN can learn a joint distribution without existence of corresponding images in different domains. The CoGAN framework is inspired by the idea that deep neural networks learn a hierarchical feature representation. By enforcing the layers that decode high-level semantics in the GANs to share the weights, it forces the GANs to decode the high-level semantics in the same way. The layers that decode low-level details then map the shared representation to images in individual domains for confusing the respective discriminative models. CoGAN is for multi-image domains but, for ease of presentation, we focused on the case of two image domains in the paper. However, the discussions and analyses can be easily generalized to multiple image domains. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
We apply CoGAN to several joint image distribution learning tasks. Through convincing visualization results and quantitative evaluations, we verify its effectiveness. We also show its applications to unsupervised domain adaptation and image transformation.

2 Generative Adversarial Networks

A GAN consists of a generative model and a discriminative model. The objective of the generative model is to synthesize images resembling real images, while the objective of the discriminative model is to distinguish real images from synthesized ones. Both the generative and discriminative models are realized as multilayer perceptrons. Let x be a natural image drawn from a distribution, pX, and z be a random vector in R^d. Note that we only consider the case where z is from a uniform distribution with a support of [−1, 1]^d, but different distributions such as a multivariate normal distribution can be applied as well. Let g and f be the generative and discriminative models, respectively. The generative model takes z as input and outputs an image, g(z), that has the same support as x. Denote the distribution of g(z) as pG. The discriminative model estimates the probability that an input image is drawn from pX. Ideally, f(x) = 1 if x ∼ pX and f(x) = 0 if x ∼ pG. The GAN framework corresponds to a minimax two-player game, and the generative and discriminative models can be trained jointly via solving
\[ \max_{g} \min_{f} V(f, g) \equiv \mathbb{E}_{x \sim p_X}[-\log f(x)] + \mathbb{E}_{z \sim p_Z}[-\log(1 - f(g(z)))]. \tag{1} \]
In practice (1) is solved by alternating the following two gradient update steps:
\[ \text{Step 1: } \theta_f^{t+1} = \theta_f^t - \lambda^t \nabla_{\theta_f} V(f^t, g^t), \qquad \text{Step 2: } \theta_g^{t+1} = \theta_g^t + \lambda^t \nabla_{\theta_g} V(f^{t+1}, g^t), \]
where θ_f and θ_g are the parameters of f and g, λ is the learning rate, and t is the iteration number. Goodfellow et al. [5] show that, given enough capacity to f and g and sufficient training iterations, the distribution, pG, converges to pX.
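As a small sanity check on the value function in (1): at the equilibrium pG = pX the optimal discriminator outputs f(x) = 1/2 everywhere, and V then equals 2 log 2 regardless of the distributions. A hypothetical Monte-Carlo estimator (all names here are illustrative):

```python
import math
import random

def value_fn(f, g, sample_x, sample_z, m=1000, seed=1):
    # Monte-Carlo estimate of
    # V(f, g) = E_x[-log f(x)] + E_z[-log(1 - f(g(z)))]
    rng = random.Random(seed)
    t1 = sum(-math.log(f(sample_x(rng))) for _ in range(m)) / m
    t2 = sum(-math.log(1 - f(g(sample_z(rng)))) for _ in range(m)) / m
    return t1 + t2

# Constant discriminator f = 1/2 (the equilibrium value when p_G = p_X):
v = value_fn(lambda x: 0.5,
             lambda z: z,
             sample_x=lambda r: r.gauss(0, 1),
             sample_z=lambda r: r.uniform(-1, 1))
print(abs(v - 2 * math.log(2)))  # deviation from 2 log 2
```

With a non-constant discriminator the same estimator gives the objective that the alternating Step 1 / Step 2 updates descend and ascend, respectively.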
In other words, from a random vector, z, the network g can synthesize an image, g(z), that resembles one that is drawn from the true distribution, pX.

3 Coupled Generative Adversarial Networks

CoGAN, as illustrated in Figure 1, is designed for learning a joint distribution of images in two different domains. It consists of a pair of GANs—GAN1 and GAN2; each is responsible for synthesizing images in one domain. During training, we force them to share a subset of parameters. This results in the GANs learning to synthesize pairs of corresponding images without correspondence supervision.

Generative Models: Let x1 and x2 be images drawn from the marginal distribution of the 1st domain, x1 ∼ pX1, and the marginal distribution of the 2nd domain, x2 ∼ pX2, respectively. Let g1 and g2 be the generative models of GAN1 and GAN2, which map a random vector input z to images that have the same support as x1 and x2, respectively. Denote the distributions of g1(z) and g2(z) by pG1 and pG2. Both g1 and g2 are realized as multilayer perceptrons:
\[ g_1(z) = g_1^{(m_1)}\Big(g_1^{(m_1-1)}\big(\ldots g_1^{(2)}\big(g_1^{(1)}(z)\big)\big)\Big), \qquad g_2(z) = g_2^{(m_2)}\Big(g_2^{(m_2-1)}\big(\ldots g_2^{(2)}\big(g_2^{(1)}(z)\big)\big)\Big), \]
where g_1^{(i)} and g_2^{(i)} are the ith layers of g1 and g2, and m1 and m2 are the numbers of layers in g1 and g2. Note that m1 need not equal m2. Also note that the support of x1 need not equal that of x2. Through layers of perceptron operations, the generative models gradually decode information from more abstract concepts to more material details. The first layers decode high-level semantics and the last layers decode low-level details. Note that this information flow direction is opposite to that in a discriminative deep neural network [6] where the first layers extract low-level features while the last layers extract high-level features. Based on the idea that a pair of corresponding images in two domains share the same high-level concepts, we force the first layers of g1 and g2 to have identical structure and share the weights.
That is, θ_{g_1^{(i)}} = θ_{g_2^{(i)}} for i = 1, 2, ..., k, where k is the number of shared layers, and θ_{g_1^{(i)}} and θ_{g_2^{(i)}} are the parameters of g_1^{(i)} and g_2^{(i)}, respectively. This constraint forces the high-level semantics to be decoded in the same way in g1 and g2. No constraints are enforced on the last layers; they can materialize the shared high-level representation differently for fooling the respective discriminators.

[Figure 1: CoGAN consists of a pair of GANs: GAN1 and GAN2. Each has a generative model for synthesizing realistic images in one domain and a discriminative model for classifying whether an image is real or synthesized. We tie the weights of the first few layers (responsible for decoding high-level semantics) of the generative models, g1 and g2. We also tie the weights of the last few layers (responsible for encoding high-level semantics) of the discriminative models, f1 and f2. This weight-sharing constraint allows CoGAN to learn a joint distribution of images without correspondence supervision. A trained CoGAN can be used to synthesize pairs of corresponding images—pairs of images sharing the same high-level abstraction but having different low-level realizations.]

Discriminative Models: Let f1 and f2 be the discriminative models of GAN1 and GAN2 given by
\[ f_1(x_1) = f_1^{(n_1)}\Big(f_1^{(n_1-1)}\big(\ldots f_1^{(2)}\big(f_1^{(1)}(x_1)\big)\big)\Big), \qquad f_2(x_2) = f_2^{(n_2)}\Big(f_2^{(n_2-1)}\big(\ldots f_2^{(2)}\big(f_2^{(1)}(x_2)\big)\big)\Big), \]
where f_1^{(i)} and f_2^{(i)} are the ith layers of f1 and f2, and n1 and n2 are the numbers of layers. The discriminative models map an input image to a probability score, estimating the likelihood that the input is drawn from a true data distribution. The first layers of the discriminative models extract low-level features, while the last layers extract high-level features.
Because the input images are realizations of the same high-level semantics in two different domains, we force f1 and f2 to have the same last layers, which is achieved by sharing the weights of the last layers via θ_{f_1^{(n_1 − i)}} = θ_{f_2^{(n_2 − i)}} for i = 0, 1, ..., l − 1, where l is the number of weight-sharing layers in the discriminative models, and θ_{f_1^{(i)}} and θ_{f_2^{(i)}} are the network parameters of f_1^{(i)} and f_2^{(i)}, respectively. The weight-sharing constraint in the discriminators helps reduce the total number of parameters in the network, but it is not essential for learning a joint distribution.

Learning: The CoGAN framework corresponds to a constrained minimax game given by
\[ \max_{g_1, g_2} \min_{f_1, f_2} V(f_1, f_2, g_1, g_2), \quad \text{subject to} \quad \theta_{g_1^{(i)}} = \theta_{g_2^{(i)}},\; i = 1, 2, \ldots, k, \quad \theta_{f_1^{(n_1 - j)}} = \theta_{f_2^{(n_2 - j)}},\; j = 0, 1, \ldots, l - 1, \tag{2} \]
where the value function V is given by
\[ V(f_1, f_2, g_1, g_2) = \mathbb{E}_{x_1 \sim p_{X_1}}[-\log f_1(x_1)] + \mathbb{E}_{z \sim p_Z}[-\log(1 - f_1(g_1(z)))] + \mathbb{E}_{x_2 \sim p_{X_2}}[-\log f_2(x_2)] + \mathbb{E}_{z \sim p_Z}[-\log(1 - f_2(g_2(z)))]. \tag{3} \]
In the game, there are two teams and each team has two players. The generative models form a team and work together for synthesizing a pair of images in two different domains for confusing the discriminative models. The discriminative models try to differentiate images drawn from the training data distribution in the respective domains from those drawn from the respective generative models. The collaboration between the players in the same team is established from the weight-sharing constraint. Similar to GAN, CoGAN can be trained by back propagation with the alternating gradient update steps. The details of the learning algorithm are given in the supplementary materials.

Remarks: CoGAN learning requires training samples drawn from the marginal distributions, pX1 and pX2. It does not rely on samples drawn from the joint distribution, pX1,X2, where corresponding supervision would be available.
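The weight-sharing constraints in (2) amount to the first k generator layers (and last l discriminator layers) being the same parameter objects in both networks; a tiny structural sketch, with hypothetical layer counts and names:

```python
# Two "generators" as stacks of layer-parameter objects. The first k
# layers are the *same* objects, so any update to g1's shared layers
# is automatically reflected in g2 (hypothetical 5-layer setup).
class Layer:
    def __init__(self, w):
        self.w = w

def make_cogan_generators(k, m=5):
    shared = [Layer(0.0) for _ in range(k)]           # decode high-level semantics
    g1 = shared + [Layer(0.0) for _ in range(m - k)]  # domain-1 low-level details
    g2 = shared + [Layer(0.0) for _ in range(m - k)]  # domain-2 low-level details
    return g1, g2

g1, g2 = make_cogan_generators(k=3)
g1[0].w = 1.23              # "update" a shared high-level layer via g1
assert g2[0].w == 1.23      # constraint theta_g1^(i) = theta_g2^(i) holds, i <= k
assert g1[4] is not g2[4]   # the last layers remain unshared
print("sharing holds")
```

In a deep-learning framework the same effect is obtained by passing one parameter tensor to both networks, so a single gradient step updates the tied layers of GAN1 and GAN2 simultaneously.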
Our main contribution is in showing that with just samples drawn separately from the marginal distributions, CoGAN can learn a joint distribution of images in the two domains. Both the weight-sharing constraint and adversarial training are essential for enabling this capability. Unlike autoencoder learning [3], which encourages a generated pair of images to be identical to the target pair of corresponding images in the two domains for minimizing the reconstruction loss¹, the adversarial training only encourages the generated pair of images to be individually resembling the images in the respective domains.

¹This is why [3] requires samples from the joint distribution for learning the joint distribution.

[Figure 2: Left (Task A): generation of digit and corresponding edge images. Right (Task B): generation of digit and corresponding negative images. Each of the top and bottom pairs was generated using the same input noise. We visualized the results by traversing in the input space.]

[Figure 3: Average pixel agreement ratios of the CoGANs with different weight-sharing configurations for Task A (pair generation of digit and edge images) and Task B (pair generation of digit and negative images), plotted against the number of weight-sharing layers in the discriminative models, for generative models sharing 1 to 4 layers. The larger the pixel agreement ratio the better the pair generation performance. We found that the performance was positively correlated with the number of weight-sharing layers in the generative models but was uncorrelated to the number of weight-sharing layers in the discriminative models. CoGAN learned the joint distribution without weight-sharing layers in the discriminative models.]
With this more relaxed adversarial training setting, the weight-sharing constraint can then kick in for capturing correspondences between domains. With the weight-sharing constraint, the generative models must utilize the capacity more efficiently for fooling the discriminative models, and the most efficient way of utilizing the capacity for generating a pair of realistic images in two domains is to generate a pair of corresponding images, since the neurons responsible for decoding high-level semantics can be shared. CoGAN learning is based on the existence of shared high-level representations in the domains. If such a representation does not exist for the set of domains of interest, it would fail.

4 Experiments

In the experiments, we emphasize that there were no corresponding images in the different domains in the training sets. CoGAN learned the joint distributions without correspondence supervision. We were unaware of existing approaches with the same capability and hence did not compare CoGAN with prior works. Instead, we compared it to a conditional GAN to demonstrate its advantage. Recognizing that popular performance metrics for evaluating generative models are all subject to issues [7], we adopted a pair image generation performance metric for comparison. Many details including the network architectures and additional experiment results are given in the supplementary materials. An implementation of CoGAN is available at https://github.com/mingyuliutw/cogan.

Digits: We used the MNIST training set to train CoGANs for the following two tasks. Task A is about learning a joint distribution of a digit and its edge image. Task B is about learning a joint distribution of a digit and its negative image. In Task A, the 1st domain consisted of the original handwritten digit images, while the 2nd domain consisted of their edge images. We used an edge detector to compute training edge images for the 2nd domain.
In the supplementary materials, we also showed an experiment for learning a joint distribution of a digit and its 90-degree in-plane rotation. We used deep convolutional networks to realize the CoGAN. The two generative models had an identical structure; both had 5 layers and were fully convolutional. The stride lengths of the convolutional layers were fractional. The models also employed the batch normalization processing [8] and the parameterized rectified linear unit processing [9]. We shared the parameters for all the layers except for the last convolutional layers. For the discriminative models, we used a variant of LeNet [10]. The inputs to the discriminative models were batches containing output images from the generative models and images from the two training subsets (each pixel value is linearly scaled to [0, 1]). We divided the training set into two equal-size non-overlapping subsets. One was used to train GAN1 and the other was used to train GAN2. We used the ADAM algorithm [11] for training and set the learning rate to 0.0002, the 1st momentum parameter to 0.5, and the 2nd momentum parameter to 0.999 as suggested in [12]. The mini-batch size was 128. We trained the CoGAN for 25000 iterations. These hyperparameters were fixed for all the visualization experiments.

The CoGAN learning results are shown in Figure 2. We found that although the CoGAN was trained without corresponding images, it learned to render corresponding ones for both Task A and B. This was due to the weight-sharing constraint imposed on the layers that were responsible for decoding high-level semantics. Exploiting the correspondence between the two domains allowed GAN1 and GAN2 to utilize more capacity in the networks to better fit the training data. Without the weight-sharing constraint, the two GANs just generated two unrelated images in the two domains.
Weight Sharing: We varied the numbers of weight-sharing layers in the generative and discriminative models to create different CoGANs for analyzing the weight-sharing effect for both tasks. Due to lack of proper validation methods, we did a grid search on the training iteration hyperparameter and reported the best performance achieved by each network. For quantifying the performance, we transformed the image generated by GAN1 to the 2nd domain using the same method employed for generating the training images in the 2nd domain. We then compared the transformed image with the image generated by GAN2. A perfect joint distribution learning should render two identical images. Hence, we used the ratios of agreed pixels between 10K pairs of images generated by each network (10K randomly sampled z) as the performance metric. We trained each network 5 times with different initialization weights and reported the average pixel agreement ratios over the 5 trials for each network. The results are shown in Figure 3. We observed that the performance was positively correlated with the number of weight-sharing layers in the generative models. With more sharing layers in the generative models, the rendered pairs of images resembled true pairs drawn from the joint distribution more. We also noted that the performance was uncorrelated to the number of weight-sharing layers in the discriminative models. However, we still preferred discriminator weight-sharing because this reduces the total number of network parameters. Comparison with Conditional GANs: We compared the CoGAN with the conditional GANs [13]. We designed a conditional GAN with the generative and discriminative models identical to those in the CoGAN. The only difference was the conditional GAN took an additional binary variable as input, which controlled the domain of the output image. 
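The pixel agreement ratio used as the performance metric above is simple to compute; a minimal sketch for binary images given as equal-length flat lists of 0/1 pixels (the helper name is illustrative; the paper averages this over 10K sampled z and 5 training trials):

```python
# Fraction of pixels on which the transformed GAN1 outputs and the
# GAN2 outputs agree, pooled over all image pairs.
def pixel_agreement_ratio(imgs_a, imgs_b):
    total = agreed = 0
    for a, b in zip(imgs_a, imgs_b):
        for pa, pb in zip(a, b):
            total += 1
            agreed += (pa == pb)
    return agreed / total

a = [[1, 0, 1, 1], [0, 0, 1, 0]]
b = [[1, 0, 0, 1], [0, 1, 1, 0]]
print(pixel_agreement_ratio(a, b))  # -> 0.75
```

A perfect joint-distribution learner would score 1.0, since transforming the GAN1 image with the same procedure used to build the 2nd-domain training set should reproduce the GAN2 image exactly.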
When the binary variable was 0, it generated an image resembling images in the 1st domain; otherwise, it generated an image resembling images in the 2nd domain. As with the CoGAN, no pairs of corresponding images were given during the conditional GAN training. We applied the conditional GAN to both Tasks A and B and hoped to empirically answer whether a conditional model can learn to render corresponding images without correspondence supervision. The pixel agreement ratio was used as the performance metric. The experimental results showed that for Task A, CoGAN achieved an average ratio of 0.952, outperforming the 0.909 achieved by the conditional GAN. For Task B, CoGAN achieved a score of 0.967, which was much better than the 0.778 achieved by the conditional GAN. The conditional GAN just generated two different digits with the same random noise input but different binary variable values. These results showed that the conditional model failed to learn a joint distribution from samples drawn from the marginal distributions. We note that when the supports of the two domains are different, as for the color and depth image domains, the conditional model cannot even be applied. Faces: We applied CoGAN to learn a joint distribution of face images with different attributes. We trained several CoGANs, each for generating a face with an attribute and a corresponding face without the attribute. We used the CelebFaces Attributes dataset [14] for the experiments. The dataset covers large pose variations and background clutter. Each face image had several attributes, including blond hair, smiling, and eyeglasses. The face images with an attribute constituted the 1st domain, and those without the attribute constituted the 2nd domain. No corresponding face images between the two domains were given. We resized the images to a resolution of 132 × 132 and randomly sampled 128 × 128 regions for training.
The generative and discriminative models were both 7-layer deep convolutional neural networks. The experimental results are shown in Figure 4. We randomly sampled two points in the 100-dimensional input noise space and visualized the rendered face images as traveling from one point to the other. (Figure 4: Generation of face images with different attributes using CoGAN. From top to bottom, the figure shows pair face generation results for the blond-hair, smiling, and eyeglasses attributes. For each pair, the 1st row contains faces with the attribute, while the 2nd row contains corresponding faces without the attribute.) We found CoGAN generated pairs of corresponding faces, resembling those of the same person with and without an attribute. As we traveled in the space, the faces gradually changed from one person to another, and the deformations were consistent across both domains. Note that it is difficult to create a dataset with corresponding images for some attributes, such as blond hair, since the subjects would have to color their hair; an approach like CoGAN that does not require corresponding images is preferable. We also noted that the number of faces with an attribute was often several times smaller than the number without it in the dataset. However, CoGAN learning was not hindered by the mismatch. Color and Depth Images: We used the RGBD dataset [15] and the NYU dataset [16] for learning a joint distribution of color and depth images. The RGBD dataset contains registered color and depth images of 300 objects captured by the Kinect sensor from different viewpoints. We partitioned the dataset into two equal-size non-overlapping subsets. The color images in the 1st subset were used for training GAN1, while the depth images in the 2nd subset were used for training GAN2. There were no corresponding depth and color images in the two subsets. The images in the RGBD dataset have different resolutions. We resized them to a fixed resolution of 64 × 64.
The NYU dataset contains color and depth images captured from indoor scenes using the Kinect sensor. We used the 1449 processed depth images for the depth domain. The training images for the color domain were all the color images in the raw dataset except for those registered with the processed depth images. We resized both the depth and color images to a resolution of 176 × 132 and randomly cropped 128 × 128 patches for training. Figure 5 shows the generation results. (Figure 5: Generation of color and depth images using CoGAN. The top figure shows the results for the RGBD dataset: the 1st row contains the color images, the 2nd row contains the depth images, and the 3rd and 4th rows visualize the depth profile under different viewpoints. The bottom figure shows the results for the NYU dataset.) We found the rendered color and depth images resembled corresponding RGB and depth image pairs even though no registered images existed across the two training domains. The CoGAN recovered the appearance-depth correspondence in an unsupervised fashion. 5 Applications In addition to rendering novel pairs of corresponding images for movie and game production, the CoGAN finds applications in unsupervised domain adaptation and image transformation tasks. Unsupervised Domain Adaptation (UDA): UDA concerns adapting a classifier trained in one domain to classify samples in a new domain, where no labeled examples are available in the new domain for re-training the classifier. Early works have explored ideas from subspace learning [17, 18] to deep discriminative network learning [19, 20, 21]. We show that CoGAN can be applied to the UDA problem. We studied the problem of adapting a digit classifier from the MNIST dataset to the USPS dataset. Due to domain shift, a classifier trained using one dataset achieves poor performance on the other.
We followed the experiment protocol of [17, 20], which randomly samples 2000 images from the MNIST dataset, denoted as D1, and 1800 images from the USPS dataset, denoted as D2, to define a UDA problem. The USPS digits have a different resolution, so we resized them to the resolution of the MNIST digits. We employed the CoGAN used for the digit generation task. For classifying digits, we attached a softmax layer to the last hidden layer of the discriminative models. We trained the CoGAN by jointly solving the digit classification problem in the MNIST domain, which used the images and labels in D1, and the CoGAN learning problem, which used the images in both D1 and D2. This produced two classifiers: c1(x1) ≡ c(f1^(3)(f1^(2)(f1^(1)(x1)))) for MNIST and c2(x2) ≡ c(f2^(3)(f2^(2)(f2^(1)(x2)))) for USPS, where c denotes the softmax layer. No label information in D2 was used. Note that f1^(2) ≡ f2^(2) and f1^(3) ≡ f2^(3) due to weight sharing. We then applied c2 to classify digits in the USPS dataset. The classifier adaptation from USPS to MNIST can be achieved in the same way. The learning hyperparameters were determined via a validation set. We reported the average accuracy over 5 trials with different randomly selected D1 and D2. Table 1 reports the performance of the proposed CoGAN approach in comparison to the state-of-the-art methods for the UDA task. The results for the other methods were reproduced from [20]. We observed that CoGAN significantly outperformed the state-of-the-art methods. It improved the accuracy from 0.64 to 0.90, which translates to a 72% error reduction rate. Cross-Domain Image Transformation: Let x1 be an image in the 1st domain.
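The adapted-classifier construction just described can be sketched with toy stand-in layers. Everything below (shapes, weights, the argmax "softmax" head) is hypothetical; only the sharing pattern mirrors the text: f^(1) is domain-specific, while f^(2), f^(3) and c are shared, so c2 classifies USPS inputs with features trained on MNIST labels.

```python
# Toy sketch of the UDA classifier composition c_i = c(f_i^(3)(f_i^(2)(f_i^(1)(x)))).
f1_mnist = lambda x: [xi * 0.5 for xi in x]   # domain-specific layer (MNIST)
f1_usps  = lambda x: [xi * 0.7 for xi in x]   # domain-specific layer (USPS)
f2 = lambda h: [hi + 1.0 for hi in h]          # shared layer, f1^(2) == f2^(2)
f3 = lambda h: [hi * 2.0 for hi in h]          # shared layer, f1^(3) == f2^(3)
c  = lambda h: max(range(len(h)), key=lambda i: h[i])  # argmax stand-in for softmax

c1 = lambda x: c(f3(f2(f1_mnist(x))))  # MNIST classifier (trained with labels)
c2 = lambda x: c(f3(f2(f1_usps(x))))   # USPS classifier (no USPS labels used)
```
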
Cross-domain image transformation is about finding the corresponding image in the 2nd domain, x2, such that the joint probability density, p(x1, x2), is maximized.

Table 1: Unsupervised domain adaptation performance comparison. The table reports classification accuracies achieved by competing algorithms.

Method             | [17]  | [18]  | [19]  | [20]  | CoGAN
From MNIST to USPS | 0.408 | 0.467 | 0.478 | 0.607 | 0.912 ± 0.008
From USPS to MNIST | 0.274 | 0.355 | 0.631 | 0.673 | 0.891 ± 0.008
Average            | 0.341 | 0.411 | 0.554 | 0.640 | 0.902

(Figure 6: Cross-domain image transformation. For each pair, left is the input; right is the transformed image.)

Let L be a loss function measuring the difference between two images. Given g1 and g2, the transformation can be achieved by first finding the random vector that generates the query image in the 1st domain, z* = argmin_z L(g1(z), x1). After finding z*, one can apply g2 to obtain the transformed image, x2 = g2(z*). In Figure 6, we show several CoGAN cross-domain transformation results, computed using the Euclidean loss function and the L-BFGS optimization algorithm. We found the transformation was successful when the input image was covered by g1 (i.e., the input image could be generated by g1) but produced blurry images when it was not. To improve the coverage, we hypothesize that more training images and a better objective function are required, which we leave as future work. 6 Related Work Neural generative models have recently received an increasing amount of attention. Several approaches, including generative adversarial networks [5], variational autoencoders (VAE) [22], attention models [23], moment matching [24], stochastic back-propagation [25], and diffusion processes [26], have shown that a deep network can learn an image distribution from samples. The learned networks can be used to generate novel images. Our work was built on [5]. However, we studied a different problem, the problem of learning a joint distribution of multi-domain images.
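The two-step transformation recipe described above, z* = argmin_z L(g1(z), x1) followed by x2 = g2(z*), can be illustrated with a 1-D toy example. The linear generators and plain gradient descent below are stand-ins for the deep networks and L-BFGS used in the paper.

```python
# 1-D toy: recover z* for a query x1 by gradient descent on the Euclidean
# loss (g1(z) - x1)^2, then decode z* with the other generator.
g1 = lambda z: 2.0 * z        # stand-in domain-1 generator
g2 = lambda z: z + 1.0        # stand-in domain-2 generator

def transform(x1, steps=200, lr=0.05):
    z = 0.0
    for _ in range(steps):
        grad = 2.0 * (g1(z) - x1) * 2.0   # d/dz (g1(z) - x1)^2, chain rule
        z -= lr * grad
    return g2(z)                           # transformed "image" in domain 2
```

For x1 = 4.0 the optimum is z* = 2, so the transformed output is g2(2) = 3.
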
We were interested in whether a joint distribution of images in different domains can be learned from samples drawn separately from its marginal distributions over the individual domains. We showed that this is achievable via the proposed CoGAN framework. Note that our work is different from the Attribute2Image work [27], which is based on a conditional VAE model [28]. The conditional model can be used to generate images of different styles, but it is unsuitable for generating images in two different domains such as the color and depth image domains. Following [5], several works improved the image generation quality of GANs, including a Laplacian pyramid implementation [29], a deeper architecture [12], and conditional models [13]. Our work extended the GAN to deal with joint distributions of images. Our work is related to prior work in multi-modal learning, including joint embedding space learning [30] and multi-modal Boltzmann machines [1, 3]. These approaches can be used for generating corresponding samples in different domains only when correspondence annotations are given during training. The same limitation also applies to dictionary-learning-based approaches [2, 4]. Our work is also related to prior work on cross-domain image generation [31, 32, 33], which studied transforming an image in one style into the corresponding image in another style. However, we focus on learning the joint distribution in an unsupervised fashion, while [31, 32, 33] focus on learning a transformation function directly in a supervised fashion. 7 Conclusion We presented the CoGAN framework for learning a joint distribution of multi-domain images. We showed that by enforcing a simple weight-sharing constraint on the layers that are responsible for decoding abstract semantics, the CoGAN learned the joint distribution of images using only samples drawn separately from the marginal distributions.
In addition to convincing image generation results on faces and RGBD images, we also showed promising results of the CoGAN framework for the image transformation and unsupervised domain adaptation tasks. References [1] Nitish Srivastava and Ruslan R Salakhutdinov. Multimodal learning with deep Boltzmann machines. In NIPS, 2012. [2] Shenlong Wang, Lei Zhang, Yan Liang, and Quan Pan. Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis. In CVPR, 2012. [3] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y Ng. Multimodal deep learning. In ICML, 2011. [4] Jianchao Yang, John Wright, Thomas S Huang, and Yi Ma. Image super-resolution via sparse representation. IEEE TIP, 2010. [5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014. [6] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012. [7] Lucas Theis, Aäron van den Oord, and Matthias Bethge. A note on the evaluation of generative models. In ICLR, 2016. [8] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015. [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. In ICCV, 2015. [10] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 1998. [11] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. [12] Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. [13] Mehdi Mirza and Simon Osindero.
Conditional generative adversarial nets. arXiv:1411.1784, 2014. [14] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In ICCV, 2015. [15] Kevin Lai, Liefeng Bo, Xiaofeng Ren, and Dieter Fox. A large-scale hierarchical multi-view rgb-d object dataset. In ICRA, 2011. [16] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In ECCV, 2012. [17] Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, and Philip Yu. Transfer feature learning with joint distribution adaptation. In ICCV, 2013. [18] Basura Fernando, Tatiana Tommasi, and Tinne Tuytelaars. Joint cross-domain classification and subspace learning for unsupervised adaptation. Pattern Recognition Letters, 65:60–66, 2015. [19] Eric Tzeng, Judy Hoffman, Ning Zhang, Kate Saenko, and Trevor Darrell. Deep domain confusion: Maximizing for domain invariance. arXiv:1412.3474, 2014. [20] Artem Rozantsev, Mathieu Salzmann, and Pascal Fua. Beyond sharing weights for deep domain adaptation. arXiv:1603.06432, 2016. [21] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. JMLR, 2016. [22] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In ICLR, 2014. [23] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. Draw: A recurrent neural network for image generation. In ICML, 2015. [24] Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. ICML, 2016. [25] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. ICML, 2014. [26] Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, 2015. [27] Xinchen Yan, Jimei Yang, Kihyuk Sohn, and Honglak Lee. 
Attribute2image: Conditional image generation from visual attributes. arXiv:1512.00570, 2015. [28] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In NIPS, 2014. [29] Emily L Denton, Soumith Chintala, Rob Fergus, et al. Deep generative image models using a Laplacian pyramid of adversarial networks. In NIPS, 2015. [30] Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv:1411.2539, 2014. [31] Junho Yim, Heechul Jung, ByungIn Yoo, Changkyu Choi, Dusik Park, and Junmo Kim. Rotating your face using multi-task deep neural network. In CVPR, 2015. [32] Scott E Reed, Yi Zhang, Yuting Zhang, and Honglak Lee. Deep visual analogy-making. In NIPS, 2015. [33] Alexey Dosovitskiy, Jost Tobias Springenberg, and Thomas Brox. Learning to generate chairs with convolutional neural networks. In CVPR, 2015.
Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels Ilya Tolstikhin Department of Empirical Inference MPI for Intelligent Systems Tübingen 72076, Germany ilya@tuebingen.mpg.de Bharath K. Sriperumbudur Department of Statistics Pennsylvania State University University Park, PA 16802, USA bks18@psu.edu Bernhard Schölkopf Department of Empirical Inference MPI for Intelligent Systems Tübingen 72076, Germany bs@tuebingen.mpg.de Abstract Maximum Mean Discrepancy (MMD) is a distance on the space of probability measures which has found numerous applications in machine learning and nonparametric testing. This distance is based on the notion of embedding probabilities in a reproducing kernel Hilbert space. In this paper, we present the first known lower bounds for the estimation of MMD based on finite samples. Our lower bounds hold for any radial universal kernel on Rd and match the existing upper bounds up to constants that depend only on the properties of the kernel. Using these lower bounds, we establish the minimax rate optimality of the empirical estimator and its U-statistic variant, which are usually employed in applications. 1 Introduction Over the past decade, the notion of embedding probability measures in a Reproducing Kernel Hilbert Space (RKHS) [1, 13, 18, 17] has gained a lot of attention in machine learning, owing to its wide applicability. Some popular applications of RKHS embedding of probabilities include two-sample testing [5, 6], independence [7] and conditional independence testing [3], feature selection [14], covariate-shift [13], causal discovery [9], density estimation [15], kernel Bayes' rule [4], and distribution regression [20]. This notion of embedding probability measures can be seen as a generalization of classical kernel methods which deal with embedding points of an input space as elements in an RKHS.
Formally, given a probability measure P and a continuous positive definite real-valued kernel k (we denote by H the corresponding RKHS) defined on a separable topological space X, P is embedded into H as µP := ∫ k(·, x) dP(x), called the mean element or the kernel mean, assuming k and P satisfy ∫_X √(k(x, x)) dP(x) < ∞. Based on the above embedding of P, [5] defined a distance on the space of probability measures, called the Maximum Mean Discrepancy (MMD), as the distance between the corresponding mean elements, i.e., MMDk(P, Q) = ‖µP − µQ‖H. We refer the reader to [18, 17] for a detailed study of the properties of MMD and its relation to other distances on probabilities. Estimation of kernel mean. In all the above-mentioned applications, since the only knowledge of the underlying distribution is through random samples drawn from it, an estimate of µP is employed in practice.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

In applications such as the two-sample test [5, 6] and independence test [7] that involve MMD, an estimate of MMD is constructed based on the estimates of µP and µQ respectively. The simplest and most popular estimator of µP is the empirical estimator, µPn := (1/n) Σ_{i=1}^n k(·, Xi), which is a Monte Carlo approximation of µP based on random samples (Xi)_{i=1}^n drawn i.i.d. from P. Recently, [10] proposed a shrinkage estimator of µP based on the idea of James-Stein shrinkage, which was demonstrated to empirically outperform µPn. While both these estimators are known to be √n-consistent [13, 5, 10], it was not clear until the recent work of [21] whether any of these estimators is minimax rate optimal, i.e., whether there is an estimator of µP that yields a convergence rate faster than n^{−1/2}.
Based on the minimax optimality of the sample mean (i.e., X̄ := (1/n) Σ_{i=1}^n Xi) for the estimation of a finite-dimensional mean of a normal distribution at the minimax rate n^{−1/2} [8, Chapter 5, Example 1.14], one can intuitively argue that the empirical and shrinkage estimators of µP are minimax rate optimal, but it is difficult to extend the finite-dimensional argument rigorously to the estimation of the infinite-dimensional object µP. Note that H is infinite dimensional if k is universal [19, Chapter 4], e.g., the Gaussian kernel. By establishing a remarkable relation between the MMD of two Gaussian distributions and the Euclidean distance between their means for any bounded continuous translation-invariant universal kernel on X = R^d, [21] rigorously showed that the estimation of µP is only as hard as the estimation of the finite-dimensional mean of a normal distribution, and thereby established the minimax rate of estimating µP to be n^{−1/2}. This in turn demonstrates the minimax rate optimality of the empirical and shrinkage estimators of µP. Estimation of MMD. In this paper, we are interested in the minimax optimal estimation of MMDk(P, Q). The question of finding optimal estimators of MMD is of interest in applications such as kernel-based two-sample [5] and independence tests [7], as the test statistic is an estimate of MMD and it is important to use statistically optimal estimators in the construction of these kernel-based tests. An estimator of MMD that is currently employed in these applications is based on the empirical estimators of µP and µQ, i.e., MMDn,m := ‖µPn − µQm‖H, which is constructed from samples (Xi)_{i=1}^n i.i.d. ∼ P and (Yi)_{i=1}^m i.i.d. ∼ Q. [5, 7] also considered a U-statistic variant of MMDn,m as a test statistic in these applications. As discussed above, while µPn and µQm are minimax rate optimal estimators of µP and µQ respectively, this need not guarantee that MMDn,m is minimax rate optimal.
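Since MMDn,m depends on the data only through kernel evaluations, it can be computed without ever forming µPn or µQm explicitly, via ‖µPn − µQm‖²_H = (1/n²)Σ k(Xi, Xj) + (1/m²)Σ k(Yi, Yj) − (2/nm)Σ k(Xi, Yj). A minimal sketch for scalar samples with a Gaussian kernel (toy setting; in the paper X = R^d):

```python
import math

def gaussian_k(x, y, eta2=1.0):
    """Gaussian kernel exp(-(x - y)^2 / (2 * eta2)) on scalars."""
    return math.exp(-(x - y) ** 2 / (2 * eta2))

def mmd_plugin(xs, ys, k=gaussian_k):
    """Plug-in estimator MMD_{n,m} = ||mu_Pn - mu_Qm||_H via kernel sums."""
    n, m = len(xs), len(ys)
    kxx = sum(k(a, b) for a in xs for b in xs) / n**2
    kyy = sum(k(a, b) for a in ys for b in ys) / m**2
    kxy = sum(k(a, b) for a in xs for b in ys) / (n * m)
    return math.sqrt(max(kxx + kyy - 2 * kxy, 0.0))  # clamp tiny negatives
```

Identical samples give an estimate of 0, and well-separated samples give a large value (bounded by √2 for this kernel).
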
Using the fact that ‖µPn − µP‖H = Op(n^{−1/2}) and |MMDk(P, Q) − MMDn,m| ≤ ‖µP − µPn‖H + ‖µQm − µQ‖H, it is easy to see that

|MMDk(P, Q) − MMDn,m| = Op(n^{−1/2} + m^{−1/2}).    (1)

In fact, if k is a bounded kernel, it can be shown that the constants (hidden in the order notation in (1)) depend only on the bound on the kernel and are independent of X, P and Q. The goal of this work is to find the minimax rate r_{n,m,k}(P) and a positive constant c_k(P) (independent of m and n) such that

inf_{F̂n,m} sup_{P,Q∈P} P^n × Q^m { r_{n,m,k}^{−1}(P) |F̂n,m − MMDk(P, Q)| ≥ c_k(P) } > 0,    (2)

where P is a suitable subset of Borel probability measures on X, the infimum is taken over all estimators F̂n,m mapping the i.i.d. sample {(Xi)_{i=1}^n, (Yi)_{i=1}^m} to R+, and P^n × Q^m denotes the probability measure associated with the sample when (Xi)_{i=1}^n i.i.d. ∼ P and (Yi)_{i=1}^m i.i.d. ∼ Q. In addition to the rate, we are also interested in the behavior of c_k(P) in terms of its dependence on k, X and P.

Contributions. The main contribution of the paper is in establishing m^{−1/2} + n^{−1/2}, i.e., r_{n,m,k}(P) = √((m + n)/(mn)), as the minimax rate for estimating MMDk(P, Q) when k is a radial universal kernel (examples include the Gaussian, Matérn and inverse multiquadric kernels) on R^d and P is the set of all Borel probability measures on R^d with infinitely differentiable densities. This result guarantees that MMDn,m and its U-statistic variant are minimax rate optimal estimators of MMDk(P, Q), which thereby ensures the minimax optimality of the test statistics used in kernel two-sample and independence tests. We would like to highlight the fact that our minimax lower bound on MMDk(P, Q) implies part of the results of [21] related to the minimax estimation of µP, as any ε-accurate estimators µ̂P and µ̂Q of µP and µQ respectively in the RKHS norm lead to the 2ε-accurate estimator F̂n,m := ‖µ̂P − µ̂Q‖H of MMDk(P, Q), i.e., c_k(P)(n^{−1/2} + m^{−1/2}) ≤ |MMDk(P, Q) − F̂n,m| ≤ ‖µP − µ̂P‖H + ‖µQ − µ̂Q‖H.
In Section 2, we present the main results of our work, wherein Theorem 1 is developed by employing the ideas of [21] involving Le Cam's method (see Theorem 3) [22, Sections 2.3 and 2.6]. However, we show that while the minimax rate is m^{−1/2} + n^{−1/2}, there is a sub-optimal dependence on d in the constant c_k(P), which makes the result uninteresting in high-dimensional scenarios. To alleviate this issue, we present a refined result in Theorem 2 based on the method of two fuzzy hypotheses (see Theorem 4) [22, Section 2.7.4], which shows that c_k(P) in (2) is independent of d (i.e., of X). This result provides a sharp lower bound for MMD estimation, both in terms of the rate and the constant (which is independent of X), that matches the behavior of the upper bound for MMDn,m. The proofs of these results are provided in Section 3, while supplementary results are collected in an appendix.

Notation. In this work we focus on radial kernels, i.e., kernels for which k(x, y) depends on x and y only through ‖x − y‖² for all x, y ∈ R^d. Schoenberg's theorem [12] states that a radial kernel k is positive definite for every d if and only if there exists a non-negative finite Borel measure ν on [0, ∞) such that

k(x, y) = ∫_0^∞ e^{−t‖x−y‖²} dν(t)    (3)

for all x, y ∈ R^d. An important example of a radial kernel is the Gaussian kernel k(x, y) = exp{−‖x − y‖²/(2η²)} for η² > 0. [17, Proposition 5] showed that k in (3) is universal if and only if supp(ν) ≠ {0}, where for a finite non-negative Borel measure µ on R^d we define supp(µ) = {x ∈ R^d : if x ∈ U and U is open, then µ(U) > 0}.

2 Main results

In this section, we present the main results of our work, wherein we develop minimax lower bounds for the estimation of MMDk(P, Q) when k is a radial universal kernel on R^d. We show that the minimax rate for estimating MMDk(P, Q) based on random samples (Xi)_{i=1}^n i.i.d. ∼ P and (Yi)_{i=1}^m i.i.d. ∼ Q is m^{−1/2} + n^{−1/2}, thereby establishing the minimax rate optimality of the empirical estimator MMDn,m of MMDk(P, Q).
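The Schoenberg representation (3) can be sanity-checked numerically for a discrete ν: any finite non-negative mixture of the exponentials e^{−t‖x−y‖²} yields a positive definite radial kernel. A small sketch (the two-atom measure below is a made-up example), checking a 2 × 2 Gram matrix via its diagonal and determinant:

```python
import math

def radial_k(r2, nu):
    """Kernel value from (3) for a discrete measure nu = [(weight, t), ...],
    evaluated at squared distance r2 = ||x - y||^2."""
    return sum(w * math.exp(-t * r2) for w, t in nu)

nu = [(0.5, 0.3), (0.5, 2.0)]   # hypothetical two-atom measure on (0, inf)
xs = [0.0, 1.0]
gram = [[radial_k((a - b) ** 2, nu) for b in xs] for a in xs]
# A symmetric 2x2 matrix is PSD iff its diagonal entries and determinant
# are non-negative.
det = gram[0][0] * gram[1][1] - gram[0][1] * gram[1][0]
```
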
First, we present the following result (proved in Section 3.1) for Gaussian kernels, which is based on an argument similar to the one used in [21] to obtain a minimax lower bound for the estimation of µP.

Theorem 1. Let P be the set of all Borel probability measures over R^d with continuously infinitely differentiable densities. Let k be a Gaussian kernel with bandwidth parameter η² > 0. Then the following holds:

inf_{F̂n,m} sup_{P,Q∈P} P^n × Q^m { |MMDk(P, Q) − F̂n,m| ≥ (1/8) √(1/(d + 1)) max{1/√n, 1/√m} } ≥ 1/5.    (4)

The following remarks can be made about Theorem 1. (a) Theorem 1 shows that MMDk(P, Q) cannot be estimated at a rate faster than max{n^{−1/2}, m^{−1/2}} by any estimator F̂n,m for all P, Q ∈ P. Since max{m^{−1/2}, n^{−1/2}} ≥ (1/2)(m^{−1/2} + n^{−1/2}), the result combined with (1) therefore establishes the minimax rate optimality of the empirical estimator, MMDn,m. (b) While Theorem 1 shows the right order of dependence on m and n, the dependence on d seems to be sub-optimal, as the upper bound on |MMDn,m − MMDk(P, Q)| depends only on the bound on the kernel and is independent of d. This sub-optimal dependence on d may be due to the fact that the proof of Theorem 1 (see Section 3.1), as mentioned above, is closely based on the arguments applied in [21] for the minimax estimation of µP. While the lower bounding technique used in [21], commonly known as Le Cam's method based on many hypotheses [22, Chapter 2], provides optimal results in the problem of estimation of functions (e.g., estimation of µP in the norm of H), it often fails to do so in the case of estimation of real-valued functionals, which is precisely the focus of our work. Even though Theorem 1 is sub-optimal, we presented the result to highlight the fact that the minimax lower bounds for estimation of µP may not yield optimal results for MMDk(P, Q). In Theorem 2, we will develop a new argument based on two fuzzy hypotheses, which is a method of choice for nonparametric estimation of functionals [22, Section 2.7.4].
This will allow us to get rid of the superfluous dependence on the dimensionality d in the lower bound. (c) While Theorem 1 holds only for Gaussian kernels, we would like to mention that, using the analysis of [21], Theorem 1 can be straightforwardly improved in various ways: (i) it can be generalized to hold for a wide class of radial universal kernels; (ii) the factor d^{−1/2} in (4) can be removed altogether in the case when P consists of all discrete Borel distributions on R^d. However, these improvements do not involve any ideas beyond those captured by the proof of Theorem 1 and so will not be discussed in this work. For details, we refer the interested reader to Theorems 2 and 6 of [21] for extensions to radial universal kernels and discrete measures, respectively. (d) Finally, it is worth mentioning that any lower bound on the minimax probability (including the bounds of Theorems 1 and 2) leads to a lower bound on the minimax risk via a simple application of Markov's inequality: E_{P^n×Q^m}[s_{n,m}^{−1} |A_{n,m}|] ≥ P^n × Q^m{|A_{n,m}| ≥ s_{n,m}}.

The following result (proved in Section 3.2) is the main contribution of this work. It provides a minimax lower bound for the problem of MMD estimation which holds for general radial universal kernels. In contrast to Theorem 1, it avoids the superfluous dependence on d and depends only on the properties of k, while exhibiting the correct rate.

Theorem 2. Let P be the set of all Borel probability measures over R^d with continuously infinitely differentiable densities. Let k be a radial kernel on R^d of the form (3), where ν is a bounded non-negative measure on [0, ∞). Assume that there exist 0 < t0 ≤ t1 < ∞ and 0 < β ≤ 1 such that ν([t0, t1]) ≥ β. Then the following holds:

inf_{F̂n,m} sup_{P,Q∈P} P^n × Q^m { |MMDk(P, Q) − F̂n,m| ≥ (1/20) √(βt0/(t1 e)) max{1/√n, 1/√m} } ≥ 1/14.    (5)
Note that the existence of 0 < t0 ≤ t1 < ∞ and 0 < β ≤ 1 such that ν([t0, t1]) ≥ β ensures that supp(ν) ≠ {0} (i.e., the kernel is not a constant function), which implies k is universal. If k is a Gaussian kernel with bandwidth parameter η² > 0, it is easy to verify that t0 = t1 = (2η²)^{−1} and β = 1 satisfy ν([t0, t1]) ≥ β, as the Gaussian kernel is generated by ν = δ_{1/(2η²)} in (3), where δ_x is a Dirac measure supported at x. Therefore we obtain a dimension-independent constant in (5) for Gaussian kernels, compared to the bound in (4).

3 Proofs

In this section, we present the proofs of Theorems 1 and 2. Before we present the proofs, we first introduce the setting of nonparametric estimation. Let F : Θ → R be a functional defined on a measurable space Θ, and let P_Θ = {Pθ : θ ∈ Θ} be a family of probability distributions indexed by Θ and defined over a measurable space X associated with the data. We observe the data D ∈ X distributed according to an unknown element Pθ ∈ P_Θ, and the goal is to estimate F(θ). Usually X, D, and Pθ will depend on the sample size n. Let F̂n := F̂n(D) be an estimator of F(θ) based on D. The following well-known result [22, Theorem 2.2] provides a lower bound on the minimax probability of this problem. We refer the reader to Appendix A for a proof of its more general version.

Theorem 3. Assume there exist θ0, θ1 ∈ Θ such that |F(θ0) − F(θ1)| ≥ 2s > 0 and KL(Pθ1 ‖ Pθ0) ≤ α with 0 < α < ∞. Then

inf_{F̂n} sup_{θ∈Θ} Pθ{ |F̂n(D) − F(θ)| ≥ s } ≥ max( (1/4) e^{−α}, (1 − √(α/2))/2 ),

where KL(Pθ1 ‖ Pθ0) := ∫ log(dPθ1/dPθ0) dPθ1 denotes the Kullback-Leibler divergence between Pθ1 and Pθ0.

The above result (also called Le Cam's method) provides the recipe for obtaining minimax lower bounds, where the goal is to construct two hypotheses θ0, θ1 ∈ Θ such that (i) F(θ0) and F(θ1) are far apart, while (ii) the corresponding distributions, Pθ0 and Pθ1, are close enough.
The requirement (i) can be relaxed by introducing two random (fuzzy) hypotheses θ0, θ1 ∈ Θ and requiring F(θ0) and F(θ1) to be far apart with high probability. This weaker requirement leads to a lower bounding technique called the method of two fuzzy hypotheses. This method is captured by the following theorem [22, Theorem 2.14] and is commonly used to derive lower bounds on the minimax risk in the problem of estimation of functionals [22, Section 2.7.4].

Theorem 4. Let µ0 and µ1 be any probability distributions over Θ. Assume that:

1. There exist c ∈ R, s > 0, and 0 ≤ β0, β1 < 1 such that µ0{θ : F(θ) ≤ c} ≥ 1 − β0 and µ1{θ : F(θ) ≥ c + 2s} ≥ 1 − β1.

2. There exist τ > 0 and 0 < α < 1 such that P1( dP0^a/dP1 ≥ τ ) ≥ 1 − α, where Pi(D) = ∫ Pθ(D) µi(dθ), i ∈ {0, 1}, and P0^a is the absolutely continuous component of P0 with respect to P1.

Then inf_{F̂n} sup_{θ∈Θ} Pθ{ |F̂n(D) − F(θ)| ≥ s } ≥ (τ(1 − α − β1) − β0)/(1 + τ).

With this setup and background, we are ready to prove Theorems 1 and 2.

3.1 Proof of Theorem 1

The proof is based on Theorem 3 and treats the two cases m ≥ n and m < n separately. We consider only the case m ≥ n, as the second one follows the same steps. Let G_d denote the class of multivariate Gaussian distributions over R^d with covariance matrices proportional to the identity matrix I_d ∈ R^{d×d}. In our case G_d ⊆ P, which leads to the following lower bound for any s > 0:

sup_{P,Q∈P} P^n × Q^m{ |MMDk(P, Q) − F̂n,m| ≥ s } ≥ sup_{P,Q∈G_d} P^n × Q^m{ |MMDk(P, Q) − F̂n,m| ≥ s }.

Note that every element G(µ, σ²I_d) ∈ G_d is indexed by a pair (µ, σ²) ∈ R^d × (0, ∞) =: Θ̃. Given two elements P, Q ∈ G_d, the data is distributed according to P^n × Q^m. This brings us into the context of Theorem 3 with Θ := Θ̃ × Θ̃, X := (R^d)^{n+m}, Pθ := G1^n × G2^m for θ = (θ̃1, θ̃2) ∈ Θ, with Gaussian distributions G1 and G2 corresponding to parameters θ̃1, θ̃2 ∈ Θ̃ respectively, and F(θ) = MMDk(G1, G2). In order to apply Theorem 3, we need to choose two probability distributions Pθ0 and Pθ1.
We define four d-dimensional Gaussian distributions: P0 = G(μ0^P, σ²I_d), Q0 = G(μ0^Q, σ²I_d), and P1 = Q1 = G(0, σ²I_d), with

σ² = (c1 η²/d)(2 + n/m),  ‖μ0^P‖² = (c2 η²/d)(1/n + 1/m),  ‖μ0^Q‖² = c2 η²/(dm),  ‖μ0^P − μ0^Q‖² = c3 η²/(dn),

where c1, c2, c3 > 0 are positive constants independent of m and n, to be specified later. Note that this construction is possible as long as √(c3/n) ≤ √(c2(1/n + 1/m)) + √(c2/m), which is clearly satisfied if c3 ≤ c2.

First we check the upper bound on the KL divergence between the distributions. Using the chain rule for the KL divergence and its closed-form expression for Gaussian distributions, we write

KL(P1^n × Q1^m ‖ P0^n × Q0^m) = n·‖μ0^P‖²/(2σ²) + m·‖μ0^Q‖²/(2σ²) = n·c2η²(1/n + 1/m) / [2c1η²(2 + n/m)] + m·(c2η²/m) / [2c1η²(2 + n/m)] = c2(2 + n/m) / [2c1(2 + n/m)] = c2/(2c1).

Next we need to lower bound the absolute difference between MMD_k(P0, Q0) and MMD_k(P1, Q1). Note that

|MMD_k(P0, Q0) − MMD_k(P1, Q1)| = MMD_k(P0, Q0). (6)

Using a closed-form expression for the MMD between Gaussian distributions [21, Eq. 25], we write

MMD_k²(P0, Q0) = 2 (η²/(η² + 2σ²))^{d/2} ( 1 − exp( −‖μ0^P − μ0^Q‖²/(2η² + 4σ²) ) ).

Assume

‖μ0^P − μ0^Q‖²/(2η² + 4σ²) ≤ 1. (7)

Using 1 − e^{−x} ≥ x/2, which holds for x ∈ [0, 1], we write

|MMD_k(P0, Q0) − MMD_k(P1, Q1)| ≥ ( d/(d + 2c1(2 + n/m)) )^{d/4} √( ‖μ0^P − μ0^Q‖²/(2η² + 4σ²) ).

Since m ≥ n and (1 − 1/x)^{x−1} monotonically decreases to e^{−1} for x ≥ 1, we have

( d/(d + 2c1(2 + n/m)) )^{d/4} ≥ ( d/(d + 6c1) )^{d/4} = [ (1 − 1/(1 + d/(6c1)))^{d/(6c1)} ]^{(6c1/d)·(d/4)} ≥ e^{−3c1/2}.

Using this and setting c3 = c2, we get

|MMD_k(P0, Q0) − MMD_k(P1, Q1)| ≥ (1/√n) e^{−3c1/2} √( c2/(2d + 4c1(2 + n/m)) ) ≥ (1/√n) e^{−3c1/2} √( c2/(2d + 12c1) ).

Now we set c1 = 0.16 and c2 = 0.23. Checking that condition (7) is satisfied, and noting that

max( (1/4) e^{−c2/(2c1)}, (1 − √(c2/(4c1)))/2 ) > 1/5,  (1/2) e^{−3c1/2} √(c2/2) > 1/8,  and  1/(d + 6c1) > 1/(d + 1),

we conclude the proof with an application of Theorem 3.
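As a sanity check, the chain-rule KL computation above collapses to c2/(2c1) for any sample sizes, dimension, and bandwidth. A minimal numerical sketch (the particular n, m, d, η² below are arbitrary):

```python
def kl_product_construction(n, m, d, eta_sq, c1, c2):
    # KL(P1^n x Q1^m || P0^n x Q0^m) for the Gaussian construction in the proof:
    # equal covariances sigma^2 I_d, so each factor contributes ||mu||^2/(2 sigma^2).
    sigma_sq = c1 * eta_sq / d * (2.0 + n / m)
    mu_p_sq = c2 * eta_sq / d * (1.0 / n + 1.0 / m)   # ||mu_0^P||^2
    mu_q_sq = c2 * eta_sq / (d * m)                   # ||mu_0^Q||^2
    return n * mu_p_sq / (2 * sigma_sq) + m * mu_q_sq / (2 * sigma_sq)

kl = kl_product_construction(n=50, m=200, d=3, eta_sq=1.7, c1=0.16, c2=0.23)
```

The result is c2/(2c1) regardless of n, m, d, and the bandwidth, which is what makes the resulting constant in Theorem 3 dimension-free.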
3.2 Proof of Theorem 2

First, we repeat the argument presented in the proof of Theorem 1 to bring ourselves into the context of minimax estimation, introduced at the beginning of Section 3.1. Namely, we reduce the class of distributions P to its subset G_d containing all multivariate Gaussian distributions over R^d with covariance matrices proportional to the identity matrix I_d ∈ R^{d×d}. The proof is based on Theorem 4 and treats the two cases m ≥ n and m < n separately. We consider only the case m ≥ n, as the second one follows the same steps.

In order to apply Theorem 4 we need to choose two "fuzzy hypotheses", that is, two probability distributions μ0 and μ1 over Θ. In our setting there is a one-to-one correspondence between parameters θ ∈ Θ and pairs of Gaussian distributions (G1, G2) ∈ G_d × G_d. Throughout the proof it will be more convenient to treat μ0 and μ1 as distributions over G_d × G_d. We set μ0 to be a Dirac measure supported on (P0, Q0) with P0 = Q0 = G(0, σ²I_d). Clearly, MMD_k(P0, Q0) = 0. This gives μ0{θ : F(θ) = 0} = 1, and the first inequality of Condition 1 in Theorem 4 holds with c = 0 and β0 = 0. Next we set μ1 to be the distribution of a random pair (P, Q) with

Q = G(0, σ²I_d),  P = G(μ, σ²I_d),  σ² = 1/(2 t1 d),

where μ ∼ P_μ for some probability distribution P_μ over R^d to be specified later.

Next we check Condition 2 of Theorem 4. For D = (x1, . . . , xn, y1, . . . , ym), define the "posterior" distributions P_i(D) = ∫ P_θ(D) μ_i(dθ), i ∈ {0, 1}, as in Theorem 4. Using Markov's inequality we write

P1( dP0/dP1 < τ ) = P1( dP1/dP0 > τ⁻¹ ) ≤ τ E1[ dP1/dP0 ]. (8)

We have

dP1/dP0 (D) = [ ∫_{R^d} ∏_{j=1}^n e^{−‖xj−μ‖²/(2σ²)} ∏_{k=1}^m e^{−‖yk‖²/(2σ²)} dP_μ(μ) ] / [ ∏_{j=1}^n e^{−‖xj‖²/(2σ²)} ∏_{k=1}^m e^{−‖yk‖²/(2σ²)} ] = ∫_{R^d} e^{−n‖μ‖²/(2σ²)} e^{⟨Σ_{j=1}^n xj, μ⟩/σ²} dP_μ(μ).

Now we compute the expected value appearing in (8):

E_{D∼P1}[ dP1/dP0 (D) ] = ∫_{R^d} e^{−n‖μ‖²/(2σ²)} E_{D∼P1}[ e^{⟨Σ_{j=1}^n xj, μ⟩/σ²} ] dP_μ(μ) = ∫_{R^d} e^{−n‖μ‖²/(2σ²)} ( ∫_{R^d} E[ e^{⟨Σ_{j=1}^n X_j^{μ′}, μ⟩/σ²} ] dP_μ(μ′) ) dP_μ(μ), (9)

where X_1^{μ′}, . . .
, X_n^{μ′} are independent and distributed according to G(μ′, σ²I_d). Note that Σ_{j=1}^n X_j^{μ′} ∼ G(nμ′, nσ²I_d) and, as a result, ⟨Σ_{j=1}^n X_j^{μ′}, μ⟩ ∼ G( n⟨μ′, μ⟩, nσ²‖μ‖² ). Using the closed form for the moment generating function of a Gaussian distribution Z ∼ G(μ, σ²), E[e^{tZ}] = e^{μt} e^{σ²t²/2}, we get

E[ e^{⟨Σ_{j=1}^n X_j^{μ′}, μ⟩/σ²} ] = e^{n⟨μ′,μ⟩/σ²} e^{n‖μ‖²/(2σ²)}.

Together with (9) this gives

E_{D∼P1}[ dP1/dP0 (D) ] = ∫_{R^d} e^{−n‖μ‖²/(2σ²)} ( ∫_{R^d} e^{n⟨μ′,μ⟩/σ²} e^{n‖μ‖²/(2σ²)} dP_μ(μ′) ) dP_μ(μ) = E[ e^{n⟨μ′,μ⟩/σ²} ], (10)

where μ and μ′ are independent random variables, both distributed according to P_μ. Now we set P_μ to be the uniform distribution on a d-dimensional cube of appropriate size:

P_μ := U[ −c1/√(dnt1), c1/√(dnt1) ]^d.

In this case, using Lemma B.1 presented in Appendix B, we get

E[ e^{n⟨μ′,μ⟩/σ²} ] = ∏_{i=1}^d E[ e^{nμ_iμ′_i/σ²} ] = ∏_{i=1}^d ( dnσ²t1/(2nc1²) ) Shi( (n/σ²)·c1²/(dnt1) ) = ( (1/(4c1²)) Shi(2c1²) )^d.

Using (10) and also assuming

(1/(4c1²)) Shi(2c1²) ≤ 1, (11)

we get E_{D∼P1}[ dP1/dP0 (D) ] ≤ (1/(4c1²)) Shi(2c1²). Combining with (8), we finally get P1( dP0/dP1 < τ ) ≤ τ (1/(4c1²)) Shi(2c1²), or equivalently P1( dP0/dP1 ≥ τ ) ≥ 1 − τ (1/(4c1²)) Shi(2c1²). This shows that Condition 2 of Theorem 4 is satisfied with α = τ (1/(4c1²)) Shi(2c1²).

Finally, we need to check the second inequality of Condition 1 in Theorem 4. Take two Gaussian distributions P = G(μ, σ²I_d) and Q = G(0, σ²I_d). Using [21, Eq. 30] we have

MMD_k²(P, Q) ≥ (βt0/e) (1 − 2/(2 + d)) ‖μ‖²,  given σ² = 1/(2t1d) and t1‖μ‖² ≤ 1 + 4t1σ². (12)

Notice that the largest diagonal of a d-dimensional cube scales as √d. Using this we conclude that for μ ∼ P_μ, with probability 1 it holds that ‖μ‖² ≤ c1²/(t1 n), and the second condition in (12) holds as long as c1² ≤ n. Using this we get, for any c2 > 0,

P_{(P,Q)∼μ1}{ MMD_k(P, Q) ≥ c2 √( βt0/(t1 e n) ) } ≥ P_{μ∼P_μ}{ ‖μ‖² ≥ (c2²/(t1 n)) · ((2 + d)/d) }. (13)

Note that for μ ∼ P_μ, ‖μ‖² = Σ_{i=1}^d μ_i² is a sum of d i.i.d. bounded random variables.
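The Gaussian moment-generating-function identity E[e^{tZ}] = e^{μt + σ²t²/2} used in the computation above is easy to confirm by Monte Carlo. A minimal sketch (the particular μ, σ, t below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, t = 0.3, 0.7, 1.2
z = rng.normal(mu, sigma, size=2_000_000)
mc_mgf = np.exp(t * z).mean()                          # Monte Carlo E[exp(t Z)]
closed_form = np.exp(mu * t + 0.5 * sigma**2 * t**2)   # e^{mu t} e^{sigma^2 t^2 / 2}
```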
Simple computations show that

E‖μ‖² = Σ_{i=1}^d Eμ_i² = d · c1²/(3dnt1) = c1²/(3nt1)  and  V‖μ‖² = Σ_{i=1}^d Vμ_i² = 4c1⁴/(45dn²t1²).

Using the Chebyshev-Cantelli inequality of Theorem B.2 (Appendix B), we get for any ε̃ > 0

P_{μ∼P_μ}{ ‖μ‖² ≥ E‖μ‖² − ε̃ } = 1 − P_{μ∼P_μ}{ −‖μ‖² > −E‖μ‖² + ε̃ } ≥ 1 − 1 / ( 1 + (45dn²t1²/(4c1⁴)) ε̃² ),

or equivalently, for any ε > 0,

P_{μ∼P_μ}{ ‖μ‖² ≥ c1² (1/3 − 2ε/(3√(5d))) · 1/(nt1) } ≥ 1 − 1/(1 + ε²).

Choosing ε ≤ √5/2 − (9√5/2)(c2/c1)², we can further lower bound (13):

P_{(P,Q)∼μ1}{ MMD_k(P, Q) ≥ c2 √( βt0/(t1 e n) ) } ≥ P_{μ∼P_μ}{ ‖μ‖² ≥ c1² (1/3 − 2ε/(3√(5d))) · 1/(nt1) } ≥ 1 − 1/(1 + ε²).

We finally set τ = 0.4, c1 = 0.8, c2 = 0.1, and ε = √5/2 − (9√5/2)(c2/c1)², and check that inequality (11) and the second condition of (12) are satisfied, while

τ( 1 − τ(1/(4c1²))Shi(2c1²) − 1/(1 + ε²) ) / (1 + τ) > 1/14.

We complete the proof by an application of Theorem 4.

4 Discussion

In this paper, we provided the first known lower bounds for the estimation of maximum mean discrepancy (MMD) based on finite random samples. Based on this result, we established the minimax rate optimality of the empirical estimator. Interestingly, we showed that for radial kernels on R^d, the optimal speed of convergence depends only on the properties of the kernel and is independent of d. However, the paper does not address an important question about the minimax rates for MMD-based tests. We believe that the minimax rates of testing with MMD match those of MMD estimation, and we intend to build on this work in the future to establish minimax testing results involving MMD. Since MMD is an integral probability metric (IPM) [11], a related problem of interest is the minimax estimation of IPMs. An IPM is a distance on probability measures defined as γ(P, Q) := sup{ ∫ f(x) d(P − Q)(x) : f ∈ F }, where F is a class of bounded measurable functions on a topological space X, with P and Q being Borel probability measures.
It is well known [16] that the choice of F = {f ∈ H : ‖f‖_H ≤ 1} yields MMD_k(P, Q), where H is a reproducing kernel Hilbert space with a bounded reproducing kernel k. [16] studied the empirical estimation of γ(P, Q) for various choices of F and established the consistency and convergence rates of the empirical estimator. However, it remains an open question as to whether these rates are minimax optimal.

References

[1] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, London, UK, 2004.

[2] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013.

[3] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 489–496, Cambridge, MA, 2008. MIT Press.

[4] K. Fukumizu, L. Song, and A. Gretton. Kernel Bayes' rule: Bayesian inference with positive definite kernels. J. Mach. Learn. Res., 14:3753–3783, 2013.

[5] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520, Cambridge, MA, 2007. MIT Press.

[6] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. J. Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723–773, 2012.

[7] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A kernel statistical test of independence. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 585–592. MIT Press, 2008.

[8] E. L. Lehmann and G. Casella. Theory of Point Estimation. Springer-Verlag, New York, 2008.

[9] D. Lopez-Paz, K. Muandet, B. Schölkopf, and I. Tolstikhin.
Towards a learning theory of cause-effect inference. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, 2015.

[10] K. Muandet, B. Sriperumbudur, K. Fukumizu, A. Gretton, and B. Schölkopf. Kernel mean shrinkage estimators. Journal of Machine Learning Research, 2016. To appear.

[11] A. Müller. Integral probability metrics and their generating classes of functions. Advances in Applied Probability, 29:429–443, 1997.

[12] I. J. Schoenberg. Metric spaces and completely monotone functions. The Annals of Mathematics, 39(4):811–841, 1938.

[13] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proceedings of the 18th International Conference on Algorithmic Learning Theory (ALT), pages 13–31. Springer-Verlag, 2007.

[14] L. Song, A. Smola, A. Gretton, J. Bedo, and K. Borgwardt. Feature selection via dependence maximization. Journal of Machine Learning Research, 13:1393–1434, 2012.

[15] L. Song, X. Zhang, A. Smola, A. Gretton, and B. Schölkopf. Tailoring density estimation via reproducing kernel moment matching. In Proceedings of the 25th International Conference on Machine Learning, ICML 2008, pages 992–999, 2008.

[16] B. K. Sriperumbudur, K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. G. Lanckriet. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550–1599, 2012.

[17] B. K. Sriperumbudur, K. Fukumizu, and G. R. G. Lanckriet. Universality, characteristic kernels and RKHS embedding of measures. J. Mach. Learn. Res., 12:2389–2410, 2011.

[18] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, B. Schölkopf, and G. R. G. Lanckriet. Hilbert space embeddings and metrics on probability measures. J. Mach. Learn. Res., 11:1517–1561, 2010.

[19] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.

[20] Z. Szabó, A. Gretton, B. Póczos, and B. K. Sriperumbudur. Two-stage sampled learning theory on distributions.
In Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, volume 38, pages 948–957. JMLR Workshop and Conference Proceedings, 2015.

[21] I. Tolstikhin, B. Sriperumbudur, and K. Muandet. Minimax estimation of kernel mean embeddings. arXiv:1602.04361 [math.ST], 2016.

[22] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer, NY, 2008.
Using Social Dynamics to Make Individual Predictions: Variational Inference with a Stochastic Kinetic Model

Zhen Xu, Wen Dong, and Sargur Srihari
Department of Computer Science and Engineering
University at Buffalo
{zxu8,wendong,srihari}@buffalo.edu

Abstract

Social dynamics is concerned primarily with interactions among individuals and the resulting group behaviors, modeling the temporal evolution of social systems via the interactions of individuals within these systems. In particular, the availability of large-scale data from social networks and sensor networks offers an unprecedented opportunity to predict state-changing events at the individual level. Examples of such events include disease transmission, opinion transition in elections, and rumor propagation. Unlike previous research focusing on the collective effects of social systems, this study makes efficient inferences at the individual level. In order to cope with dynamic interactions among a large number of individuals, we introduce the stochastic kinetic model to capture adaptive transition probabilities and propose an efficient variational inference algorithm whose complexity grows linearly, rather than exponentially, with the number of individuals. To validate this method, we have performed epidemic-dynamics experiments on wireless sensor network data collected from more than ten thousand people over three years. The proposed algorithm was used to track disease transmission and predict the probability of infection for each individual. Our results demonstrate that this method is more efficient than sampling while nonetheless achieving high accuracy.

1 Introduction

The field of social dynamics is concerned primarily with interactions among individuals and the resulting group behaviors. Research in social dynamics models the temporal evolution of social systems via the interactions of the individuals within these systems [9].
For example, opinion dynamics can model the opinion state transitions of an entire population in an election scenario [3], and epidemic dynamics can predict disease outbreaks ahead of time [10]. While traditional social-dynamics models focus primarily on the macroscopic effects of social systems, often we instead wish to know the answers to more specific questions. Given the movement and behavior history of a subject with Ebola, can we tell how many people should be tested or quarantined? City-size quarantine is not necessary, but family-size quarantine is insufficient. We aim to develop a method to evaluate the paths of illness transmission and the risks of infection for individuals, so that limited medical resources can be most efficiently distributed.

The rapid growth of both social networks and sensor networks offers an unprecedented opportunity to collect abundant data at the individual level. From these data we can extract temporal interactions among individuals, such as meeting or taking the same class. To take advantage of this opportunity, we model social dynamics from an individual perspective. Although such an approach has considerable potential, in practice it is difficult to model the dynamic interactions and handle the costly computations when a large number of individuals are involved. In this paper, we introduce an event-based model into social systems to characterize their temporal evolutions and make tractable inferences at the individual level.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Our research on the temporal evolutions of social systems is related to dynamic Bayesian networks and continuous time Bayesian networks [13,18,21]. Traditionally, a coupled hidden Markov model is used to capture the interactions of components in a system [2], but this model does not consider dynamic interactions.
However, a stochastic kinetic model is capable of successfully describing the interactions of molecules (such as collisions) in chemical reactions [12,22], and is widely used in many fields such as chemistry and cell biology [1,11]. We introduce this model into social dynamics and use it to focus on individual behaviors. A challenge in capturing the interactions of individuals is that in social dynamics the state space grows exponentially with the number of individuals, which makes exact inference intractable. To resolve this we must apply approximate inference methods. One class of these involves sampling-based methods. Rao and Teh introduce a Gibbs sampler based on local updates [20], while Murphy and Russell introduce Rao-Blackwellized particle filtering for dynamic Bayesian networks [17]. However, sampling-based methods sometimes mix slowly and require a large number of samples/particles. To demonstrate this issue, we offer empirical comparisons with two major sampling methods in Section 4. An alternative class of approximations is based on variational inference. Opper and Sanguinetti apply the variational mean field approach to factor a Markov jump process [19], and Cohn and El-Hay further improve its efficiency by exploiting the structure of the target network [4]. A problem is that in an event-based model such as a stochastic kinetic model (SKM), the variational mean field is not applicable when a single event changes the states of two individuals simultaneously. Here, we use a general expectation propagation principle [14] to design our algorithm. This paper makes three contributions: First, we introduce the discrete event model into social dynamics and make tractable inferences on both individual behaviors and collective effects. To this end, we apply the stochastic kinetic model to define adaptive transition probabilities that characterize the dynamic interaction patterns in social systems. 
Second, we design an efficient variational inference algorithm whose computational complexity grows linearly with the number of individuals. As a result, it scales very well in large social systems. Third, we conduct experiments on epidemic dynamics to demonstrate that our algorithm can track the transmission of epidemics and predict the probability of infection for each individual. Further, we demonstrate that the proposed method is more efficient than sampling while nonetheless achieving high accuracy.

The remainder of this paper is organized as follows. In Section 2, we briefly review the coupled hidden Markov model and the stochastic kinetic model. In Section 3, we propose applying a variational algorithm with the stochastic kinetic model to make tractable inferences in social dynamics. In Section 4, we detail empirical results from applying the proposed algorithm to our epidemic data along with the proximity data collected from sensor networks. Section 5 concludes.

2 Background

2.1 Coupled Hidden Markov Model

A coupled hidden Markov model (CHMM) captures the dynamics of a discrete time Markov process that joins a number of distinct hidden Markov models (HMMs), as shown in Figure 1(a). x_t = (x_t^(1), . . . , x_t^(M)) defines the hidden states of all HMMs at time t, and x_t^(m) is the hidden state of HMM m at time t. y_t = (y_t^(1), . . . , y_t^(M)) are the observations of all HMMs at time t, and y_t^(m) is the observation of HMM m at time t. P(x_t|x_{t−1}) are the transition probabilities and P(y_t|x_t) the emission probabilities of the CHMM. Given the hidden states, all observations are independent, so that P(y_t|x_t) = ∏_m P(y_t^(m)|x_t^(m)), where P(y_t^(m)|x_t^(m)) is the emission probability of HMM m at time t. The joint probability of the CHMM can be defined as follows:

P(x_{1,...,T}, y_{1,...,T}) = ∏_{t=1}^T P(x_t|x_{t−1}) P(y_t|x_t). (1)

For a CHMM that contains M HMMs with binary states, the state space is 2^M, and the state transition kernel is a 2^M × 2^M matrix.
In order to make exact inferences, the classic forward-backward algorithm sweeps a forward/filtering pass to compute the forward statistics α_t(x_t) = P(x_t|y_{1,...,t}) and a backward/smoothing pass to estimate the backward statistics β_t(x_t) = P(y_{t+1,...,T}|x_t) / P(y_{t+1,...,T}|y_{1,...,t}). It can then estimate the one-slice statistics γ_t(x_t) = P(x_t|y_{1,...,T}) = α_t(x_t) β_t(x_t) and the two-slice statistics

ξ_t(x_{t−1}, x_t) = P(x_{t−1}, x_t|y_{1,...,T}) = α_{t−1}(x_{t−1}) P(x_t|x_{t−1}) P(y_t|x_t) β_t(x_t) / P(y_t|y_{1,...,t−1}).

[Figure 1: Illustration of (a) Coupled Hidden Markov Model, (b) Stochastic Kinetic Model.]

The complexity of this algorithm grows exponentially with the number of HMM chains. In order to make tractable inferences, certain factorizations and approximations must be applied. In the next section, we introduce a stochastic kinetic model to lower the dimensionality of the transition probabilities.

2.2 The Stochastic Kinetic Model

A stochastic kinetic model describes the temporal evolution of a chemical system with M species X = {X_1, X_2, · · · , X_M} driven by V events (or chemical reactions) parameterized by rate constants c = (c_1, . . . , c_V). An event (chemical reaction) k has the general form

r_1 X_1 + · · · + r_M X_M  --c_k-->  p_1 X_1 + · · · + p_M X_M.

The species on the left are called reactants, and r_m is the number of mth reactant molecules consumed during the reaction. The species on the right are called products, and p_m is the number of mth product molecules produced in the reaction. Species involved in the reaction (r_m > 0) without consumption or production (r_m = p_m) are called catalysts.
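The forward-backward recursions described in Section 2.1 can be sketched for a single chain; all transition, emission, and observation numbers below are made up for illustration:

```python
import numpy as np

A = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition P(x_t | x_{t-1})
B = np.array([[0.8, 0.2], [0.3, 0.7]])   # emission  P(y_t | x_t), rows = states
pi = np.array([0.5, 0.5])                # initial distribution
y = [0, 0, 1, 1, 0]                      # observed sequence

T, S = len(y), A.shape[0]
alpha = np.zeros((T, S))                 # forward statistics P(x_t | y_{1..t})
beta = np.ones((T, S))                   # (normalized) backward statistics
alpha[0] = pi * B[:, y[0]]
alpha[0] /= alpha[0].sum()
for t in range(1, T):                    # forward / filtering pass
    alpha[t] = (alpha[t - 1] @ A) * B[:, y[t]]
    alpha[t] /= alpha[t].sum()
for t in range(T - 2, -1, -1):           # backward / smoothing pass
    beta[t] = A @ (B[:, y[t + 1]] * beta[t + 1])
    beta[t] /= beta[t].sum()
gamma = alpha * beta                     # one-slice statistics P(x_t | y_{1..T})
gamma /= gamma.sum(axis=1, keepdims=True)
```

The exponential blow-up discussed in the text appears when x_t is the joint state of M coupled individuals: S here becomes 2^M, and both A and the recursions become intractable.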
At any specific time t, the populations of the species are x_t = (x_t^(1), . . . , x_t^(M)). An event k happens with rate h_k(x_t, c_k), determined by the rate constant and the current population state [22]:

h_k(x_t, c_k) = c_k g_k(x_t) = c_k ∏_{m=1}^M g_k^(m)(x_t^(m)). (2)

The form of g_k(x_t) depends on the reaction. In our case, we adopt the product form ∏_{m=1}^M g_k^(m)(x_t^(m)), which represents the total number of ways that reactant molecules can be selected to trigger event k [22]. Event k changes the populations by Δ_k = x_t − x_{t−1}. The probability that event k will occur during the time interval (t, t + dt] is h_k(x_t, c_k)dt. We assume that at each discrete time step no more than one event will occur. This assumption follows the linearization principle in the literature [18], and is valid when the discrete time step is small. We treat each discrete time step as a unit of time, so that h_k(x_t, c_k) represents the probability of an event.

In epidemic modeling, for example, an infection event v_i has the form S + I --c_i--> 2I, such that a susceptible individual (S) is infected by an infectious individual (I) with rate constant c_i. If there is only one susceptible individual (type m = 1) and one infectious individual (type m = 2) involved in this event, then h_i(x_t, c_i) = c_i, Δ_i = [−1 1]^T, and P(x_t − x_{t−1} = Δ_i) = P(x_t|x_{t−1}, v_i) = c_i. In a traditional hidden Markov model, the transition kernel is typically fixed. In comparison, the SKM is better at capturing dynamic interactions in terms of events with rates dependent on reactant populations, as shown in Eq. (2).

3 Variational Inference with the Stochastic Kinetic Model

In this section, we define the likelihood of the entire sequence of hidden states and observations for an event-based model, and derive a variational inference algorithm and a parameter-learning algorithm.

3.1 Likelihood for the Event-based Model

In social dynamics, we use a discrete time Markov model to describe the temporal evolutions of a set of individuals x^(1), . . .
, x^(M) according to a set of V events. To cope with dynamic interactions, we introduce the SKM and express the state transition probabilities in terms of event probabilities, as shown in Figure 1(b). We assume that at each discrete time step no more than one event will occur. Let v_1, . . . , v_T be a sequence of events, x_1, . . . , x_T a sequence of hidden states, and y_1, . . . , y_T a set of observations. Similar to Eq. (1), the likelihood of the entire sequence is as follows:

P(x_{1,...,T}, y_{1,...,T}, v_{1,...,T}) = ∏_{t=1}^T P(x_t, v_t|x_{t−1}) P(y_t|x_t), where (3)

P(x_t, v_t|x_{t−1}) = c_k · g_k(x_{t−1}) · δ(x_t − x_{t−1} ≡ Δ_k) if v_t = k, and (1 − Σ_k c_k g_k(x_{t−1})) · δ(x_t − x_{t−1} ≡ 0) if v_t = ∅.

P(x_t, v_t|x_{t−1}) is the event-based transition kernel. δ(x_t − x_{t−1} ≡ Δ_k) is 1 if the previous state is x_{t−1} and the current state is x_t = x_{t−1} + Δ_k, and 0 otherwise. Δ_k is the effect of event v_k, and ∅ represents an auxiliary event meaning that no event occurs. Substituting the product form of g_k, the transition kernel can be written as follows:

P(x_t, v_t = k|x_{t−1}) = c_k ∏_m g_k^(m)(x_{t−1}^(m)) · ∏_m δ(x_t^(m) − x_{t−1}^(m) ≡ Δ_k^(m)), (4)

P(x_t, v_t = ∅|x_{t−1}) = (1 − Σ_k c_k ∏_m g_k^(m)(x_{t−1}^(m))) · ∏_m δ(x_t^(m) − x_{t−1}^(m) ≡ 0), (5)

where δ(x_t^(m) − x_{t−1}^(m) ≡ Δ_k^(m)) is 1 if the previous state of an individual m is x_{t−1}^(m) and the current state is x_t^(m) = x_{t−1}^(m) + Δ_k^(m), and 0 otherwise.

3.2 Variational Inference for the Stochastic Kinetic Model

As noted in Section 2.1, exact inference in social dynamics is intractable due to the formidable state space. However, we can approximate the posterior distribution P(x_{1,...,T}, v_{1,...,T}|y_{1,...,T}) using an approximate distribution within the exponential family.
The inference algorithm minimizes the KL divergence between these two distributions, which can be formulated as an optimization problem [14]:

Minimize: Σ_{t,x_{t−1},x_t,v_t} ξ̂_t(x_{t−1}, x_t, v_t) · log[ ξ̂_t(x_{t−1}, x_t, v_t) / ( P(x_t, v_t|x_{t−1}) P(y_t|x_t) ) ] − Σ_{t,x_t} ∏_m γ̂_t^(m)(x_t^(m)) log ∏_m γ̂_t^(m)(x_t^(m)) (6)

Subject to: Σ_{v_t, x_{t−1}, {x_t \ x_t^(m)}} ξ̂_t(x_{t−1}, x_t, v_t) = γ̂_t^(m)(x_t^(m)), for all t, m, x_t^(m);
Σ_{v_t, {x_{t−1} \ x_{t−1}^(m)}, x_t} ξ̂_t(x_{t−1}, x_t, v_t) = γ̂_{t−1}^(m)(x_{t−1}^(m)), for all t, m, x_{t−1}^(m);
Σ_{x_t^(m)} γ̂_t^(m)(x_t^(m)) = 1, for all t, m.

The objective function is the Bethe free energy, composed of an average-energy term and the Bethe entropy approximation [23]. ξ̂_t(x_{t−1}, x_t, v_t) is the approximate two-slice statistics and γ̂_t^(m)(x_t^(m)) is the approximate one-slice statistics for each individual m; together they form the approximate distribution over which the Bethe free energy is minimized. Σ_{t,x_{t−1},x_t,v_t} is an abbreviation for summing over t, x_{t−1}, x_t, and v_t, and Σ_{{x_t \ x_t^(m)}} is the sum over all individuals in x_t except x_t^(m); we use similar abbreviations below. The first two sets of constraints are marginalization conditions, and the third is a normalization condition. To solve this constrained optimization problem, we first define the Lagrange function using Lagrange multipliers to weight the constraints, then take the partial derivatives with respect to ξ̂_t(x_{t−1}, x_t, v_t) and γ̂_t^(m)(x_t^(m)). The dual problem is to find the approximate forward statistics α̂_{t−1}^(m)(x_{t−1}^(m)) and backward statistics β̂_t^(m)(x_t^(m)) that maximize a pseudo-likelihood function; the duality is between minimizing the Bethe free energy and maximizing the pseudo-likelihood. The fixed-point solution for the primal problem is as follows¹:

ξ̂_t(x_{t−1}^(m), x_t^(m), v_t) = (1/Z_t) Σ_{m′≠m, x_{t−1}^(m′), x_t^(m′)} P(x_t, v_t|x_{t−1}) · ∏_m α̂_{t−1}^(m)(x_{t−1}^(m)) · ∏_m P(y_t^(m)|x_t^(m)) · ∏_m β̂_t^(m)(x_t^(m)). (7)

ξ̂_t(x_{t−1}^(m), x_t^(m), v_t) is the two-slice statistics for an individual m, and Z_t is the normalization constant.
Given the factorized form of P(x_t, v_t|x_{t−1}) in Eqs. (4) and (5), everything in Eq. (7) can be written in a factorized form. After reformulating the terms relevant to the individual m, ξ̂_t(x_{t−1}^(m), x_t^(m), v_t) can be expressed neatly as

ξ̂_t(x_{t−1}^(m), x_t^(m), v_t) = (1/Z_t) P̂(x_t^(m), v_t|x_{t−1}^(m)) · α̂_{t−1}^(m)(x_{t−1}^(m)) P(y_t^(m)|x_t^(m)) β̂_t^(m)(x_t^(m)), (8)

where the marginalized transition kernel P̂(x_t^(m), v_t|x_{t−1}^(m)) for the individual m is defined as

P̂(x_t^(m), v_t = k|x_{t−1}^(m)) = c_k g_k^(m)(x_{t−1}^(m)) ∏_{m′≠m} g̃_{k,t−1}^(m′) · δ(x_t^(m) − x_{t−1}^(m) ≡ Δ_k^(m)), (9)

P̂(x_t^(m), v_t = ∅|x_{t−1}^(m)) = (1 − Σ_k c_k g_k^(m)(x_{t−1}^(m)) ∏_{m′≠m} ĝ_{k,t−1}^(m′)) · δ(x_t^(m) − x_{t−1}^(m) ≡ 0), (10)

g̃_{k,t−1}^(m′) = [ Σ_{x_t^(m′) − x_{t−1}^(m′) ≡ Δ_k^(m′)} α̂_{t−1}^(m′)(x_{t−1}^(m′)) P(y_t^(m′)|x_t^(m′)) β̂_t^(m′)(x_t^(m′)) g_k^(m′)(x_{t−1}^(m′)) ] / [ Σ_{x_t^(m′) − x_{t−1}^(m′) ≡ 0} α̂_{t−1}^(m′)(x_{t−1}^(m′)) P(y_t^(m′)|x_t^(m′)) β̂_t^(m′)(x_t^(m′)) ],

ĝ_{k,t−1}^(m′) = [ Σ_{x_t^(m′) − x_{t−1}^(m′) ≡ 0} α̂_{t−1}^(m′)(x_{t−1}^(m′)) P(y_t^(m′)|x_t^(m′)) β̂_t^(m′)(x_t^(m′)) g_k^(m′)(x_{t−1}^(m′)) ] / [ Σ_{x_t^(m′) − x_{t−1}^(m′) ≡ 0} α̂_{t−1}^(m′)(x_{t−1}^(m′)) P(y_t^(m′)|x_t^(m′)) β̂_t^(m′)(x_t^(m′)) ].

In the above equations, we account for the mean field effect by summing over the current and previous states of all the other individuals m′ ≠ m. The marginalized transition kernel gives the probability of event k acting on the individual m in the context of the temporal evolutions of the other individuals. Comparing Eqs. (9) and (10) with Eqs. (4) and (5), instead of multiplying g_k^(m′)(x_{t−1}^(m′)) for each individual m′ ≠ m, we use the expected value of g_k^(m′) with respect to the marginal probability distribution of x_{t−1}^(m′).

Complexity Analysis: In our inference algorithm, the most computation-intensive step is the marginalization in Eqs. (9)-(10). Its complexity is O(MS²), where M is the number of individuals and S is the size of the state space of a single individual.
The complexity of the entire algorithm is therefore O(MS²TN), where T is the number of time steps and N is the number of iterations until convergence. As such, the complexity of our algorithm grows only linearly with the number of individuals; it offers excellent scalability when the number of tracked individuals becomes large.

3.3 Parameter Learning

In order to learn the rate constants c_k, we maximize the expected log likelihood. In a stochastic kinetic model, the probability of a sample path is given in Eq. (3). The expected log likelihood over the posterior probability conditioned on the observations y_1, . . . , y_T takes the following form:

E[ log P(x_{1,...,T}, y_{1,...,T}, v_{1,...,T}) ] = Σ_{t,x_{t−1},x_t,v_t} ξ̂_t(x_{t−1}, x_t, v_t) · log( P(x_t, v_t|x_{t−1}) P(y_t|x_t) ),

where ξ̂_t(x_{t−1}, x_t, v_t) is the approximate two-slice statistics defined in Eq. (6). Maximizing this expected log likelihood by setting its partial derivatives over the rate constants to 0 gives the maximum expected log likelihood estimates of these rate constants:

c_k = Σ_{t,x_{t−1},x_t} ξ̂_t(x_{t−1}, x_t, v_t = k) / Σ_{t,x_{t−1},x_t} ξ̂_t(x_{t−1}, x_t, v_t = ∅) g_k(x_{t−1}) ≈ Σ_t Σ_{x_{t−1},x_t} ξ̂_t(x_{t−1}, x_t, v_t = k) / Σ_t ∏_m Σ_{x_{t−1}^(m)} γ̂_{t−1}^(m)(x_{t−1}^(m)) g_k^(m)(x_{t−1}^(m)). (11)

¹The derivations for the optimization problem and its solution are shown in the Supplemental Material.

As such, the rate constant for event k is the expected number of times that this event has occurred, divided by the total expected number of times this event could have occurred. To summarize, we provide the variational inference algorithm below.

Algorithm: Variational Inference with a Stochastic Kinetic Model

Given the observations y_t^(m) for t = 1, . . . , T and m = 1, . . . , M, find x_t^(m), v_t and the rate constants c_k for k = 1, . . . , V.

Latent state inference. Iterate through the following forward and backward passes until convergence, where P̂(x_t^(m), v_t|x_{t−1}^(m)) is given by Eqs. (9) and (10).

• Forward pass. For t = 1, . . . , T and m = 1, . . .
, M, update α̂_t^(m)(x_t^(m)) according to

α̂_t^(m)(x_t^(m)) ← (1/Z_t) Σ_{x_{t−1}^(m), v_t} α̂_{t−1}^(m)(x_{t−1}^(m)) P̂(x_t^(m), v_t|x_{t−1}^(m)) P(y_t^(m)|x_t^(m)).

• Backward pass. For t = T, . . . , 1 and m = 1, . . . , M, update β̂_{t−1}^(m)(x_{t−1}^(m)) according to

β̂_{t−1}^(m)(x_{t−1}^(m)) ← (1/Z_t) Σ_{x_t^(m), v_t} β̂_t^(m)(x_t^(m)) P̂(x_t^(m), v_t|x_{t−1}^(m)) P(y_t^(m)|x_t^(m)).

Parameter estimation. Iterate through the latent state inference (above) and the rate constant estimates c_k according to Eq. (11), until convergence.

4 Experiments on Epidemic Applications

In this section, we evaluate the performance of the variational inference with a stochastic kinetic model (VISKM) algorithm on epidemic dynamics, with which we predict the transmission of diseases and the health status of each individual based on proximity data collected from sensor networks.

4.1 Epidemic Dynamics

In epidemic dynamics, G_t = (M, E_t) is a dynamic network, where each node m ∈ M is an individual in the network, and E_t = {(m_i, m_j)} is a set of edges in G_t representing that individuals m_i and m_j have interacted at a specific time t. There are two possible hidden states for each individual m at time t, x_t^(m) ∈ {0, 1}, where 0 indicates the susceptible state and 1 the infectious state. y_t^(m) ∈ {0, 1} represents the presence or absence of symptoms for individual m at time t, and P(y_t^(m)|x_t^(m)) is the observation probability. We define three types of events in epidemic applications: (1) A previously infectious individual recovers and becomes susceptible again: I --c_1--> S. (2) An infectious individual infects a susceptible individual in the network: S + I --c_2--> 2I. (3) A susceptible individual in the network is infected by an outside infectious individual: S --c_3--> I.
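Event (2) above is the SKM infection reaction S + I → 2I: in discrete time it fires with probability h(x, c_2) = c_2 · g(x), where g(x) = #S · #I counts the ways of picking one susceptible and one infectious reactant. A minimal one-step simulation sketch (the populations and rate constant are hypothetical):

```python
import random

def infection_prob(n_s, n_i, c):
    # h(x, c) = c * g(x), with g(x) = n_s * n_i: the number of ways to pick
    # one susceptible and one infectious reactant for S + I -> 2I.
    return c * n_s * n_i

random.seed(1)
n_s, n_i, c = 40, 3, 0.002
p = infection_prob(n_s, n_i, c)      # 0.002 * 40 * 3 = 0.24
if random.random() < p:              # at most one event per small time step
    n_s, n_i = n_s - 1, n_i + 1      # Delta = (-1, +1) for S + I -> 2I
```

The "at most one event per step" assumption is exactly the linearization discussed in Section 2.2; the total population n_s + n_i is conserved by this event.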
Based on these events, the transition kernel can be defined as follows:

P(x^{(m)}_t = 0 | x^{(m)}_{t−1} = 1) = c_1,
P(x^{(m)}_t = 1 | x^{(m)}_{t−1} = 1) = 1 − c_1,
P(x^{(m)}_t = 0 | x^{(m)}_{t−1} = 0) = (1 − c_3)(1 − c_2)^{C_{m,t}},
P(x^{(m)}_t = 1 | x^{(m)}_{t−1} = 0) = 1 − (1 − c_3)(1 − c_2)^{C_{m,t}},

where C_{m,t} = Σ_{m': (m',m) ∈ E_t} δ(x^{(m')}_t = 1) is the number of possible infectious sources for individual m at time t. Intuitively, the probability of a susceptible individual becoming infected is 1 minus the probability that no infectious individual (inside or outside the network) infected him. When the probability of infection is very small, we can approximate P(x^{(m)}_t = 1 | x^{(m)}_{t−1} = 0) ≈ c_3 + c_2 · C_{m,t}.

4.2 Experimental Results

Data Explanation: We employ two data sets of epidemic dynamics. The real data set is collected from the Social Evolution experiment [5, 6]. This study records "common cold" symptoms of 65 students living in a university residence hall from January 2009 to April 2009, tracking their locations and proximities using mobile phones. In addition, the students took periodic surveys regarding their health status and personal interactions. The synthetic data set builds on mobility traces collected on the Dartmouth College campus from April 2001 to June 2004, covering the movement history of 13,888 individuals [16]. We synthesized disease transmission along this timeline using the popular susceptible-infectious-susceptible (SIS) epidemiology model [15], then applied VISKM to calibrate performance. We selected this data set because we want to demonstrate that our model works on data with a large number of people over a long period of time.

Evaluation Metrics and Baseline Algorithms: We select the receiver operating characteristic (ROC) curve as our performance metric because the discrimination thresholds of diseases vary.
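The susceptible-to-infectious entry of the transition kernel above can be computed directly from the event rates; the following small sketch (the function name is ours) also checks the small-rate approximation c_3 + c_2 · C_{m,t}.

```python
def infection_prob(c2, c3, num_infectious_neighbours):
    """P(x_t = 1 | x_{t-1} = 0): one minus the probability that neither an
    outside source (rate c3) nor any of the C infectious neighbours
    (rate c2 each, acting independently) transmitted the disease."""
    return 1.0 - (1.0 - c3) * (1.0 - c2) ** num_infectious_neighbours
```

For small c_2 and c_3 the product expands to 1 − (1 − c_3 − C·c_2 + higher-order terms), which recovers the linear approximation used in the text.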
We first compare the accuracy and efficiency of VISKM with Gibbs sampling (Gibbs) and particle filtering (PF) on the Social Evolution data set [7, 8].² Both Gibbs sampling and particle filtering iteratively sample the infectious and susceptible latent state sequences and the infection and recovery events conditioned on these state sequences. Gibbs-Prediction-10000 indicates 10,000 iterations of Gibbs sampling with 1000 burn-in iterations for the prediction task; PF-Smoothing-1000 similarly refers to 1000 iterations of particle filtering for the smoothing task. All experiments are performed on the same computer.

Individual State Inference: We infer the probabilities of a hidden infectious state for each individual at different times under different scenarios. There are three tasks:
1. Prediction: Given an individual's past health and current interaction patterns, we predict the current infectious latent state. Figure 2(a) compares prediction performance among the different approximate inference methods.
2. Smoothing: Given an individual's interaction patterns and past health with missing periods, we infer the infectious latent states during these missing periods. Figure 2(b) compares the performance of the three inference methods.
3. Expansion: Given the health records of a portion (~10%) of the population, we estimate the individual infectious states of the entire population before medically inspecting them. For example, given either a group of volunteers willing to report their symptoms or the symptom data of patients who came to hospitals, we determine the probabilities that the people near these individuals also became or will become infected. This information helps the government or aid agencies to efficiently distribute limited medical resources to those most in need. Figure 2(c) compares the performance of the different methods.

From the above three graphs, we can see that all three methods identify the infectious states accurately.
However, VISKM outperforms Gibbs sampling and particle filtering in terms of area under the ROC curve for all three tasks. VISKM has an advantage in the smoothing task because the backward pass helps to infer the missing states using subsequent observations. In addition, the performance of Gibbs and PF improves as the number of samples/particles increases. Figure 2(d) shows the performance of the three tasks on the Dartmouth data set. We do not run the same comparison there because sampling takes too much time. From the graph, we can see that VISKM accurately infers most of the infectious periods of individuals in a large social system. In addition, the smoothing results are slightly better than the prediction results because we can leverage observations from both directions. The expansion case is relatively poor, because we use only very limited information to derive the results; however, even in this case the ROC curve has good discriminating power to differentiate between infectious and susceptible individuals.

Collective Statistics Inference: After determining the individual results, we aggregate them to approximate the total number of infected individuals in the social system as time evolves. This offers a collective statistical summary of the spread of disease in one area, as in traditional research, which typically scales the sample statistics by the sampling ratio. Figures 2(e) and (f) show that given 20% of the Social Evolution data and 10% of the Dartmouth data, VISKM estimates the collective statistics better than the other methods.

Efficiency and Scalability: Table 1 shows the running time of the different algorithms on the Social Evolution data, all on the same computer. From the table, we can see that Gibbs sampling runs slightly longer than PF, but the two are on the same scale. VISKM, in contrast, requires far less computation time.

²Code and data are available at http://cse.buffalo.edu/~wendong/.
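The collective statistics inference described above amounts to summing per-individual posterior marginals, while the traditional baseline simply rescales sub-population counts. A small sketch with illustrative function names:

```python
import numpy as np

def expected_infected(marginals):
    """marginals: (T, M) array of posterior probabilities P(x_t^(m) = 1).
    The expected number of infected individuals at each time step is the
    sum of the individual marginals."""
    return marginals.sum(axis=1)

def scaled_estimate(observed_counts, sample_ratio):
    """Naive scaling baseline: divide the counts observed in a
    sub-population by the sampling ratio (e.g. 0.2 for 20% observed)."""
    return observed_counts / sample_ratio
```

The aggregated marginals exploit the network structure through the inferred posteriors, which is why they track the true counts more closely than plain scaling in Figures 2(e)-(f).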
[Figure 2 appears here: six panels. (a) Prediction, (b) Smoothing, and (c) Expansion are ROC curves (false positive rate vs. true positive rate) comparing VISKM, PF-1000/10000, and Gibbs-1000/10000; (d) Dartmouth shows ROC curves for the three VISKM tasks; (e) Social Evolution Statistics and (f) Dartmouth Statistics plot the number of patients over time for the real counts, VISKM aggregation, PF-10000, Gibbs-10000, and scaling.]

Figure 2: Experimental results. (a-c) show the prediction, smoothing, and expansion performance comparisons for Social Evolution data, while (d) shows performance of the three tasks for Dartmouth data. (e-f) represent the statistical inferences for both data sets.

Table 1: Running time for different approximate inference algorithms. Gibbs_10000 refers to Gibbs sampling for 10,000 iterations, and PF_1000 to particle filtering for 1000 iterations. Other entries follow the same pattern. All times are measured in seconds.
            VISKM   Gibbs_1000   Gibbs_10000   PF_1000   PF_10000
60 People    0.78          771          7820       601       6100
30 People    0.39          255          2556       166       1888
15 People    0.19          101          1003       122       1435

In addition, the computation time of VISKM grows linearly with the number of individuals, which validates the complexity analysis in Section 3.2. Thus, it offers excellent scalability for large social systems. In comparison, the running times of Gibbs sampling and PF grow super-linearly with the number of individuals, and roughly linearly with the number of samples.

Summary: Our proposed VISKM achieves higher accuracy in terms of area under the ROC curve and collective statistics than Gibbs sampling or particle filtering (within 10,000 iterations). More importantly, VISKM is more efficient than sampling, requiring much less computation time. Additionally, the computation time of VISKM grows linearly with the number of individuals, demonstrating its excellent scalability for large social systems.

5 Conclusions

In this paper, we leverage sensor network and social network data to capture temporal evolution in social dynamics and infer individual behaviors. In order to define the adaptive transition kernel, we introduce a stochastic kinetic model that captures the dynamics of complex interactions. In addition, in order to make inference tractable we propose a variational inference algorithm whose computational complexity grows linearly with the number of individuals. Large-scale experiments on epidemic dynamics demonstrate that our method effectively captures the evolution of social dynamics and accurately infers individual behaviors. More accurate collective effects can also be derived through the aggregated results. Potential applications for our algorithm include the dynamics of emotion, opinion, rumor, collaboration, and friendship.

References

[1] Adam Arkin, John Ross, and Harley H McAdams. Stochastic kinetic analysis of developmental pathway bifurcation in phage λ-infected Escherichia coli cells. Genetics, 149(4):1633–1648, 1998.
[2] Matthew Brand, Nuria Oliver, and Alex Pentland. Coupled hidden Markov models for complex action recognition. In Proc. of CVPR, pages 994–999, 1997.
[3] Claudio Castellano, Santo Fortunato, and Vittorio Loreto. Statistical physics of social dynamics. Reviews of Modern Physics, 81(2):591, 2009.
[4] Ido Cohn, Tal El-Hay, Nir Friedman, and Raz Kupferman. Mean field variational approximation for continuous-time Bayesian networks. The Journal of Machine Learning Research, 11:2745–2783, 2010.
[5] Wen Dong, Katherine Heller, and Alex Sandy Pentland. Modeling infection with multi-agent dynamics. In International Conference on Social Computing, Behavioral-Cultural Modeling, and Prediction, pages 172–179. Springer, 2012.
[6] Wen Dong, Bruno Lepri, and Alex Sandy Pentland. Modeling the co-evolution of behaviors and social relationships using mobile phone data. In Proc. of the 10th International Conference on Mobile and Ubiquitous Multimedia, pages 134–143. ACM, 2011.
[7] Wen Dong, Alex Pentland, and Katherine A Heller. Graph-coupled HMMs for modeling the spread of infection. In Proc. of UAI, pages 227–236, 2012.
[8] Arnaud Doucet and Adam M Johansen. A tutorial on particle filtering and smoothing: Fifteen years later. Handbook of Nonlinear Filtering, 12(656-704):3, 2009.
[9] Steven N Durlauf and H Peyton Young. Social Dynamics, volume 4. MIT Press, 2004.
[10] Stephen Eubank, Hasan Guclu, VS Anil Kumar, Madhav V Marathe, Aravind Srinivasan, Zoltan Toroczkai, and Nan Wang. Modelling disease outbreaks in realistic urban social networks. Nature, 429(6988):180–184, 2004.
[11] Daniel T Gillespie. Stochastic simulation of chemical kinetics. Annu. Rev. Phys. Chem., 58:35–55, 2007.
[12] Andrew Golightly and Darren J Wilkinson. Bayesian parameter inference for stochastic biochemical network models using particle Markov chain Monte Carlo. Interface Focus, 2011.
[13] Creighton Heaukulani and Zoubin Ghahramani.
Dynamic probabilistic models for latent feature propagation in social networks. In Proc. of ICML, pages 275–283, 2013.
[14] Tom Heskes and Onno Zoeter. Expectation propagation for approximate inference in dynamic Bayesian networks. In Proc. of UAI, pages 216–223, 2002.
[15] Matt J Keeling and Pejman Rohani. Modeling Infectious Diseases in Humans and Animals. Princeton University Press, 2008.
[16] David Kotz, Tristan Henderson, Ilya Abyzov, and Jihwang Yeo. CRAWDAD data set dartmouth/campus (v. 2007-02-08). Downloaded from http://crawdad.org/dartmouth/campus/, 2007.
[17] Kevin Murphy and Stuart Russell. Rao-Blackwellised particle filtering for dynamic Bayesian networks. In Sequential Monte Carlo Methods in Practice, pages 499–515. Springer, 2001.
[18] Uri Nodelman, Christian R Shelton, and Daphne Koller. Continuous time Bayesian networks. In Proc. of UAI, pages 378–387. Morgan Kaufmann Publishers Inc., 2002.
[19] Manfred Opper and Guido Sanguinetti. Variational inference for Markov jump processes. In Proc. of NIPS, pages 1105–1112, 2008.
[20] V. Rao and Y. W. Teh. Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks. In Proc. of UAI, 2011.
[21] Joshua W Robinson and Alexander J Hartemink. Learning non-stationary dynamic Bayesian networks. The Journal of Machine Learning Research, 11:3647–3680, 2010.
[22] Darren J Wilkinson. Stochastic Modeling for Systems Biology. CRC Press, 2011.
[23] Jonathan S Yedidia, William T Freeman, and Yair Weiss. Understanding belief propagation and its generalizations. Exploring Artificial Intelligence in the New Millennium, 8:236–239, 2003.
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

Yarin Gal, Zoubin Ghahramani
University of Cambridge
{yg279,zg201}@cam.ac.uk

Abstract

Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.

1 Introduction

Recurrent neural networks (RNNs) are sequence-based models of key importance for natural language understanding, language generation, video processing, and many other tasks [1–3]. The model's input is a sequence of symbols, where at each time step a simple neural network (RNN unit) is applied to a single symbol, as well as to the network's output from the previous time step. RNNs are powerful models, showing superb performance on many tasks, but overfit quickly. Lack of regularisation in RNN models makes it difficult to handle small data, and to avoid overfitting researchers often use early stopping, or small and under-specified models [4]. Dropout is a popular regularisation technique with deep networks [5, 6] where network units are randomly masked during training (dropped).
But the technique has never been applied successfully to RNNs. Empirical results have led many to believe that noise added to recurrent layers (connections between RNN units) will be amplified for long sequences, and drown the signal [4]. Consequently, existing research has concluded that the technique should be used with the inputs and outputs of the RNN alone [4, 7–10]. But this approach still leads to overfitting, as is shown in our experiments. Recent results at the intersection of Bayesian research and deep learning offer interpretation of common deep learning techniques through Bayesian eyes [11–16]. This Bayesian view of deep learning allowed the introduction of new techniques into the field, such as methods to obtain principled uncertainty estimates from deep learning networks [14, 17]. Gal and Ghahramani [14] for example showed that dropout can be interpreted as a variational approximation to the posterior of a Bayesian neural network (NN). Their variational approximating distribution is a mixture of two Gaussians with small variances, with the mean of one Gaussian fixed at zero. This grounding of dropout in approximate Bayesian inference suggests that an extension of the theoretical results might offer insights into the use of the technique with RNN models. Here we focus on common RNN models in the field (LSTM [18], GRU [19]) and interpret these as probabilistic models, i.e. as RNNs with network weights treated as random variables, and with suitably defined likelihood functions. We then perform approximate variational inference in these probabilistic Bayesian models (which we will refer to as Variational RNNs). Approximating the posterior distribution over the weights with a mixture of Gaussians (with one component fixed at zero and small variances) will lead to a tractable optimisation objective. Optimising this objective is identical to performing a new variant of dropout in the respective RNNs. 
30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

[Figure 1 appears here: two panels of RNN units unrolled over time steps t−1, t, t+1, each unit with input x_t and output y_t; (a) Naive dropout RNN, (b) Variational RNN.]

Figure 1: Depiction of the dropout technique following our Bayesian interpretation (right) compared to the standard technique in the field (left). Each square represents an RNN unit, with horizontal arrows representing time dependence (recurrent connections). Vertical arrows represent the input and output to each RNN unit. Coloured connections represent dropped-out inputs, with different colours corresponding to different dropout masks. Dashed lines correspond to standard connections with no dropout. Current techniques (naive dropout, left) use different masks at different time steps, with no dropout on the recurrent layers. The proposed technique (Variational RNN, right) uses the same dropout mask at each time step, including the recurrent layers.

In the new dropout variant, we repeat the same dropout mask at each time step for the inputs, outputs, and recurrent layers (dropping the same network units at each time step). This is in contrast to the existing ad hoc techniques where different dropout masks are sampled at each time step for the inputs and outputs alone (no dropout is used with the recurrent connections since the use of different masks with these connections leads to deteriorated performance). Our method and its relation to existing techniques is depicted in figure 1. When used with discrete inputs (i.e. words) we place a distribution over the word embeddings as well. Dropout in the word-based model then corresponds to randomly dropping word types in the sentence, and might be interpreted as forcing the model not to rely on single words for its task. We next survey related literature and background material, and then formalise our approximate inference for the Variational RNN, resulting in the dropout variant proposed above. Experimental results are presented thereafter.
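The distinction drawn in Figure 1 is entirely about how masks are sampled. A short numpy sketch of the two schemes (function names are ours; the 1/(1−p) inverted scaling is the usual dropout convention, not something prescribed by the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_dropout_masks(T, dim, p):
    """Existing technique: a fresh mask at every time step, applied to
    inputs/outputs only (no mask on the recurrent connections)."""
    return rng.binomial(1, 1 - p, size=(T, dim)) / (1 - p)

def variational_dropout_masks(T, dim, p):
    """Proposed technique: one mask sampled per sequence and repeated at
    all T time steps, so the same units are dropped throughout
    (including the recurrent layers)."""
    mask = rng.binomial(1, 1 - p, size=dim) / (1 - p)
    return np.tile(mask, (T, 1))
```

With p = 0.5 each kept unit is scaled by 2 so activations keep the same expectation; the variational variant returns T identical rows, the naive variant T independent ones.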
2 Related Research In the past few years a considerable body of work has been collected demonstrating the negative effects of a naive application of dropout in RNNs’ recurrent connections. Pachitariu and Sahani [7], working with language models, reason that noise added in the recurrent connections of an RNN leads to model instabilities. Instead, they add noise to the decoding part of the model alone. Bayer et al. [8] apply a deterministic approximation of dropout (fast dropout) in RNNs. They reason that with dropout, the RNN’s dynamics change dramatically, and that dropout should be applied to the “non-dynamic” parts of the model – connections feeding from the hidden layer to the output layer. Pham et al. [9] assess dropout with handwriting recognition tasks. They conclude that dropout in recurrent layers disrupts the RNN’s ability to model sequences, and that dropout should be applied to feed-forward connections and not to recurrent connections. The work by Zaremba, Sutskever, and Vinyals [4] was developed in parallel to Pham et al. [9]. Zaremba et al. [4] assess the performance of dropout in RNNs on a wide series of tasks. They show that applying dropout to the non-recurrent connections alone results in improved performance, and provide (as yet unbeaten) state-of-the-art results in language modelling on the Penn Treebank. They reason that without dropout only small models were used in the past in order to avoid overfitting, whereas with the application of dropout larger models can be used, leading to improved results. This work is considered a reference implementation by many (and we compare to this as a baseline below). Bluche et al. [10] extend on the previous body of work and perform exploratory analysis of the performance of dropout before, inside, and after the RNN’s unit. They provide mixed results, not showing significant improvement on existing techniques. More recently, and done in parallel to this work, Moon et al. 
[20] suggested a new variant of dropout in RNNs in the speech recognition community. They randomly drop elements in the LSTM's internal cell c_t and use the same mask at every time step. This is the closest to our proposed approach (although fundamentally different to the approach we suggest, explained in §4.1), and we compare to this variant below as well.

Existing approaches are based on empirical experimentation with different flavours of dropout, following a process of trial-and-error. These approaches have led many to believe that dropout cannot be extended to the large number of parameters within the recurrent layers, leaving them with no regularisation. In contrast to these conclusions, we show that it is possible to derive a variational inference based variant of dropout which successfully regularises such parameters, by grounding our approach in recent theoretical research.

3 Background

We review necessary background in Bayesian neural networks and approximate variational inference. Building on these ideas, in the next section we propose approximate inference in the probabilistic RNN which will lead to a new variant of dropout.

3.1 Bayesian Neural Networks

Given training inputs X = {x_1, ..., x_N} and their corresponding outputs Y = {y_1, ..., y_N}, in Bayesian (parametric) regression we would like to infer parameters ω of a function y = f^ω(x) that are likely to have generated our outputs. What parameters are likely to have generated our data? Following the Bayesian approach we would put some prior distribution over the space of parameters, p(ω). This distribution represents our prior belief as to which parameters are likely to have generated our data. We further need to define a likelihood distribution p(y|x, ω). For classification tasks we may assume a softmax likelihood,

p(y = d | x, ω) = Categorical( exp(f_d^ω(x)) / Σ_{d'} exp(f_{d'}^ω(x)) ),

or a Gaussian likelihood for regression.
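The softmax likelihood above is simply the exponentiated network outputs normalised over classes; a standard numerically stable implementation (ours, not code from the paper) subtracts the maximum logit before exponentiating:

```python
import numpy as np

def softmax_likelihood(logits):
    """Categorical class probabilities p(y = d | x, omega) from the
    network outputs f_d(x). Subtracting the max logit leaves the result
    unchanged but avoids overflow in exp."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()
```

The shift invariance follows because the common factor exp(−max) cancels between numerator and denominator.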
Given a dataset X, Y, we then look for the posterior distribution over the space of parameters: p(ω|X, Y). This distribution captures how likely various function parameters are given our observed data. With it we can predict an output for a new input point x* by integrating

p(y*|x*, X, Y) = ∫ p(y*|x*, ω) p(ω|X, Y) dω.    (1)

One way to define a distribution over a parametric set of functions is to place a prior distribution over a neural network's weights, resulting in a Bayesian NN [21, 22]. Given weight matrices W_i and bias vectors b_i for layer i, we often place standard matrix Gaussian prior distributions over the weight matrices, p(W_i) = N(0, I), and often assume a point estimate for the bias vectors for simplicity.

3.2 Approximate Variational Inference in Bayesian Neural Networks

We are interested in finding the distribution of weight matrices (parametrising our functions) that have generated our data. This is the posterior over the weights given our observables X, Y: p(ω|X, Y). This posterior is not tractable in general, and we may use variational inference to approximate it (as was done in [23–25, 12]). We need to define an approximating variational distribution q(ω), and then minimise the KL divergence between the approximating distribution and the full posterior:

KL( q(ω) || p(ω|X, Y) ) ∝ −∫ q(ω) log p(Y|X, ω) dω + KL( q(ω) || p(ω) )
= −Σ_{i=1}^{N} ∫ q(ω) log p(y_i | f^ω(x_i)) dω + KL( q(ω) || p(ω) ).    (2)

We next extend this approximate variational inference to probabilistic RNNs, and use a q(ω) distribution that will give rise to a new variant of dropout in RNNs.

4 Variational Inference in Recurrent Neural Networks

In this section we will concentrate on simple RNN models for brevity of notation. Derivations for LSTM and GRU follow similarly. Given input sequence x = [x_1, ..., x_T] of length T, a simple RNN is formed by a repeated application of a function f_h. This generates a hidden state h_t for time step t:

h_t = f_h(x_t, h_{t−1}) = σ(x_t W_h + h_{t−1} U_h + b_h)

for some non-linearity σ.
The model output can be defined, for example, as f_y(h_T) = h_T W_y + b_y. We view this RNN as a probabilistic model by regarding ω = {W_h, U_h, b_h, W_y, b_y} as random variables (following normal prior distributions). To make the dependence on ω clear, we write f_y^ω for f_y and similarly for f_h^ω. We define our probabilistic model's likelihood as above (section 3.1). The posterior over random variables ω is rather complex, and we use variational inference with approximating distribution q(ω) to approximate it. Evaluating each sum term in eq. (2) above with our RNN model we get

∫ q(ω) log p( y | f_y^ω(h_T) ) dω = ∫ q(ω) log p( y | f_y^ω( f_h^ω(x_T, h_{T−1}) ) ) dω
= ∫ q(ω) log p( y | f_y^ω( f_h^ω(x_T, f_h^ω(... f_h^ω(x_1, h_0) ...)) ) ) dω

with h_0 = 0. We approximate this with Monte Carlo (MC) integration with a single sample:

≈ log p( y | f_y^ω̂( f_h^ω̂(x_T, f_h^ω̂(... f_h^ω̂(x_1, h_0) ...)) ) ),    ω̂ ∼ q(ω),

resulting in an unbiased estimator to each sum term. This estimator is plugged into equation (2) to obtain our minimisation objective

L ≈ −Σ_{i=1}^{N} log p( y_i | f_y^{ω̂_i}( f_h^{ω̂_i}(x_{i,T}, f_h^{ω̂_i}(... f_h^{ω̂_i}(x_{i,1}, h_0) ...)) ) ) + KL( q(ω) || p(ω) ).    (3)

Note that for each sequence x_i we sample a new realisation ω̂_i = {Ŵ_h^i, Û_h^i, b̂_h^i, Ŵ_y^i, b̂_y^i}, and that each symbol in the sequence x_i = [x_{i,1}, ..., x_{i,T}] is passed through the function f_h^{ω̂_i} with the same weight realisations Ŵ_h^i, Û_h^i, b̂_h^i used at every time step t ≤ T. Following [17] we define our approximating distribution to factorise over the weight matrices and their rows in ω. For every weight matrix row w_k the approximating distribution is

q(w_k) = p N(w_k; 0, σ²I) + (1 − p) N(w_k; m_k, σ²I)

with m_k a variational parameter (row vector), p given in advance (the dropout probability), and small σ². We optimise over m_k, the variational parameters of the random weight matrices; these correspond to the RNN's weight matrices in the standard view¹. The KL in eq.
(3) can be approximated as L2 regularisation over the variational parameters m_k [17]. Evaluating the model output f_y^ω̂(·) with sample ω̂ ∼ q(ω) corresponds to randomly zeroing (masking) rows in each weight matrix W during the forward pass – i.e. performing dropout. Our objective L is identical to that of the standard RNN. In our RNN setting with a sequence input, each weight matrix row is randomly masked once, and importantly the same mask is used through all time steps.² Predictions can be approximated by either propagating the mean of each layer to the next (referred to as the standard dropout approximation), or by approximating the posterior in eq. (1) with q(ω),

p(y*|x*, X, Y) ≈ ∫ p(y*|x*, ω) q(ω) dω ≈ (1/K) Σ_{k=1}^{K} p(y*|x*, ω̂_k)    (4)

with ω̂_k ∼ q(ω), i.e. by performing dropout at test time and averaging results (MC dropout).

4.1 Implementation and Relation to Dropout in RNNs

Implementing our approximate inference is identical to implementing dropout in RNNs with the same network units dropped at each time step, randomly dropping inputs, outputs, and recurrent connections. This is in contrast to existing techniques, where different network units would be dropped at different time steps, and no dropout would be applied to the recurrent connections (fig. 1). Certain RNN models such as LSTMs and GRUs use different gates within the RNN units. For example, an LSTM is defined using four gates: "input", "forget", "output", and "input modulation",

i = sigm( h_{t−1} U_i + x_t W_i )    f = sigm( h_{t−1} U_f + x_t W_f )
o = sigm( h_{t−1} U_o + x_t W_o )    g = tanh( h_{t−1} U_g + x_t W_g )

¹Graves et al. [26] further factorise the approximating distribution over the elements of each row, and use a Gaussian approximating distribution with each element (rather than a mixture); the approximating distribution above seems to give better performance, and has a close relation with dropout [17].
²In appendix A we discuss the relation of our dropout interpretation to the ensembling one.
c_t = f ∘ c_{t−1} + i ∘ g    h_t = o ∘ tanh(c_t)    (5)

with ω = {W_i, U_i, W_f, U_f, W_o, U_o, W_g, U_g} weight matrices and ∘ the element-wise product. Here an internal state c_t (also referred to as cell) is updated additively. Alternatively, the model could be re-parametrised as in [26]:

(i, f, o, g)ᵀ = (sigm, sigm, sigm, tanh)ᵀ ( (x_t, h_{t−1}) · W )    (6)

with ω = {W}, W a matrix of dimensions 2K by 4K (K being the dimensionality of x_t). We name this parametrisation a tied-weights LSTM (compared to the untied-weights LSTM in eq. (5)). Even though these two parametrisations result in the same deterministic model, they lead to different approximating distributions q(ω). With the first parametrisation one could use different dropout masks for different gates (even when the same input x_t is used). This is because the approximating distribution is placed over the matrices rather than the inputs: we might drop certain rows in one weight matrix W applied to x_t and different rows in another matrix W' applied to x_t. With the second parametrisation we would place a distribution over the single matrix W. This leads to a faster forward-pass, but with slightly diminished results as we will see in the experiments section. In more concrete terms, we may write our dropout variant with the second parametrisation (eq. (6)) as

(i, f, o, g)ᵀ = (sigm, sigm, sigm, tanh)ᵀ ( (x_t ∘ z_x, h_{t−1} ∘ z_h) · W )    (7)

with z_x, z_h random masks repeated at all time steps (and similarly for the parametrisation in eq. (5)). In comparison, Zaremba et al. [4]'s variant replaces z_x in eq. (7) with a time-dependent mask: x_t ∘ z_x^t, where z_x^t is sampled anew every time step (whereas z_h is removed and the recurrent connection h_{t−1} is not dropped). On the other hand, Moon et al. [20]'s variant changes eq. (5) by adapting the internal cell, c_t = c_t ∘ z_c, with the same mask z_c used at all time steps.
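A single tied-weights LSTM step with the variational masks of eq. (7) can be sketched in numpy as follows. This is a simplified illustration in our own notation (no biases, one sequence, masks sampled once per sequence and passed in unchanged at every step), not the authors' Torch implementation.

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, z_x, z_h):
    """One tied-weights LSTM step (eq. (7)): the same masks z_x, z_h are
    applied to the input and the recurrent connection at every time step.
    W has shape (2K, 4K), K being the dimensionality of x_t."""
    K = x_t.shape[0]
    # Single matrix product computes all four gates at once.
    stacked = np.concatenate([x_t * z_x, h_prev * z_h]) @ W  # shape (4K,)
    i = sigm(stacked[:K])           # input gate
    f = sigm(stacked[K:2 * K])      # forget gate
    o = sigm(stacked[2 * K:3 * K])  # output gate
    g = np.tanh(stacked[3 * K:])    # input modulation
    c_t = f * c_prev + i * g        # eq. (5): additive cell update
    h_t = o * np.tanh(c_t)
    return h_t, c_t
```

Running a sequence means sampling z_x, z_h once, then calling `lstm_step` for t = 1, ..., T with those same masks; per-gate masks would instead require the untied parametrisation of eq. (5).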
Note that unlike [20], by viewing dropout as an operation over the weights our technique trivially extends to RNNs and GRUs.

4.2 Word Embeddings Dropout

In datasets with continuous inputs we often apply dropout to the input layer – i.e. to the input vector itself. This is equivalent to placing a distribution over the weight matrix which follows the input and approximately integrating over it (the matrix is optimised, and is therefore prone to overfitting otherwise). But for models with discrete inputs such as words (where every word is mapped to a continuous vector – a word embedding) this is seldom done. With word embeddings the input can be seen as either the word embedding itself, or, more conveniently, as a "one-hot" encoding (a vector of zeros with 1 at a single position). The product of the one-hot encoded vector with an embedding matrix W_E ∈ R^{V×D} (where D is the embedding dimensionality and V is the number of words in the vocabulary) then gives a word embedding. Curiously, this parameter layer is the largest layer in most language applications, yet it is often not regularised. Since the embedding matrix is optimised it can lead to overfitting, and it is therefore desirable to apply dropout to the one-hot encoded vectors. This in effect is identical to dropping words at random throughout the input sentence, and can also be interpreted as encouraging the model not to "depend" on single words for its output. Note that as before, we randomly set rows of the matrix W_E ∈ R^{V×D} to zero. Since we repeat the same mask at each time step, we drop the same words throughout the sequence – i.e. we drop word types at random rather than word tokens (as an example, the sentence "the dog and the cat" might become "— dog and — cat" or "the — and the cat", but never "— dog and the cat"). A possible inefficiency in implementing this is the requirement to sample V Bernoulli random variables, where V might be large.
This can be solved by the observation that for sequences of length $T$, at most $T$ embeddings could be dropped (other dropped embeddings have no effect on the model output). For $T \ll V$ it is therefore more efficient to first map the words to the word embeddings, and only then to zero-out word embeddings based on their word type. 5 Experimental Evaluation We start by implementing our proposed dropout variant in the Torch implementation of Zaremba et al. [4], which has become a reference implementation for many in the field. Zaremba et al. [4] set a benchmark on the Penn Treebank that, to the best of our knowledge, has not been beaten for the past 2 years. We improve on [4]'s results, and show that our dropout variant improves model performance compared to early-stopping and compared to using under-specified models. We continue by evaluating our proposed dropout variant with both LSTM and GRU models on a sentiment analysis task where labelled data is scarce. We finish by giving an in-depth analysis of the properties of the proposed method, with code and many experiments deferred to the appendix due to space constraints. 5.1 Language Modelling We replicate the language modelling experiment of Zaremba, Sutskever, and Vinyals [4]. The experiment uses the Penn Treebank, a standard benchmark in the field. This dataset is considered a small one in the language processing community, with 887,521 tokens (words) in total, making overfitting a considerable concern. Throughout the experiments we refer to LSTMs with the dropout technique proposed following our Bayesian interpretation as Variational LSTMs, and refer to existing dropout techniques as naive dropout LSTMs (different masks at different steps, applied to the input and output of the LSTM alone). We refer to LSTMs with no dropout as standard LSTMs.
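Dropping word types rather than word tokens can be sketched as follows. The helper name and the $1/(1-p)$ scaling are our own illustrative choices; the key point is that a single Bernoulli draw is shared by every occurrence of a word in the sequence:

```python
import numpy as np

def embedding_word_type_dropout(tokens, W_E, p=0.5, rng=None):
    """Drop word *types* at random: sample one Bernoulli per distinct word id
    appearing in the sequence, then zero out the corresponding embeddings.
    tokens: list of int word ids; W_E: embedding matrix of shape (V, D).
    Only len(set(tokens)) <= T draws are needed, not V."""
    rng = rng or np.random.default_rng(0)
    keep = {w: rng.random() >= p for w in set(tokens)}  # one draw per word type
    D = W_E.shape[1]
    return np.stack([W_E[w] / (1 - p) if keep[w] else np.zeros(D)
                     for w in tokens])
```

Because the mask is indexed by word id, the sentence "the dog and the cat" can never become "— dog and the cat": both occurrences of "the" share the same fate.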
We implemented a Variational LSTM for both the medium model of [4] (2 layers with 650 units in each layer) as well as their large model (2 layers with 1500 units in each layer). The only changes we've made to [4]'s setting are 1) using our proposed dropout variant instead of naive dropout, and 2) tuning weight decay (which was chosen to be zero in [4]). All other hyper-parameters are kept identical to [4]: learning rate decay was not tuned for our setting and is used following [4]. Dropout parameters were optimised with grid search (tying the dropout probability over the embeddings together with the one over the recurrent layers, and tying the dropout probability for the inputs and outputs together as well). These are chosen to minimise validation perplexity.³ We further compared to Moon et al. [20], who only drop elements in the LSTM internal state using the same mask at all time steps (in addition to performing dropout on the inputs and outputs). We implemented their dropout variant with each model size, and repeated the procedure above to find optimal dropout probabilities (0.3 with the medium model, and 0.5 with the large model). We had to use early stopping for the large model with [20]'s variant as the model starts overfitting after 16 epochs. Moon et al. [20] proposed their dropout variant within the speech recognition community, where they did not have to consider embeddings overfitting (which, as we will see below, affects the recurrent layers considerably). We therefore performed an additional experiment using [20]'s variant together with our embedding dropout (referred to as Moon et al. [20] + emb dropout). Our results are given in table 1. For the variational LSTM we give results using both the tied weights model (eq. (6)–(7), Variational (tied weights)), and without weight tying (eq. (5), Variational (untied weights)).
For each model we report performance using both the standard dropout approximation (averaging the weights at test time – propagating the mean of each approximating distribution as input to the next layer), and using MC dropout (obtained by performing dropout at test time 1000 times, and averaging the model outputs following eq. (4), denoted MC). For each model we report average perplexity and standard deviation (each experiment was repeated 3 times with different random seeds and the results were averaged). Model training time is given in words per second (WPS). It is interesting that using the dropout approximation, weight tying results in lower validation error and test error than the untied weights model. But with MC dropout the untied weights model performs much better. Validation perplexity for the large model is improved from [4]’s 82.2 down to 77.3 (with weight tying), or 77.9 without weight tying. Test perplexity is reduced from 78.4 down to 73.4 (with MC dropout and untied weights). To the best of our knowledge, these are currently the best single model perplexities on the Penn Treebank. It seems that Moon et al. [20] underperform even compared to [4]. With no embedding dropout the large model overfits and early stopping is required (with no early stopping the model’s validation perplexity goes up to 131 within 30 epochs). Adding our embedding dropout, the model performs much better, but still underperforms compared to applying dropout on the inputs and outputs alone. Comparing our results to the non-regularised LSTM (evaluated with early stopping, giving similar performance as the early stopping experiment in [4]) we see that for either model size an improvement can be obtained by using our dropout variant. Comparing the medium sized Variational model to the large one we see that a significant reduction in perplexity can be achieved by using a larger model. This cannot be done with the non-regularised LSTM, where a larger model leads to worse results. 
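The MC dropout evaluation described above amounts to averaging many stochastic forward passes. A minimal sketch, with a hypothetical `model(x, rng)` callable standing in for one stochastic pass of a dropout network:

```python
import numpy as np

def mc_dropout_predict(model, x, n_samples=1000, rng=None):
    """MC dropout at test time: run the stochastic model n_samples times with
    fresh dropout masks and average the outputs. The sample standard deviation
    is returned as a crude uncertainty estimate."""
    rng = rng or np.random.default_rng(0)
    outs = np.stack([model(x, rng) for _ in range(n_samples)])
    return outs.mean(axis=0), outs.std(axis=0)
```

The dropout approximation instead performs a single deterministic pass with the weights scaled by their keep probabilities; the two can disagree, as the tied/untied results in table 1 illustrate.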
³Optimal probabilities are 0.3 and 0.5 respectively for the large model, compared to [4]'s 0.6 dropout probability, and 0.2 and 0.35 respectively for the medium model, compared to [4]'s 0.5 dropout probability.

| Model | Medium: Validation | Medium: Test | Medium: WPS | Large: Validation | Large: Test | Large: WPS |
|---|---|---|---|---|---|---|
| Non-regularized (early stopping) | 121.1 | 121.7 | 5.5K | 128.3 | 127.4 | 2.5K |
| Moon et al. [20] | 100.7 | 97.0 | 4.8K | 122.9 | 118.7 | 3K |
| Moon et al. [20] + emb dropout | 88.9 | 86.5 | 4.8K | 88.8 | 86.0 | 3K |
| Zaremba et al. [4] | 86.2 | 82.7 | 5.5K | 82.2 | 78.4 | 2.5K |
| Variational (tied weights) | 81.8 ± 0.2 | 79.7 ± 0.1 | 4.7K | 77.3 ± 0.2 | 75.0 ± 0.1 | 2.4K |
| Variational (tied weights, MC) | − | 79.0 ± 0.1 | − | − | 74.1 ± 0.0 | − |
| Variational (untied weights) | 81.9 ± 0.2 | 79.7 ± 0.1 | 2.7K | 77.9 ± 0.3 | 75.2 ± 0.2 | 1.6K |
| Variational (untied weights, MC) | − | 78.6 ± 0.1 | − | − | 73.4 ± 0.0 | − |

Table 1: Single model perplexity (on test and validation sets) for the Penn Treebank language modelling task. Two model sizes are compared (a medium and a large LSTM, following [4]'s setup), with number of processed words per second (WPS) reported. Both the dropout approximation and MC dropout are given for the test set with the Variational model. A common approach for regularisation is to reduce model complexity (necessary with the non-regularised LSTM). With the Variational models however, a significant reduction in perplexity is achieved by using larger models. This shows that reducing the complexity of the model, a possible approach to avoid overfitting, actually leads to a worse fit when using dropout. We also see that the tied weights model achieves performance very close to that of the untied weights one when using the dropout approximation. Assessing model run time though (on a Titan X GPU), we see that tying the weights results in a more time-efficient implementation. This is because the single matrix product is implemented as a single GPU kernel, instead of the four smaller matrix products used in the untied weights model (where four GPU kernels are called sequentially).
Note though that a low level implementation should give similar run times. We further experimented with model averaging following [4]'s setting, where several large models are trained independently and their outputs averaged. We used Variational LSTMs with MC dropout following the setup above. Using 10 Variational LSTMs we improve [4]'s test set perplexity from 69.5 to 68.7 – obtaining identical perplexity to [4]'s experiment with 38 models. Lastly, we report validation perplexity with reduced learning rate decay (with the medium model). Learning rate decay is often used for regularisation by setting the optimiser to make smaller steps when the model starts overfitting (as done in [4]). By removing it we can assess the regularisation effects of dropout alone. As can be seen in fig. 2, even with early stopping, Variational LSTM achieves lower perplexity than naive dropout LSTM and standard LSTM. Note though that a significantly lower perplexity for all models can be achieved with learning rate decay scheduling, as seen in table 1. 5.2 Sentiment Analysis We next evaluate our dropout variant with both LSTM and GRU models on a sentiment analysis task, where labelled data is scarce. We use MC dropout (which we compare to the dropout approximation further in appendix B), and untied weights model parametrisations. We use the raw Cornell film reviews corpus collected by Pang and Lee [27]. The dataset is composed of 5000 film reviews. We extract consecutive segments of T words from each review for T = 200, and use the corresponding film score as the observed output y. The model is built from one embedding layer (of dimensionality 128), one LSTM layer (with 128 network units for each gate; the GRU setting is built similarly), and finally a fully connected layer applied to the last output of the LSTM (resulting in a scalar output). We use the Adam optimiser [28] throughout the experiments, with batch size 128, and MC dropout at test time with 10 samples.
Figure 2: Medium model validation perplexity for the Penn Treebank language modelling task. Learning rate decay was reduced to assess model overfitting using dropout alone. Even with early stopping, Variational LSTM achieves lower perplexity than naive dropout LSTM and standard LSTM. Lower perplexity for all models can be achieved with learning rate decay scheduling, seen in table 1. Figure 3: Sentiment analysis error for Variational LSTM / GRU compared to naive dropout LSTM / GRU and standard LSTM / GRU (with no dropout); (a) LSTM train error, (b) LSTM test error, (c) GRU test error. The main results can be seen in fig. 3. We compared Variational LSTM (with our dropout variant applied with each weight layer) to standard techniques in the field. Training error is shown in fig. 3a and test error is shown in fig. 3b. Optimal dropout probabilities and weight decay were used for each model (see appendix B). It seems that the only model not to overfit is the Variational LSTM, which achieves the lowest test error as well. Variational GRU test error is shown in fig. 3c (with the loss plot given in appendix B). Optimal dropout probabilities and weight decay were used again for each model. Variational GRU avoids overfitting to the data and converges to the lowest test error. Early stopping on this dataset will result in a smaller test error though (the lowest test error is obtained by the non-regularised GRU model at the second epoch). It is interesting to note that standard techniques exhibit peculiar behaviour where the test error repeatedly decreases and increases. This behaviour is not observed with the Variational GRU. Convergence plots of the loss for each model are given in appendix B. We next explore the effects of dropping-out different parts of the model.
We assessed our Variational LSTM with different combinations of dropout over the embeddings ($p_E = 0, 0.5$) and recurrent layers ($p_U = 0, 0.5$) on the sentiment analysis task. The convergence plots can be seen in fig. 4a. It seems that without both strong embedding regularisation and strong regularisation over the recurrent layers the model would overfit rather quickly. The behaviour when $p_U = 0.5$ and $p_E = 0$ is quite interesting: test error decreases and then increases before decreasing again. Also, it seems that when $p_U = 0$ and $p_E = 0.5$ the model becomes very erratic. Lastly, we tested the performance of Variational LSTM with different recurrent layer dropout probabilities, fixing the embedding dropout probability at either $p_E = 0$ or $p_E = 0.5$ (figs. 4b-4c). These results are rather intriguing. In this experiment all models have converged, with the loss getting near zero (not shown). Yet it seems that with no embedding dropout, a higher dropout probability within the recurrent layers leads to overfitting! This presumably happens because of the large number of parameters in the embedding layer, which is not regularised. Regularising the embedding layer with dropout probability $p_E = 0.5$ we see that a higher recurrent layer dropout probability indeed leads to increased robustness to overfitting, as expected. These results suggest that embedding dropout can be of crucial importance in some tasks. In appendix B we assess the importance of weight decay with our dropout variant. Common practice is to remove weight decay with naive dropout. Our results suggest that weight decay plays an important role with our variant (it corresponds to our prior belief of the distribution over the weights). 6 Conclusions We presented a new technique for recurrent neural network regularisation. Our RNN dropout variant is theoretically motivated and its effectiveness was empirically demonstrated. (a) Combinations of $p_E = 0, 0.5$ with $p_U = 0, 0.5$. (b) $p_U = 0, \ldots, 0.5$ with fixed $p_E = 0$.
(c) $p_U = 0, \ldots, 0.5$ with fixed $p_E = 0.5$. Figure 4: Test error for Variational LSTM with various settings on the sentiment analysis task. Different dropout probabilities are used with the recurrent layer ($p_U$) and embedding layer ($p_E$). References [1] Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. LSTM neural networks for language modeling. In INTERSPEECH, 2012. [2] N. Kalchbrenner and P. Blunsom. Recurrent continuous translation models. In EMNLP, 2013. [3] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. [4] Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. Recurrent neural network regularization. arXiv preprint arXiv:1409.2329, 2014. [5] Geoffrey E. Hinton et al. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. [6] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014. [7] Marius Pachitariu and Maneesh Sahani. Regularization and nonlinearities for neural language models: when are they needed? arXiv preprint arXiv:1301.5650, 2013. [8] J. Bayer et al. On fast dropout and its applicability to recurrent networks. arXiv preprint arXiv:1311.0701, 2013. [9] Vu Pham, Theodore Bluche, Christopher Kermorvant, and Jerome Louradour. Dropout improves recurrent neural networks for handwriting recognition. In ICFHR. IEEE, 2014. [10] Théodore Bluche, Christopher Kermorvant, and Jérôme Louradour. Where to apply dropout in recurrent neural networks for handwriting recognition? In ICDAR. IEEE, 2015. [11] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014. [12] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In ICML, 2015.
[13] Jose Miguel Hernandez-Lobato and Ryan Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. In ICML, 2015. [14] Yarin Gal and Zoubin Ghahramani. Bayesian convolutional neural networks with Bernoulli approximate variational inference. arXiv preprint arXiv:1506.02158, 2015. [15] Diederik Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In NIPS. Curran Associates, Inc., 2015. [16] Anoop Korattikara Balan, Vivek Rathod, Kevin P. Murphy, and Max Welling. Bayesian dark knowledge. In NIPS. Curran Associates, Inc., 2015. [17] Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. arXiv preprint arXiv:1506.02142, 2015. [18] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997. [19] Kyunghyun Cho et al. Learning phrase representations using RNN encoder–decoder for statistical machine translation. In EMNLP, Doha, Qatar, October 2014. ACL. [20] Taesup Moon, Heeyoul Choi, Hoshik Lee, and Inchul Song. RnnDrop: A novel dropout for RNNs in ASR. In ASRU Workshop, December 2015. [21] David J. C. MacKay. A practical Bayesian framework for backpropagation networks. Neural Computation, 4(3):448–472, 1992. [22] R. M. Neal. Bayesian learning for neural networks. PhD thesis, University of Toronto, 1995. [23] Geoffrey E. Hinton and Drew Van Camp. Keeping the neural networks simple by minimizing the description length of the weights. In COLT, pages 5–13. ACM, 1993. [24] David Barber and Christopher M. Bishop. Ensemble learning in Bayesian neural networks. NATO ASI Series F: Computer and Systems Sciences, 168:215–238, 1998. [25] Alex Graves. Practical variational inference for neural networks. In NIPS, 2011. [26] Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton. Speech recognition with deep recurrent neural networks. In ICASSP. IEEE, 2013. [27] Bo Pang and Lillian Lee.
Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL, 2005. [28] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. [29] James Bergstra et al. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June 2010. Oral Presentation. [30] fchollet. Keras. https://github.com/fchollet/keras, 2015.
Multiple-Play Bandits in the Position-Based Model
Paul Lagrée∗ (LRI, Université Paris Sud, Université Paris Saclay) paul.lagree@u-psud.fr
Claire Vernade∗ (LTCI, CNRS, Télécom ParisTech, Université Paris Saclay) vernade@enst.fr
Olivier Cappé (LTCI, CNRS, Télécom ParisTech, Université Paris Saclay)
Abstract Sequentially learning to place items in multi-position displays or lists is a task that can be cast into the multiple-play semi-bandit setting. However, a major concern in this context is when the system cannot decide whether the user feedback for each item is actually exploitable. Indeed, much of the content may have been simply ignored by the user. The present work proposes to exploit available information regarding the display position bias under the so-called Position-based click model (PBM). We first discuss how this model differs from the Cascade model and its variants considered in several recent works on multiple-play bandits. We then provide a novel regret lower bound for this model as well as computationally efficient algorithms that display good empirical and theoretical performance. 1 Introduction During their browsing experience, users are constantly provided – without having asked for it – with clickable content spread over web pages. While users interact on a website, they send clicks to the system for a very limited selection of the clickable content. Hence, every unclicked item is left with an equivocal answer: the system does not know whether the content was really deemed irrelevant or simply ignored. In contrast, in traditional multi-armed bandit (MAB) models, the learner makes actions and observes at each round the reward corresponding to the chosen action. In the so-called multiple-play semi-bandit setting, when users are presented with L items, they are assumed to provide feedback for each of those items. Several variants of this basic setting have been considered in the bandit literature.
The necessity for the user to provide feedback for each item has been called into question in the context of the so-called Cascade Model [8, 14, 6] and its extensions such as the Dependent Click Model (DCM) [20]. Both models are particularly suited to search contexts, where the user is assumed to be looking for something relative to a query. Consequently, the learner expects explicit feedback: in the Cascade Model each valid observation sequence must be either all zeros or terminated by a one, such that no ambiguity is left on the evaluation of the presented items, while multiple clicks are allowed in the DCM, thus leaving some ambiguity on the last zeros of a sequence. In the Cascade Model, the positions of the items are not taken into account in the reward process because the learner is assumed to obtain a click as long as the interesting item belongs to the list. Indeed, there are even clear indications that the optimal strategy in a learning context consists in showing the most relevant items at the end of the list in order to maximize the amount of observed feedback [14] – which is counter-intuitive in recommendation tasks. To overcome these limitations, [6] introduces weights – to be defined by the learner – that are attributed to positions in the list, with a click on position $l \in \{1, \ldots, L\}$ providing a reward $w_l$, where the sequence $(w_l)_l$ is decreasing to enforce the ranking behavior. However, no rule is given for setting the weights $(w_l)_l$ that control the order of importance of the positions. (∗The two authors contributed equally. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.) The authors propose an algorithm based on KL-UCB [10] and prove a lower bound on the regret as well as an asymptotically optimal upper bound. Another way to address the limitations of the Cascade Model is to consider the DCM as in [20].
Here, examination probabilities $v_l$ are introduced for each position $l$: conditionally on the event that the user effectively scanned the list up to position $l$, he/she can choose to leave with probability $v_l$ and in that case, the learner is aware of his/her departure. This framework naturally induces the necessity to rank the items in the optimal order. All previous models assume that a portion of the recommendation list is explicitly examined by the user and hence that the learning algorithm eventually has access to rewards corresponding to the unbiased user's evaluation of each item. In contrast, we propose to analyze multiple-play bandits in the Position-based model (PBM) [5]. In the PBM, each position in the list is also endowed with a binary Examination variable [8, 19] which is equal to one only when the user paid attention to the corresponding item. But this variable, which is independent of the user's evaluation of the item, is not observable. It allows to model situations where the user is not explicitly looking for specific content, as in typical recommendation scenarios. Compared to variants of the Cascade model, the PBM is challenging due to the censoring induced by the examination variables: the learning algorithm observes actual clicks but non-clicks are always ambiguous. Thus, combining observations made at different positions becomes a non-trivial statistical task. Some preliminary ideas on how to address this issue appear in the supplementary material of [13]. In this work, we provide a complete statistical study of stochastic multiple-play bandits with semi-bandit feedback in the PBM. We introduce the model and notations in Section 2 and provide the lower bound on the regret in Section 3. In Section 4, we present two optimistic algorithms as well as a theoretical analysis of their regret. In the last section dedicated to experiments, those policies are compared to several benchmarks on both synthetic and realistic data.
2 Setting and Parameter Estimation We consider the binary stochastic bandit model with $K$ Bernoulli-distributed arms. The model parameters are the arm expectations $\theta = (\theta_1, \theta_2, \ldots, \theta_K)$, which lie in $\Theta = (0, 1)^K$. We will denote by $\mathcal{B}(\theta)$ the Bernoulli distribution with parameter $\theta$ and by $d(p, q) := p \log(p/q) + (1-p)\log((1-p)/(1-q))$ the Kullback-Leibler divergence from $\mathcal{B}(p)$ to $\mathcal{B}(q)$. At each round $t$, the learner selects a list of $L$ arms – referred to as an action – chosen among the $K$ arms which are indexed by $k \in \{1, \ldots, K\}$. The set of actions is denoted by $\mathcal{A}$ and thus contains $K!/(K-L)!$ ordered lists; the action selected at time $t$ will be denoted $A(t) = (A_1(t), \ldots, A_L(t))$. The PBM is characterized by examination parameters $(\kappa_l)_{1 \le l \le L}$, where $\kappa_l$ is the probability that the user effectively observes the item in position $l$ [5]. At round $t$, the selection $A(t)$ is shown to the user and the learner observes the complete feedback – as in semi-bandit models – but the observation at position $l$, $Z_l(t)$, is censored, being the product of two independent Bernoulli variables $Y_l(t)$ and $X_l(t)$, where $Y_l(t) \sim \mathcal{B}(\kappa_l)$ is non null when the user considered the item in position $l$ – which is unknown to the learner – and $X_l(t) \sim \mathcal{B}(\theta_{A_l(t)})$ represents the actual user feedback to the item shown in position $l$. The learner receives a reward $r_{A(t)} = \sum_{l=1}^L Z_l(t)$, where $Z(t) = (X_1(t)Y_1(t), \ldots, X_L(t)Y_L(t))$ denotes the vector of censored observations at step $t$. In the following, we will assume, without loss of generality, that $\theta_1 > \cdots > \theta_K$ and $\kappa_1 > \cdots > \kappa_L > 0$, in order to simplify the notations. The fact that the sequences $(\theta_k)_k$ and $(\kappa_l)_l$ are decreasing implies that the optimal list is $a^* = (1, \ldots, L)$.
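The censored observation process of the PBM can be sketched as a short simulator; `pbm_round` is a hypothetical helper name for illustration:

```python
import numpy as np

def pbm_round(action, theta, kappa, rng):
    """Simulate one round of the PBM: the observation at position l is the
    product Z_l = X_l * Y_l of the user's evaluation X_l ~ B(theta[a_l]) and
    the unobserved examination variable Y_l ~ B(kappa[l]).
    action: list of L arm indices; returns the censored observation vector."""
    X = rng.binomial(1, [theta[a] for a in action])  # actual feedback (hidden when Y=0)
    Y = rng.binomial(1, kappa)                       # examination (never observed)
    return X * Y                                     # censored observations Z
```

A zero at position $l$ is ambiguous by construction: the learner cannot tell whether $X_l = 0$ or $Y_l = 0$.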
Denoting by $R(T) = \sum_{t=1}^T (r_{a^*} - r_{A(t)})$ the regret incurred by the learner up to time $T$, one has
$$\mathbb{E}[R(T)] = \sum_{t=1}^T \sum_{l=1}^L \kappa_l\left(\theta_{a^*_l} - \mathbb{E}[\theta_{A_l(t)}]\right) = \sum_{a \in \mathcal{A}} (\mu^* - \mu_a)\, \mathbb{E}[N_a(T)] = \sum_{a \in \mathcal{A}} \Delta_a \mathbb{E}[N_a(T)], \tag{1}$$
where $\mu_a = \sum_{l=1}^L \kappa_l \theta_{a_l}$ is the expected reward of action $a$, $\mu^* = \mu_{a^*}$ is the best possible reward in average, $\Delta_a = \mu^* - \mu_a$ the expected gap to optimality, and $N_a(T) = \sum_{t=1}^T \mathbb{1}\{A(t) = a\}$ is the number of times action $a$ has been chosen up to time $T$. In the following, we assume that the examination parameters $(\kappa_l)_{1 \le l \le L}$ are known to the learner. These can be estimated from historical data [5], using, for instance, the EM algorithm [9] (see also Section 5). In most scenarios, it is realistic to assume that the content (e.g., ads in on-line advertising) is changing much more frequently than the layout (web page design for instance), making it possible to have a good knowledge of the click-through biases associated with the display positions. The main statistical challenge associated with the PBM is that one needs to obtain estimates and confidence bounds for the components $\theta_k$ of $\theta$ from the available $\mathcal{B}(\kappa_l \theta_k)$-distributed draws corresponding to occurrences of arm $k$ at various positions $l = 1, \ldots, L$ in the list. To this aim, we define the following statistics: $S_{k,l}(t) = \sum_{s=1}^{t-1} Z_l(s)\mathbb{1}\{A_l(s) = k\}$, $S_k(t) = \sum_{l=1}^L S_{k,l}(t)$, $N_{k,l}(t) = \sum_{s=1}^{t-1} \mathbb{1}\{A_l(s) = k\}$, $N_k(t) = \sum_{l=1}^L N_{k,l}(t)$. We further require bias-corrected versions of the counts: $\tilde{N}_{k,l}(t) = \sum_{s=1}^{t-1} \kappa_l \mathbb{1}\{A_l(s) = k\}$ and $\tilde{N}_k(t) = \sum_{l=1}^L \tilde{N}_{k,l}(t)$. At time $t$, and conditionally on the past actions $A(1)$ up to $A(t-1)$, the Fisher information for $\theta_k$ is given by $I(\theta_k) = \sum_{l=1}^L N_{k,l}(t)\kappa_l/(\theta_k(1 - \kappa_l\theta_k))$ (see Appendix A). We cannot however estimate $\theta_k$ using the maximum likelihood estimator since it has no closed form expression.
Interestingly though, the simple pooled linear estimator
$$\hat{\theta}_k(t) = S_k(t)/\tilde{N}_k(t), \tag{2}$$
considered in the supplementary material to [13], is unbiased and has a (conditional) variance of $\upsilon(\theta_k) = \left(\sum_{l=1}^L N_{k,l}(t)\kappa_l\theta_k(1 - \kappa_l\theta_k)\right) / \left(\sum_{l=1}^L N_{k,l}(t)\kappa_l\right)^2$, which is close to optimal given the Cramér-Rao lower bound. Indeed, $\upsilon(\theta_k)I(\theta_k)$ is recognized as a ratio of a weighted arithmetic mean to the corresponding weighted harmonic mean, which is known to be larger than one, but is upper bounded by $1/(1-\theta_k)$, irrespectively of the values of the $\kappa_l$'s. Hence, if, for instance, we can assume that all $\theta_k$'s are smaller than one half, the loss with respect to the best unbiased estimator is no more than a factor of two for the variance. Note that despite its simplicity, $\hat{\theta}_k(t)$ cannot be written as a simple sum of conditionally independent increments divided by the number of terms and will thus require specific concentration results. It can be checked that when $\theta_k$ gets very close to one, $\hat{\theta}_k(t)$ is no longer close to optimal. This observation also has a Bayesian counterpart that will be discussed in Section 5. Nevertheless, it is always preferable to the "position-debiased" estimator $\left(\sum_{l=1}^L S_{k,l}(t)/\kappa_l\right)/N_{k,l}(t)$, which gets very unreliable as soon as one of the $\kappa_l$'s gets very small. 3 Lower Bound on the Regret In this section, we consider the fundamental asymptotic limits of learning performance for online algorithms under the PBM. These cannot be deduced from earlier general results, such as those of [11, 7], due to the censoring in the feedback associated to each action. We detail a simple and general proof scheme – using the results of [12] – that applies to the PBM, as well as to more general models. Lower bounds on the regret rely on changes of measure: the question is how much can we mistake the true parameters of the problem for others, when observing successive arms?
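The pooled estimator of eq. (2) is straightforward to compute from the per-position click and play counts; a minimal sketch:

```python
import numpy as np

def pooled_estimator(S_kl, N_kl, kappa):
    """Pooled linear estimator of eq. (2): theta_hat_k = S_k / N_tilde_k,
    where S_k = sum_l S_{k,l} and N_tilde_k = sum_l kappa_l * N_{k,l}.
    S_kl, N_kl: per-position click counts and play counts for one arm;
    kappa: known examination probabilities."""
    return S_kl.sum() / (kappa * N_kl).sum()
```

Since $\mathbb{E}[S_{k,l}] = \kappa_l \theta_k N_{k,l}$, dividing the pooled clicks by the bias-corrected count $\tilde{N}_k$ makes the estimator unbiased.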
With this in mind, we will subscript all expectations and probabilities by the parameter value and indicate explicitly that the quantities $\mu_a$, $a^*$, $\mu^*$, $\Delta_a$, introduced in Section 2, also depend on the parameter. For ease of notation, we will still assume that $\theta$ is such that $a^*(\theta) = (1, \ldots, L)$. 3.1 Existing results for multiple-play bandit problems Lower bounds on the regret will be proved for uniformly efficient algorithms, in the sense of [16]: Definition 1. An algorithm is said to be uniformly efficient if for any bandit model parameterized by $\theta$ and for all $\alpha \in (0, 1]$, its expected regret after $T$ rounds is such that $\mathbb{E}_\theta R(T) = o(T^\alpha)$. For the multiple-play MAB, [2] obtained the following bound:
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log(T)} \ge \sum_{k=L+1}^K \frac{\theta_L - \theta_k}{d(\theta_k, \theta_L)}. \tag{3}$$
For the "learning to rank" problem where rewards follow the weighted Cascade Model with decreasing weights $(w_l)_{l=1,\ldots,L}$, [6] derived the following bound:
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log T} \ge w_L \sum_{k=L+1}^K \frac{\theta_L - \theta_k}{d(\theta_k, \theta_L)}.$$
Perhaps surprisingly, this lower bound does not show any additional term corresponding to the complexity of ranking the $L$ optimal arms. Indeed, the errors are still asymptotically dominated by the need to discriminate irrelevant arms $(\theta_k)_{k>L}$ from the worst of the relevant arms, that is, $\theta_L$. 3.2 Lower bound step by step Step 1: Computing the expected log-likelihood ratio. Denoting by $\mathcal{F}_{s-1}$ the $\sigma$-algebra generated by the past actions and observations, we define the log-likelihood ratio for the two values $\theta$ and $\lambda$ of the parameters by
$$\ell(t) := \sum_{s=1}^t \log \frac{p(Z(s); \theta \mid \mathcal{F}_{s-1})}{p(Z(s); \lambda \mid \mathcal{F}_{s-1})}. \tag{4}$$
Lemma 2. For each position $l$ and each item $k$, define the local amount of information by
$$I_l(\theta_k, \lambda_k) := \mathbb{E}_\theta\left[\left. \log \frac{p(Z_l(t); \theta)}{p(Z_l(t); \lambda)} \,\right|\, A_l(t) = k\right],$$
and its cumulated sum over the $L$ positions by $I_a(\theta, \lambda) := \sum_{l=1}^L \sum_{k=1}^K \mathbb{1}\{a_l = k\}\, I_l(\theta_k, \lambda_k)$. The expected log-likelihood ratio is given by
$$\mathbb{E}_\theta[\ell(t)] = \sum_{a \in \mathcal{A}} I_a(\theta, \lambda)\, \mathbb{E}_\theta[N_a(t)]. \tag{5}$$
The next proposition is adapted from Theorem 17 in Appendix B of [12] and provides a lower bound on the expected log-likelihood ratio. Proposition 3. Let $B(\theta) := \{\lambda \in \Theta \mid \forall l \le L,\ \theta_l = \lambda_l \text{ and } \mu^*(\theta) < \mu^*(\lambda)\}$ be the set of changes of measure that improve over $\theta$ without modifying the optimal arms. Assuming that the expectation of the log-likelihood ratio may be written as in (5), for any uniformly efficient algorithm one has
$$\forall \lambda \in B(\theta), \quad \liminf_{T \to \infty} \frac{\sum_{a \in \mathcal{A}} I_a(\theta, \lambda)\, \mathbb{E}_\theta[N_a(T)]}{\log(T)} \ge 1.$$
Step 2: Variational form of the lower bound. We are now ready to obtain the lower bound in a form similar to that originally given by [11]. Theorem 4. The expected regret of any uniformly efficient algorithm satisfies
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log T} \ge f(\theta), \quad \text{where} \quad f(\theta) = \inf_{c \succeq 0} \sum_{a \in \mathcal{A}} \Delta_a(\theta)\, c_a, \quad \text{s.t.} \quad \inf_{\lambda \in B(\theta)} \sum_{a \in \mathcal{A}} I_a(\theta, \lambda)\, c_a \ge 1.$$
Theorem 4 is a straightforward consequence of Proposition 3, combined with the expression of the expected regret given in (1). The vector $c \in \mathbb{R}_+^{|\mathcal{A}|}$ that satisfies the inequality $\sum_{a \in \mathcal{A}} I_a(\theta, \lambda) c_a \ge 1$ represents the feasible values of $\mathbb{E}_\theta[N_a(T)]/\log(T)$. Step 3: Relaxing the constraints. The bounds mentioned in Section 3.1 may be recovered from Theorem 4 by considering only the changes of measure that affect a single suboptimal arm. Corollary 5.
$$f(\theta) \ge \inf_{c \succeq 0} \sum_{a \in \mathcal{A}} \Delta_a(\theta)\, c_a, \quad \text{s.t.} \quad \sum_{a \in \mathcal{A}} \sum_{l=1}^L \mathbb{1}\{a_l = k\}\, I_l(\theta_k, \theta_L)\, c_a \ge 1, \quad \forall k \in \{L+1, \ldots, K\}.$$
Corollary 5 is obtained by restricting the constraint set $B(\theta)$ of Theorem 4 to $\cup_{k=L+1}^K B_k(\theta)$, where $B_k(\theta) := \{\lambda \in \Theta \mid \forall j \ne k,\ \theta_j = \lambda_j \text{ and } \mu^*(\theta) < \mu^*(\lambda)\}$. 3.3 Lower bound for the PBM Theorem 6. For the PBM, the following lower bound holds for any uniformly efficient algorithm:
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\theta R(T)}{\log T} \ge \sum_{k=L+1}^K \min_{l \in \{1, \ldots, L\}} \frac{\Delta_{v_{k,l}}(\theta)}{d(\kappa_l \theta_k, \kappa_l \theta_L)}, \tag{6}$$
where $v_{k,l} := (1, \ldots, l-1, k, l, \ldots, L-1)$. Proof. First, note that for the PBM one has $I_l(\theta_k, \lambda_k) = d(\kappa_l\theta_k, \kappa_l\lambda_k)$.
To get the expression given in Theorem 6 from Corollary 5, we proceed as in [6], showing that the optimal coefficients (ca)_{a∈A} can be non-zero only for the K − L actions that put the suboptimal arm k in the position l that reaches the minimum of ∆_{v_{k,l}}(θ)/d(κl θk, κl θL). Nevertheless, this position does not always coincide with L, the end of the displayed list, contrary to the case of [6] (see Appendix B for details).

The discrete minimization that appears on the r.h.s. of Theorem 6 corresponds to a fundamental trade-off in the PBM. When trying to discriminate a suboptimal arm k from the L optimal ones, it is desirable to place it higher in the list to obtain more information, as d(κl θk, κl θL) is an increasing function of κl. On the other hand, the gap ∆_{v_{k,l}}(θ) also increases as l gets closer to the top of the list. The fact that d(κl θk, κl θL) is not linear in κl (it is a strictly convex function of κl) renders the trade-off non-trivial. It is easily checked that when (θ1 − θL) is very small, i.e. when all optimal arms are equivalent, the optimal exploratory position is l = 1. In contrast, it is equal to L when the gap (θL − θ_{L+1}) becomes very small. Note that by using that for any suboptimal a ∈ A, ∆a(θ) ≥ Σ_{k=L+1}^{K} Σ_{l=1}^{L} 1{al = k} κl (θL − θk), one can lower bound the r.h.s. of Theorem 6 by κL Σ_{k=L+1}^{K} (θL − θk)/d(κL θk, κL θL), which is not tight in general.

Remark 7. In the uncensored version of the PBM – i.e., if the Yl(t) were observed – the expression of Ia(θ, λ) is simpler: it is equal to Σ_{l=1}^{L} Σ_{k=1}^{K} 1{Al(t) = k} κl d(θk, λk) and leads to a lower bound that coincides with (3). The uncensored PBM is actually statistically very close to the weighted Cascade Model and can be addressed by algorithms that do not assume knowledge of the (κl)l but only of their ordering.

4 Algorithms

In this section we introduce two algorithms for the PBM.
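Before turning to the algorithms, the discrete minimization in Theorem 6 and the position trade-off it encodes can be illustrated numerically. The sketch below is ours (plain Python, with the standard Bernoulli KL divergence; the parameter values are illustrative and not taken from the paper):

```python
import math

def kl_bernoulli(p, q):
    """Bernoulli KL divergence d(p, q) appearing in the lower bounds."""
    eps = 1e-12
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def gap(theta, kappa, action):
    """Delta_a(theta) = mu*(theta) - mu_a(theta); `action` lists 0-based arm indices.
    Assumes theta is sorted in decreasing order, so the optimal action is (1,...,L)."""
    mu_star = sum(k * theta[l] for l, k in enumerate(kappa))
    return mu_star - sum(k * theta[a] for a, k in zip(action, kappa))

def best_position(theta, kappa, k):
    """1-based position l minimizing Delta_{v_{k,l}} / d(kappa_l*theta_k, kappa_l*theta_L),
    i.e. the optimal exploratory position for the suboptimal arm k (1-based)."""
    L = len(kappa)
    def cost(l):
        # v_{k,l} = (1, ..., l-1, k, l, ..., L-1), written with 0-based indices
        v = list(range(l - 1)) + [k - 1] + list(range(l - 1, L - 1))
        return gap(theta, kappa, v) / kl_bernoulli(kappa[l - 1] * theta[k - 1],
                                                   kappa[l - 1] * theta[L - 1])
    return min(range(1, L + 1), key=cost)

kappa = [0.9, 0.6, 0.3]
# nearly equal optimal arms -> explore at the top; tiny gap theta_L - theta_{L+1} -> explore at L
pos_top = best_position([0.500, 0.499, 0.498, 0.2], kappa, 4)
pos_bottom = best_position([0.9, 0.8, 0.5, 0.499], kappa, 4)
```

Running it on the two regimes discussed above recovers the claim: nearly indistinguishable optimal arms push the exploratory position to 1, while a tiny (θL − θ_{L+1}) gap pushes it to L.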
The first one uses the CUCB strategy of [4] and requires a simple upper confidence bound for θk based on the estimator ˆθk(t) defined in (2). The second algorithm is based on the Parsimonious Item Exploration scheme – PIE(L) – proposed in [6] and aims at reaching asymptotically optimal performance. For this second algorithm, termed PBM-PIE, it is also necessary to use a multi-position analog of the well-known KL-UCB index [10], inspired by a result of [17]. The analysis of PBM-PIE provided below confirms the relevance of the lower bound derived in Section 3.

PBM-UCB The first algorithm simply consists in sorting optimistic indices in decreasing order and pulling the corresponding first L arms [4]. To derive the expression of the required “exploration bonus” we use an upper confidence bound for ˆθk(t) based on Hoeffding’s inequality:
U_k^{UCB}(t, δ) = Sk(t)/Ñk(t) + sqrt( Nk(t)/Ñk(t) ) · sqrt( δ/(2 Ñk(t)) ),
for which a coverage bound is given by the next proposition, proven in Appendix C.

Proposition 8. Let k be any arm in {1, . . . , K}. Then, for any δ > 0,
P( U_k^{UCB}(t, δ) ≤ θk ) ≤ e ⌈δ log(t)⌉ e^{−δ}.

Following the ideas of [7], it is possible to obtain a logarithmic regret upper bound for this algorithm. The proof is given in Appendix D.

Theorem 9. Let C(κ) = min_{1≤l≤L} [ (Σ_{j=1}^{L} κj)²/l + (Σ_{j=1}^{l} κj)² ] / κL² and ∆ = min_{a∈σ(a∗)\{a∗}} ∆a, where σ(a∗) denotes the permutations of the optimal action. Using PBM-UCB with δ = (1 + ϵ) log(t) for some ϵ > 0, there exists a constant C0(ϵ), independent of the model parameters, such that the regret of PBM-UCB is bounded from above by
E[R(T)] ≤ C0(ϵ) + 16(1 + ϵ) C(κ) log T ( L/∆ + Σ_{k∉a∗} 1/(κL(θL − θk)) ).

The presence of the term L/∆ in the above expression is attributable to limitations of the mathematical analysis. On the other hand, the absence of the KL-divergence terms appearing in the lower bound (6) is due to the use of an upper confidence bound based on Hoeffding’s inequality.
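The PBM-UCB index is cheap to compute from per-position counts. The sketch below is ours and assumes, for a single arm, per-position click counts S_l and pull counts N_l, with Nk(t) = Σ_l N_l and Ñk(t) = Σ_l κ_l N_l as in the text:

```python
import math

def pbm_ucb_index(S, N, kappa, delta):
    """Hoeffding-based optimistic index for one arm:
    U = S/N_tilde + sqrt(N/N_tilde) * sqrt(delta / (2 * N_tilde)),
    where S, N are per-position click/pull counts and N_tilde = sum_l kappa_l * N_l."""
    n = sum(N)                                  # Nk(t)
    n_tilde = sum(k * m for k, m in zip(kappa, N))  # Nk_tilde(t)
    s = sum(S)                                  # Sk(t)
    return s / n_tilde + math.sqrt(n / n_tilde) * math.sqrt(delta / (2 * n_tilde))

u = pbm_ucb_index([27, 18, 9], [100, 100, 100], [0.9, 0.6, 0.3], delta=2.0)
```

With these counts the position-corrected empirical mean is 54/180 = 0.3, and the index adds an exploration bonus that grows with δ, as expected of an upper confidence bound.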
PBM-PIE We adapt the PIE(l) algorithm introduced by [6] for the Cascade Model to the PBM in Algorithm 1 below. At each round, the learner potentially explores at position L with probability 1/2, using the following upper confidence bound for each arm k:
U_k(t, δ) = sup{ q ∈ [θ_k^{min}(t), 1] : Σ_{l=1}^{L} N_{k,l}(t) d( S_{k,l}(t)/N_{k,l}(t), κl q ) ≤ δ },   (7)
where θ_k^{min}(t) is the minimizer of the convex function Φ : q ↦ Σ_{l=1}^{L} N_{k,l}(t) d(S_{k,l}(t)/N_{k,l}(t), κl q). In the other positions, l = 1, . . . , L − 1, PBM-PIE selects the arms with the largest estimates ˆθk(t). The resulting algorithm is presented as Algorithm 1 below, denoting by L(t) the arms with the L largest empirical estimates, referred to as the “leaders” at round t.

Algorithm 1 – PBM-PIE
Require: K, L, observation probabilities κ, ϵ > 0
Initialization: first K rounds, play each arm at every position
for t = K + 1, . . . , T do
  Compute ˆθk(t) for all k
  L(t) ← top-L arms ordered by decreasing ˆθk(t)
  Al(t) ← Ll(t) for each position l < L
  B(t) ← {k | k ∉ L(t), U_k(t, (1 + ϵ) log(T)) ≥ ˆθ_{L_L(t)}(t)}
  if B(t) = ∅ then
    AL(t) ← LL(t)
  else
    With probability 1/2, select AL(t) uniformly at random from B(t); else AL(t) ← LL(t)
  end if
  Play action A(t) and observe feedback Z(t); update N_{k,l}(t + 1) and S_{k,l}(t + 1)
end for

The index U_k(t, δ) defined in (7) aggregates observations from all positions – as in PBM-UCB – but allows to build tighter confidence regions, as shown by the next proposition, proved in Appendix E.

Proposition 10. For all δ ≥ L + 1,
P( U_k(t, δ) < θk ) ≤ e^{L+1} ⌈δ log(t)⌉ (δ/L)^L e^{−δ}.

We may now state the main result of this section, which provides an upper bound on the regret of PBM-PIE.

Theorem 11. Using PBM-PIE with δ = (1 + ϵ) log(t) and ϵ > 0, for any η < min_{k<K} (θk − θ_{k+1})/2, there exist problem-dependent constants C1(η), C2(ϵ, η), C3(ϵ) and β(ϵ, η) such that
E[R(T)] ≤ (1 + ϵ)² log(T) Σ_{k=L+1}^{K} κL(θL − θk)/d(κL θk, κL(θL − η)) + C1(η) + C2(ϵ, η)/T^{β(ϵ,η)} + C3(ϵ).

The proof of this result is provided in Appendix E.
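The index (7) can be computed numerically: Φ is convex in q, so one can locate its minimizer and then bisect on the increasing branch where Φ crosses δ. A minimal sketch, ours, assuming per-position counts for a single arm:

```python
import math

def kl_bernoulli(p, q):
    """Bernoulli KL divergence d(p, q), clipped for numerical safety."""
    eps = 1e-12
    p, q = min(max(p, eps), 1 - eps), min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def pbm_pie_index(S, N, kappa, delta, tol=1e-9):
    """U_k(t, delta) of (7): largest q with Phi(q) <= delta, where
    Phi(q) = sum_l N_l * d(S_l/N_l, kappa_l * q)."""
    def phi(q):
        return sum(n * kl_bernoulli(s / n, k * q)
                   for s, n, k in zip(S, N, kappa) if n > 0)
    if phi(1.0) <= delta:
        return 1.0
    # Phi is convex: find its minimizer theta_min by ternary search ...
    lo, hi = 0.0, 1.0
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if phi(m1) < phi(m2):
            hi = m2
        else:
            lo = m1
    # ... then bisect on [theta_min, 1], where Phi is nondecreasing
    lo, hi = (lo + hi) / 2, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if phi(mid) <= delta:
            lo = mid
        else:
            hi = mid
    return lo

# counts exactly consistent with theta = 0.3 under kappa = (0.9, 0.6, 0.3)
u = pbm_pie_index([27, 18, 9], [100, 100, 100], [0.9, 0.6, 0.3], delta=2.0)
```

Here Φ(0.3) = 0, so the index exceeds the empirical value 0.3 and, as for any upper confidence bound, grows with δ.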
Comparing to the expression in (6), Theorem 11 shows that PBM-PIE reaches asymptotically optimal performance when the optimal exploring position is indeed located at index L. In the other cases, there is a gap, caused by the fact that the exploring position is fixed beforehand rather than adapted from the data.

We conclude this section with a quick description of two other algorithms that will be used in the experimental section to benchmark our results.

Ranked Bandits (RBA-KL-UCB) The state-of-the-art algorithm for the sequential “learning to rank” problem was proposed by [18]. It runs one bandit algorithm per position, each one being entitled to choose the best-suited arm at its rank. The underlying bandit algorithm that runs at each position is left to the choice of the user; the better the policy, the lower the regret can be. If the bandit algorithm at position l selects an arm already chosen at a higher position, it receives a reward of zero. Consequently, the bandit algorithm operating at position l tends to focus on the estimation of the l-th best arm. In the next section, we use as benchmark the Ranked Bandits strategy with the KL-UCB algorithm [10] as the per-position bandit.

PBM-TS The observations Zl(t) are censored Bernoulli, which results in a posterior that does not belong to a standard family of distributions. [13] suggest a version of Thompson Sampling called “Bias Corrected Multiple Play TS” (BC-MP-TS) that approximates the true posterior by a Beta distribution. We observed in experiments that for parameter values close to one, this algorithm does not explore enough. In Figure 1(a), we show this phenomenon for θ = (0.95, 0.85, 0.75, 0.65, 0.55). The true posterior for the parameter θk at time t may be written as a product of truncated scaled beta distributions:
πt(θk) ∝ Π_l θk^{α_{k,l}(t)} (1 − κl θk)^{β_{k,l}(t)},
where α_{k,l}(t) = S_{k,l}(t) and β_{k,l}(t) = N_{k,l}(t) − S_{k,l}(t).
To draw from this exact posterior, we use rejection sampling with proposal distribution Beta(α_{k,m}(t), β_{k,m}(t))/κm, where m = arg max_{1≤l≤L} (α_{k,l}(t) + β_{k,l}(t)).

5 Experiments

5.1 Simulations

In order to evaluate our strategies, a simple problem is considered in which K = 5, L = 3, κ = (0.9, 0.6, 0.3) and θ = (0.45, 0.35, 0.25, 0.15, 0.05). The arm expectations are chosen such that the asymptotic behavior can be observed within a reasonable time horizon. All results are averaged over 10,000 independent runs of the algorithm. We present the results in Figure 1(b), where PBM-UCB, PBM-PIE and PBM-TS are compared to RBA-KL-UCB. The performances of PBM-PIE and PBM-TS are comparable, the latter even running below the lower bound (a common observation, e.g. see [13], due to the asymptotic nature of the lower bound). The curves confirm our analysis for PBM-PIE and let us conjecture that the true Thompson Sampling policy might be asymptotically optimal.

Figure 1: Simulation results for the suggested strategies. (a) Average regret of PBM-TS and BC-MP-TS compared for high parameters; shaded areas show the first and last deciles. (b) Average regret of various algorithms on synthetic data under the PBM.

Table 1: Statistics on the queries; each line corresponds to the sub-dataset associated with a query.
#ads (K)   #records    min θ    max θ
5          216,565     0.016    0.077
5          68,179      0.031    0.050
6          435,951     0.025    0.067
6          110,071     0.023    0.069
6          147,214     0.004    0.148
8          122,218     0.108    0.146
11         1,228,004   0.022    0.149
11         391,951     0.022    0.084

Figure 2: Performance of the proposed algorithms under the PBM on real data.
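The rejection sampler described above can be sketched as follows. The paper specifies a Beta(α_{k,m}, β_{k,m}) proposal scaled by 1/κm; the version below uses Beta(α+1, β+1) and an explicit envelope constant for the acceptance ratio, which are implementation choices of this sketch rather than the paper's exact specification:

```python
import math, random

def sample_pbm_posterior(alpha, beta, kappa, rng):
    """Draw theta from pi(theta) ∝ prod_l theta^{alpha_l} (1 - kappa_l*theta)^{beta_l} on [0, 1]."""
    L = len(kappa)
    m = max(range(L), key=lambda l: alpha[l] + beta[l])  # most-observed position
    others = [l for l in range(L) if l != m]

    def log_ratio(t):
        # log of target/proposal density (up to constants): positions other than m
        return sum(alpha[l] * math.log(t) + beta[l] * math.log1p(-kappa[l] * t)
                   for l in others)

    # the ratio is log-concave on (0, 1): find its maximum (envelope constant) by ternary search
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if log_ratio(m1) < log_ratio(m2):
            lo = m1
        else:
            hi = m2
    log_env = log_ratio((lo + hi) / 2)

    while True:
        # proposal: Beta(alpha_m + 1, beta_m + 1) scaled to [0, 1/kappa_m]
        t = rng.betavariate(alpha[m] + 1, beta[m] + 1) / kappa[m]
        if not 0.0 < t < 1.0:
            continue  # posterior support is [0, 1]
        if math.log(rng.random()) < log_ratio(t) - log_env:
            return t

rng = random.Random(0)
kappa = [0.9, 0.6, 0.3]
# synthetic counts consistent with theta = 0.5 and 200 pulls per position
alphas = [90, 60, 30]
betas = [110, 140, 170]
samples = [sample_pbm_posterior(alphas, betas, kappa, rng) for _ in range(300)]
post_mean = sum(samples) / len(samples)
```

With counts generated at θ = 0.5, the sampled posterior concentrates around 0.5, as expected.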
As expected, PBM-PIE shows asymptotically optimal performance, matching the lower bound after a large enough horizon.

5.2 Real data experiments: search advertising

The dataset was provided for KDD Cup 2012 track 2 [1] and involves session logs of soso.com, a search engine owned by Tencent. It consists of ads that were inserted among search results. Each of the 150M lines from the log contains the user ID, the query typed, an ad, a position (1, 2 or 3) at which it was displayed, and a binary reward (click/no-click). First, for every query, we excluded ads that were not displayed at least 1,000 times at every position. We also filtered out queries that had fewer than 5 ads satisfying the previous constraint. As a result, we obtained 8 queries with at least 5 and up to 11 ads. For each query q, we computed the matrix of average click-through rates (CTR): Mq ∈ R^{K×L}, where K is the number of ads for the query q and L = 3 the number of positions. It is noticeable that the SVD of each matrix Mq has a highly dominating first singular value, thereby validating the low-rank assumption underlying the PBM. In order to estimate the parameters of the problem, we used the EM algorithm suggested by [5, 9]. Table 1 reports some statistics about the bandit models reconstructed for each query: the number of arms K, the amount of data used to compute the parameters, and the minimum and maximum values of the θ’s for each model. We conducted a series of 2,000 simulations over this dataset. At the beginning of each run, a query was randomly selected together with the corresponding probabilities of scanning positions and arm expectations. Even if rewards were still simulated, this scenario is more realistic since the values of the parameters were extracted from a real-world dataset. We show results for the different algorithms in Figure 2. It is remarkable that RBA-KL-UCB performs slightly better than PBM-UCB.
One can imagine that PBM-UCB does not benefit enough from position aggregation – only 3 positions are considered – to beat RBA-KL-UCB. Both of them are outperformed by PBM-TS and PBM-PIE.

Conclusion

This work provides the first analysis of the PBM in an online context. The proof scheme used to obtain the lower bound on the regret is interesting in its own right, as it can be generalized to various other settings. The tightness of the lower bound is validated by our analysis of PBM-PIE, but it would be an interesting future contribution to provide such guarantees for more straightforward algorithms such as PBM-TS or a ‘PBM-KL-UCB’ using the confidence regions of PBM-PIE. In practice, the algorithms are robust to small variations of the values of the (κl)l, but it would be preferable to obtain some control over the regret under uncertainty on these examination parameters.

Acknowledgements

This work was partially supported by the French research project ALICIA (grant ANR-13-CORD-0020) and by the Machine Learning for Big Data Chair at Télécom ParisTech.

References

[1] KDD Cup 2012 track 2. http://www.kddcup2012.org/.
[2] V. Anantharam, P. Varaiya, and J. Walrand. Asymptotically efficient allocation rules for the multiarmed bandit problem with multiple plays - Part I: IID rewards. Automatic Control, IEEE Transactions on, 32(11):968–976, 1987.
[3] S. Boucheron, G. Lugosi, and P. Massart. Concentration Inequalities: A Nonasymptotic Theory of Independence. OUP Oxford, 2013.
[4] W. Chen, Y. Wang, and Y. Yuan. Combinatorial multi-armed bandit: General framework and applications. In Proc. of the 30th Int. Conf. on Machine Learning, 2013.
[5] A. Chuklin, I. Markov, and M. de Rijke. Click models for web search. Synthesis Lectures on Information Concepts, Retrieval, and Services, 7(3):1–115, 2015.
[6] R. Combes, S. Magureanu, A. Proutière, and C. Laroche. Learning to rank: Regret lower bounds and efficient algorithms. In Proc. of the 2015 ACM SIGMETRICS Int. Conf.
on Measurement and Modeling of Computer Systems, 2015. [7] R. Combes, M. S. T. M. Shahi, A. Proutière, et al. Combinatorial bandits revisited. In Advances in Neural Information Processing Systems, 2015. [8] N. Craswell, O. Zoeter, M. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. In Proc. of the Int. Conf. on Web Search and Data Mining. ACM, 2008. [9] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the royal statistical society. Series B, pages 1–38, 1977. [10] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proc. of the Conf. on Learning Theory, 2011. [11] T. L. Graves and T. L. Lai. Asymptotically efficient adaptive choice of control laws in controlled markov chains. SIAM journal on control and optimization, 35(3):715–743, 1997. [12] E. Kaufmann, O. Cappé, and A. Garivier. On the complexity of best arm identification in multi-armed bandit models. Journal of Machine Learning Research, 2015. [13] J. Komiyama, J. Honda, and H. Nakagawa. Optimal regret analysis of thompson sampling in stochastic multi-armed bandit problem with multiple plays. In Proc. of the 32nd Int. Conf. on Machine Learning, 2015. [14] B. Kveton, C. Szepesvári, Z. Wen, and A. Ashkan. Cascading bandits : Learning to rank in the cascade model. In Proc. of the 32nd Int. Conf. on Machine Learning, 2015. [15] B. Kveton, Z. Wen, A. Ashkan, and C. Szepesvári. Tight regret bounds for stochastic combinatorial semi-bandits. In Proc. of the 18th Int. Conf. on Artificial Intelligence and Statistics, 2015. [16] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in applied mathematics, 6(1):4–22, 1985. [17] S. Magureanu, R. Combes, and A. Proutière. Lipschitz bandits: Regret lower bounds and optimal algorithms. In Proc. of the Conf. on Learning Theory, 2014. [18] F. Radlinski, R. Kleinberg, and T. Joachims. 
Learning diverse rankings with multi-armed bandits. In Proc. of the 25th Int. Conf. on Machine Learning. ACM, 2008.
[19] M. Richardson, E. Dominowska, and R. Ragno. Predicting clicks: estimating the click-through rate for new ads. In Proc. of the 16th Int. Conf. on World Wide Web. ACM, 2007.
[20] S. Katariya, B. Kveton, C. Szepesvári, and Z. Wen. DCM bandits: Learning to rank with multiple clicks. In Proc. of the 33rd Int. Conf. on Machine Learning, 2016.
Learning values across many orders of magnitude Hado van Hasselt Arthur Guez Matteo Hessel Google DeepMind Volodymyr Mnih David Silver Abstract Most learning algorithms are not invariant to the scale of the signal that is being approximated. We propose to adaptively normalize the targets used in the learning updates. This is important in value-based reinforcement learning, where the magnitude of appropriate value approximations can change over time when we update the policy of behavior. Our main motivation is prior work on learning to play Atari games, where the rewards were clipped to a predetermined range. This clipping facilitates learning across many different games with a single learning algorithm, but a clipped reward function can result in qualitatively different behavior. Using adaptive normalization we can remove this domain-specific heuristic without diminishing overall performance. 1 Introduction Many machine-learning algorithms rely on a-priori access to data to properly tune relevant hyperparameters [Bergstra et al., 2011, Bergstra and Bengio, 2012, Snoek et al., 2012]. It is much harder to learn efficiently from a stream of data when we do not know the magnitude of the function we seek to approximate beforehand, or if these magnitudes can change over time, as is typically the case in reinforcement learning when the policy of behavior improves over time. Our main motivation is the work by Mnih et al. [2015], in which Q-learning [Watkins, 1989] is combined with a deep convolutional neural network [cf. LeCun et al., 2015]. The resulting deep Q network (DQN) algorithm learned to play a varied set of Atari 2600 games from the Arcade Learning Environment (ALE) [Bellemare et al., 2013], which was proposed as an evaluation framework to test general learning algorithms on solving many different interesting tasks. DQN was proposed as a singular solution, using a single set of hyperparameters. 
The magnitudes and frequencies of rewards vary wildly between different games. For instance, in Pong the rewards are bounded by −1 and +1 while in Ms. Pac-Man eating a single ghost can yield a reward of up to +1600. To overcome this hurdle, rewards and temporal-difference errors were clipped to [−1, 1], so that DQN would perceive any positive reward as +1, and any negative reward as −1. This is not a satisfying solution for two reasons. First, the clipping introduces domain knowledge. Most games have sparse non-zero rewards. Clipping results in optimizing the frequency of rewards, rather than their sum. This is a fairly reasonable heuristic in Atari, but it does not generalize to many other domains. Second, and more importantly, the clipping changes the objective, sometimes resulting in qualitatively different policies of behavior. We propose a method to adaptively normalize the targets used in the learning updates. If these targets are guaranteed to be normalized it is much easier to find suitable hyperparameters. The proposed technique is not specific to DQN or to reinforcement learning and is more generally applicable in supervised learning and reinforcement learning. There are several reasons such normalization can be desirable. First, sometimes we desire a single system that is able to solve multiple different problems with varying natural magnitudes, as in the Atari domain. Second, for multi-variate functions the normalization can be used to disentangle the natural magnitude of each component from its relative importance in the loss function. This is particularly useful when the components have different units, such as when we predict signals from sensors with different modalities.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Finally, adaptive normalization can help deal with non-stationarity.
For instance, in reinforcement learning the policy of behavior can change repeatedly during learning, thereby changing the distribution and magnitude of the values.

1.1 Related work

Input normalization has long been recognized as important to efficiently learn non-linear approximations such as neural networks [LeCun et al., 1998], leading to research on how to achieve scale-invariance on the inputs [e.g., Ross et al., 2013, Ioffe and Szegedy, 2015, Desjardins et al., 2015]. Output or target normalization has not received as much attention, probably because in supervised learning data sets are commonly available before learning commences, making it straightforward to determine appropriate normalizations or to tune hyper-parameters. However, this assumes the data is available a priori, which is not true in online (potentially non-stationary) settings. Natural gradients [Amari, 1998] are invariant to reparameterizations of the function approximation, thereby avoiding many scaling issues, but these are computationally expensive for functions with many parameters such as deep neural networks. This is why approximations are regularly proposed, typically trading off accuracy to computation [Martens and Grosse, 2015], and sometimes focusing on a certain aspect such as input normalization [Desjardins et al., 2015, Ioffe and Szegedy, 2015]. Most such algorithms are not fully invariant to the scale of the target function. In the Atari domain several algorithmic variants and improvements for DQN have been proposed [van Hasselt et al., 2016, Bellemare et al., 2016, Schaul et al., 2016, Wang et al., 2016], as well as alternative solutions [Liang et al., 2016, Mnih et al., 2016]. However, none of these address the clipping of the rewards or explicitly discuss the impacts of clipping on performance or behavior.

1.2 Preliminaries

Concretely, we consider learning from a stream of data {(Xt, Yt)}_{t=1}^{∞}, where the inputs Xt ∈ R^n and targets Yt ∈ R^k are real-valued tensors.
The aim is to update parameters θ of a function f_θ : R^n → R^k such that the output f_θ(Xt) is (in expectation) close to the target Yt according to some loss l_t(f_θ), for instance defined as a squared difference: l_t(f_θ) = ½ (f_θ(Xt) − Yt)^⊤ (f_θ(Xt) − Yt). A canonical update is stochastic gradient descent (SGD). For a sample (Xt, Yt), the update is then θ_{t+1} = θ_t − α ∇_θ l_t(f_θ), where α ∈ [0, 1] is a step size. The magnitude of this update depends on both the step size and the loss, and it is hard to pick suitable step sizes when nothing is known about the magnitude of the loss. An important special case is when f_θ is a neural network [McCulloch and Pitts, 1943, Rosenblatt, 1962]; these are often trained with a form of SGD [Rumelhart et al., 1986], with hyperparameters that interact with the scale of the loss. Especially for deep neural networks [LeCun et al., 2015, Schmidhuber, 2015] large updates may harm learning, because these networks are highly non-linear and such updates may ‘bump’ the parameters to regions with high error.

2 Adaptive normalization with Pop-Art

We propose to normalize the targets Yt, where the normalization is learned separately from the approximating function. We consider an affine transformation of the targets
Ỹt = Σt^{−1} (Yt − µt),   (1)
where Σt and µt are scale and shift parameters that are learned from data. The scale matrix Σt can be dense, diagonal, or defined by a scalar σt as Σt = σt I. Similarly, the shift vector µt can contain separate components, or be defined by a scalar µt as µt = µt 1. We can then define a loss on a normalized function g(Xt) and the normalized target Ỹt. The unnormalized approximation for any input x is then given by f(x) = Σ g(x) + µ, where g is the normalized function and f is the unnormalized function. At first glance it may seem we have made little progress.
If we learn Σ and µ using the same algorithm as used for the parameters of the function g, then the problem has not become fundamentally different or easier; we would have merely changed the structure of the parameterized function slightly. Conversely, if we consider tuning the scale and shift as hyperparameters, then tuning them is not fundamentally easier than tuning other hyperparameters, such as the step size, directly. Fortunately, there is an alternative. We propose to update Σ and µ according to a separate objective, with the aim of normalizing the updates for g. Thereby, we decompose the problem of learning an appropriate normalization from learning the specific shape of the function. The two properties that we want to achieve simultaneously are (ART) to update the scale Σ and shift µ such that Σ^{−1}(Y − µ) is appropriately normalized, and (POP) to preserve the outputs of the unnormalized function when we change the scale and shift. We discuss these properties separately below. We refer to algorithms that combine output-preserving updates and adaptive rescaling as Pop-Art algorithms, an acronym for “Preserving Outputs Precisely, while Adaptively Rescaling Targets”.

2.1 Preserving outputs precisely

Unless care is taken, repeated updates to the normalization might make learning harder rather than easier because the normalized targets become non-stationary. More importantly, whenever we adapt the normalization based on a certain target, this would simultaneously change the output of the unnormalized function for all inputs. If there is little reason to believe that the other unnormalized outputs were incorrect, this is undesirable and may hurt performance in practice, as illustrated in Section 3. We now first discuss how to prevent these issues, before discussing how to update the scale and shift. The only way to avoid changing all outputs of the unnormalized function whenever we update the scale and shift is by changing the normalized function g itself simultaneously.
The goal is to preserve the outputs from before the change of normalization, for all inputs. This prevents the normalization from affecting the approximation, which is appropriate because its objective is solely to make learning easier, leaving the approximation itself to the optimization algorithm. Without loss of generality the unnormalized function can be written as
f_{θ,Σ,µ,W,b}(x) ≡ Σ g_{θ,W,b}(x) + µ ≡ Σ (W h_θ(x) + b) + µ,   (2)
where h_θ is a parametrized (non-linear) function, and g_{θ,W,b}(x) = W h_θ(x) + b is the normalized function. It is not uncommon for deep neural networks to end in a linear layer, in which case h_θ can be the output of the last (hidden) layer of non-linearities. Alternatively, we can always add a square linear layer to any non-linear function h_θ to ensure this constraint, for instance initialized as W_0 = I and b_0 = 0. The following proposition shows that we can update the parameters W and b to fulfill the second desideratum of preserving outputs precisely for any change in normalization.

Proposition 1. Consider a function f : R^n → R^k defined as in (2) as
f_{θ,Σ,µ,W,b}(x) ≡ Σ (W h_θ(x) + b) + µ,
where h_θ : R^n → R^m is any non-linear function of x ∈ R^n, Σ is a k × k matrix, µ and b are k-element vectors, and W is a k × m matrix. Consider any change of the scale and shift parameters from Σ to Σ_new and from µ to µ_new, where Σ_new is non-singular. If we then additionally change the parameters W and b to W_new and b_new, defined by
W_new = Σ_new^{−1} Σ W   and   b_new = Σ_new^{−1} (Σ b + µ − µ_new),
then the outputs of the unnormalized function f are preserved precisely in the sense that
f_{θ,Σ,µ,W,b}(x) = f_{θ,Σ_new,µ_new,W_new,b_new}(x),  ∀x.

This and later propositions are proven in the appendix. For the special case of scalar scale and shift, with Σ ≡ σI and µ ≡ µ1, the updates to W and b become W_new = (σ/σ_new) W and b_new = (σ b + µ − µ_new)/σ_new.
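Proposition 1 is easy to check numerically in the scalar case (Σ = σI, µ = µ1). The snippet below is ours; it verifies that the rescaled parameters leave the unnormalized outputs unchanged for a handful of feature values:

```python
def f(h, W, b, sigma, mu):
    """Unnormalized output in the scalar case: f(x) = sigma*(W*h(x) + b) + mu."""
    return sigma * (W * h + b) + mu

def rescale(W, b, sigma, mu, sigma_new, mu_new):
    """Output-preserving update of Proposition 1, scalar case:
    W_new = (sigma/sigma_new)*W,  b_new = (sigma*b + mu - mu_new)/sigma_new."""
    return (sigma / sigma_new) * W, (sigma * b + mu - mu_new) / sigma_new

# arbitrary parameters and an arbitrary change of normalization
W, b, sigma, mu = 1.7, -0.3, 2.0, 5.0
sigma_new, mu_new = 10.0, -1.0
W2, b2 = rescale(W, b, sigma, mu, sigma_new, mu_new)

# the unnormalized function is unchanged for every feature value h
max_diff = max(abs(f(h, W, b, sigma, mu) - f(h, W2, b2, sigma_new, mu_new))
               for h in [-2.0, 0.0, 0.5, 3.0])
```

Expanding σ_new(W_new h + b_new) + µ_new shows the σ_new and µ_new terms cancel exactly, recovering σ(Wh + b) + µ.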
After updating the scale and shift we can update the output of the normalized function g_{θ,W,b}(Xt) toward the normalized output Ỹt, using any learning algorithm. Importantly, the normalization can be updated first, thereby avoiding harmful large updates just before they would otherwise occur. This observation is made more precise in Proposition 2 in Section 2.2.

Algorithm 1 – SGD on squared loss with Pop-Art
For a given differentiable function h_θ, initialize θ.
Initialize W = I, b = 0, Σ = I, and µ = 0.
while learning do
  Observe input X and target Y
  Use Y to compute new scale Σ_new and new shift µ_new
  W ← Σ_new^{−1} Σ W,  b ← Σ_new^{−1}(Σ b + µ − µ_new)   (rescale W and b)
  Σ ← Σ_new,  µ ← µ_new   (update scale and shift)
  h ← h_θ(X)   (store output of h_θ)
  J ← (∇_θ h_{θ,1}(X), . . . , ∇_θ h_{θ,m}(X))   (compute Jacobian of h_θ)
  δ ← W h + b − Σ^{−1}(Y − µ)   (compute normalized error)
  θ ← θ − α J W^⊤ δ   (compute SGD update for θ)
  W ← W − α δ h^⊤   (compute SGD update for W)
  b ← b − α δ   (compute SGD update for b)
end while

Algorithm 1 is an example implementation of SGD with Pop-Art for a squared loss. It can be generalized easily to any other loss by changing the definition of δ. Notice that W and b are updated twice: first to adapt to the new scale and shift in order to preserve the outputs of the function, and then by SGD. The order of these updates is important because it allows us to use the new normalization immediately in the subsequent SGD update.

2.2 Adaptively rescaling targets

A natural choice is to normalize the targets to approximately have zero mean and unit variance. For clarity and conciseness, we consider scalar normalizations. It is straightforward to extend to diagonal or dense matrices. If we have data {(Xi, Yi)}_{i=1}^{t} up to some time t, we then may desire
Σ_{i=1}^{t} (Yi − µt)/σt = 0   and   (1/t) Σ_{i=1}^{t} (Yi − µt)²/σt² = 1,
such that
µt = (1/t) Σ_{i=1}^{t} Yi   and   σt² = (1/t) Σ_{i=1}^{t} Yi² − µt².   (3)
This can be generalized to incremental updates
µt = (1 − βt) µ_{t−1} + βt Yt   and   σt² = νt − µt²,  where  νt = (1 − βt) ν_{t−1} + βt Yt².
(4)

Here νt estimates the second moment of the targets and βt ∈ [0, 1] is a step size. If νt − µt² is positive initially then it will always remain so, although to avoid issues with numerical precision it can be useful to enforce a lower bound explicitly by requiring νt − µt² ≥ ε with ε > 0. For full equivalence to (3) we can use βt = 1/t. If βt = β is constant we get exponential moving averages, placing more weight on recent data points, which is appropriate in non-stationary settings. A constant β has the additional benefit of never becoming negligibly small. Consider the first time a target is observed that is much larger than all previously observed targets. If βt is small, our statistics would adapt only slightly, and the resulting update may be large enough to harm learning. If βt is not too small, the normalization can adapt to the large target before updating, potentially making learning more robust. In particular, the following proposition holds.

Proposition 2. When using updates (4) to adapt the normalization parameters σ and µ, the normalized targets are bounded for all t by
−sqrt((1 − βt)/βt) ≤ (Yt − µt)/σt ≤ sqrt((1 − βt)/βt).

For instance, if βt = β = 10^{−4} for all t, then the normalized target is guaranteed to be in (−100, 100). Note that Proposition 2 does not rely on any assumptions about the distribution of the targets. This is an important result, because it implies we can bound the potential normalized errors before learning, without any prior knowledge about the actual targets we may observe.

Algorithm 2 – Normalized SGD
For a given differentiable function h_θ, initialize θ.
while learning do
  Observe input X and target Y
  Use Y to compute new scale Σ
  h ← h_θ(X)   (store output of h_θ)
  J ← (∇h_{θ,1}(X), . . .
, ∇h_{θ,m}(X))^⊤   (compute Jacobian of h_θ)
  δ ← W h + b − Y   (compute unnormalized error)
  θ ← θ − α J (Σ^{−1} W)^⊤ Σ^{−1} δ   (update θ with scaled SGD)
  W ← W − α δ h^⊤   (update W with SGD)
  b ← b − α δ   (update b with SGD)
end while

It is an open question whether it is uniformly best to normalize by mean and variance. In the appendix we discuss other normalization updates, based on percentiles and mini-batches, and derive correspondences between all of these.

2.3 An equivalence for stochastic gradient descent

We now step back and analyze the effect of the magnitude of the errors on the gradients when using regular SGD. This analysis suggests a different normalization algorithm, which has an interesting correspondence to Pop-Art SGD. We consider SGD updates for an unnormalized multi-layer function of the form f_{θ,W,b}(X) = W h_θ(X) + b. The update for the weight matrix W is
W_t = W_{t−1} − α_t δ_t h_{θ_t}(Xt)^⊤,
where δ_t = f_{θ,W,b}(Xt) − Yt is the gradient of the squared loss with respect to the output, which we here call the unnormalized error. The magnitude of this update depends linearly on the magnitude of the error, which is appropriate when the inputs are normalized, because then the ideal scale of the weights depends linearly on the magnitude of the targets.¹ Now consider the SGD update to the parameters of h_θ, θ_t = θ_{t−1} − α J_t W_{t−1}^⊤ δ_t, where J_t = (∇g_{θ,1}(X), . . . , ∇g_{θ,m}(X))^⊤ is the Jacobian for h_θ. The magnitudes of both the weights W and the errors δ depend linearly on the magnitude of the targets. This means that the magnitude of the update for θ depends quadratically on the magnitude of the targets. There is no compelling reason for these updates to depend at all on these magnitudes, because the weights in the top layer already ensure appropriate scaling. In other words, for each doubling of the magnitudes of the targets, the updates to the lower layers quadruple for no clear reason.
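The incremental updates (4) and the guarantee of Proposition 2 can be exercised directly. The sketch below (our variable names) feeds a stream containing a rare, huge target and records the largest normalized magnitude seen when the normalization is, as in Algorithm 1, updated before the target is used:

```python
import math, random

class ArtNormalizer:
    """Incremental scale/shift from (4):
    mu_t = (1-beta)*mu + beta*y,  nu_t = (1-beta)*nu + beta*y^2,  sigma_t^2 = nu_t - mu_t^2."""
    def __init__(self, beta, eps=1e-12):
        self.beta, self.eps = beta, eps
        self.mu, self.nu = 0.0, 0.0

    @property
    def sigma(self):
        # lower-bound the variance estimate for numerical safety
        return math.sqrt(max(self.nu - self.mu ** 2, self.eps))

    def update(self, y):
        b = self.beta
        self.mu = (1 - b) * self.mu + b * y
        self.nu = (1 - b) * self.nu + b * y * y

    def normalize(self, y):
        return (y - self.mu) / self.sigma

beta = 0.01
norm = ArtNormalizer(beta)
rng = random.Random(0)
bound = math.sqrt((1 - beta) / beta)  # Proposition 2: ~9.95 for beta = 0.01
worst = 0.0
for t in range(2000):
    y = 65535.0 if t % 1000 == 999 else rng.uniform(0.0, 10.0)
    norm.update(y)                              # adapt the normalization first ...
    worst = max(worst, abs(norm.normalize(y)))  # ... so the normalized target is bounded
```

Even with targets jumping four orders of magnitude, the worst normalized value never exceeds sqrt((1 − β)/β), exactly as Proposition 2 guarantees.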
This analysis suggests an algorithmic solution, which seems to be novel in and of itself, in which we track the magnitudes of the targets in a separate parameter σ_t, and then multiply the updates for all lower layers by a factor σ_t⁻². A more general version of this for matrix scalings is given in Algorithm 2. We prove an interesting, and perhaps surprising, connection to the Pop-Art algorithm.

Proposition 3. Consider two functions defined by

f_θ,Σ,µ,W,b(x) = Σ(Wh_θ(x) + b) + µ  and  f_θ,W,b(x) = Wh_θ(x) + b ,

where h_θ is the same differentiable function in both cases, and the functions are initialized identically, using Σ₀ = I and µ₀ = 0, and the same initial θ₀, W₀ and b₀. Consider updating the first function using Algorithm 1 (Pop-Art SGD) and the second using Algorithm 2 (Normalized SGD). Then, for any sequence of non-singular scales {Σ_t}_{t=1}^∞ and shifts {µ_t}_{t=1}^∞, the algorithms are equivalent in the sense that 1) the sequences {θ_t}_{t=0}^∞ are identical, and 2) the outputs of the functions are identical, for any input.

The proposition shows a duality between normalizing the targets, as in Algorithm 1, and changing the updates, as in Algorithm 2. This allows us to gain more intuition about the algorithm.

¹In general care should be taken that the inputs are well-behaved; this is exactly the point of recent work on input normalization [Ioffe and Szegedy, 2015, Desjardins et al., 2015].

Fig. 1a. Median RMSE (log scale) on binary regression for SGD without normalization (red), with normalization but without preserving outputs (blue, labeled ‘Art’), and with Pop-Art (green). Shaded regions cover the 10th–90th percentiles.

Fig. 1b. ℓ2 gradient norms for DQN during learning on 57 Atari games with actual unclipped rewards (left, red), clipped rewards (middle, blue), and using Pop-Art (right, green) instead of clipping. Shaded areas correspond to 95%, 90% and 50% of games.

In particular,
in Algorithm 2 the updates in the top layer are not normalized, thereby allowing the last linear layer to adapt to the scale of the targets. This is in contrast to other algorithms that have some flavor of adaptive normalization, such as RMSprop [Tieleman and Hinton, 2012], AdaGrad [Duchi et al., 2011], and Adam [Kingma and Ba, 2015], which divide each component of the gradient by the square root of an empirical second moment of that component. That said, these methods are complementary, and it is straightforward to combine Pop-Art with optimization algorithms other than SGD.

3 Binary regression experiments

We first analyze the effect of rare events in online learning, when infrequently a much larger target is observed. Such events can for instance occur when learning from noisy sensors that sometimes capture an actual signal, or when learning from sparse non-zero reinforcements. We empirically compare three variants of SGD: without normalization, with normalization but without preserving outputs precisely (i.e., with ‘Art’, but without ‘Pop’), and with Pop-Art. The inputs are binary representations of integers drawn uniformly at random between 0 and n = 2¹⁰ − 1. The desired outputs are the corresponding integer values. Every 1000 samples, we present the binary representation of 2¹⁶ − 1 as input (i.e., all 16 inputs are 1) and as target 2¹⁶ − 1 = 65,535. The approximating function is a fully connected neural network with 16 inputs, 3 hidden layers with 10 nodes per layer, and tanh internal activation functions. This simple setup allows extensive sweeps over hyper-parameters, to avoid bias towards any algorithm by the way we tune these. The step sizes α for SGD and β for the normalization are tuned by a grid search over {10⁻⁵, 10⁻⁴·⁵, …, 10⁻⁰·⁵, 1}. Figure 1a shows the root mean squared error (RMSE, log scale) for each of 5000 samples, before updating the function (so this is a test error, not a train error).
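The input/target stream of this experiment can be sketched as follows (a reconstruction under the assumptions stated above, not the authors' code):

```python
import random

def target_stream(n_samples, n_bits=16, seed=0):
    """Yield (binary input, integer target) pairs as described in the text:
    uniform integers in [0, 2**10 - 1], except every 1000th sample, which is
    the outlier 2**16 - 1 = 65535 (all 16 input bits set)."""
    rng = random.Random(seed)
    for t in range(1, n_samples + 1):
        y = 2**16 - 1 if t % 1000 == 0 else rng.randrange(2**10)
        x = [(y >> i) & 1 for i in range(n_bits)]  # binary input features
        yield x, y

samples = list(target_stream(2000))
assert samples[999][1] == 65535                       # t = 1000 is the outlier
assert all(b == 1 for b in samples[999][0])           # all 16 input bits are 1
assert all(y < 2**10 for i, (x, y) in enumerate(samples) if (i + 1) % 1000 != 0)
```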
The solid line is the median of 50 repetitions, and the shaded region covers the 10th to 90th percentiles. The plotted results correspond to the best hyper-parameters according to the overall RMSE (i.e., area under the curve). The lines are slightly smoothed by averaging over each 10 consecutive samples. SGD favors a relatively small step size (α = 10⁻³·⁵) to avoid harmful large updates, but this slows learning on the smaller updates; the error curve is almost flat in between spikes. SGD with adaptive normalization (labeled ‘Art’) can use a larger step size (α = 10⁻²·⁵) and therefore learns faster, but has high error after the spikes because the changing normalization also changes the outputs for the smaller inputs, increasing the errors on these. In comparison, Pop-Art performs much better. It prefers the same step size as Art (α = 10⁻²·⁵), but Pop-Art can exploit a much faster rate for the statistics (best performance with β = 10⁻⁰·⁵ for Pop-Art and β = 10⁻⁴ for Art). The faster tracking of statistics protects Pop-Art from the large spikes, while the output preservation avoids invalidating the outputs for smaller targets. We ran experiments with RMSprop but left these out of the figure as the results were very similar to SGD.

4 Atari 2600 experiments

An important motivation for this work is reinforcement learning with non-linear function approximators such as neural networks (sometimes called deep reinforcement learning). The goal is to predict and optimize action values, defined as the expected sum of future rewards. These rewards can differ arbitrarily from one domain to the next, and non-zero rewards can be sparse. As a result, the action values can span a varied and wide range which is often unknown before learning commences. Mnih et al. [2015] combined Q-learning with a deep neural network in an algorithm called DQN, which impressively learned to play many games using a single set of hyper-parameters.
However, as discussed above, to handle the different reward magnitudes with a single system, all rewards were clipped to the interval [−1, 1]. This is harmless in some games, such as Pong where no reward is ever higher than 1 or lower than −1, but it is not satisfactory in general, as this heuristic introduces specific domain knowledge: that optimizing reward frequencies is approximately as useful as optimizing the total score. Moreover, the clipping makes the DQN algorithm blind to differences between certain actions, such as the difference in reward between eating a ghost (reward ≥ 100) and eating a pellet (reward = 25) in Ms. Pac-Man. We hypothesize 1) that overall performance decreases when we turn off clipping, because it is not possible to tune a step size that works on many games, and 2) that we can regain much of the lost performance by using Pop-Art. The goal is not to improve state-of-the-art performance, but to remove the domain-dependent heuristic that is induced by the clipping of the rewards, thereby uncovering the true rewards.

We ran the Double DQN algorithm [van Hasselt et al., 2016] in three versions: without changes, without clipping of both rewards and temporal difference errors, and without clipping but additionally using Pop-Art. The targets are the sum of a reward and the discounted value at the next state:

Y_t = R_{t+1} + γ Q(S_{t+1}, argmax_a Q(S_{t+1}, a; θ); θ⁻) ,   (5)

where Q(s, a; θ) is the estimated action value of action a in state s according to current parameters θ, and where θ⁻ is a more stable periodic copy of these parameters [cf. Mnih et al., 2015, van Hasselt et al., 2016, for more details]. This is a form of Double Q-learning [van Hasselt, 2010]. We roughly tuned the main step size and the step size for the normalization to 10⁻⁴. It is not straightforward to tune the unclipped version, for reasons that will become clear soon.

Figure 1b shows the ℓ2 norm of the gradient of Double DQN during learning as a function of the number of training steps.
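The Double Q-learning target in Eq. (5) can be sketched as follows (function and argument names are our own; the value estimates in the example are hypothetical):

```python
def double_dqn_target(reward, next_q_online, next_q_target, gamma=0.99):
    """Sketch of the target in Eq. (5): the online network Q(.; theta) selects
    the next action via argmax, and the periodic copy Q(.; theta^-) evaluates it."""
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[a_star]

# Online net prefers action 1; the target net's value for action 1 is 1.0.
y = double_dqn_target(1.0, next_q_online=[0.1, 0.9], next_q_target=[2.0, 1.0])
assert abs(y - 1.99) < 1e-12  # 1.0 + 0.99 * 1.0
```

Decoupling selection (online parameters θ) from evaluation (periodic copy θ⁻) is what distinguishes this target from the standard DQN target.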
The left plot corresponds to no reward clipping, the middle to clipping (as per the original DQN and Double DQN), and the right to using Pop-Art instead of clipping. Each faint dashed line corresponds to the median norm (where the median is taken over time) on one game. The shaded areas correspond to 50%, 90%, and 95% of games. Without clipping the rewards, Pop-Art produces a much narrower band within which the gradients fall. Across games, 95% of median norms range over less than two orders of magnitude (roughly between 1 and 20), compared to almost four orders of magnitude for clipped Double DQN, and more than six orders of magnitude for unclipped Double DQN without Pop-Art. The wide range for the latter shows why it is impossible to find a suitable step size with neither clipping nor Pop-Art: the updates are either far too small on some games or far too large on others.

After 200M frames, we evaluated the actual scores of the best performing agent in each game on 100 episodes of up to 30 minutes of play, and then normalized by human and random scores as described by Mnih et al. [2015]. Figure 2 shows the differences in normalized scores between (clipped) Double DQN and Double DQN with Pop-Art.

The main eye-catching result is that the distribution of performance drastically changed. On some games (e.g., Gopher, Centipede) we observe dramatic improvements, while on other games (e.g., Video Pinball, Star Gunner) we see a substantial decrease. For instance, in Ms. Pac-Man the clipped Double DQN agent does not care more about ghosts than pellets, but Double DQN with Pop-Art learns to actively hunt ghosts, resulting in higher scores. Especially remarkable is the improved performance on games like Centipede and Gopher, but also notable is a game like Frostbite, which went from below 50% to a near-human performance level. Raw scores can be found in the appendix.
Figure 2: Differences between normalized scores for Double DQN with and without Pop-Art on 57 Atari games.

Some games fare worse with unclipped rewards because the change alters the nature of the problem. For instance, in Time Pilot the Pop-Art agent learns to quickly shoot a mothership to advance to the next level of the game, obtaining many points in the process. The clipped agent instead shoots at anything that moves, ignoring the mothership. However, in the long run more points are scored in this game with the safer and more homogeneous strategy of the clipped agent. One reason for the disconnect between the seemingly qualitatively good behavior and the lower scores is that the agents are fairly myopic: both use a discount factor of γ = 0.99, and therefore only optimize rewards that happen within a dozen or so seconds into the future.

On the whole, the results show that with Pop-Art we can successfully remove the clipping heuristic that has been present in all prior DQN variants, while retaining overall performance levels. Double DQN with Pop-Art performs slightly better than Double DQN with clipped rewards: on 32 out of 57 games performance is at least as good as clipped Double DQN, and both the median (+0.4%) and mean (+34%) differences are positive.
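The human-normalization of scores mentioned above can be sketched as follows (the baseline values in the example are hypothetical, not the actual per-game baselines from Mnih et al. [2015]):

```python
def normalized_score(agent_score, random_score, human_score):
    """Sketch of human-normalized scoring as in Mnih et al. [2015]:
    0% corresponds to a random player, 100% to the human baseline."""
    return 100.0 * (agent_score - random_score) / (human_score - random_score)

# Hypothetical per-game numbers, purely for illustration:
assert normalized_score(agent_score=75.0, random_score=0.0, human_score=100.0) == 75.0
assert normalized_score(agent_score=-5.0, random_score=0.0, human_score=100.0) == -5.0
```

Because the scale is anchored per game, scores above 100% indicate super-human play and negative scores indicate performance below a random policy, which makes differences comparable across the 57 games in Figure 2.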
5 Discussion

We have demonstrated that Pop-Art can be used to adapt to different and non-stationary target magnitudes. This problem has perhaps not been widely appreciated, potentially because in deep learning it is common to tune or normalize a priori, using an existing data set. This is not as straightforward in reinforcement learning, where the policy and the corresponding values may repeatedly change over time. This makes Pop-Art a promising tool for deep reinforcement learning, although it is not specific to this setting. We saw that Pop-Art can successfully replace the clipping of rewards as done in DQN to handle the various magnitudes of the targets used in the Q-learning update. Now that the true problem is exposed to the learning algorithm we can hope to make further progress, for instance by improving the exploration [Osband et al., 2016], which can now be informed about the true unclipped rewards.

References

S. I. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251–276, 1998. ISSN 0899-7667.
M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Intell. Res. (JAIR), 47:253–279, 2013.
M. G. Bellemare, G. Ostrovski, A. Guez, P. S. Thomas, and R. Munos. Increasing the action gap: New operators for reinforcement learning. In AAAI, 2016.
J. Bergstra and Y. Bengio. Random search for hyper-parameter optimization. The Journal of Machine Learning Research, 13(1):281–305, 2012.
J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.
G. Desjardins, K. Simonyan, R. Pascanu, and K. Kavukcuoglu. Natural neural networks. In Advances in Neural Information Processing Systems, pages 2062–2070, 2015.
J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization.
The Journal of Machine Learning Research, 12:2121–2159, 2011.
S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015.
Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
Y. LeCun, Y. Bengio, and G. Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
Y. Liang, M. C. Machado, E. Talvitie, and M. H. Bowling. State of the art control of Atari games using shallow reinforcement learning. In International Conference on Autonomous Agents and Multiagent Systems, 2016.
J. Martens and R. B. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 2408–2417, 2015.
W. S. McCulloch and W. Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4):115–133, 1943.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, 2016.
I. Osband, C. Blundell, A. Pritzel, and B. Van Roy. Deep exploration via bootstrapped DQN. CoRR, abs/1602.04621, 2016.
F. Rosenblatt. Principles of Neurodynamics. Spartan, New York, 1962.
S. Ross, P. Mineiro, and J. Langford. Normalized online learning.
In Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence, 2013.
D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In Parallel Distributed Processing, volume 1, pages 318–362. MIT Press, 1986.
T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay. In International Conference on Learning Representations, Puerto Rico, 2016.
J. Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems, pages 2951–2959, 2012.
T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 2012.
H. van Hasselt. Double Q-learning. Advances in Neural Information Processing Systems, 23:2613–2621, 2010.
H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with Double Q-learning. AAAI, 2016.
Z. Wang, N. de Freitas, T. Schaul, M. Hessel, H. van Hasselt, and M. Lanctot. Dueling network architectures for deep reinforcement learning. In International Conference on Machine Learning, New York, NY, USA, 2016.
C. J. C. H. Watkins. Learning from delayed rewards. PhD thesis, University of Cambridge, England, 1989.
Attend, Infer, Repeat: Fast Scene Understanding with Generative Models S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, Geoffrey E. Hinton {aeslami,heess,theophane,tassa,dsz,korayk,geoffhinton}@google.com Google DeepMind, London, UK Abstract We present a framework for efficient inference in structured image models that explicitly reason about objects. We achieve this by performing probabilistic inference using a recurrent neural network that attends to scene elements and processes them one at a time. Crucially, the model itself learns to choose the appropriate number of inference steps. We use this scheme to learn to perform inference in partially specified 2D models (variable-sized variational auto-encoders) and fully specified 3D models (probabilistic renderers). We show that such models learn to identify multiple objects – counting, locating and classifying the elements of a scene – without any supervision, e.g., decomposing 3D images with various numbers of objects in a single forward pass of a neural network at unprecedented speed. We further show that the networks produce accurate inferences when compared to supervised counterparts, and that their structure leads to improved generalization. 1 Introduction The human percept of a visual scene is highly structured. Scenes naturally decompose into objects that are arranged in space, have visual and physical properties, and are in functional relationships with each other. Artificial systems that interpret images in this way are desirable, as accurate detection of objects and inference of their attributes is thought to be fundamental for many problems of interest. Consider a robot whose task is to clear a table after dinner. To plan its actions it will need to determine which objects are present, what classes they belong to and where each one is located on the table. 
The notion of using structured models for image understanding has a long history (e.g., ‘vision as inverse graphics’ [4]); however, in practice it has been difficult to define models that are: (a) expressive enough to capture the complexity of natural scenes, and (b) amenable to tractable inference. Meanwhile, advances in deep learning have shown how neural networks can be used to make sophisticated predictions from images using little interpretable structure (e.g., [10]). Here we explore the intersection of structured probabilistic models and deep networks. Prior work on deep generative methods (e.g., VAEs [9]) has been mostly unstructured; therefore, despite producing impressive samples and likelihood scores, their representations have lacked interpretable meaning. On the other hand, structured generative methods have largely been incompatible with deep learning, and therefore inference has been hard and slow (e.g., via MCMC). Our proposed framework achieves scene interpretation via learned, amortized inference, and it imposes structure on its representation through appropriate partly- or fully-specified generative models, rather than supervision from labels. It is important to stress that by training generative models, the aim is not primarily to obtain good reconstructions, but to produce good representations, in other words to understand scenes. We show experimentally that by incorporating the right kinds of structures, our models produce representations that are more useful for downstream tasks than those produced by VAEs or state-of-the-art generative models such as DRAW [3].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

The proposed framework crucially allows for reasoning about the complexity of a given scene (the dimensionality of its latent space). We demonstrate that via an Occam’s razor type effect, this makes it possible to discover the underlying causes of a dataset of images in an unsupervised manner.
For instance, the model structure will enforce that a scene is formed by a variable number of entities that appear in different locations, but the process of learning will identify what these scene elements look like and where they appear in any given image. The framework also combines high-dimensional distributed representations with directly interpretable latent variables (e.g., affine pose). This combination makes it easier to avoid the pitfalls of models that are too unconstrained (leading to data-hungry learning) or too rigid (leading to failure via mis-specification). The main contributions of the paper are as follows. First, in Sec. 2 we formalize a scheme for efficient variational inference in latent spaces of variable dimensionality. The key idea is to treat inference as an iterative process, implemented as a recurrent neural network that attends to one object at a time, and learns to use an appropriate number of inference steps for each image. We call the proposed framework Attend-Infer-Repeat (AIR). End-to-end learning is enabled by recent advances in amortized variational inference, e.g., combining gradient based optimization for continuous latent variables with black-box optimization for discrete ones. Second, in Sec. 3 we show that AIR allows for learning of generative models that decompose multi-object scenes into their underlying causes, e.g., the constituent objects, in an unsupervised manner. We demonstrate these capabilities on MNIST digits (Sec. 3.1), overlapping sprites and Omniglot glyphs (appendices H and G). We show that model structure can provide an important inductive bias that is not easily learned otherwise, leading to improved generalization. Finally, in Sec. 
3.2 we demonstrate how our inference framework can be used to perform inference for a 3D rendering engine with unprecedented speed, recovering the counts, identities and 3D poses of complex objects in scenes with significant occlusion in a single forward pass of a neural network, providing a scalable approach to ‘vision as inverse graphics’.

2 Approach

In this paper we take a Bayesian perspective of scene interpretation, namely that of treating this task as inference in a generative model. Thus, given an image x and a model p^x_θ(x|z) p^z_θ(z) parameterized by θ, we wish to recover the underlying scene description z by computing the posterior p(z|x) = p^x_θ(x|z) p^z_θ(z) / p(x). In this view, the prior p^z_θ(z) captures our assumptions about the underlying scene, and the likelihood p^x_θ(x|z) is our model of how a scene description is rendered to form an image. Both can take various forms depending on the problem at hand and we will describe particular instances in Sec. 3. Together, they define the language that we use to describe a scene.

Many real-world scenes naturally decompose into objects. We therefore make the modeling assumption that the scene description is structured into groups of variables z^i, where each group describes the attributes of one of the objects in the scene, e.g., its type, appearance, and pose. Since the number of objects will vary from scene to scene, we assume models of the following form:

p_θ(x) = Σ_{n=1}^{N} p_N(n) ∫ p^z_θ(z|n) p^x_θ(x|z) dz.   (1)

This can be interpreted as follows. We first sample the number of objects n from a suitable prior (for instance a Binomial distribution) with maximum value N. The latent, variable-length scene descriptor z = (z^1, z^2, …, z^n) is then sampled from a scene model z ∼ p^z_θ(·|n). Finally, we render the image according to x ∼ p^x_θ(·|z). Since the indexing of objects is arbitrary, p^z_θ(·) is exchangeable and p^x_θ(x|·) is permutation invariant, and therefore the posterior over z is exchangeable.
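The generative process of Eq. 1 can be sketched as ancestral sampling; the helper names below are our own placeholders, and the "renderer" is a toy stand-in for the model's likelihood:

```python
import random

def sample_scene(p_n, sample_object, render, rng):
    """Sketch of ancestral sampling from the model in Eq. (1):
    draw n ~ p_N, draw n i.i.d. object descriptors z^i, then render."""
    n = rng.choices(range(1, len(p_n) + 1), weights=p_n)[0]  # number of objects
    z = [sample_object(rng) for _ in range(n)]               # z = (z^1, ..., z^n)
    return n, z, render(z)

rng = random.Random(0)
n, z, img = sample_scene(
    p_n=[0.5, 0.5],  # prior over n in {1, 2}
    sample_object=lambda r: {"what": r.gauss(0, 1),
                             "where": [r.gauss(0, 1) for _ in range(3)]},
    render=lambda z: sum(abs(o["what"]) for o in z),  # toy scalar "image"
    rng=rng,
)
assert 1 <= n <= 2 and len(z) == n
```

In the actual model, `sample_object` would draw each z^i_what and z^i_where from the learned prior, and `render` would be the decoder/renderer defining p^x_θ(x|z).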
The prior and likelihood terms can take different forms. We consider two scenarios: For 2D scenes (Sec. 3.1), each object is characterized in terms of a learned distributed continuous representation for its shape, and a continuous 3-dimensional variable for its pose (position and scale). For 3D scenes (Sec. 3.2), objects are defined in terms of a categorical variable that characterizes their identity, e.g., sphere, cube or cylinder, as well as their positions and rotations. We refer to the two kinds of variables for each object i in both scenarios as z^i_what and z^i_where respectively, bearing in mind that their meaning (e.g., position and scale in pixel space vs. position and orientation in 3D space) and their data type (continuous vs. discrete) will vary. We further assume that the z^i are independent under the prior, i.e., p^z_θ(z|n) = ∏_{i=1}^{n} p^z_θ(z^i), but non-independent priors, such as a distribution over hierarchical scene graphs (e.g., [28]), can also be accommodated. Furthermore, while the number of objects is bounded as per Eq. 1, it is relatively straightforward to relax this assumption.

Figure 1: Left: A single random variable z produces the observation x (the image). The relationship between z and x is specified by a model. Inference is the task of computing likely values of z given x. Using an auto-encoding architecture, the model (red arrow) and its inference network (black arrow) can be trained end-to-end via gradient descent. Right: For most images of interest, multiple latent variables (e.g., multiple objects) give rise to the image. We propose an iterative, variable-length inference network (black arrows) that attends to one object at a time, and train it jointly with its model. The result is fast, feed-forward, interpretable scene understanding trained without supervision.

2.1 Inference

Despite their natural appeal, inference for most models in the form of Eq.
1 is intractable due to the dimensionality of the integral. We therefore employ an amortized variational approximation to the true posterior by learning a distribution q_φ(z, n|x) parameterized by φ that minimizes KL[q_φ(z, n|x) || p_θ(z, n|x)]. While such approximations have recently been used successfully in a variety of works [21, 9, 18], the specific form of our model poses two additional difficulties. Trans-dimensionality: As a challenging departure from classical latent space models, the size of the latent space n (i.e., the number of objects) is a random variable itself, which necessitates evaluating p_N(n|x) = ∫ p_θ(z, n|x) dz for all n = 1…N. Symmetry: There are strong symmetries that arise, for instance, from alternative assignments of objects appearing in an image x to latent variables z^i.

We address these challenges by formulating inference as an iterative process implemented as a recurrent neural network, which infers the attributes of one object at a time. The network is run for N steps and in each step explains one object in the scene, conditioned on the image and on its knowledge of previously explained objects (see Fig. 1). To simplify sequential reasoning about the number of objects, we parameterize n as a variable-length latent vector z_pres using a unary code: for a given value n, z_pres is the vector formed of n ones followed by one zero. Note that the two representations are equivalent. The posterior takes the following form:

q_φ(z, z_pres|x) = q_φ(z^{n+1}_pres = 0 | z^{1:n}, x) ∏_{i=1}^{n} q_φ(z^i, z^i_pres = 1 | x, z^{1:i−1}).   (2)

q_φ is implemented as a neural network that, in each step, outputs the parameters of the sampling distributions over the latent variables, e.g., the mean and standard deviation of a Gaussian distribution for continuous variables.
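The unary code for n described above can be sketched as follows (an illustrative snippet, not the paper's code):

```python
def n_to_zpres(n):
    """Unary code: n ones followed by a single terminating zero."""
    return [1] * n + [0]

def zpres_to_n(zpres):
    """Recover n as the position of the first zero."""
    return zpres.index(0)

assert n_to_zpres(3) == [1, 1, 1, 0]
assert zpres_to_n([1, 1, 0]) == 2
assert all(zpres_to_n(n_to_zpres(n)) == n for n in range(10))  # round trip
```

The round trip confirms the two representations are equivalent; the recurrent network effectively emits z_pres one bit per step, stopping at the first zero.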
z_pres can be understood as an interruption variable: at each time step, if the network outputs z_pres = 1, it describes at least one more object and proceeds, but if it outputs z_pres = 0, no more objects are described, and inference terminates for that particular datapoint. Note that the conditioning of z^i on x and z^{1:i−1} is critical to capture dependencies between the latent variables z^i in the posterior, e.g., to avoid explaining the same object twice. The specifics of the networks that achieve this depend on the particularities of the models and we will describe them in detail in Sec. 3.

2.2 Learning

We can jointly optimize the parameters θ of the model and φ of the inference network by maximizing the lower bound on the marginal likelihood of an image under the model,

log p_θ(x) ≥ L(θ, φ) = E_{q_φ}[ log ( p_θ(x, z, n) / q_φ(z, n|x) ) ] ,

with respect to θ and φ. L is called the negative free energy. We provide an outline of how to construct an estimator of the gradient of this quantity below; for more details see [23].

Computing a Monte Carlo estimate of ∂L/∂θ is relatively straightforward: given a sample from the approximate posterior (z, z_pres) ∼ q_φ(·|x) (i.e., when the latent variables have been ‘filled in’) we can readily compute ∂ log p_θ(x, z, n)/∂θ provided p is differentiable in θ.

Computing a Monte Carlo estimate of ∂L/∂φ is more involved. As discussed above, the RNN that implements q_φ produces the parameters of the sampling distributions for the scene variables z and presence variables z_pres. For a time step i, denote with ω^i all the parameters of the sampling distributions of variables in (z^i_pres, z^i). We parameterize the dependence of this distribution on z^{1:i−1} and x using a recurrent function R_φ(·) implemented as a neural network, such that (ω^i, h^i) = R_φ(x, h^{i−1}) with hidden variables h. The full gradient is obtained via the chain rule: ∂L/∂φ = Σ_i (∂L/∂ω^i)(∂ω^i/∂φ). Below we explain how to compute ∂L/∂ω^i.
We first rewrite our cost function as L(θ, φ) = E_{q_φ}[ℓ(θ, φ, z, n)], where ℓ(θ, φ, z, n) is defined as log( p_θ(x, z, n) / q_φ(z, n|x) ). Let z^i be an arbitrary element of the vector (z^i, z^i_pres) of type {what, where, pres}. How to proceed depends on whether z^i is continuous or discrete.

Continuous: Suppose z^i is a continuous variable. We use the path-wise estimator (also known as the ‘re-parameterization trick’, e.g., [9, 23]), which allows us to ‘back-propagate’ through the random variable z^i. For many continuous variables (in fact, without loss of generality), z^i can be sampled as h(ξ, ω^i), where h is a deterministic transformation function and ξ a random variable from a fixed noise distribution p(ξ), giving the gradient estimate ∂L/∂ω^i ≈ (∂ℓ(θ, φ, z, n)/∂z^i)(∂h/∂ω^i).

Discrete: For discrete scene variables (e.g., z^i_pres) we cannot compute the gradient ∂L/∂ω^i by back-propagation. Instead we use the likelihood ratio estimator [18, 23]. Given a posterior sample (z, n) ∼ q_φ(·|x) we can obtain a Monte Carlo estimate of the gradient: ∂L/∂ω^i ≈ (∂ log q(z^i|ω^i)/∂ω^i) ℓ(θ, φ, z, n). In the raw form presented here this gradient estimate is likely to have high variance. We reduce its variance using appropriately structured neural baselines [18] that are functions of the image and the latent variables produced so far.

3 Models and Experiments

We first apply AIR to a dataset of multiple MNIST digits, and show that it can reliably learn to detect and generate the constituent digits from scratch (Sec. 3.1). We show that this provides advantages over state-of-the-art generative models such as DRAW [3] in terms of computational effort, generalization to unseen datasets, and the usefulness of the inferred representations for downstream tasks. We also apply AIR to a setting where a 3D renderer is specified in advance.
We show that AIR learns to use the renderer to infer the counts, identities and poses of multiple objects in synthetic and real table-top scenes with unprecedented speed (Sec. 3.2 and appendix J).

Details of the AIR model and networks used in the 2D experiments are shown in Fig. 2. The generative model (Fig. 2, left) draws n ∼ Geom(ρ) digits {y^i_att}, scales and shifts them according to z^i_where ∼ N(0, Σ) using spatial transformers, and sums the results {y^i} to form the image. Each digit is obtained by first sampling a latent code z^i_what from the prior z^i_what ∼ N(0, 1) and propagating it through a decoder network. The learnable parameters of the generative model are the parameters of this decoder network. The AIR inference network (Fig. 2, middle) produces three sets of variables for each entity at every time-step: a 1-dimensional Bernoulli variable indicating the entity's presence, a C-dimensional distributed vector describing its class or appearance (z^i_what), and a 3-dimensional vector specifying the affine parameters of its position and scale (z^i_where). Fig. 2 (right) shows the interaction between the inference and generation networks at every time-step. The inferred pose is used to attend to a part of the image (using a spatial transformer) to produce x^i_att, which is processed to produce the inferred code z^i_code and the reconstruction of the contents of the attention window y^i_att. The same pose information is used by the generative model to transform y^i_att to obtain y^i. This contribution is only added to the canvas y if z^i_pres was inferred to be true.

For the dataset of MNIST digits, we also investigate the behavior of a variant, difference-AIR (DAIR), which employs a slightly different recurrent architecture for the inference network (see Fig. 8 in appendix). As opposed to AIR, which computes z^i via h^i and x, DAIR reconstructs at every time step i a partial reconstruction x^i of the data x, which is set as the mean of the distribution p^x_θ(x | z^1, z^2, . . . , z^{i−1}). We create an error canvas ∆x^i = x^i − x, and the DAIR inference equation R_φ is then specified as (ω^i, h^i) = R_φ(∆x^i, h^{i−1}).

Figure 2: AIR in practice: Left: The assumed generative model. Middle: AIR inference for this model. The contents of the grey box are input to the decoder. Right: Interaction between the inference and generation networks at every time-step. In our experiments the relationship between x^i_att and y^i_att is modeled by a VAE, however any generative model of patches could be used (even, e.g., DRAW).

Figure 3: Multi-MNIST learning: Left above: Images from the dataset. Left below: Reconstructions at different stages of training along with a visualization of the model's attention windows. The 1st, 2nd and 3rd time-steps are displayed using red, green and blue borders respectively. A video of this sequence is provided in the supplementary material. Above right: Count accuracy over time. The model detects the counts of digits accurately, despite having never been provided supervision. Chance accuracy is 25%. Below right: The learned scanning policy for 3 different runs of training (only differing in the random seed). We visualize empirical heatmaps of the attention windows' positions (red and green for the first and second time-steps respectively). As expected, the policy varies across runs; in each case it appears to be spatial, as opposed to identity- or size-based.

3.1 Multi-MNIST

We begin with a 50×50 dataset of multi-MNIST digits. Each image contains zero, one or two non-overlapping random MNIST digits with equal probability. The desired goal is to train a network that produces sensible explanations for each of the images.
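The generative process of Fig. 2 (left) can be sketched as follows. This is a minimal sketch under stated assumptions: the decode function and the integer-offset placement are hypothetical stand-ins for the decoder network and the spatial transformer, and z_where is drawn uniformly rather than from N(0, Σ).

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z_what, patch=12):
    """Hypothetical decoder: latent code -> image patch in [0, 1]."""
    g = np.exp(-np.linspace(-2, 2, patch) ** 2)
    return np.clip(np.outer(g, g) * (1 + 0.1 * z_what[0]), 0, 1)

def sample_image(rho=0.5, size=50, code_dim=20, patch=12):
    n = rng.geometric(rho) - 1                    # n ~ Geom(rho), allowing n = 0
    canvas = np.zeros((size, size))
    for _ in range(n):
        z_what = rng.standard_normal(code_dim)    # z_what ~ N(0, I)
        r, c = rng.integers(0, size - patch, 2)   # stand-in for z_where ~ N(0, Sigma)
        y_att = decode(z_what, patch)
        canvas[r:r + patch, c:c + patch] += y_att # place the patch and sum onto canvas
    return n, canvas

n, x = sample_image()
```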
We train AIR with N = 3 on 60,000 such images from scratch, i.e., without a curriculum or any form of supervision, by maximizing L with respect to the parameters of the inference network and the generative model. Upon completion of training we inspect the model's inferences (see Fig. 3, left). We draw the reader's attention to the following observations. First, the model identifies the number of digits correctly, due to the opposing pressures of (a) wanting to explain the scene, and (b) the cost that arises from instantiating an object under the prior. This is indicated by the number of attention windows in each image; we also plot the accuracy of count inference over the course of training (Fig. 3, above right). Second, it locates the digits accurately. Third, the recurrent network learns a suitable scanning policy to ensure that different time-steps account for different digits (Fig. 3, below right). Note that we did not have to specify any such policy in advance, nor did we have to build in a constraint to prevent two time-steps from explaining the same part of the image. Finally, the network learns not to use the second time-step when the image contains only a single digit, and to never use the third time-step (images contain a maximum of two digits). This allows the inference network to stop upon encountering the first z^i_pres equaling 0, leading to potential savings in computation during inference. A video showing real-time inference using AIR has been included in the supplementary material. We also perform experiments on Omniglot ([13], appendix G) to demonstrate AIR's ability to parse glyphs into elements resembling 'strokes', as well as a dataset of sprites where the scene's elements appear under significant overlap (appendix H). See appendices for details and results.

Figure 4: Strong generalization: Left: Reconstructions of images with 3 digits made by DAIR trained on 0, 1 or 2 digits, as well as a comparison with DRAW.
Right: Variational lower bound, and generalizing / interpolating count accuracy. DAIR out-performs both DRAW and AIR at this task.

3.1.1 Strong Generalization

Since the model learns the concept of a digit independently of the positions or number of times it appears in each image, one would hope that it would be able to generalize, e.g., by demonstrating an understanding of scenes that have structural differences to training scenes. We probe this behavior with the following scenarios: (a) Extrapolation: training on images each containing 0, 1 or 2 digits and then testing on images containing 3 digits, and (b) Interpolation: training on images containing 0, 1 or 3 digits and testing on images containing 2 digits. The result of this experiment is shown in Fig. 4. An AIR model trained on up to 2 digits is effectively unable to infer the correct count when presented with an image of 3 digits. We believe this to be caused by the LSTM, which learns during training never to expect more than 2 digits. AIR's generalization performance is improved somewhat when considering the interpolation task. DAIR by contrast generalizes well in both tasks (and finds interpolation to be slightly easier than extrapolation). A closely related baseline is the Deep Recurrent Attentive Writer (DRAW, [3]), which, like AIR, generates data sequentially. However, DRAW has a fixed and large number of steps (40 in our experiments). As a consequence its generative steps do not correspond to easily interpretable entities, and the number of steps cannot adapt to the complexity of the scene. We show DRAW's reconstructions in Fig. 4. Interestingly, DRAW learns to ignore precisely one digit in the image. See appendix for further details of these experiments.

3.1.2 Representational Power

Figure 5: Representational power: AIR achieves high accuracy using only a fraction of the labeled data. Left: summing two digits. Right: detecting if they appear in increasing order.
Despite producing comparable reconstructions, CAE and DRAW inferences are less interpretable than AIR's and therefore lead to poorer downstream performance.

A second motivation for the use of structured models is that their inferences about a scene provide useful representations for downstream tasks. We examine this ability by first training an AIR model on 0, 1 or 2 digits and then producing inferences for a separate collection of images that contains precisely 2 digits. We split this data into training and test sets and consider two tasks: (a) predicting the sum of the two digits (as was done in [1]), and (b) determining if the digits appear in ascending order. We compare with a CNN trained from the raw pixels, as well as interpretations produced by a convolutional autoencoder (CAE) and DRAW (Fig. 5). We optimize each model's hyper-parameters (e.g. depth and size) for maximal performance. AIR achieves high accuracy even when data is scarce, indicating the power of its disentangled, structured representation. See appendix for further details.

3.2 3D Scenes

The experiments above demonstrate learning of inference and generative networks in models where we impose structure in the form of a variable-sized representation and spatial attention mechanisms. We now consider an additional way of imparting knowledge to the system: we specify the generative model via a 3D renderer, i.e., we completely specify how any scene representation is transformed to produce the pixels in an image. The task is therefore to learn to infer the counts, identities and poses of several objects, given different images containing these objects and an implementation of a 3D renderer from which we can draw new samples. This formulation of computer vision is often called 'vision as inverse graphics' (see e.g., [4, 15, 7]).

Figure 6: 3D objects: Left: The task is to infer the identity and pose of a single 3D object.
(a) Images from the dataset. (b) Unsupervised AIR reconstructions. (c) Supervised reconstructions. Note poor performance on cubes due to their symmetry. (d) Reconstructions after direct gradient descent. This approach is less stable and much more susceptible to local minima. Right: AIR can learn to recover the counts, identities and poses of multiple objects in a 3D table-top scene. (e,g) Generated and real images. (f,h) AIR produces fast and accurate inferences, which we visualize using the renderer.

The primary challenge in this view of computer vision is that of inference. While it is relatively easy to specify high-quality models in the form of probabilistic renderers, posterior inference is either extremely expensive or prone to getting stuck in local minima (e.g., via optimization or MCMC). In addition, probabilistic renderers typically are not capable of providing gradients with respect to their inputs, and 3D scene representations often involve discrete variables, e.g., mesh identities. We address these challenges by using finite-differencing to obtain a gradient through the renderer, using the score function estimator to get gradients with respect to discrete variables, and using AIR inference to handle correlated posteriors and variable-length representations.

We demonstrate the capabilities of this approach by first considering scenes consisting of only one of three objects: a red cube, a blue sphere, and a textured cylinder (see Fig. 6a). Since the scenes only consist of single objects, the task is only to infer the identity (cube, sphere, cylinder) and pose (position and rotation) of the object present in the image. We train a single-step (N = 1) AIR inference network for this task. The network is only provided with unlabeled images and is trained to maximize the likelihood of those images under the model specified by the renderer. The quality of the inferred scene representations produced is visually inspected in Fig. 6b.
The network accurately and reliably infers the identity and pose of the object present in the scene. In contrast, an identical network trained to predict the ground-truth identity and pose values of the training data (in a similar style to [11]) has much more difficulty in accurately determining the cube's orientation (Fig. 6c). The supervised loss forces the network to predict the exact angle of rotation. However this is not identifiable from the image due to rotational symmetry, which leads to conditional probabilities that are multi-modal and difficult to represent using standard network architectures. We also compare with direct optimization of the likelihood from scratch for every test image (Fig. 6d), and observe that this method is slower, less stable and more susceptible to local minima. So not only does amortization reduce the cost of inference, but it also overcomes the pitfalls of independent gradient optimization.

We finally consider a more complex setup, where we infer the counts, identities and positions of a variable number of crockery items, as well as the camera position, in a table-top scene. This would be of critical importance to a robot, say, which is tasked with clearing the table. The goal is to learn to perform this task with as little supervision as possible, and indeed we observe that with AIR it is possible to do so with no supervision other than a specification of the renderer. We show reconstructions of AIR's inferences on generated data, as well as real images of a table with varying numbers of plates, in Fig. 6 and Fig. 7. AIR's inferences of counts, identities and positions are accurate for the most part. For transfer to real scenes we perform random color and size perturbations to rendered objects during training; however, we note that robust transfer remains a challenging problem in general.
We provide a quantitative comparison of AIR's inference robustness and accuracy on generated scenes with that of a fully supervised network in Fig. 7. We consider two scenarios: one where each object type only appears exactly once, and one where objects can repeat in the scene. A naive supervised setup struggles with object repetitions or when an arbitrary ordering of the objects is imposed by the labels; however, training is more straightforward when there are no repetitions. AIR achieves competitive reconstruction and counts despite the added difficulty of object repetitions.

Figure 7: 3D scenes details: Left: Ground-truth object and camera positions with inferred positions overlayed in red (note that the inferred cup is closely aligned with the ground-truth, and thus not clearly visible). We demonstrate fast inference of all relevant scene elements using the AIR framework. Middle: AIR produces significantly better reconstructions and count accuracies than a supervised method on data that contains repetitions, and is even competitive on simpler data. Right: Heatmap of object locations at each time-step (top). The learned policy appears to be more dependent on identity (bottom).

4 Related Work

Deep neural networks have had great success in learning to predict various quantities from images, e.g., object classes [10], camera positions [8] and actions [20]. These methods work best when large labeled datasets are available for training. At the other end of the spectrum, e.g., in 'vision as inverse graphics', only a generative model is specified in advance and prediction is treated as an inference problem, which is then solved using MCMC or message passing at test-time. These models range from highly specified [17, 16], to partially specified [28, 24, 25], to largely unspecified [22]. Inference is very challenging and almost always the bottleneck in model design. Several works exploit data-driven predictions to empower the 'vision as inverse graphics' paradigm [5, 7].
For instance, in PICTURE [11], the authors use a deep network to distill the results of slow MCMC, speeding up predictions at test-time. Variational auto-encoders [21, 9] and their discrete counterparts [18] made the important contribution of showing how the gradient computations for learning of amortized inference and generative models could be interleaved, allowing both to be learned simultaneously in an end-to-end fashion (see also [23]). Works like that of [12] aim to learn disentangled representations in an auto-encoding framework using special network structures and / or careful training schemes. It is also worth noting that attention mechanisms in neural networks have been studied in discriminative and generative settings, e.g., [19, 6, 3]. AIR draws upon, extends and links these ideas. By its nature AIR is also related to the following problems: counting [14, 27], pondering [2], and gradient estimation through renderers [15]. It is the combination of these elements that unlocks the full capabilities of the proposed approach. 5 Discussion In this paper our aim has been to learn unsupervised models that are good at scene understanding, in addition to scene reconstruction. We presented several principled models that learn to count, locate, classify and reconstruct the elements of a scene, and do so in a fraction of a second at test-time. The main ingredients are (a) building in meaning using appropriate structure, (b) amortized inference that is attentive, iterative and variable-length, and (c) end-to-end learning. We demonstrated that model structure can provide an important inductive bias that gives rise to interpretable representations that are not easily learned otherwise. We also showed that even for sophisticated models or renderers, fast inference is possible. 
We do not claim to have found an ideal model for all images; many challenges remain, e.g., the difficulty of working with the reconstruction loss and that of designing models rich enough to capture all natural factors of variability. Learning in AIR is most successful when the variance of the gradients is low and the likelihood is well suited to the data. It will be of interest to examine the scaling of variance with the number of objects and alternative likelihoods. It is straightforward to extend the framework to semi- or fully-supervised settings. Furthermore, the framework admits a plug-and-play approach where existing state-of-the-art detectors, classifiers and renderers are used as sub-components of an AIR inference network. We plan to investigate these lines of research in future work.

References

[1] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple Object Recognition with Visual Attention. In ICLR, 2015.
[2] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv:1603.08983, 2016.
[3] Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Rezende, and Daan Wierstra. DRAW: A Recurrent Neural Network For Image Generation. In ICML, 2015.
[4] Ulf Grenander. Pattern Synthesis: Lectures in Pattern Theory. 1976.
[5] Geoffrey E. Hinton, Peter Dayan, Brendan J. Frey, and Radford M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214), 1995.
[6] Max Jaderberg, Karen Simonyan, Andrew Zisserman, and Koray Kavukcuoglu. Spatial Transformer Networks. In NIPS 28, 2015.
[7] Varun Jampani, Sebastian Nowozin, Matthew Loper, and Peter V. Gehler. The Informed Sampler: A Discriminative Approach to Bayesian Inference in Generative Computer Vision Models. CVIU, 2015.
[8] Alex Kendall, Matthew Grimes, and Roberto Cipolla. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization. In ICCV, 2015.
[9] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013.
[10] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS 25, 2012.
[11] Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, and Vikash K. Mansinghka. Picture: A probabilistic programming language for scene perception. In CVPR, 2015.
[12] Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, and Josh Tenenbaum. Deep Convolutional Inverse Graphics Network. In NIPS 28, 2015.
[13] Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266), 2015.
[14] Victor Lempitsky and Andrew Zisserman. Learning To Count Objects in Images. In NIPS 23, 2010.
[15] Matthew M. Loper and Michael J. Black. OpenDR: An Approximate Differentiable Renderer. In ECCV, volume 8695, 2014.
[16] Vikash Mansinghka, Tejas Kulkarni, Yura Perov, and Josh Tenenbaum. Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs. In NIPS 26, 2013.
[17] Brian Milch, Bhaskara Marthi, Stuart Russell, David Sontag, Daniel L. Ong, and Andrey Kolobov. BLOG: Probabilistic Models with Unknown Objects. In International Joint Conference on Artificial Intelligence, pages 1352–1359, 2005.
[18] Andriy Mnih and Karol Gregor. Neural Variational Inference and Learning in Belief Networks. In ICML, 2014.
[19] Volodymyr Mnih, Nicolas Heess, Alex Graves, and Koray Kavukcuoglu. Recurrent Models of Visual Attention. In NIPS 27, 2014.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518, 2015.
[21] Danilo J. Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic Backpropagation and Approximate Inference in Deep Generative Models. In ICML, 2014.
[22] Ruslan Salakhutdinov and Geoffrey Hinton. Deep Boltzmann Machines. In AISTATS, 2009.
[23] John Schulman, Nicolas Heess, Theophane Weber, and Pieter Abbeel. Gradient Estimation Using Stochastic Computation Graphs. In NIPS 28, 2015.
[24] Yichuan Tang, Ruslan Salakhutdinov, and Geoffrey Hinton. Tensor Analyzers. In ICML, 2013.
[25] Yichuan Tang, Nitish Srivastava, and Ruslan Salakhutdinov. Learning Generative Models With Visual Attention. In NIPS 27, 2014.
[26] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In IROS, 2012.
[27] Jianming Zhang, Shugao Ma, Mehrnoosh Sameki, Stan Sclaroff, Margrit Betke, Zhe Lin, Xiaohui Shen, Brian Price, and Radomír Měch. Salient Object Subitizing. In CVPR, 2015.
[28] Song-Chun Zhu and David Mumford. A Stochastic Grammar of Images. Foundations and Trends in Computer Graphics and Vision, 2(4), 2006.
Supervised Learning with Tensor Networks

E. M. Stoudenmire
Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada

David J. Schwab
Department of Physics, Northwestern University, Evanston, IL

Abstract

Tensor networks are approximations of high-order tensors which are efficient to work with and have been very successful for physics and mathematics applications. We demonstrate how algorithms for optimizing tensor networks can be adapted to supervised learning tasks by using matrix product states (tensor trains) to parameterize non-linear kernel learning models. For the MNIST data set we obtain less than 1% test set classification error. We discuss an interpretation of the additional structure imparted by the tensor network to the learned model.

1 Introduction

Recently there has been growing appreciation for tensor methods in machine learning. Tensor decompositions can solve non-convex optimization problems [1, 2] and be used for other important tasks such as extracting features from input data and parameterizing neural nets [3, 4, 5]. Tensor methods have also become prominent in the field of physics, especially the use of tensor networks which accurately capture very high-order tensors while avoiding the curse of dimensionality through a particular geometry of low-order contracted tensors [6]. The most successful use of tensor networks in physics has been to approximate exponentially large vectors arising in quantum mechanics [7, 8].

Another context where very large vectors arise is non-linear kernel learning, where input vectors x are mapped into a higher dimensional space via a feature map Φ(x) before being classified by a decision function

f(x) = W · Φ(x).   (1)

The feature vector Φ(x) and weight vector W can be exponentially large or even infinite. One approach to deal with such large vectors is the well-known kernel trick, which only requires working with scalar products of feature vectors [9].
In what follows we propose a rather different approach. For certain learning tasks and a specific class of feature map Φ, we find the optimal weight vector W can be approximated as a tensor network, that is, as a contracted sequence of low-order tensors. Representing W as a tensor network and optimizing it directly (without passing to the dual representation) has many interesting consequences. Training the model scales only linearly in the training set size; the evaluation cost for a test input is independent of training set size. Tensor networks are also adaptive: dimensions of tensor indices internal to the network grow and shrink during training to concentrate resources on the particular correlations within the data most useful for learning. The tensor network form of W presents opportunities to extract information hidden within the trained model and to accelerate training by optimizing different internal tensors in parallel [10]. Finally, the tensor network form is an additional type of regularization beyond the choice of feature map, and could have interesting consequences for generalization.

One of the best understood types of tensor networks is the matrix product state (MPS) [11, 8], also known as the tensor train decomposition [12]. Though MPS are best at capturing one-dimensional correlations, they are powerful enough to be applied to distributions with higher-dimensional correlations as well. MPS have been very useful for studying quantum systems, and have recently been investigated for machine learning applications such as learning features by decomposing tensor representations of data [4] and compressing the weight layers of neural networks [5].

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: The matrix product state (MPS) decomposition, also known as a tensor train. (Lines represent tensor indices and connecting two lines implies summation.)
While applications of MPS to machine learning have been a success, one aim of the present work is to have tensor networks play a more central role in developing learning models; another is to more easily incorporate powerful algorithms and tensor networks which generalize MPS developed by the physics community for studying higher dimensional and critical systems [13, 14, 15]. But in what follows, we only consider the case of MPS tensor networks as a proof of principle. The MPS decomposition is an approximation of an order-N tensor by a contracted chain of N lowerorder tensors shown in Fig. 1. (Throughout we will use tensor diagram notation: shapes represent tensors and lines emanating from them are tensor indices; connecting two lines implies contraction of a pair of indices. We emphasize that tensor diagrams are not merely schematic, but have a rigorous algorithmic interpretation. For a helpful review of this notation, see Cichocki [16].) Representing the weights W of Eq. (1) as an MPS allows one to efficiently optimize these weights and adaptively change their number by varying W locally a few tensors at a time, in close analogy to the density matrix renormalization group (DMRG) algorithm used in physics [17, 8]. Similar alternating least squares methods for tensor trains have been explored more recently in applied mathematics [18]. This paper is organized as follows: first we propose our general approach and describe an algorithm for optimizing the weight vector W in MPS form. Then we test our approach on the MNIST handwritten digit set and find very good performance for remarkably small MPS bond dimensions. Finally, we discuss the structure of the functions realized by our proposed models. For researchers interested in reproducing our results, we have made our codes publicly available at: https://github.com/emstoudenmire/TNML. The codes are based on the ITensor library [19]. 
2 Encoding Input Data

Tensor networks in physics are typically used in a context where combining N independent systems corresponds to taking a tensor product of a vector describing each system. With the goal of applying similar tensor networks to machine learning, we choose a feature map of the form

Φ^{s_1 s_2 ··· s_N}(x) = φ^{s_1}(x_1) ⊗ φ^{s_2}(x_2) ⊗ ··· ⊗ φ^{s_N}(x_N).   (2)

The tensor Φ^{s_1 s_2 ··· s_N} is the tensor product of a local feature map φ^{s_j}(x_j) applied to each input component x_j of the N-dimensional vector x (where j = 1, 2, . . . , N). The indices s_j run from 1 to d, where d is known as the local dimension and is a hyper-parameter defining the classification model. Though one could use a different local feature map for each input component x_j, we will only consider the case of homogeneous inputs with the same local map applied to each x_j. Thus each x_j is mapped to a d-dimensional vector, and the full feature map Φ(x) can be viewed as a vector in a d^N-dimensional space or as an order-N tensor. The tensor diagram for Φ(x) is shown in Fig. 2. This type of tensor is said to be rank-1 since it is manifestly the product of N order-1 tensors.

For a concrete example of this type of feature map, which we will use later, consider inputs which are grayscale images with N pixels, where each pixel value ranges from 0.0 for white to 1.0 for black. If the grayscale value of pixel number j is x_j ∈ [0, 1], a simple choice for the local map φ^{s_j}(x_j) is

φ^{s_j}(x_j) = [cos((π/2) x_j), sin((π/2) x_j)]   (3)

and is illustrated in Fig. 3. The full image is represented as a tensor product of these local vectors. The above feature map is somewhat ad hoc, and is motivated by "spin" vectors encountered in quantum systems. More research is needed to understand the best choices for φ^s(x), but the most crucial property seems to be that φ(x) · φ(x′) is a smooth and slowly varying function of x and x′, and induces a distance metric in feature space that tends to cluster similar images together.
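The local map of Eq. (3) and the resulting product kernel can be written out directly. This is a small sketch: the rank-1 tensor product is kept implicit as a list of local 2-vectors rather than materialized as a d^N array.

```python
import numpy as np

def local_map(x):
    """Eq. (3): map a pixel value x in [0, 1] to a unit-norm 2-vector."""
    return np.array([np.cos(np.pi / 2 * x), np.sin(np.pi / 2 * x)])

def feature_map(pixels):
    """Phi(x): the rank-1 order-N tensor, stored as its N local factors."""
    return [local_map(x) for x in pixels]

def kernel(pixels_a, pixels_b):
    """The induced kernel is the product of N local kernels."""
    return np.prod([pa @ pb for pa, pb in
                    zip(feature_map(pixels_a), feature_map(pixels_b))])

x = np.array([0.0, 0.5, 1.0])
print(kernel(x, x))   # each local vector has unit norm, so K(x, x) = 1.0
```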
Figure 2: Input data is mapped to a normalized order-N tensor with a rank-1 product structure.

Figure 3: For the case of a grayscale image and d = 2, each pixel value is mapped to a normalized two-component vector. The full image is mapped to the tensor product of all the local pixel vectors as shown in Fig. 2.

The feature map Eq. (2) defines a kernel which is the product of N local kernels, one for each component x_j of the input data. Kernels of this type have been discussed previously in Vapnik [20, p. 193] and have been argued by Waegeman et al. [21] to be useful for data where no relationship is assumed between different components of the input vector prior to learning.

3 Classification Model

In what follows we are interested in classifying data with pre-assigned hidden labels, for which we choose a "one-versus-all" strategy, which we take to mean optimizing a set of functions indexed by a label ℓ,

f^ℓ(x) = W^ℓ · Φ(x)   (4)

and classifying an input x by choosing the label ℓ for which |f^ℓ(x)| is largest. Since we apply the same feature map Φ to all input data, the only quantity that depends on the label ℓ is the weight vector W^ℓ. Though one can view W^ℓ as a collection of vectors labeled by ℓ, we will prefer to view W^ℓ as an order N + 1 tensor where ℓ is a tensor index and f^ℓ(x) is a function mapping inputs to the space of labels. The tensor diagram for evaluating f^ℓ(x) for a particular input is depicted in Fig. 4.

Because the weight tensor W^ℓ_{s_1 s_2 ··· s_N} has N_L · d^N components, where N_L is the number of labels, we need a way to regularize and optimize this tensor efficiently. The strategy we will use is to represent W^ℓ as a tensor network, namely as an MPS, which has the key advantage that methods for manipulating and optimizing it are well understood and highly efficient.

Figure 4: The overlap of the weight tensor W^ℓ with a specific input vector Φ(x) defines the decision function f^ℓ(x). The label ℓ for which f^ℓ(x) has maximum magnitude is the predicted label for x.
An MPS decomposition of the weight tensor W^ℓ has the form

W^ℓ_{s1 s2 ··· sN} = Σ_{α} A^{α1}_{s1} A^{α1 α2}_{s2} ··· A^{ℓ; αj αj+1}_{sj} ··· A^{αN−1}_{sN}   (5)

and is illustrated in Fig. 5.

Figure 4: The overlap of the weight tensor W^ℓ with a specific input vector Φ(x) defines the decision function f^ℓ(x). The label ℓ for which f^ℓ(x) has maximum magnitude is the predicted label for x.

Figure 5: Approximation of the weight tensor W^ℓ by a matrix product state. The label index ℓ is placed arbitrarily on one of the N tensors but can be moved to other locations.

Each A tensor has d·m² elements, which are the latent variables parameterizing the approximation of W; the A tensors are in general not unique and can be constrained to bestow nice properties on the MPS, like making the A tensors partial isometries. The dimensions of each internal index αj of an MPS are known as the bond dimensions and are the (hyper-)parameters controlling the complexity of the MPS approximation. For sufficiently large bond dimensions an MPS can represent any tensor [22].

The name matrix product state refers to the fact that any specific component of the full tensor W^ℓ_{s1 s2 ··· sN} can be recovered efficiently by summing over the {αj} indices from left to right via a sequence of matrix products (the term "state" refers to the original use of MPS to describe quantum states of matter).

In the above decomposition Eq. (5), the label index ℓ was arbitrarily placed on the tensor at some position j, but this index can be moved to any other tensor of the MPS without changing the overall W^ℓ tensor it represents. To do so, one contracts the tensor at position j with one of its neighbors, then decomposes this larger tensor using a singular value decomposition such that ℓ now belongs to the neighboring tensor; see Fig. 7(a).
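To make the "matrix product" interpretation concrete, here is a minimal NumPy sketch (shapes, names, and the index ordering are our own choices) that builds a random MPS with the label index on one site and recovers a single component of W^ℓ by a left-to-right chain of matrix products:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, N, NL, pos = 2, 3, 4, 2, 1   # local dim, bond dim, sites, labels, label site

# Site tensors: A[j][s] is an (m_left x m_right) matrix; the boundaries
# use bond dimension 1, and the tensor at site `pos` carries the label index.
As = []
for j in range(N):
    ml = 1 if j == 0 else m
    mr = 1 if j == N - 1 else m
    shape = (d, NL, ml, mr) if j == pos else (d, ml, mr)
    As.append(rng.standard_normal(shape))

def mps_component(s, l):
    """W^l_{s1...sN} recovered via a sequence of matrix products (cf. Eq. (5))."""
    vec = np.ones(1)
    for j, A in enumerate(As):
        mat = A[s[j], l] if j == pos else A[s[j]]
        vec = vec @ mat
    return vec.item()
```

Each component costs only O(N m²) operations, even though the full tensor W^ℓ has N_L · d^N entries.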
4 "Sweeping" Optimization Algorithm

Inspired by the very successful DMRG algorithm developed for physics applications [17, 8], here we propose a similar algorithm which "sweeps" back and forth along an MPS, iteratively minimizing the cost function defining the classification task.

To describe the algorithm in concrete terms, we wish to optimize the quadratic cost

C = (1/2) Σ_{n=1}^{N_T} Σ_ℓ ( f^ℓ(x_n) − y^ℓ_n )²

where n runs over the N_T training inputs and y^ℓ_n is the vector of desired outputs for input n. If the correct label of x_n is L_n, then y^{L_n}_n = 1 and y^ℓ_n = 0 for all other labels ℓ (i.e. a one-hot encoding).

Our strategy for minimizing this cost function will be to vary only two neighboring MPS tensors at a time within the approximation Eq. (5). We could conceivably just vary one at a time, but varying two tensors makes it simple to adaptively change the MPS bond dimension.

Say we want to improve the tensors at sites j and j + 1. Assume we have moved the label index ℓ to the MPS tensor at site j. First we combine the MPS tensors A^ℓ_{sj} and A_{sj+1} into a single "bond tensor" B^{αj−1 ℓ αj+1}_{sj sj+1} by contracting over the index αj as shown in Fig. 6(a).

Next we compute the derivative of the cost function C with respect to the bond tensor B^ℓ in order to update it using a gradient descent step. Because the rest of the MPS tensors are kept fixed, to compute the gradient it suffices to feed, or project, each input x_n through the fixed "wings" of the MPS as shown on the left-hand side of Fig. 6(b) (connected lines in the diagram indicate sums over pairs of indices). The result is a projected, four-index version of the input Φ̃_n shown on the right-hand side of Fig. 6(b). The current decision function can be efficiently computed from this projected input Φ̃_n and the current bond tensor B^ℓ as

f^ℓ(x_n) = Σ_{αj−1 αj+1} Σ_{sj sj+1} B^{αj−1 ℓ αj+1}_{sj sj+1} (Φ̃_n)^{sj sj+1}_{αj−1 αj+1}   (6)

or as illustrated in Fig. 6(c).
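With the wings of the MPS held fixed, the decision function of Eq. (6) is just a full contraction of the bond tensor with the four-index projected input. A one-line einsum sketch (the shapes and index ordering below are an illustrative convention of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
ml, d, NL, mr = 3, 2, 10, 3        # left bond, local dim, labels, right bond

B = rng.standard_normal((ml, d, d, NL, mr))     # bond tensor B at sites j, j+1
Phi_proj = rng.standard_normal((ml, d, d, mr))  # projected input for one sample

# f^l(x_n): contract all shared indices, leaving only the label index l
f = np.einsum('asrlb,asrb->l', B, Phi_proj)
```

The cost of this contraction is independent of N, which is what makes the local update step cheap.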
The gradient update to the tensor B^ℓ can be computed as

ΔB^ℓ = −∂C/∂B^ℓ = Σ_{n=1}^{N_T} ( y^ℓ_n − f^ℓ(x_n) ) Φ̃_n .   (7)

Figure 6: Steps leading to computing the gradient of the bond tensor B^ℓ at bond j: (a) forming the bond tensor; (b) projecting a training input into the "MPS basis" at bond j; (c) computing the decision function in terms of a projected input; (d) the gradient correction to B^ℓ. The dark shaded circular tensors in step (b) are "effective features" formed from m different linear combinations of many original features.

The tensor diagram for ΔB^ℓ is shown in Fig. 6(d). Having computed the gradient, we use it to make a small update to B^ℓ, replacing it with B^ℓ + η ΔB^ℓ for some small η.

Having obtained our improved B^ℓ, we must decompose it back into separate MPS tensors to maintain efficiency and apply our algorithm to the next bond. Assume the next bond we want to optimize is the one to the right (bond j + 1). Then we can compute a singular value decomposition (SVD) of B^ℓ, treating it as a matrix with a collective row index (αj−1, sj) and collective column index (ℓ, αj+1, sj+1) as shown in Fig. 7(a). Computing the SVD this way restores the MPS form, but with the ℓ index moved to the tensor on site j + 1. If the SVD of B^ℓ is given by

B^{αj−1 ℓ αj+1}_{sj sj+1} = Σ_{α′j αj} U^{αj−1}_{sj α′j} S^{α′j}_{αj} V^{αj ℓ αj+1}_{sj+1} ,   (8)

then to proceed to the next step we define the new MPS tensor at site j to be A′_{sj} = U_{sj} and the new tensor at site j + 1 to be A′^ℓ_{sj+1} = S V^ℓ_{sj+1}, where a matrix multiplication over the suppressed α indices is implied. Crucially at this point, only the m largest singular values in S are kept and the rest are truncated (along with the corresponding columns of U and V†) in order to control the computational cost of the algorithm.
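The split-and-truncate step of Eq. (8) can be sketched with a plain `numpy.linalg.svd` call. The reshaping convention below (which indices form the row and which the column) is one possible choice of ours, matching the case where the label moves to the right:

```python
import numpy as np

def split_bond_tensor(B, m_max):
    """Split an improved bond tensor B back into two site tensors,
    keeping at most m_max singular values (cf. Eq. (8)).

    B has shape (ml, d, d, NL, mr): left bond, two physical indices,
    label, right bond. The row index is (ml, d) and the column index
    is (d, NL, mr), so the label ends up on the right-hand tensor."""
    ml, d, d2, NL, mr = B.shape
    U, S, Vh = np.linalg.svd(B.reshape(ml * d, d2 * NL * mr),
                             full_matrices=False)
    k = min(m_max, len(S))
    A_left = U[:, :k].reshape(ml, d, k)                  # isometric site tensor
    A_right = (np.diag(S[:k]) @ Vh[:k]).reshape(k, d2, NL, mr)
    return A_left, A_right
```

Keeping only the k largest singular values gives the optimal rank-k approximation of B in the Frobenius norm, which is the sense in which the truncation is locally optimal.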
Such a truncation is guaranteed to produce an optimal approximation of the tensor B^ℓ (it minimizes the norm of the difference before and after truncation); furthermore, if all of the MPS tensors to the left and right of B^ℓ are formed from (possibly truncated) unitary matrices similar to the definition of A′_{sj} above, then the optimality of the truncation of B^ℓ applies globally to the entire MPS as well. For further background reading on these technical aspects of MPS, see Refs. [8] and [16].

Finally, when proceeding to the next bond, it would be inefficient to fully project each training input over again into the configuration in Fig. 6(b). Instead it is only necessary to advance the projection by one site using the MPS tensor set from a unitary matrix after the SVD as shown in Fig. 7(b). This allows the cost of each local step of the algorithm to remain independent of the size of the input space, making the total algorithm scale only linearly with input space size (i.e. the number of components of an input vector x).

The above algorithm highlights a key advantage of MPS and tensor networks relevant to machine learning applications. Following the SVD of the improved bond tensor B′^ℓ, the dimension of the new MPS bond can be chosen adaptively based on the number of large singular values encountered in the SVD (defined by a threshold chosen in advance). Thus the MPS form of W^ℓ can be compressed as much as possible, and by different amounts on each bond, while still ensuring an accurate approximation of the optimal decision function.

Figure 7: Restoration (a) of MPS form, and (b) advancing a projected training input before optimizing the tensors at the next bond.
In diagram (a), if the label index ℓ was on the site-j tensor before forming B^ℓ, then the operation shown moves the label to site j + 1.

The scaling of the above algorithm is d³ m³ N N_L N_T, where recall m is the typical MPS bond dimension; N the number of components of input vectors x; N_L the number of labels; and N_T the size of the training data set. Thus the algorithm scales linearly in the training set size: a major improvement over typical kernel-trick methods, which scale at least as N_T² without specialized techniques [23]. This scaling assumes that the MPS bond dimension m needed is independent of N_T, which should be satisfied once N_T is a large, representative sample.

In practice, the training cost is dominated by the large size of the training set N_T, so it would be very desirable to reduce this cost. One solution could be to use stochastic gradient descent, but our experiments at blending this approach with the MPS sweeping algorithm did not match the accuracy of using the full, or batch, gradient. Mixing stochastic gradient with MPS sweeping thus appears to be non-trivial but is a promising direction for further research.

5 MNIST Handwritten Digit Test

To test the tensor network approach on a realistic task, we used the MNIST data set [24]. Each image was scaled down from 28 × 28 to 14 × 14 by averaging clusters of four pixels; otherwise we performed no further modifications to the training or test sets. Working with smaller images reduced the time needed for training, with the tradeoff of having less information available for learning.

When approximating the weight tensor as an MPS, one must choose a one-dimensional ordering of the local indices s1, s2, ..., sN. We chose a "zig-zag" ordering, meaning the first row of pixels is mapped to the first 14 external MPS indices; the second row to the next 14 MPS indices; etc. We then mapped each grayscale image x to a tensor Φ(x) using the local map Eq. (3).
Using the sweeping algorithm in Section 4 to optimize the weights, we found the algorithm quickly converged after a few passes, or sweeps, over the MPS. Typically five or fewer sweeps were needed to see good convergence, with test error rates changing only hundredths of a percent thereafter.

Test error rates also decreased rapidly with the maximum MPS bond dimension m. For m = 10 we found both a training and test error of about 5%; for m = 20 the error dropped to only 2%. The largest bond dimension we tried was m = 120, where after three sweeps we obtained a test error of 0.97%; the corresponding training set error was 0.05%. MPS bond dimensions in physics applications can reach many hundreds or even thousands, so it is remarkable to see such small classification errors for only m = 120.

6 Interpreting Tensor Network Models

A natural question is which set of functions of the form f^ℓ(x) = W^ℓ · Φ(x) can be realized when using a tensor-product feature map Φ(x) of the form Eq. (2) and a tensor-network decomposition of W^ℓ. As we will argue, the possible set of functions is quite general, but taking the tensor network structure into account provides additional insights, such as determining which features the model actually uses to perform classification.

Figure 8: (a) Decomposition of W^ℓ as an MPS with a central tensor and orthogonal site tensors. (b) Orthogonality conditions for U and V type site tensors. (c) Transformation defining a reduced feature map Φ̃(x).

6.1 Representational Power

To simplify the question of which decision functions can be realized for a tensor-product feature map of the form Eq. (2), let us fix ℓ to a single label and omit it from the notation. We will also temporarily consider W to be a completely general order-N tensor with no tensor network constraint. Then f(x) is a function of the form

f(x) = Σ_{s} W_{s1 s2 ··· sN} φ^{s1}(x1) ⊗ φ^{s2}(x2) ⊗ ··· ⊗ φ^{sN}(xN) .   (9)

If the functions {φ^s(x)}, s = 1, 2, ..., d form a basis for a Hilbert space of functions over x ∈ [0, 1], then the tensor product basis φ^{s1}(x1) ⊗ φ^{s2}(x2) ⊗ ··· ⊗ φ^{sN}(xN) forms a basis for a Hilbert space of functions over x ∈ [0, 1]^N. Moreover, in the limit that the basis {φ^s(x)} becomes complete, the tensor product basis would also be complete and f(x) could be any square-integrable function; however, practically reaching this limit would eventually require prohibitively large tensor dimensions.

6.2 Implicit Feature Selection

Of course we have not been considering an arbitrary weight tensor W^ℓ but instead approximating the weight tensor as an MPS tensor network. The MPS form implies that the decision function f^ℓ(x) has interesting additional structure. One way to analyze this structure is to separate the MPS into a central tensor, or core tensor, C^{αi ℓ αi+1} on some bond i and constrain all MPS site tensors to be left orthogonal for sites j ≤ i or right orthogonal for sites j ≥ i. This means W^ℓ has the decomposition

W^ℓ_{s1 s2 ··· sN} = Σ_{α} U^{α1}_{s1} ··· U^{αi}_{αi−1 si} C^ℓ_{αi αi+1} V^{αi+1}_{si+1 αi+2} ··· V^{αN−1}_{sN}   (10)

as illustrated in Fig. 8(a). To say the U and V tensors are left or right orthogonal means that when viewed as matrices U_{(αj−1 sj), αj} and V_{αj−1, (sj αj)}, these tensors have the property U†U = I and V V† = I, where I is the identity; these orthogonality conditions can be understood more clearly in terms of the diagrams in Fig. 8(b). Any MPS can be brought into the form Eq. (10) through an efficient sequence of tensor contractions and SVD operations similar to the steps in Fig. 7(a).

The form in Eq. (10) suggests an interpretation where the decision function f^ℓ(x) acts in three stages. First, an input x is mapped into the d^N-dimensional feature space defined by Φ(x), which is exponentially larger than the dimension N of the input space.
Next, the feature vector Φ is mapped into a much smaller m²-dimensional space by contraction with all the U and V site tensors of the MPS. This second step defines a new feature map Φ̃(x) with m² components, as illustrated in Fig. 8(c). Finally, f^ℓ(x) is computed by contracting Φ̃(x) with C^ℓ.

To justify calling Φ̃(x) a feature map, it follows from the left- and right-orthogonality conditions of the U and V tensors of the MPS Eq. (10) that the indices αi and αi+1 of the core tensor C label an orthonormal basis for a subspace of the original feature space. The vector Φ̃(x) is the projection of Φ(x) into this subspace.

The above interpretation implies that training an MPS model uncovers a relatively small set of important features and simultaneously trains a decision function using only these reduced features. The feature selection step occurs when computing the SVD in Eq. (8), where any basis elements αj which do not contribute meaningfully to the optimal bond tensor are discarded. (In our MNIST experiment the first and last tensors of the MPS completely factorized during training, implying they were not useful for classification, as the pixels at the corners of each image were always white.) Such a picture is roughly similar to popular interpretations of simultaneously training the hidden and output layers of shallow neural network models [25]. (MPS were first proposed for learning features in Bengua et al. [4], but with a different, lower-dimensional data representation than what is used here.)

7 Discussion

We have introduced a framework for applying quantum-inspired tensor networks to supervised learning tasks.
While using an MPS ansatz for the model parameters worked well even for the two-dimensional data in our MNIST experiment, other tensor networks such as PEPS [6], which are explicitly designed for two-dimensional systems, or MERA tensor networks [15], which have a multi-scale structure and can capture power-law correlations, may be more suitable and offer superior performance. Much work remains to determine the best tensor network for a given domain.

There is also much room to improve the optimization algorithm by incorporating standard techniques such as mini-batches, momentum, or adaptive learning rates. It would be especially interesting to investigate unsupervised techniques for initializing the tensor network. Additionally, while the tensor network parameterization of a model clearly regularizes it in the sense of reducing the number of parameters, it would be helpful to understand the consequences of this regularization for specific learning tasks. It could also be fruitful to include standard regularizations of the parameters of the tensor network, such as weight decay or L1 penalties. We were surprised to find good generalization without using explicit parameter regularization.

We anticipate that models incorporating tensor networks will continue to be successful for quite a large variety of learning tasks because of their treatment of high-order correlations between features and their ability to be adaptively optimized. With the additional opportunities they present for interpretation of trained models due to the internal, linear tensor network structure, we believe there are many promising research directions for tensor network models.

Note: while we were preparing our final manuscript, Novikov et al. [26] published a related framework for using MPS (tensor trains) to parameterize supervised learning models.

References

[1] Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models.
Journal of Machine Learning Research, 15:2773–2832, 2014.

[2] Animashree Anandkumar, Rong Ge, Daniel Hsu, and Sham M. Kakade. A tensor approach to learning mixed membership community models. J. Mach. Learn. Res., 15(1):2239–2312, January 2014. ISSN 1532-4435.

[3] Anh Huy Phan and Andrzej Cichocki. Tensor decompositions for feature extraction and classification of high dimensional datasets. Nonlinear Theory and Its Applications, IEICE, 1(1):37–68, 2010.

[4] J. A. Bengua, H. N. Phien, and H. D. Tuan. Optimal feature extraction and classification of tensors via matrix product state decomposition. In 2015 IEEE Intl. Congress on Big Data (BigData Congress), pages 669–672, June 2015.

[5] Alexander Novikov, Dmitry Podoprikhin, Anton Osokin, and Dmitry Vetrov. Tensorizing neural networks. arxiv:1509.06569, 2015.

[6] Glen Evenbly and Guifré Vidal. Tensor network states and geometry. Journal of Statistical Physics, 145:891–918, 2011.

[7] Jacob C. Bridgeman and Christopher T. Chubb. Hand-waving and interpretive dance: An introductory course on tensor networks. arxiv:1603.03039, 2016.

[8] U. Schollwöck. The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326(1):96–192, 2011.

[9] K. R. Müller, S. Mika, G. Rätsch, K. Tsuda, and B. Schölkopf. An introduction to kernel-based learning algorithms. IEEE Transactions on Neural Networks, 12(2):181–201, Mar 2001.

[10] E. M. Stoudenmire and Steven R. White. Real-space parallel density matrix renormalization group. Phys. Rev. B, 87:155137, Apr 2013.

[11] Stellan Östlund and Stefan Rommer. Thermodynamic limit of density matrix renormalization. Phys. Rev. Lett., 75(19):3537–3540, Nov 1995.

[12] I. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295–2317, 2011.

[13] F. Verstraete and J. I. Cirac. Renormalization algorithms for quantum many-body systems in two and higher dimensions. cond-mat/0407066, 2004.

[14] Guifré Vidal. Entanglement renormalization. Phys.
Rev. Lett., 99(22):220405, Nov 2007.

[15] Glen Evenbly and Guifré Vidal. Algorithms for entanglement renormalization. Phys. Rev. B, 79:144108, Apr 2009.

[16] Andrzej Cichocki. Tensor networks for big data analytics and large-scale optimization problems. arxiv:1407.3124, 2014.

[17] Steven R. White. Density matrix formulation for quantum renormalization groups. Phys. Rev. Lett., 69(19):2863–2866, 1992.

[18] Sebastian Holtz, Thorsten Rohwedder, and Reinhold Schneider. The alternating linear scheme for tensor optimization in the tensor train format. SIAM Journal on Scientific Computing, 34(2):A683–A713, 2012.

[19] ITensor Library (version 2.0.11). http://itensor.org/.

[20] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag New York, 2000.

[21] W. Waegeman, T. Pahikkala, A. Airola, T. Salakoski, M. Stock, and B. De Baets. A kernel-based framework for learning graded relations from data. Fuzzy Systems, IEEE Transactions on, 20(6):1090–1101, Dec 2012.

[22] F. Verstraete, D. Porras, and J. I. Cirac. Density matrix renormalization group and periodic boundary conditions: A quantum information perspective. Phys. Rev. Lett., 93(22):227205, Nov 2004.

[23] N. Cesa-Bianchi, Y. Mansour, and O. Shamir. On the complexity of learning with kernels. Proceedings of The 28th Conference on Learning Theory, pages 297–325, 2015.

[24] Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. MNIST handwritten digit database. http://yann.lecun.com/exdb/mnist/.

[25] Michael Nielsen. Neural Networks and Deep Learning. Determination Press, 2015.

[26] Alexander Novikov, Mikhail Trofimov, and Ivan Oseledets. Exponential machines. arxiv:1605.03795, 2016.
Structured Prediction Theory Based on Factor Graph Complexity

Corinna Cortes, Google Research, New York, NY 10011, corinna@google.com
Vitaly Kuznetsov, Google Research, New York, NY 10011, vitaly@cims.nyu.edu
Mehryar Mohri, Courant Institute and Google, New York, NY 10012, mohri@cims.nyu.edu
Scott Yang, Courant Institute, New York, NY 10012, yangs@cims.nyu.edu

Abstract

We present a general theoretical analysis of structured prediction with a series of new results. We give new data-dependent margin guarantees for structured prediction for a very wide family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition. These are the tightest margin bounds known for both standard multi-class and general structured prediction problems. Our guarantees are expressed in terms of a data-dependent complexity measure, factor graph complexity, which we show can be estimated from data and bounded in terms of familiar quantities for several commonly used hypothesis sets along with a sparsity measure for features and graphs. Our proof techniques include generalizations of Talagrand's contraction lemma that can be of independent interest. We further extend our theory by leveraging the principle of Voted Risk Minimization (VRM) and show that learning is possible even with complex factor graphs. We present new learning bounds for this advanced setting, which we use to design two new algorithms, Voted Conditional Random Field (VCRF) and Voted Structured Boosting (StructBoost). These algorithms can make use of complex features and factor graphs and yet benefit from favorable learning guarantees. We also report the results of experiments with VCRF on several datasets to validate our theory.

1 Introduction

Structured prediction covers a broad family of important learning problems.
These include key tasks in natural language processing such as part-of-speech tagging, parsing, machine translation, and named-entity recognition, important areas in computer vision such as image segmentation and object recognition, and also crucial areas in speech processing such as pronunciation modeling and speech recognition. In all these problems, the output space admits some structure. This may be a sequence of tags as in part-of-speech tagging, a parse tree as in context-free parsing, an acyclic graph as in dependency parsing, or labels of image segments as in object detection. Another property common to these tasks is that, in each case, the natural loss function admits a decomposition along the output substructures. As an example, the loss function may be the Hamming loss as in part-of-speech tagging, or it may be the edit-distance, which is widely used in natural language and speech processing. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. The output structure and corresponding loss function make these problems significantly different from the (unstructured) binary classification problems extensively studied in learning theory. In recent years, a number of different algorithms have been designed for structured prediction, including Conditional Random Field (CRF) [Lafferty et al., 2001], StructSVM [Tsochantaridis et al., 2005], Maximum-Margin Markov Network (M3N) [Taskar et al., 2003], a kernel-regression algorithm [Cortes et al., 2007], and search-based approaches such as [Daumé III et al., 2009, Doppa et al., 2014, Lam et al., 2015, Chang et al., 2015, Ross et al., 2011]. More recently, deep learning techniques have also been developed for tasks including part-of-speech tagging [Jurafsky and Martin, 2009, Vinyals et al., 2015a], named-entity recognition [Nadeau and Sekine, 2007], machine translation [Zhang et al., 2008], image segmentation [Lucchi et al., 2013], and image annotation [Vinyals et al., 2015b]. 
However, in contrast to the plethora of algorithms, there have been relatively few studies devoted to the theoretical understanding of structured prediction [Bakir et al., 2007]. Existing learning guarantees hold primarily for simple losses such as the Hamming loss [Taskar et al., 2003, Cortes et al., 2014, Collins, 2001] and do not cover other natural losses such as the edit-distance. They also typically only apply to specific factor graph models. The main exception is the work of McAllester [2007], which provides PAC-Bayesian guarantees for arbitrary losses, though only in the special case of randomized algorithms using linear (count-based) hypotheses. This paper presents a general theoretical analysis of structured prediction with a series of new results. We give new data-dependent margin guarantees for structured prediction for a broad family of loss functions and a general family of hypotheses, with an arbitrary factor graph decomposition. These are the tightest margin bounds known for both standard multi-class and general structured prediction problems. For special cases studied in the past, our learning bounds match or improve upon the previously best bounds (see Section 3.3). In particular, our bounds improve upon those of Taskar et al. [2003]. Our guarantees are expressed in terms of a data-dependent complexity measure, factor graph complexity, which we show can be estimated from data and bounded in terms of familiar quantities for several commonly used hypothesis sets along with a sparsity measure for features and graphs. We further extend our theory by leveraging the principle of Voted Risk Minimization (VRM) and show that learning is possible even with complex factor graphs. We present new learning bounds for this advanced setting, which we use to design two new algorithms, Voted Conditional Random Field (VCRF) and Voted Structured Boosting (StructBoost). 
These algorithms can make use of complex features and factor graphs and yet benefit from favorable learning guarantees. As a proof of concept validating our theory, we also report the results of experiments with VCRF on several datasets.

The paper is organized as follows. In Section 2 we introduce the notation and definitions relevant to our discussion of structured prediction. In Section 3, we derive a series of new learning guarantees for structured prediction, which are then used to prove the VRM principle in Section 4. Section 5 develops the algorithmic framework which is directly based on our theory. In Section 6, we provide some preliminary experimental results that serve as a proof of concept for our theory.

2 Preliminaries

Let X denote the input space and Y the output space. In structured prediction, the output space may be a set of sequences, images, graphs, parse trees, lists, or some other (typically discrete) objects admitting some possibly overlapping structure. Thus, we assume that the output structure can be decomposed into l substructures. For example, these may be positions along a sequence, so that the output space Y is decomposable along these substructures: Y = Y1 × ··· × Yl. Here, Yk is the set of possible labels (or classes) that can be assigned to substructure k.

Loss functions. We denote by L: Y × Y → R+ a loss function measuring the dissimilarity of two elements of the output space Y. We will assume that the loss function L is definite, that is, L(y, y′) = 0 iff y = y′. This assumption holds for all loss functions commonly used in structured prediction. A key aspect of structured prediction is that the loss function can be decomposed along the substructures Yk. As an example, L may be the Hamming loss defined by L(y, y′) = (1/l) Σ_{k=1}^{l} 1_{yk ≠ y′k} for all y = (y1, ..., yl) and y′ = (y′1, ..., y′l), with yk, y′k ∈ Yk.
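The normalized Hamming loss just defined is simple enough to state directly in code (a sketch; the function name is ours):

```python
def hamming_loss(y, y_prime):
    """Normalized Hamming loss: the fraction of the l substructures
    on which the two outputs disagree. Definite: zero iff y == y_prime."""
    assert len(y) == len(y_prime)
    return sum(yk != ypk for yk, ypk in zip(y, y_prime)) / len(y)
```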
In the common case where Y is a set of sequences defined over a finite alphabet, L may be the edit-distance, which is widely used in natural language and speech processing applications, with possibly different costs associated to insertions, deletions and substitutions. L may also be a loss based on the negative inner product of the vectors of n-gram counts of two sequences, or its negative logarithm. Such losses have been used to approximate the BLEU score loss in machine translation. There are other losses defined in computational biology based on various string-similarity measures. Our theoretical analysis is general and applies to arbitrary bounded and definite loss functions.

Figure 1: Example of factor graphs. (a) Pairwise Markov network decomposition: h(x, y) = h_{f1}(x, y1, y2) + h_{f2}(x, y2, y3). (b) Other decomposition: h(x, y) = h_{f1}(x, y1, y3) + h_{f2}(x, y1, y2, y3).

Scoring functions and factor graphs. We will adopt the common approach in structured prediction where predictions are based on a scoring function mapping X × Y to R. Let H be a family of scoring functions. For any h ∈ H, we denote by h the associated predictor: for any x ∈ X, h(x) = argmax_{y∈Y} h(x, y). Furthermore, we will assume, as is standard in structured prediction, that each function h ∈ H can be decomposed as a sum. We will consider the most general case for such decompositions, which can be made explicit using the notion of factor graphs.¹ A factor graph G is a tuple G = (V, F, E), where V is a set of variable nodes, F a set of factor nodes, and E a set of undirected edges between a variable node and a factor node. In our context, V can be identified with the set of substructure indices, that is V = {1, ..., l}. For any factor node f, denote by N(f) ⊆ V the set of variable nodes connected to f via an edge and define Y_f as the substructure set cross-product Y_f = ∏_{k∈N(f)} Y_k.
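A small sketch of the additive decomposition over factors just described: we represent each factor by the tuple of substructure indices it touches plus a scoring function, and recover the predictor h(x) = argmax_y h(x, y) by brute-force enumeration (purely illustrative; practical structured predictors exploit the graph structure with dynamic programming):

```python
from itertools import product

def score(x, y, factors):
    """h(x, y) = sum over factors f of h_f(x, y_f), where each factor is a
    pair (indices, h_f) and y_f is the restriction of y to those indices."""
    return sum(h_f(x, tuple(y[k] for k in idx)) for idx, h_f in factors)

def predict(x, factors, label_sets):
    """argmax over the full output space Y = Y_1 x ... x Y_l."""
    return max(product(*label_sets), key=lambda y: score(x, y, factors))

# Toy pairwise chain on 3 binary substructures, as in Fig. 1(a):
factors = [
    ((0, 1), lambda x, yf: 1.0 if yf[0] == yf[1] else 0.0),
    ((1, 2), lambda x, yf: 2.0 if yf == (1, 0) else 0.0),
]
y_best = predict(None, factors, [(0, 1)] * 3)
```

The enumeration visits |Y| = 2³ outputs here, which illustrates why the exponential size of the output space is the central computational and statistical difficulty of structured prediction.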
Then, h admits the following decomposition as a sum of functions h_f, each taking as argument an element of the input space x ∈ X and an element of Y_f, y_f ∈ Y_f:

h(x, y) = Σ_{f∈F} h_f(x, y_f).   (1)

Figure 1 illustrates this definition with two different decompositions. More generally, we will consider the setting in which a factor graph may depend on a particular example (x_i, y_i): G(x_i, y_i) = G_i = ([l_i], F_i, E_i). A special case of this setting is, for example, when the size l_i (or length) of each example is allowed to vary and where the number of possible labels |Y| is potentially infinite. We present other examples of such hypothesis sets and their decomposition in Section 3, where we discuss our learning guarantees.

Note that such hypothesis sets H with an additive decomposition are those commonly used in most structured prediction algorithms [Tsochantaridis et al., 2005, Taskar et al., 2003, Lafferty et al., 2001]. This is largely motivated by the computational requirement for efficient training and inference. Our results, while very general, further provide a statistical learning motivation for such decompositions.

Learning scenario. We consider the familiar supervised learning scenario where the training and test points are drawn i.i.d. according to some distribution D over X × Y. We will further adopt the standard definitions of margin, generalization error and empirical error. The margin ρ_h(x, y) of a hypothesis h for a labeled example (x, y) ∈ X × Y is defined by

ρ_h(x, y) = h(x, y) − max_{y′≠y} h(x, y′).   (2)

Let S = ((x1, y1), ..., (xm, ym)) be a training sample of size m drawn from D^m. We denote by R(h) the generalization error and by R̂_S(h) the empirical error of h over S:

R(h) = E_{(x,y)∼D}[L(h(x), y)]  and  R̂_S(h) = E_{(x,y)∼S}[L(h(x), y)],   (3)

¹Factor graphs are typically used to indicate the factorization of a probabilistic model.
We are not assuming probabilistic models, but those would also be captured by our general framework: $h$ would then be $-\log$ of a probability.

where $\mathsf{h}(x) = \operatorname{argmax}_y h(x, y)$ and where the notation $(x, y) \sim S$ indicates that $(x, y)$ is drawn according to the empirical distribution defined by $S$. The learning problem consists of using the sample $S$ to select a hypothesis $h \in H$ with small expected loss $R(h)$. Observe that the definiteness of the loss function implies, for all $x \in X$, the following equality:

$$L(\mathsf{h}(x), y) = L(\mathsf{h}(x), y)\, 1_{\rho_h(x, y) \leq 0}. \qquad (4)$$

We will later use this identity in the derivation of surrogate loss functions.

3 General learning bounds for structured prediction

In this section, we present new learning guarantees for structured prediction. Our analysis is general and applies to the broad family of definite and bounded loss functions described in the previous section. It is also general in the sense that it applies to general hypothesis sets and not just sub-families of linear functions. For linear hypotheses, we will give a more refined analysis that holds for arbitrary norm-p regularized hypothesis sets.

The theoretical analysis of structured prediction is more complex than for classification since, by definition, it depends on the properties of the loss function and the factor graph. These attributes capture the combinatorial properties of the problem which must be exploited since the total number of labels is often exponential in the size of that graph. To tackle this problem, we first introduce a new complexity tool.

3.1 Complexity measure

A key ingredient of our analysis is a new data-dependent notion of complexity that extends the classical Rademacher complexity. We define the empirical factor graph Rademacher complexity $\widehat{\mathfrak{R}}^G_S(H)$ of a hypothesis set $H$ for a sample S = (x_1, ...
, x_m) and factor graph $G$ as follows:

$$\widehat{\mathfrak{R}}^G_S(H) = \frac{1}{m}\, \mathbb{E}_{\epsilon}\Big[ \sup_{h \in H} \sum_{i=1}^m \sum_{f \in F_i} \sum_{y \in Y_f} \sqrt{|F_i|}\; \epsilon_{i,f,y}\, h_f(x_i, y) \Big],$$

where $\epsilon = (\epsilon_{i,f,y})_{i \in [m], f \in F_i, y \in Y_f}$ and where the $\epsilon_{i,f,y}$ are independent Rademacher random variables uniformly distributed over $\{\pm 1\}$. The factor graph Rademacher complexity of $H$ for a factor graph $G$ is defined as the expectation: $\mathfrak{R}^G_m(H) = \mathbb{E}_{S \sim D^m}\big[\widehat{\mathfrak{R}}^G_S(H)\big]$. It can be shown that the empirical factor graph Rademacher complexity is concentrated around its mean (Lemma 8). The factor graph Rademacher complexity is a natural extension of the standard Rademacher complexity to vector-valued hypothesis sets (with one coordinate per factor in our case). For binary classification, the factor graph and standard Rademacher complexities coincide. Otherwise, the factor graph complexity can be upper bounded in terms of the standard one. As with the standard Rademacher complexity, the factor graph Rademacher complexity of a hypothesis set can be estimated from data in many cases. In some important cases, it also admits explicit upper bounds similar to those for the standard Rademacher complexity but with an additional dependence on the factor graph quantities. We will prove this for several families of functions which are commonly used in structured prediction (Theorem 2).

3.2 Generalization bounds

In this section, we present new margin bounds for structured prediction based on the factor graph Rademacher complexity of $H$. Our results hold both for the additive and the multiplicative empirical margin losses defined below:

$$\widehat{R}^{\mathrm{add}}_{S,\rho}(h) = \mathbb{E}_{(x,y) \sim S}\Big[ \Phi^*\Big( \max_{y' \neq y} L(y', y) - \frac{1}{\rho}\big[ h(x, y) - h(x, y') \big] \Big) \Big] \qquad (5)$$

$$\widehat{R}^{\mathrm{mult}}_{S,\rho}(h) = \mathbb{E}_{(x,y) \sim S}\Big[ \Phi^*\Big( \max_{y' \neq y} L(y', y)\Big( 1 - \frac{1}{\rho}\big[ h(x, y) - h(x, y') \big] \Big) \Big) \Big]. \qquad (6)$$

Here, $\Phi^*(r) = \min(M, \max(0, r))$ for all $r$, with $M = \max_{y, y'} L(y, y')$. As we show in Section 5, convex upper bounds on $\widehat{R}^{\mathrm{add}}_{S,\rho}(h)$ and $\widehat{R}^{\mathrm{mult}}_{S,\rho}(h)$ directly lead to many existing structured prediction algorithms.
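The empirical factor graph Rademacher complexity can be estimated by Monte Carlo over draws of $\epsilon$. The following is a minimal plain-Python sketch on a deliberately tiny, hypothetical hypothesis class (m = 2 examples, one single-variable factor per example with two labels, so $\sqrt{|F_i|} = 1$); for this complete sign class the supremum equals $\sum_j |\epsilon_j| = 4$ for every draw, so the estimate should come out exactly 2, which gives a built-in sanity check:

```python
import itertools
import random

random.seed(0)

# Tiny toy instance: m = 2 examples, each with |F_i| = 1 factor over a
# single variable taking |Y_f| = 2 labels.  The "hypothesis set" assigns
# a factor score in {-1, +1} to every (example, label) pair (hypothetical).
m, labels = 2, [0, 1]
hypotheses = list(itertools.product([-1.0, 1.0], repeat=m * len(labels)))

def sup_term(eps):
    # sup_h sum_i sum_y sqrt(|F_i|) * eps[i, y] * h_f(x_i, y), with |F_i| = 1
    return max(sum(e * hv for e, hv in zip(eps, h)) for h in hypotheses)

def empirical_complexity(n_draws=2000):
    # Monte Carlo average of the supremum, times the 1/m factor
    total = 0.0
    for _ in range(n_draws):
        eps = [random.choice([-1.0, 1.0]) for _ in range(m * len(labels))]
        total += sup_term(eps)
    return total / (n_draws * m)

r_hat = empirical_complexity()
```

The complete sign class is degenerate on purpose: it makes the supremum computable in closed form, so the Monte Carlo estimator can be verified exactly. Restricting `hypotheses` to a strict subset yields a genuinely random (and smaller) estimate.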
The following is our general data-dependent margin bound for structured prediction.

Theorem 1. Fix $\rho > 0$. For any $\delta > 0$, with probability at least $1 - \delta$ over the draw of a sample $S$ of size $m$, the following holds for all $h \in H$:

$$R(h) \leq R^{\mathrm{add}}_{\rho}(h) \leq \widehat{R}^{\mathrm{add}}_{S,\rho}(h) + \frac{4\sqrt{2}}{\rho}\, \mathfrak{R}^G_m(H) + M \sqrt{\frac{\log \frac{1}{\delta}}{2m}},$$

$$R(h) \leq R^{\mathrm{mult}}_{\rho}(h) \leq \widehat{R}^{\mathrm{mult}}_{S,\rho}(h) + \frac{4\sqrt{2}\, M}{\rho}\, \mathfrak{R}^G_m(H) + M \sqrt{\frac{\log \frac{1}{\delta}}{2m}}.$$

The full proof of Theorem 1 is given in Appendix A. It is based on a new contraction lemma (Lemma 5) generalizing Talagrand's lemma that can be of independent interest.[2] We also present a more refined contraction lemma (Lemma 6) that can be used to improve the bounds of Theorem 1. Theorem 1 is the first data-dependent generalization guarantee for structured prediction with general loss functions, general hypothesis sets, and arbitrary factor graphs for both multiplicative and additive margins. We also present a version of this result with empirical complexities as Theorem 7 in the supplementary material. We will compare these guarantees to known special cases below. The margin bounds above can be extended to hold uniformly over $\rho \in (0, 1]$ at the price of an additional term of the form $\sqrt{(\log \log_2 \frac{2}{\rho})/m}$ in the bound, using known techniques (see for example [Mohri et al., 2012]).

The hypothesis set used by convex structured prediction algorithms such as StructSVM [Tsochantaridis et al., 2005], Max-Margin Markov Networks (M3N) [Taskar et al., 2003] or Conditional Random Fields (CRF) [Lafferty et al., 2001] is that of linear functions. More precisely, let $\Psi$ be a feature mapping from $(X \times Y)$ to $\mathbb{R}^N$ such that $\Psi(x, y) = \sum_{f \in F} \Psi_f(x, y_f)$. For any $p$, define $H_p$ as follows:

$$H_p = \big\{ (x, y) \mapsto w \cdot \Psi(x, y) \colon w \in \mathbb{R}^N, \|w\|_p \leq \Lambda_p \big\}.$$

Then, $\widehat{\mathfrak{R}}^G_m(H_p)$ can be efficiently estimated using random sampling and solving LP programs. Moreover, one can obtain explicit upper bounds on $\widehat{\mathfrak{R}}^G_m(H_p)$. To simplify our presentation, we will consider the case $p = 1, 2$, but our results can be extended to arbitrary $p \geq 1$ and, more generally, to arbitrary group norms.
Theorem 2. For any sample $S = (x_1, \ldots, x_m)$, the following upper bounds hold for the empirical factor graph complexity of $H_1$ and $H_2$:

$$\widehat{\mathfrak{R}}^G_S(H_1) \leq \frac{\Lambda_1 r_\infty}{m} \sqrt{s \log(2N)}, \qquad \widehat{\mathfrak{R}}^G_S(H_2) \leq \frac{\Lambda_2 r_2}{m} \sqrt{\sum_{i=1}^m \sum_{f \in F_i} \sum_{y \in Y_f} |F_i|},$$

where $r_\infty = \max_{i,f,y} \|\Psi_f(x_i, y)\|_\infty$, $r_2 = \max_{i,f,y} \|\Psi_f(x_i, y)\|_2$ and where $s$ is a sparsity factor defined by $s = \max_{j \in [1,N]} \sum_{i=1}^m \sum_{f \in F_i} \sum_{y \in Y_f} |F_i|\, 1_{\Psi_{f,j}(x_i, y) \neq 0}$.

Plugging these factor graph complexity upper bounds into Theorem 1 immediately yields explicit data-dependent structured prediction learning guarantees for linear hypotheses with general loss functions and arbitrary factor graphs (see Corollary 10). Observe that, in the worst case, the sparsity factor can be bounded as follows:

$$s \leq \sum_{i=1}^m \sum_{f \in F_i} \sum_{y \in Y_f} |F_i| \leq \sum_{i=1}^m |F_i|^2 d_i \leq m \max_i |F_i|^2 d_i,$$

where $d_i = \max_{f \in F_i} |Y_f|$. Thus, the factor graph Rademacher complexities of linear hypotheses in $H_1$ scale as $O\big(\sqrt{\log(N) \max_i |F_i|^2 d_i / m}\big)$. An important observation is that $|F_i|$ and $d_i$ depend on the observed sample. This shows that the expected size of the factor graph is crucial for learning in this scenario. This should be contrasted with other existing structured prediction guarantees that we discuss below, which assume a fixed upper bound on the size of the factor graph. Note that our result shows that learning is possible even with an infinite set $Y$. To the best of our knowledge, this is the first learning guarantee for learning with infinitely many classes.

[2] A result similar to Lemma 5 has also been recently proven independently in [Maurer, 2016].

Our learning guarantee for $H_1$ can additionally benefit from the sparsity of the feature mapping and observed data. In particular, in many applications, $\Psi_{f,j}$ is a binary indicator function that is non-zero for a single $(x, y) \in X \times Y_f$. For instance, in NLP, $\Psi_{f,j}$ may indicate an occurrence of a certain n-gram in the input $x_i$ and output $y_i$.
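The sparsity factor $s$ in Theorem 2 is a purely combinatorial quantity and can be computed directly from the feature map. The following plain-Python sketch (the tiny binary feature map `Psi` is hypothetical, chosen only for illustration) computes $s$ and checks it against the worst-case bound $s \leq \sum_i |F_i|^2 d_i$:

```python
# Toy instance: m = 2 examples.  Psi[i][f][y] is the (hypothetical) binary
# feature vector Psi_f(x_i, y) of dimension N = 3.
N = 3
Psi = [
    # example 0: |F_0| = 2 factors, |Y_f| = 2 labels each
    [[[1, 0, 0], [0, 1, 0]],
     [[0, 1, 0], [0, 0, 1]]],
    # example 1: |F_1| = 1 factor, |Y_f| = 2 labels
    [[[1, 0, 0], [1, 0, 0]]],
]

def sparsity_factor(Psi, N):
    # s = max_j sum_i sum_{f in F_i} sum_{y in Y_f} |F_i| * 1[Psi_{f,j}(x_i, y) != 0]
    s = 0
    for j in range(N):
        total = 0
        for ex in Psi:
            Fi = len(ex)                  # |F_i| for this example
            for fac in ex:
                for feat in fac:
                    if feat[j] != 0:
                        total += Fi
        s = max(s, total)
    return s

s = sparsity_factor(Psi, N)
# worst case: sum_i |F_i|^2 * d_i, with d_i = max_f |Y_f|
worst = sum(len(ex) ** 2 * max(len(fac) for fac in ex) for ex in Psi)
```

Here the sparse feature map gives s = 4, well below the worst-case value of 10, which is exactly the gap the sparsity-based bound exploits.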
In this case, $s = \sum_{i=1}^m |F_i|^2 \leq m \max_i |F_i|^2$ and the complexity term is only in $O\big(\max_i |F_i| \sqrt{\log(N)/m}\big)$, where $N$ may depend linearly on $d_i$.

3.3 Special cases and comparisons

Markov networks. For the pairwise Markov networks with a fixed number of substructures $l$ studied by Taskar et al. [2003], our equivalent factor graph admits $l$ nodes, $|F_i| = l$, and the maximum size of $Y_f$ is $d_i = k^2$ if each substructure of a pair can be assigned one of $k$ classes. Thus, if we apply Corollary 10 with Hamming distance as our loss function and divide the bound through by $l$, to normalize the loss to the interval $[0, 1]$ as in [Taskar et al., 2003], we obtain the following explicit form of our guarantee for an additive empirical margin loss, for all $h \in H_2$:

$$R(h) \leq \widehat{R}^{\mathrm{add}}_{S,\rho}(h) + \frac{4\Lambda_2 r_2}{\rho} \sqrt{\frac{2k^2}{m}} + 3\sqrt{\frac{\log \frac{1}{\delta}}{2m}}.$$

This bound can be further improved by eliminating the dependency on $k$ using an extension of our contraction Lemma 5 to $\|\cdot\|_{\infty,2}$ (see Lemma 6). The complexity term of Taskar et al. [2003] is bounded by a quantity that varies as $\widetilde{O}\big(\sqrt{\Lambda_2^2 q^2 r_2^2 / m}\big)$, where $q$ is the maximal out-degree of a factor graph. Our bound has the same dependence on these key quantities, but with no logarithmic term in our case. Note that, unlike the result of Taskar et al. [2003], our bound also holds for general loss functions and different $p$-norm regularizers. Moreover, our result for a multiplicative empirical margin loss is new, even in this special case.

Multi-class classification. For standard (unstructured) multi-class classification, we have $|F_i| = 1$ and $d_i = c$, where $c$ is the number of classes. In that case, for linear hypotheses with norm-2 regularization, the complexity term of our bound varies as $O\big(\Lambda_2 r_2 \sqrt{c/(\rho^2 m)}\big)$ (Corollary 11). This improves upon the best known general margin bounds of Kuznetsov et al. [2014], who provide a guarantee that scales linearly with the number of classes instead.
Moreover, in the special case where an individual $w_y$ is learned for each class $y \in [c]$, we retrieve the recent favorable bounds given by Lei et al. [2015], albeit with a somewhat simpler formulation. In that case, for any $(x, y)$, all components of the feature vector $\Psi(x, y)$ are zero, except (perhaps) for the $N$ components corresponding to class $y$, where $N$ is the dimension of $w_y$. In view of that, for example for a group-norm $\|\cdot\|_{2,1}$-regularization, the complexity term of our bound varies as $O\big(\Lambda r \sqrt{(\log c)/(\rho^2 m)}\big)$, which matches the results of Lei et al. [2015] with a logarithmic dependency on $c$ (ignoring some complex exponents of $\log c$ in their case). Additionally, note that unlike existing multi-class learning guarantees, our results hold for arbitrary loss functions. See Corollary 12 for further details. Our sparsity-based bounds can also be used to give bounds with logarithmic dependence on the number of classes when the features only take values in $\{0, 1\}$. Finally, using Lemma 6 instead of Lemma 5, the dependency on the number of classes can be further improved.

We conclude this section by observing that, since our guarantees are expressed in terms of the average size of the factor graph over a given sample, this invites us to search for a hypothesis set $H$ and predictor $h \in H$ such that the trade-off between the empirical size of the factor graph and the empirical error is optimal. In the next section, we will make use of the recently developed principle of Voted Risk Minimization (VRM) [Cortes et al., 2015] to reach this objective.

4 Voted Risk Minimization

In many structured prediction applications such as natural language processing and computer vision, one may wish to exploit very rich features. However, the use of rich families of hypotheses could lead to overfitting.
In this section, we show that it may be possible to use rich families in conjunction with simpler families, provided that fewer complex hypotheses are used (or that they are used with less mixture weight). We achieve this goal by deriving learning guarantees for ensembles of structured prediction rules that explicitly account for the differing complexities between families. This will motivate the algorithms that we present in Section 5.

Assume that we are given $p$ families $H_1, \ldots, H_p$ of functions mapping from $X \times Y$ to $\mathbb{R}$. Define the ensemble family $F = \operatorname{conv}(\cup_{k=1}^p H_k)$, that is the family of functions $f$ of the form $f = \sum_{t=1}^T \alpha_t h_t$, where $\alpha = (\alpha_1, \ldots, \alpha_T)$ is in the simplex $\Delta$ and where, for each $t \in [1, T]$, $h_t$ is in $H_{k_t}$ for some $k_t \in [1, p]$. We further assume that $\mathfrak{R}^G_m(H_1) \leq \mathfrak{R}^G_m(H_2) \leq \cdots \leq \mathfrak{R}^G_m(H_p)$. As an example, the $H_k$s may be ordered by the size of the corresponding factor graphs.

The main result of this section is a generalization of the VRM theory to the structured prediction setting. The learning guarantees that we present are in terms of upper bounds on $\widehat{R}^{\mathrm{add}}_{S,\rho}(h)$ and $\widehat{R}^{\mathrm{mult}}_{S,\rho}(h)$, which are defined as follows for all $\tau \geq 0$:

$$\widehat{R}^{\mathrm{add}}_{S,\rho,\tau}(h) = \mathbb{E}_{(x,y) \sim S}\Big[ \Phi^*\Big( \max_{y' \neq y} L(y', y) + \tau - \frac{1}{\rho}\big[ h(x, y) - h(x, y') \big] \Big) \Big] \qquad (7)$$

$$\widehat{R}^{\mathrm{mult}}_{S,\rho,\tau}(h) = \mathbb{E}_{(x,y) \sim S}\Big[ \Phi^*\Big( \max_{y' \neq y} L(y', y)\Big( 1 + \tau - \frac{1}{\rho}\big[ h(x, y) - h(x, y') \big] \Big) \Big) \Big]. \qquad (8)$$

Here, $\tau$ can be interpreted as a margin term that acts in conjunction with $\rho$. For simplicity, we assume in this section that $|Y| = c < +\infty$.

Theorem 3. Fix $\rho > 0$. For any $\delta > 0$, with probability at least $1 - \delta$ over the draw of a sample $S$ of size $m$, each of the following inequalities holds for all $f \in F$:

$$R(f) - \widehat{R}^{\mathrm{add}}_{S,\rho,1}(f) \leq \frac{4\sqrt{2}}{\rho} \sum_{t=1}^T \alpha_t\, \mathfrak{R}^G_m(H_{k_t}) + C(\rho, M, c, m, p),$$

$$R(f) - \widehat{R}^{\mathrm{mult}}_{S,\rho,1}(f) \leq \frac{4\sqrt{2}\, M}{\rho} \sum_{t=1}^T \alpha_t\, \mathfrak{R}^G_m(H_{k_t}) + C(\rho, M, c, m, p),$$

where

$$C(\rho, M, c, m, p) = \frac{2M}{\rho} \sqrt{\frac{\log p}{m}} + 3M \sqrt{\Big\lceil \frac{4}{\rho^2} \log\Big( \frac{c^2 \rho^2 m}{4 \log p} \Big) \Big\rceil \frac{\log p}{m} + \frac{\log \frac{2}{\delta}}{2m}}.$$

The proof of this theorem crucially depends on the theory we developed in Section 3 and is given in Appendix A.
As with Theorem 1, we also present a version of this result with empirical complexities as Theorem 14 in the supplementary material. The explicit dependence of this bound on the parameter vector $\alpha$ suggests that learning even with highly complex hypothesis sets could be possible so long as the complexity term, which is a weighted average of the factor graph complexities, is not too large. The theorem provides a quantitative way of determining the mixture weights that should be apportioned to each family. Furthermore, the dependency on the number of distinct feature map families $H_k$ is very mild and therefore suggests that a large number of families can be used. These properties will be useful for motivating new algorithms for structured prediction.

5 Algorithms

In this section, we derive several algorithms for structured prediction based on the VRM principle discussed in Section 4. We first give general convex upper bounds (Section 5.1) on the structured prediction loss which recover as special cases the loss functions used in StructSVM [Tsochantaridis et al., 2005], Max-Margin Markov Networks (M3N) [Taskar et al., 2003], and Conditional Random Fields (CRF) [Lafferty et al., 2001]. Next, we introduce a new algorithm, Voted Conditional Random Field (VCRF), in Section 5.2, with accompanying experiments as proof of concept. We also present another algorithm, Voted StructBoost (VStructBoost), in Appendix C.

5.1 General framework for convex surrogate losses

Given $(x, y) \in X \times Y$, the mapping $h \mapsto L(\mathsf{h}(x), y)$ is typically not a convex function of $h$, which leads to computationally hard optimization problems. This motivates the use of convex surrogate losses. We first introduce a general formulation of surrogate losses for structured prediction problems.

Lemma 4. For any $u \in \mathbb{R}_+$, let $\Phi_u \colon \mathbb{R} \to \mathbb{R}$ be an upper bound on $v \mapsto u\, 1_{v \leq 0}$. Then, the following upper bound holds for any $h \in H$ and $(x, y) \in X \times Y$:

$$L(\mathsf{h}(x), y) \leq \max_{y' \neq y} \Phi_{L(y',y)}\big( h(x, y) - h(x, y') \big). \qquad (9)$$
The proof is given in Appendix A. This result defines a general framework that enables us to straightforwardly recover many of the most common state-of-the-art structured prediction algorithms via suitable choices of $\Phi_u(v)$: (a) for $\Phi_u(v) = \max(0, u(1 - v))$, the right-hand side of (9) coincides with the surrogate loss defining StructSVM [Tsochantaridis et al., 2005]; (b) for $\Phi_u(v) = \max(0, u - v)$, it coincides with the surrogate loss defining Max-Margin Markov Networks (M3N) [Taskar et al., 2003] when using for $L$ the Hamming loss; and (c) for $\Phi_u(v) = \log(1 + e^{u - v})$, it coincides with the surrogate loss defining the Conditional Random Field (CRF) [Lafferty et al., 2001].

Moreover, alternative choices of $\Phi_u(v)$ can help define new algorithms. In particular, we will refer to the algorithm based on the surrogate loss defined by $\Phi_u(v) = u e^{-v}$ as StructBoost, in reference to the exponential loss used in AdaBoost. Another related alternative is based on the choice $\Phi_u(v) = e^{u - v}$. See Appendix C for further details on this algorithm. In fact, for each $\Phi_u(v)$ described above, the corresponding convex surrogate is an upper bound on either the multiplicative or additive margin loss introduced in Section 3. Therefore, each of these algorithms seeks a hypothesis that minimizes the generalization bounds presented in Section 3. To the best of our knowledge, this interpretation of these well-known structured prediction algorithms is also new. In what follows, we derive new structured prediction algorithms that minimize finer generalization bounds presented in Section 4.

5.2 Voted Conditional Random Field (VCRF)

We first consider the convex surrogate loss based on $\Phi_u(v) = \log(1 + e^{u - v})$, which corresponds to the loss defining CRF models.
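The upper-bound condition of Lemma 4 is easy to check numerically for the particular surrogate choices just listed. This plain-Python sketch (toy values only) verifies that each $\Phi_u$ dominates $v \mapsto u\, 1_{v \leq 0}$ on a grid of $(u, v)$ pairs:

```python
import math

# The surrogate choices discussed above: u >= 0 plays the role of the loss
# value L(y', y), and v the score difference h(x, y) - h(x, y').
surrogates = {
    "StructSVM":   lambda u, v: max(0.0, u * (1.0 - v)),
    "M3N":         lambda u, v: max(0.0, u - v),
    "CRF":         lambda u, v: math.log(1.0 + math.exp(u - v)),
    "StructBoost": lambda u, v: u * math.exp(-v),
}

def target(u, v):
    # v -> u * 1_{v <= 0}, the function each Phi_u must upper-bound
    return u if v <= 0 else 0.0

us = [0.0, 0.5, 1.0, 2.0]
vs = [x / 4.0 for x in range(-12, 13)]   # grid over [-3, 3]
ok = all(phi(u, v) >= target(u, v) - 1e-12
         for phi in surrogates.values() for u in us for v in vs)
```

The check succeeds for all four choices, e.g. for the CRF surrogate because $\log(1 + e^{u - v}) \geq u - v \geq u$ whenever $v \leq 0$.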
Using the monotonicity of the logarithm and upper bounding the maximum by a sum gives the following upper bound on the surrogate loss:

$$\max_{y' \neq y} \log\big( 1 + e^{L(y, y') - w \cdot (\Psi(x, y) - \Psi(x, y'))} \big) \leq \log\Big( \sum_{y' \in Y} e^{L(y, y') - w \cdot (\Psi(x, y) - \Psi(x, y'))} \Big),$$

which, combined with the VRM principle, leads to the following optimization problem:

$$\min_{w}\; \frac{1}{m} \sum_{i=1}^m \log\Big( \sum_{y \in Y} e^{L(y, y_i) - w \cdot (\Psi(x_i, y_i) - \Psi(x_i, y))} \Big) + \sum_{k=1}^p (\lambda r_k + \beta) \|w_k\|_1, \qquad (10)$$

where $r_k = r_\infty |F^{(k)}| \sqrt{\log N}$. We refer to the learning algorithm based on the optimization problem (10) as VCRF. Note that for $\lambda = 0$, (10) coincides with the objective function of $L_1$-regularized CRF. Observe that we can also directly use $\max_{y' \neq y} \log(1 + e^{L(y, y') - w \cdot \delta\Psi(x, y, y')})$ or its upper bound $\sum_{y' \neq y} \log(1 + e^{L(y, y') - w \cdot \delta\Psi(x, y, y')})$ as a convex surrogate. We can similarly derive an $L_2$-regularization formulation of the VCRF algorithm. In Appendix D, we describe efficient algorithms for solving the VCRF and VStructBoost optimization problems.

6 Experiments

In Appendix B, we corroborate our theory by reporting experimental results suggesting that the VCRF algorithm can outperform the CRF algorithm on a number of part-of-speech (POS) datasets.

7 Conclusion

We presented a general theoretical analysis of structured prediction. Our data-dependent margin guarantees for structured prediction can be used to guide the design of new algorithms or to derive guarantees for existing ones. Their explicit dependency on the properties of the factor graph and on feature sparsity can help shed new light on the role played by the graph and features in generalization. Our extension of the VRM theory to structured prediction provides a new analysis of generalization when using a very rich set of features, which is common in applications such as natural language processing and leads to new algorithms, VCRF and VStructBoost. Our experimental results for VCRF serve as a proof of concept and motivate more extensive empirical studies of these algorithms.
Acknowledgments

This work was partly funded by NSF CCF-1535987 and IIS-1618662, and NSF GRFP DGE-1342536.

References

G. H. Bakir, T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, and S. V. N. Vishwanathan. Predicting Structured Data (Neural Information Processing). The MIT Press, 2007.
K. Chang, A. Krishnamurthy, A. Agarwal, H. Daumé III, and J. Langford. Learning to search better than your teacher. In ICML, 2015.
M. Collins. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In Proceedings of IWPT, 2001.
C. Cortes, M. Mohri, and J. Weston. A general regression framework for learning string-to-string mappings. In Predicting Structured Data. MIT Press, 2007.
C. Cortes, V. Kuznetsov, and M. Mohri. Ensemble methods for structured prediction. In ICML, 2014.
C. Cortes, P. Goyal, V. Kuznetsov, and M. Mohri. Kernel extraction via voted risk minimization. JMLR, 2015.
H. Daumé III, J. Langford, and D. Marcu. Search-based structured prediction. Machine Learning, 75(3):297–325, 2009.
J. R. Doppa, A. Fern, and P. Tadepalli. Structured prediction via output space search. JMLR, 15(1):1317–1350, 2014.
D. Jurafsky and J. H. Martin. Speech and Language Processing (2nd Edition). Prentice-Hall, Inc., 2009.
V. Kuznetsov, M. Mohri, and U. Syed. Multi-class deep boosting. In NIPS, 2014.
J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
M. Lam, J. R. Doppa, S. Todorovic, and T. G. Dietterich. HC-search for structured prediction in computer vision. In CVPR, 2015.
Y. Lei, Ü. D. Dogan, A. Binder, and M. Kloft. Multi-class SVMs: From tighter data-dependent generalization bounds to novel algorithms. In NIPS, 2015.
A. Lucchi, L. Yunpeng, and P. Fua. Learning for structured prediction using approximate subgradient descent with working sets. In CVPR, 2013.
A. Maurer. A vector-contraction inequality for Rademacher complexities. In ALT, 2016.
D. McAllester. Generalization bounds and consistency for structured labeling. In Predicting Structured Data. MIT Press, 2007.
M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of Machine Learning. The MIT Press, 2012.
D. Nadeau and S. Sekine. A survey of named entity recognition and classification. Linguisticae Investigationes, 30(1):3–26, January 2007.
S. Ross, G. J. Gordon, and D. Bagnell. A reduction of imitation learning and structured prediction to no-regret online learning. In AISTATS, 2011.
B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. JMLR, 6:1453–1484, Dec. 2005.
O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. In NIPS, 2015a.
O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015b.
D. Zhang, L. Sun, and W. Li. A structured prediction approach for statistical machine translation. In IJCNLP, 2008.
The Multiple Quantile Graphical Model

Alnur Ali (Machine Learning Department, Carnegie Mellon University, alnurali@cmu.edu)
J. Zico Kolter (Computer Science Department, Carnegie Mellon University, zkolter@cs.cmu.edu)
Ryan J. Tibshirani (Department of Statistics, Carnegie Mellon University, ryantibs@cmu.edu)

Abstract

We introduce the Multiple Quantile Graphical Model (MQGM), which extends the neighborhood selection approach of Meinshausen and Bühlmann for learning sparse graphical models. The latter is defined by the basic subproblem of modeling the conditional mean of one variable as a sparse function of all others. Our approach models a set of conditional quantiles of one variable as a sparse function of all others, and hence offers a much richer, more expressive class of conditional distribution estimates. We establish that, under suitable regularity conditions, the MQGM identifies the exact conditional independencies with probability tending to one as the problem size grows, even outside of the usual homoskedastic Gaussian data model. We develop an efficient algorithm for fitting the MQGM using the alternating direction method of multipliers. We also describe a strategy for sampling from the joint distribution that underlies the MQGM estimate. Lastly, we present detailed experiments that demonstrate the flexibility and effectiveness of the MQGM in modeling heteroskedastic non-Gaussian data.

1 Introduction

We consider modeling the joint distribution Pr(y1, . . . , yd) of d random variables, given n independent draws from this distribution y(1), . . . , y(n) ∈ Rd, where possibly d ≫ n. Later, we generalize this setup and consider modeling the conditional distribution Pr(y1, . . . , yd|x1, . . . , xp), given n independent pairs (x(1), y(1)), . . . , (x(n), y(n)) ∈ Rp+d. Our starting point is the neighborhood selection method [28], which is typically considered in the context of multivariate Gaussian data, and seen as a tool for covariance selection [8]: when Pr(y1, . . .
, yd) is a multivariate Gaussian distribution, it is a well-known fact that yj and yk are conditionally independent given the remaining variables if and only if the coefficient corresponding to yk is zero in the (linear) regression of yj on all other variables (e.g., [22]). Therefore, in neighborhood selection we compute, for each k = 1, . . . , d, a lasso regression — in order to obtain a small set of conditional dependencies — of yk on the remaining variables, i.e.,

$$\operatorname*{minimize}_{\theta_k \in \mathbb{R}^d}\; \sum_{i=1}^n \Big( y^{(i)}_k - \sum_{j \neq k} \theta_{kj} y^{(i)}_j \Big)^2 + \lambda \|\theta_k\|_1, \qquad (1)$$

for a tuning parameter λ > 0. This strategy can be seen as a pseudolikelihood approximation [4],

$$\Pr(y_1, \ldots, y_d) \approx \prod_{k=1}^d \Pr(y_k \,|\, y_{\neg k}), \qquad (2)$$

where y¬k denotes all variables except yk. Under the multivariate Gaussian model for Pr(y1, . . . , yd), the conditional distributions Pr(yk|y¬k), k = 1, . . . , d here are (univariate) Gaussians, and maximizing the pseudolikelihood in (2) is equivalent to separately maximizing the conditionals, as is precisely done in (1) (with induced sparsity), for k = 1, . . . , d.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Following the pseudolikelihood-based approach traditionally means carrying out three steps: (i) we write down a suitable family of joint distributions for Pr(y1, . . . , yd), (ii) we derive the conditionals Pr(yk|y¬k), k = 1, . . . , d, and then (iii) we maximize each conditional likelihood by (freely) fitting the parameters. Neighborhood selection, and a number of related approaches that came after it (see Section 2.1), can all be thought of in this workflow. In many ways, step (ii) acts as the bottleneck here, and to derive the conditionals, we are usually limited to a homoskedastic and parametric family for the joint distribution.
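The neighborhood subproblem in (1) is a standard lasso regression, and a small sketch makes the mechanism concrete. The following plain-Python coordinate descent (a minimal sketch, not an optimized solver; it uses the common (1/2n)-scaled form of the objective, so its `lam` corresponds to λ/(2n) in (1)) regresses one variable on the rest with an ℓ1 penalty and recovers the single true conditional dependency in synthetic data:

```python
import random

random.seed(1)

def soft_threshold(z, t):
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_neighborhood(Y, k, lam, n_iter=200):
    # Coordinate descent for the neighborhood subproblem: regress variable k
    # of the data matrix Y (rows = observations) on all other variables,
    # with an l1 penalty inducing a sparse set of conditional dependencies.
    n, d = len(Y), len(Y[0])
    others = [j for j in range(d) if j != k]
    y = [row[k] for row in Y]
    theta = {j: 0.0 for j in others}
    for _ in range(n_iter):
        for j in others:
            # partial residual excluding coordinate j
            r = [y[i] - sum(theta[l] * Y[i][l] for l in others if l != j)
                 for i in range(n)]
            rho = sum(Y[i][j] * r[i] for i in range(n)) / n
            z = sum(Y[i][j] ** 2 for i in range(n)) / n
            theta[j] = soft_threshold(rho, lam) / z
    return theta

# Synthetic data: y_1 = 2*y_0 + noise; y_2, y_3 independent of both.
n = 50
Y = []
for _ in range(n):
    y0, y2, y3 = (random.gauss(0, 1) for _ in range(3))
    Y.append([y0, 2.0 * y0 + random.gauss(0, 0.1), y2, y3])

theta = lasso_neighborhood(Y, k=1, lam=0.1)
```

With this data, the coefficient on y0 lands near 2 (slightly shrunk by the penalty), while the coefficients on the irrelevant variables y2 and y3 are driven to zero, which is the sparse neighborhood the method is after.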
The approach we take in this paper differs somewhat substantially, as we begin by directly modeling the conditionals in (2), without any preconceived model for the joint distribution — in this sense, it may be seen as a type of dependency network [13] for continuous data. We also employ heteroskedastic, nonparametric models for the conditional distributions, which allows us great flexibility in learning these conditional relationships. Our method, called the Multiple Quantile Graphical Model (MQGM), is a marriage of ideas in high-dimensional, nonparametric, multiple quantile regression with those in the dependency network literature (the latter is typically focused on discrete, not continuous, data).

An outline for this paper is as follows. Section 2 reviews background material, and Section 3 develops the MQGM estimator. Section 4 studies basic properties of the MQGM, and establishes a structure recovery result under appropriate regularity conditions, even for heteroskedastic, non-Gaussian data. Section 5 describes an efficient ADMM algorithm for estimation, and Section 6 presents empirical examples comparing the MQGM versus common alternatives. Section 7 concludes with a discussion.

2 Background

2.1 Neighborhood selection and related methods

Neighborhood selection has motivated a number of methods for learning sparse graphical models. The literature here is vast; we do not claim to give a complete treatment, but just mention some relevant approaches. Many pseudolikelihood approaches have been proposed, see e.g., [35, 33, 12, 24, 17, 1]. These works exploit the connection between estimating a sparse inverse covariance matrix and regression, and they vary in terms of the optimization algorithms they use and the theoretical guarantees they offer. In a clearly related but distinct line of research, [45, 2, 11, 36] proposed ℓ1-penalized likelihood estimation in the Gaussian graphical model, a method now generally termed the graphical lasso (GLasso).
Following this, several recent papers have extended the GLasso in various ways. [10] examined a modification based on the multivariate Student t-distribution, for robust graphical modeling. [37, 46, 42] considered conditional distributions of the form Pr(y1, . . . , yd|x1, . . . , xp). [23] proposed a model for mixed (both continuous and discrete) data types, generalizing both GLasso and pairwise Markov random fields. [25, 26] used copulas for learning non-Gaussian graphical models.

A strength of neighborhood-based (i.e., pseudolikelihood-based) approaches lies in their simplicity; because they essentially reduce to a collection of univariate probability models, they are in a sense much easier to study outside of the typical homoskedastic, Gaussian data setting. [14, 43, 44] elegantly studied the implications of using univariate exponential family models for the conditionals in (2). Closely related to pseudolikelihood approaches are dependency networks [13]. Both frameworks focus on the conditional distributions of one variable given all the rest; the difference lies in whether or not the model for conditionals stems from first specifying some family of joint distributions (pseudolikelihood methods), or not (dependency networks). Dependency networks have been thoroughly studied for discrete data, e.g., [13, 29]. For continuous data, [40] proposed modeling the mean in a Gaussian neighborhood regression as a nonparametric, additive function of the remaining variables, yielding flexible relationships — this is a type of dependency network for continuous data (though it is not described by the authors in this way). Our method, the MQGM, also deals with continuous data, and is the first to our knowledge that allows for fully nonparametric conditional distributions, as well as nonparametric contributions of the neighborhood variables, in each local model.

2.2 Quantile regression

In linear regression, we estimate the conditional mean of y|x1, . . . , xp from samples.
Similarly, in α-quantile regression [20], we estimate the conditional α-quantile of y|x1, . . . , xp for a given α ∈ [0, 1], formally $Q_{y|x_1,\ldots,x_p}(\alpha) = \inf\{t : \Pr(y \leq t \,|\, x_1, \ldots, x_p) \geq \alpha\}$, by solving the convex optimization problem:

$$\operatorname*{minimize}_{\theta}\; \sum_{i=1}^n \psi_\alpha\Big( y^{(i)} - \sum_{j=1}^p \theta_j x^{(i)}_j \Big),$$

where $\psi_\alpha(z) = \max\{\alpha z, (\alpha - 1) z\}$ is the α-quantile loss (also called the "pinball" or "tilted absolute" loss). Quantile regression can be useful when the conditional distribution in question is suspected to be heteroskedastic and/or non-Gaussian, e.g., heavy-tailed, or if we wish to understand properties of the distribution other than the mean, e.g., tail behavior. In multiple quantile regression, we solve several quantile regression problems simultaneously, each corresponding to a different quantile level; these problems can be coupled somehow to increase efficiency in estimation (see details in the next section).

Again, the literature on quantile regression is quite vast (especially that from econometrics), and we only give a short review here. A standard text is [18]. Nonparametric modeling of quantiles is a natural extension from the (linear) quantile regression approach outlined above; in the univariate case (one conditioning variable), [21] suggested a method using smoothing splines, and [38] described an approach using kernels. More recently, [19] studied the multivariate nonparametric case (more than one conditioning variable), using additive models. In the high-dimensional setting, where p is large, [3, 16, 9] studied ℓ1-penalized quantile regression and derived estimation and recovery theory for non-(sub-)Gaussian data. We extend results in [9] to prove structure recovery guarantees for the MQGM (in Section 4.3).

3 The multiple quantile graphical model

Many choices can be made with regards to the final form of the MQGM, and to help in understanding these options, we break down our presentation in parts. First fix some ordered set A = {α1, . . .
, αr} of quantile levels, e.g., A = {0.05, 0.10, . . . , 0.95}. For each variable yk, and each level αℓ, we model the conditional αℓ-quantile given the other variables, using an additive expansion of the form:

$$Q_{y_k | y_{\neg k}}(\alpha_\ell) = b^*_{\ell k} + \sum_{j \neq k} f^*_{\ell k j}(y_j), \qquad (3)$$

where $b^*_{\ell k} \in \mathbb{R}$ is an intercept term, and $f^*_{\ell k j}$, j = 1, . . . , d are smooth, but not parametric in form. In its most general form, the MQGM estimator is defined as a collection of optimization problems, over k = 1, . . . , d and ℓ = 1, . . . , r:

$$\operatorname*{minimize}_{b_{\ell k},\; f_{\ell k j} \in \mathcal{F}_{\ell k j},\; j = 1, \ldots, d}\; \sum_{i=1}^n \psi_{\alpha_\ell}\Big( y^{(i)}_k - b_{\ell k} - \sum_{j \neq k} f_{\ell k j}(y^{(i)}_j) \Big) + \sum_{j \neq k} \big( \lambda_1 P_1(f_{\ell k j}) + \lambda_2 P_2(f_{\ell k j}) \big)^{\omega}. \qquad (4)$$

Here λ1, λ2 ≥ 0 are tuning parameters, Fℓkj, j = 1, . . . , d are univariate function spaces, ω > 0 is a fixed exponent, and P1, P2 are sparsity and smoothness penalty functions, respectively. We give three examples below; many other variants are also possible.

Example 1: basis expansion model. Consider taking $\mathcal{F}_{\ell k j} = \operatorname{span}\{\phi^j_1, \ldots, \phi^j_m\}$, the span of m basis functions, e.g., radial basis functions (RBFs) with centers placed at appropriate locations across the domain of variable j, for each j = 1, . . . , d. This means that each fℓkj ∈ Fℓkj can be expressed as $f_{\ell k j}(x) = \theta^T_{\ell k j} \phi^j(x)$, for a coefficient vector $\theta_{\ell k j} \in \mathbb{R}^m$, where $\phi^j(x) = (\phi^j_1(x), \ldots, \phi^j_m(x))$. Also consider an exponent ω = 1, and the sparsity and smoothness penalties $P_1(f_{\ell k j}) = \|\theta_{\ell k j}\|_2$ and $P_2(f_{\ell k j}) = \|\theta_{\ell k j}\|_2^2$, which are group lasso and ridge penalties, respectively. With these choices in place, the MQGM problem in (4) can be rewritten in finite-dimensional form:

$$\operatorname*{minimize}_{b_{\ell k},\; \theta_{\ell k} = (\theta_{\ell k 1}, \ldots, \theta_{\ell k d})}\; \psi_{\alpha_\ell}\big( Y_k - b_{\ell k} \mathbb{1} - \Phi \theta_{\ell k} \big) + \sum_{j \neq k} \big( \lambda_1 \|\theta_{\ell k j}\|_2 + \lambda_2 \|\theta_{\ell k j}\|_2^2 \big). \qquad (5)$$

Above, we have used the abbreviation $\psi_{\alpha_\ell}(z) = \sum_{i=1}^n \psi_{\alpha_\ell}(z_i)$ for a vector $z = (z_1, \ldots, z_n) \in \mathbb{R}^n$, and also $Y_k = (y^{(1)}_k, \ldots, y^{(n)}_k) \in \mathbb{R}^n$ for the observations along variable k, $\mathbb{1} = (1, \ldots, 1) \in \mathbb{R}^n$, and $\Phi \in \mathbb{R}^{n \times dm}$ for the basis matrix, with blocks of columns to be understood as $\Phi_{ij} = \phi^j(y^{(i)}_j)^T \in \mathbb{R}^m$.
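The pinball loss $\psi_\alpha$ driving problems (4) and (5) is what makes the fitted functions conditional quantiles rather than conditional means. A quick plain-Python check on toy data confirms that the constant minimizing $\sum_i \psi_\alpha(y^{(i)} - t)$ is an empirical α-quantile (note the distinctly non-Gaussian outlier at 100):

```python
def pinball(z, alpha):
    # psi_alpha(z) = max{alpha * z, (alpha - 1) * z}, the "tilted absolute" loss
    return max(alpha * z, (alpha - 1.0) * z)

def best_constant(ys, alpha):
    # minimize sum_i psi_alpha(y_i - t) over constants t; for this piecewise
    # linear loss it suffices to search over the data points themselves
    return min(ys, key=lambda t: sum(pinball(y - t, alpha) for y in ys))

ys = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 100.0]

median_like = best_constant(ys, 0.5)    # an empirical median (5 or 6 here)
upper_tail  = best_constant(ys, 0.85)   # the empirical 0.85-quantile
```

At α = 0.5 the loss is half the absolute deviation, so any point between the two middle observations minimizes it; at α = 0.85 the asymmetry pushes the minimizer up to the 9th order statistic, untouched by the outlier.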
The basis expansion model is simple and tends to work well in practice, so we focus on it for most of the paper. In principle, essentially all our results apply to the next two models we describe, as well.

Example 2: smoothing splines model. Now consider taking $\mathcal{F}_{\ell k j} = \mathrm{span}\{g^j_1, \ldots, g^j_n\}$, the span of m = n natural cubic splines with knots at $y_j^{(1)}, \ldots, y_j^{(n)}$, for j = 1, . . . , d. As before, we can then write $f_{\ell k j}(x) = \theta_{\ell k j}^T g^j(x)$ with coefficients $\theta_{\ell k j} \in \mathbb{R}^n$, for $f_{\ell k j} \in \mathcal{F}_{\ell k j}$. The work of [27], on high-dimensional additive smoothing splines, suggests a choice of exponent ω = 1/2, and penalties $P_1(f_{\ell k j}) = \|G^j \theta_{\ell k j}\|_2^2$ and $P_2(f_{\ell k j}) = \theta_{\ell k j}^T \Omega^j \theta_{\ell k j}$, for sparsity and smoothness, respectively, where $G^j \in \mathbb{R}^{n \times n}$ is a spline basis matrix with entries $G^j_{ii'} = g^j_{i'}(y_j^{(i)})$, and $\Omega^j$ is the smoothing spline penalty matrix containing integrated products of pairs of twice-differentiated basis functions. The MQGM problem in (4) can be translated into a finite-dimensional form, very similar to what we have done in (5), but we omit this for brevity.

Example 3: RKHS model. Consider taking $\mathcal{F}_{\ell k j} = \mathcal{H}^j$, a univariate reproducing kernel Hilbert space (RKHS), with kernel function $\kappa^j(\cdot, \cdot)$. The representer theorem allows us to express each function $f_{\ell k j} \in \mathcal{H}^j$ in terms of the representers of evaluation, i.e., $f_{\ell k j}(x) = \sum_{i=1}^n (\theta_{\ell k j})_i \, \kappa^j(x, y_j^{(i)})$, for a coefficient vector $\theta_{\ell k j} \in \mathbb{R}^n$. The work of [34], on high-dimensional additive RKHS modeling, suggests a choice of exponent ω = 1, and sparsity and smoothness penalties $P_1(f_{\ell k j}) = \|K^j \theta_{\ell k j}\|_2$ and $P_2(f_{\ell k j}) = \sqrt{\theta_{\ell k j}^T K^j \theta_{\ell k j}}$, respectively, where $K^j \in \mathbb{R}^{n \times n}$ is the kernel matrix with entries $K^j_{ii'} = \kappa^j(y_j^{(i)}, y_j^{(i')})$. Again, the MQGM problem in (4) can be written in finite-dimensional form, now an SDP, omitted for brevity.

Structural constraints. Several structural constraints can be placed on top of the MQGM optimization problem in order to guide the estimated component functions to meet particular shape requirements.
An important example is non-crossing constraints (commonplace in nonparametric, multiple quantile regression [18, 38]): here, we optimize (4) jointly over ℓ = 1, . . . , r, subject to

$$b_{\ell k} + \sum_{j \ne k} f_{\ell k j}(y_j^{(i)}) \;\le\; b_{\ell' k} + \sum_{j \ne k} f_{\ell' k j}(y_j^{(i)}), \quad \text{for all } \alpha_\ell < \alpha_{\ell'}, \text{ and } i = 1, \ldots, n. \quad (6)$$

This ensures that the estimated quantiles obey the proper ordering, at the observations. For concreteness, we consider the implications for the basis regression model, in Example 1 (similar statements hold for the other two models). For each ℓ = 1, . . . , r, denote by $F_{\ell k}(b_{\ell k}, \theta_{\ell k})$ the criterion in (5). Introducing the non-crossing constraints requires coupling (5) over ℓ = 1, . . . , r, so that we now have the following optimization problems, for each target variable k = 1, . . . , d:

$$\mathop{\mathrm{minimize}}_{B_k, \Theta_k} \;\; \sum_{\ell=1}^r F_{\ell k}(b_{\ell k}, \theta_{\ell k}) \quad \text{subject to} \quad (\mathbb{1} B_k^T + \Phi \Theta_k) D^T \ge 0, \quad (7)$$

where we denote $B_k = (b_{1k}, \ldots, b_{rk}) \in \mathbb{R}^r$, $\Phi \in \mathbb{R}^{n \times dm}$ the basis matrix as before, $\Theta_k \in \mathbb{R}^{dm \times r}$ given by column-stacking $\theta_{\ell k} \in \mathbb{R}^{dm}$, ℓ = 1, . . . , r, and $D \in \mathbb{R}^{(r-1) \times r}$ is the usual discrete difference operator. (The inequality in (7) is to be interpreted componentwise.) Computationally, coupling the subproblems across ℓ = 1, . . . , r clearly adds to the overall difficulty of the MQGM, but statistically this coupling acts as a regularizer, by constraining the parameter space in a useful way, thus increasing our efficiency in fitting multiple quantile levels from the given data. For a triplet ℓ, k, j, monotonicity constraints are also easy to add, i.e., $f_{\ell k j}(y_j^{(i)}) \le f_{\ell k j}(y_j^{(i')})$ for all $y_j^{(i)} < y_j^{(i')}$. Convexity constraints, where we require $f_{\ell k j}$ to be convex over the observations, for a particular ℓ, k, j, are also straightforward. Lastly, strong non-crossing constraints, where we enforce (6) over all $z \in \mathbb{R}^d$ (not just over the observations), are also possible with positive basis functions.

Exogenous variables and conditional random fields. So far, we have considered modeling the joint distribution Pr(y1, . . .
, yd), corresponding to learning a Markov random field (MRF). It is not hard to extend our framework to model the conditional distribution Pr(y1, . . . , yd | x1, . . . , xp) given some exogenous variables x1, . . . , xp, corresponding to learning a conditional random field (CRF). To extend the basis regression model, we introduce the additional parameters $\theta^x_{\ell k} \in \mathbb{R}^p$ in (5), and the loss now becomes $\psi_{\alpha_\ell}(Y_k - b_{\ell k}\mathbb{1} - \Phi\theta_{\ell k} - X\theta^x_{\ell k})$, where $X \in \mathbb{R}^{n \times p}$ is filled with the exogenous observations $x^{(1)}, \ldots, x^{(n)} \in \mathbb{R}^p$; the other models are changed similarly.

4 Basic properties and theory

4.1 Quantiles and conditional independence

In the model (3), when a particular variable $y_j$ has no contribution, i.e., satisfies $f^*_{\ell k j} = 0$ across all quantile levels $\alpha_\ell$, ℓ = 1, . . . , r, what does this imply about the conditional independence between $y_k$ and $y_j$, given the rest? Outside of the multivariate normal model (where the feature transformations need only be linear), nothing can be said in generality. But we argue that conditional independence can be understood in a certain approximate sense (i.e., in a projected approximation of the data generating model). We begin with a simple lemma. Its proof is elementary, and given in the supplement.

Lemma 4.1. Let U, V, W be random variables, and suppose that all conditional quantiles of U | V, W do not depend on V, i.e., $Q_{U|V,W}(\alpha) = Q_{U|W}(\alpha)$ for all α ∈ [0, 1]. Then U and V are conditionally independent given W.

By the lemma, if we knew that $Q_{U|V,W}(\alpha) = h(\alpha, W)$ for some function h, then it would follow that U, V are conditionally independent given W (n.b., the converse is true, as well).
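Lemma 4.1 can also be illustrated numerically. Below is a small Monte Carlo check (ours, not part of the paper): U depends on W alone, V is independent noise, and the empirical conditional quantiles of U given (V, W) indeed do not vary with V, consistent with the conditional independence of U and V given W:

```python
import numpy as np

# Monte Carlo illustration of Lemma 4.1: U = W + noise, so U depends on W
# alone; V plays no role. The empirical conditional quantiles of U given
# (V, W) then agree across the two values of V, up to sampling noise.
rng = np.random.default_rng(0)
n = 200_000
W = rng.integers(0, 2, n)        # binary conditioning variable
V = rng.integers(0, 2, n)        # independent of U given W, by construction
U = W + rng.normal(size=n)       # U | V, W has the same law as U | W

diffs = []
for w in (0, 1):
    for alpha in (0.1, 0.5, 0.9):
        q0 = np.quantile(U[(V == 0) & (W == w)], alpha)
        q1 = np.quantile(U[(V == 1) & (W == w)], alpha)
        diffs.append(abs(q0 - q1))
print(max(diffs) < 0.08)  # quantiles agree up to Monte Carlo error
```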
The MQGM problem in (4), with sparsity imposed on the coefficients, essentially aims to achieve such a representation for the conditional quantiles. Of course, we cannot use a fully nonparametric representation of the conditional distribution $y_k | y_{\neg k}$; instead we use an r-step approximation to the conditional cumulative distribution function (CDF) of $y_k | y_{\neg k}$ (corresponding to estimating r conditional quantiles), and (say) in the basis regression model, limit the dependence on conditioning variables to be in terms of an additive function of RBFs in $y_j$, j ≠ k. Thus, if at the solution in (5) we find that $\hat\theta_{\ell k j} = 0$, ℓ = 1, . . . , r, we may interpret this to mean that $y_k$ and $y_j$ are conditionally independent given the remaining variables, but according to the distribution defined by the projection of $y_k | y_{\neg k}$ onto the space of models considered in (5) (r-step conditional CDFs, which are additive expansions in $y_j$, j ≠ k). This interpretation is no more tenuous (arguably, less so, as the model space here is much larger) than that needed when applying standard neighborhood selection to non-Gaussian data.

4.2 Gibbs sampling and the "joint" distribution

When specifying a form for the conditional distributions in a pseudolikelihood approximation as in (2), it is natural to ask: what is the corresponding joint distribution? Unfortunately, for a general collection of conditional distributions, there need not exist a compatible joint distribution, even when all conditionals are continuous [41]. Still, pseudolikelihood approximations (a special case of composite likelihood approximations) possess solid theoretical backing, in that maximizing the pseudolikelihood relates closely to minimizing a certain (expected composite) Kullback–Leibler divergence, measured to the true conditionals [39].
Recently, [7, 44] made nice progress in describing specific conditions on conditional distributions that give rise to a valid joint distribution, though their work was specific to exponential families. A practical answer to the question of this subsection is to use Gibbs sampling, which attempts to draw samples consistent with the fitted conditionals; this is precisely the observation of [13], who show that Gibbs sampling from discrete conditionals converges to a unique stationary distribution, although this distribution may not actually be compatible with the conditionals. The following result establishes the analogous claim for continuous conditionals; its proof is in the supplement. We demonstrate the practical value of Gibbs sampling through various examples in Section 6.

Lemma 4.2. Assume that the conditional distributions Pr(yk | y¬k), k = 1, . . . , d take only positive values on their domain. Then, for any given ordering of the variables, Gibbs sampling converges to a unique stationary distribution that can be reached from any initial point. (This stationary distribution depends on the ordering.)

4.3 Graph structure recovery

When $\log d = O(n^{2/21})$, and we assume somewhat standard regularity conditions (listed as A1–A4 in the supplement), the MQGM estimate recovers the underlying conditional independencies with high probability (interpreted in the projected model space, as explained in Section 4.1). Importantly, we do not require a Gaussian, sub-Gaussian, or even parametric assumption on the data generating process; instead, we assume i.i.d. draws $y^{(1)}, \ldots, y^{(n)} \in \mathbb{R}^d$, where the conditional distributions $y_k | y_{\neg k}$ have quantiles specified by the model in (3) for k = 1, . . . , d, ℓ = 1, . . . , r, and further, each $f^*_{\ell k j}(x) = (\theta^*_{\ell k j})^T \phi^j(x)$ for coefficients $\theta^*_{\ell k j} \in \mathbb{R}^m$, j = 1, . . . , d, as in the basis expansion model.
Let $E^*$ denote the corresponding edge set of conditional dependencies from these neighborhood models, i.e., $\{k, j\} \in E^* \iff \max_{\ell=1,\ldots,r} \max\{\|\theta^*_{\ell k j}\|_2, \|\theta^*_{\ell j k}\|_2\} > 0$. We define the estimated edge set $\hat E$ in the analogous way, based on the solution in (5). Without loss of generality, we assume the features have been scaled to satisfy $\|\Phi_j\| \le \sqrt{n}$ for all j = 1, . . . , dm. The following is our recovery result; its proof is provided in the supplement.

Theorem 4.3. Assume $\log d = O(n^{2/21})$, and conditions A1–A4 in the supplement. Assume that the tuning parameters $\lambda_1, \lambda_2$ satisfy $\lambda_1 \asymp (mn \log(d^2 m r / \delta) \log^3 n)^{1/2}$ and $\lambda_2 = o(n^{41/42}/\theta^*_{\max})$, where $\theta^*_{\max} = \max_{\ell,k,j} \|\theta^*_{\ell k j}\|_2$. Then for n sufficiently large, the MQGM estimate in (5) exactly recovers the underlying conditional dependencies, i.e., $\hat E = E^*$, with probability at least 1 − δ.

The theorem shows that the nonzero pattern in the MQGM estimate identifies, with high probability, the underlying conditional independencies. But to be clear, we emphasize that the MQGM estimate is not an estimate of the inverse covariance matrix itself (this is also true of neighborhood regression, SpaceJam of [40], and many other methods for learning graphical models).

5 Computational approach

By design, the MQGM problem in (5) separates into d subproblems, across k = 1, . . . , d (it therefore suffices to consider only a single subproblem, so we omit notational dependence on k for auxiliary variables). While these subproblems are challenging for off-the-shelf solvers (even for only moderately sized graphs), the key terms here all admit efficient proximal operators [32], which makes operator splitting methods like the alternating direction method of multipliers (ADMM) [5] a natural choice. As an illustration, we consider the non-crossing constraints in the basis regression model below.
Reparameterizing our problem, so that we may apply ADMM, yields:

$$\begin{aligned} &\mathop{\mathrm{minimize}}_{\Theta_k, B_k, V, W, Z} && \psi_A(Z) + \lambda_1 \sum_{\ell=1}^r \sum_{j=1}^d \|W_{\ell j}\|_2 + \frac{\lambda_2}{2}\|W\|_F^2 + I_+(V D^T) \\ &\text{subject to} && V = \mathbb{1} B_k^T + \Phi\Theta_k, \quad W = \Theta_k, \quad Z = Y_k \mathbb{1}^T - \mathbb{1} B_k^T - \Phi\Theta_k, \end{aligned} \quad (8)$$

where for brevity $\psi_A(A) = \sum_{\ell=1}^r \sum_{i=1}^n \psi_{\alpha_\ell}(A_{i\ell})$, and $I_+(\cdot)$ is the indicator function of the space of elementwise nonnegative matrices. The augmented Lagrangian associated with (8) is:

$$L_\rho(\Theta_k, B_k, V, W, Z, U_V, U_W, U_Z) = \psi_A(Z) + \lambda_1 \sum_{\ell=1}^r \sum_{j=1}^d \|W_{\ell j}\|_2 + \frac{\lambda_2}{2}\|W\|_F^2 + I_+(V D^T) + \frac{\rho}{2}\Big( \|\mathbb{1} B_k^T + \Phi\Theta_k - V + U_V\|_F^2 + \|\Theta_k - W + U_W\|_F^2 + \|Y_k \mathbb{1}^T - \mathbb{1} B_k^T - \Phi\Theta_k - Z + U_Z\|_F^2 \Big), \quad (9)$$

where ρ > 0 is the augmented Lagrangian parameter, and $U_V, U_W, U_Z$ are dual variables corresponding to the equality constraints on V, W, Z, respectively. Minimizing (9) over V yields:

$$V \leftarrow P_{\mathrm{iso}}\big(\mathbb{1} B_k^T + \Phi\Theta_k + U_V\big), \quad (10)$$

where $P_{\mathrm{iso}}(\cdot)$ denotes the row-wise projection operator onto the isotonic cone (the space of componentwise nondecreasing vectors), an O(nr) operation here [15]. Minimizing (9) over $W_{\ell j}$ yields the update:

$$W_{\ell j} \leftarrow \frac{(\Theta_k)_{\ell j} + (U_W)_{\ell j}}{1 + \lambda_2/\rho} \left(1 - \frac{\lambda_1/\rho}{\|(\Theta_k)_{\ell j} + (U_W)_{\ell j}\|_2}\right)_+, \quad (11)$$

where $(\cdot)_+$ is the positive part operator. This can be seen by deriving the proximal operator of the function $f(x) = \lambda_1\|x\|_2 + (\lambda_2/2)\|x\|_2^2$. Minimizing (9) over Z yields the update:

$$Z \leftarrow \mathrm{prox}_{(1/\rho)\psi_A}\big(Y_k \mathbb{1}^T - \mathbb{1} B_k^T - \Phi\Theta_k + U_Z\big), \quad (12)$$

where $\mathrm{prox}_f(\cdot)$ denotes the proximal operator of a function f. For the multiple quantile loss function $\psi_A$, this is a kind of generalized soft-thresholding. The proof is given in the supplement.

Lemma 5.1. Let $P_+(\cdot)$ and $P_-(\cdot)$ be the elementwise positive and negative part operators, respectively, and let $a = (\alpha_1, \ldots, \alpha_r)$. Then $\mathrm{prox}_{t\psi_A}(A) = P_+(A - t\mathbb{1}a^T) + P_-(A + t\mathbb{1}(1-a)^T)$, where $1 - a$ denotes the vector with entries $1 - \alpha_\ell$.

Finally, differentiation in (9) with respect to $B_k$ and $\Theta_k$ yields the simultaneous update:

$$\begin{bmatrix} \Theta_k \\ B_k^T \end{bmatrix} \leftarrow \frac{1}{2} \begin{bmatrix} \Phi^T\Phi + \tfrac{1}{2} I & \Phi^T \mathbb{1} \\ \mathbb{1}^T \Phi & \mathbb{1}^T \mathbb{1} \end{bmatrix}^{-1} \Big( [I \;\; 0]^T (W - U_W) + [\Phi \;\; \mathbb{1}]^T (Y_k \mathbb{1}^T - Z + U_Z + V - U_V) \Big). \quad (13)$$

A complete description of our ADMM algorithm for solving the MQGM problem is in the supplement.
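The generalized soft-thresholding in the Z-update is simple to implement and to check numerically. Here is a sketch (ours, not the paper's code), verified by brute force against the scalar definition of the proximal operator of the pinball loss:

```python
import numpy as np

def prox_multi_quantile(A, t, alphas):
    """Elementwise prox of t * psi_A: column l of A uses quantile level alphas[l].
    Generalized soft-thresholding: P_+(A - t*1a^T) + P_-(A + t*1(1-a)^T)."""
    a = np.asarray(alphas, dtype=float).reshape(1, -1)
    return np.maximum(A - t * a, 0.0) + np.minimum(A + t * (1.0 - a), 0.0)

# Brute-force check of a single entry against the scalar definition:
# prox_{t psi_alpha}(x) = argmin_z 0.5*(z - x)^2 + t*max{alpha*z, (alpha-1)*z}.
t, alpha, x = 0.7, 0.3, 1.1
grid = np.linspace(-5.0, 5.0, 200001)
obj = 0.5 * (grid - x) ** 2 + t * np.maximum(alpha * grid, (alpha - 1) * grid)
z_star = grid[np.argmin(obj)]
z_prox = prox_multi_quantile(np.array([[x]]), t, [alpha])[0, 0]
assert abs(z_star - z_prox) < 1e-3   # both give x - t*alpha = 0.89 here
```

Inputs above the threshold t·α shrink by t·α, inputs below −t·(1 − α) shrink toward zero by t·(1 − α), and everything in between maps to zero, which is exactly the asymmetric analogue of soft-thresholding.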
Gibbs sampling. Having fit the conditionals yk | y¬k, k = 1, . . . , d, we may want to make predictions or extract joint distributions over subsets of variables. As discussed in Section 4.2, there is no general analytic form for these joint distributions, but the pseudolikelihood approximation underlying the MQGM suggests a natural Gibbs sampler. A careful implementation that respects the additive model in (3) yields a highly efficient Gibbs sampler, especially for CRFs; the supplement gives details.

6 Empirical examples

6.1 Synthetic data

We consider synthetic examples, comparing the MQGM to neighborhood selection (MB), the graphical lasso (GLasso), SpaceJam [40], the nonparanormal skeptic [26], TIGER [24], and neighborhood selection using the absolute loss (Laplace).

Ring example. As a simple but telling example, we drew n = 400 samples from a "ring" distribution in d = 4 dimensions. Data were generated by drawing a random angle ν ∼ Uniform(0, 2π) and a random radius R ∼ N(1, 0.1), and then computing the coordinates y1 = R cos ν, y2 = R sin ν, and y3, y4 ∼ N(0, 1); i.e., y1 and y2 are the only dependent variables here. The MQGM was used with m = 10 basis functions (RBFs) and r = 20 quantile levels. The left panel of Figure 1 plots samples (blue) of the coordinates y1, y2, as well as new samples from the MQGM (red) fitted to these same (blue) samples, obtained by using our Gibbs sampler; the samples from the MQGM appear to closely match the samples from the underlying ring. The main panel of Figure 1 shows the conditional dependencies recovered by the MQGM, SpaceJam, GLasso, and MB (plots for the other methods are given in the supplement), when run on the ring data. We visualize these dependencies by forming a d × d matrix with cell (j, k) set to black if j, k are conditionally dependent given the others, and white otherwise.
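As a concrete sketch (ours, not the authors' code), the ring data can be generated as follows; the angle range and radius center are our assumptions, chosen to produce the ring shape seen in Figure 1:

```python
import numpy as np

# Sketch of the "ring" sampler (our reconstruction): an angle uniform on
# [0, 2*pi) and a radius concentrated near 1 trace out a noisy circle in
# the (y1, y2) plane; y3 and y4 are independent Gaussian noise.
rng = np.random.default_rng(0)
n = 400
nu = rng.uniform(0.0, 2.0 * np.pi, n)    # random angle
R = rng.normal(1.0, 0.1, n)              # random radius near 1
y = np.column_stack([
    R * np.cos(nu),                      # y1: dependent pair, first coord
    R * np.sin(nu),                      # y2: dependent pair, second coord
    rng.normal(size=n),                  # y3: independent noise
    rng.normal(size=n),                  # y4: independent noise
])
radii = np.hypot(y[:, 0], y[:, 1])
print(y.shape, 0.8 < radii.mean() < 1.2)  # (400, 4) True
```

Note that y1 and y2 are (marginally) nearly uncorrelated even though they are strongly dependent, which is what makes this example hard for Gaussian-based methods.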
Across a range of tuning parameters for each method, the MQGM is the only one that successfully recovers the underlying conditional dependencies at some point along its solution path. In the supplement, we present an evaluation of the conditional CDFs given by each method, when run on the ring data; again, the MQGM performs best in this setting.

Larger examples. To investigate performance at larger scales, we drew n ∈ {50, 100, 300} samples from a multivariate normal and a Student t-distribution (with 3 degrees of freedom), both in d = 100 dimensions, both parameterized by a random, sparse, diagonally dominant d × d inverse covariance matrix, following the procedure in [33, 17, 31, 1]. Over the same set of sample sizes, with d = 100, we also considered an autoregressive setup in which we drew samples of pairs of adjacent variables from the ring distribution. In all three data settings (normal, t, and autoregressive), we used m = 10 and r = 20 for the MQGM. To summarize the performances, we considered a range of tuning parameters for each method, computed corresponding false and true positive rates (in detecting conditional dependencies), and then computed the corresponding area under the curve (AUC), following, e.g., [33, 17, 31, 1]. Table 1 reports the median AUCs (across 50 trials) for all three of these examples; the MQGM outperforms all other methods on the autoregressive example, and on the normal and Student t examples it performs quite competitively.
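To make this evaluation concrete, here is a sketch (ours, not the authors' code) of how edges can be read off from fitted group-norm coefficients, and how an AUC is computed from a path of estimated graphs:

```python
import numpy as np

def edge_set(theta, tol=1e-8):
    """Edges from coefficients theta of shape (r, d, d, m): declare edge {k, j}
    when max over levels l of max(||theta[l,k,j]||_2, ||theta[l,j,k]||_2) > tol."""
    norms = np.linalg.norm(theta, axis=-1)                        # (r, d, d)
    strength = np.maximum(norms, norms.transpose(0, 2, 1)).max(axis=0)
    A = strength > tol
    np.fill_diagonal(A, False)
    return A

def auc_from_path(truth, path):
    """Trapezoidal AUC of (FPR, TPR) points scored on off-diagonal edges,
    over a path of estimated adjacency matrices; (0,0) and (1,1) appended."""
    iu = np.triu_indices(truth.shape[0], k=1)
    t = truth[iu]
    P, N = t.sum(), (~t).sum()
    pts = {(0.0, 0.0), (1.0, 1.0)}
    for A in path:
        e = A[iu]
        pts.add(((e & ~t).sum() / N, (e & t).sum() / P))
    fpr, tpr = map(np.array, zip(*sorted(pts)))
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Toy path on d = 3 nodes, r = 2 levels, m = 2 basis functions: the true
# edge {1, 2} enters the path before any false edge, giving a perfect AUC.
theta = np.zeros((2, 3, 3, 2))
theta[0, 1, 2] = [0.5, -0.3]
truth = edge_set(theta)
path = [np.zeros((3, 3), dtype=bool), truth, np.ones((3, 3), dtype=bool)]
print(auc_from_path(truth, path))  # 1.0
```

In practice the path of adjacency estimates would come from solving (5) over a grid of λ1 values; the toy path above simply stands in for that.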
Figure 1: Left: data from the ring distribution (blue) as well as new samples from the MQGM (red) fitted to the same (blue) data, obtained by using our Gibbs sampler. Right: conditional dependencies recovered by the MQGM, MB, GLasso, and SpaceJam on the ring data (each shown across a range of λ1 values); black means conditional dependence. The MQGM is the only method that successfully recovers the underlying conditional dependencies along its solution path.

Table 1: AUC values for the MQGM, MB, GLasso, SpaceJam, the nonparanormal skeptic, TIGER, and Laplace for the normal, t, and autoregressive data settings; higher is better, best in bold.

          |        Normal         |       Student t       |    Autoregressive
          | n=50   n=100  n=300   | n=50   n=100  n=300   | n=50   n=100  n=300
MQGM      | 0.953  0.976  0.988   | 0.928  0.947  0.981   | 0.726  0.754  0.955
MB        | 0.850  0.959  0.994   | 0.844  0.923  0.988   | 0.532  0.563  0.725
GLasso    | 0.908  0.964  0.998   | 0.691  0.605  0.965   | 0.541  0.620  0.711
SpaceJam  | 0.889  0.968  0.997   | 0.893  0.965  0.993   | 0.624  0.708  0.854
Nonpara.  | 0.881  0.962  0.996   | 0.862  0.942  0.998   | 0.545  0.590  0.612
TIGER     | 0.732  0.921  0.996   | 0.420  0.873  0.989   | 0.503  0.518  0.718
Laplace   | 0.803  0.931  0.989   | 0.800  0.876  0.991   | 0.530  0.554  0.758

Figure 2: Top panel and bottom row, middle panel: conditional dependencies recovered by the MQGM on the flu data; each of the first ten cells corresponds to a region of the U.S., and black means dependence.
Bottom row, left panel: wallclock time (in seconds) for solving one subproblem using ADMM versus SCS. Bottom row, right panel: samples from the fitted marginal distribution of the weekly flu incidence rates at region 6; samples at larger quantiles are shaded lighter, and the median is in darker blue.

6.2 Modeling flu epidemics

We study n = 937 weekly flu incidence reports from September 28, 1997 through August 30, 2015, across 10 regions in the United States (see the top panel of Figure 2), obtained from [6]. We considered d = 20 variables: the first 10 encode the current week's flu incidence (precisely, the percentage of doctor's visits in which flu-like symptoms are presented) in the 10 regions, and the last 10 encode the same but for the prior week. We set m = 5, r = 99, and also introduced exogenous variables to encode the week numbers, so p = 1. Thus, learning the MQGM here corresponds to learning the structure of a spatiotemporal graphical model, and reduces to solving 20 multiple quantile regression subproblems, each of dimension (19 × 5 + 1) × 99 = 9504. All subproblems took about 1 minute on a 6-core 3.3 GHz Core i7 X980 processor. The bottom left panel in Figure 2 plots the time (in seconds) taken to solve one subproblem using ADMM versus SCS [30], a cone solver that has been advocated as a reasonable choice for a class of problems encapsulating (4); ADMM outperforms SCS by roughly two orders of magnitude. The bottom middle panel of Figure 2 presents the conditional independencies recovered by the MQGM. Nonzero entries in the upper left 10 × 10 submatrix correspond to dependencies between the variables yk, k = 1, . . . , 10; e.g., the nonzero (0, 2) entry (indexing from zero) suggests that regions 1 and 3's flu reports are dependent. The lower right 10 × 10 submatrix corresponds to the variables yk, k = 11, . . . , 20, and the nonzero banded entries suggest that, at any region, the previous week's flu incidence (naturally) influences the next week's.
The top panel of Figure 2 visualizes these relationships by drawing an edge between dependent regions; region 6 is highly connected, suggesting that it may be a bellwether for other regions, roughly in keeping with the current understanding of flu dynamics. To draw samples from the fitted distributions, we ran our Gibbs sampler over the year, generating 1000 total samples, making 5 passes over all coordinates between each sample, and with a burn-in period of 100 iterations. The bottom right panel of Figure 2 plots samples from the marginal distribution of the percentage of flu reports at region 6 (other regions are in the supplement) throughout the year, revealing the heteroskedastic nature of the data. For space reasons, our last example, on wind power data, is presented in the supplement.

7 Discussion

We proposed and studied the Multiple Quantile Graphical Model (MQGM). We established theoretical and empirical backing for the claim that the MQGM is capable of compactly representing relationships between heteroskedastic non-Gaussian variables. We also developed efficient algorithms for both estimation and sampling in the MQGM. All in all, we believe that our work represents a step forward in the design of flexible yet tractable graphical models.

Acknowledgements

AA was supported by DOE Computational Science Graduate Fellowship DE-FG02-97ER25308. JZK was supported by an NSF Expeditions in Computing Award, CompSustNet, CCF-1522054. RJT was supported by NSF Grants DMS-1309174 and DMS-1554123.

References

[1] Alnur Ali, Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam. Generalized pseudolikelihood methods for inverse covariance estimation. Technical report, 2016. Available at http://arxiv.org/pdf/1606.00033.pdf.
[2] Onureena Banerjee, Laurent El Ghaoui, and Alexandre d'Aspremont. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 9:485–516, 2008.
[3] Alexandre Belloni and Victor Chernozhukov. ℓ1-penalized quantile regression in high-dimensional sparse models. Annals of Statistics, 39(1):82–130, 2011. [4] Julian Besag. Spatial interaction and the statistical analysis of lattice systems. Journal of the Royal Statistical Society: Series B, 36(2): 192–236, 1974. [5] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011. [6] Centers for Disease Control and Prevention (CDC). Influenza national and regional level graphs and data, August 2015. URL http: //gis.cdc.gov/grasp/fluview/fluportaldashboard.html. [7] Shizhe Chen, Daniela Witten, and Ali Shojaie. Selection and estimation for mixed graphical models. Biometrika, 102(1):47–64, 2015. [8] Arthur Dempster. Covariance selection. Biometrics, 28(1):157–175, 1972. [9] Jianqing Fan, Yingying Fan, and Emre Barut. Adaptive robust variable selection. Annals of Statistics, 42(1):324–351, 2014. [10] Michael Finegold and Mathias Drton. Robust graphical modeling of gene networks using classical and alternative t-distributions. Annals of Applied Statistics, 5(2A):1057–1080, 2011. [11] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Sparse inverse covariance estimation with the graphical lasso. Biostatistics, 9 (3):432–441, 2008. [12] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Applications of the lasso and grouped lasso to the estimation of sparse graphical models. Technical report, 2010. Available at http://statweb.stanford.edu/~tibs/ftp/ggraph.pdf. [13] David Heckerman, David Maxwell Chickering, David Meek, Robert Rounthwaite, and Carl Kadie. Dependency networks for inference, collaborative filtering, and data visualization. Journal of Machine Learning Research, 1:49–75, 2000. [14] Holger Höfling and Robert Tibshirani. 
Estimation of sparse binary pairwise Markov networks using pseudo-likelihoods. Journal of Machine Learning Research, 10:883–906, 2009. [15] Nicholas Johnson. A dynamic programming algorithm for the fused lasso and ℓ0-segmentation. Journal of Computational and Graphical Statistics, 22(2):246–260, 2013. [16] Kengo Kato. Group lasso for high dimensional sparse quantile regression models. Technical report, 2011. Available at http://arxiv. org/pdf/1103.1458.pdf. [17] Kshitij Khare, Sang-Yun Oh, and Bala Rajaratnam. A convex pseudolikelihood framework for high dimensional partial correlation estimation with convergence guarantees. Journal of the Royal Statistical Society: Series B, 77(4):803–825, 2014. [18] Roger Koenker. Quantile Regression. Cambridge University Press, 2005. [19] Roger Koenker. Additive models for quantile regression: Model selection and confidence bandaids. Brazilian Journal of Probability and Statistics, 25(3):239–262, 2011. [20] Roger Koenker and Gilbert Bassett. Regression quantiles. Econometrica, 46(1):33–50, 1978. [21] Roger Koenker, Pin Ng, and Stephen Portnoy. Quantile smoothing splines. Biometrika, 81(4):673–680, 1994. [22] Steffen Lauritzen. Graphical models. Oxford University Press, 1996. [23] Jason Lee and Trevor Hastie. Structure learning of mixed graphical models. In Proceedings of the 16th International Conference on Artificial Intelligence and Statistics, pages 388–396, 2013. [24] Han Liu and Lie Wang. TIGER: A tuning-insensitive approach for optimally estimating Gaussian graphical models. Technical report, 2012. Available at http://arxiv.org/pdf/1209.2437.pdf. [25] Han Liu, John Lafferty, and Larry Wasserman. The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. Journal of Machine Learning Research, 10:2295–2328, 2009. [26] Han Liu, Fang Han, Ming Yuan, John Lafferty, and Larry Wasserman. High-dimensional semiparametric Gaussian copula graphical models. The Annals of Statistics, pages 2293–2326, 2012. 
[27] Lukas Meier, Sara van de Geer, and Peter Buhlmann. High-dimensional additive modeling. Annals of Statistics, 37(6):3779–3821, 2009. [28] Nicolai Meinshausen and Peter Bühlmann. High-dimensional graphs and variable selection with the lasso. Annals of Statistics, 34(3): 1436–1462, 2006. [29] Jennifer Neville and David Jensen. Dependency networks for relational data. In Proceedings of Fourth IEEE International Conference on the Data Mining, pages 170–177. IEEE, 2004. [30] Brendan O’Donoghue, Eric Chu, Neal Parikh, and Stephen Boyd. Operator splitting for conic optimization via homogeneous self-dual embedding. Technical report, 2013. Available at https://stanford.edu/~boyd/papers/pdf/scs.pdf. [31] Sang-Yun Oh, Onkar Dalal, Kshitij Khare, and Bala Rajaratnam. Optimization methods for sparse pseudolikelihood graphical model selection. In Advances in Neural Information Processing Systems 27, pages 667–675, 2014. [32] Neal Parikh and Stephen Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123–231, 2013. [33] Jie Peng, Pei Wang, Nengfeng Zhou, and Ji Zhu. Partial correlation estimation by joint sparse regression models. Journal of the American Statistical Association, 104(486):735–746, 2009. [34] Garvesh Raskutti, Martin Wainwright, and Bin Yu. Minimax-optimal rates for sparse additive models over kernel classes via convex programming. Journal of Machine Learning Research, 13:389–427, 2012. [35] Guilherme Rocha, Peng Zhao, and Bin Yu. A path following algorithm for sparse pseudo-likelihood inverse covariance estimation (SPLICE). Technical report, 2008. Available at https://www.stat.berkeley.edu/~binyu/ps/rocha.pseudo.pdf. [36] Adam Rothman, Peter Bickel, Elizaveta Levina, and Ji Zhu. Sparse permutation invariant covariance estimation. Electronic Journal of Statistics, 2:494–515, 2008. [37] Kyung-Ah Sohn and Seyoung Kim. 
Joint estimation of structured sparsity and output structure in multiple-output regression via inverse covariance regularization. In Proceedings of the 15th International Conference on Artificial Intelligence and Statistics, pages 1081–1089, 2012. [38] Ichiro Takeuchi, Quoc Le, Timothy Sears, and Alexander Smola. Nonparametric quantile estimation. Journal of Machine Learning Research, 7:1231–1264, 2006. [39] Cristiano Varin and Paolo Vidoni. A note on composite likelihood inference and model selection. Biometrika, 92(3):519–528, 2005. [40] Arend Voorman, Ali Shojaie, and Daniela Witten. Graph estimation with joint additive models. Biometrika, 101(1):85–101, 2014. [41] Yuchung Wang and Edward Ip. Conditionally specified continuous distributions. Biometrika, 95(3):735–746, 2008. [42] Matt Wytock and Zico Kolter. Sparse Gaussian conditional random fields: Algorithms, theory, and application to energy forecasting. In Proceedings of the 30th International Conference on Machine Learning, pages 1265–1273, 2013. [43] Eunho Yang, Pradeep Ravikumar, Genevera Allen, and Zhandong Liu. Graphical models via generalized linear models. In Advances in Neural Information Processing Systems 25, pages 1358–1366, 2012. [44] Eunho Yang, Pradeep Ravikumar, Genevera Allen, and Zhandong Liu. Graphical models via univariate exponential family distributions. Journal of Machine Learning Research, 16:3813–3847, 2015. [45] Ming Yuan and Yi Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 94(1):19–35, 2007. [46] Xiao-Tong Yuan and Tong Zhang. Partial Gaussian graphical model estimation. IEEE Transactions on Information Theory, 60(3):1673–1687, 2014.